parquet-converter committed on
Commit 55c8914 · 1 Parent(s): 89b3735

Update parquet files (step 84 of 397)

This view is limited to 50 files because it contains too many changes. See raw diff for the full change set.
Files changed (50):
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/Uniop Exor Designer 6 __EXCLUSIVE__.md +0 -104
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Autocom 2.12.2 Keygen Download and Install the Latest Version of Autocom Software.md +0 -104
  3. spaces/1gistliPinn/ChatGPT4/Examples/BaByliss E702YTE User Manual Download The Best Way to Enjoy Your Hair Care Experience.md +0 -6
  4. spaces/1gistliPinn/ChatGPT4/Examples/Beware The Night Ralph Sarchie Epub Gratis ((TOP)).md +0 -6
  5. spaces/1gistliPinn/ChatGPT4/Examples/Download Windows Xp Home Edition Ulcpc Iso _BEST_.md +0 -68
  6. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download CSR Classics MOD APK and Enjoy Unlimited Money and Gold.md +0 -83
  7. spaces/1phancelerku/anime-remove-background/Download 2go APK and Enjoy Live Online Hangouts with Friends.md +0 -109
  8. spaces/1phancelerku/anime-remove-background/Download Bitcoin Loophole App and Start Trading Cryptocurrencies Today.md +0 -144
  9. spaces/232labs/VToonify/vtoonify/model/raft/core/utils/frame_utils.py +0 -137
  10. spaces/839871171w/newbingAI/README.md +0 -12
  11. spaces/AFOL/GigaGan/README.md +0 -12
  12. spaces/AI4PD/hexviz/hexviz/models.py +0 -64
  13. spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/utils/config.py +0 -94
  14. spaces/AbandonedMuse/UnlimitedMusicGen/web-ui.bat +0 -1
  15. spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/executor/tool_using.py +0 -315
  16. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/chart/Factory.d.ts +0 -5
  17. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridtable/Factory.d.ts +0 -5
  18. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/instructpix2pix.md +0 -215
  19. spaces/Andy1621/uniformer_image_detection/configs/cityscapes/README.md +0 -33
  20. spaces/Andy1621/uniformer_image_detection/configs/fpg/mask_rcnn_r50_fpn_crop640_50e_coco.py +0 -74
  21. spaces/Andy1621/uniformer_image_detection/configs/resnest/cascade_rcnn_s101_fpn_syncbn-backbone+head_mstrain-range_1x_coco.py +0 -4
  22. spaces/Andy1621/uniformer_image_detection/tools/test.py +0 -220
  23. spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/fcn_hr18.py +0 -52
  24. spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50b-d8_769x769_80k_cityscapes.py +0 -2
  25. spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/data_preprocessor.py +0 -199
  26. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py +0 -353
  27. spaces/Aveygo/AstroSleuth/training.md +0 -23
  28. spaces/Awesimo/jojogan/e4e/criteria/__init__.py +0 -0
  29. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_export.py +0 -207
  30. spaces/AzumaSeren100/XuanShen-Bert-VITS2/monotonic_align/__init__.py +0 -15
  31. spaces/Benson/text-generation/Examples/Casos Criminales Misterios Del Pasado Mod Apk ltima Versin.md +0 -39
  32. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/cachecontrol/__init__.py +0 -18
  33. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distlib/wheel.py +0 -1082
  34. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/live.py +0 -375
  35. spaces/BigDL/bigdl_nano_demo/data.py +0 -233
  36. spaces/BraydenMoore/MARCI-NFL-Betting/Source/Train/xgboost_ML.py +0 -69
  37. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/README.md +0 -75
  38. spaces/ChrisCaviar/ControlNet-v1-1/model.py +0 -591
  39. spaces/CobaltZvc/Hyper_Bot/README.md +0 -10
  40. spaces/CofAI/chat.b4/g4f/Provider/Providers/Fakeopen.py +0 -54
  41. spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/cpu/dcn_v2_cpu.cpp +0 -229
  42. spaces/DD0101/Disfluency-base/README.md +0 -12
  43. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/teePen.py +0 -54
  44. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-329f8260.css +0 -1
  45. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-9d98c4c0.js +0 -2
  46. spaces/DataScienceEngineering/2-GradioLiveASR/README.md +0 -13
  47. spaces/DevashishBhake/Question_Generation/README.md +0 -13
  48. spaces/Dinoking/Guccio-AI-Designer/netdissect/modelconfig.py +0 -144
  49. spaces/Dorado607/ChuanhuChatGPT/readme/README_ja.md +0 -139
  50. spaces/ECCV2022/bytetrack/deploy/TensorRT/cpp/README.md +0 -58
spaces/1acneusushi/gradio-2dmoleculeeditor/Uniop Exor Designer 6 __EXCLUSIVE__.md DELETED
@@ -1,104 +0,0 @@
- ## Uniop exor designer 6
-
-
-
-
-
- ![Uniop Exor Designer 6 __EXCLUSIVE__](https://3.bp.blogspot.com/-mAjkad6j2hA/To6ew_qrvFI/AAAAAAAAAoo/JyUU98g7cuo/s1600/Designer_v6.01.PNG)
-
-
-
-
-
- **Click Here ::: [https://www.google.com/url?q=https%3A%2F%2Fbltlly.com%2F2txKOu&sa=D&sntz=1&usg=AOvVaw2JetTkHxDAlVCUUCz9gTg7](https://www.google.com/url?q=https%3A%2F%2Fbltlly.com%2F2txKOu&sa=D&sntz=1&usg=AOvVaw2JetTkHxDAlVCUUCz9gTg7)**
-
-
-
-
-
-
-
-
-
-
-
- Here is a possible title and article with html formatting for the keyword "Uniop exor designer 6":
-
- # How to Use Uniop Exor Designer 6 to Create Stunning HMI Applications
-
-
-
- Uniop Exor Designer 6 is a software tool that allows you to design and develop graphical user interfaces for Uniop HMI panels. With Designer 6, you can create dynamic and interactive applications that connect to your plant-floor devices using a wide range of communication protocols. Whether you need a simple display or a complex dashboard, Designer 6 can help you achieve your goals.
-
-
-
- In this article, we will show you how to use Designer 6 to create a simple HMI application that monitors and controls a temperature sensor and a heater. We will also show you how to test and download your application to a Uniop HMI panel.
-
-
-
- ## Step 1: Create a New Project
-
-
-
- To start a new project in Designer 6, open the software and click on File > New Project. You will see a dialog box where you can enter the project name, description, and location. You can also choose the target HMI panel model and resolution from the drop-down menus. For this example, we will use an eTOP05 panel with a resolution of 320x240 pixels. Click OK to create the project.
-
-
-
- ## Step 2: Add Pages and Objects
-
-
-
- A project in Designer 6 consists of one or more pages that contain graphical objects. You can add pages by clicking on the Page menu and selecting New Page. You can rename the pages by double-clicking on their names in the Project Explorer window. You can also set the background color and image of each page by right-clicking on them and choosing Properties.
-
-
-
- To add objects to a page, you can use the Toolbox window that contains various categories of objects, such as buttons, lamps, gauges, graphs, etc. You can drag and drop the objects from the Toolbox to the page and resize and position them as you like. You can also change their properties by right-clicking on them and choosing Properties.
-
-
-
- For this example, we will create two pages: one for monitoring the temperature sensor and one for controlling the heater. On the first page, we will add a text object that displays the current temperature value, a gauge object that shows the temperature level, and a button object that switches to the second page. On the second page, we will add a text object that displays the heater status (on or off), a lamp object that indicates the heater state (red or green), and a button object that toggles the heater state. We will also add another button object that switches back to the first page.
-
-
-
- ## Step 3: Define Tags and Variables
-
-
-
- To communicate with your plant-floor devices, you need to define tags and variables in Designer 6. Tags are symbolic names that represent data points in your devices, such as inputs, outputs, registers, etc. Variables are internal memory locations that store data values in your HMI panel.
-
-
-
- You can define tags and variables by clicking on the Tag menu and selecting Tag Editor. You will see a table where you can enter the tag name, type, address, format, description, etc. You can also import tags from external files or export tags to external files.
-
-
-
- For this example, we will define two tags: one for reading the temperature value from the sensor (Temp) and one for writing the heater state to the heater (Heat). We will also define two variables: one for storing the current page number (Page) and one for storing the heater status (Status).
-
-
-
- ## Step 4: Assign Actions and Expressions
-
-
-
- To make your HMI application interactive and dynamic, you need to assign actions and expressions to your objects in Designer 6. Actions are commands that execute when an event occurs on an object, such as pressing a button or changing a value. Expressions are formulas that calculate or manipulate data values based on tags or variables.
-
-
-
- You can assign actions and expressions by right-clicking on an object and choosing Actions or Expressions. You will see a dialog box where you can select an event type (such as On Press or On Change) and enter an action or expression code using a simple scripting language.
-
-
-
- For this example, we will assign actions and expressions to our objects as follows:
-
-
-
- - The text object on the first page will have an expression that displays the value of Temp tag formatted as "Temp: #.# ° dfd1c89656
-
-
-
-
-
-
-
-
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Autocom 2.12.2 Keygen Download and Install the Latest Version of Autocom Software.md DELETED
@@ -1,104 +0,0 @@
- <br />
- <h1>Autocom 2.12.2 Keygen: What You Need to Know</h1>
- <p>If you are looking for a reliable and versatile diagnostic tool for your car or truck, you might have heard of <strong>Autocom</strong>. Autocom is a multi-brand car and truck diagnostic software that allows you to perform various tests, repairs, and adjustments on your vehicle. It supports a wide range of models, systems, and functions, making it a useful tool for both professionals and hobbyists.</p>
- <h2>autocom 2.12.2 keygen</h2><br /><p><b><b>Download Zip</b> &#10026;&#10026;&#10026; <a href="https://byltly.com/2uKzHi">https://byltly.com/2uKzHi</a></b></p><br /><br />
- <p>However, to use Autocom, you need a valid activation file that matches your device serial number and software version. This is where a <strong>keygen</strong> comes in handy. A keygen is a program that generates a unique activation file for you, so you can use Autocom without any limitations or restrictions.</p>
- <p>In this article, we will show you how to download, install, and use <strong>Autocom 2.12.2 keygen</strong>, which is one of the latest versions of the software. We will also give you some tips and tricks for using Autocom effectively and efficiently.</p>
- <h2>How to Download and Install Autocom 2.12.2 Keygen</h2>
- <p>The first step is to download the keygen from a reliable source. You can find the download link in this forum thread, where you can also read more about the features and updates of Autocom 2.12.2 keygen.</p>
- <p>The file is compressed in a ZIP archive, so you will need a program like WinRAR or 7-Zip to extract it. The file also requires a password, which you can get by sending a private message to the thread author after thanking him and adding reputation.</p>
- <p>Once you have the password, you can extract the keygen folder to your desktop or any other location of your choice. Inside the folder, you will find two files: <code>Autocom 2020_23.exe</code> and <code>date_patch.exe</code>. The first one is the keygen itself, while the second one is a patch that automatically changes your system date to match the activation date.</p>
- <p>autocom 2.12.2 activation code generator<br />
- autocom 2.12.2 crack download free<br />
- autocom 2.12.2 serial number finder<br />
- autocom 2.12.2 license key online<br />
- autocom 2.12.2 patch file full version<br />
- autocom 2.12.2 keygen torrent magnet<br />
- autocom 2.12.2 registration code software<br />
- autocom 2.12.2 product key windows<br />
- autocom 2.12.2 unlock code tool<br />
- autocom 2.12.2 keygen rar password<br />
- autocom 2.12.2 activation key mac<br />
- autocom 2.12.2 crack zip file<br />
- autocom 2.12.2 serial key generator<br />
- autocom 2.12.2 license code online<br />
- autocom 2.12.2 patch exe download<br />
- autocom 2.12.2 keygen direct link<br />
- autocom 2.12.2 registration key software<br />
- autocom 2.12.2 product code windows<br />
- autocom 2.12.2 unlock key tool<br />
- autocom 2.12.2 keygen zip password<br />
- autocom 2.12.2 activation code mac<br />
- autocom 2.12.2 crack rar file<br />
- autocom 2.12.2 serial number generator<br />
- autocom 2.12.2 license key online<br />
- autocom 2.12.2 patch rar download<br />
- autocom 2.12.2 keygen torrent link<br />
- autocom 2.12.2 registration code software<br />
- autocom 2.12.2 product key windows<br />
- autocom 2.12.2 unlock code tool<br />
- autocom 2.12.2 keygen exe password<br />
- autocom 2.12.2 activation key mac<br />
- autocom 2.12.2 crack zip file<br />
- autocom 2.12.2 serial key generator<br />
- autocom 2.12.2 license code online<br />
- autocom 2.12.2 patch zip download<br />
- autocom 2.12.2 keygen direct link<br />
- autocom 2.12.2 registration key software<br />
- autocom 2.12.2 product code windows<br />
- autocom 2017 release r1 full version with keygen and patch tool for free download.</p>
- <p>To run the keygen, you need to double-click on <code>Autocom 2020_23.exe</code>. You will see a window like this:</p>
- <img src="https://i.imgur.com/8ZyXkQa.png" alt="Autocom 2020_23.exe window">
- <p>The keygen will generate an activation file for you based on your device serial number and software version. You can enter these values manually or click on <code>Browse</code> to select them from your installation folder.</p>
- <p>After entering or selecting your serial number and software version, click on <code>Generate FileActivation.xml</code>. The keygen will create an activation file named <code>FileActivation.xml</code> in the same folder as the keygen.</p>
- <p>The next step is to copy this file to your installation folder, which is usually located at <code>C:\Program Files (x86)\Autocom\Delphi Cars 2020_23\bin</code>. You can use Windows Explorer or any other file manager to do this.</p>
- <h2>How to Use Autocom 2.12.2 Keygen</h2>
- <p>Now that you have installed the keygen and copied the activation file, you are ready to use Autocom 2.12.2 keygen.</p>
- <p>The first thing you need to do is connect your device (such as CDP+ or Delphi DS150E) to your computer via USB or Bluetooth. Make sure your device is turned on and recognized by your computer.</p>
- <p>The next thing you need to do is run <code>date_patch.exe</code>, which is located in the same folder as the keygen. This patch will change your system date temporarily to match the activation date of your file (usually 01/01/2020). This is necessary because Autocom checks your system date every time you run it.</p>
- <p>After running <code>date_patch.exe</code>, you will see a window like this:</p>
- <img src="https://i.imgur.com/4Yc6f9u.png" alt="date_patch.exe window">
- <p>You don't need to do anything else with this window, just leave it open until you finish using Autocom.</p>
- <p>The final step is to run Autocom from your desktop shortcut or start menu. You will see a window like this:</p>
- <img src="https://i.imgur.com/6lL7J4M.png" alt="Autocom window">
- <p>You can now select your vehicle brand, model, year, system, function, etc., and perform various diagnostic tasks with Autocom.</p>
- <h2>Tips and Tricks for Using Autocom 2.12.2 Keygen</h2>
- <p>To make the most out of Autocom 2.12.2 keygen, here are some tips and tricks that you should keep in mind:</p>
- <ul>
- <li>If you encounter any errors or problems with Autocom, such as communication failure, license error, hardware fault, etc., try restarting your device and/or computer, reconnecting your device via USB or Bluetooth, running <code>date_patch.exe</code> again, or reinstalling Autocom.</li>
- <li>If you want to customize your settings and preferences with Autocom, such as language, units, sound, appearance, etc., go to <code>Settings</code> > <code>User Preferences</code>.</li>
- <li>If you want to access additional features and functions with Autocom, such as data logging, oscilloscope, flight recorder, service reset, etc., go to <code>Diagnostics</code> > <code>Advanced Features</code>.</li>
- <li>If you want to update your firmware and software with Autocom, go to <code>Diagnostics</code> > <code>Firmware Update</code>. You will need an internet connection for this.</li>
- <li>If you want to learn more about how to use Autocom effectively and efficiently, go to <code>Diagnostics</code> > <code>User Manual</code>, where you can find detailed instructions and tutorials for various tasks.</li>
- </ul>
- <h2>Conclusion</h2>
- <p>In conclusion, Autocom 2.12.2 keygen is a powerful and versatile diagnostic tool that can help you diagnose and repair your car or truck easily and quickly.</p>
- <p>By following this article, you should be able to download, install, and use Autocom 2.12.2 keygen without any hassle or difficulty.</p>
- <p>If you have any questions or feedback about this article or Autocom 2.12.2 keygen in general, feel free to leave a comment below or contact us via email.</p>
- <p>We hope you enjoyed this article and found it useful!</p>
- Here are some FAQs after the conclusion: <h3>Frequently Asked Questions (FAQs)</h3>
- <ol>
- <li><strong>What are the system requirements for running Autocom 2.12.2 keygen?</strong></li>
- <p>You need a Windows PC (Windows 10 or Windows 8) with at least 4 GB RAM (depending on the OS), 16 GB free space on the hard drive, screen resolution of 1440 x 900 or higher, connection to the internet, Bluetooth (SPP), USB port, Adobe Acrobat Reader 8.0 or higher, and .NET framework 3.5 and 2.0 turned on. You also need a compatible device (such as CDP+ or Delphi DS150E) that supports Autocom software.</p>
- <li><strong>Is Autocom 2.12.2 keygen safe and legal?</strong></li>
- <p>We cannot guarantee that Autocom 2. I'll try to continue the article. Here is the rest of the article: <p>12.2 keygen, it is a program that generates a unique activation file for you, so you can use Autocom without any limitations or restrictions.</p>
- <p>However, you should be aware that using a keygen is not a legal or safe way to activate Autocom. A keygen is a form of software piracy that violates the terms and conditions of Autocom. It may also contain viruses, malware, or spyware that can harm your computer or device.</p>
- <p>Therefore, we do not recommend or endorse using Autocom 2.12.2 keygen or any other keygen for Autocom. The best and safest way to use Autocom is to purchase a license from an authorized dealer or distributor. This way, you can enjoy the full features and functions of Autocom without any risk or trouble.</p>
- <li><strong>What are the alternatives to Autocom 2.12.2 keygen?</strong></li>
- <p>If you are looking for alternatives to Autocom 2.12.2 keygen, there are some options that you can consider. For example, you can try:</p>
- <ul>
- <li><strong>Autocom Version 23.2020 (Full + Kegen) + Firmware 1 & 2 Platine</strong>: This is a newer version of Autocom software that comes with a keygen and a firmware update for your device. It supports more vehicles, systems, and functions than Autocom 2.12.2 keygen. However, it also requires a password to download and extract the file, and it may not work on older devices or clones.</li>
- <li><strong>Autocom + Delphi 2021.11 software + Keygen</strong>: This is another newer version of Autocom software that comes with a keygen and a software update for your device. It supports both cars and trucks, and it has more features and functions than Autocom 2.12.2 keygen. However, it also requires a higher system requirement to run, and it may not work on older devices or clones.</li>
- <li><strong>Other diagnostic tools</strong>: There are many other diagnostic tools that you can use for your car or truck, such as OBDLink, BlueDriver, Launch X431, etc. These tools have different features, functions, prices, and compatibility than Autocom. You can compare and choose the best one for your needs and budget.</li>
- </ul>
- <li><strong>Where can I find more information about Autocom 2.12.2 keygen?</strong></li>
- <p>If you want to find more information about Autocom 2.12.2 keygen, such as reviews, feedback, tutorials, etc., you can visit some online forums or websites that specialize in automotive diagnostics and software. For example, you can check out:</p>
- <ul>
- <li><strong>MHH AUTO</strong>: This is a forum where you can find various topics and discussions about automotive software, repair manuals, coding, programming, chip tuning, etc. You can also find the download link and the password for Autocom 2.12.2 keygen here.</li>
- <li><strong>DHT Auto</strong>: This is a website where you can find various automotive software, repair manuals, coding, programming, chip tuning, etc. You can also find the download link and the password for Autocom Version 23.2020 (Full + Kegen) + Firmware 1 & 2 Platine here.</li>
- <li><strong>Lymuna</strong>: This is a website where you can find various automotive software, repair manuals, coding, programming, chip tuning, etc. You can also find the download link and the password for Autocom + Delphi 2021.11 software + Keygen here.</li>
- </ul>
- </ol>
- </p> 0a6ba089eb<br />
- <br />
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/BaByliss E702YTE User Manual Download The Best Way to Enjoy Your Hair Care Experience.md DELETED
@@ -1,6 +0,0 @@
- <h2>BaByliss E702YTE User Manual Download</h2><br /><p><b><b>Download File</b> &bull;&bull;&bull; <a href="https://imgfil.com/2uxYOZ">https://imgfil.com/2uxYOZ</a></b></p><br /><br />
- <br />
- aaccfb2cb3<br />
- <br />
- <br />
- <p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Beware The Night Ralph Sarchie Epub Gratis ((TOP)).md DELETED
@@ -1,6 +0,0 @@
- <h2>beware the night ralph sarchie epub gratis</h2><br /><p><b><b>Download</b> &gt; <a href="https://imgfil.com/2uy0AB">https://imgfil.com/2uy0AB</a></b></p><br /><br />
-
- d5da3c52bf<br />
- <br />
- <br />
- <p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Download Windows Xp Home Edition Ulcpc Iso _BEST_.md DELETED
@@ -1,68 +0,0 @@
- <br />
- <h1>Download Windows XP Home Edition ULCPC ISO</h1>
-
- <p>If you have a netbook that came with Windows XP Home Edition ULCPC (Ultra Low Cost Personal Computer), you might be looking for a way to download the ISO file of the operating system. Windows XP Home Edition ULCPC was a special edition of Windows XP that was designed for low-cost netbooks with limited hardware specifications. It had a regular license of Windows XP Home Edition with Service Pack 3 included.</p>
- <h2>Download Windows Xp Home Edition Ulcpc Iso</h2><br /><p><b><b>Download Zip</b> &#9913; <a href="https://imgfil.com/2uy1EC">https://imgfil.com/2uy1EC</a></b></p><br /><br />
-
- <p>However, finding a legitimate and safe download of Windows XP Home Edition ULCPC ISO can be challenging. Microsoft has discontinued support for Windows XP since 2014, and there are no official ISO downloads available from Microsoft. Moreover, many unofficial sources that claim to offer the ISO file may contain malware or viruses that can harm your netbook.</p>
-
- <h2>How to Download Windows XP Home Edition ULCPC ISO Safely</h2>
-
- <p>So, how can you download Windows XP Home Edition ULCPC ISO without risking your netbook's security? Here are some tips to help you out:</p>
-
- <ul>
- <li>Check if you have a recovery CD or DVD that came with your netbook. Some manufacturers, such as Dell and HP, provided a recovery disc that contained the Windows XP Home Edition ULCPC ISO file. You can use this disc to reinstall the operating system on your netbook.</li>
- <li>If you don't have a recovery disc, check if you have a COA sticker on your netbook. This is a label that shows the product key of your Windows XP Home Edition ULCPC license. You can use this product key to activate the operating system after installing it from another source.</li>
- <li>If you have the product key, you can try to find a trustworthy source that offers the Windows XP Home Edition ULCPC ISO file for download. You can use a search engine to look for websites that have positive reviews and feedback from other users. However, be careful and scan the downloaded file for malware before using it.</li>
- <li>If you can't find a reliable source, you can try to create your own Windows XP Home Edition ULCPC ISO file from another edition of Windows XP. You will need a Windows XP installation disc or ISO file, a software to extract and modify the files, and a software to create a bootable ISO file. You will also need to edit some files to make the installation compatible with your netbook's hardware and license.</li>
- </ul>
-
- <h2>Why You Should Consider Upgrading Your Netbook</h2>
-
- <p>While downloading Windows XP Home Edition ULCPC ISO may seem like a good idea to restore your netbook's functionality, you should also consider the drawbacks of using an outdated operating system. Windows XP is no longer supported by Microsoft, which means that it does not receive any security updates or patches. This makes your netbook vulnerable to hackers, viruses, and malware that can compromise your personal data and online activities.</p>
-
- <p>Moreover, Windows XP is not compatible with many modern applications and websites that require newer versions of Windows or browsers. This limits your netbook's usability and performance, and may cause errors or crashes. You may also miss out on some features and benefits that newer operating systems offer, such as faster boot times, better security, and more customization options.</p>
-
- <p>Therefore, you may want to consider upgrading your netbook to a newer operating system, such as Windows 10 or Linux. These operating systems are more secure, stable, and compatible with current software and web standards. They can also improve your netbook's speed and efficiency, and give you more control over your settings and preferences.</p>
-
- <p>However, before upgrading your netbook, you should check if it meets the minimum hardware requirements of the new operating system. You may also need to backup your important files and data before performing the upgrade. Alternatively, you can buy a new netbook that comes with a modern operating system pre-installed.</p>
- <p></p>
-
- <h2>Conclusion</h2>
-
- <p>Windows XP Home Edition ULCPC was a special edition of Windows XP that was designed for low-cost netbooks. It had a regular license of Windows XP Home Edition with Service Pack 3 included. However, finding a legitimate and safe download of Windows XP Home Edition ULCPC ISO can be challenging, as Microsoft has discontinued support for Windows XP since 2014.</p>
-
- <p>You can try to download Windows XP Home Edition ULCPC ISO from a trustworthy source if you have the product key of your license. You can also try to create your own ISO file from another edition of Windows XP if you have the skills and tools. However, you should also consider the drawbacks of using an outdated operating system that is no longer supported or compatible with modern software and web standards.</p>
-
- <p>You may want to consider upgrading your netbook to a newer operating system, such as Windows 10 or Linux. These operating systems are more secure, stable, and compatible with current software and web standards. They can also improve your netbook's speed and efficiency, and give you more control over your settings and preferences.</p>
- </li>
- <li>Install a lightweight office suite like LibreOffice or OpenOffice to create and edit documents, spreadsheets, and presentations.</li>
- <li>Install a lightweight media player like VLC or MPC-HC to play audio and video files.</li>
- <li>Install a lightweight image editor like GIMP or Paint.NET to edit and manipulate images.</li>
- <li>Install a lightweight file manager like Explorer++ or FreeCommander to manage your files and folders.</li>
- <li>Install a lightweight compression tool like 7-Zip or PeaZip to compress and decompress files.</li>
- <li>Install a lightweight backup tool like Cobian Backup or EaseUS Todo Backup to backup your important files and data.</li>
- </ul>
-
- <h2>Conclusion</h2>
-
- <p>Windows XP Home Edition ULCPC was a special edition of Windows XP that was designed for low-cost netbooks with limited hardware specifications. It had a regular license of Windows XP Home Edition with Service Pack 3 included. However, finding a legitimate and safe download of Windows XP Home Edition ULCPC ISO can be challenging, as Microsoft has discontinued support for Windows XP since 2014.</p>
-
- <p>You can try to download Windows XP Home Edition ULCPC ISO from a trustworthy source if you have the product key of your license. You can also try to create your own ISO file from another edition of Windows XP if you have the skills and tools. However, you should also consider the drawbacks of using an outdated operating system that is no longer supported or compatible with modern software and web standards.</p>
-
- <p>You may want to consider upgrading your netbook to a newer operating system, such as Windows 10 or Linux. These operating systems are more secure, stable, and compatible with current software and web standards. They can also improve your netbook's speed and efficiency, and give you more control over your settings and preferences.</p>
-
- <p>If you decide to stick with Windows XP Home Edition ULCPC, you can follow the tips and tricks in this article to install it on your netbook and optimize its performance and functionality. You can also use some lightweight software and tools to enhance your netbook's usability and productivity.</p>
-
- <p>We hope this article has helped you learn more about Windows XP Home Edition ULCPC ISO and how to download, install, and use it on your netbook. If you have any questions or feedback, please feel free to leave a comment below.</p>
- <p>Windows XP Home Edition ULCPC was a special edition of Windows XP that was designed for low-cost netbooks with limited hardware specifications. It had a regular license of Windows XP Home Edition with Service Pack 3 included. However, finding a legitimate and safe download of Windows XP Home Edition ULCPC ISO can be challenging, as Microsoft has discontinued support for Windows XP since 2014.</p>
-
- <p>You can try to download Windows XP Home Edition ULCPC ISO from a trustworthy source if you have the product key of your license. You can also try to create your own ISO file from another edition of Windows XP if you have the skills and tools. However, you should also consider the drawbacks of using an outdated operating system that is no longer supported or compatible with modern software and web standards.</p>
-
- <p>You may want to consider upgrading your netbook to a newer operating system, such as Windows 10 or Linux. These operating systems are more secure, stable, and compatible with current software and web standards. They can also improve your netbook's speed and efficiency, and give you more control over your settings and preferences.</p>
-
- <p>If you decide to stick with Windows XP Home Edition ULCPC, you can follow the tips and tricks in this article to install it on your netbook and optimize its performance and functionality. You can also use some lightweight software and tools to enhance your netbook's usability and productivity.</p>
-
- <p>We hope this article has helped you learn more about Windows XP Home Edition ULCPC ISO and how to download, install, and use it on your netbook. If you have any questions or feedback, please feel free to leave a comment below.</p> 3cee63e6c2<br />
- <br />
- <br />
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download CSR Classics MOD APK and Enjoy Unlimited Money and Gold.md DELETED
@@ -1,83 +0,0 @@
-
- <h1>CSR Racing Classics Mod APK: A Retro Racing Game with Unlimited Money</h1>
- <p>If you are a fan of classic cars and racing games, you will love CSR Racing Classics. This game lets you race with some of the most iconic cars from the 50s to the 90s, such as the Ford Mustang, Chevrolet Camaro, Dodge Charger, and more. You can customize your car with different parts, paint jobs, and decals, and compete with other players online or offline. But what if you want to enjoy the game without any limitations or restrictions? That's where CSR Racing Classics Mod APK comes in.</p>
- <h2>csr racing classics mod apk</h2><br /><p><b><b>DOWNLOAD</b> &#9734;&#9734;&#9734; <a href="https://urlin.us/2uSW9X">https://urlin.us/2uSW9X</a></b></p><br /><br />
- <h2>Introduction</h2>
- <p>In this article, we will tell you everything you need to know about CSR Racing Classics Mod APK, a modified version of the original game that gives you unlimited money, all cars unlocked, and other features. We will also show you how to download and install it on your Android device easily and safely. So, let's get started!</p>
- <h3>What is CSR Racing Classics?</h3>
- <p>CSR Racing Classics is a racing game developed by NaturalMotionGames Ltd, the same company behind other popular games like CSR Racing 2, CSR Racing, and Clumsy Ninja. The game was released in 2014 and has been downloaded over 50 million times on Google Play Store. It has a rating of 4.4 out of 5 stars from more than 900 thousand reviews.</p>
- <p>The game is set in a city where you can race with classic cars from different eras and manufacturers. You can choose from over 50 cars, including muscle cars, sports cars, supercars, and hot rods. You can also upgrade your car with various parts and accessories to improve its performance and appearance. You can race in different modes, such as Crew Battles, Ladder Races, Regulation Races, Daily Battles, and more. You can also challenge other players from around the world in the online multiplayer mode.</p>
- <h3>What is CSR Racing Classics Mod APK?</h3>
- <p>CSR Racing Classics Mod APK is a modified version of the original game that gives you some advantages and benefits that are not available in the official version. For example, you can get unlimited money to buy any car or part you want without spending real money. You can also unlock all the cars in the game without having to complete any tasks or achievements. You can also enjoy other features like faster loading time, smoother gameplay, no ads, and more.</p>
- <p>csr classics mod apk unlimited money and gold<br />
- csr classics mod apk download for android<br />
- csr classics mod apk latest version<br />
- csr classics mod apk revdl<br />
- csr classics mod apk offline<br />
- csr classics mod apk android 1<br />
- csr classics mod apk obb<br />
- csr classics mod apk unlimited everything<br />
- csr classics mod apk free shopping<br />
- csr classics mod apk rexdl<br />
- csr classics hack mod apk download<br />
- csr classics hack mod apk 2023<br />
- csr classics hack mod apk no root<br />
- csr classics hack mod apk ios<br />
- csr classics hack mod apk online<br />
- download game csr classics mod apk<br />
- download game csr classics mod apk data<br />
- download game csr classics mod apk terbaru<br />
- download game csr classics mod apk versi lama<br />
- download game csr classics mod apk unlimited money<br />
- how to install csr classics mod apk<br />
- how to download csr classics mod apk<br />
- how to update csr classics mod apk<br />
- how to play csr classics mod apk<br />
- how to get csr classics mod apk<br />
- cheat csr classics mod apk<br />
- cheat codes for csr classics mod apk<br />
- cheat engine for csr classics mod apk<br />
- cheat game csr classics mod apk<br />
- cheat money csr classics mod apk<br />
- best cars in csr classics mod apk<br />
- best tune for csr classics mod apk<br />
- best upgrades for csr classics mod apk<br />
- best way to get money in csr classics mod apk<br />
- best way to restore cars in csr classics mod apk<br />
- classic cars in csr racing 2 mod apk<br />
- classic cars in real racing 3 mod apk<br />
- classic cars in asphalt 8 mod apk<br />
- classic cars in need for speed most wanted mod apk<br />
- classic cars in need for speed no limits mod apk<br />
- old version of csr classics mod apk<br />
- new version of csr classics mod apk<br />
- full version of csr classics mod apk<br />
- pro version of csr classics mod apk<br />
- premium version of csr classics mod apk<br />
- tips and tricks for csr classics mod apk<br />
- guide and walkthrough for csr classics mod apk<br />
- cheats and hacks for csr classics mod apk<br />
- mods and features for csr classics mod apk</p>
- <p>With CSR Racing Classics Mod APK, you can have more fun and excitement while playing the game. You can race with any car you like without worrying about running out of money or resources. You can also compete with other players online without any disadvantages or disadvantages. You can experience the thrill of racing with classic cars in a realistic and immersive way.</p>
- <h2>Features of CSR Racing Classics Mod APK</h2>
- <p>Here are some of the features that you can enjoy when you download and install CSR Racing Classics Mod APK on your device:</p>
- <h3>Unlimited Money</h3>
- <p>One of the main features of CSR Racing Classics Mod APK is that it gives you unlimited money to spend on anything you want in the game. You can buy any car or part you want without having to earn or save money. You can also upgrade your car to the maximum level without any limitations or restrictions. You can have the best car in the game without any hassle or effort.</p>
- <h3>All Cars Unlocked</h3>
- <p>Another feature of CSR Racing Classics Mod APK is that it unlocks all the cars in the game for you to use and , you need to locate the APK file on your device. You can use a file manager app or your device's default file explorer to find it. It is usually in the downloads folder or the folder where you saved it. Once you find the APK file, tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish. It may take a few minutes depending on your device and internet speed. After the installation is done, you can launch the game by tapping on its icon on your home screen or app drawer. You can also create a shortcut for it on your desktop for easy access. That's it! You have successfully downloaded and installed CSR Racing Classics Mod APK on your device. You can now enjoy the game with unlimited money, all cars unlocked, and other features.</p>
- <h2>Conclusion</h2>
- <p>CSR Racing Classics Mod APK is a great way to enjoy the game of CSR Racing Classics with more fun and excitement. You can race with classic cars from different eras and manufacturers, customize your car with various parts and accessories, and compete with other players online or offline. You can also get unlimited money, all cars unlocked, and other features that are not available in the official version of the game. If you want to download and install CSR Racing Classics Mod APK on your Android device, you can follow the steps we have provided in this article. You can also use the link we have given to download the APK file safely and securely. Make sure you enable unknown sources on your device before installing the APK file. We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below. We would love to hear from you.</p>
- <h2>FAQs</h2>
- <p>Here are some of the frequently asked questions about CSR Racing Classics Mod APK:</p>
- <h3>Is CSR Racing Classics Mod APK safe to use?</h3>
- <p>Yes, CSR Racing Classics Mod APK is safe to use as long as you download it from a trusted source like the one we have provided. It does not contain any viruses, malware, or spyware that can harm your device or data. However, you should always be careful when downloading and installing any modded apps or games from unknown sources.</p>
- <h3>Is CSR Racing Classics Mod APK compatible with my device?</h3>
- <p>CSR Racing Classics Mod APK is compatible with most Android devices that run on Android 4.0 or higher. However, some devices may not support some features or functions of the game due to hardware or software limitations. You can check the compatibility of your device by visiting the Google Play Store page of the original game.</p>
- <h3>Can I play CSR Racing Classics Mod APK offline?</h3>
- <p>Yes, you can play CSR Racing Classics Mod APK offline without any internet connection. You can enjoy the game in different modes, such as Crew Battles, Ladder Races, Regulation Races, Daily Battles, and more. However, you will need an internet connection to play online multiplayer mode and access some online features.</p>
- <h3>Can I update CSR Racing Classics Mod APK?</h3>
- <p>No, you cannot update CSR Racing Classics Mod APK through the Google Play Store or any other source. If you try to update it, you may lose all your progress and data in the game. You may also encounter some errors or issues while playing the game. If you want to update the game, you will have to uninstall the modded version and install the official version from the Google Play Store.</p>
- <h3>Can I use CSR Racing Classics Mod APK with my existing account?</h3>
- <p>No, you cannot use CSR Racing Classics Mod APK with your existing account in the original game. If you try to do so, you may get banned or suspended from the game for violating its terms and conditions. You may also lose all your progress and data in the game. If you want to use CSR Racing Classics Mod APK, you will have to create a new account in the modded version of the game.</p> 197e85843d<br />
- <br />
- <br />
spaces/1phancelerku/anime-remove-background/Download 2go APK and Enjoy Live Online Hangouts with Friends.md DELETED
@@ -1,109 +0,0 @@
- <br />
- <h1>How to Download 2go APK</h1>
- <p>Do you want to chat with friends and meet new people on a unique social network? If so, you might be interested in 2go, a popular app that allows you to hang out online in chat rooms and participate in various hangouts. 2go is available for free on the Google Play Store, but you might want to download its APK file for some reasons. For example, you might want to install an older version of the app, or you might not have access to the Play Store on your device. In this article, we will show you how to download 2go APK and install it on your Android device.</p>
- <h2>download 2go apk</h2><br /><p><b><b>DOWNLOAD</b> &#9734; <a href="https://jinyurl.com/2uNQJn">https://jinyurl.com/2uNQJn</a></b></p><br /><br />
- <h2>What is an APK File?</h2>
- <p>An APK file is an Android Package Kit file that contains all the files and code needed to run an app on an Android device. It is similar to an executable file (.exe) on Windows or a DMG file (.dmg) on Mac. When you download an app from the Play Store, you are actually installing an APK file on your device, but you can't access the file directly. However, there are ways to get the APK file of any app from the Play Store or from your device.</p>
- <h2>How to Download APK Files from Google Play Store</h2>
- <h3>Method 1: Using a Web Tool</h3>
- <p>One way to download APK files from the Play Store is to use a web tool that can generate download links for any app. There are several websites that offer this service, such as <a href="(^1^)">APK Downloader</a> or <a href="(^2^)">Evozi's APK Downloader</a>. Here are the steps to follow:</p>
- <ol>
- <li>Open the Google Play Store on your Android device or computer and search for the app you want to download. In this case, search for "2go Chat - Chat Rooms & Dating".</li>
- <li>Copy the URL of the app from the address bar of your browser. It should look something like this: https://play.google.com/store/apps/details?id=im.twogo.godroid</li>
- <li>Go to one of the web tools mentioned above and paste the URL in the box at the top.</li>
- <li>Select a device type, architecture, and Android version from the drop-down menus. You can usually leave these as default, but if the downloaded APK doesn't work on your device, you might need to change them.</li>
- <li>Click on the "Generate Download Link" button and wait for a few seconds.</li>
- <li>Click on the "Click here to download" button or the down arrow icon to save the APK file to your device or computer.</li>
- </ol>
- <h3>Method 2: Using an App Extractor</h3>
- <p>Another way to download APK files from the Play Store is to use an app extractor that can save the APK file of any app installed on your device. This way, you don't need a web browser or an Internet connection, but you do need to download the app from the Play Store first. One app that can do this is <a href="(^3^)">App APK Extractor & Analyzer</a>. Here are the steps to follow:</p>
- <ol>
- <li>Download and install App APK Extractor & Analyzer from the Play Store.</li>
- <li>Open the app and you will see a list of all the apps installed on your device, including system apps and user apps.</li>
- <li>Scroll down and find the app you want to extract. In this case, find "2go Chat - Chat Rooms & Dating".</li>
- <li>Tap on the app and you will see a pop-up menu with several options.</li>
- <li>Select "Extract APK" and choose a location to save the APK file. You can also rename the file if you want.</li>
- <li>Wait for the extraction process to finish and you will see a notification that says "APK Extracted Successfully".</li>
- </ol>
- <h2>How to Install APK Files on Android</h2>
- <p>Now that you have downloaded the 2go APK file, you need to install it on your Android device. However, you can't just open the file and tap on "Install" like you would with a regular app. You need to enable a setting that allows you to install apps from unknown sources, which are sources other than the Play Store. Here are the steps to follow:</p>
- <ol>
- <li>Go to your device's settings and look for an option that says "Security" or "Privacy". Tap on it.</li>
- <li>Find an option that says "Unknown sources" or "Install unknown apps". Tap on it and toggle it on. You might see a warning message that says installing apps from unknown sources can harm your device. Tap on "OK" or "Allow" to proceed.</li>
- <li>Go to the location where you saved the 2go APK file and tap on it. You might see a pop-up window that asks you if you want to install this application. Tap on "Install" and wait for the installation process to finish.</li>
- <li>Once the installation is done, you can open the 2go app and enjoy chatting with your friends and meeting new people.</li>
- </ol>
- <h2>Conclusion</h2>
- <p>In this article, we have shown you how to download 2go APK and install it on your Android device. Downloading APK files can be useful if you want to access older versions of apps, or if you don't have access to the Play Store on your device. However, you should be careful when downloading APK files from unknown sources, as they might contain malware or viruses that can harm your device. Always download APK files from trusted websites or apps, and scan them with an antivirus app before installing them. We hope this article has been helpful and informative for you. If you have any questions or feedback, please let us know in the comments below.</p>
- <p>download 2go apk for android<br />
- download 2go apk latest version<br />
- download 2go apk from uptodown<br />
- download 2go apk for pc<br />
- download 2go apk mod<br />
- download 2go apk old version<br />
- download 2go apk free<br />
- download 2go apk file<br />
- download 2go apk app<br />
- download 2go apk online<br />
- download 2go apk chat rooms and dating<br />
- download 2go apk social network<br />
- download 2go apk meet new people<br />
- download 2go apk hang out live<br />
- download 2go apk games<br />
- download 2go apk voice messages<br />
- download 2go apk photos<br />
- download 2go apk status updates<br />
- download 2go apk stories<br />
- download 2go apk stickers<br />
- download 2go apk credits<br />
- download 2go apk premium<br />
- download 2go apk unlocked<br />
- download 2go apk hack<br />
- download 2go apk cheat<br />
- download 2go apk pro<br />
- download 2go apk plus<br />
- download 2go apk beta<br />
- download 2go apk update<br />
- download 2go apk review<br />
- download 2go apk rating<br />
- download 2go apk feedback<br />
- download 2go apk support<br />
- download 2go apk help<br />
- download 2go apk guide<br />
- download 2go apk tips<br />
- download 2go apk tricks<br />
- download 2go apk features<br />
- download 2go apk benefits<br />
- download 2go apk advantages<br />
- download 2go apk disadvantages<br />
- download 2go apk alternatives<br />
- download 2go apk competitors<br />
- download 2go apk comparison<br />
- download 2go apk best practices<br />
- download 2go apk how to use<br />
- download 2go apk installation process<br />
- download 2go apk requirements</p>
- <h2>FAQs</h2>
- <ul>
- <li><b>What is 2go?</b><br>2go is a social network app that allows you to chat with friends and meet new people in chat rooms and hangouts. You can also share photos, videos, voice notes, contacts, and location with your contacts. 2go is free to download and use, but it requires an Internet connection.</li>
- <li><b>What are the benefits of downloading 2go APK?</b><br>Downloading 2go APK can give you some benefits, such as: <ul>
- <li>You can install an older version of the app if you don't like the latest update or if it doesn't work well on your device.</li>
- <li>You can install the app on devices that don't have access to the Play Store, such as some tablets or TVs.</li>
- <li>You can backup the app and restore it later if you need to reset your device or switch to a new one.</li>
- </ul></li>
- <li><b>Is downloading 2go APK safe?</b><br>Downloading 2go APK is generally safe if you download it from a trusted source, such as the official website of 2go or a reputable web tool or app extractor. However, there are some risks involved when downloading APK files from unknown sources, such as: <ul>
- <li>The APK file might be corrupted or modified by hackers or malware developers, which can harm your device or steal your data.</li>
- <li>The APK file might not be compatible with your device or Android version, which can cause errors or crashes.</li>
- <li>The APK file might not be updated regularly, which can expose you to security vulnerabilities or bugs.</li>
- </ul></li>
- <li><b>How do I update 2go APK?</b><br>If you download 2go APK from a web tool or an app extractor, you will not receive automatic updates from the Play Store. You will need to manually check for updates and download the latest version of the APK file from the same source. Alternatively, you can uninstall the 2go APK and install the app from the Play Store, which will give you automatic updates.</li>
- <li><b>How do I uninstall 2go APK?</b><br>If you want to uninstall 2go APK from your device, you can follow the same steps as you would with any other app. Here are the steps to follow:</p>
- <ol>
- <li>Go to your device's settings and look for an option that says "Apps" or "Applications". Tap on it.</li>
- <li>Find the app you want to uninstall. In this case, find "2go Chat - Chat Rooms & Dating".</li>
- <li>Tap on the app and you will see a screen with some information and options.</li>
- <li>Select "Uninstall" and confirm your choice. You might see a message that says uninstalling this app will delete all its data. Tap on "OK" or "Yes" to proceed.</li>
- <li>Wait for the uninstallation process to finish and you will see a notification that says "App uninstalled successfully".</li>
- </ol></p> 401be4b1e0<br />
- <br />
- <br />
spaces/1phancelerku/anime-remove-background/Download Bitcoin Loophole App and Start Trading Cryptocurrencies Today.md DELETED
@@ -1,144 +0,0 @@
- 
- <h1>How to Download the Bitcoin Loophole App and Start Trading Crypto Like a Pro</h1>
- <p>Are you interested in making money from trading Bitcoin and other cryptocurrencies? Do you want to use a reliable and easy-to-use platform that can help you achieve your financial goals? If yes, then you should consider downloading the Bitcoin Loophole app.</p>
- <h2>download bitcoin loophole app</h2><br /><p><b><b>DOWNLOAD</b> ===> <a href="https://jinyurl.com/2uNJM8">https://jinyurl.com/2uNJM8</a></b></p><br /><br />
- <h3>Why Choose Bitcoin Loophole?</h3>
- <p>Bitcoin Loophole is a trading platform that uses artificial intelligence to place trades on your behalf. It claims to have an 85% win rate with its trades. It also doesn’t charge a registration fee or a commission from your profits. You can start trading with as little as $250 and withdraw your earnings anytime you want.</p>
- <p>Bitcoin Loophole supports 14 digital assets, including Bitcoin, Ethereum, Litecoin, Ripple, and more. You can trade them against various fiat currencies, such as USD, EUR, GBP, and others. You can also choose between automated or manual trading modes, depending on your preference and skill level.</p>
- <p>Bitcoin Loophole is compatible with different devices, such as desktops, laptops, tablets, and smartphones. You can access it from any browser or download the app for more convenience and security. The app is user-friendly and has a simple interface that anyone can navigate.</p>
- <p>How to download bitcoin loophole app for android<br />
- Download bitcoin loophole app apk from official website<br />
- Bitcoin loophole app review: is it legit or scam?<br />
- Download bitcoin loophole app and start trading cryptocurrencies<br />
- Bitcoin loophole app features and benefits<br />
- Download bitcoin loophole app for ios and mac<br />
- Bitcoin loophole app login and signup guide<br />
- Download bitcoin loophole app and earn bitcoin with auto-trader<br />
- Bitcoin loophole app customer support and feedback<br />
- Download bitcoin loophole app for windows and linux<br />
- Bitcoin loophole app manual trader mode and settings<br />
- Download bitcoin loophole app and join the official community<br />
- Bitcoin loophole app testimonials and success stories<br />
- Download bitcoin loophole app and get a free demo account<br />
- Bitcoin loophole app withdrawal and deposit methods<br />
- Download bitcoin loophole app and learn from the experts<br />
- Bitcoin loophole app security and privacy policy<br />
- Download bitcoin loophole app and access the latest market news<br />
- Bitcoin loophole app trading signals and indicators<br />
- Download bitcoin loophole app and enjoy the user-friendly interface<br />
- Bitcoin loophole app compatibility and system requirements<br />
- Download bitcoin loophole app and claim your welcome bonus<br />
- Bitcoin loophole app risk management and trading strategies<br />
- Download bitcoin loophole app and watch the live trading sessions<br />
- Bitcoin loophole app verification and registration process</p>
- <p>In this article, we will show you how to download the Bitcoin Loophole app and start trading crypto like a pro. We will also explain how the app works, what its benefits are, how to avoid scams, and some tips and tricks for successful trading. Let’s get started!</p>
- <h2>What is Bitcoin Loophole?</h2>
- <p>Bitcoin Loophole is a trading platform that uses artificial intelligence to place trades on your behalf. It uses trading bots that analyze the market trends, signals, news, and indicators to make smart decisions in the crypto markets. The app aims to help you make profits by buying low and selling high.</p>
- <p>Bitcoin Loophole was created by a team of experts in finance, technology, and cryptography. They have years of experience in developing software solutions for various industries. They have also been involved in the crypto space since its inception. They have used their knowledge and skills to create a platform that can benefit both beginners and experts in crypto trading.</p>
- <p>Bitcoin Loophole is not a scam or a fake app. It is a legitimate and reputable platform.</p>
- <h2>How Does Bitcoin Loophole Work?</h2>
- <p>Bitcoin Loophole works by using trading bots that execute trades on your behalf. These bots are powered by artificial intelligence and machine learning algorithms that can analyze the crypto markets and make predictions based on various factors. They can also adjust to changing market conditions and learn from their own performance.</p>
- <p>When you sign up for Bitcoin Loophole, you will be assigned a personal broker who will guide you through the process of setting up your account and choosing your trading preferences. You can also contact your broker anytime you need assistance or advice.</p>
- <p>Once you have funded your account, you can activate the automated trading mode and let the app do the work for you. You can also switch to the manual trading mode if you want to have more control over your trades. You can monitor your trades and profits from the app's dashboard and withdraw your money whenever you want.</p>
- <h2>What are the Benefits of Using Bitcoin Loophole?</h2>
- <p>Using Bitcoin Loophole has many benefits, such as:</p>
- <ul>
- <li><b>High accuracy:</b> The app claims to have an 85% win rate with its trades, which means that it can generate consistent profits for its users.</li>
- <li><b>Low risk:</b> The app uses advanced risk management features, such as stop-loss and take-profit orders, to protect your capital and minimize your losses.</li>
- <li><b>No fees:</b> The app doesn't charge any registration fees, commissions, or hidden charges from its users. You can keep all your profits and withdraw them without any hassle.</li>
- <li><b>User-friendly:</b> The app is easy to use and has a simple interface that anyone can navigate. You don't need any prior experience or knowledge to use the app.</li>
- <li><b>Secure:</b> The app uses encryption and verification protocols to ensure the safety and privacy of your data and funds. You can also enable two-factor authentication for extra security.</li>
- <li><b>Flexible:</b> The app supports multiple digital assets and fiat currencies, allowing you to diversify your portfolio and trade according to your preferences. You can also choose between automated or manual trading modes, depending on your comfort level.</li>
- <li><b>Convenient:</b> The app is compatible with different devices, such as desktops, laptops, tablets, and smartphones. You can access it from any browser or download the app for more convenience and security.</li>
- </ul>
- <h3>How Much Money Can You Make with Bitcoin Loophole?</h3>
- <p>The amount of money you can make with Bitcoin Loophole depends on several factors, such as:</p>
- <ul>
- <li><b>Your initial investment:</b> The more money you invest, the more profits you can potentially make. However, you should never invest more than you can afford to lose, as trading involves risks.</li>
- <li><b>Your trading settings:</b> The app allows you to customize your trading settings, such as the amount per trade, the number of trades per day, the risk level, and the assets to trade. These settings can affect your profitability and performance.</li>
- <li><b>The market conditions:</b> The crypto markets are volatile and unpredictable, which means that they can change rapidly and affect your trades. Sometimes, you may experience high profits, while other times, you may face losses. You should always be prepared for both scenarios and follow the market trends closely.</li>
- </ul>
- <p>The app claims that some of its users have made thousands of dollars per day with its platform. However, these results are not typical or guaranteed. You should always do your own research and test the app before investing real money.</p>
- <h2>How to Get Started with Bitcoin Loophole?</h2>
- <p>Getting started with Bitcoin Loophole is easy and fast. You just need to follow these simple steps:</p>
- <ol>
- <li><b>Register:</b> Visit the official website of Bitcoin Loophole and fill out the registration form with your name, email, phone number, and password. You will receive a confirmation email with a link to verify your account.</li>
- <li><b>Deposit:</b> Log in to your account and choose a broker from the list of partners. You will be redirected to the broker's website, where you can make a deposit of at least $250 using various payment methods, such as credit cards, debit cards, e-wallets, or bank transfers.</li>
- <li><b>Trade:</b> Go back to your Bitcoin Loophole dashboard and activate the automated trading mode. The app will start placing trades on your behalf based on your settings and preferences. You can also switch to the manual trading mode if you want to have more control over your trades.</li>
- </ol>
- <p>Congratulations! You are now ready to trade crypto with Bitcoin Loophole!</p>
- <h2>How to Download the Bitcoin Loophole App?</h2>
- <p>If you want to download the Bitcoin Loophole app for more convenience and security, you can do so by following these steps:</p>
- <ul>
- <li><b>For Android devices:</b> Go to the Google Play Store and search for Bitcoin Loophole. Tap on the app icon and click on Install. Wait for the app to download and install on your device. Open the app and log in with your credentials.</li>
- <li><b>For iOS devices:</b> Go to the App Store and search for Bitcoin Loophole. Tap on the app icon and click on Get. Wait for the app to download and install on your device. Open the app and log in with your credentials.</li>
- <li><b>For Windows devices:</b> Go to the Microsoft Store and search for Bitcoin Loophole. Tap on the app icon and click on Get. Wait for the app to download and install on your device. Open the app and log in with your credentials.</li>
- </ul>
- <p>Note: The app may not be available in some countries or regions due to legal restrictions. If you can't find the app in your store, you can still access it from any browser.</p>
- <h2>How to Use the Bitcoin Loophole App?</h2>
- <p>The Bitcoin Loophole app is user-friendly and has a simple interface that anyone can navigate. Here are some of the features and settings that you can use from the app:</p>
- <ul>
- <li><b>Dashboard:</b> This is where you can monitor your trades and profits, as well as access other features of the app.</li>
- <li><b>Trading history:</b> This is where you can view your past trades and results, as well as analyze your performance and strategies.</li>
- <li><b>Open trades:</b> This is where you can see your current trades and their status, as well as modify or cancel them if needed.</li>
- <li><b>Settings:</b> This is where you can customize your trading settings, such as the amount per trade, the number of trades per day, the risk level, and the assets to trade.</li>
- <li><b>Demo account:</b> This is where you can practice trading with virtual money before investing real money.</li>
- <li><b>Withdrawal request:</b> This is where you can request a withdrawal of your profits from the app.</li>
- <li><b>Contact us:</b> This is where you can contact the customer support team of Bitcoin Loophole if you have any questions or issues.</li>
- </ul>
- <p>You can also use the app's tutorials and guides to learn more about how to use it effectively.</p>
- <h2>How to Withdraw Your Profits from Bitcoin Loophole?</h2>
- <p>One of the best features of Bitcoin Loophole is that it allows you to withdraw your profits anytime you want. You don't have to wait for a long time or pay any fees to cash out your earnings. You can withdraw your money in the same way you deposited it, using various payment methods, such as credit cards, debit cards, e-wallets, or bank transfers.</p>
- <p>To withdraw your profits from Bitcoin Loophole, you need to follow these steps:</p>
- <ol>
- <li><b>Log in to your account and go to the withdrawal request section.</b></li>
- <li><b>Fill out the withdrawal request form with your personal and banking details.</b></li>
- <li><b>Submit the withdrawal request and wait for the confirmation email.</b></li>
- <li><b>Receive your money within 24 hours or less.</b></li>
- </ol>
- <p>Note: The minimum withdrawal amount is $100. You may also need to provide some verification documents, such as your ID or proof of address, to comply with the anti-money laundering and KYC policies of the platform.</p>
- <h2>What are the Risks of Using Bitcoin Loophole?</h2>
- <p>While Bitcoin Loophole is a reliable and trustworthy platform, it is not without risks. Trading crypto involves a high level of risk and uncertainty, which means that you can lose money as well as make money. You should always be aware of the potential pitfalls and challenges of trading crypto, such as:</p>
- <ul>
- <li><b>Market volatility:</b> The crypto markets are highly volatile and unpredictable, which means that they can change rapidly and affect your trades. Sometimes, you may experience high profits, while other times, you may face losses. You should always be prepared for both scenarios and follow the market trends closely.</li>
- <li><b>Technical issues:</b> The app relies on technology and software to operate, which means that it can encounter glitches, bugs, or errors that can affect its performance and accuracy. You should always check the app's status and updates regularly and report any issues to the customer support team.</li>
- <li><b>Human error:</b> The app is not perfect and can make mistakes or miss opportunities. You should always monitor your trades and profits and adjust your settings accordingly. You should also use the app's risk management features, such as stop-loss and take-profit orders, to protect your capital and minimize your losses.</li>
- </ul>
- <p>You should never invest more than you can afford to lose, as trading involves risks. You should also do your own research and test the app before investing real money. You should also consult a financial advisor or a professional trader if you have any doubts or questions.</p>
- <h2>How to Avoid Scams and Fake Apps?</h2>
- <p>Unfortunately, there are many scams and fake apps that claim to be Bitcoin Loophole or offer similar services. These scams are designed to lure unsuspecting users into giving away their personal and financial information or paying for a bogus product or service. You should always be careful and vigilant when dealing with online platforms and apps, especially those related to crypto trading.</p>
- <p>To avoid scams and fake apps, you should follow these tips:</p>
- <ul>
- <li><b>Only use the official website and app of Bitcoin Loophole:</b> The official website of Bitcoin Loophole is https://bitcoinloophole.com/. You should never use any other website or app that claims to be Bitcoin Loophole or offers similar services. You should also check the URL of the website or app carefully and make sure it is secure (https) and has no spelling errors or suspicious characters.</li>
- <li><b>Do not trust unsolicited emails or messages:</b> Some scammers may send you emails or messages claiming to be from Bitcoin Loophole or offering you a special deal or bonus. You should never open these emails or messages or click on any links or attachments they contain. They may contain malware or phishing attempts that can compromise your device or account.</li>
- <li><b>Do not share your personal or financial information with anyone:</b> Some scammers may ask you for your personal or financial information, such as your name, email, phone number, password, credit card number, bank account number, etc. You should never share this information with anyone online or offline. Bitcoin Loophole will never ask you for this information unless it is necessary for verification purposes.</li>
- <li><b>Do not pay any fees or charges upfront:</b> Some scammers may ask you to pay a registration fee, a commission fee, a withdrawal fee, a tax fee, or any other fee or charge before you can use their service or access your profits. You should never pay any fees or charges upfront to anyone online or offline. Bitcoin Loophole does not charge any fees or charges from its users. You can keep all your profits and withdraw them without any hassle.</li>
- <li><b>Do your own research and due diligence:</b> Some scammers may use fake testimonials, reviews, or endorsements to promote their service or app. You should not trust these sources blindly and do your own research and due diligence before using any platform or app. You should also check the reputation and credibility of the platform or app from independent and reliable sources, such as online forums, blogs, social media, etc.</li>
- </ul>
- <p>If you encounter any scams or fake apps, you should report them to the authorities and warn others about them. You should also contact the customer support team of Bitcoin Loophole if you have any doubts or issues.</p>
- <h2>Tips and Tricks for Successful Trading with Bitcoin Loophole</h2>
- <p>Trading crypto with Bitcoin Loophole can be a rewarding and enjoyable experience if you follow some tips and tricks, such as:</p>
- <ul>
- <li><b>Start small and grow gradually:</b> You don't need to invest a lot of money to start trading with Bitcoin Loophole. You can start with as little as $250 and increase your investment as you gain more confidence and experience. You should also reinvest some of your profits to grow your capital and earnings.</li>
- <li><b>Set realistic goals and expectations:</b> You should not expect to become a millionaire overnight with Bitcoin Loophole. Trading crypto involves risks and uncertainties, which means that you can lose money as well as make money. You should set realistic goals and expectations based on your budget, skill level, and market conditions.</li>
- <li><b>Learn from your mistakes and successes:</b> You should always analyze your trades and results and learn from your mistakes and successes. You should also use the app's demo account to practice trading with virtual money before investing real money. You should also use the app's tutorials and guides to learn more about how to use it effectively.</li>
- <li><b>Follow the market trends and news:</b> You should always follow the market trends and news closely and adjust your trading settings accordingly. You should also use the app's trading signals and indicators to help you make smart decisions in the crypto markets.</li>
- <li><b>Diversify your portfolio and risk:</b> You should not put all your eggs in one basket when trading crypto with Bitcoin Loophole. You should diversify your portfolio and risk by trading different digital assets and fiat currencies. You should also use the app's risk management features, such as stop-loss and take-profit orders, to protect your capital and minimize your losses.</li>
- </ul>
- <p>By following these tips and tricks, you can improve your chances of success and enjoy trading crypto with Bitcoin Loophole.</p>
- <h2>Frequently Asked Questions about Bitcoin Loophole</h2>
- <p>Here are some of the frequently asked questions and answers about Bitcoin Loophole:</p>
- <table>
- <tr><td><b>Question</b></td><td><b>Answer</b></td></tr>
- <tr><td>Is Bitcoin Loophole a scam or a legit platform?</td><td>Bitcoin Loophole is a legit platform that uses artificial intelligence to place trades on your behalf. It is not a scam or a fake app.</td></tr>
- <tr><td>How much does it cost to use Bitcoin Loophole?</td><td>Bitcoin Loophole is free to use. It doesn't charge any registration fees, commissions, or hidden charges from its users.</td></tr>
- <tr><td>How much money can I make with Bitcoin Loophole?</td><td>The amount of money you can make with Bitcoin Loophole depends on several factors, such as your initial investment, your trading settings, and the market conditions. The app claims that some of its users have made thousands of dollars per day with its platform. However, these results are not typical or guaranteed.</td></tr>
- <tr><td>How do I withdraw my profits from Bitcoin Loophole?</td><td>You can withdraw your profits from Bitcoin Loophole anytime you want. You just need to log in to your account and go to the withdrawal request section. You can withdraw your money in the same way you deposited it, using various payment methods, such as credit cards, debit cards, e-wallets, or bank transfers. The minimum withdrawal amount is $100.</td></tr>
- <tr><td>How can I contact the customer support team of Bitcoin Loophole?</td><td>You can contact the customer support team of Bitcoin Loophole by email, phone, or live chat. They are available 24/7 to assist you with any questions or issues.</td></tr>
- </table>
- <h2>Conclusion</h2>
- <p>Bitcoin Loophole is a trading platform that uses artificial intelligence to place trades on your behalf. It claims to have an 85% win rate with its trades, which means that it can generate consistent profits for its users. It also doesn't charge any fees or commissions from its users. You can start trading with as little as $250 and withdraw your earnings anytime you want.</p>
- <p>Bitcoin Loophole supports 14 digital assets, including Bitcoin, Ethereum, Litecoin, Ripple, and more. You can trade them against various fiat currencies, such as USD, EUR, GBP, and others. You can also choose between automated or manual trading modes, depending on your preference and skill level.</p>
- <p>Bitcoin Loophole is compatible with different devices, such as desktops, laptops, tablets, and smartphones. You can access it from any browser or download the app for more convenience and security. The app is user-friendly and has a simple interface that anyone can navigate.</p>
- <p>In this article, we have shown you how to download the Bitcoin Loophole app and start trading crypto like a pro. We have also explained how the app works, what its benefits are, how to avoid scams, and some tips and tricks for successful trading.</p>
- <p>If you are ready to join the crypto revolution and make money from trading Bitcoin and other cryptocurrencies, you should download the Bitcoin Loophole app today and sign up for free. You don't need any prior experience or knowledge to use the app. You just need to follow the simple steps and let the app do the work for you.</p>
- <p>Don't miss this opportunity to join the Bitcoin Loophole community and start making profits from the comfort of your home. Download the Bitcoin Loophole app now and start trading crypto like a pro!</p>
spaces/232labs/VToonify/vtoonify/model/raft/core/utils/frame_utils.py DELETED
@@ -1,137 +0,0 @@
- import numpy as np
- from PIL import Image
- from os.path import splitext
- import re
- 
- import cv2
- cv2.setNumThreads(0)
- cv2.ocl.setUseOpenCL(False)
- 
- TAG_CHAR = np.array([202021.25], np.float32)
- 
- def readFlow(fn):
-     """ Read .flo file in Middlebury format"""
-     # Code adapted from:
-     # http://stackoverflow.com/questions/28013200/reading-middlebury-flow-files-with-python-bytes-array-numpy
- 
-     # WARNING: this will work on little-endian architectures (eg Intel x86) only!
-     with open(fn, 'rb') as f:
-         magic = np.fromfile(f, np.float32, count=1)
-         if 202021.25 != magic:
-             print('Magic number incorrect. Invalid .flo file')
-             return None
-         else:
-             w = np.fromfile(f, np.int32, count=1)
-             h = np.fromfile(f, np.int32, count=1)
-             data = np.fromfile(f, np.float32, count=2*int(w)*int(h))
-             # Reshape data into 3D array (columns, rows, bands)
-             # The reshape here is for visualization, the original code is (w,h,2)
-             return np.resize(data, (int(h), int(w), 2))
- 
- def readPFM(file):
-     file = open(file, 'rb')
- 
-     color = None
-     width = None
-     height = None
-     scale = None
-     endian = None
- 
-     header = file.readline().rstrip()
-     if header == b'PF':
-         color = True
-     elif header == b'Pf':
-         color = False
-     else:
-         raise Exception('Not a PFM file.')
- 
-     dim_match = re.match(rb'^(\d+)\s(\d+)\s$', file.readline())
-     if dim_match:
-         width, height = map(int, dim_match.groups())
-     else:
-         raise Exception('Malformed PFM header.')
- 
-     scale = float(file.readline().rstrip())
-     if scale < 0: # little-endian
-         endian = '<'
-         scale = -scale
-     else:
-         endian = '>' # big-endian
- 
-     data = np.fromfile(file, endian + 'f')
-     shape = (height, width, 3) if color else (height, width)
- 
-     data = np.reshape(data, shape)
-     data = np.flipud(data)
-     file.close()
-     return data
- 
- def writeFlow(filename,uv,v=None):
-     """ Write optical flow to file.
- 
-     If v is None, uv is assumed to contain both u and v channels,
-     stacked in depth.
-     Original code by Deqing Sun, adapted from Daniel Scharstein.
-     """
-     nBands = 2
- 
-     if v is None:
-         assert(uv.ndim == 3)
-         assert(uv.shape[2] == 2)
-         u = uv[:,:,0]
-         v = uv[:,:,1]
-     else:
-         u = uv
- 
-     assert(u.shape == v.shape)
-     height,width = u.shape
-     f = open(filename,'wb')
-     # write the header
-     f.write(TAG_CHAR)
-     np.array(width).astype(np.int32).tofile(f)
-     np.array(height).astype(np.int32).tofile(f)
-     # arrange into matrix form (u and v interleaved column-wise)
-     tmp = np.zeros((height, width*nBands))
-     tmp[:,np.arange(width)*2] = u
-     tmp[:,np.arange(width)*2 + 1] = v
-     tmp.astype(np.float32).tofile(f)
-     f.close()
- 
- 
- def readFlowKITTI(filename):
-     # KITTI stores flow in a 16-bit PNG: channels are (u, v, valid), with the
-     # flow values scaled by 64 and offset by 2**15.
-     flow = cv2.imread(filename, cv2.IMREAD_ANYDEPTH|cv2.IMREAD_COLOR)
-     flow = flow[:,:,::-1].astype(np.float32)
-     flow, valid = flow[:, :, :2], flow[:, :, 2]
-     flow = (flow - 2**15) / 64.0
-     return flow, valid
- 
- def readDispKITTI(filename):
-     disp = cv2.imread(filename, cv2.IMREAD_ANYDEPTH) / 256.0
-     valid = disp > 0.0
-     flow = np.stack([-disp, np.zeros_like(disp)], -1)
-     return flow, valid
- 
- 
- def writeFlowKITTI(filename, uv):
-     uv = 64.0 * uv + 2**15
-     valid = np.ones([uv.shape[0], uv.shape[1], 1])
-     uv = np.concatenate([uv, valid], axis=-1).astype(np.uint16)
-     cv2.imwrite(filename, uv[..., ::-1])
- 
- 
- def read_gen(file_name, pil=False):
-     ext = splitext(file_name)[-1]
-     if ext == '.png' or ext == '.jpeg' or ext == '.ppm' or ext == '.jpg':
-         return Image.open(file_name)
-     elif ext == '.bin' or ext == '.raw':
-         return np.load(file_name)
-     elif ext == '.flo':
-         return readFlow(file_name).astype(np.float32)
-     elif ext == '.pfm':
-         flow = readPFM(file_name).astype(np.float32)
-         if len(flow.shape) == 2:
-             return flow
-         else:
-             return flow[:, :, :-1]
-     return []
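- 
- 
- if __name__ == "__main__":
-     # Minimal sanity-check sketch of the I/O round trips above. The file
-     # names and the flow size are arbitrary choices for illustration.
-     flow = np.random.randn(240, 320, 2).astype(np.float32)
-     writeFlow("example.flo", flow)
-     loaded = readFlow("example.flo")
-     assert loaded.shape == (240, 320, 2)
-     assert np.allclose(flow, loaded)  # .flo stores raw float32, so exact
-     # The KITTI format quantizes to uint16 (scale 64), so allow ~1/64 px error.
-     writeFlowKITTI("example_kitti.png", flow)
-     kitti_flow, valid = readFlowKITTI("example_kitti.png")
-     assert np.allclose(kitti_flow, flow, atol=1.0 / 32)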
spaces/839871171w/newbingAI/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: Newbing
- emoji: 😅
- colorFrom: green
- colorTo: red
- sdk: docker
- pinned: false
- license: mit
- app_port: 8080
- ---
- 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AFOL/GigaGan/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: GigaGan
- emoji: 🌖
- colorFrom: red
- colorTo: yellow
- sdk: streamlit
- sdk_version: 1.21.0
- app_file: app.py
- pinned: false
- ---
- 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AI4PD/hexviz/hexviz/models.py DELETED
@@ -1,64 +0,0 @@
- from enum import Enum
- 
- import streamlit as st
- import torch
- from tape import ProteinBertModel, TAPETokenizer
- from tokenizers import Tokenizer
- from transformers import (
-     AutoTokenizer,
-     BertModel,
-     BertTokenizer,
-     GPT2LMHeadModel,
-     GPT2TokenizerFast,
-     T5EncoderModel,
-     T5Tokenizer,
- )
- 
- 
- class ModelType(str, Enum):
-     TAPE_BERT = "TapeBert"
-     ZymCTRL = "ZymCTRL"
-     PROT_BERT = "ProtBert"
-     PROT_T5 = "ProtT5"
- 
- 
- class Model:
-     def __init__(self, name, layers, heads):
-         self.name: ModelType = name
-         self.layers: int = layers
-         self.heads: int = heads
- 
- 
- @st.cache
- def get_tape_bert() -> tuple[TAPETokenizer, ProteinBertModel]:
-     tokenizer = TAPETokenizer()
-     model = ProteinBertModel.from_pretrained("bert-base", output_attentions=True)
-     return tokenizer, model
- 
- 
- # Streamlit is not able to hash the tokenizer for ZymCTRL
- # With streamlit 1.19 cache_object should work without this
- @st.cache(hash_funcs={Tokenizer: lambda _: None})
- def get_zymctrl() -> tuple[GPT2TokenizerFast, GPT2LMHeadModel]:
-     device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
-     tokenizer = AutoTokenizer.from_pretrained("nferruz/ZymCTRL")
-     model = GPT2LMHeadModel.from_pretrained("nferruz/ZymCTRL").to(device)
-     return tokenizer, model
- 
- 
- @st.cache(hash_funcs={BertTokenizer: lambda _: None})
- def get_prot_bert() -> tuple[BertTokenizer, BertModel]:
-     device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
-     tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
-     model = BertModel.from_pretrained("Rostlab/prot_bert").to(device)
-     return tokenizer, model
- 
- 
- @st.cache
- def get_prot_t5():
-     device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
-     tokenizer = T5Tokenizer.from_pretrained(
-         "Rostlab/prot_t5_xl_half_uniref50-enc", do_lower_case=False
-     )
-     model = T5EncoderModel.from_pretrained("Rostlab/prot_t5_xl_half_uniref50-enc").to(device)
-     return tokenizer, model
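- 
- 
- if __name__ == "__main__":
-     # Rough sketch of how a loader might be used downstream to pull attention
-     # maps. The sequence is arbitrary; ProtBert expects residues separated by
-     # spaces, and `output_attentions=True` is a standard per-call override.
-     tokenizer, model = get_prot_bert()
-     inputs = tokenizer(" ".join("MKTAYIAKQR"), return_tensors="pt").to(model.device)
-     attentions = model(**inputs, output_attentions=True).attentions
-     print(len(attentions), attentions[0].shape)  # num layers, (batch, heads, seq, seq)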
spaces/AIGC-Audio/AudioGPT/audio_detection/audio_infer/utils/config.py DELETED
@@ -1,94 +0,0 @@
- import numpy as np
- import csv
- 
- sample_rate = 32000
- clip_samples = sample_rate * 10  # Audio clips are 10-second
- 
- # Load label
- with open('./audio_detection/audio_infer/metadata/class_labels_indices.csv', 'r') as f:
-     reader = csv.reader(f, delimiter=',')
-     lines = list(reader)
- 
- labels = []
- ids = []  # Each label has a unique id such as "/m/068hy"
- for i1 in range(1, len(lines)):
-     id = lines[i1][1]
-     label = lines[i1][2]
-     ids.append(id)
-     labels.append(label)
- 
- classes_num = len(labels)
- 
- lb_to_ix = {label : i for i, label in enumerate(labels)}
- ix_to_lb = {i : label for i, label in enumerate(labels)}
- 
- id_to_ix = {id : i for i, id in enumerate(ids)}
- ix_to_id = {i : id for i, id in enumerate(ids)}
- 
- full_samples_per_class = np.array([
-     937432, 16344, 7822, 10271, 2043, 14420, 733, 1511,
-     1258, 424, 1751, 704, 369, 590, 1063, 1375,
-     5026, 743, 853, 1648, 714, 1497, 1251, 2139,
-     1093, 133, 224, 39469, 6423, 407, 1559, 4546,
-     6826, 7464, 2468, 549, 4063, 334, 587, 238,
-     1766, 691, 114, 2153, 236, 209, 421, 740,
-     269, 959, 137, 4192, 485, 1515, 655, 274,
-     69, 157, 1128, 807, 1022, 346, 98, 680,
-     890, 352, 4169, 2061, 1753, 9883, 1339, 708,
-     37857, 18504, 12864, 2475, 2182, 757, 3624, 677,
-     1683, 3583, 444, 1780, 2364, 409, 4060, 3097,
-     3143, 502, 723, 600, 230, 852, 1498, 1865,
-     1879, 2429, 5498, 5430, 2139, 1761, 1051, 831,
-     2401, 2258, 1672, 1711, 987, 646, 794, 25061,
-     5792, 4256, 96, 8126, 2740, 752, 513, 554,
-     106, 254, 1592, 556, 331, 615, 2841, 737,
-     265, 1349, 358, 1731, 1115, 295, 1070, 972,
-     174, 937780, 112337, 42509, 49200, 11415, 6092, 13851,
-     2665, 1678, 13344, 2329, 1415, 2244, 1099, 5024,
-     9872, 10948, 4409, 2732, 1211, 1289, 4807, 5136,
-     1867, 16134, 14519, 3086, 19261, 6499, 4273, 2790,
-     8820, 1228, 1575, 4420, 3685, 2019, 664, 324,
-     513, 411, 436, 2997, 5162, 3806, 1389, 899,
-     8088, 7004, 1105, 3633, 2621, 9753, 1082, 26854,
-     3415, 4991, 2129, 5546, 4489, 2850, 1977, 1908,
-     1719, 1106, 1049, 152, 136, 802, 488, 592,
-     2081, 2712, 1665, 1128, 250, 544, 789, 2715,
-     8063, 7056, 2267, 8034, 6092, 3815, 1833, 3277,
-     8813, 2111, 4662, 2678, 2954, 5227, 1472, 2591,
-     3714, 1974, 1795, 4680, 3751, 6585, 2109, 36617,
-     6083, 16264, 17351, 3449, 5034, 3931, 2599, 4134,
-     3892, 2334, 2211, 4516, 2766, 2862, 3422, 1788,
-     2544, 2403, 2892, 4042, 3460, 1516, 1972, 1563,
-     1579, 2776, 1647, 4535, 3921, 1261, 6074, 2922,
-     3068, 1948, 4407, 712, 1294, 1019, 1572, 3764,
-     5218, 975, 1539, 6376, 1606, 6091, 1138, 1169,
-     7925, 3136, 1108, 2677, 2680, 1383, 3144, 2653,
-     1986, 1800, 1308, 1344, 122231, 12977, 2552, 2678,
-     7824, 768, 8587, 39503, 3474, 661, 430, 193,
-     1405, 1442, 3588, 6280, 10515, 785, 710, 305,
-     206, 4990, 5329, 3398, 1771, 3022, 6907, 1523,
-     8588, 12203, 666, 2113, 7916, 434, 1636, 5185,
-     1062, 664, 952, 3490, 2811, 2749, 2848, 15555,
-     363, 117, 1494, 1647, 5886, 4021, 633, 1013,
-     5951, 11343, 2324, 243, 372, 943, 734, 242,
-     3161, 122, 127, 201, 1654, 768, 134, 1467,
-     642, 1148, 2156, 1368, 1176, 302, 1909, 61,
-     223, 1812, 287, 422, 311, 228, 748, 230,
-     1876, 539, 1814, 737, 689, 1140, 591, 943,
-     353, 289, 198, 490, 7938, 1841, 850, 457,
-     814, 146, 551, 728, 1627, 620, 648, 1621,
-     2731, 535, 88, 1736, 736, 328, 293, 3170,
-     344, 384, 7640, 433, 215, 715, 626, 128,
-     3059, 1833, 2069, 3732, 1640, 1508, 836, 567,
-     2837, 1151, 2068, 695, 1494, 3173, 364, 88,
-     188, 740, 677, 273, 1533, 821, 1091, 293,
-     647, 318, 1202, 328, 532, 2847, 526, 721,
-     370, 258, 956, 1269, 1641, 339, 1322, 4485,
-     286, 1874, 277, 757, 1393, 1330, 380, 146,
-     377, 394, 318, 339, 1477, 1886, 101, 1435,
-     284, 1425, 686, 621, 221, 117, 87, 1340,
-     201, 1243, 1222, 651, 1899, 421, 712, 1016,
-     1279, 124, 351, 258, 7043, 368, 666, 162,
-     7664, 137, 70159, 26179, 6321, 32236, 33320, 771,
-     1169, 269, 1103, 444, 364, 2710, 121, 751,
-     1609, 855, 1141, 2287, 1940, 3943, 289])
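- 
- 
- if __name__ == "__main__":
-     # Quick consistency-check sketch of the lookup tables built above. The
-     # label "Speech" is an assumption here (it is the first class in the
-     # standard AudioSet class_labels_indices.csv).
-     assert len(full_samples_per_class) == classes_num
-     ix = lb_to_ix["Speech"]
-     assert ix_to_lb[ix] == "Speech"
-     assert id_to_ix[ix_to_id[ix]] == ix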
spaces/AbandonedMuse/UnlimitedMusicGen/web-ui.bat DELETED
@@ -1 +0,0 @@
- py -m app
spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/executor/tool_using.py DELETED
@@ -1,315 +0,0 @@
- import json
- import ast
- import openai
- from string import Template
- from colorama import Fore
- from aiohttp import ClientSession
- from copy import deepcopy
- from typing import List
- 
- from agentverse.agents import ExecutorAgent
- from agentverse.message import ExecutorMessage, SolverMessage
- from agentverse.logging import logger
- 
- from . import BaseExecutor, executor_registry
- import asyncio
- 
- 
- url = "http://127.0.0.1:8080"
- # url = "http://8.217.97.110:8080"
- 
- SUMMARIZE_PROMPT = """Here is the text gathered from a webpage, and a question you need to answer from the webpage.
- -- Webpage --
- ${webpage}
- -- Question --
- ${question}
- 
- Now summarize the webpage to answer the question. If the question cannot be answered from the webpage, return a summary of the webpage."""
- 
- 
- @executor_registry.register("tool-using")
- class ToolUsingExecutor(BaseExecutor):
-     num_agents: int = 3
-     max_tool_call_times: int = 10
-     tools: List[dict] = []
-     tool_names: List[str] = []
-     tool_config: str = None
-     cookies: dict = {}
-     tool_retrieval: bool = False
-     real_execution_agents: dict = {}
-     agent_names: List[str] = []
-     # tool_description: str
- 
-     def __init__(self, *args, **kwargs):
-         assert kwargs.get("tool_config", None) is not None
-         with open(kwargs.get("tool_config"), "r") as f:
-             tools_dict = json.load(f)
-         tools = tools_dict["tools_json"]
-         tool_names = [t["name"] for t in tools]
- 
-         # For each tool, we manually add a "thought" argument to achieve
-         # chain-of-thought in OpenAI's function call.
-         for t in tools:
-             properties = t["parameters"]["properties"]
-             thought = {
-                 "thought": {
-                     "type": "string",
-                     "description": "Your internal reasoning and thoughts on the task, and how you plan to solve it based on the current attempts.",
-                 }
-             }
-             thought.update(properties)
-             t["parameters"]["properties"] = thought
-             t["parameters"]["required"].insert(0, "thought")
-         super().__init__(
-             tools=tools,
-             tool_names=tool_names,
-             # tool_description=tool_description,
-             *args,
-             **kwargs,
-         )
- 
-     async def astep(
-         self,
-         agent: ExecutorAgent,
-         task_description: str,
-         plans: List[SolverMessage],
-         *args,
-         **kwargs,
-     ):
-         plan_this_turn = {}
-         agent_name_this_turn = []
-         for i in range(len(plans)):
-             name = plans[i].content.split("-")[0].strip()
-             if name not in self.real_execution_agents:
-                 self.real_execution_agents[name] = deepcopy(agent)
-                 self.real_execution_agents[name].name = name
-                 self.agent_names.append(name)
-             plan_this_turn[name] = plans[i].content.split("-")[1].strip()
-             agent_name_this_turn.append(name)
-         # agents = [deepcopy(agent) for _ in range(len(plans))]
- 
-         if self.tool_retrieval:
-             # We retrieve 5 related tools for each agent
-             tools_and_cookies = await asyncio.gather(
-                 *[
-                     self.retrieve_tools(plan_this_turn[name], self.tools)
-                     for name in agent_name_this_turn
-                 ]
-             )
-             tools = {
-                 name: t[0] for name, t in zip(agent_name_this_turn, tools_and_cookies)
-             }
-             cookies = {
-                 name: t[1] for name, t in zip(agent_name_this_turn, tools_and_cookies)
-             }
-             self.update_cookies(cookies)
-         else:
-             # We just use the tools that are provided in the config file
-             tools = {name: self.tools for name in agent_name_this_turn}
- 
-         # Record the names of agents that have finished their tasks
-         # so that they will not be called again
-         finished_agent_names = set()
-         # result = ["" for _ in range(len(plan_this_turn))]
-         result = {name: "" for name in agent_name_this_turn}
-         for current_turn in range(self.max_tool_call_times):
-             if len(finished_agent_names) == len(agent_name_this_turn):
-                 # All agents have finished their tasks. Break the loop.
-                 break
- 
-             # Filter out agents that have finished and gather tool actions for the rest
-             tool_calls = []
-             active_agents_names = [
-                 name
-                 for name in agent_name_this_turn
-                 if name not in finished_agent_names
-             ]
-             for name in active_agents_names:
-                 if current_turn == self.max_tool_call_times - 1:
-                     tool = [t for t in tools[name] if t["name"] == "submit_task"]
-                 else:
-                     tool = tools[name]
-                 tool_calls.append(
-                     self.real_execution_agents[name].astep(
-                         task_description,
-                         plan_this_turn[name],
-                         tool,
-                         current_turn=current_turn + 1,
-                     )
-                 )
-             # Use asyncio.gather to run astep concurrently
-             tool_call_decisions = await asyncio.gather(*tool_calls)
-             for name, tool_call_result in zip(active_agents_names, tool_call_decisions):
-                 self.real_execution_agents[name].add_message_to_memory(
-                     [tool_call_result]
-                 )
- 
-             # Actually call the tool and get the observation
-             tool_responses = await asyncio.gather(
-                 *[
-                     ToolUsingExecutor.call_tool(
-                         tool.tool_name,
-                         tool.tool_input,
-                         self.cookies.get(name, None),
-                     )
-                     for name, tool in zip(active_agents_names, tool_call_decisions)
-                 ]
-             )
-             # Update each agent's memory and check if they have finished
-             cookies = {}
-             for name, response in zip(active_agents_names, tool_responses):
-                 observation = response["observation"]
-                 is_finish = response["is_finish"]
-                 cookies[name] = response["cookies"]
-                 self.real_execution_agents[name].add_message_to_memory([observation])
-                 logger.info(
-                     f"\nTool: {observation.tool_name}\nTool Input: {observation.tool_input}\nObservation: {observation.content}",
-                     name,
-                     Fore.YELLOW,
-                 )
-                 if is_finish:
-                     finished_agent_names.add(name)
-                     result[name] = observation.content
-             self.update_cookies(cookies)
- 
-         message_result = []
-         for name, conclusion in result.items():
-             if conclusion != "":
-                 message_result.append(
-                     ExecutorMessage(
-                         content=f"[{name}]: My execution result:\n{conclusion}",
-                         sender=name,
-                     )
-                 )
-         return message_result
- 
-     def update_cookies(self, cookies: dict):
-         for name, cookie in cookies.items():
-             self.cookies[name] = cookie
- 
-     @classmethod
-     async def retrieve_tools(
-         cls, plan: SolverMessage, curr_tools: List = [], cookies=None
-     ):
-         async with ClientSession(cookies=cookies) as session:
-             if cookies is None:
-                 async with session.post(f"{url}/get_cookie", timeout=30) as response:
-                     cookies = response.cookies
-                     session.cookie_jar.update_cookies(cookies)
-                     await response.text()
-                     # Sometimes the toolserver's docker container is not ready yet
-                     # So we need to wait for a while
-                     await asyncio.sleep(10)
-             async with session.post(
-                 f"{url}/retrieving_tools", json={"question": plan.content, "top_k": 5}
-             ) as response:
-                 retrieved_tools = await response.json()
-                 retrieved_tools = ast.literal_eval(retrieved_tools)
-             tools = deepcopy(curr_tools)
-             existed_tool_names = set([t["name"] for t in tools])
-             # Add the retrieved tools into the final tools
-             for tool in retrieved_tools["tools_json"]:
-                 if tool["name"] not in existed_tool_names:
-                     existed_tool_names.add(tool["name"])
-                     tools.append(tool)
-             return tools, cookies
- 
-     @classmethod
-     async def call_tool(cls, command: str, arguments: dict, cookies=None):
-         async def _summarize_webpage(webpage, question):
-             summarize_prompt = Template(SUMMARIZE_PROMPT).safe_substitute(
-                 webpage=webpage, question=question
-             )
-             for _ in range(3):
-                 try:
-                     response = await openai.ChatCompletion.acreate(
-                         messages=[{"role": "user", "content": summarize_prompt}],
-                         model="gpt-3.5-turbo-16k",
-                     )
-                 except Exception:
-                     continue
-                 return response["choices"][0]["message"]["content"]
-             # All retries failed: return an explicit error string instead of None.
-             return "Failed to summarize the webpage."
- 
-         if command == "submit_task":
-             return {
-                 "observation": ExecutorMessage(
-                     content=f"Task Status: {arguments['status']}\nConclusion: {arguments['conclusion']}",
-                     sender="function",
-                     tool_name=command,
-                     tool_input=arguments,
-                 ),
-                 "is_finish": True,
-                 "cookies": cookies,
-             }
-         if command == "":
-             return {
-                 "observation": ExecutorMessage(
-                     content="The function calling format is incorrect.",
-                     sender="function",
-                     tool_name=command,
-                     tool_input=arguments,
-                 ),
-                 "is_finish": False,
-                 "cookies": cookies,
-             }
- 
-         for _ in range(3):
-             try:
-                 async with ClientSession(cookies=cookies) as session:
-                     if cookies is None:
-                         async with session.post(
-                             f"{url}/get_cookie", timeout=30
-                         ) as response:
-                             cookies = response.cookies
-                             session.cookie_jar.update_cookies(cookies)
-                             await response.text()
-                             # Sometimes the toolserver's docker container is not ready yet
-                             # So we need to wait for a while
-                             await asyncio.sleep(10)
- 
-                     payload_arguments = deepcopy(arguments)
-                     if "thought" in payload_arguments:
-                         del payload_arguments["thought"]
-                     payload = {
-                         "tool_name": command,
-                         "arguments": payload_arguments,
-                     }
-                     # async with ClientSession() as session:
-                     async with session.post(
-                         f"{url}/execute_tool",
-                         json=payload,
-                         headers={
-                             "toolbench_key": "p5ZASSLBO0EknAQLE5ecNZ7kq5i1YfY9eoWUXNxL3TM6lXwdXs"
-                         },
-                         timeout=30,
-                     ) as response:
-                         content = await response.text()
-                         if command == "WebEnv_browse_website":
-                             content = await _summarize_webpage(
-                                 content, arguments["question"]
-                             )
- 
-                     message = ExecutorMessage(
-                         content=content,
-                         sender="function",
-                         tool_name=command,
-                         tool_input=arguments,
-                     )
-                     # async with session.post(
-                     #     f"{url}/release_session", timeout=30
-                     # ) as response:
-                     #     await response.text()
-                 break
-             except Exception as e:
-                 message = ExecutorMessage(
-                     content="Failed to call the tool. Exception: " + str(e),
-                     sender="function",
-                     tool_name=command,
-                     tool_input=arguments,
-                 )
-                 continue
-         return {"observation": message, "is_finish": False, "cookies": cookies}
- 
-     def broadcast_messages(self, agents, messages) -> None:
-         for agent in agents:
-             agent.add_message_to_memory(messages)
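- 
- 
- if __name__ == "__main__":
-     # Illustration of the "thought" injection performed in __init__, applied
-     # to a hypothetical tool schema (real schemas come from `tool_config`).
-     tool = {
-         "name": "web_search",
-         "parameters": {
-             "type": "object",
-             "properties": {"query": {"type": "string", "description": "Search query."}},
-             "required": ["query"],
-         },
-     }
-     thought = {"thought": {"type": "string", "description": "Internal reasoning."}}
-     thought.update(tool["parameters"]["properties"])
-     tool["parameters"]["properties"] = thought
-     tool["parameters"]["required"].insert(0, "thought")
-     # The model must now emit `thought` before the real arguments, which
-     # approximates chain-of-thought inside OpenAI function calling.
-     print(json.dumps(tool, indent=2))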
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/chart/Factory.d.ts DELETED
@@ -1,5 +0,0 @@
- import Chart from './Chart';
- 
- export default function ChartFactory(
-     config?: Chart.IConfig
- ): Chart;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridtable/Factory.d.ts DELETED
@@ -1,5 +0,0 @@
- import GridTable from './GridTable';
- 
- export default function (
-     config?: GridTable.IConfig
- ): GridTable;
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/instructpix2pix.md DELETED
@@ -1,215 +0,0 @@
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
- 
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
- 
- http://www.apache.org/licenses/LICENSE-2.0
- 
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
- specific language governing permissions and limitations under the License.
- -->
- 
- # InstructPix2Pix
- 
- [InstructPix2Pix](https://arxiv.org/abs/2211.09800) is a method to fine-tune text-conditioned diffusion models such that they can follow an edit instruction for an input image. Models fine-tuned using this method take the following as inputs:
- 
- <p align="center">
-     <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/edit-instruction.png" alt="instructpix2pix-inputs" width=600/>
- </p>
- 
- The output is an "edited" image that reflects the edit instruction applied on the input image:
- 
- <p align="center">
-     <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/output-gs%407-igs%401-steps%4050.png" alt="instructpix2pix-output" width=600/>
- </p>
- 
- The `train_instruct_pix2pix.py` script (you can find it [here](https://github.com/huggingface/diffusers/blob/main/examples/instruct_pix2pix/train_instruct_pix2pix.py)) shows how to implement the training procedure and adapt it for Stable Diffusion.
- 
- ***Disclaimer: Even though `train_instruct_pix2pix.py` implements the InstructPix2Pix
- training procedure while being faithful to the [original implementation](https://github.com/timothybrooks/instruct-pix2pix), we have only tested it on a [small-scale dataset](https://huggingface.co/datasets/fusing/instructpix2pix-1000-samples). This can impact the end results. For better results, we recommend longer training runs with a larger dataset. [Here](https://huggingface.co/datasets/timbrooks/instructpix2pix-clip-filtered) you can find a large dataset for InstructPix2Pix training.***
- 
- ## Running locally with PyTorch
- 
- ### Installing the dependencies
- 
- Before running the scripts, make sure to install the library's training dependencies:
- 
- **Important**
- 
- To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date, as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
- ```bash
- git clone https://github.com/huggingface/diffusers
- cd diffusers
- pip install -e .
- ```
- 
- Then `cd` into the example folder
- ```bash
- cd examples/instruct_pix2pix
- ```
- 
- Now run
- ```bash
- pip install -r requirements.txt
- ```
- 
- And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:
- 
- ```bash
- accelerate config
- ```
- 
- Or for a default accelerate configuration without answering questions about your environment
- 
- ```bash
- accelerate config default
- ```
- 
- Or if your environment doesn't support an interactive shell, e.g. a notebook
- 
- ```python
- from accelerate.utils import write_basic_config
- 
- write_basic_config()
- ```
- 
- ### Toy example
- 
- As mentioned before, we'll use a [small toy dataset](https://huggingface.co/datasets/fusing/instructpix2pix-1000-samples) for training. The dataset
- is a smaller version of the [original dataset](https://huggingface.co/datasets/timbrooks/instructpix2pix-clip-filtered) used in the InstructPix2Pix paper. To use your own dataset, take a look at the [Create a dataset for training](create_dataset) guide.
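- 
- As a quick way to see the schema your own dataset needs to match, you can inspect the toy dataset (the column names shown below are what this script expects; treat them as a reference rather than a guarantee):
- 
- ```python
- from datasets import load_dataset
- 
- ds = load_dataset("fusing/instructpix2pix-1000-samples", split="train")
- print(ds.column_names)  # expected: ['input_image', 'edit_prompt', 'edited_image']
- ```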
- 
- Specify the `MODEL_NAME` environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the [`pretrained_model_name_or_path`](https://huggingface.co/docs/diffusers/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path) argument. You'll also need to specify the dataset name in `DATASET_ID`:
- 
- ```bash
- export MODEL_NAME="runwayml/stable-diffusion-v1-5"
- export DATASET_ID="fusing/instructpix2pix-1000-samples"
- ```
- 
- Now, we can launch training. The script saves all the components (`feature_extractor`, `scheduler`, `text_encoder`, `unet`, etc.) in a subfolder in your repository.
- 
- ```bash
- accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \
-     --pretrained_model_name_or_path=$MODEL_NAME \
-     --dataset_name=$DATASET_ID \
-     --enable_xformers_memory_efficient_attention \
-     --resolution=256 --random_flip \
-     --train_batch_size=4 --gradient_accumulation_steps=4 --gradient_checkpointing \
-     --max_train_steps=15000 \
-     --checkpointing_steps=5000 --checkpoints_total_limit=1 \
-     --learning_rate=5e-05 --max_grad_norm=1 --lr_warmup_steps=0 \
-     --conditioning_dropout_prob=0.05 \
-     --mixed_precision=fp16 \
-     --seed=42 \
-     --push_to_hub
- ```
- 
- Additionally, we support performing validation inference to monitor training progress
- with Weights and Biases. You can enable this feature with `report_to="wandb"`:
- 
- ```bash
- accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \
-     --pretrained_model_name_or_path=$MODEL_NAME \
-     --dataset_name=$DATASET_ID \
-     --enable_xformers_memory_efficient_attention \
-     --resolution=256 --random_flip \
-     --train_batch_size=4 --gradient_accumulation_steps=4 --gradient_checkpointing \
-     --max_train_steps=15000 \
-     --checkpointing_steps=5000 --checkpoints_total_limit=1 \
-     --learning_rate=5e-05 --max_grad_norm=1 --lr_warmup_steps=0 \
-     --conditioning_dropout_prob=0.05 \
-     --mixed_precision=fp16 \
-     --val_image_url="https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" \
-     --validation_prompt="make the mountains snowy" \
-     --seed=42 \
-     --report_to=wandb \
-     --push_to_hub
- ```
- 
- We recommend this type of validation as it can be useful for model debugging. Note that you need `wandb` installed to use this. You can install `wandb` by running `pip install wandb`.
- 
- [Here](https://wandb.ai/sayakpaul/instruct-pix2pix/runs/ctr3kovq), you can find an example training run that includes some validation samples and the training hyperparameters.
- 
- ***Note: In the original paper, the authors observed that even when the model is trained with an image resolution of 256x256, it generalizes well to bigger resolutions such as 512x512. This is likely because of the larger dataset they used during training.***
- 
- ## Training with multiple GPUs
- 
- `accelerate` allows for seamless multi-GPU training. Follow the instructions [here](https://huggingface.co/docs/accelerate/basic_tutorials/launch)
- for running distributed training with `accelerate`. Here is an example command:
- 
- ```bash
- accelerate launch --mixed_precision="fp16" --multi_gpu train_instruct_pix2pix.py \
-     --pretrained_model_name_or_path=runwayml/stable-diffusion-v1-5 \
-     --dataset_name=sayakpaul/instructpix2pix-1000-samples \
-     --use_ema \
-     --enable_xformers_memory_efficient_attention \
-     --resolution=512 --random_flip \
-     --train_batch_size=4 --gradient_accumulation_steps=4 --gradient_checkpointing \
-     --max_train_steps=15000 \
-     --checkpointing_steps=5000 --checkpoints_total_limit=1 \
-     --learning_rate=5e-05 --lr_warmup_steps=0 \
-     --conditioning_dropout_prob=0.05 \
-     --mixed_precision=fp16 \
-     --seed=42 \
-     --push_to_hub
- ```
- 
- ## Inference
- 
- Once training is complete, we can perform inference:
- 
- ```python
- import PIL.Image
- import PIL.ImageOps
- import requests
- import torch
- from diffusers import StableDiffusionInstructPix2PixPipeline
- 
- model_id = "your_model_id"  # <- replace this
- pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
- generator = torch.Generator("cuda").manual_seed(0)
- 
- url = "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/test_pix2pix_4.png"
- 
- 
- def download_image(url):
-     image = PIL.Image.open(requests.get(url, stream=True).raw)
-     image = PIL.ImageOps.exif_transpose(image)
-     image = image.convert("RGB")
-     return image
- 
- 
- image = download_image(url)
- prompt = "wipe out the lake"
- num_inference_steps = 20
- image_guidance_scale = 1.5
- guidance_scale = 10
- 
- edited_image = pipe(
-     prompt,
-     image=image,
-     num_inference_steps=num_inference_steps,
-     image_guidance_scale=image_guidance_scale,
-     guidance_scale=guidance_scale,
-     generator=generator,
- ).images[0]
- edited_image.save("edited_image.png")
- ```
- 
- An example model repo obtained using this training script can be found
- here - [sayakpaul/instruct-pix2pix](https://huggingface.co/sayakpaul/instruct-pix2pix).
- 
- We encourage you to play with the following three parameters to control
- speed and quality during inference:
- 
- * `num_inference_steps`
- * `image_guidance_scale`
- * `guidance_scale`
- 
- Particularly, `image_guidance_scale` and `guidance_scale` can have a profound impact
- on the generated ("edited") image (see [here](https://twitter.com/RisingSayak/status/1628392199196151808?s=20) for an example).
- 
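- For instance, a small sweep over the two guidance values (reusing `pipe` and `image` from the inference snippet above; the value grids below are only illustrative) might look like:
- 
- ```python
- for igs in [1.0, 1.5, 2.0]:
-     for gs in [5.0, 7.5, 10.0]:
-         # Re-seed per run so every setting starts from identical noise.
-         generator = torch.Generator("cuda").manual_seed(0)
-         edited = pipe(
-             "wipe out the lake",
-             image=image,
-             num_inference_steps=20,
-             image_guidance_scale=igs,
-             guidance_scale=gs,
-             generator=generator,
-         ).images[0]
-         edited.save(f"edited_igs{igs}_gs{gs}.png")
- ```
- 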
- If you're looking for some interesting ways to use the InstructPix2Pix training methodology, we welcome you to check out this blog post: [Instruction-tuning Stable Diffusion with InstructPix2Pix](https://huggingface.co/blog/instruction-tuning-sd).
- 
- ## Stable Diffusion XL
- 
- We support fine-tuning of the UNet shipped in [Stable Diffusion XL](https://huggingface.co/papers/2307.01952) via the `train_instruct_pix2pix_sdxl.py` script. Please refer to the docs [here](https://github.com/huggingface/diffusers/blob/main/examples/instruct_pix2pix/README_sdxl.md).
spaces/Andy1621/uniformer_image_detection/configs/cityscapes/README.md DELETED
@@ -1,33 +0,0 @@
- # Cityscapes Dataset
-
- [DATASET]
-
- ```
- @inproceedings{Cordts2016Cityscapes,
-   title={The Cityscapes Dataset for Semantic Urban Scene Understanding},
-   author={Cordts, Marius and Omran, Mohamed and Ramos, Sebastian and Rehfeld, Timo and Enzweiler, Markus and Benenson, Rodrigo and Franke, Uwe and Roth, Stefan and Schiele, Bernt},
-   booktitle={Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
-   year={2016}
- }
- ```
-
- ## Common settings
-
- - All baselines were trained using 8 GPUs with a batch size of 8 (1 image per GPU), using the [linear scaling rule](https://arxiv.org/abs/1706.02677) to scale the learning rate.
- - All models were trained on `cityscapes_train` and tested on `cityscapes_val`.
- - The 1x training schedule indicates 64 epochs, which corresponds to slightly fewer than the 24k iterations reported in the original schedule from the [Mask R-CNN paper](https://arxiv.org/abs/1703.06870).
- - COCO pre-trained weights are used for initialization.
- - A conversion [script](../../tools/dataset_converters/cityscapes.py) is provided to convert Cityscapes into COCO format. Please refer to [install.md](../../docs/1_exist_data_model.md#prepare-datasets) for details.
- - `CityscapesDataset` implements three evaluation methods. `bbox` and `segm` are the standard COCO bbox/mask AP; `cityscapes` is the official Cityscapes evaluation, which may be slightly higher than the COCO numbers.
-
- ### Faster R-CNN
-
- | Backbone | Style | Lr schd | Scale | Mem (GB) | Inf time (fps) | box AP | Config | Download |
- | :-------------: | :-----: | :-----: | :---: | :------: | :------------: | :----: | :------: | :--------: |
- | R-50-FPN | pytorch | 1x | 800-1024 | 5.2 | - | 40.3 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cityscapes/faster_rcnn_r50_fpn_1x_cityscapes.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/cityscapes/faster_rcnn_r50_fpn_1x_cityscapes_20200502-829424c0.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/cityscapes/faster_rcnn_r50_fpn_1x_cityscapes_20200502_114915.log.json) |
-
- ### Mask R-CNN
-
- | Backbone | Style | Lr schd | Scale | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download |
- | :-------------: | :-----: | :-----: | :------: | :------: | :------------: | :----: | :-----: | :------: | :------: |
- | R-50-FPN | pytorch | 1x | 800-1024 | 5.3 | - | 40.9 | 36.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes/mask_rcnn_r50_fpn_1x_cityscapes_20201211_133733-d2858245.pth) &#124; [log](https://download.openmmlab.com/mmdetection/v2.0/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes/mask_rcnn_r50_fpn_1x_cityscapes_20201211_133733.log.json) |
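
The linear scaling rule referenced above is easy to check by hand: the learning rate is scaled in proportion to the total batch size. A minimal sketch (the base values of `0.02` at batch size 16 are the usual mmdetection defaults, assumed here for illustration rather than stated in this README):

```python
def scaled_lr(base_lr: float, base_batch_size: int, batch_size: int) -> float:
    """Linear scaling rule: the LR grows proportionally with total batch size."""
    return base_lr * batch_size / base_batch_size

# For the batch size of 8 used by these Cityscapes baselines:
print(scaled_lr(0.02, 16, 8))  # -> 0.01
```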
spaces/Andy1621/uniformer_image_detection/configs/fpg/mask_rcnn_r50_fpn_crop640_50e_coco.py DELETED
@@ -1,74 +0,0 @@
- _base_ = [
-     '../_base_/models/mask_rcnn_r50_fpn.py',
-     '../_base_/datasets/coco_instance.py',
-     '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
- ]
- norm_cfg = dict(type='BN', requires_grad=True)
- model = dict(
-     backbone=dict(norm_cfg=norm_cfg, norm_eval=False),
-     neck=dict(
-         type='FPN',
-         in_channels=[256, 512, 1024, 2048],
-         out_channels=256,
-         norm_cfg=norm_cfg,
-         num_outs=5),
-     roi_head=dict(
-         bbox_head=dict(norm_cfg=norm_cfg), mask_head=dict(norm_cfg=norm_cfg)))
- dataset_type = 'CocoDataset'
- data_root = 'data/coco/'
- img_norm_cfg = dict(
-     mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
- train_pipeline = [
-     dict(type='LoadImageFromFile'),
-     dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
-     dict(
-         type='Resize',
-         img_scale=(640, 640),
-         ratio_range=(0.8, 1.2),
-         keep_ratio=True),
-     dict(type='RandomCrop', crop_size=(640, 640)),
-     dict(type='RandomFlip', flip_ratio=0.5),
-     dict(type='Normalize', **img_norm_cfg),
-     dict(type='Pad', size=(640, 640)),
-     dict(type='DefaultFormatBundle'),
-     dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
- ]
- test_pipeline = [
-     dict(type='LoadImageFromFile'),
-     dict(
-         type='MultiScaleFlipAug',
-         img_scale=(640, 640),
-         flip=False,
-         transforms=[
-             dict(type='Resize', keep_ratio=True),
-             dict(type='RandomFlip'),
-             dict(type='Normalize', **img_norm_cfg),
-             dict(type='Pad', size_divisor=64),
-             dict(type='ImageToTensor', keys=['img']),
-             dict(type='Collect', keys=['img']),
-         ])
- ]
- data = dict(
-     samples_per_gpu=8,
-     workers_per_gpu=4,
-     train=dict(pipeline=train_pipeline),
-     val=dict(pipeline=test_pipeline),
-     test=dict(pipeline=test_pipeline))
- # optimizer
- optimizer = dict(
-     type='SGD',
-     lr=0.08,
-     momentum=0.9,
-     weight_decay=0.0001,
-     paramwise_cfg=dict(norm_decay_mult=0, bypass_duplicate=True))
- optimizer_config = dict(grad_clip=None)
- # learning policy
- lr_config = dict(
-     policy='step',
-     warmup='linear',
-     warmup_iters=1000,
-     warmup_ratio=0.1,
-     step=[30, 40])
- # runtime settings
- runner = dict(max_epochs=50)
- evaluation = dict(interval=2)
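
As a rough illustration of what the `lr_config` above produces, here is a hedged sketch of the learning rate as a function of epoch and iteration: a linear ramp from `warmup_ratio * lr` up to `lr` over the first 1000 iterations, then a 10x drop at epochs 30 and 40. mmcv's actual step LR hook handles more details; this is only an approximation of the schedule.

```python
def lr_at(epoch: int, it: int, base_lr: float = 0.08, warmup_iters: int = 1000,
          warmup_ratio: float = 0.1, steps: tuple = (30, 40), gamma: float = 0.1) -> float:
    """Approximate the step policy with linear warmup from the config above."""
    decayed = base_lr * gamma ** sum(epoch >= s for s in steps)  # step decay
    if it < warmup_iters:  # linear ramp from warmup_ratio * lr up to lr
        return decayed * (warmup_ratio + (1 - warmup_ratio) * it / warmup_iters)
    return decayed

print(lr_at(0, 0))       # ~0.008 at the very first iteration
print(lr_at(10, 5000))   # 0.08 after warmup, before any step
print(lr_at(45, 50000))  # ~0.0008 after both steps
```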
spaces/Andy1621/uniformer_image_detection/configs/resnest/cascade_rcnn_s101_fpn_syncbn-backbone+head_mstrain-range_1x_coco.py DELETED
@@ -1,4 +0,0 @@
- _base_ = './cascade_rcnn_s50_fpn_syncbn-backbone+head_mstrain-range_1x_coco.py'
- model = dict(
-     pretrained='open-mmlab://resnest101',
-     backbone=dict(stem_channels=128, depth=101))
spaces/Andy1621/uniformer_image_detection/tools/test.py DELETED
@@ -1,220 +0,0 @@
- import argparse
- import os
- import warnings
-
- import mmcv
- import torch
- from mmcv import Config, DictAction
- from mmcv.cnn import fuse_conv_bn
- from mmcv.parallel import MMDataParallel, MMDistributedDataParallel
- from mmcv.runner import (get_dist_info, init_dist, load_checkpoint,
-                          wrap_fp16_model)
-
- from mmdet.apis import multi_gpu_test, single_gpu_test
- from mmdet.datasets import (build_dataloader, build_dataset,
-                             replace_ImageToTensor)
- from mmdet.models import build_detector
-
-
- def parse_args():
-     parser = argparse.ArgumentParser(
-         description='MMDet test (and eval) a model')
-     parser.add_argument('config', help='test config file path')
-     parser.add_argument('checkpoint', help='checkpoint file')
-     parser.add_argument('--out', help='output result file in pickle format')
-     parser.add_argument(
-         '--fuse-conv-bn',
-         action='store_true',
-         help='Whether to fuse conv and bn; this will slightly increase '
-         'the inference speed')
-     parser.add_argument(
-         '--format-only',
-         action='store_true',
-         help='Format the output results without performing evaluation. It is '
-         'useful when you want to format the result to a specific format and '
-         'submit it to the test server')
-     parser.add_argument(
-         '--eval',
-         type=str,
-         nargs='+',
-         help='evaluation metrics, which depends on the dataset, e.g., "bbox",'
-         ' "segm", "proposal" for COCO, and "mAP", "recall" for PASCAL VOC')
-     parser.add_argument('--show', action='store_true', help='show results')
-     parser.add_argument(
-         '--show-dir', help='directory where painted images will be saved')
-     parser.add_argument(
-         '--show-score-thr',
-         type=float,
-         default=0.3,
-         help='score threshold (default: 0.3)')
-     parser.add_argument(
-         '--gpu-collect',
-         action='store_true',
-         help='whether to use gpu to collect results.')
-     parser.add_argument(
-         '--tmpdir',
-         help='tmp directory used for collecting results from multiple '
-         'workers, available when gpu-collect is not specified')
-     parser.add_argument(
-         '--cfg-options',
-         nargs='+',
-         action=DictAction,
-         help='override some settings in the used config, the key-value pair '
-         'in xxx=yyy format will be merged into the config file. If the value to '
-         'be overwritten is a list, it should be like key="[a,b]" or key=a,b. '
-         'It also allows nested list/tuple values, e.g. key="[(a,b),(c,d)]". '
-         'Note that the quotation marks are necessary and that no white space '
-         'is allowed.')
-     parser.add_argument(
-         '--options',
-         nargs='+',
-         action=DictAction,
-         help='custom options for evaluation, the key-value pair in xxx=yyy '
-         'format will be kwargs for the dataset.evaluate() function (deprecated); '
-         'change to --eval-options instead.')
-     parser.add_argument(
-         '--eval-options',
-         nargs='+',
-         action=DictAction,
-         help='custom options for evaluation, the key-value pair in xxx=yyy '
-         'format will be kwargs for the dataset.evaluate() function')
-     parser.add_argument(
-         '--launcher',
-         choices=['none', 'pytorch', 'slurm', 'mpi'],
-         default='none',
-         help='job launcher')
-     parser.add_argument('--local_rank', type=int, default=0)
-     args = parser.parse_args()
-     if 'LOCAL_RANK' not in os.environ:
-         os.environ['LOCAL_RANK'] = str(args.local_rank)
-
-     if args.options and args.eval_options:
-         raise ValueError(
-             '--options and --eval-options cannot be both '
-             'specified, --options is deprecated in favor of --eval-options')
-     if args.options:
-         warnings.warn('--options is deprecated in favor of --eval-options')
-         args.eval_options = args.options
-     return args
-
-
- def main():
-     args = parse_args()
-
-     assert args.out or args.eval or args.format_only or args.show \
-         or args.show_dir, \
-         ('Please specify at least one operation (save/eval/format/show the '
-          'results / save the results) with the argument "--out", "--eval"'
-          ', "--format-only", "--show" or "--show-dir"')
-
-     if args.eval and args.format_only:
-         raise ValueError('--eval and --format-only cannot be both specified')
-
-     if args.out is not None and not args.out.endswith(('.pkl', '.pickle')):
-         raise ValueError('The output file must be a pkl file.')
-
-     cfg = Config.fromfile(args.config)
-     if args.cfg_options is not None:
-         cfg.merge_from_dict(args.cfg_options)
-     # import modules from string list.
-     if cfg.get('custom_imports', None):
-         from mmcv.utils import import_modules_from_strings
-         import_modules_from_strings(**cfg['custom_imports'])
-     # set cudnn_benchmark
-     if cfg.get('cudnn_benchmark', False):
-         torch.backends.cudnn.benchmark = True
-     cfg.model.pretrained = None
-     if cfg.model.get('neck'):
-         if isinstance(cfg.model.neck, list):
-             for neck_cfg in cfg.model.neck:
-                 if neck_cfg.get('rfp_backbone'):
-                     if neck_cfg.rfp_backbone.get('pretrained'):
-                         neck_cfg.rfp_backbone.pretrained = None
-         elif cfg.model.neck.get('rfp_backbone'):
-             if cfg.model.neck.rfp_backbone.get('pretrained'):
-                 cfg.model.neck.rfp_backbone.pretrained = None
-
-     # in case the test dataset is concatenated
-     samples_per_gpu = 1
-     if isinstance(cfg.data.test, dict):
-         cfg.data.test.test_mode = True
-         samples_per_gpu = cfg.data.test.pop('samples_per_gpu', 1)
-         if samples_per_gpu > 1:
-             # Replace 'ImageToTensor' with 'DefaultFormatBundle'
-             cfg.data.test.pipeline = replace_ImageToTensor(
-                 cfg.data.test.pipeline)
-     elif isinstance(cfg.data.test, list):
-         for ds_cfg in cfg.data.test:
-             ds_cfg.test_mode = True
-         samples_per_gpu = max(
-             [ds_cfg.pop('samples_per_gpu', 1) for ds_cfg in cfg.data.test])
-         if samples_per_gpu > 1:
-             for ds_cfg in cfg.data.test:
-                 ds_cfg.pipeline = replace_ImageToTensor(ds_cfg.pipeline)
-
-     # init distributed env first, since logger depends on the dist info.
-     if args.launcher == 'none':
-         distributed = False
-     else:
-         distributed = True
-         init_dist(args.launcher, **cfg.dist_params)
-
-     # build the dataloader
-     dataset = build_dataset(cfg.data.test)
-     data_loader = build_dataloader(
-         dataset,
-         samples_per_gpu=samples_per_gpu,
-         workers_per_gpu=cfg.data.workers_per_gpu,
-         dist=distributed,
-         shuffle=False)
-
-     # build the model and load checkpoint
-     cfg.model.train_cfg = None
-     model = build_detector(cfg.model, test_cfg=cfg.get('test_cfg'))
-     fp16_cfg = cfg.get('fp16', None)
-     if fp16_cfg is not None:
-         wrap_fp16_model(model)
-     checkpoint = load_checkpoint(model, args.checkpoint, map_location='cpu')
-     if args.fuse_conv_bn:
-         model = fuse_conv_bn(model)
-     # old versions did not save class info in checkpoints; this workaround is
-     # for backward compatibility
-     if 'CLASSES' in checkpoint.get('meta', {}):
-         model.CLASSES = checkpoint['meta']['CLASSES']
-     else:
-         model.CLASSES = dataset.CLASSES
-
-     if not distributed:
-         model = MMDataParallel(model, device_ids=[0])
-         outputs = single_gpu_test(model, data_loader, args.show, args.show_dir,
-                                   args.show_score_thr)
-     else:
-         model = MMDistributedDataParallel(
-             model.cuda(),
-             device_ids=[torch.cuda.current_device()],
-             broadcast_buffers=False)
-         outputs = multi_gpu_test(model, data_loader, args.tmpdir,
-                                  args.gpu_collect)
-
-     rank, _ = get_dist_info()
-     if rank == 0:
-         if args.out:
-             print(f'\nwriting results to {args.out}')
-             mmcv.dump(outputs, args.out)
-         kwargs = {} if args.eval_options is None else args.eval_options
-         if args.format_only:
-             dataset.format_results(outputs, **kwargs)
-         if args.eval:
-             eval_kwargs = cfg.get('evaluation', {}).copy()
-             # hard-coded way to remove EvalHook args
-             for key in [
-                     'interval', 'tmpdir', 'start', 'gpu_collect', 'save_best',
-                     'rule'
-             ]:
-                 eval_kwargs.pop(key, None)
-             eval_kwargs.update(dict(metric=args.eval, **kwargs))
-             print(dataset.evaluate(outputs, **eval_kwargs))
-
-
- if __name__ == '__main__':
-     main()
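
One part of this script worth unpacking is `--cfg-options`, which accepts dotted `key=value` pairs and merges them into the loaded config. Below is a simplified, self-contained sketch of that merge for plain dicts; mmcv's real `Config.merge_from_dict` additionally parses value types and handles lists and tuples.

```python
def merge_dotted(cfg: dict, options: dict) -> dict:
    """Merge {'a.b.c': value} style overrides into a nested dict, in place."""
    for dotted, value in options.items():
        *parents, leaf = dotted.split('.')
        node = cfg
        for key in parents:
            node = node.setdefault(key, {})  # descend, creating levels as needed
        node[leaf] = value
    return cfg

cfg = {'model': {'backbone': {'depth': 50}}}
merge_dotted(cfg, {'model.backbone.depth': 101, 'data.samples_per_gpu': 2})
print(cfg)
# {'model': {'backbone': {'depth': 101}}, 'data': {'samples_per_gpu': 2}}
```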
spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/fcn_hr18.py DELETED
@@ -1,52 +0,0 @@
- # model settings
- norm_cfg = dict(type='SyncBN', requires_grad=True)
- model = dict(
-     type='EncoderDecoder',
-     pretrained='open-mmlab://msra/hrnetv2_w18',
-     backbone=dict(
-         type='HRNet',
-         norm_cfg=norm_cfg,
-         norm_eval=False,
-         extra=dict(
-             stage1=dict(
-                 num_modules=1,
-                 num_branches=1,
-                 block='BOTTLENECK',
-                 num_blocks=(4, ),
-                 num_channels=(64, )),
-             stage2=dict(
-                 num_modules=1,
-                 num_branches=2,
-                 block='BASIC',
-                 num_blocks=(4, 4),
-                 num_channels=(18, 36)),
-             stage3=dict(
-                 num_modules=4,
-                 num_branches=3,
-                 block='BASIC',
-                 num_blocks=(4, 4, 4),
-                 num_channels=(18, 36, 72)),
-             stage4=dict(
-                 num_modules=3,
-                 num_branches=4,
-                 block='BASIC',
-                 num_blocks=(4, 4, 4, 4),
-                 num_channels=(18, 36, 72, 144)))),
-     decode_head=dict(
-         type='FCNHead',
-         in_channels=[18, 36, 72, 144],
-         in_index=(0, 1, 2, 3),
-         channels=sum([18, 36, 72, 144]),
-         input_transform='resize_concat',
-         kernel_size=1,
-         num_convs=1,
-         concat_input=False,
-         dropout_ratio=-1,
-         num_classes=19,
-         norm_cfg=norm_cfg,
-         align_corners=False,
-         loss_decode=dict(
-             type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
-     # model training and testing settings
-     train_cfg=dict(),
-     test_cfg=dict(mode='whole'))
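
One detail worth spelling out: with `input_transform='resize_concat'`, the FCN head resizes all four HRNet branch outputs to a common resolution and concatenates them along the channel axis, which is why `channels` must equal the sum of the per-branch widths. A quick check:

```python
in_channels = [18, 36, 72, 144]  # HRNetV2-W18 branch widths from the config above
print(sum(in_channels))  # 270 -> what `channels=sum([18, 36, 72, 144])` evaluates to
```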
spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50b-d8_769x769_80k_cityscapes.py DELETED
@@ -1,2 +0,0 @@
- _base_ = './deeplabv3_r50-d8_769x769_80k_cityscapes.py'
- model = dict(pretrained='torchvision://resnet50', backbone=dict(type='ResNet'))
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/data_preprocessor.py DELETED
@@ -1,199 +0,0 @@
- """
- This module contains utils for preprocessing the text before converting it to embeddings.
-
- - TextPreprocessorBuilder preprocesses individual strings.
-     * lowering cases
-     * converting numbers to words or characters
-     * merging and stripping spaces
-     * removing punctuation
-     * removing stop words
-     * lemmatizing
-     * removing specific parts of speech (adverbs and interjections)
- - TextSummarizer extracts the most important sentences from a long string using TextRank.
- """
- import pytextrank
- import string
- import spacy
- import math
- import nltk
- import re
-
- from nltk.corpus import stopwords
- from nltk.stem import WordNetLemmatizer
- from num2words import num2words
-
-
- class TextPreprocessorBuilder:
-     # Define class variables as None initially
-     _stop_words = set(stopwords.words('english'))
-     _lemmatizer = WordNetLemmatizer()
-
-     # Some of the functions are expensive. We cache the results.
-     _lemmatizer_cache = {}
-     _pos_remove_cache = {}
-
-     def __init__(self, text: str):
-         self.text = text
-
-     def to_lower(self):
-         # Match both words and non-word characters
-         tokens = re.findall(r'\b\w+\b|\W+', self.text)
-         for i, token in enumerate(tokens):
-             # Check if token is a word
-             if re.match(r'^\w+$', token):
-                 # Check if token is not an abbreviation or constant
-                 if not re.match(r'^[A-Z]+$', token) and not re.match(r'^[A-Z_]+$', token):
-                     tokens[i] = token.lower()
-         self.text = "".join(tokens)
-         return self
-
-     def num_to_word(self, min_len: int = 1):
-         # Match both words and non-word characters
-         tokens = re.findall(r'\b\w+\b|\W+', self.text)
-         for i, token in enumerate(tokens):
-             # Check if token is a number of length `min_len` or more
-             if token.isdigit() and len(token) >= min_len:
-                 # This is done to pay better attention to numbers (e.g. ticket numbers, thread numbers, post numbers)
-                 # 740700 will become "seven hundred and forty thousand seven hundred".
-                 tokens[i] = num2words(int(token)).replace(",", "")  # Remove commas from num2words.
-         self.text = "".join(tokens)
-         return self
-
-     def num_to_char_long(self, min_len: int = 1):
-         # Match both words and non-word characters
-         tokens = re.findall(r'\b\w+\b|\W+', self.text)
-         for i, token in enumerate(tokens):
-             # Check if token is a number of length `min_len` or more
-             if token.isdigit() and len(token) >= min_len:
-                 # This is done to pay better attention to numbers (e.g. ticket numbers, thread numbers, post numbers)
-                 # 740700 will become HHHHHHEEEEEAAAAHHHAAA
-                 convert_token = lambda token: ''.join((chr(int(digit) + 65) * (i + 1)) for i, digit in enumerate(token[::-1]))[::-1]
-                 tokens[i] = convert_token(tokens[i])
-         self.text = "".join(tokens)
-         return self
-
-     def num_to_char(self, min_len: int = 1):
-         # Match both words and non-word characters
-         tokens = re.findall(r'\b\w+\b|\W+', self.text)
-         for i, token in enumerate(tokens):
-             # Check if token is a number of length `min_len` or more
-             if token.isdigit() and len(token) >= min_len:
-                 # This is done to pay better attention to numbers (e.g. ticket numbers, thread numbers, post numbers)
-                 # 740700 will become HEAHAA
-                 tokens[i] = ''.join(chr(int(digit) + 65) for digit in token)
-         self.text = "".join(tokens)
-         return self
-
-     def merge_spaces(self):
-         self.text = re.sub(' +', ' ', self.text)
-         return self
-
-     def strip(self):
-         self.text = self.text.strip()
-         return self
-
-     def remove_punctuation(self):
-         self.text = self.text.translate(str.maketrans('', '', string.punctuation))
-         return self
-
-     def remove_stopwords(self):
-         self.text = "".join([word for word in re.findall(r'\b\w+\b|\W+', self.text) if word not in TextPreprocessorBuilder._stop_words])
-         return self
-
-     def remove_specific_pos(self):
-         """
-         In the English language, adverbs and interjections rarely provide meaningful information.
-         Removing them improves the embedding precision. Don't tell JK Rowling, though.
-         """
-         processed_text = TextPreprocessorBuilder._pos_remove_cache.get(self.text)
-         if processed_text:
-             self.text = processed_text
-             return self
-
-         # Match both words and non-word characters
-         tokens = re.findall(r'\b\w+\b|\W+', self.text)
-
-         # Exclude adverbs and interjections
-         excluded_tags = ['RB', 'RBR', 'RBS', 'UH']
-
-         for i, token in enumerate(tokens):
-             # Check if token is a word
-             if re.match(r'^\w+$', token):
-                 # Part-of-speech tag the word
-                 pos = nltk.pos_tag([token])[0][1]
-                 # If the word's POS tag is in the excluded list, remove the word
-                 if pos in excluded_tags:
-                     tokens[i] = ''
-
-         new_text = "".join(tokens)
-         TextPreprocessorBuilder._pos_remove_cache[self.text] = new_text
-         self.text = new_text
-
-         return self
-
-     def lemmatize(self):
-         processed_text = TextPreprocessorBuilder._lemmatizer_cache.get(self.text)
-         if processed_text:
-             self.text = processed_text
-             return self
-
-         new_text = "".join([TextPreprocessorBuilder._lemmatizer.lemmatize(word) for word in re.findall(r'\b\w+\b|\W+', self.text)])
-         TextPreprocessorBuilder._lemmatizer_cache[self.text] = new_text
-         self.text = new_text
-
-         return self
-
-     def build(self):
-         return self.text
-
-
- class TextSummarizer:
-     _nlp_pipeline = None
-     _cache = {}
-
-     @staticmethod
-     def _load_nlp_pipeline():
-         # Lazy-load it.
-         if TextSummarizer._nlp_pipeline is None:
-             TextSummarizer._nlp_pipeline = spacy.load('en_core_web_sm')
-             TextSummarizer._nlp_pipeline.add_pipe("textrank", last=True)
-         return TextSummarizer._nlp_pipeline
-
-     @staticmethod
-     def process_long_text(text: str, min_num_sent: int) -> list[str]:
-         """
-         This function applies a text summarization process to a given text string, extracting
-         the most important sentences based on the principle that 20% of the content is responsible
-         for 80% of the meaning (the Pareto Principle).
-
-         Returns:
-             list: A list of the most important sentences
-         """
-
-         # Attempt to get the result from cache
-         cache_key = (text, min_num_sent)
-         cached_result = TextSummarizer._cache.get(cache_key, None)
-         if cached_result is not None:
-             return cached_result
-
-         nlp_pipeline = TextSummarizer._load_nlp_pipeline()
-         doc = nlp_pipeline(text)
-
-         num_sent = len(list(doc.sents))
-         result = []
-
-         if num_sent >= min_num_sent:
-             limit_phrases = math.ceil(len(doc._.phrases) * 0.20)  # 20% of the phrases, rounded up
-             limit_sentences = math.ceil(num_sent * 0.20)  # 20% of the sentences, rounded up
-             result = [str(sent) for sent in doc._.textrank.summary(limit_phrases=limit_phrases, limit_sentences=limit_sentences)]
-         else:
-             result = [text]
-
-         # Store the result in cache before returning it
-         TextSummarizer._cache[cache_key] = result
-         return result
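
Since `TextPreprocessorBuilder` is a fluent builder, the intended usage is to chain the steps and call `build()` at the end. A minimal sketch (it assumes the NLTK `stopwords` and `wordnet` corpora have already been downloaded, since the class pulls them in at definition time):

```python
# Hypothetical chained usage of the builder defined above.
clean = (
    TextPreprocessorBuilder("Ticket 740700 was CLOSED quickly!")
    .to_lower()           # all-caps tokens such as "CLOSED" are left intact
    .num_to_word()        # 740700 -> "seven hundred and forty thousand seven hundred"
    .remove_punctuation()
    .merge_spaces()
    .strip()
    .build()
)
print(clean)
```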
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py DELETED
@@ -1,353 +0,0 @@
- """This is invoked in a subprocess to call the build backend hooks.
-
- It expects:
- - Command line args: hook_name, control_dir
- - Environment variables:
-       PEP517_BUILD_BACKEND=entry.point:spec
-       PEP517_BACKEND_PATH=paths (separated with os.pathsep)
- - control_dir/input.json:
-   - {"kwargs": {...}}
-
- Results:
- - control_dir/output.json
-   - {"return_val": ...}
- """
- import json
- import os
- import os.path
- import re
- import shutil
- import sys
- import traceback
- from glob import glob
- from importlib import import_module
- from os.path import join as pjoin
-
- # This file is run as a script, and `import wrappers` is not zip-safe, so we
- # include write_json() and read_json() from wrappers.py.
-
-
- def write_json(obj, path, **kwargs):
-     with open(path, 'w', encoding='utf-8') as f:
-         json.dump(obj, f, **kwargs)
-
-
- def read_json(path):
-     with open(path, encoding='utf-8') as f:
-         return json.load(f)
-
-
- class BackendUnavailable(Exception):
-     """Raised if we cannot import the backend"""
-     def __init__(self, traceback):
-         self.traceback = traceback
-
-
- class BackendInvalid(Exception):
-     """Raised if the backend is invalid"""
-     def __init__(self, message):
-         self.message = message
-
-
- class HookMissing(Exception):
-     """Raised if a hook is missing and we are not executing the fallback"""
-     def __init__(self, hook_name=None):
-         super().__init__(hook_name)
-         self.hook_name = hook_name
-
-
- def contained_in(filename, directory):
-     """Test if a file is located within the given directory."""
-     filename = os.path.normcase(os.path.abspath(filename))
-     directory = os.path.normcase(os.path.abspath(directory))
-     return os.path.commonprefix([filename, directory]) == directory
-
-
- def _build_backend():
-     """Find and load the build backend"""
-     # Add in-tree backend directories to the front of sys.path.
-     backend_path = os.environ.get('PEP517_BACKEND_PATH')
-     if backend_path:
-         extra_pathitems = backend_path.split(os.pathsep)
-         sys.path[:0] = extra_pathitems
-
-     ep = os.environ['PEP517_BUILD_BACKEND']
-     mod_path, _, obj_path = ep.partition(':')
-     try:
-         obj = import_module(mod_path)
-     except ImportError:
-         raise BackendUnavailable(traceback.format_exc())
-
-     if backend_path:
-         if not any(
-             contained_in(obj.__file__, path)
-             for path in extra_pathitems
-         ):
-             raise BackendInvalid("Backend was not loaded from backend-path")
-
-     if obj_path:
-         for path_part in obj_path.split('.'):
-             obj = getattr(obj, path_part)
-     return obj
-
-
- def _supported_features():
-     """Return the list of optional features supported by the backend.
-
-     Returns a list of strings.
-     The only possible value is 'build_editable'.
-     """
-     backend = _build_backend()
-     features = []
-     if hasattr(backend, "build_editable"):
-         features.append("build_editable")
-     return features
-
-
- def get_requires_for_build_wheel(config_settings):
-     """Invoke the optional get_requires_for_build_wheel hook
-
-     Returns [] if the hook is not defined.
-     """
-     backend = _build_backend()
-     try:
-         hook = backend.get_requires_for_build_wheel
-     except AttributeError:
-         return []
-     else:
-         return hook(config_settings)
-
-
- def get_requires_for_build_editable(config_settings):
-     """Invoke the optional get_requires_for_build_editable hook
-
-     Returns [] if the hook is not defined.
-     """
-     backend = _build_backend()
-     try:
-         hook = backend.get_requires_for_build_editable
-     except AttributeError:
-         return []
-     else:
-         return hook(config_settings)
-
-
- def prepare_metadata_for_build_wheel(
-         metadata_directory, config_settings, _allow_fallback):
-     """Invoke optional prepare_metadata_for_build_wheel
-
-     Implements a fallback by building a wheel if the hook isn't defined,
-     unless _allow_fallback is False in which case HookMissing is raised.
-     """
-     backend = _build_backend()
-     try:
-         hook = backend.prepare_metadata_for_build_wheel
-     except AttributeError:
-         if not _allow_fallback:
-             raise HookMissing()
-     else:
-         return hook(metadata_directory, config_settings)
-     # fallback to build_wheel outside the try block to avoid exception chaining
-     # which can be confusing to users and is not relevant
-     whl_basename = backend.build_wheel(metadata_directory, config_settings)
-     return _get_wheel_metadata_from_wheel(whl_basename, metadata_directory,
-                                           config_settings)
-
-
- def prepare_metadata_for_build_editable(
-         metadata_directory, config_settings, _allow_fallback):
-     """Invoke optional prepare_metadata_for_build_editable
-
-     Implements a fallback by building an editable wheel if the hook isn't
-     defined, unless _allow_fallback is False in which case HookMissing is
-     raised.
-     """
-     backend = _build_backend()
-     try:
-         hook = backend.prepare_metadata_for_build_editable
-     except AttributeError:
-         if not _allow_fallback:
-             raise HookMissing()
-         try:
-             build_hook = backend.build_editable
-         except AttributeError:
-             raise HookMissing(hook_name='build_editable')
-         else:
-             whl_basename = build_hook(metadata_directory, config_settings)
-             return _get_wheel_metadata_from_wheel(whl_basename,
-                                                   metadata_directory,
-                                                   config_settings)
-     else:
-         return hook(metadata_directory, config_settings)
-
-
- WHEEL_BUILT_MARKER = 'PEP517_ALREADY_BUILT_WHEEL'
-
-
- def _dist_info_files(whl_zip):
-     """Identify the .dist-info folder inside a wheel ZipFile."""
-     res = []
-     for path in whl_zip.namelist():
-         m = re.match(r'[^/\\]+-[^/\\]+\.dist-info/', path)
-         if m:
-             res.append(path)
-     if res:
-         return res
-     raise Exception("No .dist-info folder found in wheel")
-
-
- def _get_wheel_metadata_from_wheel(
-         whl_basename, metadata_directory, config_settings):
-     """Extract the metadata from a wheel.
-
-     Fallback for when the build backend does not
-     define the 'get_wheel_metadata' hook.
-     """
-     from zipfile import ZipFile
-     with open(os.path.join(metadata_directory, WHEEL_BUILT_MARKER), 'wb'):
-         pass  # Touch marker file
-
-     whl_file = os.path.join(metadata_directory, whl_basename)
-     with ZipFile(whl_file) as zipf:
-         dist_info = _dist_info_files(zipf)
-         zipf.extractall(path=metadata_directory, members=dist_info)
-     return dist_info[0].split('/')[0]
-
-
- def _find_already_built_wheel(metadata_directory):
-     """Check for a wheel already built during the get_wheel_metadata hook.
-     """
-     if not metadata_directory:
-         return None
-     metadata_parent = os.path.dirname(metadata_directory)
-     if not os.path.isfile(pjoin(metadata_parent, WHEEL_BUILT_MARKER)):
-         return None
-
-     whl_files = glob(os.path.join(metadata_parent, '*.whl'))
-     if not whl_files:
-         print('Found wheel built marker, but no .whl files')
-         return None
-     if len(whl_files) > 1:
-         print('Found multiple .whl files; unspecified behaviour. '
-               'Will call build_wheel.')
-         return None
-
-     # Exactly one .whl file
-     return whl_files[0]
-
-
- def build_wheel(wheel_directory, config_settings, metadata_directory=None):
-     """Invoke the mandatory build_wheel hook.
-
-     If a wheel was already built in the
-     prepare_metadata_for_build_wheel fallback, this
-     will copy it rather than rebuilding the wheel.
-     """
-     prebuilt_whl = _find_already_built_wheel(metadata_directory)
-     if prebuilt_whl:
-         shutil.copy2(prebuilt_whl, wheel_directory)
-         return os.path.basename(prebuilt_whl)
-
-     return _build_backend().build_wheel(wheel_directory, config_settings,
-                                         metadata_directory)
-
-
- def build_editable(wheel_directory, config_settings, metadata_directory=None):
-     """Invoke the optional build_editable hook.
-
-     If a wheel was already built in the
-     prepare_metadata_for_build_editable fallback, this
-     will copy it rather than rebuilding the wheel.
-     """
-     backend = _build_backend()
-     try:
-         hook = backend.build_editable
-     except AttributeError:
-         raise HookMissing()
-     else:
-         prebuilt_whl = _find_already_built_wheel(metadata_directory)
-         if prebuilt_whl:
-             shutil.copy2(prebuilt_whl, wheel_directory)
-             return os.path.basename(prebuilt_whl)
-
-         return hook(wheel_directory, config_settings, metadata_directory)
-
-
- def get_requires_for_build_sdist(config_settings):
-     """Invoke the optional get_requires_for_build_sdist hook
-
-     Returns [] if the hook is not defined.
-     """
-     backend = _build_backend()
-     try:
-         hook = backend.get_requires_for_build_sdist
-     except AttributeError:
-         return []
-     else:
-         return hook(config_settings)
-
-
- class _DummyException(Exception):
-     """Nothing should ever raise this exception"""
-
-
- class GotUnsupportedOperation(Exception):
-     """For internal use when backend raises UnsupportedOperation"""
-     def __init__(self, traceback):
-         self.traceback = traceback
-
-
- def build_sdist(sdist_directory, config_settings):
-     """Invoke the mandatory build_sdist hook."""
-     backend = _build_backend()
-     try:
-         return backend.build_sdist(sdist_directory, config_settings)
-     except getattr(backend, 'UnsupportedOperation', _DummyException):
-         raise GotUnsupportedOperation(traceback.format_exc())
-
-
- HOOK_NAMES = {
-     'get_requires_for_build_wheel',
-     'prepare_metadata_for_build_wheel',
-     'build_wheel',
-     'get_requires_for_build_editable',
-     'prepare_metadata_for_build_editable',
-     'build_editable',
-     'get_requires_for_build_sdist',
-     'build_sdist',
-     '_supported_features',
- }
-
-
- def main():
-     if len(sys.argv) < 3:
-         sys.exit("Needs args: hook_name, control_dir")
-     hook_name = sys.argv[1]
-     control_dir = sys.argv[2]
-     if hook_name not in HOOK_NAMES:
-         sys.exit("Unknown hook: %s" % hook_name)
-     hook = globals()[hook_name]
-
-     hook_input = read_json(pjoin(control_dir, 'input.json'))
-
-     json_out = {'unsupported': False, 'return_val': None}
-     try:
-         json_out['return_val'] = hook(**hook_input['kwargs'])
-     except BackendUnavailable as e:
-         json_out['no_backend'] = True
-         json_out['traceback'] = e.traceback
-     except BackendInvalid as e:
-         json_out['backend_invalid'] = True
-         json_out['backend_error'] = e.message
-     except GotUnsupportedOperation as e:
-         json_out['unsupported'] = True
-         json_out['traceback'] = e.traceback
-     except HookMissing as e:
-         json_out['hook_missing'] = True
-         json_out['missing_hook_name'] = e.hook_name or hook_name
-
-     write_json(json_out, pjoin(control_dir, 'output.json'), indent=2)
-
-
- if __name__ == '__main__':
-     main()
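
The input/output contract described in the module docstring can be exercised by hand. The sketch below is a hypothetical manual invocation for the `get_requires_for_build_wheel` hook; the script path, the `setuptools.build_meta` backend, and running it from a directory containing a buildable project are all assumptions for illustration.

```python
import json
import os
import subprocess
import sys
import tempfile

# Hypothetical driver for the in-process script above. Run from a project
# directory (one with a pyproject.toml/setup.py) so the backend can succeed.
control_dir = tempfile.mkdtemp()
with open(os.path.join(control_dir, 'input.json'), 'w', encoding='utf-8') as f:
    json.dump({'kwargs': {'config_settings': None}}, f)

env = dict(os.environ, PEP517_BUILD_BACKEND='setuptools.build_meta')
subprocess.run(
    [sys.executable, '_in_process.py',  # assumed path to the script above
     'get_requires_for_build_wheel', control_dir],
    env=env, check=True)

with open(os.path.join(control_dir, 'output.json'), encoding='utf-8') as f:
    print(json.load(f)['return_val'])
```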
spaces/Aveygo/AstroSleuth/training.md DELETED
@@ -1,23 +0,0 @@
- # Training details
-
- Astrosleuth_v1 is a 6b Real-ESRGAN model, trained on 15 thousand images of various deep space objects, taken from multiple sources including:
- - [AstroBin](https://welcome.astrobin.com/)
- - [NASA](https://www.nasa.gov/multimedia/imagegallery/index.html)
- - [r/Astrophotography](https://www.reddit.com/r/astrophotography/)
- - [r/Astronomy](https://www.reddit.com/r/astronomy/)
-
- Astrosleuth_v2 is a continuation of v1, with the following improvements:
- - [Projected discriminator](https://github.com/autonomousvision/projected-gan)
- - Greater emphasis on visual quality rather than accuracy
- - Custom VGG model for perceptual loss
- - Fewer JPG degradations and more motion / Gaussian blur
- - Trained with a much lower learning rate
- - Pruned the dataset with CLIP to keep more high-quality ground truths
-
- As of writing, v2 is not publicly available, but it will be added to the Hugging Face repository in the very near future.
-
- ## Future plans
-
- Astrosleuth_v3 will be v2 with more emphasis on accuracy (no discriminator) to compete with BlurXTerminator, as I believe the community will appreciate such changes.
-
- I am also considering a true "zero-knowledge" model, as described by [Deep Image Prior](https://arxiv.org/abs/1711.10925), but will leave that alone for now to focus on current work.
spaces/Awesimo/jojogan/e4e/criteria/__init__.py DELETED
File without changes
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/export/caffe2_export.py DELETED
@@ -1,207 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates.
-
- import copy
- import io
- import logging
- import numpy as np
- from typing import List
- import onnx
- import torch
- from caffe2.proto import caffe2_pb2
- from caffe2.python import core
- from caffe2.python.onnx.backend import Caffe2Backend
- from tabulate import tabulate
- from termcolor import colored
- from torch.onnx import OperatorExportTypes
-
- from .shared import (
-     ScopedWS,
-     construct_init_net_from_params,
-     fuse_alias_placeholder,
-     fuse_copy_between_cpu_and_gpu,
-     get_params_from_init_net,
-     group_norm_replace_aten_with_caffe2,
-     infer_device_type,
-     remove_dead_end_ops,
-     remove_reshape_for_fc,
-     save_graph,
- )
-
- logger = logging.getLogger(__name__)
-
-
- def export_onnx_model(model, inputs):
-     """
-     Trace and export a model to onnx format.
-
-     Args:
-         model (nn.Module):
-         inputs (tuple[args]): the model will be called by `model(*inputs)`
-
-     Returns:
-         an onnx model
-     """
-     assert isinstance(model, torch.nn.Module)
-
-     # make sure all modules are in eval mode; onnx may change the training state
-     # of the module if the states are not consistent
-     def _check_eval(module):
-         assert not module.training
-
-     model.apply(_check_eval)
-
-     # Export the model to ONNX
-     with torch.no_grad():
-         with io.BytesIO() as f:
-             torch.onnx.export(
-                 model,
-                 inputs,
-                 f,
-                 operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK,
-                 # verbose=True,  # NOTE: uncomment this for debugging
-                 # export_params=True,
-             )
-             onnx_model = onnx.load_from_string(f.getvalue())
-
-     # Apply ONNX's optimization passes
-     all_passes = onnx.optimizer.get_available_passes()
-     passes = ["fuse_bn_into_conv"]
-     assert all(p in all_passes for p in passes)
-     onnx_model = onnx.optimizer.optimize(onnx_model, passes)
-     return onnx_model
-
-
- def _op_stats(net_def):
-     type_count = {}
-     for t in [op.type for op in net_def.op]:
-         type_count[t] = type_count.get(t, 0) + 1
-     type_count_list = sorted(type_count.items(), key=lambda kv: kv[0])  # alphabet
-     type_count_list = sorted(type_count_list, key=lambda kv: -kv[1])  # count
-     return "\n".join("{:>4}x {}".format(count, name) for name, count in type_count_list)
-
-
- def _assign_device_option(
-     predict_net: caffe2_pb2.NetDef, init_net: caffe2_pb2.NetDef, tensor_inputs: List[torch.Tensor]
- ):
-     """
-     An ONNX-exported network doesn't have a concept of device; assign the necessary
-     device option for each op in order to make it runnable on a GPU runtime.
-     """
-
-     def _get_device_type(torch_tensor):
-         assert torch_tensor.device.type in ["cpu", "cuda"]
-         assert torch_tensor.device.index == 0
-         return torch_tensor.device.type
-
-     def _assign_op_device_option(net_proto, net_ssa, blob_device_types):
-         for op, ssa_i in zip(net_proto.op, net_ssa):
-             if op.type in ["CopyCPUToGPU", "CopyGPUToCPU"]:
-                 op.device_option.CopyFrom(core.DeviceOption(caffe2_pb2.CUDA, 0))
-             else:
-                 devices = [blob_device_types[b] for b in ssa_i[0] + ssa_i[1]]
-                 assert all(d == devices[0] for d in devices)
-                 if devices[0] == "cuda":
-                     op.device_option.CopyFrom(core.DeviceOption(caffe2_pb2.CUDA, 0))
-
-     # update ops in predict_net
-     predict_net_input_device_types = {
-         (name, 0): _get_device_type(tensor)
-         for name, tensor in zip(predict_net.external_input, tensor_inputs)
-     }
-     predict_net_device_types = infer_device_type(
-         predict_net, known_status=predict_net_input_device_types, device_name_style="pytorch"
-     )
-     predict_net_ssa, _ = core.get_ssa(predict_net)
-     _assign_op_device_option(predict_net, predict_net_ssa, predict_net_device_types)
-
-     # update ops in init_net
-     init_net_ssa, versions = core.get_ssa(init_net)
-     init_net_output_device_types = {
-         (name, versions[name]): predict_net_device_types[(name, 0)]
-         for name in init_net.external_output
-     }
-     init_net_device_types = infer_device_type(
-         init_net, known_status=init_net_output_device_types, device_name_style="pytorch"
-     )
-     _assign_op_device_option(init_net, init_net_ssa, init_net_device_types)
-
-
- def export_caffe2_detection_model(model: torch.nn.Module, tensor_inputs: List[torch.Tensor]):
-     """
-     Export a caffe2-compatible Detectron2 model to caffe2 format via ONNX.
-
-     Args:
-         model: a caffe2-compatible version of a detectron2 model, defined in caffe2_modeling.py
-         tensor_inputs: a list of tensors that the caffe2 model takes as input.
-     """
-     model = copy.deepcopy(model)
-     assert isinstance(model, torch.nn.Module)
-     assert hasattr(model, "encode_additional_info")
-
-     # Export via ONNX
-     logger.info(
-         "Exporting a {} model via ONNX ...".format(type(model).__name__)
-         + " Some warnings from ONNX are expected and are usually not to worry about."
-     )
-     onnx_model = export_onnx_model(model, (tensor_inputs,))
-     # Convert ONNX model to Caffe2 protobuf
-     init_net, predict_net = Caffe2Backend.onnx_graph_to_caffe2_net(onnx_model)
-     ops_table = [[op.type, op.input, op.output] for op in predict_net.op]
-     table = tabulate(ops_table, headers=["type", "input", "output"], tablefmt="pipe")
-     logger.info(
-         "ONNX export Done. Exported predict_net (before optimizations):\n" + colored(table, "cyan")
-     )
-
-     # Apply protobuf optimization
-     fuse_alias_placeholder(predict_net, init_net)
-     if any(t.device.type != "cpu" for t in tensor_inputs):
-         fuse_copy_between_cpu_and_gpu(predict_net)
-         remove_dead_end_ops(init_net)
-         _assign_device_option(predict_net, init_net, tensor_inputs)
-     params, device_options = get_params_from_init_net(init_net)
-     predict_net, params = remove_reshape_for_fc(predict_net, params)
-     init_net = construct_init_net_from_params(params, device_options)
-     group_norm_replace_aten_with_caffe2(predict_net)
-
-     # Record necessary information for running the pb model in the Detectron2 system.
-     model.encode_additional_info(predict_net, init_net)
-
-     logger.info("Operators used in predict_net: \n{}".format(_op_stats(predict_net)))
-     logger.info("Operators used in init_net: \n{}".format(_op_stats(init_net)))
-
-     return predict_net, init_net
-
-
- def run_and_save_graph(predict_net, init_net, tensor_inputs, graph_save_path):
-     """
-     Run the caffe2 model on given inputs, recording the blob shapes and drawing the graph.
-
-     predict_net/init_net: caffe2 model.
-     tensor_inputs: a list of tensors that the caffe2 model takes as input.
-     graph_save_path: path for saving the graph of the exported model.
-     """
-
-     logger.info("Saving graph of ONNX exported model to {} ...".format(graph_save_path))
-     save_graph(predict_net, graph_save_path, op_only=False)
-
-     # Run the exported Caffe2 net
-     logger.info("Running ONNX exported model ...")
-     with ScopedWS("__ws_tmp__", True) as ws:
-         ws.RunNetOnce(init_net)
-         initialized_blobs = set(ws.Blobs())
-         uninitialized = [inp for inp in predict_net.external_input if inp not in initialized_blobs]
-         for name, blob in zip(uninitialized, tensor_inputs):
-             ws.FeedBlob(name, blob)
-
-         try:
-             ws.RunNetOnce(predict_net)
-         except RuntimeError as e:
-             logger.warning("Encountered RuntimeError: \n{}".format(str(e)))
-
-         ws_blobs = {b: ws.FetchBlob(b) for b in ws.Blobs()}
-         blob_sizes = {b: ws_blobs[b].shape for b in ws_blobs if isinstance(ws_blobs[b], np.ndarray)}
-
-     logger.info("Saving graph with blob shapes to {} ...".format(graph_save_path))
-     save_graph(predict_net, graph_save_path, op_only=False, blob_sizes=blob_sizes)
-
-     return ws_blobs
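
The `_op_stats` helper above is essentially a frequency table over `op.type`, sorted by count and then alphabetically. The same summary can be reproduced with `collections.Counter`; the list of operator names here is a made-up stand-in for `[op.type for op in net_def.op]`:

```python
from collections import Counter

op_types = ['Conv', 'Relu', 'Conv', 'FC', 'Relu', 'Conv']  # illustrative input
for name, count in sorted(Counter(op_types).items(), key=lambda kv: (-kv[1], kv[0])):
    print('{:>4}x {}'.format(count, name))
#    3x Conv
#    2x Relu
#    1x FC
```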
spaces/AzumaSeren100/XuanShen-Bert-VITS2/monotonic_align/__init__.py DELETED
@@ -1,15 +0,0 @@
- from numpy import zeros, int32, float32
- from torch import from_numpy
-
- from .core import maximum_path_jit
-
-
- def maximum_path(neg_cent, mask):
-     device = neg_cent.device
-     dtype = neg_cent.dtype
-     neg_cent = neg_cent.data.cpu().numpy().astype(float32)
-     path = zeros(neg_cent.shape, dtype=int32)
-
-     t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32)
-     t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32)
-     maximum_path_jit(path, neg_cent, t_t_max, t_s_max)
-     return from_numpy(path).to(device=device, dtype=dtype)
spaces/Benson/text-generation/Examples/Casos Criminales Misterios Del Pasado Mod Apk ltima Versin.md DELETED
@@ -1,39 +0,0 @@
- <h1>Criminal Case: Mysteries of the Past Mod APK - A Guide for Players</h1>
- <p>If you are a fan of detective games, you may have heard of Criminal Case: Mysteries of the Past, a popular game developed by Pretty Simple. In this game, you can travel back in time to the 19th century and solve hundreds of crime cases in different locations. You can also join forces with other players and compete for the best score. But what if you want to enjoy the game without limits or restrictions? That is where Criminal Case: Mysteries of the Past Mod APK comes in handy. In this article, we will tell you everything you need to know about this modified version of the game, how to download and install it, and how to play it like a pro.</p>
- <h2>What is Criminal Case: Mysteries of the Past?</h2>
- <p>Criminal Case: Mysteries of the Past is a hidden-object adventure game that lets you become a detective in the Victorian era. You can explore various scenes, find clues, interrogate suspects, and analyze evidence to catch the killers. You can also customize your character, collect outfits, and unlock achievements. The game has more than 60 cases to solve, each with its own story and characters. You can also play with your friends and compare your scores on the leaderboard.</p>
- <h2>criminal case mysteries of the past mod apk latest version</h2><br /><p><b><b>Download</b> &#9999; <a href="https://bltlly.com/2v6KQQ">https://bltlly.com/2v6KQQ</a></b></p><br /><br />
- <h2>What is Criminal Case: Mysteries of the Past Mod APK?</h2>
- <p>Criminal Case: Mysteries of the Past Mod APK is a modified version of the original game that gives you some extra benefits. For example, you can get unlimited stars, energy, coins, and hints in this version. This means you can play for as long as you want, without waiting for your energy to refill or spending real money on in-app purchases. You can also access all cases and outfits without restrictions. With this mod, you can enjoy the game to the fullest and solve every mystery with ease.</p>
- <h2>How to download and install Criminal Case: Mysteries of the Past Mod APK?</h2>
- <ol>
- <li>Go to <a href="( 1 )">HappyMod.com</a> and search for Criminal Case: Mysteries of the Past Mod APK.</li>
- <li>Select the latest version (2.39) and click Download.</li>
- <li>Wait for the file to download to your device.</li>
- <li>Go to your file manager and locate the downloaded file.</li>
- <li>Tap on it and allow installation from unknown sources if prompted.</li>
- <li>Wait for the installation to complete.</li>
- <li>Launch the game and enjoy!</li>
- </ol>
- <p>Note: You may need to uninstall the original version of the game before installing the modded one.</p>
- <h2>How do you play Criminal Case: Mysteries of the Past Mod APK?</h2>
- <p>Playing Criminal Case: Mysteries of the Past Mod APK is very similar to playing the original game. You just have to follow these tips and tricks:</p>
- <ul>
- <li>Select a case from the map and start investigating.</li>
- <li>Tap on objects in the scene to collect them as clues.</li>
- <li>Use hints if you get stuck or want to speed up your progress.</li>
- <li>Analyze your clues in the lab or with your partner.</li>
- <li>Interrogate suspects and witnesses to get more information.</li>
- <li>Use stars to unlock new scenes or actions.</li>
- <li>Accuse the killer once you have enough evidence.</li>
- </ul>
- <h3>Do I need to root my device to use Criminal Case: Mysteries of the Past Mod APK?</h3>
- <p>No, you do not need to root your device to use Criminal Case: Mysteries of the Past Mod APK. The mod works fine on both rooted and non-rooted devices. However, some features may require root access, such as removing ads or changing permissions. You can check the mod's details on HappyMod.com before downloading it.</p>
- <h3>Can I play Criminal Case: Mysteries of the Past Mod APK online?</h3>
- <h3>Can I update Criminal Case: Mysteries of the Past Mod APK?</h3>
- <p>Yes, you can update Criminal Case: Mysteries of the Past Mod APK whenever a new version is available. You can check for updates on HappyMod.com or turn on notifications on your device. You can also uninstall the previous version and install the new one by following the same steps we provided in this article.</p>
- <h3>Can I use Criminal Case: Mysteries of the Past Mod APK on PC?</h3>
- <p>Yes, you can use Criminal Case: Mysteries of the Past Mod APK on PC by using an Android emulator. An emulator is software that lets you run Android apps on your computer. Some popular emulators are BlueStacks, NoxPlayer, and LDPlayer. You can download any of these emulators from their official websites and install them on your PC. Then you can download and install Criminal Case: Mysteries of the Past Mod APK from HappyMod.com and play it on your PC.</p>
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/cachecontrol/__init__.py DELETED
@@ -1,18 +0,0 @@
- # SPDX-FileCopyrightText: 2015 Eric Larson
- #
- # SPDX-License-Identifier: Apache-2.0
-
- """CacheControl import Interface.
-
- Make it easy to import from cachecontrol without long namespaces.
- """
- __author__ = "Eric Larson"
- __email__ = "[email protected]"
- __version__ = "0.12.11"
-
- from .wrapper import CacheControl
- from .adapter import CacheControlAdapter
- from .controller import CacheController
-
- import logging
- logging.getLogger(__name__).addHandler(logging.NullHandler())
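
The documented entry point is the `CacheControl` wrapper, which wraps a `requests` session with HTTP-cache-aware behavior. A minimal usage sketch, using the standalone `cachecontrol` and `requests` packages rather than pip's vendored copies:

```python
import requests
from cachecontrol import CacheControl

sess = CacheControl(requests.Session())
resp = sess.get("https://example.com/")  # fetched over the network
resp = sess.get("https://example.com/")  # may now be served from the in-memory cache
print(resp.status_code)
```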
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/distlib/wheel.py DELETED
@@ -1,1082 +0,0 @@
- # -*- coding: utf-8 -*-
- #
- # Copyright (C) 2013-2020 Vinay Sajip.
- # Licensed to the Python Software Foundation under a contributor agreement.
- # See LICENSE.txt and CONTRIBUTORS.txt.
- #
- from __future__ import unicode_literals
-
- import base64
- import codecs
- import datetime
- from email import message_from_file
- import hashlib
- import json
- import logging
- import os
- import posixpath
- import re
- import shutil
- import sys
- import tempfile
- import zipfile
-
- from . import __version__, DistlibException
- from .compat import sysconfig, ZipFile, fsdecode, text_type, filter
- from .database import InstalledDistribution
- from .metadata import (Metadata, METADATA_FILENAME, WHEEL_METADATA_FILENAME,
-                        LEGACY_METADATA_FILENAME)
- from .util import (FileOperator, convert_path, CSVReader, CSVWriter, Cache,
-                    cached_property, get_cache_base, read_exports, tempdir,
-                    get_platform)
- from .version import NormalizedVersion, UnsupportedVersionError
-
- logger = logging.getLogger(__name__)
-
- cache = None    # created when needed
-
- if hasattr(sys, 'pypy_version_info'):  # pragma: no cover
-     IMP_PREFIX = 'pp'
- elif sys.platform.startswith('java'):  # pragma: no cover
-     IMP_PREFIX = 'jy'
- elif sys.platform == 'cli':  # pragma: no cover
-     IMP_PREFIX = 'ip'
- else:
-     IMP_PREFIX = 'cp'
-
- VER_SUFFIX = sysconfig.get_config_var('py_version_nodot')
- if not VER_SUFFIX:  # pragma: no cover
-     VER_SUFFIX = '%s%s' % sys.version_info[:2]
- PYVER = 'py' + VER_SUFFIX
- IMPVER = IMP_PREFIX + VER_SUFFIX
-
- ARCH = get_platform().replace('-', '_').replace('.', '_')
-
- ABI = sysconfig.get_config_var('SOABI')
- if ABI and ABI.startswith('cpython-'):
-     ABI = ABI.replace('cpython-', 'cp').split('-')[0]
- else:
-     def _derive_abi():
-         parts = ['cp', VER_SUFFIX]
-         if sysconfig.get_config_var('Py_DEBUG'):
-             parts.append('d')
-         if IMP_PREFIX == 'cp':
-             vi = sys.version_info[:2]
-             if vi < (3, 8):
-                 wpm = sysconfig.get_config_var('WITH_PYMALLOC')
-                 if wpm is None:
-                     wpm = True
-                 if wpm:
-                     parts.append('m')
-                 if vi < (3, 3):
-                     us = sysconfig.get_config_var('Py_UNICODE_SIZE')
-                     if us == 4 or (us is None and sys.maxunicode == 0x10FFFF):
-                         parts.append('u')
-         return ''.join(parts)
-     ABI = _derive_abi()
-     del _derive_abi
-
- FILENAME_RE = re.compile(r'''
- (?P<nm>[^-]+)
- -(?P<vn>\d+[^-]*)
- (-(?P<bn>\d+[^-]*))?
- -(?P<py>\w+\d+(\.\w+\d+)*)
- -(?P<bi>\w+)
- -(?P<ar>\w+(\.\w+)*)
- \.whl$
- ''', re.IGNORECASE | re.VERBOSE)
-
- NAME_VERSION_RE = re.compile(r'''
- (?P<nm>[^-]+)
- -(?P<vn>\d+[^-]*)
- (-(?P<bn>\d+[^-]*))?$
- ''', re.IGNORECASE | re.VERBOSE)
-
- SHEBANG_RE = re.compile(br'\s*#![^\r\n]*')
- SHEBANG_DETAIL_RE = re.compile(br'^(\s*#!("[^"]+"|\S+))\s+(.*)$')
- SHEBANG_PYTHON = b'#!python'
- SHEBANG_PYTHONW = b'#!pythonw'
-
- if os.sep == '/':
-     to_posix = lambda o: o
- else:
-     to_posix = lambda o: o.replace(os.sep, '/')
-
- if sys.version_info[0] < 3:
-     import imp
- else:
-     imp = None
-     import importlib.machinery
-     import importlib.util
-
-
- def _get_suffixes():
-     if imp:
-         return [s[0] for s in imp.get_suffixes()]
-     else:
-         return importlib.machinery.EXTENSION_SUFFIXES
-
-
- def _load_dynamic(name, path):
-     # https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly
-     if imp:
-         return imp.load_dynamic(name, path)
-     else:
-         spec = importlib.util.spec_from_file_location(name, path)
-         module = importlib.util.module_from_spec(spec)
-         sys.modules[name] = module
-         spec.loader.exec_module(module)
-         return module
-
-
- class Mounter(object):
-     def __init__(self):
-         self.impure_wheels = {}
-         self.libs = {}
-
-     def add(self, pathname, extensions):
-         self.impure_wheels[pathname] = extensions
-         self.libs.update(extensions)
-
-     def remove(self, pathname):
-         extensions = self.impure_wheels.pop(pathname)
-         for k, v in extensions:
-             if k in self.libs:
-                 del self.libs[k]
-
-     def find_module(self, fullname, path=None):
-         if fullname in self.libs:
-             result = self
-         else:
-             result = None
-         return result
-
-     def load_module(self, fullname):
-         if fullname in sys.modules:
-             result = sys.modules[fullname]
-         else:
-             if fullname not in self.libs:
-                 raise ImportError('unable to find extension for %s' % fullname)
-             result = _load_dynamic(fullname, self.libs[fullname])
-             result.__loader__ = self
-             parts = fullname.rsplit('.', 1)
-             if len(parts) > 1:
-                 result.__package__ = parts[0]
-         return result
-
-
- _hook = Mounter()
-
-
- class Wheel(object):
-     """
-     Class to build and install from Wheel files (PEP 427).
-     """
-
-     wheel_version = (1, 1)
-     hash_kind = 'sha256'
-
-     def __init__(self, filename=None, sign=False, verify=False):
-         """
-         Initialise an instance using a (valid) filename.
-         """
-         self.sign = sign
-         self.should_verify = verify
-         self.buildver = ''
-         self.pyver = [PYVER]
-         self.abi = ['none']
-         self.arch = ['any']
-         self.dirname = os.getcwd()
-         if filename is None:
-             self.name = 'dummy'
-             self.version = '0.1'
-             self._filename = self.filename
-         else:
-             m = NAME_VERSION_RE.match(filename)
-             if m:
-                 info = m.groupdict('')
-                 self.name = info['nm']
-                 # Reinstate the local version separator
-                 self.version = info['vn'].replace('_', '-')
-                 self.buildver = info['bn']
-                 self._filename = self.filename
-             else:
-                 dirname, filename = os.path.split(filename)
-                 m = FILENAME_RE.match(filename)
-                 if not m:
-                     raise DistlibException('Invalid name or '
-                                            'filename: %r' % filename)
-                 if dirname:
-                     self.dirname = os.path.abspath(dirname)
-                 self._filename = filename
-                 info = m.groupdict('')
-                 self.name = info['nm']
-                 self.version = info['vn']
-                 self.buildver = info['bn']
-                 self.pyver = info['py'].split('.')
-                 self.abi = info['bi'].split('.')
-                 self.arch = info['ar'].split('.')
-
-     @property
-     def filename(self):
-         """
-         Build and return a filename from the various components.
-         """
-         if self.buildver:
-             buildver = '-' + self.buildver
-         else:
-             buildver = ''
-         pyver = '.'.join(self.pyver)
-         abi = '.'.join(self.abi)
-         arch = '.'.join(self.arch)
-         # replace - with _ as a local version separator
-         version = self.version.replace('-', '_')
-         return '%s-%s%s-%s-%s-%s.whl' % (self.name, version, buildver,
-                                          pyver, abi, arch)
-
-     @property
-     def exists(self):
-         path = os.path.join(self.dirname, self.filename)
-         return os.path.isfile(path)
-
-     @property
-     def tags(self):
-         for pyver in self.pyver:
-             for abi in self.abi:
-                 for arch in self.arch:
243
- yield pyver, abi, arch
244
-
245
- @cached_property
246
- def metadata(self):
247
- pathname = os.path.join(self.dirname, self.filename)
248
- name_ver = '%s-%s' % (self.name, self.version)
249
- info_dir = '%s.dist-info' % name_ver
250
- wrapper = codecs.getreader('utf-8')
251
- with ZipFile(pathname, 'r') as zf:
252
- wheel_metadata = self.get_wheel_metadata(zf)
253
- wv = wheel_metadata['Wheel-Version'].split('.', 1)
254
- file_version = tuple([int(i) for i in wv])
255
- # if file_version < (1, 1):
256
- # fns = [WHEEL_METADATA_FILENAME, METADATA_FILENAME,
257
- # LEGACY_METADATA_FILENAME]
258
- # else:
259
- # fns = [WHEEL_METADATA_FILENAME, METADATA_FILENAME]
260
- fns = [WHEEL_METADATA_FILENAME, LEGACY_METADATA_FILENAME]
261
- result = None
262
- for fn in fns:
263
- try:
264
- metadata_filename = posixpath.join(info_dir, fn)
265
- with zf.open(metadata_filename) as bf:
266
- wf = wrapper(bf)
267
- result = Metadata(fileobj=wf)
268
- if result:
269
- break
270
- except KeyError:
271
- pass
272
- if not result:
273
- raise ValueError('Invalid wheel, because metadata is '
274
- 'missing: looked in %s' % ', '.join(fns))
275
- return result
276
-
277
- def get_wheel_metadata(self, zf):
278
- name_ver = '%s-%s' % (self.name, self.version)
279
- info_dir = '%s.dist-info' % name_ver
280
- metadata_filename = posixpath.join(info_dir, 'WHEEL')
281
- with zf.open(metadata_filename) as bf:
282
- wf = codecs.getreader('utf-8')(bf)
283
- message = message_from_file(wf)
284
- return dict(message)
285
-
286
- @cached_property
287
- def info(self):
288
- pathname = os.path.join(self.dirname, self.filename)
289
- with ZipFile(pathname, 'r') as zf:
290
- result = self.get_wheel_metadata(zf)
291
- return result
292
-
293
- def process_shebang(self, data):
294
- m = SHEBANG_RE.match(data)
295
- if m:
296
- end = m.end()
297
- shebang, data_after_shebang = data[:end], data[end:]
298
- # Preserve any arguments after the interpreter
299
- if b'pythonw' in shebang.lower():
300
- shebang_python = SHEBANG_PYTHONW
301
- else:
302
- shebang_python = SHEBANG_PYTHON
303
- m = SHEBANG_DETAIL_RE.match(shebang)
304
- if m:
305
- args = b' ' + m.groups()[-1]
306
- else:
307
- args = b''
308
- shebang = shebang_python + args
309
- data = shebang + data_after_shebang
310
- else:
311
- cr = data.find(b'\r')
312
- lf = data.find(b'\n')
313
- if cr < 0 or cr > lf:
314
- term = b'\n'
315
- else:
316
- if data[cr:cr + 2] == b'\r\n':
317
- term = b'\r\n'
318
- else:
319
- term = b'\r'
320
- data = SHEBANG_PYTHON + term + data
321
- return data
322
-
323
- def get_hash(self, data, hash_kind=None):
324
- if hash_kind is None:
325
- hash_kind = self.hash_kind
326
- try:
327
- hasher = getattr(hashlib, hash_kind)
328
- except AttributeError:
329
- raise DistlibException('Unsupported hash algorithm: %r' % hash_kind)
330
- result = hasher(data).digest()
331
- result = base64.urlsafe_b64encode(result).rstrip(b'=').decode('ascii')
332
- return hash_kind, result
333
-
334
- def write_record(self, records, record_path, archive_record_path):
335
- records = list(records) # make a copy, as mutated
336
- records.append((archive_record_path, '', ''))
337
- with CSVWriter(record_path) as writer:
338
- for row in records:
339
- writer.writerow(row)
340
-
341
- def write_records(self, info, libdir, archive_paths):
342
- records = []
343
- distinfo, info_dir = info
344
- hasher = getattr(hashlib, self.hash_kind)
345
- for ap, p in archive_paths:
346
- with open(p, 'rb') as f:
347
- data = f.read()
348
- digest = '%s=%s' % self.get_hash(data)
349
- size = os.path.getsize(p)
350
- records.append((ap, digest, size))
351
-
352
- p = os.path.join(distinfo, 'RECORD')
353
- ap = to_posix(os.path.join(info_dir, 'RECORD'))
354
- self.write_record(records, p, ap)
355
- archive_paths.append((ap, p))
356
-
357
- def build_zip(self, pathname, archive_paths):
358
- with ZipFile(pathname, 'w', zipfile.ZIP_DEFLATED) as zf:
359
- for ap, p in archive_paths:
360
- logger.debug('Wrote %s to %s in wheel', p, ap)
361
- zf.write(p, ap)
362
-
363
- def build(self, paths, tags=None, wheel_version=None):
364
- """
365
- Build a wheel from files in specified paths, and use any specified tags
366
- when determining the name of the wheel.
367
- """
368
- if tags is None:
369
- tags = {}
370
-
371
- libkey = list(filter(lambda o: o in paths, ('purelib', 'platlib')))[0]
372
- if libkey == 'platlib':
373
- is_pure = 'false'
374
- default_pyver = [IMPVER]
375
- default_abi = [ABI]
376
- default_arch = [ARCH]
377
- else:
378
- is_pure = 'true'
379
- default_pyver = [PYVER]
380
- default_abi = ['none']
381
- default_arch = ['any']
382
-
383
- self.pyver = tags.get('pyver', default_pyver)
384
- self.abi = tags.get('abi', default_abi)
385
- self.arch = tags.get('arch', default_arch)
386
-
387
- libdir = paths[libkey]
388
-
389
- name_ver = '%s-%s' % (self.name, self.version)
390
- data_dir = '%s.data' % name_ver
391
- info_dir = '%s.dist-info' % name_ver
392
-
393
- archive_paths = []
394
-
395
- # First, stuff which is not in site-packages
396
- for key in ('data', 'headers', 'scripts'):
397
- if key not in paths:
398
- continue
399
- path = paths[key]
400
- if os.path.isdir(path):
401
- for root, dirs, files in os.walk(path):
402
- for fn in files:
403
- p = fsdecode(os.path.join(root, fn))
404
- rp = os.path.relpath(p, path)
405
- ap = to_posix(os.path.join(data_dir, key, rp))
406
- archive_paths.append((ap, p))
407
- if key == 'scripts' and not p.endswith('.exe'):
408
- with open(p, 'rb') as f:
409
- data = f.read()
410
- data = self.process_shebang(data)
411
- with open(p, 'wb') as f:
412
- f.write(data)
413
-
414
- # Now, stuff which is in site-packages, other than the
415
- # distinfo stuff.
416
- path = libdir
417
- distinfo = None
418
- for root, dirs, files in os.walk(path):
419
- if root == path:
420
- # At the top level only, save distinfo for later
421
- # and skip it for now
422
- for i, dn in enumerate(dirs):
423
- dn = fsdecode(dn)
424
- if dn.endswith('.dist-info'):
425
- distinfo = os.path.join(root, dn)
426
- del dirs[i]
427
- break
428
- assert distinfo, '.dist-info directory expected, not found'
429
-
430
- for fn in files:
431
- # comment out next suite to leave .pyc files in
432
- if fsdecode(fn).endswith(('.pyc', '.pyo')):
433
- continue
434
- p = os.path.join(root, fn)
435
- rp = to_posix(os.path.relpath(p, path))
436
- archive_paths.append((rp, p))
437
-
438
- # Now distinfo. Assumed to be flat, i.e. os.listdir is enough.
439
- files = os.listdir(distinfo)
440
- for fn in files:
441
- if fn not in ('RECORD', 'INSTALLER', 'SHARED', 'WHEEL'):
442
- p = fsdecode(os.path.join(distinfo, fn))
443
- ap = to_posix(os.path.join(info_dir, fn))
444
- archive_paths.append((ap, p))
445
-
446
- wheel_metadata = [
447
- 'Wheel-Version: %d.%d' % (wheel_version or self.wheel_version),
448
- 'Generator: distlib %s' % __version__,
449
- 'Root-Is-Purelib: %s' % is_pure,
450
- ]
451
- for pyver, abi, arch in self.tags:
452
- wheel_metadata.append('Tag: %s-%s-%s' % (pyver, abi, arch))
453
- p = os.path.join(distinfo, 'WHEEL')
454
- with open(p, 'w') as f:
455
- f.write('\n'.join(wheel_metadata))
456
- ap = to_posix(os.path.join(info_dir, 'WHEEL'))
457
- archive_paths.append((ap, p))
458
-
459
- # sort the entries by archive path. Not needed by any spec, but it
460
- # keeps the archive listing and RECORD tidier than they would otherwise
461
- # be. Use the number of path segments to keep directory entries together,
462
- # and keep the dist-info stuff at the end.
463
- def sorter(t):
464
- ap = t[0]
465
- n = ap.count('/')
466
- if '.dist-info' in ap:
467
- n += 10000
468
- return (n, ap)
469
- archive_paths = sorted(archive_paths, key=sorter)
470
-
471
- # Now, at last, RECORD.
472
- # Paths in here are archive paths - nothing else makes sense.
473
- self.write_records((distinfo, info_dir), libdir, archive_paths)
474
- # Now, ready to build the zip file
475
- pathname = os.path.join(self.dirname, self.filename)
476
- self.build_zip(pathname, archive_paths)
477
- return pathname
478
-
479
- def skip_entry(self, arcname):
480
- """
481
- Determine whether an archive entry should be skipped when verifying
482
- or installing.
483
- """
484
- # The signature file won't be in RECORD,
485
- # and we don't currently do anything with it
486
- # We also skip directories, as they won't be in RECORD
487
- # either. See:
488
- #
489
- # https://github.com/pypa/wheel/issues/294
490
- # https://github.com/pypa/wheel/issues/287
491
- # https://github.com/pypa/wheel/pull/289
492
- #
493
- return arcname.endswith(('/', '/RECORD.jws'))
494
-
495
- def install(self, paths, maker, **kwargs):
496
- """
497
- Install a wheel to the specified paths. If kwarg ``warner`` is
498
- specified, it should be a callable, which will be called with two
499
- tuples indicating the wheel version of this software and the wheel
500
- version in the file, if there is a discrepancy in the versions.
501
- This can be used to issue any warnings or raise any exceptions.
502
- If kwarg ``lib_only`` is True, only the purelib/platlib files are
503
- installed, and the headers, scripts, data and dist-info metadata are
504
- not written. If kwarg ``bytecode_hashed_invalidation`` is True, written
505
- bytecode will try to use file-hash based invalidation (PEP-552) on
506
- supported interpreter versions (CPython 3.7+).
507
-
508
- The return value is a :class:`InstalledDistribution` instance unless
509
- ``lib_only`` is True, in which case the return value is ``None``.
510
- """
511
-
512
- dry_run = maker.dry_run
513
- warner = kwargs.get('warner')
514
- lib_only = kwargs.get('lib_only', False)
515
- bc_hashed_invalidation = kwargs.get('bytecode_hashed_invalidation', False)
516
-
517
- pathname = os.path.join(self.dirname, self.filename)
518
- name_ver = '%s-%s' % (self.name, self.version)
519
- data_dir = '%s.data' % name_ver
520
- info_dir = '%s.dist-info' % name_ver
521
-
522
- metadata_name = posixpath.join(info_dir, LEGACY_METADATA_FILENAME)
523
- wheel_metadata_name = posixpath.join(info_dir, 'WHEEL')
524
- record_name = posixpath.join(info_dir, 'RECORD')
525
-
526
- wrapper = codecs.getreader('utf-8')
527
-
528
- with ZipFile(pathname, 'r') as zf:
529
- with zf.open(wheel_metadata_name) as bwf:
530
- wf = wrapper(bwf)
531
- message = message_from_file(wf)
532
- wv = message['Wheel-Version'].split('.', 1)
533
- file_version = tuple([int(i) for i in wv])
534
- if (file_version != self.wheel_version) and warner:
535
- warner(self.wheel_version, file_version)
536
-
537
- if message['Root-Is-Purelib'] == 'true':
538
- libdir = paths['purelib']
539
- else:
540
- libdir = paths['platlib']
541
-
542
- records = {}
543
- with zf.open(record_name) as bf:
544
- with CSVReader(stream=bf) as reader:
545
- for row in reader:
546
- p = row[0]
547
- records[p] = row
548
-
549
- data_pfx = posixpath.join(data_dir, '')
550
- info_pfx = posixpath.join(info_dir, '')
551
- script_pfx = posixpath.join(data_dir, 'scripts', '')
552
-
553
- # make a new instance rather than a copy of maker's,
554
- # as we mutate it
555
- fileop = FileOperator(dry_run=dry_run)
556
- fileop.record = True # so we can rollback if needed
557
-
558
- bc = not sys.dont_write_bytecode # Double negatives. Lovely!
559
-
560
- outfiles = [] # for RECORD writing
561
-
562
- # for script copying/shebang processing
563
- workdir = tempfile.mkdtemp()
564
- # set target dir later
565
- # we default add_launchers to False, as the
566
- # Python Launcher should be used instead
567
- maker.source_dir = workdir
568
- maker.target_dir = None
569
- try:
570
- for zinfo in zf.infolist():
571
- arcname = zinfo.filename
572
- if isinstance(arcname, text_type):
573
- u_arcname = arcname
574
- else:
575
- u_arcname = arcname.decode('utf-8')
576
- if self.skip_entry(u_arcname):
577
- continue
578
- row = records[u_arcname]
579
- if row[2] and str(zinfo.file_size) != row[2]:
580
- raise DistlibException('size mismatch for '
581
- '%s' % u_arcname)
582
- if row[1]:
583
- kind, value = row[1].split('=', 1)
584
- with zf.open(arcname) as bf:
585
- data = bf.read()
586
- _, digest = self.get_hash(data, kind)
587
- if digest != value:
588
- raise DistlibException('digest mismatch for '
589
- '%s' % arcname)
590
-
591
- if lib_only and u_arcname.startswith((info_pfx, data_pfx)):
592
- logger.debug('lib_only: skipping %s', u_arcname)
593
- continue
594
- is_script = (u_arcname.startswith(script_pfx)
595
- and not u_arcname.endswith('.exe'))
596
-
597
- if u_arcname.startswith(data_pfx):
598
- _, where, rp = u_arcname.split('/', 2)
599
- outfile = os.path.join(paths[where], convert_path(rp))
600
- else:
601
- # meant for site-packages.
602
- if u_arcname in (wheel_metadata_name, record_name):
603
- continue
604
- outfile = os.path.join(libdir, convert_path(u_arcname))
605
- if not is_script:
606
- with zf.open(arcname) as bf:
607
- fileop.copy_stream(bf, outfile)
608
- # Issue #147: permission bits aren't preserved. Using
609
- # zf.extract(zinfo, libdir) should have worked, but didn't,
610
- # see https://www.thetopsites.net/article/53834422.shtml
611
- # So ... manually preserve permission bits as given in zinfo
612
- if os.name == 'posix':
613
- # just set the normal permission bits
614
- os.chmod(outfile, (zinfo.external_attr >> 16) & 0x1FF)
615
- outfiles.append(outfile)
616
- # Double check the digest of the written file
617
- if not dry_run and row[1]:
618
- with open(outfile, 'rb') as bf:
619
- data = bf.read()
620
- _, newdigest = self.get_hash(data, kind)
621
- if newdigest != digest:
622
- raise DistlibException('digest mismatch '
623
- 'on write for '
624
- '%s' % outfile)
625
- if bc and outfile.endswith('.py'):
626
- try:
627
- pyc = fileop.byte_compile(outfile,
628
- hashed_invalidation=bc_hashed_invalidation)
629
- outfiles.append(pyc)
630
- except Exception:
631
- # Don't give up if byte-compilation fails,
632
- # but log it and perhaps warn the user
633
- logger.warning('Byte-compilation failed',
634
- exc_info=True)
635
- else:
636
- fn = os.path.basename(convert_path(arcname))
637
- workname = os.path.join(workdir, fn)
638
- with zf.open(arcname) as bf:
639
- fileop.copy_stream(bf, workname)
640
-
641
- dn, fn = os.path.split(outfile)
642
- maker.target_dir = dn
643
- filenames = maker.make(fn)
644
- fileop.set_executable_mode(filenames)
645
- outfiles.extend(filenames)
646
-
647
- if lib_only:
648
- logger.debug('lib_only: returning None')
649
- dist = None
650
- else:
651
- # Generate scripts
652
-
653
- # Try to get pydist.json so we can see if there are
654
- # any commands to generate. If this fails (e.g. because
655
- # of a legacy wheel), log a warning but don't give up.
656
- commands = None
657
- file_version = self.info['Wheel-Version']
658
- if file_version == '1.0':
659
- # Use legacy info
660
- ep = posixpath.join(info_dir, 'entry_points.txt')
661
- try:
662
- with zf.open(ep) as bwf:
663
- epdata = read_exports(bwf)
664
- commands = {}
665
- for key in ('console', 'gui'):
666
- k = '%s_scripts' % key
667
- if k in epdata:
668
- commands['wrap_%s' % key] = d = {}
669
- for v in epdata[k].values():
670
- s = '%s:%s' % (v.prefix, v.suffix)
671
- if v.flags:
672
- s += ' [%s]' % ','.join(v.flags)
673
- d[v.name] = s
674
- except Exception:
675
- logger.warning('Unable to read legacy script '
676
- 'metadata, so cannot generate '
677
- 'scripts')
678
- else:
679
- try:
680
- with zf.open(metadata_name) as bwf:
681
- wf = wrapper(bwf)
682
- commands = json.load(wf).get('extensions')
683
- if commands:
684
- commands = commands.get('python.commands')
685
- except Exception:
686
- logger.warning('Unable to read JSON metadata, so '
687
- 'cannot generate scripts')
688
- if commands:
689
- console_scripts = commands.get('wrap_console', {})
690
- gui_scripts = commands.get('wrap_gui', {})
691
- if console_scripts or gui_scripts:
692
- script_dir = paths.get('scripts', '')
693
- if not os.path.isdir(script_dir):
694
- raise ValueError('Valid script path not '
695
- 'specified')
696
- maker.target_dir = script_dir
697
- for k, v in console_scripts.items():
698
- script = '%s = %s' % (k, v)
699
- filenames = maker.make(script)
700
- fileop.set_executable_mode(filenames)
701
-
702
- if gui_scripts:
703
- options = {'gui': True }
704
- for k, v in gui_scripts.items():
705
- script = '%s = %s' % (k, v)
706
- filenames = maker.make(script, options)
707
- fileop.set_executable_mode(filenames)
708
-
709
- p = os.path.join(libdir, info_dir)
710
- dist = InstalledDistribution(p)
711
-
712
- # Write SHARED
713
- paths = dict(paths) # don't change passed in dict
714
- del paths['purelib']
715
- del paths['platlib']
716
- paths['lib'] = libdir
717
- p = dist.write_shared_locations(paths, dry_run)
718
- if p:
719
- outfiles.append(p)
720
-
721
- # Write RECORD
722
- dist.write_installed_files(outfiles, paths['prefix'],
723
- dry_run)
724
- return dist
725
- except Exception: # pragma: no cover
726
- logger.exception('installation failed.')
727
- fileop.rollback()
728
- raise
729
- finally:
730
- shutil.rmtree(workdir)
731
-
732
- def _get_dylib_cache(self):
733
- global cache
734
- if cache is None:
735
- # Use native string to avoid issues on 2.x: see Python #20140.
736
- base = os.path.join(get_cache_base(), str('dylib-cache'),
737
- '%s.%s' % sys.version_info[:2])
738
- cache = Cache(base)
739
- return cache
740
-
741
- def _get_extensions(self):
742
- pathname = os.path.join(self.dirname, self.filename)
743
- name_ver = '%s-%s' % (self.name, self.version)
744
- info_dir = '%s.dist-info' % name_ver
745
- arcname = posixpath.join(info_dir, 'EXTENSIONS')
746
- wrapper = codecs.getreader('utf-8')
747
- result = []
748
- with ZipFile(pathname, 'r') as zf:
749
- try:
750
- with zf.open(arcname) as bf:
751
- wf = wrapper(bf)
752
- extensions = json.load(wf)
753
- cache = self._get_dylib_cache()
754
- prefix = cache.prefix_to_dir(pathname)
755
- cache_base = os.path.join(cache.base, prefix)
756
- if not os.path.isdir(cache_base):
757
- os.makedirs(cache_base)
758
- for name, relpath in extensions.items():
759
- dest = os.path.join(cache_base, convert_path(relpath))
760
- if not os.path.exists(dest):
761
- extract = True
762
- else:
763
- file_time = os.stat(dest).st_mtime
764
- file_time = datetime.datetime.fromtimestamp(file_time)
765
- info = zf.getinfo(relpath)
766
- wheel_time = datetime.datetime(*info.date_time)
767
- extract = wheel_time > file_time
768
- if extract:
769
- zf.extract(relpath, cache_base)
770
- result.append((name, dest))
771
- except KeyError:
772
- pass
773
- return result
774
-
775
- def is_compatible(self):
776
- """
777
- Determine if a wheel is compatible with the running system.
778
- """
779
- return is_compatible(self)
780
-
781
- def is_mountable(self):
782
- """
783
- Determine if a wheel is asserted as mountable by its metadata.
784
- """
785
- return True # for now - metadata details TBD
786
-
787
- def mount(self, append=False):
788
- pathname = os.path.abspath(os.path.join(self.dirname, self.filename))
789
- if not self.is_compatible():
790
- msg = 'Wheel %s not compatible with this Python.' % pathname
791
- raise DistlibException(msg)
792
- if not self.is_mountable():
793
- msg = 'Wheel %s is marked as not mountable.' % pathname
794
- raise DistlibException(msg)
795
- if pathname in sys.path:
796
- logger.debug('%s already in path', pathname)
797
- else:
798
- if append:
799
- sys.path.append(pathname)
800
- else:
801
- sys.path.insert(0, pathname)
802
- extensions = self._get_extensions()
803
- if extensions:
804
- if _hook not in sys.meta_path:
805
- sys.meta_path.append(_hook)
806
- _hook.add(pathname, extensions)
807
-
808
- def unmount(self):
809
- pathname = os.path.abspath(os.path.join(self.dirname, self.filename))
810
- if pathname not in sys.path:
811
- logger.debug('%s not in path', pathname)
812
- else:
813
- sys.path.remove(pathname)
814
- if pathname in _hook.impure_wheels:
815
- _hook.remove(pathname)
816
- if not _hook.impure_wheels:
817
- if _hook in sys.meta_path:
818
- sys.meta_path.remove(_hook)
819
-
820
- def verify(self):
821
- pathname = os.path.join(self.dirname, self.filename)
822
- name_ver = '%s-%s' % (self.name, self.version)
823
- data_dir = '%s.data' % name_ver
824
- info_dir = '%s.dist-info' % name_ver
825
-
826
- metadata_name = posixpath.join(info_dir, LEGACY_METADATA_FILENAME)
827
- wheel_metadata_name = posixpath.join(info_dir, 'WHEEL')
828
- record_name = posixpath.join(info_dir, 'RECORD')
829
-
830
- wrapper = codecs.getreader('utf-8')
831
-
832
- with ZipFile(pathname, 'r') as zf:
833
- with zf.open(wheel_metadata_name) as bwf:
834
- wf = wrapper(bwf)
835
- message = message_from_file(wf)
836
- wv = message['Wheel-Version'].split('.', 1)
837
- file_version = tuple([int(i) for i in wv])
838
- # TODO version verification
839
-
840
- records = {}
841
- with zf.open(record_name) as bf:
842
- with CSVReader(stream=bf) as reader:
843
- for row in reader:
844
- p = row[0]
845
- records[p] = row
846
-
847
- for zinfo in zf.infolist():
848
- arcname = zinfo.filename
849
- if isinstance(arcname, text_type):
850
- u_arcname = arcname
851
- else:
852
- u_arcname = arcname.decode('utf-8')
853
- # See issue #115: some wheels have .. in their entries, but
854
- # in the filename ... e.g. __main__..py ! So the check is
855
- # updated to look for .. in the directory portions
856
- p = u_arcname.split('/')
857
- if '..' in p:
858
- raise DistlibException('invalid entry in '
859
- 'wheel: %r' % u_arcname)
860
-
861
- if self.skip_entry(u_arcname):
862
- continue
863
- row = records[u_arcname]
864
- if row[2] and str(zinfo.file_size) != row[2]:
865
- raise DistlibException('size mismatch for '
866
- '%s' % u_arcname)
867
- if row[1]:
868
- kind, value = row[1].split('=', 1)
869
- with zf.open(arcname) as bf:
870
- data = bf.read()
871
- _, digest = self.get_hash(data, kind)
872
- if digest != value:
873
- raise DistlibException('digest mismatch for '
874
- '%s' % arcname)
875
-
876
- def update(self, modifier, dest_dir=None, **kwargs):
877
- """
878
- Update the contents of a wheel in a generic way. The modifier should
879
- be a callable which expects a dictionary argument: its keys are
880
- archive-entry paths, and its values are absolute filesystem paths
881
- where the contents of the corresponding archive entries can be found. The
882
- modifier is free to change the contents of the files pointed to, add
883
- new entries and remove entries, before returning. This method will
884
- extract the entire contents of the wheel to a temporary location, call
885
- the modifier, and then use the passed (and possibly updated)
886
- dictionary to write a new wheel. If ``dest_dir`` is specified, the new
887
- wheel is written there -- otherwise, the original wheel is overwritten.
888
-
889
- The modifier should return True if it updated the wheel, else False.
890
- This method returns the same value the modifier returns.
891
- """
892
-
893
- def get_version(path_map, info_dir):
894
- version = path = None
895
- key = '%s/%s' % (info_dir, LEGACY_METADATA_FILENAME)
896
- if key not in path_map:
897
- key = '%s/PKG-INFO' % info_dir
898
- if key in path_map:
899
- path = path_map[key]
900
- version = Metadata(path=path).version
901
- return version, path
902
-
903
- def update_version(version, path):
904
- updated = None
905
- try:
906
- v = NormalizedVersion(version)
907
- i = version.find('-')
908
- if i < 0:
909
- updated = '%s+1' % version
910
- else:
911
- parts = [int(s) for s in version[i + 1:].split('.')]
912
- parts[-1] += 1
913
- updated = '%s+%s' % (version[:i],
914
- '.'.join(str(i) for i in parts))
915
- except UnsupportedVersionError:
916
- logger.debug('Cannot update non-compliant (PEP-440) '
917
- 'version %r', version)
918
- if updated:
919
- md = Metadata(path=path)
920
- md.version = updated
921
- legacy = path.endswith(LEGACY_METADATA_FILENAME)
922
- md.write(path=path, legacy=legacy)
923
- logger.debug('Version updated from %r to %r', version,
924
- updated)
925
-
926
- pathname = os.path.join(self.dirname, self.filename)
927
- name_ver = '%s-%s' % (self.name, self.version)
928
- info_dir = '%s.dist-info' % name_ver
929
- record_name = posixpath.join(info_dir, 'RECORD')
930
- with tempdir() as workdir:
931
- with ZipFile(pathname, 'r') as zf:
932
- path_map = {}
933
- for zinfo in zf.infolist():
934
- arcname = zinfo.filename
935
- if isinstance(arcname, text_type):
936
- u_arcname = arcname
937
- else:
938
- u_arcname = arcname.decode('utf-8')
939
- if u_arcname == record_name:
940
- continue
941
- if '..' in u_arcname:
942
- raise DistlibException('invalid entry in '
943
- 'wheel: %r' % u_arcname)
944
- zf.extract(zinfo, workdir)
945
- path = os.path.join(workdir, convert_path(u_arcname))
946
- path_map[u_arcname] = path
947
-
948
- # Remember the version.
949
- original_version, _ = get_version(path_map, info_dir)
950
- # Files extracted. Call the modifier.
951
- modified = modifier(path_map, **kwargs)
952
- if modified:
953
- # Something changed - need to build a new wheel.
954
- current_version, path = get_version(path_map, info_dir)
955
- if current_version and (current_version == original_version):
956
- # Add or update local version to signify changes.
957
- update_version(current_version, path)
958
- # Decide where the new wheel goes.
959
- if dest_dir is None:
960
- fd, newpath = tempfile.mkstemp(suffix='.whl',
961
- prefix='wheel-update-',
962
- dir=workdir)
963
- os.close(fd)
964
- else:
965
- if not os.path.isdir(dest_dir):
966
- raise DistlibException('Not a directory: %r' % dest_dir)
967
- newpath = os.path.join(dest_dir, self.filename)
968
- archive_paths = list(path_map.items())
969
- distinfo = os.path.join(workdir, info_dir)
970
- info = distinfo, info_dir
971
- self.write_records(info, workdir, archive_paths)
972
- self.build_zip(newpath, archive_paths)
973
- if dest_dir is None:
974
- shutil.copyfile(newpath, pathname)
975
- return modified
976
-
977
- def _get_glibc_version():
978
- import platform
979
- ver = platform.libc_ver()
980
- result = []
981
- if ver[0] == 'glibc':
982
- for s in ver[1].split('.'):
983
- result.append(int(s) if s.isdigit() else 0)
984
- result = tuple(result)
985
- return result
986
-
987
- def compatible_tags():
988
- """
989
- Return (pyver, abi, arch) tuples compatible with this Python.
990
- """
991
- versions = [VER_SUFFIX]
992
- major = VER_SUFFIX[0]
993
- for minor in range(sys.version_info[1] - 1, - 1, -1):
994
- versions.append(''.join([major, str(minor)]))
995
-
996
- abis = []
997
- for suffix in _get_suffixes():
998
- if suffix.startswith('.abi'):
999
- abis.append(suffix.split('.', 2)[1])
1000
- abis.sort()
1001
- if ABI != 'none':
1002
- abis.insert(0, ABI)
1003
- abis.append('none')
1004
- result = []
1005
-
1006
- arches = [ARCH]
1007
- if sys.platform == 'darwin':
1008
- m = re.match(r'(\w+)_(\d+)_(\d+)_(\w+)$', ARCH)
1009
- if m:
1010
- name, major, minor, arch = m.groups()
1011
- minor = int(minor)
1012
- matches = [arch]
1013
- if arch in ('i386', 'ppc'):
1014
- matches.append('fat')
1015
- if arch in ('i386', 'ppc', 'x86_64'):
1016
- matches.append('fat3')
1017
- if arch in ('ppc64', 'x86_64'):
1018
- matches.append('fat64')
1019
- if arch in ('i386', 'x86_64'):
1020
- matches.append('intel')
1021
- if arch in ('i386', 'x86_64', 'intel', 'ppc', 'ppc64'):
1022
- matches.append('universal')
1023
- while minor >= 0:
1024
- for match in matches:
1025
- s = '%s_%s_%s_%s' % (name, major, minor, match)
1026
- if s != ARCH: # already there
1027
- arches.append(s)
1028
- minor -= 1
1029
-
1030
- # Most specific - our Python version, ABI and arch
1031
- for abi in abis:
1032
- for arch in arches:
1033
- result.append((''.join((IMP_PREFIX, versions[0])), abi, arch))
1034
- # manylinux
1035
- if abi != 'none' and sys.platform.startswith('linux'):
1036
- arch = arch.replace('linux_', '')
1037
- parts = _get_glibc_version()
1038
- if len(parts) == 2:
1039
- if parts >= (2, 5):
1040
- result.append((''.join((IMP_PREFIX, versions[0])), abi,
1041
- 'manylinux1_%s' % arch))
1042
- if parts >= (2, 12):
1043
- result.append((''.join((IMP_PREFIX, versions[0])), abi,
1044
- 'manylinux2010_%s' % arch))
1045
- if parts >= (2, 17):
1046
- result.append((''.join((IMP_PREFIX, versions[0])), abi,
1047
- 'manylinux2014_%s' % arch))
1048
- result.append((''.join((IMP_PREFIX, versions[0])), abi,
1049
- 'manylinux_%s_%s_%s' % (parts[0], parts[1],
1050
- arch)))
1051
-
1052
- # where no ABI / arch dependency, but IMP_PREFIX dependency
1053
- for i, version in enumerate(versions):
1054
- result.append((''.join((IMP_PREFIX, version)), 'none', 'any'))
1055
- if i == 0:
1056
- result.append((''.join((IMP_PREFIX, version[0])), 'none', 'any'))
1057
-
1058
- # no IMP_PREFIX, ABI or arch dependency
1059
- for i, version in enumerate(versions):
1060
- result.append((''.join(('py', version)), 'none', 'any'))
1061
- if i == 0:
1062
- result.append((''.join(('py', version[0])), 'none', 'any'))
1063
-
1064
- return set(result)
1065
-
1066
-
1067
- COMPATIBLE_TAGS = compatible_tags()
1068
-
1069
- del compatible_tags
1070
-
1071
-
1072
- def is_compatible(wheel, tags=None):
1073
- if not isinstance(wheel, Wheel):
1074
- wheel = Wheel(wheel) # assume it's a filename
1075
- result = False
1076
- if tags is None:
1077
- tags = COMPATIBLE_TAGS
1078
- for ver, abi, arch in tags:
1079
- if ver in wheel.pyver and abi in wheel.abi and arch in wheel.arch:
1080
- result = True
1081
- break
1082
- return result
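
For reference, a minimal usage sketch of the wheel API deleted above, assuming a
hypothetical demo-0.1-py3-none-any.whl in the current directory; every call used
here appears in the module source:

    from distlib.wheel import Wheel, is_compatible

    w = Wheel('demo-0.1-py3-none-any.whl')  # hypothetical wheel file
    if w.exists and is_compatible(w):
        w.verify()                      # raises DistlibException on size/digest mismatch
        print(w.info['Wheel-Version'])  # parsed from the dist-info WHEEL file
        w.mount()                       # adds the wheel to sys.path, caching any extensions
        w.unmount()
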
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/live.py DELETED
@@ -1,375 +0,0 @@
1
- import sys
2
- from threading import Event, RLock, Thread
3
- from types import TracebackType
4
- from typing import IO, Any, Callable, List, Optional, TextIO, Type, cast
5
-
6
- from . import get_console
7
- from .console import Console, ConsoleRenderable, RenderableType, RenderHook
8
- from .control import Control
9
- from .file_proxy import FileProxy
10
- from .jupyter import JupyterMixin
11
- from .live_render import LiveRender, VerticalOverflowMethod
12
- from .screen import Screen
13
- from .text import Text
14
-
15
-
16
- class _RefreshThread(Thread):
17
- """A thread that calls refresh() at regular intervals."""
18
-
19
- def __init__(self, live: "Live", refresh_per_second: float) -> None:
20
- self.live = live
21
- self.refresh_per_second = refresh_per_second
22
- self.done = Event()
23
- super().__init__(daemon=True)
24
-
25
- def stop(self) -> None:
26
- self.done.set()
27
-
28
- def run(self) -> None:
29
- while not self.done.wait(1 / self.refresh_per_second):
30
- with self.live._lock:
31
- if not self.done.is_set():
32
- self.live.refresh()
33
-
34
-
35
- class Live(JupyterMixin, RenderHook):
36
- """Renders an auto-updating live display of any given renderable.
37
-
38
- Args:
39
- renderable (RenderableType, optional): The renderable to live display. Defaults to displaying nothing.
40
- console (Console, optional): Optional Console instance. Default will be an internal Console instance writing to stdout.
41
- screen (bool, optional): Enable alternate screen mode. Defaults to False.
42
- auto_refresh (bool, optional): Enable auto refresh. If disabled, you will need to call `refresh()` or `update()` with the refresh flag. Defaults to True.
43
- refresh_per_second (float, optional): Number of times per second to refresh the live display. Defaults to 4.
44
- transient (bool, optional): Clear the renderable on exit (has no effect when screen=True). Defaults to False.
45
- redirect_stdout (bool, optional): Enable redirection of stdout, so ``print`` may be used. Defaults to True.
46
- redirect_stderr (bool, optional): Enable redirection of stderr. Defaults to True.
47
- vertical_overflow (VerticalOverflowMethod, optional): How to handle renderable when it is too tall for the console. Defaults to "ellipsis".
48
- get_renderable (Callable[[], RenderableType], optional): Optional callable to get renderable. Defaults to None.
49
- """
50
-
51
- def __init__(
52
- self,
53
- renderable: Optional[RenderableType] = None,
54
- *,
55
- console: Optional[Console] = None,
56
- screen: bool = False,
57
- auto_refresh: bool = True,
58
- refresh_per_second: float = 4,
59
- transient: bool = False,
60
- redirect_stdout: bool = True,
61
- redirect_stderr: bool = True,
62
- vertical_overflow: VerticalOverflowMethod = "ellipsis",
63
- get_renderable: Optional[Callable[[], RenderableType]] = None,
64
- ) -> None:
65
- assert refresh_per_second > 0, "refresh_per_second must be > 0"
66
- self._renderable = renderable
67
- self.console = console if console is not None else get_console()
68
- self._screen = screen
69
- self._alt_screen = False
70
-
71
- self._redirect_stdout = redirect_stdout
72
- self._redirect_stderr = redirect_stderr
73
- self._restore_stdout: Optional[IO[str]] = None
74
- self._restore_stderr: Optional[IO[str]] = None
75
-
76
- self._lock = RLock()
77
- self.ipy_widget: Optional[Any] = None
78
- self.auto_refresh = auto_refresh
79
- self._started: bool = False
80
- self.transient = True if screen else transient
81
-
82
- self._refresh_thread: Optional[_RefreshThread] = None
83
- self.refresh_per_second = refresh_per_second
84
-
85
- self.vertical_overflow = vertical_overflow
86
- self._get_renderable = get_renderable
87
- self._live_render = LiveRender(
88
- self.get_renderable(), vertical_overflow=vertical_overflow
89
- )
90
-
91
- @property
92
- def is_started(self) -> bool:
93
- """Check if live display has been started."""
94
- return self._started
95
-
96
- def get_renderable(self) -> RenderableType:
97
- renderable = (
98
- self._get_renderable()
99
- if self._get_renderable is not None
100
- else self._renderable
101
- )
102
- return renderable or ""
103
-
104
- def start(self, refresh: bool = False) -> None:
105
- """Start live rendering display.
106
-
107
- Args:
108
- refresh (bool, optional): Also refresh. Defaults to False.
109
- """
110
- with self._lock:
111
- if self._started:
112
- return
113
- self.console.set_live(self)
114
- self._started = True
115
- if self._screen:
116
- self._alt_screen = self.console.set_alt_screen(True)
117
- self.console.show_cursor(False)
118
- self._enable_redirect_io()
119
- self.console.push_render_hook(self)
120
- if refresh:
121
- try:
122
- self.refresh()
123
- except Exception:
124
- # If refresh fails, we want to stop the redirection of sys.stderr,
125
- # so the error stacktrace is properly displayed in the terminal.
126
- # (or, if the code that calls Rich captures the exception and wants to display something,
127
- # let this be displayed in the terminal).
128
- self.stop()
129
- raise
130
- if self.auto_refresh:
131
- self._refresh_thread = _RefreshThread(self, self.refresh_per_second)
132
- self._refresh_thread.start()
133
-
134
- def stop(self) -> None:
135
- """Stop live rendering display."""
136
- with self._lock:
137
- if not self._started:
138
- return
139
- self.console.clear_live()
140
- self._started = False
141
-
142
- if self.auto_refresh and self._refresh_thread is not None:
143
- self._refresh_thread.stop()
144
- self._refresh_thread = None
145
- # allow it to fully render on the last refresh even if it overflows
146
- self.vertical_overflow = "visible"
147
- with self.console:
148
- try:
149
- if not self._alt_screen and not self.console.is_jupyter:
150
- self.refresh()
151
- finally:
152
- self._disable_redirect_io()
153
- self.console.pop_render_hook()
154
- if not self._alt_screen and self.console.is_terminal:
155
- self.console.line()
156
- self.console.show_cursor(True)
157
- if self._alt_screen:
158
- self.console.set_alt_screen(False)
159
-
160
- if self.transient and not self._alt_screen:
161
- self.console.control(self._live_render.restore_cursor())
162
- if self.ipy_widget is not None and self.transient:
163
- self.ipy_widget.close() # pragma: no cover
164
-
165
- def __enter__(self) -> "Live":
166
- self.start(refresh=self._renderable is not None)
167
- return self
168
-
169
- def __exit__(
170
- self,
171
- exc_type: Optional[Type[BaseException]],
172
- exc_val: Optional[BaseException],
173
- exc_tb: Optional[TracebackType],
174
- ) -> None:
175
- self.stop()
176
-
177
- def _enable_redirect_io(self) -> None:
178
- """Enable redirecting of stdout / stderr."""
179
- if self.console.is_terminal or self.console.is_jupyter:
180
- if self._redirect_stdout and not isinstance(sys.stdout, FileProxy):
181
- self._restore_stdout = sys.stdout
182
- sys.stdout = cast("TextIO", FileProxy(self.console, sys.stdout))
183
- if self._redirect_stderr and not isinstance(sys.stderr, FileProxy):
184
- self._restore_stderr = sys.stderr
185
- sys.stderr = cast("TextIO", FileProxy(self.console, sys.stderr))
186
-
187
- def _disable_redirect_io(self) -> None:
188
- """Disable redirecting of stdout / stderr."""
189
- if self._restore_stdout:
190
- sys.stdout = cast("TextIO", self._restore_stdout)
191
- self._restore_stdout = None
192
- if self._restore_stderr:
193
- sys.stderr = cast("TextIO", self._restore_stderr)
194
- self._restore_stderr = None
195
-
196
- @property
197
- def renderable(self) -> RenderableType:
198
- """Get the renderable that is being displayed
199
-
200
- Returns:
201
- RenderableType: Displayed renderable.
202
- """
203
- renderable = self.get_renderable()
204
- return Screen(renderable) if self._alt_screen else renderable
205
-
206
- def update(self, renderable: RenderableType, *, refresh: bool = False) -> None:
207
- """Update the renderable that is being displayed
208
-
209
- Args:
210
- renderable (RenderableType): New renderable to use.
211
- refresh (bool, optional): Refresh the display. Defaults to False.
212
- """
213
- if isinstance(renderable, str):
214
- renderable = self.console.render_str(renderable)
215
- with self._lock:
216
- self._renderable = renderable
217
- if refresh:
218
- self.refresh()
219
-
220
- def refresh(self) -> None:
221
- """Update the display of the Live Render."""
222
- with self._lock:
223
- self._live_render.set_renderable(self.renderable)
224
- if self.console.is_jupyter: # pragma: no cover
225
- try:
226
- from IPython.display import display
227
- from ipywidgets import Output
228
- except ImportError:
229
- import warnings
230
-
231
- warnings.warn('install "ipywidgets" for Jupyter support')
232
- else:
233
- if self.ipy_widget is None:
234
- self.ipy_widget = Output()
235
- display(self.ipy_widget)
236
-
237
- with self.ipy_widget:
238
- self.ipy_widget.clear_output(wait=True)
239
- self.console.print(self._live_render.renderable)
240
- elif self.console.is_terminal and not self.console.is_dumb_terminal:
241
- with self.console:
242
- self.console.print(Control())
243
- elif (
244
- not self._started and not self.transient
245
- ): # if it is finished allow files or dumb-terminals to see final result
246
- with self.console:
247
- self.console.print(Control())
248
-
249
- def process_renderables(
250
- self, renderables: List[ConsoleRenderable]
251
- ) -> List[ConsoleRenderable]:
252
- """Process renderables to restore cursor and display progress."""
253
- self._live_render.vertical_overflow = self.vertical_overflow
254
- if self.console.is_interactive:
255
- # lock needs acquiring as user can modify live_render renderable at any time unlike in Progress.
256
- with self._lock:
257
- reset = (
258
- Control.home()
259
- if self._alt_screen
260
- else self._live_render.position_cursor()
261
- )
262
- renderables = [reset, *renderables, self._live_render]
263
- elif (
264
- not self._started and not self.transient
265
- ): # if it is finished render the final output for files or dumb_terminals
266
- renderables = [*renderables, self._live_render]
267
-
268
- return renderables
269
-
270
-
271
- if __name__ == "__main__": # pragma: no cover
272
- import random
273
- import time
274
- from itertools import cycle
275
- from typing import Dict, List, Tuple
276
-
277
- from .align import Align
278
- from .console import Console
279
- from .live import Live as Live
280
- from .panel import Panel
281
- from .rule import Rule
282
- from .syntax import Syntax
283
- from .table import Table
284
-
285
- console = Console()
286
-
287
- syntax = Syntax(
288
- '''def loop_last(values: Iterable[T]) -> Iterable[Tuple[bool, T]]:
289
- """Iterate and generate a tuple with a flag for last value."""
290
- iter_values = iter(values)
291
- try:
292
- previous_value = next(iter_values)
293
- except StopIteration:
294
- return
295
- for value in iter_values:
296
- yield False, previous_value
297
- previous_value = value
298
- yield True, previous_value''',
299
- "python",
300
- line_numbers=True,
301
- )
302
-
303
- table = Table("foo", "bar", "baz")
304
- table.add_row("1", "2", "3")
305
-
306
- progress_renderables = [
307
- "You can make the terminal shorter and taller to see the live table hide"
308
- "Text may be printed while the progress bars are rendering.",
309
- Panel("In fact, [i]any[/i] renderable will work"),
310
- "Such as [magenta]tables[/]...",
311
- table,
312
- "Pretty printed structures...",
313
- {"type": "example", "text": "Pretty printed"},
314
- "Syntax...",
315
- syntax,
316
- Rule("Give it a try!"),
317
- ]
318
-
319
- examples = cycle(progress_renderables)
320
-
321
- exchanges = [
322
- "SGD",
323
- "MYR",
324
- "EUR",
325
- "USD",
326
- "AUD",
327
- "JPY",
328
- "CNH",
329
- "HKD",
330
- "CAD",
331
- "INR",
332
- "DKK",
333
- "GBP",
334
- "RUB",
335
- "NZD",
336
- "MXN",
337
- "IDR",
338
- "TWD",
339
- "THB",
340
- "VND",
341
- ]
342
- with Live(console=console) as live_table:
343
- exchange_rate_dict: Dict[Tuple[str, str], float] = {}
344
-
345
- for index in range(100):
346
- select_exchange = exchanges[index % len(exchanges)]
347
-
348
- for exchange in exchanges:
349
- if exchange == select_exchange:
350
- continue
351
- time.sleep(0.4)
352
- if random.randint(0, 10) < 1:
353
- console.log(next(examples))
354
- exchange_rate_dict[(select_exchange, exchange)] = 200 / (
355
- (random.random() * 320) + 1
356
- )
357
- if len(exchange_rate_dict) > len(exchanges) - 1:
358
- exchange_rate_dict.pop(list(exchange_rate_dict.keys())[0])
359
- table = Table(title="Exchange Rates")
360
-
361
- table.add_column("Source Currency")
362
- table.add_column("Destination Currency")
363
- table.add_column("Exchange Rate")
364
-
365
- for ((source, dest), exchange_rate) in exchange_rate_dict.items():
366
- table.add_row(
367
- source,
368
- dest,
369
- Text(
370
- f"{exchange_rate:.4f}",
371
- style="red" if exchange_rate < 1.0 else "green",
372
- ),
373
- )
374
-
375
- live_table.update(Align.center(table))
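
A minimal usage sketch of the Live API deleted above; the table contents are
hypothetical, and the calls mirror the demo in the module's __main__ block:

    import time
    from rich.live import Live
    from rich.table import Table

    table = Table("step", "status")
    with Live(table, refresh_per_second=4) as live:  # starts the auto-refresh thread
        for step in range(3):
            time.sleep(0.5)
            table.add_row(str(step), "done")
            live.update(table, refresh=True)  # swap the renderable and repaint immediately
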
spaces/BigDL/bigdl_nano_demo/data.py DELETED
@@ -1,233 +0,0 @@
1
- # This file is copied from https://github.com/rnwzd/FSPBT-Image-Translation/blob/master/data.py
2
-
3
- # MIT License
4
-
5
- # Copyright (c) 2022 Lorenzo Breschi
6
-
7
- # Permission is hereby granted, free of charge, to any person obtaining a copy
8
- # of this software and associated documentation files (the "Software"), to deal
9
- # in the Software without restriction, including without limitation the rights
10
- # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
11
- # copies of the Software, and to permit persons to whom the Software is
12
- # furnished to do so, subject to the following conditions:
13
-
14
- # The above copyright notice and this permission notice shall be included in all
15
- # copies or substantial portions of the Software.
16
-
17
- # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
18
- # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
19
- # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
20
- # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
21
- # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
22
- # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
23
- # SOFTWARE.
24
-
25
- from typing import Callable, Dict
26
-
27
- import torch
28
-
29
- from torch.utils.data import Dataset
30
-
31
- import torchvision.transforms.functional as F
32
- from torchvision import transforms
33
- import pytorch_lightning as pl
34
-
35
- from collections.abc import Iterable
36
-
37
-
38
- # image reader writer
39
- from pathlib import Path
40
- from PIL import Image
41
- from typing import Tuple
42
-
43
-
44
- def read_image(filepath: Path, mode: str = None) -> Image:
45
- with open(filepath, 'rb') as file:
46
- image = Image.open(file)
47
- return image.convert(mode)
48
-
49
-
50
- image2tensor = transforms.ToTensor()
51
- tensor2image = transforms.ToPILImage()
52
-
53
-
54
- def write_image(image: Image, filepath: Path):
55
- filepath.parent.mkdir(parents=True, exist_ok=True)
56
- image.save(str(filepath))
57
-
58
-
59
- def read_image_tensor(filepath: Path, mode: str = 'RGB') -> torch.Tensor:
60
- return image2tensor(read_image(filepath, mode))
61
-
62
-
63
- def write_image_tensor(input: torch.Tensor, filepath: Path):
64
- write_image(tensor2image(input), filepath)
65
-
66
-
67
- def get_valid_indices(H: int, W: int, patch_size: int, random_overlap: int = 0):
68
-
69
- vih = torch.arange(random_overlap, H-patch_size -
70
- random_overlap+1, patch_size)
71
- viw = torch.arange(random_overlap, W-patch_size -
72
- random_overlap+1, patch_size)
73
- if random_overlap > 0:
74
- rih = torch.randint_like(vih, -random_overlap, random_overlap)
75
- riw = torch.randint_like(viw, -random_overlap, random_overlap)
76
- vih += rih
77
- viw += riw
78
- vi = torch.stack(torch.meshgrid(vih, viw)).view(2, -1).t()
79
- return vi
80
-
81
-
82
- def cut_patches(input: torch.Tensor, indices: Tuple[Tuple[int, int]], patch_size: int, padding: int = 0):
83
- # TODO use slices to get all patches at the same time ?
84
-
85
- patches_l = []
86
- for n in range(len(indices)):
87
-
88
- patch = F.crop(input, *(indices[n]-padding),
89
- *(patch_size+padding*2,)*2)
90
- patches_l.append(patch)
91
- patches = torch.cat(patches_l, dim=0)
92
-
93
- return patches
94
-
95
-
96
- def prepare_data(data_path: Path, read_func: Callable = read_image_tensor) -> Dict:
97
- """
98
- Takes a data_path of a folder which contains subfolders with input, target, etc.
99
- labelled by the same names.
100
- :param data_path: Path of the folder containing data
101
- :param read_func: function that reads data and returns a tensor
102
- """
103
- data_dict = {}
104
-
105
- subdir_names = ["target", "input", "mask"] # ,"helper"
106
-
107
- # checks only files for which there is an target
108
- # TODO check for images
109
- name_ls = [file.name for file in (
110
- data_path / "target").iterdir() if file.is_file()]
111
-
112
- subdirs = [data_path / sdn for sdn in subdir_names]
113
- for sd in subdirs:
114
- if sd.is_dir():
115
- data_ls = []
116
- files = [sd / name for name in name_ls]
117
- for file in files:
118
- tensor = read_func(file)
119
- H, W = tensor.shape[-2:]
120
- data_ls.append(tensor)
121
- # TODO check that all sizes match
122
- data_dict[sd.name] = torch.stack(data_ls, dim=0)
123
-
124
- data_dict['name'] = name_ls
125
- data_dict['len'] = len(data_dict['name'])
126
- data_dict['H'] = H
127
- data_dict['W'] = W
128
- return data_dict
129
-
130
-
131
- # TODO an image is loaded whenever a patch is needed, this may be a bottleneck
132
- class DataDictLoader():
133
- def __init__(self, data_dict: Dict,
134
- batch_size: int = 16,
135
- max_length: int = 128,
136
- shuffle: bool = False):
137
- """
138
- """
139
-
140
- self.batch_size = batch_size
141
- self.shuffle = shuffle
142
-
143
- self.batch_size = batch_size
144
-
145
- self.data_dict = data_dict
146
- self.dataset_len = data_dict['len']
147
- self.len = self.dataset_len if max_length is None else min(
148
- self.dataset_len, max_length)
149
- # Calculate # batches
150
- num_batches, remainder = divmod(self.len, self.batch_size)
151
- if remainder > 0:
152
- num_batches += 1
153
- self.num_batches = num_batches
154
-
155
- def __iter__(self):
156
- if self.shuffle:
157
- r = torch.randperm(self.dataset_len)
158
- self.data_dict = {k: v[r] if isinstance(
159
- v, Iterable) else v for k, v in self.data_dict.items()}
160
- self.i = 0
161
- return self
162
-
163
- def __next__(self):
164
- if self.i >= self.len:
165
- raise StopIteration
166
- batch = {k: v[self.i:self.i+self.batch_size]
167
- if isinstance(v, Iterable) else v for k, v in self.data_dict.items()}
168
-
169
- self.i += self.batch_size
170
- return batch
171
-
172
- def __len__(self):
173
- return self.num_batches
174
-
175
-
176
- class PatchDataModule(pl.LightningDataModule):
177
-
178
- def __init__(self, data_dict,
179
- patch_size: int = 2**5,
180
- batch_size: int = 2**4,
181
- patch_num: int = 2**6):
182
- super().__init__()
183
- self.data_dict = data_dict
184
- self.H, self.W = data_dict['H'], data_dict['W']
185
- self.len = data_dict['len']
186
-
187
- self.batch_size = batch_size
188
- self.patch_size = patch_size
189
- self.patch_num = patch_num
190
-
191
- def dataloader(self, data_dict, **kwargs):
192
- return DataDictLoader(data_dict, **kwargs)
193
-
194
- def train_dataloader(self):
195
- patches = self.cut_patches()
196
- return self.dataloader(patches, batch_size=self.batch_size, shuffle=True,
197
- max_length=self.patch_num)
198
-
199
- def val_dataloader(self):
200
- return self.dataloader(self.data_dict, batch_size=1)
201
-
202
- def test_dataloader(self):
203
- return self.dataloader(self.data_dict) # TODO batch size
204
-
205
- def cut_patches(self):
206
- # TODO cycle once
207
- patch_indices = get_valid_indices(
208
- self.H, self.W, self.patch_size, self.patch_size//4)
209
- dd = {k: cut_patches(
210
- v, patch_indices, self.patch_size) for k, v in self.data_dict.items()
211
- if isinstance(v, torch.Tensor)
212
- }
213
- threshold = 0.1
214
- mask_p = torch.mean(
215
- dd.get('mask', torch.ones_like(dd['input'])), dim=(-1, -2, -3))
216
- masked_idx = (mask_p > threshold).nonzero(as_tuple=True)[0]
217
- dd = {k: v[masked_idx] for k, v in dd.items()}
218
- dd['len'] = len(masked_idx)
219
- dd['H'], dd['W'] = (self.patch_size,)*2
220
-
221
- return dd
222
-
223
-
224
- class ImageDataset(Dataset):
225
- def __init__(self, file_paths: Iterable, read_func: Callable = read_image_tensor):
226
- self.file_paths = file_paths
- self.read_func = read_func  # store the reader so __getitem__ honours it
227
-
228
- def __getitem__(self, idx: int) -> dict:
229
- file = self.file_paths[idx]
230
- return self.read_func(file), file.name
231
-
232
- def __len__(self) -> int:
233
- return len(self.file_paths)
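
A minimal usage sketch for the data pipeline in this deleted file. It assumes `read_image_tensor`, `get_valid_indices`, and `cut_patches` from earlier in the same module; the image directory is illustrative, while the `'input'` key matches the one `cut_patches` expects above:

```python
# Sketch only: build a data_dict from ImageDataset and feed PatchDataModule.
from pathlib import Path

import torch

files = sorted(Path("images").glob("*.png"))           # hypothetical directory
dataset = ImageDataset(files)                          # defaults to read_image_tensor
tensors = [dataset[i][0] for i in range(len(dataset))]

data_dict = {
    "input": torch.stack(tensors, dim=0),
    "name": [f.name for f in files],
    "len": len(files),
}
data_dict["H"], data_dict["W"] = data_dict["input"].shape[-2:]

dm = PatchDataModule(data_dict, patch_size=32, batch_size=16, patch_num=64)
for batch in dm.train_dataloader():                    # batches of 32x32 patches
    print(batch["input"].shape)
    break
```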
 
 
spaces/BraydenMoore/MARCI-NFL-Betting/Source/Train/xgboost_ML.py DELETED
@@ -1,69 +0,0 @@
1
- import xgboost as xgb
2
- import pandas as pd
3
- import pickle as pkl
4
- import numpy as np
5
- from tqdm import tqdm
6
- from IPython.display import clear_output
7
- from sklearn.metrics import accuracy_score
8
- from sklearn.model_selection import train_test_split
9
- import os
10
-
11
- current_directory = os.path.dirname(os.path.abspath(__file__))
12
- parent_directory = os.path.dirname(current_directory)
13
- data_directory = os.path.join(parent_directory, 'Data')
14
- model_directory = os.path.join(parent_directory, 'Models')
15
- pickle_directory = os.path.join(parent_directory, 'Pickles')
16
-
17
- file_path = os.path.join(data_directory, 'gbg_and_odds.csv')
18
- data = pd.read_csv(file_path).dropna()
19
-
20
- margin = data['Home-Team-Win']
21
- data.drop(columns=['Home-Team-Win','Over','Season','home_team','away_team','game_date','Key','Home Score','Away Score','Home Odds Close','Away Odds Close','Home Winnings','Away Winnings', 'Home Odds', 'Away Odds'], inplace=True)
22
-
23
- acc_results = []
24
-
25
- for x in tqdm(range(100)):
26
- X_train, X_test, y_train, y_test = train_test_split(data, margin, test_size=.1)
27
-
28
- train_games = X_train['game_id']
29
- test_games = X_test['game_id']
30
-
31
- X_train.drop(columns=['game_id'], inplace=True)
32
- X_test.drop(columns=['game_id'], inplace=True)
33
-
34
- train = xgb.DMatrix(X_train.astype(float).values, label=y_train)
35
- test = xgb.DMatrix(X_test.astype(float).values, label=y_test)
36
-
37
- param = {
38
- 'max_depth': 2,
39
- 'eta': 0.01,
40
- 'objective': 'multi:softprob',
41
- 'num_class': 2
42
- }
43
- epochs = 500
44
-
45
- model = xgb.train(param, train, epochs)
46
- predictions = model.predict(test)
47
- y = []
48
- for z in predictions:
49
- y.append(np.argmax(z))
50
-
51
- acc = round(accuracy_score(y_test, y)*100, 1)
52
- acc_results.append(acc)
53
- clear_output(wait=True)
54
- print(f"Best accuracy: {max(acc_results)}%")
55
-
56
- # only save results if they are the best so far
57
- if acc == max(acc_results):
58
- file_path = os.path.join(pickle_directory, 'train_games_ML_no_odds.pkl')
59
- with open(file_path,'wb') as f:
60
- pkl.dump(train_games,f)
61
-
62
- file_path = os.path.join(pickle_directory, 'test_games_ML_no_odds.pkl')
63
- with open(file_path,'wb') as f:
64
- pkl.dump(test_games,f)
65
-
66
- file_path = os.path.join(model_directory, f'xgboost_ML_no_odds_{acc}%.json')
67
- model.save_model(file_path)
68
-
69
- print('Done')
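
A hedged inference sketch for the training script above: reload the best saved booster and score unseen games. The model filename suffix and the `new_games.csv` feature frame are placeholders, not outputs the script guarantees:

```python
# Illustrative only: reuse the best model saved by the loop above.
import numpy as np
import pandas as pd
import xgboost as xgb

new_games = pd.read_csv("new_games.csv")        # hypothetical frame with the
features = new_games.drop(columns=["game_id"])  # same feature columns as training

model = xgb.Booster()
model.load_model("Models/xgboost_ML_no_odds_65.3%.json")  # example filename

probs = model.predict(xgb.DMatrix(features.astype(float).values))
home_win = np.argmax(probs, axis=1)  # assuming class 1 means the home team wins
```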
 
 
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/README.md DELETED
@@ -1,75 +0,0 @@
1
- # In Defense of Grid Features for Visual Question Answering
2
- **Grid Feature Pre-Training Code**
3
-
4
- <p align="center">
5
- <img src="http://xinleic.xyz/images/grid-vqa.png" width="500" />
6
- </p>
7
-
8
- This is a feature pre-training code release of the [paper](https://arxiv.org/abs/2001.03615):
9
- ```
10
- @InProceedings{jiang2020defense,
11
- title={In Defense of Grid Features for Visual Question Answering},
12
- author={Jiang, Huaizu and Misra, Ishan and Rohrbach, Marcus and Learned-Miller, Erik and Chen, Xinlei},
13
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
14
- year={2020}
15
- }
16
- ```
17
- For more sustained maintenance, we release code using [Detectron2](https://github.com/facebookresearch/detectron2) instead of [mask-rcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark), on which the original code is based. The current repository should reproduce the results reported in the paper, *e.g.*, reporting **~72.5** single-model VQA score for an X-101 backbone paired with [MCAN](https://github.com/MILVLG/mcan-vqa)-large.
18
-
19
- ## Installation
20
- Install Detectron 2 following [INSTALL.md](https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md). Since Detectron 2 is under active development, which can introduce breaking changes, it is **highly recommended** to install via the following command:
21
- ```bash
22
- python -m pip install 'git+https://github.com/facebookresearch/detectron2.git@ffff8ac'
23
- ```
24
- Commits before or after `ffff8ac` might also work, but it could be risky.
25
- Then clone this repository:
26
- ```bash
27
- git clone [email protected]:facebookresearch/grid-feats-vqa.git
28
- cd grid-feats-vqa
29
- ```
30
-
31
- ## Data
32
- [Visual Genome](http://visualgenome.org/) `train+val` splits released from the bottom-up-attention [code](https://github.com/peteanderson80/bottom-up-attention) are used for pre-training, and `test` split is used for evaluating detection performance. All of them are prepared in [COCO](http://cocodataset.org/) format but include an additional field for `attribute` prediction. We provide the `.json` files [here](https://dl.fbaipublicfiles.com/grid-feats-vqa/json/visual_genome.tgz) which can be directly loaded by Detectron2. Same as in Detectron2, the expected dataset structure under the `DETECTRON2_DATASETS` (default is `./datasets` relative to your current working directory) folder should be:
33
- ```
34
- visual_genome/
35
- annotations/
36
- visual_genome_{train,val,test}.json
37
- images/
38
- # visual genome images (~108K)
39
- ```
40
-
41
- ## Training
42
- Once the dataset is setup, to train a model, run (by default we use 8 GPUs):
43
- ```bash
44
- python train_net.py --num-gpus 8 --config-file <config.yaml>
45
- ```
46
- For example, to launch grid-feature pre-training with ResNet-50 backbone on 8 GPUs, one should execute:
47
- ```bash
48
- python train_net.py --num-gpus 8 --config-file configs/R-50-grid.yaml
49
- ```
50
- The final model by default should be saved under `./output` of your current working directory once it is done training. We also provide the region-feature pre-training configuration `configs/R-50-updn.yaml` for reference. Note that we use `0.2` attribute loss (`MODEL.ROI_ATTRIBUTE_HEAD.LOSS_WEIGHT = 0.2`), which is better for down-stream tasks like VQA per our analysis.
51
-
52
- We also release the configuration (`configs/R-50-updn.yaml`) for training the region features described in **bottom-up-attention** paper, which is a faithful re-implementation of the original [one](https://github.com/peteanderson80/bottom-up-attention) in Detectron2.
53
-
54
- ## Feature Extraction
55
- Grid feature extraction can be done by simply running once the model is trained (or you can directly download our pre-trained models, see below):
56
- ```bash
57
- python extract_grid_feature.py --config-file configs/R-50-grid.yaml --dataset <dataset>
58
- ```
59
- and the code will load the final model from `cfg.OUTPUT_DIR` (which one can override in command line) and start extracting features for `<dataset>`, we provide three options for the dataset: `coco_2014_train`, `coco_2014_val` and `coco_2015_test`, they correspond to `train`, `val` and `test` splits of the VQA dataset. The extracted features can be conveniently loaded in [Pythia](https://github.com/facebookresearch/pythia).
60
-
61
- To extract features on your customized dataset, you may want to dump the image information into [COCO](http://cocodataset.org/) `.json` format, and add the dataset information to use `extract_grid_feature.py`, or you can hack `extract_grid_feature.py` and directly loop over images.
62
-
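
As a hedged sketch, downstream code might consume the extracted grid features like this; the per-image file naming and tensor layout here are assumptions, so check `extract_grid_feature.py` for the exact format your version writes:

```python
# Illustrative only: load one image's pre-extracted grid features.
from pathlib import Path

import torch

feature_dir = Path("output/coco_2014_val")                 # hypothetical path
feats = torch.load(feature_dir / "COCO_val2014_000000000042.pth")
print(feats.shape)                                         # e.g. a (C, H, W) grid
```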
63
- ## Pre-Trained Models and Features
64
- We release several pre-trained models for grid features: one with R-50 backbone, one with X-101, one with X-152, and one with additional improvements used for the 2020 VQA Challenge (see `X-152-challenge.yaml`). The models can be used directly to extract features. For your convenience, we also release the pre-extracted features for direct download.
65
-
66
- | Backbone | AP<sub>50:95</sub> | Download |
67
- | -------- | ---- | -------- |
68
- | R-50 | 3.1 | <a href="https://dl.fbaipublicfiles.com/grid-feats-vqa/R-50/R-50.pth">model</a>&nbsp;\| &nbsp;<a href="https://dl.fbaipublicfiles.com/grid-feats-vqa/R-50/metrics.json">metrics</a>&nbsp;\| &nbsp;<a href="https://dl.fbaipublicfiles.com/grid-feats-vqa/R-50/R-50-features.tgz">features</a> |
69
- | X-101 | 4.3 | <a href="https://dl.fbaipublicfiles.com/grid-feats-vqa/X-101/X-101.pth">model</a>&nbsp;\| &nbsp;<a href="https://dl.fbaipublicfiles.com/grid-feats-vqa/X-101/metrics.json">metrics</a>&nbsp;\| &nbsp;<a href="https://dl.fbaipublicfiles.com/grid-feats-vqa/X-101/X-101-features.tgz">features</a> |
70
- | X-152 | 4.7 | <a href="https://dl.fbaipublicfiles.com/grid-feats-vqa/X-152/X-152.pth">model</a>&nbsp;\| &nbsp;<a href="https://dl.fbaipublicfiles.com/grid-feats-vqa/X-152/metrics.json">metrics</a>&nbsp;\| &nbsp;<a href="https://dl.fbaipublicfiles.com/grid-feats-vqa/X-152/X-152-features.tgz">features</a> |
71
- | X-152++ | 3.7 | <a href="https://dl.fbaipublicfiles.com/grid-feats-vqa/X-152pp/X-152pp.pth">model</a>&nbsp;\| &nbsp;<a href="https://dl.fbaipublicfiles.com/grid-feats-vqa/X-152pp/metrics.json">metrics</a>&nbsp;\| &nbsp;<a href="https://dl.fbaipublicfiles.com/grid-feats-vqa/X-152pp/X-152pp-features.tgz">features</a> |
72
-
73
- ## License
74
-
75
- The code is released under the [Apache 2.0 license](LICENSE).
 
 
spaces/ChrisCaviar/ControlNet-v1-1/model.py DELETED
@@ -1,591 +0,0 @@
1
- from __future__ import annotations
2
-
3
- import gc
4
-
5
- import numpy as np
6
- import PIL.Image
7
- import torch
8
- from controlnet_aux.util import HWC3
9
- from diffusers import (ControlNetModel, DiffusionPipeline,
10
- StableDiffusionControlNetPipeline,
11
- UniPCMultistepScheduler)
12
-
13
- from cv_utils import resize_image
14
- from preprocessor import Preprocessor
15
-
16
- CONTROLNET_MODEL_IDS = {
17
- 'Openpose': 'lllyasviel/control_v11p_sd15_openpose',
18
- 'Canny': 'lllyasviel/control_v11p_sd15_canny',
19
- 'MLSD': 'lllyasviel/control_v11p_sd15_mlsd',
20
- 'scribble': 'lllyasviel/control_v11p_sd15_scribble',
21
- 'softedge': 'lllyasviel/control_v11p_sd15_softedge',
22
- 'segmentation': 'lllyasviel/control_v11p_sd15_seg',
23
- 'depth': 'lllyasviel/control_v11f1p_sd15_depth',
24
- 'NormalBae': 'lllyasviel/control_v11p_sd15_normalbae',
25
- 'lineart': 'lllyasviel/control_v11p_sd15_lineart',
26
- 'lineart_anime': 'lllyasviel/control_v11p_sd15s2_lineart_anime',
27
- 'shuffle': 'lllyasviel/control_v11e_sd15_shuffle',
28
- 'ip2p': 'lllyasviel/control_v11e_sd15_ip2p',
29
- 'inpaint': 'lllyasviel/control_v11p_sd15_inpaint',
30
- }
31
-
32
-
33
- def download_all_controlnet_weights() -> None:
34
- for model_id in CONTROLNET_MODEL_IDS.values():
35
- ControlNetModel.from_pretrained(model_id)
36
-
37
-
38
- class Model:
39
- def __init__(self,
40
- base_model_id: str = 'runwayml/stable-diffusion-v1-5',
41
- task_name: str = 'Canny'):
42
- self.device = torch.device(
43
- 'cuda:0' if torch.cuda.is_available() else 'cpu')
44
- self.base_model_id = ''
45
- self.task_name = ''
46
- self.pipe = self.load_pipe(base_model_id, task_name)
47
- self.preprocessor = Preprocessor()
48
-
49
- def load_pipe(self, base_model_id: str, task_name) -> DiffusionPipeline:
50
- if base_model_id == self.base_model_id and task_name == self.task_name and hasattr(
51
- self, 'pipe') and self.pipe is not None:
52
- return self.pipe
53
- model_id = CONTROLNET_MODEL_IDS[task_name]
54
- controlnet = ControlNetModel.from_pretrained(model_id,
55
- torch_dtype=torch.float16)
56
- pipe = StableDiffusionControlNetPipeline.from_pretrained(
57
- base_model_id,
58
- safety_checker=None,
59
- controlnet=controlnet,
60
- torch_dtype=torch.float16)
61
- pipe.scheduler = UniPCMultistepScheduler.from_config(
62
- pipe.scheduler.config)
63
- if self.device.type == 'cuda':
64
- pipe.enable_xformers_memory_efficient_attention()
65
- pipe.to(self.device)
66
- torch.cuda.empty_cache()
67
- gc.collect()
68
- self.base_model_id = base_model_id
69
- self.task_name = task_name
70
- return pipe
71
-
72
- def set_base_model(self, base_model_id: str) -> str:
73
- if not base_model_id or base_model_id == self.base_model_id:
74
- return self.base_model_id
75
- del self.pipe
76
- torch.cuda.empty_cache()
77
- gc.collect()
78
- try:
79
- self.pipe = self.load_pipe(base_model_id, self.task_name)
80
- except Exception:
81
- self.pipe = self.load_pipe(self.base_model_id, self.task_name)
82
- return self.base_model_id
83
-
84
- def load_controlnet_weight(self, task_name: str) -> None:
85
- if task_name == self.task_name:
86
- return
87
- if self.pipe is not None and hasattr(self.pipe, 'controlnet'):
88
- del self.pipe.controlnet
89
- torch.cuda.empty_cache()
90
- gc.collect()
91
- model_id = CONTROLNET_MODEL_IDS[task_name]
92
- controlnet = ControlNetModel.from_pretrained(model_id,
93
- torch_dtype=torch.float16)
94
- controlnet.to(self.device)
95
- torch.cuda.empty_cache()
96
- gc.collect()
97
- self.pipe.controlnet = controlnet
98
- self.task_name = task_name
99
-
100
- def get_prompt(self, prompt: str, additional_prompt: str) -> str:
101
- if not prompt:
102
- prompt = additional_prompt
103
- else:
104
- prompt = f'{prompt}, {additional_prompt}'
105
- return prompt
106
-
107
- @torch.autocast('cuda')
108
- def run_pipe(
109
- self,
110
- prompt: str,
111
- negative_prompt: str,
112
- control_image: PIL.Image.Image,
113
- num_images: int,
114
- num_steps: int,
115
- guidance_scale: float,
116
- seed: int,
117
- ) -> list[PIL.Image.Image]:
118
- if seed == -1:
119
- seed = np.random.randint(0, np.iinfo(np.int64).max)
120
- generator = torch.Generator().manual_seed(seed)
121
- return self.pipe(prompt=prompt,
122
- negative_prompt=negative_prompt,
123
- guidance_scale=guidance_scale,
124
- num_images_per_prompt=num_images,
125
- num_inference_steps=num_steps,
126
- generator=generator,
127
- image=control_image).images
128
-
129
- @torch.inference_mode()
130
- def process_canny(
131
- self,
132
- image: np.ndarray,
133
- prompt: str,
134
- additional_prompt: str,
135
- negative_prompt: str,
136
- num_images: int,
137
- image_resolution: int,
138
- num_steps: int,
139
- guidance_scale: float,
140
- seed: int,
141
- low_threshold: int,
142
- high_threshold: int,
143
- ) -> list[PIL.Image.Image]:
144
- self.preprocessor.load('Canny')
145
- control_image = self.preprocessor(image=image,
146
- low_threshold=low_threshold,
147
- high_threshold=high_threshold,
148
- detect_resolution=image_resolution)
149
-
150
- self.load_controlnet_weight('Canny')
151
- results = self.run_pipe(
152
- prompt=self.get_prompt(prompt, additional_prompt),
153
- negative_prompt=negative_prompt,
154
- control_image=control_image,
155
- num_images=num_images,
156
- num_steps=num_steps,
157
- guidance_scale=guidance_scale,
158
- seed=seed,
159
- )
160
- return [control_image] + results
161
-
162
- @torch.inference_mode()
163
- def process_mlsd(
164
- self,
165
- image: np.ndarray,
166
- prompt: str,
167
- additional_prompt: str,
168
- negative_prompt: str,
169
- num_images: int,
170
- image_resolution: int,
171
- preprocess_resolution: int,
172
- num_steps: int,
173
- guidance_scale: float,
174
- seed: int,
175
- value_threshold: float,
176
- distance_threshold: float,
177
- ) -> list[PIL.Image.Image]:
178
- self.preprocessor.load('MLSD')
179
- control_image = self.preprocessor(
180
- image=image,
181
- image_resolution=image_resolution,
182
- detect_resolution=preprocess_resolution,
183
- thr_v=value_threshold,
184
- thr_d=distance_threshold,
185
- )
186
- self.load_controlnet_weight('MLSD')
187
- results = self.run_pipe(
188
- prompt=self.get_prompt(prompt, additional_prompt),
189
- negative_prompt=negative_prompt,
190
- control_image=control_image,
191
- num_images=num_images,
192
- num_steps=num_steps,
193
- guidance_scale=guidance_scale,
194
- seed=seed,
195
- )
196
- return [control_image] + results
197
-
198
- @torch.inference_mode()
199
- def process_scribble(
200
- self,
201
- image: np.ndarray,
202
- prompt: str,
203
- additional_prompt: str,
204
- negative_prompt: str,
205
- num_images: int,
206
- image_resolution: int,
207
- preprocess_resolution: int,
208
- num_steps: int,
209
- guidance_scale: float,
210
- seed: int,
211
- preprocessor_name: str,
212
- ) -> list[PIL.Image.Image]:
213
- if preprocessor_name == 'None':
214
- image = HWC3(image)
215
- image = resize_image(image, resolution=image_resolution)
216
- control_image = PIL.Image.fromarray(image)
217
- elif preprocessor_name == 'HED':
218
- self.preprocessor.load(preprocessor_name)
219
- control_image = self.preprocessor(
220
- image=image,
221
- image_resolution=image_resolution,
222
- detect_resolution=preprocess_resolution,
223
- scribble=False,
224
- )
225
- elif preprocessor_name == 'PidiNet':
226
- self.preprocessor.load(preprocessor_name)
227
- control_image = self.preprocessor(
228
- image=image,
229
- image_resolution=image_resolution,
230
- detect_resolution=preprocess_resolution,
231
- safe=False,
232
- )
- else:
- raise ValueError
233
- self.load_controlnet_weight('scribble')
234
- results = self.run_pipe(
235
- prompt=self.get_prompt(prompt, additional_prompt),
236
- negative_prompt=negative_prompt,
237
- control_image=control_image,
238
- num_images=num_images,
239
- num_steps=num_steps,
240
- guidance_scale=guidance_scale,
241
- seed=seed,
242
- )
243
- return [control_image] + results
244
-
245
- @torch.inference_mode()
246
- def process_scribble_interactive(
247
- self,
248
- image_and_mask: dict[str, np.ndarray],
249
- prompt: str,
250
- additional_prompt: str,
251
- negative_prompt: str,
252
- num_images: int,
253
- image_resolution: int,
254
- num_steps: int,
255
- guidance_scale: float,
256
- seed: int,
257
- ) -> list[PIL.Image.Image]:
258
- image = image_and_mask['mask']
259
- image = HWC3(image)
260
- image = resize_image(image, resolution=image_resolution)
261
- control_image = PIL.Image.fromarray(image)
262
-
263
- self.load_controlnet_weight('scribble')
264
- results = self.run_pipe(
265
- prompt=self.get_prompt(prompt, additional_prompt),
266
- negative_prompt=negative_prompt,
267
- control_image=control_image,
268
- num_images=num_images,
269
- num_steps=num_steps,
270
- guidance_scale=guidance_scale,
271
- seed=seed,
272
- )
273
- return [control_image] + results
274
-
275
- @torch.inference_mode()
276
- def process_softedge(
277
- self,
278
- image: np.ndarray,
279
- prompt: str,
280
- additional_prompt: str,
281
- negative_prompt: str,
282
- num_images: int,
283
- image_resolution: int,
284
- preprocess_resolution: int,
285
- num_steps: int,
286
- guidance_scale: float,
287
- seed: int,
288
- preprocessor_name: str,
289
- ) -> list[PIL.Image.Image]:
290
- if preprocessor_name == 'None':
291
- image = HWC3(image)
292
- image = resize_image(image, resolution=image_resolution)
293
- control_image = PIL.Image.fromarray(image)
294
- elif preprocessor_name in ['HED', 'HED safe']:
295
- safe = 'safe' in preprocessor_name
296
- self.preprocessor.load('HED')
297
- control_image = self.preprocessor(
298
- image=image,
299
- image_resolution=image_resolution,
300
- detect_resolution=preprocess_resolution,
301
- scribble=safe,
302
- )
303
- elif preprocessor_name in ['PidiNet', 'PidiNet safe']:
304
- safe = 'safe' in preprocessor_name
305
- self.preprocessor.load('PidiNet')
306
- control_image = self.preprocessor(
307
- image=image,
308
- image_resolution=image_resolution,
309
- detect_resolution=preprocess_resolution,
310
- safe=safe,
311
- )
312
- else:
313
- raise ValueError
314
- self.load_controlnet_weight('softedge')
315
- results = self.run_pipe(
316
- prompt=self.get_prompt(prompt, additional_prompt),
317
- negative_prompt=negative_prompt,
318
- control_image=control_image,
319
- num_images=num_images,
320
- num_steps=num_steps,
321
- guidance_scale=guidance_scale,
322
- seed=seed,
323
- )
324
- return [control_image] + results
325
-
326
- @torch.inference_mode()
327
- def process_openpose(
328
- self,
329
- image: np.ndarray,
330
- prompt: str,
331
- additional_prompt: str,
332
- negative_prompt: str,
333
- num_images: int,
334
- image_resolution: int,
335
- preprocess_resolution: int,
336
- num_steps: int,
337
- guidance_scale: float,
338
- seed: int,
339
- preprocessor_name: str,
340
- ) -> list[PIL.Image.Image]:
341
- if preprocessor_name == 'None':
342
- image = HWC3(image)
343
- image = resize_image(image, resolution=image_resolution)
344
- control_image = PIL.Image.fromarray(image)
345
- else:
346
- self.preprocessor.load('Openpose')
347
- control_image = self.preprocessor(
348
- image=image,
349
- image_resolution=image_resolution,
350
- detect_resolution=preprocess_resolution,
351
- hand_and_face=True,
352
- )
353
- self.load_controlnet_weight('Openpose')
354
- results = self.run_pipe(
355
- prompt=self.get_prompt(prompt, additional_prompt),
356
- negative_prompt=negative_prompt,
357
- control_image=control_image,
358
- num_images=num_images,
359
- num_steps=num_steps,
360
- guidance_scale=guidance_scale,
361
- seed=seed,
362
- )
363
- return [control_image] + results
364
-
365
- @torch.inference_mode()
366
- def process_segmentation(
367
- self,
368
- image: np.ndarray,
369
- prompt: str,
370
- additional_prompt: str,
371
- negative_prompt: str,
372
- num_images: int,
373
- image_resolution: int,
374
- preprocess_resolution: int,
375
- num_steps: int,
376
- guidance_scale: float,
377
- seed: int,
378
- preprocessor_name: str,
379
- ) -> list[PIL.Image.Image]:
380
- if preprocessor_name == 'None':
381
- image = HWC3(image)
382
- image = resize_image(image, resolution=image_resolution)
383
- control_image = PIL.Image.fromarray(image)
384
- else:
385
- self.preprocessor.load(preprocessor_name)
386
- control_image = self.preprocessor(
387
- image=image,
388
- image_resolution=image_resolution,
389
- detect_resolution=preprocess_resolution,
390
- )
391
- self.load_controlnet_weight('segmentation')
392
- results = self.run_pipe(
393
- prompt=self.get_prompt(prompt, additional_prompt),
394
- negative_prompt=negative_prompt,
395
- control_image=control_image,
396
- num_images=num_images,
397
- num_steps=num_steps,
398
- guidance_scale=guidance_scale,
399
- seed=seed,
400
- )
401
- return [control_image] + results
402
-
403
- @torch.inference_mode()
404
- def process_depth(
405
- self,
406
- image: np.ndarray,
407
- prompt: str,
408
- additional_prompt: str,
409
- negative_prompt: str,
410
- num_images: int,
411
- image_resolution: int,
412
- preprocess_resolution: int,
413
- num_steps: int,
414
- guidance_scale: float,
415
- seed: int,
416
- preprocessor_name: str,
417
- ) -> list[PIL.Image.Image]:
418
- if preprocessor_name == 'None':
419
- image = HWC3(image)
420
- image = resize_image(image, resolution=image_resolution)
421
- control_image = PIL.Image.fromarray(image)
422
- else:
423
- self.preprocessor.load(preprocessor_name)
424
- control_image = self.preprocessor(
425
- image=image,
426
- image_resolution=image_resolution,
427
- detect_resolution=preprocess_resolution,
428
- )
429
- self.load_controlnet_weight('depth')
430
- results = self.run_pipe(
431
- prompt=self.get_prompt(prompt, additional_prompt),
432
- negative_prompt=negative_prompt,
433
- control_image=control_image,
434
- num_images=num_images,
435
- num_steps=num_steps,
436
- guidance_scale=guidance_scale,
437
- seed=seed,
438
- )
439
- return [control_image] + results
440
-
441
- @torch.inference_mode()
442
- def process_normal(
443
- self,
444
- image: np.ndarray,
445
- prompt: str,
446
- additional_prompt: str,
447
- negative_prompt: str,
448
- num_images: int,
449
- image_resolution: int,
450
- preprocess_resolution: int,
451
- num_steps: int,
452
- guidance_scale: float,
453
- seed: int,
454
- preprocessor_name: str,
455
- ) -> list[PIL.Image.Image]:
456
- if preprocessor_name == 'None':
457
- image = HWC3(image)
458
- image = resize_image(image, resolution=image_resolution)
459
- control_image = PIL.Image.fromarray(image)
460
- else:
461
- self.preprocessor.load('NormalBae')
462
- control_image = self.preprocessor(
463
- image=image,
464
- image_resolution=image_resolution,
465
- detect_resolution=preprocess_resolution,
466
- )
467
- self.load_controlnet_weight('NormalBae')
468
- results = self.run_pipe(
469
- prompt=self.get_prompt(prompt, additional_prompt),
470
- negative_prompt=negative_prompt,
471
- control_image=control_image,
472
- num_images=num_images,
473
- num_steps=num_steps,
474
- guidance_scale=guidance_scale,
475
- seed=seed,
476
- )
477
- return [control_image] + results
478
-
479
- @torch.inference_mode()
480
- def process_lineart(
481
- self,
482
- image: np.ndarray,
483
- prompt: str,
484
- additional_prompt: str,
485
- negative_prompt: str,
486
- num_images: int,
487
- image_resolution: int,
488
- preprocess_resolution: int,
489
- num_steps: int,
490
- guidance_scale: float,
491
- seed: int,
492
- preprocessor_name: str,
493
- ) -> list[PIL.Image.Image]:
494
- if preprocessor_name in ['None', 'None (anime)']:
495
- image = HWC3(image)
496
- image = resize_image(image, resolution=image_resolution)
497
- control_image = PIL.Image.fromarray(image)
498
- elif preprocessor_name in ['Lineart', 'Lineart coarse']:
499
- coarse = 'coarse' in preprocessor_name
500
- self.preprocessor.load('Lineart')
501
- control_image = self.preprocessor(
502
- image=image,
503
- image_resolution=image_resolution,
504
- detect_resolution=preprocess_resolution,
505
- coarse=coarse,
506
- )
507
- elif preprocessor_name == 'Lineart (anime)':
508
- self.preprocessor.load('LineartAnime')
509
- control_image = self.preprocessor(
510
- image=image,
511
- image_resolution=image_resolution,
512
- detect_resolution=preprocess_resolution,
513
- )
514
- if 'anime' in preprocessor_name:
515
- self.load_controlnet_weight('lineart_anime')
516
- else:
517
- self.load_controlnet_weight('lineart')
518
- results = self.run_pipe(
519
- prompt=self.get_prompt(prompt, additional_prompt),
520
- negative_prompt=negative_prompt,
521
- control_image=control_image,
522
- num_images=num_images,
523
- num_steps=num_steps,
524
- guidance_scale=guidance_scale,
525
- seed=seed,
526
- )
527
- return [control_image] + results
528
-
529
- @torch.inference_mode()
530
- def process_shuffle(
531
- self,
532
- image: np.ndarray,
533
- prompt: str,
534
- additional_prompt: str,
535
- negative_prompt: str,
536
- num_images: int,
537
- image_resolution: int,
538
- num_steps: int,
539
- guidance_scale: float,
540
- seed: int,
541
- preprocessor_name: str,
542
- ) -> list[PIL.Image.Image]:
543
- if preprocessor_name == 'None':
544
- image = HWC3(image)
545
- image = resize_image(image, resolution=image_resolution)
546
- control_image = PIL.Image.fromarray(image)
547
- else:
548
- self.preprocessor.load(preprocessor_name)
549
- control_image = self.preprocessor(
550
- image=image,
551
- image_resolution=image_resolution,
552
- )
553
- self.load_controlnet_weight('shuffle')
554
- results = self.run_pipe(
555
- prompt=self.get_prompt(prompt, additional_prompt),
556
- negative_prompt=negative_prompt,
557
- control_image=control_image,
558
- num_images=num_images,
559
- num_steps=num_steps,
560
- guidance_scale=guidance_scale,
561
- seed=seed,
562
- )
563
- return [control_image] + results
564
-
565
- @torch.inference_mode()
566
- def process_ip2p(
567
- self,
568
- image: np.ndarray,
569
- prompt: str,
570
- additional_prompt: str,
571
- negative_prompt: str,
572
- num_images: int,
573
- image_resolution: int,
574
- num_steps: int,
575
- guidance_scale: float,
576
- seed: int,
577
- ) -> list[PIL.Image.Image]:
578
- image = HWC3(image)
579
- image = resize_image(image, resolution=image_resolution)
580
- control_image = PIL.Image.fromarray(image)
581
- self.load_controlnet_weight('ip2p')
582
- results = self.run_pipe(
583
- prompt=self.get_prompt(prompt, additional_prompt),
584
- negative_prompt=negative_prompt,
585
- control_image=control_image,
586
- num_images=num_images,
587
- num_steps=num_steps,
588
- guidance_scale=guidance_scale,
589
- seed=seed,
590
- )
591
- return [control_image] + results
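
A minimal usage sketch for the Model class above; the file paths and prompts are illustrative, and a CUDA GPU is strongly recommended since the pipelines load in float16:

```python
# Sketch: run the Canny-conditioned pipeline once end to end.
import numpy as np
import PIL.Image

model = Model(base_model_id='runwayml/stable-diffusion-v1-5', task_name='Canny')
image = np.asarray(PIL.Image.open('input.png').convert('RGB'))  # example input

outputs = model.process_canny(
    image=image,
    prompt='a cozy cabin in the woods',
    additional_prompt='best quality, extremely detailed',
    negative_prompt='lowres, bad anatomy, worst quality',
    num_images=1,
    image_resolution=512,
    num_steps=20,
    guidance_scale=9.0,
    seed=0,
    low_threshold=100,
    high_threshold=200,
)
outputs[0].save('canny_map.png')  # index 0 is the control image
outputs[1].save('result.png')     # generated images follow it
```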
 
 
spaces/CobaltZvc/Hyper_Bot/README.md DELETED
@@ -1,10 +0,0 @@
1
- ---
2
- title: Hyper Bot
3
- emoji: 🤖
4
- colorFrom: gray
5
- colorTo: yellow
6
- sdk: static
7
- pinned: false
8
- ---
9
-
10
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
spaces/CofAI/chat.b4/g4f/Provider/Providers/Fakeopen.py DELETED
@@ -1,54 +0,0 @@
1
- import os
2
- import json
3
- import requests
4
- from typing import get_type_hints
5
-
6
- url = 'https://ai.fakeopen.com/v1/'
7
- model = [
8
- 'gpt-3.5-turbo',
9
- 'gpt-3.5-turbo-0613',
10
- 'gpt-3.5-turbo-16k',
11
- 'gpt-3.5-turbo-16k-0613',
12
- ]
13
-
14
- supports_stream = True
15
- needs_auth = False
16
-
17
-
18
- def _create_completion(model: str, messages: list, stream: bool, **kwargs):
19
-
20
- headers = {
21
- 'Content-Type': 'application/json',
22
- 'accept': 'text/event-stream',
23
- 'Cache-Control': 'no-cache',
24
- 'Proxy-Connection': 'keep-alive',
25
- 'Authorization': f"Bearer {os.environ.get('FAKE_OPEN_KEY', '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')}",
26
- }
27
-
28
- json_data = {
29
- 'messages': messages,
30
- 'temperature': 1.0,
31
- 'model': model,
32
- 'stream': stream,
33
- }
34
-
35
- response = requests.post(
36
- 'https://ai.fakeopen.com/v1/chat/completions', headers=headers, json=json_data, stream=True
37
- )
38
-
39
- for token in response.iter_lines():
40
- decoded = token.decode('utf-8')
41
- if decoded == '[DONE]':
42
- break
43
- if decoded.startswith('data: '):
44
- data_str = decoded.replace('data: ', '')
45
- if data_str != '[DONE]':
46
- data = json.loads(data_str)
47
- if 'choices' in data and 'delta' in data['choices'][0] and 'content' in data['choices'][0]['delta']:
48
- yield data['choices'][0]['delta']['content']
49
-
50
-
51
-
52
-
53
- params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + '(%s)' % ', '.join(
54
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
 
 
spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/cpu/dcn_v2_cpu.cpp DELETED
@@ -1,229 +0,0 @@
1
- #include <vector>
2
- #include "cpu/dcn_v2_im2col_cpu.h"
3
- #include <iostream>
4
-
5
- #include <ATen/ATen.h>
6
- //#include <ATen/cuda/CUDAContext.h>
7
-
8
- #include <TH/TH.h>
9
- //#include <THC/THCAtomics.cuh>
10
- //#include <THC/THCDeviceUtils.cuh>
11
-
12
- //extern THCState *state;
13
-
14
- // author: Charles Shang
15
- // https://github.com/torch/cunn/blob/master/lib/THCUNN/generic/SpatialConvolutionMM.cu
16
-
17
- // modified from the CUDA version for CPU use by Daniel K. Suhendro
18
-
19
- // edit by: James Bockman and Matthew Howe
20
- // modified for torch implementation to remove use of deprecated torch access to Blas
21
-
22
- at::Tensor
23
- dcn_v2_cpu_forward(const at::Tensor &input,
24
- const at::Tensor &weight,
25
- const at::Tensor &bias,
26
- const at::Tensor &offset,
27
- const at::Tensor &mask,
28
- const int kernel_h,
29
- const int kernel_w,
30
- const int stride_h,
31
- const int stride_w,
32
- const int pad_h,
33
- const int pad_w,
34
- const int dilation_h,
35
- const int dilation_w,
36
- const int deformable_group)
37
- {
38
- // THCAssertSameGPU(THCudaTensor_checkGPU(state, 5, input, weight, bias, offset, mask));
39
- /*AT_ASSERTM(input.is_cuda(), "input must be a CUDA tensor");
40
- AT_ASSERTM(weight.is_cuda(), "weight must be a CUDA tensor");
41
- AT_ASSERTM(bias.is_cuda(), "bias must be a CUDA tensor");
42
- AT_ASSERTM(offset.is_cuda(), "offset must be a CUDA tensor");
43
- AT_ASSERTM(mask.is_cuda(), "mask must be a CUDA tensor");*/
44
-
45
- const int batch = input.size(0);
46
- const int channels = input.size(1);
47
- const int height = input.size(2);
48
- const int width = input.size(3);
49
-
50
- const int channels_out = weight.size(0);
51
- const int channels_kernel = weight.size(1);
52
- const int kernel_h_ = weight.size(2);
53
- const int kernel_w_ = weight.size(3);
54
-
55
- // printf("Kernels: %d %d %d %d\n", kernel_h_, kernel_w_, kernel_w, kernel_h);
56
- // printf("Channels: %d %d\n", channels, channels_kernel);
57
- // printf("Channels: %d %d\n", channels_out, channels_kernel);
58
-
59
- AT_ASSERTM(kernel_h_ == kernel_h && kernel_w_ == kernel_w,
60
- "Input shape and kernel shape wont match: (%d x %d vs %d x %d).", kernel_h_, kernel_w, kernel_h_, kernel_w_);
61
-
62
- AT_ASSERTM(channels == channels_kernel,
63
- "Input shape and kernel channels wont match: (%d vs %d).", channels, channels_kernel);
64
-
65
- const int height_out = (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1;
66
- const int width_out = (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1;
67
-
68
- // auto ones = at::ones({height_out, width_out}, input.options());
69
- auto ones = at::ones({bias.sizes()[0], height_out, width_out}, input.options());
70
- auto columns = at::empty({channels * kernel_h * kernel_w, 1 * height_out * width_out}, input.options());
71
- auto output = at::zeros({batch, channels_out, height_out, width_out}, input.options());
72
-
73
- using scalar_t = float;
74
- for (int b = 0; b < batch; b++)
75
- {
76
- auto input_n = input.select(0, b);
77
- auto offset_n = offset.select(0, b);
78
- auto mask_n = mask.select(0, b);
79
- auto output_n = output.select(0, b);
80
- // std::cout << "output_n: " << output_n << "output.select(0,b): " << output.select(0,b) << "\n";
81
-
82
- // Do Bias first:
83
- // M,N,K are dims of matrix A and B
84
- // (see http://docs.nvidia.com/cuda/cublas/#cublas-lt-t-gt-gemm)
85
- // (N x 1) (1 x M)
86
-
87
- // torch implementation
88
- auto ones_T = at::transpose(ones.contiguous(), 2, 0);
89
- ones_T = at::mul(ones_T, bias.contiguous());
90
- ones_T = at::transpose(ones_T, 2, 0);
91
- output_n = at::add(output_n, ones_T);
92
-
93
- modulated_deformable_im2col_cpu(input_n.data_ptr<scalar_t>(),
94
- offset_n.data_ptr<scalar_t>(),
95
- mask_n.data_ptr<scalar_t>(),
96
- 1, channels, height, width,
97
- height_out, width_out, kernel_h, kernel_w,
98
- pad_h, pad_w, stride_h, stride_w, dilation_h, dilation_w,
99
- deformable_group,
100
- columns.data_ptr<scalar_t>());
101
-
102
- //(k * m) x (m * n)
103
- // Y = WC
104
-
105
- // torch implementation
106
- auto weight_flat = weight.view({channels_out, channels * kernel_h * kernel_w});
107
- auto product = at::matmul(weight_flat, columns);
108
- output.select(0, b) = at::add(output_n, product.view({channels_out, height_out, width_out}));
109
- }
110
- return output;
111
- }
112
-
113
- std::vector<at::Tensor> dcn_v2_cpu_backward(const at::Tensor &input,
114
- const at::Tensor &weight,
115
- const at::Tensor &bias,
116
- const at::Tensor &offset,
117
- const at::Tensor &mask,
118
- const at::Tensor &grad_output,
119
- int kernel_h, int kernel_w,
120
- int stride_h, int stride_w,
121
- int pad_h, int pad_w,
122
- int dilation_h, int dilation_w,
123
- int deformable_group)
124
- {
125
-
126
- THArgCheck(input.is_contiguous(), 1, "input tensor has to be contiguous");
127
- THArgCheck(weight.is_contiguous(), 2, "weight tensor has to be contiguous");
128
-
129
- /*AT_ASSERTM(input.is_cuda(), "input must be a CUDA tensor");
130
- AT_ASSERTM(weight.is_cuda(), "weight must be a CUDA tensor");
131
- AT_ASSERTM(bias.is_cuda(), "bias must be a CUDA tensor");
132
- AT_ASSERTM(offset.is_cuda(), "offset must be a CUDA tensor");
133
- AT_ASSERTM(mask.is_cuda(), "mask must be a CUDA tensor");*/
134
-
135
- const int batch = input.size(0);
136
- const int channels = input.size(1);
137
- const int height = input.size(2);
138
- const int width = input.size(3);
139
-
140
- const int channels_out = weight.size(0);
141
- const int channels_kernel = weight.size(1);
142
- const int kernel_h_ = weight.size(2);
143
- const int kernel_w_ = weight.size(3);
144
-
145
- AT_ASSERTM(kernel_h_ == kernel_h && kernel_w_ == kernel_w,
146
- "Input shape and kernel shape wont match: (%d x %d vs %d x %d).", kernel_h_, kernel_w, kernel_h_, kernel_w_);
147
-
148
- AT_ASSERTM(channels == channels_kernel,
149
- "Input shape and kernel channels wont match: (%d vs %d).", channels, channels_kernel);
150
-
151
- const int height_out = (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1;
152
- const int width_out = (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1;
153
-
154
- auto ones = at::ones({height_out, width_out}, input.options());
155
- auto columns = at::zeros({channels * kernel_h * kernel_w, 1 * height_out * width_out}, input.options());
156
- auto output = at::empty({batch, channels_out, height_out, width_out}, input.options());
157
-
158
- auto grad_input = at::zeros_like(input);
159
- auto grad_weight = at::zeros_like(weight);
160
- auto grad_bias = at::zeros_like(bias);
161
- auto grad_offset = at::zeros_like(offset);
162
- auto grad_mask = at::zeros_like(mask);
163
-
164
- using scalar_t = float;
165
-
166
- for (int b = 0; b < batch; b++)
167
- {
168
- auto input_n = input.select(0, b);
169
- auto offset_n = offset.select(0, b);
170
- auto mask_n = mask.select(0, b);
171
- auto grad_output_n = grad_output.select(0, b);
172
- auto grad_input_n = grad_input.select(0, b);
173
- auto grad_offset_n = grad_offset.select(0, b);
174
- auto grad_mask_n = grad_mask.select(0, b);
175
-
176
-
177
-
178
- // Torch implementation
179
- auto weight_flat = weight.view({channels_out, channels*kernel_h*kernel_w});
180
- weight_flat = at::transpose(weight_flat, 1, 0);
181
- auto grad_output_n_flat = grad_output_n.view({channels_out, height_out*width_out});
182
- columns = at::matmul(weight_flat, grad_output_n_flat);
183
-
184
- // gradient w.r.t. input coordinate data
185
- modulated_deformable_col2im_coord_cpu(columns.data_ptr<scalar_t>(),
186
- input_n.data_ptr<scalar_t>(),
187
- offset_n.data_ptr<scalar_t>(),
188
- mask_n.data_ptr<scalar_t>(),
189
- 1, channels, height, width,
190
- height_out, width_out, kernel_h, kernel_w,
191
- pad_h, pad_w, stride_h, stride_w,
192
- dilation_h, dilation_w, deformable_group,
193
- grad_offset_n.data_ptr<scalar_t>(),
194
- grad_mask_n.data_ptr<scalar_t>());
195
- // gradient w.r.t. input data
196
- modulated_deformable_col2im_cpu(columns.data_ptr<scalar_t>(),
197
- offset_n.data_ptr<scalar_t>(),
198
- mask_n.data_ptr<scalar_t>(),
199
- 1, channels, height, width,
200
- height_out, width_out, kernel_h, kernel_w,
201
- pad_h, pad_w, stride_h, stride_w,
202
- dilation_h, dilation_w, deformable_group,
203
- grad_input_n.data_ptr<scalar_t>());
204
-
205
- // gradient w.r.t. weight, dWeight should accumulate across the batch and group
206
- modulated_deformable_im2col_cpu(input_n.data_ptr<scalar_t>(),
207
- offset_n.data_ptr<scalar_t>(),
208
- mask_n.data_ptr<scalar_t>(),
209
- 1, channels, height, width,
210
- height_out, width_out, kernel_h, kernel_w,
211
- pad_h, pad_w, stride_h, stride_w,
212
- dilation_h, dilation_w, deformable_group,
213
- columns.data_ptr<scalar_t>());
214
-
215
- // Torch implementation
216
- auto product = at::matmul(grad_output_n_flat, at::transpose(columns, 1, 0));
217
- grad_weight = at::add(grad_weight, product.view({channels_out, channels, kernel_h, kernel_w}));
218
-
219
-
220
- // Torch implementation
221
- auto ones_flat = ones.view({height_out*width_out});
222
- product = at::matmul(grad_output_n_flat, ones_flat);
223
- grad_bias = at::add(grad_bias, product);
224
- }
225
-
226
- return {
227
- grad_input, grad_offset, grad_mask, grad_weight, grad_bias
228
- };
229
- }
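
A hypothetical Python-side invocation, assuming this file is compiled as a torch extension exposing `dcn_v2_cpu_forward`; the module name `_ext` is an assumption, since binding names vary between DCNv2 repos:

```python
# Sketch only: tensor shapes follow the checks in dcn_v2_cpu_forward above.
import torch
import _ext  # hypothetical compiled extension module

kh = kw = 3
x = torch.randn(1, 64, 32, 32)
weight = torch.randn(64, 64, kh, kw)
bias = torch.zeros(64)
offset = torch.zeros(1, 2 * kh * kw, 32, 32)  # deformable_group = 1
mask = torch.ones(1, kh * kw, 32, 32)

out = _ext.dcn_v2_cpu_forward(x, weight, bias, offset, mask,
                              kh, kw,  # kernel
                              1, 1,    # stride
                              1, 1,    # padding
                              1, 1,    # dilation
                              1)       # deformable_group
print(out.shape)  # torch.Size([1, 64, 32, 32])
```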
 
 
spaces/DD0101/Disfluency-base/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: Disfluency Base
3
- emoji: 😻
4
- colorFrom: gray
5
- colorTo: pink
6
- sdk: gradio
7
- sdk_version: 3.23.0
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/teePen.py DELETED
@@ -1,54 +0,0 @@
1
- """Pen multiplexing drawing to one or more pens."""
2
- from fontTools.pens.basePen import AbstractPen
3
-
4
-
5
- __all__ = ["TeePen"]
6
-
7
-
8
- class TeePen(AbstractPen):
9
- """Pen multiplexing drawing to one or more pens.
10
-
11
- Use either as TeePen(pen1, pen2, ...) or TeePen(iterableOfPens)."""
12
-
13
- def __init__(self, *pens):
14
- if len(pens) == 1:
15
- pens = pens[0]
16
- self.pens = pens
17
-
18
- def moveTo(self, p0):
19
- for pen in self.pens:
20
- pen.moveTo(p0)
21
-
22
- def lineTo(self, p1):
23
- for pen in self.pens:
24
- pen.lineTo(p1)
25
-
26
- def qCurveTo(self, *points):
27
- for pen in self.pens:
28
- pen.qCurveTo(*points)
29
-
30
- def curveTo(self, *points):
31
- for pen in self.pens:
32
- pen.curveTo(*points)
33
-
34
- def closePath(self):
35
- for pen in self.pens:
36
- pen.closePath()
37
-
38
- def endPath(self):
39
- for pen in self.pens:
40
- pen.endPath()
41
-
42
- def addComponent(self, glyphName, transformation):
43
- for pen in self.pens:
44
- pen.addComponent(glyphName, transformation)
45
-
46
-
47
- if __name__ == "__main__":
48
- from fontTools.pens.basePen import _TestPen
49
-
50
- pen = TeePen(_TestPen(), _TestPen())
51
- pen.moveTo((0, 0))
52
- pen.lineTo((0, 100))
53
- pen.curveTo((50, 75), (60, 50), (50, 25))
54
- pen.closePath()
 
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-329f8260.css DELETED
@@ -1 +0,0 @@
1
- .min.svelte-1ybaih5{min-height:var(--size-24)}.hide.svelte-1ybaih5{display:none}div.svelte-1ed2p3z{transition:.15s}.pending.svelte-1ed2p3z{opacity:.2}
 
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-9d98c4c0.js DELETED
@@ -1,2 +0,0 @@
1
- import{S as v,e as g,s as d,a9 as r,N as q,K as o,as as h,U as f,p as b,ab as w,ac as R,ad as j,z as C,v as S,A as z}from"./index-3370be2a.js";function A(i){let e,_,s;const u=i[6].default,a=r(u,i,i[5],null);return{c(){e=q("div"),a&&a.c(),o(e,"id",i[1]),o(e,"class",_=h(i[2].join(" "))+" svelte-15lo0d8"),f(e,"compact",i[4]==="compact"),f(e,"panel",i[4]==="panel"),f(e,"unequal-height",i[0]===!1),f(e,"stretch",i[0]),f(e,"hide",!i[3])},m(l,t){b(l,e,t),a&&a.m(e,null),s=!0},p(l,[t]){a&&a.p&&(!s||t&32)&&w(a,u,l,l[5],s?j(u,l[5],t,null):R(l[5]),null),(!s||t&2)&&o(e,"id",l[1]),(!s||t&4&&_!==(_=h(l[2].join(" "))+" svelte-15lo0d8"))&&o(e,"class",_),(!s||t&20)&&f(e,"compact",l[4]==="compact"),(!s||t&20)&&f(e,"panel",l[4]==="panel"),(!s||t&5)&&f(e,"unequal-height",l[0]===!1),(!s||t&5)&&f(e,"stretch",l[0]),(!s||t&12)&&f(e,"hide",!l[3])},i(l){s||(C(a,l),s=!0)},o(l){S(a,l),s=!1},d(l){l&&z(e),a&&a.d(l)}}}function K(i,e,_){let{$$slots:s={},$$scope:u}=e,{equal_height:a=!0}=e,{elem_id:l}=e,{elem_classes:t=[]}=e,{visible:m=!0}=e,{variant:c="default"}=e;return i.$$set=n=>{"equal_height"in n&&_(0,a=n.equal_height),"elem_id"in n&&_(1,l=n.elem_id),"elem_classes"in n&&_(2,t=n.elem_classes),"visible"in n&&_(3,m=n.visible),"variant"in n&&_(4,c=n.variant),"$$scope"in n&&_(5,u=n.$$scope)},[a,l,t,m,c,u,s]}class N extends v{constructor(e){super(),g(this,e,K,A,d,{equal_height:0,elem_id:1,elem_classes:2,visible:3,variant:4})}}const k=N,B=["static"];export{k as Component,B as modes};
2
- //# sourceMappingURL=index-9d98c4c0.js.map
 
 
 
spaces/DataScienceEngineering/2-GradioLiveASR/README.md DELETED
@@ -1,13 +0,0 @@
1
- ---
2
- title: 🗣️Live ASR Speech Recognition Gradio🧠💾
3
- emoji: 2-Live🗣️
4
- colorFrom: purple
5
- colorTo: red
6
- sdk: gradio
7
- sdk_version: 3.5
8
- app_file: app.py
9
- pinned: false
10
- license: apache-2.0
11
- ---
12
-
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
spaces/DevashishBhake/Question_Generation/README.md DELETED
@@ -1,13 +0,0 @@
1
- ---
2
- title: Question Generation
3
- emoji: 🐨
4
- colorFrom: green
5
- colorTo: blue
6
- sdk: gradio
7
- sdk_version: 3.28.0
8
- app_file: app.py
9
- pinned: false
10
- license: mit
11
- ---
12
-
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
spaces/Dinoking/Guccio-AI-Designer/netdissect/modelconfig.py DELETED
@@ -1,144 +0,0 @@
1
- '''
2
- Original from https://github.com/CSAILVision/GANDissect
3
- Modified by Erik Härkönen, 29.11.2019
4
- '''
5
-
6
- import numbers
7
- import torch
8
- from netdissect.autoeval import autoimport_eval
9
- from netdissect.progress import print_progress
10
- from netdissect.nethook import InstrumentedModel
11
- from netdissect.easydict import EasyDict
12
-
13
- def create_instrumented_model(args, **kwargs):
14
- '''
15
- Creates an instrumented model out of a namespace of arguments that
16
- correspond to ArgumentParser command-line args:
17
- model: a string to evaluate as a constructor for the model.
18
- pthfile: (optional) filename of .pth file for the model.
19
- layers: a list of layers to instrument, defaulted if not provided.
20
- edit: True to instrument the layers for editing.
21
- gen: True for a generator model. One-pixel input assumed.
22
- imgsize: For non-generator models, (y, x) dimensions for RGB input.
23
- cuda: True to use CUDA.
24
-
25
- The constructed model will be decorated with the following attributes:
26
- input_shape: (usually 4d) tensor shape for single-image input.
27
- output_shape: 4d tensor shape for output.
28
- feature_shape: map of layer names to 4d tensor shape for featuremaps.
29
- retained: map of layernames to tensors, filled after every evaluation.
30
- ablation: if editing, map of layernames to [0..1] alpha values to fill.
31
- replacement: if editing, map of layernames to values to fill.
32
-
33
- When editing, the feature value x will be replaced by:
34
- `x = (replacement * ablation) + (x * (1 - ablation))`
35
- '''
36
-
37
- args = EasyDict(vars(args), **kwargs)
38
-
39
- # Construct the network
40
- if args.model is None:
41
- print_progress('No model specified')
42
- return None
43
- if isinstance(args.model, torch.nn.Module):
44
- model = args.model
45
- else:
46
- model = autoimport_eval(args.model)
47
- # Unwrap any DataParallel-wrapped model
48
- if isinstance(model, torch.nn.DataParallel):
49
- model = next(model.children())
50
-
51
- # Load its state dict
52
- meta = {}
53
- if getattr(args, 'pthfile', None) is not None:
54
- data = torch.load(args.pthfile)
55
- if 'state_dict' in data:
56
- meta = {}
57
- for key in data:
58
- if isinstance(data[key], numbers.Number):
59
- meta[key] = data[key]
60
- data = data['state_dict']
61
- submodule = getattr(args, 'submodule', None)
62
- if submodule is not None and len(submodule):
63
- remove_prefix = submodule + '.'
64
- data = { k[len(remove_prefix):]: v for k, v in data.items()
65
- if k.startswith(remove_prefix)}
66
- if not len(data):
67
- print_progress('No submodule %s found in %s' %
68
- (submodule, args.pthfile))
69
- return None
70
- model.load_state_dict(data, strict=not getattr(args, 'unstrict', False))
71
-
72
- # Decide which layers to instrument.
73
- if getattr(args, 'layer', None) is not None:
74
- args.layers = [args.layer]
75
- if getattr(args, 'layers', None) is None:
76
- # Skip wrappers with only one named model
77
- container = model
78
- prefix = ''
79
- while len(list(container.named_children())) == 1:
80
- name, container = next(container.named_children())
81
- prefix += name + '.'
82
- # Default to all nontrivial top-level layers except last.
83
- args.layers = [prefix + name
84
- for name, module in container.named_children()
85
- if type(module).__module__ not in [
86
- # Skip ReLU and other activations.
87
- 'torch.nn.modules.activation',
88
- # Skip pooling layers.
89
- 'torch.nn.modules.pooling']
90
- ][:-1]
91
- print_progress('Defaulting to layers: %s' % ' '.join(args.layers))
92
-
93
- # Now wrap the model for instrumentation.
94
- model = InstrumentedModel(model)
95
- model.meta = meta
96
-
97
- # Instrument the layers.
98
- model.retain_layers(args.layers)
99
- model.eval()
100
- if args.cuda:
101
- model.cuda()
102
-
103
- # Annotate input, output, and feature shapes
104
- annotate_model_shapes(model,
105
- gen=getattr(args, 'gen', False),
106
- imgsize=getattr(args, 'imgsize', None),
107
- latent_shape=getattr(args, 'latent_shape', None))
108
- return model
109
-
110
- def annotate_model_shapes(model, gen=False, imgsize=None, latent_shape=None):
111
- assert (imgsize is not None) or gen
112
-
113
- # Figure the input shape.
114
- if gen:
115
- if latent_shape is None:
116
- # We can guess a generator's input shape by looking at the model.
117
- # Examine first conv in model to determine input feature size.
118
- first_layer = [c for c in model.modules()
119
- if isinstance(c, (torch.nn.Conv2d, torch.nn.ConvTranspose2d,
120
- torch.nn.Linear))][0]
121
- # 4d input if convolutional, 2d input if first layer is linear.
122
- if isinstance(first_layer, (torch.nn.Conv2d, torch.nn.ConvTranspose2d)):
123
- input_shape = (1, first_layer.in_channels, 1, 1)
124
- else:
125
- input_shape = (1, first_layer.in_features)
126
- else:
127
- # Specify input shape manually
128
- input_shape = latent_shape
129
- else:
130
- # For a classifier, the input image shape is given as an argument.
131
- input_shape = (1, 3) + tuple(imgsize)
132
-
133
- # Run the model once to observe feature shapes.
134
- device = next(model.parameters()).device
135
- dry_run = torch.zeros(input_shape).to(device)
136
- with torch.no_grad():
137
- output = model(dry_run)
138
-
139
- # Annotate shapes.
140
- model.input_shape = input_shape
141
- model.feature_shape = { layer: feature.shape
142
- for layer, feature in model.retained_features().items() }
143
- model.output_shape = output.shape
144
- return model
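
A sketch of calling `create_instrumented_model` directly; the namespace fields mirror the docstring above, and the concrete model string and layer names are illustrative:

```python
# Illustrative only: instrument a torchvision classifier for feature retention.
import argparse

args = argparse.Namespace(
    model='torchvision.models.resnet18(pretrained=True)',  # string for autoimport_eval
    pthfile=None,
    layers=['layer3', 'layer4'],
    gen=False,
    imgsize=(224, 224),
    cuda=False,
)
model = create_instrumented_model(args)
print(model.feature_shape)  # per-layer featuremap shapes from the dry run
```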
 
 
spaces/Dorado607/ChuanhuChatGPT/readme/README_ja.md DELETED
@@ -1,139 +0,0 @@
1
- <div align="right">
2
- <!-- Language: -->
3
- <a title="Chinese" href="../README_origin.md">简体中文</a> | <a title="English" href="README_en.md">English</a> | 日本語
4
- </div>
5
-
6
- <h1 align="center">川虎 Chat 🐯 Chuanhu Chat</h1>
7
- <div align="center">
8
- <a href="https://github.com/GaiZhenBiao/ChuanhuChatGPT">
9
- <img src="https://github.com/GaiZhenbiao/ChuanhuChatGPT/assets/70903329/aca3a7ec-4f1d-4667-890c-a6f47bf08f63" alt="Logo" height="156">
10
- </a>
11
-
12
- <p align="center">
13
- <h3>ChatGPT/ChatGLM/LLaMAなどのLLMのための軽量でユーザーフレンドリーなWeb-UI</h3>
14
- <p align="center">
15
- <a href="https://github.com/GaiZhenbiao/ChuanhuChatGPT/blob/main/LICENSE">
16
- <img alt="Tests Passing" src="https://img.shields.io/github/license/GaiZhenbiao/ChuanhuChatGPT" />
17
- </a>
18
- <a href="https://gradio.app/">
19
- <img alt="GitHub Contributors" src="https://img.shields.io/badge/Base-Gradio-fb7d1a?style=flat" />
20
- </a>
21
- <a href="https://t.me/tkdifferent">
22
- <img alt="GitHub pull requests" src="https://img.shields.io/badge/Telegram-Group-blue.svg?logo=telegram" />
23
- </a>
24
- <p>
25
- ストリーム出力/会話回数無制限/履歴保存/プリセットプロンプト/ファイルへの質問チャット<br>
26
- ウェブ検索/LaTeXレンダリング/表レンダリング/コードハイライト<br>
27
- オートダークモード/アダプティブ・ウェブ・インターフェイス/WeChatライク・テーマ<br />
28
- マルチパラメーターチューニング/マルチAPI-Key対応/マルチユーザー対応<br>
29
- GPT-4対応/LLMのローカルデプロイ可能。
30
- </p>
31
- <a href="https://www.youtube.com/watch?v=MtxS4XZWbJE"><strong>動画チュートリアル</strong></a>
32
- ·
33
- <a href="https://www.youtube.com/watch?v=77nw7iimYDE"><strong>2.0 イントロダクション</strong></a>
34
- ·
35
- <a href="https://www.youtube.com/watch?v=x-O1jjBqgu4"><strong>3.0 イントロダクション & チュートリアル</strong></a>
36
- ||
37
- <a href="https://huggingface.co/spaces/JohnSmith9982/ChuanhuChatGPT"><strong>オンライントライアル</strong></a>
38
- ·
39
- <a href="https://huggingface.co/login?next=%2Fspaces%2FJohnSmith9982%2FChuanhuChatGPT%3Fduplicate%3Dtrue"><strong>ワンクリックデプロイ</strong></a>
40
- </p>
41
- <p align="center">
42
- <img alt="Animation Demo" src="https://user-images.githubusercontent.com/51039745/226255695-6b17ff1f-ea8d-464f-b69b-a7b6b68fffe8.gif" />
43
- </p>
44
- </p>
45
- </div>
46
-
47
- ## サポートされている大規模言語モデル
48
-
49
- **APIを通じてアクセス可能な大規模言語モデル**:
50
-
51
- - [ChatGPT](https://chat.openai.com) ([GPT-4](https://openai.com/product/gpt-4))
52
- - [Google PaLM](https://developers.generativeai.google/products/palm)
53
- - [Inspur Yuan 1.0](https://air.inspur.com/home)
54
- - [MiniMax](https://api.minimax.chat/)
55
- - [XMChat](https://github.com/MILVLG/xmchat)
56
-
57
- **ローカルに展開された大規模言語モデル**:
58
-
59
- - [ChatGLM](https://github.com/THUDM/ChatGLM-6B) ([ChatGLM2](https://github.com/THUDM/ChatGLM2-6B))
60
- - [LLaMA](https://github.com/facebookresearch/llama)
61
- - [StableLM](https://github.com/Stability-AI/StableLM)
62
- - [MOSS](https://github.com/OpenLMLab/MOSS)
63
-
64
- ## 使う上でのTips
65
-
66
- - ChatGPTをより適切に制御するために、システムプロンプトを使用できます。
67
- - プロンプトテンプレートを使用するには、プロンプトテンプレートコレクションを選択し、ドロップダウンメニューから特定のプロンプトを選択。回答が不十分な場合は、`🔄再生成`ボタンを使って再試行します。
68
- - 入力ボックスで改行するには、<kbd>Shift</kbd> + <kbd>Enter</kbd>キーを押してください。
69
- - 入力履歴を素早く切り替えるには、入力ボックスで <kbd>↑</kbd>と<kbd>↓</kbd>キーを押す。
70
- - プログラムをサーバーに展開するには、`config.json` 内の `"server_name": "0.0.0.0", "server_port": <ポート番号>`を設定してください。
71
- - 共有リンクを取得するには、 `config.json` 内の `"share": true` を設定してください。なお、公開リンクでアクセスするためには、プログラムが実行されている必要があることに注意してください。
72
- - Hugging Face Spacesで使用する場合: より速く、より安全に利用するために、**Duplicate Space**を使用し、自分のスペースでプログラムを実行することをお勧めします。
73
-
74
- ## クイックスタート
75
-
76
- ```shell
77
- git clone https://github.com/GaiZhenbiao/ChuanhuChatGPT.git
78
- cd ChuanhuChatGPT
79
- pip install -r requirements.txt
80
- ```
81
-
82
- 次に `config_example.json`をコピーして `config.json`にリネームし、そのファイルにAPI-Keyなどの設定を記入する。
83
-
84
- ```shell
85
- python ChuanhuChatbot.py
86
- ```
87
-
88
- ブラウザのウィンドウが開き、ChatGPTとチ���ットできるようになります。
89
-
90
- > **Note**
91
- >
92
- > 詳しい手順は[wikiページ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程)をご確認ください。
93
-
- ## Troubleshooting
- 
- If you run into a problem, first try manually pulling the latest changes of this project. The steps are:
- 
- 1. Click `Download ZIP` on the web page to download the latest code archive, or run
- ```shell
- git pull https://github.com/GaiZhenbiao/ChuanhuChatGPT.git main -f
- ```
- 2. Reinstall the dependencies, since new ones may have been introduced.
- ```shell
- pip install -r requirements.txt
- ```
- 
- In general, these steps will resolve most problems.
- 
- If the problem persists, please refer to this page: [Frequently Asked Questions (FAQ)](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题)
- 
- That page lists almost every conceivable problem along with its solution. Please read it carefully.
- 
- ## More Information
- 
- For more details, please see the [wiki](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki):
- 
- - [How to contribute a translation](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/Localization)
- - [How to make a contribution](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/贡献指南)
- - [How to cite the project](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可#如何引用该项目)
- - [Project changelog](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/更新日志)
- - [Project license](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可)
- 
- ## Starchart
- 
- [![Star History Chart](https://api.star-history.com/svg?repos=GaiZhenbiao/ChuanhuChatGPT&type=Date)](https://star-history.com/#GaiZhenbiao/ChuanhuChatGPT&Date)
- 
- ## Contributors
- 
- <a href="https://github.com/GaiZhenbiao/ChuanhuChatGPT/graphs/contributors">
- <img src="https://contrib.rocks/image?repo=GaiZhenbiao/ChuanhuChatGPT" />
- </a>
- 
- ## Sponsor
- 
- 🐯 If you find this project helpful, feel free to buy me a Coke or a coffee~
- 
- <a href="https://www.buymeacoffee.com/ChuanhuChat" ><img src="https://img.buymeacoffee.com/button-api/?text=Buy me a coffee&emoji=&slug=ChuanhuChat&button_colour=219d53&font_colour=ffffff&font_family=Poppins&outline_colour=ffffff&coffee_colour=FFDD00" alt="Buy Me A Coffee" width="250"></a>
- 
- <img width="250" alt="image" src="https://user-images.githubusercontent.com/51039745/226920291-e8ec0b0a-400f-4c20-ac13-dafac0c3aeeb.JPG">
 
spaces/ECCV2022/bytetrack/deploy/TensorRT/cpp/README.md DELETED
@@ -1,58 +0,0 @@
- # ByteTrack-TensorRT in C++
- 
- ## Installation
- 
- Install OpenCV with `sudo apt-get install libopencv-dev` (a newer version of OpenCV, e.g. v3.3+, is not required).
- 
- Install eigen-3.3.9 [[google]](https://drive.google.com/file/d/1rqO74CYCNrmRAg8Rra0JP3yZtJ-rfket/view?usp=sharing), [[baidu(code:ueq4)]](https://pan.baidu.com/s/15kEfCxpy-T7tz60msxxExg).
- 
- ```shell
- unzip eigen-3.3.9.zip
- cd eigen-3.3.9
- mkdir build
- cd build
- cmake ..
- sudo make install
- ```
- 
- ## Prepare the serialized engine file
- 
- Follow the TensorRT Python demo to convert the model and save the serialized engine file.
- 
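- For reference, the conversion step in the Python demo looks roughly like this (the script name, experiment file, and checkpoint below are assumed from that demo's README; verify against it and substitute your own paths):
- 
- ```shell
- # Assumed invocation mirroring the TensorRT Python demo; check its README for the exact flags
- cd <ByteTrack_HOME>
- python3 tools/trt.py -f exps/example/mot/yolox_s_mix_det.py -c pretrained/bytetrack_s_mot17.pth.tar
- ```
- 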
- Check for the `model_trt.engine` file, which is automatically saved in the YOLOX_outputs directory.
- 
- ## Build the demo
- 
- You should set the TensorRT path and CUDA path in CMakeLists.txt, for example as sketched below.
- 
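- A minimal sketch of the relevant CMakeLists.txt lines, assuming a default CUDA install under `/usr/local/cuda` and a hypothetical TensorRT location; adjust both to your machine:
- 
- ```cmake
- # Hypothetical paths: point these at your local CUDA and TensorRT installations
- include_directories(/usr/local/cuda/include)
- link_directories(/usr/local/cuda/lib64)
- include_directories(/path/to/TensorRT/include)
- link_directories(/path/to/TensorRT/lib)
- ```
- 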
- For the bytetrack_s model, we set the input frame size to 1088 x 608. For the bytetrack_m, bytetrack_l, and bytetrack_x models, we set it to 1440 x 800. You can modify INPUT_W and INPUT_H in src/bytetrack.cpp:
- 
- ```c++
- static const int INPUT_W = 1088;
- static const int INPUT_H = 608;
- ```
- 
- First, build the demo:
- 
- ```shell
- cd <ByteTrack_HOME>/demo/TensorRT/cpp
- mkdir build
- cd build
- cmake ..
- make
- ```
- 
- Then you can run the demo at **200 FPS**:
- 
- ```shell
- ./bytetrack ../../../../YOLOX_outputs/yolox_s_mix_det/model_trt.engine -i ../../../../videos/palace.mp4
- ```
- 
- (If the output video drops some frames, you can convert the input video by running:
- 
- ```shell
- cd <ByteTrack_HOME>
- python3 tools/convert_video.py
- ```
- 
- to generate an appropriate input video for the TensorRT C++ demo.)
- 