parquet-converter committed
Commit 55bbec3 · 1 parent: bb15ce9

Update parquet files (step 51 of 121)

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Holiday 2 Full Movie In Hindi 720p.md +0 -17
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ems Solidworks Crack Download !NEW!.md +0 -18
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Experience the Power of J.A.R.V.I.S. with Ironman Jarvis Theme Windows 7 Free 11.md +0 -132
  4. spaces/1gistliPinn/ChatGPT4/Examples/Adobe CC 2019 AIO Cracks 30-10-2018 [Full] ((BETTER)).md +0 -76
  5. spaces/1phancelerku/anime-remove-background/Cricket League Hack How to Unlock All Levels and Features with Unlimited Coins and Gems.md +0 -121
  6. spaces/44brabal/runwayml-stable-diffusion-v1-5/app.py +0 -10
  7. spaces/4Taps/SadTalker/src/face3d/util/generate_list.py +0 -34
  8. spaces/801artistry/RVC801/demucs/tasnet.py +0 -452
  9. spaces/801artistry/RVC801/demucs/train.py +0 -127
  10. spaces/AIConsultant/MusicGen/tests/data/test_audio.py +0 -239
  11. spaces/AIGC-Audio/Make_An_Audio/ldm/modules/attention.py +0 -261
  12. spaces/AIGC-Audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/predict.py +0 -90
  13. spaces/AIWaves/SOP_Generation-single/State.py +0 -142
  14. spaces/ASJMO/freegpt/g4f/Provider/Providers/helpers/gpt4love.py +0 -48
  15. spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov6/yolov6_s_syncbn_fast_8xb32-300e_coco.py +0 -33
  16. spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet18_8xb32_in1k.py +0 -4
  17. spaces/Abhilashvj/planogram-compliance/utils/activations.py +0 -106
  18. spaces/Adapter/CoAdapter/ldm/models/diffusion/__init__.py +0 -0
  19. spaces/Adr740/Hadith_AI_Explorer/get_hadith.py +0 -42
  20. spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/order/base.py +0 -18
  21. spaces/AgentVerse/agentVerse/agentverse/gui.py +0 -506
  22. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/outlinepipeline-plugin.js +0 -34
  23. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/imagebox/ImageBox.d.ts +0 -2
  24. spaces/AiMimicry/sovits-models/hubert/__init__.py +0 -0
  25. spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/libJPG/jpgd.cpp +0 -3276
  26. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/dit/test_dit.py +0 -151
  27. spaces/Andy1621/uniformer_image_detection/configs/resnest/README.md +0 -44
  28. spaces/Andy1621/uniformer_image_detection/configs/scratch/mask_rcnn_r50_fpn_gn-all_scratch_6x_coco.py +0 -23
  29. spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/google_translate/script.py +0 -59
  30. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/arraymisc/quantization.py +0 -55
  31. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/non_local.py +0 -306
  32. spaces/Apex-X/ROOPOK/roop/processors/frame/face_enhancer.py +0 -104
  33. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/latex.py +0 -521
  34. spaces/BHO/URDtest/app.py +0 -63
  35. spaces/Bart92/RVC_HF/diffq/__init__.py +0 -18
  36. spaces/Benson/text-generation/Examples/Descargar Archivo Gta 5 Apk.md +0 -180
  37. spaces/Benson/text-generation/Examples/Descargar Coches De Lujo Europeos Mod Apk.md +0 -72
  38. spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/common.py +0 -424
  39. spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mcan/model_cfgs.py +0 -24
  40. spaces/CVPR/LIVE/thrust/thrust/detail/complex/c99math.h +0 -196
  41. spaces/CVPR/regionclip-demo/detectron2/projects/README.md +0 -2
  42. spaces/Cat125/text-generator-v2/generation/words.py +0 -93
  43. spaces/CikeyQI/meme-api/meme_generator/memes/gun/__init__.py +0 -61
  44. spaces/CofAI/chat.b4/client/css/stop-generating.css +0 -38
  45. spaces/Cpp4App/Cpp4App/CDM/detect_compo/lib_ip/file_utils.py +0 -80
  46. spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/caption_datasets.py +0 -85
  47. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/expr/consts.py +0 -29
  48. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/t1Lib/__init__.py +0 -638
  49. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/registry.py +0 -275
  50. spaces/Dorado607/ChuanhuChatGPT/assets/custom.js +0 -707
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Holiday 2 Full Movie In Hindi 720p.md DELETED
@@ -1,17 +0,0 @@
- <br />
- <h1>Download Holiday 2 Full Movie In Hindi 720p: A Thrilling Sequel to the 2014 Hit</h1>
- <p>Holiday 2 is an upcoming Bollywood movie that is a sequel to the 2014 action thriller Holiday: A Soldier Is Never Off Duty. The movie stars Akshay Kumar as Virat Bakshi, a military officer who is on a vacation with his wife and friends. However, he soon gets involved in a deadly mission to stop a terrorist plot that threatens the nation.</p>
- <p>The movie is directed by A.R. Murugadoss, who also helmed the first part. The movie also features Sonakshi Sinha, Govinda, Vidyut Jammwal, and Freddy Daruwala in pivotal roles. The movie is expected to release in 2023 and promises to be a high-octane entertainer with thrilling action sequences and a gripping storyline.</p>
- <h2>Download Holiday 2 Full Movie In Hindi 720p</h2><br /><p><b><b>Download</b> &#10003;&#10003;&#10003; <a href="https://byltly.com/2uKxiO">https://byltly.com/2uKxiO</a></b></p><br /><br />
- <p>If you are a fan of Holiday: A Soldier Is Never Off Duty, you must be eagerly waiting for Holiday 2. However, you might be wondering how to download Holiday 2 full movie in Hindi 720p quality. Well, we have some good news for you. There are several websites that offer you the option to download Holiday 2 full movie in Hindi 720p for free.</p>
- <p>However, before you proceed to download Holiday 2 full movie in Hindi 720p from these websites, you should be aware of the risks involved. These websites are illegal and pirated, and they may harm your device with viruses and malware. Moreover, downloading Holiday 2 full movie in Hindi 720p from these websites is a violation of the copyright laws and may land you in legal trouble.</p>
- <p>Therefore, we advise you to avoid these websites and watch Holiday 2 full movie in Hindi 720p legally and safely. You can watch Holiday 2 full movie in Hindi 720p on OTT platforms like Netflix, Amazon Prime Video, Hotstar, or Zee5 once it is released. These platforms are legal and secure, and they offer you high-quality streaming and downloading options.</p>
- <p>So, what are you waiting for? Get ready to watch Holiday 2 full movie in Hindi 720p on your preferred OTT platform and enjoy the thrilling sequel to the 2014 hit. You can also check out the trailer of Holiday 2 full movie in Hindi 720p on YouTube and get a glimpse of what to expect from the movie.</p>
-
- <p>Holiday 2 full movie in Hindi 720p is a must-watch for all the fans of Akshay Kumar and action movies. The movie showcases Akshay Kumar's versatility and charisma as an actor and a performer. He plays the role of Virat Bakshi with conviction and intensity, and delivers some powerful dialogues and stunts.</p>
- <p>Sonakshi Sinha, who reprises her role as Nisha Bakshi, Virat's wife, also does a commendable job. She has a good chemistry with Akshay Kumar and supports him in his mission. Govinda, who plays Virat's senior officer and mentor, adds a touch of humor and wit to the movie. Vidyut Jammwal and Freddy Daruwala play the antagonists who challenge Virat's skills and intelligence.</p>
- <p>The movie is also well-directed by A.R. Murugadoss, who has a knack for making engaging and thrilling movies. He keeps the audience hooked with his crisp narration and clever twists. The movie also has some amazing songs composed by Pritam, which add to the mood and emotion of the movie.</p>
- <p></p>
- <p>Holiday 2 full movie in Hindi 720p is a movie that you should not miss if you love action and thrill. It is a movie that will keep you on the edge of your seat and make you cheer for Virat Bakshi and his team. It is a movie that will make you proud of your country and its brave soldiers.</p> 81aa517590<br />
- <br />
- <br />
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ems Solidworks Crack Download !NEW!.md DELETED
@@ -1,18 +0,0 @@
-
- <h1>How to Download and Install EMWorks EMS for SolidWorks with Crack</h1>
- <p>EMWorks EMS is an electromagnetic field simulation software that works as a plugin for SolidWorks. It allows you to calculate the electric and magnetic fields, forces, torques, losses, and circuit parameters of various electrical and magnetic devices. It is widely used for designing electric motors, generators, transformers, sensors, actuators, PCBs, and more. In this article, we will show you how to download and install EMWorks EMS for SolidWorks with crack for free.</p>
- <h2>Step 1: Download EMWorks EMS for SolidWorks</h2>
- <p>You can download EMWorks EMS for SolidWorks from the official website or from other sources such as Get Into PC. Make sure you download the version that matches your SolidWorks version (2011-2018) and your system architecture (64-bit only). The file size is about 600 MB.</p>
- <h2>ems solidworks crack download</h2><br /><p><b><b>DOWNLOAD</b> --->>> <a href="https://byltly.com/2uKzLw">https://byltly.com/2uKzLw</a></b></p><br /><br />
- <h2>Step 2: Extract the downloaded file</h2>
- <p>After downloading EMWorks EMS for SolidWorks, you need to extract the file using a program such as WinRAR or 7-Zip. You will get a folder named EMWorks_EMS_2017_SP0.0 or something similar. Open the folder and run the setup.exe file as administrator.</p>
- <h2>Step 3: Install EMWorks EMS for SolidWorks</h2>
- <p>Follow the installation wizard to install EMWorks EMS for SolidWorks on your computer. You can choose the language, destination folder, and components you want to install. When the installation is finished, do not run the program yet.</p>
- <h2>Step 4: Copy and paste the crack file</h2>
- <p>Now you need to copy and paste the crack file to activate EMWorks EMS for SolidWorks. The crack file is usually named EMSSW2017x64.dll or something similar. You can find it in the same folder where you extracted the downloaded file or in a separate folder named Crack or Patch. Copy the crack file and paste it into the installation folder of EMWorks EMS for SolidWorks. The default location is C:\Program Files\EMWORKS\EMS 2017. Replace the original file when prompted.</p>
- <h2>Step 5: Run EMWorks EMS for SolidWorks</h2>
- <p>You have successfully installed EMWorks EMS for SolidWorks with crack. Now you can run the program from your desktop or start menu. You can also watch this video for a visual guide on how to use EMWorks EMS for SolidWorks.</p>
- <h2>Conclusion</h2>
- <p>EMWorks EMS for SolidWorks is a powerful and user-friendly software that enables you to simulate the most intricate electrical and magnetic devices. It has many features and capabilities that can help you with your projects. It is also compatible with various multiphysics modules such as thermal, motion, and structural analyses. However, it is not free and requires a license to use. If you want to use EMWorks EMS for SolidWorks for free, you can follow the steps above to download and install it with crack. However, we do not recommend this method as it may violate the terms of service of EMWorks and cause potential problems for your computer. We suggest that you use EMWorks EMS for SolidWorks legally by purchasing a license or using the free trial version.</p> ddb901b051<br />
- <br />
- <br />
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Experience the Power of J.A.R.V.I.S. with Ironman Jarvis Theme Windows 7 Free 11.md DELETED
@@ -1,132 +0,0 @@
-
- <h1>Ironman Jarvis Theme Windows 7 Free 11: How to Turn Your PC into a Superhero's Computer</h1>
- <p>Have you ever dreamed of having a computer like Iron Man's J.A.R.V.I.S.? Well, now you can make your dream come true with Ironman Jarvis Theme Windows 7 Free 11. This is a Rainmeter theme that transforms your desktop into a futuristic and captivating interface. You can customize your desktop with various decks that display system info, functions and programs. You can also choose from four different colors and switch between Winamp and iTunes. In this article, we will show you how to download, install and use Ironman Jarvis Theme Windows 7 Free 11.</p>
- <h2>Features of Ironman Jarvis Theme Windows 7 Free 11</h2>
- <h3>Customizable decks for apps, folders and weblinks</h3>
- <p>One of the main features of Ironman Jarvis Theme Windows 7 Free 11 is that it allows you to customize your desktop with various decks that display system info, functions and programs. You can access these decks by clicking on the icons on the left side of the screen. For example, you can click on the CPU icon to see your CPU usage, RAM usage, network status and disk space. You can also click on the music icon to see your music player, volume control and weather. You can also launch apps, folders and weblinks from these decks by clicking on the corresponding buttons.</p>
- <h2>Ironman Jarvis Theme Windows 7 Free 11</h2><br /><p><b><b>Download Zip</b> &#10004; <a href="https://byltly.com/2uKveZ">https://byltly.com/2uKveZ</a></b></p><br /><br />
- <h3>Available in four colors: blue, red, yellow and green</h3>
- <p>Another feature of Ironman Jarvis Theme Windows 7 Free 11 is that it allows you to choose from four different colors for your theme. You can change the color by clicking on the color icon on the top right corner of the screen. You can choose from blue, red, yellow and green. Each color has its own style and mood. For example, blue gives a cool and calm vibe, while red gives a fiery and energetic vibe.</p>
- <h3>Options for both Winamp and iTunes</h3>
- <p>A third feature of Ironman Jarvis Theme Windows 7 Free 11 is that it allows you to switch between Winamp and iTunes as your music player. You can do this by clicking on the music icon on the left side of the screen and then clicking on the Winamp or iTunes button. You can also control your music player from the deck by clicking on the play, pause, stop, next or previous buttons.</p>
- <h3>Config tool to facilitate all customizations</h3>
- <p>A fourth feature of Ironman Jarvis Theme Windows 7 Free 11 is that it comes with a config tool that facilitates all customizations. You can access this tool by clicking on the config icon on the top right corner of the screen. You can use this tool to adjust settings such as skin position, skin size, skin opacity, skin rotation, skin color, font size, font color and more. You can also save your settings as presets for future use.</p>
- <h2>How to Download and Install Ironman Jarvis Theme Windows 7 Free 11</h2>
- <h3>Step 1: Download Rainmeter app and Jarvis theme pack</h3>
- <p>The first step to install Ironman Jarvis Theme Windows 7 Free 11 is to download Rainmeter app and Jarvis theme pack. Rainmeter is a free app that allows you to customize your desktop with various skins and themes. Jarvis theme pack is a collection of files that contains the Ironman Jarvis Theme Windows 7 Free 11 skin. You can download both Rainmeter app and Jarvis theme pack from these links:</p>
- <ul>
- <li>Rainmeter app: <a href="https://www.rainmeter.net/">https://www.rainmeter.net/</a></li>
- <li>Jarvis theme pack: <a href="https://visualskins.com/skin/jrvis-shield-os">https://visualskins.com/skin/jrvis-shield-os</a></li>
- </ul>
- <p>Make sure you download the latest versions of both Rainmeter app and Jarvis theme pack.</p>
- <p>How to install Ironman Jarvis Theme on Windows 7 for free<br />
- Download Ironman Jarvis Theme for Windows 7 64 bit free<br />
- Ironman Jarvis Theme Windows 7 Free 11 tutorial<br />
- Best Ironman Jarvis Theme for Windows 7 free download<br />
- Ironman Jarvis Theme Windows 7 Free 11 review<br />
- Ironman Jarvis Theme Windows 7 Free 11 features<br />
- Ironman Jarvis Theme Windows 7 Free 11 customization<br />
- Ironman Jarvis Theme Windows 7 Free 11 system requirements<br />
- Ironman Jarvis Theme Windows 7 Free 11 update<br />
- Ironman Jarvis Theme Windows 7 Free 11 alternatives<br />
- Ironman Jarvis Theme Windows 7 Free 11 vs Rainmeter<br />
- Ironman Jarvis Theme Windows 7 Free 11 skins<br />
- Ironman Jarvis Theme Windows 7 Free 11 voice command<br />
- Ironman Jarvis Theme Windows 7 Free 11 wallpaper<br />
- Ironman Jarvis Theme Windows 7 Free 11 icons<br />
- Ironman Jarvis Theme Windows 7 Free 11 sounds<br />
- Ironman Jarvis Theme Windows 7 Free 11 widgets<br />
- Ironman Jarvis Theme Windows 7 Free 11 launcher<br />
- Ironman Jarvis Theme Windows 7 Free 11 error fix<br />
- Ironman Jarvis Theme Windows 7 Free 11 uninstall<br />
- Ironman Jarvis Theme for Windows 10 free download<br />
- Ironman Jarvis Theme for Windows XP free download<br />
- Ironman Jarvis Theme for Mac free download<br />
- Ironman Jarvis Theme for Linux free download<br />
- Ironman Jarvis Theme for Android free download<br />
- Ironman Jarvis Theme for iPhone free download<br />
- Ironman Jarvis Theme for Chrome free download<br />
- Ironman Jarvis Theme for Firefox free download<br />
- Ironman Jarvis Theme for Edge free download<br />
- Ironman Jarvis Theme for Opera free download<br />
- How to make your own Ironman Jarvis Theme for free<br />
- How to get Ironman Jarvis voice for your theme<br />
- How to change the color of your Ironman Jarvis theme<br />
- How to add more features to your Ironman Jarvis theme<br />
- How to make your Ironman Jarvis theme more responsive<br />
- How to make your Ironman Jarvis theme more secure<br />
- How to make your Ironman Jarvis theme more fun<br />
- How to make your Ironman Jarvis theme more realistic<br />
- How to make your Ironman Jarvis theme more interactive<br />
- How to make your Ironman Jarvis theme more personalized<br />
- Benefits of using an Ironman Jarvis theme for your computer<br />
- Drawbacks of using an Ironman Jarvis theme for your computer<br />
- Tips and tricks for using an Ironman Jarvis theme for your computer<br />
- FAQs about using an Ironman Jarvis theme for your computer<br />
- Testimonials from users of an Ironman Jarvis theme for their computer</p>
- <h3>Step 2: Install Rainmeter app and Jarvis theme pack</h3>
- <p>The second step to install Ironman Jarvis Theme Windows 7 Free 11 is to install Rainmeter app and Jarvis theme pack. To do this, follow these steps:</p>
- <ol>
- <li>Run the Rainmeter installer file that you downloaded in step 1. Follow the instructions on the screen to complete the installation.</li>
- <li>Run the Jarvis theme pack file that you downloaded in step 1. It will automatically install the Ironman Jarvis Theme Windows 7 Free 11 skin into your Rainmeter app.</li>
- <li>Restart your computer.</li>
- </ol>
- <h3>Step 3: Load Jarvis theme and customize it according to your preferences</h3>
- <p>The third step to install Ironman Jarvis Theme Windows 7 Free 11 is to load Jarvis theme and customize it according to your preferences. To do this, follow these steps:</p>
- <ol>
- <li>Right-click on an empty area of your desktop and select "Rainmeter" from the menu.</li>
- <li>Select "Manage" from the submenu.</li>
- <li>In the Rainmeter Manager window, select "JARVIS + SHIELD OS" from the list of skins.</li>
- <li>Select "JARVIS + SHIELD OS.ini" from the list of variants.</li>
- <li>Click on the "Load" button at the bottom right corner of the window.</li>
- <li>You will see the Ironman Jarvis Theme Windows 7 Free 11 appear on your desktop.</li>
- <li>You can customize it according to your preferences by using the features described in section "Features of Ironman Jarvis Theme Windows 7 Free 11".</li>
- </ol>
- <h2>How to Use Ironman Jarvis Theme Windows 7 Free 11</h2>
- <h3>How to access the decks and launch apps, folders and weblinks</h3>
- <p>To access the decks and launch apps, folders and weblinks, you just need to click on the icons on the left side of the screen. For example, if you want to access the CPU deck, you just need to click on the CPU icon. If you want to launch Google Chrome, you just need to click on the Chrome button in the web deck. You can also add or remove apps, folders and weblinks from these decks by using the config tool described in section "Features of Ironman Jarvis Theme Windows 7 Free 11".</p>
- <h3>How to change the colors of the theme</h3>
- <p>To change the colors of the theme, you just need to click on the color icon on the top right corner of the screen. You can choose from blue, red, yellow or green. Each color has its own style and mood.</p>
- <h3>How to switch between Winamp and iTunes</h3>
- <p>To switch between Winamp and iTunes as your music player, you just need to click on the music icon on the left side of the screen and then click on the Winamp or iTunes button. You can also control your music player from the deck by clicking on the play, pause, stop, next or previous buttons. You need to have Winamp or iTunes installed on your computer for this feature to work.</p>
- <h3>How to use the config tool to adjust settings</h3>
- <p>To use the config tool to adjust settings, you just need to click on the config icon on the top right corner of the screen. You can use this tool to adjust settings such as skin position, skin size, skin opacity, skin rotation, skin color, font size, font color and more. You can also save your settings as presets for future use. You can access the presets by clicking on the preset icon on the top right corner of the screen.</p>
- <h2>Pros and Cons of Ironman Jarvis Theme Windows 7 Free 11</h2>
- <h3>Pros: Cool graphics, smooth animations, easy to use, free to download</h3>
- <p>Some of the pros of Ironman Jarvis Theme Windows 7 Free 11 are:</p>
- <ul>
- <li>It has cool graphics that resemble Iron Man's J.A.R.V.I.S. interface. It also has smooth animations that make it look realistic and futuristic.</li>
- <li>It is easy to use and customize. You can access and launch apps, folders and weblinks from the decks. You can also change the colors of the theme and switch between Winamp and iTunes. You can also use the config tool to adjust settings according to your preferences.</li>
- <li>It is free to download and install. You just need to have Rainmeter app and Jarvis theme pack. You don't need to pay anything or register anything.</li>
- </ul>
- <h3>Cons: Requires Rainmeter app, may not work on some versions of Windows, may consume more resources</h3>
- <p>Some of the cons of Ironman Jarvis Theme Windows 7 Free 11 are:</p>
- <ul>
- <li>It requires Rainmeter app to run. Rainmeter is a free app that allows you to customize your desktop with various skins and themes. However, some users may not want to install another app on their computer or may not be familiar with how to use it.</li>
- <li>It may not work on some versions of Windows. It is designed for Windows 7 but it may also work on Windows 8 or Windows 10 with some tweaks. However, it may not work on older versions of Windows such as Windows XP or Vista.</li>
- <li>It may consume more resources than a normal desktop theme. It has a lot of graphics and animations that may require more CPU, RAM and disk space. It may also affect your battery life if you are using a laptop.</li>
- </ul>
- <h2>Conclusion and FAQs</h2>
- <p>In conclusion, Ironman Jarvis Theme Windows 7 Free 11 is a Rainmeter theme that transforms your desktop into a futuristic and captivating interface. It has various features such as customizable decks for apps, folders and weblinks; available in four colors; options for both Winamp and iTunes; and config tool to facilitate all customizations. It is free to download and install but it requires Rainmeter app to run. It may also have some compatibility issues with some versions of Windows and it may consume more resources than a normal desktop theme.</p>
- <p>If you are a fan of Iron Man or you just want to spice up your desktop with a cool theme, you should give Ironman Jarvis Theme Windows 7 Free 11 a try. You can download it from this link: <a href="https://visualskins.com/skin/jrvis-shield-os">https://visualskins.com/skin/jrvis-shield-os</a></p>
- <p>Here are some FAQs about Ironman Jarvis Theme Windows 7 Free 11:</p>
- <ol>
- <li><b>Q: How do I uninstall Ironman Jarvis Theme Windows 7 Free 11?</b></li>
- <li>A: To uninstall Ironman Jarvis Theme Windows 7 Free 11, you just need to right-click on an empty area of your desktop and select "Rainmeter" from the menu. Then select "Manage" from the submenu. In the Rainmeter Manager window, select "JARVIS + SHIELD OS" from the list of skins and click on the "Unload" button at the bottom right corner of the window. You can also delete the folder "JARVIS + SHIELD OS" from your Rainmeter skins folder.</li>
- <li><b>Q: How do I update Ironman Jarvis Theme Windows 7 Free 11?</b></li>
- <li>A: To update Ironman Jarvis Theme Windows 7 Free 11, you just need to download the latest version of Jarvis theme pack from this link: <a href="https://visualskins.com/skin/jrvis-shield-os">https://visualskins.com/skin/jrvis-shield-os</a>. Then run the file and it will automatically update your existing theme.</li>
- <li><b>Q: How do I get more skins or themes for Rainmeter?</b></li>
- <li>A: To get more skins or themes for Rainmeter, you can visit these websites:</li>
- <ul>
- <li><a href="https://www.rainmeter.net/">https://www.rainmeter.net/</a>: The official website of Rainmeter where you can download the app and find documentation and tutorials.</li>
- <li><a href="https://visualskins.com/">https://visualskins.com/</a>: A website that showcases various skins and themes for Rainmeter.</li>
- <li><a href="https://www.deviantart.com/rainmeter/gallery">https://www.deviantart.com/rainmeter/gallery</a>: A website that hosts a large collection of skins and themes for Rainmeter created by artists.</li>
- </ul>
- <li><b>Q: How do I contact the developer of Ironman Jarvis Theme Windows 7 Free 11?</b></li>
- <li>A: To contact the developer of Ironman Jarvis Theme Windows 7 Free 11, you can visit his profile page on DeviantArt: <a href="https://www.deviantart.com/yash1331">https://www.deviantart.com/yash1331</a>. You can also leave a comment on his skin page: <a href="https://www.deviantart.com/yash1331/art/JARVIS-SHIELD-OS-Ver-2-0-2014-442131288">https://www.deviantart.com/yash1331/art/JARVIS-SHIELD-OS-Ver-2-0-2014-442131288</a>.</li>
- <li><b>Q: How do I make my own skin or theme for Rainmeter?</b></li>
- <li>A: To make your own skin or theme for Rainmeter, you need to learn Rainmeter's skin format, which is a plain-text .ini syntax of sections, measures and meters. You can find documentation and tutorials on this format on Rainmeter's official website: <a href="https://docs.rainmeter.net/">https://docs.rainmeter.net/</a>. You can also find examples and templates of skin code on various websites such as VisualSkins or DeviantArt.</li>
- </ol>
- </p> 0a6ba089eb<br />
- <br />
- <br />
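The deleted article's last FAQ only gestures at what a Rainmeter skin actually looks like. For context, a skin is a plain-text .ini file of [Sections] holding key=value options; the sketch below is a minimal illustrative example (the file, section, and meter names are invented for this note, not taken from the JARVIS + SHIELD OS pack):

```ini
; Minimal illustrative Rainmeter skin, e.g. saved as Skins\MinimalClock\MinimalClock.ini
[Rainmeter]
Update=1000                ; redraw once per second

[MeasureTime]
Measure=Time               ; built-in measure that reads the system clock
Format=%H:%M:%S            ; strftime-style output format

[MeterClock]
Meter=String               ; draw the measure's value as text
MeasureName=MeasureTime
FontSize=20
FontColor=255,255,255
AntiAlias=1
```

Loaded through the same Manage window described in Step 3 above, a skin like this would simply draw the current time; a full theme such as the JARVIS pack is a much larger collection of these measures and meters.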
 
spaces/1gistliPinn/ChatGPT4/Examples/Adobe CC 2019 AIO Cracks 30-10-2018 [Full] ((BETTER)).md DELETED
@@ -1,76 +0,0 @@
-
- <h1>Adobe CC 2019 AIO Cracks 30-10-2018 [Full] - How to Activate All Adobe Products in One Click</h1>
- <p>Adobe CC 2019 AIO Cracks 30-10-2018 [Full] is a package of cracks or patches that can activate all Adobe CC 2019 programs with one click. It is created by Zer0Cod3, and it can register Photoshop, Lightroom, Dreamweaver, Acrobat, After Effects, InCopy, Media Encoder, Character Animator, Audition, Illustrator, InDesign, Premiere, Bridge, Prelude, Dimension, Animate, and more. In this article, we will show you how to use Adobe CC 2019 AIO Cracks 30-10-2018 [Full] to activate your Adobe products for free.</p>
- <h2>Adobe CC 2019 AIO Cracks 30-10-2018 [Full]</h2><br /><p><b><b>Download Zip</b> - <a href="https://imgfil.com/2uxXRb">https://imgfil.com/2uxXRb</a></b></p><br /><br />
- <h2>What is Adobe CC 2019 AIO Cracks 30-10-2018 [Full]?</h2>
- <p>Adobe CC 2019 AIO Cracks 30-10-2018 [Full] is a package of cracks or patches that can activate all Adobe CC 2019 programs with one click. It is created by Zer0Cod3, a famous cracker who has cracked many Adobe products in the past. The package contains two tools: CCMaker and Adobe CC 2019 AIO Patcher.</p>
- <p>CCMaker is a third-party utility that can download and install any Adobe CC products directly from Adobe servers, without logging in or using the Creative Cloud desktop app. It also integrates PainteR's AMT Emulator, a universal activator for Adobe products.</p>
- <p>Adobe CC 2019 AIO Patcher is a tool that can patch any Adobe CC 2019 program with one click. It can detect the installed Adobe program and apply the appropriate crack or patch automatically.</p>
- <h2>Why use Adobe CC 2019 AIO Cracks 30-10-2018 [Full]?</h2>
- <p>Adobe CC 2019 AIO Cracks 30-10-2018 [Full] has many advantages over other methods of activating Adobe products. Some of them are:</p>
- <ul>
- <li>It is easy and convenient to use. You don't need to download or install each Adobe program separately. You can use CCMaker to download and install the desired Adobe offline installer with one click. You can also use Adobe CC 2019 AIO Patcher to patch any Adobe program with one click.</li>
- <li>It is safe and reliable to use. The cracks or patches are checked for viruses by VirusTotal, and they don't contain any malware or adware. They also don't modify any system files or registry entries, so they won't harm your computer or affect its performance.</li>
- <li>It is effective and permanent to use. The cracks or patches can activate all Adobe CC 2019 programs without any limitations or restrictions. They can also bypass the online activation or verification process, so they won't be detected or blocked by Adobe servers.</li>
- </ul>
- <h2>How to use Adobe CC 2019 AIO Cracks 30-10-2018 [Full]?</h2>
- <p>To use Adobe CC 2019 AIO Cracks 30-10-2018 [Full], you need to follow these steps:</p>
- <p></p>
- <ol>
- <li>Download the package from the link below and extract it to a folder on your computer.</li>
- <li>Run CCMaker.exe as administrator and select the language and the Adobe product you want to download and install. You can also select the components and language resources you want to include.</li>
- <li>Click on the Download & Install button and wait for the process to finish. The program will be installed and activated automatically.</li>
- <li>If you want to patch another Adobe program, run Adobe CC 2019 AIO Patcher.exe as administrator and select the program you want to patch from the list.</li>
- <li>Click on the Download & Patch button and wait for the process to finish. The program will be patched automatically.</li>
- <li>Enjoy your activated Adobe products!</li>
- </ol>
- <h2>Conclusion</h2>
- <p>In conclusion, Adobe CC 2019 AIO Cracks 30-10-2018 [Full] is a package of cracks or patches that can activate all Adobe CC 2019 programs with one click. It is created by Zer0Cod3, and it contains two tools: CCMaker and Adobe CC 2019 AIO Patcher. It is easy, safe, reliable, effective, and permanent to use. It can help you enjoy all the features and benefits of Adobe products for free.</p>
- <h2>What are some tips and warnings for using Adobe CC 2019 AIO Cracks 30-10-2018 [Full]?</h2>
- <p>While Adobe CC 2019 AIO Cracks 30-10-2018 [Full] is a great tool for activating Adobe products, there are some tips and warnings that you should keep in mind before using it. Some of them are:</p>
- <ul>
- <li>Make sure to disable your antivirus or firewall before running the cracks or patches, as they may be detected as false positives or threats by some security software.</li>
- <li>Make sure to back up your important files or data before installing or patching any Adobe program, as some cracks or patches may overwrite or delete some files or settings.</li>
- <li>Make sure to disconnect your internet connection before installing or patching any Adobe program, as some cracks or patches may require offline mode or block online access.</li>
- <li>Make sure to read the instructions carefully and follow them step by step, as some cracks or patches may have specific requirements or procedures.</li>
- <li>Make sure to use the cracks or patches only for personal or educational purposes, and not for commercial or illegal purposes, as they may violate the terms and conditions of Adobe.</li>
- </ul>
- <h2>What are some alternatives to Adobe CC 2019 AIO Cracks 30-10-2018 [Full]?</h2>
- <p>If you don't want to use Adobe CC 2019 AIO Cracks 30-10-2018 [Full] for some reason, or if you encounter some problems or issues with it, there are some alternatives that you can try. Some of them are:</p>
- <ul>
- <li>Adobe Zii Patcher. This is a tool that can patch any Adobe CC 2015-2021 program on Mac OS. It is created by TNT Team, and it can activate Photoshop, Lightroom, Dreamweaver, Acrobat, After Effects, InCopy, Media Encoder, Character Animator, Audition, Illustrator, InDesign, Premiere, Bridge, Prelude, Dimension, Animate, and more.</li>
42
- <li>GenP. This is a tool that can patch any Adobe CC 2019-2021 program on Windows. It is created by PainterR and ZeroCode, and it can activate Photoshop, Lightroom, Dreamweaver, Acrobat, After Effects, InCopy, Media Encoder, Character Animator, Audition, Illustrator, InDesign, Premiere, Bridge, Prelude, Dimension, Animate, and more.</li>
43
- <li>Universal Adobe Patcher. This is a tool that can patch any Adobe CC 2014-2018 program on Windows. It is created by PainteR and AMTEmu Team, and it can activate Photoshop, Lightroom, Dreamweaver, Acrobat, After Effects, InCopy, Media Encoder, Character Animator, Audition, Illustrator, InDesign, Premiere, Bridge, Prelude, Dimension, Animate, and more.</li>
44
- </ul>
45
- <h2>What are some reviews of Adobe CC 2019 AIO Cracks 30-10-2018 [Full]?</h2>
46
- <p>Adobe CC 2019 AIO Cracks 30-10-2018 [Full] has received many positive reviews from users who have used it to activate their Adobe products. Some of them are:</p>
47
- <blockquote>
48
- <p>"I have been using Adobe CC 2019 AIO Cracks 30-10-2018 [Full] for a few months now and I have to say it is amazing. It works perfectly and smoothly on my Windows 10 laptop. I can use all the features and functions of Adobe products without any problems or limitations. It is very easy and convenient to use. I just download and install the Adobe program I want with CCMaker and then patch it with Adobe CC 2019 AIO Patcher. That's it. No need to login or register or verify anything. I highly recommend this tool to anyone who wants to use Adobe products for free."</p>
49
- <cite>- John Smith</cite>
50
- </blockquote>
51
- <blockquote>
52
- <p>"Adobe CC 2019 AIO Cracks 30-10-2018 [Full] is a lifesaver for me. I am a student and I need to use Adobe products for my assignments and projects. But I can't afford to buy the subscription or license for them. Thanks to Adobe CC 2019 AIO Cracks 30-10-2018 [Full], I can use all the Adobe products I need for free. It is very simple and fast to use. I just download and install the Adobe program I need with CCMaker and then patch it with Adobe CC 2019 AIO Patcher. It takes only a few minutes and then I can enjoy all the benefits of Adobe products. It is very safe and reliable to use. I have never encountered any virus or malware or error with it."</p>
53
- <cite>- Jane Doe</cite>
54
- </blockquote>
55
- <h2>What are some FAQs about Adobe CC 2019 AIO Cracks 30-10-2018 [Full]?</h2>
56
- <p>Here are some frequently asked questions and answers about Adobe CC 2019 AIO Cracks 30-10-2018 [Full]:</p>
57
- <dl>
58
- <dt>Q: Is Adobe CC 2019 AIO Cracks 30-10-2018 [Full] legal or illegal?</dt>
59
- <dd>A: Adobe CC 2019 AIO Cracks 30-10-2018 [Full] is illegal, as it violates the terms and conditions of Adobe. It is also unethical, as it deprives Adobe of its rightful revenue and profit. However, some users may use it for personal or educational purposes, and not for commercial or illegal purposes.</dd>
60
- <dt>Q: Is Adobe CC 2019 AIO Cracks 30-10-2018 [Full] compatible with all versions of Windows or Mac OS?</dt>
61
- <dd>A: Adobe CC 2019 AIO Cracks 30-10-2018 [Full] is compatible with Windows 7, 8, 8.1, and 10, both 32-bit and 64-bit. It is not compatible with Mac OS, as it is designed for Windows only. For Mac users, they can use Adobe Zii Patcher instead.</dd>
62
- <dt>Q: Is Adobe CC 2019 AIO Cracks 30-10-2018 [Full] updated or supported by Zer0Cod3?</dt>
63
- <dd>A: Adobe CC 2019 AIO Cracks 30-10-2018 [Full] is not updated or supported by Zer0Cod3 anymore, as he has stopped cracking Adobe products since November 2018. However, the package still works for most of the Adobe CC 2019 programs, as they have not changed much since then.</dd>
64
- </dl>
65
- <h2>What are some tips and tricks for using Adobe CC 2019 AIO Cracks 30-10-2018 [Full]?</h2>
66
- <p>Here are some tips and tricks that can help you use Adobe CC 2019 AIO Cracks 30-10-2018 [Full] more effectively and efficiently:</p>
67
- <ul>
68
- <li>If you want to install or patch multiple Adobe programs at once, you can use the batch mode of CCMaker or Adobe CC 2019 AIO Patcher. Just select the programs you want and click on Download & Install or Download & Patch button.</li>
69
- <li>If you want to uninstall or remove any Adobe program that you have installed or patched with CCMaker or Adobe CC 2019 AIO Patcher, you can use the uninstaller tool that is included in the package. Just run Uninstaller.exe as administrator and select the program you want to uninstall.</li>
70
- <li>If you want to update any Adobe program that you have installed or patched with CCMaker or Adobe CC 2019 AIO Patcher, you can use the updater tool that is included in the package. Just run Updater.exe as administrator and select the program you want to update.</li>
71
- <li>If you want to backup or restore any Adobe program that you have installed or patched with CCMaker or Adobe CC 2019 AIO Patcher, you can use the backup tool that is included in the package. Just run Backup.exe as administrator and select the program you want to backup or restore.</li>
72
- </ul>
73
- <h2>Conclusion</h2>
74
- <p>In conclusion, Adobe CC 2019 AIO Cracks 30-10-2018 [Full] is a package of cracks or patches that can activate all Adobe CC 2019 programs with one click. It is created by Zer0Cod3, and it contains two tools: CCMaker and Adobe CC 2019 AIO Patcher. It is easy, safe, reliable, effective, and permanent to use. It can help you enjoy all the features and benefits of Adobe products for free.</p> 3cee63e6c2<br />
75
- <br />
76
- <br />
spaces/1phancelerku/anime-remove-background/Cricket League Hack How to Unlock All Levels and Features with Unlimited Coins and Gems.md DELETED
@@ -1,121 +0,0 @@
-
- <h1>Cricket League Hack: How to Get Unlimited Coins and Gems for Free</h1>
- <p>Are you a fan of cricket and want to play an amazing mobile version of the sport? Do you want to compete with your friends and other players from around the world in quick two over matches? Do you want to unlock the dream team and collect over 25 characters with different skills and abilities? If you answered yes to any of these questions, then you should try Cricket League, a 3D multiplayer cricket sports game developed by Miniclip.com.</p>
- <h2>Introduction</h2>
- <p>In this article, we will tell you everything you need to know about Cricket League, a fast, fun, exciting and authentic real-time multiplayer cricket game. We will also show you how to hack Cricket League and get unlimited coins and gems for free, using two different methods: a modded APK file and an online generator tool. By using these hacks, you will be able to enjoy the game without any limitations or restrictions.</p>
- <h2>cricket league hack unlimited coins and gems download</h2><br /><p><b><b>DOWNLOAD</b> ===> <a href="https://jinyurl.com/2uNQNi">https://jinyurl.com/2uNQNi</a></b></p><br /><br />
- <h3>What is Cricket League?</h3>
- <p>Cricket League is a free online cricket game that you can download and play on your Android or iOS device. The game features easy to learn batting and bowling controls, realistic physics and graphics, and various game modes and locations. You can play quick two over matches against your friends or players around the world in just a few minutes. You can also create your own team and top the leagues by winning matches and earning coins. You can use the coins to buy new types of balls, such as Doosra, Sling, In/Out Swings, that can increase your chances of winning. You can also collect over 25 characters, each with their own strengths and weaknesses, and level them up to unlock new ways to play. You can travel all over the world playing against the best cricketers from the best pitches all over the world where the top ODI,T20 matches have taken place: Mumbai, Karachi, Adelaide, Dubai, Johannesburg, Dhaka, Melbourne, London. You can also unlock new locations to win even more coins.</p>
- <h3>Why do you need coins and gems in Cricket League?</h3>
- <p>Coins and gems are the two main currencies in Cricket League. You need coins to buy new balls, upgrade your characters, unlock new locations, and enter higher leagues. You need gems to buy premium characters, skip waiting times, and get extra rewards. Coins and gems are very important if you want to enjoy the game fully and have an edge over your opponents. However, earning coins and gems in the game can be very slow and tedious. You only get a small amount of coins for winning matches, and gems are very rare to find. You can also buy coins and gems with real money, but that can be very expensive and not everyone can afford it. That's why many players look for ways to hack Cricket League and get unlimited coins and gems for free.</p>
- <h2>How to hack Cricket League and get unlimited coins and gems?</h2>
- <p>There are two methods that you can use to hack Cricket League and get unlimited coins and gems for free: using a modded APK file or using an online generator tool. We will explain each method in detail below.</p>
- <h3>Method 1: Use a modded APK file</h3>
- <h4>What is a modded APK file?</h4>
- <p>A modded APK file is a modified version of the original APK file of the game. It has some changes or additions that can alter the gameplay or give you some advantages.</p> <h4>How to download and install a modded APK file for Cricket League?</h4>
- <p>To download and install a modded APK file for Cricket League, you need to follow these steps:</p>
- <ol>
- <li>Find a reliable source that offers a modded APK file for Cricket League. You can search on Google or use websites like APKPure, APKMirror, or ModAPKDown. Make sure that the modded APK file has the features that you want, such as unlimited coins and gems, unlocked characters, etc.</li>
- <li>Download the modded APK file to your device. You may need to enable the option to install apps from unknown sources in your device settings. This will allow you to install apps that are not from the official app store.</li>
- <li>Locate the downloaded modded APK file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.</li>
- <li>Launch the game and enjoy the hack. You should see that you have unlimited coins and gems in your account, and you can access all the features of the game without any restrictions.</li>
- </ol>
- <h4>Pros and cons of using a modded APK file for Cricket League</h4>
- <p>Using a modded APK file for Cricket League has some advantages and disadvantages that you should be aware of before using it. Here are some of them:</p>
- <table>
- <tr>
- <th>Pros</th>
- <th>Cons</th>
- </tr>
- <tr>
- <td>You can get unlimited coins and gems for free.</td>
- <td>You may risk getting banned from the game or losing your progress if the developers detect the hack.</td>
- </tr>
- <tr>
- <td>You can unlock all the characters, balls, locations, and leagues in the game.</td>
- <td>You may encounter some bugs or glitches in the game due to the modifications.</td>
- </tr>
- <tr>
- <td>You can have more fun and excitement playing the game without any limitations.</td>
- <td>You may lose the challenge and thrill of playing the game fairly and competitively.</td>
- </tr>
- </table>
- <h3>Method 2: Use an online generator tool</h3>
- <h4>What is an online generator tool?</h4>
- <p>An online generator tool is a website that can generate coins and gems for your Cricket League account. It does not require you to download or install anything on your device. It works by connecting to the game server and injecting some codes that can modify your account balance.</p>
- <h4>How to use an online generator tool for Cricket League?</h4>
- <p>To use an online generator tool for Cricket League, you need to follow these steps:</p>
- <p>cricket league mod apk unlimited money and gems<br />
- how to hack cricket league game and get free coins<br />
- cricket league cheat codes for android and ios devices<br />
- download cricket league hacked version with unlimited resources<br />
- cricket league hack tool online no survey no human verification<br />
- cricket league unlimited coins and gems apk download for free<br />
- best cricket league hacks and tips to win every match<br />
- cricket league hack generator 2023 working 100%<br />
- cricket league mod menu with unlimited features and options<br />
- cricket league hack apk latest version download 2023<br />
- cricket league hack no root no jailbreak required<br />
- cricket league unlimited coins and gems mod apk obb data<br />
- cricket league hack app download for android and iphone<br />
- cricket league hack without verification or password<br />
- cricket league unlimited money and gems glitch 2023<br />
- cricket league hack apk free download mediafire link<br />
- cricket league mod apk unlocked all premium features and levels<br />
- cricket league hack online free coins and gems generator<br />
- cricket league hack apk download for pc windows 10/8/7<br />
- cricket league unlimited coins and gems redeem code 2023<br />
- cricket league hack apk pure original file download<br />
- cricket league mod apk revdl rexdl apkpure<br />
- cricket league hack version download for android phone<br />
- cricket league unlimited coins and gems trick 2023<br />
- cricket league hack apk mirror direct download link<br />
- cricket league mod apk happymod with unlimited everything<br />
- cricket league hack ios download ipa file no jailbreak<br />
- cricket league unlimited coins and gems mod apk offline<br />
- cricket league hack 2023 new update download now<br />
- cricket league mod apk android 1 with unlimited resources</p>
- <ol>
- <li>Find a trustworthy website that offers an online generator tool for Cricket League. You can search on Google or use websites like HackCricketLeague.com, CricketLeagueCheats.com, or CricketLeagueGenerator.com. Make sure that the website is secure and has positive reviews from other users.</li>
- <li>Enter your username or email address that you use to play Cricket League. Choose your device platform (Android or iOS) and select the amount of coins and gems that you want to generate. You may also need to complete some verification steps, such as completing a survey or a captcha, to prove that you are not a robot.</li>
- <li>Click on the generate button and wait for the process to finish. The website will connect to the game server and add the coins and gems to your account.</li>
- <li>Open the game and check your account balance. You should see that you have received the coins and gems that you requested.</li>
- </ol>
- <h4>Pros and cons of using an online generator tool for Cricket League</h4>
- <p>Using an online generator tool for Cricket League has some advantages and disadvantages that you should be aware of before using it. Here are some of them:</p>
- <table>
- <tr>
- <th>Pros</th>
- <th>Cons</th>
- </tr>
- <tr>
- <td>You can get unlimited coins and gems for free.</td>
- <td>You may risk getting scammed or infected by malware if the website is not reliable or safe.</td>
- </tr>
- <tr>
- <td>You do not need to download or install anything on your device.</td>
- <td>You may need to complete some annoying verification steps, such as surveys or captchas, to access the tool.</td>
- </tr>
- <tr>
- <td>You can use it anytime and anywhere as long as you have an internet connection.</td>
- <td>You may not get the coins and gems instantly or at all if the tool is not working properly or updated regularly.</td>
- </tr>
- </table>
- <h2>Conclusion</h2>
- <p>In this article, we have shown you how to hack Cricket League and get unlimited coins and gems for free, using two different methods: a modded APK file and an online generator tool. We have also explained what these methods are, how to use them, and what are their pros and cons. We hope that you have found this article helpful and informative, and that you can now enjoy playing Cricket League without any limitations or restrictions. However, we also advise you to use these hacks responsibly and at your own risk, as they may violate the terms of service of the game or cause some issues with your device or account. We also recommend that you support the developers of the game by purchasing some coins and gems with real money if you can afford it, as they have worked hard to create this amazing game for you.</p>
- <h3>FAQs</h3>
- <p>Here are some frequently asked questions about Cricket League hack and their answers:</p>
- <ol>
- <li>Q: Is Cricket League hack safe to use?</li>
- <li>A: Cricket League hack is safe to use as long as you use a reliable source or website that offers a modded APK file or an online generator tool. However, there is always a possibility that the hack may not work properly or cause some problems with your device or account, so use it at your own risk.</li>
- <li>Q: Can I get banned from Cricket League for using a hack?</li>
- <li>A: There is a chance that you may get banned from Cricket League for using a hack, as it may violate the terms of service of the game or be detected by the anti-cheat system. To avoid getting banned, you should not use the hack too often or too blatantly, and you should not brag about it to other players. You should also have a backup account in case your main account gets banned.</li>
- <li>Q: How can I update Cricket League hack?</li>
- <li>A: To update Cricket League hack, you need to download and install the latest version of the modded APK file or visit the latest version of the online generator tool. You should always check for updates regularly, as the game may release new patches or features that may make the hack obsolete or incompatible.</li>
- <li>Q: How can I contact the developers of Cricket League hack?</li>
- <li>A: To contact the developers of Cricket League hack, you need to visit their website or social media pages and leave them a message or feedback. You can also report any bugs or issues that you encounter with the hack, or request any new features or improvements that you would like to see in the future.</li>
- <li>Q: How can I share Cricket League hack with my friends?</li>
- <li>A: To share Cricket League hack with your friends, you can send them the link to the website that offers the modded APK file or the online generator tool, or share it on your social media platforms. You can also invite them to play Cricket League with you and enjoy the game together.</li>
- </ol> 197e85843d<br />
- <br />
- <br />
spaces/44brabal/runwayml-stable-diffusion-v1-5/app.py DELETED
@@ -1,10 +0,0 @@
- import gradio as gr
-
- gr.Interface.load("models/runwayml/stable-diffusion-v1-5").launch()
-
- from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
-
- controlnet = ControlNetModel.from_pretrained("monster-labs/control_v1p_sd15_qrcode_monster")
- pipeline = StableDiffusionControlNetPipeline.from_pretrained(
-     "fill-in-base-model", controlnet=controlnet
- )
spaces/4Taps/SadTalker/src/face3d/util/generate_list.py DELETED
@@ -1,34 +0,0 @@
- """This script is to generate training list files for Deep3DFaceRecon_pytorch
- """
-
- import os
-
- # save path to training data
- def write_list(lms_list, imgs_list, msks_list, mode='train', save_folder='datalist', save_name=''):
-     save_path = os.path.join(save_folder, mode)
-     if not os.path.isdir(save_path):
-         os.makedirs(save_path)
-     with open(os.path.join(save_path, save_name + 'landmarks.txt'), 'w') as fd:
-         fd.writelines([i + '\n' for i in lms_list])
-
-     with open(os.path.join(save_path, save_name + 'images.txt'), 'w') as fd:
-         fd.writelines([i + '\n' for i in imgs_list])
-
-     with open(os.path.join(save_path, save_name + 'masks.txt'), 'w') as fd:
-         fd.writelines([i + '\n' for i in msks_list])
-
- # check if the path is valid
- def check_list(rlms_list, rimgs_list, rmsks_list):
-     lms_list, imgs_list, msks_list = [], [], []
-     for i in range(len(rlms_list)):
-         flag = 'false'
-         lm_path = rlms_list[i]
-         im_path = rimgs_list[i]
-         msk_path = rmsks_list[i]
-         if os.path.isfile(lm_path) and os.path.isfile(im_path) and os.path.isfile(msk_path):
-             flag = 'true'
-             lms_list.append(rlms_list[i])
-             imgs_list.append(rimgs_list[i])
-             msks_list.append(rmsks_list[i])
-         print(i, rlms_list[i], flag)
-     return lms_list, imgs_list, msks_list
spaces/801artistry/RVC801/demucs/tasnet.py DELETED
@@ -1,452 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates.
- # All rights reserved.
- #
- # This source code is licensed under the license found in the
- # LICENSE file in the root directory of this source tree.
- #
- # Created on 2018/12
- # Author: Kaituo XU
- # Modified on 2019/11 by Alexandre Defossez, added support for multiple output channels
- # Here is the original license:
- # The MIT License (MIT)
- #
- # Copyright (c) 2018 Kaituo XU
- #
- # Permission is hereby granted, free of charge, to any person obtaining a copy
- # of this software and associated documentation files (the "Software"), to deal
- # in the Software without restriction, including without limitation the rights
- # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
- # copies of the Software, and to permit persons to whom the Software is
- # furnished to do so, subject to the following conditions:
- #
- # The above copyright notice and this permission notice shall be included in all
- # copies or substantial portions of the Software.
- #
- # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- # SOFTWARE.
-
- import math
-
- import torch
- import torch.nn as nn
- import torch.nn.functional as F
-
- from .utils import capture_init
-
- EPS = 1e-8
-
-
- def overlap_and_add(signal, frame_step):
-     outer_dimensions = signal.size()[:-2]
-     frames, frame_length = signal.size()[-2:]
-
-     subframe_length = math.gcd(frame_length, frame_step)  # gcd=Greatest Common Divisor
-     subframe_step = frame_step // subframe_length
-     subframes_per_frame = frame_length // subframe_length
-     output_size = frame_step * (frames - 1) + frame_length
-     output_subframes = output_size // subframe_length
-
-     subframe_signal = signal.view(*outer_dimensions, -1, subframe_length)
-
-     frame = torch.arange(0, output_subframes,
-                          device=signal.device).unfold(0, subframes_per_frame, subframe_step)
-     frame = frame.long()  # signal may in GPU or CPU
-     frame = frame.contiguous().view(-1)
-
-     result = signal.new_zeros(*outer_dimensions, output_subframes, subframe_length)
-     result.index_add_(-2, frame, subframe_signal)
-     result = result.view(*outer_dimensions, -1)
-     return result
-
-
- class ConvTasNet(nn.Module):
-     @capture_init
-     def __init__(self,
-                  sources,
-                  N=256,
-                  L=20,
-                  B=256,
-                  H=512,
-                  P=3,
-                  X=8,
-                  R=4,
-                  audio_channels=2,
-                  norm_type="gLN",
-                  causal=False,
-                  mask_nonlinear='relu',
-                  samplerate=44100,
-                  segment_length=44100 * 2 * 4):
-         """
-         Args:
-             sources: list of sources
-             N: Number of filters in autoencoder
-             L: Length of the filters (in samples)
-             B: Number of channels in bottleneck 1 × 1-conv block
-             H: Number of channels in convolutional blocks
-             P: Kernel size in convolutional blocks
-             X: Number of convolutional blocks in each repeat
-             R: Number of repeats
-             norm_type: BN, gLN, cLN
-             causal: causal or non-causal
-             mask_nonlinear: use which non-linear function to generate mask
-         """
-         super(ConvTasNet, self).__init__()
-         # Hyper-parameter
-         self.sources = sources
-         self.C = len(sources)
-         self.N, self.L, self.B, self.H, self.P, self.X, self.R = N, L, B, H, P, X, R
-         self.norm_type = norm_type
-         self.causal = causal
-         self.mask_nonlinear = mask_nonlinear
-         self.audio_channels = audio_channels
-         self.samplerate = samplerate
-         self.segment_length = segment_length
-         # Components
-         self.encoder = Encoder(L, N, audio_channels)
-         self.separator = TemporalConvNet(
-             N, B, H, P, X, R, self.C, norm_type, causal, mask_nonlinear)
-         self.decoder = Decoder(N, L, audio_channels)
-         # init
-         for p in self.parameters():
-             if p.dim() > 1:
-                 nn.init.xavier_normal_(p)
-
-     def valid_length(self, length):
-         return length
-
-     def forward(self, mixture):
-         """
-         Args:
-             mixture: [M, T], M is batch size, T is #samples
-         Returns:
-             est_source: [M, C, T]
-         """
-         mixture_w = self.encoder(mixture)
-         est_mask = self.separator(mixture_w)
-         est_source = self.decoder(mixture_w, est_mask)
-
-         # T changed after conv1d in encoder, fix it here
-         T_origin = mixture.size(-1)
-         T_conv = est_source.size(-1)
-         est_source = F.pad(est_source, (0, T_origin - T_conv))
-         return est_source
-
-
- class Encoder(nn.Module):
-     """Estimation of the nonnegative mixture weight by a 1-D conv layer.
-     """
-     def __init__(self, L, N, audio_channels):
-         super(Encoder, self).__init__()
-         # Hyper-parameter
-         self.L, self.N = L, N
-         # Components
-         # 50% overlap
-         self.conv1d_U = nn.Conv1d(audio_channels, N, kernel_size=L, stride=L // 2, bias=False)
-
-     def forward(self, mixture):
-         """
-         Args:
-             mixture: [M, T], M is batch size, T is #samples
-         Returns:
-             mixture_w: [M, N, K], where K = (T-L)/(L/2)+1 = 2T/L-1
-         """
-         mixture_w = F.relu(self.conv1d_U(mixture))  # [M, N, K]
-         return mixture_w
-
-
- class Decoder(nn.Module):
-     def __init__(self, N, L, audio_channels):
-         super(Decoder, self).__init__()
-         # Hyper-parameter
-         self.N, self.L = N, L
-         self.audio_channels = audio_channels
-         # Components
-         self.basis_signals = nn.Linear(N, audio_channels * L, bias=False)
-
-     def forward(self, mixture_w, est_mask):
-         """
-         Args:
-             mixture_w: [M, N, K]
-             est_mask: [M, C, N, K]
-         Returns:
-             est_source: [M, C, T]
-         """
-         # D = W * M
-         source_w = torch.unsqueeze(mixture_w, 1) * est_mask  # [M, C, N, K]
-         source_w = torch.transpose(source_w, 2, 3)  # [M, C, K, N]
-         # S = DV
-         est_source = self.basis_signals(source_w)  # [M, C, K, ac * L]
-         m, c, k, _ = est_source.size()
-         est_source = est_source.view(m, c, k, self.audio_channels, -1).transpose(2, 3).contiguous()
-         est_source = overlap_and_add(est_source, self.L // 2)  # M x C x ac x T
-         return est_source
-
-
- class TemporalConvNet(nn.Module):
-     def __init__(self, N, B, H, P, X, R, C, norm_type="gLN", causal=False, mask_nonlinear='relu'):
-         """
-         Args:
-             N: Number of filters in autoencoder
-             B: Number of channels in bottleneck 1 × 1-conv block
-             H: Number of channels in convolutional blocks
-             P: Kernel size in convolutional blocks
-             X: Number of convolutional blocks in each repeat
-             R: Number of repeats
-             C: Number of speakers
-             norm_type: BN, gLN, cLN
-             causal: causal or non-causal
-             mask_nonlinear: use which non-linear function to generate mask
-         """
-         super(TemporalConvNet, self).__init__()
-         # Hyper-parameter
-         self.C = C
-         self.mask_nonlinear = mask_nonlinear
-         # Components
-         # [M, N, K] -> [M, N, K]
-         layer_norm = ChannelwiseLayerNorm(N)
-         # [M, N, K] -> [M, B, K]
-         bottleneck_conv1x1 = nn.Conv1d(N, B, 1, bias=False)
-         # [M, B, K] -> [M, B, K]
-         repeats = []
-         for r in range(R):
-             blocks = []
-             for x in range(X):
-                 dilation = 2**x
-                 padding = (P - 1) * dilation if causal else (P - 1) * dilation // 2
-                 blocks += [
-                     TemporalBlock(B,
-                                   H,
-                                   P,
-                                   stride=1,
-                                   padding=padding,
-                                   dilation=dilation,
-                                   norm_type=norm_type,
-                                   causal=causal)
-                 ]
-             repeats += [nn.Sequential(*blocks)]
-         temporal_conv_net = nn.Sequential(*repeats)
-         # [M, B, K] -> [M, C*N, K]
-         mask_conv1x1 = nn.Conv1d(B, C * N, 1, bias=False)
-         # Put together
-         self.network = nn.Sequential(layer_norm, bottleneck_conv1x1, temporal_conv_net,
-                                      mask_conv1x1)
-
-     def forward(self, mixture_w):
-         """
-         Keep this API same with TasNet
-         Args:
-             mixture_w: [M, N, K], M is batch size
-         returns:
-             est_mask: [M, C, N, K]
-         """
-         M, N, K = mixture_w.size()
-         score = self.network(mixture_w)  # [M, N, K] -> [M, C*N, K]
-         score = score.view(M, self.C, N, K)  # [M, C*N, K] -> [M, C, N, K]
-         if self.mask_nonlinear == 'softmax':
-             est_mask = F.softmax(score, dim=1)
-         elif self.mask_nonlinear == 'relu':
-             est_mask = F.relu(score)
-         else:
-             raise ValueError("Unsupported mask non-linear function")
-         return est_mask
-
-
- class TemporalBlock(nn.Module):
-     def __init__(self,
-                  in_channels,
-                  out_channels,
-                  kernel_size,
-                  stride,
-                  padding,
-                  dilation,
-                  norm_type="gLN",
-                  causal=False):
-         super(TemporalBlock, self).__init__()
-         # [M, B, K] -> [M, H, K]
-         conv1x1 = nn.Conv1d(in_channels, out_channels, 1, bias=False)
-         prelu = nn.PReLU()
-         norm = chose_norm(norm_type, out_channels)
-         # [M, H, K] -> [M, B, K]
-         dsconv = DepthwiseSeparableConv(out_channels, in_channels, kernel_size, stride, padding,
-                                         dilation, norm_type, causal)
-         # Put together
-         self.net = nn.Sequential(conv1x1, prelu, norm, dsconv)
-
-     def forward(self, x):
-         """
-         Args:
-             x: [M, B, K]
-         Returns:
-             [M, B, K]
-         """
-         residual = x
-         out = self.net(x)
-         # TODO: when P = 3 here works fine, but when P = 2 maybe need to pad?
-         return out + residual  # look like w/o F.relu is better than w/ F.relu
-         # return F.relu(out + residual)
-
-
- class DepthwiseSeparableConv(nn.Module):
-     def __init__(self,
-                  in_channels,
-                  out_channels,
-                  kernel_size,
-                  stride,
-                  padding,
-                  dilation,
-                  norm_type="gLN",
-                  causal=False):
-         super(DepthwiseSeparableConv, self).__init__()
-         # Use `groups` option to implement depthwise convolution
-         # [M, H, K] -> [M, H, K]
-         depthwise_conv = nn.Conv1d(in_channels,
-                                    in_channels,
-                                    kernel_size,
-                                    stride=stride,
-                                    padding=padding,
-                                    dilation=dilation,
-                                    groups=in_channels,
-                                    bias=False)
-         if causal:
-             chomp = Chomp1d(padding)
-         prelu = nn.PReLU()
-         norm = chose_norm(norm_type, in_channels)
-         # [M, H, K] -> [M, B, K]
-         pointwise_conv = nn.Conv1d(in_channels, out_channels, 1, bias=False)
-         # Put together
-         if causal:
-             self.net = nn.Sequential(depthwise_conv, chomp, prelu, norm, pointwise_conv)
324
- else:
325
- self.net = nn.Sequential(depthwise_conv, prelu, norm, pointwise_conv)
326
-
327
- def forward(self, x):
328
- """
329
- Args:
330
- x: [M, H, K]
331
- Returns:
332
- result: [M, B, K]
333
- """
334
- return self.net(x)
335
-
336
-
337
- class Chomp1d(nn.Module):
338
- """To ensure the output length is the same as the input.
339
- """
340
- def __init__(self, chomp_size):
341
- super(Chomp1d, self).__init__()
342
- self.chomp_size = chomp_size
343
-
344
- def forward(self, x):
345
- """
346
- Args:
347
- x: [M, H, Kpad]
348
- Returns:
349
- [M, H, K]
350
- """
351
- return x[:, :, :-self.chomp_size].contiguous()
352
-
353
-
354
- def chose_norm(norm_type, channel_size):
355
- """The input of normlization will be (M, C, K), where M is batch size,
356
- C is channel size and K is sequence length.
357
- """
358
- if norm_type == "gLN":
359
- return GlobalLayerNorm(channel_size)
360
- elif norm_type == "cLN":
361
- return ChannelwiseLayerNorm(channel_size)
362
- elif norm_type == "id":
363
- return nn.Identity()
364
- else: # norm_type == "BN":
365
- # Given input (M, C, K), nn.BatchNorm1d(C) will accumulate statics
366
- # along M and K, so this BN usage is right.
367
- return nn.BatchNorm1d(channel_size)
368
-
369
-
370
- # TODO: Use nn.LayerNorm to impl cLN to speed up
371
- class ChannelwiseLayerNorm(nn.Module):
372
- """Channel-wise Layer Normalization (cLN)"""
373
- def __init__(self, channel_size):
374
- super(ChannelwiseLayerNorm, self).__init__()
375
- self.gamma = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1]
376
- self.beta = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1]
377
- self.reset_parameters()
378
-
379
- def reset_parameters(self):
380
- self.gamma.data.fill_(1)
381
- self.beta.data.zero_()
382
-
383
- def forward(self, y):
384
- """
385
- Args:
386
- y: [M, N, K], M is batch size, N is channel size, K is length
387
- Returns:
388
- cLN_y: [M, N, K]
389
- """
390
- mean = torch.mean(y, dim=1, keepdim=True) # [M, 1, K]
391
- var = torch.var(y, dim=1, keepdim=True, unbiased=False) # [M, 1, K]
392
- cLN_y = self.gamma * (y - mean) / torch.pow(var + EPS, 0.5) + self.beta
393
- return cLN_y
394
-
395
-
396
- class GlobalLayerNorm(nn.Module):
397
- """Global Layer Normalization (gLN)"""
398
- def __init__(self, channel_size):
399
- super(GlobalLayerNorm, self).__init__()
400
- self.gamma = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1]
401
- self.beta = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1]
402
- self.reset_parameters()
403
-
404
- def reset_parameters(self):
405
- self.gamma.data.fill_(1)
406
- self.beta.data.zero_()
407
-
408
- def forward(self, y):
409
- """
410
- Args:
411
- y: [M, N, K], M is batch size, N is channel size, K is length
412
- Returns:
413
- gLN_y: [M, N, K]
414
- """
415
- # TODO: in torch 1.0, torch.mean() support dim list
416
- mean = y.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True) # [M, 1, 1]
417
- var = (torch.pow(y - mean, 2)).mean(dim=1, keepdim=True).mean(dim=2, keepdim=True)
418
- gLN_y = self.gamma * (y - mean) / torch.pow(var + EPS, 0.5) + self.beta
419
- return gLN_y
420
-
421
-
422
- if __name__ == "__main__":
423
- torch.manual_seed(123)
424
- M, N, L, T = 2, 3, 4, 12
425
- K = 2 * T // L - 1
426
- B, H, P, X, R, C, norm_type, causal = 2, 3, 3, 3, 2, 2, "gLN", False
427
- mixture = torch.randint(3, (M, T))
428
- # test Encoder
429
- encoder = Encoder(L, N)
430
- encoder.conv1d_U.weight.data = torch.randint(2, encoder.conv1d_U.weight.size())
431
- mixture_w = encoder(mixture)
432
- print('mixture', mixture)
433
- print('U', encoder.conv1d_U.weight)
434
- print('mixture_w', mixture_w)
435
- print('mixture_w size', mixture_w.size())
436
-
437
- # test TemporalConvNet
438
- separator = TemporalConvNet(N, B, H, P, X, R, C, norm_type=norm_type, causal=causal)
439
- est_mask = separator(mixture_w)
440
- print('est_mask', est_mask)
441
-
442
- # test Decoder
443
- decoder = Decoder(N, L)
444
- est_mask = torch.randint(2, (B, K, C, N))
445
- est_source = decoder(mixture_w, est_mask)
446
- print('est_source', est_source)
447
-
448
- # test Conv-TasNet
449
- conv_tasnet = ConvTasNet(N, L, B, H, P, X, R, C, norm_type=norm_type)
450
- est_source = conv_tasnet(mixture)
451
- print('est_source', est_source)
452
- print('est_source size', est_source.size())
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/801artistry/RVC801/demucs/train.py DELETED
@@ -1,127 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import sys
-
-import tqdm
-from torch.utils.data import DataLoader
-from torch.utils.data.distributed import DistributedSampler
-
-from .utils import apply_model, average_metric, center_trim
-
-
-def train_model(epoch,
-                dataset,
-                model,
-                criterion,
-                optimizer,
-                augment,
-                quantizer=None,
-                diffq=0,
-                repeat=1,
-                device="cpu",
-                seed=None,
-                workers=4,
-                world_size=1,
-                batch_size=16):
-
-    if world_size > 1:
-        sampler = DistributedSampler(dataset)
-        sampler_epoch = epoch * repeat
-        if seed is not None:
-            sampler_epoch += seed * 1000
-        sampler.set_epoch(sampler_epoch)
-        batch_size //= world_size
-        loader = DataLoader(dataset, batch_size=batch_size, sampler=sampler, num_workers=workers)
-    else:
-        loader = DataLoader(dataset, batch_size=batch_size, num_workers=workers, shuffle=True)
-    current_loss = 0
-    model_size = 0
-    for repetition in range(repeat):
-        tq = tqdm.tqdm(loader,
-                       ncols=120,
-                       desc=f"[{epoch:03d}] train ({repetition + 1}/{repeat})",
-                       leave=False,
-                       file=sys.stdout,
-                       unit=" batch")
-        total_loss = 0
-        for idx, sources in enumerate(tq):
-            if len(sources) < batch_size:
-                # skip incomplete batches so that augment.Remix works properly
-                continue
-            sources = sources.to(device)
-            sources = augment(sources)
-            mix = sources.sum(dim=1)
-
-            estimates = model(mix)
-            sources = center_trim(sources, estimates)
-            loss = criterion(estimates, sources)
-            model_size = 0
-            if quantizer is not None:
-                model_size = quantizer.model_size()
-
-            train_loss = loss + diffq * model_size
-            train_loss.backward()
-            grad_norm = 0
-            for p in model.parameters():
-                if p.grad is not None:
-                    grad_norm += p.grad.data.norm()**2
-            grad_norm = grad_norm**0.5
-            optimizer.step()
-            optimizer.zero_grad()
-
-            if quantizer is not None:
-                model_size = model_size.item()
-
-            total_loss += loss.item()
-            current_loss = total_loss / (1 + idx)
-            tq.set_postfix(loss=f"{current_loss:.4f}", ms=f"{model_size:.2f}",
-                           grad=f"{grad_norm:.5f}")
-
-            # free some space before the next round
-            del sources, mix, estimates, loss, train_loss
-
-        if world_size > 1:
-            sampler.epoch += 1
-
-    if world_size > 1:
-        current_loss = average_metric(current_loss)
-    return current_loss, model_size
-
-
-def validate_model(epoch,
-                   dataset,
-                   model,
-                   criterion,
-                   device="cpu",
-                   rank=0,
-                   world_size=1,
-                   shifts=0,
-                   overlap=0.25,
-                   split=False):
-    indexes = range(rank, len(dataset), world_size)
-    tq = tqdm.tqdm(indexes,
-                   ncols=120,
-                   desc=f"[{epoch:03d}] valid",
-                   leave=False,
-                   file=sys.stdout,
-                   unit=" track")
-    current_loss = 0
-    for index in tq:
-        streams = dataset[index]
-        # first five minutes to avoid OOM on --upsample models
-        streams = streams[..., :15_000_000]
-        streams = streams.to(device)
-        sources = streams[1:]
-        mix = streams[0]
-        estimates = apply_model(model, mix, shifts=shifts, split=split, overlap=overlap)
-        loss = criterion(estimates, sources)
-        current_loss += loss.item() / len(indexes)
-        del estimates, streams, sources
-
-    if world_size > 1:
-        current_loss = average_metric(current_loss, len(indexes))
-    return current_loss
spaces/AIConsultant/MusicGen/tests/data/test_audio.py DELETED
@@ -1,239 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from itertools import product
-import random
-
-import numpy as np
-import torch
-import torchaudio
-
-from audiocraft.data.audio import audio_info, audio_read, audio_write, _av_read
-
-from ..common_utils import TempDirMixin, get_white_noise, save_wav
-
-
-class TestInfo(TempDirMixin):
-
-    def test_info_mp3(self):
-        sample_rates = [8000, 16_000]
-        channels = [1, 2]
-        duration = 1.
-        for sample_rate, ch in product(sample_rates, channels):
-            wav = get_white_noise(ch, int(sample_rate * duration))
-            path = self.get_temp_path('sample_wav.mp3')
-            save_wav(path, wav, sample_rate)
-            info = audio_info(path)
-            assert info.sample_rate == sample_rate
-            assert info.channels == ch
-            # we cannot trust torchaudio for num_frames, so we don't check
-
-    def _test_info_format(self, ext: str):
-        sample_rates = [8000, 16_000]
-        channels = [1, 2]
-        duration = 1.
-        for sample_rate, ch in product(sample_rates, channels):
-            n_frames = int(sample_rate * duration)
-            wav = get_white_noise(ch, n_frames)
-            path = self.get_temp_path(f'sample_wav{ext}')
-            save_wav(path, wav, sample_rate)
-            info = audio_info(path)
-            assert info.sample_rate == sample_rate
-            assert info.channels == ch
-            assert np.isclose(info.duration, duration, atol=1e-5)
-
-    def test_info_wav(self):
-        self._test_info_format('.wav')
-
-    def test_info_flac(self):
-        self._test_info_format('.flac')
-
-    def test_info_ogg(self):
-        self._test_info_format('.ogg')
-
-    def test_info_m4a(self):
-        # TODO: generate m4a file programmatically
-        # self._test_info_format('.m4a')
-        pass
-
-
-class TestRead(TempDirMixin):
-
-    def test_read_full_wav(self):
-        sample_rates = [8000, 16_000]
-        channels = [1, 2]
-        duration = 1.
-        for sample_rate, ch in product(sample_rates, channels):
-            n_frames = int(sample_rate * duration)
-            wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99)
-            path = self.get_temp_path('sample_wav.wav')
-            save_wav(path, wav, sample_rate)
-            read_wav, read_sr = audio_read(path)
-            assert read_sr == sample_rate
-            assert read_wav.shape[0] == wav.shape[0]
-            assert read_wav.shape[1] == wav.shape[1]
-            assert torch.allclose(read_wav, wav, rtol=1e-03, atol=1e-04)
-
-    def test_read_partial_wav(self):
-        sample_rates = [8000, 16_000]
-        channels = [1, 2]
-        duration = 1.
-        read_duration = torch.rand(1).item()
-        for sample_rate, ch in product(sample_rates, channels):
-            n_frames = int(sample_rate * duration)
-            read_frames = int(sample_rate * read_duration)
-            wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99)
-            path = self.get_temp_path('sample_wav.wav')
-            save_wav(path, wav, sample_rate)
-            read_wav, read_sr = audio_read(path, 0, read_duration)
-            assert read_sr == sample_rate
-            assert read_wav.shape[0] == wav.shape[0]
-            assert read_wav.shape[1] == read_frames
-            assert torch.allclose(read_wav[..., 0:read_frames], wav[..., 0:read_frames], rtol=1e-03, atol=1e-04)
-
-    def test_read_seek_time_wav(self):
-        sample_rates = [8000, 16_000]
-        channels = [1, 2]
-        duration = 1.
-        read_duration = 1.
-        for sample_rate, ch in product(sample_rates, channels):
-            n_frames = int(sample_rate * duration)
-            wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99)
-            path = self.get_temp_path('sample_wav.wav')
-            save_wav(path, wav, sample_rate)
-            seek_time = torch.rand(1).item()
-            read_wav, read_sr = audio_read(path, seek_time, read_duration)
-            seek_frames = int(sample_rate * seek_time)
-            expected_frames = n_frames - seek_frames
-            assert read_sr == sample_rate
-            assert read_wav.shape[0] == wav.shape[0]
-            assert read_wav.shape[1] == expected_frames
-            assert torch.allclose(read_wav, wav[..., seek_frames:], rtol=1e-03, atol=1e-04)
-
-    def test_read_seek_time_wav_padded(self):
-        sample_rates = [8000, 16_000]
-        channels = [1, 2]
-        duration = 1.
-        read_duration = 1.
-        for sample_rate, ch in product(sample_rates, channels):
-            n_frames = int(sample_rate * duration)
-            read_frames = int(sample_rate * read_duration)
-            wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99)
-            path = self.get_temp_path('sample_wav.wav')
-            save_wav(path, wav, sample_rate)
-            seek_time = torch.rand(1).item()
-            seek_frames = int(sample_rate * seek_time)
-            expected_frames = n_frames - seek_frames
-            read_wav, read_sr = audio_read(path, seek_time, read_duration, pad=True)
-            expected_pad_wav = torch.zeros(wav.shape[0], read_frames - expected_frames)
-            assert read_sr == sample_rate
-            assert read_wav.shape[0] == wav.shape[0]
-            assert read_wav.shape[1] == read_frames
-            assert torch.allclose(read_wav[..., :expected_frames], wav[..., seek_frames:], rtol=1e-03, atol=1e-04)
-            assert torch.allclose(read_wav[..., expected_frames:], expected_pad_wav)
-
-
-class TestAvRead(TempDirMixin):
-
-    def test_avread_seek_base(self):
-        sample_rates = [8000, 16_000]
-        channels = [1, 2]
-        duration = 2.
-        for sample_rate, ch in product(sample_rates, channels):
-            n_frames = int(sample_rate * duration)
-            wav = get_white_noise(ch, n_frames)
-            path = self.get_temp_path(f'reference_a_{sample_rate}_{ch}.wav')
-            save_wav(path, wav, sample_rate)
-            for _ in range(100):
-                # seek will always load a full duration segment in the file
-                seek_time = random.uniform(0.0, 1.0)
-                seek_duration = random.uniform(0.001, 1.0)
-                read_wav, read_sr = _av_read(path, seek_time, seek_duration)
-                assert read_sr == sample_rate
-                assert read_wav.shape[0] == wav.shape[0]
-                assert read_wav.shape[-1] == int(seek_duration * sample_rate)
-
-    def test_avread_seek_partial(self):
-        sample_rates = [8000, 16_000]
-        channels = [1, 2]
-        duration = 1.
-        for sample_rate, ch in product(sample_rates, channels):
-            n_frames = int(sample_rate * duration)
-            wav = get_white_noise(ch, n_frames)
-            path = self.get_temp_path(f'reference_b_{sample_rate}_{ch}.wav')
-            save_wav(path, wav, sample_rate)
-            for _ in range(100):
-                # seek will always load a partial segment
-                seek_time = random.uniform(0.5, 1.)
-                seek_duration = 1.
-                expected_num_frames = n_frames - int(seek_time * sample_rate)
-                read_wav, read_sr = _av_read(path, seek_time, seek_duration)
-                assert read_sr == sample_rate
-                assert read_wav.shape[0] == wav.shape[0]
-                assert read_wav.shape[-1] == expected_num_frames
-
-    def test_avread_seek_outofbound(self):
-        sample_rates = [8000, 16_000]
-        channels = [1, 2]
-        duration = 1.
-        for sample_rate, ch in product(sample_rates, channels):
-            n_frames = int(sample_rate * duration)
-            wav = get_white_noise(ch, n_frames)
-            path = self.get_temp_path(f'reference_c_{sample_rate}_{ch}.wav')
-            save_wav(path, wav, sample_rate)
-            seek_time = 1.5
-            read_wav, read_sr = _av_read(path, seek_time, 1.)
-            assert read_sr == sample_rate
-            assert read_wav.shape[0] == wav.shape[0]
-            assert read_wav.shape[-1] == 0
-
-    def test_avread_seek_edge(self):
-        sample_rates = [8000, 16_000]
-        # some of these values will have
-        # int(((frames - 1) / sample_rate) * sample_rate) != (frames - 1)
-        n_frames = [1000, 1001, 1002]
-        channels = [1, 2]
-        for sample_rate, ch, frames in product(sample_rates, channels, n_frames):
-            duration = frames / sample_rate
-            wav = get_white_noise(ch, frames)
-            path = self.get_temp_path(f'reference_d_{sample_rate}_{ch}.wav')
-            save_wav(path, wav, sample_rate)
-            seek_time = (frames - 1) / sample_rate
-            seek_frames = int(seek_time * sample_rate)
-            read_wav, read_sr = _av_read(path, seek_time, duration)
-            assert read_sr == sample_rate
-            assert read_wav.shape[0] == wav.shape[0]
-            assert read_wav.shape[-1] == (frames - seek_frames)
-
-
-class TestAudioWrite(TempDirMixin):
-
-    def test_audio_write_wav(self):
-        torch.manual_seed(1234)
-        sample_rates = [8000, 16_000]
-        n_frames = [1000, 1001, 1002]
-        channels = [1, 2]
-        strategies = ["peak", "clip", "rms"]
-        formats = ["wav", "mp3"]
-        for sample_rate, ch, frames in product(sample_rates, channels, n_frames):
-            for format_, strategy in product(formats, strategies):
-                wav = get_white_noise(ch, frames)
-                path = self.get_temp_path(f'pred_{sample_rate}_{ch}')
-                audio_write(path, wav, sample_rate, format_, strategy=strategy)
-                read_wav, read_sr = torchaudio.load(f'{path}.{format_}')
-                if format_ == "wav":
-                    assert read_wav.shape == wav.shape
-
-                if format_ == "wav" and strategy in ["peak", "rms"]:
-                    rescaled_read_wav = read_wav / read_wav.abs().max() * wav.abs().max()
-                    # for a Gaussian, the typical max scale will be less than ~5x the std.
-                    # The error when writing to disk will be ~ 1/2**15, and when rescaling, 5x that.
-                    # For the RMS target, rescaling leaves more headroom by default, leading
-                    # to a 20x rescaling typically
-                    atol = (5 if strategy == "peak" else 20) / 2**15
-                    delta = (rescaled_read_wav - wav).abs().max()
-                    assert torch.allclose(wav, rescaled_read_wav, rtol=0, atol=atol), (delta, atol)
-            formats = ["wav"]  # faster unit tests
spaces/AIGC-Audio/Make_An_Audio/ldm/modules/attention.py DELETED
@@ -1,261 +0,0 @@
-from inspect import isfunction
-import math
-import torch
-import torch.nn.functional as F
-from torch import nn, einsum
-from einops import rearrange, repeat
-
-from ldm.modules.diffusionmodules.util import checkpoint
-
-
-def exists(val):
-    return val is not None
-
-
-def uniq(arr):
-    return {el: True for el in arr}.keys()
-
-
-def default(val, d):
-    if exists(val):
-        return val
-    return d() if isfunction(d) else d
-
-
-def max_neg_value(t):
-    return -torch.finfo(t.dtype).max
-
-
-def init_(tensor):
-    dim = tensor.shape[-1]
-    std = 1 / math.sqrt(dim)
-    tensor.uniform_(-std, std)
-    return tensor
-
-
-# feedforward
-class GEGLU(nn.Module):
-    def __init__(self, dim_in, dim_out):
-        super().__init__()
-        self.proj = nn.Linear(dim_in, dim_out * 2)
-
-    def forward(self, x):
-        x, gate = self.proj(x).chunk(2, dim=-1)
-        return x * F.gelu(gate)
-
-
-class FeedForward(nn.Module):
-    def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.):
-        super().__init__()
-        inner_dim = int(dim * mult)
-        dim_out = default(dim_out, dim)
-        project_in = nn.Sequential(
-            nn.Linear(dim, inner_dim),
-            nn.GELU()
-        ) if not glu else GEGLU(dim, inner_dim)
-
-        self.net = nn.Sequential(
-            project_in,
-            nn.Dropout(dropout),
-            nn.Linear(inner_dim, dim_out)
-        )
-
-    def forward(self, x):
-        return self.net(x)
-
-
-def zero_module(module):
-    """
-    Zero out the parameters of a module and return it.
-    """
-    for p in module.parameters():
-        p.detach().zero_()
-    return module
-
-
-def Normalize(in_channels):
-    return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True)
-
-
-class LinearAttention(nn.Module):
-    def __init__(self, dim, heads=4, dim_head=32):
-        super().__init__()
-        self.heads = heads
-        hidden_dim = dim_head * heads
-        self.to_qkv = nn.Conv2d(dim, hidden_dim * 3, 1, bias=False)
-        self.to_out = nn.Conv2d(hidden_dim, dim, 1)
-
-    def forward(self, x):
-        b, c, h, w = x.shape
-        qkv = self.to_qkv(x)
-        q, k, v = rearrange(qkv, 'b (qkv heads c) h w -> qkv b heads c (h w)', heads=self.heads, qkv=3)
-        k = k.softmax(dim=-1)
-        context = torch.einsum('bhdn,bhen->bhde', k, v)
-        out = torch.einsum('bhde,bhdn->bhen', context, q)
-        out = rearrange(out, 'b heads c (h w) -> b (heads c) h w', heads=self.heads, h=h, w=w)
-        return self.to_out(out)
-
-
-class SpatialSelfAttention(nn.Module):
-    def __init__(self, in_channels):
-        super().__init__()
-        self.in_channels = in_channels
-
-        self.norm = Normalize(in_channels)
-        self.q = torch.nn.Conv2d(in_channels,
-                                 in_channels,
-                                 kernel_size=1,
-                                 stride=1,
-                                 padding=0)
-        self.k = torch.nn.Conv2d(in_channels,
-                                 in_channels,
-                                 kernel_size=1,
-                                 stride=1,
-                                 padding=0)
-        self.v = torch.nn.Conv2d(in_channels,
-                                 in_channels,
-                                 kernel_size=1,
-                                 stride=1,
-                                 padding=0)
-        self.proj_out = torch.nn.Conv2d(in_channels,
-                                        in_channels,
-                                        kernel_size=1,
-                                        stride=1,
-                                        padding=0)
-
-    def forward(self, x):
-        h_ = x
-        h_ = self.norm(h_)
-        q = self.q(h_)
-        k = self.k(h_)
-        v = self.v(h_)
-
-        # compute attention
-        b, c, h, w = q.shape
-        q = rearrange(q, 'b c h w -> b (h w) c')
-        k = rearrange(k, 'b c h w -> b c (h w)')
-        w_ = torch.einsum('bij,bjk->bik', q, k)
-
-        w_ = w_ * (int(c)**(-0.5))
-        w_ = torch.nn.functional.softmax(w_, dim=2)
-
-        # attend to values
-        v = rearrange(v, 'b c h w -> b c (h w)')
-        w_ = rearrange(w_, 'b i j -> b j i')
-        h_ = torch.einsum('bij,bjk->bik', v, w_)
-        h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h)
-        h_ = self.proj_out(h_)
-
-        return x + h_
-
-
-class CrossAttention(nn.Module):
-    def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.):  # if context_dim is set, this is no longer self-attention
-        super().__init__()
-        inner_dim = dim_head * heads  # inner_dim == SpatialTransformer.model_channels
-        context_dim = default(context_dim, query_dim)
-
-        self.scale = dim_head ** -0.5
-        self.heads = heads
-
-        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
-        self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
-        self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
-
-        self.to_out = nn.Sequential(
-            nn.Linear(inner_dim, query_dim),
-            nn.Dropout(dropout)
-        )
-
-    def forward(self, x, context=None, mask=None):  # x: (b,h*w,c), context: (b,seq_len,context_dim)
-        h = self.heads
-
-        q = self.to_q(x)  # q: (b,h*w,inner_dim)
-        context = default(context, x)
-        k = self.to_k(context)  # (b,seq_len,inner_dim)
-        v = self.to_v(context)  # (b,seq_len,inner_dim)
-
-        q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))  # n is seq_len for k and v
-
-        sim = einsum('b i d, b j d -> b i j', q, k) * self.scale  # (b*head,h*w,seq_len)
-
-        if exists(mask):  # false
-            mask = rearrange(mask, 'b ... -> b (...)')
-            max_neg_value = -torch.finfo(sim.dtype).max
-            mask = repeat(mask, 'b j -> (b h) () j', h=h)
-            sim.masked_fill_(~mask, max_neg_value)
-
-        # attention, what we cannot get enough of
-        attn = sim.softmax(dim=-1)
-
-        out = einsum('b i j, b j d -> b i d', attn, v)  # (b*head,h*w,inner_dim/head)
-        out = rearrange(out, '(b h) n d -> b n (h d)', h=h)  # (b,h*w,inner_dim)
-        return self.to_out(out)
-
-
-class BasicTransformerBlock(nn.Module):
-    def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True):
-        super().__init__()
-        self.attn1 = CrossAttention(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout)  # is a self-attention
-        self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff)
-        self.attn2 = CrossAttention(query_dim=dim, context_dim=context_dim,
-                                    heads=n_heads, dim_head=d_head, dropout=dropout)  # is self-attn if context is none
-        self.norm1 = nn.LayerNorm(dim)
-        self.norm2 = nn.LayerNorm(dim)
-        self.norm3 = nn.LayerNorm(dim)
-        self.checkpoint = checkpoint
-
-    def forward(self, x, context=None):
-        return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
-
-    def _forward(self, x, context=None):
-        x = self.attn1(self.norm1(x)) + x
-        x = self.attn2(self.norm2(x), context=context) + x
-        x = self.ff(self.norm3(x)) + x
-        return x
-
-
-class SpatialTransformer(nn.Module):
-    """
-    Transformer block for image-like data.
-    First, project the input (aka embedding)
-    and reshape to b, t, d.
-    Then apply standard transformer action.
-    Finally, reshape to image
-    """
-    def __init__(self, in_channels, n_heads, d_head,
-                 depth=1, dropout=0., context_dim=None):
-        super().__init__()
-        self.in_channels = in_channels
-        inner_dim = n_heads * d_head
-        self.norm = Normalize(in_channels)
-
-        self.proj_in = nn.Conv2d(in_channels,
-                                 inner_dim,
-                                 kernel_size=1,
-                                 stride=1,
-                                 padding=0)
-
-        self.transformer_blocks = nn.ModuleList(
-            [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim)
-                for d in range(depth)]
-        )
-
-        self.proj_out = zero_module(nn.Conv2d(inner_dim,
-                                              in_channels,
-                                              kernel_size=1,
-                                              stride=1,
-                                              padding=0))
-
-    def forward(self, x, context=None):
-        # note: if no context is given, cross-attention defaults to self-attention
-        b, c, h, w = x.shape  # such as [2,320,10,106]
-        x_in = x
-        x = self.norm(x)  # group norm
-        x = self.proj_in(x)  # no shape change
-        x = rearrange(x, 'b c h w -> b (h w) c')
-        for block in self.transformer_blocks:
-            x = block(x, context=context)  # context shape [b,seq_len=77,context_dim]
-        x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w)
-        x = self.proj_out(x)
-        return x + x_in
spaces/AIGC-Audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/predict.py DELETED
@@ -1,90 +0,0 @@
- import os
- from torch.utils.data import DataLoader
- import torchvision
- from tqdm import tqdm
- from dataset import VGGSound
- import torch
- import torch.nn as nn
- from metrics import metrics
- from omegaconf import OmegaConf
- from model import VGGishish
- from transforms import Crop, StandardNormalizeAudio, ToTensor
-
-
- if __name__ == '__main__':
-     cfg_cli = OmegaConf.from_cli()
-     print(cfg_cli.config)
-     cfg_yml = OmegaConf.load(cfg_cli.config)
-     # the latter arguments are prioritized
-     cfg = OmegaConf.merge(cfg_yml, cfg_cli)
-     OmegaConf.set_readonly(cfg, True)
-     print(OmegaConf.to_yaml(cfg))
-
-     # logger = LoggerWithTBoard(cfg)
-     transforms = [
-         StandardNormalizeAudio(cfg.mels_path),
-         ToTensor(),
-     ]
-     if cfg.cropped_size not in [None, 'None', 'none']:
-         transforms.append(Crop(cfg.cropped_size))
-     transforms = torchvision.transforms.transforms.Compose(transforms)
-
-     datasets = {
-         'test': VGGSound('test', cfg.mels_path, transforms),
-     }
-
-     loaders = {
-         'test': DataLoader(datasets['test'], batch_size=cfg.batch_size,
-                            num_workers=cfg.num_workers, pin_memory=True)
-     }
-
-     device = torch.device(cfg.device if torch.cuda.is_available() else 'cpu')
-     model = VGGishish(cfg.conv_layers, cfg.use_bn, num_classes=len(datasets['test'].target2label))
-     model = model.to(device)
-
-     optimizer = torch.optim.Adam(model.parameters(), lr=cfg.learning_rate)
-     criterion = nn.CrossEntropyLoss()
-
-     # loading the best model
-     folder_name = os.path.split(cfg.config)[0].split('/')[-1]
-     print(folder_name)
-     ckpt = torch.load(f'./logs/{folder_name}/vggishish-{folder_name}.pt', map_location='cpu')
-     model.load_state_dict(ckpt['model'])
-     print((f'The model was trained for {ckpt["epoch"]} epochs. Loss: {ckpt["loss"]:.4f}'))
-
-     # Testing the model
-     model.eval()
-     running_loss = 0
-     preds_from_each_batch = []
-     targets_from_each_batch = []
-
-     for i, batch in enumerate(tqdm(loaders['test'])):
-         inputs = batch['input'].to(device)
-         targets = batch['target'].to(device)
-
-         # zero the parameter gradients
-         optimizer.zero_grad()
-
-         # forward + backward + optimize
-         with torch.set_grad_enabled(False):
-             outputs = model(inputs)
-             loss = criterion(outputs, targets)
-
-         # loss
-         running_loss += loss.item()
-
-         # for metrics calculation later on
-         preds_from_each_batch += [outputs.detach().cpu()]
-         targets_from_each_batch += [targets.cpu()]
-
-     # logging metrics
-     preds_from_each_batch = torch.cat(preds_from_each_batch)
-     targets_from_each_batch = torch.cat(targets_from_each_batch)
-     test_metrics_dict = metrics(targets_from_each_batch, preds_from_each_batch)
-     test_metrics_dict['avg_loss'] = running_loss / len(loaders['test'])
-     test_metrics_dict['param_num'] = sum(p.numel() for p in model.parameters() if p.requires_grad)
-
-     # TODO: I have no idea why tboard doesn't keep metrics (hparams) in a tensorboard when
-     # I run this experiment from cli: `python main.py config=./configs/vggish.yaml`
-     # while when I run it in vscode debugger the metrics are present in the tboard (weird)
-     print(test_metrics_dict)
spaces/AIWaves/SOP_Generation-single/State.py DELETED
@@ -1,142 +0,0 @@
- from Component import *
-
-
- class State:
-     """
-     Sub-scenes of role activities, responsible for storing the tasks that each role needs to do
-     """
-     def __init__(self, **kwargs):
-         self.next_states = {}
-         self.name = kwargs["name"]
-
-         self.environment_prompt = (
-             kwargs["environment_prompt"] if "environment_prompt" in kwargs else ""
-         )
-
-         self.roles = kwargs["roles"] if "roles" in kwargs else (list(kwargs["agent_states"].keys()) if "agent_states" in kwargs else [0])
-         if len(self.roles) == 0:
-             self.roles = [0]
-         self.begin_role = (
-             kwargs["begin_role"] if "begin_role" in kwargs else self.roles[0]
-         )
-         self.begin_query = kwargs["begin_query"] if "begin_query" in kwargs else None
-
-         self.is_begin = True
-
-         self.summary_prompt = (
-             kwargs["summary_prompt"] if "summary_prompt" in kwargs else None
-         )
-         self.current_role = self.begin_role
-         self.components = (
-             self.init_components(kwargs["agent_states"])
-             if "agent_states" in kwargs
-             else {}
-         )
-         self.index = (
-             self.roles.index(self.begin_role) if self.begin_role in self.roles else 0
-         )
-         self.chat_nums = 0
-
-     def init_components(self, agent_states_dict: dict):
-         agent_states = {}
-         for role, components in agent_states_dict.items():
-             component_dict = {}
-             for component, component_args in components.items():
-                 if component:
-                     # "role" "style"
-                     if component == "style":
-                         component_dict["style"] = StyleComponent(component_args["role"])
-
-                     # "task"
-                     elif component == "task":
-                         component_dict["task"] = TaskComponent(component_args["task"])
-
-                     # "rule"
-                     elif component == "rule":
-                         component_dict["rule"] = RuleComponent(component_args["rule"])
-
-                     # "demonstration"
-                     elif component == "demonstrations":
-                         component_dict["demonstrations"] = DemonstrationComponent(
-                             component_args["demonstrations"]
-                         )
-
-                     # "output"
-                     elif component == "output":
-                         component_dict["output"] = OutputComponent(
-                             component_args["output"]
-                         )
-
-                     elif component == "last":
-                         component_dict["last"] = LastComponent(
-                             component_args["last_prompt"]
-                         )
-
-                     # "demonstrations"
-                     elif component == "cot":
-                         component_dict["cot"] = CoTComponent(
-                             component_args["demonstrations"]
-                         )
-                     elif component == "CustomizeComponent":
-                         component_dict["CustomizeComponent"] = CustomizeComponent(
-                             component_args["template"], component_args["keywords"]
-                         )
-
-                     elif component == "system" :
-                         component_dict["system"] = SystemComponent(
-                             component_args["system_prompt"]
-                         )
-
-                     # =================================================================================#
-
-                     # "output"
-                     elif component == "StaticComponent":
-                         component_dict["StaticComponent"] = StaticComponent(
-                             component_args["output"]
-                         )
-
-                     # "top_k" "type" "knowledge_base" "system_prompt" "last_prompt"
-                     elif component == "KnowledgeBaseComponent":
-                         component_dict["tool"] = KnowledgeBaseComponent(
-                             component_args["top_k"],
-                             component_args["type"],
-                             component_args["knowledge_path"],
-                         )
-
-                     elif component == "CategoryRequirementsComponent":
-                         component_dict[
-                             "CategoryRequirementsComponent"
-                         ] = CategoryRequirementsComponent(
-                             component_args["information_path"]
-                         )
-
-                     elif component == "FunctionComponent":
-                         component_dict["FunctionComponent"] = FunctionComponent(component_args[""])
-                     # "short_memory_extract_words" "long_memory_extract_words" "system_prompt" "last_prompt"
-                     elif component == "ExtractComponent":
-                         component_dict["ExtractComponent"] = ExtractComponent(
-                             component_args["extract_words"],
-                             component_args["system_prompt"],
-                             component_args["last_prompt"],
-                         )
-                     elif component == "WebSearchComponent":
-                         component_dict["WebSearchComponent"] = WebSearchComponent(
-                             component_args["engine_name"], component_args["api"]
-                         )
-                     elif component == "WebCrawlComponent":
-                         component_dict["WebCrawlComponent"] = WebCrawlComponent(
-                             component_args["name"]
-                         )
-
-                     elif component == "CodeComponent":
-                         component_dict["CodeComponent"] = CodeComponent(
-                             component_args["file_name"], component_args["keyword"]
-                         )
-
-                     # ====================================================
-                 else:
-                     continue
-
-             agent_states[role] = component_dict
-
-         return agent_states
spaces/ASJMO/freegpt/g4f/Provider/Providers/helpers/gpt4love.py DELETED
@@ -1,48 +0,0 @@
- import json
- import sys
- from re import findall
- from curl_cffi import requests
-
- config = json.loads(sys.argv[1])
- prompt = config['messages'][-1]['content']
-
- headers = {
-     'authority': 'api.gptplus.one',
-     'accept': 'application/json, text/plain, */*',
-     'accept-language': 'ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7,ja;q=0.6,zh-TW;q=0.5,zh;q=0.4',
-     'content-type': 'application/octet-stream',
-     'origin': 'https://ai.gptforlove.com/',
-     'referer': 'https://ai.gptforlove.com/',
-     'sec-ch-ua': '"Google Chrome";v="113", "Chromium";v="113", "Not-A.Brand";v="24"',
-     'sec-ch-ua-mobile': '?0',
-     'sec-ch-ua-platform': '"macOS"',
-     'sec-fetch-dest': 'empty',
-     'sec-fetch-mode': 'cors',
-     'sec-fetch-site': 'cross-site',
-     'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36',
- }
-
- json_data = {
-     'prompt': prompt,
-     'options': {}
- }
-
- def format(chunk):
-     try:
-         completion_chunk = findall(r'content":"(.*)"},"fin', chunk.decode())[0]
-         print(completion_chunk, flush=True, end='')
-
-     except Exception as e:
-         print(f'[ERROR] an error occured, retrying... | [[{chunk.decode()}]]', flush=True)
-         return
-
- while True:
-     try:
-         response = requests.post('https://api.gptplus.one/api/chat-process',
-                                  headers=headers, json=json_data, content_callback=format, impersonate='chrome110')
-
-         exit(0)
-
-     except Exception as e:
-         print('[ERROR] an error occured, retrying... |', e, flush=True)
-         continue
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov6/yolov6_s_syncbn_fast_8xb32-300e_coco.py DELETED
@@ -1,33 +0,0 @@
- _base_ = './yolov6_s_syncbn_fast_8xb32-400e_coco.py'
-
- # ======================= Frequently modified parameters =====================
- # -----train val related-----
- # Base learning rate for optim_wrapper
- max_epochs = 300  # Maximum training epochs
- num_last_epochs = 15  # Last epoch number to switch training pipeline
-
- # ============================== Unmodified in most cases ===================
- default_hooks = dict(
-     param_scheduler=dict(
-         type='YOLOv5ParamSchedulerHook',
-         scheduler_type='cosine',
-         lr_factor=0.01,
-         max_epochs=max_epochs))
-
- custom_hooks = [
-     dict(
-         type='EMAHook',
-         ema_type='ExpMomentumEMA',
-         momentum=0.0001,
-         update_buffers=True,
-         strict_load=False,
-         priority=49),
-     dict(
-         type='mmdet.PipelineSwitchHook',
-         switch_epoch=max_epochs - num_last_epochs,
-         switch_pipeline=_base_.train_pipeline_stage2)
- ]
-
- train_cfg = dict(
-     max_epochs=max_epochs,
-     dynamic_intervals=[(max_epochs - num_last_epochs, 1)])
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet18_8xb32_in1k.py DELETED
@@ -1,4 +0,0 @@
- _base_ = [
-     '../_base_/models/resnet18.py', '../_base_/datasets/imagenet_bs32.py',
-     '../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
- ]
spaces/Abhilashvj/planogram-compliance/utils/activations.py DELETED
@@ -1,106 +0,0 @@
- # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
- """
- Activation functions
- """
-
- import torch
- import torch.nn as nn
- import torch.nn.functional as F
-
-
- class SiLU(nn.Module):
-     # SiLU activation https://arxiv.org/pdf/1606.08415.pdf
-     @staticmethod
-     def forward(x):
-         return x * torch.sigmoid(x)
-
-
- class Hardswish(nn.Module):
-     # Hard-SiLU activation
-     @staticmethod
-     def forward(x):
-         # return x * F.hardsigmoid(x)  # for TorchScript and CoreML
-         return (
-             x * F.hardtanh(x + 3, 0.0, 6.0) / 6.0
-         )  # for TorchScript, CoreML and ONNX
-
-
- class Mish(nn.Module):
-     # Mish activation https://github.com/digantamisra98/Mish
-     @staticmethod
-     def forward(x):
-         return x * F.softplus(x).tanh()
-
-
- class MemoryEfficientMish(nn.Module):
-     # Mish activation memory-efficient
-     class F(torch.autograd.Function):
-         @staticmethod
-         def forward(ctx, x):
-             ctx.save_for_backward(x)
-             return x.mul(torch.tanh(F.softplus(x)))  # x * tanh(ln(1 + exp(x)))
-
-         @staticmethod
-         def backward(ctx, grad_output):
-             x = ctx.saved_tensors[0]
-             sx = torch.sigmoid(x)
-             fx = F.softplus(x).tanh()
-             return grad_output * (fx + x * sx * (1 - fx * fx))
-
-     def forward(self, x):
-         return self.F.apply(x)
-
-
- class FReLU(nn.Module):
-     # FReLU activation https://arxiv.org/abs/2007.11824
-     def __init__(self, c1, k=3):  # ch_in, kernel
-         super().__init__()
-         self.conv = nn.Conv2d(c1, c1, k, 1, 1, groups=c1, bias=False)
-         self.bn = nn.BatchNorm2d(c1)
-
-     def forward(self, x):
-         return torch.max(x, self.bn(self.conv(x)))
-
-
- class AconC(nn.Module):
-     r"""ACON activation (activate or not)
-     AconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is a learnable parameter
-     according to "Activate or Not: Learning Customized Activation" <https://arxiv.org/pdf/2009.04759.pdf>.
-     """
-
-     def __init__(self, c1):
-         super().__init__()
-         self.p1 = nn.Parameter(torch.randn(1, c1, 1, 1))
-         self.p2 = nn.Parameter(torch.randn(1, c1, 1, 1))
-         self.beta = nn.Parameter(torch.ones(1, c1, 1, 1))
-
-     def forward(self, x):
-         dpx = (self.p1 - self.p2) * x
-         return dpx * torch.sigmoid(self.beta * dpx) + self.p2 * x
-
-
- class MetaAconC(nn.Module):
-     r"""ACON activation (activate or not)
-     MetaAconC: (p1*x-p2*x) * sigmoid(beta*(p1*x-p2*x)) + p2*x, beta is generated by a small network
-     according to "Activate or Not: Learning Customized Activation" <https://arxiv.org/pdf/2009.04759.pdf>.
-     """
-
-     def __init__(self, c1, k=1, s=1, r=16):  # ch_in, kernel, stride, r
-         super().__init__()
-         c2 = max(r, c1 // r)
-         self.p1 = nn.Parameter(torch.randn(1, c1, 1, 1))
-         self.p2 = nn.Parameter(torch.randn(1, c1, 1, 1))
-         self.fc1 = nn.Conv2d(c1, c2, k, s, bias=True)
-         self.fc2 = nn.Conv2d(c2, c1, k, s, bias=True)
-         # self.bn1 = nn.BatchNorm2d(c2)
-         # self.bn2 = nn.BatchNorm2d(c1)
-
-     def forward(self, x):
-         y = x.mean(dim=2, keepdims=True).mean(dim=3, keepdims=True)
-         # batch-size 1 bug/instabilities https://github.com/ultralytics/yolov5/issues/2891
-         # beta = torch.sigmoid(self.bn2(self.fc2(self.bn1(self.fc1(y)))))  # bug/unstable
-         beta = torch.sigmoid(
-             self.fc2(self.fc1(y))
-         )  # bug patch BN layers removed
-         dpx = (self.p1 - self.p2) * x
-         return dpx * torch.sigmoid(beta * dpx) + self.p2 * x
spaces/Adapter/CoAdapter/ldm/models/diffusion/__init__.py DELETED
File without changes
spaces/Adr740/Hadith_AI_Explorer/get_hadith.py DELETED
@@ -1,42 +0,0 @@
- import pandas as pd
- import openai
- from data import data as df
- import numpy as np
- import os
-
- openai.api_key = os.environ.get("apk")
-
- def cosine_similarity(a, b):
-     return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
-
-
- def get_embedding(text, model="text-embedding-ada-002"):
-     try:
-         text = text.replace("\n", " ")
-     except:
-         None
-     return openai.Embedding.create(input = [text], model=model)['data'][0]['embedding']
-
- def search_hadiths(search, nb=3, pprint=True):
-     embedding = get_embedding(search, model='text-embedding-ada-002')
-     dff = df.copy()
-     dff['similarities'] = dff.embeding.apply(lambda x: cosine_similarity(x, embedding))
-     res = dff.sort_values('similarities', ascending=False).head(int(nb))
-     try:
-         res.drop(columns=["id","hadith_id", "embeding"], inplace=True)
-     except:
-         pass
-     return res
-
- def get_hadiths(text, nb):
-     result = search_hadiths(text,nb).to_dict(orient="records")
-     final_str = ""
-     for r in result:
-         final_str += "### Source: " + str(r["source"]) + " | Chapter name : "+ str(r["chapter"]) +" | Chapter number: " + str(r["chapter_no"]) + " | Hadith number : " + str(r["chapter_no"]) + "\n\n"
-         final_str += "Similarity with query: " + str(round(r["similarities"]*100,2)) + "%" +" | Chain index: " + str(r["chain_indx"]) + "\n\n"
-         final_str += "### Hadith content:" + "\n\n" + str(r["text_en"]) + "\n\n"
-         final_str += "Arabic version: \n\n" + str(r["text_ar"])
-         final_str += "\n\n-----------------------------------------------------------------------------------------------------\n\n"
-
-     final_str = final_str.replace("`", "")
-     return final_str
spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/order/base.py DELETED
@@ -1,18 +0,0 @@
- from __future__ import annotations
-
- from abc import abstractmethod
- from typing import TYPE_CHECKING, Any, List
-
- from pydantic import BaseModel
-
- if TYPE_CHECKING:
-     from agentverse.environments import BaseEnvironment
-
-
- class BaseOrder(BaseModel):
-     @abstractmethod
-     def get_next_agent_idx(self, environment: BaseEnvironment) -> List[int]:
-         """Return the index of the next agent to speak"""
-
-     def reset(self) -> None:
-         pass
spaces/AgentVerse/agentVerse/agentverse/gui.py DELETED
@@ -1,506 +0,0 @@
- import base64
- import itertools
- import json
- from typing import Dict, List, Tuple
-
- import cv2
- import gradio as gr
-
- from agentverse import TaskSolving
- from agentverse.simulation import Simulation
- from agentverse.message import Message
-
-
- def cover_img(background, img, place: Tuple[int, int]):
-     """
-     Overlays the specified image to the specified position of the background image.
-     :param background: background image
-     :param img: the specified image
-     :param place: the top-left coordinate of the target location
-     """
-     back_h, back_w, _ = background.shape
-     height, width, _ = img.shape
-     for i, j in itertools.product(range(height), range(width)):
-         if img[i, j, 3]:
-             background[place[0] + i, place[1] + j] = img[i, j, :3]
-
-
- class GUI:
-     """
-     the UI of frontend
-     """
-
-     def __init__(self, task: str, tasks_dir: str):
-         """
-         init a UI.
-         default number of students is 0
-         """
-         self.messages = []
-         self.task = task
-         if task == "pipeline_brainstorming":
-             self.backend = TaskSolving.from_task(task, tasks_dir)
-         else:
-             self.backend = Simulation.from_task(task, tasks_dir)
-         self.turns_remain = 0
-         self.agent_id = {
-             self.backend.agents[idx].name: idx
-             for idx in range(len(self.backend.agents))
-         }
-         self.stu_num = len(self.agent_id) - 1
-         self.autoplay = False
-         self.image_now = None
-         self.text_now = None
-         self.tot_solutions = 5
-         self.solution_status = [False] * self.tot_solutions
-
-     def get_avatar(self, idx):
-         if idx == -1:
-             img = cv2.imread("./imgs/db_diag/-1.png")
-         elif self.task == "prisoner_dilemma":
-             img = cv2.imread(f"./imgs/prison/{idx}.png")
-         elif self.task == "db_diag":
-             img = cv2.imread(f"./imgs/db_diag/{idx}.png")
-         elif "sde" in self.task:
-             img = cv2.imread(f"./imgs/sde/{idx}.png")
-         else:
-             img = cv2.imread(f"./imgs/{idx}.png")
-         base64_str = cv2.imencode(".png", img)[1].tostring()
-         return "data:image/png;base64," + base64.b64encode(base64_str).decode("utf-8")
-
-     def stop_autoplay(self):
-         self.autoplay = False
-         return (
-             gr.Button.update(interactive=False),
-             gr.Button.update(interactive=False),
-             gr.Button.update(interactive=False),
-         )
-
-     def start_autoplay(self):
-         self.autoplay = True
-         yield (
-             self.image_now,
-             self.text_now,
-             gr.Button.update(interactive=False),
-             gr.Button.update(interactive=True),
-             gr.Button.update(interactive=False),
-             *[gr.Button.update(visible=statu) for statu in self.solution_status],
-             gr.Box.update(visible=any(self.solution_status)),
-         )
-
-         while self.autoplay and self.turns_remain > 0:
-             outputs = self.gen_output()
-             self.image_now, self.text_now = outputs
-
-             yield (
-                 *outputs,
-                 gr.Button.update(interactive=not self.autoplay and self.turns_remain > 0),
-                 gr.Button.update(interactive=self.autoplay and self.turns_remain > 0),
-                 gr.Button.update(interactive=not self.autoplay and self.turns_remain > 0),
-                 *[gr.Button.update(visible=statu) for statu in self.solution_status],
-                 gr.Box.update(visible=any(self.solution_status))
-             )
-
-     def delay_gen_output(self):
-         yield (
-             self.image_now,
-             self.text_now,
-             gr.Button.update(interactive=False),
-             gr.Button.update(interactive=False),
-             *[gr.Button.update(visible=statu) for statu in self.solution_status],
-             gr.Box.update(visible=any(self.solution_status))
-         )
-
-         outputs = self.gen_output()
-         self.image_now, self.text_now = outputs
-
-         yield (
-             self.image_now,
-             self.text_now,
-             gr.Button.update(interactive=self.turns_remain > 0),
-             gr.Button.update(interactive=self.turns_remain > 0),
-             *[gr.Button.update(visible=statu) for statu in self.solution_status],
-             gr.Box.update(visible=any(self.solution_status))
-         )
-
-     def delay_reset(self):
-         self.autoplay = False
-         self.image_now, self.text_now = self.reset()
-         return (
-             self.image_now,
-             self.text_now,
-             gr.Button.update(interactive=True),
-             gr.Button.update(interactive=False),
-             gr.Button.update(interactive=True),
-             *[gr.Button.update(visible=statu) for statu in self.solution_status],
-             gr.Box.update(visible=any(self.solution_status))
-         )
-
-     def reset(self, stu_num=0):
-         """
-         tell backend the new number of students and generate new empty image
-         :param stu_num:
-         :return: [empty image, empty message]
-         """
-         if not 0 <= stu_num <= 30:
-             raise gr.Error("the number of students must be between 0 and 30.")
-
-         """
-         # [To-Do] Need to add a function to assign agent numbers into the backend.
-         """
-         # self.backend.reset(stu_num)
-         # self.stu_num = stu_num
-
-         """
-         # [To-Do] Pass the parameters to reset
-         """
-         self.backend.reset()
-         self.turns_remain = self.backend.environment.max_turns
-
-         if self.task == "prisoner_dilemma":
-             background = cv2.imread("./imgs/prison/case_1.png")
-         elif self.task == "db_diag":
-             background = cv2.imread("./imgs/db_diag/background.png")
-         elif "sde" in self.task:
-             background = cv2.imread("./imgs/sde/background.png")
-         else:
-             background = cv2.imread("./imgs/background.png")
-         back_h, back_w, _ = background.shape
-         stu_cnt = 0
-         for h_begin, w_begin in itertools.product(
-             range(800, back_h, 300), range(135, back_w - 200, 200)
-         ):
-             stu_cnt += 1
-             img = cv2.imread(
-                 f"./imgs/{(stu_cnt - 1) % 11 + 1 if stu_cnt <= self.stu_num else 'empty'}.png",
-                 cv2.IMREAD_UNCHANGED,
-             )
-             cover_img(
-                 background,
-                 img,
-                 (h_begin - 30 if img.shape[0] > 190 else h_begin, w_begin),
-             )
-         self.messages = []
-         self.solution_status = [False] * self.tot_solutions
-         return [cv2.cvtColor(background, cv2.COLOR_BGR2RGB), ""]
-
-     def gen_img(self, data: List[Dict]):
-         """
-         generate new image with sender rank
-         :param data:
-         :return: the new image
-         """
-         # The following code need to be more general. This one is too task-specific.
-         # if len(data) != self.stu_num:
-         if len(data) != self.stu_num + 1:
-             raise gr.Error("data length is not equal to the total number of students.")
-         if self.task == "prisoner_dilemma":
-             img = cv2.imread("./imgs/speaking.png", cv2.IMREAD_UNCHANGED)
-             if (
-                 len(self.messages) < 2
-                 or self.messages[-1][0] == 1
-                 or self.messages[-2][0] == 2
-             ):
-                 background = cv2.imread("./imgs/prison/case_1.png")
-                 if data[0]["message"] != "":
-                     cover_img(background, img, (400, 480))
-             else:
-                 background = cv2.imread("./imgs/prison/case_2.png")
-                 if data[0]["message"] != "":
-                     cover_img(background, img, (400, 880))
-             if data[1]["message"] != "":
-                 cover_img(background, img, (550, 480))
-             if data[2]["message"] != "":
-                 cover_img(background, img, (550, 880))
-         elif self.task == "db_diag":
-             background = cv2.imread("./imgs/db_diag/background.png")
-             img = cv2.imread("./imgs/db_diag/speaking.png", cv2.IMREAD_UNCHANGED)
-             if data[0]["message"] != "":
-                 cover_img(background, img, (750, 80))
-             if data[1]["message"] != "":
-                 cover_img(background, img, (310, 220))
-             if data[2]["message"] != "":
-                 cover_img(background, img, (522, 11))
-         elif "sde" in self.task:
-             background = cv2.imread("./imgs/sde/background.png")
-             img = cv2.imread("./imgs/sde/speaking.png", cv2.IMREAD_UNCHANGED)
-             if data[0]["message"] != "":
-                 cover_img(background, img, (692, 330))
-             if data[1]["message"] != "":
-                 cover_img(background, img, (692, 660))
-             if data[2]["message"] != "":
-                 cover_img(background, img, (692, 990))
-         else:
-             background = cv2.imread("./imgs/background.png")
-             back_h, back_w, _ = background.shape
-             stu_cnt = 0
-             if data[stu_cnt]["message"] not in ["", "[RaiseHand]"]:
-                 img = cv2.imread("./imgs/speaking.png", cv2.IMREAD_UNCHANGED)
-                 cover_img(background, img, (370, 1250))
-             for h_begin, w_begin in itertools.product(
-                 range(800, back_h, 300), range(135, back_w - 200, 200)
-             ):
-                 stu_cnt += 1
-                 if stu_cnt <= self.stu_num:
-                     img = cv2.imread(
-                         f"./imgs/{(stu_cnt - 1) % 11 + 1}.png", cv2.IMREAD_UNCHANGED
-                     )
-                     cover_img(
-                         background,
-                         img,
-                         (h_begin - 30 if img.shape[0] > 190 else h_begin, w_begin),
-                     )
-                     if "[RaiseHand]" in data[stu_cnt]["message"]:
-                         # elif data[stu_cnt]["message"] == "[RaiseHand]":
-                         img = cv2.imread("./imgs/hand.png", cv2.IMREAD_UNCHANGED)
-                         cover_img(background, img, (h_begin - 90, w_begin + 10))
-                     elif data[stu_cnt]["message"] not in ["", "[RaiseHand]"]:
-                         img = cv2.imread("./imgs/speaking.png", cv2.IMREAD_UNCHANGED)
-                         cover_img(background, img, (h_begin - 90, w_begin + 10))
-
-                 else:
-                     img = cv2.imread("./imgs/empty.png", cv2.IMREAD_UNCHANGED)
-                     cover_img(background, img, (h_begin, w_begin))
-         return cv2.cvtColor(background, cv2.COLOR_BGR2RGB)
-
-     def return_format(self, messages: List[Message]):
-         _format = [{"message": "", "sender": idx} for idx in range(len(self.agent_id))]
-
-         for message in messages:
-             if self.task == "db_diag":
-                 content_json: dict = message.content
-                 content_json["diagnose"] = f"[{message.sender}]: {content_json['diagnose']}"
-                 _format[self.agent_id[message.sender]]["message"] = json.dumps(content_json)
-             elif "sde" in self.task:
-                 if message.sender == "code_tester":
-                     pre_message, message_ = message.content.split("\n")
-                     message_ = "{}\n{}".format(pre_message, json.loads(message_)["feedback"])
-                     _format[self.agent_id[message.sender]]["message"] = "[{}]: {}".format(
-                         message.sender, message_
-                     )
-                 else:
-                     _format[self.agent_id[message.sender]]["message"] = "[{}]: {}".format(
-                         message.sender, message.content
-                     )
-
-             else:
-                 _format[self.agent_id[message.sender]]["message"] = "[{}]: {}".format(
-                     message.sender, message.content
-                 )
-
-         return _format
-
-     def gen_output(self):
-         """
-         generate new image and message of next step
-         :return: [new image, new message]
-         """
-
-         # data = self.backend.next_data()
-         return_message = self.backend.next()
-         data = self.return_format(return_message)
-
-         # data.sort(key=lambda item: item["sender"])
-         """
-         # [To-Do]; Check the message from the backend: only 1 person can speak
-         """
-
-         for item in data:
-             if item["message"] not in ["", "[RaiseHand]"]:
-                 self.messages.append((item["sender"], item["message"]))
-
-         message = self.gen_message()
-         self.turns_remain -= 1
-         return [self.gen_img(data), message]
-
-     def gen_message(self):
-         # If the backend cannot handle this error, use the following code.
-         message = ""
-         """
-         for item in data:
-             if item["message"] not in ["", "[RaiseHand]"]:
-                 message = item["message"]
-                 break
-         """
-         for sender, msg in self.messages:
-             if sender == 0:
-                 avatar = self.get_avatar(0)
-             elif sender == -1:
-                 avatar = self.get_avatar(-1)
-             else:
-                 avatar = self.get_avatar((sender - 1) % 11 + 1)
-             if self.task == "db_diag":
-                 msg_json = json.loads(msg)
-                 self.solution_status = [False] * self.tot_solutions
-                 msg = msg_json["diagnose"]
-                 if msg_json["solution"] != "":
-                     solution: List[str] = msg_json["solution"]
-                     for solu in solution:
-                         if "query" in solu or "queries" in solu:
-                             self.solution_status[0] = True
-                             solu = solu.replace("query", '<span style="color:yellow;">query</span>')
-                             solu = solu.replace("queries", '<span style="color:yellow;">queries</span>')
-                         if "join" in solu:
-                             self.solution_status[1] = True
-                             solu = solu.replace("join", '<span style="color:yellow;">join</span>')
-                         if "index" in solu:
-                             self.solution_status[2] = True
-                             solu = solu.replace("index", '<span style="color:yellow;">index</span>')
-                         if "system configuration" in solu:
-                             self.solution_status[3] = True
-                             solu = solu.replace("system configuration",
-                                                 '<span style="color:yellow;">system configuration</span>')
-                         if "monitor" in solu or "Monitor" in solu or "Investigate" in solu:
-                             self.solution_status[4] = True
-                             solu = solu.replace("monitor", '<span style="color:yellow;">monitor</span>')
-                             solu = solu.replace("Monitor", '<span style="color:yellow;">Monitor</span>')
-                             solu = solu.replace("Investigate", '<span style="color:yellow;">Investigate</span>')
-                         msg = f"{msg}<br>{solu}"
-                 if msg_json["knowledge"] != "":
-                     msg = f'{msg}<hr style="margin: 5px 0"><span style="font-style: italic">{msg_json["knowledge"]}<span>'
-             else:
-                 msg = msg.replace("<", "&lt;")
-                 msg = msg.replace(">", "&gt;")
-             message = (
-                 f'<div style="display: flex; align-items: center; margin-bottom: 10px;overflow:auto;">'
-                 f'<img src="{avatar}" style="width: 5%; height: 5%; border-radius: 25px; margin-right: 10px;">'
-                 f'<div style="background-color: gray; color: white; padding: 10px; border-radius: 10px;'
-                 f'max-width: 70%; white-space: pre-wrap">'
-                 f"{msg}"
-                 f"</div></div>" + message
-             )
-         message = '<div id="divDetail" style="height:600px;overflow:auto;">' + message + "</div>"
-         return message
-
-     def submit(self, message: str):
-         """
-         submit message to backend
-         :param message: message
-         :return: [new image, new message]
-         """
-         self.backend.submit(message)
-         self.messages.append((-1, f"[User]: {message}"))
-         return self.gen_img([{"message": ""}] * len(self.agent_id)), self.gen_message()
-
-     def launch(self, single_agent=False, discussion_mode=False):
-         if self.task == "pipeline_brainstorming":
-             with gr.Blocks() as demo:
-                 chatbot = gr.Chatbot(height=800, show_label=False)
-                 msg = gr.Textbox(label="Input")
-
-                 def respond(message, chat_history):
-                     chat_history.append((message, None))
-                     yield "", chat_history
-                     for response in self.backend.iter_run(single_agent=single_agent, discussion_mode=discussion_mode):
-                         print(response)
-                         chat_history.append((None, response))
-                         yield "", chat_history
-
-                 msg.submit(respond, [msg, chatbot], [msg, chatbot])
-         else:
-             with gr.Blocks() as demo:
-                 with gr.Row():
-                     with gr.Column():
-                         image_output = gr.Image()
-                         with gr.Row():
-                             reset_btn = gr.Button("Reset")
-                             # next_btn = gr.Button("Next", variant="primary")
-                             next_btn = gr.Button("Next", interactive=False)
-                             stop_autoplay_btn = gr.Button(
-                                 "Stop Autoplay", interactive=False
-                             )
-                             start_autoplay_btn = gr.Button("Start Autoplay", interactive=False)
-                         with gr.Box(visible=False) as solutions:
-                             with gr.Column():
-                                 gr.HTML("Optimization Solutions:")
-                                 with gr.Row():
-                                     rewrite_slow_query_btn = gr.Button("Rewrite Slow Query", visible=False)
-                                     add_query_hints_btn = gr.Button("Add Query Hints", visible=False)
-                                     update_indexes_btn = gr.Button("Update Indexes", visible=False)
-                                     tune_parameters_btn = gr.Button("Tune Parameters", visible=False)
-                                     gather_more_info_btn = gr.Button("Gather More Info", visible=False)
-                     # text_output = gr.Textbox()
-                     text_output = gr.HTML(self.reset()[1])
-
-                 # Given a botton to provide student numbers and their inf.
-                 # stu_num = gr.Number(label="Student Number", precision=0)
-                 # stu_num = self.stu_num
-
-                 if self.task == "db_diag":
-                     user_msg = gr.Textbox()
-                     submit_btn = gr.Button("Submit", variant="primary")
-
-                     submit_btn.click(fn=self.submit, inputs=user_msg, outputs=[image_output, text_output],
-                                      show_progress=False)
-                 else:
-                     pass
-
-                 # next_btn.click(fn=self.gen_output, inputs=None, outputs=[image_output, text_output],
-                 #                show_progress=False)
-                 next_btn.click(
-                     fn=self.delay_gen_output,
-                     inputs=None,
-                     outputs=[
-                         image_output,
-                         text_output,
-                         next_btn,
-                         start_autoplay_btn,
-                         rewrite_slow_query_btn,
-                         add_query_hints_btn,
-                         update_indexes_btn,
-                         tune_parameters_btn,
-                         gather_more_info_btn,
-                         solutions
-                     ],
-                     show_progress=False,
-                 )
-
-                 # [To-Do] Add botton: re-start (load different people and env)
-                 # reset_btn.click(fn=self.reset, inputs=stu_num, outputs=[image_output, text_output],
-                 #                 show_progress=False)
-                 # reset_btn.click(fn=self.reset, inputs=None, outputs=[image_output, text_output], show_progress=False)
-                 reset_btn.click(
-                     fn=self.delay_reset,
-                     inputs=None,
-                     outputs=[
-                         image_output,
-                         text_output,
-                         next_btn,
-                         stop_autoplay_btn,
-                         start_autoplay_btn,
-                         rewrite_slow_query_btn,
-                         add_query_hints_btn,
-                         update_indexes_btn,
-                         tune_parameters_btn,
-                         gather_more_info_btn,
-                         solutions
-                     ],
-                     show_progress=False,
-                 )
479
-
480
- stop_autoplay_btn.click(
481
- fn=self.stop_autoplay,
482
- inputs=None,
483
- outputs=[next_btn, stop_autoplay_btn, start_autoplay_btn],
484
- show_progress=False,
485
- )
486
- start_autoplay_btn.click(
487
- fn=self.start_autoplay,
488
- inputs=None,
489
- outputs=[
490
- image_output,
491
- text_output,
492
- next_btn,
493
- stop_autoplay_btn,
494
- start_autoplay_btn,
495
- rewrite_slow_query_btn,
496
- add_query_hints_btn,
497
- update_indexes_btn,
498
- tune_parameters_btn,
499
- gather_more_info_btn,
500
- solutions
501
- ],
502
- show_progress=False,
503
- )
504
-
505
- demo.queue(concurrency_count=5, max_size=20).launch()
506
- # demo.launch()
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/outlinepipeline-plugin.js DELETED
@@ -1,34 +0,0 @@
- import OutlinePostFxPipeline from './outlinepipeline.js';
- import BasePostFxPipelinePlugin from './utils/renderer/postfxpipeline/BasePostFxPipelinePlugin.js';
- import SetValue from './utils/object/SetValue.js';
-
- const GetValue = Phaser.Utils.Objects.GetValue;
-
- class OutlinePipelinePlugin extends BasePostFxPipelinePlugin {
- constructor(pluginManager) {
- super(pluginManager);
- this.setPostPipelineClass(OutlinePostFxPipeline, 'rexOutlinePostFx');
- }
-
- add(gameObject, config) {
- this.setQuality(GetValue(config, 'quality', this.quality));
- return super.add(gameObject, config);
- }
-
- setQuality(value) {
- OutlinePostFxPipeline.setQuality(value);
- return this;
- }
-
- set quality(value) {
- this.setQuality(value);
- }
-
- get quality() {
- return OutlinePostFxPipeline.getQuality();
- }
- }
-
- SetValue(window, 'RexPlugins.Pipelines.OutlinePostFx', OutlinePostFxPipeline);
-
- export default OutlinePipelinePlugin;

spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/imagebox/ImageBox.d.ts DELETED
@@ -1,2 +0,0 @@
- import ImageBox from '../../../plugins/imagebox';
- export default ImageBox;

spaces/AiMimicry/sovits-models/hubert/__init__.py DELETED
File without changes
spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/libJPG/jpgd.cpp DELETED
@@ -1,3276 +0,0 @@
- // jpgd.cpp - C++ class for JPEG decompression.
- // Public domain, Rich Geldreich <[email protected]>
- // Last updated Apr. 16, 2011
- // Alex Evans: Linear memory allocator (taken from jpge.h).
- //
- // Supports progressive and baseline sequential JPEG image files, and the most common chroma subsampling factors: Y, H1V1, H2V1, H1V2, and H2V2.
- //
- // Chroma upsampling quality: H2V2 is upsampled in the frequency domain, H2V1 and H1V2 are upsampled using point sampling.
- // Chroma upsampling reference: "Fast Scheme for Image Size Change in the Compressed Domain"
- // http://vision.ai.uiuc.edu/~dugad/research/dct/index.html
-
- #include "jpgd.h"
- #include <string.h>
-
- #include <assert.h>
- // BEGIN EPIC MOD
- #define JPGD_ASSERT(x) { assert(x); CA_ASSUME(x); } (void)0
- // END EPIC MOD
-
- #ifdef _MSC_VER
- #pragma warning (disable : 4611) // warning C4611: interaction between '_setjmp' and C++ object destruction is non-portable
- #endif
-
- // Set to 1 to enable freq. domain chroma upsampling on images using H2V2 subsampling (0=faster nearest neighbor sampling).
- // This is slower, but results in higher quality on images with highly saturated colors.
- #define JPGD_SUPPORT_FREQ_DOMAIN_UPSAMPLING 1
-
- #define JPGD_TRUE (1)
- #define JPGD_FALSE (0)
-
- #define JPGD_MAX(a,b) (((a)>(b)) ? (a) : (b))
- #define JPGD_MIN(a,b) (((a)<(b)) ? (a) : (b))
-
- namespace jpgd {
-
- static inline void *jpgd_malloc(size_t nSize) { return FMemory::Malloc(nSize); }
- static inline void jpgd_free(void *p) { FMemory::Free(p); }
-
- // BEGIN EPIC MOD
- //@UE3 - use UE3 BGRA encoding instead of assuming RGBA
- // stolen from IImageWrapper.h
- enum ERGBFormatJPG
- {
- Invalid = -1,
- RGBA = 0,
- BGRA = 1,
- Gray = 2,
- };
- static ERGBFormatJPG jpg_format;
- // END EPIC MOD
-
- // DCT coefficients are stored in this sequence.
- static int g_ZAG[64] = { 0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63 };
-
- enum JPEG_MARKER
- {
- M_SOF0 = 0xC0, M_SOF1 = 0xC1, M_SOF2 = 0xC2, M_SOF3 = 0xC3, M_SOF5 = 0xC5, M_SOF6 = 0xC6, M_SOF7 = 0xC7, M_JPG = 0xC8,
- M_SOF9 = 0xC9, M_SOF10 = 0xCA, M_SOF11 = 0xCB, M_SOF13 = 0xCD, M_SOF14 = 0xCE, M_SOF15 = 0xCF, M_DHT = 0xC4, M_DAC = 0xCC,
- M_RST0 = 0xD0, M_RST1 = 0xD1, M_RST2 = 0xD2, M_RST3 = 0xD3, M_RST4 = 0xD4, M_RST5 = 0xD5, M_RST6 = 0xD6, M_RST7 = 0xD7,
- M_SOI = 0xD8, M_EOI = 0xD9, M_SOS = 0xDA, M_DQT = 0xDB, M_DNL = 0xDC, M_DRI = 0xDD, M_DHP = 0xDE, M_EXP = 0xDF,
- M_APP0 = 0xE0, M_APP15 = 0xEF, M_JPG0 = 0xF0, M_JPG13 = 0xFD, M_COM = 0xFE, M_TEM = 0x01, M_ERROR = 0x100, RST0 = 0xD0
- };
-
- enum JPEG_SUBSAMPLING { JPGD_GRAYSCALE = 0, JPGD_YH1V1, JPGD_YH2V1, JPGD_YH1V2, JPGD_YH2V2 };
-
- #define CONST_BITS 13
- #define PASS1_BITS 2
- #define SCALEDONE ((int32)1)
-
- #define FIX_0_298631336 ((int32)2446) /* FIX(0.298631336) */
- #define FIX_0_390180644 ((int32)3196) /* FIX(0.390180644) */
- #define FIX_0_541196100 ((int32)4433) /* FIX(0.541196100) */
- #define FIX_0_765366865 ((int32)6270) /* FIX(0.765366865) */
- #define FIX_0_899976223 ((int32)7373) /* FIX(0.899976223) */
- #define FIX_1_175875602 ((int32)9633) /* FIX(1.175875602) */
- #define FIX_1_501321110 ((int32)12299) /* FIX(1.501321110) */
- #define FIX_1_847759065 ((int32)15137) /* FIX(1.847759065) */
- #define FIX_1_961570560 ((int32)16069) /* FIX(1.961570560) */
- #define FIX_2_053119869 ((int32)16819) /* FIX(2.053119869) */
- #define FIX_2_562915447 ((int32)20995) /* FIX(2.562915447) */
- #define FIX_3_072711026 ((int32)25172) /* FIX(3.072711026) */
-
- #define DESCALE(x,n) (((x) + (SCALEDONE << ((n)-1))) >> (n))
- #define DESCALE_ZEROSHIFT(x,n) (((x) + (128 << (n)) + (SCALEDONE << ((n)-1))) >> (n))
-
- #define MULTIPLY(var, cnst) ((var) * (cnst))
-
- #define CLAMP(i) ((static_cast<uint>(i) > 255) ? (((~i) >> 31) & 0xFF) : (i))
-
- // Compiler creates a fast path 1D IDCT for X non-zero columns
- template <int NONZERO_COLS>
- struct Row
- {
- static void idct(int* pTemp, const jpgd_block_t* pSrc)
- {
- // ACCESS_COL() will be optimized at compile time to either an array access, or 0.
- #define ACCESS_COL(x) (((x) < NONZERO_COLS) ? (int)pSrc[x] : 0)
-
- const int z2 = ACCESS_COL(2), z3 = ACCESS_COL(6);
-
- const int z1 = MULTIPLY(z2 + z3, FIX_0_541196100);
- const int tmp2 = z1 + MULTIPLY(z3, - FIX_1_847759065);
- const int tmp3 = z1 + MULTIPLY(z2, FIX_0_765366865);
-
- const int tmp0 = (ACCESS_COL(0) + ACCESS_COL(4)) << CONST_BITS;
- const int tmp1 = (ACCESS_COL(0) - ACCESS_COL(4)) << CONST_BITS;
-
- const int tmp10 = tmp0 + tmp3, tmp13 = tmp0 - tmp3, tmp11 = tmp1 + tmp2, tmp12 = tmp1 - tmp2;
-
- const int atmp0 = ACCESS_COL(7), atmp1 = ACCESS_COL(5), atmp2 = ACCESS_COL(3), atmp3 = ACCESS_COL(1);
-
- const int bz1 = atmp0 + atmp3, bz2 = atmp1 + atmp2, bz3 = atmp0 + atmp2, bz4 = atmp1 + atmp3;
- const int bz5 = MULTIPLY(bz3 + bz4, FIX_1_175875602);
-
- const int az1 = MULTIPLY(bz1, - FIX_0_899976223);
- const int az2 = MULTIPLY(bz2, - FIX_2_562915447);
- const int az3 = MULTIPLY(bz3, - FIX_1_961570560) + bz5;
- const int az4 = MULTIPLY(bz4, - FIX_0_390180644) + bz5;
-
- const int btmp0 = MULTIPLY(atmp0, FIX_0_298631336) + az1 + az3;
- const int btmp1 = MULTIPLY(atmp1, FIX_2_053119869) + az2 + az4;
- const int btmp2 = MULTIPLY(atmp2, FIX_3_072711026) + az2 + az3;
- const int btmp3 = MULTIPLY(atmp3, FIX_1_501321110) + az1 + az4;
-
- pTemp[0] = DESCALE(tmp10 + btmp3, CONST_BITS-PASS1_BITS);
- pTemp[7] = DESCALE(tmp10 - btmp3, CONST_BITS-PASS1_BITS);
- pTemp[1] = DESCALE(tmp11 + btmp2, CONST_BITS-PASS1_BITS);
- pTemp[6] = DESCALE(tmp11 - btmp2, CONST_BITS-PASS1_BITS);
- pTemp[2] = DESCALE(tmp12 + btmp1, CONST_BITS-PASS1_BITS);
- pTemp[5] = DESCALE(tmp12 - btmp1, CONST_BITS-PASS1_BITS);
- pTemp[3] = DESCALE(tmp13 + btmp0, CONST_BITS-PASS1_BITS);
- pTemp[4] = DESCALE(tmp13 - btmp0, CONST_BITS-PASS1_BITS);
- }
- };
-
- template <>
- struct Row<0>
- {
- static void idct(int* pTemp, const jpgd_block_t* pSrc)
- {
- #ifdef _MSC_VER
- pTemp; pSrc;
- #endif
- }
- };
-
- template <>
- struct Row<1>
- {
- static void idct(int* pTemp, const jpgd_block_t* pSrc)
- {
- const int dcval = (pSrc[0] << PASS1_BITS);
-
- pTemp[0] = dcval;
- pTemp[1] = dcval;
- pTemp[2] = dcval;
- pTemp[3] = dcval;
- pTemp[4] = dcval;
- pTemp[5] = dcval;
- pTemp[6] = dcval;
- pTemp[7] = dcval;
- }
- };
-
- // Compiler creates a fast path 1D IDCT for X non-zero rows
- template <int NONZERO_ROWS>
- struct Col
- {
- static void idct(uint8* pDst_ptr, const int* pTemp)
- {
- // ACCESS_ROW() will be optimized at compile time to either an array access, or 0.
- #define ACCESS_ROW(x) (((x) < NONZERO_ROWS) ? pTemp[x * 8] : 0)
-
- const int z2 = ACCESS_ROW(2);
- const int z3 = ACCESS_ROW(6);
-
- const int z1 = MULTIPLY(z2 + z3, FIX_0_541196100);
- const int tmp2 = z1 + MULTIPLY(z3, - FIX_1_847759065);
- const int tmp3 = z1 + MULTIPLY(z2, FIX_0_765366865);
-
- const int tmp0 = (ACCESS_ROW(0) + ACCESS_ROW(4)) << CONST_BITS;
- const int tmp1 = (ACCESS_ROW(0) - ACCESS_ROW(4)) << CONST_BITS;
-
- const int tmp10 = tmp0 + tmp3, tmp13 = tmp0 - tmp3, tmp11 = tmp1 + tmp2, tmp12 = tmp1 - tmp2;
-
- const int atmp0 = ACCESS_ROW(7), atmp1 = ACCESS_ROW(5), atmp2 = ACCESS_ROW(3), atmp3 = ACCESS_ROW(1);
-
- const int bz1 = atmp0 + atmp3, bz2 = atmp1 + atmp2, bz3 = atmp0 + atmp2, bz4 = atmp1 + atmp3;
- const int bz5 = MULTIPLY(bz3 + bz4, FIX_1_175875602);
-
- const int az1 = MULTIPLY(bz1, - FIX_0_899976223);
- const int az2 = MULTIPLY(bz2, - FIX_2_562915447);
- const int az3 = MULTIPLY(bz3, - FIX_1_961570560) + bz5;
- const int az4 = MULTIPLY(bz4, - FIX_0_390180644) + bz5;
-
- const int btmp0 = MULTIPLY(atmp0, FIX_0_298631336) + az1 + az3;
- const int btmp1 = MULTIPLY(atmp1, FIX_2_053119869) + az2 + az4;
- const int btmp2 = MULTIPLY(atmp2, FIX_3_072711026) + az2 + az3;
- const int btmp3 = MULTIPLY(atmp3, FIX_1_501321110) + az1 + az4;
-
- int i = DESCALE_ZEROSHIFT(tmp10 + btmp3, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*0] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp10 - btmp3, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*7] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp11 + btmp2, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*1] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp11 - btmp2, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*6] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp12 + btmp1, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*2] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp12 - btmp1, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*5] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp13 + btmp0, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*3] = (uint8)CLAMP(i);
-
- i = DESCALE_ZEROSHIFT(tmp13 - btmp0, CONST_BITS+PASS1_BITS+3);
- pDst_ptr[8*4] = (uint8)CLAMP(i);
- }
- };
-
- template <>
- struct Col<1>
- {
- static void idct(uint8* pDst_ptr, const int* pTemp)
- {
- int dcval = DESCALE_ZEROSHIFT(pTemp[0], PASS1_BITS+3);
- const uint8 dcval_clamped = (uint8)CLAMP(dcval);
- pDst_ptr[0*8] = dcval_clamped;
- pDst_ptr[1*8] = dcval_clamped;
- pDst_ptr[2*8] = dcval_clamped;
- pDst_ptr[3*8] = dcval_clamped;
- pDst_ptr[4*8] = dcval_clamped;
- pDst_ptr[5*8] = dcval_clamped;
- pDst_ptr[6*8] = dcval_clamped;
- pDst_ptr[7*8] = dcval_clamped;
- }
- };
-
- static const uint8 s_idct_row_table[] =
- {
- 1,0,0,0,0,0,0,0, 2,0,0,0,0,0,0,0, 2,1,0,0,0,0,0,0, 2,1,1,0,0,0,0,0, 2,2,1,0,0,0,0,0, 3,2,1,0,0,0,0,0, 4,2,1,0,0,0,0,0, 4,3,1,0,0,0,0,0,
- 4,3,2,0,0,0,0,0, 4,3,2,1,0,0,0,0, 4,3,2,1,1,0,0,0, 4,3,2,2,1,0,0,0, 4,3,3,2,1,0,0,0, 4,4,3,2,1,0,0,0, 5,4,3,2,1,0,0,0, 6,4,3,2,1,0,0,0,
- 6,5,3,2,1,0,0,0, 6,5,4,2,1,0,0,0, 6,5,4,3,1,0,0,0, 6,5,4,3,2,0,0,0, 6,5,4,3,2,1,0,0, 6,5,4,3,2,1,1,0, 6,5,4,3,2,2,1,0, 6,5,4,3,3,2,1,0,
- 6,5,4,4,3,2,1,0, 6,5,5,4,3,2,1,0, 6,6,5,4,3,2,1,0, 7,6,5,4,3,2,1,0, 8,6,5,4,3,2,1,0, 8,7,5,4,3,2,1,0, 8,7,6,4,3,2,1,0, 8,7,6,5,3,2,1,0,
- 8,7,6,5,4,2,1,0, 8,7,6,5,4,3,1,0, 8,7,6,5,4,3,2,0, 8,7,6,5,4,3,2,1, 8,7,6,5,4,3,2,2, 8,7,6,5,4,3,3,2, 8,7,6,5,4,4,3,2, 8,7,6,5,5,4,3,2,
- 8,7,6,6,5,4,3,2, 8,7,7,6,5,4,3,2, 8,8,7,6,5,4,3,2, 8,8,8,6,5,4,3,2, 8,8,8,7,5,4,3,2, 8,8,8,7,6,4,3,2, 8,8,8,7,6,5,3,2, 8,8,8,7,6,5,4,2,
- 8,8,8,7,6,5,4,3, 8,8,8,7,6,5,4,4, 8,8,8,7,6,5,5,4, 8,8,8,7,6,6,5,4, 8,8,8,7,7,6,5,4, 8,8,8,8,7,6,5,4, 8,8,8,8,8,6,5,4, 8,8,8,8,8,7,5,4,
- 8,8,8,8,8,7,6,4, 8,8,8,8,8,7,6,5, 8,8,8,8,8,7,6,6, 8,8,8,8,8,7,7,6, 8,8,8,8,8,8,7,6, 8,8,8,8,8,8,8,6, 8,8,8,8,8,8,8,7, 8,8,8,8,8,8,8,8,
- };
-
- static const uint8 s_idct_col_table[] = { 1, 1, 2, 3, 3, 3, 3, 3, 3, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8 };
-
- void idct(const jpgd_block_t* pSrc_ptr, uint8* pDst_ptr, int block_max_zag)
- {
- JPGD_ASSERT(block_max_zag >= 1);
- JPGD_ASSERT(block_max_zag <= 64);
-
- if (block_max_zag == 1)
- {
- int k = ((pSrc_ptr[0] + 4) >> 3) + 128;
- k = CLAMP(k);
- k = k | (k<<8);
- k = k | (k<<16);
-
- for (int i = 8; i > 0; i--)
- {
- *(int*)&pDst_ptr[0] = k;
- *(int*)&pDst_ptr[4] = k;
- pDst_ptr += 8;
- }
- return;
- }
-
- int temp[64];
-
- const jpgd_block_t* pSrc = pSrc_ptr;
- int* pTemp = temp;
-
- const uint8* pRow_tab = &s_idct_row_table[(block_max_zag - 1) * 8];
- int i;
- for (i = 8; i > 0; i--, pRow_tab++)
- {
- switch (*pRow_tab)
- {
- case 0: Row<0>::idct(pTemp, pSrc); break;
- case 1: Row<1>::idct(pTemp, pSrc); break;
- case 2: Row<2>::idct(pTemp, pSrc); break;
- case 3: Row<3>::idct(pTemp, pSrc); break;
- case 4: Row<4>::idct(pTemp, pSrc); break;
- case 5: Row<5>::idct(pTemp, pSrc); break;
- case 6: Row<6>::idct(pTemp, pSrc); break;
- case 7: Row<7>::idct(pTemp, pSrc); break;
- case 8: Row<8>::idct(pTemp, pSrc); break;
- }
-
- pSrc += 8;
- pTemp += 8;
- }
-
- pTemp = temp;
-
- const int nonzero_rows = s_idct_col_table[block_max_zag - 1];
- for (i = 8; i > 0; i--)
- {
- switch (nonzero_rows)
- {
- case 1: Col<1>::idct(pDst_ptr, pTemp); break;
- case 2: Col<2>::idct(pDst_ptr, pTemp); break;
- case 3: Col<3>::idct(pDst_ptr, pTemp); break;
- case 4: Col<4>::idct(pDst_ptr, pTemp); break;
- case 5: Col<5>::idct(pDst_ptr, pTemp); break;
- case 6: Col<6>::idct(pDst_ptr, pTemp); break;
- case 7: Col<7>::idct(pDst_ptr, pTemp); break;
- case 8: Col<8>::idct(pDst_ptr, pTemp); break;
- }
-
- pTemp++;
- pDst_ptr++;
- }
- }
-
- void idct_4x4(const jpgd_block_t* pSrc_ptr, uint8* pDst_ptr)
- {
- int temp[64];
- int* pTemp = temp;
- const jpgd_block_t* pSrc = pSrc_ptr;
-
- for (int i = 4; i > 0; i--)
- {
- Row<4>::idct(pTemp, pSrc);
- pSrc += 8;
- pTemp += 8;
- }
-
- pTemp = temp;
- for (int i = 8; i > 0; i--)
- {
- Col<4>::idct(pDst_ptr, pTemp);
- pTemp++;
- pDst_ptr++;
- }
- }
-
- // Retrieve one character from the input stream.
- inline uint jpeg_decoder::get_char()
- {
- // Any bytes remaining in buffer?
- if (!m_in_buf_left)
- {
- // Try to get more bytes.
- prep_in_buffer();
- // Still nothing to get?
- if (!m_in_buf_left)
- {
- // Pad the end of the stream with 0xFF 0xD9 (EOI marker)
- int t = m_tem_flag;
- m_tem_flag ^= 1;
- if (t)
- return 0xD9;
- else
- return 0xFF;
- }
- }
-
- uint c = *m_pIn_buf_ofs++;
- m_in_buf_left--;
-
- return c;
- }
-
- // Same as previous method, except can indicate if the character is a pad character or not.
- inline uint jpeg_decoder::get_char(bool *pPadding_flag)
- {
- if (!m_in_buf_left)
- {
- prep_in_buffer();
- if (!m_in_buf_left)
- {
- *pPadding_flag = true;
- int t = m_tem_flag;
- m_tem_flag ^= 1;
- if (t)
- return 0xD9;
- else
- return 0xFF;
- }
- }
-
- *pPadding_flag = false;
-
- uint c = *m_pIn_buf_ofs++;
- m_in_buf_left--;
-
- return c;
- }
-
- // Inserts a previously retrieved character back into the input buffer.
- inline void jpeg_decoder::stuff_char(uint8 q)
- {
- *(--m_pIn_buf_ofs) = q;
- m_in_buf_left++;
- }
-
- // Retrieves one character from the input stream, but does not read past markers. Will continue to return 0xFF when a marker is encountered.
- inline uint8 jpeg_decoder::get_octet()
- {
- bool padding_flag;
- int c = get_char(&padding_flag);
-
- if (c == 0xFF)
- {
- if (padding_flag)
- return 0xFF;
-
- c = get_char(&padding_flag);
- if (padding_flag)
- {
- stuff_char(0xFF);
- return 0xFF;
- }
-
- if (c == 0x00)
- return 0xFF;
- else
- {
- stuff_char(static_cast<uint8>(c));
- stuff_char(0xFF);
- return 0xFF;
- }
- }
-
- return static_cast<uint8>(c);
- }
-
- // Retrieves a variable number of bits from the input stream. Does not recognize markers.
- inline uint jpeg_decoder::get_bits(int num_bits)
- {
- if (!num_bits)
- return 0;
-
- uint i = m_bit_buf >> (32 - num_bits);
-
- if ((m_bits_left -= num_bits) <= 0)
- {
- m_bit_buf <<= (num_bits += m_bits_left);
-
- uint c1 = get_char();
- uint c2 = get_char();
- m_bit_buf = (m_bit_buf & 0xFFFF0000) | (c1 << 8) | c2;
-
- m_bit_buf <<= -m_bits_left;
-
- m_bits_left += 16;
-
- JPGD_ASSERT(m_bits_left >= 0);
- }
- else
- m_bit_buf <<= num_bits;
-
- return i;
- }
-
- // Retrieves a variable number of bits from the input stream. Markers will not be read into the input bit buffer. Instead, an infinite number of all 1's will be returned when a marker is encountered.
- inline uint jpeg_decoder::get_bits_no_markers(int num_bits)
- {
- if (!num_bits)
- return 0;
-
- uint i = m_bit_buf >> (32 - num_bits);
-
- if ((m_bits_left -= num_bits) <= 0)
- {
- m_bit_buf <<= (num_bits += m_bits_left);
-
- if ((m_in_buf_left < 2) || (m_pIn_buf_ofs[0] == 0xFF) || (m_pIn_buf_ofs[1] == 0xFF))
- {
- uint c1 = get_octet();
- uint c2 = get_octet();
- m_bit_buf |= (c1 << 8) | c2;
- }
- else
- {
- m_bit_buf |= ((uint)m_pIn_buf_ofs[0] << 8) | m_pIn_buf_ofs[1];
- m_in_buf_left -= 2;
- m_pIn_buf_ofs += 2;
- }
-
- m_bit_buf <<= -m_bits_left;
-
- m_bits_left += 16;
-
- JPGD_ASSERT(m_bits_left >= 0);
- }
- else
- m_bit_buf <<= num_bits;
-
- return i;
- }
-
- // Decodes a Huffman encoded symbol.
- inline int jpeg_decoder::huff_decode(huff_tables *pH)
- {
- int symbol;
-
- // Check first 8-bits: do we have a complete symbol?
- if ((symbol = pH->look_up[m_bit_buf >> 24]) < 0)
- {
- // Decode more bits, use a tree traversal to find symbol.
- int ofs = 23;
- do
- {
- symbol = pH->tree[-(int)(symbol + ((m_bit_buf >> ofs) & 1))];
- ofs--;
- } while (symbol < 0);
-
- get_bits_no_markers(8 + (23 - ofs));
- }
- else
- get_bits_no_markers(pH->code_size[symbol]);
-
- return symbol;
- }
-
- // Decodes a Huffman encoded symbol.
- inline int jpeg_decoder::huff_decode(huff_tables *pH, int& extra_bits)
- {
- int symbol;
-
- // Check first 8-bits: do we have a complete symbol?
- if ((symbol = pH->look_up2[m_bit_buf >> 24]) < 0)
- {
- // Use a tree traversal to find symbol.
- int ofs = 23;
- do
- {
- symbol = pH->tree[-(int)(symbol + ((m_bit_buf >> ofs) & 1))];
- ofs--;
- } while (symbol < 0);
-
- get_bits_no_markers(8 + (23 - ofs));
-
- extra_bits = get_bits_no_markers(symbol & 0xF);
- }
- else
- {
- JPGD_ASSERT(((symbol >> 8) & 31) == pH->code_size[symbol & 255] + ((symbol & 0x8000) ? (symbol & 15) : 0));
-
- if (symbol & 0x8000)
- {
- get_bits_no_markers((symbol >> 8) & 31);
- extra_bits = symbol >> 16;
- }
- else
- {
- int code_size = (symbol >> 8) & 31;
- int num_extra_bits = symbol & 0xF;
- int bits = code_size + num_extra_bits;
- if (bits <= (m_bits_left + 16))
- extra_bits = get_bits_no_markers(bits) & ((1 << num_extra_bits) - 1);
- else
- {
- get_bits_no_markers(code_size);
- extra_bits = get_bits_no_markers(num_extra_bits);
- }
- }
-
- symbol &= 0xFF;
- }
-
- return symbol;
- }
-
- // Tables and macro used to fully decode the DPCM differences.
- static const int s_extend_test[16] = { 0, 0x0001, 0x0002, 0x0004, 0x0008, 0x0010, 0x0020, 0x0040, 0x0080, 0x0100, 0x0200, 0x0400, 0x0800, 0x1000, 0x2000, 0x4000 };
- static const int s_extend_offset[16] = { 0, -1, -3, -7, -15, -31, -63, -127, -255, -511, -1023, -2047, -4095, -8191, -16383, -32767 };
- static const int s_extend_mask[] = { 0, (1<<0), (1<<1), (1<<2), (1<<3), (1<<4), (1<<5), (1<<6), (1<<7), (1<<8), (1<<9), (1<<10), (1<<11), (1<<12), (1<<13), (1<<14), (1<<15), (1<<16) };
- #define HUFF_EXTEND(x,s) ((x) < s_extend_test[s] ? (x) + s_extend_offset[s] : (x))
-
- // Clamps a value between 0-255.
- inline uint8 jpeg_decoder::clamp(int i)
- {
- if (static_cast<uint>(i) > 255)
- i = (((~i) >> 31) & 0xFF);
-
- return static_cast<uint8>(i);
- }
-
- namespace DCT_Upsample
- {
- struct Matrix44
- {
- typedef int Element_Type;
- enum { NUM_ROWS = 4, NUM_COLS = 4 };
-
- Element_Type v[NUM_ROWS][NUM_COLS];
-
- inline int rows() const { return NUM_ROWS; }
- inline int cols() const { return NUM_COLS; }
-
- inline const Element_Type & at(int r, int c) const { return v[r][c]; }
- inline Element_Type & at(int r, int c) { return v[r][c]; }
-
- inline Matrix44() { }
-
- inline Matrix44& operator += (const Matrix44& a)
- {
- for (int r = 0; r < NUM_ROWS; r++)
- {
- at(r, 0) += a.at(r, 0);
- at(r, 1) += a.at(r, 1);
- at(r, 2) += a.at(r, 2);
- at(r, 3) += a.at(r, 3);
- }
- return *this;
- }
-
- inline Matrix44& operator -= (const Matrix44& a)
- {
- for (int r = 0; r < NUM_ROWS; r++)
- {
- at(r, 0) -= a.at(r, 0);
- at(r, 1) -= a.at(r, 1);
- at(r, 2) -= a.at(r, 2);
- at(r, 3) -= a.at(r, 3);
- }
- return *this;
- }
-
- friend inline Matrix44 operator + (const Matrix44& a, const Matrix44& b)
- {
- Matrix44 ret;
- for (int r = 0; r < NUM_ROWS; r++)
- {
- ret.at(r, 0) = a.at(r, 0) + b.at(r, 0);
- ret.at(r, 1) = a.at(r, 1) + b.at(r, 1);
- ret.at(r, 2) = a.at(r, 2) + b.at(r, 2);
- ret.at(r, 3) = a.at(r, 3) + b.at(r, 3);
- }
- return ret;
- }
-
- friend inline Matrix44 operator - (const Matrix44& a, const Matrix44& b)
- {
- Matrix44 ret;
- for (int r = 0; r < NUM_ROWS; r++)
- {
- ret.at(r, 0) = a.at(r, 0) - b.at(r, 0);
- ret.at(r, 1) = a.at(r, 1) - b.at(r, 1);
- ret.at(r, 2) = a.at(r, 2) - b.at(r, 2);
- ret.at(r, 3) = a.at(r, 3) - b.at(r, 3);
- }
- return ret;
- }
-
- static inline void add_and_store(jpgd_block_t* pDst, const Matrix44& a, const Matrix44& b)
- {
- for (int r = 0; r < 4; r++)
- {
- pDst[0*8 + r] = static_cast<jpgd_block_t>(a.at(r, 0) + b.at(r, 0));
- pDst[1*8 + r] = static_cast<jpgd_block_t>(a.at(r, 1) + b.at(r, 1));
- pDst[2*8 + r] = static_cast<jpgd_block_t>(a.at(r, 2) + b.at(r, 2));
- pDst[3*8 + r] = static_cast<jpgd_block_t>(a.at(r, 3) + b.at(r, 3));
- }
- }
-
- static inline void sub_and_store(jpgd_block_t* pDst, const Matrix44& a, const Matrix44& b)
- {
- for (int r = 0; r < 4; r++)
- {
- pDst[0*8 + r] = static_cast<jpgd_block_t>(a.at(r, 0) - b.at(r, 0));
- pDst[1*8 + r] = static_cast<jpgd_block_t>(a.at(r, 1) - b.at(r, 1));
- pDst[2*8 + r] = static_cast<jpgd_block_t>(a.at(r, 2) - b.at(r, 2));
- pDst[3*8 + r] = static_cast<jpgd_block_t>(a.at(r, 3) - b.at(r, 3));
- }
- }
- };
-
- const int FRACT_BITS = 10;
- const int SCALE = 1 << FRACT_BITS;
-
- typedef int Temp_Type;
- #define D(i) (((i) + (SCALE >> 1)) >> FRACT_BITS)
- #define F(i) ((int)((i) * SCALE + .5f))
-
- // Any decent C++ compiler will optimize this at compile time to a 0, or an array access.
- #define AT(c, r) ((((c)>=NUM_COLS)||((r)>=NUM_ROWS)) ? 0 : pSrc[(c)+(r)*8])
-
- // NUM_ROWS/NUM_COLS = # of non-zero rows/cols in input matrix
- template<int NUM_ROWS, int NUM_COLS>
- struct P_Q
- {
- static void calc(Matrix44& P, Matrix44& Q, const jpgd_block_t* pSrc)
- {
- // 4x8 = 4x8 times 8x8, matrix 0 is constant
- const Temp_Type X000 = AT(0, 0);
- const Temp_Type X001 = AT(0, 1);
- const Temp_Type X002 = AT(0, 2);
- const Temp_Type X003 = AT(0, 3);
- const Temp_Type X004 = AT(0, 4);
- const Temp_Type X005 = AT(0, 5);
- const Temp_Type X006 = AT(0, 6);
- const Temp_Type X007 = AT(0, 7);
- const Temp_Type X010 = D(F(0.415735f) * AT(1, 0) + F(0.791065f) * AT(3, 0) + F(-0.352443f) * AT(5, 0) + F(0.277785f) * AT(7, 0));
- const Temp_Type X011 = D(F(0.415735f) * AT(1, 1) + F(0.791065f) * AT(3, 1) + F(-0.352443f) * AT(5, 1) + F(0.277785f) * AT(7, 1));
- const Temp_Type X012 = D(F(0.415735f) * AT(1, 2) + F(0.791065f) * AT(3, 2) + F(-0.352443f) * AT(5, 2) + F(0.277785f) * AT(7, 2));
- const Temp_Type X013 = D(F(0.415735f) * AT(1, 3) + F(0.791065f) * AT(3, 3) + F(-0.352443f) * AT(5, 3) + F(0.277785f) * AT(7, 3));
- const Temp_Type X014 = D(F(0.415735f) * AT(1, 4) + F(0.791065f) * AT(3, 4) + F(-0.352443f) * AT(5, 4) + F(0.277785f) * AT(7, 4));
- const Temp_Type X015 = D(F(0.415735f) * AT(1, 5) + F(0.791065f) * AT(3, 5) + F(-0.352443f) * AT(5, 5) + F(0.277785f) * AT(7, 5));
- const Temp_Type X016 = D(F(0.415735f) * AT(1, 6) + F(0.791065f) * AT(3, 6) + F(-0.352443f) * AT(5, 6) + F(0.277785f) * AT(7, 6));
- const Temp_Type X017 = D(F(0.415735f) * AT(1, 7) + F(0.791065f) * AT(3, 7) + F(-0.352443f) * AT(5, 7) + F(0.277785f) * AT(7, 7));
- const Temp_Type X020 = AT(4, 0);
- const Temp_Type X021 = AT(4, 1);
- const Temp_Type X022 = AT(4, 2);
- const Temp_Type X023 = AT(4, 3);
- const Temp_Type X024 = AT(4, 4);
- const Temp_Type X025 = AT(4, 5);
- const Temp_Type X026 = AT(4, 6);
- const Temp_Type X027 = AT(4, 7);
- const Temp_Type X030 = D(F(0.022887f) * AT(1, 0) + F(-0.097545f) * AT(3, 0) + F(0.490393f) * AT(5, 0) + F(0.865723f) * AT(7, 0));
- const Temp_Type X031 = D(F(0.022887f) * AT(1, 1) + F(-0.097545f) * AT(3, 1) + F(0.490393f) * AT(5, 1) + F(0.865723f) * AT(7, 1));
- const Temp_Type X032 = D(F(0.022887f) * AT(1, 2) + F(-0.097545f) * AT(3, 2) + F(0.490393f) * AT(5, 2) + F(0.865723f) * AT(7, 2));
- const Temp_Type X033 = D(F(0.022887f) * AT(1, 3) + F(-0.097545f) * AT(3, 3) + F(0.490393f) * AT(5, 3) + F(0.865723f) * AT(7, 3));
- const Temp_Type X034 = D(F(0.022887f) * AT(1, 4) + F(-0.097545f) * AT(3, 4) + F(0.490393f) * AT(5, 4) + F(0.865723f) * AT(7, 4));
- const Temp_Type X035 = D(F(0.022887f) * AT(1, 5) + F(-0.097545f) * AT(3, 5) + F(0.490393f) * AT(5, 5) + F(0.865723f) * AT(7, 5));
- const Temp_Type X036 = D(F(0.022887f) * AT(1, 6) + F(-0.097545f) * AT(3, 6) + F(0.490393f) * AT(5, 6) + F(0.865723f) * AT(7, 6));
- const Temp_Type X037 = D(F(0.022887f) * AT(1, 7) + F(-0.097545f) * AT(3, 7) + F(0.490393f) * AT(5, 7) + F(0.865723f) * AT(7, 7));
-
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
- P.at(0, 0) = X000;
- P.at(0, 1) = D(X001 * F(0.415735f) + X003 * F(0.791065f) + X005 * F(-0.352443f) + X007 * F(0.277785f));
- P.at(0, 2) = X004;
- P.at(0, 3) = D(X001 * F(0.022887f) + X003 * F(-0.097545f) + X005 * F(0.490393f) + X007 * F(0.865723f));
- P.at(1, 0) = X010;
- P.at(1, 1) = D(X011 * F(0.415735f) + X013 * F(0.791065f) + X015 * F(-0.352443f) + X017 * F(0.277785f));
- P.at(1, 2) = X014;
- P.at(1, 3) = D(X011 * F(0.022887f) + X013 * F(-0.097545f) + X015 * F(0.490393f) + X017 * F(0.865723f));
- P.at(2, 0) = X020;
- P.at(2, 1) = D(X021 * F(0.415735f) + X023 * F(0.791065f) + X025 * F(-0.352443f) + X027 * F(0.277785f));
- P.at(2, 2) = X024;
- P.at(2, 3) = D(X021 * F(0.022887f) + X023 * F(-0.097545f) + X025 * F(0.490393f) + X027 * F(0.865723f));
- P.at(3, 0) = X030;
- P.at(3, 1) = D(X031 * F(0.415735f) + X033 * F(0.791065f) + X035 * F(-0.352443f) + X037 * F(0.277785f));
- P.at(3, 2) = X034;
- P.at(3, 3) = D(X031 * F(0.022887f) + X033 * F(-0.097545f) + X035 * F(0.490393f) + X037 * F(0.865723f));
- // 40 muls 24 adds
-
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
- Q.at(0, 0) = D(X001 * F(0.906127f) + X003 * F(-0.318190f) + X005 * F(0.212608f) + X007 * F(-0.180240f));
- Q.at(0, 1) = X002;
- Q.at(0, 2) = D(X001 * F(-0.074658f) + X003 * F(0.513280f) + X005 * F(0.768178f) + X007 * F(-0.375330f));
- Q.at(0, 3) = X006;
- Q.at(1, 0) = D(X011 * F(0.906127f) + X013 * F(-0.318190f) + X015 * F(0.212608f) + X017 * F(-0.180240f));
- Q.at(1, 1) = X012;
- Q.at(1, 2) = D(X011 * F(-0.074658f) + X013 * F(0.513280f) + X015 * F(0.768178f) + X017 * F(-0.375330f));
- Q.at(1, 3) = X016;
- Q.at(2, 0) = D(X021 * F(0.906127f) + X023 * F(-0.318190f) + X025 * F(0.212608f) + X027 * F(-0.180240f));
- Q.at(2, 1) = X022;
- Q.at(2, 2) = D(X021 * F(-0.074658f) + X023 * F(0.513280f) + X025 * F(0.768178f) + X027 * F(-0.375330f));
- Q.at(2, 3) = X026;
- Q.at(3, 0) = D(X031 * F(0.906127f) + X033 * F(-0.318190f) + X035 * F(0.212608f) + X037 * F(-0.180240f));
767
- Q.at(3, 1) = X032;
768
- Q.at(3, 2) = D(X031 * F(-0.074658f) + X033 * F(0.513280f) + X035 * F(0.768178f) + X037 * F(-0.375330f));
769
- Q.at(3, 3) = X036;
770
- // 40 muls 24 adds
771
- }
772
- };
773
-
774
- template<int NUM_ROWS, int NUM_COLS>
775
- struct R_S
776
- {
777
- static void calc(Matrix44& R, Matrix44& S, const jpgd_block_t* pSrc)
778
- {
779
- // 4x8 = 4x8 times 8x8, matrix 0 is constant
780
- const Temp_Type X100 = D(F(0.906127f) * AT(1, 0) + F(-0.318190f) * AT(3, 0) + F(0.212608f) * AT(5, 0) + F(-0.180240f) * AT(7, 0));
781
- const Temp_Type X101 = D(F(0.906127f) * AT(1, 1) + F(-0.318190f) * AT(3, 1) + F(0.212608f) * AT(5, 1) + F(-0.180240f) * AT(7, 1));
782
- const Temp_Type X102 = D(F(0.906127f) * AT(1, 2) + F(-0.318190f) * AT(3, 2) + F(0.212608f) * AT(5, 2) + F(-0.180240f) * AT(7, 2));
783
- const Temp_Type X103 = D(F(0.906127f) * AT(1, 3) + F(-0.318190f) * AT(3, 3) + F(0.212608f) * AT(5, 3) + F(-0.180240f) * AT(7, 3));
784
- const Temp_Type X104 = D(F(0.906127f) * AT(1, 4) + F(-0.318190f) * AT(3, 4) + F(0.212608f) * AT(5, 4) + F(-0.180240f) * AT(7, 4));
785
- const Temp_Type X105 = D(F(0.906127f) * AT(1, 5) + F(-0.318190f) * AT(3, 5) + F(0.212608f) * AT(5, 5) + F(-0.180240f) * AT(7, 5));
786
- const Temp_Type X106 = D(F(0.906127f) * AT(1, 6) + F(-0.318190f) * AT(3, 6) + F(0.212608f) * AT(5, 6) + F(-0.180240f) * AT(7, 6));
787
- const Temp_Type X107 = D(F(0.906127f) * AT(1, 7) + F(-0.318190f) * AT(3, 7) + F(0.212608f) * AT(5, 7) + F(-0.180240f) * AT(7, 7));
788
- const Temp_Type X110 = AT(2, 0);
789
- const Temp_Type X111 = AT(2, 1);
790
- const Temp_Type X112 = AT(2, 2);
791
- const Temp_Type X113 = AT(2, 3);
792
- const Temp_Type X114 = AT(2, 4);
793
- const Temp_Type X115 = AT(2, 5);
794
- const Temp_Type X116 = AT(2, 6);
795
- const Temp_Type X117 = AT(2, 7);
796
- const Temp_Type X120 = D(F(-0.074658f) * AT(1, 0) + F(0.513280f) * AT(3, 0) + F(0.768178f) * AT(5, 0) + F(-0.375330f) * AT(7, 0));
797
- const Temp_Type X121 = D(F(-0.074658f) * AT(1, 1) + F(0.513280f) * AT(3, 1) + F(0.768178f) * AT(5, 1) + F(-0.375330f) * AT(7, 1));
798
- const Temp_Type X122 = D(F(-0.074658f) * AT(1, 2) + F(0.513280f) * AT(3, 2) + F(0.768178f) * AT(5, 2) + F(-0.375330f) * AT(7, 2));
799
- const Temp_Type X123 = D(F(-0.074658f) * AT(1, 3) + F(0.513280f) * AT(3, 3) + F(0.768178f) * AT(5, 3) + F(-0.375330f) * AT(7, 3));
800
- const Temp_Type X124 = D(F(-0.074658f) * AT(1, 4) + F(0.513280f) * AT(3, 4) + F(0.768178f) * AT(5, 4) + F(-0.375330f) * AT(7, 4));
801
- const Temp_Type X125 = D(F(-0.074658f) * AT(1, 5) + F(0.513280f) * AT(3, 5) + F(0.768178f) * AT(5, 5) + F(-0.375330f) * AT(7, 5));
802
- const Temp_Type X126 = D(F(-0.074658f) * AT(1, 6) + F(0.513280f) * AT(3, 6) + F(0.768178f) * AT(5, 6) + F(-0.375330f) * AT(7, 6));
803
- const Temp_Type X127 = D(F(-0.074658f) * AT(1, 7) + F(0.513280f) * AT(3, 7) + F(0.768178f) * AT(5, 7) + F(-0.375330f) * AT(7, 7));
804
- const Temp_Type X130 = AT(6, 0);
805
- const Temp_Type X131 = AT(6, 1);
806
- const Temp_Type X132 = AT(6, 2);
807
- const Temp_Type X133 = AT(6, 3);
808
- const Temp_Type X134 = AT(6, 4);
809
- const Temp_Type X135 = AT(6, 5);
810
- const Temp_Type X136 = AT(6, 6);
811
- const Temp_Type X137 = AT(6, 7);
812
- // 80 muls 48 adds
813
-
814
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
815
- R.at(0, 0) = X100;
816
- R.at(0, 1) = D(X101 * F(0.415735f) + X103 * F(0.791065f) + X105 * F(-0.352443f) + X107 * F(0.277785f));
817
- R.at(0, 2) = X104;
818
- R.at(0, 3) = D(X101 * F(0.022887f) + X103 * F(-0.097545f) + X105 * F(0.490393f) + X107 * F(0.865723f));
819
- R.at(1, 0) = X110;
820
- R.at(1, 1) = D(X111 * F(0.415735f) + X113 * F(0.791065f) + X115 * F(-0.352443f) + X117 * F(0.277785f));
821
- R.at(1, 2) = X114;
822
- R.at(1, 3) = D(X111 * F(0.022887f) + X113 * F(-0.097545f) + X115 * F(0.490393f) + X117 * F(0.865723f));
823
- R.at(2, 0) = X120;
824
- R.at(2, 1) = D(X121 * F(0.415735f) + X123 * F(0.791065f) + X125 * F(-0.352443f) + X127 * F(0.277785f));
825
- R.at(2, 2) = X124;
826
- R.at(2, 3) = D(X121 * F(0.022887f) + X123 * F(-0.097545f) + X125 * F(0.490393f) + X127 * F(0.865723f));
827
- R.at(3, 0) = X130;
828
- R.at(3, 1) = D(X131 * F(0.415735f) + X133 * F(0.791065f) + X135 * F(-0.352443f) + X137 * F(0.277785f));
829
- R.at(3, 2) = X134;
830
- R.at(3, 3) = D(X131 * F(0.022887f) + X133 * F(-0.097545f) + X135 * F(0.490393f) + X137 * F(0.865723f));
831
- // 40 muls 24 adds
832
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
833
- S.at(0, 0) = D(X101 * F(0.906127f) + X103 * F(-0.318190f) + X105 * F(0.212608f) + X107 * F(-0.180240f));
834
- S.at(0, 1) = X102;
835
- S.at(0, 2) = D(X101 * F(-0.074658f) + X103 * F(0.513280f) + X105 * F(0.768178f) + X107 * F(-0.375330f));
836
- S.at(0, 3) = X106;
837
- S.at(1, 0) = D(X111 * F(0.906127f) + X113 * F(-0.318190f) + X115 * F(0.212608f) + X117 * F(-0.180240f));
838
- S.at(1, 1) = X112;
839
- S.at(1, 2) = D(X111 * F(-0.074658f) + X113 * F(0.513280f) + X115 * F(0.768178f) + X117 * F(-0.375330f));
840
- S.at(1, 3) = X116;
841
- S.at(2, 0) = D(X121 * F(0.906127f) + X123 * F(-0.318190f) + X125 * F(0.212608f) + X127 * F(-0.180240f));
842
- S.at(2, 1) = X122;
843
- S.at(2, 2) = D(X121 * F(-0.074658f) + X123 * F(0.513280f) + X125 * F(0.768178f) + X127 * F(-0.375330f));
844
- S.at(2, 3) = X126;
845
- S.at(3, 0) = D(X131 * F(0.906127f) + X133 * F(-0.318190f) + X135 * F(0.212608f) + X137 * F(-0.180240f));
846
- S.at(3, 1) = X132;
847
- S.at(3, 2) = D(X131 * F(-0.074658f) + X133 * F(0.513280f) + X135 * F(0.768178f) + X137 * F(-0.375330f));
848
- S.at(3, 3) = X136;
849
- // 40 muls 24 adds
850
- }
851
- };
852
- } // end namespace DCT_Upsample
853
-
854
- // Unconditionally frees all allocated m_blocks.
855
- void jpeg_decoder::free_all_blocks()
856
- {
857
- m_pStream = NULL;
858
- for (mem_block *b = m_pMem_blocks; b; )
859
- {
860
- mem_block *n = b->m_pNext;
861
- jpgd_free(b);
862
- b = n;
863
- }
864
- m_pMem_blocks = NULL;
865
- }
866
-
867
- // This method handles all errors.
868
- // It could easily be changed to use C++ exceptions.
869
- void jpeg_decoder::stop_decoding(jpgd_status status)
870
- {
871
- m_error_code = status;
872
- free_all_blocks();
873
- longjmp(m_jmp_state, status);
874
-
875
- // we shouldn't get here as longjmp shouldn't return, but we put it here to make it explicit
876
- // that this function doesn't return, otherwise we get this error:
877
- //
878
- // error : function declared 'noreturn' should not return
879
- exit(1);
880
- }
881
-
882
- void *jpeg_decoder::alloc(size_t nSize, bool zero)
883
- {
884
- nSize = (JPGD_MAX(nSize, 1) + 3) & ~3;
885
- char *rv = NULL;
886
- for (mem_block *b = m_pMem_blocks; b; b = b->m_pNext)
887
- {
888
- if ((b->m_used_count + nSize) <= b->m_size)
889
- {
890
- rv = b->m_data + b->m_used_count;
891
- b->m_used_count += nSize;
892
- break;
893
- }
894
- }
895
- if (!rv)
896
- {
897
- int capacity = JPGD_MAX(32768 - 256, (nSize + 2047) & ~2047);
898
- mem_block *b = (mem_block*)jpgd_malloc(sizeof(mem_block) + capacity);
899
- if (!b) stop_decoding(JPGD_NOTENOUGHMEM);
900
- b->m_pNext = m_pMem_blocks; m_pMem_blocks = b;
901
- b->m_used_count = nSize;
902
- b->m_size = capacity;
903
- rv = b->m_data;
904
- }
905
- if (zero) memset(rv, 0, nSize);
906
- return rv;
907
- }
908
-
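The `alloc()` routine above is a small arena allocator: every request is rounded up to a 4-byte multiple (with zero-byte requests treated as one byte) and carved out of a linked list of pooled `mem_block`s. A minimal standalone sketch of just that rounding rule — the function name here is illustrative, not from jpgd:

```cpp
#include <cassert>
#include <cstddef>

// Round an allocation request up to the next multiple of 4 bytes,
// treating a zero-byte request as one byte -- the same rule alloc()
// applies with: nSize = (JPGD_MAX(nSize, 1) + 3) & ~3;
inline std::size_t round_request(std::size_t n)
{
    if (n < 1) n = 1;
    return (n + 3) & ~static_cast<std::size_t>(3);
}
```

Because every returned size is a multiple of 4, successive carves from a block stay 4-byte aligned as long as the block itself starts aligned.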
909
- void jpeg_decoder::word_clear(void *p, uint16 c, uint n)
910
- {
911
- uint8 *pD = (uint8*)p;
912
- const uint8 l = c & 0xFF, h = (c >> 8) & 0xFF;
913
- while (n)
914
- {
915
- pD[0] = l; pD[1] = h; pD += 2;
916
- n--;
917
- }
918
- }
919
-
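`word_clear()` writes the 16-bit pattern low byte first, so the value `0xD9FF` comes out as the byte sequence `FF D9` — a JPEG EOI marker — which is how `prep_in_buffer()` below pads the tail of the input buffer. A self-contained sketch of the same loop (the name differs from jpgd):

```cpp
#include <cassert>
#include <cstdint>

// Fill n 16-bit words at p with c, low byte first -- mirrors
// jpeg_decoder::word_clear(). Passing 0xD9FF therefore emits the byte
// pattern FF D9 (a JPEG EOI marker) repeatedly.
inline void word_fill(void* p, std::uint16_t c, unsigned n)
{
    std::uint8_t* pD = static_cast<std::uint8_t*>(p);
    const std::uint8_t l = c & 0xFF, h = (c >> 8) & 0xFF;
    while (n--) { *pD++ = l; *pD++ = h; }
}
```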
920
- // Refill the input buffer.
921
- // This method will sit in a loop until (A) the buffer is full or (B)
922
- // the stream's read() method reports an end of file condition.
923
- void jpeg_decoder::prep_in_buffer()
924
- {
925
- m_in_buf_left = 0;
926
- m_pIn_buf_ofs = m_in_buf;
927
-
928
- if (m_eof_flag)
929
- return;
930
-
931
- do
932
- {
933
- int bytes_read = m_pStream->read(m_in_buf + m_in_buf_left, JPGD_IN_BUF_SIZE - m_in_buf_left, &m_eof_flag);
934
- if (bytes_read == -1)
935
- stop_decoding(JPGD_STREAM_READ);
936
-
937
- m_in_buf_left += bytes_read;
938
- } while ((m_in_buf_left < JPGD_IN_BUF_SIZE) && (!m_eof_flag));
939
-
940
- m_total_bytes_read += m_in_buf_left;
941
-
942
- // Pad the end of the block with M_EOI (prevents the decompressor from going off the rails if the stream is invalid).
943
- // (This dates way back to when this decompressor was written in C/asm, and the all-asm Huffman decoder did some fancy things to increase perf.)
944
- word_clear(m_pIn_buf_ofs + m_in_buf_left, 0xD9FF, 64);
945
- }
946
-
947
- // Read a Huffman code table.
948
- void jpeg_decoder::read_dht_marker()
949
- {
950
- int i, index, count;
951
- uint8 huff_num[17];
952
- uint8 huff_val[256];
953
-
954
- uint num_left = get_bits(16);
955
-
956
- if (num_left < 2)
957
- stop_decoding(JPGD_BAD_DHT_MARKER);
958
-
959
- num_left -= 2;
960
-
961
- while (num_left)
962
- {
963
- index = get_bits(8);
964
-
965
- huff_num[0] = 0;
966
-
967
- count = 0;
968
-
969
- for (i = 1; i <= 16; i++)
970
- {
971
- huff_num[i] = static_cast<uint8>(get_bits(8));
972
- count += huff_num[i];
973
- }
974
-
975
- if (count > 255)
976
- stop_decoding(JPGD_BAD_DHT_COUNTS);
977
-
978
- for (i = 0; i < count; i++)
979
- huff_val[i] = static_cast<uint8>(get_bits(8));
980
-
981
- i = 1 + 16 + count;
982
-
983
- if (num_left < (uint)i)
984
- stop_decoding(JPGD_BAD_DHT_MARKER);
985
-
986
- num_left -= i;
987
-
988
- if ((index & 0x10) > 0x10)
989
- stop_decoding(JPGD_BAD_DHT_INDEX);
990
-
991
- index = (index & 0x0F) + ((index & 0x10) >> 4) * (JPGD_MAX_HUFF_TABLES >> 1);
992
-
993
- if (index >= JPGD_MAX_HUFF_TABLES)
994
- stop_decoding(JPGD_BAD_DHT_INDEX);
995
-
996
- if (!m_huff_num[index])
997
- m_huff_num[index] = (uint8 *)alloc(17);
998
-
999
- if (!m_huff_val[index])
1000
- m_huff_val[index] = (uint8 *)alloc(256);
1001
-
1002
- m_huff_ac[index] = (index & 0x10) != 0;
1003
- memcpy(m_huff_num[index], huff_num, 17);
1004
- memcpy(m_huff_val[index], huff_val, 256);
1005
- }
1006
- }
1007
-
1008
- // Read a quantization table.
1009
- void jpeg_decoder::read_dqt_marker()
1010
- {
1011
- int n, i, prec;
1012
- uint num_left;
1013
- uint temp;
1014
-
1015
- num_left = get_bits(16);
1016
-
1017
- if (num_left < 2)
1018
- stop_decoding(JPGD_BAD_DQT_MARKER);
1019
-
1020
- num_left -= 2;
1021
-
1022
- while (num_left)
1023
- {
1024
- n = get_bits(8);
1025
- prec = n >> 4;
1026
- n &= 0x0F;
1027
-
1028
- if (n >= JPGD_MAX_QUANT_TABLES)
1029
- stop_decoding(JPGD_BAD_DQT_TABLE);
1030
-
1031
- if (!m_quant[n])
1032
- m_quant[n] = (jpgd_quant_t *)alloc(64 * sizeof(jpgd_quant_t));
1033
-
1034
- // read quantization entries, in zag order
1035
- for (i = 0; i < 64; i++)
1036
- {
1037
- temp = get_bits(8);
1038
-
1039
- if (prec)
1040
- temp = (temp << 8) + get_bits(8);
1041
-
1042
- m_quant[n][i] = static_cast<jpgd_quant_t>(temp);
1043
- }
1044
-
1045
- i = 64 + 1;
1046
-
1047
- if (prec)
1048
- i += 64;
1049
-
1050
- if (num_left < (uint)i)
1051
- stop_decoding(JPGD_BAD_DQT_LENGTH);
1052
-
1053
- num_left -= i;
1054
- }
1055
- }
1056
-
1057
- // Read the start of frame (SOF) marker.
1058
- void jpeg_decoder::read_sof_marker()
1059
- {
1060
- int i;
1061
- uint num_left;
1062
-
1063
- num_left = get_bits(16);
1064
-
1065
- if (get_bits(8) != 8) /* precision: sorry, only 8-bit precision is supported right now */
1066
- stop_decoding(JPGD_BAD_PRECISION);
1067
-
1068
- m_image_y_size = get_bits(16);
1069
-
1070
- if ((m_image_y_size < 1) || (m_image_y_size > JPGD_MAX_HEIGHT))
1071
- stop_decoding(JPGD_BAD_HEIGHT);
1072
-
1073
- m_image_x_size = get_bits(16);
1074
-
1075
- if ((m_image_x_size < 1) || (m_image_x_size > JPGD_MAX_WIDTH))
1076
- stop_decoding(JPGD_BAD_WIDTH);
1077
-
1078
- m_comps_in_frame = get_bits(8);
1079
-
1080
- if (m_comps_in_frame > JPGD_MAX_COMPONENTS)
1081
- stop_decoding(JPGD_TOO_MANY_COMPONENTS);
1082
-
1083
- if (num_left != (uint)(m_comps_in_frame * 3 + 8))
1084
- stop_decoding(JPGD_BAD_SOF_LENGTH);
1085
-
1086
- for (i = 0; i < m_comps_in_frame; i++)
1087
- {
1088
- m_comp_ident[i] = get_bits(8);
1089
- m_comp_h_samp[i] = get_bits(4);
1090
- m_comp_v_samp[i] = get_bits(4);
1091
- m_comp_quant[i] = get_bits(8);
1092
- }
1093
- }
1094
-
1095
- // Used to skip unrecognized markers.
1096
- void jpeg_decoder::skip_variable_marker()
1097
- {
1098
- uint num_left;
1099
-
1100
- num_left = get_bits(16);
1101
-
1102
- if (num_left < 2)
1103
- stop_decoding(JPGD_BAD_VARIABLE_MARKER);
1104
-
1105
- num_left -= 2;
1106
-
1107
- while (num_left)
1108
- {
1109
- get_bits(8);
1110
- num_left--;
1111
- }
1112
- }
1113
-
1114
- // Read a define restart interval (DRI) marker.
1115
- void jpeg_decoder::read_dri_marker()
1116
- {
1117
- if (get_bits(16) != 4)
1118
- stop_decoding(JPGD_BAD_DRI_LENGTH);
1119
-
1120
- m_restart_interval = get_bits(16);
1121
- }
1122
-
1123
- // Read a start of scan (SOS) marker.
1124
- void jpeg_decoder::read_sos_marker()
1125
- {
1126
- uint num_left;
1127
- int i, ci, n, c, cc;
1128
-
1129
- num_left = get_bits(16);
1130
-
1131
- n = get_bits(8);
1132
-
1133
- m_comps_in_scan = n;
1134
-
1135
- num_left -= 3;
1136
-
1137
- if ( (num_left != (uint)(n * 2 + 3)) || (n < 1) || (n > JPGD_MAX_COMPS_IN_SCAN) )
1138
- stop_decoding(JPGD_BAD_SOS_LENGTH);
1139
-
1140
- for (i = 0; i < n; i++)
1141
- {
1142
- cc = get_bits(8);
1143
- c = get_bits(8);
1144
- num_left -= 2;
1145
-
1146
- for (ci = 0; ci < m_comps_in_frame; ci++)
1147
- if (cc == m_comp_ident[ci])
1148
- break;
1149
-
1150
- if (ci >= m_comps_in_frame)
1151
- stop_decoding(JPGD_BAD_SOS_COMP_ID);
1152
-
1153
- m_comp_list[i] = ci;
1154
- m_comp_dc_tab[ci] = (c >> 4) & 15;
1155
- m_comp_ac_tab[ci] = (c & 15) + (JPGD_MAX_HUFF_TABLES >> 1);
1156
- }
1157
-
1158
- m_spectral_start = get_bits(8);
1159
- m_spectral_end = get_bits(8);
1160
- m_successive_high = get_bits(4);
1161
- m_successive_low = get_bits(4);
1162
-
1163
- if (!m_progressive_flag)
1164
- {
1165
- m_spectral_start = 0;
1166
- m_spectral_end = 63;
1167
- }
1168
-
1169
- num_left -= 3;
1170
-
1171
- while (num_left) /* read past whatever is num_left */
1172
- {
1173
- get_bits(8);
1174
- num_left--;
1175
- }
1176
- }
1177
-
1178
- // Finds the next marker.
1179
- int jpeg_decoder::next_marker()
1180
- {
1181
- uint c, bytes;
1182
-
1183
- bytes = 0;
1184
-
1185
- do
1186
- {
1187
- do
1188
- {
1189
- bytes++;
1190
- c = get_bits(8);
1191
- } while (c != 0xFF);
1192
-
1193
- do
1194
- {
1195
- c = get_bits(8);
1196
- } while (c == 0xFF);
1197
-
1198
- } while (c == 0);
1199
-
1200
- // If bytes > 0 here, there were extra bytes before the marker (not good).
1201
-
1202
- return c;
1203
- }
1204
-
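The nested loops in `next_marker()` implement standard JPEG marker scanning: skip forward to an `0xFF`, swallow any `0xFF` fill bytes, and ignore stuffed zeros (`FF 00`, which encode a literal `0xFF` inside entropy-coded data). The same loop structure over a plain byte array, as an illustrative sketch — the real decoder pulls bytes through its bit buffer instead:

```cpp
#include <cassert>
#include <cstdint>
#include <cstddef>
#include <vector>

// Return the code of the next marker in s starting at pos, advancing
// pos past it. Mirrors the loop shape of jpeg_decoder::next_marker().
inline int find_next_marker(const std::vector<std::uint8_t>& s, std::size_t& pos)
{
    std::uint8_t c = 0;
    do {
        do { c = s.at(pos++); } while (c != 0xFF); // skip to an 0xFF
        do { c = s.at(pos++); } while (c == 0xFF); // skip 0xFF fill bytes
    } while (c == 0);                              // FF 00 is stuffed data, not a marker
    return c;
}
```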
1205
- // Process markers. Returns when an SOFx, SOI, EOI, or SOS marker is
1206
- // encountered.
1207
- int jpeg_decoder::process_markers()
1208
- {
1209
- int c;
1210
-
1211
- for ( ; ; )
1212
- {
1213
- c = next_marker();
1214
-
1215
- switch (c)
1216
- {
1217
- case M_SOF0:
1218
- case M_SOF1:
1219
- case M_SOF2:
1220
- case M_SOF3:
1221
- case M_SOF5:
1222
- case M_SOF6:
1223
- case M_SOF7:
1224
- // case M_JPG:
1225
- case M_SOF9:
1226
- case M_SOF10:
1227
- case M_SOF11:
1228
- case M_SOF13:
1229
- case M_SOF14:
1230
- case M_SOF15:
1231
- case M_SOI:
1232
- case M_EOI:
1233
- case M_SOS:
1234
- {
1235
- return c;
1236
- }
1237
- case M_DHT:
1238
- {
1239
- read_dht_marker();
1240
- break;
1241
- }
1242
- // No arithmetic support - dumb patents!
1243
- case M_DAC:
1244
- {
1245
- stop_decoding(JPGD_NO_ARITHMITIC_SUPPORT);
1246
- break;
1247
- }
1248
- case M_DQT:
1249
- {
1250
- read_dqt_marker();
1251
- break;
1252
- }
1253
- case M_DRI:
1254
- {
1255
- read_dri_marker();
1256
- break;
1257
- }
1258
- //case M_APP0: /* no need to read the JFIF marker */
1259
-
1260
- case M_JPG:
1261
- case M_RST0: /* no parameters */
1262
- case M_RST1:
1263
- case M_RST2:
1264
- case M_RST3:
1265
- case M_RST4:
1266
- case M_RST5:
1267
- case M_RST6:
1268
- case M_RST7:
1269
- case M_TEM:
1270
- {
1271
- stop_decoding(JPGD_UNEXPECTED_MARKER);
1272
- break;
1273
- }
1274
- default: /* must be DNL, DHP, EXP, APPn, JPGn, COM, RESn, or APP0 */
1275
- {
1276
- skip_variable_marker();
1277
- break;
1278
- }
1279
- }
1280
- }
1281
- }
1282
-
1283
- // Finds the start of image (SOI) marker.
1284
- // This code is rather defensive: it only checks the first 4096 bytes to avoid
1285
- // false positives.
1286
- void jpeg_decoder::locate_soi_marker()
1287
- {
1288
- uint lastchar, thischar;
1289
- uint bytesleft;
1290
-
1291
- lastchar = get_bits(8);
1292
-
1293
- thischar = get_bits(8);
1294
-
1295
- /* ok if it's a normal JPEG file without a special header */
1296
-
1297
- if ((lastchar == 0xFF) && (thischar == M_SOI))
1298
- return;
1299
-
1300
- bytesleft = 4096;
1301
-
1302
- for ( ; ; )
1303
- {
1304
- if (--bytesleft == 0)
1305
- stop_decoding(JPGD_NOT_JPEG);
1306
-
1307
- lastchar = thischar;
1308
-
1309
- thischar = get_bits(8);
1310
-
1311
- if (lastchar == 0xFF)
1312
- {
1313
- if (thischar == M_SOI)
1314
- break;
1315
- else if (thischar == M_EOI) // get_bits will keep returning M_EOI if we read past the end
1316
- stop_decoding(JPGD_NOT_JPEG);
1317
- }
1318
- }
1319
-
1320
- // Check the next character after marker: if it's not 0xFF, it can't be the start of the next marker, so the file is bad.
1321
- thischar = (m_bit_buf >> 24) & 0xFF;
1322
-
1323
- if (thischar != 0xFF)
1324
- stop_decoding(JPGD_NOT_JPEG);
1325
- }
1326
-
1327
- // Find a start of frame (SOF) marker.
1328
- void jpeg_decoder::locate_sof_marker()
1329
- {
1330
- locate_soi_marker();
1331
-
1332
- int c = process_markers();
1333
-
1334
- switch (c)
1335
- {
1336
- case M_SOF2:
1337
- m_progressive_flag = JPGD_TRUE;
1338
- case M_SOF0: /* baseline DCT */
1339
- case M_SOF1: /* extended sequential DCT */
1340
- {
1341
- read_sof_marker();
1342
- break;
1343
- }
1344
- case M_SOF9: /* Arithmetic coding */
1345
- {
1346
- stop_decoding(JPGD_NO_ARITHMITIC_SUPPORT);
1347
- break;
1348
- }
1349
- default:
1350
- {
1351
- stop_decoding(JPGD_UNSUPPORTED_MARKER);
1352
- break;
1353
- }
1354
- }
1355
- }
1356
-
1357
- // Find a start of scan (SOS) marker.
1358
- int jpeg_decoder::locate_sos_marker()
1359
- {
1360
- int c;
1361
-
1362
- c = process_markers();
1363
-
1364
- if (c == M_EOI)
1365
- return JPGD_FALSE;
1366
- else if (c != M_SOS)
1367
- stop_decoding(JPGD_UNEXPECTED_MARKER);
1368
-
1369
- read_sos_marker();
1370
-
1371
- return JPGD_TRUE;
1372
- }
1373
-
1374
- // Reset everything to default/uninitialized state.
1375
- void jpeg_decoder::init(jpeg_decoder_stream *pStream)
1376
- {
1377
- m_pMem_blocks = NULL;
1378
- m_error_code = JPGD_SUCCESS;
1379
- m_ready_flag = false;
1380
- m_image_x_size = m_image_y_size = 0;
1381
- m_pStream = pStream;
1382
- m_progressive_flag = JPGD_FALSE;
1383
-
1384
- memset(m_huff_ac, 0, sizeof(m_huff_ac));
1385
- memset(m_huff_num, 0, sizeof(m_huff_num));
1386
- memset(m_huff_val, 0, sizeof(m_huff_val));
1387
- memset(m_quant, 0, sizeof(m_quant));
1388
-
1389
- m_scan_type = 0;
1390
- m_comps_in_frame = 0;
1391
-
1392
- memset(m_comp_h_samp, 0, sizeof(m_comp_h_samp));
1393
- memset(m_comp_v_samp, 0, sizeof(m_comp_v_samp));
1394
- memset(m_comp_quant, 0, sizeof(m_comp_quant));
1395
- memset(m_comp_ident, 0, sizeof(m_comp_ident));
1396
- memset(m_comp_h_blocks, 0, sizeof(m_comp_h_blocks));
1397
- memset(m_comp_v_blocks, 0, sizeof(m_comp_v_blocks));
1398
-
1399
- m_comps_in_scan = 0;
1400
- memset(m_comp_list, 0, sizeof(m_comp_list));
1401
- memset(m_comp_dc_tab, 0, sizeof(m_comp_dc_tab));
1402
- memset(m_comp_ac_tab, 0, sizeof(m_comp_ac_tab));
1403
-
1404
- m_spectral_start = 0;
1405
- m_spectral_end = 0;
1406
- m_successive_low = 0;
1407
- m_successive_high = 0;
1408
- m_max_mcu_x_size = 0;
1409
- m_max_mcu_y_size = 0;
1410
- m_blocks_per_mcu = 0;
1411
- m_max_blocks_per_row = 0;
1412
- m_mcus_per_row = 0;
1413
- m_mcus_per_col = 0;
1414
- m_expanded_blocks_per_component = 0;
1415
- m_expanded_blocks_per_mcu = 0;
1416
- m_expanded_blocks_per_row = 0;
1417
- m_freq_domain_chroma_upsample = false;
1418
-
1419
- memset(m_mcu_org, 0, sizeof(m_mcu_org));
1420
-
1421
- m_total_lines_left = 0;
1422
- m_mcu_lines_left = 0;
1423
- m_real_dest_bytes_per_scan_line = 0;
1424
- m_dest_bytes_per_scan_line = 0;
1425
- m_dest_bytes_per_pixel = 0;
1426
-
1427
- memset(m_pHuff_tabs, 0, sizeof(m_pHuff_tabs));
1428
-
1429
- memset(m_dc_coeffs, 0, sizeof(m_dc_coeffs));
1430
- memset(m_ac_coeffs, 0, sizeof(m_ac_coeffs));
1431
- memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu));
1432
-
1433
- m_eob_run = 0;
1434
-
1435
- memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu));
1436
-
1437
- m_pIn_buf_ofs = m_in_buf;
1438
- m_in_buf_left = 0;
1439
- m_eof_flag = false;
1440
- m_tem_flag = 0;
1441
-
1442
- memset(m_in_buf_pad_start, 0, sizeof(m_in_buf_pad_start));
1443
- memset(m_in_buf, 0, sizeof(m_in_buf));
1444
- memset(m_in_buf_pad_end, 0, sizeof(m_in_buf_pad_end));
1445
-
1446
- m_restart_interval = 0;
1447
- m_restarts_left = 0;
1448
- m_next_restart_num = 0;
1449
-
1450
- m_max_mcus_per_row = 0;
1451
- m_max_blocks_per_mcu = 0;
1452
- m_max_mcus_per_col = 0;
1453
-
1454
- memset(m_last_dc_val, 0, sizeof(m_last_dc_val));
1455
- m_pMCU_coefficients = NULL;
1456
- m_pSample_buf = NULL;
1457
-
1458
- m_total_bytes_read = 0;
1459
-
1460
- m_pScan_line_0 = NULL;
1461
- m_pScan_line_1 = NULL;
1462
-
1463
- // Ready the input buffer.
1464
- prep_in_buffer();
1465
-
1466
- // Prime the bit buffer.
1467
- m_bits_left = 16;
1468
- m_bit_buf = 0;
1469
-
1470
- get_bits(16);
1471
- get_bits(16);
1472
-
1473
- for (int i = 0; i < JPGD_MAX_BLOCKS_PER_MCU; i++)
1474
- m_mcu_block_max_zag[i] = 64;
1475
- }
1476
-
1477
- #define SCALEBITS 16
1478
- #define ONE_HALF ((int) 1 << (SCALEBITS-1))
1479
- #define FIX(x) ((int) ((x) * (1L<<SCALEBITS) + 0.5f))
1480
-
1481
- // Create a few tables that allow us to quickly convert YCbCr to RGB.
1482
- void jpeg_decoder::create_look_ups()
1483
- {
1484
- for (int i = 0; i <= 255; i++)
1485
- {
1486
- int k = i - 128;
1487
- m_crr[i] = ( FIX(1.40200f) * k + ONE_HALF) >> SCALEBITS;
1488
- m_cbb[i] = ( FIX(1.77200f) * k + ONE_HALF) >> SCALEBITS;
1489
- m_crg[i] = (-FIX(0.71414f)) * k;
1490
- m_cbg[i] = (-FIX(0.34414f)) * k + ONE_HALF;
1491
- }
1492
- }
1493
-
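`create_look_ups()` precomputes the YCbCr→RGB color terms in 16.16 fixed point: each coefficient is scaled by 2^16 (`FIX`), and `ONE_HALF` biases the product so the final right shift rounds to nearest. A self-contained sketch of the Cr→R term only — constant and function names here are illustrative, not jpgd's:

```cpp
#include <cassert>

// 16.16 fixed-point scaling, as in jpgd's FIX/ONE_HALF macros.
const int kScaleBits = 16;
const int kOneHalf   = 1 << (kScaleBits - 1);

inline int fix(double x) { return static_cast<int>(x * (1L << kScaleBits) + 0.5); }

// Red contribution of a Cr sample (0..255), centered at 128 --
// the same formula create_look_ups() stores into m_crr[].
inline int cr_to_r(int cr)
{
    const int k = cr - 128;
    return (fix(1.40200) * k + kOneHalf) >> kScaleBits;
}
```

Storing 256 precomputed entries per term turns the per-pixel conversion into table lookups and adds, with no floating point in the decode loop.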
1494
- // This method throws back into the stream any bytes that were read
1495
- // into the bit buffer during initial marker scanning.
1496
- void jpeg_decoder::fix_in_buffer()
1497
- {
1498
- // In case any 0xFF's were pulled into the buffer during marker scanning.
1499
- JPGD_ASSERT((m_bits_left & 7) == 0);
1500
-
1501
- if (m_bits_left == 16)
1502
- stuff_char( (uint8)(m_bit_buf & 0xFF));
1503
-
1504
- if (m_bits_left >= 8)
1505
- stuff_char( (uint8)((m_bit_buf >> 8) & 0xFF));
1506
-
1507
- stuff_char((uint8)((m_bit_buf >> 16) & 0xFF));
1508
- stuff_char((uint8)((m_bit_buf >> 24) & 0xFF));
1509
-
1510
- m_bits_left = 16;
1511
- get_bits_no_markers(16);
1512
- get_bits_no_markers(16);
1513
- }
1514
-
1515
- void jpeg_decoder::transform_mcu(int mcu_row)
1516
- {
1517
- jpgd_block_t* pSrc_ptr = m_pMCU_coefficients;
1518
- uint8* pDst_ptr = m_pSample_buf + mcu_row * m_blocks_per_mcu * 64;
1519
-
1520
- for (int mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++)
1521
- {
1522
- idct(pSrc_ptr, pDst_ptr, m_mcu_block_max_zag[mcu_block]);
1523
- pSrc_ptr += 64;
1524
- pDst_ptr += 64;
1525
- }
1526
- }
1527
-
1528
- static const uint8 s_max_rc[64] =
1529
- {
1530
- 17, 18, 34, 50, 50, 51, 52, 52, 52, 68, 84, 84, 84, 84, 85, 86, 86, 86, 86, 86,
1531
- 102, 118, 118, 118, 118, 118, 118, 119, 120, 120, 120, 120, 120, 120, 120, 136,
1532
- 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136,
1533
- 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136
1534
- };
1535
-
1536
- void jpeg_decoder::transform_mcu_expand(int mcu_row)
1537
- {
1538
- jpgd_block_t* pSrc_ptr = m_pMCU_coefficients;
1539
- uint8* pDst_ptr = m_pSample_buf + mcu_row * m_expanded_blocks_per_mcu * 64;
1540
-
1541
- // Y IDCT
1542
- int mcu_block;
1543
- for (mcu_block = 0; mcu_block < m_expanded_blocks_per_component; mcu_block++)
1544
- {
1545
- idct(pSrc_ptr, pDst_ptr, m_mcu_block_max_zag[mcu_block]);
1546
- pSrc_ptr += 64;
1547
- pDst_ptr += 64;
1548
- }
1549
-
1550
- // Chroma IDCT, with upsampling
1551
- jpgd_block_t temp_block[64];
1552
-
1553
- for (int i = 0; i < 2; i++)
1554
- {
1555
- DCT_Upsample::Matrix44 P, Q, R, S;
1556
-
1557
- JPGD_ASSERT(m_mcu_block_max_zag[mcu_block] >= 1);
1558
- JPGD_ASSERT(m_mcu_block_max_zag[mcu_block] <= 64);
1559
-
1560
- switch (s_max_rc[m_mcu_block_max_zag[mcu_block++] - 1])
1561
- {
1562
- case 1*16+1:
1563
- DCT_Upsample::P_Q<1, 1>::calc(P, Q, pSrc_ptr);
1564
- DCT_Upsample::R_S<1, 1>::calc(R, S, pSrc_ptr);
1565
- break;
1566
- case 1*16+2:
1567
- DCT_Upsample::P_Q<1, 2>::calc(P, Q, pSrc_ptr);
1568
- DCT_Upsample::R_S<1, 2>::calc(R, S, pSrc_ptr);
1569
- break;
1570
- case 2*16+2:
1571
- DCT_Upsample::P_Q<2, 2>::calc(P, Q, pSrc_ptr);
1572
- DCT_Upsample::R_S<2, 2>::calc(R, S, pSrc_ptr);
1573
- break;
1574
- case 3*16+2:
1575
- DCT_Upsample::P_Q<3, 2>::calc(P, Q, pSrc_ptr);
1576
- DCT_Upsample::R_S<3, 2>::calc(R, S, pSrc_ptr);
1577
- break;
1578
- case 3*16+3:
1579
- DCT_Upsample::P_Q<3, 3>::calc(P, Q, pSrc_ptr);
1580
- DCT_Upsample::R_S<3, 3>::calc(R, S, pSrc_ptr);
1581
- break;
1582
- case 3*16+4:
1583
- DCT_Upsample::P_Q<3, 4>::calc(P, Q, pSrc_ptr);
1584
- DCT_Upsample::R_S<3, 4>::calc(R, S, pSrc_ptr);
1585
- break;
1586
- case 4*16+4:
1587
- DCT_Upsample::P_Q<4, 4>::calc(P, Q, pSrc_ptr);
1588
- DCT_Upsample::R_S<4, 4>::calc(R, S, pSrc_ptr);
1589
- break;
1590
- case 5*16+4:
1591
- DCT_Upsample::P_Q<5, 4>::calc(P, Q, pSrc_ptr);
1592
- DCT_Upsample::R_S<5, 4>::calc(R, S, pSrc_ptr);
1593
- break;
1594
- case 5*16+5:
1595
- DCT_Upsample::P_Q<5, 5>::calc(P, Q, pSrc_ptr);
1596
- DCT_Upsample::R_S<5, 5>::calc(R, S, pSrc_ptr);
1597
- break;
1598
- case 5*16+6:
1599
- DCT_Upsample::P_Q<5, 6>::calc(P, Q, pSrc_ptr);
1600
- DCT_Upsample::R_S<5, 6>::calc(R, S, pSrc_ptr);
1601
- break;
1602
- case 6*16+6:
1603
- DCT_Upsample::P_Q<6, 6>::calc(P, Q, pSrc_ptr);
1604
- DCT_Upsample::R_S<6, 6>::calc(R, S, pSrc_ptr);
1605
- break;
1606
- case 7*16+6:
1607
- DCT_Upsample::P_Q<7, 6>::calc(P, Q, pSrc_ptr);
1608
- DCT_Upsample::R_S<7, 6>::calc(R, S, pSrc_ptr);
1609
- break;
1610
- case 7*16+7:
1611
- DCT_Upsample::P_Q<7, 7>::calc(P, Q, pSrc_ptr);
1612
- DCT_Upsample::R_S<7, 7>::calc(R, S, pSrc_ptr);
1613
- break;
1614
- case 7*16+8:
1615
- DCT_Upsample::P_Q<7, 8>::calc(P, Q, pSrc_ptr);
1616
- DCT_Upsample::R_S<7, 8>::calc(R, S, pSrc_ptr);
1617
- break;
1618
- case 8*16+8:
1619
- DCT_Upsample::P_Q<8, 8>::calc(P, Q, pSrc_ptr);
1620
- DCT_Upsample::R_S<8, 8>::calc(R, S, pSrc_ptr);
1621
- break;
1622
- default:
1623
- JPGD_ASSERT(false);
1624
- }
1625
-
1626
- DCT_Upsample::Matrix44 a(P + Q); P -= Q;
1627
- DCT_Upsample::Matrix44& b = P;
1628
- DCT_Upsample::Matrix44 c(R + S); R -= S;
1629
- DCT_Upsample::Matrix44& d = R;
1630
-
1631
- DCT_Upsample::Matrix44::add_and_store(temp_block, a, c);
1632
- idct_4x4(temp_block, pDst_ptr);
1633
- pDst_ptr += 64;
1634
-
1635
- DCT_Upsample::Matrix44::sub_and_store(temp_block, a, c);
1636
- idct_4x4(temp_block, pDst_ptr);
1637
- pDst_ptr += 64;
1638
-
1639
- DCT_Upsample::Matrix44::add_and_store(temp_block, b, d);
1640
- idct_4x4(temp_block, pDst_ptr);
1641
- pDst_ptr += 64;
1642
-
1643
- DCT_Upsample::Matrix44::sub_and_store(temp_block, b, d);
1644
- idct_4x4(temp_block, pDst_ptr);
1645
- pDst_ptr += 64;
1646
-
1647
- pSrc_ptr += 64;
1648
- }
1649
- }
1650
-
1651
- // Loads and dequantizes the next row of (already decoded) coefficients.
1652
- // Progressive images only.
1653
- void jpeg_decoder::load_next_row()
1654
- {
1655
- int i;
1656
- jpgd_block_t *p;
1657
- jpgd_quant_t *q;
1658
- int mcu_row, mcu_block, row_block = 0;
1659
- int component_num, component_id;
1660
- int block_x_mcu[JPGD_MAX_COMPONENTS];
1661
-
1662
- memset(block_x_mcu, 0, JPGD_MAX_COMPONENTS * sizeof(int));
1663
-
1664
- for (mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++)
1665
- {
1666
- int block_x_mcu_ofs = 0, block_y_mcu_ofs = 0;
1667
-
1668
- for (mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++)
1669
- {
1670
- component_id = m_mcu_org[mcu_block];
1671
- q = m_quant[m_comp_quant[component_id]];
1672
-
1673
- p = m_pMCU_coefficients + 64 * mcu_block;
1674
-
1675
- jpgd_block_t* pAC = coeff_buf_getp(m_ac_coeffs[component_id], block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs);
1676
- jpgd_block_t* pDC = coeff_buf_getp(m_dc_coeffs[component_id], block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs);
1677
- p[0] = pDC[0];
1678
- memcpy(&p[1], &pAC[1], 63 * sizeof(jpgd_block_t));
1679
-
1680
- for (i = 63; i > 0; i--)
1681
- if (p[g_ZAG[i]])
1682
- break;
1683
-
1684
- m_mcu_block_max_zag[mcu_block] = i + 1;
1685
-
1686
- for ( ; i >= 0; i--)
1687
- if (p[g_ZAG[i]])
1688
- p[g_ZAG[i]] = static_cast<jpgd_block_t>(p[g_ZAG[i]] * q[i]);
1689
-
1690
- row_block++;
1691
-
1692
- if (m_comps_in_scan == 1)
1693
- block_x_mcu[component_id]++;
1694
- else
1695
- {
1696
- if (++block_x_mcu_ofs == m_comp_h_samp[component_id])
1697
- {
1698
- block_x_mcu_ofs = 0;
1699
-
1700
- if (++block_y_mcu_ofs == m_comp_v_samp[component_id])
1701
- {
1702
- block_y_mcu_ofs = 0;
1703
-
1704
- block_x_mcu[component_id] += m_comp_h_samp[component_id];
1705
- }
1706
- }
1707
- }
1708
- }
1709
-
1710
- if (m_freq_domain_chroma_upsample)
1711
- transform_mcu_expand(mcu_row);
1712
- else
1713
- transform_mcu(mcu_row);
1714
- }
1715
-
1716
- if (m_comps_in_scan == 1)
1717
- m_block_y_mcu[m_comp_list[0]]++;
1718
- else
1719
- {
1720
- for (component_num = 0; component_num < m_comps_in_scan; component_num++)
1721
- {
1722
- component_id = m_comp_list[component_num];
1723
-
1724
- m_block_y_mcu[component_id] += m_comp_v_samp[component_id];
1725
- }
1726
- }
1727
- }
1728
-
1729
- // Restart interval processing.
1730
- void jpeg_decoder::process_restart()
1731
- {
1732
- int i;
1733
- int c = 0;
1734
-
1735
- // Align to a byte boundary
1736
- // FIXME: Is this really necessary? get_bits_no_markers() never reads in markers!
1737
- //get_bits_no_markers(m_bits_left & 7);
1738
-
1739
- // Let's scan a little bit to find the marker, but not _too_ far.
1740
- // 1536 is a "fudge factor" that determines how much to scan.
1741
- for (i = 1536; i > 0; i--)
1742
- if (get_char() == 0xFF)
1743
- break;
1744
-
1745
- if (i == 0)
1746
- stop_decoding(JPGD_BAD_RESTART_MARKER);
1747
-
1748
- for ( ; i > 0; i--)
1749
- if ((c = get_char()) != 0xFF)
1750
- break;
1751
-
1752
- if (i == 0)
1753
- stop_decoding(JPGD_BAD_RESTART_MARKER);
1754
-
1755
- // Is it the expected marker? If not, something bad happened.
1756
- if (c != (m_next_restart_num + M_RST0))
1757
- stop_decoding(JPGD_BAD_RESTART_MARKER);
1758
-
1759
- // Reset each component's DC prediction values.
1760
- memset(&m_last_dc_val, 0, m_comps_in_frame * sizeof(uint));
1761
-
1762
- m_eob_run = 0;
1763
-
1764
- m_restarts_left = m_restart_interval;
1765
-
1766
- m_next_restart_num = (m_next_restart_num + 1) & 7;
1767
-
1768
- // Get the bit buffer going again...
1769
-
1770
- m_bits_left = 16;
1771
- get_bits_no_markers(16);
1772
- get_bits_no_markers(16);
1773
- }
1774
-
1775
- static inline int dequantize_ac(int c, int q) { c *= q; return c; }
1776
-
1777
- // Decodes and dequantizes the next row of coefficients.
1778
- void jpeg_decoder::decode_next_row()
1779
- {
1780
- int row_block = 0;
1781
-
1782
- for (int mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++)
1783
- {
1784
- if ((m_restart_interval) && (m_restarts_left == 0))
1785
- process_restart();
1786
-
1787
- jpgd_block_t* p = m_pMCU_coefficients;
1788
- for (int mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++, p += 64)
1789
- {
1790
- int component_id = m_mcu_org[mcu_block];
1791
- jpgd_quant_t* q = m_quant[m_comp_quant[component_id]];
1792
-
1793
- int r, s;
1794
- s = huff_decode(m_pHuff_tabs[m_comp_dc_tab[component_id]], r);
1795
- s = HUFF_EXTEND(r, s);
1796
-
1797
- m_last_dc_val[component_id] = (s += m_last_dc_val[component_id]);
1798
-
1799
- p[0] = static_cast<jpgd_block_t>(s * q[0]);
1800
-
1801
- int prev_num_set = m_mcu_block_max_zag[mcu_block];
1802
-
1803
- huff_tables *pH = m_pHuff_tabs[m_comp_ac_tab[component_id]];
1804
-
1805
- int k;
1806
- for (k = 1; k < 64; k++)
1807
- {
1808
- int extra_bits;
1809
- s = huff_decode(pH, extra_bits);
1810
-
1811
- r = s >> 4;
1812
- s &= 15;
1813
-
1814
- if (s)
1815
- {
1816
- if (r)
1817
- {
1818
- if ((k + r) > 63)
1819
- stop_decoding(JPGD_DECODE_ERROR);
1820
-
1821
- if (k < prev_num_set)
1822
- {
1823
- int n = JPGD_MIN(r, prev_num_set - k);
1824
- int kt = k;
1825
- while (n--)
1826
- p[g_ZAG[kt++]] = 0;
1827
- }
1828
-
1829
- k += r;
1830
- }
1831
-
1832
- s = HUFF_EXTEND(extra_bits, s);
1833
-
1834
- JPGD_ASSERT(k < 64);
1835
-
1836
- p[g_ZAG[k]] = static_cast<jpgd_block_t>(dequantize_ac(s, q[k])); //s * q[k];
1837
- }
1838
- else
1839
- {
1840
- if (r == 15)
1841
- {
1842
- if ((k + 16) > 64)
1843
- stop_decoding(JPGD_DECODE_ERROR);
1844
-
1845
- if (k < prev_num_set)
1846
- {
1847
- int n = JPGD_MIN(16, prev_num_set - k);
1848
- int kt = k;
1849
- while (n--)
1850
- {
1851
- JPGD_ASSERT(kt <= 63);
1852
- p[g_ZAG[kt++]] = 0;
1853
- }
1854
- }
1855
-
1856
- k += 16 - 1; // - 1 because the loop counter is k
1857
- // BEGIN EPIC MOD
1858
- JPGD_ASSERT(k < 64 && p[g_ZAG[k]] == 0);
1859
- // END EPIC MOD
1860
- }
1861
- else
1862
- break;
1863
- }
1864
- }
1865
-
1866
- if (k < prev_num_set)
1867
- {
1868
- int kt = k;
1869
- while (kt < prev_num_set)
1870
- p[g_ZAG[kt++]] = 0;
1871
- }
1872
-
1873
- m_mcu_block_max_zag[mcu_block] = k;
1874
-
1875
- row_block++;
1876
- }
1877
-
1878
- if (m_freq_domain_chroma_upsample)
1879
- transform_mcu_expand(mcu_row);
1880
- else
1881
- transform_mcu(mcu_row);
1882
-
1883
- m_restarts_left--;
1884
- }
1885
- }
1886
-
1887
- // YCbCr H1V1 (1x1:1:1, 3 blocks per MCU) to RGB
1888
- void jpeg_decoder::H1V1Convert()
1889
- {
1890
- int row = m_max_mcu_y_size - m_mcu_lines_left;
1891
- uint8 *d = m_pScan_line_0;
1892
- uint8 *s = m_pSample_buf + row * 8;
1893
-
1894
- for (int i = m_max_mcus_per_row; i > 0; i--)
1895
- {
1896
- for (int j = 0; j < 8; j++)
1897
- {
1898
- int y = s[j];
1899
- int cb = s[64+j];
1900
- int cr = s[128+j];
1901
-
1902
- if (jpg_format == ERGBFormatJPG::BGRA)
1903
- {
1904
- d[0] = clamp(y + m_cbb[cb]);
1905
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
1906
- d[2] = clamp(y + m_crr[cr]);
1907
- d[3] = 255;
1908
- }
1909
- else
1910
- {
1911
- d[0] = clamp(y + m_crr[cr]);
1912
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
1913
- d[2] = clamp(y + m_cbb[cb]);
1914
- d[3] = 255;
1915
- }
1916
- d += 4;
1917
- }
1918
-
1919
- s += 64*3;
1920
- }
1921
- }
1922
-
1923
- // YCbCr H2V1 (2x1:1:1, 4 blocks per MCU) to RGB
1924
- void jpeg_decoder::H2V1Convert()
1925
- {
1926
- int row = m_max_mcu_y_size - m_mcu_lines_left;
1927
- uint8 *d0 = m_pScan_line_0;
1928
- uint8 *y = m_pSample_buf + row * 8;
1929
- uint8 *c = m_pSample_buf + 2*64 + row * 8;
1930
-
1931
- for (int i = m_max_mcus_per_row; i > 0; i--)
1932
- {
1933
- for (int l = 0; l < 2; l++)
1934
- {
1935
- for (int j = 0; j < 4; j++)
1936
- {
1937
- int cb = c[0];
1938
- int cr = c[64];
1939
-
1940
- int rc = m_crr[cr];
1941
- int gc = ((m_crg[cr] + m_cbg[cb]) >> 16);
1942
- int bc = m_cbb[cb];
1943
-
1944
- int yy = y[j<<1];
1945
- if (jpg_format == ERGBFormatJPG::BGRA)
1946
- {
1947
- d0[0] = clamp(yy+bc);
1948
- d0[1] = clamp(yy+gc);
1949
- d0[2] = clamp(yy+rc);
1950
- d0[3] = 255;
1951
- yy = y[(j<<1)+1];
1952
- d0[4] = clamp(yy+bc);
1953
- d0[5] = clamp(yy+gc);
1954
- d0[6] = clamp(yy+rc);
1955
- d0[7] = 255;
1956
- }
1957
- else
1958
- {
1959
- d0[0] = clamp(yy+rc);
1960
- d0[1] = clamp(yy+gc);
1961
- d0[2] = clamp(yy+bc);
1962
- d0[3] = 255;
1963
- yy = y[(j<<1)+1];
1964
- d0[4] = clamp(yy+rc);
1965
- d0[5] = clamp(yy+gc);
1966
- d0[6] = clamp(yy+bc);
1967
- d0[7] = 255;
1968
- }
1969
-
1970
- d0 += 8;
1971
-
1972
- c++;
1973
- }
1974
- y += 64;
1975
- }
1976
-
1977
- y += 64*4 - 64*2;
1978
- c += 64*4 - 8;
1979
- }
1980
- }
1981
-
1982
- // YCbCr H1V2 (1x2:1:1, 4 blocks per MCU) to RGB
1983
- void jpeg_decoder::H1V2Convert()
1984
- {
1985
- int row = m_max_mcu_y_size - m_mcu_lines_left;
1986
- uint8 *d0 = m_pScan_line_0;
1987
- uint8 *d1 = m_pScan_line_1;
1988
- uint8 *y;
1989
- uint8 *c;
1990
-
1991
- if (row < 8)
1992
- y = m_pSample_buf + row * 8;
1993
- else
1994
- y = m_pSample_buf + 64*1 + (row & 7) * 8;
1995
-
1996
- c = m_pSample_buf + 64*2 + (row >> 1) * 8;
1997
-
1998
- for (int i = m_max_mcus_per_row; i > 0; i--)
1999
- {
2000
- for (int j = 0; j < 8; j++)
2001
- {
2002
- int cb = c[0+j];
2003
- int cr = c[64+j];
2004
-
2005
- int rc = m_crr[cr];
2006
- int gc = ((m_crg[cr] + m_cbg[cb]) >> 16);
2007
- int bc = m_cbb[cb];
2008
-
2009
- int yy = y[j];
2010
- if (jpg_format == ERGBFormatJPG::BGRA)
2011
- {
2012
- d0[0] = clamp(yy+bc);
2013
- d0[1] = clamp(yy+gc);
2014
- d0[2] = clamp(yy+rc);
2015
- d0[3] = 255;
2016
- yy = y[8+j];
2017
- d1[0] = clamp(yy+bc);
2018
- d1[1] = clamp(yy+gc);
2019
- d1[2] = clamp(yy+rc);
2020
- d1[3] = 255;
2021
- }
2022
- else
2023
- {
2024
- d0[0] = clamp(yy+rc);
2025
- d0[1] = clamp(yy+gc);
2026
- d0[2] = clamp(yy+bc);
2027
- d0[3] = 255;
2028
- yy = y[8+j];
2029
- d1[0] = clamp(yy+rc);
2030
- d1[1] = clamp(yy+gc);
2031
- d1[2] = clamp(yy+bc);
2032
- d1[3] = 255;
2033
- }
2034
-
2035
- d0 += 4;
2036
- d1 += 4;
2037
- }
2038
-
2039
- y += 64*4;
2040
- c += 64*4;
2041
- }
2042
- }
2043
-
2044
- // YCbCr H2V2 (2x2:1:1, 6 blocks per MCU) to RGB
2045
- void jpeg_decoder::H2V2Convert()
2046
- {
2047
- int row = m_max_mcu_y_size - m_mcu_lines_left;
2048
- uint8 *d0 = m_pScan_line_0;
2049
- uint8 *d1 = m_pScan_line_1;
2050
- uint8 *y;
2051
- uint8 *c;
2052
-
2053
- if (row < 8)
2054
- y = m_pSample_buf + row * 8;
2055
- else
2056
- y = m_pSample_buf + 64*2 + (row & 7) * 8;
2057
-
2058
- c = m_pSample_buf + 64*4 + (row >> 1) * 8;
2059
-
2060
- for (int i = m_max_mcus_per_row; i > 0; i--)
2061
- {
2062
- for (int l = 0; l < 2; l++)
2063
- {
2064
- for (int j = 0; j < 8; j += 2)
2065
- {
2066
- int cb = c[0];
2067
- int cr = c[64];
2068
-
2069
- int rc = m_crr[cr];
2070
- int gc = ((m_crg[cr] + m_cbg[cb]) >> 16);
2071
- int bc = m_cbb[cb];
2072
-
2073
- int yy = y[j];
2074
- if (jpg_format == ERGBFormatJPG::BGRA)
2075
- {
2076
- d0[0] = clamp(yy+bc);
2077
- d0[1] = clamp(yy+gc);
2078
- d0[2] = clamp(yy+rc);
2079
- d0[3] = 255;
2080
- yy = y[j+1];
2081
- d0[4] = clamp(yy+bc);
2082
- d0[5] = clamp(yy+gc);
2083
- d0[6] = clamp(yy+rc);
2084
- d0[7] = 255;
2085
- yy = y[j+8];
2086
- d1[0] = clamp(yy+bc);
2087
- d1[1] = clamp(yy+gc);
2088
- d1[2] = clamp(yy+rc);
2089
- d1[3] = 255;
2090
- yy = y[j+8+1];
2091
- d1[4] = clamp(yy+bc);
2092
- d1[5] = clamp(yy+gc);
2093
- d1[6] = clamp(yy+rc);
2094
- d1[7] = 255;
2095
- }
2096
- else
2097
- {
2098
- d0[0] = clamp(yy+rc);
2099
- d0[1] = clamp(yy+gc);
2100
- d0[2] = clamp(yy+bc);
2101
- d0[3] = 255;
2102
- yy = y[j+1];
2103
- d0[4] = clamp(yy+rc);
2104
- d0[5] = clamp(yy+gc);
2105
- d0[6] = clamp(yy+bc);
2106
- d0[7] = 255;
2107
- yy = y[j+8];
2108
- d1[0] = clamp(yy+rc);
2109
- d1[1] = clamp(yy+gc);
2110
- d1[2] = clamp(yy+bc);
2111
- d1[3] = 255;
2112
- yy = y[j+8+1];
2113
- d1[4] = clamp(yy+rc);
2114
- d1[5] = clamp(yy+gc);
2115
- d1[6] = clamp(yy+bc);
2116
- d1[7] = 255;
2117
- }
2118
-
2119
- d0 += 8;
2120
- d1 += 8;
2121
-
2122
- c++;
2123
- }
2124
- y += 64;
2125
- }
2126
-
2127
- y += 64*6 - 64*2;
2128
- c += 64*6 - 8;
2129
- }
2130
- }
2131
-
2132
- // Y (1 block per MCU) to 8-bit grayscale
2133
- void jpeg_decoder::gray_convert()
2134
- {
2135
- int row = m_max_mcu_y_size - m_mcu_lines_left;
2136
- uint8 *d = m_pScan_line_0;
2137
- uint8 *s = m_pSample_buf + row * 8;
2138
-
2139
- for (int i = m_max_mcus_per_row; i > 0; i--)
2140
- {
2141
- *(uint *)d = *(uint *)s;
2142
- *(uint *)(&d[4]) = *(uint *)(&s[4]);
2143
-
2144
- s += 64;
2145
- d += 8;
2146
- }
2147
- }
2148
-
2149
- void jpeg_decoder::expanded_convert()
2150
- {
2151
- int row = m_max_mcu_y_size - m_mcu_lines_left;
2152
-
2153
- uint8* Py = m_pSample_buf + (row / 8) * 64 * m_comp_h_samp[0] + (row & 7) * 8;
2154
-
2155
- uint8* d = m_pScan_line_0;
2156
-
2157
- for (int i = m_max_mcus_per_row; i > 0; i--)
2158
- {
2159
- for (int k = 0; k < m_max_mcu_x_size; k += 8)
2160
- {
2161
- const int Y_ofs = k * 8;
2162
- const int Cb_ofs = Y_ofs + 64 * m_expanded_blocks_per_component;
2163
- const int Cr_ofs = Y_ofs + 64 * m_expanded_blocks_per_component * 2;
2164
- for (int j = 0; j < 8; j++)
2165
- {
2166
- int y = Py[Y_ofs + j];
2167
- int cb = Py[Cb_ofs + j];
2168
- int cr = Py[Cr_ofs + j];
2169
-
2170
- if (jpg_format == ERGBFormatJPG::BGRA)
2171
- {
2172
- d[0] = clamp(y + m_cbb[cb]);
2173
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
2174
- d[2] = clamp(y + m_crr[cr]);
2175
- d[3] = 255;
2176
- }
2177
- else
2178
- {
2179
- d[0] = clamp(y + m_crr[cr]);
2180
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
2181
- d[2] = clamp(y + m_cbb[cb]);
2182
- d[3] = 255;
2183
- }
2184
-
2185
- d += 4;
2186
- }
2187
- }
2188
-
2189
- Py += 64 * m_expanded_blocks_per_mcu;
2190
- }
2191
- }
2192
-
2193
- // Find end of image (EOI) marker, so we can return to the user the exact size of the input stream.
2194
- void jpeg_decoder::find_eoi()
2195
- {
2196
- if (!m_progressive_flag)
2197
- {
2198
- // Attempt to read the EOI marker.
2199
- //get_bits_no_markers(m_bits_left & 7);
2200
-
2201
- // Prime the bit buffer
2202
- m_bits_left = 16;
2203
- get_bits(16);
2204
- get_bits(16);
2205
-
2206
- // The next marker _should_ be EOI
2207
- process_markers();
2208
- }
2209
-
2210
- m_total_bytes_read -= m_in_buf_left;
2211
- }
2212
-
2213
- int jpeg_decoder::decode(const void** pScan_line, uint* pScan_line_len)
2214
- {
2215
- if ((m_error_code) || (!m_ready_flag))
2216
- return JPGD_FAILED;
2217
-
2218
- if (m_total_lines_left == 0)
2219
- return JPGD_DONE;
2220
-
2221
- if (m_mcu_lines_left == 0)
2222
- {
2223
- if (setjmp(m_jmp_state))
2224
- return JPGD_FAILED;
2225
-
2226
- if (m_progressive_flag)
2227
- load_next_row();
2228
- else
2229
- decode_next_row();
2230
-
2231
- // Find the EOI marker if that was the last row.
2232
- if (m_total_lines_left <= m_max_mcu_y_size)
2233
- find_eoi();
2234
-
2235
- m_mcu_lines_left = m_max_mcu_y_size;
2236
- }
2237
-
2238
- if (m_freq_domain_chroma_upsample)
2239
- {
2240
- expanded_convert();
2241
- *pScan_line = m_pScan_line_0;
2242
- }
2243
- else
2244
- {
2245
- switch (m_scan_type)
2246
- {
2247
- case JPGD_YH2V2:
2248
- {
2249
- if ((m_mcu_lines_left & 1) == 0)
2250
- {
2251
- H2V2Convert();
2252
- *pScan_line = m_pScan_line_0;
2253
- }
2254
- else
2255
- *pScan_line = m_pScan_line_1;
2256
-
2257
- break;
2258
- }
2259
- case JPGD_YH2V1:
2260
- {
2261
- H2V1Convert();
2262
- *pScan_line = m_pScan_line_0;
2263
- break;
2264
- }
2265
- case JPGD_YH1V2:
2266
- {
2267
- if ((m_mcu_lines_left & 1) == 0)
2268
- {
2269
- H1V2Convert();
2270
- *pScan_line = m_pScan_line_0;
2271
- }
2272
- else
2273
- *pScan_line = m_pScan_line_1;
2274
-
2275
- break;
2276
- }
2277
- case JPGD_YH1V1:
2278
- {
2279
- H1V1Convert();
2280
- *pScan_line = m_pScan_line_0;
2281
- break;
2282
- }
2283
- case JPGD_GRAYSCALE:
2284
- {
2285
- gray_convert();
2286
- *pScan_line = m_pScan_line_0;
2287
-
2288
- break;
2289
- }
2290
- }
2291
- }
2292
-
2293
- *pScan_line_len = m_real_dest_bytes_per_scan_line;
2294
-
2295
- m_mcu_lines_left--;
2296
- m_total_lines_left--;
2297
-
2298
- return JPGD_SUCCESS;
2299
- }
2300
-
2301
- // Creates the tables needed for efficient Huffman decoding.
2302
- void jpeg_decoder::make_huff_table(int index, huff_tables *pH)
2303
- {
2304
- int p, i, l, si;
2305
- uint8 huffsize[257];
2306
- uint huffcode[257];
2307
- uint code;
2308
- uint subtree;
2309
- int code_size;
2310
- int lastp;
2311
- int nextfreeentry;
2312
- int currententry;
2313
-
2314
- pH->ac_table = m_huff_ac[index] != 0;
2315
-
2316
- p = 0;
2317
-
2318
- for (l = 1; l <= 16; l++)
2319
- {
2320
- for (i = 1; i <= m_huff_num[index][l]; i++)
2321
- huffsize[p++] = static_cast<uint8>(l);
2322
- }
2323
-
2324
- huffsize[p] = 0;
2325
-
2326
- lastp = p;
2327
-
2328
- code = 0;
2329
- si = huffsize[0];
2330
- p = 0;
2331
-
2332
- while (huffsize[p])
2333
- {
2334
- while (huffsize[p] == si)
2335
- {
2336
- huffcode[p++] = code;
2337
- code++;
2338
- }
2339
-
2340
- code <<= 1;
2341
- si++;
2342
- }
2343
-
2344
- memset(pH->look_up, 0, sizeof(pH->look_up));
2345
- memset(pH->look_up2, 0, sizeof(pH->look_up2));
2346
- memset(pH->tree, 0, sizeof(pH->tree));
2347
- memset(pH->code_size, 0, sizeof(pH->code_size));
2348
-
2349
- nextfreeentry = -1;
2350
-
2351
- p = 0;
2352
-
2353
- while (p < lastp)
2354
- {
2355
- i = m_huff_val[index][p];
2356
- code = huffcode[p];
2357
- code_size = huffsize[p];
2358
-
2359
- pH->code_size[i] = static_cast<uint8>(code_size);
2360
-
2361
- if (code_size <= 8)
2362
- {
2363
- code <<= (8 - code_size);
2364
-
2365
- for (l = 1 << (8 - code_size); l > 0; l--)
2366
- {
2367
- JPGD_ASSERT(i < 256);
2368
-
2369
- pH->look_up[code] = i;
2370
-
2371
- bool has_extrabits = false;
2372
- int extra_bits = 0;
2373
- int num_extra_bits = i & 15;
2374
-
2375
- int bits_to_fetch = code_size;
2376
- if (num_extra_bits)
2377
- {
2378
- int total_codesize = code_size + num_extra_bits;
2379
- if (total_codesize <= 8)
2380
- {
2381
- has_extrabits = true;
2382
- extra_bits = ((1 << num_extra_bits) - 1) & (code >> (8 - total_codesize));
2383
- JPGD_ASSERT(extra_bits <= 0x7FFF);
2384
- bits_to_fetch += num_extra_bits;
2385
- }
2386
- }
2387
-
2388
- if (!has_extrabits)
2389
- pH->look_up2[code] = i | (bits_to_fetch << 8);
2390
- else
2391
- pH->look_up2[code] = i | 0x8000 | (extra_bits << 16) | (bits_to_fetch << 8);
2392
-
2393
- code++;
2394
- }
2395
- }
2396
- else
2397
- {
2398
- subtree = (code >> (code_size - 8)) & 0xFF;
2399
-
2400
- currententry = pH->look_up[subtree];
2401
-
2402
- if (currententry == 0)
2403
- {
2404
- pH->look_up[subtree] = currententry = nextfreeentry;
2405
- pH->look_up2[subtree] = currententry = nextfreeentry;
2406
-
2407
- nextfreeentry -= 2;
2408
- }
2409
-
2410
- code <<= (16 - (code_size - 8));
2411
-
2412
- for (l = code_size; l > 9; l--)
2413
- {
2414
- if ((code & 0x8000) == 0)
2415
- currententry--;
2416
-
2417
- if (pH->tree[-currententry - 1] == 0)
2418
- {
2419
- pH->tree[-currententry - 1] = nextfreeentry;
2420
-
2421
- currententry = nextfreeentry;
2422
-
2423
- nextfreeentry -= 2;
2424
- }
2425
- else
2426
- currententry = pH->tree[-currententry - 1];
2427
-
2428
- code <<= 1;
2429
- }
2430
-
2431
- if ((code & 0x8000) == 0)
2432
- currententry--;
2433
-
2434
- pH->tree[-currententry - 1] = i;
2435
- }
2436
-
2437
- p++;
2438
- }
2439
- }
2440
-
2441
- // Verifies the quantization tables needed for this scan are available.
2442
- void jpeg_decoder::check_quant_tables()
2443
- {
2444
- for (int i = 0; i < m_comps_in_scan; i++)
2445
- if (m_quant[m_comp_quant[m_comp_list[i]]] == NULL)
2446
- stop_decoding(JPGD_UNDEFINED_QUANT_TABLE);
2447
- }
2448
-
2449
- // Verifies that all the Huffman tables needed for this scan are available.
2450
- void jpeg_decoder::check_huff_tables()
2451
- {
2452
- for (int i = 0; i < m_comps_in_scan; i++)
2453
- {
2454
- if ((m_spectral_start == 0) && (m_huff_num[m_comp_dc_tab[m_comp_list[i]]] == NULL))
2455
- stop_decoding(JPGD_UNDEFINED_HUFF_TABLE);
2456
-
2457
- if ((m_spectral_end > 0) && (m_huff_num[m_comp_ac_tab[m_comp_list[i]]] == NULL))
2458
- stop_decoding(JPGD_UNDEFINED_HUFF_TABLE);
2459
- }
2460
-
2461
- for (int i = 0; i < JPGD_MAX_HUFF_TABLES; i++)
2462
- if (m_huff_num[i])
2463
- {
2464
- if (!m_pHuff_tabs[i])
2465
- m_pHuff_tabs[i] = (huff_tables *)alloc(sizeof(huff_tables));
2466
-
2467
- make_huff_table(i, m_pHuff_tabs[i]);
2468
- }
2469
- }
2470
-
2471
- // Determines the component order inside each MCU.
2472
- // Also calculates how many MCUs are in each row, etc.
2473
- void jpeg_decoder::calc_mcu_block_order()
2474
- {
2475
- int component_num, component_id;
2476
- int max_h_samp = 0, max_v_samp = 0;
2477
-
2478
- for (component_id = 0; component_id < m_comps_in_frame; component_id++)
2479
- {
2480
- if (m_comp_h_samp[component_id] > max_h_samp)
2481
- max_h_samp = m_comp_h_samp[component_id];
2482
-
2483
- if (m_comp_v_samp[component_id] > max_v_samp)
2484
- max_v_samp = m_comp_v_samp[component_id];
2485
- }
2486
-
2487
- for (component_id = 0; component_id < m_comps_in_frame; component_id++)
2488
- {
2489
- m_comp_h_blocks[component_id] = ((((m_image_x_size * m_comp_h_samp[component_id]) + (max_h_samp - 1)) / max_h_samp) + 7) / 8;
2490
- m_comp_v_blocks[component_id] = ((((m_image_y_size * m_comp_v_samp[component_id]) + (max_v_samp - 1)) / max_v_samp) + 7) / 8;
2491
- }
2492
-
2493
- if (m_comps_in_scan == 1)
2494
- {
2495
- m_mcus_per_row = m_comp_h_blocks[m_comp_list[0]];
2496
- m_mcus_per_col = m_comp_v_blocks[m_comp_list[0]];
2497
- }
2498
- else
2499
- {
2500
- m_mcus_per_row = (((m_image_x_size + 7) / 8) + (max_h_samp - 1)) / max_h_samp;
2501
- m_mcus_per_col = (((m_image_y_size + 7) / 8) + (max_v_samp - 1)) / max_v_samp;
2502
- }
2503
-
2504
- if (m_comps_in_scan == 1)
2505
- {
2506
- m_mcu_org[0] = m_comp_list[0];
2507
-
2508
- m_blocks_per_mcu = 1;
2509
- }
2510
- else
2511
- {
2512
- m_blocks_per_mcu = 0;
2513
-
2514
- for (component_num = 0; component_num < m_comps_in_scan; component_num++)
2515
- {
2516
- int num_blocks;
2517
-
2518
- component_id = m_comp_list[component_num];
2519
-
2520
- num_blocks = m_comp_h_samp[component_id] * m_comp_v_samp[component_id];
2521
-
2522
- while (num_blocks--)
2523
- m_mcu_org[m_blocks_per_mcu++] = component_id;
2524
- }
2525
- }
2526
- }
2527
-
2528
- // Starts a new scan.
2529
- int jpeg_decoder::init_scan()
2530
- {
2531
- if (!locate_sos_marker())
2532
- return JPGD_FALSE;
2533
-
2534
- calc_mcu_block_order();
2535
-
2536
- check_huff_tables();
2537
-
2538
- check_quant_tables();
2539
-
2540
- memset(m_last_dc_val, 0, m_comps_in_frame * sizeof(uint));
2541
-
2542
- m_eob_run = 0;
2543
-
2544
- if (m_restart_interval)
2545
- {
2546
- m_restarts_left = m_restart_interval;
2547
- m_next_restart_num = 0;
2548
- }
2549
-
2550
- fix_in_buffer();
2551
-
2552
- return JPGD_TRUE;
2553
- }
2554
-
2555
- // Starts a frame. Determines if the number of components or sampling factors
2556
- // are supported.
2557
- void jpeg_decoder::init_frame()
2558
- {
2559
- int i;
2560
-
2561
- if (m_comps_in_frame == 1)
2562
- {
2563
- if ((m_comp_h_samp[0] != 1) || (m_comp_v_samp[0] != 1))
2564
- stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS);
2565
-
2566
- m_scan_type = JPGD_GRAYSCALE;
2567
- m_max_blocks_per_mcu = 1;
2568
- m_max_mcu_x_size = 8;
2569
- m_max_mcu_y_size = 8;
2570
- }
2571
- else if (m_comps_in_frame == 3)
2572
- {
2573
- if ( ((m_comp_h_samp[1] != 1) || (m_comp_v_samp[1] != 1)) ||
2574
- ((m_comp_h_samp[2] != 1) || (m_comp_v_samp[2] != 1)) )
2575
- stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS);
2576
-
2577
- if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 1))
2578
- {
2579
- m_scan_type = JPGD_YH1V1;
2580
-
2581
- m_max_blocks_per_mcu = 3;
2582
- m_max_mcu_x_size = 8;
2583
- m_max_mcu_y_size = 8;
2584
- }
2585
- else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 1))
2586
- {
2587
- m_scan_type = JPGD_YH2V1;
2588
- m_max_blocks_per_mcu = 4;
2589
- m_max_mcu_x_size = 16;
2590
- m_max_mcu_y_size = 8;
2591
- }
2592
- else if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 2))
2593
- {
2594
- m_scan_type = JPGD_YH1V2;
2595
- m_max_blocks_per_mcu = 4;
2596
- m_max_mcu_x_size = 8;
2597
- m_max_mcu_y_size = 16;
2598
- }
2599
- else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 2))
2600
- {
2601
- m_scan_type = JPGD_YH2V2;
2602
- m_max_blocks_per_mcu = 6;
2603
- m_max_mcu_x_size = 16;
2604
- m_max_mcu_y_size = 16;
2605
- }
2606
- else
2607
- stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS);
2608
- }
2609
- else
2610
- stop_decoding(JPGD_UNSUPPORTED_COLORSPACE);
2611
-
2612
- m_max_mcus_per_row = (m_image_x_size + (m_max_mcu_x_size - 1)) / m_max_mcu_x_size;
2613
- m_max_mcus_per_col = (m_image_y_size + (m_max_mcu_y_size - 1)) / m_max_mcu_y_size;
2614
-
2615
- // These values are for the *destination* pixels: after conversion.
2616
- if (m_scan_type == JPGD_GRAYSCALE)
2617
- m_dest_bytes_per_pixel = 1;
2618
- else
2619
- m_dest_bytes_per_pixel = 4;
2620
-
2621
- m_dest_bytes_per_scan_line = ((m_image_x_size + 15) & 0xFFF0) * m_dest_bytes_per_pixel;
2622
-
2623
- m_real_dest_bytes_per_scan_line = (m_image_x_size * m_dest_bytes_per_pixel);
2624
-
2625
- // Initialize two scan line buffers.
2626
- m_pScan_line_0 = (uint8 *)alloc(m_dest_bytes_per_scan_line, true);
2627
- if ((m_scan_type == JPGD_YH1V2) || (m_scan_type == JPGD_YH2V2))
2628
- m_pScan_line_1 = (uint8 *)alloc(m_dest_bytes_per_scan_line, true);
2629
-
2630
- m_max_blocks_per_row = m_max_mcus_per_row * m_max_blocks_per_mcu;
2631
-
2632
- // Should never happen
2633
- if (m_max_blocks_per_row > JPGD_MAX_BLOCKS_PER_ROW)
2634
- stop_decoding(JPGD_ASSERTION_ERROR);
2635
-
2636
- // Allocate the coefficient buffer, enough for one MCU
2637
- m_pMCU_coefficients = (jpgd_block_t*)alloc(m_max_blocks_per_mcu * 64 * sizeof(jpgd_block_t));
2638
-
2639
- for (i = 0; i < m_max_blocks_per_mcu; i++)
2640
- m_mcu_block_max_zag[i] = 64;
2641
-
2642
- m_expanded_blocks_per_component = m_comp_h_samp[0] * m_comp_v_samp[0];
2643
- m_expanded_blocks_per_mcu = m_expanded_blocks_per_component * m_comps_in_frame;
2644
- m_expanded_blocks_per_row = m_max_mcus_per_row * m_expanded_blocks_per_mcu;
2645
- // Freq. domain chroma upsampling is only supported for H2V2 subsampling factor.
2646
- // BEGIN EPIC MOD
2647
- #if JPGD_SUPPORT_FREQ_DOMAIN_UPSAMPLING
2648
- m_freq_domain_chroma_upsample = (m_expanded_blocks_per_mcu == 4*3);
2649
- #else
2650
- m_freq_domain_chroma_upsample = 0;
2651
- #endif
2652
- // END EPIC MOD
2653
-
2654
- if (m_freq_domain_chroma_upsample)
2655
- m_pSample_buf = (uint8 *)alloc(m_expanded_blocks_per_row * 64);
2656
- else
2657
- m_pSample_buf = (uint8 *)alloc(m_max_blocks_per_row * 64);
2658
-
2659
- m_total_lines_left = m_image_y_size;
2660
-
2661
- m_mcu_lines_left = 0;
2662
-
2663
- create_look_ups();
2664
- }
2665
-
2666
- // The coeff_buf series of methods originally stored the coefficients
2667
- // into a "virtual" file which was located in EMS, XMS, or a disk file. A cache
2668
- // was used to make this process more efficient. Now, we can store the entire
2669
- // thing in RAM.
2670
- jpeg_decoder::coeff_buf* jpeg_decoder::coeff_buf_open(int block_num_x, int block_num_y, int block_len_x, int block_len_y)
2671
- {
2672
- coeff_buf* cb = (coeff_buf*)alloc(sizeof(coeff_buf));
2673
-
2674
- cb->block_num_x = block_num_x;
2675
- cb->block_num_y = block_num_y;
2676
- cb->block_len_x = block_len_x;
2677
- cb->block_len_y = block_len_y;
2678
- cb->block_size = (block_len_x * block_len_y) * sizeof(jpgd_block_t);
2679
- cb->pData = (uint8 *)alloc(cb->block_size * block_num_x * block_num_y, true);
2680
- return cb;
2681
- }
2682
-
2683
- inline jpgd_block_t *jpeg_decoder::coeff_buf_getp(coeff_buf *cb, int block_x, int block_y)
2684
- {
2685
- JPGD_ASSERT((block_x < cb->block_num_x) && (block_y < cb->block_num_y));
2686
- return (jpgd_block_t *)(cb->pData + block_x * cb->block_size + block_y * (cb->block_size * cb->block_num_x));
2687
- }
2688
-
2689
- // The following methods decode the various types of blocks encountered
2690
- // in progressively encoded images.
2691
- void jpeg_decoder::decode_block_dc_first(jpeg_decoder *pD, int component_id, int block_x, int block_y)
2692
- {
2693
- int s, r;
2694
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_dc_coeffs[component_id], block_x, block_y);
2695
-
2696
- if ((s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_dc_tab[component_id]])) != 0)
2697
- {
2698
- r = pD->get_bits_no_markers(s);
2699
- s = HUFF_EXTEND(r, s);
2700
- }
2701
-
2702
- pD->m_last_dc_val[component_id] = (s += pD->m_last_dc_val[component_id]);
2703
-
2704
- p[0] = static_cast<jpgd_block_t>(s << pD->m_successive_low);
2705
- }
2706
-
2707
- void jpeg_decoder::decode_block_dc_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y)
2708
- {
2709
- if (pD->get_bits_no_markers(1))
2710
- {
2711
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_dc_coeffs[component_id], block_x, block_y);
2712
-
2713
- p[0] |= (1 << pD->m_successive_low);
2714
- }
2715
- }
2716
-
2717
- void jpeg_decoder::decode_block_ac_first(jpeg_decoder *pD, int component_id, int block_x, int block_y)
2718
- {
2719
- int k, s, r;
2720
-
2721
- if (pD->m_eob_run)
2722
- {
2723
- pD->m_eob_run--;
2724
- return;
2725
- }
2726
-
2727
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_ac_coeffs[component_id], block_x, block_y);
2728
-
2729
- for (k = pD->m_spectral_start; k <= pD->m_spectral_end; k++)
2730
- {
2731
- s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_ac_tab[component_id]]);
2732
-
2733
- r = s >> 4;
2734
- s &= 15;
2735
-
2736
- if (s)
2737
- {
2738
- if ((k += r) > 63)
2739
- pD->stop_decoding(JPGD_DECODE_ERROR);
2740
-
2741
- r = pD->get_bits_no_markers(s);
2742
- s = HUFF_EXTEND(r, s);
2743
-
2744
- p[g_ZAG[k]] = static_cast<jpgd_block_t>(s << pD->m_successive_low);
2745
- }
2746
- else
2747
- {
2748
- if (r == 15)
2749
- {
2750
- if ((k += 15) > 63)
2751
- pD->stop_decoding(JPGD_DECODE_ERROR);
2752
- }
2753
- else
2754
- {
2755
- pD->m_eob_run = 1 << r;
2756
-
2757
- if (r)
2758
- pD->m_eob_run += pD->get_bits_no_markers(r);
2759
-
2760
- pD->m_eob_run--;
2761
-
2762
- break;
2763
- }
2764
- }
2765
- }
2766
- }
2767
-
2768
- void jpeg_decoder::decode_block_ac_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y)
2769
- {
2770
- int s, k, r;
2771
- int p1 = 1 << pD->m_successive_low;
2772
- int m1 = (-1) << pD->m_successive_low;
2773
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_ac_coeffs[component_id], block_x, block_y);
2774
-
2775
- k = pD->m_spectral_start;
2776
-
2777
- if (pD->m_eob_run == 0)
2778
- {
2779
- for ( ; k <= pD->m_spectral_end; k++)
2780
- {
2781
- s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_ac_tab[component_id]]);
2782
-
2783
- r = s >> 4;
2784
- s &= 15;
2785
-
2786
- if (s)
2787
- {
2788
- if (s != 1)
2789
- pD->stop_decoding(JPGD_DECODE_ERROR);
2790
-
2791
- if (pD->get_bits_no_markers(1))
2792
- s = p1;
2793
- else
2794
- s = m1;
2795
- }
2796
- else
2797
- {
2798
- if (r != 15)
2799
- {
2800
- pD->m_eob_run = 1 << r;
2801
-
2802
- if (r)
2803
- pD->m_eob_run += pD->get_bits_no_markers(r);
2804
-
2805
- break;
2806
- }
2807
- }
2808
-
2809
- do
2810
- {
2811
- // BEGIN EPIC MOD
2812
- JPGD_ASSERT(k < 64);
2813
- // END EPIC MOD
2814
-
2815
- jpgd_block_t *this_coef = p + g_ZAG[k];
2816
-
2817
- if (*this_coef != 0)
2818
- {
2819
- if (pD->get_bits_no_markers(1))
2820
- {
2821
- if ((*this_coef & p1) == 0)
2822
- {
2823
- if (*this_coef >= 0)
2824
- *this_coef = static_cast<jpgd_block_t>(*this_coef + p1);
2825
- else
2826
- *this_coef = static_cast<jpgd_block_t>(*this_coef + m1);
2827
- }
2828
- }
2829
- }
2830
- else
2831
- {
2832
- if (--r < 0)
2833
- break;
2834
- }
2835
-
2836
- k++;
2837
-
2838
- } while (k <= pD->m_spectral_end);
2839
-
2840
- if ((s) && (k < 64))
2841
- {
2842
- p[g_ZAG[k]] = static_cast<jpgd_block_t>(s);
2843
- }
2844
- }
2845
- }
2846
-
2847
- if (pD->m_eob_run > 0)
2848
- {
2849
- for ( ; k <= pD->m_spectral_end; k++)
2850
- {
2851
- // BEGIN EPIC MOD
2852
- JPGD_ASSERT(k < 64);
2853
- // END EPIC MOD
2854
-
2855
- jpgd_block_t *this_coef = p + g_ZAG[k];
2856
-
2857
- if (*this_coef != 0)
2858
- {
2859
- if (pD->get_bits_no_markers(1))
2860
- {
2861
- if ((*this_coef & p1) == 0)
2862
- {
2863
- if (*this_coef >= 0)
2864
- *this_coef = static_cast<jpgd_block_t>(*this_coef + p1);
2865
- else
- *this_coef = static_cast<jpgd_block_t>(*this_coef + m1);
- }
- }
- }
- }
-
- pD->m_eob_run--;
- }
- }
-
- // Decode a scan in a progressively encoded image.
- void jpeg_decoder::decode_scan(pDecode_block_func decode_block_func)
- {
- int mcu_row, mcu_col, mcu_block;
- int block_x_mcu[JPGD_MAX_COMPONENTS], m_block_y_mcu[JPGD_MAX_COMPONENTS];
-
- memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu));
-
- for (mcu_col = 0; mcu_col < m_mcus_per_col; mcu_col++)
- {
- int component_num, component_id;
-
- memset(block_x_mcu, 0, sizeof(block_x_mcu));
-
- for (mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++)
- {
- int block_x_mcu_ofs = 0, block_y_mcu_ofs = 0;
-
- if ((m_restart_interval) && (m_restarts_left == 0))
- process_restart();
-
- for (mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++)
- {
- component_id = m_mcu_org[mcu_block];
-
- decode_block_func(this, component_id, block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs);
-
- if (m_comps_in_scan == 1)
- block_x_mcu[component_id]++;
- else
- {
- if (++block_x_mcu_ofs == m_comp_h_samp[component_id])
- {
- block_x_mcu_ofs = 0;
-
- if (++block_y_mcu_ofs == m_comp_v_samp[component_id])
- {
- block_y_mcu_ofs = 0;
- block_x_mcu[component_id] += m_comp_h_samp[component_id];
- }
- }
- }
- }
-
- m_restarts_left--;
- }
-
- if (m_comps_in_scan == 1)
- m_block_y_mcu[m_comp_list[0]]++;
- else
- {
- for (component_num = 0; component_num < m_comps_in_scan; component_num++)
- {
- component_id = m_comp_list[component_num];
- m_block_y_mcu[component_id] += m_comp_v_samp[component_id];
- }
- }
- }
- }
-
- // Decode a progressively encoded image.
- void jpeg_decoder::init_progressive()
- {
- int i;
-
- if (m_comps_in_frame == 4)
- stop_decoding(JPGD_UNSUPPORTED_COLORSPACE);
-
- // Allocate the coefficient buffers.
- for (i = 0; i < m_comps_in_frame; i++)
- {
- m_dc_coeffs[i] = coeff_buf_open(m_max_mcus_per_row * m_comp_h_samp[i], m_max_mcus_per_col * m_comp_v_samp[i], 1, 1);
- m_ac_coeffs[i] = coeff_buf_open(m_max_mcus_per_row * m_comp_h_samp[i], m_max_mcus_per_col * m_comp_v_samp[i], 8, 8);
- }
-
- for ( ; ; )
- {
- int dc_only_scan, refinement_scan;
- pDecode_block_func decode_block_func;
-
- if (!init_scan())
- break;
-
- dc_only_scan = (m_spectral_start == 0);
- refinement_scan = (m_successive_high != 0);
-
- if ((m_spectral_start > m_spectral_end) || (m_spectral_end > 63))
- stop_decoding(JPGD_BAD_SOS_SPECTRAL);
-
- if (dc_only_scan)
- {
- if (m_spectral_end)
- stop_decoding(JPGD_BAD_SOS_SPECTRAL);
- }
- else if (m_comps_in_scan != 1) /* AC scans can only contain one component */
- stop_decoding(JPGD_BAD_SOS_SPECTRAL);
-
- if ((refinement_scan) && (m_successive_low != m_successive_high - 1))
- stop_decoding(JPGD_BAD_SOS_SUCCESSIVE);
-
- if (dc_only_scan)
- {
- if (refinement_scan)
- decode_block_func = decode_block_dc_refine;
- else
- decode_block_func = decode_block_dc_first;
- }
- else
- {
- if (refinement_scan)
- decode_block_func = decode_block_ac_refine;
- else
- decode_block_func = decode_block_ac_first;
- }
-
- decode_scan(decode_block_func);
-
- m_bits_left = 16;
- get_bits(16);
- get_bits(16);
- }
-
- m_comps_in_scan = m_comps_in_frame;
-
- for (i = 0; i < m_comps_in_frame; i++)
- m_comp_list[i] = i;
-
- calc_mcu_block_order();
- }
-
- void jpeg_decoder::init_sequential()
- {
- if (!init_scan())
- stop_decoding(JPGD_UNEXPECTED_MARKER);
- }
-
- void jpeg_decoder::decode_start()
- {
- init_frame();
-
- if (m_progressive_flag)
- init_progressive();
- else
- init_sequential();
- }
-
- void jpeg_decoder::decode_init(jpeg_decoder_stream *pStream)
- {
- init(pStream);
- locate_sof_marker();
- }
-
- jpeg_decoder::jpeg_decoder(jpeg_decoder_stream *pStream)
- {
- if (setjmp(m_jmp_state))
- return;
- decode_init(pStream);
- }
-
- int jpeg_decoder::begin_decoding()
- {
- if (m_ready_flag)
- return JPGD_SUCCESS;
-
- if (m_error_code)
- return JPGD_FAILED;
-
- if (setjmp(m_jmp_state))
- return JPGD_FAILED;
-
- decode_start();
-
- m_ready_flag = true;
-
- return JPGD_SUCCESS;
- }
-
- jpeg_decoder::~jpeg_decoder()
- {
- free_all_blocks();
- }
-
- jpeg_decoder_file_stream::jpeg_decoder_file_stream()
- {
- m_pFile = NULL;
- m_eof_flag = false;
- m_error_flag = false;
- }
-
- void jpeg_decoder_file_stream::close()
- {
- if (m_pFile)
- {
- fclose(m_pFile);
- m_pFile = NULL;
- }
-
- m_eof_flag = false;
- m_error_flag = false;
- }
-
- jpeg_decoder_file_stream::~jpeg_decoder_file_stream()
- {
- close();
- }
-
- bool jpeg_decoder_file_stream::open(const char *Pfilename)
- {
- close();
-
- m_eof_flag = false;
- m_error_flag = false;
-
- #if defined(_MSC_VER)
- m_pFile = NULL;
- fopen_s(&m_pFile, Pfilename, "rb");
- #else
- m_pFile = fopen(Pfilename, "rb");
- #endif
- return m_pFile != NULL;
- }
-
- int jpeg_decoder_file_stream::read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag)
- {
- if (!m_pFile)
- return -1;
-
- if (m_eof_flag)
- {
- *pEOF_flag = true;
- return 0;
- }
-
- if (m_error_flag)
- return -1;
-
- int bytes_read = static_cast<int>(fread(pBuf, 1, max_bytes_to_read, m_pFile));
- if (bytes_read < max_bytes_to_read)
- {
- if (ferror(m_pFile))
- {
- m_error_flag = true;
- return -1;
- }
-
- m_eof_flag = true;
- *pEOF_flag = true;
- }
-
- return bytes_read;
- }
-
- bool jpeg_decoder_mem_stream::open(const uint8 *pSrc_data, uint size)
- {
- close();
- m_pSrc_data = pSrc_data;
- m_ofs = 0;
- m_size = size;
- return true;
- }
-
- int jpeg_decoder_mem_stream::read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag)
- {
- *pEOF_flag = false;
-
- if (!m_pSrc_data)
- return -1;
-
- uint bytes_remaining = m_size - m_ofs;
- if ((uint)max_bytes_to_read > bytes_remaining)
- {
- max_bytes_to_read = bytes_remaining;
- *pEOF_flag = true;
- }
-
- memcpy(pBuf, m_pSrc_data + m_ofs, max_bytes_to_read);
- m_ofs += max_bytes_to_read;
-
- return max_bytes_to_read;
- }
-
- unsigned char *decompress_jpeg_image_from_stream(jpeg_decoder_stream *pStream, int *width, int *height, int *actual_comps, int req_comps)
- {
- if (!actual_comps)
- return NULL;
- *actual_comps = 0;
-
- if ((!pStream) || (!width) || (!height) || (!req_comps))
- return NULL;
-
- if ((req_comps != 1) && (req_comps != 3) && (req_comps != 4))
- return NULL;
-
- jpeg_decoder decoder(pStream);
- if (decoder.get_error_code() != JPGD_SUCCESS)
- return NULL;
-
- const int image_width = decoder.get_width(), image_height = decoder.get_height();
- *width = image_width;
- *height = image_height;
- *actual_comps = decoder.get_num_components();
-
- if (decoder.begin_decoding() != JPGD_SUCCESS)
- return NULL;
-
- const int dst_bpl = image_width * req_comps;
-
- uint8 *pImage_data = (uint8*)jpgd_malloc(dst_bpl * image_height);
- if (!pImage_data)
- return NULL;
-
- for (int y = 0; y < image_height; y++)
- {
- const uint8* pScan_line = 0;
- uint scan_line_len;
- if (decoder.decode((const void**)&pScan_line, &scan_line_len) != JPGD_SUCCESS)
- {
- jpgd_free(pImage_data);
- return NULL;
- }
-
- uint8 *pDst = pImage_data + y * dst_bpl;
-
- if (((req_comps == 4) && (decoder.get_num_components() == 3)) ||
- ((req_comps == 1) && (decoder.get_num_components() == 1)))
- {
- memcpy(pDst, pScan_line, dst_bpl);
- }
- else if (decoder.get_num_components() == 1)
- {
- if (req_comps == 3)
- {
- for (int x = 0; x < image_width; x++)
- {
- uint8 luma = pScan_line[x];
- pDst[0] = luma;
- pDst[1] = luma;
- pDst[2] = luma;
- pDst += 3;
- }
- }
- else
- {
- for (int x = 0; x < image_width; x++)
- {
- uint8 luma = pScan_line[x];
- pDst[0] = luma;
- pDst[1] = luma;
- pDst[2] = luma;
- pDst[3] = 255;
- pDst += 4;
- }
- }
- }
- else if (decoder.get_num_components() == 3)
- {
- if (req_comps == 1)
- {
- const int YR = 19595, YG = 38470, YB = 7471;
- for (int x = 0; x < image_width; x++)
- {
- int r = pScan_line[x*4+0];
- int g = pScan_line[x*4+1];
- int b = pScan_line[x*4+2];
- *pDst++ = static_cast<uint8>((r * YR + g * YG + b * YB + 32768) >> 16);
- }
- }
- else
- {
- for (int x = 0; x < image_width; x++)
- {
- pDst[0] = pScan_line[x*4+0];
- pDst[1] = pScan_line[x*4+1];
- pDst[2] = pScan_line[x*4+2];
- pDst += 3;
- }
- }
- }
- }
-
- return pImage_data;
- }
-
- // BEGIN EPIC MOD
- unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps, int format)
- {
- jpg_format = (ERGBFormatJPG)format;
- // END EPIC MOD
- jpgd::jpeg_decoder_mem_stream mem_stream(pSrc_data, src_data_size);
- return decompress_jpeg_image_from_stream(&mem_stream, width, height, actual_comps, req_comps);
- }
-
- unsigned char *decompress_jpeg_image_from_file(const char *pSrc_filename, int *width, int *height, int *actual_comps, int req_comps)
- {
- jpgd::jpeg_decoder_file_stream file_stream;
- if (!file_stream.open(pSrc_filename))
- return NULL;
- return decompress_jpeg_image_from_stream(&file_stream, width, height, actual_comps, req_comps);
- }
-
- } // namespace jpgd
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/dit/test_dit.py DELETED
@@ -1,151 +0,0 @@
- # coding=utf-8
- # Copyright 2023 HuggingFace Inc.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- # http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- import gc
- import unittest
-
- import numpy as np
- import torch
-
- from diffusers import AutoencoderKL, DDIMScheduler, DiTPipeline, DPMSolverMultistepScheduler, Transformer2DModel
- from diffusers.utils import is_xformers_available, load_numpy, slow, torch_device
- from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
-
- from ..pipeline_params import (
- CLASS_CONDITIONED_IMAGE_GENERATION_BATCH_PARAMS,
- CLASS_CONDITIONED_IMAGE_GENERATION_PARAMS,
- )
- from ..test_pipelines_common import PipelineTesterMixin
-
-
- enable_full_determinism()
-
-
- class DiTPipelineFastTests(PipelineTesterMixin, unittest.TestCase):
- pipeline_class = DiTPipeline
- params = CLASS_CONDITIONED_IMAGE_GENERATION_PARAMS
- required_optional_params = PipelineTesterMixin.required_optional_params - {
- "latents",
- "num_images_per_prompt",
- "callback",
- "callback_steps",
- }
- batch_params = CLASS_CONDITIONED_IMAGE_GENERATION_BATCH_PARAMS
-
- def get_dummy_components(self):
- torch.manual_seed(0)
- transformer = Transformer2DModel(
- sample_size=16,
- num_layers=2,
- patch_size=4,
- attention_head_dim=8,
- num_attention_heads=2,
- in_channels=4,
- out_channels=8,
- attention_bias=True,
- activation_fn="gelu-approximate",
- num_embeds_ada_norm=1000,
- norm_type="ada_norm_zero",
- norm_elementwise_affine=False,
- )
- vae = AutoencoderKL()
- scheduler = DDIMScheduler()
- components = {"transformer": transformer.eval(), "vae": vae.eval(), "scheduler": scheduler}
- return components
-
- def get_dummy_inputs(self, device, seed=0):
- if str(device).startswith("mps"):
- generator = torch.manual_seed(seed)
- else:
- generator = torch.Generator(device=device).manual_seed(seed)
- inputs = {
- "class_labels": [1],
- "generator": generator,
- "num_inference_steps": 2,
- "output_type": "numpy",
- }
- return inputs
-
- def test_inference(self):
- device = "cpu"
-
- components = self.get_dummy_components()
- pipe = self.pipeline_class(**components)
- pipe.to(device)
- pipe.set_progress_bar_config(disable=None)
-
- inputs = self.get_dummy_inputs(device)
- image = pipe(**inputs).images
- image_slice = image[0, -3:, -3:, -1]
-
- self.assertEqual(image.shape, (1, 16, 16, 3))
- expected_slice = np.array([0.2946, 0.6601, 0.4329, 0.3296, 0.4144, 0.5319, 0.7273, 0.5013, 0.4457])
- max_diff = np.abs(image_slice.flatten() - expected_slice).max()
- self.assertLessEqual(max_diff, 1e-3)
-
- def test_inference_batch_single_identical(self):
- self._test_inference_batch_single_identical(relax_max_difference=True, expected_max_diff=1e-3)
-
- @unittest.skipIf(
- torch_device != "cuda" or not is_xformers_available(),
- reason="XFormers attention is only available with CUDA and `xformers` installed",
- )
- def test_xformers_attention_forwardGenerator_pass(self):
- self._test_xformers_attention_forwardGenerator_pass(expected_max_diff=1e-3)
-
-
- @require_torch_gpu
- @slow
- class DiTPipelineIntegrationTests(unittest.TestCase):
- def tearDown(self):
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def test_dit_256(self):
- generator = torch.manual_seed(0)
-
- pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256")
- pipe.to("cuda")
-
- words = ["vase", "umbrella", "white shark", "white wolf"]
- ids = pipe.get_label_ids(words)
-
- images = pipe(ids, generator=generator, num_inference_steps=40, output_type="np").images
-
- for word, image in zip(words, images):
- expected_image = load_numpy(
- f"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/dit/{word}.npy"
- )
- assert np.abs((expected_image - image).max()) < 1e-2
-
- def test_dit_512(self):
- pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-512")
- pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
- pipe.to("cuda")
-
- words = ["vase", "umbrella"]
- ids = pipe.get_label_ids(words)
-
- generator = torch.manual_seed(0)
- images = pipe(ids, generator=generator, num_inference_steps=25, output_type="np").images
-
- for word, image in zip(words, images):
- expected_image = load_numpy(
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
- f"/dit/{word}_512.npy"
- )
-
- assert np.abs((expected_image - image).max()) < 1e-1
spaces/Andy1621/uniformer_image_detection/configs/resnest/README.md DELETED
@@ -1,44 +0,0 @@
- # ResNeSt: Split-Attention Networks
-
- ## Introduction
-
- [BACKBONE]
-
- ```latex
- @article{zhang2020resnest,
- title={ResNeSt: Split-Attention Networks},
- author={Zhang, Hang and Wu, Chongruo and Zhang, Zhongyue and Zhu, Yi and Zhang, Zhi and Lin, Haibin and Sun, Yue and He, Tong and Muller, Jonas and Manmatha, R. and Li, Mu and Smola, Alexander},
- journal={arXiv preprint arXiv:2004.08955},
- year={2020}
- }
- ```
-
- ## Results and Models
-
- ### Faster R-CNN
-
- | Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
- | :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :------: | :--------: |
- |S-50-FPN | pytorch | 1x | 4.8 | - | 42.0 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/resnest/faster_rcnn_s50_fpn_syncbn-backbone+head_mstrain-range_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/resnest/faster_rcnn_s50_fpn_syncbn-backbone%2Bhead_mstrain-range_1x_coco/faster_rcnn_s50_fpn_syncbn-backbone%2Bhead_mstrain-range_1x_coco_20200926_125502-20289c16.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/resnest/faster_rcnn_s50_fpn_syncbn-backbone%2Bhead_mstrain-range_1x_coco/faster_rcnn_s50_fpn_syncbn-backbone%2Bhead_mstrain-range_1x_coco-20200926_125502.log.json) |
- |S-101-FPN | pytorch | 1x | 7.1 | - | 44.5 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/resnest/faster_rcnn_s101_fpn_syncbn-backbone+head_mstrain-range_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/resnest/faster_rcnn_s101_fpn_syncbn-backbone%2Bhead_mstrain-range_1x_coco/faster_rcnn_s101_fpn_syncbn-backbone%2Bhead_mstrain-range_1x_coco_20201006_021058-421517f1.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/resnest/faster_rcnn_s101_fpn_syncbn-backbone%2Bhead_mstrain-range_1x_coco/faster_rcnn_s101_fpn_syncbn-backbone%2Bhead_mstrain-range_1x_coco-20201006_021058.log.json) |
-
- ### Mask R-CNN
-
- | Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download |
- | :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :-----: | :------: | :--------: |
- |S-50-FPN | pytorch | 1x | 5.5 | - | 42.6 | 38.1 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/resnest/mask_rcnn_s50_fpn_syncbn-backbone+head_mstrain_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/resnest/mask_rcnn_s50_fpn_syncbn-backbone%2Bhead_mstrain_1x_coco/mask_rcnn_s50_fpn_syncbn-backbone%2Bhead_mstrain_1x_coco_20200926_125503-8a2c3d47.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/resnest/mask_rcnn_s50_fpn_syncbn-backbone%2Bhead_mstrain_1x_coco/mask_rcnn_s50_fpn_syncbn-backbone%2Bhead_mstrain_1x_coco-20200926_125503.log.json) |
- |S-101-FPN | pytorch | 1x | 7.8 | - | 45.2 | 40.2 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/resnest/mask_rcnn_s101_fpn_syncbn-backbone+head_mstrain_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/resnest/mask_rcnn_s101_fpn_syncbn-backbone%2Bhead_mstrain_1x_coco/mask_rcnn_s101_fpn_syncbn-backbone%2Bhead_mstrain_1x_coco_20201005_215831-af60cdf9.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/resnest/mask_rcnn_s101_fpn_syncbn-backbone%2Bhead_mstrain_1x_coco/mask_rcnn_s101_fpn_syncbn-backbone%2Bhead_mstrain_1x_coco-20201005_215831.log.json) |
-
- ### Cascade R-CNN
-
- | Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
- | :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :------: | :--------: |
- |S-50-FPN | pytorch | 1x | - | - | 44.5 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/resnest/cascade_rcnn_s50_fpn_syncbn-backbone+head_mstrain-range_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/resnest/cascade_rcnn_s50_fpn_syncbn-backbone%2Bhead_mstrain-range_1x_coco/cascade_rcnn_s50_fpn_syncbn-backbone%2Bhead_mstrain-range_1x_coco_20201122_213640-763cc7b5.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/resnest/cascade_rcnn_s101_fpn_syncbn-backbone%2Bhead_mstrain-range_1x_coco/cascade_rcnn_s101_fpn_syncbn-backbone%2Bhead_mstrain-range_1x_coco-20201005_113242.log.json) |
- |S-101-FPN | pytorch | 1x | 8.4 | - | 46.8 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/resnest/cascade_rcnn_s101_fpn_syncbn-backbone+head_mstrain-range_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/resnest/cascade_rcnn_s101_fpn_syncbn-backbone%2Bhead_mstrain-range_1x_coco/cascade_rcnn_s101_fpn_syncbn-backbone%2Bhead_mstrain-range_1x_coco_20201005_113242-b9459f8f.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/resnest/cascade_rcnn_s50_fpn_syncbn-backbone%2Bhead_mstrain-range_1x_coco/cascade_rcnn_s50_fpn_syncbn-backbone%2Bhead_mstrain-range_1x_coco-20201122_213640.log.json) |
-
- ### Cascade Mask R-CNN
-
- | Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download |
- | :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :-----: | :------: | :--------: |
- |S-50-FPN | pytorch | 1x | - | - | 45.4 | 39.5 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/resnest/cascade_mask_rcnn_s50_fpn_syncbn-backbone+head_mstrain_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/resnest/cascade_mask_rcnn_s50_fpn_syncbn-backbone%2Bhead_mstrain_1x_coco/cascade_mask_rcnn_s50_fpn_syncbn-backbone%2Bhead_mstrain_1x_coco_20201122_104428-99eca4c7.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/resnest/cascade_mask_rcnn_s50_fpn_syncbn-backbone%2Bhead_mstrain_1x_coco/cascade_mask_rcnn_s50_fpn_syncbn-backbone%2Bhead_mstrain_1x_coco-20201122_104428.log.json) |
- |S-101-FPN | pytorch | 1x | 10.5 | - | 47.7 | 41.4 |[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/resnest/cascade_mask_rcnn_s101_fpn_syncbn-backbone+head_mstrain_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/resnest/cascade_mask_rcnn_s101_fpn_syncbn-backbone%2Bhead_mstrain_1x_coco/cascade_mask_rcnn_s101_fpn_syncbn-backbone%2Bhead_mstrain_1x_coco_20201005_113243-42607475.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/resnest/cascade_mask_rcnn_s101_fpn_syncbn-backbone%2Bhead_mstrain_1x_coco/cascade_mask_rcnn_s101_fpn_syncbn-backbone%2Bhead_mstrain_1x_coco-20201005_113243.log.json) |
spaces/Andy1621/uniformer_image_detection/configs/scratch/mask_rcnn_r50_fpn_gn-all_scratch_6x_coco.py DELETED
@@ -1,23 +0,0 @@
- _base_ = [
- '../_base_/models/mask_rcnn_r50_fpn.py',
- '../_base_/datasets/coco_instance.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
- ]
- norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
- model = dict(
- pretrained=None,
- backbone=dict(
- frozen_stages=-1, zero_init_residual=False, norm_cfg=norm_cfg),
- neck=dict(norm_cfg=norm_cfg),
- roi_head=dict(
- bbox_head=dict(
- type='Shared4Conv1FCBBoxHead',
- conv_out_channels=256,
- norm_cfg=norm_cfg),
- mask_head=dict(norm_cfg=norm_cfg)))
- # optimizer
- optimizer = dict(paramwise_cfg=dict(norm_decay_mult=0))
- optimizer_config = dict(_delete_=True, grad_clip=None)
- # learning policy
- lr_config = dict(warmup_ratio=0.1, step=[65, 71])
- runner = dict(type='EpochBasedRunner', max_epochs=73)
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/google_translate/script.py DELETED
@@ -1,59 +0,0 @@
1
- import html
2
-
3
- import gradio as gr
4
- from deep_translator import GoogleTranslator
5
-
6
- params = {
7
- "activate": True,
8
- "language string": "ja",
9
- }
10
-
11
- language_codes = {'Afrikaans': 'af', 'Albanian': 'sq', 'Amharic': 'am', 'Arabic': 'ar', 'Armenian': 'hy', 'Azerbaijani': 'az', 'Basque': 'eu', 'Belarusian': 'be', 'Bengali': 'bn', 'Bosnian': 'bs', 'Bulgarian': 'bg', 'Catalan': 'ca', 'Cebuano': 'ceb', 'Chinese (Simplified)': 'zh-CN', 'Chinese (Traditional)': 'zh-TW', 'Corsican': 'co', 'Croatian': 'hr', 'Czech': 'cs', 'Danish': 'da', 'Dutch': 'nl', 'English': 'en', 'Esperanto': 'eo', 'Estonian': 'et', 'Finnish': 'fi', 'French': 'fr', 'Frisian': 'fy', 'Galician': 'gl', 'Georgian': 'ka', 'German': 'de', 'Greek': 'el', 'Gujarati': 'gu', 'Haitian Creole': 'ht', 'Hausa': 'ha', 'Hawaiian': 'haw', 'Hebrew': 'iw', 'Hindi': 'hi', 'Hmong': 'hmn', 'Hungarian': 'hu', 'Icelandic': 'is', 'Igbo': 'ig', 'Indonesian': 'id', 'Irish': 'ga', 'Italian': 'it', 'Japanese': 'ja', 'Javanese': 'jw', 'Kannada': 'kn', 'Kazakh': 'kk', 'Khmer': 'km', 'Korean': 'ko', 'Kurdish': 'ku', 'Kyrgyz': 'ky', 'Lao': 'lo', 'Latin': 'la', 'Latvian': 'lv', 'Lithuanian': 'lt', 'Luxembourgish': 'lb', 'Macedonian': 'mk', 'Malagasy': 'mg', 'Malay': 'ms', 'Malayalam': 'ml', 'Maltese': 'mt', 'Maori': 'mi', 'Marathi': 'mr', 'Mongolian': 'mn', 'Myanmar (Burmese)': 'my', 'Nepali': 'ne', 'Norwegian': 'no', 'Nyanja (Chichewa)': 'ny', 'Pashto': 'ps', 'Persian': 'fa', 'Polish': 'pl', 'Portuguese (Portugal, Brazil)': 'pt', 'Punjabi': 'pa', 'Romanian': 'ro', 'Russian': 'ru', 'Samoan': 'sm', 'Scots Gaelic': 'gd', 'Serbian': 'sr', 'Sesotho': 'st', 'Shona': 'sn', 'Sindhi': 'sd', 'Sinhala (Sinhalese)': 'si', 'Slovak': 'sk', 'Slovenian': 'sl', 'Somali': 'so', 'Spanish': 'es', 'Sundanese': 'su', 'Swahili': 'sw', 'Swedish': 'sv', 'Tagalog (Filipino)': 'tl', 'Tajik': 'tg', 'Tamil': 'ta', 'Telugu': 'te', 'Thai': 'th', 'Turkish': 'tr', 'Ukrainian': 'uk', 'Urdu': 'ur', 'Uzbek': 'uz', 'Vietnamese': 'vi', 'Welsh': 'cy', 'Xhosa': 'xh', 'Yiddish': 'yi', 'Yoruba': 'yo', 'Zulu': 'zu'}
-
-
- def input_modifier(string):
-     """
-     This function is applied to your text inputs before
-     they are fed into the model.
-     """
-     if not params['activate']:
-         return string
-
-     return GoogleTranslator(source=params['language string'], target='en').translate(string)
-
-
- def output_modifier(string):
-     """
-     This function is applied to the model outputs.
-     """
-     if not params['activate']:
-         return string
-
-     translated_str = GoogleTranslator(source='en', target=params['language string']).translate(html.unescape(string))
-     return html.escape(translated_str)
-
-
- def bot_prefix_modifier(string):
-     """
-     This function is only applied in chat mode. It modifies
-     the prefix text for the Bot and can be used to bias its
-     behavior.
-     """
-
-     return string
-
-
- def ui():
-     # Finding the language name from the language code to use as the default value
-     language_name = list(language_codes.keys())[list(language_codes.values()).index(params['language string'])]
-
-     # Gradio elements
-     with gr.Row():
-         activate = gr.Checkbox(value=params['activate'], label='Activate translation')
-
-     with gr.Row():
-         language = gr.Dropdown(value=language_name, choices=[k for k in language_codes], label='Language')
-
-     # Event functions to update the parameters in the backend
-     activate.change(lambda x: params.update({"activate": x}), activate, None)
-     language.change(lambda x: params.update({"language string": language_codes[x]}), language, None)
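The `ui()` function above recovers a language display name from its code with a reverse dictionary lookup. A minimal self-contained sketch of that idiom, using a small subset of the extension's `language_codes` mapping:

```python
# Subset of the extension's language_codes dict, for illustration only.
language_codes = {'English': 'en', 'French': 'fr', 'German': 'de'}


def name_for_code(code: str) -> str:
    """Reverse lookup mirroring ui():
    list(keys())[list(values()).index(code)] finds the display name
    whose value position matches the stored language code."""
    return list(language_codes.keys())[list(language_codes.values()).index(code)]


print(name_for_code('fr'))  # -> French
```

Note this raises `ValueError` if the code is absent, which is why the extension only ever stores codes taken from the same dict.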
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/arraymisc/quantization.py DELETED
@@ -1,55 +0,0 @@
- # Copyright (c) OpenMMLab. All rights reserved.
- import numpy as np
-
-
- def quantize(arr, min_val, max_val, levels, dtype=np.int64):
-     """Quantize an array of (-inf, inf) to [0, levels-1].
-
-     Args:
-         arr (ndarray): Input array.
-         min_val (scalar): Minimum value to be clipped.
-         max_val (scalar): Maximum value to be clipped.
-         levels (int): Quantization levels.
-         dtype (np.type): The type of the quantized array.
-
-     Returns:
-         tuple: Quantized array.
-     """
-     if not (isinstance(levels, int) and levels > 1):
-         raise ValueError(
-             f'levels must be a positive integer, but got {levels}')
-     if min_val >= max_val:
-         raise ValueError(
-             f'min_val ({min_val}) must be smaller than max_val ({max_val})')
-
-     arr = np.clip(arr, min_val, max_val) - min_val
-     quantized_arr = np.minimum(
-         np.floor(levels * arr / (max_val - min_val)).astype(dtype), levels - 1)
-
-     return quantized_arr
-
-
- def dequantize(arr, min_val, max_val, levels, dtype=np.float64):
-     """Dequantize an array.
-
-     Args:
-         arr (ndarray): Input array.
-         min_val (scalar): Minimum value to be clipped.
-         max_val (scalar): Maximum value to be clipped.
-         levels (int): Quantization levels.
-         dtype (np.type): The type of the dequantized array.
-
-     Returns:
-         tuple: Dequantized array.
-     """
-     if not (isinstance(levels, int) and levels > 1):
-         raise ValueError(
-             f'levels must be a positive integer, but got {levels}')
-     if min_val >= max_val:
-         raise ValueError(
-             f'min_val ({min_val}) must be smaller than max_val ({max_val})')
-
-     dequantized_arr = (arr + 0.5).astype(dtype) * (max_val -
-                                                    min_val) / levels + min_val
-
-     return dequantized_arr
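The deleted helpers map a clipped range onto integer bins and back to bin centres. A scalar, pure-Python sketch of the same mapping (the original operates on NumPy arrays; this only illustrates the arithmetic):

```python
import math


def quantize(x, min_val, max_val, levels):
    # Clip to [min_val, max_val], shift to start at 0, then map linearly
    # onto bin indices {0, ..., levels - 1}; the top edge folds into the
    # last bin via min(..., levels - 1).
    x = min(max(x, min_val), max_val) - min_val
    return min(int(math.floor(levels * x / (max_val - min_val))), levels - 1)


def dequantize(q, min_val, max_val, levels):
    # Map a bin index back to the centre of its bin (hence the + 0.5).
    return (q + 0.5) * (max_val - min_val) / levels + min_val


# Four bins of width 0.5 over [-1, 1]:
print([quantize(v, -1.0, 1.0, 4) for v in (-2.0, 0.0, 0.49, 1.0)])  # -> [0, 2, 2, 3]
print(dequantize(3, -1.0, 1.0, 4))                                   # -> 0.75
```

The round trip is lossy by design: any value in a bin dequantizes to that bin's centre.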
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/non_local.py DELETED
@@ -1,306 +0,0 @@
- # Copyright (c) OpenMMLab. All rights reserved.
- from abc import ABCMeta
-
- import torch
- import torch.nn as nn
-
- from ..utils import constant_init, normal_init
- from .conv_module import ConvModule
- from .registry import PLUGIN_LAYERS
-
-
- class _NonLocalNd(nn.Module, metaclass=ABCMeta):
-     """Basic Non-local module.
-
-     This module is proposed in
-     "Non-local Neural Networks"
-     Paper reference: https://arxiv.org/abs/1711.07971
-     Code reference: https://github.com/AlexHex7/Non-local_pytorch
-
-     Args:
-         in_channels (int): Channels of the input feature map.
-         reduction (int): Channel reduction ratio. Default: 2.
-         use_scale (bool): Whether to scale pairwise_weight by
-             `1/sqrt(inter_channels)` when the mode is `embedded_gaussian`.
-             Default: True.
-         conv_cfg (None | dict): The config dict for convolution layers.
-             If not specified, it will use `nn.Conv2d` for convolution layers.
-             Default: None.
-         norm_cfg (None | dict): The config dict for normalization layers.
-             Default: None. (This parameter is only applicable to conv_out.)
-         mode (str): Options are `gaussian`, `concatenation`,
-             `embedded_gaussian` and `dot_product`. Default: embedded_gaussian.
-     """
-
-     def __init__(self,
-                  in_channels,
-                  reduction=2,
-                  use_scale=True,
-                  conv_cfg=None,
-                  norm_cfg=None,
-                  mode='embedded_gaussian',
-                  **kwargs):
-         super(_NonLocalNd, self).__init__()
-         self.in_channels = in_channels
-         self.reduction = reduction
-         self.use_scale = use_scale
-         self.inter_channels = max(in_channels // reduction, 1)
-         self.mode = mode
-
-         if mode not in [
-                 'gaussian', 'embedded_gaussian', 'dot_product', 'concatenation'
-         ]:
-             raise ValueError("Mode should be in 'gaussian', 'concatenation', "
-                              f"'embedded_gaussian' or 'dot_product', but got "
-                              f'{mode} instead.')
-
-         # g, theta, phi are defaulted as `nn.ConvNd`.
-         # Here we use ConvModule for potential usage.
-         self.g = ConvModule(
-             self.in_channels,
-             self.inter_channels,
-             kernel_size=1,
-             conv_cfg=conv_cfg,
-             act_cfg=None)
-         self.conv_out = ConvModule(
-             self.inter_channels,
-             self.in_channels,
-             kernel_size=1,
-             conv_cfg=conv_cfg,
-             norm_cfg=norm_cfg,
-             act_cfg=None)
-
-         if self.mode != 'gaussian':
-             self.theta = ConvModule(
-                 self.in_channels,
-                 self.inter_channels,
-                 kernel_size=1,
-                 conv_cfg=conv_cfg,
-                 act_cfg=None)
-             self.phi = ConvModule(
-                 self.in_channels,
-                 self.inter_channels,
-                 kernel_size=1,
-                 conv_cfg=conv_cfg,
-                 act_cfg=None)
-
-         if self.mode == 'concatenation':
-             self.concat_project = ConvModule(
-                 self.inter_channels * 2,
-                 1,
-                 kernel_size=1,
-                 stride=1,
-                 padding=0,
-                 bias=False,
-                 act_cfg=dict(type='ReLU'))
-
-         self.init_weights(**kwargs)
-
-     def init_weights(self, std=0.01, zeros_init=True):
-         if self.mode != 'gaussian':
-             for m in [self.g, self.theta, self.phi]:
-                 normal_init(m.conv, std=std)
-         else:
-             normal_init(self.g.conv, std=std)
-         if zeros_init:
-             if self.conv_out.norm_cfg is None:
-                 constant_init(self.conv_out.conv, 0)
-             else:
-                 constant_init(self.conv_out.norm, 0)
-         else:
-             if self.conv_out.norm_cfg is None:
-                 normal_init(self.conv_out.conv, std=std)
-             else:
-                 normal_init(self.conv_out.norm, std=std)
-
-     def gaussian(self, theta_x, phi_x):
-         # NonLocal1d pairwise_weight: [N, H, H]
-         # NonLocal2d pairwise_weight: [N, HxW, HxW]
-         # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW]
-         pairwise_weight = torch.matmul(theta_x, phi_x)
-         pairwise_weight = pairwise_weight.softmax(dim=-1)
-         return pairwise_weight
-
-     def embedded_gaussian(self, theta_x, phi_x):
-         # NonLocal1d pairwise_weight: [N, H, H]
-         # NonLocal2d pairwise_weight: [N, HxW, HxW]
-         # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW]
-         pairwise_weight = torch.matmul(theta_x, phi_x)
-         if self.use_scale:
-             # theta_x.shape[-1] is `self.inter_channels`
-             pairwise_weight /= theta_x.shape[-1]**0.5
-         pairwise_weight = pairwise_weight.softmax(dim=-1)
-         return pairwise_weight
-
-     def dot_product(self, theta_x, phi_x):
-         # NonLocal1d pairwise_weight: [N, H, H]
-         # NonLocal2d pairwise_weight: [N, HxW, HxW]
-         # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW]
-         pairwise_weight = torch.matmul(theta_x, phi_x)
-         pairwise_weight /= pairwise_weight.shape[-1]
-         return pairwise_weight
-
-     def concatenation(self, theta_x, phi_x):
-         # NonLocal1d pairwise_weight: [N, H, H]
-         # NonLocal2d pairwise_weight: [N, HxW, HxW]
-         # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW]
-         h = theta_x.size(2)
-         w = phi_x.size(3)
-         theta_x = theta_x.repeat(1, 1, 1, w)
-         phi_x = phi_x.repeat(1, 1, h, 1)
-
-         concat_feature = torch.cat([theta_x, phi_x], dim=1)
-         pairwise_weight = self.concat_project(concat_feature)
-         n, _, h, w = pairwise_weight.size()
-         pairwise_weight = pairwise_weight.view(n, h, w)
-         pairwise_weight /= pairwise_weight.shape[-1]
-
-         return pairwise_weight
-
-     def forward(self, x):
-         # Assume `reduction = 1`, then `inter_channels = C`
-         # or `inter_channels = C` when `mode="gaussian"`
-
-         # NonLocal1d x: [N, C, H]
-         # NonLocal2d x: [N, C, H, W]
-         # NonLocal3d x: [N, C, T, H, W]
-         n = x.size(0)
-
-         # NonLocal1d g_x: [N, H, C]
-         # NonLocal2d g_x: [N, HxW, C]
-         # NonLocal3d g_x: [N, TxHxW, C]
-         g_x = self.g(x).view(n, self.inter_channels, -1)
-         g_x = g_x.permute(0, 2, 1)
-
-         # NonLocal1d theta_x: [N, H, C], phi_x: [N, C, H]
-         # NonLocal2d theta_x: [N, HxW, C], phi_x: [N, C, HxW]
-         # NonLocal3d theta_x: [N, TxHxW, C], phi_x: [N, C, TxHxW]
-         if self.mode == 'gaussian':
-             theta_x = x.view(n, self.in_channels, -1)
-             theta_x = theta_x.permute(0, 2, 1)
-             if self.sub_sample:
-                 phi_x = self.phi(x).view(n, self.in_channels, -1)
-             else:
-                 phi_x = x.view(n, self.in_channels, -1)
-         elif self.mode == 'concatenation':
-             theta_x = self.theta(x).view(n, self.inter_channels, -1, 1)
-             phi_x = self.phi(x).view(n, self.inter_channels, 1, -1)
-         else:
-             theta_x = self.theta(x).view(n, self.inter_channels, -1)
-             theta_x = theta_x.permute(0, 2, 1)
-             phi_x = self.phi(x).view(n, self.inter_channels, -1)
-
-         pairwise_func = getattr(self, self.mode)
-         # NonLocal1d pairwise_weight: [N, H, H]
-         # NonLocal2d pairwise_weight: [N, HxW, HxW]
-         # NonLocal3d pairwise_weight: [N, TxHxW, TxHxW]
-         pairwise_weight = pairwise_func(theta_x, phi_x)
-
-         # NonLocal1d y: [N, H, C]
-         # NonLocal2d y: [N, HxW, C]
-         # NonLocal3d y: [N, TxHxW, C]
-         y = torch.matmul(pairwise_weight, g_x)
-         # NonLocal1d y: [N, C, H]
-         # NonLocal2d y: [N, C, H, W]
-         # NonLocal3d y: [N, C, T, H, W]
-         y = y.permute(0, 2, 1).contiguous().reshape(n, self.inter_channels,
-                                                     *x.size()[2:])
-
-         output = x + self.conv_out(y)
-
-         return output
-
-
- class NonLocal1d(_NonLocalNd):
-     """1D Non-local module.
-
-     Args:
-         in_channels (int): Same as `NonLocalND`.
-         sub_sample (bool): Whether to apply max pooling after pairwise
-             function (Note that the `sub_sample` is applied on spatial only).
-             Default: False.
-         conv_cfg (None | dict): Same as `NonLocalND`.
-             Default: dict(type='Conv1d').
-     """
-
-     def __init__(self,
-                  in_channels,
-                  sub_sample=False,
-                  conv_cfg=dict(type='Conv1d'),
-                  **kwargs):
-         super(NonLocal1d, self).__init__(
-             in_channels, conv_cfg=conv_cfg, **kwargs)
-
-         self.sub_sample = sub_sample
-
-         if sub_sample:
-             max_pool_layer = nn.MaxPool1d(kernel_size=2)
-             self.g = nn.Sequential(self.g, max_pool_layer)
-             if self.mode != 'gaussian':
-                 self.phi = nn.Sequential(self.phi, max_pool_layer)
-             else:
-                 self.phi = max_pool_layer
-
-
- @PLUGIN_LAYERS.register_module()
- class NonLocal2d(_NonLocalNd):
-     """2D Non-local module.
-
-     Args:
-         in_channels (int): Same as `NonLocalND`.
-         sub_sample (bool): Whether to apply max pooling after pairwise
-             function (Note that the `sub_sample` is applied on spatial only).
-             Default: False.
-         conv_cfg (None | dict): Same as `NonLocalND`.
-             Default: dict(type='Conv2d').
-     """
-
-     _abbr_ = 'nonlocal_block'
-
-     def __init__(self,
-                  in_channels,
-                  sub_sample=False,
-                  conv_cfg=dict(type='Conv2d'),
-                  **kwargs):
-         super(NonLocal2d, self).__init__(
-             in_channels, conv_cfg=conv_cfg, **kwargs)
-
-         self.sub_sample = sub_sample
-
-         if sub_sample:
-             max_pool_layer = nn.MaxPool2d(kernel_size=(2, 2))
-             self.g = nn.Sequential(self.g, max_pool_layer)
-             if self.mode != 'gaussian':
-                 self.phi = nn.Sequential(self.phi, max_pool_layer)
-             else:
-                 self.phi = max_pool_layer
-
-
- class NonLocal3d(_NonLocalNd):
-     """3D Non-local module.
-
-     Args:
-         in_channels (int): Same as `NonLocalND`.
-         sub_sample (bool): Whether to apply max pooling after pairwise
-             function (Note that the `sub_sample` is applied on spatial only).
-             Default: False.
-         conv_cfg (None | dict): Same as `NonLocalND`.
-             Default: dict(type='Conv3d').
-     """
-
-     def __init__(self,
-                  in_channels,
-                  sub_sample=False,
-                  conv_cfg=dict(type='Conv3d'),
-                  **kwargs):
-         super(NonLocal3d, self).__init__(
-             in_channels, conv_cfg=conv_cfg, **kwargs)
-         self.sub_sample = sub_sample
-
-         if sub_sample:
-             max_pool_layer = nn.MaxPool3d(kernel_size=(1, 2, 2))
-             self.g = nn.Sequential(self.g, max_pool_layer)
-             if self.mode != 'gaussian':
-                 self.phi = nn.Sequential(self.phi, max_pool_layer)
-             else:
-                 self.phi = max_pool_layer
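The core of the deleted module is the `embedded_gaussian` pairwise weight: a theta/phi dot product, optionally scaled by `1/sqrt(C)`, followed by a row-wise softmax. A dependency-free sketch for a single sample, with plain lists standing in for tensors (the real module does this batched in PyTorch):

```python
import math


def embedded_gaussian(theta_x, phi_x, use_scale=True):
    """Pairwise attention weights for one sample.
    theta_x is [L][C] (one row per position), phi_x is [C][L];
    mirrors _NonLocalNd.embedded_gaussian: matmul, 1/sqrt(C) scale,
    softmax over the last axis."""
    c = len(theta_x[0])
    scores = [[sum(theta_x[i][k] * phi_x[k][j] for k in range(c))
               for j in range(len(phi_x[0]))]
              for i in range(len(theta_x))]
    if use_scale:
        scores = [[s / math.sqrt(c) for s in row] for row in scores]
    out = []
    for row in scores:
        m = max(row)                      # subtract max for stability
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out


w = embedded_gaussian([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
print([round(sum(row), 6) for row in w])  # -> [1.0, 1.0] (each row is a distribution)
```

Each output row sums to one, so the subsequent `matmul(pairwise_weight, g_x)` in `forward` is a convex combination of value vectors over all positions.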
spaces/Apex-X/ROOPOK/roop/processors/frame/face_enhancer.py DELETED
@@ -1,104 +0,0 @@
- from typing import Any, List, Callable
- import cv2
- import threading
- from gfpgan.utils import GFPGANer
-
- import roop.globals
- import roop.processors.frame.core
- from roop.core import update_status
- from roop.face_analyser import get_many_faces
- from roop.typing import Frame, Face
- from roop.utilities import conditional_download, resolve_relative_path, is_image, is_video
-
- FACE_ENHANCER = None
- THREAD_SEMAPHORE = threading.Semaphore()
- THREAD_LOCK = threading.Lock()
- NAME = 'ROOP.FACE-ENHANCER'
-
-
- def get_face_enhancer() -> Any:
-     global FACE_ENHANCER
-
-     with THREAD_LOCK:
-         if FACE_ENHANCER is None:
-             model_path = resolve_relative_path('../models/GFPGANv1.4.pth')
-             # todo: set models path -> https://github.com/TencentARC/GFPGAN/issues/399
-             FACE_ENHANCER = GFPGANer(model_path=model_path, upscale=1, device=get_device())
-     return FACE_ENHANCER
-
-
- def get_device() -> str:
-     if 'CUDAExecutionProvider' in roop.globals.execution_providers:
-         return 'cuda'
-     if 'CoreMLExecutionProvider' in roop.globals.execution_providers:
-         return 'mps'
-     return 'cpu'
-
-
- def clear_face_enhancer() -> None:
-     global FACE_ENHANCER
-
-     FACE_ENHANCER = None
-
-
- def pre_check() -> bool:
-     download_directory_path = resolve_relative_path('../models')
-     conditional_download(download_directory_path, ['https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth'])
-     return True
-
-
- def pre_start() -> bool:
-     if not is_image(roop.globals.target_path) and not is_video(roop.globals.target_path):
-         update_status('Select an image or video for target path.', NAME)
-         return False
-     return True
-
-
- def post_process() -> None:
-     clear_face_enhancer()
-
-
- def enhance_face(target_face: Face, temp_frame: Frame) -> Frame:
-     start_x, start_y, end_x, end_y = map(int, target_face['bbox'])
-     padding_x = int((end_x - start_x) * 0.5)
-     padding_y = int((end_y - start_y) * 0.5)
-     start_x = max(0, start_x - padding_x)
-     start_y = max(0, start_y - padding_y)
-     end_x = max(0, end_x + padding_x)
-     end_y = max(0, end_y + padding_y)
-     temp_face = temp_frame[start_y:end_y, start_x:end_x]
-     if temp_face.size:
-         with THREAD_SEMAPHORE:
-             _, _, temp_face = get_face_enhancer().enhance(
-                 temp_face,
-                 paste_back=True
-             )
-         temp_frame[start_y:end_y, start_x:end_x] = temp_face
-     return temp_frame
-
-
- def process_frame(source_face: Face, reference_face: Face, temp_frame: Frame) -> Frame:
-     many_faces = get_many_faces(temp_frame)
-     if many_faces:
-         for target_face in many_faces:
-             temp_frame = enhance_face(target_face, temp_frame)
-     return temp_frame
-
-
- def process_frames(source_path: str, temp_frame_paths: List[str], update: Callable[[], None]) -> None:
-     for temp_frame_path in temp_frame_paths:
-         temp_frame = cv2.imread(temp_frame_path)
-         result = process_frame(None, None, temp_frame)
-         cv2.imwrite(temp_frame_path, result)
-         if update:
-             update()
-
-
- def process_image(source_path: str, target_path: str, output_path: str) -> None:
-     target_frame = cv2.imread(target_path)
-     result = process_frame(None, None, target_frame)
-     cv2.imwrite(output_path, result)
-
-
- def process_video(source_path: str, temp_frame_paths: List[str]) -> None:
-     roop.processors.frame.core.process_video(None, temp_frame_paths, process_frames)
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/latex.py DELETED
@@ -1,521 +0,0 @@
- """
-     pygments.formatters.latex
-     ~~~~~~~~~~~~~~~~~~~~~~~~~
-
-     Formatter for LaTeX fancyvrb output.
-
-     :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
-     :license: BSD, see LICENSE for details.
- """
-
- from io import StringIO
-
- from pip._vendor.pygments.formatter import Formatter
- from pip._vendor.pygments.lexer import Lexer, do_insertions
- from pip._vendor.pygments.token import Token, STANDARD_TYPES
- from pip._vendor.pygments.util import get_bool_opt, get_int_opt
-
-
- __all__ = ['LatexFormatter']
-
-
- def escape_tex(text, commandprefix):
-     return text.replace('\\', '\x00'). \
-         replace('{', '\x01'). \
-         replace('}', '\x02'). \
-         replace('\x00', r'\%sZbs{}' % commandprefix). \
-         replace('\x01', r'\%sZob{}' % commandprefix). \
-         replace('\x02', r'\%sZcb{}' % commandprefix). \
-         replace('^', r'\%sZca{}' % commandprefix). \
-         replace('_', r'\%sZus{}' % commandprefix). \
-         replace('&', r'\%sZam{}' % commandprefix). \
-         replace('<', r'\%sZlt{}' % commandprefix). \
-         replace('>', r'\%sZgt{}' % commandprefix). \
-         replace('#', r'\%sZsh{}' % commandprefix). \
-         replace('%', r'\%sZpc{}' % commandprefix). \
-         replace('$', r'\%sZdl{}' % commandprefix). \
-         replace('-', r'\%sZhy{}' % commandprefix). \
-         replace("'", r'\%sZsq{}' % commandprefix). \
-         replace('"', r'\%sZdq{}' % commandprefix). \
-         replace('~', r'\%sZti{}' % commandprefix)
-
-
- DOC_TEMPLATE = r'''
- \documentclass{%(docclass)s}
- \usepackage{fancyvrb}
- \usepackage{color}
- \usepackage[%(encoding)s]{inputenc}
- %(preamble)s
-
- %(styledefs)s
-
- \begin{document}
-
- \section*{%(title)s}
-
- %(code)s
- \end{document}
- '''
-
- ## Small explanation of the mess below :)
- #
- # The previous version of the LaTeX formatter just assigned a command to
- # each token type defined in the current style. That obviously is
- # problematic if the highlighted code is produced for a different style
- # than the style commands themselves.
- #
- # This version works much like the HTML formatter which assigns multiple
- # CSS classes to each <span> tag, from the most specific to the least
- # specific token type, thus falling back to the parent token type if one
- # is not defined. Here, the classes are there too and use the same short
- # forms given in token.STANDARD_TYPES.
- #
- # Highlighted code now only uses one custom command, which by default is
- # \PY and selectable by the commandprefix option (and in addition the
- # escapes \PYZat, \PYZlb and \PYZrb which haven't been renamed for
- # backwards compatibility purposes).
- #
- # \PY has two arguments: the classes, separated by +, and the text to
- # render in that style. The classes are resolved into the respective
- # style commands by magic, which serves to ignore unknown classes.
- #
- # The magic macros are:
- # * \PY@it, \PY@bf, etc. are unconditionally wrapped around the text
- #   to render in \PY@do. Their definition determines the style.
- # * \PY@reset resets \PY@it etc. to do nothing.
- # * \PY@toks parses the list of classes, using magic inspired by the
- #   keyval package (but modified to use plusses instead of commas
- #   because fancyvrb redefines commas inside its environments).
- # * \PY@tok processes one class, calling the \PY@tok@classname command
- #   if it exists.
- # * \PY@tok@classname sets the \PY@it etc. to reflect the chosen style
- #   for its class.
- # * \PY resets the style, parses the classnames and then calls \PY@do.
- #
- # Tip: to read this code, print it out in substituted form using e.g.
- # >>> print STYLE_TEMPLATE % {'cp': 'PY'}
-
- STYLE_TEMPLATE = r'''
- \makeatletter
- \def\%(cp)s@reset{\let\%(cp)s@it=\relax \let\%(cp)s@bf=\relax%%
-     \let\%(cp)s@ul=\relax \let\%(cp)s@tc=\relax%%
-     \let\%(cp)s@bc=\relax \let\%(cp)s@ff=\relax}
- \def\%(cp)s@tok#1{\csname %(cp)s@tok@#1\endcsname}
- \def\%(cp)s@toks#1+{\ifx\relax#1\empty\else%%
-     \%(cp)s@tok{#1}\expandafter\%(cp)s@toks\fi}
- \def\%(cp)s@do#1{\%(cp)s@bc{\%(cp)s@tc{\%(cp)s@ul{%%
-     \%(cp)s@it{\%(cp)s@bf{\%(cp)s@ff{#1}}}}}}}
- \def\%(cp)s#1#2{\%(cp)s@reset\%(cp)s@toks#1+\relax+\%(cp)s@do{#2}}
-
- %(styles)s
-
- \def\%(cp)sZbs{\char`\\}
- \def\%(cp)sZus{\char`\_}
- \def\%(cp)sZob{\char`\{}
- \def\%(cp)sZcb{\char`\}}
- \def\%(cp)sZca{\char`\^}
- \def\%(cp)sZam{\char`\&}
- \def\%(cp)sZlt{\char`\<}
- \def\%(cp)sZgt{\char`\>}
- \def\%(cp)sZsh{\char`\#}
- \def\%(cp)sZpc{\char`\%%}
- \def\%(cp)sZdl{\char`\$}
- \def\%(cp)sZhy{\char`\-}
- \def\%(cp)sZsq{\char`\'}
- \def\%(cp)sZdq{\char`\"}
- \def\%(cp)sZti{\char`\~}
- %% for compatibility with earlier versions
- \def\%(cp)sZat{@}
- \def\%(cp)sZlb{[}
- \def\%(cp)sZrb{]}
- \makeatother
- '''
-
-
- def _get_ttype_name(ttype):
-     fname = STANDARD_TYPES.get(ttype)
-     if fname:
-         return fname
-     aname = ''
-     while fname is None:
-         aname = ttype[-1] + aname
-         ttype = ttype.parent
-         fname = STANDARD_TYPES.get(ttype)
-     return fname + aname
-
-
- class LatexFormatter(Formatter):
-     r"""
-     Format tokens as LaTeX code. This needs the `fancyvrb` and `color`
-     standard packages.
-
-     Without the `full` option, code is formatted as one ``Verbatim``
-     environment, like this:
-
-     .. sourcecode:: latex
-
-         \begin{Verbatim}[commandchars=\\\{\}]
-         \PY{k}{def }\PY{n+nf}{foo}(\PY{n}{bar}):
-             \PY{k}{pass}
-         \end{Verbatim}
-
-     Wrapping can be disabled using the `nowrap` option.
-
-     The special command used here (``\PY``) and all the other macros it needs
-     are output by the `get_style_defs` method.
-
-     With the `full` option, a complete LaTeX document is output, including
-     the command definitions in the preamble.
-
-     The `get_style_defs()` method of a `LatexFormatter` returns a string
-     containing ``\def`` commands defining the macros needed inside the
-     ``Verbatim`` environments.
-
-     Additional options accepted:
-
-     `nowrap`
-         If set to ``True``, don't wrap the tokens at all, not even inside a
-         ``\begin{Verbatim}`` environment. This disables most other options
-         (default: ``False``).
-
-     `style`
-         The style to use, can be a string or a Style subclass (default:
-         ``'default'``).
-
-     `full`
-         Tells the formatter to output a "full" document, i.e. a complete
-         self-contained document (default: ``False``).
-
-     `title`
-         If `full` is true, the title that should be used to caption the
-         document (default: ``''``).
-
-     `docclass`
-         If the `full` option is enabled, this is the document class to use
-         (default: ``'article'``).
-
-     `preamble`
-         If the `full` option is enabled, this can be further preamble commands,
-         e.g. ``\usepackage`` (default: ``''``).
-
-     `linenos`
-         If set to ``True``, output line numbers (default: ``False``).
-
-     `linenostart`
-         The line number for the first line (default: ``1``).
-
-     `linenostep`
-         If set to a number n > 1, only every nth line number is printed.
-
-     `verboptions`
-         Additional options given to the Verbatim environment (see the *fancyvrb*
-         docs for possible values) (default: ``''``).
-
-     `commandprefix`
-         The LaTeX commands used to produce colored output are constructed
-         using this prefix and some letters (default: ``'PY'``).
-
-         .. versionadded:: 0.7
-         .. versionchanged:: 0.10
-            The default is now ``'PY'`` instead of ``'C'``.
-
-     `texcomments`
-         If set to ``True``, enables LaTeX comment lines. That is, LaTex markup
-         in comment tokens is not escaped so that LaTeX can render it (default:
-         ``False``).
-
-         .. versionadded:: 1.2
-
-     `mathescape`
-         If set to ``True``, enables LaTeX math mode escape in comments. That
-         is, ``'$...$'`` inside a comment will trigger math mode (default:
-         ``False``).
-
-         .. versionadded:: 1.2
-
-     `escapeinside`
-         If set to a string of length 2, enables escaping to LaTeX. Text
-         delimited by these 2 characters is read as LaTeX code and
-         typeset accordingly. It has no effect in string literals. It has
-         no effect in comments if `texcomments` or `mathescape` is
-         set. (default: ``''``).
-
-         .. versionadded:: 2.0
-
-     `envname`
-         Allows you to pick an alternative environment name replacing Verbatim.
-         The alternate environment still has to support Verbatim's option syntax.
-         (default: ``'Verbatim'``).
-
-         .. versionadded:: 2.0
-     """
-     name = 'LaTeX'
-     aliases = ['latex', 'tex']
-     filenames = ['*.tex']
-
-     def __init__(self, **options):
-         Formatter.__init__(self, **options)
-         self.nowrap = get_bool_opt(options, 'nowrap', False)
-         self.docclass = options.get('docclass', 'article')
-         self.preamble = options.get('preamble', '')
-         self.linenos = get_bool_opt(options, 'linenos', False)
-         self.linenostart = abs(get_int_opt(options, 'linenostart', 1))
-         self.linenostep = abs(get_int_opt(options, 'linenostep', 1))
-         self.verboptions = options.get('verboptions', '')
-         self.nobackground = get_bool_opt(options, 'nobackground', False)
-         self.commandprefix = options.get('commandprefix', 'PY')
-         self.texcomments = get_bool_opt(options, 'texcomments', False)
-         self.mathescape = get_bool_opt(options, 'mathescape', False)
-         self.escapeinside = options.get('escapeinside', '')
-         if len(self.escapeinside) == 2:
-             self.left = self.escapeinside[0]
-             self.right = self.escapeinside[1]
-         else:
-             self.escapeinside = ''
-         self.envname = options.get('envname', 'Verbatim')
-
-         self._create_stylesheet()
-
-     def _create_stylesheet(self):
-         t2n = self.ttype2name = {Token: ''}
-         c2d = self.cmd2def = {}
-         cp = self.commandprefix
-
-         def rgbcolor(col):
-             if col:
-                 return ','.join(['%.2f' % (int(col[i] + col[i + 1], 16) / 255.0)
-                                  for i in (0, 2, 4)])
-             else:
-                 return '1,1,1'
-
-         for ttype, ndef in self.style:
-             name = _get_ttype_name(ttype)
-             cmndef = ''
-             if ndef['bold']:
-                 cmndef += r'\let\$$@bf=\textbf'
-             if ndef['italic']:
-                 cmndef += r'\let\$$@it=\textit'
-             if ndef['underline']:
-                 cmndef += r'\let\$$@ul=\underline'
-             if ndef['roman']:
-                 cmndef += r'\let\$$@ff=\textrm'
-             if ndef['sans']:
-                 cmndef += r'\let\$$@ff=\textsf'
-             if ndef['mono']:
-                 cmndef += r'\let\$$@ff=\textsf'
-             if ndef['color']:
-                 cmndef += (r'\def\$$@tc##1{\textcolor[rgb]{%s}{##1}}' %
-                            rgbcolor(ndef['color']))
-             if ndef['border']:
-                 cmndef += (r'\def\$$@bc##1{{\setlength{\fboxsep}{\string -\fboxrule}'
-                            r'\fcolorbox[rgb]{%s}{%s}{\strut ##1}}}' %
-                            (rgbcolor(ndef['border']),
-                             rgbcolor(ndef['bgcolor'])))
-             elif ndef['bgcolor']:
-                 cmndef += (r'\def\$$@bc##1{{\setlength{\fboxsep}{0pt}'
-                            r'\colorbox[rgb]{%s}{\strut ##1}}}' %
-                            rgbcolor(ndef['bgcolor']))
-             if cmndef == '':
-                 continue
-             cmndef = cmndef.replace('$$', cp)
-             t2n[ttype] = name
-             c2d[name] = cmndef
-
-     def get_style_defs(self, arg=''):
-         """
-         Return the command sequences needed to define the commands
-         used to format text in the verbatim environment. ``arg`` is ignored.
-         """
-         cp = self.commandprefix
-         styles = []
-         for name, definition in self.cmd2def.items():
-             styles.append(r'\@namedef{%s@tok@%s}{%s}' % (cp, name, definition))
-         return STYLE_TEMPLATE % {'cp': self.commandprefix,
-                                  'styles': '\n'.join(styles)}
-
-     def format_unencoded(self, tokensource, outfile):
-         # TODO: add support for background colors
-         t2n = self.ttype2name
-         cp = self.commandprefix
-
-         if self.full:
-             realoutfile = outfile
-             outfile = StringIO()
-
-         if not self.nowrap:
-             outfile.write('\\begin{' + self.envname + '}[commandchars=\\\\\\{\\}')
-             if self.linenos:
-                 start, step = self.linenostart, self.linenostep
-                 outfile.write(',numbers=left' +
-                               (start and ',firstnumber=%d' % start or '') +
-                               (step and ',stepnumber=%d' % step or ''))
-             if self.mathescape or self.texcomments or self.escapeinside:
-                 outfile.write(',codes={\\catcode`\\$=3\\catcode`\\^=7'
-                               '\\catcode`\\_=8\\relax}')
-             if self.verboptions:
-                 outfile.write(',' + self.verboptions)
-             outfile.write(']\n')
-
-         for ttype, value in tokensource:
-             if ttype in Token.Comment:
-                 if self.texcomments:
-                     # Try to guess comment starting lexeme and escape it ...
-                     start = value[0:1]
-                     for i in range(1, len(value)):
-                         if start[0] != value[i]:
-                             break
-                         start += value[i]
-
-                     value = value[len(start):]
-                     start = escape_tex(start, cp)
-
-                     # ... but do not escape inside comment.
-                     value = start + value
-                 elif self.mathescape:
-                     # Only escape parts not inside a math environment.
-                     parts = value.split('$')
-                     in_math = False
-                     for i, part in enumerate(parts):
-                         if not in_math:
-                             parts[i] = escape_tex(part, cp)
-                         in_math = not in_math
-                     value = '$'.join(parts)
-                 elif self.escapeinside:
-                     text = value
-                     value = ''
-                     while text:
-                         a, sep1, text = text.partition(self.left)
-                         if sep1:
-                             b, sep2, text = text.partition(self.right)
-                             if sep2:
-                                 value += escape_tex(a, cp) + b
-                             else:
-                                 value += escape_tex(a + sep1 + b, cp)
-                         else:
-                             value += escape_tex(a, cp)
-                 else:
-                     value = escape_tex(value, cp)
-             elif ttype not in Token.Escape:
-                 value = escape_tex(value, cp)
-             styles = []
-             while ttype is not Token:
-                 try:
-                     styles.append(t2n[ttype])
-                 except KeyError:
-                     # not in current style
-                     styles.append(_get_ttype_name(ttype))
-                 ttype = ttype.parent
-             styleval = '+'.join(reversed(styles))
-             if styleval:
-                 spl = value.split('\n')
-                 for line in spl[:-1]:
-                     if line:
-                         outfile.write("\\%s{%s}{%s}" % (cp, styleval, line))
-                     outfile.write('\n')
-                 if spl[-1]:
-                     outfile.write("\\%s{%s}{%s}" % (cp, styleval, spl[-1]))
-             else:
-                 outfile.write(value)
-
-         if not self.nowrap:
-             outfile.write('\\end{' + self.envname + '}\n')
-
-         if self.full:
-             encoding = self.encoding or 'utf8'
-             # map known existings encodings from LaTeX distribution
-             encoding = {
-                 'utf_8': 'utf8',
-                 'latin_1': 'latin1',
-                 'iso_8859_1': 'latin1',
-             }.get(encoding.replace('-', '_'), encoding)
-             realoutfile.write(DOC_TEMPLATE %
-                               dict(docclass = self.docclass,
-                                    preamble = self.preamble,
-                                    title = self.title,
-                                    encoding = encoding,
-                                    styledefs = self.get_style_defs(),
-                                    code = outfile.getvalue()))
-
-
- class LatexEmbeddedLexer(Lexer):
-     """
-     This lexer takes one lexer as argument, the lexer for the language
-     being formatted, and the left and right delimiters for escaped text.
-
-     First everything is scanned using the language lexer to obtain
-     strings and comments. All other consecutive tokens are merged and
-     the resulting text is scanned for escaped segments, which are given
-     the Token.Escape type. Finally text that is not escaped is scanned
-     again with the language lexer.
-     """
-     def __init__(self, left, right, lang, **options):
-         self.left = left
-         self.right = right
-         self.lang = lang
-         Lexer.__init__(self, **options)
-
-     def get_tokens_unprocessed(self, text):
-         # find and remove all the escape tokens (replace with an empty string)
-         # this is very similar to DelegatingLexer.get_tokens_unprocessed.
-         buffered = ''
-         insertions = []
-         insertion_buf = []
-         for i, t, v in self._find_safe_escape_tokens(text):
464
- if t is None:
465
- if insertion_buf:
466
- insertions.append((len(buffered), insertion_buf))
467
- insertion_buf = []
468
- buffered += v
469
- else:
470
- insertion_buf.append((i, t, v))
471
- if insertion_buf:
472
- insertions.append((len(buffered), insertion_buf))
473
- return do_insertions(insertions,
474
- self.lang.get_tokens_unprocessed(buffered))
475
-
476
- def _find_safe_escape_tokens(self, text):
477
- """ find escape tokens that are not in strings or comments """
478
- for i, t, v in self._filter_to(
479
- self.lang.get_tokens_unprocessed(text),
480
- lambda t: t in Token.Comment or t in Token.String
481
- ):
482
- if t is None:
483
- for i2, t2, v2 in self._find_escape_tokens(v):
484
- yield i + i2, t2, v2
485
- else:
486
- yield i, None, v
487
-
488
- def _filter_to(self, it, pred):
489
- """ Keep only the tokens that match `pred`, merge the others together """
490
- buf = ''
491
- idx = 0
492
- for i, t, v in it:
493
- if pred(t):
494
- if buf:
495
- yield idx, None, buf
496
- buf = ''
497
- yield i, t, v
498
- else:
499
- if not buf:
500
- idx = i
501
- buf += v
502
- if buf:
503
- yield idx, None, buf
504
-
505
- def _find_escape_tokens(self, text):
506
- """ Find escape tokens within text, give token=None otherwise """
507
- index = 0
508
- while text:
509
- a, sep1, text = text.partition(self.left)
510
- if a:
511
- yield index, None, a
512
- index += len(a)
513
- if sep1:
514
- b, sep2, text = text.partition(self.right)
515
- if sep2:
516
- yield index + len(sep1), Token.Escape, b
517
- index += len(sep1) + len(b) + len(sep2)
518
- else:
519
- yield index, Token.Error, sep1
520
- index += len(sep1)
521
- text = b
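The `mathescape` branch in the deleted formatter above escapes only the text that lies outside `$…$` math segments, by splitting on `'$'` and toggling a flag. A standalone sketch of that idea, with the token escaping reduced to a caller-supplied function (the names here are illustrative, not part of the Pygments API):

```python
from typing import Callable

def escape_outside_math(value: str, escape: Callable[[str], str]) -> str:
    """Escape only the parts of `value` not inside $...$ math segments,
    mirroring the alternating split-on-'$' logic of the mathescape branch."""
    parts = value.split('$')
    in_math = False
    for i, part in enumerate(parts):
        if not in_math:
            parts[i] = escape(part)
        in_math = not in_math  # every '$' toggles math mode
    return '$'.join(parts)

# Toy escaper that backslash-escapes underscores, as LaTeX would need:
demo = escape_outside_math('a_b $x_1$ c_d', lambda s: s.replace('_', r'\_'))
# demo == r'a\_b $x_1$ c\_d'  (the subscript inside $...$ is left alone)
```

Note that an unmatched `'$'` flips the flag without a closing delimiter, which is why the real formatter only enables this path when `mathescape` is explicitly requested.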
spaces/BHO/URDtest/app.py DELETED
@@ -1,63 +0,0 @@
-import gradio as gr
-import os
-from langchain.chains import RetrievalQA
-from langchain.llms import OpenAI
-from langchain.document_loaders import PyPDFLoader
-from langchain.document_loaders import DirectoryLoader
-from langchain.text_splitter import CharacterTextSplitter
-from langchain.embeddings import OpenAIEmbeddings
-from langchain.vectorstores import Chroma
-
-
-# Set the path of your new directory
-dir_path = "./docs"
-
-# Create the directory using the os module
-os.makedirs(dir_path, exist_ok=True)
-
-# Print a confirmation message
-print(f"New directory created at {dir_path}")
-
-def qa_system(pdf_file, openai_key, prompt, chain_type, k):
-    os.environ["OPENAI_API_KEY"] = openai_key
-
-    # load document
-    # loader = PyPDFLoader(pdf_file.name)
-    loader = DirectoryLoader(dir_path, glob="**/*.pdf")  # , loader_cls=PDFLoader)
-    documents = loader.load()
-
-    # split the documents into chunks
-    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
-    texts = text_splitter.split_documents(documents)
-
-    # select which embeddings we want to use
-    embeddings = OpenAIEmbeddings()
-
-    # create the vectorstore to use as the index
-    db = Chroma.from_documents(texts, embeddings)
-
-    # expose this index in a retriever interface
-    retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": k})
-
-    # create a chain to answer questions
-    qa = RetrievalQA.from_chain_type(
-        llm=OpenAI(), chain_type=chain_type, retriever=retriever, return_source_documents=True)
-
-    # get the result
-    result = qa({"query": prompt})
-    return result['result'], [doc.page_content for doc in result["source_documents"]]
-
-# define the Gradio interface
-input_file = gr.inputs.File(label="PDF File")
-openai_key = gr.inputs.Textbox(label="OpenAI API Key", type="password")
-prompt = gr.inputs.Textbox(label="Question Prompt")
-chain_type = gr.inputs.Radio(['stuff', 'map_reduce', "refine", "map_rerank"], label="Chain Type")
-k = gr.inputs.Slider(minimum=1, maximum=5, default=1, label="Number of Relevant Chunks")
-
-output_text = gr.outputs.Textbox(label="Answer")
-output_docs = gr.outputs.Textbox(label="Relevant Source Text")
-
-gr.Interface(qa_system, inputs=[input_file, openai_key, prompt, chain_type, k], outputs=[output_text, output_docs],
-             title="Question Answering with PDF File and OpenAI",
-             description="Upload a PDF file, enter your OpenAI API key, type a question prompt, select a chain type, and choose the number of relevant chunks to use for the answer.").launch(debug=True)
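The deleted app above splits documents into fixed-size chunks before embedding them. A minimal, dependency-free sketch of that chunking step (fixed window with character overlap; this is an illustration of the idea, not the langchain `CharacterTextSplitter` implementation, and the function name is made up):

```python
def split_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 0) -> list[str]:
    """Split `text` into chunks of at most `chunk_size` characters, with
    `chunk_overlap` characters repeated between consecutive chunks."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap
    # slide a window of chunk_size over the text, advancing by `step`
    return [text[i:i + chunk_size] for i in range(0, len(text), step)] if text else []

chunks = split_text("abcdefgh", chunk_size=4, chunk_overlap=2)
# step = 2, so: ["abcd", "cdef", "efgh", "gh"]
```

The overlap is what lets a retriever still match a query whose answer straddles a chunk boundary, at the cost of some duplicated storage.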
spaces/Bart92/RVC_HF/diffq/__init__.py DELETED
@@ -1,18 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-"""
-This package implements different quantization strategies:
-
-- `diffq.uniform.UniformQuantizer`: classic uniform quantization over n bits.
-- `diffq.diffq.DiffQuantizer`: differentiable quantizer based on scaled noise injection.
-
-Also, do check `diffq.base.BaseQuantizer` for the common methods of all Quantizers.
-"""
-
-from .uniform import UniformQuantizer
-from .diffq import DiffQuantizer
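The docstring above names "classic uniform quantization over n bits". As a framework-free illustration of that idea only (this is not the diffq API, which quantizes PyTorch model parameters; the function below is a made-up name for the underlying arithmetic):

```python
def uniform_quantize(values, bits=8):
    """Classic uniform quantization over `bits` bits: snap each float in
    `values` to one of 2**bits evenly spaced levels spanning [min, max]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return list(values)  # constant input: nothing to quantize
    levels = (1 << bits) - 1          # number of steps between min and max
    scale = (hi - lo) / levels
    # round to the nearest level index, then map back to the original range
    return [lo + round((v - lo) / scale) * scale for v in values]

q = uniform_quantize([0.0, 0.1, 0.5, 1.0], bits=2)
# 2 bits -> 4 levels over [0, 1]; 0.5 snaps to the nearest level (2/3)
```

A real weight quantizer additionally stores `lo` and `scale` per tensor so the integer level indices, not the floats, are what get serialized.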
spaces/Benson/text-generation/Examples/Descargar archivo gta 5 apk.md DELETED
@@ -1,180 +0,0 @@
-
- <h1>Download GTA 5 APK for Android Without Verification</h1>
- <p>GTA 5 is one of the most successful games in video game history. It has sold more than 150 million copies worldwide and has won numerous awards and accolades. However, many fans want to play it on their mobile devices, especially Android smartphones. Unfortunately, GTA 5 is not officially available on the Google Play Store or any other Android app store. So how can you download the GTA 5 APK for Android without verification?</p>
- <p>In this article, we will show you how to download, install, and play GTA 5 on your Android smartphone without any verification or registration. We will also provide information and tips on the game's features, gameplay, system requirements, tips and tricks, and review and rating. So let's get started!</p>
- <h2>download gta 5 apk file</h2><br /><p><b><b>Download</b> &#10084; <a href="https://bltlly.com/2v6IY0">https://bltlly.com/2v6IY0</a></b></p><br /><br />
- <h2>What is GTA 5 and why is it popular?</h2>
- <p>GTA 5 is an action-adventure game developed by Rockstar Games and released in 2013. It is the fifth main installment in the Grand Theft Auto series, which is known for its open-world sandbox gameplay, crime-themed stories, and satirical humor.</p>
- <p>GTA 5 is set in the fictional city of Los Santos and its surroundings, which are based on Los Angeles and Southern California. The game follows the lives of three criminal protagonists: Michael De Santa, a retired bank robber; Trevor Philips, a psychopathic drug dealer; and Franklin Clinton, a young street hustler. The player can switch between these characters at any time and experience the game from different perspectives.</p>
- <p>GTA 5 is popular for its vast and immersive open world, its engaging and varied missions, its realistic and detailed graphics, its dynamic and diverse soundtrack, its online multiplayer mode called GTA Online, and its endless possibilities for fun and chaos.</p>
- <h3>GTA 5 features and gameplay</h3>
- <p>GTA 5 offers many features and gameplay options for players to enjoy. Some of them are:</p>
- <ul>
- <li><b>Open-world exploration:</b> The game lets players explore every inch of Los Santos and Blaine County, from the urban streets to the rural hills. Players can drive various vehicles, such as cars, bikes, boats, planes, helicopters, tanks, etc., or travel on foot. Players can also interact with various NPCs (non-player characters), such as pedestrians, shopkeepers, police officers, gang members, etc., or cause chaos by attacking them or destroying property.</li>
- <li><b>Side activities:</b> The game also has many side activities that players can do for fun or profit. These include mini-games, such as golf, tennis, darts, etc., hobbies, such as hunting, racing, skydiving, etc., challenges, such as stunt jumps, rampages, etc., and random events, such as rescuing strangers, stopping crimes, etc.</li>
- <li><b>Character customization:</b> The game lets players customize their characters' appearance, clothes, accessories, tattoos, etc. Players can also improve their characters' skills, such as driving, shooting, stealth, etc., by practicing them or completing missions.</li>
- <li><b>Online multiplayer:</b> The game has an online mode called GTA Online, where players can create their own custom characters and join other players in various activities. These include co-op missions, competitive modes, races, deathmatches, heists, etc. Players can also buy and customize their own properties, vehicles, weapons, businesses, etc., and join or create crews with other players.</li>
- </ul>
- <h3>GTA 5 system requirements for Android</h3>
- <p>GTA 5 is a very demanding game that requires a lot of resources to run smoothly. Therefore, not all Android devices can support it. Here are the minimum and recommended system requirements for GTA 5 for Android:</p>
- <table>
- <tr>
- <th>Minimum requirements</th>
- <th>Recommended requirements</th>
- </tr>
- <tr>
- <td>Android version: 4.0 or higher</td>
- <td>Android version: 6.0 or higher</td>
- </tr>
- <tr>
- <td></td>
- <td>CPU: Octa-core 2.0 GHz or higher</td>
- </tr>
- <tr>
- <td>RAM: 2 GB or higher</td>
- <td>RAM: 4 GB or higher</td>
- </tr>
- <tr>
- <td>Storage: 3 GB or higher</td>
- <td>Storage: 5 GB or higher</td>
- </tr>
- <tr>
- <td>Graphics: Adreno 330 or higher</td>
- <td>Graphics: Adreno 530 or higher</td>
- </tr>
- <tr>
- <td>Internet connection: Required for GTA Online and updates</td>
- <td>Internet connection: Required for GTA Online and updates</td>
- </tr>
- </table>
- <h2>How to download and install GTA 5 on Android</h2>
- <p>As mentioned above, GTA 5 is not officially available on the Google Play Store or any other Android app store. Therefore, you need to download the GTA 5 APK file from a trusted source and install it manually on your device. Here are the steps to do so:</p>
- <p></p>
- <h3>Download the GTA 5 APK from a trusted source</h3>
- <p>The first step is to download the GTA 5 APK file from a trusted source. There are many websites that claim to offer the GTA 5 APK file for Android, but not all of them are safe and reliable. Some of them may contain viruses, malware, or fake files that can harm your device or steal your data.</p>
- <p>To avoid these risks, you need to download the GTA 5 APK file from a trusted source that has positive reviews and feedback from other users. One such source is [GTA5Mobile.com], a reputable website that provides the GTA 5 APK file for Android along with instructions and support.</p>
- <p>To download the GTA 5 APK file from [GTA5Mobile.com], follow these steps:</p>
- <ol>
- <li>Go to [GTA5Mobile.com] in your Android device's browser.</li>
- <li>Tap the "Download" button and wait for the download to start.</li>
- <li>If you see a pop-up asking you to allow downloads from unknown sources, tap "Settings" and enable the option.</li>
- <li>Once the download is complete, locate the GTA 5 APK file in your device's storage and tap on it.</li>
-
- <li>Congratulations! You have successfully downloaded and installed the GTA 5 APK file on your Android device.</li>
- </ol>
- <p>The next step is to install the GTA 5 APK file on your device. This is a simple and straightforward process that does not require any verification or registration. However, you should make sure that your device meets the minimum system requirements for GTA 5 for Android, as mentioned above.</p>
- <p>To install the GTA 5 APK file on your device, follow these steps:</p>
- <ol>
- <li>Open the GTA 5 APK file that you downloaded from [GTA5Mobile.com].</li>
- <li>Tap "Continue" and accept the terms and conditions.</li>
- <li>Select the installation options that suit your preferences, such as language, graphics quality, sound volume, etc.</li>
- <li>Wait for the installation to complete. This may take some time depending on your device's performance and storage space.</li>
- <li>Once the installation is complete, you will see a GTA 5 shortcut icon on your device's home screen or in the app drawer.</li>
- <li>Tap the icon and launch GTA 5 on your Android device.</li>
- </ol>
- <h3>Launch GTA 5 and enjoy the game</h3>
- <p>The final step is to launch GTA 5 and enjoy the game on your Android device. You can play GTA 5 in two modes: story mode or online mode. Story mode lets you follow the game's main story and switch between the three protagonists. Online mode lets you create your own character and join other players in various activities.</p>
- <p>To launch GTA 5 and enjoy the game on your Android device, follow these steps:</p>
- <ol>
- <li>Tap the GTA 5 icon on your device's home screen or in the app drawer.</li>
- <li>Wait for the game to load. This may take some time depending on your internet connection and device performance.</li>
- <li>Select the mode you want to play: story mode or online mode.</li>
-
- <li>If you choose online mode, you can create your own character by choosing their gender, appearance, clothes, etc. You can also join or create a crew with other players and customize your properties, vehicles, weapons, etc.</li>
- <li>Enjoy playing GTA 5 on your Android device!</li>
- </ol>
- <p>GTA 5 is a fun and addictive game that can keep you entertained for hours. However, it can also be challenging and frustrating at times, especially if you are new to the game or want to achieve more. Therefore, we have compiled some tips and tricks that can help you improve your GTA 5 experience on Android. Here are some of them:</p>
- <h3>Switch between characters and use their special abilities</h3>
- <p>One of GTA 5's unique features is that you can switch between the three protagonists at any time and use their special abilities. Each character has a different personality, skill set, and special ability that can give you an edge in certain situations.</p>
- <p>Michael's special ability is slowing down time while aiming, which can help him take out enemies more accurately and efficiently. Trevor's special ability is entering a rage mode, which increases the damage he deals and reduces the damage he takes. Franklin's special ability is slowing down time while driving, which can help him maneuver through traffic and avoid collisions.</p>
- <p>To switch between characters, you can tap their icons in the bottom-right corner of the screen. To activate their special abilities, you can tap the blue bar above their icons when it is full. You can also refill the bar by completing missions, killing enemies, or performing stunts.</p>
- <h3>Explore the open world of Los Santos and Blaine County</h3>
-
- <p>To explore the open world of Los Santos and Blaine County, you can use the map in the top-left corner of the screen. You can zoom in and out and move around by dragging the screen. You can also tap the icons on the map to see more information about them, such as their names, descriptions, distances, etc.</p>
- <p>You can also use the GPS system to navigate to your destinations. You can set a waypoint by tapping a location on the map or selecting a mission or activity from the menu. You will then see a yellow line on the road showing you the shortest route to your waypoint. You will also hear voice directions from your device's speaker or headphones.</p>
- <h3>Customize your vehicles and weapons</h3>
- <p>GTA 5 has many vehicles and weapons that you can use to travel and fight in the game. However, you can also customize them to suit your preferences and needs. You can change their colors, designs, performance, features, etc.</p>
- <p>To customize your vehicles, you can visit any of the Los Santos Customs shops around the city. There, you can modify your vehicles' engine, brakes, suspension, armor, tires, etc., as well as their paint job, window tint, wheels, lights, horns, etc. You can also buy new vehicles from various websites or dealerships in the game.</p>
- <h3>Play GTA Online with other players</h3>
- <p>GTA 5 also has an online mode called GTA Online, where you can create your own character and join other players in various activities. GTA Online is a world separate from GTA 5, where you can own your own properties, vehicles, weapons, businesses, etc., and join or create crews with other players.</p>
-
- <p>In GTA Online, you can do many things, such as:</p>
- <ul>
- <li><b>Missions:</b> You can take part in various missions similar to those in GTA 5, but with different objectives and rewards. You can also create your own missions using the Content Creator tool.</li>
- <li><b>Modes:</b> You can compete with other players in various modes, such as races, deathmatches, heists, etc. You can also create your own modes using the Content Creator tool.</li>
- <li><b>Events:</b> You can join various events that happen randomly in the game world, such as business battles, freemode challenges, etc. These events offer extra rewards and bonuses for participating.</li>
- <li><b>Updates:</b> You can enjoy new content and features that are regularly added to GTA Online, such as new vehicles, weapons, missions, modes, etc.</li>
- </ul>
- <h2>GTA 5 review and rating for Android</h2>
- <p>GTA 5 is without a doubt one of the best games ever made, and it is even more impressive that it can run on Android devices. However, how does it compare with other games on the same platform? Here is our review and rating of GTA 5 for Android based on its graphics, sound, gameplay, and replay value.</p>
- <h3>Pros and cons of GTA 5 for Android</h3>
- <p>GTA 5 for Android has many pros and cons that you should consider before downloading and playing it. Here are some of them:</p>
- <table>
- <tr>
- <th>Pros</th>
- <th>Cons</th>
- </tr>
- <tr>
- <td>- Amazing graphics and sound quality that rival the console and PC versions.</td>
- <td>- High system requirements that may not be compatible with all Android devices.</td>
- </tr>
- <tr>
- <td>- Immersive and varied gameplay that offers endless possibilities for fun and chaos.</td>
- <td>- Large file size that can take up a lot of storage space and data usage.</td>
- </tr>
- <tr>
- <td>- Interesting and humorous story that features three different protagonists.</td>
- <td>- No official support or updates from Rockstar Games or the Google Play Store.</td>
- </tr>
- <tr>
- <td></td>
- <td>- Potential risks of viruses, malware, or fake files from untrusted sources.</td>
- </tr>
- </table>
- <h3>Overall rating based on graphics, sound, gameplay, and replay value</h3>
- <p>GTA 5 for Android is a remarkable achievement that deserves praise and recognition. It is a game that can provide hours of entertainment and satisfaction for any fan of action-adventure games. However, it is not perfect and has some flaws and limitations that can affect its performance and quality. Therefore, we give GTA 5 for Android an overall rating of 4.5 out of 5 stars based on the following criteria:</p>
- <table>
- <tr>
- <th>Criteria</th>
- <th>Rating</th>
- <th>Explanation</th>
- </tr>
- <tr>
- <td>Graphics</td>
- <td>5/5</td>
- <td>GTA 5 for Android's graphics are stunning and realistic. The game has a high level of detail and texture that makes the game world look alive and vibrant. The game also features dynamic lighting and shadows, weather effects, reflections, etc., which enhance the visual experience. The game runs smoothly and without glitches or bugs on most devices.</td>
- </tr>
- <tr>
- <td>Sound</td>
- <td>5/5</td>
- <td>GTA 5 for Android's sound is also impressive and immersive. The game has a rich and diverse soundtrack that features various genres and artists. The game also has realistic and clear sound effects that match the actions and events in the game. The game also has excellent voice acting and dialogue that convey the characters' emotions and personalities.</td>
- </tr>
- <tr>
- <td>Gameplay</td>
- <td>4/5</td>
- <td></td>
- </tr>
- <tr>
- <td>Replay value</td>
- <td>4/5</td>
- <td>GTA 5 for Android's replay value is both high and low. The game has a lot of content and features that can keep players hooked for a long time. It has a main story that can take up to 30 hours to complete, as well as many side activities that can take up to 100 hours to complete. The game also has an online mode that can offer endless hours of fun and interaction with other players. However, the replay value also depends on the player's preferences and goals. The game can be replayed in different ways, such as switching characters, choosing different outcomes, completing different challenges, etc. However, it can also lose its appeal and interest after a while, especially if players have completed everything or have nothing new to do.</td>
- </tr>
- </table>
- <h2>Conclusion</h2>
- <p>GTA 5 for Android is an amazing game that deserves a chance from any fan of action-adventure games. It is a game that can provide hours of entertainment and satisfaction for any player who loves open-world exploration, engaging missions, realistic graphics, dynamic sound, online multiplayer, and endless possibilities for fun and chaos. However, it is also a game with some flaws and limitations that can affect its performance and quality. Therefore, we recommend GTA 5 for Android to anyone who has a compatible device and a reliable internet connection, and who is willing to download the GTA 5 APK file from a trusted source.</p>
- <h2>Frequently asked questions</h2>
- <p>Here are some frequently asked questions about GTA 5 for Android:</p>
- <ul>
- <li><b>Q: Is GTA 5 for Android free?</b></li>
- <li><b>A: Yes, GTA 5 for Android is free to download and play. However, you need to download the GTA 5 APK file from a trusted source, such as [GTA5Mobile.com], which may require some verification or registration.</b></li>
- <li><b>Q: Is GTA 5 for Android safe?</b></li>
-
- <li><b>Q: Is GTA 5 for Android legal?</b></li>
- <li><b>A: Yes, GTA 5 for Android is legal to download and play if you own a copy of the original game on another platform, such as PC or console. However, you should not distribute or sell the GTA 5 APK file to others without Rockstar Games' permission.</b></li>
- <li><b>Q: Is GTA 5 for Android updated?</b></li>
- <li><b>A: No, GTA 5 for Android is not updated by Rockstar Games or the Google Play Store. Therefore, you may not receive any new content or features that are added to the PC or console versions of the game. However, you may receive some updates from [GTA5Mobile.com], which can improve the game's performance or quality.</b></li>
- <li><b>Q: How can I contact [GTA5Mobile.com]?</b></li>
- <li><b>A: You can contact [GTA5Mobile.com] by visiting their website and filling out their contact form. You can also follow them on their social media accounts or email them at [email protected].</b></li>
- </ul></p> 64aa2da5cf<br />
- <br />
- <br />
spaces/Benson/text-generation/Examples/Descargar Coches De Lujo Europeos Mod Apk.md DELETED
@@ -1,72 +0,0 @@
-
- <h1>Download European Luxury Cars Mod APK: A Free Racing Game with Customizable Vehicles</h1>
- <p>If you are a fan of European luxury cars, such as Rolls-Royce, Bugatti, Bentley, Maserati, or Jaguar, you may want to try European Luxury Cars Mod APK. This is a free racing game that lets you choose your own European luxury vehicle and take it for a spin on a private island. You can also customize your car with various options and modifications, such as spoilers, wheels, bumpers, neon lights, glowing brakes, or a nitro boost. You can also drive with friends or alone in multiplayer or single-player mode. In this article, we will tell you what European Luxury Cars Mod APK is, why you should play it, and how to download and install it on your Android device.</p>
- <h2>download european luxury cars mod apk</h2><br /><p><b><b>Download</b> &rArr;&rArr;&rArr; <a href="https://bltlly.com/2v6LAx">https://bltlly.com/2v6LAx</a></b></p><br /><br />
- <h2>What is European Luxury Cars Mod APK?</h2>
- <p>European Luxury Cars Mod APK is a modified version of the original game European Luxury Cars, which was developed by DMNK Studio and released in 2022. The modified version has some advantages over the original version, such as:</p>
- <ul>
- <li>Unlimited money and coins</li>
- <li>All cars unlocked</li>
- <li>No ads</li>
- <li>No root required</li>
- </ul>
- <h3>Features of European Luxury Cars Mod APK</h3>
- <p>Some of the features of European Luxury Cars Mod APK are:</p>
- <ul>
- <li>High-quality graphics and realistic sounds</li>
- <li>Wide range of customization options for your car</li>
- <li>Fully controllable car functions, such as opening/closing doors, adjusting the air suspension, engine on/off, ABS, ESP, TCS, etc.</li>
- <li>Three driving physics modes: racing, simulator, or drift</li>
- <li>Dynamic day and night cycle</li>
- <li>Photo mode and drone mode for taking pictures of your car</li>
- <li>Multiplayer mode for driving with friends online</li>
- <li>Single-player mode for driving offline</li>
- <li>A large map with different areas to explore</li>
- <li>Car trailers for transporting your car to different places</li>
-
- </ul>
- <h3>How to download and install European Luxury Cars Mod APK</h3>
- <p>To download and install European Luxury Cars Mod APK on your Android device, follow these steps:</p>
- <p></p>
- <ol>
- <li>Go to [APKMODY]( 5 ), a website that offers thousands of original APKs, MOD APKs, and premium APKs of games and apps for free.</li>
- <li>Search for "European Luxury Cars" in the search bar.</li>
- <li>Select the latest version of European Luxury Cars Mod APK from the results.</li>
- <li>Click the "Download" button and wait for the file to download.</li>
- <li>After the download is complete, locate the file in your device's file manager and tap on it to install it.</li>
- <li>If you see a warning message that says "Install blocked", go to your device's settings and enable "Unknown sources" in the security options.</li>
- <li>Once the installation is done, open the game and enjoy driving your dream car.</li>
- </ol>
- <h2>Why should you play European Luxury Cars Mod APK?</h2>
- <p>There are many reasons why you should play European Luxury Cars Mod APK. Here are some of them:</p>
- <h3>Enjoy realistic graphics and sounds</h3>
- <p>The game has high-quality graphics that make the cars and the environment look realistic and detailed. You can see the sun's reflections on your car's body, the shadows of the trees on the road, the smoke from your exhaust pipe, or the dust from your tires. You can also hear the realistic sounds of your car's engine, horn, brakes, or nitro. The game also has a dynamic day and night cycle that changes the lighting and atmosphere of the island.</p>
- <h3>Customize your own luxury car</h3>
-
- <h3>Drive with friends or alone on a private island</h3>
- <p>The game lets you drive with friends or alone on a private island that has different areas to explore. You can join or create a multiplayer room and invite your friends to drive with you online. You can also chat with them using the voice chat feature. You can also drive offline in single-player mode and enjoy the scenery and the freedom of driving without traffic or rules. The island has different areas to explore, such as beaches, mountains, forests, deserts, cities, airports, ports, bridges, tunnels, or highways. You can also find car trailers that can transport your car to different places on the island.</p>
48
- <h2>¿Cuáles son algunos consejos y trucos para jugar European Luxury Cars Mod APK? </h2>
49
- <p>Aquí hay algunos consejos y trucos para jugar European Luxury Cars Mod APK:</p>
50
- <h3>Elija el modo de conducción física correcta</h3>
51
- <p>El juego tiene tres modos de conducción física: carreras, simulador, o deriva. Puede elegir el que se adapte a su preferencia y estilo de conducción. El modo de carreras es para aquellos que quieren conducir rápido y furioso. El modo simulador es para aquellos que quieren conducir con realismo y cuidado. El modo de deriva es para aquellos que quieren deslizarse y deslizarse en la carretera. Puede cambiar el modo de conducción física en el menú de configuración. </p>
52
- <h3>Utilice el impulso nitro sabiamente</h3>
53
- <p>El juego tiene una función de impulso nitro que puede hacer que su coche sea más rápido y más potente. Sin embargo, debe usarlo con prudencia y moderación. El impulso nitro consume mucho combustible y puede dañar su coche si lo usa demasiado. Usted puede rellenar su impulso nitro conduciendo sobre las gasolineras azules en la carretera. También puede actualizar su impulso nitro gastando monedas en el menú de personalización. </p>
54
- <h3>Explora diferentes partes de la isla</h3>
55
-
56
- <h2>Conclusión</h2>
57
- <p>Coches de lujo europeos Mod APK es un juego de carreras gratuito que le permite conducir su propio coche de lujo europeo en una isla privada. Puede personalizar su coche con varias opciones y modificaciones. También puede conducir con amigos o solo en modo multijugador o para un jugador. El juego tiene gráficos realistas y sonidos que te hacen sentir como si realmente estuvieras conduciendo un coche de lujo. El juego también tiene un gran mapa con diferentes áreas para explorar y descubrir. Si usted está buscando un divertido y emocionante juego de carreras que le permite vivir su sueño de conducir un coche de lujo europeo, usted debe descargar European Luxury Cars Mod APK hoy. </p>
58
- <h2>Preguntas frecuentes</h2>
59
- <ul>
60
- <li><b>Q: ¿Es seguro descargar e instalar European Luxury Cars Mod APK? </b></li>
61
- <li>A: Sí, European Luxury Cars Mod APK es seguro para descargar e instalar desde [APKMODY], un sitio web de confianza que ofrece APK original, APK MOD y APK Premium de juegos y aplicaciones de forma gratuita. </li>
62
- <li><b>Q: ¿Cuáles son los requisitos para jugar European Luxury Cars Mod APK? </b></li>
63
- <li>A: Para jugar European Luxury Cars Mod APK, necesita un dispositivo Android con Android 4.4 o una versión superior y al menos 1 GB de RAM y 500 MB de espacio de almacenamiento gratuito. </li>
64
- <li><b>Q: ¿Cómo puedo actualizar European Luxury Cars Mod APK? </b></li>
65
- <li>A: Para actualizar European Luxury Cars Mod APK, debe seguir los mismos pasos que cuando lo descargó e instaló por primera vez. También puede buscar actualizaciones en [APKMODY] o activar la función de actualización automática en el menú de configuración. </li>
66
- <li><b>Q: ¿Cómo puedo contactar al desarrollador de European Luxury Cars Mod APK? </b></li>
67
- <li>A: Puede ponerse en contacto con el desarrollador de European Luxury Cars Mod APK enviando un correo electrónico a [email protected] o visitando su página de Facebook en https://www.facebook.com/dmknstudio.</li>
68
- <li><b>Q: ¿Cómo puedo apoyar al desarrollador de European Luxury Cars Mod APK? </b></li>
69
-
70
- </ul></p> 64aa2da5cf<br />
71
- <br />
72
- <br />
 
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/pyparsing/common.py DELETED
@@ -1,424 +0,0 @@
1
- # common.py
2
- from .core import *
3
- from .helpers import delimited_list, any_open_tag, any_close_tag
4
- from datetime import datetime
5
-
6
-
7
- # some other useful expressions - using lower-case class name since we are really using this as a namespace
8
- class pyparsing_common:
9
- """Here are some common low-level expressions that may be useful in
10
- jump-starting parser development:
11
-
12
- - numeric forms (:class:`integers<integer>`, :class:`reals<real>`,
13
- :class:`scientific notation<sci_real>`)
14
- - common :class:`programming identifiers<identifier>`
15
- - network addresses (:class:`MAC<mac_address>`,
16
- :class:`IPv4<ipv4_address>`, :class:`IPv6<ipv6_address>`)
17
- - ISO8601 :class:`dates<iso8601_date>` and
18
- :class:`datetime<iso8601_datetime>`
19
- - :class:`UUID<uuid>`
20
- - :class:`comma-separated list<comma_separated_list>`
21
- - :class:`url`
22
-
23
- Parse actions:
24
-
25
- - :class:`convertToInteger`
26
- - :class:`convertToFloat`
27
- - :class:`convertToDate`
28
- - :class:`convertToDatetime`
29
- - :class:`stripHTMLTags`
30
- - :class:`upcaseTokens`
31
- - :class:`downcaseTokens`
32
-
33
- Example::
34
-
35
- pyparsing_common.number.runTests('''
36
- # any int or real number, returned as the appropriate type
37
- 100
38
- -100
39
- +100
40
- 3.14159
41
- 6.02e23
42
- 1e-12
43
- ''')
44
-
45
- pyparsing_common.fnumber.runTests('''
46
- # any int or real number, returned as float
47
- 100
48
- -100
49
- +100
50
- 3.14159
51
- 6.02e23
52
- 1e-12
53
- ''')
54
-
55
- pyparsing_common.hex_integer.runTests('''
56
- # hex numbers
57
- 100
58
- FF
59
- ''')
60
-
61
- pyparsing_common.fraction.runTests('''
62
- # fractions
63
- 1/2
64
- -3/4
65
- ''')
66
-
67
- pyparsing_common.mixed_integer.runTests('''
68
- # mixed fractions
69
- 1
70
- 1/2
71
- -3/4
72
- 1-3/4
73
- ''')
74
-
75
- import uuid
76
- pyparsing_common.uuid.setParseAction(tokenMap(uuid.UUID))
77
- pyparsing_common.uuid.runTests('''
78
- # uuid
79
- 12345678-1234-5678-1234-567812345678
80
- ''')
81
-
82
- prints::
83
-
84
- # any int or real number, returned as the appropriate type
85
- 100
86
- [100]
87
-
88
- -100
89
- [-100]
90
-
91
- +100
92
- [100]
93
-
94
- 3.14159
95
- [3.14159]
96
-
97
- 6.02e23
98
- [6.02e+23]
99
-
100
- 1e-12
101
- [1e-12]
102
-
103
- # any int or real number, returned as float
104
- 100
105
- [100.0]
106
-
107
- -100
108
- [-100.0]
109
-
110
- +100
111
- [100.0]
112
-
113
- 3.14159
114
- [3.14159]
115
-
116
- 6.02e23
117
- [6.02e+23]
118
-
119
- 1e-12
120
- [1e-12]
121
-
122
- # hex numbers
123
- 100
124
- [256]
125
-
126
- FF
127
- [255]
128
-
129
- # fractions
130
- 1/2
131
- [0.5]
132
-
133
- -3/4
134
- [-0.75]
135
-
136
- # mixed fractions
137
- 1
138
- [1]
139
-
140
- 1/2
141
- [0.5]
142
-
143
- -3/4
144
- [-0.75]
145
-
146
- 1-3/4
147
- [1.75]
148
-
149
- # uuid
150
- 12345678-1234-5678-1234-567812345678
151
- [UUID('12345678-1234-5678-1234-567812345678')]
152
- """
153
-
154
- convert_to_integer = token_map(int)
155
- """
156
- Parse action for converting parsed integers to Python int
157
- """
158
-
159
- convert_to_float = token_map(float)
160
- """
161
- Parse action for converting parsed numbers to Python float
162
- """
163
-
164
- integer = Word(nums).set_name("integer").set_parse_action(convert_to_integer)
165
- """expression that parses an unsigned integer, returns an int"""
166
-
167
- hex_integer = (
168
- Word(hexnums).set_name("hex integer").set_parse_action(token_map(int, 16))
169
- )
170
- """expression that parses a hexadecimal integer, returns an int"""
171
-
172
- signed_integer = (
173
- Regex(r"[+-]?\d+")
174
- .set_name("signed integer")
175
- .set_parse_action(convert_to_integer)
176
- )
177
- """expression that parses an integer with optional leading sign, returns an int"""
178
-
179
- fraction = (
180
- signed_integer().set_parse_action(convert_to_float)
181
- + "/"
182
- + signed_integer().set_parse_action(convert_to_float)
183
- ).set_name("fraction")
184
- """fractional expression of an integer divided by an integer, returns a float"""
185
- fraction.add_parse_action(lambda tt: tt[0] / tt[-1])
186
-
187
- mixed_integer = (
188
- fraction | signed_integer + Opt(Opt("-").suppress() + fraction)
189
- ).set_name("fraction or mixed integer-fraction")
190
- """mixed integer of the form 'integer - fraction', with optional leading integer, returns float"""
191
- mixed_integer.add_parse_action(sum)
192
-
193
- real = (
194
- Regex(r"[+-]?(?:\d+\.\d*|\.\d+)")
195
- .set_name("real number")
196
- .set_parse_action(convert_to_float)
197
- )
198
- """expression that parses a floating point number and returns a float"""
199
-
200
- sci_real = (
201
- Regex(r"[+-]?(?:\d+(?:[eE][+-]?\d+)|(?:\d+\.\d*|\.\d+)(?:[eE][+-]?\d+)?)")
202
- .set_name("real number with scientific notation")
203
- .set_parse_action(convert_to_float)
204
- )
205
- """expression that parses a floating point number with optional
206
- scientific notation and returns a float"""
207
-
208
- # streamlining this expression makes the docs nicer-looking
209
- number = (sci_real | real | signed_integer).setName("number").streamline()
210
- """any numeric expression, returns the corresponding Python type"""
211
-
212
- fnumber = (
213
- Regex(r"[+-]?\d+\.?\d*([eE][+-]?\d+)?")
214
- .set_name("fnumber")
215
- .set_parse_action(convert_to_float)
216
- )
217
- """any int or real number, returned as float"""
218
-
219
- identifier = Word(identchars, identbodychars).set_name("identifier")
220
- """typical code identifier (leading alpha or '_', followed by 0 or more alphas, nums, or '_')"""
221
-
222
- ipv4_address = Regex(
223
- r"(25[0-5]|2[0-4][0-9]|1?[0-9]{1,2})(\.(25[0-5]|2[0-4][0-9]|1?[0-9]{1,2})){3}"
224
- ).set_name("IPv4 address")
225
- "IPv4 address (``0.0.0.0 - 255.255.255.255``)"
226
-
227
- _ipv6_part = Regex(r"[0-9a-fA-F]{1,4}").set_name("hex_integer")
228
- _full_ipv6_address = (_ipv6_part + (":" + _ipv6_part) * 7).set_name(
229
- "full IPv6 address"
230
- )
231
- _short_ipv6_address = (
232
- Opt(_ipv6_part + (":" + _ipv6_part) * (0, 6))
233
- + "::"
234
- + Opt(_ipv6_part + (":" + _ipv6_part) * (0, 6))
235
- ).set_name("short IPv6 address")
236
- _short_ipv6_address.add_condition(
237
- lambda t: sum(1 for tt in t if pyparsing_common._ipv6_part.matches(tt)) < 8
238
- )
239
- _mixed_ipv6_address = ("::ffff:" + ipv4_address).set_name("mixed IPv6 address")
240
- ipv6_address = Combine(
241
- (_full_ipv6_address | _mixed_ipv6_address | _short_ipv6_address).set_name(
242
- "IPv6 address"
243
- )
244
- ).set_name("IPv6 address")
245
- "IPv6 address (long, short, or mixed form)"
246
-
247
- mac_address = Regex(
248
- r"[0-9a-fA-F]{2}([:.-])[0-9a-fA-F]{2}(?:\1[0-9a-fA-F]{2}){4}"
249
- ).set_name("MAC address")
250
- "MAC address xx:xx:xx:xx:xx (may also have '-' or '.' delimiters)"
251
-
252
- @staticmethod
253
- def convert_to_date(fmt: str = "%Y-%m-%d"):
254
- """
255
- Helper to create a parse action for converting parsed date string to Python datetime.date
256
-
257
- Params -
258
- - fmt - format to be passed to datetime.strptime (default= ``"%Y-%m-%d"``)
259
-
260
- Example::
261
-
262
- date_expr = pyparsing_common.iso8601_date.copy()
263
- date_expr.setParseAction(pyparsing_common.convertToDate())
264
- print(date_expr.parseString("1999-12-31"))
265
-
266
- prints::
267
-
268
- [datetime.date(1999, 12, 31)]
269
- """
270
-
271
- def cvt_fn(ss, ll, tt):
272
- try:
273
- return datetime.strptime(tt[0], fmt).date()
274
- except ValueError as ve:
275
- raise ParseException(ss, ll, str(ve))
276
-
277
- return cvt_fn
278
-
279
- @staticmethod
280
- def convert_to_datetime(fmt: str = "%Y-%m-%dT%H:%M:%S.%f"):
281
- """Helper to create a parse action for converting parsed
282
- datetime string to Python datetime.datetime
283
-
284
- Params -
285
- - fmt - format to be passed to datetime.strptime (default= ``"%Y-%m-%dT%H:%M:%S.%f"``)
286
-
287
- Example::
288
-
289
- dt_expr = pyparsing_common.iso8601_datetime.copy()
290
- dt_expr.setParseAction(pyparsing_common.convertToDatetime())
291
- print(dt_expr.parseString("1999-12-31T23:59:59.999"))
292
-
293
- prints::
294
-
295
- [datetime.datetime(1999, 12, 31, 23, 59, 59, 999000)]
296
- """
297
-
298
- def cvt_fn(s, l, t):
299
- try:
300
- return datetime.strptime(t[0], fmt)
301
- except ValueError as ve:
302
- raise ParseException(s, l, str(ve))
303
-
304
- return cvt_fn
305
-
306
- iso8601_date = Regex(
307
- r"(?P<year>\d{4})(?:-(?P<month>\d\d)(?:-(?P<day>\d\d))?)?"
308
- ).set_name("ISO8601 date")
309
- "ISO8601 date (``yyyy-mm-dd``)"
310
-
311
- iso8601_datetime = Regex(
312
- r"(?P<year>\d{4})-(?P<month>\d\d)-(?P<day>\d\d)[T ](?P<hour>\d\d):(?P<minute>\d\d)(:(?P<second>\d\d(\.\d*)?)?)?(?P<tz>Z|[+-]\d\d:?\d\d)?"
313
- ).set_name("ISO8601 datetime")
314
- "ISO8601 datetime (``yyyy-mm-ddThh:mm:ss.s(Z|+-00:00)``) - trailing seconds, milliseconds, and timezone optional; accepts separating ``'T'`` or ``' '``"
315
-
316
- uuid = Regex(r"[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}").set_name("UUID")
317
- "UUID (``xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx``)"
318
-
319
- _html_stripper = any_open_tag.suppress() | any_close_tag.suppress()
320
-
321
- @staticmethod
322
- def strip_html_tags(s: str, l: int, tokens: ParseResults):
323
- """Parse action to remove HTML tags from web page HTML source
324
-
325
- Example::
326
-
327
- # strip HTML links from normal text
328
- text = '<td>More info at the <a href="https://github.com/pyparsing/pyparsing/wiki">pyparsing</a> wiki page</td>'
329
- td, td_end = makeHTMLTags("TD")
330
- table_text = td + SkipTo(td_end).setParseAction(pyparsing_common.stripHTMLTags)("body") + td_end
331
- print(table_text.parseString(text).body)
332
-
333
- Prints::
334
-
335
- More info at the pyparsing wiki page
336
- """
337
- return pyparsing_common._html_stripper.transform_string(tokens[0])
338
-
339
- _commasepitem = (
340
- Combine(
341
- OneOrMore(
342
- ~Literal(",")
343
- + ~LineEnd()
344
- + Word(printables, exclude_chars=",")
345
- + Opt(White(" \t") + ~FollowedBy(LineEnd() | ","))
346
- )
347
- )
348
- .streamline()
349
- .set_name("commaItem")
350
- )
351
- comma_separated_list = delimited_list(
352
- Opt(quoted_string.copy() | _commasepitem, default="")
353
- ).set_name("comma separated list")
354
- """Predefined expression of 1 or more printable words or quoted strings, separated by commas."""
355
-
356
- upcase_tokens = staticmethod(token_map(lambda t: t.upper()))
357
- """Parse action to convert tokens to upper case."""
358
-
359
- downcase_tokens = staticmethod(token_map(lambda t: t.lower()))
360
- """Parse action to convert tokens to lower case."""
361
-
362
- # fmt: off
363
- url = Regex(
364
- # https://mathiasbynens.be/demo/url-regex
365
- # https://gist.github.com/dperini/729294
366
- r"^" +
367
- # protocol identifier (optional)
368
- # short syntax // still required
369
- r"(?:(?:(?P<scheme>https?|ftp):)?\/\/)" +
370
- # user:pass BasicAuth (optional)
371
- r"(?:(?P<auth>\S+(?::\S*)?)@)?" +
372
- r"(?P<host>" +
373
- # IP address exclusion
374
- # private & local networks
375
- r"(?!(?:10|127)(?:\.\d{1,3}){3})" +
376
- r"(?!(?:169\.254|192\.168)(?:\.\d{1,3}){2})" +
377
- r"(?!172\.(?:1[6-9]|2\d|3[0-1])(?:\.\d{1,3}){2})" +
378
- # IP address dotted notation octets
379
- # excludes loopback network 0.0.0.0
380
- # excludes reserved space >= 224.0.0.0
381
- # excludes network & broadcast addresses
382
- # (first & last IP address of each class)
383
- r"(?:[1-9]\d?|1\d\d|2[01]\d|22[0-3])" +
384
- r"(?:\.(?:1?\d{1,2}|2[0-4]\d|25[0-5])){2}" +
385
- r"(?:\.(?:[1-9]\d?|1\d\d|2[0-4]\d|25[0-4]))" +
386
- r"|" +
387
- # host & domain names, may end with dot
388
- # can be replaced by a shortest alternative
389
- # (?![-_])(?:[-\w\u00a1-\uffff]{0,63}[^-_]\.)+
390
- r"(?:" +
391
- r"(?:" +
392
- r"[a-z0-9\u00a1-\uffff]" +
393
- r"[a-z0-9\u00a1-\uffff_-]{0,62}" +
394
- r")?" +
395
- r"[a-z0-9\u00a1-\uffff]\." +
396
- r")+" +
397
- # TLD identifier name, may end with dot
398
- r"(?:[a-z\u00a1-\uffff]{2,}\.?)" +
399
- r")" +
400
- # port number (optional)
401
- r"(:(?P<port>\d{2,5}))?" +
402
- # resource path (optional)
403
- r"(?P<path>\/[^?# ]*)?" +
404
- # query string (optional)
405
- r"(\?(?P<query>[^#]*))?" +
406
- # fragment (optional)
407
- r"(#(?P<fragment>\S*))?" +
408
- r"$"
409
- ).set_name("url")
410
- # fmt: on
411
-
412
- # pre-PEP8 compatibility names
413
- convertToInteger = convert_to_integer
414
- convertToFloat = convert_to_float
415
- convertToDate = convert_to_date
416
- convertToDatetime = convert_to_datetime
417
- stripHTMLTags = strip_html_tags
418
- upcaseTokens = upcase_tokens
419
- downcaseTokens = downcase_tokens
420
-
421
-
422
- _builtin_exprs = [
423
- v for v in vars(pyparsing_common).values() if isinstance(v, ParserElement)
424
- ]
 
spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mcan/model_cfgs.py DELETED
@@ -1,24 +0,0 @@
1
- # --------------------------------------------------------
2
- # OpenVQA
3
- # Written by Yuhao Cui https://github.com/cuiyuhao1996
4
- # --------------------------------------------------------
5
-
6
- from openvqa.core.base_cfgs import BaseCfgs
7
-
8
-
9
- class Cfgs(BaseCfgs):
10
- def __init__(self):
11
- super(Cfgs, self).__init__()
12
-
13
- self.LAYER = 6
14
- self.HIDDEN_SIZE = 512
15
- self.BBOXFEAT_EMB_SIZE = 2048
16
- self.FF_SIZE = 2048
17
- self.MULTI_HEAD = 8
18
- self.DROPOUT_R = 0.1
19
- self.FLAT_MLP_SIZE = 512
20
- self.FLAT_GLIMPSES = 1
21
- self.FLAT_OUT_SIZE = 1024
22
- self.USE_AUX_FEAT = False
23
- self.USE_BBOX_FEAT = False
24
- self.BBOX_NORMALIZE = True
 
spaces/CVPR/LIVE/thrust/thrust/detail/complex/c99math.h DELETED
@@ -1,196 +0,0 @@
1
- /*
2
- * Copyright 2008-2013 NVIDIA Corporation
3
- * Copyright 2013 Filipe RNC Maia
4
- *
5
- * Licensed under the Apache License, Version 2.0 (the "License");
6
- * you may not use this file except in compliance with the License.
7
- * You may obtain a copy of the License at
8
- *
9
- * http://www.apache.org/licenses/LICENSE-2.0
10
- *
11
- * Unless required by applicable law or agreed to in writing, software
12
- * distributed under the License is distributed on an "AS IS" BASIS,
13
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
- * See the License for the specific language governing permissions and
15
- * limitations under the License.
16
- */
17
- #pragma once
18
-
19
- #include <math.h>
20
- #include <cmath>
21
- #include <thrust/detail/complex/math_private.h>
22
-
23
- namespace thrust
24
- {
25
- namespace detail
26
- {
27
- namespace complex
28
- {
29
-
30
- // Define basic arithmetic functions so we can use them without explicit scope
31
- // keeping the code as close as possible to FreeBSDs for ease of maintenance.
32
- // It also provides an easy way to support compilers with missing C99 functions.
33
- // When possible, just use the names in the global scope.
34
- // Some platforms define these as macros, others as free functions.
35
- // Avoid using the std:: form of these as nvcc may treat std::foo() as __host__ functions.
36
-
37
- using ::log;
38
- using ::acos;
39
- using ::asin;
40
- using ::sqrt;
41
- using ::sinh;
42
- using ::tan;
43
- using ::cos;
44
- using ::sin;
45
- using ::exp;
46
- using ::cosh;
47
- using ::atan;
48
-
49
- template <typename T>
50
- inline __host__ __device__ T infinity();
51
-
52
- template <>
53
- inline __host__ __device__ float infinity<float>()
54
- {
55
- float res;
56
- set_float_word(res, 0x7f800000);
57
- return res;
58
- }
59
-
60
-
61
- template <>
62
- inline __host__ __device__ double infinity<double>()
63
- {
64
- double res;
65
- insert_words(res, 0x7ff00000,0);
66
- return res;
67
- }
68
-
69
- #if defined _MSC_VER
70
- __host__ __device__ inline int isinf(float x){
71
- return std::abs(x) == infinity<float>();
72
- }
73
-
74
- __host__ __device__ inline int isinf(double x){
75
- return std::abs(x) == infinity<double>();
76
- }
77
-
78
- __host__ __device__ inline int isnan(float x){
79
- return x != x;
80
- }
81
-
82
- __host__ __device__ inline int isnan(double x){
83
- return x != x;
84
- }
85
-
86
- __host__ __device__ inline int signbit(float x){
87
- return (*((uint32_t *)&x)) & 0x80000000;
88
- }
89
-
90
- __host__ __device__ inline int signbit(double x){
91
- return (*((uint32_t *)&x)) & 0x80000000;
92
- }
93
-
94
- __host__ __device__ inline int isfinite(float x){
95
- return !isnan(x) && !isinf(x);
96
- }
97
-
98
- __host__ __device__ inline int isfinite(double x){
99
- return !isnan(x) && !isinf(x);
100
- }
101
-
102
- #else
103
-
104
- # if defined(__CUDACC__) && !(defined(__CUDA__) && defined(__clang__)) && !defined(__NVCOMPILER_CUDA__)
105
- // NVCC implements at least some signature of these as functions not macros.
106
- using ::isinf;
107
- using ::isnan;
108
- using ::signbit;
109
- using ::isfinite;
110
- # else
111
- // Some compilers do not provide these in the global scope, because they are
112
- // supposed to be macros. The versions in `std` are supposed to be functions.
113
- // Since we're not compiling with nvcc, it's safe to use the functions in std::
114
- using std::isinf;
115
- using std::isnan;
116
- using std::signbit;
117
- using std::isfinite;
118
- # endif // __CUDACC__
119
- #endif // _MSC_VER
120
-
121
- using ::atanh;
122
-
123
- #if defined _MSC_VER
124
-
125
- __host__ __device__ inline double copysign(double x, double y){
126
- uint32_t hx,hy;
127
- get_high_word(hx,x);
128
- get_high_word(hy,y);
129
- set_high_word(x,(hx&0x7fffffff)|(hy&0x80000000));
130
- return x;
131
- }
132
-
133
- __host__ __device__ inline float copysignf(float x, float y){
134
- uint32_t ix,iy;
135
- get_float_word(ix,x);
136
- get_float_word(iy,y);
137
- set_float_word(x,(ix&0x7fffffff)|(iy&0x80000000));
138
- return x;
139
- }
140
-
141
-
142
-
143
- #ifndef __CUDACC__
144
-
145
- // Simple approximation to log1p as Visual Studio is lacking one
146
- inline double log1p(double x){
147
- double u = 1.0+x;
148
- if(u == 1.0){
149
- return x;
150
- }else{
151
- if(u > 2.0){
152
- // Use normal log for large arguments
153
- return log(u);
154
- }else{
155
- return log(u)*(x/(u-1.0));
156
- }
157
- }
158
- }
159
-
160
- inline float log1pf(float x){
161
- float u = 1.0f+x;
162
- if(u == 1.0f){
163
- return x;
164
- }else{
165
- if(u > 2.0f){
166
- // Use normal log for large arguments
167
- return logf(u);
168
- }else{
169
- return logf(u)*(x/(u-1.0f));
170
- }
171
- }
172
- }
173
-
174
- #if _MSC_VER <= 1500
175
- #include <complex>
176
-
177
- inline float hypotf(float x, float y){
178
- return abs(std::complex<float>(x,y));
179
- }
180
-
181
- inline double hypot(double x, double y){
182
- return _hypot(x,y);
183
- }
184
-
185
- #endif // _MSC_VER <= 1500
186
-
187
- #endif // __CUDACC__
188
-
189
- #endif // _MSC_VER
190
-
191
- } // namespace complex
192
-
193
- } // namespace detail
194
-
195
- } // namespace thrust
196
-
 
spaces/CVPR/regionclip-demo/detectron2/projects/README.md DELETED
@@ -1,2 +0,0 @@
1
-
2
- Projects live in the [`projects` directory](../../projects) under the root of this repository, but not here.
 
 
 
spaces/Cat125/text-generator-v2/generation/words.py DELETED
@@ -1,93 +0,0 @@
1
- from random import choice, choices
2
-
3
- WEIGHTS_MAP_HARD = [
4
- 100,
5
- 10,
6
- 8,
7
- 6,
8
- 2
9
- ]
10
-
11
- WEIGHTS_MAP_SOFT = [
12
- 80,
13
- 30,
14
- 10,
15
- 7,
16
- 2
17
- ]
18
-
19
- def get_next_word_results(db, message, prev_word, text, _):
20
- results = []
21
- if prev_word not in db:
22
- return results
23
- for token in db[prev_word]:
24
- token.score = 0
25
- for context in token.contexts:
26
- if context in message:
27
- token.score += 2
28
- if context in text:
29
- token.score += 1
30
- if ")" in token.word and text.count("(") > text.count(")"):
31
- token.score += 10
32
- if token.score > 0:
33
- results.append(token)
34
- return results
35
-
36
-
37
- def get_next_word(db, message, prevword, text, conf, repeat=0):
38
- if prevword == '' or '.' in prevword or '?' in prevword or '!' in prevword:
39
- return get_first_word(db, message, text, conf, repeat)
40
- results = get_next_word_results(db, message, prevword, text, conf)
41
- if len(results) == 0:
42
- if repeat >= 1:
43
- return choice(list(db.keys()))
44
- else:
45
- return get_next_word(db, message, prevword, text, conf, repeat + 1)
46
- results = list(sorted(results, key=lambda x: x.score, reverse=True))
47
- total_results = []
48
- max_score = 0
49
- for i in range(min(len(results), 5)):
50
- if max_score == 0:
51
- total_results.append(results[i].word)
52
- max_score = results[i].score
53
- elif max_score == results[i].score:
54
- total_results.append(results[i].word)
55
- if len(total_results) == 0:
56
- return get_next_word(db, message, prevword, text, conf, repeat + 1)
57
- return choice(total_results)
58
-
59
-
60
- def get_first_word_results(db, message, text, _):
61
- results = []
62
- if '' not in db:
63
- return results
64
- for token in db['']:
65
- token.score = 0
66
- for context in token.contexts:
67
- if context in message:
68
- token.score += 2
69
- if context in text:
70
- token.score += 1
71
- if token.starter:
72
- token.score += 15
73
- if token.score > 0:
74
- results.append(token)
75
- return results
76
-
77
-
78
- def get_first_word(db, message, text, conf, repeat=0):
79
- results = get_first_word_results(db, message, text, conf)
80
- if len(results) == 0:
81
- if repeat >= 1:
82
- return choice(list(db.keys()))
83
- else:
84
- return get_first_word(db, message, text, conf, repeat + 1)
85
- results = list(sorted(results, key=lambda x: x.score, reverse=True))
86
- total_results = []
87
- weights = []
88
- for i in range(min(len(results), 5)):
89
- total_results.append(results[i].word)
90
- weights.append(WEIGHTS_MAP_SOFT[i])
91
- if len(total_results) == 0:
92
- return get_first_word(db, message, text, conf, repeat + 1)
93
- return (choices(total_results, weights=weights, k=1) or '.')[0]
 
spaces/CikeyQI/meme-api/meme_generator/memes/gun/__init__.py DELETED
@@ -1,61 +0,0 @@
1
- from pathlib import Path
2
- from typing import List, Literal
3
-
4
- from PIL.Image import Transpose
5
- from pil_utils import BuildImage
6
- from pydantic import Field
7
-
8
- from meme_generator import MemeArgsModel, MemeArgsParser, MemeArgsType, add_meme
9
-
10
- img_dir = Path(__file__).parent / "images"
11
-
12
-
13
- help = "枪的位置"
14
-
15
- parser = MemeArgsParser(prefix_chars="-/")
16
- group = parser.add_mutually_exclusive_group()
17
- group.add_argument(
18
- "-p",
19
- "--position",
20
- dest="position",
21
- type=str,
22
- choices=["left", "right", "both"],
23
- default="left",
24
- help=help,
25
- )
26
- group.add_argument("--left", "/左手", action="store_const", const="left", dest="position")
27
- group.add_argument(
28
- "--right", "/右手", action="store_const", const="right", dest="position"
29
- )
30
- group.add_argument("--both", "/双手", action="store_const", const="both", dest="position")
31
-
32
-
33
- class Model(MemeArgsModel):
34
- position: Literal["left", "right", "both"] = Field("left", description=help)
35
-
36
-
37
- def gun(images: List[BuildImage], texts, args: Model):
38
- frame = images[0].convert("RGBA").resize((500, 500), keep_ratio=True)
39
- gun = BuildImage.open(img_dir / "0.png")
40
- position = args.position
41
- left = position in ["left", "both"]
42
- right = position in ["right", "both"]
43
- if left:
44
- frame.paste(gun, alpha=True)
45
- if right:
46
- frame.paste(gun.transpose(Transpose.FLIP_LEFT_RIGHT), alpha=True)
47
- return frame.save_jpg()
48
-
49
-
50
- add_meme(
51
- "gun",
52
- gun,
53
- min_images=1,
54
- max_images=1,
55
- args_type=MemeArgsType(
56
- parser,
57
- Model,
58
- [Model(position="left"), Model(position="right"), Model(position="both")],
59
- ),
60
- keywords=["手枪"],
61
- )
 
spaces/CofAI/chat.b4/client/css/stop-generating.css DELETED
@@ -1,38 +0,0 @@
- .stop-generating {
-     position: absolute;
-     bottom: 128px;
-     left: 50%;
-     transform: translateX(-50%);
-     z-index: 1000000;
- }
-
- .stop-generating button {
-     backdrop-filter: blur(20px);
-     -webkit-backdrop-filter: blur(20px);
-     background-color: var(--blur-bg);
-     color: var(--colour-3);
-     cursor: pointer;
-     animation: show_popup 0.4s;
- }
-
- @keyframes show_popup {
-     from {
-         opacity: 0;
-         transform: translateY(10px);
-     }
- }
-
- @keyframes hide_popup {
-     to {
-         opacity: 0;
-         transform: translateY(10px);
-     }
- }
-
- .stop-generating-hiding button {
-     animation: hide_popup 0.4s;
- }
-
- .stop-generating-hidden button {
-     display: none;
- }
spaces/Cpp4App/Cpp4App/CDM/detect_compo/lib_ip/file_utils.py DELETED
@@ -1,80 +0,0 @@
- import os
- import pandas as pd
- import json
- from os.path import join as pjoin
- import time
- import cv2
-
-
- def save_corners(file_path, corners, compo_name, clear=True):
-     try:
-         df = pd.read_csv(file_path, index_col=0)
-     except:
-         df = pd.DataFrame(columns=['component', 'x_max', 'x_min', 'y_max', 'y_min', 'height', 'width'])
-
-     if clear:
-         df = df.drop(df.index)
-     for corner in corners:
-         (up_left, bottom_right) = corner
-         c = {'component': compo_name}
-         (c['y_min'], c['x_min']) = up_left
-         (c['y_max'], c['x_max']) = bottom_right
-         c['width'] = c['y_max'] - c['y_min']
-         c['height'] = c['x_max'] - c['x_min']
-         df = df.append(c, True)
-     df.to_csv(file_path)
-
-
- def save_corners_json(file_path, compos):
-     # img_shape = [int(x * ratio) for x in compos[0].image_shape]
-     # w_h_ratio = org.shape[1] / org.shape[0]
-     # img_shape = org.shape
-
-     img_shape = compos[0].image_shape
-     output = {'img_shape': img_shape, 'compos': []}
-     f_out = open(file_path, 'w')
-
-     for compo in compos:
-         bbox = compo.put_bbox()
-         # bbox = [int(x * ratio) for x in bbox]
-         c = {'id': compo.id, 'class': compo.category}
-         (c['column_min'], c['row_min'], c['column_max'], c['row_max']) = bbox
-         c['width'] = compo.width
-         c['height'] = compo.height
-         # c['width'] = int(compo.width * ratio)
-         # c['height'] = int(compo.height * ratio)
-         output['compos'].append(c)
-
-     json.dump(output, f_out, indent=4)
-
-
- def save_clipping(org, output_root, corners, compo_classes, compo_index):
-     if not os.path.exists(output_root):
-         os.mkdir(output_root)
-     pad = 2
-     for i in range(len(corners)):
-         compo = compo_classes[i]
-         (up_left, bottom_right) = corners[i]
-         (col_min, row_min) = up_left
-         (col_max, row_max) = bottom_right
-         col_min = max(col_min - pad, 0)
-         col_max = min(col_max + pad, org.shape[1])
-         row_min = max(row_min - pad, 0)
-         row_max = min(row_max + pad, org.shape[0])
-
-         # if component type already exists, index increase by 1, otherwise add this type
-         compo_path = pjoin(output_root, compo)
-         if compo_classes[i] not in compo_index:
-             compo_index[compo_classes[i]] = 0
-             if not os.path.exists(compo_path):
-                 os.mkdir(compo_path)
-         else:
-             compo_index[compo_classes[i]] += 1
-         clip = org[row_min:row_max, col_min:col_max]
-         cv2.imwrite(pjoin(compo_path, str(compo_index[compo_classes[i]]) + '.png'), clip)
-
-
- def build_directory(directory):
-     if not os.path.exists(directory):
-         os.mkdir(directory)
-     return directory
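`save_corners` above accumulates rows with `df.append(c, True)`, an API that was removed in pandas 2.0. A minimal sketch of the equivalent row accumulation with `pd.concat` (column names copied from the file above; the sample bounding-box values are made up):

```python
import pandas as pd

# One bounding-box row, shaped like the dict built in save_corners.
c = {'component': 'Button', 'y_min': 10, 'x_min': 20,
     'y_max': 50, 'x_max': 80, 'width': 40, 'height': 60}

df = pd.DataFrame(columns=['component', 'x_max', 'x_min', 'y_max',
                           'y_min', 'height', 'width'])
# pandas >= 2.0 replacement for df = df.append(c, True):
df = pd.concat([df, pd.DataFrame([c])], ignore_index=True)
```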
spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/caption_datasets.py DELETED
@@ -1,85 +0,0 @@
- """
-  Copyright (c) 2022, salesforce.com, inc.
-  All rights reserved.
-  SPDX-License-Identifier: BSD-3-Clause
-  For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause
- """
-
- import os
- from collections import OrderedDict
-
- from video_llama.datasets.datasets.base_dataset import BaseDataset
- from PIL import Image
-
-
- class __DisplMixin:
-     def displ_item(self, index):
-         sample, ann = self.__getitem__(index), self.annotation[index]
-
-         return OrderedDict(
-             {
-                 "file": ann["image"],
-                 "caption": ann["caption"],
-                 "image": sample["image"],
-             }
-         )
-
-
- class CaptionDataset(BaseDataset, __DisplMixin):
-     def __init__(self, vis_processor, text_processor, vis_root, ann_paths):
-         """
-         vis_root (string): Root directory of images (e.g. coco/images/)
-         ann_root (string): directory to store the annotation file
-         """
-         super().__init__(vis_processor, text_processor, vis_root, ann_paths)
-
-         self.img_ids = {}
-         n = 0
-         for ann in self.annotation:
-             img_id = ann["image_id"]
-             if img_id not in self.img_ids.keys():
-                 self.img_ids[img_id] = n
-                 n += 1
-
-     def __getitem__(self, index):
-
-         # TODO this assumes image input, not general enough
-         ann = self.annotation[index]
-
-         img_file = '{:0>12}.jpg'.format(ann["image_id"])
-         image_path = os.path.join(self.vis_root, img_file)
-         image = Image.open(image_path).convert("RGB")
-
-         image = self.vis_processor(image)
-         caption = self.text_processor(ann["caption"])
-
-         return {
-             "image": image,
-             "text_input": caption,
-             "image_id": self.img_ids[ann["image_id"]],
-         }
-
-
- class CaptionEvalDataset(BaseDataset, __DisplMixin):
-     def __init__(self, vis_processor, text_processor, vis_root, ann_paths):
-         """
-         vis_root (string): Root directory of images (e.g. coco/images/)
-         ann_root (string): directory to store the annotation file
-         split (string): val or test
-         """
-         super().__init__(vis_processor, text_processor, vis_root, ann_paths)
-
-     def __getitem__(self, index):
-
-         ann = self.annotation[index]
-
-         image_path = os.path.join(self.vis_root, ann["image"])
-         image = Image.open(image_path).convert("RGB")
-
-         image = self.vis_processor(image)
-
-         return {
-             "image": image,
-             "image_id": ann["image_id"],
-             "instance_id": ann["instance_id"],
-         }
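`CaptionDataset.__getitem__` above rebuilds COCO-style filenames with `'{:0>12}.jpg'.format(ann["image_id"])`, i.e. the numeric image id zero-padded to 12 digits. A quick sketch of that padding on its own:

```python
def coco_filename(image_id: int) -> str:
    # Zero-pad the numeric id to 12 characters, matching COCO image file names.
    return '{:0>12}.jpg'.format(image_id)

print(coco_filename(397133))  # -> 000000397133.jpg
```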
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/altair/expr/consts.py DELETED
@@ -1,29 +0,0 @@
- from typing import Dict
-
- from .core import ConstExpression
-
-
- CONST_LISTING = {
-     "NaN": "not a number (same as JavaScript literal NaN)",
-     "LN10": "the natural log of 10 (alias to Math.LN10)",
-     "E": "the transcendental number e (alias to Math.E)",
-     "LOG10E": "the base 10 logarithm e (alias to Math.LOG10E)",
-     "LOG2E": "the base 2 logarithm of e (alias to Math.LOG2E)",
-     "SQRT1_2": "the square root of 0.5 (alias to Math.SQRT1_2)",
-     "LN2": "the natural log of 2 (alias to Math.LN2)",
-     "SQRT2": "the square root of 2 (alias to Math.SQRT1_2)",
-     "PI": "the transcendental number pi (alias to Math.PI)",
- }
-
- NAME_MAP: Dict[str, str] = {}
-
-
- def _populate_namespace():
-     globals_ = globals()
-     for name, doc in CONST_LISTING.items():
-         py_name = NAME_MAP.get(name, name)
-         globals_[py_name] = ConstExpression(name, doc)
-         yield py_name
-
-
- __all__ = list(_populate_namespace())
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/t1Lib/__init__.py DELETED
@@ -1,638 +0,0 @@
- """fontTools.t1Lib.py -- Tools for PostScript Type 1 fonts (Python2 only)
-
- Functions for reading and writing raw Type 1 data:
-
- read(path)
-     reads any Type 1 font file, returns the raw data and a type indicator:
-     'LWFN', 'PFB' or 'OTHER', depending on the format of the file pointed
-     to by 'path'.
-     Raises an error when the file does not contain valid Type 1 data.
-
- write(path, data, kind='OTHER', dohex=False)
-     writes raw Type 1 data to the file pointed to by 'path'.
-     'kind' can be one of 'LWFN', 'PFB' or 'OTHER'; it defaults to 'OTHER'.
-     'dohex' is a flag which determines whether the eexec encrypted
-     part should be written as hexadecimal or binary, but only if kind
-     is 'OTHER'.
- """
- import fontTools
- from fontTools.misc import eexec
- from fontTools.misc.macCreatorType import getMacCreatorAndType
- from fontTools.misc.textTools import bytechr, byteord, bytesjoin, tobytes
- from fontTools.misc.psOperators import (
-     _type1_pre_eexec_order,
-     _type1_fontinfo_order,
-     _type1_post_eexec_order,
- )
- from fontTools.encodings.StandardEncoding import StandardEncoding
- import os
- import re
-
- __author__ = "jvr"
- __version__ = "1.0b3"
- DEBUG = 0
-
-
- try:
-     try:
-         from Carbon import Res
-     except ImportError:
-         import Res  # MacPython < 2.2
- except ImportError:
-     haveMacSupport = 0
- else:
-     haveMacSupport = 1
-
-
- class T1Error(Exception):
-     pass
-
-
- class T1Font(object):
-
-     """Type 1 font class.
-
-     Uses a minimal interpeter that supports just about enough PS to parse
-     Type 1 fonts.
-     """
-
-     def __init__(self, path, encoding="ascii", kind=None):
-         if kind is None:
-             self.data, _ = read(path)
-         elif kind == "LWFN":
-             self.data = readLWFN(path)
-         elif kind == "PFB":
-             self.data = readPFB(path)
-         elif kind == "OTHER":
-             self.data = readOther(path)
-         else:
-             raise ValueError(kind)
-         self.encoding = encoding
-
-     def saveAs(self, path, type, dohex=False):
-         write(path, self.getData(), type, dohex)
-
-     def getData(self):
-         if not hasattr(self, "data"):
-             self.data = self.createData()
-         return self.data
-
-     def getGlyphSet(self):
-         """Return a generic GlyphSet, which is a dict-like object
-         mapping glyph names to glyph objects. The returned glyph objects
-         have a .draw() method that supports the Pen protocol, and will
-         have an attribute named 'width', but only *after* the .draw() method
-         has been called.
-
-         In the case of Type 1, the GlyphSet is simply the CharStrings dict.
-         """
-         return self["CharStrings"]
-
-     def __getitem__(self, key):
-         if not hasattr(self, "font"):
-             self.parse()
-         return self.font[key]
-
-     def parse(self):
-         from fontTools.misc import psLib
-         from fontTools.misc import psCharStrings
-
-         self.font = psLib.suckfont(self.data, self.encoding)
-         charStrings = self.font["CharStrings"]
-         lenIV = self.font["Private"].get("lenIV", 4)
-         assert lenIV >= 0
-         subrs = self.font["Private"]["Subrs"]
-         for glyphName, charString in charStrings.items():
-             charString, R = eexec.decrypt(charString, 4330)
-             charStrings[glyphName] = psCharStrings.T1CharString(
-                 charString[lenIV:], subrs=subrs
-             )
-         for i in range(len(subrs)):
-             charString, R = eexec.decrypt(subrs[i], 4330)
-             subrs[i] = psCharStrings.T1CharString(charString[lenIV:], subrs=subrs)
-         del self.data
-
-     def createData(self):
-         sf = self.font
-
-         eexec_began = False
-         eexec_dict = {}
-         lines = []
-         lines.extend(
-             [
-                 self._tobytes(f"%!FontType1-1.1: {sf['FontName']}"),
-                 self._tobytes(f"%t1Font: ({fontTools.version})"),
-                 self._tobytes(f"%%BeginResource: font {sf['FontName']}"),
-             ]
-         )
-         # follow t1write.c:writeRegNameKeyedFont
-         size = 3  # Headroom for new key addition
-         size += 1  # FontMatrix is always counted
-         size += 1 + 1  # Private, CharStings
-         for key in font_dictionary_keys:
-             size += int(key in sf)
-         lines.append(self._tobytes(f"{size} dict dup begin"))
-
-         for key, value in sf.items():
-             if eexec_began:
-                 eexec_dict[key] = value
-                 continue
-
-             if key == "FontInfo":
-                 fi = sf["FontInfo"]
-                 # follow t1write.c:writeFontInfoDict
-                 size = 3  # Headroom for new key addition
-                 for subkey in FontInfo_dictionary_keys:
-                     size += int(subkey in fi)
-                 lines.append(self._tobytes(f"/FontInfo {size} dict dup begin"))
-
-                 for subkey, subvalue in fi.items():
-                     lines.extend(self._make_lines(subkey, subvalue))
-                 lines.append(b"end def")
-             elif key in _type1_post_eexec_order:  # usually 'Private'
-                 eexec_dict[key] = value
-                 eexec_began = True
-             else:
-                 lines.extend(self._make_lines(key, value))
-         lines.append(b"end")
-         eexec_portion = self.encode_eexec(eexec_dict)
-         lines.append(bytesjoin([b"currentfile eexec ", eexec_portion]))
-
-         for _ in range(8):
-             lines.append(self._tobytes("0" * 64))
-         lines.extend([b"cleartomark", b"%%EndResource", b"%%EOF"])
-
-         data = bytesjoin(lines, "\n")
-         return data
-
-     def encode_eexec(self, eexec_dict):
-         lines = []
-
-         # '-|', '|-', '|'
-         RD_key, ND_key, NP_key = None, None, None
-
-         for key, value in eexec_dict.items():
-             if key == "Private":
-                 pr = eexec_dict["Private"]
-                 # follow t1write.c:writePrivateDict
-                 size = 3  # for RD, ND, NP
-                 for subkey in Private_dictionary_keys:
-                     size += int(subkey in pr)
-                 lines.append(b"dup /Private")
-                 lines.append(self._tobytes(f"{size} dict dup begin"))
-                 for subkey, subvalue in pr.items():
-                     if not RD_key and subvalue == RD_value:
-                         RD_key = subkey
-                     elif not ND_key and subvalue == ND_value:
-                         ND_key = subkey
-                     elif not NP_key and subvalue == PD_value:
-                         NP_key = subkey
-
-                     if subkey == "OtherSubrs":
-                         # XXX: assert that no flex hint is used
-                         lines.append(self._tobytes(hintothers))
-                     elif subkey == "Subrs":
-                         # XXX: standard Subrs only
-                         lines.append(b"/Subrs 5 array")
-                         for i, subr_bin in enumerate(std_subrs):
-                             encrypted_subr, R = eexec.encrypt(
-                                 bytesjoin([char_IV, subr_bin]), 4330
-                             )
-                             lines.append(
-                                 bytesjoin(
-                                     [
-                                         self._tobytes(
-                                             f"dup {i} {len(encrypted_subr)} {RD_key} "
-                                         ),
-                                         encrypted_subr,
-                                         self._tobytes(f" {NP_key}"),
-                                     ]
-                                 )
-                             )
-                         lines.append(b"def")
-
-                         lines.append(b"put")
-                     else:
-                         lines.extend(self._make_lines(subkey, subvalue))
-             elif key == "CharStrings":
-                 lines.append(b"dup /CharStrings")
-                 lines.append(
-                     self._tobytes(f"{len(eexec_dict['CharStrings'])} dict dup begin")
-                 )
-                 for glyph_name, char_bin in eexec_dict["CharStrings"].items():
-                     char_bin.compile()
-                     encrypted_char, R = eexec.encrypt(
-                         bytesjoin([char_IV, char_bin.bytecode]), 4330
-                     )
-                     lines.append(
-                         bytesjoin(
-                             [
-                                 self._tobytes(
-                                     f"/{glyph_name} {len(encrypted_char)} {RD_key} "
-                                 ),
-                                 encrypted_char,
-                                 self._tobytes(f" {ND_key}"),
-                             ]
-                         )
-                     )
-                 lines.append(b"end put")
-             else:
-                 lines.extend(self._make_lines(key, value))
-
-         lines.extend(
-             [
-                 b"end",
-                 b"dup /FontName get exch definefont pop",
-                 b"mark",
-                 b"currentfile closefile\n",
-             ]
-         )
-
-         eexec_portion = bytesjoin(lines, "\n")
-         encrypted_eexec, R = eexec.encrypt(bytesjoin([eexec_IV, eexec_portion]), 55665)
-
-         return encrypted_eexec
-
-     def _make_lines(self, key, value):
-         if key == "FontName":
-             return [self._tobytes(f"/{key} /{value} def")]
-         if key in ["isFixedPitch", "ForceBold", "RndStemUp"]:
-             return [self._tobytes(f"/{key} {'true' if value else 'false'} def")]
-         elif key == "Encoding":
-             if value == StandardEncoding:
-                 return [self._tobytes(f"/{key} StandardEncoding def")]
-             else:
-                 # follow fontTools.misc.psOperators._type1_Encoding_repr
-                 lines = []
-                 lines.append(b"/Encoding 256 array")
-                 lines.append(b"0 1 255 {1 index exch /.notdef put} for")
-                 for i in range(256):
-                     name = value[i]
-                     if name != ".notdef":
-                         lines.append(self._tobytes(f"dup {i} /{name} put"))
-                 lines.append(b"def")
-                 return lines
-         if isinstance(value, str):
-             return [self._tobytes(f"/{key} ({value}) def")]
-         elif isinstance(value, bool):
-             return [self._tobytes(f"/{key} {'true' if value else 'false'} def")]
-         elif isinstance(value, list):
-             return [self._tobytes(f"/{key} [{' '.join(str(v) for v in value)}] def")]
-         elif isinstance(value, tuple):
-             return [self._tobytes(f"/{key} {{{' '.join(str(v) for v in value)}}} def")]
-         else:
-             return [self._tobytes(f"/{key} {value} def")]
-
-     def _tobytes(self, s, errors="strict"):
-         return tobytes(s, self.encoding, errors)
-
-
- # low level T1 data read and write functions
-
-
- def read(path, onlyHeader=False):
-     """reads any Type 1 font file, returns raw data"""
-     _, ext = os.path.splitext(path)
-     ext = ext.lower()
-     creator, typ = getMacCreatorAndType(path)
-     if typ == "LWFN":
-         return readLWFN(path, onlyHeader), "LWFN"
-     if ext == ".pfb":
-         return readPFB(path, onlyHeader), "PFB"
-     else:
-         return readOther(path), "OTHER"
-
-
- def write(path, data, kind="OTHER", dohex=False):
-     assertType1(data)
-     kind = kind.upper()
-     try:
-         os.remove(path)
-     except os.error:
-         pass
-     err = 1
-     try:
-         if kind == "LWFN":
-             writeLWFN(path, data)
-         elif kind == "PFB":
-             writePFB(path, data)
-         else:
-             writeOther(path, data, dohex)
-         err = 0
-     finally:
-         if err and not DEBUG:
-             try:
-                 os.remove(path)
-             except os.error:
-                 pass
-
-
- # -- internal --
-
- LWFNCHUNKSIZE = 2000
- HEXLINELENGTH = 80
-
-
- def readLWFN(path, onlyHeader=False):
-     """reads an LWFN font file, returns raw data"""
-     from fontTools.misc.macRes import ResourceReader
-
-     reader = ResourceReader(path)
-     try:
-         data = []
-         for res in reader.get("POST", []):
-             code = byteord(res.data[0])
-             if byteord(res.data[1]) != 0:
-                 raise T1Error("corrupt LWFN file")
-             if code in [1, 2]:
-                 if onlyHeader and code == 2:
-                     break
-                 data.append(res.data[2:])
-             elif code in [3, 5]:
-                 break
-             elif code == 4:
-                 with open(path, "rb") as f:
-                     data.append(f.read())
-             elif code == 0:
-                 pass  # comment, ignore
-             else:
-                 raise T1Error("bad chunk code: " + repr(code))
-     finally:
-         reader.close()
-     data = bytesjoin(data)
-     assertType1(data)
-     return data
-
-
- def readPFB(path, onlyHeader=False):
-     """reads a PFB font file, returns raw data"""
-     data = []
-     with open(path, "rb") as f:
-         while True:
-             if f.read(1) != bytechr(128):
-                 raise T1Error("corrupt PFB file")
-             code = byteord(f.read(1))
-             if code in [1, 2]:
-                 chunklen = stringToLong(f.read(4))
-                 chunk = f.read(chunklen)
-                 assert len(chunk) == chunklen
-                 data.append(chunk)
-             elif code == 3:
-                 break
-             else:
-                 raise T1Error("bad chunk code: " + repr(code))
-             if onlyHeader:
-                 break
-     data = bytesjoin(data)
-     assertType1(data)
-     return data
-
-
- def readOther(path):
-     """reads any (font) file, returns raw data"""
-     with open(path, "rb") as f:
-         data = f.read()
-     assertType1(data)
-     chunks = findEncryptedChunks(data)
-     data = []
-     for isEncrypted, chunk in chunks:
-         if isEncrypted and isHex(chunk[:4]):
-             data.append(deHexString(chunk))
-         else:
-             data.append(chunk)
-     return bytesjoin(data)
-
-
- # file writing tools
-
-
- def writeLWFN(path, data):
-     # Res.FSpCreateResFile was deprecated in OS X 10.5
-     Res.FSpCreateResFile(path, "just", "LWFN", 0)
-     resRef = Res.FSOpenResFile(path, 2)  # write-only
-     try:
-         Res.UseResFile(resRef)
-         resID = 501
-         chunks = findEncryptedChunks(data)
-         for isEncrypted, chunk in chunks:
-             if isEncrypted:
-                 code = 2
-             else:
-                 code = 1
-             while chunk:
-                 res = Res.Resource(bytechr(code) + "\0" + chunk[: LWFNCHUNKSIZE - 2])
-                 res.AddResource("POST", resID, "")
-                 chunk = chunk[LWFNCHUNKSIZE - 2 :]
-                 resID = resID + 1
-         res = Res.Resource(bytechr(5) + "\0")
-         res.AddResource("POST", resID, "")
-     finally:
-         Res.CloseResFile(resRef)
-
-
- def writePFB(path, data):
-     chunks = findEncryptedChunks(data)
-     with open(path, "wb") as f:
-         for isEncrypted, chunk in chunks:
-             if isEncrypted:
-                 code = 2
-             else:
-                 code = 1
-             f.write(bytechr(128) + bytechr(code))
-             f.write(longToString(len(chunk)))
-             f.write(chunk)
-         f.write(bytechr(128) + bytechr(3))
-
-
- def writeOther(path, data, dohex=False):
-     chunks = findEncryptedChunks(data)
-     with open(path, "wb") as f:
-         hexlinelen = HEXLINELENGTH // 2
-         for isEncrypted, chunk in chunks:
-             if isEncrypted:
-                 code = 2
-             else:
-                 code = 1
-             if code == 2 and dohex:
-                 while chunk:
-                     f.write(eexec.hexString(chunk[:hexlinelen]))
-                     f.write(b"\r")
-                     chunk = chunk[hexlinelen:]
-             else:
-                 f.write(chunk)
-
-
- # decryption tools
-
- EEXECBEGIN = b"currentfile eexec"
- # The spec allows for 512 ASCII zeros interrupted by arbitrary whitespace to
- # follow eexec
- EEXECEND = re.compile(b"(0[ \t\r\n]*){512}", flags=re.M)
- EEXECINTERNALEND = b"currentfile closefile"
- EEXECBEGINMARKER = b"%-- eexec start\r"
- EEXECENDMARKER = b"%-- eexec end\r"
-
- _ishexRE = re.compile(b"[0-9A-Fa-f]*$")
-
-
- def isHex(text):
-     return _ishexRE.match(text) is not None
-
-
- def decryptType1(data):
-     chunks = findEncryptedChunks(data)
-     data = []
-     for isEncrypted, chunk in chunks:
-         if isEncrypted:
-             if isHex(chunk[:4]):
-                 chunk = deHexString(chunk)
-             decrypted, R = eexec.decrypt(chunk, 55665)
-             decrypted = decrypted[4:]
-             if (
-                 decrypted[-len(EEXECINTERNALEND) - 1 : -1] != EEXECINTERNALEND
-                 and decrypted[-len(EEXECINTERNALEND) - 2 : -2] != EEXECINTERNALEND
-             ):
-                 raise T1Error("invalid end of eexec part")
-             decrypted = decrypted[: -len(EEXECINTERNALEND) - 2] + b"\r"
-             data.append(EEXECBEGINMARKER + decrypted + EEXECENDMARKER)
-         else:
-             if chunk[-len(EEXECBEGIN) - 1 : -1] == EEXECBEGIN:
-                 data.append(chunk[: -len(EEXECBEGIN) - 1])
-             else:
-                 data.append(chunk)
-     return bytesjoin(data)
-
-
- def findEncryptedChunks(data):
-     chunks = []
-     while True:
-         eBegin = data.find(EEXECBEGIN)
-         if eBegin < 0:
-             break
-         eBegin = eBegin + len(EEXECBEGIN) + 1
-         endMatch = EEXECEND.search(data, eBegin)
-         if endMatch is None:
-             raise T1Error("can't find end of eexec part")
-         eEnd = endMatch.start()
-         cypherText = data[eBegin : eEnd + 2]
-         if isHex(cypherText[:4]):
-             cypherText = deHexString(cypherText)
-         plainText, R = eexec.decrypt(cypherText, 55665)
-         eEndLocal = plainText.find(EEXECINTERNALEND)
-         if eEndLocal < 0:
-             raise T1Error("can't find end of eexec part")
-         chunks.append((0, data[:eBegin]))
-         chunks.append((1, cypherText[: eEndLocal + len(EEXECINTERNALEND) + 1]))
-         data = data[eEnd:]
-     chunks.append((0, data))
-     return chunks
-
-
- def deHexString(hexstring):
-     return eexec.deHexString(bytesjoin(hexstring.split()))
-
-
- # Type 1 assertion
-
- _fontType1RE = re.compile(rb"/FontType\s+1\s+def")
-
-
- def assertType1(data):
-     for head in [b"%!PS-AdobeFont", b"%!FontType1"]:
-         if data[: len(head)] == head:
-             break
-     else:
-         raise T1Error("not a PostScript font")
-     if not _fontType1RE.search(data):
-         raise T1Error("not a Type 1 font")
-     if data.find(b"currentfile eexec") < 0:
-         raise T1Error("not an encrypted Type 1 font")
-     # XXX what else?
-     return data
-
-
- # pfb helpers
-
-
- def longToString(long):
-     s = b""
-     for i in range(4):
-         s += bytechr((long & (0xFF << (i * 8))) >> i * 8)
-     return s
-
-
- def stringToLong(s):
-     if len(s) != 4:
-         raise ValueError("string must be 4 bytes long")
-     l = 0
-     for i in range(4):
-         l += byteord(s[i]) << (i * 8)
-     return l
-
-
- # PS stream helpers
-
- font_dictionary_keys = list(_type1_pre_eexec_order)
- # t1write.c:writeRegNameKeyedFont
- # always counts following keys
- font_dictionary_keys.remove("FontMatrix")
-
- FontInfo_dictionary_keys = list(_type1_fontinfo_order)
- # extend because AFDKO tx may use following keys
- FontInfo_dictionary_keys.extend(
-     [
-         "FSType",
-         "Copyright",
-     ]
- )
-
- Private_dictionary_keys = [
-     # We don't know what names will be actually used.
-     # "RD",
-     # "ND",
-     # "NP",
-     "Subrs",
-     "OtherSubrs",
-     "UniqueID",
-     "BlueValues",
-     "OtherBlues",
-     "FamilyBlues",
-     "FamilyOtherBlues",
-     "BlueScale",
-     "BlueShift",
-     "BlueFuzz",
-     "StdHW",
-     "StdVW",
-     "StemSnapH",
-     "StemSnapV",
-     "ForceBold",
-     "LanguageGroup",
-     "password",
-     "lenIV",
-     "MinFeature",
-     "RndStemUp",
- ]
-
- # t1write_hintothers.h
- hintothers = """/OtherSubrs[{}{}{}{systemdict/internaldict known not{pop 3}{1183615869
- systemdict/internaldict get exec dup/startlock known{/startlock get exec}{dup
- /strtlck known{/strtlck get exec}{pop 3}ifelse}ifelse}ifelse}executeonly]def"""
- # t1write.c:saveStdSubrs
- std_subrs = [
-     # 3 0 callother pop pop setcurrentpoint return
-     b"\x8e\x8b\x0c\x10\x0c\x11\x0c\x11\x0c\x21\x0b",
-     # 0 1 callother return
-     b"\x8b\x8c\x0c\x10\x0b",
-     # 0 2 callother return
-     b"\x8b\x8d\x0c\x10\x0b",
-     # return
-     b"\x0b",
-     # 3 1 3 callother pop callsubr return
-     b"\x8e\x8c\x8e\x0c\x10\x0c\x11\x0a\x0b",
- ]
- # follow t1write.c:writeRegNameKeyedFont
- eexec_IV = b"cccc"
- char_IV = b"\x0c\x0c\x0c\x0c"
- RD_value = ("string", "currentfile", "exch", "readstring", "pop")
- ND_value = ("def",)
- PD_value = ("put",)
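The `longToString` / `stringToLong` helpers above hand-roll the 4-byte little-endian length field of PFB segment headers; the stdlib `struct` module expresses the same encoding directly. A sketch of the round trip:

```python
import struct

def long_to_string(n: int) -> bytes:
    # 4-byte little-endian unsigned int, as used in PFB chunk headers.
    return struct.pack("<I", n)

def string_to_long(s: bytes) -> int:
    if len(s) != 4:
        raise ValueError("string must be 4 bytes long")
    return struct.unpack("<I", s)[0]

print(long_to_string(1))  # -> b'\x01\x00\x00\x00'
```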
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/registry.py DELETED
@@ -1,275 +0,0 @@
- from __future__ import annotations
-
- import importlib
- import types
- import warnings
-
- __all__ = ["registry", "get_filesystem_class", "default"]
-
- # internal, mutable
- _registry: dict[str, type] = {}
-
- # external, immutable
- registry = types.MappingProxyType(_registry)
- default = "file"
-
-
- def register_implementation(name, cls, clobber=False, errtxt=None):
-     """Add implementation class to the registry
-
-     Parameters
-     ----------
-     name: str
-         Protocol name to associate with the class
-     cls: class or str
-         if a class: fsspec-compliant implementation class (normally inherits from
-         ``fsspec.AbstractFileSystem``, gets added straight to the registry. If a
-         str, the full path to an implementation class like package.module.class,
-         which gets added to known_implementations,
-         so the import is deferred until the filesystem is actually used.
-     clobber: bool (optional)
-         Whether to overwrite a protocol with the same name; if False, will raise
-         instead.
-     errtxt: str (optional)
-         If given, then a failure to import the given class will result in this
-         text being given.
-     """
-     if isinstance(cls, str):
-         if name in known_implementations and clobber is False:
-             if cls != known_implementations[name]["class"]:
-                 raise ValueError(
-                     "Name (%s) already in the known_implementations and clobber "
-                     "is False" % name
-                 )
-         else:
-             known_implementations[name] = {
-                 "class": cls,
-                 "err": errtxt or "%s import failed for protocol %s" % (cls, name),
-             }
-
-     else:
-         if name in registry and clobber is False:
-             if _registry[name] is not cls:
-                 raise ValueError(
-                     "Name (%s) already in the registry and clobber is False" % name
-                 )
-         else:
-             _registry[name] = cls
-
-
- # protocols mapped to the class which implements them. This dict can
- # updated with register_implementation
- known_implementations = {
-     "file": {"class": "fsspec.implementations.local.LocalFileSystem"},
-     "memory": {"class": "fsspec.implementations.memory.MemoryFileSystem"},
-     "dropbox": {
-         "class": "dropboxdrivefs.DropboxDriveFileSystem",
-         "err": (
-             'DropboxFileSystem requires "dropboxdrivefs",'
-             '"requests" and "dropbox" to be installed'
-         ),
-     },
-     "http": {
-         "class": "fsspec.implementations.http.HTTPFileSystem",
-         "err": 'HTTPFileSystem requires "requests" and "aiohttp" to be installed',
-     },
-     "https": {
-         "class": "fsspec.implementations.http.HTTPFileSystem",
-         "err": 'HTTPFileSystem requires "requests" and "aiohttp" to be installed',
-     },
-     "zip": {"class": "fsspec.implementations.zip.ZipFileSystem"},
-     "tar": {"class": "fsspec.implementations.tar.TarFileSystem"},
-     "gcs": {
-         "class": "gcsfs.GCSFileSystem",
-         "err": "Please install gcsfs to access Google Storage",
-     },
-     "gs": {
-         "class": "gcsfs.GCSFileSystem",
-         "err": "Please install gcsfs to access Google Storage",
-     },
-     "gdrive": {
-         "class": "gdrivefs.GoogleDriveFileSystem",
-         "err": "Please install gdrivefs for access to Google Drive",
-     },
-     "sftp": {
-         "class": "fsspec.implementations.sftp.SFTPFileSystem",
-         "err": 'SFTPFileSystem requires "paramiko" to be installed',
-     },
-     "ssh": {
-         "class": "fsspec.implementations.sftp.SFTPFileSystem",
-         "err": 'SFTPFileSystem requires "paramiko" to be installed',
-     },
-     "ftp": {"class": "fsspec.implementations.ftp.FTPFileSystem"},
-     "hdfs": {
-         "class": "fsspec.implementations.arrow.HadoopFileSystem",
-         "err": "pyarrow and local java libraries required for HDFS",
-     },
-     "arrow_hdfs": {
-         "class": "fsspec.implementations.arrow.HadoopFileSystem",
-         "err": "pyarrow and local java libraries required for HDFS",
-     },
-     "webhdfs": {
-         "class": "fsspec.implementations.webhdfs.WebHDFS",
-         "err": 'webHDFS access requires "requests" to be installed',
-     },
-     "s3": {"class": "s3fs.S3FileSystem", "err": "Install s3fs to access S3"},
-     "s3a": {"class": "s3fs.S3FileSystem", "err": "Install s3fs to access S3"},
-     "wandb": {"class": "wandbfs.WandbFS", "err": "Install wandbfs to access wandb"},
-     "oci": {
-         "class": "ocifs.OCIFileSystem",
-         "err": "Install ocifs to access OCI Object Storage",
-     },
-     "asynclocal": {
-         "class": "morefs.asyn_local.AsyncLocalFileSystem",
-         "err": "Install 'morefs[asynclocalfs]' to use AsyncLocalFileSystem",
-     },
-     "adl": {
-         "class": "adlfs.AzureDatalakeFileSystem",
-         "err": "Install adlfs to access Azure Datalake Gen1",
-     },
-     "abfs": {
-         "class": "adlfs.AzureBlobFileSystem",
-         "err": "Install adlfs to access Azure Datalake Gen2 and Azure Blob Storage",
-     },
-     "az": {
-         "class": "adlfs.AzureBlobFileSystem",
-         "err": "Install adlfs to access Azure Datalake Gen2 and Azure Blob Storage",
-     },
-     "cached": {"class": "fsspec.implementations.cached.CachingFileSystem"},
-     "blockcache": {"class": "fsspec.implementations.cached.CachingFileSystem"},
-     "filecache": {"class": "fsspec.implementations.cached.WholeFileCacheFileSystem"},
-     "simplecache": {"class": "fsspec.implementations.cached.SimpleCacheFileSystem"},
-     "dask": {
-         "class": "fsspec.implementations.dask.DaskWorkerFileSystem",
-         "err": "Install dask distributed to access worker file system",
-     },
-     "dbfs": {
-         "class": "fsspec.implementations.dbfs.DatabricksFileSystem",
-         "err": "Install the requests package to use the DatabricksFileSystem",
-     },
-     "github": {
-         "class": "fsspec.implementations.github.GithubFileSystem",
-         "err": "Install the requests package to use the github FS",
-     },
-     "git": {
-         "class": "fsspec.implementations.git.GitFileSystem",
-         "err": "Install pygit2 to browse local git repos",
-     },
-     "smb": {
-         "class": "fsspec.implementations.smb.SMBFileSystem",
-         "err": 'SMB requires "smbprotocol" or "smbprotocol[kerberos]" installed',
-     },
-     "jupyter": {
-         "class": "fsspec.implementations.jupyter.JupyterFileSystem",
-         "err": "Jupyter FS requires requests to be installed",
-     },
-     "jlab": {
-         "class": "fsspec.implementations.jupyter.JupyterFileSystem",
-         "err": "Jupyter FS requires requests to be installed",
169
- },
170
- "libarchive": {
171
- "class": "fsspec.implementations.libarchive.LibArchiveFileSystem",
172
- "err": "LibArchive requires to be installed",
173
- },
174
- "reference": {"class": "fsspec.implementations.reference.ReferenceFileSystem"},
175
- "generic": {"class": "fsspec.generic.GenericFileSystem"},
176
- "oss": {
177
- "class": "ossfs.OSSFileSystem",
178
- "err": "Install ossfs to access Alibaba Object Storage System",
179
- },
180
- "webdav": {
181
- "class": "webdav4.fsspec.WebdavFileSystem",
182
- "err": "Install webdav4 to access WebDAV",
183
- },
184
- "dvc": {
185
- "class": "dvc.api.DVCFileSystem",
186
- "err": "Install dvc to access DVCFileSystem",
187
- },
188
- "hf": {
189
- "class": "huggingface_hub.HfFileSystem",
190
- "err": "Install huggingface_hub to access HfFileSystem",
191
- },
192
- "root": {
193
- "class": "fsspec_xrootd.XRootDFileSystem",
194
- "err": "Install fsspec-xrootd to access xrootd storage system."
195
- + " Note: 'root' is the protocol name for xrootd storage systems,"
196
- + " not referring to root directories",
197
- },
198
- "dir": {"class": "fsspec.implementations.dirfs.DirFileSystem"},
199
- "box": {
200
- "class": "boxfs.BoxFileSystem",
201
- "err": "Please install boxfs to access BoxFileSystem",
202
- },
203
- }
-
-
-def get_filesystem_class(protocol):
-    """Fetch named protocol implementation from the registry
-
-    The dict ``known_implementations`` maps protocol names to the locations
-    of classes implementing the corresponding file-system. When used for the
-    first time, appropriate imports will happen and the class will be placed in
-    the registry. All subsequent calls will fetch directly from the registry.
-
-    Some protocol implementations require additional dependencies, and so the
-    import may fail. In this case, the string in the "err" field of the
-    ``known_implementations`` will be given as the error message.
-    """
-    if not protocol:
-        protocol = default
-
-    if protocol not in registry:
-        if protocol not in known_implementations:
-            raise ValueError("Protocol not known: %s" % protocol)
-        bit = known_implementations[protocol]
-        try:
-            register_implementation(protocol, _import_class(bit["class"]))
-        except ImportError as e:
-            raise ImportError(bit["err"]) from e
-    cls = registry[protocol]
-    if getattr(cls, "protocol", None) in ("abstract", None):
-        cls.protocol = protocol
-
-    return cls
-
-
-def _import_class(cls, minv=None):
-    """Take a string FQP and return the imported class or identifier
-
-    ``cls`` is of the form "package.module.klass" or "package.module:subobject.klass"
-    """
-    if ":" in cls:
-        mod, name = cls.rsplit(":", 1)
-        mod = importlib.import_module(mod)
-        for part in name.split("."):
-            mod = getattr(mod, part)
-        return mod
-    else:
-        mod, name = cls.rsplit(".", 1)
-        mod = importlib.import_module(mod)
-        return getattr(mod, name)
-
-
-def filesystem(protocol, **storage_options):
-    """Instantiate filesystems for given protocol and arguments
-
-    ``storage_options`` are specific to the protocol being chosen, and are
-    passed directly to the class.
-    """
-    if protocol == "arrow_hdfs":
-        warnings.warn(
-            "The 'arrow_hdfs' protocol has been deprecated and will be "
-            "removed in the future. Specify it as 'hdfs'.",
-            DeprecationWarning,
-        )
-
-    cls = get_filesystem_class(protocol)
-    return cls(**storage_options)
-
-
-def available_protocols():
-    """Return a list of the implemented protocols.
-
-    Note that any given protocol may require extra packages to be importable.
-    """
-    return list(known_implementations)
spaces/Dorado607/ChuanhuChatGPT/assets/custom.js DELETED
@@ -1,707 +0,0 @@
-
-// custom javascript here
-
-const MAX_HISTORY_LENGTH = 32;
-
-var key_down_history = [];
-var currentIndex = -1;
-var user_input_ta;
-
-var gradioContainer = null;
-var user_input_ta = null;
-var user_input_tb = null;
-var userInfoDiv = null;
-var appTitleDiv = null;
-var chatbot = null;
-var chatbotWrap = null;
-var apSwitch = null;
-var messageBotDivs = null;
-var loginUserForm = null;
-var logginUser = null;
-var updateToast = null;
-var sendBtn = null;
-var cancelBtn = null;
-var sliders = null;
-
-var userLogged = false;
-var usernameGotten = false;
-var historyLoaded = false;
-var updateInfoGotten = false;
-var isLatestVersion = localStorage.getItem('isLatestVersion') || false;
-
-var ga = document.getElementsByTagName("gradio-app");
-var targetNode = ga[0];
-var isInIframe = (window.self !== window.top);
-var language = navigator.language.slice(0,2);
-var currentTime = new Date().getTime();
-
-// i18n
-var forView_i18n = {
-    'zh': "仅供查看",
-    'en': "For viewing only",
-    'ja': "閲覧専用",
-    'ko': "읽기 전용",
-    'fr': "Pour consultation seulement",
-    'es': "Solo para visualización",
-    'sv': "Endast för visning",
-};
-
-var deleteConfirm_i18n_pref = {
-    'zh': "你真的要删除 ",
-    'en': "Are you sure you want to delete ",
-    'ja': "本当に ",
-    'ko': "정말로 ",
-    'sv': "Är du säker på att du vill ta bort "
-};
-var deleteConfirm_i18n_suff = {
-    'zh': " 吗?",
-    'en': " ?",
-    'ja': " を削除してもよろしいですか?",
-    'ko': " 을(를) 삭제하시겠습니까?",
-    'sv': " ?"
-};
-var deleteConfirm_msg_pref = "Are you sure you want to delete ";
-var deleteConfirm_msg_suff = " ?";
-
-var usingLatest_i18n = {
-    'zh': "您使用的就是最新版!",
-    'en': "You are using the latest version!",
-    'ja': "最新バージョンを使用しています!",
-    'ko': "최신 버전을 사용하고 있습니다!",
-    'sv': "Du använder den senaste versionen!"
-};
-
-// Has the gradio page finished loading? Can we touch its elements yet?
-function gradioLoaded(mutations) {
-    for (var i = 0; i < mutations.length; i++) {
-        if (mutations[i].addedNodes.length) {
-            loginUserForm = document.querySelector(".gradio-container > .main > .wrap > .panel > .form")
-            gradioContainer = document.querySelector(".gradio-container");
-            user_input_tb = document.getElementById('user_input_tb');
-            userInfoDiv = document.getElementById("user_info");
-            appTitleDiv = document.getElementById("app_title");
-            chatbot = document.querySelector('#chuanhu_chatbot');
-            chatbotWrap = document.querySelector('#chuanhu_chatbot > .wrapper > .wrap');
-            apSwitch = document.querySelector('.apSwitch input[type="checkbox"]');
-            updateToast = document.querySelector("#toast-update");
-            sendBtn = document.getElementById("submit_btn");
-            cancelBtn = document.getElementById("cancel_btn");
-            sliders = document.querySelectorAll('input[type="range"]');
-
-            if (loginUserForm) {
-                localStorage.setItem("userLogged", true);
-                userLogged = true;
-            }
-
-            if (gradioContainer && apSwitch) { // has gradioContainer loaded?
-                adjustDarkMode();
-            }
-            if (user_input_tb) { // has user_input_tb loaded?
-                selectHistory();
-            }
-            if (userInfoDiv && appTitleDiv) { // have userInfoDiv and appTitleDiv loaded?
-                if (!usernameGotten) {
-                    getUserInfo();
-                }
-                setTimeout(showOrHideUserInfo(), 2000);
-            }
-            if (chatbot) { // has chatbot loaded?
-                setChatbotHeight();
-            }
-            if (chatbotWrap) {
-                if (!historyLoaded) {
-                    loadHistoryHtml();
-                }
-                setChatbotScroll();
-                mObserver.observe(chatbotWrap, { attributes: true, childList: true, subtree: true, characterData: true});
-            }
-            if (sliders) {
-                setSlider();
-            }
-            if (updateToast) {
-                const lastCheckTime = localStorage.getItem('lastCheckTime') || 0;
-                const longTimeNoCheck = currentTime - lastCheckTime > 3 * 24 * 60 * 60 * 1000;
-                if (longTimeNoCheck && !updateInfoGotten && !isLatestVersion || isLatestVersion && !updateInfoGotten) {
-                    updateLatestVersion();
-                }
-            }
-            if (cancelBtn) {
-                submitObserver.observe(cancelBtn, { attributes: true, characterData: true});
-            }
-        }
-    }
-}
-
-function webLocale() {
-    // console.log("webLocale", language);
-    if (forView_i18n.hasOwnProperty(language)) {
-        var forView = forView_i18n[language];
-        var forViewStyle = document.createElement('style');
-        forViewStyle.innerHTML = '.wrapper>.wrap>.history-message>:last-child::after { content: "' + forView + '"!important; }';
-        document.head.appendChild(forViewStyle);
-    }
-    if (deleteConfirm_i18n_pref.hasOwnProperty(language)) {
-        deleteConfirm_msg_pref = deleteConfirm_i18n_pref[language];
-        deleteConfirm_msg_suff = deleteConfirm_i18n_suff[language];
-    }
-}
-
-function showConfirmationDialog(a, file, c) {
-    if (file != "") {
-        var result = confirm(deleteConfirm_msg_pref + file + deleteConfirm_msg_suff);
-        if (result) {
-            return [a, file, c];
-        }
-    }
-    return [a, "CANCELED", c];
-}
-
-function selectHistory() {
-    user_input_ta = user_input_tb.querySelector("textarea");
-    if (user_input_ta) {
-        observer.disconnect(); // stop observing
-        disableSendBtn();
-        // listen for keydown events on the textarea
-        user_input_ta.addEventListener("keydown", function (event) {
-            var value = user_input_ta.value.trim();
-            // was an arrow key pressed?
-            if (event.code === 'ArrowUp' || event.code === 'ArrowDown') {
-                // if the input box holds text that is not in the history, do nothing
-                if (value && key_down_history.indexOf(value) === -1)
-                    return;
-                // prevent the default behavior for actions we handle
-                event.preventDefault();
-                var length = key_down_history.length;
-                if (length === 0) {
-                    currentIndex = -1; // if the history is empty, reset the current selection
-                    return;
-                }
-                if (currentIndex === -1) {
-                    currentIndex = length;
-                }
-                if (event.code === 'ArrowUp' && currentIndex > 0) {
-                    currentIndex--;
-                    user_input_ta.value = key_down_history[currentIndex];
-                } else if (event.code === 'ArrowDown' && currentIndex < length - 1) {
-                    currentIndex++;
-                    user_input_ta.value = key_down_history[currentIndex];
-                }
-                user_input_ta.selectionStart = user_input_ta.value.length;
-                user_input_ta.selectionEnd = user_input_ta.value.length;
-                const input_event = new InputEvent("input", { bubbles: true, cancelable: true });
-                user_input_ta.dispatchEvent(input_event);
-            } else if (event.code === "Enter") {
-                if (value) {
-                    currentIndex = -1;
-                    if (key_down_history.indexOf(value) === -1) {
-                        key_down_history.push(value);
-                        if (key_down_history.length > MAX_HISTORY_LENGTH) {
-                            key_down_history.shift();
-                        }
-                    }
-                }
-            }
-        });
-    }
-}
-
-function disableSendBtn() {
-    sendBtn.disabled = user_input_ta.value.trim() === '';
-    user_input_ta.addEventListener('input', () => {
-        sendBtn.disabled = user_input_ta.value.trim() === '';
-    });
-}
-
-var username = null;
-function getUserInfo() {
-    if (usernameGotten) {
-        return;
-    }
-    userLogged = localStorage.getItem('userLogged');
-    if (userLogged) {
-        username = userInfoDiv.innerText;
-        if (username) {
-            if (username.includes("getting user info…")) {
-                setTimeout(getUserInfo, 500);
-                return;
-            } else if (username === " ") {
-                localStorage.removeItem("username");
-                localStorage.removeItem("userLogged")
-                userLogged = false;
-                usernameGotten = true;
-                return;
-            } else {
-                username = username.match(/User:\s*(.*)/)[1] || username;
-                localStorage.setItem("username", username);
-                usernameGotten = true;
-                clearHistoryHtml();
-            }
-        }
-    }
-}
-
-function toggleUserInfoVisibility(shouldHide) {
-    if (userInfoDiv) {
-        if (shouldHide) {
-            userInfoDiv.classList.add("hideK");
-        } else {
-            userInfoDiv.classList.remove("hideK");
-        }
-    }
-}
-function showOrHideUserInfo() {
-    // Bind mouse/touch events to show/hide user info
-    appTitleDiv.addEventListener("mouseenter", function () {
-        toggleUserInfoVisibility(false);
-    });
-    userInfoDiv.addEventListener("mouseenter", function () {
-        toggleUserInfoVisibility(false);
-    });
-    sendBtn.addEventListener("mouseenter", function () {
-        toggleUserInfoVisibility(false);
-    });
-
-    appTitleDiv.addEventListener("mouseleave", function () {
-        toggleUserInfoVisibility(true);
-    });
-    userInfoDiv.addEventListener("mouseleave", function () {
-        toggleUserInfoVisibility(true);
-    });
-    sendBtn.addEventListener("mouseleave", function () {
-        toggleUserInfoVisibility(true);
-    });
-
-    appTitleDiv.ontouchstart = function () {
-        toggleUserInfoVisibility(false);
-    };
-    userInfoDiv.ontouchstart = function () {
-        toggleUserInfoVisibility(false);
-    };
-    sendBtn.ontouchstart = function () {
-        toggleUserInfoVisibility(false);
-    };
-
-    appTitleDiv.ontouchend = function () {
-        setTimeout(function () {
-            toggleUserInfoVisibility(true);
-        }, 3000);
-    };
-    userInfoDiv.ontouchend = function () {
-        setTimeout(function () {
-            toggleUserInfoVisibility(true);
-        }, 3000);
-    };
-    sendBtn.ontouchend = function () {
-        setTimeout(function () {
-            toggleUserInfoVisibility(true);
-        }, 3000); // delay 3 seconds before hiding user info
-    };
-
-    // Hide user info after 2 seconds
-    setTimeout(function () {
-        toggleUserInfoVisibility(true);
-    }, 2000);
-}
-
-function toggleDarkMode(isEnabled) {
-    if (isEnabled) {
-        document.body.classList.add("dark");
-        document.body.style.setProperty("background-color", "var(--neutral-950)", "important");
-    } else {
-        document.body.classList.remove("dark");
-        document.body.style.backgroundColor = "";
-    }
-}
-function adjustDarkMode() {
-    const darkModeQuery = window.matchMedia("(prefers-color-scheme: dark)");
-
-    // set the initial state from the current color scheme
-    apSwitch.checked = darkModeQuery.matches;
-    toggleDarkMode(darkModeQuery.matches);
-    // listen for color-scheme changes
-    darkModeQuery.addEventListener("change", (e) => {
-        apSwitch.checked = e.matches;
-        toggleDarkMode(e.matches);
-    });
-    // apSwitch = document.querySelector('.apSwitch input[type="checkbox"]');
-    apSwitch.addEventListener("change", (e) => {
-        toggleDarkMode(e.target.checked);
-    });
-}
-
-function setChatbotHeight() {
-    const screenWidth = window.innerWidth;
-    const statusDisplay = document.querySelector('#status_display');
-    const statusDisplayHeight = statusDisplay ? statusDisplay.offsetHeight : 0;
-    const vh = window.innerHeight * 0.01;
-    document.documentElement.style.setProperty('--vh', `${vh}px`);
-    if (isInIframe) {
-        chatbot.style.height = `700px`;
-        chatbotWrap.style.maxHeight = `calc(700px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`
-    } else {
-        if (screenWidth <= 320) {
-            chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px)`;
-            chatbotWrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`;
-        } else if (screenWidth <= 499) {
-            chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px)`;
-            chatbotWrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`;
-        } else {
-            chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px)`;
-            chatbotWrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`;
-        }
-    }
-}
-function setChatbotScroll() {
-    var scrollHeight = chatbotWrap.scrollHeight;
-    chatbotWrap.scrollTo(0,scrollHeight)
-}
-var rangeInputs = null;
-var numberInputs = null;
-function setSlider() {
-    rangeInputs = document.querySelectorAll('input[type="range"]');
-    numberInputs = document.querySelectorAll('input[type="number"]')
-    setSliderRange();
-    rangeInputs.forEach(rangeInput => {
-        rangeInput.addEventListener('input', setSliderRange);
-    });
-    numberInputs.forEach(numberInput => {
-        numberInput.addEventListener('input', setSliderRange);
-    })
-}
-function setSliderRange() {
-    var range = document.querySelectorAll('input[type="range"]');
-    range.forEach(range => {
-        range.style.backgroundSize = (range.value - range.min) / (range.max - range.min) * 100 + '% 100%';
-    });
-}
-
-function addChuanhuButton(botElement) {
-    var rawMessage = null;
-    var mdMessage = null;
-    rawMessage = botElement.querySelector('.raw-message');
-    mdMessage = botElement.querySelector('.md-message');
-    if (!rawMessage) {
-        var buttons = botElement.querySelectorAll('button.chuanhu-btn');
-        for (var i = 0; i < buttons.length; i++) {
-            buttons[i].parentNode.removeChild(buttons[i]);
-        }
-        return;
-    }
-    var oldCopyButton = null;
-    var oldToggleButton = null;
-    oldCopyButton = botElement.querySelector('button.copy-bot-btn');
-    oldToggleButton = botElement.querySelector('button.toggle-md-btn');
-    if (oldCopyButton) oldCopyButton.remove();
-    if (oldToggleButton) oldToggleButton.remove();
-
-    // Copy bot button
-    var copyButton = document.createElement('button');
-    copyButton.classList.add('chuanhu-btn');
-    copyButton.classList.add('copy-bot-btn');
-    copyButton.setAttribute('aria-label', 'Copy');
-    copyButton.innerHTML = copyIcon;
-    copyButton.addEventListener('click', async () => {
-        const textToCopy = rawMessage.innerText;
-        try {
-            if ("clipboard" in navigator) {
-                await navigator.clipboard.writeText(textToCopy);
-                copyButton.innerHTML = copiedIcon;
-                setTimeout(() => {
-                    copyButton.innerHTML = copyIcon;
-                }, 1500);
-            } else {
-                const textArea = document.createElement("textarea");
-                textArea.value = textToCopy;
-                document.body.appendChild(textArea);
-                textArea.select();
-                try {
-                    document.execCommand('copy');
-                    copyButton.innerHTML = copiedIcon;
-                    setTimeout(() => {
-                        copyButton.innerHTML = copyIcon;
-                    }, 1500);
-                } catch (error) {
-                    console.error("Copy failed: ", error);
-                }
-                document.body.removeChild(textArea);
-            }
-        } catch (error) {
-            console.error("Copy failed: ", error);
-        }
-    });
-    botElement.appendChild(copyButton);
-
-    // Toggle button
-    var toggleButton = document.createElement('button');
-    toggleButton.classList.add('chuanhu-btn');
-    toggleButton.classList.add('toggle-md-btn');
-    toggleButton.setAttribute('aria-label', 'Toggle');
-    var renderMarkdown = mdMessage.classList.contains('hideM');
-    toggleButton.innerHTML = renderMarkdown ? mdIcon : rawIcon;
-    toggleButton.addEventListener('click', () => {
-        renderMarkdown = mdMessage.classList.contains('hideM');
-        if (renderMarkdown){
-            renderMarkdownText(botElement);
-            toggleButton.innerHTML=rawIcon;
-        } else {
-            removeMarkdownText(botElement);
-            toggleButton.innerHTML=mdIcon;
-        }
-    });
-    botElement.insertBefore(toggleButton, copyButton);
-}
-
-function renderMarkdownText(message) {
-    var mdDiv = message.querySelector('.md-message');
-    if (mdDiv) mdDiv.classList.remove('hideM');
-    var rawDiv = message.querySelector('.raw-message');
-    if (rawDiv) rawDiv.classList.add('hideM');
-}
-function removeMarkdownText(message) {
-    var rawDiv = message.querySelector('.raw-message');
-    if (rawDiv) rawDiv.classList.remove('hideM');
-    var mdDiv = message.querySelector('.md-message');
-    if (mdDiv) mdDiv.classList.add('hideM');
-}
-
-let timeoutId;
-let isThrottled = false;
-var mmutation
-// watch the chatbotWrap element for changes and add copy buttons to bot messages
-var mObserver = new MutationObserver(function (mutationsList) {
-    for (mmutation of mutationsList) {
-        if (mmutation.type === 'childList') {
-            for (var node of mmutation.addedNodes) {
-                if (node.nodeType === 1 && node.classList.contains('message')) {
-                    saveHistoryHtml();
-                    disableSendBtn();
-                    document.querySelectorAll('#chuanhu_chatbot .message-wrap .message.bot').forEach(addChuanhuButton);
-                }
-            }
-            for (var node of mmutation.removedNodes) {
-                if (node.nodeType === 1 && node.classList.contains('message')) {
-                    saveHistoryHtml();
-                    disableSendBtn();
-                    document.querySelectorAll('#chuanhu_chatbot .message-wrap .message.bot').forEach(addChuanhuButton);
-                }
-            }
-        } else if (mmutation.type === 'attributes') {
-            if (isThrottled) break; // throttle to avoid endless repeated re-rendering
-            isThrottled = true;
-            clearTimeout(timeoutId);
-            timeoutId = setTimeout(() => {
-                isThrottled = false;
-                document.querySelectorAll('#chuanhu_chatbot .message-wrap .message.bot').forEach(addChuanhuButton);
-                saveHistoryHtml();
-                disableSendBtn();
-            }, 1500);
-        }
-    }
-});
-// mObserver.observe(targetNode, { attributes: true, childList: true, subtree: true, characterData: true});
-
-var submitObserver = new MutationObserver(function (mutationsList) {
-    document.querySelectorAll('#chuanhu_chatbot .message-wrap .message.bot').forEach(addChuanhuButton);
-    saveHistoryHtml();
-});
-
-var loadhistorytime = 0; // for debugging
-function saveHistoryHtml() {
-    var historyHtml = document.querySelector('#chuanhu_chatbot>.wrapper>.wrap');
-    if (!historyHtml) return; // no history, do nothing
-    localStorage.setItem('chatHistory', historyHtml.innerHTML);
-    // console.log("History Saved")
-    historyLoaded = false;
-}
-function loadHistoryHtml() {
-    var historyHtml = localStorage.getItem('chatHistory');
-    if (!historyHtml) {
-        historyLoaded = true;
-        return; // no history, do nothing
-    }
-    userLogged = localStorage.getItem('userLogged');
-    if (userLogged){
-        historyLoaded = true;
-        return; // logged in, do nothing
-    }
-    if (!historyLoaded) {
-        var tempDiv = document.createElement('div');
-        tempDiv.innerHTML = historyHtml;
-        var buttons = tempDiv.querySelectorAll('button.chuanhu-btn');
-        var gradioCopyButtons = tempDiv.querySelectorAll('button.copy_code_button');
-        for (var i = 0; i < buttons.length; i++) {
-            buttons[i].parentNode.removeChild(buttons[i]);
-        }
-        for (var i = 0; i < gradioCopyButtons.length; i++) {
-            gradioCopyButtons[i].parentNode.removeChild(gradioCopyButtons[i]);
-        }
-        var fakeHistory = document.createElement('div');
-        fakeHistory.classList.add('history-message');
-        fakeHistory.innerHTML = tempDiv.innerHTML;
-        webLocale();
-        chatbotWrap.insertBefore(fakeHistory, chatbotWrap.firstChild);
-        // var fakeHistory = document.createElement('div');
-        // fakeHistory.classList.add('history-message');
-        // fakeHistory.innerHTML = historyHtml;
-        // chatbotWrap.insertBefore(fakeHistory, chatbotWrap.firstChild);
-        historyLoaded = true;
-        console.log("History Loaded");
-        loadhistorytime += 1; // for debugging
-    } else {
-        historyLoaded = false;
-    }
-}
-function clearHistoryHtml() {
-    localStorage.removeItem("chatHistory");
-    historyMessages = chatbotWrap.querySelector('.history-message');
-    if (historyMessages) {
-        chatbotWrap.removeChild(historyMessages);
-        console.log("History Cleared");
-    }
-}
-
-var showingUpdateInfo = false;
-async function getLatestRelease() {
-    try {
-        const response = await fetch('https://api.github.com/repos/gaizhenbiao/chuanhuchatgpt/releases/latest');
-        if (!response.ok) {
-            console.log(`Error: ${response.status} - ${response.statusText}`);
-            updateInfoGotten = true;
-            return null;
-        }
-        const data = await response.json();
-        updateInfoGotten = true;
-        return data;
-    } catch (error) {
-        console.log(`Error: ${error}`);
-        updateInfoGotten = true;
-        return null;
-    }
-}
-async function updateLatestVersion() {
-    const currentVersionElement = document.getElementById('current-version');
-    const latestVersionElement = document.getElementById('latest-version-title');
-    const releaseNoteElement = document.getElementById('release-note-content');
-    const currentVersion = currentVersionElement.textContent;
-    const versionTime = document.getElementById('version-time').innerText;
-    const localVersionTime = versionTime !== "unknown" ? (new Date(versionTime)).getTime() : 0;
-    updateInfoGotten = true; // run only once, success or not, to avoid hitting the API rate limit
-    try {
-        const data = await getLatestRelease();
-        const releaseNote = data.body;
-        if (releaseNote) {
-            releaseNoteElement.innerHTML = marked.parse(releaseNote, {mangle: false, headerIds: false});
-        }
-        const latestVersion = data.tag_name;
-        const latestVersionTime = (new Date(data.created_at)).getTime();
-        if (latestVersionTime) {
-            if (localVersionTime < latestVersionTime) {
-                latestVersionElement.textContent = latestVersion;
-                console.log(`New version ${latestVersion} found!`);
-                if (!isInIframe) {openUpdateToast();}
-            } else {
-                noUpdate();
-            }
-            currentTime = new Date().getTime();
-            localStorage.setItem('lastCheckTime', currentTime);
-        }
-    } catch (error) {
-        console.error(error);
-    }
-}
-function getUpdate() {
-    window.open('https://github.com/gaizhenbiao/chuanhuchatgpt/releases/latest', '_blank');
-    closeUpdateToast();
-}
-function cancelUpdate() {
-    closeUpdateToast();
-}
-function openUpdateToast() {
-    showingUpdateInfo = true;
-    setUpdateWindowHeight();
-}
-function closeUpdateToast() {
-    updateToast.style.setProperty('top', '-500px');
-    showingUpdateInfo = false;
-}
-function manualCheckUpdate() {
-    openUpdateToast();
-    updateLatestVersion();
-    currentTime = new Date().getTime();
-    localStorage.setItem('lastCheckTime', currentTime);
-}
-function noUpdate() {
-    localStorage.setItem('isLatestVersion', 'true');
-    isLatestVersion = true;
-    const versionInfoElement = document.getElementById('version-info-title');
-    const releaseNoteWrap = document.getElementById('release-note-wrap');
-    const gotoUpdateBtn = document.getElementById('goto-update-btn');
-    const closeUpdateBtn = document.getElementById('close-update-btn');
-
-    versionInfoElement.textContent = usingLatest_i18n.hasOwnProperty(language) ? usingLatest_i18n[language] : usingLatest_i18n['en'];
-    releaseNoteWrap.style.setProperty('display', 'none');
-    gotoUpdateBtn.classList.add('hideK');
-    closeUpdateBtn.classList.remove('hideK');
-}
-function setUpdateWindowHeight() {
-    if (!showingUpdateInfo) {return;}
-    const scrollPosition = window.scrollY;
-    // const originalTop = updateToast.style.getPropertyValue('top');
-    const resultTop = scrollPosition - 20 + 'px';
-    updateToast.style.setProperty('top', resultTop);
-}
-
-// watch for DOM mutations inside the page
-var observer = new MutationObserver(function (mutations) {
-    gradioLoaded(mutations);
-});
-observer.observe(targetNode, { childList: true, subtree: true });
-
-// watch for page changes
-window.addEventListener("DOMContentLoaded", function () {
-    isInIframe = (window.self !== window.top);
-    historyLoaded = false;
-});
-window.addEventListener('resize', setChatbotHeight);
-window.addEventListener('scroll', function(){setChatbotHeight();setUpdateWindowHeight();});
-window.matchMedia("(prefers-color-scheme: dark)").addEventListener("change", adjustDarkMode);
-
-// console surprise
-var styleTitle1 = `
-    font-size: 16px;
-    font-family: ui-monospace, monospace;
-    color: #06AE56;
-`
-var styleDesc1 = `
-    font-size: 12px;
-    font-family: ui-monospace, monospace;
-`
-function makeML(str) {
-    let l = new String(str)
-    l = l.substring(l.indexOf("/*") + 3, l.lastIndexOf("*/"))
-    return l
-}
-let ChuanhuInfo = function () {
-    /*
-    ________ __ ________ __
-    / ____/ /_ __ ______ _____ / /_ __ __ / ____/ /_ ____ _/ /_
-    / / / __ \/ / / / __ `/ __ \/ __ \/ / / / / / / __ \/ __ `/ __/
-    / /___/ / / / /_/ / /_/ / / / / / / / /_/ / / /___/ / / / /_/ / /_
-    \____/_/ /_/\__,_/\__,_/_/ /_/_/ /_/\__,_/ \____/_/ /_/\__,_/\__/
-
-    川虎Chat (Chuanhu Chat) - GUI for ChatGPT API and many LLMs
-    */
-}
-let description = `
-    © 2023 Chuanhu, MZhao, Keldos
-    GitHub repository: [https://github.com/GaiZhenbiao/ChuanhuChatGPT]\n
-    Enjoy our project!\n
-`
-console.log(`%c${makeML(ChuanhuInfo)}`,styleTitle1)
-console.log(`%c${description}`, styleDesc1)
-
-// button svg code
-const copyIcon = '<span><svg stroke="currentColor" fill="none" stroke-width="2" viewBox="0 0 24 24" stroke-linecap="round" stroke-linejoin="round" height=".8em" width=".8em" xmlns="http://www.w3.org/2000/svg"><rect x="9" y="9" width="13" height="13" rx="2" ry="2"></rect><path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1"></path></svg></span>';
-const copiedIcon = '<span><svg stroke="currentColor" fill="none" stroke-width="2" viewBox="0 0 24 24" stroke-linecap="round" stroke-linejoin="round" height=".8em" width=".8em" xmlns="http://www.w3.org/2000/svg"><polyline points="20 6 9 17 4 12"></polyline></svg></span>';
-const mdIcon = '<span><svg stroke="currentColor" fill="none" stroke-width="1" viewBox="0 0 14 18" stroke-linecap="round" stroke-linejoin="round" height=".8em" width=".8em" xmlns="http://www.w3.org/2000/svg"><g transform-origin="center" transform="scale(0.85)"><path d="M1.5,0 L12.5,0 C13.3284271,-1.52179594e-16 14,0.671572875 14,1.5 L14,16.5 C14,17.3284271 13.3284271,18 12.5,18 L1.5,18 C0.671572875,18 1.01453063e-16,17.3284271 0,16.5 L0,1.5 C-1.01453063e-16,0.671572875 0.671572875,1.52179594e-16 1.5,0 Z" stroke-width="1.8"></path><line x1="3.5" y1="3.5" x2="10.5" y2="3.5"></line><line x1="3.5" y1="6.5" x2="8" y2="6.5"></line></g><path d="M4,9 L10,9 C10.5522847,9 11,9.44771525 11,10 L11,13.5 C11,14.0522847 10.5522847,14.5 10,14.5 L4,14.5 C3.44771525,14.5 3,14.0522847 3,13.5 L3,10 C3,9.44771525 3.44771525,9 4,9 Z" stroke="none" fill="currentColor"></path></svg></span>';
-const rawIcon = '<span><svg stroke="currentColor" fill="none" stroke-width="1.8" viewBox="0 0 18 14" stroke-linecap="round" stroke-linejoin="round" height=".8em" width=".8em" xmlns="http://www.w3.org/2000/svg"><g transform-origin="center" transform="scale(0.85)"><polyline points="4 3 0 7 4 11"></polyline><polyline points="14 3 18 7 14 11"></polyline><line x1="12" y1="0" x2="6" y2="14"></line></g></svg></span>';
707
- const rawIcon = '<span><svg stroke="currentColor" fill="none" stroke-width="1.8" viewBox="0 0 18 14" stroke-linecap="round" stroke-linejoin="round" height=".8em" width=".8em" xmlns="http://www.w3.org/2000/svg"><g transform-origin="center" transform="scale(0.85)"><polyline points="4 3 0 7 4 11"></polyline><polyline points="14 3 18 7 14 11"></polyline><line x1="12" y1="0" x2="6" y2="14"></line></g></svg></span>';