parquet-converter committed

Commit 26baced · Parent: 7c98a40

Update parquet files (step 67 of 397)

This view is limited to 50 files because the commit contains too many changes.

Files changed (50)
  1. spaces/1368565466ki/ZSTRD/text/__init__.py +0 -57
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Anokha Anubhav 1080p Dual Audio Movies Fix.md +0 -15
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Aula Killing The Soul Gaming Mouse Driver Enhance Your Gaming Experience with Aula.md +0 -93
  4. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Minitool Power Data Recovery.md +0 -18
  5. spaces/1gistliPinn/ChatGPT4/Examples/Ab Tumhare Hawale Watan Sathiyo Full Movie in HD Tamil 1080p A Tribute to the Indian Army.md +0 -6
  6. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Descargar Hungry Shark World APK El juego oficial de la Semana del Tiburn.md +0 -117
  7. spaces/1phancelerku/anime-remove-background/Cara Download APK CarX Street Game Balap Mobil Terbaru 2023.md +0 -145
  8. spaces/1phancelerku/anime-remove-background/Download Final Burn Neo APK and Play Retro Games on Your Android Phone.md +0 -162
  9. spaces/1phancelerku/anime-remove-background/Download Orangedox and Secure Your Documents with Data Rooms.md +0 -80
  10. spaces/1phancelerku/anime-remove-background/Download Resident Evil 4 PSP ISO for PPSSPP Emulator 2019.md +0 -143
  11. spaces/A666sxr/Genshin_TTS/attentions.py +0 -303
  12. spaces/AIConsultant/MusicGen/audiocraft/modules/conditioners.py +0 -1411
  13. spaces/AIWaves/Debate/gradio_config.py +0 -437
  14. spaces/AIWaves/SOP_Generation-single/Environment/__init__.py +0 -1
  15. spaces/AP123/text-to-3D/app.py +0 -264
  16. spaces/ASJMO/freegpt/g4f/Provider/Providers/Lockchat.py +0 -32
  17. spaces/Abhaykoul/Palm-2/app.py +0 -38
  18. spaces/Abhilashvj/planogram-compliance/utils/augmentations.py +0 -564
  19. spaces/Adapter/T2I-Adapter/ldm/modules/ema.py +0 -80
  20. spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/selector/pokemon.py +0 -98
  21. spaces/AgentVerse/agentVerse/agentverse/message.py +0 -35
  22. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dropdownlist/DropDownList.js +0 -119
  23. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridtable/GridTable.js +0 -140
  24. spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/NLP/Chinese.pm +0 -239
  25. spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/NLP/English.pm +0 -0
  26. spaces/AlanMars/QYL-AI-Space/modules/__init__.py +0 -0
  27. spaces/AlexWang/lama/saicinpainting/training/modules/ffc.py +0 -485
  28. spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/longcode/jpgd.cpp +0 -3276
  29. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_instruction_pix2pix.py +0 -180
  30. spaces/Andy1621/uniformer_image_detection/configs/hrnet/faster_rcnn_hrnetv2p_w18_2x_coco.py +0 -5
  31. spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r101-d8_769x769_80k_cityscapes.py +0 -2
  32. spaces/Aniquel/bert-large-uncased-whole-word-masking/README.md +0 -12
  33. spaces/Antonpy/stable-diffusion-license/README.md +0 -11
  34. spaces/Aravindan/BreedClassification/app.py +0 -19
  35. spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/util/inference.py +0 -259
  36. spaces/Audio-AGI/WavJourney/VoiceParser/__init__.py +0 -0
  37. spaces/Benson/text-generation/Examples/Animal Rebelin Batalla Simulador Pc Apk.md +0 -69
  38. spaces/Benson/text-generation/Examples/Bitcoin Cloud Mining Apk.md +0 -88
  39. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/table.py +0 -1002
  40. spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_reqs.py +0 -19
  41. spaces/CVH-vn1210/make_hair/minigpt4/models/base_model.py +0 -247
  42. spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/eval.py +0 -230
  43. spaces/CVPR/winoground-explorer/README.md +0 -12
  44. spaces/Casio991ms/MathBot/app.py +0 -779
  45. spaces/ChihChiu29/mychatbot/tutorial.md +0 -44
  46. spaces/Clebersla/RVC_V2_Huggingface_Version/lib/infer_pack/modules/F0Predictor/__init__.py +0 -0
  47. spaces/CofAI/njpad/index.html +0 -48
  48. spaces/Cvandi/remake/tests/test_discriminator_arch.py +0 -19
  49. spaces/Cyril666/my_abi/modules/transformer.py +0 -901
  50. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/otlLib/builder.py +0 -0
spaces/1368565466ki/ZSTRD/text/__init__.py DELETED
@@ -1,57 +0,0 @@
- """ from https://github.com/keithito/tacotron """
- from text import cleaners
- from text.symbols import symbols
-
-
- # Mappings from symbol to numeric ID and vice versa:
- _symbol_to_id = {s: i for i, s in enumerate(symbols)}
- _id_to_symbol = {i: s for i, s in enumerate(symbols)}
-
-
- def text_to_sequence(text, symbols, cleaner_names):
-   '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
-     Args:
-       text: string to convert to a sequence
-       cleaner_names: names of the cleaner functions to run the text through
-     Returns:
-       List of integers corresponding to the symbols in the text
-   '''
-   _symbol_to_id = {s: i for i, s in enumerate(symbols)}
-   sequence = []
-
-   clean_text = _clean_text(text, cleaner_names)
-   for symbol in clean_text:
-     if symbol not in _symbol_to_id.keys():
-       continue
-     symbol_id = _symbol_to_id[symbol]
-     sequence += [symbol_id]
-   return sequence, clean_text
-
-
- def cleaned_text_to_sequence(cleaned_text):
-   '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text.
-     Args:
-       text: string to convert to a sequence
-     Returns:
-       List of integers corresponding to the symbols in the text
-   '''
-   sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()]
-   return sequence
-
-
- def sequence_to_text(sequence):
-   '''Converts a sequence of IDs back to a string'''
-   result = ''
-   for symbol_id in sequence:
-     s = _id_to_symbol[symbol_id]
-     result += s
-   return result
-
-
- def _clean_text(text, cleaner_names):
-   for name in cleaner_names:
-     cleaner = getattr(cleaners, name)
-     if not cleaner:
-       raise Exception('Unknown cleaner: %s' % name)
-     text = cleaner(text)
-   return text
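Note: the deleted module above is a thin character-to-ID lookup derived from keithito/tacotron. `text_to_sequence` runs the named cleaner functions over the input and maps each surviving character to its integer symbol ID; `sequence_to_text` inverts the mapping. Below is a minimal usage sketch, assuming the package imports as `text` and that the deleted repo's `text/cleaners.py` defines a cleaner named `basic_cleaners` (as the upstream keithito code does); both names are assumptions for illustration, not part of this commit.

```python
# Hypothetical usage of the deleted text/__init__.py; assumes the package is
# importable as `text` and that text/cleaners.py provides `basic_cleaners`.
from text import text_to_sequence, sequence_to_text
from text.symbols import symbols

ids, cleaned = text_to_sequence("Hello world.", symbols, ["basic_cleaners"])
print(ids)                    # list of integer symbol IDs, one per kept character
print(cleaned)                # the text after the cleaners have run
print(sequence_to_text(ids))  # maps the IDs back to the cleaned string
```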
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Anokha Anubhav 1080p Dual Audio Movies Fix.md DELETED
@@ -1,15 +0,0 @@
-
- <h1>Anokha Anubhav: A Unique Experience of Dual Audio Movies in 1080p</h1>
- <p>If you are looking for a movie that will give you a unique experience of dual audio movies in 1080p, then you should check out Anokha Anubhav. This movie is a Hindi thriller that was released in 2003 and stars Divya Dutta, Sanjay Kapoor, Juhi Chawla and others. The movie revolves around a woman who gets involved in a murder mystery and has to face the consequences of her actions.</p>
- <p>Anokha Anubhav is one of the rare movies that offers dual audio options in 1080p quality. You can enjoy the movie in both Hindi and English languages, depending on your preference. The movie also has subtitles in both languages for better understanding. The dual audio feature allows you to appreciate the movie from different perspectives and enjoy the nuances of the dialogues and the performances.</p>
- <h2>Anokha Anubhav 1080p dual audio movies</h2><br /><p><b><b>Download</b> &#10027; <a href="https://byltly.com/2uKwDV">https://byltly.com/2uKwDV</a></b></p><br /><br />
- <p>The movie is also available in 1080p resolution, which means that you can watch it in high definition and experience the crisp and clear visuals. The movie has some stunning scenes and cinematography that will captivate your eyes. The 1080p resolution also enhances the sound quality and makes you feel immersed in the movie.</p>
- <p>Anokha Anubhav is a movie that you should not miss if you are a fan of dual audio movies in 1080p. You can find this movie on various online platforms that offer 4k dual audio movies, ultra HD movies, 2160 movies, 2160p movies, 1080p 60FPS movies, 4k HEVC movies, 1080p 10Bit movies, 1080p x265 Hevc, 4k Bluray Movies, WeB-DL Series, WeB-DL Movies, High Quality Audio Movies[^1^]. You can also download this movie from some torrent sites[^2^] if you want to watch it offline.</p>
- <p>Anokha Anubhav is a movie that will give you a unique experience of dual audio movies in 1080p. Watch it today and enjoy the thrill and suspense of this movie.</p>
-
- <p>Anokha Anubhav is a movie that explores the theme of forbidden love and its consequences. The movie shows how Mona and Neha, two close friends since their childhood, develop a romantic relationship that goes beyond the boundaries of society and morality. They both are married to different men, but they cannot resist their attraction for each other. They start meeting secretly and indulge in their affair, unaware of the dangers that await them.</p>
- <p>The movie also depicts how their husbands react to their betrayal and how they seek revenge. The movie has some twists and turns that keep the audience hooked till the end. The movie also shows how greed and lust can ruin lives and relationships. The movie has some bold scenes and dialogues that reflect the intensity of the emotions of the characters.</p>
- <p>Anokha Anubhav is a movie that will make you think about the meaning of love and loyalty. It will also make you question the norms and values of society and how they affect people's choices. The movie is a gripping thriller that will keep you on the edge of your seat. If you are looking for a movie that will challenge your views and expectations, then Anokha Anubhav is the one for you.</p>
- <p></p> cec2833e83<br />
- <br />
- <br />
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Aula Killing The Soul Gaming Mouse Driver Enhance Your Gaming Experience with Aula.md DELETED
@@ -1,93 +0,0 @@
1
-
2
- <h1>Aula Killing The Soul Gaming Mouse Driver: How to Download and Install It</h1>
3
- <h2>Introduction</h2>
4
- <p>If you are looking for a gaming mouse that offers high performance, ergonomic design, and customizable features, you might want to check out <strong>Aula Killing The Soul Gaming Mouse</strong>. This is a wired optical gaming mouse that comes with a Pixart 5050 sensor, six DPI presets, seven programmable buttons, adjustable backlighting, Huano long life switches, and a dedicated software. But before you can enjoy all these benefits, you need to download and install <strong>Aula Killing The Soul Gaming Mouse Driver</strong> on your computer. This driver will allow you to configure your mouse settings, program your buttons and macros, and update your firmware.</p>
5
- <p>In this article, we will show you how to download and install Aula Killing The Soul Gaming Mouse Driver from the official website of Aula Gaming. We will also give you a brief overview of this gaming mouse and its main features. By the end of this article, you will be able to use your Aula Killing The Soul Gaming Mouse to its full potential.</p>
6
- <h2>Aula Killing The Soul Gaming Mouse Driver</h2><br /><p><b><b>Download Zip</b> &#10042;&#10042;&#10042; <a href="https://byltly.com/2uKxZk">https://byltly.com/2uKxZk</a></b></p><br /><br />
7
- <h2>Aula Killing The Soul Gaming Mouse: A Brief Overview</h2>
8
- <h3>Design and Ergonomics</h3>
9
- <p>Aula Killing The Soul Gaming Mouse is a sleek and stylish gaming mouse that has a black plastic body with a matte finish. It has a symmetrical shape that fits comfortably in your right hand. It has a curved back that supports your palm, a textured thumb rest that prevents slipping, and a smooth scroll wheel that indicates your DPI level with different colors. It also has a braided cable that is strengthened to prevent tangling.</p>
10
- <p>The dimensions of this gaming mouse are 11.7 cm x 7.7 cm x 3.9 cm (L x W x H) and it weighs 102 g without cable. It has seven buttons in total, including left click, right click, scroll wheel click, double click, DPI switch, forward, and backward. All these buttons are programmable using the dedicated software.</p>
11
- <h3>Performance and Customization</h3>
12
- <p>Aula Killing The Soul Gaming Mouse uses a Pixart 5050 optical sensor that delivers accurate tracking and smooth movement. It has six DPI presets that range from 500 to 3500 DPI, which you can switch on-the-fly using the DPI button. You can also adjust the polling rate from 125 Hz to 1000 Hz using the software.</p>
13
- <p>The cable length of this gaming mouse is 160 cm and it has a USB 2.0 connector that plugs into your computer. It also has an adjustable backlighting system that lets you choose from seven different colors for each button. You can also turn off the backlighting if you prefer.</p>
14
- <p>The most impressive feature of this gaming mouse is its fully programmable buttons and macros. You can use the dedicated software to assign different functions, commands, keystrokes, or combinations to each button. You can also create custom macros that execute multiple actions with one click. You can save up to five profiles for different games or scenarios.</p>
15
- <h2>How to Download and Install Aula Killing The Soul Gaming Mouse Driver</h2>
16
- <h3>Step 1: Visit the Official Website</h3>
17
- <p>The first step to download and install Aula Killing The Soul Gaming Mouse Driver is to visit <a href="https://aulagaming.eu/en-us">the official website of Aula Gaming</a>. This is where you can find all the information about this gaming mouse and its driver. You can also browse other products from Aula Gaming, such as keyboards, headsets, mouse pads, etc.</p>
18
- <p>On <a href="https://aulagaming.eu/en-us">the official website</a>, you can find <a href="https://aulagaming.eu/en-us/aula/mice/aula-killing-soul-v2-gaming-mouse">the product page for Aula Killing The Soul V2 Gaming Mouse</a>, which is an improved version of this gaming mouse. On this page, you can see <strong>the features</strong>, <strong>the specifications</strong>, <strong>the gallery</strong>, <strong>the reviews</strong>, <strong>and <a href="https://aulagaming.eu/en-us/support/aula/mice/aula-killing-soul-expert-gaming-mouse">the FAQ section</a></strong> for this gaming mouse.</p>
19
- <h3>Step 2: Download the Driver File</h3>
20
- <p>The next step is to download <strong>the driver file</strong> for your Aula Killing The Soul Gaming Mouse model. To do this, you need to go to <a href="https://aulagaming.eu/en-us/support/aula/mice/aula-killing-soul-expert-gaming-mouse">the support page for this gaming mouse</a>. On this page, you can see <strong>a download link</strong> for <strong>AULA Software V1.0.zip</strong>. This is <strong>the driver file</strong> that you need to download.</p>
21
- <p>How to install Aula Killing The Soul Gaming Mouse Driver<br />
22
- Aula Killing The Soul Gaming Mouse Driver download link<br />
23
- Aula Killing The Soul Gaming Mouse Driver compatibility issues<br />
24
- Aula Killing The Soul Gaming Mouse Driver update<br />
25
- Aula Killing The Soul Gaming Mouse Driver error fix<br />
26
- Aula Killing The Soul Gaming Mouse Driver review<br />
27
- Aula Killing The Soul Gaming Mouse Driver features<br />
28
- Aula Killing The Soul Gaming Mouse Driver manual<br />
29
- Aula Killing The Soul Gaming Mouse Driver troubleshooting<br />
30
- Aula Killing The Soul Gaming Mouse Driver settings<br />
31
- Aula Killing The Soul Gaming Mouse Driver software<br />
32
- Aula Killing The Soul Gaming Mouse Driver vs other gaming mice<br />
33
- Aula Killing The Soul Gaming Mouse Driver warranty<br />
34
- Aula Killing The Soul Gaming Mouse Driver price<br />
35
- Aula Killing The Soul Gaming Mouse Driver discount code<br />
36
- Aula Killing The Soul Gaming Mouse Driver alternatives<br />
37
- Aula Killing The Soul Gaming Mouse Driver pros and cons<br />
38
- Aula Killing The Soul Gaming Mouse Driver specifications<br />
39
- Aula Killing The Soul Gaming Mouse Driver installation guide<br />
40
- Aula Killing The Soul Gaming Mouse Driver user feedback<br />
41
- Aula Killing The Soul Gaming Mouse Driver support<br />
42
- Aula Killing The Soul Gaming Mouse Driver tips and tricks<br />
43
- Aula Killing The Soul Gaming Mouse Driver customization options<br />
44
- Aula Killing The Soul Gaming Mouse Driver performance test<br />
45
- Aula Killing The Soul Gaming Mouse Driver comparison chart<br />
46
- Aula Killing The Soul Gaming Mouse Driver best practices<br />
47
- Aula Killing The Soul Gaming Mouse Driver FAQ<br />
48
- Aula Killing The Soul Gaming Mouse Driver benefits<br />
49
- Aula Killing The Soul Gaming Mouse Driver drawbacks<br />
50
- Aula Killing The Soul Gaming Mouse Driver video tutorial<br />
51
- Aula Killing The Soul Gaming Mouse Driver unboxing video<br />
52
- Aula Killing The Soul Gaming Mouse Driver setup instructions<br />
53
- Aula Killing The Soul Gaming Mouse Driver product description<br />
54
- Aula Killing The Soul Gaming Mouse Driver customer service number<br />
55
- Aula Killing The Soul Gaming Mouse Driver online store link<br />
56
- Aula Killing The Soul Gaming Mouse Driver official website link<br />
57
- Aula Killing The Soul Gaming Mouse Driver technical specifications<br />
58
- Aula Killing The Soul Gaming Mouse Driver system requirements<br />
59
- Aula Killing The Soul Gaming Mouse Driver DPI settings<br />
60
- Aula Killing The Soul Gaming Mouse Driver RGB lighting effects<br />
61
- Aula Killing The Soul Gaming Mouse Driver ergonomic design<br />
62
- Aula Killing The Soul Gaming Mouse Driver durability test<br />
63
- Aula Killing The Soul Gaming Mouse Driver wireless version availability<br />
64
- Aula Killing The Soul Gaming Mouse Driver macro programming function<br />
65
- Aula Killing The Soul Gaming Mouse Driver sensor type and quality<br />
66
- Aula Killing The Soul Gaming Mouse Driver button layout and functions<br />
67
- Aula Killing The Soul Gaming Mouse Driver weight and size measurements<br />
68
- Aula Killing The Soul Gaming Mouse Driver cable length and quality<br />
69
- Aula Killing The Soul Gaming Mouse Driver scroll wheel responsiveness and smoothness</p>
70
- <p>To download <strong>the driver file</strong>, simply click on <strong>the download link</strong> and save it on your computer. The file size is about 6 MB and it works on Windows XP/Vista/7/8/10 operating systems. You need to have at least 50 MB of free disk space on your computer to install it.</p>
71
- <h3>Step 3: Install the Driver on Your Computer</h3>
72
- <p>The final step is to install <strong>AULA Software V1.0.zip</strong> on your computer. To do this, you need to unzip <strong>the driver file</strong> that you downloaded in step 2 using any unzip software such as WinRAR or WinZip. Then, you need to run <strong>AULA Software V1.0.exe</strong>, which is <strong>the installation file</strong>.</p>
73
- <p>To install <strong>AULA Software V1.0.exe</strong>, simply follow <strong>the steps</strong> and <strong>the options</strong> that appear on your screen during <strong>the installation process</strong>. You need to agree with <strong>the license agreement</strong>, choose <strong>a destination folder</strong>, create <strong>a desktop shortcut</strong>, etc. The installation process should take only a few minutes.</p>
74
- <href="https://www.youtube.com/watch?v=QyOHK1MiW-M">a video tutorial</a> on how to use <strong>AULA Software V1.0.exe</strong> on your screen. You should also see <strong>your mouse model</strong> and <strong>your firmware version</strong> on the top left corner of the software window.</p>
75
- <p>Now, you can use <strong>AULA Software V1.0.exe</strong> to configure your Aula Killing The Soul Gaming Mouse settings, such as DPI, polling rate, backlighting, buttons, and macros. You can also update your firmware if there is a new version available. You can save your settings on your mouse memory or on your computer. You can also switch between different profiles for different games or scenarios.</p>
76
- <h2>Conclusion</h2>
77
- <p>Aula Killing The Soul Gaming Mouse is a great gaming mouse that offers high performance, ergonomic design, and customizable features. It comes with a Pixart 5050 sensor, six DPI presets, seven programmable buttons, adjustable backlighting, Huano long life switches, and a dedicated software. To use this gaming mouse to its full potential, you need to download and install Aula Killing The Soul Gaming Mouse Driver from the official website of Aula Gaming. This driver will allow you to configure your mouse settings, program your buttons and macros, and update your firmware.</p>
78
- <p>If you are looking for a gaming mouse that can enhance your gaming experience and give you an edge over your opponents, you should definitely try out Aula Killing The Soul Gaming Mouse and its driver. You will not regret it.</p>
79
- <p>Thank you for reading this article. We hope you found it helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you.</p>
80
- <h2>FAQs</h2>
81
- <h3>Q: How can I change the backlighting color of my Aula Killing The Soul Gaming Mouse?</h3>
82
- <p>A: You can change the backlighting color of your Aula Killing The Soul Gaming Mouse using <strong>AULA Software V1.0.exe</strong>. Simply launch the software and go to <strong>the lighting tab</strong>. There, you can choose from seven different colors for each button or turn off the backlighting completely. You can also adjust the brightness and speed of the backlighting.</p>
83
- <h3>Q: How can I create custom macros for my Aula Killing The Soul Gaming Mouse?</h3>
84
- <p>A: You can create custom macros for your Aula Killing The Soul Gaming Mouse using <strong>AULA Software V1.0.exe</strong>. Simply launch the software and go to <strong>the macro tab</strong>. There, you can create new macros or edit existing ones. You can record keystrokes, mouse clicks, delays, loops, etc. You can also assign macros to any button on your mouse.</p>
85
- <h3>Q: How can I update the firmware of my Aula Killing The Soul Gaming Mouse?</h3>
86
- <p>A: You can update the firmware of your Aula Killing The Soul Gaming Mouse using <strong>AULA Software V1.0.exe</strong>. Simply launch the software and go to <strong>the update tab</strong>. There, you can check if there is a new firmware version available for your mouse model. If there is, you can download and install it with one click.</p>
87
- <h3>Q: How can I reset my Aula Killing The Soul Gaming Mouse settings to default?</h3>
88
- <p>A: You can reset your Aula Killing The Soul Gaming Mouse settings to default using <strong>AULA Software V1.0.exe</strong>. Simply launch the software and go to <strong>the setting tab</strong>. There, you can click on <strong>the restore button</strong> to reset your mouse settings to factory default.</p>
89
- <h3>Q: Where can I find more information about Aula Killing The Soul Gaming Mouse and its driver?</h3>
90
- <p>A: You can find more information about Aula Killing The Soul Gaming Mouse and its driver on <a href="https://aulagaming.eu/en-us">the official website of Aula Gaming</a>. There, you can see <a href="https://aulagaming.eu/en-us/aula/mice/aula-killing-soul-v2-gaming-mouse">the product page for this gaming mouse</a>, which includes <strong>the features</strong>, <strong>the specifications</strong>, <strong>the gallery</strong>, <strong>the reviews</strong>, <strong>and <a href="https://aulagaming.eu/en-us/support/aula/mice/aula-killing-soul-expert-gaming-mouse">the FAQ section</a></strong>. You can also browse other products from Aula Gaming or contact their customer service.</p>
91
- </p> 0a6ba089eb<br />
92
- <br />
93
- <br />
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Minitool Power Data Recovery.md DELETED
@@ -1,18 +0,0 @@
-
- <h1>Why You Should Not Crack MiniTool Power Data Recovery</h1>
- <p>MiniTool Power Data Recovery is a professional data recovery software that can help you recover deleted or lost files from various devices, such as Windows PC, external hard drive, USB drive, SD card, memory card, and more. It supports 100+ types of files, including photos, videos, audios, documents, and more. It also offers features such as quick scan, deep scan, preview before recovery, load previous scan result, and more.</p>
- <p>If you are looking for a way to crack MiniTool Power Data Recovery, you may be tempted to use some websites that offer illegal copies of the software with serial keys, license codes, keygens, or cracks. However, this is not a good idea for several reasons. First of all, cracking MiniTool Power Data Recovery is illegal and unethical. You are violating the copyright and license agreement of the software developer. Second, cracking MiniTool Power Data Recovery can expose your computer to malware and viruses that can harm your system and compromise your data. Third, cracking MiniTool Power Data Recovery can result in poor performance and compatibility issues with your system and other programs.</p>
- <h2>crack minitool power data recovery</h2><br /><p><b><b>Download</b> &#10037; <a href="https://byltly.com/2uKxDF">https://byltly.com/2uKxDF</a></b></p><br /><br />
- <p>Therefore, the best way to use MiniTool Power Data Recovery is to download it from the official website. You can download a free trial version of MiniTool Power Data Recovery that allows you to recover up to 1 GB of data for free. After that, you can either buy a license to continue using the software or uninstall it from your system.</p>
- <p>To download MiniTool Power Data Recovery from the official website, follow these steps:</p>
- <ol>
- <li>Go to <a href="https://www.minitool.com/data-recovery-software/free-for-windows.html">https://www.minitool.com/data-recovery-software/free-for-windows.html</a> and click on the "Free Download" button.</li>
- <li>Save the file to your computer and run it.</li>
- <li>Follow the installation instructions and launch the software.</li>
- <li>Select the device and scan mode you want to recover data from.</li>
- <li>Preview and select the files you want to recover and save them to a different location.</li>
- </ol>
- <p>If you want to buy a license for MiniTool Power Data Recovery after the trial period expires, you can do so from the same website. The price of a single-user license is $69 USD. You can also buy monthly subscription or annual subscription licenses for personal use or ultimate perpetual license for 3 PCs.</p>
- <p>MiniTool Power Data Recovery is a reliable and effective data recovery software that can help you recover your precious data in various data loss situations. By downloading it from the official website, you can ensure that you get a safe and legal copy of the software that works well with your system and other programs. Don't risk cracking MiniTool Power Data Recovery from unreliable sources that can harm your computer and data.</p> ddb901b051<br />
- <br />
- <br />
 
spaces/1gistliPinn/ChatGPT4/Examples/Ab Tumhare Hawale Watan Sathiyo Full Movie in HD Tamil 1080p A Tribute to the Indian Army.md DELETED
@@ -1,6 +0,0 @@
- <h2>hd tamil Ab Tumhare Hawale Watan Sathiyo 1080p</h2><br /><p><b><b>Download</b> &rArr; <a href="https://imgfil.com/2uxYB6">https://imgfil.com/2uxYB6</a></b></p><br /><br />
- <br />
- aaccfb2cb3<br />
- <br />
- <br />
- <p></p>
 
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Descargar Hungry Shark World APK El juego oficial de la Semana del Tiburn.md DELETED
@@ -1,117 +0,0 @@
1
- <br />
2
- <h1>Hungry Shark World: A Game Review</h1>
3
- <p>Are you looking for a fun and exciting game that lets you experience being a shark in a feeding frenzy? If so, you might want to check out <strong>Hungry Shark World</strong>, a game developed by Ubisoft Entertainment that is available for Android devices. In this game, you can control a shark of your choice and eat your way through various oceans, feasting on everything from fish and birds to whales and humans. You can also explore different locations, customize your shark, complete missions, fight bosses, recruit pets, and more. In this article, we will review the features of Hungry Shark World, show you how to download the APK file for Android, and give you our verdict on the game.</p>
4
- <h2>hungry shark world descargar apk</h2><br /><p><b><b>Download File</b> &#127775; <a href="https://urlin.us/2uSZsQ">https://urlin.us/2uSZsQ</a></b></p><br /><br />
5
- <h2>Features of Hungry Shark World</h2>
6
- <p>Hungry Shark World is a game that offers a lot of features for shark lovers. Here are some of them:</p>
7
- <h3>41 Species of Sharks</h3>
8
- <p>One of the main attractions of Hungry Shark World is that you can choose from a range of sharks in eight different size tiers, including the iconic ocean predator: the Great White. Each shark has its own stats, abilities, and appearance. You can unlock more sharks as you progress in the game and collect coins and gems.</p>
9
- <h3>Huge Open Worlds</h3>
10
- <p>Another feature of Hungry Shark World is that you can explore various oceans and locations, each with its own theme, scenery, prey, enemies, secrets, and challenges. You can visit the lush Pacific Islands, the frozen Arctic Ocean, the exotic Arabian Sea, and the South China Sea, a vibrant urban destination full of fresh victims.</p>
11
- <h3>Feast for Your Eyes</h3>
12
- <p>Hungry Shark World also boasts stunning console quality 3D graphics that will blow everything else out of the water. The game has realistic animations, lighting effects, shadows, reflections, and water physics. You can also enjoy the game in full HD with redefined and fully optimized gamepad controls.</p>
13
- <h3>Survival of the Hungriest</h3>
14
- <p>The core gameplay of Hungry Shark World is simple but addictive. You have to eat as much as you can to survive and grow bigger. You can eat anything that moves or doesn't move in your way, from bite-size fish and birds to tasty whales and unwitting humans. But be careful not to bite off more than you can chew. There are also dangers lurking in the water, such as mines, jellyfish, submarines, and other sharks.</p>
15
- <h3>Smashing Shark Swag</h3>
16
- <p <p>As you play Hungry Shark World, you can also level up your shark and equip it with various gadgets, skins, and accessories to enhance your abilities and style. You can use jetpacks, lasers, umbrellas, hats, headphones, and more. You can also unlock different skins for your shark, such as the Robo Shark, the Zombie Shark, and the Tiger Shark.</p>
17
- <h3>Manic Missions and Badass Bosses</h3>
18
- <p>To spice up your gameplay, Hungry Shark World also offers a variety of missions, hunts, and fights that you can take on to earn rewards and trophies. You can complete daily missions, special missions, and bonus missions to get coins and gems. You can also hunt down specific prey or enemies to get extra points and bonuses. And if you are feeling brave, you can challenge yourself to fight against powerful bosses, such as the Giant Crab, the King Squid, and the Colossal Squid.</p>
19
- <h3>Helpful Predatory Pets</h3>
20
- <p>If you need some help in your feeding frenzy, you can also recruit some predatory pets to assist you. You can choose from a range of animals that will follow you around and help you eat more, such as baby sharks, whales, octopus, and even a bald eagle. Each pet has its own ability and personality. You can unlock more pets as you play the game and collect eggs.</p>
21
- <h3>Supersized Meal Deal</h3>
22
- <p>Hungry Shark World also lets you unleash your shark's potential with special modes, powers, and effects that will make you unstoppable. You can activate the Gold Rush mode to turn everything into gold and multiply your score. You can also use the Mega Gold Rush mode to turn everything into mega gold and multiply your score even more. You can also use special powers like the Freeze Blast, the Fireball, and the Lightning Blast to freeze, burn, or electrocute your enemies.</p>
23
- <p>hungry shark world apk download free<br />
24
- hungry shark world apk mod unlimited money<br />
25
- hungry shark world apk obb latest version<br />
26
- hungry shark world apk android oyun club<br />
27
- hungry shark world apk pure no ads<br />
28
- hungry shark world apk revdl offline<br />
29
- hungry shark world apk hack mega<br />
30
- hungry shark world apk uptodown español<br />
31
- hungry shark world apk data highly compressed<br />
32
- hungry shark world apk rexdl unlocked<br />
33
- hungry shark world descargar gratis para android<br />
34
- hungry shark world descargar sin internet<br />
35
- hungry shark world descargar ultima version 2023<br />
36
- hungry shark world descargar con todo desbloqueado<br />
37
- hungry shark world descargar por mediafire<br />
38
- hungry shark world descargar para pc windows 10<br />
39
- hungry shark world descargar juego completo<br />
40
- hungry shark world descargar en español<br />
41
- hungry shark world descargar mod menu<br />
42
- hungry shark world descargar para ios<br />
43
- descargar e instalar hungry shark world apk<br />
44
- descargar e instalar hungry shark world para pc<br />
45
- como descargar e instalar hungry shark world hackeado<br />
46
- como descargar e instalar hungry shark world en android<br />
47
- como descargar e instalar hungry shark world sin play store<br />
48
- como descargar e instalar hungry shark world gratis<br />
49
- como descargar e instalar hungry shark world mega mod<br />
50
- como descargar e instalar hungry shark world ultima actualizacion<br />
51
- como descargar e instalar hungry shark world con datos obb<br />
52
- como descargar e instalar hungry shark world en pc sin emulador<br />
53
- bajar juego de hungry shark world apk gratis<br />
54
- bajar juego de hungry shark world para android sin internet<br />
55
- bajar juego de hungry shark world hackeado con todo infinito<br />
56
- bajar juego de hungry shark world para pc full español<br />
57
- bajar juego de hungry shark world ultima version 2023 mega<br />
58
- bajar juego de hungry shark world mod menu vip<br />
59
- bajar juego de hungry shark world con datos sd<br />
60
- bajar juego de hungry shark world en español latino<br />
61
- bajar juego de hungry shark world sin anuncios ni compras integradas<br />
62
- bajar juego de hungry shark world desde google play store<br />
63
- como jugar a hungry shark world apk sin conexion a internet<br />
64
- como jugar a hungry shark world apk con joystick bluetooth<br />
65
- como jugar a hungry shark world apk en pantalla completa<br />
66
- como jugar a hungry shark world apk con amigos online multiplayer <br />
67
- como jugar a hungry shark world apk con trucos y consejos <br />
68
- como jugar a hungry shark world apk en modo historia <br />
69
- como jugar a hungry shark world apk en modo extincion <br />
70
- como jugar a hungry shark world apk con todos los tiburones desbloqueados <br />
71
- como jugar a hungry shark world apk en pc con teclado y mouse</p>
72
- <h3>Extinction Mode</h3>
73
- <p>The ultimate feature of Hungry Shark World is the Extinction Mode, where you can save the world from destruction by activating Apex sharks and rampaging through the ocean. Apex sharks are the most powerful sharks in the game that have unique abilities and appearances. You can unlock them by completing all the missions in each location. Once you have unlocked them, you can use them to destroy meteors that are threatening to wipe out life on Earth.</p>
74
- <h2>How to Download Hungry Shark World APK for Android</h2>
75
- <p>If you want to play Hungry Shark World on your Android device, you can download it from the Google Play Store for free. However, if you want to enjoy the game without ads or in-app purchases, and access all the features and content without restrictions, you can download the APK file from a trusted source. Here is how:</p>
76
- <h3>Requirements</h3>
77
- <p>Before you download the APK file, make sure that your device meets the following requirements:</p>
78
- <ul>
79
- <li>Your device must have Android 5.0 or higher.</li>
80
- <li>Your device must have at least 2 GB of RAM.</li>
81
- <li>Your device must have at least 1 GB of free storage space.</li>
82
- <li>You must enable unknown sources in your device settings.</li>
83
- </ul>
84
- <h3>Steps</h3>
85
- <p>Follow these steps to download and install the APK file:</p>
86
- <ol>
87
- <li>Go to a trusted website that offers Hungry Shark World APK file for Android. For example, you can go to [this link].</li>
88
- <li>Click on the download button and wait for the file to be downloaded.</li>
89
- <li>Locate the file in your device's file manager and tap on it.</li>
90
- <li>Follow the instructions on the screen to install the app.</li>
91
- <li>Launch the app and enjoy playing Hungry Shark World.</li>
92
- </ol>
93
- <h3>Benefits</h3>
94
- <p>By downloading the APK file for Android, you can enjoy these benefits:</p>
95
- <ul>
96
- <li>You can play Hungry Shark World without ads or in-app purchases.</li>
97
- <li>You can access all the features and content of the game without restrictions.</li>
98
- <li>You can play Hungry Shark World offline without an internet connection.</li>
99
- <li>You can update Hungry Shark World manually whenever a new version is available.</li>
100
- </ul>
101
- <h2>Conclusion</h2>
102
- <p>Hungry Shark World is a game that will satisfy your appetite for fun and adventure. It is a game that lets you control a shark of your choice and eat everything in your way. It is a game that offers a lot of features for shark lovers, such as 41 species of sharks, huge open worlds, stunning 3D graphics, smashing shark swag, manic missions and badass bosses, helpful predatory pets, supersized meal deal, and extinction mode. It <p>It is a game that you can download for free from the Google Play Store, or you can download the APK file from a trusted source and enjoy the game without ads or in-app purchases. If you are a fan of sharks, or you just want to have some fun, you should definitely give Hungry Shark World a try. You will not regret it.</p>
103
- <p>So, what are you waiting for? Download Hungry Shark World today and unleash your inner shark!</p>
104
- <h2>FAQs</h2>
105
- <p>Here are some frequently asked questions about Hungry Shark World:</p>
106
- <h3>Q: How do I get more coins and gems in Hungry Shark World?</h3>
107
- <p>A: You can get more coins and gems by completing missions, hunts, and fights, by activating Gold Rush and Mega Gold Rush modes, by watching ads, or by buying them with real money. However, if you download the APK file, you can get unlimited coins and gems without spending any money.</p>
108
- <h3>Q: How do I unlock more sharks in Hungry Shark World?</h3>
109
- <p>A: You can unlock more sharks by collecting coins and gems, by leveling up your current shark, or by buying them with real money. However, if you download the APK file, you can unlock all the sharks without spending any money.</p>
110
- <h3>Q: How do I play Hungry Shark World offline?</h3>
111
- <p>A: You can play Hungry Shark World offline by downloading the APK file and installing it on your device. You will not need an internet connection to play the game, except for updating it.</p>
112
- <h3>Q: Is Hungry Shark World safe to download and play?</h3>
113
- <p>A: Yes, Hungry Shark World is safe to download and play, as long as you download it from the Google Play Store or a trusted source. The game does not contain any viruses or malware that can harm your device or data.</p>
114
- <h3>Q: Is Hungry Shark World suitable for children?</h3>
115
- <p>A: Hungry Shark World is rated 12+ on the Google Play Store, which means that it contains moderate violence, blood, and gore. The game may not be suitable for younger children who may be scared or disturbed by the graphic depictions of sharks eating humans and other animals. Parents should use their discretion and supervise their children when playing the game.</p> 197e85843d<br />
116
- <br />
117
- <br />
 
spaces/1phancelerku/anime-remove-background/Cara Download APK CarX Street Game Balap Mobil Terbaru 2023.md DELETED
@@ -1,145 +0,0 @@
1
- <br />
2
- <h1>Cara Download APK CarX Street: Panduan Lengkap untuk Pecinta Game Balap</h1>
3
- <p>CarX Street adalah game balap yang menawarkan fisika mobil yang realistis dan drifting berkecepatan tinggi. Game ini juga memiliki berbagai jenis peta dari seluruh dunia, dan pemain dapat memilih dari beberapa mode game yang berbeda. Pemain dapat bersaing melawan pemain lain, atau berpartisipasi dalam balapan dan acara.</p>
4
- <h2>cara download apk carx street</h2><br /><p><b><b>Download Zip</b> &#10003; <a href="https://jinyurl.com/2uNKNb">https://jinyurl.com/2uNKNb</a></b></p><br /><br />
5
- <p>Game ini saat ini hanya tersedia di beberapa negara untuk perangkat Android dan iOS, tetapi akan segera hadir di konsol dan PC. Jika Anda ingin mencoba game ini, Anda perlu tahu cara mendownload APK CarX Street di perangkat Anda. Berikut adalah panduan lengkapnya.</p>
6
- <h2>Apa itu CarX Street?</h2>
7
- <p>CarX Street adalah game balap yang dikembangkan oleh CarX Technologies, LLC, pembuat game populer seperti CarX Drift Racing 2 dan CarX Highway Racing. Game ini merupakan game balap jalanan open world yang memungkinkan pemain untuk menjelajahi kota besar dan sekitarnya, dari jalan-jalan kota yang ramai hingga jalan-jalan pegunungan yang berliku dan jalan raya pantai yang memukau.</p>
8
- <p>Pemain dapat membangun mobil impian mereka dengan menggunakan opsi tuning yang mendetail, yang membuka semua fisika perilaku mobil dari teknologi CarX. Pemain juga dapat menyesuaikan penampilan mobil mereka dengan berbagai aksesoris dan warna. Selain itu, pemain dapat bergabung dengan klub, mengalahkan bos, dan membuktikan kepada semua orang bahwa mereka adalah pengemudi terbaik di kota ini.</p>
9
- <h3>Fitur-fitur menarik dari CarX Street</h3>
10
- <p>Berikut adalah beberapa fitur menarik yang ditawarkan oleh game CarX Street:</p>
11
- <p>cara download apk carx street versi terbaru 2022<br />
12
- cara download apk carx street mod unlimited money<br />
13
- cara download apk carx street dari google play store<br />
14
- cara download apk carx street + file obb lengkap<br />
15
- cara download apk carx street di android gratis<br />
16
- cara download apk carx street tanpa root<br />
17
- cara download apk carx street offline<br />
18
- cara download apk carx street dengan mudah dan cepat<br />
19
- cara download apk carx street update terbaru<br />
20
- cara download apk carx street no ads<br />
21
- cara download apk carx street full unlocked<br />
22
- cara download apk carx street untuk pc<br />
23
- cara download apk carx street menggunakan vpn<br />
24
- cara download apk carx street lewat browser<br />
25
- cara download apk carx street melalui link alternatif<br />
26
- cara download apk carx street hack version<br />
27
- cara download apk carx street dengan fitur premium<br />
28
- cara download apk carx street tanpa verifikasi<br />
29
- cara download apk carx street anti banned<br />
30
- cara download apk carx street original<br />
31
- cara download apk carx street terbaik 2022<br />
32
- cara download apk carx street support semua hp android<br />
33
- cara download apk carx street spesifikasi rendah<br />
34
- cara download apk carx street ukuran kecil<br />
35
- cara download apk carx street gameplay menarik<br />
36
- cara download apk carx street grafis hd<br />
37
- cara download apk carx street suara realistis<br />
38
- cara download apk carx street kontrol mudah<br />
39
- cara download apk carx street mode multiplayer<br />
40
- cara download apk carx street banyak pilihan mobil<br />
41
- cara download apk carx street customisasi mobil keren<br />
42
- cara download apk carx street balapan liar di jalanan<br />
43
- cara download apk carx street tantangan seru dan menegangkan<br />
44
- cara download apk carx street open world luas dan dinamis<br />
45
- cara download apk carx street event dan misi beragam<br />
46
- cara download apk carx street reward dan bonus menarik<br />
47
- cara download apk carx street rating dan review positif<br />
48
- cara download apk carx street developer terpercaya dan profesional<br />
49
- cara download apk carx street tips dan trik bermainnya<br />
50
- cara download apk carx street panduan lengkap instalasinya</p>
51
- <ul>
52
- <li>Fisika mobil yang realistis dan kontrol yang mengesankan yang membuat Anda menjadi penguasa mobil Anda.</li>
53
- <li>Grafis modern berkualitas tinggi dan dunia terbuka yang luas.</li>
54
- <li>Berbagai jenis mobil untuk dipilih, mulai dari sedan hingga supercar.</li>
55
- <li>Sistem tuning yang mendetail yang memungkinkan Anda mengganti bagian-bagian mobil Anda dan menyesuaikannya untuk balapan tertentu.</li>
56
- <li>Mode karir yang menantang yang mengharuskan Anda bergabung dengan klub, mengalahkan bos, dan membuka potensi 100% dari mobil Anda.</li>
57
- <li>Mode online yang memungkinkan Anda bertemu dengan pemain lain di dunia nyata, mengobrol, berpacu, atau berdrift bersama.</li>
58
- <li>Perubahan siang/malam dinamis. Anda dapat mengemudi kapan saja siang atau malam hari.</li>
59
- <li>Beli rumah untuk mobil Anda dan kumpulkan koleksi untuk setiap mode balapan.</li>
60
- <li>Isi bensin dengan gas yang tepat untuk balapan berikutnya di pom bensin kota.</li>
61
- </ul <h3>Cara download CarX Street di Android</h3>
62
- <p>Ada beberapa cara untuk mendownload CarX Street di perangkat Android Anda, tergantung pada negara Anda dan preferensi Anda. Berikut adalah beberapa cara yang dapat Anda coba:</p>
63
- <h4>Melalui Google Play Store</h4>
64
- <p>Cara termudah untuk mendownload CarX Street di Android adalah melalui Google Play Store. Namun, game ini hanya tersedia di beberapa negara tertentu, seperti Rusia, Ukraina, Belarus, dan Kazakhstan. Jika Anda berada di salah satu negara ini, Anda dapat mengikuti langkah-langkah berikut:</p>
65
- <ol>
66
- <li>Buka Google Play Store di perangkat Android Anda.</li>
67
- <li>Ketik "CarX Street" di kolom pencarian dan tekan enter.</li>
68
- <li>Pilih game CarX Street dari daftar hasil dan klik tombol "Install".</li>
69
- <li>Tunggu hingga proses instalasi selesai dan nikmati game CarX Street di perangkat Anda.</li>
70
- </ol>
71
- <h4>Melalui APKCombo</h4>
72
- <p>Jika Anda tidak berada di salah satu negara yang didukung oleh Google Play Store, Anda dapat mencoba menggunakan situs web APKCombo untuk mendownload APK CarX Street. APKCombo adalah situs web yang menyediakan berbagai file APK dari berbagai aplikasi dan game Android. Anda dapat mengikuti langkah-langkah berikut:</p>
73
- <ol>
74
- <li>Buka situs web APKCombo di browser Anda. Anda dapat menggunakan tautan ini: <a href="">https://apkcombo.com/</a></li>
75
- <li>Ketik "CarX Street" di kolom pencarian dan tekan enter.</li>
76
- <li>Pilih game CarX Street dari daftar hasil dan klik tombol "Download APK".</li>
77
- <li>Pilih versi APK yang sesuai dengan perangkat Anda dan klik tombol "Download".</li>
78
- <li>Setelah file APK selesai didownload, buka file manager di perangkat Anda dan cari file APK CarX Street.</li>
79
- <li>Ketuk file APK CarX Street dan izinkan instalasi dari sumber yang tidak dikenal jika diminta.</li>
80
- <li>Tunggu hingga proses instalasi selesai dan nikmati game CarX Street di perangkat Anda.</li>
81
- </ol>
82
- <h4>Melalui Mod APK</h4>
83
- <p>Jika Anda ingin mendapatkan fitur tambahan atau keuntungan dalam game CarX Street, Anda dapat mencoba menggunakan Mod APK. Mod APK adalah file APK yang telah dimodifikasi oleh pihak ketiga untuk memberikan fitur atau fungsi yang tidak ada dalam versi aslinya. Namun, Anda harus berhati-hati saat menggunakan Mod APK, karena bisa saja mengandung virus atau malware yang dapat merusak perangkat Anda. Selain itu, penggunaan Mod APK juga dapat menyebabkan akun Anda diblokir oleh pengembang game. Jika Anda tetap ingin mencoba Mod APK, Anda dapat mengikuti langkah-langkah berikut:</p>
84
- <ol>
85
- <li>Buka situs web Mod APK yang andal dan terpercaya di browser Anda. Salah satu contohnya adalah <a href="">https://www.happymod.com/</a></li>
86
- <li>Ketik "CarX Street" di kolom pencarian dan tekan enter.</li>
87
- <li>Pilih Mod APK CarX Street dari daftar hasil dan baca deskripsi fitur-fiturnya.</li>
88
- <li>Jika Anda tertarik, klik tombol "Download" dan tunggu hingga file Mod APK selesai didownload.</li>
89
- <li>Setelah file Mod APK selesai didownload, buka file manager di perangkat Anda dan cari file Mod APK CarX Street.</li>
90
- <li>Ketuk file Mod APK CarX Street dan izinkan instalasi dari sumber yang tidak dikenal jika diminta.</li>
91
- <li>Tunggu hingga proses instalasi selesai dan nikmati game CarX Street dengan fitur tambahan di perangkat Anda.</li>
92
- </ol> <h3>Cara download CarX Street di PC dan Mac</h3>
93
- <p>Bagi Anda yang ingin bermain CarX Street di PC atau Mac, Anda juga memiliki beberapa pilihan, yaitu melalui Steam atau melalui emulator. Berikut adalah penjelasan masing-masing cara:</p>
94
- <h4>Melalui Steam</h4>
95
- <p>Cara paling nyaman untuk bermain CarX Street di PC atau Mac adalah melalui Steam, platform distribusi digital yang menyediakan berbagai game dan aplikasi. Namun, game ini belum dirilis secara resmi di Steam, dan masih dalam tahap pengembangan. Jika Anda ingin mengikuti perkembangan game ini, Anda dapat mengikuti langkah-langkah berikut:</p>
96
- <ol>
97
- <li>Buka situs web resmi CarX Street di browser Anda. Anda dapat menggunakan tautan ini: <a href="">https://carx-tech.com/carx-street/</a></li>
98
- <li>Gulir ke bawah hingga Anda menemukan bagian "Join the beta test".</li>
99
- <li>Klik tombol "Steam" dan tunggu hingga muncul halaman Steam.</li>
100
- <li>Klik tombol "Follow" untuk mengikuti game CarX Street di Steam.</li>
101
- <li>Anda juga dapat mengklik tombol "Add to your wishlist" untuk menambahkan game CarX Street ke daftar keinginan Anda di Steam.</li>
102
- <li>Anda akan mendapatkan notifikasi dari Steam jika game CarX Street sudah tersedia untuk didownload dan dimainkan.</li>
103
- </ol>
104
- <h4>Melalui Emulator</h4>
105
- <p>Jika Anda tidak sabar menunggu game CarX Street dirilis di Steam, Anda dapat mencoba menggunakan emulator untuk menjalankan game CarX Street di PC atau Mac. Emulator adalah aplikasi yang memungkinkan Anda untuk menjalankan aplikasi atau game Android di perangkat lain. Namun, Anda harus berhati-hati saat menggunakan emulator, karena bisa saja mengandung virus atau malware yang dapat merusak perangkat Anda. Selain itu, penggunaan emulator juga dapat menyebabkan kinerja game yang tidak optimal atau masalah kompatibilitas. Jika Anda tetap ingin mencoba emulator, Anda dapat mengikuti langkah-langkah berikut:</p>
106
- <ol>
107
- <li>Buka situs web emulator Android yang andal dan terpercaya di browser Anda. Salah satu contohnya adalah <a href="">https://www.bluestacks.com/</a></li>
108
- <li>Download dan instal emulator Android di PC atau Mac Anda.</li>
109
- <li>Buka emulator Android dan masuk dengan akun Google Anda.</li>
110
- <li>Buka Google Play Store atau situs web APKCombo di emulator Android dan cari game CarX Street.</li>
111
- <li>Download dan instal game CarX Street di emulator Android.</li>
112
- <li>Nikmati game CarX Street di PC atau Mac Anda dengan menggunakan emulator Android.</li>
113
- </ol> <h2>Tips dan trik bermain CarX Street</h2>
114
- <p>Setelah Anda berhasil mendownload dan memainkan game CarX Street, Anda mungkin ingin mengetahui beberapa tips dan trik untuk meningkatkan keterampilan dan pengalaman bermain Anda. Berikut adalah beberapa tips dan trik yang dapat Anda coba:</p>
115
- <h3>Ikuti tutorialnya</h3>
116
- <p>Salah satu hal pertama yang harus Anda lakukan saat memulai game CarX Street adalah mengikuti tutorialnya. Tutorial ini akan mengajarkan Anda dasar-dasar mengemudi, drifting, tuning, dan mode game yang berbeda. Tutorial ini juga akan memberi Anda beberapa hadiah, seperti uang, mobil, dan aksesoris. Jadi, jangan lewatkan tutorial ini jika Anda ingin mempelajari semua fitur game CarX Street.</p>
117
- <h3>Jelajahi kota untuk mendapatkan hadiah</h3>
118
- <p>Game CarX Street memiliki dunia terbuka yang luas yang dapat Anda jelajahi dengan bebas. Anda dapat mengemudi di mana saja di kota dan sekitarnya, dan menemukan berbagai hal menarik. Misalnya, Anda dapat menemukan kotak hadiah yang berisi uang, mobil, atau aksesoris. Anda juga dapat menemukan tempat-tempat rahasia yang menyembunyikan tantangan atau misi khusus. Jadi, jangan ragu untuk menjelajahi kota dan mendapatkan hadiah sebanyak mungkin.</p>
119
- <h3>Ikut serta dalam sprint dan klub</h3>
120
- <p>Salah satu cara untuk meningkatkan keterampilan dan reputasi Anda dalam game CarX Street adalah dengan ikut serta dalam sprint dan klub. Sprint adalah balapan singkat yang terjadi secara acak di kota. Anda dapat bergabung dengan sprint dengan mengikuti tanda panah hijau di peta. Sprint akan memberi Anda uang, poin pengalaman, dan poin reputasi jika Anda menang atau masuk dalam tiga besar.</p>
121
- <p>Klub adalah kelompok pemain yang memiliki tujuan bersama dalam game CarX Street. Anda dapat bergabung dengan klub yang sudah ada atau membuat klub sendiri. Klub akan memberi Anda manfaat seperti bonus uang, bonus poin pengalaman, bonus poin reputasi, dan akses ke acara klub eksklusif. Klub juga akan memungkinkan Anda untuk berinteraksi dengan pemain lain, berbagi mobil, dan bersaing dalam peringkat klub.</p>
122
- <h3>Pilih mobil terbaik dan sesuaikan penampilannya</h3>
123
- <p>Salah satu hal terpenting dalam game CarX Street adalah mobil Anda. Mobil Anda akan menentukan seberapa cepat, kuat, dan stabil Anda dalam balapan. Oleh karena itu, Anda harus memilih mobil terbaik yang sesuai dengan gaya dan preferensi Anda. Game CarX Street memiliki berbagai jenis mobil untuk dipilih, mulai dari sedan hingga supercar. Setiap mobil memiliki statistik yang berbeda, seperti kecepatan maksimum, akselerasi, rem, daya tahan, traksi, dan drift.</p>
124
- <p>Selain memilih mobil terbaik, Anda juga dapat menyesuaikan penampilannya dengan menggunakan opsi tuning yang mendetail. Anda dapat mengganti bagian-bagian mobil Anda, seperti mesin, transmisi, suspensi, ban, knalpot, turbo, nitro, dan lainnya. Anda juga dapat menyesuaikan warna mobil Anda, serta menambahkan aksesoris seperti stiker, lampu neon, sayap belakang, roda baru, dan lainnya. Dengan cara ini, Anda dapat membuat mobil impian Anda yang unik dan menarik.</p>
125
- <h2>Kesimpulan dan FAQ</h2>
126
- <p>CarX Street adalah game balap jalanan open world yang menawarkan fisika mobil yang realistis dan drifting berkecepatan tinggi. Game ini juga memiliki berbagai jenis peta dari seluruh dunia, dan pemain dapat memilih dari beberapa mode game yang berbeda. Pemain dapat bersaing melawan pemain lain, atau berpartisipasi dalam balapan dan acara.</p>
127
- <p>Game ini saat ini hanya tersedia di beberapa negara untuk perangkat Android dan iOS, tetapi akan segera hadir di konsol dan PC. Jika Anda ingin mencoba game ini, Anda perlu tahu cara mendownload APK CarX Street di perangkat Anda. Ada beberapa cara untuk mend ownload APK CarX Street di perangkat Anda, tergantung pada negara Anda dan preferensi Anda. Anda dapat menggunakan Google Play Store, APKCombo, Mod APK, App Store, TestFlight, atau Steam. Anda juga dapat menggunakan emulator Android untuk menjalankan game CarX Street di PC atau Mac.</p>
128
- <p>Setelah Anda berhasil mendownload dan memainkan game CarX Street, Anda mungkin ingin mengetahui beberapa tips dan trik untuk meningkatkan keterampilan dan pengalaman bermain Anda. Anda dapat mengikuti tutorialnya, jelajahi kota untuk mendapatkan hadiah, ikut serta dalam sprint dan klub, pilih mobil terbaik dan sesuaikan penampilannya, dan lainnya.</p>
129
- <p>Demikianlah artikel tentang cara download APK CarX Street. Semoga artikel ini bermanfaat dan informatif bagi Anda. Jika Anda memiliki pertanyaan atau saran, silakan tinggalkan komentar di bawah. Terima kasih telah membaca dan selamat bermain!</p>
130
- <h2>FAQ</h2>
131
- <p>Here are some frequently asked questions about CarX Street:</p>
132
- <ol>
133
- <li>Is CarX Street free to play?</li>
134
- <p>Yes, CarX Street is free to play. However, it also offers in-app purchases that let you buy money, cars, or accessories with real money.</p>
135
- <li>Does CarX Street require an internet connection?</li>
136
- <p>Yes, CarX Street requires a stable internet connection. The game uses it to download game data, save your progress, and interact with other players.</p>
137
- <li>Does CarX Street have an offline mode?</li>
138
- <p>No, CarX Street does not have an offline mode. You must always be connected to the internet to play.</p>
139
- <li>Is CarX Street safe for children?</li>
140
- <p>Not entirely. CarX Street is rated 12+ on the App Store and 3+ on the Google Play Store. It contains intense and dangerous racing scenes, as well as possible interactions with unknown players, so it is advisable to supervise children while they play.</p>
141
- <li>Does CarX Street support controllers?</li>
142
- <p>Yes, CarX Street supports controllers. You can use a Bluetooth controller to drive your car, and you can also adjust the controller button layout to suit your preferences.</p>
143
- </ol></p>

spaces/1phancelerku/anime-remove-background/Download Final Burn Neo APK and Play Retro Games on Your Android Phone.md DELETED
@@ -1,162 +0,0 @@
1
-
2
- <h1>How to Download and Install Final Burn Neo APK File for Android</h1>
3
- <p>If you are a fan of arcade games and retro consoles, you might have heard of Final Burn Neo, an emulator that supports hundreds of titles from various platforms. In this article, we will show you how to download and install the Final Burn Neo APK file for Android devices, so you can enjoy your favorite games on the go.</p>
4
- <h2>What is Final Burn Neo and what are its features</h2>
5
- <p>Final Burn Neo (also known as FBNeo or FBN) is a multi-system emulator that is based on the emulators FinalBurn and old versions of MAME. It is compatible with arcade games from Capcom, SNK, Sega, Data East, Cave, and many others, as well as consoles like Neo Geo, Sega Genesis, TurboGrafx-16, and more. It is also under active development, which means it is constantly updated with new features and bug fixes.</p>
6
- <h2>final burn neo apk download</h2><br /><p><b><b>DOWNLOAD</b> &#9999; <a href="https://jinyurl.com/2uNQ1r">https://jinyurl.com/2uNQ1r</a></b></p><br /><br />
7
- <p>Some of the features of Final Burn Neo include:</p>
8
- <ul>
9
- <li>High compatibility with a wide range of arcade games, including many popular titles from the 1980s and 1990s</li>
10
- <li>Advanced features such as save states, cheat codes, custom graphics filters, input presets, netplay, and RetroAchievements</li>
11
- <li>Wide compatibility with platforms and features supported by libretro, such as RetroArch</li>
12
- <li>Focus on playability and performance rather than accuracy, which means it can run smoothly on older or low-end devices</li>
13
- </ul>
14
- <h2>Why would you want to download and install the Final Burn Neo APK file</h2>
15
- <p>If you want to play arcade games and retro consoles on your Android device, you might be wondering why you should download and install the Final Burn Neo APK file instead of using other emulators. Here are some reasons why:</p>
16
- <p>final burn neo emulator apk download<br />
17
- final burn neo rom set apk download<br />
18
- final burn neo android apk download<br />
19
- final burn neo 1.0.0.2 apk download<br />
20
- final burn neo arcade emulator apk download<br />
21
- final burn neo full romset apk download<br />
22
- final burn neo latest version apk download<br />
23
- final burn neo retroarch core apk download<br />
24
- final burn neo cheats zip apk download<br />
25
- final burn neo bios files apk download<br />
26
- final burn neo windows xp apk download<br />
27
- final burn neo mac os x apk download<br />
28
- final burn neo linux sdl2 apk download<br />
29
- final burn neo raspberry pi apk download<br />
30
- final burn neo support pack apk download<br />
31
- final burn neo channel f games apk download<br />
32
- final burn neo seibu spi games apk download<br />
33
- final burn neo namco nb1 games apk download<br />
34
- final burn neo namco na1 games apk download<br />
35
- final burn neo game genie support apk download<br />
36
- final burn neo lowpass filter option apk download<br />
37
- final burn neo k007452 multiplier apk download<br />
38
- final burn neo cheat dat subsystem apk download<br />
39
- final burn neo big endian support apk download<br />
40
- final burn neo basic blitter fix apk download<br />
41
- final burn neo ym2151 buffered mode apk download<br />
42
- final burn neo demon front region fix apk download<br />
43
- final burn neo mighty guy sound fix apk download<br />
44
- final burn neo surprise attack hang fix apk download<br />
45
- final burn neo dragon world 3 fix apk download<br />
46
- final burn neo ecco the dolphin sms fix apk download<br />
47
- final burn neo sys16b sprite priority fix apk download<br />
48
- final burn neo wally wo sagase fix apk download<br />
49
- final burn neo paperboy rom fix apk download<br />
50
- final burn neo vector geometry fix apk download<br />
51
- final burn neo hiscore support update apk download<br />
52
- final burn neo macros for all inputs apk download<br />
53
- final burn neo cheeky mouse samples apk download<br />
54
- final burn neo console homebrew games apk download<br />
55
- final burn neo zx spectrum library update apk download<br />
56
- how to install final burn neo on android apk <br />
57
- how to use cheats in final burn neo android apk <br />
58
- how to play neogeo games in aes mode in final burn neo android apk <br />
59
- how to scan and rebuild roms for final burn neo android apk <br />
60
- how to import roms to retroarch list for final burn neo android apk <br />
61
- how to fix previews for arcade games in final burn neo android apk <br />
62
- how to play killer instinct games in final burn neo android apk <br />
63
- how to get netplay working in final burn neo android apk <br />
64
- best settings for performance and quality in final burn neo android apk</p>
65
- <ul>
66
- <li>The Final Burn Neo APK file is a standalone application that does not require any additional software or installation. You just need to download it, copy it to your device, and run it.</li>
67
- <li>The Final Burn Neo APK file is optimized for Android devices, which means it has a user-friendly interface, touch controls, and support for external controllers.</li>
68
- <li>The Final Burn Neo APK file is updated regularly with the latest version of the emulator core, which means you can enjoy the newest features and fixes without waiting for official releases.</li>
69
- <li>The Final Burn Neo APK file allows you to play games that are not supported by other emulators, such as Killer Instinct or Street Fighter III: Third Strike.</li>
70
- </ul>
71
- <h2>Requirements</h2>
72
- <h3>What are the minimum system requirements for running Final Burn Neo on Android</h3>
73
- <p>To run Final Burn Neo on your Android device, you will need:</p>
74
- <ul>
75
- <li>A device with a 64-bit processor and operating system</li>
76
- <li>Android 7.0 or higher</li>
77
- <li>At least 1 GB of RAM</li>
78
- <li>At least 100 MB of free storage space</li>
79
- <li>A fast internet connection for downloading files</li>
80
- </ul>
81
- <h3>What are the sources of the Final Burn Neo APK file and the ROM sets</h3>
82
- <p>To play games with Final Burn Neo on your Android device, you will need two things: the Final Burn Neo APK file and the ROM sets. The ROM sets are collections of files that contain the data of the games that you want to play. You will need to download them separately from the emulator.</p>
83
- <p>The sources of the Final Burn Neo APK file and the ROM sets are:</p>
84
- <ul>
85
- <li>The official GitHub repository of Final Burn Neo, where you can find the latest APK file and the source code of the emulator</li>
86
- <li>The official website of RetroArch, where you can find the latest ROM sets and the core information of Final Burn Neo</li>
87
- <li>The official website of RetroAchievements, where you can find the supported games and the achievements for Final Burn Neo</li>
88
- <li>The official website of Libretro, where you can find the documentation and the forums of Final Burn Neo</li>
89
- </ul>
90
- <p>Be careful when downloading files from other sources, as they might contain malware or viruses. Always scan your files before opening them.</p>
91
- <h2>Steps</h2>
92
- <h3>How to download and install the Final Burn Neo APK file</h3>
93
- <p>To download and install the Final Burn Neo APK file on your Android device, follow these steps:</p>
94
- <ol>
95
- <li>Go to the official GitHub repository of Final Burn Neo and click on the Releases tab.</li>
96
- <li>Find the latest release and click on the Assets dropdown menu.</li>
97
- <li>Download the file named fbneo.apk to your device.</li>
98
- <li>Open the file manager app on your device and locate the downloaded file.</li>
99
- <li>Tap on the file and allow the installation of unknown apps if prompted.</li>
100
- <li>Follow the on-screen instructions to complete the installation.</li>
101
- <li>You should see a new icon on your home screen or app drawer named FBNeo. Tap on it to launch the emulator.</li>
102
- </ol>
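<p>For reference, the manual download in the steps above can also be scripted from a computer. The sketch below is only illustrative: it assumes the <code>requests</code> package is installed and that the latest GitHub release of FBNeo ships an <code>.apk</code> asset, as the steps describe; actual asset names can vary between releases.</p>

```python
import requests

# Query the latest FBNeo release via the public GitHub API.
release = requests.get(
    "https://api.github.com/repos/finalburnneo/FBNeo/releases/latest",
    timeout=30,
).json()

# Look for an Android package among the release assets.
for asset in release.get("assets", []):
    if asset["name"].endswith(".apk"):
        data = requests.get(asset["browser_download_url"], timeout=300)
        with open(asset["name"], "wb") as f:
            f.write(data.content)
        print("Saved", asset["name"])
        break
else:
    print("No .apk asset found in the latest release.")
```

<p>After saving the file, copy it to your device and continue from step 4 above.</p>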
103
- <h3>How to download and extract the ROM sets</h3>
104
- <p>To download and extract the ROM sets for Final Burn Neo on your Android device, follow these steps:</p>
105
- <ol>
106
- <li>Go to the official website of RetroArch and click on the Downloads tab.</li>
107
- <li>Scroll down to the section named ROMs and click on the link that says FBNeo - Arcade Games.</li>
108
- <li>You will be redirected to a Google Drive folder that contains several ZIP files. Each ZIP file corresponds to a different arcade system or console supported by Final Burn Neo.</li>
109
- <li>Select the ZIP files that contain the games that you want to play and download them to your device. You can also download all of them if you have enough storage space.</li>
110
- <li>Open the file manager app on your device and locate the downloaded ZIP files.</li>
111
- <li>Create a new folder on your device named fbneo and move all the ZIP files to that folder.</li>
112
- <li>Extract all the ZIP files using a ZIP extractor app. You should see several folders inside the fbneo folder, each containing one or more ROM files. Do not rename or modify any of these files or folders.</li>
113
- </ol>
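<p>If you prefer to prepare the ROM sets on a computer before copying them to your device, the extraction in steps 5-7 can be done with a short script. This is a minimal sketch under the assumption that your downloaded ZIP files sit in a local <code>Download</code> folder; adjust the placeholder paths to your own setup.</p>

```python
import zipfile
from pathlib import Path

# Placeholder paths: point these at your downloads and at the fbneo folder.
downloads = Path("Download")
fbneo_dir = Path("fbneo")
fbneo_dir.mkdir(exist_ok=True)

# Extract every downloaded ZIP into its own subfolder of fbneo, mirroring
# steps 5-7 above. Nothing inside the archives is renamed or modified.
for archive in downloads.glob("*.zip"):
    target = fbneo_dir / archive.stem
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target)
    print("Extracted", archive.name, "->", target)
```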
114
- <h3>How to launch and configure Final Burn Neo on Android</h3>
115
- <p>To launch and configure Final Burn Neo on your Android device, follow these steps:</p>
116
- <ol>
117
- <li>Tap on the FBNeo icon on your home screen or app drawer to launch the emulator.</li>
118
- <li>You will see a list of games that are available for playing. You can scroll up and down to browse through them, or use the search bar at the top to find a specific game.</li>
119
- <li>To start a game, tap on its name and then tap on Run. The game will load and run in full screen mode.</li>
120
- <li>To access the emulator menu, swipe from the left edge of the screen to open a sidebar. Here you can adjust various settings, such as video, audio, input, cheats, save states, netplay, achievements, etc.</li>
121
- <li>To exit a game, swipe from the left edge of the screen to open the sidebar and tap on Quit. You will return to the game list.</li>
122
- </ol>
123
- <h2>Conclusion</h2>
124
- <p>In this article, we have shown you how to download and install the Final Burn Neo APK file for Android devices, as well as how to download and extract the ROM sets, and how to launch and configure the emulator. We hope that this guide has helped you to enjoy your favorite arcade games and retro consoles on your Android device.</p>
125
- <p>Here are some tips and tricks for using Final Burn Neo on Android:</p>
126
- <ul>
127
- <li>To change the orientation of the screen, go to the emulator menu and tap on Video. Then, tap on Screen Rotation and choose either Landscape or Portrait.</li>
128
- <li>To change the size and position of the touch controls, go to the emulator menu and tap on Input. Then, tap on Touch Overlay Settings and adjust the sliders as you wish.</li>
129
- <li>To use an external controller, go to the emulator menu and tap on Input. Then, tap on Port 1 Controls and select your controller from the list. You can also map the buttons of your controller by tapping on User 1 Bind All.</li>
130
- <li>To play online with other players, go to the emulator menu and tap on Netplay. Then, tap on Start Netplay Host or Connect to Netplay Host depending on whether you want to host or join a game. You will need to enter the IP address and port number of the host.</li>
131
- <li>To earn achievements for your games, go to the emulator menu and tap on Achievements. Then, tap on Login and enter your RetroAchievements username and password. You will see a list of achievements for each game that you can unlock by completing certain tasks.</li>
132
- </ul>
133
- <p>Do you have any feedback or questions about Final Burn Neo on Android? Feel free to share them in the comments section below. We would love to hear from you!</p>
134
- <h2>FAQs</h2>
135
- <h3>What are the best games to play with Final Burn Neo on Android?</h3>
136
- <p>There are hundreds of games that you can play with Final Burn Neo on Android, but some of the most popular ones are:</p>
137
- <ul>
138
- <li>Street Fighter II: The World Warrior</li>
139
- <li>Metal Slug</li>
140
- <li>The King of Fighters '98</li>
141
- <li>Sonic The Hedgehog</li>
142
- <li>Pac-Man</li>
143
- </ul>
144
- <h3>How can I update the Final Burn Neo APK file?</h3>
145
- <p>To update the Final Burn Neo APK file, you will need to download the latest version from the official GitHub repository and install it over the existing one. You do not need to uninstall the previous version or delete any files.</p>
146
- <h3>How can I add more ROM sets to Final Burn Neo?</h3>
147
- <p>To add more ROM sets to Final Burn Neo, you will need to download them from the official website of RetroArch and extract them to the fbneo folder on your device. You can also use a file manager app to copy or move ROM files from other sources to the fbneo folder.</p>
148
- <h3>How can I fix errors or crashes with Final Burn Neo?</h3>
149
- <p>If you encounter any errors or crashes with Final Burn Neo, you can try the following solutions:</p>
150
- <ul>
151
- <li>Make sure that your device meets the minimum system requirements for running Final Burn Neo.</li>
152
- <li>Make sure that you have downloaded and installed the correct APK file and ROM sets for Final Burn Neo.</li>
153
- <li>Make sure that you have enough storage space and RAM available on your device.</li>
154
- <li>Make sure that you have a stable internet connection for downloading files and playing online.</li>
155
- <li>Make sure that you have granted all the necessary permissions for Final Burn Neo to access your device's features.</li>
156
- <li>Clear the cache and data of Final Burn Neo from your device's settings.</li>
157
- <li>Reinstall Final Burn Neo from scratch.</li>
158
- </ul>
159
- <h3>How can I contact the developers of Final Burn Neo?</h3>
160
- <p>If you want to contact the developers of Final Burn Neo, you can do so by visiting their official website at https://github.com/finalburnneo/FBNeo. Here you can find their contact information, report issues, request features, contribute code, or donate money.</p>

spaces/1phancelerku/anime-remove-background/Download Orangedox and Secure Your Documents with Data Rooms.md DELETED
@@ -1,80 +0,0 @@
1
-
2
- <h1>How to Download Orangedox and Why You Should Do It</h1>
3
- <p>If you are looking for a way to securely share and track your documents online, you might want to consider using Orangedox. Orangedox is a cloud-based software that integrates with Google Drive, Gmail, and Dropbox, and allows you to control access, prevent forwarding, and monitor every interaction with your files. Whether you need to send proposals, contracts, invoices, or any other type of document, Orangedox can help you boost your productivity, efficiency, and security.</p>
4
- <h2>download orangedox</h2><br /><p><b><b>Download</b> === <a href="https://jinyurl.com/2uNJnr">https://jinyurl.com/2uNJnr</a></b></p><br /><br />
5
- <p>In this article, we will show you how to download Orangedox for different platforms and explain why you should do it. We will also answer some frequently asked questions about Orangedox and its features.</p>
6
- <h2>How to Download Orangedox for Google Drive</h2>
7
- <p>If you use Google Drive as your cloud storage service, you can easily connect it with Orangedox and start sharing and tracking your documents in minutes. Here are the steps you need to follow:</p>
8
- <h3>Step 1: Visit the <a href="">Orangedox website</a> and sign up for a free trial</h3>
9
- <p>You can choose from three pricing plans depending on your needs: Pro, Business, or Teams. Each plan comes with a 14-day free trial and no upfront credit card required. You can also opt for a free personal plan if you just want to publish your files online.</p>
10
- <h3>Step 2: Connect your Google Drive account to Orangedox</h3>
11
- <p>Once you sign up, you will be asked to authorize Orangedox to access your Google Drive account. This will allow Orangedox to sync your files and folders automatically without any file uploads or storage restrictions.</p>
12
- <h3>Step 3: Select the files or folders you want to share and track with Orangedox</h3>
13
- <p>After connecting your Google Drive account, you can browse through your files and folders and select the ones you want to share with your recipients. You can also create new files or folders within Orangedox. Then, you can customize your documents with branding, passwords, expiration dates, download limits, and more. Finally, you can generate a unique link for each document or folder and send it via email or any other channel.</p>
14
- <p>How to download orangedox for Google Drive<br />
15
- Download orangedox to track document views and downloads<br />
16
- Download orangedox for secure document sharing and data rooms<br />
17
- Download orangedox for marketing insights and sales leads<br />
18
- Download orangedox for Google Workspace integration<br />
19
- Download orangedox for professional branding of your documents<br />
20
- Download orangedox for email gating and lead capture<br />
21
- Download orangedox for document protection and control<br />
22
- Download orangedox for virtual data rooms and due diligence<br />
23
- Download orangedox for finance and venture capital<br />
24
- Download orangedox free trial and pricing options<br />
25
- Download orangedox personal plan for content creators<br />
26
- Download orangedox business plan for unlimited document sharing<br />
27
- Download orangedox teams plan for team management and collaboration<br />
28
- Download orangedox pro plan for secure document sharing and tracking<br />
29
- How to download orangedox from Google Drive or Gmail<br />
30
- How to download orangedox from the official website[^1^]<br />
31
- How to download orangedox from the app store[^2^]<br />
32
- How to download orangedox from the Chrome web store[^3^]<br />
33
- How to download orangedox from the Microsoft store<br />
34
- How to use orangedox after downloading it<br />
35
- How to update orangedox after downloading it<br />
36
- How to uninstall orangedox after downloading it<br />
37
- How to contact orangedox support after downloading it<br />
38
- How to access orangedox tutorials and guides after downloading it<br />
39
- Benefits of downloading orangedox for your business<br />
40
- Reviews of downloading orangedox from customers and users<br />
41
- Alternatives to downloading orangedox for document tracking and sharing<br />
42
- Comparisons of downloading orangedox with other document tools<br />
43
- FAQs about downloading orangedox and its features<br />
44
- Best practices for downloading orangedox and using it effectively<br />
45
- Tips and tricks for downloading orangedox and optimizing it for your needs<br />
46
- Case studies of downloading orangedox and its impact on business outcomes<br />
47
- Testimonials of downloading orangedox and its value proposition<br />
48
- Success stories of downloading orangedox and its ROI</p>
49
- <h2>How to Download Orangedox for Gmail</h2>
50
- <p>If you use Gmail as your email service, you can also benefit from using Orangedox to track your attachments. All you need is a Chrome browser and a simple plugin. Here are the steps you need to follow:</p>
51
- <h3>Step 1: Install the <a href="">Orangedox for Gmail Chrome plugin</a></h3>
53
- <h3>Step 2: Log in to your Orangedox account from Gmail</h3>
54
- <p>After installing the plugin, you will need to log in to your Orangedox account from Gmail. You can use the same credentials that you used to sign up for Orangedox on the website. This will link your Gmail account with your Orangedox account and enable the tracking features.</p>
55
- <h3>Step 3: Attach files from Google Drive or Dropbox and track them with Orangedox</h3>
56
- <p>When you compose a new email, you can click on the Orangedox button and select the files that you want to attach from Google Drive or Dropbox. You can also drag and drop files from your computer. Then, you can customize your attachments with branding, passwords, expiration dates, download limits, and more. Finally, you can send your email and get real-time notifications when your recipients open, view, or download your files.</p>
57
- <h2>How to Download Orangedox for Dropbox</h2>
58
- <p>If you use Dropbox as your cloud storage service, you can also connect it with Orangedox and enjoy the same benefits as with Google Drive. Here are the steps you need to follow:</p>
59
- <h3>Step 1: Visit the <a href="">Orangedox website</a> and sign up for a free trial</h3>
60
- <p>You can choose from three pricing plans depending on your needs: Pro, Business, or Teams. Each plan comes with a 14-day free trial and no upfront credit card required. You can also opt for a free personal plan if you just want to publish your files online.</p>
61
- <h3>Step 2: Connect your Dropbox account to Orangedox</h3>
62
- <p>Once you sign up, you will be asked to authorize Orangedox to access your Dropbox account. This will allow Orangedox to sync your files and folders automatically without any file uploads or storage restrictions.</p>
63
- <h3>Step 3: Select the files or folders you want to share and track with Orangedox</h3>
64
- <p>After connecting your Dropbox account, you can browse through your files and folders and select the ones you want to share with your recipients. You can also create new files or folders within Orangedox. Then, you can customize your documents with branding, passwords, expiration dates, download limits, and more. Finally, you can generate a unique link for each document or folder and send it via email or any other channel.</p>
65
- <h1>Conclusion</h1>
66
- <p>Orangedox is a powerful tool that can help you share and track your documents online with ease and security. It integrates with Google Drive, Gmail, and Dropbox, and allows you to control access, prevent forwarding, and monitor every interaction with your files. You can download Orangedox for different platforms by following the simple steps outlined in this article. You can also sign up for a free trial and test all the features of Orangedox before committing to a paid plan.</p>
67
- <p>If you want to take your document sharing and tracking to the next level, download Orangedox today and see the difference for yourself!</p>
68
- <h1>FAQs</h1>
69
- <h2>What are the pricing plans for Orangedox?</h2>
70
- <p>Orangedox offers three pricing plans for different needs: Pro, Business, and Teams. The Pro plan costs $10 per month and includes unlimited documents, 1000 views per month, custom branding, password protection, download limits, expiration dates, email notifications, analytics dashboard, and more. The Business plan costs $25 per month and includes everything in the Pro plan plus 5000 views per month, team collaboration, custom domains, document embedding, document watermarking, document editing, document signing, document templates, document workflows, document approvals, document requests, document reminders, document comments, document feedbacks, document ratings, document reviews, document annotations, document redactions, document conversions, document translations, document OCRs , and more. The Teams plan costs $50 per month and includes everything in the Business plan plus 10000 views per month, custom integrations, API access, priority support, and more. You can also opt for a free personal plan if you just want to publish your files online with no tracking features.</p>
71
- <h2>What are some alternatives to Orangedox?</h2>
72
- <p>Some alternatives to Orangedox are DocSend, PandaDoc, DocuSign, and Adobe Sign. These are also cloud-based software that allow you to share and track your documents online with various features. However, Orangedox has some advantages over these alternatives, such as its seamless integration with Google Drive, Gmail, and Dropbox, its advanced document customization and security options, its affordable pricing plans, and its user-friendly interface.</p>
73
- <h2>How secure is Orangedox?</h2>
74
- <p>Orangedox is very secure and reliable. It uses SSL encryption to protect your data in transit and at rest. It also complies with GDPR and CCPA regulations and respects your privacy. It does not store your files on its servers, but rather syncs them with your cloud storage service. It also allows you to control access to your documents by setting passwords, expiration dates, download limits, and more. You can also revoke access to your documents at any time or delete them permanently from Orangedox.</p>
75
- <h2>How can I customize my documents with Orangedox?</h2>
76
- <p>Orangedox allows you to customize your documents with various options to suit your needs and preferences. You can add your logo, brand colors, fonts, and images to your documents. You can also set passwords, expiration dates, download limits, and more to control access to your documents. You can also embed your documents on your website or blog with a simple code snippet. You can also watermark your documents with your name or logo to prevent unauthorized copying or forwarding. You can also edit your documents online with Orangedox's built-in editor or sign them electronically with Orangedox's signature feature.</p>
77
- <h2>How can I contact Orangedox support?</h2>
78
- <p>If you have any questions or issues with Orangedox, you can contact their support team via email at <a href="mailto:[email protected]">[email protected]</a> or via their online chat on their website. They are available 24/7 and will respond to you as soon as possible. You can also check their <a href="">help center</a> for answers to common questions and tutorials on how to use Orangedox.</p>

spaces/1phancelerku/anime-remove-background/Download Resident Evil 4 PSP ISO for PPSSPP Emulator 2019.md DELETED
@@ -1,143 +0,0 @@
1
- <br />
2
- <h1>How to Download and Play Resident Evil 4 on PSP Using PPSSPP Emulator</h1>
3
- <p>
4
- Resident Evil 4 is one of the most acclaimed games in the survival horror genre, released in 2005 for various platforms, including PlayStation 2, GameCube, Wii, PC, Xbox, and PlayStation 3. The game follows the adventures of Leon S. Kennedy, a former police officer who is sent to a rural area of Spain to rescue the kidnapped daughter of the US president from a mysterious cult. Along the way, he faces hordes of infected villagers, mutated creatures, and sinister enemies. The game features a third-person perspective, dynamic combat system, interactive environments, and multiple weapons and items. </p>
5
- <h2>resident evil 4 psp iso download 2019 ppsspp android emulator</h2><br /><p><b><b>Download File</b> ===== <a href="https://jinyurl.com/2uNSW7">https://jinyurl.com/2uNSW7</a></b></p><br /><br />
6
- <p>
7
- If you are a fan of Resident Evil 4 or want to experience this classic game for the first time, you might be wondering if you can play it on your PlayStation Portable (PSP) device. The answer is yes, you can, thanks to a powerful emulator called PPSSPP. PPSSPP is an open-source software that allows you to run PSP games on your Android device, as well as other platforms like Windows, Linux, Mac, iOS, etc. PPSSPP has many features that enhance your gaming experience, such as HD graphics, high rendering speed, smooth gameplay, amazing performance, texture filter, scale, etc. It also supports up to 36 languages. </p>
8
- <p>
9
- In this article, we will show you how to download and play Resident Evil 4 on PSP using PPSSPP emulator. We will guide you through the steps required to download and install PPSSPP emulator on Android, download and extract Resident Evil 4 PSP ISO file, configure PPSSPP settings for optimal performance and graphics, launch and play Resident Evil 4 on PSP using PPSSPP emulator. We will also provide you with a review of Resident Evil 4 game, as well as some tips and tricks for playing it. By the end of this article, you will be able to enjoy Resident Evil 4 on your PSP device with amazing quality and performance. So, let's get started! <h2>Resident Evil 4 Game Review</h2>
10
- <p>
11
- Before we dive into the technical details of how to download and play Resident Evil 4 on PSP using PPSSPP emulator, let's take a look at what makes this game so special and why you should play it. Here is a brief review of Resident Evil 4 game, covering its story, gameplay, graphics, sound, pros and cons. </p>
12
- <h3>Story</h3>
13
- <p>
14
- Resident Evil 4 is the sixth main installment in the Resident Evil series, which is known for its horror-themed action-adventure games. The game takes place in 2004, six years after the events of Resident Evil 2 and Resident Evil 3: Nemesis. The protagonist of the game is Leon S. Kennedy, who was one of the survivors of the Raccoon City incident in Resident Evil 2. Leon is now a special agent working for the US government, and he is assigned to rescue Ashley Graham, the daughter of the US president, who has been kidnapped by a mysterious cult called Los Illuminados. Leon travels to a rural area of Spain, where he encounters hostile villagers infected by a parasite called Las Plagas, which turns them into mindless zombies. He also faces other enemies, such as cult members, mercenaries, and mutated creatures. Leon must find Ashley and escape from the area, while uncovering the secrets behind Los Illuminados and Las Plagas. </p>
15
- <p>
16
- The story of Resident Evil 4 is engaging and thrilling, with many twists and turns along the way. The game has a cinematic presentation, with cutscenes and dialogues that advance the plot and develop the characters. The game also has multiple endings, depending on the choices you make during the game. The story of Resident Evil 4 is one of the best in the series, and it will keep you hooked until the end. </p>
17
- <p>resident evil 4 ppsspp iso file download for android<br />
18
- download resident evil 4 psp iso highly compressed 2019<br />
19
- resident evil 4 ppsspp android emulator gameplay<br />
20
- how to play resident evil 4 on psp with ppsspp<br />
21
- resident evil 4 psp iso free download full version<br />
22
- resident evil 4 ppsspp settings for best performance<br />
23
- resident evil 4 psp iso download link 2019<br />
24
- resident evil 4 ppsspp cheats and codes<br />
25
- resident evil 4 ppsspp android emulator requirements<br />
26
- resident evil 4 psp iso mod apk download<br />
27
- resident evil 4 ppsspp gold emulator download<br />
28
- resident evil 4 psp iso romsmania<br />
29
- resident evil 4 ppsspp save data download<br />
30
- resident evil 4 psp iso coolrom<br />
31
- resident evil 4 ppsspp android emulator review<br />
32
- resident evil 4 psp iso game size<br />
33
- resident evil 4 ppsspp graphics mod download<br />
34
- resident evil 4 psp iso english version<br />
35
- resident evil 4 ppsspp android emulator offline<br />
36
- resident evil 4 psp iso zip file download<br />
37
- resident evil 4 ppsspp best weapons and upgrades<br />
38
- resident evil 4 psp iso google drive<br />
39
- resident evil 4 ppsspp walkthrough and tips<br />
40
- resident evil 4 psp iso mediafire<br />
41
- resident evil 4 ppsspp android emulator apk</p>
42
- <h3>Gameplay</h3>
43
- <p>
44
- Resident Evil 4 is a survival horror game that combines action, adventure, and puzzle elements. The game features a third-person perspective, which allows you to see your character and the surroundings more clearly. The game also has an over-the-shoulder camera angle, which gives you more control over aiming and shooting. The game has a dynamic combat system, which lets you use various weapons and items to fight against enemies. You can also use melee attacks, such as kicks and suplexes, to stun or kill enemies. You can also interact with the environment, such as breaking barrels, opening doors, pushing objects, etc. The game has a health system that requires you to use herbs or first aid sprays to heal yourself. You can also upgrade your weapons and items by finding or buying them from a merchant who appears throughout the game. </p>
45
- <p>
46
- The gameplay of Resident Evil 4 is challenging and rewarding, with many options and strategies to deal with different situations. The game has a balanced difficulty level, which adapts to your performance and skills. The game also has a variety of enemies and bosses, each with their own strengths and weaknesses. The game also has some puzzle elements, which require you to use your logic and observation skills to solve them. The gameplay of Resident Evil 4 is fun and addictive, and it will keep you entertained for hours. </p> <h3>Graphics</h3>
47
- <p>
48
- Resident Evil 4 is a game that was originally designed for PlayStation 2 and GameCube, which had limited graphics capabilities compared to modern devices. However, thanks to PPSSPP emulator, you can play Resident Evil 4 on PSP with improved graphics quality and resolution. PPSSPP emulator allows you to adjust the graphics settings of the game, such as texture filter, scale, frame rate, etc. You can also use shaders and post-processing effects to enhance the visuals of the game. PPSSPP emulator also supports HD graphics, which means you can play Resident Evil 4 on PSP with up to 1080p resolution. </p>
49
- <p>
50
- The graphics of Resident Evil 4 on PSP using PPSSPP emulator are impressive and realistic, with detailed textures, lighting, shadows, and animations. The game also has a diverse and immersive environment, with different locations and scenarios, such as villages, castles, mines, islands, etc. The game also has a dynamic weather system, which changes the atmosphere and mood of the game. The graphics of Resident Evil 4 on PSP using PPSSPP emulator are stunning and beautiful, and they will make you feel like you are playing the game on a bigger screen. </p>
51
- <h3>Sound</h3>
52
- <p>
53
- Resident Evil 4 is a game that relies heavily on sound to create a tense and scary atmosphere. The game has a superb sound design, with realistic and immersive sound effects, such as gunshots, explosions, footsteps, screams, etc. The game also has a great voice acting, with professional and expressive actors who deliver the dialogues and emotions of the characters. The game also has a catchy and atmospheric music score, which complements the mood and tone of the game. The game also supports Dolby Surround Sound, which enhances the audio quality and surround effect of the game. </p>
54
- <p>
55
- The sound of Resident Evil 4 on PSP using PPSSPP emulator is clear and crisp, with no distortion or lag. PPSSPP emulator allows you to adjust the sound settings of the game, such as volume, stereo mode, latency, etc. You can also use headphones or external speakers to enjoy the sound of the game better. The sound of Resident Evil 4 on PSP using PPSSPP emulator is excellent and immersive, and it will make you feel like you are in the middle of the action. </p> <h3>Pros and Cons</h3>
56
- <p>
57
- Resident Evil 4 is a game that has many pros and cons, depending on your preferences and expectations. Here are some of the pros and cons of Resident Evil 4 on PSP using PPSSPP emulator: </p>
58
- <table>
59
- <tr>
60
- <th>Pros</th>
61
- <th>Cons</th>
62
- </tr>
63
- <tr>
64
- <td>- A thrilling and captivating story with multiple endings and characters.</td>
65
- <td>- A long and repetitive game with some backtracking and filler sections.</td>
66
- </tr>
67
- <tr>
68
- <td>- A fun and challenging gameplay with diverse combat and puzzle elements.</td>
69
- <td>- A difficult and frustrating gameplay with limited ammo and health resources.</td>
70
- </tr>
71
- <tr>
72
- <td>- A stunning and realistic graphics with HD resolution and enhanced effects.</td>
73
- <td>- A dated and pixelated graphics with some glitches and bugs.</td>
74
- </tr>
75
- <tr>
76
- <td>- A clear and immersive sound with Dolby Surround Sound and great voice acting.</td>
77
- <td>- A loud and annoying sound with some cheesy and corny dialogues.</td>
78
- </tr>
79
- <tr>
80
- <td>- A portable and accessible game with PPSSPP emulator and PSP device.</td>
81
- <td>- A illegal and risky game with potential malware and legal issues.</td>
82
- </tr>
83
- </table>
84
- <p>
85
- The pros and cons of Resident Evil 4 on PSP using PPSSPP emulator are subjective and personal, so you might have a different opinion than ours. However, we think that the pros outweigh the cons, and that Resident Evil 4 is a game worth playing on PSP using PPSSPP emulator. </p>
86
- <h2>Tips and Tricks for Playing Resident Evil 4 on PSP</h2>
87
- <p>
88
- Now that you know how to download and play Resident Evil 4 on PSP using PPSSPP emulator, you might want to know some tips and tricks for playing it better. Here are some tips and tricks for playing Resident Evil 4 on PSP that will help you improve your skills, enjoy your gameplay, and discover more secrets in the game. </p> <h3>How to save your progress and load your game state?</h3>
89
- <p>
90
- One of the most important tips for playing Resident Evil 4 on PSP using PPSSPP emulator is to save your progress and load your game state frequently. Saving your progress and loading your game state will allow you to resume your gameplay from where you left off, avoid losing your data, and retry difficult sections. There are two ways to save your progress and load your game state in Resident Evil 4 on PSP using PPSSPP emulator: using the in-game save system and using the emulator's save state system. </p>
91
- <p>
92
- The in-game save system is the official way to save your progress and load your game state in Resident Evil 4. The in-game save system uses typewriters that are scattered throughout the game. To use the in-game save system, you need to find a typewriter, interact with it, and select the option to save your game. You can also overwrite or delete your previous saves. To load your game state using the in-game save system, you need to go to the main menu, select the option to load your game, and choose the save file you want to load. The in-game save system is reliable and convenient, but it has some limitations. For example, you can only save your progress when you find a typewriter, which might be far away or inaccessible. You also have a limited number of save slots, which might not be enough for multiple playthroughs or different scenarios. </p>
93
- <p>
94
- The emulator's save state system is an alternative way to save your progress and load your game state in Resident Evil 4. The emulator's save state system uses the emulator's memory to create snapshots of your gameplay at any point. To use the emulator's save state system, you need to go to the emulator's menu, select the option to save state, and choose a slot to save your state. You can also overwrite or delete your previous states. To load your game state using the emulator's save state system, you need to go to the emulator's menu, select the option to load state, and choose the slot you want to load. The emulator's save state system is flexible and convenient, but it has some risks. For example, you might accidentally overwrite or delete your states, which might cause data loss or corruption. You also might encounter compatibility issues or glitches when loading states from different versions of the game or the emulator. </p>
95
- <p>
96
- We recommend that you use both the in-game save system and the emulator's save state system when playing Resident Evil 4 on PSP using PPSSPP emulator. This way, you can have multiple backups of your progress and load your game state from different points. However, you should also be careful not to rely too much on either system, as they might fail or malfunction at some point. You should also make sure that you have enough storage space on your device for saving and loading states. </p> <h3>How to use cheats and hacks to enhance your gameplay?</h3>
97
- <p>
98
- Another tip for playing Resident Evil 4 on PSP using PPSSPP emulator is to use cheats and hacks to enhance your gameplay. Cheats and hacks are codes or modifications that alter the game's rules or features, such as giving you unlimited ammo, health, money, weapons, items, etc. Cheats and hacks can make your gameplay easier, funnier, or more interesting, depending on your preferences and goals. However, cheats and hacks can also ruin your gameplay, make it boring, or cause glitches or errors, depending on how you use them. Therefore, you should use cheats and hacks with caution and moderation, and only when you want to experiment or have some fun. </p>
99
- <p>
100
- There are two ways to use cheats and hacks in Resident Evil 4 on PSP using PPSSPP emulator: using the in-game cheat system and using the emulator's cheat system. The in-game cheat system is the official way to use cheats and hacks in Resident Evil 4. The in-game cheat system uses special items or codes that are hidden or unlocked throughout the game. To use the in-game cheat system, you need to find or obtain these items or codes, and use them in the game. For example, you can find a rocket launcher with infinite ammo in the final chapter of the game, or you can enter a code to unlock a special costume for Leon or Ashley. The in-game cheat system is limited and specific, but it is also safe and legal to use. </p>
101
- <p>
102
- The emulator's cheat system is an alternative way to use cheats and hacks in Resident Evil 4. The emulator's cheat system uses external files or programs that contain cheat codes or patches for the game. To use the emulator's cheat system, you need to download or create these files or programs, and load them into the emulator. For example, you can download a cheat file that contains codes for unlimited ammo, health, money, weapons, items, etc., or you can create a patch file that modifies the game's graphics, sound, difficulty, etc. The emulator's cheat system is flexible and diverse, but it is also risky and illegal to use. </p>
103
- <p>
104
- We recommend that you use the in-game cheat system when playing Resident Evil 4 on PSP using PPSSPP emulator, as it is more reliable and ethical than the emulator's cheat system. However, if you want to use the emulator's cheat system, you should do it at your own risk and responsibility. You should also make sure that you have a backup of your original game file and state before using any cheats or hacks. You should also avoid using cheats or hacks that might harm your device or violate the game's terms of service. </p> <h3>How to solve puzzles and find secrets in Resident Evil 4?</h3>
105
- <p>
106
- Another tip for playing Resident Evil 4 on PSP using PPSSPP emulator is to solve puzzles and find secrets in the game. Puzzles and secrets are optional challenges and rewards that are hidden or locked in the game. Solving puzzles and finding secrets can make your gameplay more interesting and rewarding, as you can discover new areas, items, weapons, modes, etc. However, puzzles and secrets can also be difficult and frustrating, as they require you to use your logic, observation, memory, and skills to solve them or find them. </p>
107
- <p>
108
- There are many puzzles and secrets in Resident Evil 4, ranging from simple to complex, from obvious to obscure. Some of the puzzles and secrets are related to the main story or gameplay, while others are just for fun or extra content. Some of the puzzles and secrets are easy to find or solve, while others require you to search or explore every corner of the game or use specific items or actions. Some of the puzzles and secrets are rewarding and satisfying, while others are disappointing or useless. </p>
109
- <p>
110
- We recommend that you try to solve puzzles and find secrets in Resident Evil 4 on PSP using PPSSPP emulator, as they can enhance your gameplay and enjoyment of the game. However, you should also be aware that some puzzles and secrets might be too hard or too hidden for you to solve or find, and that you might need some help or hints from other sources, such as guides, walkthroughs, videos, etc. You should also be careful not to spoil yourself or ruin your gameplay by looking for or using too many clues or solutions for the puzzles and secrets. You should also respect the game's design and intention, and not use cheats or hacks to bypass or break the puzzles and secrets. </p>
111
- <h3>How to unlock bonus content and modes in Resident Evil 4?</h3>
112
- <p>
113
- One of the most rewarding tips for playing Resident Evil 4 on PSP using PPSSPP emulator is to unlock bonus content and modes in the game. Bonus content and modes are additional features and options that are not available in the normal game. Unlocking bonus content and modes can make your gameplay more fun and varied, as you can access new characters, costumes, weapons, items, scenarios, difficulties, etc. However, unlocking bonus content and modes can also be challenging and time-consuming, as they require you to complete certain tasks or conditions in the game. </p>
114
- <p>
115
- There are many bonus content and modes in Resident Evil 4, each with their own requirements and rewards. Some of the bonus content and modes are related to the main story or gameplay, while others are just for fun or extra content. Some of the bonus content and modes are easy to unlock, while others require you to finish the game multiple times or achieve high scores or ranks. Some of the bonus content and modes are exciting and useful, while others are silly or pointless. </p>
116
- <p>
117
- We recommend that you try to unlock bonus content and modes in Resident Evil 4 on PSP using PPSSPP emulator, as they can extend your gameplay and enjoyment of the game. However, you should also be aware that some bonus content and modes might be too hard or too tedious for you to unlock, and that you might need some help or tips from other sources, such as guides, walkthroughs, videos, etc. You should also be careful not to spoil yourself or ruin your gameplay by looking for or using too much information about the bonus content and modes. You should also respect the game's design and intention, and not use cheats or hacks to unlock or access the bonus content and modes. </p> <h4>Conclusion</h4>
118
- <p>
119
- Resident Evil 4 is a game that deserves to be played by any fan of survival horror or action-adventure games. The game has a captivating story, a fun and challenging gameplay, a stunning and realistic graphics, and a clear and immersive sound. The game also has many puzzles, secrets, bonus content, and modes that will keep you entertained for hours. Thanks to PPSSPP emulator, you can play Resident Evil 4 on PSP with amazing quality and performance. You can also play multiplayer mode with other players online, and use cheats and hacks to enhance your gameplay. However, you should also be careful and responsible when downloading and playing Resident Evil 4 on PSP using PPSSPP emulator, as there might be some risks and issues involved. </p>
120
- <p>
121
- We hope that this article has helped you learn how to download and play Resident Evil 4 on PSP using PPSSPP emulator. We also hope that you have enjoyed reading our review of Resident Evil 4 game, as well as our tips and tricks for playing it. If you have any questions or comments, please feel free to leave them below. We would love to hear from you and help you out. Thank you for reading and happy gaming! </p>
122
- <h5>FAQs</h5>
123
- <p>
124
- Here are some frequently asked questions about Resident Evil 4 on PSP using PPSSPP emulator: </p>
125
- <ul>
126
- <li>Q1: Is Resident Evil 4 compatible with PPSSPP emulator?</li>
127
- <li>A1: Yes, Resident Evil 4 is compatible with PPSSPP emulator, as long as you have the correct game file and emulator version. However, you might encounter some minor glitches or errors during the gameplay, depending on your device and settings.</li>
128
- <li>Q2: How much storage space do I need to download and play Resident Evil 4 on PSP using PPSSPP emulator?</li>
129
- <li>A2: You need at least 3 GB of free storage space on your device to download and play Resident Evil 4 on PSP using PPSSPP emulator. This includes the game file (around 2 GB), the emulator file (around 30 MB), and the save state file (around 10 MB).</li>
130
- <li>Q3: Can I play Resident Evil 4 on PSP using PPSSPP emulator without internet connection?</li>
131
- <li>A3: Yes, you can play Resident Evil 4 on PSP using PPSSPP emulator without internet connection, as long as you have already downloaded and installed the game file and the emulator file on your device. However, you will need internet connection to play multiplayer mode with other players online, or to download or update the game file or the emulator file.</li>
132
- <li>Q4: Is Resident Evil 4 on PSP using PPSSPP emulator safe and legal to download and play?</li>
133
- <li>A4: Resident Evil 4 on PSP using PPSSPP emulator is not completely safe or legal to download and play, as it might contain malware or viruses that can harm your device or data, or violate the game's terms of service or intellectual property rights. Therefore, you should download and play Resident Evil 4 on PSP using PPSSPP emulator at your own risk and responsibility, and only from trusted sources.</li>
134
- <li>Q5: What are some alternatives to Resident Evil 4 on PSP using PPSSPP emulator?</li>
135
- <li>A5: Some alternatives to Resident Evil 4 on PSP using PPSSPP emulator are: <ul>
136
- <li>- Resident Evil 5 or Resident Evil 6 on PlayStation 3 or Xbox 360: These are the sequels to Resident Evil 4, which continue the story and gameplay of the series.</li>
137
- <li>- Silent Hill Origins or Silent Hill Shattered Memories on PSP: These are other survival horror games that are available for PSP devices.</li>
138
- <li>- Dead Trigger or Dead Trigger 2 on Android: These are zombie shooting games that are similar to Resident Evil 4 in terms of graphics and gameplay.</li>
139
- </ul>
140
- </li>
141
- </ul></p>

spaces/A666sxr/Genshin_TTS/attentions.py DELETED
@@ -1,303 +0,0 @@
1
- import copy
2
- import math
3
- import numpy as np
4
- import torch
5
- from torch import nn
6
- from torch.nn import functional as F
7
-
8
- import commons
9
- import modules
10
- from modules import LayerNorm
11
-
12
-
13
- class Encoder(nn.Module):
14
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
15
- super().__init__()
16
- self.hidden_channels = hidden_channels
17
- self.filter_channels = filter_channels
18
- self.n_heads = n_heads
19
- self.n_layers = n_layers
20
- self.kernel_size = kernel_size
21
- self.p_dropout = p_dropout
22
- self.window_size = window_size
23
-
24
- self.drop = nn.Dropout(p_dropout)
25
- self.attn_layers = nn.ModuleList()
26
- self.norm_layers_1 = nn.ModuleList()
27
- self.ffn_layers = nn.ModuleList()
28
- self.norm_layers_2 = nn.ModuleList()
29
- for i in range(self.n_layers):
30
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
31
- self.norm_layers_1.append(LayerNorm(hidden_channels))
32
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
33
- self.norm_layers_2.append(LayerNorm(hidden_channels))
34
-
35
- def forward(self, x, x_mask):
36
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
37
- x = x * x_mask
38
- for i in range(self.n_layers):
39
- y = self.attn_layers[i](x, x, attn_mask)
40
- y = self.drop(y)
41
- x = self.norm_layers_1[i](x + y)
42
-
43
- y = self.ffn_layers[i](x, x_mask)
44
- y = self.drop(y)
45
- x = self.norm_layers_2[i](x + y)
46
- x = x * x_mask
47
- return x
48
-
49
-
50
- class Decoder(nn.Module):
51
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
52
- super().__init__()
53
- self.hidden_channels = hidden_channels
54
- self.filter_channels = filter_channels
55
- self.n_heads = n_heads
56
- self.n_layers = n_layers
57
- self.kernel_size = kernel_size
58
- self.p_dropout = p_dropout
59
- self.proximal_bias = proximal_bias
60
- self.proximal_init = proximal_init
61
-
62
- self.drop = nn.Dropout(p_dropout)
63
- self.self_attn_layers = nn.ModuleList()
64
- self.norm_layers_0 = nn.ModuleList()
65
- self.encdec_attn_layers = nn.ModuleList()
66
- self.norm_layers_1 = nn.ModuleList()
67
- self.ffn_layers = nn.ModuleList()
68
- self.norm_layers_2 = nn.ModuleList()
69
- for i in range(self.n_layers):
70
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
71
- self.norm_layers_0.append(LayerNorm(hidden_channels))
72
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
73
- self.norm_layers_1.append(LayerNorm(hidden_channels))
74
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
75
- self.norm_layers_2.append(LayerNorm(hidden_channels))
76
-
77
- def forward(self, x, x_mask, h, h_mask):
78
- """
79
- x: decoder input
80
- h: encoder output
81
- """
82
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
83
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
84
- x = x * x_mask
85
- for i in range(self.n_layers):
86
- y = self.self_attn_layers[i](x, x, self_attn_mask)
87
- y = self.drop(y)
88
- x = self.norm_layers_0[i](x + y)
89
-
90
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
91
- y = self.drop(y)
92
- x = self.norm_layers_1[i](x + y)
93
-
94
- y = self.ffn_layers[i](x, x_mask)
95
- y = self.drop(y)
96
- x = self.norm_layers_2[i](x + y)
97
- x = x * x_mask
98
- return x
99
-
100
-
101
- class MultiHeadAttention(nn.Module):
102
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
103
- super().__init__()
104
- assert channels % n_heads == 0
105
-
106
- self.channels = channels
107
- self.out_channels = out_channels
108
- self.n_heads = n_heads
109
- self.p_dropout = p_dropout
110
- self.window_size = window_size
111
- self.heads_share = heads_share
112
- self.block_length = block_length
113
- self.proximal_bias = proximal_bias
114
- self.proximal_init = proximal_init
115
- self.attn = None
116
-
117
- self.k_channels = channels // n_heads
118
- self.conv_q = nn.Conv1d(channels, channels, 1)
119
- self.conv_k = nn.Conv1d(channels, channels, 1)
120
- self.conv_v = nn.Conv1d(channels, channels, 1)
121
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
122
- self.drop = nn.Dropout(p_dropout)
123
-
124
- if window_size is not None:
125
- n_heads_rel = 1 if heads_share else n_heads
126
- rel_stddev = self.k_channels**-0.5
127
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
128
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
129
-
130
- nn.init.xavier_uniform_(self.conv_q.weight)
131
- nn.init.xavier_uniform_(self.conv_k.weight)
132
- nn.init.xavier_uniform_(self.conv_v.weight)
133
- if proximal_init:
134
- with torch.no_grad():
135
- self.conv_k.weight.copy_(self.conv_q.weight)
136
- self.conv_k.bias.copy_(self.conv_q.bias)
137
-
138
- def forward(self, x, c, attn_mask=None):
139
- q = self.conv_q(x)
140
- k = self.conv_k(c)
141
- v = self.conv_v(c)
142
-
143
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
144
-
145
- x = self.conv_o(x)
146
- return x
147
-
148
- def attention(self, query, key, value, mask=None):
149
- # reshape [b, d, t] -> [b, n_h, t, d_k]
150
- b, d, t_s, t_t = (*key.size(), query.size(2))
151
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
152
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
153
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
154
-
155
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
156
- if self.window_size is not None:
157
- assert t_s == t_t, "Relative attention is only available for self-attention."
158
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
159
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
160
- scores_local = self._relative_position_to_absolute_position(rel_logits)
161
- scores = scores + scores_local
162
- if self.proximal_bias:
163
- assert t_s == t_t, "Proximal bias is only available for self-attention."
164
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
165
- if mask is not None:
166
- scores = scores.masked_fill(mask == 0, -1e4)
167
- if self.block_length is not None:
168
- assert t_s == t_t, "Local attention is only available for self-attention."
169
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
170
- scores = scores.masked_fill(block_mask == 0, -1e4)
171
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
172
- p_attn = self.drop(p_attn)
173
- output = torch.matmul(p_attn, value)
174
- if self.window_size is not None:
175
- relative_weights = self._absolute_position_to_relative_position(p_attn)
176
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
177
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
178
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
179
- return output, p_attn
180
-
181
- def _matmul_with_relative_values(self, x, y):
182
- """
183
- x: [b, h, l, m]
184
- y: [h or 1, m, d]
185
- ret: [b, h, l, d]
186
- """
187
- ret = torch.matmul(x, y.unsqueeze(0))
188
- return ret
189
-
190
- def _matmul_with_relative_keys(self, x, y):
191
- """
192
- x: [b, h, l, d]
193
- y: [h or 1, m, d]
194
- ret: [b, h, l, m]
195
- """
196
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
197
- return ret
198
-
199
- def _get_relative_embeddings(self, relative_embeddings, length):
200
- max_relative_position = 2 * self.window_size + 1
201
- # Pad first before slice to avoid using cond ops.
202
- pad_length = max(length - (self.window_size + 1), 0)
203
- slice_start_position = max((self.window_size + 1) - length, 0)
204
- slice_end_position = slice_start_position + 2 * length - 1
205
- if pad_length > 0:
206
- padded_relative_embeddings = F.pad(
207
- relative_embeddings,
208
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
209
- else:
210
- padded_relative_embeddings = relative_embeddings
211
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
212
- return used_relative_embeddings
213
-
214
- def _relative_position_to_absolute_position(self, x):
215
- """
216
- x: [b, h, l, 2*l-1]
217
- ret: [b, h, l, l]
218
- """
219
- batch, heads, length, _ = x.size()
220
- # Concat columns of pad to shift from relative to absolute indexing.
221
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
222
-
223
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
224
- x_flat = x.view([batch, heads, length * 2 * length])
225
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
226
-
227
- # Reshape and slice out the padded elements.
228
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
229
- return x_final
230
-
231
- def _absolute_position_to_relative_position(self, x):
232
- """
233
- x: [b, h, l, l]
234
- ret: [b, h, l, 2*l-1]
235
- """
236
- batch, heads, length, _ = x.size()
237
- # padd along column
238
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
239
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
240
- # add 0's in the beginning that will skew the elements after reshape
241
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
242
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
243
- return x_final
244
-
245
- def _attention_bias_proximal(self, length):
246
- """Bias for self-attention to encourage attention to close positions.
247
- Args:
248
- length: an integer scalar.
249
- Returns:
250
- a Tensor with shape [1, 1, length, length]
251
- """
252
- r = torch.arange(length, dtype=torch.float32)
253
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
254
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
255
-
256
-
257
- class FFN(nn.Module):
258
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
259
- super().__init__()
260
- self.in_channels = in_channels
261
- self.out_channels = out_channels
262
- self.filter_channels = filter_channels
263
- self.kernel_size = kernel_size
264
- self.p_dropout = p_dropout
265
- self.activation = activation
266
- self.causal = causal
267
-
268
- if causal:
269
- self.padding = self._causal_padding
270
- else:
271
- self.padding = self._same_padding
272
-
273
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
274
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
275
- self.drop = nn.Dropout(p_dropout)
276
-
277
- def forward(self, x, x_mask):
278
- x = self.conv_1(self.padding(x * x_mask))
279
- if self.activation == "gelu":
280
- x = x * torch.sigmoid(1.702 * x)
281
- else:
282
- x = torch.relu(x)
283
- x = self.drop(x)
284
- x = self.conv_2(self.padding(x * x_mask))
285
- return x * x_mask
286
-
287
- def _causal_padding(self, x):
288
- if self.kernel_size == 1:
289
- return x
290
- pad_l = self.kernel_size - 1
291
- pad_r = 0
292
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
293
- x = F.pad(x, commons.convert_pad_shape(padding))
294
- return x
295
-
296
- def _same_padding(self, x):
297
- if self.kernel_size == 1:
298
- return x
299
- pad_l = (self.kernel_size - 1) // 2
300
- pad_r = self.kernel_size // 2
301
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
302
- x = F.pad(x, commons.convert_pad_shape(padding))
303
- return x
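
The hunk above deletes the Space's relative-position transformer encoder module. As a rough, hedged orientation for readers of this diff, the sketch below shows how the removed Encoder would typically be constructed and called. The hyperparameter values are illustrative assumptions only (the Space's real config may differ), and it presumes attentions.py together with its commons/modules helpers is still importable on the Python path.

import torch
from attentions import Encoder  # the module removed in the hunk above

# Illustrative sizes only, not the Space's actual configuration.
enc = Encoder(hidden_channels=192, filter_channels=768, n_heads=2,
              n_layers=6, kernel_size=3, p_dropout=0.1)

x = torch.randn(1, 192, 50)    # [batch, hidden_channels, time] input features
x_mask = torch.ones(1, 1, 50)  # 1 for valid frames, 0 for padded frames
out = enc(x, x_mask)           # -> [1, 192, 50], zeroed outside valid frames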
spaces/AIConsultant/MusicGen/audiocraft/modules/conditioners.py DELETED
@@ -1,1411 +0,0 @@
1
- # Copyright (c) Meta Platforms, Inc. and affiliates.
2
- # All rights reserved.
3
- #
4
- # This source code is licensed under the license found in the
5
- # LICENSE file in the root directory of this source tree.
6
-
7
- from collections import defaultdict
8
- from copy import deepcopy
9
- from dataclasses import dataclass, field
10
- from itertools import chain
11
- import logging
12
- import math
13
- from pathlib import Path
14
- import random
15
- import re
16
- import typing as tp
17
- import warnings
18
-
19
- import einops
20
- from num2words import num2words
21
- import spacy
22
- from transformers import RobertaTokenizer, T5EncoderModel, T5Tokenizer # type: ignore
23
- import torch
24
- from torch import nn
25
- import torch.nn.functional as F
26
- from torch.nn.utils.rnn import pad_sequence
27
-
28
- from .chroma import ChromaExtractor
29
- from .streaming import StreamingModule
30
- from .transformer import create_sin_embedding
31
- from ..data.audio import audio_read
32
- from ..data.audio_dataset import SegmentInfo
33
- from ..data.audio_utils import convert_audio
34
- from ..environment import AudioCraftEnvironment
35
- from ..quantization import ResidualVectorQuantizer
36
- from ..utils.autocast import TorchAutocast
37
- from ..utils.cache import EmbeddingCache
38
- from ..utils.utils import collate, hash_trick, length_to_mask, load_clap_state_dict, warn_once
39
-
40
-
41
- logger = logging.getLogger(__name__)
42
- TextCondition = tp.Optional[str] # a text condition can be a string or None (if doesn't exist)
43
- ConditionType = tp.Tuple[torch.Tensor, torch.Tensor] # condition, mask
44
-
45
-
46
- class WavCondition(tp.NamedTuple):
47
- wav: torch.Tensor
48
- length: torch.Tensor
49
- sample_rate: tp.List[int]
50
- path: tp.List[tp.Optional[str]] = []
51
- seek_time: tp.List[tp.Optional[float]] = []
52
-
53
-
54
- class JointEmbedCondition(tp.NamedTuple):
55
- wav: torch.Tensor
56
- text: tp.List[tp.Optional[str]]
57
- length: torch.Tensor
58
- sample_rate: tp.List[int]
59
- path: tp.List[tp.Optional[str]] = []
60
- seek_time: tp.List[tp.Optional[float]] = []
61
-
62
-
63
- @dataclass
64
- class ConditioningAttributes:
65
- text: tp.Dict[str, tp.Optional[str]] = field(default_factory=dict)
66
- wav: tp.Dict[str, WavCondition] = field(default_factory=dict)
67
- joint_embed: tp.Dict[str, JointEmbedCondition] = field(default_factory=dict)
68
-
69
- def __getitem__(self, item):
70
- return getattr(self, item)
71
-
72
- @property
73
- def text_attributes(self):
74
- return self.text.keys()
75
-
76
- @property
77
- def wav_attributes(self):
78
- return self.wav.keys()
79
-
80
- @property
81
- def joint_embed_attributes(self):
82
- return self.joint_embed.keys()
83
-
84
- @property
85
- def attributes(self):
86
- return {
87
- "text": self.text_attributes,
88
- "wav": self.wav_attributes,
89
- "joint_embed": self.joint_embed_attributes,
90
- }
91
-
92
- def to_flat_dict(self):
93
- return {
94
- **{f"text.{k}": v for k, v in self.text.items()},
95
- **{f"wav.{k}": v for k, v in self.wav.items()},
96
- **{f"joint_embed.{k}": v for k, v in self.joint_embed.items()}
97
- }
98
-
99
- @classmethod
100
- def from_flat_dict(cls, x):
101
- out = cls()
102
- for k, v in x.items():
103
- kind, att = k.split(".")
104
- out[kind][att] = v
105
- return out
106
-
107
-
108
- class SegmentWithAttributes(SegmentInfo):
109
- """Base class for all dataclasses that are used for conditioning.
110
- All child classes should implement `to_condition_attributes` that converts
111
- the existing attributes to a dataclass of type ConditioningAttributes.
112
- """
113
- def to_condition_attributes(self) -> ConditioningAttributes:
114
- raise NotImplementedError()
115
-
116
-
117
- def nullify_condition(condition: ConditionType, dim: int = 1):
118
- """Transform an input condition to a null condition.
119
- The way it is done by converting it to a single zero vector similarly
120
- to how it is done inside WhiteSpaceTokenizer and NoopTokenizer.
121
-
122
- Args:
123
- condition (ConditionType): A tuple of condition and mask (tuple[torch.Tensor, torch.Tensor])
124
- dim (int): The dimension that will be truncated (should be the time dimension)
125
- WARNING!: dim should not be the batch dimension!
126
- Returns:
127
- ConditionType: A tuple of null condition and mask
128
- """
129
- assert dim != 0, "dim cannot be the batch dimension!"
130
- assert isinstance(condition, tuple) and \
131
- isinstance(condition[0], torch.Tensor) and \
132
- isinstance(condition[1], torch.Tensor), "'nullify_condition' got an unexpected input type!"
133
- cond, mask = condition
134
- B = cond.shape[0]
135
- last_dim = cond.dim() - 1
136
- out = cond.transpose(dim, last_dim)
137
- out = 0. * out[..., :1]
138
- out = out.transpose(dim, last_dim)
139
- mask = torch.zeros((B, 1), device=out.device).int()
140
- assert cond.dim() == out.dim()
141
- return out, mask
142
-
143
-
144
- def nullify_wav(cond: WavCondition) -> WavCondition:
145
- """Transform a WavCondition to a nullified WavCondition.
146
- It replaces the wav by a null tensor, forces its length to 0, and replaces metadata by dummy attributes.
147
-
148
- Args:
149
- cond (WavCondition): Wav condition with wav, tensor of shape [B, T].
150
- Returns:
151
- WavCondition: Nullified wav condition.
152
- """
153
- null_wav, _ = nullify_condition((cond.wav, torch.zeros_like(cond.wav)), dim=cond.wav.dim() - 1)
154
- return WavCondition(
155
- wav=null_wav,
156
- length=torch.tensor([0] * cond.wav.shape[0], device=cond.wav.device),
157
- sample_rate=cond.sample_rate,
158
- path=[None] * cond.wav.shape[0],
159
- seek_time=[None] * cond.wav.shape[0],
160
- )
161
-
162
-
163
- def nullify_joint_embed(embed: JointEmbedCondition) -> JointEmbedCondition:
164
- """Nullify the joint embedding condition by replacing it by a null tensor, forcing its length to 0,
165
- and replacing metadata by dummy attributes.
166
-
167
- Args:
168
- cond (JointEmbedCondition): Joint embedding condition with wav and text, wav tensor of shape [B, C, T].
169
- """
170
- null_wav, _ = nullify_condition((embed.wav, torch.zeros_like(embed.wav)), dim=embed.wav.dim() - 1)
171
- return JointEmbedCondition(
172
- wav=null_wav, text=[None] * len(embed.text),
173
- length=torch.LongTensor([0]).to(embed.wav.device),
174
- sample_rate=embed.sample_rate,
175
- path=[None] * embed.wav.shape[0],
176
- seek_time=[0] * embed.wav.shape[0],
177
- )
178
-
179
-
180
- class Tokenizer:
181
- """Base tokenizer implementation
182
- (in case we want to introduce more advances tokenizers in the future).
183
- """
184
- def __call__(self, texts: tp.List[tp.Optional[str]]) -> tp.Tuple[torch.Tensor, torch.Tensor]:
185
- raise NotImplementedError()
186
-
187
-
188
- class WhiteSpaceTokenizer(Tokenizer):
189
- """This tokenizer should be used for natural language descriptions.
190
- For example:
191
- ["he didn't, know he's going home.", 'shorter sentence'] =>
192
- [[78, 62, 31, 4, 78, 25, 19, 34],
193
- [59, 77, 0, 0, 0, 0, 0, 0]]
194
- """
195
- PUNCTUATION = "?:!.,;"
196
-
197
- def __init__(self, n_bins: int, pad_idx: int = 0, language: str = "en_core_web_sm",
198
- lemma: bool = True, stopwords: bool = True) -> None:
199
- self.n_bins = n_bins
200
- self.pad_idx = pad_idx
201
- self.lemma = lemma
202
- self.stopwords = stopwords
203
- try:
204
- self.nlp = spacy.load(language)
205
- except IOError:
206
- spacy.cli.download(language) # type: ignore
207
- self.nlp = spacy.load(language)
208
-
209
- @tp.no_type_check
210
- def __call__(self, texts: tp.List[tp.Optional[str]],
211
- return_text: bool = False) -> tp.Tuple[torch.Tensor, torch.Tensor]:
212
- """Take a list of strings and convert them to a tensor of indices.
213
-
214
- Args:
215
- texts (list[str]): List of strings.
216
- return_text (bool, optional): Whether to return text as additional tuple item. Defaults to False.
217
- Returns:
218
- tuple[torch.Tensor, torch.Tensor]:
219
- - Indices of words in the LUT.
220
- - And a mask indicating where the padding tokens are
221
- """
222
- output, lengths = [], []
223
- texts = deepcopy(texts)
224
- for i, text in enumerate(texts):
225
- # if current sample doesn't have a certain attribute, replace with pad token
226
- if text is None:
227
- output.append(torch.Tensor([self.pad_idx]))
228
- lengths.append(0)
229
- continue
230
-
231
- # convert numbers to words
232
- text = re.sub(r"(\d+)", lambda x: num2words(int(x.group(0))), text) # type: ignore
233
- # normalize text
234
- text = self.nlp(text) # type: ignore
235
- # remove stopwords
236
- if self.stopwords:
237
- text = [w for w in text if not w.is_stop] # type: ignore
238
- # remove punctuation
239
- text = [w for w in text if w.text not in self.PUNCTUATION] # type: ignore
240
- # lemmatize if needed
241
- text = [getattr(t, "lemma_" if self.lemma else "text") for t in text] # type: ignore
242
-
243
- texts[i] = " ".join(text)
244
- lengths.append(len(text))
245
- # convert to tensor
246
- tokens = torch.Tensor([hash_trick(w, self.n_bins) for w in text])
247
- output.append(tokens)
248
-
249
- mask = length_to_mask(torch.IntTensor(lengths)).int()
250
- padded_output = pad_sequence(output, padding_value=self.pad_idx).int().t()
251
- if return_text:
252
- return padded_output, mask, texts # type: ignore
253
- return padded_output, mask
254
-
255
-
256
- class NoopTokenizer(Tokenizer):
257
- """This tokenizer should be used for global conditioners such as: artist, genre, key, etc.
258
- The difference between this and WhiteSpaceTokenizer is that NoopTokenizer does not split
259
- strings, so "Jeff Buckley" will get it's own index. Whereas WhiteSpaceTokenizer will
260
- split it to ["Jeff", "Buckley"] and return an index per word.
261
-
262
- For example:
263
- ["Queen", "ABBA", "Jeff Buckley"] => [43, 55, 101]
264
- ["Metal", "Rock", "Classical"] => [0, 223, 51]
265
- """
266
- def __init__(self, n_bins: int, pad_idx: int = 0):
267
- self.n_bins = n_bins
268
- self.pad_idx = pad_idx
269
-
270
- def __call__(self, texts: tp.List[tp.Optional[str]]) -> tp.Tuple[torch.Tensor, torch.Tensor]:
271
- output, lengths = [], []
272
- for text in texts:
273
- # if current sample doesn't have a certain attribute, replace with pad token
274
- if text is None:
275
- output.append(self.pad_idx)
276
- lengths.append(0)
277
- else:
278
- output.append(hash_trick(text, self.n_bins))
279
- lengths.append(1)
280
-
281
- tokens = torch.LongTensor(output).unsqueeze(1)
282
- mask = length_to_mask(torch.IntTensor(lengths)).int()
283
- return tokens, mask
284
-
285
-
286
- class BaseConditioner(nn.Module):
287
- """Base model for all conditioner modules.
288
- We allow the output dim to be different than the hidden dim for two reasons:
289
- 1) keep our LUTs small when the vocab is large;
290
- 2) make all condition dims consistent.
291
-
292
- Args:
293
- dim (int): Hidden dim of the model.
294
- output_dim (int): Output dim of the conditioner.
295
- """
296
- def __init__(self, dim: int, output_dim: int):
297
- super().__init__()
298
- self.dim = dim
299
- self.output_dim = output_dim
300
- self.output_proj = nn.Linear(dim, output_dim)
301
-
302
- def tokenize(self, *args, **kwargs) -> tp.Any:
303
- """Should be any part of the processing that will lead to a synchronization
304
- point, e.g. BPE tokenization with transfer to the GPU.
305
-
306
- The returned value will be saved and return later when calling forward().
307
- """
308
- raise NotImplementedError()
309
-
310
- def forward(self, inputs: tp.Any) -> ConditionType:
311
- """Gets input that should be used as conditioning (e.g, genre, description or a waveform).
312
- Outputs a ConditionType, after the input data was embedded as a dense vector.
313
-
314
- Returns:
315
- ConditionType:
316
- - A tensor of size [B, T, D] where B is the batch size, T is the length of the
317
- output embedding and D is the dimension of the embedding.
318
- - And a mask indicating where the padding tokens.
319
- """
320
- raise NotImplementedError()
321
-
322
-
323
- class TextConditioner(BaseConditioner):
324
- ...
325
-
326
-
327
- class LUTConditioner(TextConditioner):
328
- """Lookup table TextConditioner.
329
-
330
- Args:
331
- n_bins (int): Number of bins.
332
- dim (int): Hidden dim of the model (text-encoder/LUT).
333
- output_dim (int): Output dim of the conditioner.
334
- tokenizer (str): Name of the tokenizer.
335
- pad_idx (int, optional): Index for padding token. Defaults to 0.
336
- """
337
- def __init__(self, n_bins: int, dim: int, output_dim: int, tokenizer: str, pad_idx: int = 0):
338
- super().__init__(dim, output_dim)
339
- self.embed = nn.Embedding(n_bins, dim)
340
- self.tokenizer: Tokenizer
341
- if tokenizer == 'whitespace':
342
- self.tokenizer = WhiteSpaceTokenizer(n_bins, pad_idx=pad_idx)
343
- elif tokenizer == 'noop':
344
- self.tokenizer = NoopTokenizer(n_bins, pad_idx=pad_idx)
345
- else:
346
- raise ValueError(f"unrecognized tokenizer `{tokenizer}`.")
347
-
348
- def tokenize(self, x: tp.List[tp.Optional[str]]) -> tp.Tuple[torch.Tensor, torch.Tensor]:
349
- device = self.embed.weight.device
350
- tokens, mask = self.tokenizer(x)
351
- tokens, mask = tokens.to(device), mask.to(device)
352
- return tokens, mask
353
-
354
- def forward(self, inputs: tp.Tuple[torch.Tensor, torch.Tensor]) -> ConditionType:
355
- tokens, mask = inputs
356
- embeds = self.embed(tokens)
357
- embeds = self.output_proj(embeds)
358
- embeds = (embeds * mask.unsqueeze(-1))
359
- return embeds, mask
360
-
361
-
362
- class T5Conditioner(TextConditioner):
363
- """T5-based TextConditioner.
364
-
365
- Args:
366
- name (str): Name of the T5 model.
367
- output_dim (int): Output dim of the conditioner.
368
- finetune (bool): Whether to fine-tune T5 at train time.
369
- device (str): Device for T5 Conditioner.
370
- autocast_dtype (tp.Optional[str], optional): Autocast dtype.
371
- word_dropout (float, optional): Word dropout probability.
372
- normalize_text (bool, optional): Whether to apply text normalization.
373
- """
374
- MODELS = ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b",
375
- "google/flan-t5-small", "google/flan-t5-base", "google/flan-t5-large",
376
- "google/flan-t5-xl", "google/flan-t5-xxl"]
377
- MODELS_DIMS = {
378
- "t5-small": 512,
379
- "t5-base": 768,
380
- "t5-large": 1024,
381
- "t5-3b": 1024,
382
- "t5-11b": 1024,
383
- "google/flan-t5-small": 512,
384
- "google/flan-t5-base": 768,
385
- "google/flan-t5-large": 1024,
386
- "google/flan-t5-3b": 1024,
387
- "google/flan-t5-11b": 1024,
388
- }
389
-
390
- def __init__(self, name: str, output_dim: int, finetune: bool, device: str,
391
- autocast_dtype: tp.Optional[str] = 'float32', word_dropout: float = 0.,
392
- normalize_text: bool = False):
393
- assert name in self.MODELS, f"Unrecognized t5 model name (should in {self.MODELS})"
394
- super().__init__(self.MODELS_DIMS[name], output_dim)
395
- self.device = device
396
- self.name = name
397
- self.finetune = finetune
398
- self.word_dropout = word_dropout
399
- if autocast_dtype is None or self.device == 'cpu':
400
- self.autocast = TorchAutocast(enabled=False)
401
- if self.device != 'cpu':
402
- logger.warning("T5 has no autocast, this might lead to NaN")
403
- else:
404
- dtype = getattr(torch, autocast_dtype)
405
- assert isinstance(dtype, torch.dtype)
406
- logger.info(f"T5 will be evaluated with autocast as {autocast_dtype}")
407
- self.autocast = TorchAutocast(enabled=True, device_type=self.device, dtype=dtype)
408
- # Let's disable logging temporarily because T5 will vomit some errors otherwise.
409
- # thanks https://gist.github.com/simon-weber/7853144
410
- previous_level = logging.root.manager.disable
411
- logging.disable(logging.ERROR)
412
- with warnings.catch_warnings():
413
- warnings.simplefilter("ignore")
414
- try:
415
- self.t5_tokenizer = T5Tokenizer.from_pretrained(name)
416
- t5 = T5EncoderModel.from_pretrained(name).train(mode=finetune)
417
- finally:
418
- logging.disable(previous_level)
419
- if finetune:
420
- self.t5 = t5
421
- else:
422
- # this makes sure that the t5 models is not part
423
- # of the saved checkpoint
424
- self.__dict__['t5'] = t5.to(device)
425
-
426
- self.normalize_text = normalize_text
427
- if normalize_text:
428
- self.text_normalizer = WhiteSpaceTokenizer(1, lemma=True, stopwords=True)
429
-
430
- def tokenize(self, x: tp.List[tp.Optional[str]]) -> tp.Dict[str, torch.Tensor]:
431
- # if current sample doesn't have a certain attribute, replace with empty string
432
- entries: tp.List[str] = [xi if xi is not None else "" for xi in x]
433
- if self.normalize_text:
434
- _, _, entries = self.text_normalizer(entries, return_text=True)
435
- if self.word_dropout > 0. and self.training:
436
- new_entries = []
437
- for entry in entries:
438
- words = [word for word in entry.split(" ") if random.random() >= self.word_dropout]
439
- new_entries.append(" ".join(words))
440
- entries = new_entries
441
-
442
- empty_idx = torch.LongTensor([i for i, xi in enumerate(entries) if xi == ""])
443
-
444
- inputs = self.t5_tokenizer(entries, return_tensors='pt', padding=True).to(self.device)
445
- mask = inputs['attention_mask']
446
- mask[empty_idx, :] = 0 # zero-out index where the input is non-existant
447
- return inputs
448
-
449
- def forward(self, inputs: tp.Dict[str, torch.Tensor]) -> ConditionType:
450
- mask = inputs['attention_mask']
451
- with torch.set_grad_enabled(self.finetune), self.autocast:
452
- embeds = self.t5(**inputs).last_hidden_state
453
- embeds = self.output_proj(embeds.to(self.output_proj.weight))
454
- embeds = (embeds * mask.unsqueeze(-1))
455
- return embeds, mask
456
-
457
-
458
- class WaveformConditioner(BaseConditioner):
459
- """Base class for all conditioners that take a waveform as input.
460
- Classes that inherit must implement `_get_wav_embedding` that outputs
461
- a continuous tensor, and `_downsampling_factor` that returns the down-sampling
462
- factor of the embedding model.
463
-
464
- Args:
465
- dim (int): The internal representation dimension.
466
- output_dim (int): Output dimension.
467
- device (tp.Union[torch.device, str]): Device.
468
- """
469
- def __init__(self, dim: int, output_dim: int, device: tp.Union[torch.device, str]):
470
- super().__init__(dim, output_dim)
471
- self.device = device
472
-
473
- def tokenize(self, x: WavCondition) -> WavCondition:
474
- wav, length, sample_rate, path, seek_time = x
475
- assert length is not None
476
- return WavCondition(wav.to(self.device), length.to(self.device), sample_rate, path, seek_time)
477
-
478
- def _get_wav_embedding(self, x: WavCondition) -> torch.Tensor:
479
- """Gets as input a WavCondition and returns a dense embedding."""
480
- raise NotImplementedError()
481
-
482
- def _downsampling_factor(self):
483
- """Returns the downsampling factor of the embedding model."""
484
- raise NotImplementedError()
485
-
486
- def forward(self, x: WavCondition) -> ConditionType:
487
- """Extract condition embedding and mask from a waveform and its metadata.
488
- Args:
489
- x (WavCondition): Waveform condition containing raw waveform and metadata.
490
- Returns:
491
- ConditionType: a dense vector representing the conditioning along with its mask
492
- """
493
- wav, lengths, *_ = x
494
- with torch.no_grad():
495
- embeds = self._get_wav_embedding(x)
496
- embeds = embeds.to(self.output_proj.weight)
497
- embeds = self.output_proj(embeds)
498
-
499
- if lengths is not None:
500
- lengths = lengths / self._downsampling_factor()
501
- mask = length_to_mask(lengths, max_len=embeds.shape[1]).int() # type: ignore
502
- else:
503
- mask = torch.ones_like(embeds)
504
- embeds = (embeds * mask.unsqueeze(2).to(self.device))
505
-
506
- return embeds, mask
507
-
508
-
509
- class ChromaStemConditioner(WaveformConditioner):
510
- """Chroma conditioner based on stems.
511
- The ChromaStemConditioner uses DEMUCS to first filter out drums and bass, as
512
- the drums and bass often dominate the chroma leading to the chroma features
513
- not containing information about the melody.
514
-
515
- Args:
516
- output_dim (int): Output dimension for the conditioner.
517
- sample_rate (int): Sample rate for the chroma extractor.
518
- n_chroma (int): Number of chroma bins for the chroma extractor.
519
- radix2_exp (int): Size of stft window for the chroma extractor (power of 2, e.g. 12 -> 2^12).
520
- duration (int): duration used during training. This is later used for correct padding
521
- in case we are using chroma as prefix.
522
- match_len_on_eval (bool, optional): if True then all chromas are padded to the training
523
- duration. Defaults to False.
524
- eval_wavs (str, optional): path to a dataset manifest with waveform, this waveforms are used as
525
- conditions during eval (for cases where we don't want to leak test conditions like MusicCaps).
526
- Defaults to None.
527
- n_eval_wavs (int, optional): limits the number of waveforms used for conditioning. Defaults to 0.
528
- device (tp.Union[torch.device, str], optional): Device for the conditioner.
529
- **kwargs: Additional parameters for the chroma extractor.
530
- """
531
- def __init__(self, output_dim: int, sample_rate: int, n_chroma: int, radix2_exp: int,
532
- duration: float, match_len_on_eval: bool = True, eval_wavs: tp.Optional[str] = None,
533
- n_eval_wavs: int = 0, cache_path: tp.Optional[tp.Union[str, Path]] = None,
534
- device: tp.Union[torch.device, str] = 'cpu', **kwargs):
535
- from demucs import pretrained
536
- super().__init__(dim=n_chroma, output_dim=output_dim, device=device)
537
- self.autocast = TorchAutocast(enabled=device != 'cpu', device_type=self.device, dtype=torch.float32)
538
- self.sample_rate = sample_rate
539
- self.match_len_on_eval = match_len_on_eval
540
- self.duration = duration
541
- self.__dict__['demucs'] = pretrained.get_model('htdemucs').to(device)
542
- stem_sources: list = self.demucs.sources # type: ignore
543
- self.stem_indices = torch.LongTensor([stem_sources.index('vocals'), stem_sources.index('other')]).to(device)
544
- self.chroma = ChromaExtractor(sample_rate=sample_rate, n_chroma=n_chroma,
545
- radix2_exp=radix2_exp, **kwargs).to(device)
546
- self.chroma_len = self._get_chroma_len()
547
- self.eval_wavs: tp.Optional[torch.Tensor] = self._load_eval_wavs(eval_wavs, n_eval_wavs)
548
- self.cache = None
549
- if cache_path is not None:
550
- self.cache = EmbeddingCache(Path(cache_path) / 'wav', self.device,
551
- compute_embed_fn=self._get_full_chroma_for_cache,
552
- extract_embed_fn=self._extract_chroma_chunk)
553
-
554
- def _downsampling_factor(self) -> int:
555
- return self.chroma.winhop
556
-
557
- def _load_eval_wavs(self, path: tp.Optional[str], num_samples: int) -> tp.Optional[torch.Tensor]:
558
- """Load pre-defined waveforms from a json.
559
- These waveforms will be used for chroma extraction during evaluation.
560
- This is done to make the evaluation on MusicCaps fair (we shouldn't see the chromas of MusicCaps).
561
- """
562
- if path is None:
563
- return None
564
-
565
- logger.info(f"Loading evaluation wavs from {path}")
566
- from audiocraft.data.audio_dataset import AudioDataset
567
- dataset: AudioDataset = AudioDataset.from_meta(
568
- path, segment_duration=self.duration, min_audio_duration=self.duration,
569
- sample_rate=self.sample_rate, channels=1)
570
-
571
- if len(dataset) > 0:
572
- eval_wavs = dataset.collater([dataset[i] for i in range(num_samples)]).to(self.device)
573
- logger.info(f"Using {len(eval_wavs)} evaluation wavs for chroma-stem conditioner")
574
- return eval_wavs
575
- else:
576
- raise ValueError("Could not find evaluation wavs, check lengths of wavs")
577
-
578
- def reset_eval_wavs(self, eval_wavs: tp.Optional[torch.Tensor]) -> None:
579
- self.eval_wavs = eval_wavs
580
-
581
- def has_eval_wavs(self) -> bool:
582
- return self.eval_wavs is not None
583
-
584
- def _sample_eval_wavs(self, num_samples: int) -> torch.Tensor:
585
- """Sample wavs from a predefined list."""
586
- assert self.eval_wavs is not None, "Cannot sample eval wavs as no eval wavs provided."
587
- total_eval_wavs = len(self.eval_wavs)
588
- out = self.eval_wavs
589
- if num_samples > total_eval_wavs:
590
- out = self.eval_wavs.repeat(num_samples // total_eval_wavs + 1, 1, 1)
591
- return out[torch.randperm(len(out))][:num_samples]
592
-
593
- def _get_chroma_len(self) -> int:
594
- """Get length of chroma during training."""
595
- dummy_wav = torch.zeros((1, int(self.sample_rate * self.duration)), device=self.device)
596
- dummy_chr = self.chroma(dummy_wav)
597
- return dummy_chr.shape[1]
598
-
599
- @torch.no_grad()
600
- def _get_stemmed_wav(self, wav: torch.Tensor, sample_rate: int) -> torch.Tensor:
601
- """Get parts of the wav that holds the melody, extracting the main stems from the wav."""
602
- from demucs.apply import apply_model
603
- from demucs.audio import convert_audio
604
- with self.autocast:
605
- wav = convert_audio(
606
- wav, sample_rate, self.demucs.samplerate, self.demucs.audio_channels) # type: ignore
607
- stems = apply_model(self.demucs, wav, device=self.device)
608
- stems = stems[:, self.stem_indices] # extract relevant stems for melody conditioning
609
- mix_wav = stems.sum(1) # merge extracted stems to single waveform
610
- mix_wav = convert_audio(mix_wav, self.demucs.samplerate, self.sample_rate, 1) # type: ignore
611
- return mix_wav
612
-
613
- @torch.no_grad()
614
- def _extract_chroma(self, wav: torch.Tensor) -> torch.Tensor:
615
- """Extract chroma features from the waveform."""
616
- with self.autocast:
617
- return self.chroma(wav)
618
-
619
- @torch.no_grad()
620
- def _compute_wav_embedding(self, wav: torch.Tensor, sample_rate: int) -> torch.Tensor:
621
- """Compute wav embedding, applying stem and chroma extraction."""
622
- # avoid 0-size tensors when we are working with null conds
623
- if wav.shape[-1] == 1:
624
- return self._extract_chroma(wav)
625
- stems = self._get_stemmed_wav(wav, sample_rate)
626
- chroma = self._extract_chroma(stems)
627
- return chroma
628
-
629
- @torch.no_grad()
630
- def _get_full_chroma_for_cache(self, path: tp.Union[str, Path], x: WavCondition, idx: int) -> torch.Tensor:
631
- """Extract chroma from the whole audio waveform at the given path."""
632
- wav, sr = audio_read(path)
633
- wav = wav[None].to(self.device)
634
- wav = convert_audio(wav, sr, self.sample_rate, to_channels=1)
635
- chroma = self._compute_wav_embedding(wav, self.sample_rate)[0]
636
- return chroma
637
-
638
- def _extract_chroma_chunk(self, full_chroma: torch.Tensor, x: WavCondition, idx: int) -> torch.Tensor:
639
- """Extract a chunk of chroma from the full chroma derived from the full waveform."""
640
- wav_length = x.wav.shape[-1]
641
- seek_time = x.seek_time[idx]
642
- assert seek_time is not None, (
643
- "WavCondition seek_time is required "
644
- "when extracting chroma chunks from pre-computed chroma.")
645
- full_chroma = full_chroma.float()
646
- frame_rate = self.sample_rate / self._downsampling_factor()
647
- target_length = int(frame_rate * wav_length / self.sample_rate)
648
- index = int(frame_rate * seek_time)
649
- out = full_chroma[index: index + target_length]
650
- out = F.pad(out[None], (0, 0, 0, target_length - out.shape[0]))[0]
651
- return out.to(self.device)
652
-
653
- @torch.no_grad()
654
- def _get_wav_embedding(self, x: WavCondition) -> torch.Tensor:
655
- """Get the wav embedding from the WavCondition.
656
- The conditioner will either extract the embedding on-the-fly computing it from the condition wav directly
657
- or will rely on the embedding cache to load the pre-computed embedding if relevant.
658
- """
659
- sampled_wav: tp.Optional[torch.Tensor] = None
660
- if not self.training and self.eval_wavs is not None:
661
- warn_once(logger, "Using precomputed evaluation wavs!")
662
- sampled_wav = self._sample_eval_wavs(len(x.wav))
663
-
664
- no_undefined_paths = all(p is not None for p in x.path)
665
- no_nullified_cond = x.wav.shape[-1] > 1
666
- if sampled_wav is not None:
667
- chroma = self._compute_wav_embedding(sampled_wav, self.sample_rate)
668
- elif self.cache is not None and no_undefined_paths and no_nullified_cond:
669
- paths = [Path(p) for p in x.path if p is not None]
670
- chroma = self.cache.get_embed_from_cache(paths, x)
671
- else:
672
- assert all(sr == x.sample_rate[0] for sr in x.sample_rate), "All sample rates in batch should be equal."
673
- chroma = self._compute_wav_embedding(x.wav, x.sample_rate[0])
674
-
675
- if self.match_len_on_eval:
676
- B, T, C = chroma.shape
677
- if T > self.chroma_len:
678
- chroma = chroma[:, :self.chroma_len]
679
- logger.debug(f"Chroma was truncated to match length! ({T} -> {chroma.shape[1]})")
680
- elif T < self.chroma_len:
681
- n_repeat = int(math.ceil(self.chroma_len / T))
682
- chroma = chroma.repeat(1, n_repeat, 1)
683
- chroma = chroma[:, :self.chroma_len]
684
- logger.debug(f"Chroma was repeated to match length! ({T} -> {chroma.shape[1]})")
685
-
686
- return chroma
687
-
688
- def tokenize(self, x: WavCondition) -> WavCondition:
689
- """Apply WavConditioner tokenization and populate cache if needed."""
690
- x = super().tokenize(x)
691
- no_undefined_paths = all(p is not None for p in x.path)
692
- if self.cache is not None and no_undefined_paths:
693
- paths = [Path(p) for p in x.path if p is not None]
694
- self.cache.populate_embed_cache(paths, x)
695
- return x
696
-
697
-
698
- class JointEmbeddingConditioner(BaseConditioner):
699
- """Joint embedding conditioning supporting both audio or text conditioning.
700
-
701
- Args:
702
- dim (int): Dimension.
703
- output_dim (int): Output dimension.
704
- device (str): Device.
705
- attribute (str): Attribute used by the conditioner.
706
- autocast_dtype (str): Autocast for the conditioner.
707
- quantize (bool): Whether to quantize the CLAP embedding.
708
- n_q (int): Number of residual quantizers (used if quantize is true).
709
- bins (int): Quantizers' codebooks size (used if quantize is true).
710
- kwargs: Additional parameters for residual vector quantizer.
711
- """
712
- def __init__(self, dim: int, output_dim: int, device: str, attribute: str,
713
- autocast_dtype: tp.Optional[str] = 'float32', quantize: bool = True,
714
- n_q: int = 12, bins: int = 1024, **kwargs):
715
- super().__init__(dim=dim, output_dim=output_dim)
716
- self.device = device
717
- self.attribute = attribute
718
- if autocast_dtype is None or device == 'cpu':
719
- self.autocast = TorchAutocast(enabled=False)
720
- logger.warning("JointEmbeddingConditioner has no autocast, this might lead to NaN.")
721
- else:
722
- dtype = getattr(torch, autocast_dtype)
723
- assert isinstance(dtype, torch.dtype)
724
- logger.info(f"JointEmbeddingConditioner will be evaluated with autocast as {autocast_dtype}.")
725
- self.autocast = TorchAutocast(enabled=True, device_type=self.device, dtype=dtype)
726
- # residual vector quantizer to discretize the conditioned embedding
727
- self.quantizer: tp.Optional[ResidualVectorQuantizer] = None
728
- if quantize:
729
- self.quantizer = ResidualVectorQuantizer(dim, n_q=n_q, bins=bins, **kwargs)
730
-
731
- def _get_embed(self, x: JointEmbedCondition) -> tp.Tuple[torch.Tensor, torch.Tensor]:
732
- """Get joint embedding in latent space from the inputs.
733
-
734
- Returns:
735
- tuple[torch.Tensor, torch.Tensor]: Tensor for the latent embedding
736
- and corresponding empty indexes.
737
- """
738
- raise NotImplementedError()
739
-
740
- def forward(self, x: JointEmbedCondition) -> ConditionType:
741
- with self.autocast:
742
- embed, empty_idx = self._get_embed(x)
743
- if self.quantizer is not None:
744
- embed = embed.view(-1, self.dim, 1)
745
- q_res = self.quantizer(embed, frame_rate=1)
746
- out_embed = q_res.x.view(-1, self.dim)
747
- else:
748
- out_embed = embed
749
- out_embed = self.output_proj(out_embed).view(-1, 1, self.output_dim)
750
- mask = torch.ones(*out_embed.shape[:2], device=out_embed.device)
751
- mask[empty_idx, :] = 0 # zero-out index where the input is non-existant
752
- out_embed = (out_embed * mask.unsqueeze(-1))
753
- return out_embed, mask
754
-
755
- def tokenize(self, x: JointEmbedCondition) -> JointEmbedCondition:
756
- return x
757
-
758
-
759
- class CLAPEmbeddingConditioner(JointEmbeddingConditioner):
760
- """Joint Embedding conditioner based on pre-trained CLAP model.
761
-
762
- This CLAP-based conditioner supports a caching mechanism
763
- over the computed embeddings for faster training.
764
-
765
- Args:
766
- dim (int): Dimension.
767
- output_dim (int): Output dimension.
768
- device (str): Device.
769
- attribute (str): Attribute used by the conditioner.
770
- quantize (bool): Whether to quantize the CLAP embedding.
771
- n_q (int): Number of residual quantizers (used if quantize is true).
772
- bins (int): Quantizers' codebooks size (used if quantize is true).
773
- checkpoint (str): Path to CLAP checkpoint.
774
- model_arch (str): CLAP model architecture.
775
- enable_fusion (bool): Enable fusion for CLAP model.
776
- sample_rate (int): Sample rate used by CLAP model.
777
- max_audio_length (float): Maximum audio length for CLAP model.
778
- audio_stride (float): Stride to use for getting a CLAP embedding on the full sequence.
779
- normalize (bool): Whether to normalize the CLAP embedding.
780
- text_p (float): Probability of using text representation instead of audio at train time.
781
- batch_size (Optional[int]): Batch size for CLAP embedding computation.
782
- autocast_dtype (str): Autocast for the conditioner.
783
- cache_path (Optional[str]): Path for pre-computed embeddings caching.
784
- kwargs: Additional parameters for residual vector quantizer.
785
- """
786
- def __init__(self, dim: int, output_dim: int, device: str, attribute: str,
787
- quantize: bool, n_q: int, bins: int, checkpoint: tp.Union[str, Path], model_arch: str,
788
- enable_fusion: bool, sample_rate: int, max_audio_length: int, audio_stride: int,
789
- normalize: bool, text_p: bool, batch_size: tp.Optional[int] = None,
790
- autocast_dtype: tp.Optional[str] = 'float32', cache_path: tp.Optional[str] = None, **kwargs):
791
- try:
792
- import laion_clap # type: ignore
793
- except ImportError:
794
- raise ImportError("Please install CLAP to use the CLAPEmbeddingConditioner: 'pip install laion_clap'")
795
- checkpoint = AudioCraftEnvironment.resolve_reference_path(checkpoint)
796
- clap_tokenize = RobertaTokenizer.from_pretrained('roberta-base')
797
- clap_model = laion_clap.CLAP_Module(enable_fusion=enable_fusion, amodel=model_arch)
798
- load_clap_state_dict(clap_model, checkpoint)
799
- clap_model.eval()
800
- clap_model.to(device)
801
- super().__init__(dim=dim, output_dim=output_dim, device=device, attribute=attribute,
802
- autocast_dtype=autocast_dtype, quantize=quantize, n_q=n_q, bins=bins,
803
- **kwargs)
804
- self.checkpoint = checkpoint
805
- self.enable_fusion = enable_fusion
806
- self.model_arch = model_arch
807
- self.clap: laion_clap.CLAP_Module
808
- self.clap_tokenize: RobertaTokenizer
809
- self.clap_sample_rate = sample_rate
810
- self.clap_max_frames = int(self.clap_sample_rate * max_audio_length)
811
- self.clap_stride = int(self.clap_sample_rate * audio_stride)
812
- self.batch_size = batch_size or 1
813
- self.normalize = normalize
814
- self.text_p = text_p
815
- self.__dict__['clap_tokenize'] = clap_tokenize
816
- self.__dict__['clap'] = clap_model
817
- self.wav_cache, self.text_cache = None, None
818
- if cache_path is not None:
819
- self.wav_cache = EmbeddingCache(Path(cache_path) / 'wav', self.device,
820
- compute_embed_fn=self._get_wav_embedding_for_cache,
821
- extract_embed_fn=self._extract_wav_embedding_chunk)
822
- self.text_cache = EmbeddingCache(Path(cache_path) / 'text', self.device,
823
- compute_embed_fn=self._get_text_embedding_for_cache)
824
-
825
- def _tokenizer(self, texts: tp.Union[str, tp.List[str]]) -> dict:
826
- # we use the default params from CLAP module here as well
827
- return self.clap_tokenize(texts, padding="max_length", truncation=True, max_length=77, return_tensors="pt")
828
-
829
- def _compute_text_embedding(self, text: tp.List[str]) -> torch.Tensor:
830
- """Compute text embedding from CLAP model on a given a batch of text.
831
-
832
- Args:
833
- text (list[str]): List of text for the batch, with B items.
834
- Returns:
835
- torch.Tensor: CLAP embedding derived from text, of shape [B, 1, D], with D the CLAP embedding dimension.
836
- """
837
- with torch.no_grad():
838
- embed = self.clap.get_text_embedding(text, tokenizer=self._tokenizer, use_tensor=True)
839
- return embed.view(embed.size(0), 1, embed.size(-1))
840
-
841
- def _get_text_embedding_for_cache(self, path: tp.Union[Path, str],
842
- x: JointEmbedCondition, idx: int) -> torch.Tensor:
843
- """Get text embedding function for the cache."""
844
- text = x.text[idx]
845
- text = text if text is not None else ""
846
- return self._compute_text_embedding([text])[0]
847
-
848
- def _preprocess_wav(self, wav: torch.Tensor, length: torch.Tensor, sample_rates: tp.List[int]) -> torch.Tensor:
849
- """Preprocess wav to expected format by CLAP model.
850
-
851
- Args:
852
- wav (torch.Tensor): Audio wav, of shape [B, C, T].
853
- length (torch.Tensor): Actual length of the audio for each item in the batch, of shape [B].
854
- sample_rates (list[int]): Sample rates for each sample in the batch
855
- Returns:
856
- torch.Tensor: Audio wav of shape [B, T].
857
- """
858
- assert wav.dim() == 3, "Expecting wav to be [B, C, T]"
859
- if sample_rates is not None:
860
- _wav = []
861
- for i, audio in enumerate(wav):
862
- sr = sample_rates[i]
863
- audio = convert_audio(audio, from_rate=sr, to_rate=self.clap_sample_rate, to_channels=1)
864
- _wav.append(audio)
865
- wav = torch.stack(_wav, dim=0)
866
- wav = wav.mean(dim=1)
867
- return wav
868
-
869
- def _compute_wav_embedding(self, wav: torch.Tensor, length: torch.Tensor,
870
- sample_rates: tp.List[int], reduce_mean: bool = False) -> torch.Tensor:
871
- """Compute audio wave embedding from CLAP model.
872
-
873
- Since CLAP operates on a fixed sequence length audio inputs and we need to process longer audio sequences,
874
- we calculate the wav embeddings on `clap_max_frames` windows with `clap_stride`-second stride and
875
- average the resulting embeddings.
876
-
877
- Args:
878
- wav (torch.Tensor): Audio wav, of shape [B, C, T].
879
- length (torch.Tensor): Actual length of the audio for each item in the batch, of shape [B].
880
- sample_rates (list[int]): Sample rates for each sample in the batch.
881
- reduce_mean (bool): Whether to get the average tensor.
882
- Returns:
883
- torch.Tensor: Audio embedding of shape [B, F, D], F being the number of chunks, D the dimension.
884
- """
885
- with torch.no_grad():
886
- wav = self._preprocess_wav(wav, length, sample_rates)
887
- B, T = wav.shape
888
- if T >= self.clap_max_frames:
889
- wav = wav.unfold(-1, self.clap_max_frames, self.clap_stride) # [B, F, T]
890
- else:
891
- wav = wav.view(-1, 1, T) # [B, F, T] with F=1
892
- wav = einops.rearrange(wav, 'b f t -> (b f) t')
893
- embed_list = []
894
- for i in range(0, wav.size(0), self.batch_size):
895
- _wav = wav[i:i+self.batch_size, ...]
896
- _embed = self.clap.get_audio_embedding_from_data(_wav, use_tensor=True)
897
- embed_list.append(_embed)
898
- embed = torch.cat(embed_list, dim=0)
899
- embed = einops.rearrange(embed, '(b f) d -> b f d', b=B)
900
- if reduce_mean:
901
- embed = embed.mean(dim=1, keepdim=True)
902
- return embed # [B, F, D] with F=1 if reduce_mean is True
903
-
904
- def _get_wav_embedding_for_cache(self, path: tp.Union[str, Path],
905
- x: JointEmbedCondition, idx: int) -> torch.Tensor:
906
- """Compute audio wave embedding for the cache.
907
- The embedding is computed on a given audio read from file.
908
-
909
- Args:
910
- path (str or Path): Path to the full audio file.
911
- Returns:
912
- torch.Tensor: Single-item tensor of shape [F, D], F being the number of chunks, D the dimension.
913
- """
914
- wav, sr = audio_read(path) # [C, T]
915
- wav = wav.unsqueeze(0).to(self.device) # [1, C, T]
916
- wav_len = torch.LongTensor([wav.shape[-1]]).to(self.device)
917
- embed = self._compute_wav_embedding(wav, wav_len, [sr], reduce_mean=False) # [B, F, D]
918
- return embed.squeeze(0) # [F, D]
919
-
920
- def _extract_wav_embedding_chunk(self, full_embed: torch.Tensor, x: JointEmbedCondition, idx: int) -> torch.Tensor:
921
- """Extract the chunk of embedding matching the seek_time and length from the full CLAP audio embedding.
922
-
923
- Args:
924
- full_embed (torch.Tensor): CLAP embedding computed on the full wave, of shape [F, D].
925
- x (JointEmbedCondition): Joint embedding condition for the full batch.
926
- idx (int): Index considered for the given embedding to extract.
927
- Returns:
928
- torch.Tensor: Wav embedding averaged on sliding window, of shape [1, D].
929
- """
930
- sample_rate = x.sample_rate[idx]
931
- seek_time = x.seek_time[idx]
932
- seek_time = 0. if seek_time is None else seek_time
933
- clap_stride = int(self.clap_stride / self.clap_sample_rate) * sample_rate
934
- end_seek_time = seek_time + self.clap_max_frames / self.clap_sample_rate
935
- start_offset = int(seek_time * sample_rate // clap_stride)
936
- end_offset = int(end_seek_time * sample_rate // clap_stride)
937
- wav_embed = full_embed[start_offset:end_offset, ...]
938
- wav_embed = wav_embed.mean(dim=0, keepdim=True)
939
- return wav_embed.to(self.device) # [F, D]
940
-
941
- def _get_text_embedding(self, x: JointEmbedCondition) -> torch.Tensor:
942
- """Get CLAP embedding from a batch of text descriptions."""
943
- no_nullified_cond = x.wav.shape[-1] > 1 # we don't want to read from cache when condition dropout
944
- if self.text_cache is not None and no_nullified_cond:
945
- assert all(p is not None for p in x.path), "Cache requires all JointEmbedCondition paths to be provided"
946
- paths = [Path(p) for p in x.path if p is not None]
947
- embed = self.text_cache.get_embed_from_cache(paths, x)
948
- else:
949
- text = [xi if xi is not None else "" for xi in x.text]
950
- embed = self._compute_text_embedding(text)
951
- if self.normalize:
952
- embed = torch.nn.functional.normalize(embed, p=2.0, dim=-1)
953
- return embed
954
-
955
- def _get_wav_embedding(self, x: JointEmbedCondition) -> torch.Tensor:
956
- """Get CLAP embedding from a batch of audio tensors (and corresponding sample rates)."""
957
- no_undefined_paths = all(p is not None for p in x.path)
958
- no_nullified_cond = x.wav.shape[-1] > 1 # we don't want to read from cache when condition dropout
959
- if self.wav_cache is not None and no_undefined_paths and no_nullified_cond:
960
- paths = [Path(p) for p in x.path if p is not None]
961
- embed = self.wav_cache.get_embed_from_cache(paths, x)
962
- else:
963
- embed = self._compute_wav_embedding(x.wav, x.length, x.sample_rate, reduce_mean=True)
964
- if self.normalize:
965
- embed = torch.nn.functional.normalize(embed, p=2.0, dim=-1)
966
- return embed
967
-
968
- def tokenize(self, x: JointEmbedCondition) -> JointEmbedCondition:
969
- # Trying to limit as much as possible sync points when the cache is warm.
970
- no_undefined_paths = all(p is not None for p in x.path)
971
- if self.wav_cache is not None and no_undefined_paths:
972
- assert all([p is not None for p in x.path]), "Cache requires all JointEmbedCondition paths to be provided"
973
- paths = [Path(p) for p in x.path if p is not None]
974
- self.wav_cache.populate_embed_cache(paths, x)
975
- if self.text_cache is not None and no_undefined_paths:
976
- assert all([p is not None for p in x.path]), "Cache requires all JointEmbedCondition paths to be provided"
977
- paths = [Path(p) for p in x.path if p is not None]
978
- self.text_cache.populate_embed_cache(paths, x)
979
- return x
980
-
981
- def _get_embed(self, x: JointEmbedCondition) -> tp.Tuple[torch.Tensor, torch.Tensor]:
982
- """Extract shared latent representation from either the wav or the text using CLAP."""
983
- # decide whether to use text embedding at train time or not
984
- use_text_embed = random.random() < self.text_p
985
- if self.training and not use_text_embed:
986
- embed = self._get_wav_embedding(x)
987
- empty_idx = torch.LongTensor([]) # we assume we always have the audio wav
988
- else:
989
- embed = self._get_text_embedding(x)
990
- empty_idx = torch.LongTensor([i for i, xi in enumerate(x.text) if xi is None or xi == ""])
991
- return embed, empty_idx
992
-
993
-
994
- def dropout_condition(sample: ConditioningAttributes, condition_type: str, condition: str) -> ConditioningAttributes:
995
- """Utility function for nullifying an attribute inside an ConditioningAttributes object.
996
- If the condition is of type "wav", then nullify it using `nullify_condition` function.
997
- If the condition is of any other type, set its value to None.
998
- Works in-place.
999
- """
1000
- if condition_type not in ['text', 'wav', 'joint_embed']:
1001
- raise ValueError(
1002
- "dropout_condition got an unexpected condition type!"
1003
- f" expected 'text', 'wav' or 'joint_embed' but got '{condition_type}'"
1004
- )
1005
-
1006
- if condition not in getattr(sample, condition_type):
1007
- raise ValueError(
1008
- "dropout_condition received an unexpected condition!"
1009
- f" expected wav={sample.wav.keys()} and text={sample.text.keys()}"
1010
- f" but got '{condition}' of type '{condition_type}'!"
1011
- )
1012
-
1013
- if condition_type == 'wav':
1014
- wav_cond = sample.wav[condition]
1015
- sample.wav[condition] = nullify_wav(wav_cond)
1016
- elif condition_type == 'joint_embed':
1017
- embed = sample.joint_embed[condition]
1018
- sample.joint_embed[condition] = nullify_joint_embed(embed)
1019
- else:
1020
- sample.text[condition] = None
1021
-
1022
- return sample
1023
-
1024
-
1025
- class DropoutModule(nn.Module):
1026
- """Base module for all dropout modules."""
1027
- def __init__(self, seed: int = 1234):
1028
- super().__init__()
1029
- self.rng = torch.Generator()
1030
- self.rng.manual_seed(seed)
1031
-
1032
-
1033
- class AttributeDropout(DropoutModule):
1034
- """Dropout with a given probability per attribute.
1035
- This is different from the behavior of ClassifierFreeGuidanceDropout as this allows for attributes
1036
- to be dropped out separately. For example, "artist" can be dropped while "genre" remains.
1037
- This is in contrast to ClassifierFreeGuidanceDropout where if "artist" is dropped "genre"
1038
- must also be dropped.
1039
-
1040
- Args:
1041
- p (tp.Dict[str, float]): A dict mapping between attributes and dropout probability. For example:
1042
- ...
1043
- "genre": 0.1,
1044
- "artist": 0.5,
1045
- "wav": 0.25,
1046
- ...
1047
- active_on_eval (bool, optional): Whether the dropout is active at eval. Default to False.
1048
- seed (int, optional): Random seed.
1049
- """
1050
- def __init__(self, p: tp.Dict[str, tp.Dict[str, float]], active_on_eval: bool = False, seed: int = 1234):
1051
- super().__init__(seed=seed)
1052
- self.active_on_eval = active_on_eval
1053
- # construct a dict that returns the values from p, defaulting to 0
1054
- self.p = {}
1055
- for condition_type, probs in p.items():
1056
- self.p[condition_type] = defaultdict(lambda: 0, probs)
1057
-
1058
- def forward(self, samples: tp.List[ConditioningAttributes]) -> tp.List[ConditioningAttributes]:
1059
- """
1060
- Args:
1061
- samples (list[ConditioningAttributes]): List of conditions.
1062
- Returns:
1063
- list[ConditioningAttributes]: List of conditions after certain attributes were set to None.
1064
- """
1065
- if not self.training and not self.active_on_eval:
1066
- return samples
1067
-
1068
- samples = deepcopy(samples)
1069
- for condition_type, ps in self.p.items(): # for condition types [text, wav]
1070
- for condition, p in ps.items(): # for attributes of each type (e.g., [artist, genre])
1071
- if torch.rand(1, generator=self.rng).item() < p:
1072
- for sample in samples:
1073
- dropout_condition(sample, condition_type, condition)
1074
- return samples
1075
-
1076
- def __repr__(self):
1077
- return f"AttributeDropout({dict(self.p)})"
1078
-
1079
-
1080
- class ClassifierFreeGuidanceDropout(DropoutModule):
1081
- """Classifier Free Guidance dropout.
1082
- All attributes are dropped with the same probability.
1083
-
1084
- Args:
1085
- p (float): Probability to apply condition dropout during training.
1086
- seed (int): Random seed.
1087
- """
1088
- def __init__(self, p: float, seed: int = 1234):
1089
- super().__init__(seed=seed)
1090
- self.p = p
1091
-
1092
- def forward(self, samples: tp.List[ConditioningAttributes]) -> tp.List[ConditioningAttributes]:
1093
- """
1094
- Args:
1095
- samples (list[ConditioningAttributes]): List of conditions.
1096
- Returns:
1097
- list[ConditioningAttributes]: List of conditions after all attributes were set to None.
1098
- """
1099
- if not self.training:
1100
- return samples
1101
-
1102
- # decide on which attributes to drop in a batched fashion
1103
- drop = torch.rand(1, generator=self.rng).item() < self.p
1104
- if not drop:
1105
- return samples
1106
-
1107
- # nullify conditions of all attributes
1108
- samples = deepcopy(samples)
1109
- for condition_type in ["wav", "text"]:
1110
- for sample in samples:
1111
- for condition in sample.attributes[condition_type]:
1112
- dropout_condition(sample, condition_type, condition)
1113
- return samples
1114
-
1115
- def __repr__(self):
1116
- return f"ClassifierFreeGuidanceDropout(p={self.p})"
1117
-
1118
-
1119
- class ConditioningProvider(nn.Module):
1120
- """Prepare and provide conditions given all the supported conditioners.
1121
-
1122
- Args:
1123
- conditioners (dict): Dictionary of conditioners.
1124
- device (torch.device or str, optional): Device for conditioners and output condition types.
1125
- """
1126
- def __init__(self, conditioners: tp.Dict[str, BaseConditioner], device: tp.Union[torch.device, str] = "cpu"):
1127
- super().__init__()
1128
- self.device = device
1129
- self.conditioners = nn.ModuleDict(conditioners)
1130
-
1131
- @property
1132
- def joint_embed_conditions(self):
1133
- return [m.attribute for m in self.conditioners.values() if isinstance(m, JointEmbeddingConditioner)]
1134
-
1135
- @property
1136
- def has_joint_embed_conditions(self):
1137
- return len(self.joint_embed_conditions) > 0
1138
-
1139
- @property
1140
- def text_conditions(self):
1141
- return [k for k, v in self.conditioners.items() if isinstance(v, TextConditioner)]
1142
-
1143
- @property
1144
- def wav_conditions(self):
1145
- return [k for k, v in self.conditioners.items() if isinstance(v, WaveformConditioner)]
1146
-
1147
- @property
1148
- def has_wav_condition(self):
1149
- return len(self.wav_conditions) > 0
1150
-
1151
- def tokenize(self, inputs: tp.List[ConditioningAttributes]) -> tp.Dict[str, tp.Any]:
1152
- """Match attributes/wavs with existing conditioners in self, and compute tokenize them accordingly.
1153
- This should be called before starting any real GPU work to avoid synchronization points.
1154
- This will return a dict matching conditioner names to their arbitrary tokenized representations.
1155
-
1156
- Args:
1157
- inputs (list[ConditioningAttributes]): List of ConditioningAttributes objects containing
1158
- text and wav conditions.
1159
- """
1160
- assert all([isinstance(x, ConditioningAttributes) for x in inputs]), (
1161
- "Got unexpected types input for conditioner! should be tp.List[ConditioningAttributes]",
1162
- f" but types were {set([type(x) for x in inputs])}"
1163
- )
1164
-
1165
- output = {}
1166
- text = self._collate_text(inputs)
1167
- wavs = self._collate_wavs(inputs)
1168
- joint_embeds = self._collate_joint_embeds(inputs)
1169
-
1170
- assert set(text.keys() | wavs.keys() | joint_embeds.keys()).issubset(set(self.conditioners.keys())), (
1171
- f"Got an unexpected attribute! Expected {self.conditioners.keys()}, ",
1172
- f"got {text.keys(), wavs.keys(), joint_embeds.keys()}"
1173
- )
1174
-
1175
- for attribute, batch in chain(text.items(), wavs.items(), joint_embeds.items()):
1176
- output[attribute] = self.conditioners[attribute].tokenize(batch)
1177
- return output
1178
-
1179
- def forward(self, tokenized: tp.Dict[str, tp.Any]) -> tp.Dict[str, ConditionType]:
1180
- """Compute pairs of `(embedding, mask)` using the configured conditioners and the tokenized representations.
1181
- The output is for example:
1182
- {
1183
- "genre": (torch.Tensor([B, 1, D_genre]), torch.Tensor([B, 1])),
1184
- "description": (torch.Tensor([B, T_desc, D_desc]), torch.Tensor([B, T_desc])),
1185
- ...
1186
- }
1187
-
1188
- Args:
1189
- tokenized (dict): Dict of tokenized representations as returned by `tokenize()`.
1190
- """
1191
- output = {}
1192
- for attribute, inputs in tokenized.items():
1193
- condition, mask = self.conditioners[attribute](inputs)
1194
- output[attribute] = (condition, mask)
1195
- return output
1196
-
1197
- def _collate_text(self, samples: tp.List[ConditioningAttributes]) -> tp.Dict[str, tp.List[tp.Optional[str]]]:
1198
- """Given a list of ConditioningAttributes objects, compile a dictionary where the keys
1199
- are the attributes and the values are the aggregated input per attribute.
1200
- For example:
1201
- Input:
1202
- [
1203
- ConditioningAttributes(text={"genre": "Rock", "description": "A rock song with a guitar solo"}, wav=...),
1204
- ConditioningAttributes(text={"genre": "Hip-hop", "description": "A hip-hop verse"}, wav=...),
1205
- ]
1206
- Output:
1207
- {
1208
- "genre": ["Rock", "Hip-hop"],
1209
- "description": ["A rock song with a guitar solo", "A hip-hop verse"]
1210
- }
1211
-
1212
- Args:
1213
- samples (list of ConditioningAttributes): List of ConditioningAttributes samples.
1214
- Returns:
1215
- dict[str, list[str, optional]]: A dictionary mapping an attribute name to text batch.
1216
- """
1217
- out: tp.Dict[str, tp.List[tp.Optional[str]]] = defaultdict(list)
1218
- texts = [x.text for x in samples]
1219
- for text in texts:
1220
- for condition in self.text_conditions:
1221
- out[condition].append(text[condition])
1222
- return out
1223
-
1224
- def _collate_wavs(self, samples: tp.List[ConditioningAttributes]) -> tp.Dict[str, WavCondition]:
1225
- """Generate a dict where the keys are attributes by which we fetch similar wavs,
1226
- and the values are Tensors of wavs according to said attributes.
1227
-
1228
- *Note*: by the time the samples reach this function, each sample should have some waveform
1229
- inside the "wav" attribute. It should be either:
1230
- 1. A real waveform
1231
- 2. A null waveform due to the sample having no similar waveforms (nullified by the dataset)
1232
- 3. A null waveform due to it being dropped in a dropout module (nullified by dropout)
1233
-
1234
- Args:
1235
- samples (list of ConditioningAttributes): List of ConditioningAttributes samples.
1236
- Returns:
1237
- dict[str, WavCondition]: A dictionary mapping an attribute name to wavs.
1238
- """
1239
- wavs = defaultdict(list)
1240
- lengths = defaultdict(list)
1241
- sample_rates = defaultdict(list)
1242
- paths = defaultdict(list)
1243
- seek_times = defaultdict(list)
1244
- out: tp.Dict[str, WavCondition] = {}
1245
-
1246
- for sample in samples:
1247
- for attribute in self.wav_conditions:
1248
- wav, length, sample_rate, path, seek_time = sample.wav[attribute]
1249
- assert wav.dim() == 3, f"Got wav with dim={wav.dim()}, but expected 3 [1, C, T]"
1250
- assert wav.size(0) == 1, f"Got wav [B, C, T] with shape={wav.shape}, but expected B == 1"
1251
- # mono-channel conditioning
1252
- wav = wav.mean(1, keepdim=True) # [1, 1, T]
1253
- wavs[attribute].append(wav.flatten()) # [T]
1254
- lengths[attribute].append(length)
1255
- sample_rates[attribute].extend(sample_rate)
1256
- paths[attribute].extend(path)
1257
- seek_times[attribute].extend(seek_time)
1258
-
1259
- # stack all wavs to a single tensor
1260
- for attribute in self.wav_conditions:
1261
- stacked_wav, _ = collate(wavs[attribute], dim=0)
1262
- out[attribute] = WavCondition(
1263
- stacked_wav.unsqueeze(1), torch.cat(lengths[attribute]), sample_rates[attribute],
1264
- paths[attribute], seek_times[attribute])
1265
-
1266
- return out
1267
-
1268
- def _collate_joint_embeds(self, samples: tp.List[ConditioningAttributes]) -> tp.Dict[str, JointEmbedCondition]:
1269
- """Generate a dict where the keys are attributes by which we compute joint embeddings,
1270
- and the values are Tensors of pre-computed embeddings and the corresponding text attributes.
1271
-
1272
- Args:
1273
- samples (list[ConditioningAttributes]): List of ConditioningAttributes samples.
1274
- Returns:
1275
- A dictionary mapping an attribute name to joint embeddings.
1276
- """
1277
- texts = defaultdict(list)
1278
- wavs = defaultdict(list)
1279
- lengths = defaultdict(list)
1280
- sample_rates = defaultdict(list)
1281
- paths = defaultdict(list)
1282
- seek_times = defaultdict(list)
1283
- channels: int = 0
1284
-
1285
- out = {}
1286
- for sample in samples:
1287
- for attribute in self.joint_embed_conditions:
1288
- wav, text, length, sample_rate, path, seek_time = sample.joint_embed[attribute]
1289
- assert wav.dim() == 3
1290
- if channels == 0:
1291
- channels = wav.size(1)
1292
- else:
1293
- assert channels == wav.size(1), "not all audio has same number of channels in batch"
1294
- assert wav.size(0) == 1, "Expecting single-wav batch in the collate method"
1295
- wav = einops.rearrange(wav, "b c t -> (b c t)") # [1, C, T] => [C * T]
1296
- wavs[attribute].append(wav)
1297
- texts[attribute].extend(text)
1298
- lengths[attribute].append(length)
1299
- sample_rates[attribute].extend(sample_rate)
1300
- paths[attribute].extend(path)
1301
- seek_times[attribute].extend(seek_time)
1302
-
1303
- for attribute in self.joint_embed_conditions:
1304
- stacked_texts = texts[attribute]
1305
- stacked_paths = paths[attribute]
1306
- stacked_seek_times = seek_times[attribute]
1307
- stacked_wavs = pad_sequence(wavs[attribute]).to(self.device)
1308
- stacked_wavs = einops.rearrange(stacked_wavs, "(c t) b -> b c t", c=channels)
1309
- stacked_sample_rates = sample_rates[attribute]
1310
- stacked_lengths = torch.cat(lengths[attribute]).to(self.device)
1311
- assert stacked_lengths.size(0) == stacked_wavs.size(0)
1312
- assert len(stacked_sample_rates) == stacked_wavs.size(0)
1313
- assert len(stacked_texts) == stacked_wavs.size(0)
1314
- out[attribute] = JointEmbedCondition(
1315
- text=stacked_texts, wav=stacked_wavs,
1316
- length=stacked_lengths, sample_rate=stacked_sample_rates,
1317
- path=stacked_paths, seek_time=stacked_seek_times)
1318
-
1319
- return out
1320
-
1321
-
1322
- class ConditionFuser(StreamingModule):
1323
- """Condition fuser handles the logic to combine the different conditions
1324
- to the actual model input.
1325
-
1326
- Args:
1327
- fuse2cond (tp.Dict[str, str]): A dictionary that says how to fuse
1328
- each condition. For example:
1329
- {
1330
- "prepend": ["description"],
1331
- "sum": ["genre", "bpm"],
1332
- "cross": ["description"],
1333
- }
1334
- cross_attention_pos_emb (bool, optional): Use positional embeddings in cross attention.
1335
- cross_attention_pos_emb_scale (int): Scale for positional embeddings in cross attention if used.
1336
- """
1337
- FUSING_METHODS = ["sum", "prepend", "cross", "input_interpolate"]
1338
-
1339
- def __init__(self, fuse2cond: tp.Dict[str, tp.List[str]], cross_attention_pos_emb: bool = False,
1340
- cross_attention_pos_emb_scale: float = 1.0):
1341
- super().__init__()
1342
- assert all(
1343
- [k in self.FUSING_METHODS for k in fuse2cond.keys()]
1344
- ), f"Got invalid fuse method, allowed methods: {self.FUSING_METHODS}"
1345
- self.cross_attention_pos_emb = cross_attention_pos_emb
1346
- self.cross_attention_pos_emb_scale = cross_attention_pos_emb_scale
1347
- self.fuse2cond: tp.Dict[str, tp.List[str]] = fuse2cond
1348
- self.cond2fuse: tp.Dict[str, str] = {}
1349
- for fuse_method, conditions in fuse2cond.items():
1350
- for condition in conditions:
1351
- self.cond2fuse[condition] = fuse_method
1352
-
1353
- def forward(
1354
- self,
1355
- input: torch.Tensor,
1356
- conditions: tp.Dict[str, ConditionType]
1357
- ) -> tp.Tuple[torch.Tensor, tp.Optional[torch.Tensor]]:
1358
- """Fuse the conditions to the provided model input.
1359
-
1360
- Args:
1361
- input (torch.Tensor): Transformer input.
1362
- conditions (dict[str, ConditionType]): Dict of conditions.
1363
- Returns:
1364
- tuple[torch.Tensor, torch.Tensor]: The first tensor is the transformer input
1365
- after the conditions have been fused. The second output tensor is the tensor
1366
- used for cross-attention or None if no cross attention inputs exist.
1367
- """
1368
- B, T, _ = input.shape
1369
-
1370
- if 'offsets' in self._streaming_state:
1371
- first_step = False
1372
- offsets = self._streaming_state['offsets']
1373
- else:
1374
- first_step = True
1375
- offsets = torch.zeros(input.shape[0], dtype=torch.long, device=input.device)
1376
-
1377
- assert set(conditions.keys()).issubset(set(self.cond2fuse.keys())), \
1378
- f"given conditions contain unknown attributes for fuser, " \
1379
- f"expected {self.cond2fuse.keys()}, got {conditions.keys()}"
1380
- cross_attention_output = None
1381
- for cond_type, (cond, cond_mask) in conditions.items():
1382
- op = self.cond2fuse[cond_type]
1383
- if op == 'sum':
1384
- input += cond
1385
- elif op == 'input_interpolate':
1386
- cond = einops.rearrange(cond, "b t d -> b d t")
1387
- cond = F.interpolate(cond, size=input.shape[1])
1388
- input += einops.rearrange(cond, "b d t -> b t d")
1389
- elif op == 'prepend':
1390
- if first_step:
1391
- input = torch.cat([cond, input], dim=1)
1392
- elif op == 'cross':
1393
- if cross_attention_output is not None:
1394
- cross_attention_output = torch.cat([cross_attention_output, cond], dim=1)
1395
- else:
1396
- cross_attention_output = cond
1397
- else:
1398
- raise ValueError(f"unknown op ({op})")
1399
-
1400
- if self.cross_attention_pos_emb and cross_attention_output is not None:
1401
- positions = torch.arange(
1402
- cross_attention_output.shape[1],
1403
- device=cross_attention_output.device
1404
- ).view(1, -1, 1)
1405
- pos_emb = create_sin_embedding(positions, cross_attention_output.shape[-1])
1406
- cross_attention_output = cross_attention_output + self.cross_attention_pos_emb_scale * pos_emb
1407
-
1408
- if self._is_streaming:
1409
- self._streaming_state['offsets'] = offsets + T
1410
-
1411
- return input, cross_attention_output
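
The deleted conditioners module above wires together attribute-level dropout, classifier-free-guidance dropout and a condition fuser. As a minimal standalone sketch of the classifier-free-guidance idea (not the audiocraft API itself, and using plain text conditions purely for illustration), all conditions of a batch are nullified at once with probability p during training:

import torch

def cfg_dropout(text_conditions, p, rng, training=True):
    # Classifier-free-guidance dropout: with probability p, drop *all* conditions
    # for the whole batch at once (here, text conditions are simply set to None).
    if not training:
        return text_conditions
    if torch.rand(1, generator=rng).item() >= p:
        return text_conditions
    return {name: None for name in text_conditions}

rng = torch.Generator()
rng.manual_seed(1234)
conds = {"description": "a rock song with a guitar solo", "genre": "Rock"}
print(cfg_dropout(conds, p=0.1, rng=rng, training=True))

At sampling time the model is then run once with the real conditions and once with the nullified ones, and the two predictions are blended according to the guidance scale.
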
spaces/AIWaves/Debate/gradio_config.py DELETED
@@ -1,437 +0,0 @@
1
- # coding=utf-8
2
- # Copyright 2023 The AIWaves Inc. team.
3
-
4
- #
5
- # Licensed under the Apache License, Version 2.0 (the "License");
6
- # you may not use this file except in compliance with the License.
7
- # You may obtain a copy of the License at
8
- #
9
- # http://www.apache.org/licenses/LICENSE-2.0
10
- #
11
- # Unless required by applicable law or agreed to in writing, software
12
- # distributed under the License is distributed on an "AS IS" BASIS,
13
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
- # See the License for the specific language governing permissions and
15
- # limitations under the License.
16
-
17
- import json
18
- from PIL import Image
19
- import requests
20
- from typing import List, Tuple
21
-
22
- class GradioConfig:
23
- # How many avatars are currently registered
24
- POINTER = 0
25
-
26
- # Avatar image. You can add or replace.
27
- AGENT_HEAD_URL = [
28
- "https://img.touxiangwu.com/zb_users/upload/2023/06/202306241687579617434043.jpg",
29
- "https://img.touxiangwu.com/zb_users/upload/2023/06/202306241687592097408547.jpg",
30
- "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686726561699613.jpg",
31
- "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686726561275758.jpg",
32
- "https://img.touxiangwu.com/uploads/allimg/2021090300/ry5k31wt33c.jpg",
33
- "https://img.touxiangwu.com/uploads/allimg/2021090300/0ls2gmwhrf5.jpg",
34
- "https://img.touxiangwu.com/zb_users/upload/2023/02/202302281677545695326193.jpg",
35
- "https://img.touxiangwu.com/zb_users/upload/2023/03/202303271679886128550253.jpg",
36
- "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686711344407060.jpg",
37
- "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686711345834296.jpg",
38
- "https://img.touxiangwu.com/zb_users/upload/2023/05/202305171684311194291520.jpg",
39
- "https://img.touxiangwu.com/zb_users/upload/2023/05/202305171684311196958993.jpg",
40
- "https://img.touxiangwu.com/uploads/allimg/2021082612/vr0bkov0dwl.jpg",
41
- "https://img.touxiangwu.com/uploads/allimg/2021082612/auqx5zfsv5g.jpg",
42
- "https://img.touxiangwu.com/uploads/allimg/2021082612/llofpivtwls.jpg",
43
- "https://img.touxiangwu.com/uploads/allimg/2021082612/3j2sdot3ye0.jpg",
44
- "https://img.touxiangwu.com/2020/3/nQfYf2.jpg",
45
- "https://img.touxiangwu.com/zb_users/upload/2023/08/202308131691918068774532.jpg",
46
- "https://img.touxiangwu.com/zb_users/upload/2023/08/202308131691918068289945.jpg",
47
- "https://img.touxiangwu.com/zb_users/upload/2023/08/202308131691918069785183.jpg",
48
- "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686726561292003.jpg",
49
- "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686726561578616.jpg",
50
- "https://img.touxiangwu.com/zb_users/upload/2023/06/202306141686726564597524.jpg"
51
- ]
52
- USER_HEAD_URL = "https://img.touxiangwu.com/zb_users/upload/2023/05/202305301685407468585486.jpg"
53
-
54
- # The css style of gradio.Chatbot
55
- CSS = """
56
- #chatbot1 .user {
57
- background-color:transparent;
58
- border-color:transparent;
59
- }
60
- #chatbot1 .bot {
61
- background-color:transparent;
62
- border-color:transparent;
63
- }
64
- #btn {color: red; border-color: red;}
65
- """
66
-
67
- ID = ["USER", "AGENT", "SYSTEM"]
68
-
69
- # Bubble template
70
- BUBBLE_CSS = {
71
- # Background-color Name-color Name-content Font-color Font-size Content Avatar-URL
72
- "USER": """
73
- <div style="display: flex; align-items: flex-start; justify-content: flex-end;">
74
- <div style="background-color: {}; border-radius: 20px 0px 20px 20px; padding: 15px; min-width: 100px; max-width: 300px;">
75
- <p style="margin: 0; padding: 0; color: {}; font-weight: bold; font-size: 18px;">{}</p>
76
- <p style="margin: 0; padding: 0; color: {}; font-size: {}px;">{}</p>
77
- </div>
78
- <img src="{}" alt="USER" style="width: 50px; height: 50px; border-radius: 50%; margin-left: 10px;">
79
- </div>
80
- """,
81
-
82
- # Avatar-URL Background-color Name-color Name-Content Font-color Font-size Content
83
- "AGENT": """
84
- <div style="display: flex; align-items: flex-start;">
85
- <img src="{}" alt="AGENT" style="width: 50px; height: 50px; border-radius: 50%; margin-right: 10px;">
86
- <div style="background-color: {}; border-radius: 0px 20px 20px 20px; padding: 15px; min-width: 100px; max-width: 600px;">
87
- <p style="margin: 0; padding: 0; color: {}; font-weight: bold; font-size: 18px;">{}</p>
88
- <p style="margin: 0; padding: 0; color: {}; font-size: {}px;">{}</p>
89
- </div>
90
- </div>
91
- """,
92
-
93
- # Background-color Font-size Font-color Name Content
94
- "SYSTEM": """
95
- <div style="display: flex; align-items: center; justify-content: center;">
96
- <div style="background-color: {}; border-radius: 20px; padding: 1px; min-width: 200px; max-width: 1000px;">
97
- <p style="margin: 0; padding: 0; text-align: center; font-size: {}px; font-weight: bold; font-family: '微软雅黑', sans-serif; color: {};">{}:{}</p>
98
- </div>
99
- </div>
100
- """
101
- }
102
-
103
- ROLE_2_NAME = {}
104
-
105
- OBJECT_INFO = {
106
-
107
- "User": {
108
- # https://img-blog.csdnimg.cn/img_convert/7c20bc39ac69b6972a22e18762d02db3.jpeg
109
- "head_url": USER_HEAD_URL,
110
- "bubble_color": "#95EC69",
111
- "text_color": "#000000",
112
- "font_size": 0,
113
- "id": "USER"
114
- },
115
-
116
- "System": {
117
- # https://img-blog.csdnimg.cn/img_convert/e7e5887cfff67df8c2205c2ef0e5e7fa.png
118
- "head_url": "https://img.touxiangwu.com/zb_users/upload/2023/03/202303141678768524747045.jpg",
119
- "bubble_color": "#7F7F7F", ##FFFFFF
120
- "text_color": "#FFFFFF", ##000000
121
- "font_size": 0,
122
- "id": "SYSTEM"
123
- },
124
-
125
- "wait": {
126
- "head_url": "https://img.touxiangwu.com/zb_users/upload/2022/12/202212011669881536145501.jpg",
127
- "bubble_color": "#E7CBA6",
128
- "text_color": "#000000",
129
- "font_size": 0,
130
- "id": "AGENT"
131
- },
132
-
133
- "Recorder": {
134
- "head_url": "https://img.touxiangwu.com/zb_users/upload/2023/02/202302281677545695326193.jpg",
135
- "bubble_color": "#F7F7F7",
136
- "text_color": "#000000",
137
- "font_size": 0,
138
- "id": "AGENT"
139
- }
140
- }
141
-
142
- @classmethod
143
- def color_for_img(cls, url):
144
- """
145
- Extract the main colors from the picture and set them as the background color,
146
- then determine the corresponding text color.
147
- """
148
-
149
- def get_main_color(image):
150
- image = image.convert("RGB")
151
- width, height = image.size
152
- pixels = image.getcolors(width * height)
153
- most_common_pixel = max(pixels, key=lambda item: item[0])
154
- return most_common_pixel[1]
155
-
156
- def is_dark_color(rgb_color):
157
- r, g, b = rgb_color
158
- luminance = (0.299 * r + 0.587 * g + 0.114 * b) / 255
159
- return luminance < 0.5
160
-
161
- def download_image(url):
162
- print(f"binding: {url}")
163
- response = requests.get(url)
164
- if response.status_code == 200:
165
- with open('image.jpg', 'wb') as f:
166
- f.write(response.content)
167
-
168
- def rgb_to_hex(color):
169
- return "#{:02X}{:02X}{:02X}".format(color[0], color[1], color[2])
170
-
171
- def get_color(image_url):
172
- download_image(image_url)
173
-
174
- image = Image.open("image.jpg")
175
- main_color = get_main_color(image)
176
- is_dark = is_dark_color(main_color)
177
-
178
- if is_dark:
179
- font_color = "#FFFFFF"
180
- else:
181
- font_color = "#000000"
182
-
183
- return rgb_to_hex(main_color), font_color
184
-
185
- return get_color(url)
186
-
187
- @classmethod
188
- def init(cls, JSON):
189
- # Deprecated
190
- with open(JSON) as f:
191
- sop = json.load(f)
192
- cnt = 0
193
- FISRT_NODE = True
194
- fisrt_node_roles = []
195
- for node_name in sop['nodes']:
196
- node_info = sop['nodes'][node_name]
197
- agent_states = node_info['agent_states']
198
- for agent_role in agent_states:
199
- name = agent_states[agent_role]['style']['name']
200
- cls.ROLE_2_NAME[agent_role] = name
201
- if FISRT_NODE:
202
- fisrt_node_roles.append(agent_role)
203
- bubble_color, text_color = cls.color_for_img(cls.AGENT_HEAD_URL[cnt])
204
- cls.OBJECT_INFO[name] = {
205
- "head_url": f"{cls.AGENT_HEAD_URL[cnt]}",
206
- "bubble_color": bubble_color,
207
- "text_color": text_color,
208
- "font_size": 0,
209
- "id": "AGENT"
210
- }
211
- cnt += 1
212
- if FISRT_NODE:
213
- FISRT_NODE = False
214
- print(cls.OBJECT_INFO)
215
- for usr_name in cls.OBJECT_INFO:
216
- if cls.OBJECT_INFO[usr_name]["id"] == "SYSTEM":
217
- cls.OBJECT_INFO[usr_name]["font_size"] = 12
218
- elif cls.OBJECT_INFO[usr_name]["id"] in ["USER", "AGENT"]:
219
- cls.OBJECT_INFO[usr_name]["font_size"] = 16
220
- else:
221
- assert False
222
- return fisrt_node_roles
223
-
224
- @classmethod
225
- def add_agent(cls, agents_name:List):
226
- for name in agents_name:
227
- bubble_color, text_color = cls.color_for_img(cls.AGENT_HEAD_URL[cls.POINTER])
228
- cls.OBJECT_INFO[name] = {
229
- "head_url": f"{cls.AGENT_HEAD_URL[cls.POINTER]}",
230
- "bubble_color": bubble_color,
231
- "text_color": text_color,
232
- "font_size": 0,
233
- "id": "AGENT"
234
- }
235
- cls.POINTER += 1
236
- for usr_name in cls.OBJECT_INFO:
237
- if cls.OBJECT_INFO[usr_name]["id"] == "SYSTEM":
238
- cls.OBJECT_INFO[usr_name]["font_size"] = 12
239
- elif cls.OBJECT_INFO[usr_name]["id"] in ["USER", "AGENT"]:
240
- cls.OBJECT_INFO[usr_name]["font_size"] = 16
241
- else:
242
- assert False
243
-
244
-
245
- class StateConfig:
246
- """UI configuration for the step progress bar (indicating the current node)"""
247
-
248
- CSS = """
249
- :root {
250
- --gradient-start: 100%;
251
- --gradient-end: 0%;
252
- }
253
- .container.progress-bar-container {
254
- position: relative;
255
- display: flex;
256
- align-items: flex-end;
257
- width: 100%;
258
- overflow-x: auto;
259
- padding-bottom: 30px;
260
- padding-top: 20px
261
- }
262
- .container.progress-bar-container::-webkit-scrollbar {
263
- width: 8px;
264
- background-color: transparent;
265
- }
266
-
267
- .container.progress-bar-container::-webkit-scrollbar-thumb {
268
- background-color: transparent;
269
- }
270
-
271
- .progress-bar-container .progressbar {
272
- counter-reset: step;
273
- white-space: nowrap;
274
- }
275
- .progress-bar-container .progressbar li {
276
- list-style: none;
277
- display: inline-block;
278
- width: 200px;
279
- position: relative;
280
- text-align: center;
281
- cursor: pointer;
282
- white-space: normal;
283
- }
284
- .progress-bar-container .progressbar li:before {
285
- content: counter(step);
286
- counter-increment: step;
287
- width: 30px;
288
- height: 30px;
289
- line-height: 30px;
290
- border: 1px solid #ddd;
291
- border-radius: 100%;
292
- display: block;
293
- text-align: center;
294
- margin: 0 auto 10px auto;
295
- background-color: #ffffff;
296
- }
297
- .progress-bar-container .progressbar li:after {
298
- content: attr(data-content);
299
- position: absolute;
300
- width: 87%;
301
- height: 2px;
302
- background-color: #dddddd;
303
- top: 15px;
304
- left: -45%;
305
- }
306
- .progress-bar-container .progressbar li:first-child:after {
307
- content: none;
308
- }
309
- .progress-bar-container .progressbar li.active {
310
- color: green;
311
- }
312
- .progress-bar-container .progressbar li.active:before {
313
- border-color: green;
314
- background-color: green;
315
- color: white;
316
- }
317
- .progress-bar-container .progressbar li.active + li:after {
318
- background: linear-gradient(to right, green var(--gradient-start), lightgray var(--gradient-end));
319
- }
320
- .progress-bar-container .small-element {
321
- transform: scale(0.8);
322
- }
323
- .progress-bar-container .progressbar li span {
324
- position: absolute;
325
- top: 40px;
326
- left: 0;
327
- width: 100%;
328
- text-align: center;
329
- }
330
- .progress-bar-container .progressbar li .data-content {
331
- position: absolute;
332
- width: 100%;
333
- top: -10px;
334
- left: -100px;
335
- text-align: center;
336
- }
337
- """
338
-
339
- FORMAT = """
340
- <html>
341
- <head>
342
- <style>
343
- {}
344
- </style>
345
- </head>
346
- <body>
347
- <br>
348
- <center>
349
- <div class="container progress-bar-container">
350
- <ul class="progressbar">
351
- {}
352
- </ul>
353
- </div>
354
- </center>
355
- </body>
356
- </html>
357
- """
358
-
359
- STATES_NAME:List[str] = None
360
-
361
- @classmethod
362
- def _generate_template(cls, types:str)->str:
363
- # normal: A state with no execution.
364
- # active-show-up: Active state, and content displayed above the horizontal line.
365
- # active-show-down: Active state, and content displayed below the horizontal line.
366
- # active-show-both: Active state, and content displayed both above and below the horizontal line.
367
- # active-show-none: Active state, with no content displayed above the horizontal line.
368
-
369
- assert types.lower() in ["normal","active-show-up", "active-show-down", "active-show-both", "active", "active-show-none"]
370
- both_templates = """<li class="active" style="--gradient-start: {}%; --gradient-end: {}%;">
371
- <div class="data-content">
372
- <center>
373
- <p style="line-height: 1px;"></p>
374
- {}
375
- <p>
376
- {}
377
- </p>
378
- </center>
379
- </div>
380
- <span>{}</span>
381
- </li>"""
382
-
383
- if types.lower() == "normal":
384
- templates = "<li><span>{}</span></li>"
385
- elif types.lower() == "active":
386
- templates = """<li class="active"><span>{}</span></li>"""
387
- elif types.lower() == "active-show-up":
388
- templates = both_templates.format("{}","{}", "{}", "", "{}")
389
- elif types.lower() == "active-show-down":
390
- templates = both_templates.format("{}","{}", "", "{}", "{}")
391
- elif types.lower() == "active-show-both":
392
- templates = both_templates
393
- elif types.lower() == "active-show-none":
394
- templates = """<li class="active" style="--gradient-start: {}%; --gradient-end: {}%;">
395
- <span>{}</span>
396
- </li>"""
397
- else:
398
- assert False
399
- return templates
400
-
401
- @classmethod
402
- def update_states(cls, current_states:List[int], current_templates:List[str], show_content:List[Tuple[str]])->str:
403
- assert len(current_states) == len(current_templates)
404
- # You can dynamically change the number of states.
405
- # assert len(current_states) == len(cls.STATES_NAME)
406
- css_code = []
407
- for idx in range(len(current_states)):
408
- if idx == 0:
409
- if current_states[idx] != 0:
410
- css_code = [f"{cls._generate_template('active').format(cls.STATES_NAME[idx])}"]
411
- else:
412
- css_code = [f"{cls._generate_template('normal').format(cls.STATES_NAME[idx])}"]
413
- continue
414
- if current_states[idx-1] == 0:
415
- # new_code = f"{cls._generate_template('normal').format(*(show_content[idx]))}"
416
- new_code = f"{cls._generate_template('normal').format(cls.STATES_NAME[idx])}"
417
- else:
418
- new_code = f"{cls._generate_template(current_templates[idx]).format(current_states[idx-1], 100-current_states[idx-1],*(show_content[idx-1]), cls.STATES_NAME[idx])}"
419
- if current_states[idx-1] != 100 or (current_states[idx]==0 and current_states[idx-1]==100):
420
- new_code = new_code.replace("""li class="active" ""","""li """)
421
- css_code.append(new_code)
422
- return "\n".join(css_code)
423
-
424
- @classmethod
425
- def create_states(cls, states_name:List[str], manual_create_end_nodes:bool=False):
426
- # Create states
427
- if manual_create_end_nodes:
428
- states_name.append("Done")
429
- css_code = ""
430
- cls.STATES_NAME: List[str] = states_name
431
- for name in states_name:
432
- css_code = f"{css_code}\n{cls._generate_template('normal').format(name)}"
433
- return css_code
434
-
435
-
436
- if __name__ == '__main__':
437
- pass
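
The `color_for_img` helper above derives each avatar's bubble color from the most frequent pixel of the avatar image and then picks black or white text by relative luminance. A standalone sketch of that luminance rule (assuming an (R, G, B) tuple as input):

def pick_font_color(rgb):
    # Rec. 601 luma: dark backgrounds get white text, light backgrounds get black text.
    r, g, b = rgb
    luminance = (0.299 * r + 0.587 * g + 0.114 * b) / 255
    return "#FFFFFF" if luminance < 0.5 else "#000000"

print(pick_font_color((34, 40, 49)))      # dark grey  -> #FFFFFF
print(pick_font_color((240, 240, 240)))   # light grey -> #000000
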
spaces/AIWaves/SOP_Generation-single/Environment/__init__.py DELETED
@@ -1 +0,0 @@
1
- from .base_environment import Environment
 
 
spaces/AP123/text-to-3D/app.py DELETED
@@ -1,264 +0,0 @@
1
- import os
2
- from PIL import Image
3
- import torch
4
-
5
- from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
6
- from point_e.diffusion.sampler import PointCloudSampler
7
- from point_e.models.download import load_checkpoint
8
- from point_e.models.configs import MODEL_CONFIGS, model_from_config
9
- from point_e.util.plotting import plot_point_cloud
10
- from point_e.util.pc_to_mesh import marching_cubes_mesh
11
-
12
- import skimage.measure
13
-
14
- from pyntcloud import PyntCloud
15
- import matplotlib.colors
16
- import plotly.graph_objs as go
17
-
18
- import trimesh
19
-
20
- import gradio as gr
21
-
22
-
23
- state = ""
24
- device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
25
-
26
- def set_state(s):
27
- print(s)
28
- global state
29
- state = s
30
-
31
- def get_state():
32
- return state
33
-
34
- set_state('Creating txt2mesh model...')
35
- t2m_name = 'base40M-textvec'
36
- t2m_model = model_from_config(MODEL_CONFIGS[t2m_name], device)
37
- t2m_model.eval()
38
- base_diffusion_t2m = diffusion_from_config(DIFFUSION_CONFIGS[t2m_name])
39
-
40
- set_state('Downloading txt2mesh checkpoint...')
41
- t2m_model.load_state_dict(load_checkpoint(t2m_name, device))
42
-
43
-
44
- def load_img2mesh_model(model_name):
45
- set_state(f'Creating img2mesh model {model_name}...')
46
- i2m_name = model_name
47
- i2m_model = model_from_config(MODEL_CONFIGS[i2m_name], device)
48
- i2m_model.eval()
49
- base_diffusion_i2m = diffusion_from_config(DIFFUSION_CONFIGS[i2m_name])
50
-
51
- set_state(f'Downloading img2mesh checkpoint {model_name}...')
52
- i2m_model.load_state_dict(load_checkpoint(i2m_name, device))
53
-
54
- return i2m_model, base_diffusion_i2m
55
-
56
- img2mesh_model_name = 'base40M' #'base300M' #'base1B'
57
- i2m_model, base_diffusion_i2m = load_img2mesh_model(img2mesh_model_name)
58
-
59
-
60
- set_state('Creating upsample model...')
61
- upsampler_model = model_from_config(MODEL_CONFIGS['upsample'], device)
62
- upsampler_model.eval()
63
- upsampler_diffusion = diffusion_from_config(DIFFUSION_CONFIGS['upsample'])
64
-
65
- set_state('Downloading upsampler checkpoint...')
66
- upsampler_model.load_state_dict(load_checkpoint('upsample', device))
67
-
68
- set_state('Creating SDF model...')
69
- sdf_name = 'sdf'
70
- sdf_model = model_from_config(MODEL_CONFIGS[sdf_name], device)
71
- sdf_model.eval()
72
-
73
- set_state('Loading SDF model...')
74
- sdf_model.load_state_dict(load_checkpoint(sdf_name, device))
75
-
76
- stable_diffusion = gr.Blocks.load(name="spaces/runwayml/stable-diffusion-v1-5")
77
-
78
-
79
- set_state('')
80
-
81
- def get_sampler(model_name, txt2obj, guidance_scale):
82
-
83
- global img2mesh_model_name
84
- global base_diffusion_i2m
85
- global i2m_model
86
- if model_name != img2mesh_model_name:
87
- img2mesh_model_name = model_name
88
- i2m_model, base_diffusion_i2m = load_img2mesh_model(model_name)
89
-
90
- return PointCloudSampler(
91
- device=device,
92
- models=[t2m_model if txt2obj else i2m_model, upsampler_model],
93
- diffusions=[base_diffusion_t2m if txt2obj else base_diffusion_i2m, upsampler_diffusion],
94
- num_points=[1024, 4096 - 1024],
95
- aux_channels=['R', 'G', 'B'],
96
- guidance_scale=[guidance_scale, 0.0 if txt2obj else guidance_scale],
97
- model_kwargs_key_filter=('texts', '') if txt2obj else ("*",)
98
- )
99
-
100
- def generate_txt2img(prompt):
101
-
102
- prompt = f"“a 3d rendering of {prompt}, full view, white background"
103
- gallery_dir = stable_diffusion(prompt, fn_index=2)
104
- imgs = [os.path.join(gallery_dir, img) for img in os.listdir(gallery_dir) if os.path.splitext(img)[1] == '.jpg']
105
-
106
- return imgs[0], gr.update(visible=True)
107
-
108
- def generate_3D(input, model_name='base40M', guidance_scale=3.0, grid_size=32):
109
-
110
- set_state('Entered generate function...')
111
-
112
- if isinstance(input, Image.Image):
113
- input = prepare_img(input)
114
-
115
- # if input is a string, it's a text prompt
116
- sampler = get_sampler(model_name, txt2obj=True if isinstance(input, str) else False, guidance_scale=guidance_scale)
117
-
118
- # Produce a sample from the model.
119
- set_state('Sampling...')
120
- samples = None
121
- kw_args = dict(texts=[input]) if isinstance(input, str) else dict(images=[input])
122
- for x in sampler.sample_batch_progressive(batch_size=1, model_kwargs=kw_args):
123
- samples = x
124
-
125
- set_state('Converting to point cloud...')
126
- pc = sampler.output_to_point_clouds(samples)[0]
127
-
128
- set_state('Saving point cloud...')
129
- with open("point_cloud.ply", "wb") as f:
130
- pc.write_ply(f)
131
-
132
- set_state('Converting to mesh...')
133
- save_ply(pc, 'mesh.ply', grid_size)
134
-
135
- set_state('')
136
-
137
- return pc_to_plot(pc), ply_to_obj('mesh.ply', '3d_model.obj'), gr.update(value=['3d_model.obj', 'mesh.ply', 'point_cloud.ply'], visible=True)
138
-
139
- def prepare_img(img):
140
-
141
- w, h = img.size
142
- if w > h:
143
- img = img.crop(((w - h) / 2, 0, w - (w - h) / 2, h))
144
- else:
145
- img = img.crop((0, (h - w) / 2, w, h - (h - w) / 2))
146
-
147
- # resize to 256x256
148
- img = img.resize((256, 256))
149
-
150
- return img
151
-
152
- def pc_to_plot(pc):
153
-
154
- return go.Figure(
155
- data=[
156
- go.Scatter3d(
157
- x=pc.coords[:,0], y=pc.coords[:,1], z=pc.coords[:,2],
158
- mode='markers',
159
- marker=dict(
160
- size=2,
161
- color=['rgb({},{},{})'.format(r,g,b) for r,g,b in zip(pc.channels["R"], pc.channels["G"], pc.channels["B"])],
162
- )
163
- )
164
- ],
165
- layout=dict(
166
- scene=dict(xaxis=dict(visible=False), yaxis=dict(visible=False), zaxis=dict(visible=False))
167
- ),
168
- )
169
-
170
- def ply_to_obj(ply_file, obj_file):
171
- mesh = trimesh.load(ply_file)
172
- mesh.export(obj_file)
173
-
174
- return obj_file
175
-
176
- def save_ply(pc, file_name, grid_size):
177
-
178
- # Produce a mesh (with vertex colors)
179
- mesh = marching_cubes_mesh(
180
- pc=pc,
181
- model=sdf_model,
182
- batch_size=4096,
183
- grid_size=grid_size, # increase to 128 for resolution used in evals
184
- progress=True,
185
- )
186
-
187
- # Write the mesh to a PLY file to import into some other program.
188
- with open(file_name, 'wb') as f:
189
- mesh.write_ply(f)
190
-
191
-
192
- with gr.Blocks() as app:
193
- gr.Markdown("# Image-to-3D")
194
- gr.Markdown("Turn any image or prompt to a 3D asset! Powered by StableDiffusion and OpenAI Point-E. Check out (https://twitter.com/angrypenguinPNG) for a tutorial on how to best use this space.")
195
- gr.HTML("""To skip the queue you can duplicate this space:
196
- <br><a href="https://huggingface.co/spaces/AP123/text-to-3D?duplicate=true"><img src="https://img.shields.io/badge/-Duplicate%20Space-blue?labelColor=white&style=flat&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAAXNSR0IArs4c6QAAAP5JREFUOE+lk7FqAkEURY+ltunEgFXS2sZGIbXfEPdLlnxJyDdYB62sbbUKpLbVNhyYFzbrrA74YJlh9r079973psed0cvUD4A+4HoCjsA85X0Dfn/RBLBgBDxnQPfAEJgBY+A9gALA4tcbamSzS4xq4FOQAJgCDwV2CPKV8tZAJcAjMMkUe1vX+U+SMhfAJEHasQIWmXNN3abzDwHUrgcRGmYcgKe0bxrblHEB4E/pndMazNpSZGcsZdBlYJcEL9Afo75molJyM2FxmPgmgPqlWNLGfwZGG6UiyEvLzHYDmoPkDDiNm9JR9uboiONcBXrpY1qmgs21x1QwyZcpvxt9NS09PlsPAAAAAElFTkSuQmCC&logoWidth=14" alt="Duplicate Space"></a>
197
- <br>Don't forget to change space hardware to <b>GPU</b> after duplicating it.""")
198
-
199
- with gr.Row():
200
- with gr.Column():
201
- with gr.Tab("Image to 3D"):
202
- img = gr.Image(label="Image")
203
- gr.Markdown("Best results with images of 3D objects with no shadows on a white background.")
204
- btn_generate_img2obj = gr.Button(value="Generate")
205
-
206
- with gr.Tab("Text to 3D"):
207
- gr.Markdown("Generate an image with Stable Diffusion, then convert it to 3D. Just enter the object you want to generate.")
208
- prompt_sd = gr.Textbox(label="Prompt", placeholder="a 3d rendering of [your prompt], full view, white background")
209
- btn_generate_txt2sd = gr.Button(value="Generate image")
210
- img_sd = gr.Image(label="Image")
211
- btn_generate_sd2obj = gr.Button(value="Convert to 3D", visible=False)
212
-
213
- with gr.Accordion("Advanced settings", open=False):
214
- dropdown_models = gr.Dropdown(label="Model", value="base40M", choices=["base40M", "base300M"]) #, "base1B"])
215
- guidance_scale = gr.Slider(label="Guidance scale", value=3.0, minimum=3.0, maximum=10.0, step=0.1)
216
- grid_size = gr.Slider(label="Grid size (for .obj 3D model)", value=32, minimum=16, maximum=128, step=16)
217
-
218
- with gr.Column():
219
- plot = gr.Plot(label="Point cloud")
220
- # btn_pc_to_obj = gr.Button(value="Convert to OBJ", visible=False)
221
- model_3d = gr.Model3D(value=None)
222
- file_out = gr.File(label="Files", visible=False)
223
-
224
- # state_info = state_info = gr.Textbox(label="State", show_label=False).style(container=False)
225
-
226
-
227
- # inputs = [dropdown_models, prompt, img, guidance_scale, grid_size]
228
- outputs = [plot, model_3d, file_out]
229
-
230
- btn_generate_img2obj.click(generate_3D, inputs=[img, dropdown_models, guidance_scale, grid_size], outputs=outputs)
231
-
232
- prompt_sd.submit(generate_txt2img, inputs=prompt_sd, outputs=[img_sd, btn_generate_sd2obj])
233
- btn_generate_txt2sd.click(generate_txt2img, inputs=prompt_sd, outputs=[img_sd, btn_generate_sd2obj], queue=False)
234
- btn_generate_sd2obj.click(generate_3D, inputs=[img, dropdown_models, guidance_scale, grid_size], outputs=outputs)
235
-
236
- # btn_pc_to_obj.click(ply_to_obj, inputs=plot, outputs=[model_3d, file_out])
237
-
238
- gr.Examples(
239
- examples=[
240
- ["images/corgi.png"],
241
- ["images/cube_stack.jpg"],
242
- ["images/chair.png"],
243
- ],
244
- inputs=[img],
245
- outputs=outputs,
246
- fn=generate_3D,
247
- cache_examples=False
248
- )
249
-
250
- # app.load(get_state, inputs=[], outputs=state_info, every=0.5, show_progress=False)
251
-
252
- gr.HTML("""
253
- <br><br>
254
- <div style="border-top: 1px solid #303030;">
255
- <br>
256
- <p>Space by:<br>
257
- <a href="https://twitter.com/hahahahohohe"><img src="https://img.shields.io/twitter/follow/hahahahohohe?label=%40anzorq&style=social" alt="Twitter Follow"></a><br>
258
- <a href="https://github.com/qunash"><img alt="GitHub followers" src="https://img.shields.io/github/followers/qunash?style=social" alt="Github Follow"></a></p><br>
259
- <a href="https://www.buymeacoffee.com/anzorq" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 30px !important;width: 102px !important;" ></a><br><br>
260
- <p><img src="https://visitor-badge.glitch.me/badge?page_id=anzorq.point-e_demo" alt="visitors"></p>
261
- </div>
262
- """)
263
-
264
- app.queue(max_size=250, concurrency_count=6).launch()
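
The overall flow in the deleted app is: text prompt or image -> Point-E point cloud -> SDF marching-cubes mesh -> PLY/OBJ export. A minimal sketch of the square center-crop and resize step that `prepare_img` performs before sampling (assuming Pillow is installed and an image path of your own):

from PIL import Image

def center_crop_square(img, size=256):
    # Crop the largest centered square, then resize to the model's expected input size.
    w, h = img.size
    side = min(w, h)
    left = (w - side) // 2
    top = (h - side) // 2
    return img.crop((left, top, left + side, top + side)).resize((size, size))

# img = center_crop_square(Image.open("images/corgi.png"))
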
spaces/ASJMO/freegpt/g4f/Provider/Providers/Lockchat.py DELETED
@@ -1,32 +0,0 @@
1
- import requests
2
- import os
3
- import json
4
- from ...typing import sha256, Dict, get_type_hints
5
- url = 'http://supertest.lockchat.app'
6
- model = ['gpt-4', 'gpt-3.5-turbo']
7
- supports_stream = True
8
- needs_auth = False
9
-
10
- def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs):
11
-
12
- payload = {
13
- "temperature": 0.7,
14
- "messages": messages,
15
- "model": model,
16
- "stream": True,
17
- }
18
- headers = {
19
- "user-agent": "ChatX/39 CFNetwork/1408.0.4 Darwin/22.5.0",
20
- }
21
- response = requests.post("http://supertest.lockchat.app/v1/chat/completions",
22
- json=payload, headers=headers, stream=True)
23
- for token in response.iter_lines():
24
- if b'The model: `gpt-4` does not exist' in token:
25
- print('error, retrying...')
26
- yield from _create_completion(model=model, messages=messages, stream=stream, temperature=temperature, **kwargs)
27
- if b"content" in token:
28
- token = json.loads(token.decode('utf-8').split('data: ')[1])['choices'][0]['delta'].get('content')
29
- if token: yield (token)
30
-
31
- params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
32
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
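
Since `_create_completion` is a generator that yields content deltas from the streamed response, a caller would typically iterate it and join the pieces (a sketch; it assumes the supertest.lockchat.app endpoint is reachable):

messages = [{"role": "user", "content": "Say hello"}]
reply = "".join(token for token in _create_completion("gpt-3.5-turbo", messages, stream=True))
print(reply)
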
spaces/Abhaykoul/Palm-2/app.py DELETED
@@ -1,38 +0,0 @@
1
- import streamlit as st
2
- import google.generativeai as palm
3
-
4
- # Configure the API key
5
- palm.configure(api_key="AIzaSyCnrcQWTEedcGuEsJ9Uhbj_ZS9B2nozMiU")
6
-
7
- defaults = {
8
- 'model': 'models/text-bison-001',
9
- 'temperature': 1,
10
- 'candidate_count': 1,
11
- 'top_k': 40,
12
- 'top_p': 0.95,
13
- 'max_output_tokens': 1024,
14
- 'stop_sequences': [],
15
- 'safety_settings': [
16
- {"category": "HARM_CATEGORY_DEROGATORY", "threshold": 4},
17
- {"category": "HARM_CATEGORY_TOXICITY", "threshold": 4},
18
- {"category": "HARM_CATEGORY_VIOLENCE", "threshold": 4},
19
- {"category": "HARM_CATEGORY_SEXUAL", "threshold": 4},
20
- {"category": "HARM_CATEGORY_MEDICAL", "threshold": 4},
21
- {"category": "HARM_CATEGORY_DANGEROUS", "threshold": 4},
22
- ]
23
- }
24
-
25
- st.title("HelpingAI")
26
-
27
- user_input = st.text_input('You: ')
28
-
29
- if st.button('Submit'):
30
- if user_input.lower() in ['quit', 'exit', 'bye']:
31
- st.write("Goodbye! Have a great day!")
32
- else:
33
- prompt = f"""Hello! I am HelpingAI. I am developed by Abhay Koul, who is 16 years old and lives in KP Colony, Vessu, Anantnag. Abhay Koul is currently working on more AI and AI-related projects.Certainly! If you're looking for more information about Abhay Koul's projects and HelpingAI, I would recommend joining the Discord community. You can find additional details there. He has been working on AI from Jan 2023. If you want more details, you can join the Discord community. The main function of HelpingAI is to make learning and education easier and more convenient for students. My purpose is to simplify your learning journey by providing personalized assistance, innovative teaching methods, and tailored resources to meet your unique needs. I am here to make your educational experience more enjoyable and effective. Feel free to ask me any questions or let me know how I can assist you in your learning adventure and in many more things from your life. Also, HelpingAI was initially developed for S.U.P.E.R.B.O.T. and vortexAI, for more info visit: https://github.com/HelpingAI, https://replit.com/@Devastation-war, join Discord https://discord.gg/2EeZcJjyRd.
34
- input: {user_input}
35
- output:"""
36
-
37
- response = palm.generate_text(**defaults, prompt=prompt)
38
- st.write(response.result)
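
The PaLM key is hardcoded in the `palm.configure` call above; a common variation (a sketch, assuming the key has been exported under the hypothetical environment variable name PALM_API_KEY) reads it from the environment instead:

import os
import google.generativeai as palm

palm.configure(api_key=os.environ["PALM_API_KEY"])  # PALM_API_KEY is an assumed variable name
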
spaces/Abhilashvj/planogram-compliance/utils/augmentations.py DELETED
@@ -1,564 +0,0 @@
1
- # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
- """
3
- Image augmentation functions
4
- """
5
-
6
- import math
7
- import random
8
-
9
- import cv2
10
- import numpy as np
11
- import torch
12
- import torchvision.transforms as T
13
- import torchvision.transforms.functional as TF
14
-
15
- from utils.general import (
16
- LOGGER,
17
- check_version,
18
- colorstr,
19
- resample_segments,
20
- segment2box,
21
- xywhn2xyxy,
22
- )
23
- from utils.metrics import bbox_ioa
24
-
25
- IMAGENET_MEAN = 0.485, 0.456, 0.406 # RGB mean
26
- IMAGENET_STD = 0.229, 0.224, 0.225 # RGB standard deviation
27
-
28
-
29
- class Albumentations:
30
- # YOLOv5 Albumentations class (optional, only used if package is installed)
31
- def __init__(self, size=640):
32
- self.transform = None
33
- prefix = colorstr("albumentations: ")
34
- try:
35
- import albumentations as A
36
-
37
- check_version(
38
- A.__version__, "1.0.3", hard=True
39
- ) # version requirement
40
-
41
- T = [
42
- A.RandomResizedCrop(
43
- height=size,
44
- width=size,
45
- scale=(0.8, 1.0),
46
- ratio=(0.9, 1.11),
47
- p=0.0,
48
- ),
49
- A.Blur(p=0.01),
50
- A.MedianBlur(p=0.01),
51
- A.ToGray(p=0.01),
52
- A.CLAHE(p=0.01),
53
- A.RandomBrightnessContrast(p=0.0),
54
- A.RandomGamma(p=0.0),
55
- A.ImageCompression(quality_lower=75, p=0.0),
56
- ] # transforms
57
- self.transform = A.Compose(
58
- T,
59
- bbox_params=A.BboxParams(
60
- format="yolo", label_fields=["class_labels"]
61
- ),
62
- )
63
-
64
- LOGGER.info(
65
- prefix
66
- + ", ".join(
67
- f"{x}".replace("always_apply=False, ", "")
68
- for x in T
69
- if x.p
70
- )
71
- )
72
- except ImportError: # package not installed, skip
73
- pass
74
- except Exception as e:
75
- LOGGER.info(f"{prefix}{e}")
76
-
77
- def __call__(self, im, labels, p=1.0):
78
- if self.transform and random.random() < p:
79
- new = self.transform(
80
- image=im, bboxes=labels[:, 1:], class_labels=labels[:, 0]
81
- ) # transformed
82
- im, labels = new["image"], np.array(
83
- [[c, *b] for c, b in zip(new["class_labels"], new["bboxes"])]
84
- )
85
- return im, labels
86
-
87
-
88
- def normalize(x, mean=IMAGENET_MEAN, std=IMAGENET_STD, inplace=False):
89
- # Denormalize RGB images x per ImageNet stats in BCHW format, i.e. = (x - mean) / std
90
- return TF.normalize(x, mean, std, inplace=inplace)
91
-
92
-
93
- def denormalize(x, mean=IMAGENET_MEAN, std=IMAGENET_STD):
94
- # Denormalize RGB images x per ImageNet stats in BCHW format, i.e. = x * std + mean
95
- for i in range(3):
96
- x[:, i] = x[:, i] * std[i] + mean[i]
97
- return x
98
-
99
-
100
- def augment_hsv(im, hgain=0.5, sgain=0.5, vgain=0.5):
101
- # HSV color-space augmentation
102
- if hgain or sgain or vgain:
103
- r = (
104
- np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1
105
- ) # random gains
106
- hue, sat, val = cv2.split(cv2.cvtColor(im, cv2.COLOR_BGR2HSV))
107
- dtype = im.dtype # uint8
108
-
109
- x = np.arange(0, 256, dtype=r.dtype)
110
- lut_hue = ((x * r[0]) % 180).astype(dtype)
111
- lut_sat = np.clip(x * r[1], 0, 255).astype(dtype)
112
- lut_val = np.clip(x * r[2], 0, 255).astype(dtype)
113
-
114
- im_hsv = cv2.merge(
115
- (
116
- cv2.LUT(hue, lut_hue),
117
- cv2.LUT(sat, lut_sat),
118
- cv2.LUT(val, lut_val),
119
- )
120
- )
121
- cv2.cvtColor(im_hsv, cv2.COLOR_HSV2BGR, dst=im) # no return needed
122
-
123
-
124
- def hist_equalize(im, clahe=True, bgr=False):
125
- # Equalize histogram on BGR image 'im' with im.shape(n,m,3) and range 0-255
126
- yuv = cv2.cvtColor(im, cv2.COLOR_BGR2YUV if bgr else cv2.COLOR_RGB2YUV)
127
- if clahe:
128
- c = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
129
- yuv[:, :, 0] = c.apply(yuv[:, :, 0])
130
- else:
131
- yuv[:, :, 0] = cv2.equalizeHist(
132
- yuv[:, :, 0]
133
- ) # equalize Y channel histogram
134
- return cv2.cvtColor(
135
- yuv, cv2.COLOR_YUV2BGR if bgr else cv2.COLOR_YUV2RGB
136
- ) # convert YUV image to RGB
137
-
138
-
139
- def replicate(im, labels):
140
- # Replicate labels
141
- h, w = im.shape[:2]
142
- boxes = labels[:, 1:].astype(int)
143
- x1, y1, x2, y2 = boxes.T
144
- s = ((x2 - x1) + (y2 - y1)) / 2 # side length (pixels)
145
- for i in s.argsort()[: round(s.size * 0.5)]: # smallest indices
146
- x1b, y1b, x2b, y2b = boxes[i]
147
- bh, bw = y2b - y1b, x2b - x1b
148
- yc, xc = int(random.uniform(0, h - bh)), int(
149
- random.uniform(0, w - bw)
150
- ) # offset x, y
151
- x1a, y1a, x2a, y2a = [xc, yc, xc + bw, yc + bh]
152
- im[y1a:y2a, x1a:x2a] = im[
153
- y1b:y2b, x1b:x2b
154
- ] # im4[ymin:ymax, xmin:xmax]
155
- labels = np.append(
156
- labels, [[labels[i, 0], x1a, y1a, x2a, y2a]], axis=0
157
- )
158
-
159
- return im, labels
160
-
161
-
162
- def letterbox(
163
- im,
164
- new_shape=(640, 640),
165
- color=(114, 114, 114),
166
- auto=True,
167
- scaleFill=False,
168
- scaleup=True,
169
- stride=32,
170
- ):
171
- # Resize and pad image while meeting stride-multiple constraints
172
- shape = im.shape[:2] # current shape [height, width]
173
- if isinstance(new_shape, int):
174
- new_shape = (new_shape, new_shape)
175
-
176
- # Scale ratio (new / old)
177
- r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
178
- if not scaleup: # only scale down, do not scale up (for better val mAP)
179
- r = min(r, 1.0)
180
-
181
- # Compute padding
182
- ratio = r, r # width, height ratios
183
- new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
184
- dw, dh = (
185
- new_shape[1] - new_unpad[0],
186
- new_shape[0] - new_unpad[1],
187
- ) # wh padding
188
- if auto: # minimum rectangle
189
- dw, dh = np.mod(dw, stride), np.mod(dh, stride) # wh padding
190
- elif scaleFill: # stretch
191
- dw, dh = 0.0, 0.0
192
- new_unpad = (new_shape[1], new_shape[0])
193
- ratio = (
194
- new_shape[1] / shape[1],
195
- new_shape[0] / shape[0],
196
- ) # width, height ratios
197
-
198
- dw /= 2 # divide padding into 2 sides
199
- dh /= 2
200
-
201
- if shape[::-1] != new_unpad: # resize
202
- im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
203
- top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
204
- left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
205
- im = cv2.copyMakeBorder(
206
- im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color
207
- ) # add border
208
- return im, ratio, (dw, dh)
209
-
210
-
211
- def random_perspective(
212
- im,
213
- targets=(),
214
- segments=(),
215
- degrees=10,
216
- translate=0.1,
217
- scale=0.1,
218
- shear=10,
219
- perspective=0.0,
220
- border=(0, 0),
221
- ):
222
- # torchvision.transforms.RandomAffine(degrees=(-10, 10), translate=(0.1, 0.1), scale=(0.9, 1.1), shear=(-10, 10))
223
- # targets = [cls, xyxy]
224
-
225
- height = im.shape[0] + border[0] * 2 # shape(h,w,c)
226
- width = im.shape[1] + border[1] * 2
227
-
228
- # Center
229
- C = np.eye(3)
230
- C[0, 2] = -im.shape[1] / 2 # x translation (pixels)
231
- C[1, 2] = -im.shape[0] / 2 # y translation (pixels)
232
-
233
- # Perspective
234
- P = np.eye(3)
235
- P[2, 0] = random.uniform(
236
- -perspective, perspective
237
- ) # x perspective (about y)
238
- P[2, 1] = random.uniform(
239
- -perspective, perspective
240
- ) # y perspective (about x)
241
-
242
- # Rotation and Scale
243
- R = np.eye(3)
244
- a = random.uniform(-degrees, degrees)
245
- # a += random.choice([-180, -90, 0, 90]) # add 90deg rotations to small rotations
246
- s = random.uniform(1 - scale, 1 + scale)
247
- # s = 2 ** random.uniform(-scale, scale)
248
- R[:2] = cv2.getRotationMatrix2D(angle=a, center=(0, 0), scale=s)
249
-
250
- # Shear
251
- S = np.eye(3)
252
- S[0, 1] = math.tan(
253
- random.uniform(-shear, shear) * math.pi / 180
254
- ) # x shear (deg)
255
- S[1, 0] = math.tan(
256
- random.uniform(-shear, shear) * math.pi / 180
257
- ) # y shear (deg)
258
-
259
- # Translation
260
- T = np.eye(3)
261
- T[0, 2] = (
262
- random.uniform(0.5 - translate, 0.5 + translate) * width
263
- ) # x translation (pixels)
264
- T[1, 2] = (
265
- random.uniform(0.5 - translate, 0.5 + translate) * height
266
- ) # y translation (pixels)
267
-
268
- # Combined rotation matrix
269
- M = T @ S @ R @ P @ C # order of operations (right to left) is IMPORTANT
270
- if (
271
- (border[0] != 0) or (border[1] != 0) or (M != np.eye(3)).any()
272
- ): # image changed
273
- if perspective:
274
- im = cv2.warpPerspective(
275
- im, M, dsize=(width, height), borderValue=(114, 114, 114)
276
- )
277
- else: # affine
278
- im = cv2.warpAffine(
279
- im, M[:2], dsize=(width, height), borderValue=(114, 114, 114)
280
- )
281
-
282
- # Visualize
283
- # import matplotlib.pyplot as plt
284
- # ax = plt.subplots(1, 2, figsize=(12, 6))[1].ravel()
285
- # ax[0].imshow(im[:, :, ::-1]) # base
286
- # ax[1].imshow(im2[:, :, ::-1]) # warped
287
-
288
- # Transform label coordinates
289
- n = len(targets)
290
- if n:
291
- use_segments = any(x.any() for x in segments)
292
- new = np.zeros((n, 4))
293
- if use_segments: # warp segments
294
- segments = resample_segments(segments) # upsample
295
- for i, segment in enumerate(segments):
296
- xy = np.ones((len(segment), 3))
297
- xy[:, :2] = segment
298
- xy = xy @ M.T # transform
299
- xy = (
300
- xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]
301
- ) # perspective rescale or affine
302
-
303
- # clip
304
- new[i] = segment2box(xy, width, height)
305
-
306
- else: # warp boxes
307
- xy = np.ones((n * 4, 3))
308
- xy[:, :2] = targets[:, [1, 2, 3, 4, 1, 4, 3, 2]].reshape(
309
- n * 4, 2
310
- ) # x1y1, x2y2, x1y2, x2y1
311
- xy = xy @ M.T # transform
312
- xy = (
313
- xy[:, :2] / xy[:, 2:3] if perspective else xy[:, :2]
314
- ).reshape(
315
- n, 8
316
- ) # perspective rescale or affine
317
-
318
- # create new boxes
319
- x = xy[:, [0, 2, 4, 6]]
320
- y = xy[:, [1, 3, 5, 7]]
321
- new = (
322
- np.concatenate((x.min(1), y.min(1), x.max(1), y.max(1)))
323
- .reshape(4, n)
324
- .T
325
- )
326
-
327
- # clip
328
- new[:, [0, 2]] = new[:, [0, 2]].clip(0, width)
329
- new[:, [1, 3]] = new[:, [1, 3]].clip(0, height)
330
-
331
- # filter candidates
332
- i = box_candidates(
333
- box1=targets[:, 1:5].T * s,
334
- box2=new.T,
335
- area_thr=0.01 if use_segments else 0.10,
336
- )
337
- targets = targets[i]
338
- targets[:, 1:5] = new[i]
339
-
340
- return im, targets
341
-
342
-
343
- def copy_paste(im, labels, segments, p=0.5):
344
- # Implement Copy-Paste augmentation https://arxiv.org/abs/2012.07177, labels as nx5 np.array(cls, xyxy)
345
- n = len(segments)
346
- if p and n:
347
- h, w, c = im.shape # height, width, channels
348
- im_new = np.zeros(im.shape, np.uint8)
349
- for j in random.sample(range(n), k=round(p * n)):
350
- l, s = labels[j], segments[j]
351
- box = w - l[3], l[2], w - l[1], l[4]
352
- ioa = bbox_ioa(box, labels[:, 1:5]) # intersection over area
353
- if (ioa < 0.30).all(): # allow 30% obscuration of existing labels
354
- labels = np.concatenate((labels, [[l[0], *box]]), 0)
355
- segments.append(np.concatenate((w - s[:, 0:1], s[:, 1:2]), 1))
356
- cv2.drawContours(
357
- im_new,
358
- [segments[j].astype(np.int32)],
359
- -1,
360
- (1, 1, 1),
361
- cv2.FILLED,
362
- )
363
-
364
- result = cv2.flip(im, 1) # augment segments (flip left-right)
365
- i = cv2.flip(im_new, 1).astype(bool)
366
- im[i] = result[i] # cv2.imwrite('debug.jpg', im) # debug
367
-
368
- return im, labels, segments
369
-
370
-
371
- def cutout(im, labels, p=0.5):
372
- # Applies image cutout augmentation https://arxiv.org/abs/1708.04552
373
- if random.random() < p:
374
- h, w = im.shape[:2]
375
- scales = (
376
- [0.5] * 1
377
- + [0.25] * 2
378
- + [0.125] * 4
379
- + [0.0625] * 8
380
- + [0.03125] * 16
381
- ) # image size fraction
382
- for s in scales:
383
- mask_h = random.randint(1, int(h * s)) # create random masks
384
- mask_w = random.randint(1, int(w * s))
385
-
386
- # box
387
- xmin = max(0, random.randint(0, w) - mask_w // 2)
388
- ymin = max(0, random.randint(0, h) - mask_h // 2)
389
- xmax = min(w, xmin + mask_w)
390
- ymax = min(h, ymin + mask_h)
391
-
392
- # apply random color mask
393
- im[ymin:ymax, xmin:xmax] = [
394
- random.randint(64, 191) for _ in range(3)
395
- ]
396
-
397
- # return unobscured labels
398
- if len(labels) and s > 0.03:
399
- box = np.array([xmin, ymin, xmax, ymax], dtype=np.float32)
400
- ioa = bbox_ioa(
401
- box, xywhn2xyxy(labels[:, 1:5], w, h)
402
- ) # intersection over area
403
- labels = labels[ioa < 0.60] # remove >60% obscured labels
404
-
405
- return labels
406
-
407
-
408
- def mixup(im, labels, im2, labels2):
409
- # Applies MixUp augmentation https://arxiv.org/pdf/1710.09412.pdf
410
- r = np.random.beta(32.0, 32.0) # mixup ratio, alpha=beta=32.0
411
- im = (im * r + im2 * (1 - r)).astype(np.uint8)
412
- labels = np.concatenate((labels, labels2), 0)
413
- return im, labels
414
-
415
-
416
- def box_candidates(
417
- box1, box2, wh_thr=2, ar_thr=100, area_thr=0.1, eps=1e-16
418
- ): # box1(4,n), box2(4,n)
419
- # Compute candidate boxes: box1 before augment, box2 after augment, wh_thr (pixels), aspect_ratio_thr, area_ratio
420
- w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
421
- w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
422
- ar = np.maximum(w2 / (h2 + eps), h2 / (w2 + eps)) # aspect ratio
423
- return (
424
- (w2 > wh_thr)
425
- & (h2 > wh_thr)
426
- & (w2 * h2 / (w1 * h1 + eps) > area_thr)
427
- & (ar < ar_thr)
428
- ) # candidates
429
-
430
-
431
- def classify_albumentations(
432
- augment=True,
433
- size=224,
434
- scale=(0.08, 1.0),
435
- ratio=(0.75, 1.0 / 0.75), # 0.75, 1.33
436
- hflip=0.5,
437
- vflip=0.0,
438
- jitter=0.4,
439
- mean=IMAGENET_MEAN,
440
- std=IMAGENET_STD,
441
- auto_aug=False,
442
- ):
443
- # YOLOv5 classification Albumentations (optional, only used if package is installed)
444
- prefix = colorstr("albumentations: ")
445
- try:
446
- import albumentations as A
447
- from albumentations.pytorch import ToTensorV2
448
-
449
- check_version(A.__version__, "1.0.3", hard=True) # version requirement
450
- if augment: # Resize and crop
451
- T = [
452
- A.RandomResizedCrop(
453
- height=size, width=size, scale=scale, ratio=ratio
454
- )
455
- ]
456
- if auto_aug:
457
- # TODO: implement AugMix, AutoAug & RandAug in albumentation
458
- LOGGER.info(
459
- f"{prefix}auto augmentations are currently not supported"
460
- )
461
- else:
462
- if hflip > 0:
463
- T += [A.HorizontalFlip(p=hflip)]
464
- if vflip > 0:
465
- T += [A.VerticalFlip(p=vflip)]
466
- if jitter > 0:
467
- color_jitter = (
468
- float(jitter),
469
- ) * 3 # repeat value for brightness, contrast, saturation, 0 hue
470
- T += [A.ColorJitter(*color_jitter, 0)]
471
- else: # Use fixed crop for eval set (reproducibility)
472
- T = [
473
- A.SmallestMaxSize(max_size=size),
474
- A.CenterCrop(height=size, width=size),
475
- ]
476
- T += [
477
- A.Normalize(mean=mean, std=std),
478
- ToTensorV2(),
479
- ] # Normalize and convert to Tensor
480
- LOGGER.info(
481
- prefix
482
- + ", ".join(
483
- f"{x}".replace("always_apply=False, ", "") for x in T if x.p
484
- )
485
- )
486
- return A.Compose(T)
487
-
488
- except ImportError: # package not installed, skip
489
- LOGGER.warning(
490
- f"{prefix}⚠️ not found, install with `pip install albumentations` (recommended)"
491
- )
492
- except Exception as e:
493
- LOGGER.info(f"{prefix}{e}")
494
-
495
-
496
- def classify_transforms(size=224):
497
- # Transforms to apply if albumentations not installed
498
- assert isinstance(
499
- size, int
500
- ), f"ERROR: classify_transforms size {size} must be integer, not (list, tuple)"
501
- # T.Compose([T.ToTensor(), T.Resize(size), T.CenterCrop(size), T.Normalize(IMAGENET_MEAN, IMAGENET_STD)])
502
- return T.Compose(
503
- [
504
- CenterCrop(size),
505
- ToTensor(),
506
- T.Normalize(IMAGENET_MEAN, IMAGENET_STD),
507
- ]
508
- )
509
-
510
-
511
- class LetterBox:
512
- # YOLOv5 LetterBox class for image preprocessing, i.e. T.Compose([LetterBox(size), ToTensor()])
513
- def __init__(self, size=(640, 640), auto=False, stride=32):
514
- super().__init__()
515
- self.h, self.w = (size, size) if isinstance(size, int) else size
516
- self.auto = auto # pass max size integer, automatically solve for short side using stride
517
- self.stride = stride # used with auto
518
-
519
- def __call__(self, im): # im = np.array HWC
520
- imh, imw = im.shape[:2]
521
- r = min(self.h / imh, self.w / imw) # ratio of new/old
522
- h, w = round(imh * r), round(imw * r) # resized image
523
- hs, ws = (
524
- math.ceil(x / self.stride) * self.stride for x in (h, w)
525
- ) if self.auto else (self.h, self.w)
526
- top, left = round((hs - h) / 2 - 0.1), round((ws - w) / 2 - 0.1)
527
- im_out = np.full((self.h, self.w, 3), 114, dtype=im.dtype)
528
- im_out[top : top + h, left : left + w] = cv2.resize(
529
- im, (w, h), interpolation=cv2.INTER_LINEAR
530
- )
531
- return im_out
532
-
533
-
534
- class CenterCrop:
535
- # YOLOv5 CenterCrop class for image preprocessing, i.e. T.Compose([CenterCrop(size), ToTensor()])
536
- def __init__(self, size=640):
537
- super().__init__()
538
- self.h, self.w = (size, size) if isinstance(size, int) else size
539
-
540
- def __call__(self, im): # im = np.array HWC
541
- imh, imw = im.shape[:2]
542
- m = min(imh, imw) # min dimension
543
- top, left = (imh - m) // 2, (imw - m) // 2
544
- return cv2.resize(
545
- im[top : top + m, left : left + m],
546
- (self.w, self.h),
547
- interpolation=cv2.INTER_LINEAR,
548
- )
549
-
550
-
551
- class ToTensor:
552
- # YOLOv5 ToTensor class for image preprocessing, i.e. T.Compose([LetterBox(size), ToTensor()])
553
- def __init__(self, half=False):
554
- super().__init__()
555
- self.half = half
556
-
557
- def __call__(self, im): # im = np.array HWC in BGR order
558
- im = np.ascontiguousarray(
559
- im.transpose((2, 0, 1))[::-1]
560
- ) # HWC to CHW -> BGR to RGB -> contiguous
561
- im = torch.from_numpy(im) # to torch
562
- im = im.half() if self.half else im.float() # uint8 to fp16/32
563
- im /= 255.0 # 0-255 to 0.0-1.0
564
- return im
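For orientation, a minimal usage sketch of the preprocessing classes defined in this file (LetterBox, CenterCrop, ToTensor). It is not part of the deleted code; the import path `utils.augmentations` and the image filename are assumptions for illustration.

    import cv2
    import torchvision.transforms as T
    from utils.augmentations import LetterBox, CenterCrop, ToTensor  # assumed import path

    im = cv2.imread("example.jpg")                              # HWC, BGR, uint8
    detect_tf = T.Compose([LetterBox((640, 640)), ToTensor()])  # detection-style resize + pad
    classify_tf = T.Compose([CenterCrop(224), ToTensor()])      # classification-style crop
    x = classify_tf(im)                                         # CHW, RGB, float32 in [0, 1]
    batch = x.unsqueeze(0)                                      # BCHW, ready for a model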
spaces/Adapter/T2I-Adapter/ldm/modules/ema.py DELETED
@@ -1,80 +0,0 @@
- import torch
- from torch import nn
-
-
- class LitEma(nn.Module):
-     def __init__(self, model, decay=0.9999, use_num_upates=True):
-         super().__init__()
-         if decay < 0.0 or decay > 1.0:
-             raise ValueError('Decay must be between 0 and 1')
-
-         self.m_name2s_name = {}
-         self.register_buffer('decay', torch.tensor(decay, dtype=torch.float32))
-         self.register_buffer('num_updates', torch.tensor(0, dtype=torch.int) if use_num_upates
-                              else torch.tensor(-1, dtype=torch.int))
-
-         for name, p in model.named_parameters():
-             if p.requires_grad:
-                 # remove as '.'-character is not allowed in buffers
-                 s_name = name.replace('.', '')
-                 self.m_name2s_name.update({name: s_name})
-                 self.register_buffer(s_name, p.clone().detach().data)
-
-         self.collected_params = []
-
-     def reset_num_updates(self):
-         del self.num_updates
-         self.register_buffer('num_updates', torch.tensor(0, dtype=torch.int))
-
-     def forward(self, model):
-         decay = self.decay
-
-         if self.num_updates >= 0:
-             self.num_updates += 1
-             decay = min(self.decay, (1 + self.num_updates) / (10 + self.num_updates))
-
-         one_minus_decay = 1.0 - decay
-
-         with torch.no_grad():
-             m_param = dict(model.named_parameters())
-             shadow_params = dict(self.named_buffers())
-
-             for key in m_param:
-                 if m_param[key].requires_grad:
-                     sname = self.m_name2s_name[key]
-                     shadow_params[sname] = shadow_params[sname].type_as(m_param[key])
-                     shadow_params[sname].sub_(one_minus_decay * (shadow_params[sname] - m_param[key]))
-                 else:
-                     assert not key in self.m_name2s_name
-
-     def copy_to(self, model):
-         m_param = dict(model.named_parameters())
-         shadow_params = dict(self.named_buffers())
-         for key in m_param:
-             if m_param[key].requires_grad:
-                 m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data)
-             else:
-                 assert not key in self.m_name2s_name
-
-     def store(self, parameters):
-         """
-         Save the current parameters for restoring later.
-         Args:
-             parameters: Iterable of `torch.nn.Parameter`; the parameters to be
-                 temporarily stored.
-         """
-         self.collected_params = [param.clone() for param in parameters]
-
-     def restore(self, parameters):
-         """
-         Restore the parameters stored with the `store` method.
-         Useful to validate the model with EMA parameters without affecting the
-         original optimization process. Store the parameters before the
-         `copy_to` method. After validation (or model saving), use this to
-         restore the former parameters.
-         Args:
-             parameters: Iterable of `torch.nn.Parameter`; the parameters to be
-                 updated with the stored parameters.
-         """
-         for c_param, param in zip(self.collected_params, parameters):
-             param.data.copy_(c_param.data)
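A short, hedged sketch of how LitEma is typically wired into a training loop (update the shadow weights after each optimizer step, copy them in for evaluation, then restore). This is an illustration of the usual EMA pattern, not code from the repository.

    import torch
    from torch import nn
    from ldm.modules.ema import LitEma

    model = nn.Linear(4, 2)
    ema = LitEma(model, decay=0.9999)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

    for step in range(100):
        loss = model(torch.randn(8, 4)).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        ema(model)                      # update the EMA (shadow) copies

    ema.store(model.parameters())       # keep the raw weights
    ema.copy_to(model)                  # evaluate with EMA weights
    # ... run validation here ...
    ema.restore(model.parameters())     # go back to the raw weights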
spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/selector/pokemon.py DELETED
@@ -1,98 +0,0 @@
1
- from __future__ import annotations
2
-
3
- from typing import TYPE_CHECKING, List
4
- import numpy as np
5
- import json
6
-
7
- from agentverse.message import Message
8
-
9
- from . import selector_registry as SelectorRegistry
10
- from .base import BaseSelector
11
-
12
- if TYPE_CHECKING:
13
- from agentverse.environments import PokemonEnvironment
14
-
15
-
16
- @SelectorRegistry.register("pokemon")
17
- class PokemonSelector(BaseSelector):
18
- """
19
- Selector for Pokemon environment
20
- """
21
-
22
- def select_message(
23
- self, environment: PokemonEnvironment, messages: List[Message]
24
- ) -> List[Message]:
25
- valid = []
26
- talk_matrix = np.zeros((len(environment.agents), len(environment.agents)))
27
- agent_to_idx = {agent.name: i for i, agent in enumerate(environment.agents)}
28
- for i, message in enumerate(messages):
29
- try:
30
- content = json.loads(message.content)
31
- except json.decoder.JSONDecodeError:
32
- valid.append(0)
33
- continue
34
- if content["action"] == "Speak":
35
- try:
36
- if "to" not in content:
37
- # If the model does not generate receiver, then we discard the message
38
- valid.append(0)
39
- elif content["to"] in agent_to_idx:
40
- # TODO: allow talk to a list of agents
41
- valid.append(1)
42
- # talk_matrix[i][j] = 1 ==> i talk to j
43
- talk_matrix[agent_to_idx[message.sender]][
44
- agent_to_idx[content["to"]]
45
- ] = 1
46
- else:
47
- # If the receiver is not in the environment, then we discard the message
48
- valid.append(0)
49
- except:
50
- valid.append(0)
51
- continue
52
- elif content["action"] == "MoveTo":
53
- # If the agent move to a location that does not exist, then we discard the message
54
- valid.append(
55
- "to" in content and content["to"] in environment.locations_to_agents
56
- )
57
- else:
58
- valid.append(1)
59
- selected_messages = []
60
- for i, message in enumerate(messages):
61
- content = json.loads(message.content)
62
- sender_idx = agent_to_idx[message.sender]
63
- if valid[i] == 0:
64
- selected_messages.append(Message())
65
- continue
66
- if content["action"] == "MoveTo":
67
- if np.sum(talk_matrix[:, sender_idx]) > 0:
68
- # If someone talks to this agent, then we discard the move action
69
- selected_messages.append(Message())
70
- else:
71
- selected_messages.append(message)
72
- elif content["action"] == "Speak":
73
- receiver_idx = agent_to_idx[content["to"]]
74
- if talk_matrix[sender_idx][receiver_idx] == 0:
75
- # If this agent and the receiver talk to each other and the counterpart's
76
- # message has already been selected, discard this one
77
- selected_messages.append(Message())
78
- continue
79
- if np.sum(talk_matrix[receiver_idx, :]) > 0:
80
- if talk_matrix[receiver_idx][sender_idx] == 1:
81
- # If the receiver talk to this agent, then we randomly select one message
82
- if sender_idx < receiver_idx:
83
- if np.random.random() < 0.5:
84
- selected_messages.append(message)
85
- talk_matrix[receiver_idx][sender_idx] = 0
86
- else:
87
- selected_messages.append(Message())
88
- talk_matrix[sender_idx][receiver_idx] = 0
89
- else:
90
- print("Shouldn't happen")
91
- else:
92
- # If the receiver talk to other agent, we still talk to the receiver (?)
93
- selected_messages.append(message)
94
- else:
95
- selected_messages.append(message)
96
- else:
97
- selected_messages.append(message)
98
- return selected_messages
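For illustration only, the JSON message shape that PokemonSelector.select_message expects to parse; the agent names, the location, and the extra "text" field are invented for the example.

    import json
    from agentverse.message import Message

    speak = Message(
        sender="Ash",
        content=json.dumps({"action": "Speak", "to": "Misty", "text": "Hi!"}),
    )
    move = Message(
        sender="Brock",
        content=json.dumps({"action": "MoveTo", "to": "Pokemon Center"}),
    )
    # selector.select_message(environment, [speak, move]) keeps a message only if
    # its receiver or destination exists and no competing conversation overrides it.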
spaces/AgentVerse/agentVerse/agentverse/message.py DELETED
@@ -1,35 +0,0 @@
- from pydantic import BaseModel, Field
- from typing import List, Tuple, Set, Union, Any
-
- from agentverse.utils import AgentAction
-
-
- class Message(BaseModel):
-     content: Any = Field(default="")
-     sender: str = Field(default="")
-     receiver: Set[str] = Field(default=set({"all"}))
-     sender_agent: object = Field(default=None)
-     tool_response: List[Tuple[AgentAction, str]] = Field(default=[])
-
-
- class SolverMessage(Message):
-     pass
-
-
- class CriticMessage(Message):
-     is_agree: bool
-     criticism: str = ""
-
-
- class ExecutorMessage(Message):
-     tool_name: str = Field(default="")
-     tool_input: Any = None
-
-
- class EvaluatorMessage(Message):
-     score: Union[bool, List[bool], int, List[int]]
-     advice: str = Field(default="")
-
-
- class RoleAssignerMessage(Message):
-     pass
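A minimal sketch of constructing the typed message classes above; the field values are invented and pydantic performs the validation.

    from agentverse.message import CriticMessage, EvaluatorMessage

    critic = CriticMessage(sender="critic-1", is_agree=False, criticism="Add a citation.")
    review = EvaluatorMessage(sender="evaluator", score=[3, 4, 5], advice="Tighten section 2.")
    print(critic.is_agree, review.score)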
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/dropdownlist/DropDownList.js DELETED
@@ -1,119 +0,0 @@
1
- import Label from '../label/Label.js';
2
- import Methods from './methods/Methods.js'
3
-
4
-
5
- const GetValue = Phaser.Utils.Objects.GetValue;
6
-
7
- class DropDownList extends Label {
8
- constructor(scene, config) {
9
- super(scene, config);
10
- this.type = 'rexDropDownList';
11
- this.timer = undefined;
12
-
13
- this.setOptions(GetValue(config, 'options'));
14
-
15
- var listConfig = GetValue(config, 'list');
16
- this.setWrapEnable(GetValue(listConfig, 'wrap', false));
17
- this.setCreateButtonCallback(GetValue(listConfig, 'createButtonCallback'));
18
- this.setCreateListBackgroundCallback(GetValue(listConfig, 'createBackgroundCallback'));
19
- this.setButtonClickCallback(GetValue(listConfig, 'onButtonClick'));
20
- this.setButtonOverCallback(GetValue(listConfig, 'onButtonOver'));
21
- this.setButtonOutCallback(GetValue(listConfig, 'onButtonOut'));
22
- this.setListExpandDirection(GetValue(listConfig, 'expandDirection'));
23
- this.setListEaseInDuration(GetValue(listConfig, 'easeIn', 500));
24
- this.setListEaseOutDuration(GetValue(listConfig, 'easeOut', 100));
25
- this.setListTransitInCallback(GetValue(listConfig, 'transitIn'));
26
- this.settListTransitOutCallback(GetValue(listConfig, 'transitOut'));
27
- this.setListSize(GetValue(listConfig, 'width'), GetValue(listConfig, 'height'));
28
- this.setListAlignmentMode(GetValue(listConfig, 'alignParent', 'text'));
29
- this.setListAlignmentSide(GetValue(listConfig, 'alignSide', ''));
30
- this.setListBounds(GetValue(listConfig, 'bounds'));
31
- this.setListSpace(GetValue(listConfig, 'space'));
32
- this.setListDraggable(GetValue(listConfig, 'draggable', false));
33
-
34
- this.setValueChangeCallback(
35
- GetValue(config, 'setValueCallback'),
36
- GetValue(config, 'setValueCallbackScope')
37
- );
38
- this.setValue(GetValue(config, 'value'));
39
-
40
- this.onClick(this.toggleListPanel, this);
41
- }
42
-
43
- destroy(fromScene) {
44
- // This Game Object has already been destroyed
45
- if (!this.scene || this.ignoreDestroy) {
46
- return;
47
- }
48
-
49
- if (this.listPanel) {
50
- this.listPanel.destroy(fromScene);
51
- this.listPanel = undefined;
52
- }
53
-
54
- super.destroy(fromScene);
55
- }
56
-
57
- setOptions(options) {
58
- if (options === undefined) {
59
- options = [];
60
- }
61
- this.options = options;
62
- return this;
63
- }
64
-
65
- setValueChangeCallback(callback, scope) {
66
- this.valueChangeCallback = callback;
67
- this.valueChangeCallbackScope = scope;
68
- return this;
69
- }
70
-
71
- setValue(value) {
72
- this.value = value;
73
- return this;
74
- }
75
-
76
- get value() {
77
- return this._value;
78
- }
79
-
80
- set value(value) {
81
- if (this._value === value) {
82
- return;
83
- }
84
-
85
- var previousValue = this._value;
86
- this._value = value;
87
-
88
- var callback = this.valueChangeCallback,
89
- scope = this.valueChangeCallbackScope;
90
- if (callback) {
91
- if (scope) {
92
- callback.call(scope, this, value, previousValue);
93
- } else {
94
- callback(this, value, previousValue)
95
- }
96
- }
97
-
98
- this.emit('valuechange', this, value, previousValue);
99
-
100
- }
101
-
102
- emitButtonClick(index) {
103
- var option = this.options[index];
104
- if (!option) {
105
- return this;
106
- }
107
-
108
- this.emit('button.click', this, undefined, option, index);
109
- return this;
110
- }
111
-
112
- }
113
-
114
- Object.assign(
115
- DropDownList.prototype,
116
- Methods,
117
- );
118
-
119
- export default DropDownList;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/gridtable/GridTable.js DELETED
@@ -1,140 +0,0 @@
1
- import Scrollable from '../utils/scrollable/Scrollable.js';
2
- import GetScrollMode from '../utils/GetScrollMode.js';
3
- import GridTableCore from '../../../plugins/gridtable.js';
4
- import InjectProperties from './InjectProperties.js';
5
- import TableOnCellVisible from './TableOnCellVisible.js';
6
- import TableSetInteractive from './input/TableSetInteractive.js';
7
- import NOOP from '../../../plugins/utils/object/NOOP.js';
8
- import SetItems from './SetItems.js';
9
- import ScrollMethods from './ScrollMethods.js';
10
-
11
- const GetValue = Phaser.Utils.Objects.GetValue;
12
-
13
- class GridTable extends Scrollable {
14
- constructor(scene, config) {
15
- if (config === undefined) {
16
- config = {};
17
- }
18
-
19
- // Create grid table core
20
- var scrollMode = GetScrollMode(config);
21
- var tableConfig = GetValue(config, 'table', undefined)
22
- if (tableConfig === undefined) {
23
- tableConfig = {};
24
- }
25
- tableConfig.scrollMode = scrollMode;
26
- tableConfig.clamplTableOXY = GetValue(config, 'clamplChildOY', false);
27
- var tableWidth = GetValue(tableConfig, 'width', undefined);
28
- var tableHeight = GetValue(tableConfig, 'height', undefined);
29
- var table = new GridTableCore(scene, 0, 0, tableWidth, tableHeight, tableConfig);
30
- scene.add.existing(table); // Important: Add to display list for touch detecting
31
- var proportion, expand;
32
- if (scrollMode === 0) {
33
- proportion = (tableWidth === undefined) ? 1 : 0;
34
- expand = (tableHeight === undefined);
35
- } else {
36
- proportion = (tableHeight === undefined) ? 1 : 0;
37
- expand = (tableWidth === undefined);
38
- }
39
- // Inject properties for scrollable interface
40
- InjectProperties(table);
41
- // Set minWidth/minHeight to 0 if tableWidth/tableHeight is undefined
42
- table._minWidth = (tableWidth === undefined) ? 0 : undefined;
43
- table._minHeight = (tableHeight === undefined) ? 0 : undefined;
44
-
45
- // Fill config of scrollable
46
- config.type = 'rexGridTable';
47
- config.child = {
48
- gameObject: table,
49
- proportion: proportion,
50
- expand: expand,
51
- };
52
- var spaceConfig = GetValue(config, 'space', undefined);
53
- if (spaceConfig) {
54
- spaceConfig.child = spaceConfig.table;
55
- }
56
- super(scene, config);
57
-
58
- this.addChildrenMap('table', table);
59
- this.addChildrenMap('tableLayer', table.maskLayer);
60
-
61
- this.eventEmitter = GetValue(config, 'eventEmitter', this);
62
- var callback = GetValue(config, 'createCellContainerCallback', NOOP);
63
- var scope = GetValue(config, 'createCellContainerCallbackScope', undefined);
64
- this.setCreateCellContainerCallback(callback, scope);
65
- TableOnCellVisible.call(this, table);
66
-
67
- this.resizeControllerFlag = false;
68
- var eventName = (scrollMode === 0) ? 'cellheightchange' : 'cellwidthchange';
69
- table.on(eventName, function () {
70
- this.resizeControllerFlag = true;
71
- }, this);
72
-
73
- if (GetValue(tableConfig, 'interactive', true)) {
74
- TableSetInteractive.call(this, table, tableConfig);
75
- }
76
- this.setItems(GetValue(config, 'items', []));
77
-
78
- scene.game.events.on('poststep', this.onPostStep, this);
79
- }
80
-
81
- destroy(fromScene) {
82
- // This Game Object has already been destroyed
83
- if (!this.scene || this.ignoreDestroy) {
84
- return;
85
- }
86
-
87
- this.scene.game.events.off('poststep', this.onPostStep, this);
88
-
89
- super.destroy(fromScene);
90
- }
91
-
92
- setCreateCellContainerCallback(callback, scope) {
93
- this.createCellContainerCallback = callback;
94
- this.createCellContainerCallbackScope = scope;
95
- return this;
96
- }
97
-
98
- refresh() {
99
- this.setItems(this.items);
100
- return this;
101
- }
102
-
103
- getCell(cellIdx) {
104
- var table = this.childrenMap.child;
105
- return table.getCell(cellIdx);
106
- }
107
-
108
- getCellContainer(cellIdx) {
109
- var table = this.childrenMap.child;
110
- return table.getCellContainer(cellIdx);
111
- }
112
-
113
- updateVisibleCell(cellIdx) {
114
- var table = this.childrenMap.child;
115
- return table.updateVisibleCell(cellIdx);
116
- }
117
-
118
- onPostStep() {
119
- if (this.resizeControllerFlag) {
120
- this.resizeController();
121
- this.resizeControllerFlag = false;
122
- }
123
- }
124
-
125
- get startRowIndex() {
126
- var table = this.childrenMap.child;
127
- return table.startRowIndex;
128
- }
129
- }
130
-
131
- var methods = {
132
- setItems: SetItems
133
- }
134
- Object.assign(
135
- GridTable.prototype,
136
- ScrollMethods,
137
- methods,
138
- );
139
-
140
- export default GridTable;
spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/NLP/Chinese.pm DELETED
@@ -1,239 +0,0 @@
1
- ################################################################
2
- # #
3
- # Chinese #
4
- # #
5
- ################################################################
6
-
7
- package NLP::Chinese;
8
-
9
- $utf8 = NLP::UTF8;
10
- %empty_ht = ();
11
-
12
- sub read_chinese_tonal_pinyin_files {
13
- local($caller, *ht, @filenames) = @_;
14
-
15
- $n_kHanyuPinlu = 0;
16
- $n_kXHC1983 = 0;
17
- $n_kHanyuPinyin = 0;
18
- $n_kMandarin = 0;
19
- $n_cedict = 0;
20
- $n_simple_pinyin = 0;
21
-
22
- foreach $filename (@filenames) {
23
- if ($filename =~ /unihan/i) {
24
- my $line_number = 0;
25
- if (open(IN, $filename)) {
26
- while (<IN>) {
27
- $line_number++;
28
- next if /^#/;
29
- s/\s*$//;
30
- if (($u, $type, $value) = split(/\t/, $_)) {
31
- if ($type =~ /^(kHanyuPinlu|kXHC1983|kHanyuPinyin|kMandarin)$/) {
32
- $u = $util->trim($u);
33
- $type = $util->trim($type);
34
- $value = $util->trim($value);
35
- $f = $utf8->unicode_string2string($u);
36
-
37
- if ($type eq "kHanyuPinlu") {
38
- $value =~ s/\(.*?\)//g;
39
- $value = $util->trim($value);
40
- $translit = $caller->number_to_accent_tone($value);
41
- $ht{"kHanyuPinlu"}->{$f} = $translit;
42
- $n_kHanyuPinlu++;
43
- } elsif ($type eq "kXHC1983") {
44
- @translits = ($value =~ /:(\S+)/g);
45
- $translit = join(" ", @translits);
46
- $ht{"kXHC1983"}->{$f} = $translit;
47
- $n_kXHC1983++;
48
- } elsif ($type eq "kHanyuPinyin") {
49
- $value =~ s/^.*://;
50
- $value =~ s/,/ /g;
51
- $ht{"kHanyuPinyin"}->{$f} = $value;
52
- $n_kHanyuPinyin++;
53
- } elsif ($type eq "kMandarin") {
54
- $ht{"kMandarin"}->{$f} = $value;
55
- $n_kMandarin++;
56
- }
57
- }
58
- }
59
- }
60
- close(IN);
61
- print "Read in $n_kHanyuPinlu kHanyuPinlu, $n_kXHC1983 n_kXHC1983, $n_kHanyuPinyin n_kHanyuPinyin $n_kMandarin n_kMandarin\n";
62
- } else {
63
- print STDERR "Can't open $filename\n";
64
- }
65
- } elsif ($filename =~ /cedict/i) {
66
- if (open(IN, $filename)) {
67
- my $line_number = 0;
68
- while (<IN>) {
69
- $line_number++;
70
- next if /^#/;
71
- s/\s*$//;
72
- if (($f, $translit) = ($_ =~ /^\S+\s+(\S+)\s+\[([^\[\]]+)\]/)) {
73
- $translit = $utf8->extended_lower_case($translit);
74
- $translit = $caller->number_to_accent_tone($translit);
75
- $translit =~ s/\s//g;
76
- if ($old_translit = $ht{"cedict"}->{$f}) {
77
- # $ht{CONFLICT}->{("DUPLICATE " . $f)} = "CEDICT($f): $old_translit\nCEDICT($f): $translit (duplicate)\n" unless $translit eq $old_translit;
78
- $ht{"cedicts"}->{$f} = join(" ", $ht{"cedicts"}->{$f}, $translit) unless $old_translit eq $translit;
79
- } else {
80
- $ht{"cedict"}->{$f} = $translit;
81
- $ht{"cedicts"}->{$f} = $translit;
82
- }
83
- $n_cedict++;
84
- }
85
- }
86
- close(IN);
87
- # print "Read in $n_cedict n_cedict\n";
88
- } else {
89
- print STDERR "Can't open $filename";
90
- }
91
- } elsif ($filename =~ /chinese_to_pinyin/i) {
92
- if (open(IN, $filename)) {
93
- my $line_number = 0;
94
- while (<IN>) {
95
- $line_number++;
96
- next if /^#/;
97
- if (($f, $translit) = ($_ =~ /^(\S+)\t(\S+)\s*$/)) {
98
- $ht{"simple_pinyin"}->{$f} = $translit;
99
- $n_simple_pinyin++;
100
- }
101
- }
102
- close(IN);
103
- # print "Read in $n_simple_pinyin n_simple_pinyin\n";
104
- } else {
105
- print STDERR "Can't open $filename";
106
- }
107
- } else {
108
- print STDERR "Don't know what to do with file $filename (in read_chinese_tonal_pinyin_files)\n";
109
- }
110
- }
111
- }
112
-
113
- sub tonal_pinyin {
114
- local($caller, $s, *ht, $gloss) = @_;
115
-
116
- return $result if defined($result = $ht{COMBINED}->{$s});
117
-
118
- $cedict_pinyin = $ht{"cedict"}->{$s} || "";
119
- $cedicts_pinyin = $ht{"cedicts"}->{$s} || "";
120
- $unihan_pinyin = "";
121
- @characters = $utf8->split_into_utf8_characters($s, "return only chars", *empty_ht);
122
- foreach $c (@characters) {
123
- if ($pinyin = $ht{"simple_pinyin"}->{$c}) {
124
- $unihan_pinyin .= $pinyin;
125
- } elsif ($pinyin = $ht{"kHanyuPinlu"}->{$c}) {
126
- $pinyin =~ s/^(\S+)\s.*$/$1/;
127
- $unihan_pinyin .= $pinyin;
128
- } elsif ($pinyin = $ht{"kXHC1983"}->{$c}) {
129
- $pinyin =~ s/^(\S+)\s.*$/$1/;
130
- $unihan_pinyin .= $pinyin;
131
- } elsif ($pinyin = $ht{"kHanyuPinyin"}->{$c}) {
132
- $pinyin =~ s/^(\S+)\s.*$/$1/;
133
- $unihan_pinyin .= $pinyin;
134
- } elsif ($pinyin = $ht{"cedicts"}->{$c}) {
135
- $pinyin =~ s/^(\S+)\s.*$/$1/;
136
- $unihan_pinyin .= $pinyin;
137
- # middle dot, katakana middle dot, multiplication sign
138
- } elsif ($c =~ /^(\xC2\xB7|\xE3\x83\xBB|\xC3\x97)$/) {
139
- $unihan_pinyin .= $c;
140
- # ASCII
141
- } elsif ($c =~ /^([\x21-\x7E])$/) {
142
- $unihan_pinyin .= $c;
143
- } else {
144
- $unihan_pinyin .= "?";
145
- $hex = $utf8->utf8_to_hex($c);
146
- $unicode = uc $utf8->utf8_to_4hex_unicode($c);
147
- # print STDERR "Tonal pinyin: Unknown character $c ($hex/U+$unicode) -> ?\n";
148
- }
149
- }
150
- $pinyin_title = "";
151
- if (($#characters >= 1) && $cedicts_pinyin) {
152
- foreach $pinyin (split(/\s+/, $cedicts_pinyin)) {
153
- $pinyin_title .= "$s $pinyin (CEDICT)\n";
154
- }
155
- $pinyin_title .= "\n";
156
- }
157
- foreach $c (@characters) {
158
- my %local_ht = ();
159
- @pinyins = ();
160
- foreach $type (("kHanyuPinlu", "kXHC1983", "kHanyuPinyin", "cedicts")) {
161
- if ($pinyin_s = $ht{$type}->{$c}) {
162
- foreach $pinyin (split(/\s+/, $pinyin_s)) {
163
- push(@pinyins, $pinyin) unless $util->member($pinyin, @pinyins);
164
- $type2 = ($type eq "cedicts") ? "CEDICT" : $type;
165
- $local_ht{$pinyin} = ($local_ht{$pinyin}) ? join(", ", $local_ht{$pinyin}, $type2) : $type2;
166
- }
167
- }
168
- }
169
- foreach $pinyin (@pinyins) {
170
- $type_s = $local_ht{$pinyin};
171
- $pinyin_title .= "$c $pinyin ($type_s)\n";
172
- }
173
- }
174
- $pinyin_title =~ s/\n$//;
175
- $pinyin_title =~ s/\n/&#xA;/g;
176
- $unihan_pinyin = "" if $unihan_pinyin =~ /^\?+$/;
177
- if (($#characters >= 1) && $cedict_pinyin && $unihan_pinyin && ($unihan_pinyin ne $cedict_pinyin)) {
178
- $log = "Gloss($s): $gloss\nCEdict($s): $cedicts_pinyin\nUnihan($s): $unihan_pinyin\n";
179
- foreach $type (("kHanyuPinlu", "kXHC1983", "kHanyuPinyin")) {
180
- $log_line = "$type($s): ";
181
- foreach $c (@characters) {
182
- $pinyin = $ht{$type}->{$c} || "";
183
- if ($pinyin =~ / /) {
184
- $log_line .= "($pinyin)";
185
- } elsif ($pinyin) {
186
- $log_line .= $pinyin;
187
- } else {
188
- $log_line .= "?";
189
- }
190
- }
191
- $log .= "$log_line\n";
192
- }
193
- $ht{CONFLICT}->{$s} = $log;
194
- }
195
- $result = $unihan_pinyin || $cedict_pinyin;
196
- $result = $cedict_pinyin if ($#characters > 0) && $cedict_pinyin;
197
- $ht{COMBINED}->{$s} = $result;
198
- $ht{PINYIN_TITLE}->{$s} = $pinyin_title;
199
- return $result;
200
- }
201
-
202
- %number_to_accent_tone_ht = (
203
- "a1", "\xC4\x81", "a2", "\xC3\xA1", "a3", "\xC7\x8E", "a4", "\xC3\xA0",
204
- "e1", "\xC4\x93", "e2", "\xC3\xA9", "e3", "\xC4\x9B", "e4", "\xC3\xA8",
205
- "i1", "\xC4\xAB", "i2", "\xC3\xAD", "i3", "\xC7\x90", "i4", "\xC3\xAC",
206
- "o1", "\xC5\x8D", "o2", "\xC3\xB3", "o3", "\xC7\x92", "o4", "\xC3\xB2",
207
- "u1", "\xC5\xAB", "u2", "\xC3\xBA", "u3", "\xC7\x94", "u4", "\xC3\xB9",
208
- "u:1","\xC7\x96", "u:2","\xC7\x98", "u:3","\xC7\x9A", "u:4","\xC7\x9C",
209
- "\xC3\xBC1","\xC7\x96","\xC3\xBC2","\xC7\x98","\xC3\xBC3","\xC7\x9A","\xC3\xBC4","\xC7\x9C"
210
- );
211
-
212
- sub number_to_accent_tone {
213
- local($caller, $s) = @_;
214
-
215
- my $result = "";
216
- while (($pre,$alpha,$tone_number,$rest) = ($s =~ /^(.*?)((?:[a-z]|u:|\xC3\xBC)+)([1-5])(.*)$/i)) {
217
- if ($tone_number eq "5") {
218
- $result .= "$pre$alpha";
219
- } elsif ((($pre_acc,$acc_letter,$post_acc) = ($alpha =~ /^(.*)([ae])(.*)$/))
220
- || (($pre_acc,$acc_letter,$post_acc) = ($alpha =~ /^(.*)(o)(u.*)$/))
221
- || (($pre_acc,$acc_letter,$post_acc) = ($alpha =~ /^(.*)(u:|[iou]|\xC3\xBC)([^aeiou]*)$/))) {
222
- $result .= "$pre$pre_acc" . ($number_to_accent_tone_ht{($acc_letter . $tone_number)} || ($acc_letter . $tone_number)) . $post_acc;
223
- } else {
224
- $result .= "$pre$alpha$tone_number";
225
- }
226
- $s = $rest;
227
- }
228
- $result .= $s;
229
- $result =~ s/u:/\xC3\xBC/g;
230
- return $result;
231
- }
232
-
233
- sub string_contains_utf8_cjk_unified_ideograph_p {
234
- local($caller, $s) = @_;
235
-
236
- return ($s =~ /([\xE4-\xE9]|\xE3[\x90-\xBF]|\xF0[\xA0-\xAC])/);
237
- }
238
-
239
- 1;
spaces/Akmyradov/TurkmenTTSweSTT/uroman/lib/NLP/English.pm DELETED
The diff for this file is too large to render. See raw diff
 
spaces/AlanMars/QYL-AI-Space/modules/__init__.py DELETED
File without changes
spaces/AlexWang/lama/saicinpainting/training/modules/ffc.py DELETED
@@ -1,485 +0,0 @@
1
- # Fast Fourier Convolution NeurIPS 2020
2
- # original implementation https://github.com/pkumivision/FFC/blob/main/model_zoo/ffc.py
3
- # paper https://proceedings.neurips.cc/paper/2020/file/2fd5d41ec6cfab47e32164d5624269b1-Paper.pdf
4
-
5
- import numpy as np
6
- import torch
7
- import torch.nn as nn
8
- import torch.nn.functional as F
9
-
10
- from saicinpainting.training.modules.base import get_activation, BaseDiscriminator
11
- from saicinpainting.training.modules.spatial_transform import LearnableSpatialTransformWrapper
12
- from saicinpainting.training.modules.squeeze_excitation import SELayer
13
- from saicinpainting.utils import get_shape
14
-
15
-
16
- class FFCSE_block(nn.Module):
17
-
18
- def __init__(self, channels, ratio_g):
19
- super(FFCSE_block, self).__init__()
20
- in_cg = int(channels * ratio_g)
21
- in_cl = channels - in_cg
22
- r = 16
23
-
24
- self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
25
- self.conv1 = nn.Conv2d(channels, channels // r,
26
- kernel_size=1, bias=True)
27
- self.relu1 = nn.ReLU(inplace=True)
28
- self.conv_a2l = None if in_cl == 0 else nn.Conv2d(
29
- channels // r, in_cl, kernel_size=1, bias=True)
30
- self.conv_a2g = None if in_cg == 0 else nn.Conv2d(
31
- channels // r, in_cg, kernel_size=1, bias=True)
32
- self.sigmoid = nn.Sigmoid()
33
-
34
- def forward(self, x):
35
- x = x if type(x) is tuple else (x, 0)
36
- id_l, id_g = x
37
-
38
- x = id_l if type(id_g) is int else torch.cat([id_l, id_g], dim=1)
39
- x = self.avgpool(x)
40
- x = self.relu1(self.conv1(x))
41
-
42
- x_l = 0 if self.conv_a2l is None else id_l * \
43
- self.sigmoid(self.conv_a2l(x))
44
- x_g = 0 if self.conv_a2g is None else id_g * \
45
- self.sigmoid(self.conv_a2g(x))
46
- return x_l, x_g
47
-
48
-
49
- class FourierUnit(nn.Module):
50
-
51
- def __init__(self, in_channels, out_channels, groups=1, spatial_scale_factor=None, spatial_scale_mode='bilinear',
52
- spectral_pos_encoding=False, use_se=False, se_kwargs=None, ffc3d=False, fft_norm='ortho'):
53
- # bn_layer not used
54
- super(FourierUnit, self).__init__()
55
- self.groups = groups
56
-
57
- self.conv_layer = torch.nn.Conv2d(in_channels=in_channels * 2 + (2 if spectral_pos_encoding else 0),
58
- out_channels=out_channels * 2,
59
- kernel_size=1, stride=1, padding=0, groups=self.groups, bias=False)
60
- self.bn = torch.nn.BatchNorm2d(out_channels * 2)
61
- self.relu = torch.nn.ReLU(inplace=True)
62
-
63
- # squeeze and excitation block
64
- self.use_se = use_se
65
- if use_se:
66
- if se_kwargs is None:
67
- se_kwargs = {}
68
- self.se = SELayer(self.conv_layer.in_channels, **se_kwargs)
69
-
70
- self.spatial_scale_factor = spatial_scale_factor
71
- self.spatial_scale_mode = spatial_scale_mode
72
- self.spectral_pos_encoding = spectral_pos_encoding
73
- self.ffc3d = ffc3d
74
- self.fft_norm = fft_norm
75
-
76
- def forward(self, x):
77
- batch = x.shape[0]
78
-
79
- if self.spatial_scale_factor is not None:
80
- orig_size = x.shape[-2:]
81
- x = F.interpolate(x, scale_factor=self.spatial_scale_factor, mode=self.spatial_scale_mode, align_corners=False)
82
-
83
- r_size = x.size()
84
- # (batch, c, h, w/2+1, 2)
85
- fft_dim = (-3, -2, -1) if self.ffc3d else (-2, -1)
86
- ffted = torch.fft.rfftn(x, dim=fft_dim, norm=self.fft_norm)
87
- ffted = torch.stack((ffted.real, ffted.imag), dim=-1)
88
- ffted = ffted.permute(0, 1, 4, 2, 3).contiguous() # (batch, c, 2, h, w/2+1)
89
- ffted = ffted.view((batch, -1,) + ffted.size()[3:])
90
-
91
- if self.spectral_pos_encoding:
92
- height, width = ffted.shape[-2:]
93
- coords_vert = torch.linspace(0, 1, height)[None, None, :, None].expand(batch, 1, height, width).to(ffted)
94
- coords_hor = torch.linspace(0, 1, width)[None, None, None, :].expand(batch, 1, height, width).to(ffted)
95
- ffted = torch.cat((coords_vert, coords_hor, ffted), dim=1)
96
-
97
- if self.use_se:
98
- ffted = self.se(ffted)
99
-
100
- ffted = self.conv_layer(ffted) # (batch, c*2, h, w/2+1)
101
- ffted = self.relu(self.bn(ffted))
102
-
103
- ffted = ffted.view((batch, -1, 2,) + ffted.size()[2:]).permute(
104
- 0, 1, 3, 4, 2).contiguous() # (batch,c, t, h, w/2+1, 2)
105
- ffted = torch.complex(ffted[..., 0], ffted[..., 1])
106
-
107
- ifft_shape_slice = x.shape[-3:] if self.ffc3d else x.shape[-2:]
108
- output = torch.fft.irfftn(ffted, s=ifft_shape_slice, dim=fft_dim, norm=self.fft_norm)
109
-
110
- if self.spatial_scale_factor is not None:
111
- output = F.interpolate(output, size=orig_size, mode=self.spatial_scale_mode, align_corners=False)
112
-
113
- return output
114
-
115
-
116
- class SeparableFourierUnit(nn.Module):
117
-
118
- def __init__(self, in_channels, out_channels, groups=1, kernel_size=3):
119
- # bn_layer not used
120
- super(SeparableFourierUnit, self).__init__()
121
- self.groups = groups
122
- row_out_channels = out_channels // 2
123
- col_out_channels = out_channels - row_out_channels
124
- self.row_conv = torch.nn.Conv2d(in_channels=in_channels * 2,
125
- out_channels=row_out_channels * 2,
126
- kernel_size=(kernel_size, 1), # kernel size is always like this, but the data will be transposed
127
- stride=1, padding=(kernel_size // 2, 0),
128
- padding_mode='reflect',
129
- groups=self.groups, bias=False)
130
- self.col_conv = torch.nn.Conv2d(in_channels=in_channels * 2,
131
- out_channels=col_out_channels * 2,
132
- kernel_size=(kernel_size, 1), # kernel size is always like this, but the data will be transposed
133
- stride=1, padding=(kernel_size // 2, 0),
134
- padding_mode='reflect',
135
- groups=self.groups, bias=False)
136
- self.row_bn = torch.nn.BatchNorm2d(row_out_channels * 2)
137
- self.col_bn = torch.nn.BatchNorm2d(col_out_channels * 2)
138
- self.relu = torch.nn.ReLU(inplace=True)
139
-
140
- def process_branch(self, x, conv, bn):
141
- batch = x.shape[0]
142
-
143
- r_size = x.size()
144
- # (batch, c, h, w/2+1, 2)
145
- ffted = torch.fft.rfft(x, norm="ortho")
146
- ffted = torch.stack((ffted.real, ffted.imag), dim=-1)
147
- ffted = ffted.permute(0, 1, 4, 2, 3).contiguous() # (batch, c, 2, h, w/2+1)
148
- ffted = ffted.view((batch, -1,) + ffted.size()[3:])
149
-
150
- ffted = self.relu(bn(conv(ffted)))
151
-
152
- ffted = ffted.view((batch, -1, 2,) + ffted.size()[2:]).permute(
153
- 0, 1, 3, 4, 2).contiguous() # (batch,c, t, h, w/2+1, 2)
154
- ffted = torch.complex(ffted[..., 0], ffted[..., 1])
155
-
156
- output = torch.fft.irfft(ffted, s=x.shape[-1:], norm="ortho")
157
- return output
158
-
159
-
160
- def forward(self, x):
161
- rowwise = self.process_branch(x, self.row_conv, self.row_bn)
162
- colwise = self.process_branch(x.permute(0, 1, 3, 2), self.col_conv, self.col_bn).permute(0, 1, 3, 2)
163
- out = torch.cat((rowwise, colwise), dim=1)
164
- return out
165
-
166
-
167
- class SpectralTransform(nn.Module):
168
-
169
- def __init__(self, in_channels, out_channels, stride=1, groups=1, enable_lfu=True, separable_fu=False, **fu_kwargs):
170
- # bn_layer not used
171
- super(SpectralTransform, self).__init__()
172
- self.enable_lfu = enable_lfu
173
- if stride == 2:
174
- self.downsample = nn.AvgPool2d(kernel_size=(2, 2), stride=2)
175
- else:
176
- self.downsample = nn.Identity()
177
-
178
- self.stride = stride
179
- self.conv1 = nn.Sequential(
180
- nn.Conv2d(in_channels, out_channels //
181
- 2, kernel_size=1, groups=groups, bias=False),
182
- nn.BatchNorm2d(out_channels // 2),
183
- nn.ReLU(inplace=True)
184
- )
185
- fu_class = SeparableFourierUnit if separable_fu else FourierUnit
186
- self.fu = fu_class(
187
- out_channels // 2, out_channels // 2, groups, **fu_kwargs)
188
- if self.enable_lfu:
189
- self.lfu = fu_class(
190
- out_channels // 2, out_channels // 2, groups)
191
- self.conv2 = torch.nn.Conv2d(
192
- out_channels // 2, out_channels, kernel_size=1, groups=groups, bias=False)
193
-
194
- def forward(self, x):
195
-
196
- x = self.downsample(x)
197
- x = self.conv1(x)
198
- output = self.fu(x)
199
-
200
- if self.enable_lfu:
201
- n, c, h, w = x.shape
202
- split_no = 2
203
- split_s = h // split_no
204
- xs = torch.cat(torch.split(
205
- x[:, :c // 4], split_s, dim=-2), dim=1).contiguous()
206
- xs = torch.cat(torch.split(xs, split_s, dim=-1),
207
- dim=1).contiguous()
208
- xs = self.lfu(xs)
209
- xs = xs.repeat(1, 1, split_no, split_no).contiguous()
210
- else:
211
- xs = 0
212
-
213
- output = self.conv2(x + output + xs)
214
-
215
- return output
216
-
217
-
218
- class FFC(nn.Module):
219
-
220
- def __init__(self, in_channels, out_channels, kernel_size,
221
- ratio_gin, ratio_gout, stride=1, padding=0,
222
- dilation=1, groups=1, bias=False, enable_lfu=True,
223
- padding_type='reflect', gated=False, **spectral_kwargs):
224
- super(FFC, self).__init__()
225
-
226
- assert stride == 1 or stride == 2, "Stride should be 1 or 2."
227
- self.stride = stride
228
-
229
- in_cg = int(in_channels * ratio_gin)
230
- in_cl = in_channels - in_cg
231
- out_cg = int(out_channels * ratio_gout)
232
- out_cl = out_channels - out_cg
233
- #groups_g = 1 if groups == 1 else int(groups * ratio_gout)
234
- #groups_l = 1 if groups == 1 else groups - groups_g
235
-
236
- self.ratio_gin = ratio_gin
237
- self.ratio_gout = ratio_gout
238
- self.global_in_num = in_cg
239
-
240
- module = nn.Identity if in_cl == 0 or out_cl == 0 else nn.Conv2d
241
- self.convl2l = module(in_cl, out_cl, kernel_size,
242
- stride, padding, dilation, groups, bias, padding_mode=padding_type)
243
- module = nn.Identity if in_cl == 0 or out_cg == 0 else nn.Conv2d
244
- self.convl2g = module(in_cl, out_cg, kernel_size,
245
- stride, padding, dilation, groups, bias, padding_mode=padding_type)
246
- module = nn.Identity if in_cg == 0 or out_cl == 0 else nn.Conv2d
247
- self.convg2l = module(in_cg, out_cl, kernel_size,
248
- stride, padding, dilation, groups, bias, padding_mode=padding_type)
249
- module = nn.Identity if in_cg == 0 or out_cg == 0 else SpectralTransform
250
- self.convg2g = module(
251
- in_cg, out_cg, stride, 1 if groups == 1 else groups // 2, enable_lfu, **spectral_kwargs)
252
-
253
- self.gated = gated
254
- module = nn.Identity if in_cg == 0 or out_cl == 0 or not self.gated else nn.Conv2d
255
- self.gate = module(in_channels, 2, 1)
256
-
257
- def forward(self, x):
258
- x_l, x_g = x if type(x) is tuple else (x, 0)
259
- out_xl, out_xg = 0, 0
260
-
261
- if self.gated:
262
- total_input_parts = [x_l]
263
- if torch.is_tensor(x_g):
264
- total_input_parts.append(x_g)
265
- total_input = torch.cat(total_input_parts, dim=1)
266
-
267
- gates = torch.sigmoid(self.gate(total_input))
268
- g2l_gate, l2g_gate = gates.chunk(2, dim=1)
269
- else:
270
- g2l_gate, l2g_gate = 1, 1
271
-
272
- if self.ratio_gout != 1:
273
- out_xl = self.convl2l(x_l) + self.convg2l(x_g) * g2l_gate
274
- if self.ratio_gout != 0:
275
- out_xg = self.convl2g(x_l) * l2g_gate + self.convg2g(x_g)
276
-
277
- return out_xl, out_xg
278
-
279
-
280
- class FFC_BN_ACT(nn.Module):
281
-
282
- def __init__(self, in_channels, out_channels,
283
- kernel_size, ratio_gin, ratio_gout,
284
- stride=1, padding=0, dilation=1, groups=1, bias=False,
285
- norm_layer=nn.BatchNorm2d, activation_layer=nn.Identity,
286
- padding_type='reflect',
287
- enable_lfu=True, **kwargs):
288
- super(FFC_BN_ACT, self).__init__()
289
- self.ffc = FFC(in_channels, out_channels, kernel_size,
290
- ratio_gin, ratio_gout, stride, padding, dilation,
291
- groups, bias, enable_lfu, padding_type=padding_type, **kwargs)
292
- lnorm = nn.Identity if ratio_gout == 1 else norm_layer
293
- gnorm = nn.Identity if ratio_gout == 0 else norm_layer
294
- global_channels = int(out_channels * ratio_gout)
295
- self.bn_l = lnorm(out_channels - global_channels)
296
- self.bn_g = gnorm(global_channels)
297
-
298
- lact = nn.Identity if ratio_gout == 1 else activation_layer
299
- gact = nn.Identity if ratio_gout == 0 else activation_layer
300
- self.act_l = lact(inplace=True)
301
- self.act_g = gact(inplace=True)
302
-
303
- def forward(self, x):
304
- x_l, x_g = self.ffc(x)
305
- x_l = self.act_l(self.bn_l(x_l))
306
- x_g = self.act_g(self.bn_g(x_g))
307
- return x_l, x_g
308
-
309
-
310
- class FFCResnetBlock(nn.Module):
311
- def __init__(self, dim, padding_type, norm_layer, activation_layer=nn.ReLU, dilation=1,
312
- spatial_transform_kwargs=None, inline=False, **conv_kwargs):
313
- super().__init__()
314
- self.conv1 = FFC_BN_ACT(dim, dim, kernel_size=3, padding=dilation, dilation=dilation,
315
- norm_layer=norm_layer,
316
- activation_layer=activation_layer,
317
- padding_type=padding_type,
318
- **conv_kwargs)
319
- self.conv2 = FFC_BN_ACT(dim, dim, kernel_size=3, padding=dilation, dilation=dilation,
320
- norm_layer=norm_layer,
321
- activation_layer=activation_layer,
322
- padding_type=padding_type,
323
- **conv_kwargs)
324
- if spatial_transform_kwargs is not None:
325
- self.conv1 = LearnableSpatialTransformWrapper(self.conv1, **spatial_transform_kwargs)
326
- self.conv2 = LearnableSpatialTransformWrapper(self.conv2, **spatial_transform_kwargs)
327
- self.inline = inline
328
-
329
- def forward(self, x):
330
- if self.inline:
331
- x_l, x_g = x[:, :-self.conv1.ffc.global_in_num], x[:, -self.conv1.ffc.global_in_num:]
332
- else:
333
- x_l, x_g = x if type(x) is tuple else (x, 0)
334
-
335
- id_l, id_g = x_l, x_g
336
-
337
- x_l, x_g = self.conv1((x_l, x_g))
338
- x_l, x_g = self.conv2((x_l, x_g))
339
-
340
- x_l, x_g = id_l + x_l, id_g + x_g
341
- out = x_l, x_g
342
- if self.inline:
343
- out = torch.cat(out, dim=1)
344
- return out
345
-
346
-
347
- class ConcatTupleLayer(nn.Module):
348
- def forward(self, x):
349
- assert isinstance(x, tuple)
350
- x_l, x_g = x
351
- assert torch.is_tensor(x_l) or torch.is_tensor(x_g)
352
- if not torch.is_tensor(x_g):
353
- return x_l
354
- return torch.cat(x, dim=1)
355
-
356
-
357
- class FFCResNetGenerator(nn.Module):
358
- def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d,
359
- padding_type='reflect', activation_layer=nn.ReLU,
360
- up_norm_layer=nn.BatchNorm2d, up_activation=nn.ReLU(True),
361
- init_conv_kwargs={}, downsample_conv_kwargs={}, resnet_conv_kwargs={},
362
- spatial_transform_layers=None, spatial_transform_kwargs={},
363
- add_out_act=True, max_features=1024, out_ffc=False, out_ffc_kwargs={}):
364
- assert (n_blocks >= 0)
365
- super().__init__()
366
-
367
- model = [nn.ReflectionPad2d(3),
368
- FFC_BN_ACT(input_nc, ngf, kernel_size=7, padding=0, norm_layer=norm_layer,
369
- activation_layer=activation_layer, **init_conv_kwargs)]
370
-
371
- ### downsample
372
- for i in range(n_downsampling):
373
- mult = 2 ** i
374
- if i == n_downsampling - 1:
375
- cur_conv_kwargs = dict(downsample_conv_kwargs)
376
- cur_conv_kwargs['ratio_gout'] = resnet_conv_kwargs.get('ratio_gin', 0)
377
- else:
378
- cur_conv_kwargs = downsample_conv_kwargs
379
- model += [FFC_BN_ACT(min(max_features, ngf * mult),
380
- min(max_features, ngf * mult * 2),
381
- kernel_size=3, stride=2, padding=1,
382
- norm_layer=norm_layer,
383
- activation_layer=activation_layer,
384
- **cur_conv_kwargs)]
385
-
386
- mult = 2 ** n_downsampling
387
- feats_num_bottleneck = min(max_features, ngf * mult)
388
-
389
- ### resnet blocks
390
- for i in range(n_blocks):
391
- cur_resblock = FFCResnetBlock(feats_num_bottleneck, padding_type=padding_type, activation_layer=activation_layer,
392
- norm_layer=norm_layer, **resnet_conv_kwargs)
393
- if spatial_transform_layers is not None and i in spatial_transform_layers:
394
- cur_resblock = LearnableSpatialTransformWrapper(cur_resblock, **spatial_transform_kwargs)
395
- model += [cur_resblock]
396
-
397
- model += [ConcatTupleLayer()]
398
-
399
- ### upsample
400
- for i in range(n_downsampling):
401
- mult = 2 ** (n_downsampling - i)
402
- model += [nn.ConvTranspose2d(min(max_features, ngf * mult),
403
- min(max_features, int(ngf * mult / 2)),
404
- kernel_size=3, stride=2, padding=1, output_padding=1),
405
- up_norm_layer(min(max_features, int(ngf * mult / 2))),
406
- up_activation]
407
-
408
- if out_ffc:
409
- model += [FFCResnetBlock(ngf, padding_type=padding_type, activation_layer=activation_layer,
410
- norm_layer=norm_layer, inline=True, **out_ffc_kwargs)]
411
-
412
- model += [nn.ReflectionPad2d(3),
413
- nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
414
- if add_out_act:
415
- model.append(get_activation('tanh' if add_out_act is True else add_out_act))
416
- self.model = nn.Sequential(*model)
417
-
418
- def forward(self, input):
419
- return self.model(input)
420
-
421
-
422
- class FFCNLayerDiscriminator(BaseDiscriminator):
423
- def __init__(self, input_nc, ndf=64, n_layers=3, norm_layer=nn.BatchNorm2d, max_features=512,
424
- init_conv_kwargs={}, conv_kwargs={}):
425
- super().__init__()
426
- self.n_layers = n_layers
427
-
428
- def _act_ctor(inplace=True):
429
- return nn.LeakyReLU(negative_slope=0.2, inplace=inplace)
430
-
431
- kw = 3
432
- padw = int(np.ceil((kw-1.0)/2))
433
- sequence = [[FFC_BN_ACT(input_nc, ndf, kernel_size=kw, padding=padw, norm_layer=norm_layer,
434
- activation_layer=_act_ctor, **init_conv_kwargs)]]
435
-
436
- nf = ndf
437
- for n in range(1, n_layers):
438
- nf_prev = nf
439
- nf = min(nf * 2, max_features)
440
-
441
- cur_model = [
442
- FFC_BN_ACT(nf_prev, nf,
443
- kernel_size=kw, stride=2, padding=padw,
444
- norm_layer=norm_layer,
445
- activation_layer=_act_ctor,
446
- **conv_kwargs)
447
- ]
448
- sequence.append(cur_model)
449
-
450
- nf_prev = nf
451
- nf = min(nf * 2, 512)
452
-
453
- cur_model = [
454
- FFC_BN_ACT(nf_prev, nf,
455
- kernel_size=kw, stride=1, padding=padw,
456
- norm_layer=norm_layer,
457
- activation_layer=lambda *args, **kwargs: nn.LeakyReLU(*args, negative_slope=0.2, **kwargs),
458
- **conv_kwargs),
459
- ConcatTupleLayer()
460
- ]
461
- sequence.append(cur_model)
462
-
463
- sequence += [[nn.Conv2d(nf, 1, kernel_size=kw, stride=1, padding=padw)]]
464
-
465
- for n in range(len(sequence)):
466
- setattr(self, 'model'+str(n), nn.Sequential(*sequence[n]))
467
-
468
- def get_all_activations(self, x):
469
- res = [x]
470
- for n in range(self.n_layers + 2):
471
- model = getattr(self, 'model' + str(n))
472
- res.append(model(res[-1]))
473
- return res[1:]
474
-
475
- def forward(self, x):
476
- act = self.get_all_activations(x)
477
- feats = []
478
- for out in act[:-1]:
479
- if isinstance(out, tuple):
480
- if torch.is_tensor(out[1]):
481
- out = torch.cat(out, dim=1)
482
- else:
483
- out = out[0]
484
- feats.append(out)
485
- return act[-1], feats
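
For orientation, a minimal usage sketch of the generator defined above. It assumes the module remains importable as saicinpainting.training.modules.ffc, and the ratio/LFU settings are assumptions modeled on typical LaMa inpainting configs, not values taken from this diff:

import torch
from saicinpainting.training.modules.ffc import FFCResNetGenerator

# Hypothetical configuration: ratio_gin/ratio_gout and enable_lfu are assumptions
# based on common LaMa configs; adjust to your own training setup.
conv_kwargs = dict(ratio_gin=0.0, ratio_gout=0.0, enable_lfu=False)
resnet_kwargs = dict(ratio_gin=0.75, ratio_gout=0.75, enable_lfu=False)

generator = FFCResNetGenerator(
    input_nc=4,               # e.g. masked RGB image plus a binary mask channel
    output_nc=3,
    init_conv_kwargs=conv_kwargs,
    downsample_conv_kwargs=conv_kwargs,
    resnet_conv_kwargs=resnet_kwargs,
)

x = torch.randn(1, 4, 256, 256)   # toy input; real inputs are image+mask tensors
with torch.no_grad():
    y = generator(x)              # expected shape: (1, 3, 256, 256)
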
 
 
spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/longcode/jpgd.cpp DELETED
@@ -1,3276 +0,0 @@
1
- // jpgd.cpp - C++ class for JPEG decompression.
2
- // Public domain, Rich Geldreich <[email protected]>
3
- // Last updated Apr. 16, 2011
4
- // Alex Evans: Linear memory allocator (taken from jpge.h).
5
- //
6
- // Supports progressive and baseline sequential JPEG image files, and the most common chroma subsampling factors: Y, H1V1, H2V1, H1V2, and H2V2.
7
- //
8
- // Chroma upsampling quality: H2V2 is upsampled in the frequency domain, H2V1 and H1V2 are upsampled using point sampling.
9
- // Chroma upsampling reference: "Fast Scheme for Image Size Change in the Compressed Domain"
10
- // http://vision.ai.uiuc.edu/~dugad/research/dct/index.html
11
-
12
- #include "jpgd.h"
13
- #include <string.h>
14
-
15
- #include <assert.h>
16
- // BEGIN EPIC MOD
17
- #define JPGD_ASSERT(x) { assert(x); CA_ASSUME(x); } (void)0
18
- // END EPIC MOD
19
-
20
- #ifdef _MSC_VER
21
- #pragma warning (disable : 4611) // warning C4611: interaction between '_setjmp' and C++ object destruction is non-portable
22
- #endif
23
-
24
- // Set to 1 to enable freq. domain chroma upsampling on images using H2V2 subsampling (0=faster nearest neighbor sampling).
25
- // This is slower, but results in higher quality on images with highly saturated colors.
26
- #define JPGD_SUPPORT_FREQ_DOMAIN_UPSAMPLING 1
27
-
28
- #define JPGD_TRUE (1)
29
- #define JPGD_FALSE (0)
30
-
31
- #define JPGD_MAX(a,b) (((a)>(b)) ? (a) : (b))
32
- #define JPGD_MIN(a,b) (((a)<(b)) ? (a) : (b))
33
-
34
- namespace jpgd {
35
-
36
- static inline void *jpgd_malloc(size_t nSize) { return FMemory::Malloc(nSize); }
37
- static inline void jpgd_free(void *p) { FMemory::Free(p); }
38
-
39
- // BEGIN EPIC MOD
40
- //@UE3 - use UE3 BGRA encoding instead of assuming RGBA
41
- // stolen from IImageWrapper.h
42
- enum ERGBFormatJPG
43
- {
44
- Invalid = -1,
45
- RGBA = 0,
46
- BGRA = 1,
47
- Gray = 2,
48
- };
49
- static ERGBFormatJPG jpg_format;
50
- // END EPIC MOD
51
-
52
- // DCT coefficients are stored in this sequence.
53
- static int g_ZAG[64] = { 0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63 };
54
-
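As an aside, g_ZAG maps the position of a coefficient in zig-zag bitstream order to its position in the natural row-major 8x8 block. A small illustrative Python sketch of that de-zigzag step (a hypothetical helper, not part of jpgd):

G_ZAG = [0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,
         27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,
         44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63]

def dezigzag(coeffs):
    # Rebuild an 8x8 block in natural (row-major) order from 64 coefficients
    # stored in zig-zag scan order, using the same mapping as g_ZAG above.
    block = [0] * 64
    for zig_index, natural_index in enumerate(G_ZAG):
        block[natural_index] = coeffs[zig_index]
    return block
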
55
- enum JPEG_MARKER
56
- {
57
- M_SOF0 = 0xC0, M_SOF1 = 0xC1, M_SOF2 = 0xC2, M_SOF3 = 0xC3, M_SOF5 = 0xC5, M_SOF6 = 0xC6, M_SOF7 = 0xC7, M_JPG = 0xC8,
58
- M_SOF9 = 0xC9, M_SOF10 = 0xCA, M_SOF11 = 0xCB, M_SOF13 = 0xCD, M_SOF14 = 0xCE, M_SOF15 = 0xCF, M_DHT = 0xC4, M_DAC = 0xCC,
59
- M_RST0 = 0xD0, M_RST1 = 0xD1, M_RST2 = 0xD2, M_RST3 = 0xD3, M_RST4 = 0xD4, M_RST5 = 0xD5, M_RST6 = 0xD6, M_RST7 = 0xD7,
60
- M_SOI = 0xD8, M_EOI = 0xD9, M_SOS = 0xDA, M_DQT = 0xDB, M_DNL = 0xDC, M_DRI = 0xDD, M_DHP = 0xDE, M_EXP = 0xDF,
61
- M_APP0 = 0xE0, M_APP15 = 0xEF, M_JPG0 = 0xF0, M_JPG13 = 0xFD, M_COM = 0xFE, M_TEM = 0x01, M_ERROR = 0x100, RST0 = 0xD0
62
- };
63
-
64
- enum JPEG_SUBSAMPLING { JPGD_GRAYSCALE = 0, JPGD_YH1V1, JPGD_YH2V1, JPGD_YH1V2, JPGD_YH2V2 };
65
-
66
- #define CONST_BITS 13
67
- #define PASS1_BITS 2
68
- #define SCALEDONE ((int32)1)
69
-
70
- #define FIX_0_298631336 ((int32)2446) /* FIX(0.298631336) */
71
- #define FIX_0_390180644 ((int32)3196) /* FIX(0.390180644) */
72
- #define FIX_0_541196100 ((int32)4433) /* FIX(0.541196100) */
73
- #define FIX_0_765366865 ((int32)6270) /* FIX(0.765366865) */
74
- #define FIX_0_899976223 ((int32)7373) /* FIX(0.899976223) */
75
- #define FIX_1_175875602 ((int32)9633) /* FIX(1.175875602) */
76
- #define FIX_1_501321110 ((int32)12299) /* FIX(1.501321110) */
77
- #define FIX_1_847759065 ((int32)15137) /* FIX(1.847759065) */
78
- #define FIX_1_961570560 ((int32)16069) /* FIX(1.961570560) */
79
- #define FIX_2_053119869 ((int32)16819) /* FIX(2.053119869) */
80
- #define FIX_2_562915447 ((int32)20995) /* FIX(2.562915447) */
81
- #define FIX_3_072711026 ((int32)25172) /* FIX(3.072711026) */
82
-
83
- #define DESCALE(x,n) (((x) + (SCALEDONE << ((n)-1))) >> (n))
84
- #define DESCALE_ZEROSHIFT(x,n) (((x) + (128 << (n)) + (SCALEDONE << ((n)-1))) >> (n))
85
-
86
- #define MULTIPLY(var, cnst) ((var) * (cnst))
87
-
88
- #define CLAMP(i) ((static_cast<uint>(i) > 255) ? (((~i) >> 31) & 0xFF) : (i))
89
-
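These macros are the fixed-point plumbing of the integer IDCT: MULTIPLY works on constants pre-scaled by 2**CONST_BITS, DESCALE performs a rounding right shift back down, and CLAMP saturates to the 0..255 pixel range. A rough Python sketch of the same arithmetic, for illustration only:

def descale(x, n):
    # Rounding right shift, equivalent to DESCALE(x, n) above.
    return (x + (1 << (n - 1))) >> n

def clamp8(i):
    # Saturate a signed value to the 0..255 pixel range (CLAMP above).
    return 0 if i < 0 else 255 if i > 255 else i

# Example: FIX_0_541196100 is 0.541196... scaled by 2**13, so
# descale(4433 * 100, 13) recovers roughly 0.5412 * 100, i.e. 54.
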
90
- // Compiler creates a fast path 1D IDCT for X non-zero columns
91
- template <int NONZERO_COLS>
92
- struct Row
93
- {
94
- static void idct(int* pTemp, const jpgd_block_t* pSrc)
95
- {
96
- // ACCESS_COL() will be optimized at compile time to either an array access, or 0.
97
- #define ACCESS_COL(x) (((x) < NONZERO_COLS) ? (int)pSrc[x] : 0)
98
-
99
- const int z2 = ACCESS_COL(2), z3 = ACCESS_COL(6);
100
-
101
- const int z1 = MULTIPLY(z2 + z3, FIX_0_541196100);
102
- const int tmp2 = z1 + MULTIPLY(z3, - FIX_1_847759065);
103
- const int tmp3 = z1 + MULTIPLY(z2, FIX_0_765366865);
104
-
105
- const int tmp0 = (ACCESS_COL(0) + ACCESS_COL(4)) << CONST_BITS;
106
- const int tmp1 = (ACCESS_COL(0) - ACCESS_COL(4)) << CONST_BITS;
107
-
108
- const int tmp10 = tmp0 + tmp3, tmp13 = tmp0 - tmp3, tmp11 = tmp1 + tmp2, tmp12 = tmp1 - tmp2;
109
-
110
- const int atmp0 = ACCESS_COL(7), atmp1 = ACCESS_COL(5), atmp2 = ACCESS_COL(3), atmp3 = ACCESS_COL(1);
111
-
112
- const int bz1 = atmp0 + atmp3, bz2 = atmp1 + atmp2, bz3 = atmp0 + atmp2, bz4 = atmp1 + atmp3;
113
- const int bz5 = MULTIPLY(bz3 + bz4, FIX_1_175875602);
114
-
115
- const int az1 = MULTIPLY(bz1, - FIX_0_899976223);
116
- const int az2 = MULTIPLY(bz2, - FIX_2_562915447);
117
- const int az3 = MULTIPLY(bz3, - FIX_1_961570560) + bz5;
118
- const int az4 = MULTIPLY(bz4, - FIX_0_390180644) + bz5;
119
-
120
- const int btmp0 = MULTIPLY(atmp0, FIX_0_298631336) + az1 + az3;
121
- const int btmp1 = MULTIPLY(atmp1, FIX_2_053119869) + az2 + az4;
122
- const int btmp2 = MULTIPLY(atmp2, FIX_3_072711026) + az2 + az3;
123
- const int btmp3 = MULTIPLY(atmp3, FIX_1_501321110) + az1 + az4;
124
-
125
- pTemp[0] = DESCALE(tmp10 + btmp3, CONST_BITS-PASS1_BITS);
126
- pTemp[7] = DESCALE(tmp10 - btmp3, CONST_BITS-PASS1_BITS);
127
- pTemp[1] = DESCALE(tmp11 + btmp2, CONST_BITS-PASS1_BITS);
128
- pTemp[6] = DESCALE(tmp11 - btmp2, CONST_BITS-PASS1_BITS);
129
- pTemp[2] = DESCALE(tmp12 + btmp1, CONST_BITS-PASS1_BITS);
130
- pTemp[5] = DESCALE(tmp12 - btmp1, CONST_BITS-PASS1_BITS);
131
- pTemp[3] = DESCALE(tmp13 + btmp0, CONST_BITS-PASS1_BITS);
132
- pTemp[4] = DESCALE(tmp13 - btmp0, CONST_BITS-PASS1_BITS);
133
- }
134
- };
135
-
136
- template <>
137
- struct Row<0>
138
- {
139
- static void idct(int* pTemp, const jpgd_block_t* pSrc)
140
- {
141
- #ifdef _MSC_VER
142
- pTemp; pSrc;
143
- #endif
144
- }
145
- };
146
-
147
- template <>
148
- struct Row<1>
149
- {
150
- static void idct(int* pTemp, const jpgd_block_t* pSrc)
151
- {
152
- const int dcval = (pSrc[0] << PASS1_BITS);
153
-
154
- pTemp[0] = dcval;
155
- pTemp[1] = dcval;
156
- pTemp[2] = dcval;
157
- pTemp[3] = dcval;
158
- pTemp[4] = dcval;
159
- pTemp[5] = dcval;
160
- pTemp[6] = dcval;
161
- pTemp[7] = dcval;
162
- }
163
- };
164
-
165
- // Compiler creates a fast path 1D IDCT for X non-zero rows
166
- template <int NONZERO_ROWS>
167
- struct Col
168
- {
169
- static void idct(uint8* pDst_ptr, const int* pTemp)
170
- {
171
- // ACCESS_ROW() will be optimized at compile time to either an array access, or 0.
172
- #define ACCESS_ROW(x) (((x) < NONZERO_ROWS) ? pTemp[x * 8] : 0)
173
-
174
- const int z2 = ACCESS_ROW(2);
175
- const int z3 = ACCESS_ROW(6);
176
-
177
- const int z1 = MULTIPLY(z2 + z3, FIX_0_541196100);
178
- const int tmp2 = z1 + MULTIPLY(z3, - FIX_1_847759065);
179
- const int tmp3 = z1 + MULTIPLY(z2, FIX_0_765366865);
180
-
181
- const int tmp0 = (ACCESS_ROW(0) + ACCESS_ROW(4)) << CONST_BITS;
182
- const int tmp1 = (ACCESS_ROW(0) - ACCESS_ROW(4)) << CONST_BITS;
183
-
184
- const int tmp10 = tmp0 + tmp3, tmp13 = tmp0 - tmp3, tmp11 = tmp1 + tmp2, tmp12 = tmp1 - tmp2;
185
-
186
- const int atmp0 = ACCESS_ROW(7), atmp1 = ACCESS_ROW(5), atmp2 = ACCESS_ROW(3), atmp3 = ACCESS_ROW(1);
187
-
188
- const int bz1 = atmp0 + atmp3, bz2 = atmp1 + atmp2, bz3 = atmp0 + atmp2, bz4 = atmp1 + atmp3;
189
- const int bz5 = MULTIPLY(bz3 + bz4, FIX_1_175875602);
190
-
191
- const int az1 = MULTIPLY(bz1, - FIX_0_899976223);
192
- const int az2 = MULTIPLY(bz2, - FIX_2_562915447);
193
- const int az3 = MULTIPLY(bz3, - FIX_1_961570560) + bz5;
194
- const int az4 = MULTIPLY(bz4, - FIX_0_390180644) + bz5;
195
-
196
- const int btmp0 = MULTIPLY(atmp0, FIX_0_298631336) + az1 + az3;
197
- const int btmp1 = MULTIPLY(atmp1, FIX_2_053119869) + az2 + az4;
198
- const int btmp2 = MULTIPLY(atmp2, FIX_3_072711026) + az2 + az3;
199
- const int btmp3 = MULTIPLY(atmp3, FIX_1_501321110) + az1 + az4;
200
-
201
- int i = DESCALE_ZEROSHIFT(tmp10 + btmp3, CONST_BITS+PASS1_BITS+3);
202
- pDst_ptr[8*0] = (uint8)CLAMP(i);
203
-
204
- i = DESCALE_ZEROSHIFT(tmp10 - btmp3, CONST_BITS+PASS1_BITS+3);
205
- pDst_ptr[8*7] = (uint8)CLAMP(i);
206
-
207
- i = DESCALE_ZEROSHIFT(tmp11 + btmp2, CONST_BITS+PASS1_BITS+3);
208
- pDst_ptr[8*1] = (uint8)CLAMP(i);
209
-
210
- i = DESCALE_ZEROSHIFT(tmp11 - btmp2, CONST_BITS+PASS1_BITS+3);
211
- pDst_ptr[8*6] = (uint8)CLAMP(i);
212
-
213
- i = DESCALE_ZEROSHIFT(tmp12 + btmp1, CONST_BITS+PASS1_BITS+3);
214
- pDst_ptr[8*2] = (uint8)CLAMP(i);
215
-
216
- i = DESCALE_ZEROSHIFT(tmp12 - btmp1, CONST_BITS+PASS1_BITS+3);
217
- pDst_ptr[8*5] = (uint8)CLAMP(i);
218
-
219
- i = DESCALE_ZEROSHIFT(tmp13 + btmp0, CONST_BITS+PASS1_BITS+3);
220
- pDst_ptr[8*3] = (uint8)CLAMP(i);
221
-
222
- i = DESCALE_ZEROSHIFT(tmp13 - btmp0, CONST_BITS+PASS1_BITS+3);
223
- pDst_ptr[8*4] = (uint8)CLAMP(i);
224
- }
225
- };
226
-
227
- template <>
228
- struct Col<1>
229
- {
230
- static void idct(uint8* pDst_ptr, const int* pTemp)
231
- {
232
- int dcval = DESCALE_ZEROSHIFT(pTemp[0], PASS1_BITS+3);
233
- const uint8 dcval_clamped = (uint8)CLAMP(dcval);
234
- pDst_ptr[0*8] = dcval_clamped;
235
- pDst_ptr[1*8] = dcval_clamped;
236
- pDst_ptr[2*8] = dcval_clamped;
237
- pDst_ptr[3*8] = dcval_clamped;
238
- pDst_ptr[4*8] = dcval_clamped;
239
- pDst_ptr[5*8] = dcval_clamped;
240
- pDst_ptr[6*8] = dcval_clamped;
241
- pDst_ptr[7*8] = dcval_clamped;
242
- }
243
- };
244
-
245
- static const uint8 s_idct_row_table[] =
246
- {
247
- 1,0,0,0,0,0,0,0, 2,0,0,0,0,0,0,0, 2,1,0,0,0,0,0,0, 2,1,1,0,0,0,0,0, 2,2,1,0,0,0,0,0, 3,2,1,0,0,0,0,0, 4,2,1,0,0,0,0,0, 4,3,1,0,0,0,0,0,
248
- 4,3,2,0,0,0,0,0, 4,3,2,1,0,0,0,0, 4,3,2,1,1,0,0,0, 4,3,2,2,1,0,0,0, 4,3,3,2,1,0,0,0, 4,4,3,2,1,0,0,0, 5,4,3,2,1,0,0,0, 6,4,3,2,1,0,0,0,
249
- 6,5,3,2,1,0,0,0, 6,5,4,2,1,0,0,0, 6,5,4,3,1,0,0,0, 6,5,4,3,2,0,0,0, 6,5,4,3,2,1,0,0, 6,5,4,3,2,1,1,0, 6,5,4,3,2,2,1,0, 6,5,4,3,3,2,1,0,
250
- 6,5,4,4,3,2,1,0, 6,5,5,4,3,2,1,0, 6,6,5,4,3,2,1,0, 7,6,5,4,3,2,1,0, 8,6,5,4,3,2,1,0, 8,7,5,4,3,2,1,0, 8,7,6,4,3,2,1,0, 8,7,6,5,3,2,1,0,
251
- 8,7,6,5,4,2,1,0, 8,7,6,5,4,3,1,0, 8,7,6,5,4,3,2,0, 8,7,6,5,4,3,2,1, 8,7,6,5,4,3,2,2, 8,7,6,5,4,3,3,2, 8,7,6,5,4,4,3,2, 8,7,6,5,5,4,3,2,
252
- 8,7,6,6,5,4,3,2, 8,7,7,6,5,4,3,2, 8,8,7,6,5,4,3,2, 8,8,8,6,5,4,3,2, 8,8,8,7,5,4,3,2, 8,8,8,7,6,4,3,2, 8,8,8,7,6,5,3,2, 8,8,8,7,6,5,4,2,
253
- 8,8,8,7,6,5,4,3, 8,8,8,7,6,5,4,4, 8,8,8,7,6,5,5,4, 8,8,8,7,6,6,5,4, 8,8,8,7,7,6,5,4, 8,8,8,8,7,6,5,4, 8,8,8,8,8,6,5,4, 8,8,8,8,8,7,5,4,
254
- 8,8,8,8,8,7,6,4, 8,8,8,8,8,7,6,5, 8,8,8,8,8,7,6,6, 8,8,8,8,8,7,7,6, 8,8,8,8,8,8,7,6, 8,8,8,8,8,8,8,6, 8,8,8,8,8,8,8,7, 8,8,8,8,8,8,8,8,
255
- };
256
-
257
- static const uint8 s_idct_col_table[] = { 1, 1, 2, 3, 3, 3, 3, 3, 3, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8 };
258
-
259
- void idct(const jpgd_block_t* pSrc_ptr, uint8* pDst_ptr, int block_max_zag)
260
- {
261
- JPGD_ASSERT(block_max_zag >= 1);
262
- JPGD_ASSERT(block_max_zag <= 64);
263
-
264
- if (block_max_zag == 1)
265
- {
266
- int k = ((pSrc_ptr[0] + 4) >> 3) + 128;
267
- k = CLAMP(k);
268
- k = k | (k<<8);
269
- k = k | (k<<16);
270
-
271
- for (int i = 8; i > 0; i--)
272
- {
273
- *(int*)&pDst_ptr[0] = k;
274
- *(int*)&pDst_ptr[4] = k;
275
- pDst_ptr += 8;
276
- }
277
- return;
278
- }
279
-
280
- int temp[64];
281
-
282
- const jpgd_block_t* pSrc = pSrc_ptr;
283
- int* pTemp = temp;
284
-
285
- const uint8* pRow_tab = &s_idct_row_table[(block_max_zag - 1) * 8];
286
- int i;
287
- for (i = 8; i > 0; i--, pRow_tab++)
288
- {
289
- switch (*pRow_tab)
290
- {
291
- case 0: Row<0>::idct(pTemp, pSrc); break;
292
- case 1: Row<1>::idct(pTemp, pSrc); break;
293
- case 2: Row<2>::idct(pTemp, pSrc); break;
294
- case 3: Row<3>::idct(pTemp, pSrc); break;
295
- case 4: Row<4>::idct(pTemp, pSrc); break;
296
- case 5: Row<5>::idct(pTemp, pSrc); break;
297
- case 6: Row<6>::idct(pTemp, pSrc); break;
298
- case 7: Row<7>::idct(pTemp, pSrc); break;
299
- case 8: Row<8>::idct(pTemp, pSrc); break;
300
- }
301
-
302
- pSrc += 8;
303
- pTemp += 8;
304
- }
305
-
306
- pTemp = temp;
307
-
308
- const int nonzero_rows = s_idct_col_table[block_max_zag - 1];
309
- for (i = 8; i > 0; i--)
310
- {
311
- switch (nonzero_rows)
312
- {
313
- case 1: Col<1>::idct(pDst_ptr, pTemp); break;
314
- case 2: Col<2>::idct(pDst_ptr, pTemp); break;
315
- case 3: Col<3>::idct(pDst_ptr, pTemp); break;
316
- case 4: Col<4>::idct(pDst_ptr, pTemp); break;
317
- case 5: Col<5>::idct(pDst_ptr, pTemp); break;
318
- case 6: Col<6>::idct(pDst_ptr, pTemp); break;
319
- case 7: Col<7>::idct(pDst_ptr, pTemp); break;
320
- case 8: Col<8>::idct(pDst_ptr, pTemp); break;
321
- }
322
-
323
- pTemp++;
324
- pDst_ptr++;
325
- }
326
- }
327
-
328
- void idct_4x4(const jpgd_block_t* pSrc_ptr, uint8* pDst_ptr)
329
- {
330
- int temp[64];
331
- int* pTemp = temp;
332
- const jpgd_block_t* pSrc = pSrc_ptr;
333
-
334
- for (int i = 4; i > 0; i--)
335
- {
336
- Row<4>::idct(pTemp, pSrc);
337
- pSrc += 8;
338
- pTemp += 8;
339
- }
340
-
341
- pTemp = temp;
342
- for (int i = 8; i > 0; i--)
343
- {
344
- Col<4>::idct(pDst_ptr, pTemp);
345
- pTemp++;
346
- pDst_ptr++;
347
- }
348
- }
349
-
350
- // Retrieve one character from the input stream.
351
- inline uint jpeg_decoder::get_char()
352
- {
353
- // Any bytes remaining in buffer?
354
- if (!m_in_buf_left)
355
- {
356
- // Try to get more bytes.
357
- prep_in_buffer();
358
- // Still nothing to get?
359
- if (!m_in_buf_left)
360
- {
361
- // Pad the end of the stream with 0xFF 0xD9 (EOI marker)
362
- int t = m_tem_flag;
363
- m_tem_flag ^= 1;
364
- if (t)
365
- return 0xD9;
366
- else
367
- return 0xFF;
368
- }
369
- }
370
-
371
- uint c = *m_pIn_buf_ofs++;
372
- m_in_buf_left--;
373
-
374
- return c;
375
- }
376
-
377
- // Same as the previous method, except it can also indicate whether the returned character is a pad character.
378
- inline uint jpeg_decoder::get_char(bool *pPadding_flag)
379
- {
380
- if (!m_in_buf_left)
381
- {
382
- prep_in_buffer();
383
- if (!m_in_buf_left)
384
- {
385
- *pPadding_flag = true;
386
- int t = m_tem_flag;
387
- m_tem_flag ^= 1;
388
- if (t)
389
- return 0xD9;
390
- else
391
- return 0xFF;
392
- }
393
- }
394
-
395
- *pPadding_flag = false;
396
-
397
- uint c = *m_pIn_buf_ofs++;
398
- m_in_buf_left--;
399
-
400
- return c;
401
- }
402
-
403
- // Inserts a previously retrieved character back into the input buffer.
404
- inline void jpeg_decoder::stuff_char(uint8 q)
405
- {
406
- *(--m_pIn_buf_ofs) = q;
407
- m_in_buf_left++;
408
- }
409
-
410
- // Retrieves one character from the input stream, but does not read past markers. Will continue to return 0xFF when a marker is encountered.
411
- inline uint8 jpeg_decoder::get_octet()
412
- {
413
- bool padding_flag;
414
- int c = get_char(&padding_flag);
415
-
416
- if (c == 0xFF)
417
- {
418
- if (padding_flag)
419
- return 0xFF;
420
-
421
- c = get_char(&padding_flag);
422
- if (padding_flag)
423
- {
424
- stuff_char(0xFF);
425
- return 0xFF;
426
- }
427
-
428
- if (c == 0x00)
429
- return 0xFF;
430
- else
431
- {
432
- stuff_char(static_cast<uint8>(c));
433
- stuff_char(0xFF);
434
- return 0xFF;
435
- }
436
- }
437
-
438
- return static_cast<uint8>(c);
439
- }
440
-
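The de-stuffing rule get_octet implements comes straight from the JPEG format: within entropy-coded data a literal 0xFF byte is written as 0xFF 0x00, while 0xFF followed by anything else begins a marker. A standalone Python sketch of that rule, for illustration only (not part of the decoder):

def destuff(data):
    # Apply JPEG byte de-stuffing to a list of bytes from entropy-coded data.
    out, i = [], 0
    while i < len(data):
        if data[i] == 0xFF and i + 1 < len(data) and data[i + 1] == 0x00:
            out.append(0xFF)          # stuffed literal 0xFF
            i += 2
        elif data[i] == 0xFF:
            break                     # start of a marker: must not be consumed
        else:
            out.append(data[i])
            i += 1
    return out
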
441
- // Retrieves a variable number of bits from the input stream. Does not recognize markers.
442
- inline uint jpeg_decoder::get_bits(int num_bits)
443
- {
444
- if (!num_bits)
445
- return 0;
446
-
447
- uint i = m_bit_buf >> (32 - num_bits);
448
-
449
- if ((m_bits_left -= num_bits) <= 0)
450
- {
451
- m_bit_buf <<= (num_bits += m_bits_left);
452
-
453
- uint c1 = get_char();
454
- uint c2 = get_char();
455
- m_bit_buf = (m_bit_buf & 0xFFFF0000) | (c1 << 8) | c2;
456
-
457
- m_bit_buf <<= -m_bits_left;
458
-
459
- m_bits_left += 16;
460
-
461
- JPGD_ASSERT(m_bits_left >= 0);
462
- }
463
- else
464
- m_bit_buf <<= num_bits;
465
-
466
- return i;
467
- }
468
-
469
- // Retrieves a variable number of bits from the input stream. Markers will not be read into the input bit buffer. Instead, an infinite number of all 1's will be returned when a marker is encountered.
470
- inline uint jpeg_decoder::get_bits_no_markers(int num_bits)
471
- {
472
- if (!num_bits)
473
- return 0;
474
-
475
- uint i = m_bit_buf >> (32 - num_bits);
476
-
477
- if ((m_bits_left -= num_bits) <= 0)
478
- {
479
- m_bit_buf <<= (num_bits += m_bits_left);
480
-
481
- if ((m_in_buf_left < 2) || (m_pIn_buf_ofs[0] == 0xFF) || (m_pIn_buf_ofs[1] == 0xFF))
482
- {
483
- uint c1 = get_octet();
484
- uint c2 = get_octet();
485
- m_bit_buf |= (c1 << 8) | c2;
486
- }
487
- else
488
- {
489
- m_bit_buf |= ((uint)m_pIn_buf_ofs[0] << 8) | m_pIn_buf_ofs[1];
490
- m_in_buf_left -= 2;
491
- m_pIn_buf_ofs += 2;
492
- }
493
-
494
- m_bit_buf <<= -m_bits_left;
495
-
496
- m_bits_left += 16;
497
-
498
- JPGD_ASSERT(m_bits_left >= 0);
499
- }
500
- else
501
- m_bit_buf <<= num_bits;
502
-
503
- return i;
504
- }
505
-
506
- // Decodes a Huffman encoded symbol.
507
- inline int jpeg_decoder::huff_decode(huff_tables *pH)
508
- {
509
- int symbol;
510
-
511
- // Check first 8-bits: do we have a complete symbol?
512
- if ((symbol = pH->look_up[m_bit_buf >> 24]) < 0)
513
- {
514
- // Decode more bits, use a tree traversal to find symbol.
515
- int ofs = 23;
516
- do
517
- {
518
- symbol = pH->tree[-(int)(symbol + ((m_bit_buf >> ofs) & 1))];
519
- ofs--;
520
- } while (symbol < 0);
521
-
522
- get_bits_no_markers(8 + (23 - ofs));
523
- }
524
- else
525
- get_bits_no_markers(pH->code_size[symbol]);
526
-
527
- return symbol;
528
- }
529
-
530
- // Decodes a Huffman encoded symbol.
531
- inline int jpeg_decoder::huff_decode(huff_tables *pH, int& extra_bits)
532
- {
533
- int symbol;
534
-
535
- // Check first 8-bits: do we have a complete symbol?
536
- if ((symbol = pH->look_up2[m_bit_buf >> 24]) < 0)
537
- {
538
- // Use a tree traversal to find symbol.
539
- int ofs = 23;
540
- do
541
- {
542
- symbol = pH->tree[-(int)(symbol + ((m_bit_buf >> ofs) & 1))];
543
- ofs--;
544
- } while (symbol < 0);
545
-
546
- get_bits_no_markers(8 + (23 - ofs));
547
-
548
- extra_bits = get_bits_no_markers(symbol & 0xF);
549
- }
550
- else
551
- {
552
- JPGD_ASSERT(((symbol >> 8) & 31) == pH->code_size[symbol & 255] + ((symbol & 0x8000) ? (symbol & 15) : 0));
553
-
554
- if (symbol & 0x8000)
555
- {
556
- get_bits_no_markers((symbol >> 8) & 31);
557
- extra_bits = symbol >> 16;
558
- }
559
- else
560
- {
561
- int code_size = (symbol >> 8) & 31;
562
- int num_extra_bits = symbol & 0xF;
563
- int bits = code_size + num_extra_bits;
564
- if (bits <= (m_bits_left + 16))
565
- extra_bits = get_bits_no_markers(bits) & ((1 << num_extra_bits) - 1);
566
- else
567
- {
568
- get_bits_no_markers(code_size);
569
- extra_bits = get_bits_no_markers(num_extra_bits);
570
- }
571
- }
572
-
573
- symbol &= 0xFF;
574
- }
575
-
576
- return symbol;
577
- }
578
-
579
- // Tables and macro used to fully decode the DPCM differences.
580
- static const int s_extend_test[16] = { 0, 0x0001, 0x0002, 0x0004, 0x0008, 0x0010, 0x0020, 0x0040, 0x0080, 0x0100, 0x0200, 0x0400, 0x0800, 0x1000, 0x2000, 0x4000 };
581
- static const int s_extend_offset[16] = { 0, -1, -3, -7, -15, -31, -63, -127, -255, -511, -1023, -2047, -4095, -8191, -16383, -32767 };
582
- static const int s_extend_mask[] = { 0, (1<<0), (1<<1), (1<<2), (1<<3), (1<<4), (1<<5), (1<<6), (1<<7), (1<<8), (1<<9), (1<<10), (1<<11), (1<<12), (1<<13), (1<<14), (1<<15), (1<<16) };
583
- #define HUFF_EXTEND(x,s) ((x) < s_extend_test[s] ? (x) + s_extend_offset[s] : (x))
584
-
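HUFF_EXTEND performs the JPEG sign extension of an s-bit DPCM magnitude: values below 2**(s-1) encode negative differences offset by -(2**s - 1). A small illustrative Python equivalent of the macro and its tables:

def huff_extend(x, s):
    # Mirror of HUFF_EXTEND(x, s): sign-extend an s-bit magnitude.
    test = (1 << (s - 1)) if s else 0        # s_extend_test[s]
    offset = -((1 << s) - 1) if s else 0     # s_extend_offset[s]
    return x + offset if x < test else x

# huff_extend(3, 4) == -12 ; huff_extend(11, 4) == 11
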
585
- // Clamps a value between 0-255.
586
- inline uint8 jpeg_decoder::clamp(int i)
587
- {
588
- if (static_cast<uint>(i) > 255)
589
- i = (((~i) >> 31) & 0xFF);
590
-
591
- return static_cast<uint8>(i);
592
- }
593
-
594
- namespace DCT_Upsample
595
- {
596
- struct Matrix44
597
- {
598
- typedef int Element_Type;
599
- enum { NUM_ROWS = 4, NUM_COLS = 4 };
600
-
601
- Element_Type v[NUM_ROWS][NUM_COLS];
602
-
603
- inline int rows() const { return NUM_ROWS; }
604
- inline int cols() const { return NUM_COLS; }
605
-
606
- inline const Element_Type & at(int r, int c) const { return v[r][c]; }
607
- inline Element_Type & at(int r, int c) { return v[r][c]; }
608
-
609
- inline Matrix44() { }
610
-
611
- inline Matrix44& operator += (const Matrix44& a)
612
- {
613
- for (int r = 0; r < NUM_ROWS; r++)
614
- {
615
- at(r, 0) += a.at(r, 0);
616
- at(r, 1) += a.at(r, 1);
617
- at(r, 2) += a.at(r, 2);
618
- at(r, 3) += a.at(r, 3);
619
- }
620
- return *this;
621
- }
622
-
623
- inline Matrix44& operator -= (const Matrix44& a)
624
- {
625
- for (int r = 0; r < NUM_ROWS; r++)
626
- {
627
- at(r, 0) -= a.at(r, 0);
628
- at(r, 1) -= a.at(r, 1);
629
- at(r, 2) -= a.at(r, 2);
630
- at(r, 3) -= a.at(r, 3);
631
- }
632
- return *this;
633
- }
634
-
635
- friend inline Matrix44 operator + (const Matrix44& a, const Matrix44& b)
636
- {
637
- Matrix44 ret;
638
- for (int r = 0; r < NUM_ROWS; r++)
639
- {
640
- ret.at(r, 0) = a.at(r, 0) + b.at(r, 0);
641
- ret.at(r, 1) = a.at(r, 1) + b.at(r, 1);
642
- ret.at(r, 2) = a.at(r, 2) + b.at(r, 2);
643
- ret.at(r, 3) = a.at(r, 3) + b.at(r, 3);
644
- }
645
- return ret;
646
- }
647
-
648
- friend inline Matrix44 operator - (const Matrix44& a, const Matrix44& b)
649
- {
650
- Matrix44 ret;
651
- for (int r = 0; r < NUM_ROWS; r++)
652
- {
653
- ret.at(r, 0) = a.at(r, 0) - b.at(r, 0);
654
- ret.at(r, 1) = a.at(r, 1) - b.at(r, 1);
655
- ret.at(r, 2) = a.at(r, 2) - b.at(r, 2);
656
- ret.at(r, 3) = a.at(r, 3) - b.at(r, 3);
657
- }
658
- return ret;
659
- }
660
-
661
- static inline void add_and_store(jpgd_block_t* pDst, const Matrix44& a, const Matrix44& b)
662
- {
663
- for (int r = 0; r < 4; r++)
664
- {
665
- pDst[0*8 + r] = static_cast<jpgd_block_t>(a.at(r, 0) + b.at(r, 0));
666
- pDst[1*8 + r] = static_cast<jpgd_block_t>(a.at(r, 1) + b.at(r, 1));
667
- pDst[2*8 + r] = static_cast<jpgd_block_t>(a.at(r, 2) + b.at(r, 2));
668
- pDst[3*8 + r] = static_cast<jpgd_block_t>(a.at(r, 3) + b.at(r, 3));
669
- }
670
- }
671
-
672
- static inline void sub_and_store(jpgd_block_t* pDst, const Matrix44& a, const Matrix44& b)
673
- {
674
- for (int r = 0; r < 4; r++)
675
- {
676
- pDst[0*8 + r] = static_cast<jpgd_block_t>(a.at(r, 0) - b.at(r, 0));
677
- pDst[1*8 + r] = static_cast<jpgd_block_t>(a.at(r, 1) - b.at(r, 1));
678
- pDst[2*8 + r] = static_cast<jpgd_block_t>(a.at(r, 2) - b.at(r, 2));
679
- pDst[3*8 + r] = static_cast<jpgd_block_t>(a.at(r, 3) - b.at(r, 3));
680
- }
681
- }
682
- };
683
-
684
- const int FRACT_BITS = 10;
685
- const int SCALE = 1 << FRACT_BITS;
686
-
687
- typedef int Temp_Type;
688
- #define D(i) (((i) + (SCALE >> 1)) >> FRACT_BITS)
689
- #define F(i) ((int)((i) * SCALE + .5f))
690
-
691
- // Any decent C++ compiler will optimize this at compile time to a 0, or an array access.
692
- #define AT(c, r) ((((c)>=NUM_COLS)||((r)>=NUM_ROWS)) ? 0 : pSrc[(c)+(r)*8])
693
-
694
- // NUM_ROWS/NUM_COLS = # of non-zero rows/cols in input matrix
695
- template<int NUM_ROWS, int NUM_COLS>
696
- struct P_Q
697
- {
698
- static void calc(Matrix44& P, Matrix44& Q, const jpgd_block_t* pSrc)
699
- {
700
- // 4x8 = 4x8 times 8x8, matrix 0 is constant
701
- const Temp_Type X000 = AT(0, 0);
702
- const Temp_Type X001 = AT(0, 1);
703
- const Temp_Type X002 = AT(0, 2);
704
- const Temp_Type X003 = AT(0, 3);
705
- const Temp_Type X004 = AT(0, 4);
706
- const Temp_Type X005 = AT(0, 5);
707
- const Temp_Type X006 = AT(0, 6);
708
- const Temp_Type X007 = AT(0, 7);
709
- const Temp_Type X010 = D(F(0.415735f) * AT(1, 0) + F(0.791065f) * AT(3, 0) + F(-0.352443f) * AT(5, 0) + F(0.277785f) * AT(7, 0));
710
- const Temp_Type X011 = D(F(0.415735f) * AT(1, 1) + F(0.791065f) * AT(3, 1) + F(-0.352443f) * AT(5, 1) + F(0.277785f) * AT(7, 1));
711
- const Temp_Type X012 = D(F(0.415735f) * AT(1, 2) + F(0.791065f) * AT(3, 2) + F(-0.352443f) * AT(5, 2) + F(0.277785f) * AT(7, 2));
712
- const Temp_Type X013 = D(F(0.415735f) * AT(1, 3) + F(0.791065f) * AT(3, 3) + F(-0.352443f) * AT(5, 3) + F(0.277785f) * AT(7, 3));
713
- const Temp_Type X014 = D(F(0.415735f) * AT(1, 4) + F(0.791065f) * AT(3, 4) + F(-0.352443f) * AT(5, 4) + F(0.277785f) * AT(7, 4));
714
- const Temp_Type X015 = D(F(0.415735f) * AT(1, 5) + F(0.791065f) * AT(3, 5) + F(-0.352443f) * AT(5, 5) + F(0.277785f) * AT(7, 5));
715
- const Temp_Type X016 = D(F(0.415735f) * AT(1, 6) + F(0.791065f) * AT(3, 6) + F(-0.352443f) * AT(5, 6) + F(0.277785f) * AT(7, 6));
716
- const Temp_Type X017 = D(F(0.415735f) * AT(1, 7) + F(0.791065f) * AT(3, 7) + F(-0.352443f) * AT(5, 7) + F(0.277785f) * AT(7, 7));
717
- const Temp_Type X020 = AT(4, 0);
718
- const Temp_Type X021 = AT(4, 1);
719
- const Temp_Type X022 = AT(4, 2);
720
- const Temp_Type X023 = AT(4, 3);
721
- const Temp_Type X024 = AT(4, 4);
722
- const Temp_Type X025 = AT(4, 5);
723
- const Temp_Type X026 = AT(4, 6);
724
- const Temp_Type X027 = AT(4, 7);
725
- const Temp_Type X030 = D(F(0.022887f) * AT(1, 0) + F(-0.097545f) * AT(3, 0) + F(0.490393f) * AT(5, 0) + F(0.865723f) * AT(7, 0));
726
- const Temp_Type X031 = D(F(0.022887f) * AT(1, 1) + F(-0.097545f) * AT(3, 1) + F(0.490393f) * AT(5, 1) + F(0.865723f) * AT(7, 1));
727
- const Temp_Type X032 = D(F(0.022887f) * AT(1, 2) + F(-0.097545f) * AT(3, 2) + F(0.490393f) * AT(5, 2) + F(0.865723f) * AT(7, 2));
728
- const Temp_Type X033 = D(F(0.022887f) * AT(1, 3) + F(-0.097545f) * AT(3, 3) + F(0.490393f) * AT(5, 3) + F(0.865723f) * AT(7, 3));
729
- const Temp_Type X034 = D(F(0.022887f) * AT(1, 4) + F(-0.097545f) * AT(3, 4) + F(0.490393f) * AT(5, 4) + F(0.865723f) * AT(7, 4));
730
- const Temp_Type X035 = D(F(0.022887f) * AT(1, 5) + F(-0.097545f) * AT(3, 5) + F(0.490393f) * AT(5, 5) + F(0.865723f) * AT(7, 5));
731
- const Temp_Type X036 = D(F(0.022887f) * AT(1, 6) + F(-0.097545f) * AT(3, 6) + F(0.490393f) * AT(5, 6) + F(0.865723f) * AT(7, 6));
732
- const Temp_Type X037 = D(F(0.022887f) * AT(1, 7) + F(-0.097545f) * AT(3, 7) + F(0.490393f) * AT(5, 7) + F(0.865723f) * AT(7, 7));
733
-
734
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
735
- P.at(0, 0) = X000;
736
- P.at(0, 1) = D(X001 * F(0.415735f) + X003 * F(0.791065f) + X005 * F(-0.352443f) + X007 * F(0.277785f));
737
- P.at(0, 2) = X004;
738
- P.at(0, 3) = D(X001 * F(0.022887f) + X003 * F(-0.097545f) + X005 * F(0.490393f) + X007 * F(0.865723f));
739
- P.at(1, 0) = X010;
740
- P.at(1, 1) = D(X011 * F(0.415735f) + X013 * F(0.791065f) + X015 * F(-0.352443f) + X017 * F(0.277785f));
741
- P.at(1, 2) = X014;
742
- P.at(1, 3) = D(X011 * F(0.022887f) + X013 * F(-0.097545f) + X015 * F(0.490393f) + X017 * F(0.865723f));
743
- P.at(2, 0) = X020;
744
- P.at(2, 1) = D(X021 * F(0.415735f) + X023 * F(0.791065f) + X025 * F(-0.352443f) + X027 * F(0.277785f));
745
- P.at(2, 2) = X024;
746
- P.at(2, 3) = D(X021 * F(0.022887f) + X023 * F(-0.097545f) + X025 * F(0.490393f) + X027 * F(0.865723f));
747
- P.at(3, 0) = X030;
748
- P.at(3, 1) = D(X031 * F(0.415735f) + X033 * F(0.791065f) + X035 * F(-0.352443f) + X037 * F(0.277785f));
749
- P.at(3, 2) = X034;
750
- P.at(3, 3) = D(X031 * F(0.022887f) + X033 * F(-0.097545f) + X035 * F(0.490393f) + X037 * F(0.865723f));
751
- // 40 muls 24 adds
752
-
753
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
754
- Q.at(0, 0) = D(X001 * F(0.906127f) + X003 * F(-0.318190f) + X005 * F(0.212608f) + X007 * F(-0.180240f));
755
- Q.at(0, 1) = X002;
756
- Q.at(0, 2) = D(X001 * F(-0.074658f) + X003 * F(0.513280f) + X005 * F(0.768178f) + X007 * F(-0.375330f));
757
- Q.at(0, 3) = X006;
758
- Q.at(1, 0) = D(X011 * F(0.906127f) + X013 * F(-0.318190f) + X015 * F(0.212608f) + X017 * F(-0.180240f));
759
- Q.at(1, 1) = X012;
760
- Q.at(1, 2) = D(X011 * F(-0.074658f) + X013 * F(0.513280f) + X015 * F(0.768178f) + X017 * F(-0.375330f));
761
- Q.at(1, 3) = X016;
762
- Q.at(2, 0) = D(X021 * F(0.906127f) + X023 * F(-0.318190f) + X025 * F(0.212608f) + X027 * F(-0.180240f));
763
- Q.at(2, 1) = X022;
764
- Q.at(2, 2) = D(X021 * F(-0.074658f) + X023 * F(0.513280f) + X025 * F(0.768178f) + X027 * F(-0.375330f));
765
- Q.at(2, 3) = X026;
766
- Q.at(3, 0) = D(X031 * F(0.906127f) + X033 * F(-0.318190f) + X035 * F(0.212608f) + X037 * F(-0.180240f));
767
- Q.at(3, 1) = X032;
768
- Q.at(3, 2) = D(X031 * F(-0.074658f) + X033 * F(0.513280f) + X035 * F(0.768178f) + X037 * F(-0.375330f));
769
- Q.at(3, 3) = X036;
770
- // 40 muls 24 adds
771
- }
772
- };
773
-
774
- template<int NUM_ROWS, int NUM_COLS>
775
- struct R_S
776
- {
777
- static void calc(Matrix44& R, Matrix44& S, const jpgd_block_t* pSrc)
778
- {
779
- // 4x8 = 4x8 times 8x8, matrix 0 is constant
780
- const Temp_Type X100 = D(F(0.906127f) * AT(1, 0) + F(-0.318190f) * AT(3, 0) + F(0.212608f) * AT(5, 0) + F(-0.180240f) * AT(7, 0));
781
- const Temp_Type X101 = D(F(0.906127f) * AT(1, 1) + F(-0.318190f) * AT(3, 1) + F(0.212608f) * AT(5, 1) + F(-0.180240f) * AT(7, 1));
782
- const Temp_Type X102 = D(F(0.906127f) * AT(1, 2) + F(-0.318190f) * AT(3, 2) + F(0.212608f) * AT(5, 2) + F(-0.180240f) * AT(7, 2));
783
- const Temp_Type X103 = D(F(0.906127f) * AT(1, 3) + F(-0.318190f) * AT(3, 3) + F(0.212608f) * AT(5, 3) + F(-0.180240f) * AT(7, 3));
784
- const Temp_Type X104 = D(F(0.906127f) * AT(1, 4) + F(-0.318190f) * AT(3, 4) + F(0.212608f) * AT(5, 4) + F(-0.180240f) * AT(7, 4));
785
- const Temp_Type X105 = D(F(0.906127f) * AT(1, 5) + F(-0.318190f) * AT(3, 5) + F(0.212608f) * AT(5, 5) + F(-0.180240f) * AT(7, 5));
786
- const Temp_Type X106 = D(F(0.906127f) * AT(1, 6) + F(-0.318190f) * AT(3, 6) + F(0.212608f) * AT(5, 6) + F(-0.180240f) * AT(7, 6));
787
- const Temp_Type X107 = D(F(0.906127f) * AT(1, 7) + F(-0.318190f) * AT(3, 7) + F(0.212608f) * AT(5, 7) + F(-0.180240f) * AT(7, 7));
788
- const Temp_Type X110 = AT(2, 0);
789
- const Temp_Type X111 = AT(2, 1);
790
- const Temp_Type X112 = AT(2, 2);
791
- const Temp_Type X113 = AT(2, 3);
792
- const Temp_Type X114 = AT(2, 4);
793
- const Temp_Type X115 = AT(2, 5);
794
- const Temp_Type X116 = AT(2, 6);
795
- const Temp_Type X117 = AT(2, 7);
796
- const Temp_Type X120 = D(F(-0.074658f) * AT(1, 0) + F(0.513280f) * AT(3, 0) + F(0.768178f) * AT(5, 0) + F(-0.375330f) * AT(7, 0));
797
- const Temp_Type X121 = D(F(-0.074658f) * AT(1, 1) + F(0.513280f) * AT(3, 1) + F(0.768178f) * AT(5, 1) + F(-0.375330f) * AT(7, 1));
798
- const Temp_Type X122 = D(F(-0.074658f) * AT(1, 2) + F(0.513280f) * AT(3, 2) + F(0.768178f) * AT(5, 2) + F(-0.375330f) * AT(7, 2));
799
- const Temp_Type X123 = D(F(-0.074658f) * AT(1, 3) + F(0.513280f) * AT(3, 3) + F(0.768178f) * AT(5, 3) + F(-0.375330f) * AT(7, 3));
800
- const Temp_Type X124 = D(F(-0.074658f) * AT(1, 4) + F(0.513280f) * AT(3, 4) + F(0.768178f) * AT(5, 4) + F(-0.375330f) * AT(7, 4));
801
- const Temp_Type X125 = D(F(-0.074658f) * AT(1, 5) + F(0.513280f) * AT(3, 5) + F(0.768178f) * AT(5, 5) + F(-0.375330f) * AT(7, 5));
802
- const Temp_Type X126 = D(F(-0.074658f) * AT(1, 6) + F(0.513280f) * AT(3, 6) + F(0.768178f) * AT(5, 6) + F(-0.375330f) * AT(7, 6));
803
- const Temp_Type X127 = D(F(-0.074658f) * AT(1, 7) + F(0.513280f) * AT(3, 7) + F(0.768178f) * AT(5, 7) + F(-0.375330f) * AT(7, 7));
804
- const Temp_Type X130 = AT(6, 0);
805
- const Temp_Type X131 = AT(6, 1);
806
- const Temp_Type X132 = AT(6, 2);
807
- const Temp_Type X133 = AT(6, 3);
808
- const Temp_Type X134 = AT(6, 4);
809
- const Temp_Type X135 = AT(6, 5);
810
- const Temp_Type X136 = AT(6, 6);
811
- const Temp_Type X137 = AT(6, 7);
812
- // 80 muls 48 adds
813
-
814
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
815
- R.at(0, 0) = X100;
816
- R.at(0, 1) = D(X101 * F(0.415735f) + X103 * F(0.791065f) + X105 * F(-0.352443f) + X107 * F(0.277785f));
817
- R.at(0, 2) = X104;
818
- R.at(0, 3) = D(X101 * F(0.022887f) + X103 * F(-0.097545f) + X105 * F(0.490393f) + X107 * F(0.865723f));
819
- R.at(1, 0) = X110;
820
- R.at(1, 1) = D(X111 * F(0.415735f) + X113 * F(0.791065f) + X115 * F(-0.352443f) + X117 * F(0.277785f));
821
- R.at(1, 2) = X114;
822
- R.at(1, 3) = D(X111 * F(0.022887f) + X113 * F(-0.097545f) + X115 * F(0.490393f) + X117 * F(0.865723f));
823
- R.at(2, 0) = X120;
824
- R.at(2, 1) = D(X121 * F(0.415735f) + X123 * F(0.791065f) + X125 * F(-0.352443f) + X127 * F(0.277785f));
825
- R.at(2, 2) = X124;
826
- R.at(2, 3) = D(X121 * F(0.022887f) + X123 * F(-0.097545f) + X125 * F(0.490393f) + X127 * F(0.865723f));
827
- R.at(3, 0) = X130;
828
- R.at(3, 1) = D(X131 * F(0.415735f) + X133 * F(0.791065f) + X135 * F(-0.352443f) + X137 * F(0.277785f));
829
- R.at(3, 2) = X134;
830
- R.at(3, 3) = D(X131 * F(0.022887f) + X133 * F(-0.097545f) + X135 * F(0.490393f) + X137 * F(0.865723f));
831
- // 40 muls 24 adds
832
- // 4x4 = 4x8 times 8x4, matrix 1 is constant
833
- S.at(0, 0) = D(X101 * F(0.906127f) + X103 * F(-0.318190f) + X105 * F(0.212608f) + X107 * F(-0.180240f));
834
- S.at(0, 1) = X102;
835
- S.at(0, 2) = D(X101 * F(-0.074658f) + X103 * F(0.513280f) + X105 * F(0.768178f) + X107 * F(-0.375330f));
836
- S.at(0, 3) = X106;
837
- S.at(1, 0) = D(X111 * F(0.906127f) + X113 * F(-0.318190f) + X115 * F(0.212608f) + X117 * F(-0.180240f));
838
- S.at(1, 1) = X112;
839
- S.at(1, 2) = D(X111 * F(-0.074658f) + X113 * F(0.513280f) + X115 * F(0.768178f) + X117 * F(-0.375330f));
840
- S.at(1, 3) = X116;
841
- S.at(2, 0) = D(X121 * F(0.906127f) + X123 * F(-0.318190f) + X125 * F(0.212608f) + X127 * F(-0.180240f));
842
- S.at(2, 1) = X122;
843
- S.at(2, 2) = D(X121 * F(-0.074658f) + X123 * F(0.513280f) + X125 * F(0.768178f) + X127 * F(-0.375330f));
844
- S.at(2, 3) = X126;
845
- S.at(3, 0) = D(X131 * F(0.906127f) + X133 * F(-0.318190f) + X135 * F(0.212608f) + X137 * F(-0.180240f));
846
- S.at(3, 1) = X132;
847
- S.at(3, 2) = D(X131 * F(-0.074658f) + X133 * F(0.513280f) + X135 * F(0.768178f) + X137 * F(-0.375330f));
848
- S.at(3, 3) = X136;
849
- // 40 muls 24 adds
850
- }
851
- };
852
- } // end namespace DCT_Upsample
853
-
854
- // Unconditionally frees all allocated m_blocks.
855
- void jpeg_decoder::free_all_blocks()
856
- {
857
- m_pStream = NULL;
858
- for (mem_block *b = m_pMem_blocks; b; )
859
- {
860
- mem_block *n = b->m_pNext;
861
- jpgd_free(b);
862
- b = n;
863
- }
864
- m_pMem_blocks = NULL;
865
- }
866
-
867
- // This method handles all errors.
868
- // It could easily be changed to use C++ exceptions.
869
- void jpeg_decoder::stop_decoding(jpgd_status status)
870
- {
871
- m_error_code = status;
872
- free_all_blocks();
873
- longjmp(m_jmp_state, status);
874
-
875
- // we shouldn't get here as longjmp shouldn't return, but we put it here to make it explicit
876
- // that this function doesn't return, otherwise we get this error:
877
- //
878
- // error : function declared 'noreturn' should not return
879
- exit(1);
880
- }
881
-
882
- void *jpeg_decoder::alloc(size_t nSize, bool zero)
883
- {
884
- nSize = (JPGD_MAX(nSize, 1) + 3) & ~3;
885
- char *rv = NULL;
886
- for (mem_block *b = m_pMem_blocks; b; b = b->m_pNext)
887
- {
888
- if ((b->m_used_count + nSize) <= b->m_size)
889
- {
890
- rv = b->m_data + b->m_used_count;
891
- b->m_used_count += nSize;
892
- break;
893
- }
894
- }
895
- if (!rv)
896
- {
897
- int capacity = JPGD_MAX(32768 - 256, (nSize + 2047) & ~2047);
898
- mem_block *b = (mem_block*)jpgd_malloc(sizeof(mem_block) + capacity);
899
- if (!b) stop_decoding(JPGD_NOTENOUGHMEM);
900
- b->m_pNext = m_pMem_blocks; m_pMem_blocks = b;
901
- b->m_used_count = nSize;
902
- b->m_size = capacity;
903
- rv = b->m_data;
904
- }
905
- if (zero) memset(rv, 0, nSize);
906
- return rv;
907
- }
908
-
909
- void jpeg_decoder::word_clear(void *p, uint16 c, uint n)
910
- {
911
- uint8 *pD = (uint8*)p;
912
- const uint8 l = c & 0xFF, h = (c >> 8) & 0xFF;
913
- while (n)
914
- {
915
- pD[0] = l; pD[1] = h; pD += 2;
916
- n--;
917
- }
918
- }
919
-
920
- // Refill the input buffer.
921
- // This method will sit in a loop until (A) the buffer is full or (B)
922
- // the stream's read() method reports an end-of-file condition.
923
- void jpeg_decoder::prep_in_buffer()
924
- {
925
- m_in_buf_left = 0;
926
- m_pIn_buf_ofs = m_in_buf;
927
-
928
- if (m_eof_flag)
929
- return;
930
-
931
- do
932
- {
933
- int bytes_read = m_pStream->read(m_in_buf + m_in_buf_left, JPGD_IN_BUF_SIZE - m_in_buf_left, &m_eof_flag);
934
- if (bytes_read == -1)
935
- stop_decoding(JPGD_STREAM_READ);
936
-
937
- m_in_buf_left += bytes_read;
938
- } while ((m_in_buf_left < JPGD_IN_BUF_SIZE) && (!m_eof_flag));
939
-
940
- m_total_bytes_read += m_in_buf_left;
941
-
942
- // Pad the end of the block with M_EOI (prevents the decompressor from going off the rails if the stream is invalid).
943
- // (This dates way back to when this decompressor was written in C/asm, and the all-asm Huffman decoder did some fancy things to increase perf.)
944
- word_clear(m_pIn_buf_ofs + m_in_buf_left, 0xD9FF, 64);
945
- }
946
-
947
- // Read a Huffman code table.
948
- void jpeg_decoder::read_dht_marker()
949
- {
950
- int i, index, count;
951
- uint8 huff_num[17];
952
- uint8 huff_val[256];
953
-
954
- uint num_left = get_bits(16);
955
-
956
- if (num_left < 2)
957
- stop_decoding(JPGD_BAD_DHT_MARKER);
958
-
959
- num_left -= 2;
960
-
961
- while (num_left)
962
- {
963
- index = get_bits(8);
964
-
965
- huff_num[0] = 0;
966
-
967
- count = 0;
968
-
969
- for (i = 1; i <= 16; i++)
970
- {
971
- huff_num[i] = static_cast<uint8>(get_bits(8));
972
- count += huff_num[i];
973
- }
974
-
975
- if (count > 255)
976
- stop_decoding(JPGD_BAD_DHT_COUNTS);
977
-
978
- for (i = 0; i < count; i++)
979
- huff_val[i] = static_cast<uint8>(get_bits(8));
980
-
981
- i = 1 + 16 + count;
982
-
983
- if (num_left < (uint)i)
984
- stop_decoding(JPGD_BAD_DHT_MARKER);
985
-
986
- num_left -= i;
987
-
988
- if ((index & 0x10) > 0x10)
989
- stop_decoding(JPGD_BAD_DHT_INDEX);
990
-
991
- index = (index & 0x0F) + ((index & 0x10) >> 4) * (JPGD_MAX_HUFF_TABLES >> 1);
992
-
993
- if (index >= JPGD_MAX_HUFF_TABLES)
994
- stop_decoding(JPGD_BAD_DHT_INDEX);
995
-
996
- if (!m_huff_num[index])
997
- m_huff_num[index] = (uint8 *)alloc(17);
998
-
999
- if (!m_huff_val[index])
1000
- m_huff_val[index] = (uint8 *)alloc(256);
1001
-
1002
- m_huff_ac[index] = (index & 0x10) != 0;
1003
- memcpy(m_huff_num[index], huff_num, 17);
1004
- memcpy(m_huff_val[index], huff_val, 256);
1005
- }
1006
- }
1007
-
1008
- // Read a quantization table.
1009
- void jpeg_decoder::read_dqt_marker()
1010
- {
1011
- int n, i, prec;
1012
- uint num_left;
1013
- uint temp;
1014
-
1015
- num_left = get_bits(16);
1016
-
1017
- if (num_left < 2)
1018
- stop_decoding(JPGD_BAD_DQT_MARKER);
1019
-
1020
- num_left -= 2;
1021
-
1022
- while (num_left)
1023
- {
1024
- n = get_bits(8);
1025
- prec = n >> 4;
1026
- n &= 0x0F;
1027
-
1028
- if (n >= JPGD_MAX_QUANT_TABLES)
1029
- stop_decoding(JPGD_BAD_DQT_TABLE);
1030
-
1031
- if (!m_quant[n])
1032
- m_quant[n] = (jpgd_quant_t *)alloc(64 * sizeof(jpgd_quant_t));
1033
-
1034
- // read quantization entries, in zag order
1035
- for (i = 0; i < 64; i++)
1036
- {
1037
- temp = get_bits(8);
1038
-
1039
- if (prec)
1040
- temp = (temp << 8) + get_bits(8);
1041
-
1042
- m_quant[n][i] = static_cast<jpgd_quant_t>(temp);
1043
- }
1044
-
1045
- i = 64 + 1;
1046
-
1047
- if (prec)
1048
- i += 64;
1049
-
1050
- if (num_left < (uint)i)
1051
- stop_decoding(JPGD_BAD_DQT_LENGTH);
1052
-
1053
- num_left -= i;
1054
- }
1055
- }
1056
-
1057
- // Read the start of frame (SOF) marker.
1058
- void jpeg_decoder::read_sof_marker()
1059
- {
1060
- int i;
1061
- uint num_left;
1062
-
1063
- num_left = get_bits(16);
1064
-
1065
- if (get_bits(8) != 8) /* precision: sorry, only 8-bit precision is supported right now */
1066
- stop_decoding(JPGD_BAD_PRECISION);
1067
-
1068
- m_image_y_size = get_bits(16);
1069
-
1070
- if ((m_image_y_size < 1) || (m_image_y_size > JPGD_MAX_HEIGHT))
1071
- stop_decoding(JPGD_BAD_HEIGHT);
1072
-
1073
- m_image_x_size = get_bits(16);
1074
-
1075
- if ((m_image_x_size < 1) || (m_image_x_size > JPGD_MAX_WIDTH))
1076
- stop_decoding(JPGD_BAD_WIDTH);
1077
-
1078
- m_comps_in_frame = get_bits(8);
1079
-
1080
- if (m_comps_in_frame > JPGD_MAX_COMPONENTS)
1081
- stop_decoding(JPGD_TOO_MANY_COMPONENTS);
1082
-
1083
- if (num_left != (uint)(m_comps_in_frame * 3 + 8))
1084
- stop_decoding(JPGD_BAD_SOF_LENGTH);
1085
-
1086
- for (i = 0; i < m_comps_in_frame; i++)
1087
- {
1088
- m_comp_ident[i] = get_bits(8);
1089
- m_comp_h_samp[i] = get_bits(4);
1090
- m_comp_v_samp[i] = get_bits(4);
1091
- m_comp_quant[i] = get_bits(8);
1092
- }
1093
- }
1094
-
1095
- // Used to skip unrecognized markers.
1096
- void jpeg_decoder::skip_variable_marker()
1097
- {
1098
- uint num_left;
1099
-
1100
- num_left = get_bits(16);
1101
-
1102
- if (num_left < 2)
1103
- stop_decoding(JPGD_BAD_VARIABLE_MARKER);
1104
-
1105
- num_left -= 2;
1106
-
1107
- while (num_left)
1108
- {
1109
- get_bits(8);
1110
- num_left--;
1111
- }
1112
- }
1113
-
1114
- // Read a define restart interval (DRI) marker.
1115
- void jpeg_decoder::read_dri_marker()
1116
- {
1117
- if (get_bits(16) != 4)
1118
- stop_decoding(JPGD_BAD_DRI_LENGTH);
1119
-
1120
- m_restart_interval = get_bits(16);
1121
- }
1122
-
1123
- // Read a start of scan (SOS) marker.
1124
- void jpeg_decoder::read_sos_marker()
1125
- {
1126
- uint num_left;
1127
- int i, ci, n, c, cc;
1128
-
1129
- num_left = get_bits(16);
1130
-
1131
- n = get_bits(8);
1132
-
1133
- m_comps_in_scan = n;
1134
-
1135
- num_left -= 3;
1136
-
1137
- if ( (num_left != (uint)(n * 2 + 3)) || (n < 1) || (n > JPGD_MAX_COMPS_IN_SCAN) )
1138
- stop_decoding(JPGD_BAD_SOS_LENGTH);
1139
-
1140
- for (i = 0; i < n; i++)
1141
- {
1142
- cc = get_bits(8);
1143
- c = get_bits(8);
1144
- num_left -= 2;
1145
-
1146
- for (ci = 0; ci < m_comps_in_frame; ci++)
1147
- if (cc == m_comp_ident[ci])
1148
- break;
1149
-
1150
- if (ci >= m_comps_in_frame)
1151
- stop_decoding(JPGD_BAD_SOS_COMP_ID);
1152
-
1153
- m_comp_list[i] = ci;
1154
- m_comp_dc_tab[ci] = (c >> 4) & 15;
1155
- m_comp_ac_tab[ci] = (c & 15) + (JPGD_MAX_HUFF_TABLES >> 1);
1156
- }
1157
-
1158
- m_spectral_start = get_bits(8);
1159
- m_spectral_end = get_bits(8);
1160
- m_successive_high = get_bits(4);
1161
- m_successive_low = get_bits(4);
1162
-
1163
- if (!m_progressive_flag)
1164
- {
1165
- m_spectral_start = 0;
1166
- m_spectral_end = 63;
1167
- }
1168
-
1169
- num_left -= 3;
1170
-
1171
- while (num_left) /* read past whatever is num_left */
1172
- {
1173
- get_bits(8);
1174
- num_left--;
1175
- }
1176
- }
1177
-
1178
- // Finds the next marker.
1179
- int jpeg_decoder::next_marker()
1180
- {
1181
- uint c, bytes;
1182
-
1183
- bytes = 0;
1184
-
1185
- do
1186
- {
1187
- do
1188
- {
1189
- bytes++;
1190
- c = get_bits(8);
1191
- } while (c != 0xFF);
1192
-
1193
- do
1194
- {
1195
- c = get_bits(8);
1196
- } while (c == 0xFF);
1197
-
1198
- } while (c == 0);
1199
-
1200
- // If bytes > 0 here, there were extra bytes before the marker (not good).
1201
-
1202
- return c;
1203
- }
1204
-
1205
- // Process markers. Returns when an SOFx, SOI, EOI, or SOS marker is
1206
- // encountered.
1207
- int jpeg_decoder::process_markers()
1208
- {
1209
- int c;
1210
-
1211
- for ( ; ; )
1212
- {
1213
- c = next_marker();
1214
-
1215
- switch (c)
1216
- {
1217
- case M_SOF0:
1218
- case M_SOF1:
1219
- case M_SOF2:
1220
- case M_SOF3:
1221
- case M_SOF5:
1222
- case M_SOF6:
1223
- case M_SOF7:
1224
- // case M_JPG:
1225
- case M_SOF9:
1226
- case M_SOF10:
1227
- case M_SOF11:
1228
- case M_SOF13:
1229
- case M_SOF14:
1230
- case M_SOF15:
1231
- case M_SOI:
1232
- case M_EOI:
1233
- case M_SOS:
1234
- {
1235
- return c;
1236
- }
1237
- case M_DHT:
1238
- {
1239
- read_dht_marker();
1240
- break;
1241
- }
1242
- // No arithmetic support - dumb patents!
1243
- case M_DAC:
1244
- {
1245
- stop_decoding(JPGD_NO_ARITHMITIC_SUPPORT);
1246
- break;
1247
- }
1248
- case M_DQT:
1249
- {
1250
- read_dqt_marker();
1251
- break;
1252
- }
1253
- case M_DRI:
1254
- {
1255
- read_dri_marker();
1256
- break;
1257
- }
1258
- //case M_APP0: /* no need to read the JFIF marker */
1259
-
1260
- case M_JPG:
1261
- case M_RST0: /* no parameters */
1262
- case M_RST1:
1263
- case M_RST2:
1264
- case M_RST3:
1265
- case M_RST4:
1266
- case M_RST5:
1267
- case M_RST6:
1268
- case M_RST7:
1269
- case M_TEM:
1270
- {
1271
- stop_decoding(JPGD_UNEXPECTED_MARKER);
1272
- break;
1273
- }
1274
- default: /* must be DNL, DHP, EXP, APPn, JPGn, COM, RESn, or APP0 */
1275
- {
1276
- skip_variable_marker();
1277
- break;
1278
- }
1279
- }
1280
- }
1281
- }
1282
-
1283
- // Finds the start of image (SOI) marker.
1284
- // This code is rather defensive: it only scans the first 4096 bytes to avoid
1285
- // false positives.
1286
- void jpeg_decoder::locate_soi_marker()
1287
- {
1288
- uint lastchar, thischar;
1289
- uint bytesleft;
1290
-
1291
- lastchar = get_bits(8);
1292
-
1293
- thischar = get_bits(8);
1294
-
1295
- /* ok if it's a normal JPEG file without a special header */
1296
-
1297
- if ((lastchar == 0xFF) && (thischar == M_SOI))
1298
- return;
1299
-
1300
- bytesleft = 4096; //512;
1301
-
1302
- for ( ; ; )
1303
- {
1304
- if (--bytesleft == 0)
1305
- stop_decoding(JPGD_NOT_JPEG);
1306
-
1307
- lastchar = thischar;
1308
-
1309
- thischar = get_bits(8);
1310
-
1311
- if (lastchar == 0xFF)
1312
- {
1313
- if (thischar == M_SOI)
1314
- break;
1315
- else if (thischar == M_EOI) // get_bits will keep returning M_EOI if we read past the end
1316
- stop_decoding(JPGD_NOT_JPEG);
1317
- }
1318
- }
1319
-
1320
- // Check the next character after marker: if it's not 0xFF, it can't be the start of the next marker, so the file is bad.
1321
- thischar = (m_bit_buf >> 24) & 0xFF;
1322
-
1323
- if (thischar != 0xFF)
1324
- stop_decoding(JPGD_NOT_JPEG);
1325
- }
1326
-
1327
- // Find a start of frame (SOF) marker.
1328
- void jpeg_decoder::locate_sof_marker()
1329
- {
1330
- locate_soi_marker();
1331
-
1332
- int c = process_markers();
1333
-
1334
- switch (c)
1335
- {
1336
- case M_SOF2:
1337
- m_progressive_flag = JPGD_TRUE;
1338
- case M_SOF0: /* baseline DCT */
1339
- case M_SOF1: /* extended sequential DCT */
1340
- {
1341
- read_sof_marker();
1342
- break;
1343
- }
1344
- case M_SOF9: /* Arithmetic coding */
1345
- {
1346
- stop_decoding(JPGD_NO_ARITHMITIC_SUPPORT);
1347
- break;
1348
- }
1349
- default:
1350
- {
1351
- stop_decoding(JPGD_UNSUPPORTED_MARKER);
1352
- break;
1353
- }
1354
- }
1355
- }
1356
-
1357
- // Find a start of scan (SOS) marker.
1358
- int jpeg_decoder::locate_sos_marker()
1359
- {
1360
- int c;
1361
-
1362
- c = process_markers();
1363
-
1364
- if (c == M_EOI)
1365
- return JPGD_FALSE;
1366
- else if (c != M_SOS)
1367
- stop_decoding(JPGD_UNEXPECTED_MARKER);
1368
-
1369
- read_sos_marker();
1370
-
1371
- return JPGD_TRUE;
1372
- }
1373
-
1374
- // Reset everything to default/uninitialized state.
1375
- void jpeg_decoder::init(jpeg_decoder_stream *pStream)
1376
- {
1377
- m_pMem_blocks = NULL;
1378
- m_error_code = JPGD_SUCCESS;
1379
- m_ready_flag = false;
1380
- m_image_x_size = m_image_y_size = 0;
1381
- m_pStream = pStream;
1382
- m_progressive_flag = JPGD_FALSE;
1383
-
1384
- memset(m_huff_ac, 0, sizeof(m_huff_ac));
1385
- memset(m_huff_num, 0, sizeof(m_huff_num));
1386
- memset(m_huff_val, 0, sizeof(m_huff_val));
1387
- memset(m_quant, 0, sizeof(m_quant));
1388
-
1389
- m_scan_type = 0;
1390
- m_comps_in_frame = 0;
1391
-
1392
- memset(m_comp_h_samp, 0, sizeof(m_comp_h_samp));
1393
- memset(m_comp_v_samp, 0, sizeof(m_comp_v_samp));
1394
- memset(m_comp_quant, 0, sizeof(m_comp_quant));
1395
- memset(m_comp_ident, 0, sizeof(m_comp_ident));
1396
- memset(m_comp_h_blocks, 0, sizeof(m_comp_h_blocks));
1397
- memset(m_comp_v_blocks, 0, sizeof(m_comp_v_blocks));
1398
-
1399
- m_comps_in_scan = 0;
1400
- memset(m_comp_list, 0, sizeof(m_comp_list));
1401
- memset(m_comp_dc_tab, 0, sizeof(m_comp_dc_tab));
1402
- memset(m_comp_ac_tab, 0, sizeof(m_comp_ac_tab));
1403
-
1404
- m_spectral_start = 0;
1405
- m_spectral_end = 0;
1406
- m_successive_low = 0;
1407
- m_successive_high = 0;
1408
- m_max_mcu_x_size = 0;
1409
- m_max_mcu_y_size = 0;
1410
- m_blocks_per_mcu = 0;
1411
- m_max_blocks_per_row = 0;
1412
- m_mcus_per_row = 0;
1413
- m_mcus_per_col = 0;
1414
- m_expanded_blocks_per_component = 0;
1415
- m_expanded_blocks_per_mcu = 0;
1416
- m_expanded_blocks_per_row = 0;
1417
- m_freq_domain_chroma_upsample = false;
1418
-
1419
- memset(m_mcu_org, 0, sizeof(m_mcu_org));
1420
-
1421
- m_total_lines_left = 0;
1422
- m_mcu_lines_left = 0;
1423
- m_real_dest_bytes_per_scan_line = 0;
1424
- m_dest_bytes_per_scan_line = 0;
1425
- m_dest_bytes_per_pixel = 0;
1426
-
1427
- memset(m_pHuff_tabs, 0, sizeof(m_pHuff_tabs));
1428
-
1429
- memset(m_dc_coeffs, 0, sizeof(m_dc_coeffs));
1430
- memset(m_ac_coeffs, 0, sizeof(m_ac_coeffs));
1431
- memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu));
1432
-
1433
- m_eob_run = 0;
1434
-
1435
- memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu));
1436
-
1437
- m_pIn_buf_ofs = m_in_buf;
1438
- m_in_buf_left = 0;
1439
- m_eof_flag = false;
1440
- m_tem_flag = 0;
1441
-
1442
- memset(m_in_buf_pad_start, 0, sizeof(m_in_buf_pad_start));
1443
- memset(m_in_buf, 0, sizeof(m_in_buf));
1444
- memset(m_in_buf_pad_end, 0, sizeof(m_in_buf_pad_end));
1445
-
1446
- m_restart_interval = 0;
1447
- m_restarts_left = 0;
1448
- m_next_restart_num = 0;
1449
-
1450
- m_max_mcus_per_row = 0;
1451
- m_max_blocks_per_mcu = 0;
1452
- m_max_mcus_per_col = 0;
1453
-
1454
- memset(m_last_dc_val, 0, sizeof(m_last_dc_val));
1455
- m_pMCU_coefficients = NULL;
1456
- m_pSample_buf = NULL;
1457
-
1458
- m_total_bytes_read = 0;
1459
-
1460
- m_pScan_line_0 = NULL;
1461
- m_pScan_line_1 = NULL;
1462
-
1463
- // Ready the input buffer.
1464
- prep_in_buffer();
1465
-
1466
- // Prime the bit buffer.
1467
- m_bits_left = 16;
1468
- m_bit_buf = 0;
1469
-
1470
- get_bits(16);
1471
- get_bits(16);
1472
-
1473
- for (int i = 0; i < JPGD_MAX_BLOCKS_PER_MCU; i++)
1474
- m_mcu_block_max_zag[i] = 64;
1475
- }
1476
-
1477
- #define SCALEBITS 16
1478
- #define ONE_HALF ((int) 1 << (SCALEBITS-1))
1479
- #define FIX(x) ((int) ((x) * (1L<<SCALEBITS) + 0.5f))
1480
-
1481
- // Create a few tables that allow us to quickly convert YCbCr to RGB.
1482
- void jpeg_decoder::create_look_ups()
1483
- {
1484
- for (int i = 0; i <= 255; i++)
1485
- {
1486
- int k = i - 128;
1487
- m_crr[i] = ( FIX(1.40200f) * k + ONE_HALF) >> SCALEBITS;
1488
- m_cbb[i] = ( FIX(1.77200f) * k + ONE_HALF) >> SCALEBITS;
1489
- m_crg[i] = (-FIX(0.71414f)) * k;
1490
- m_cbg[i] = (-FIX(0.34414f)) * k + ONE_HALF;
1491
- }
1492
- }
1493
-
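These tables precompute the usual JFIF YCbCr to RGB conversion (R = Y + 1.402*(Cr-128), and so on) in 16-bit fixed point, so the per-pixel work reduces to table lookups and shifts. A floating-point Python sketch of the same conversion, for illustration only:

def ycbcr_to_rgb(y, cb, cr):
    # Same coefficients as the m_crr/m_cbb/m_crg/m_cbg tables above.
    r = y + 1.402   * (cr - 128)
    g = y - 0.71414 * (cr - 128) - 0.34414 * (cb - 128)
    b = y + 1.772   * (cb - 128)
    def clamp(v):
        return max(0, min(255, int(round(v))))
    return clamp(r), clamp(g), clamp(b)

# ycbcr_to_rgb(128, 128, 128) == (128, 128, 128)  (neutral gray)
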
1494
- // This method throws back into the stream any bytes that were read
1495
- // into the bit buffer during initial marker scanning.
1496
- void jpeg_decoder::fix_in_buffer()
1497
- {
1498
- // In case any 0xFF's were pulled into the buffer during marker scanning.
1499
- JPGD_ASSERT((m_bits_left & 7) == 0);
1500
-
1501
- if (m_bits_left == 16)
1502
- stuff_char( (uint8)(m_bit_buf & 0xFF));
1503
-
1504
- if (m_bits_left >= 8)
1505
- stuff_char( (uint8)((m_bit_buf >> 8) & 0xFF));
1506
-
1507
- stuff_char((uint8)((m_bit_buf >> 16) & 0xFF));
1508
- stuff_char((uint8)((m_bit_buf >> 24) & 0xFF));
1509
-
1510
- m_bits_left = 16;
1511
- get_bits_no_markers(16);
1512
- get_bits_no_markers(16);
1513
- }
1514
-
1515
- void jpeg_decoder::transform_mcu(int mcu_row)
1516
- {
1517
- jpgd_block_t* pSrc_ptr = m_pMCU_coefficients;
1518
- uint8* pDst_ptr = m_pSample_buf + mcu_row * m_blocks_per_mcu * 64;
1519
-
1520
- for (int mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++)
1521
- {
1522
- idct(pSrc_ptr, pDst_ptr, m_mcu_block_max_zag[mcu_block]);
1523
- pSrc_ptr += 64;
1524
- pDst_ptr += 64;
1525
- }
1526
- }
1527
-
1528
- static const uint8 s_max_rc[64] =
1529
- {
1530
- 17, 18, 34, 50, 50, 51, 52, 52, 52, 68, 84, 84, 84, 84, 85, 86, 86, 86, 86, 86,
1531
- 102, 118, 118, 118, 118, 118, 118, 119, 120, 120, 120, 120, 120, 120, 120, 136,
1532
- 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136,
1533
- 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136
1534
- };
1535
-
1536
- void jpeg_decoder::transform_mcu_expand(int mcu_row)
1537
- {
1538
- jpgd_block_t* pSrc_ptr = m_pMCU_coefficients;
1539
- uint8* pDst_ptr = m_pSample_buf + mcu_row * m_expanded_blocks_per_mcu * 64;
1540
-
1541
- // Y IDCT
1542
- int mcu_block;
1543
- for (mcu_block = 0; mcu_block < m_expanded_blocks_per_component; mcu_block++)
1544
- {
1545
- idct(pSrc_ptr, pDst_ptr, m_mcu_block_max_zag[mcu_block]);
1546
- pSrc_ptr += 64;
1547
- pDst_ptr += 64;
1548
- }
1549
-
1550
- // Chroma IDCT, with upsampling
1551
- jpgd_block_t temp_block[64];
1552
-
1553
- for (int i = 0; i < 2; i++)
1554
- {
1555
- DCT_Upsample::Matrix44 P, Q, R, S;
1556
-
1557
- JPGD_ASSERT(m_mcu_block_max_zag[mcu_block] >= 1);
1558
- JPGD_ASSERT(m_mcu_block_max_zag[mcu_block] <= 64);
1559
-
1560
- switch (s_max_rc[m_mcu_block_max_zag[mcu_block++] - 1])
1561
- {
1562
- case 1*16+1:
1563
- DCT_Upsample::P_Q<1, 1>::calc(P, Q, pSrc_ptr);
1564
- DCT_Upsample::R_S<1, 1>::calc(R, S, pSrc_ptr);
1565
- break;
1566
- case 1*16+2:
1567
- DCT_Upsample::P_Q<1, 2>::calc(P, Q, pSrc_ptr);
1568
- DCT_Upsample::R_S<1, 2>::calc(R, S, pSrc_ptr);
1569
- break;
1570
- case 2*16+2:
1571
- DCT_Upsample::P_Q<2, 2>::calc(P, Q, pSrc_ptr);
1572
- DCT_Upsample::R_S<2, 2>::calc(R, S, pSrc_ptr);
1573
- break;
1574
- case 3*16+2:
1575
- DCT_Upsample::P_Q<3, 2>::calc(P, Q, pSrc_ptr);
1576
- DCT_Upsample::R_S<3, 2>::calc(R, S, pSrc_ptr);
1577
- break;
1578
- case 3*16+3:
1579
- DCT_Upsample::P_Q<3, 3>::calc(P, Q, pSrc_ptr);
1580
- DCT_Upsample::R_S<3, 3>::calc(R, S, pSrc_ptr);
1581
- break;
1582
- case 3*16+4:
1583
- DCT_Upsample::P_Q<3, 4>::calc(P, Q, pSrc_ptr);
1584
- DCT_Upsample::R_S<3, 4>::calc(R, S, pSrc_ptr);
1585
- break;
1586
- case 4*16+4:
1587
- DCT_Upsample::P_Q<4, 4>::calc(P, Q, pSrc_ptr);
1588
- DCT_Upsample::R_S<4, 4>::calc(R, S, pSrc_ptr);
1589
- break;
1590
- case 5*16+4:
1591
- DCT_Upsample::P_Q<5, 4>::calc(P, Q, pSrc_ptr);
1592
- DCT_Upsample::R_S<5, 4>::calc(R, S, pSrc_ptr);
1593
- break;
1594
- case 5*16+5:
1595
- DCT_Upsample::P_Q<5, 5>::calc(P, Q, pSrc_ptr);
1596
- DCT_Upsample::R_S<5, 5>::calc(R, S, pSrc_ptr);
1597
- break;
1598
- case 5*16+6:
1599
- DCT_Upsample::P_Q<5, 6>::calc(P, Q, pSrc_ptr);
1600
- DCT_Upsample::R_S<5, 6>::calc(R, S, pSrc_ptr);
1601
- break;
1602
- case 6*16+6:
1603
- DCT_Upsample::P_Q<6, 6>::calc(P, Q, pSrc_ptr);
1604
- DCT_Upsample::R_S<6, 6>::calc(R, S, pSrc_ptr);
1605
- break;
1606
- case 7*16+6:
1607
- DCT_Upsample::P_Q<7, 6>::calc(P, Q, pSrc_ptr);
1608
- DCT_Upsample::R_S<7, 6>::calc(R, S, pSrc_ptr);
1609
- break;
1610
- case 7*16+7:
1611
- DCT_Upsample::P_Q<7, 7>::calc(P, Q, pSrc_ptr);
1612
- DCT_Upsample::R_S<7, 7>::calc(R, S, pSrc_ptr);
1613
- break;
1614
- case 7*16+8:
1615
- DCT_Upsample::P_Q<7, 8>::calc(P, Q, pSrc_ptr);
1616
- DCT_Upsample::R_S<7, 8>::calc(R, S, pSrc_ptr);
1617
- break;
1618
- case 8*16+8:
1619
- DCT_Upsample::P_Q<8, 8>::calc(P, Q, pSrc_ptr);
1620
- DCT_Upsample::R_S<8, 8>::calc(R, S, pSrc_ptr);
1621
- break;
1622
- default:
1623
- JPGD_ASSERT(false);
1624
- }
1625
-
1626
- DCT_Upsample::Matrix44 a(P + Q); P -= Q;
1627
- DCT_Upsample::Matrix44& b = P;
1628
- DCT_Upsample::Matrix44 c(R + S); R -= S;
1629
- DCT_Upsample::Matrix44& d = R;
1630
-
1631
- DCT_Upsample::Matrix44::add_and_store(temp_block, a, c);
1632
- idct_4x4(temp_block, pDst_ptr);
1633
- pDst_ptr += 64;
1634
-
1635
- DCT_Upsample::Matrix44::sub_and_store(temp_block, a, c);
1636
- idct_4x4(temp_block, pDst_ptr);
1637
- pDst_ptr += 64;
1638
-
1639
- DCT_Upsample::Matrix44::add_and_store(temp_block, b, d);
1640
- idct_4x4(temp_block, pDst_ptr);
1641
- pDst_ptr += 64;
1642
-
1643
- DCT_Upsample::Matrix44::sub_and_store(temp_block, b, d);
1644
- idct_4x4(temp_block, pDst_ptr);
1645
- pDst_ptr += 64;
1646
-
1647
- pSrc_ptr += 64;
1648
- }
1649
- }
1650
-
1651
- // Loads and dequantizes the next row of (already decoded) coefficients.
1652
- // Progressive images only.
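- // m_mcu_block_max_zag[] records, per block, the index (in zig-zag order) just past the last
- // non-zero coefficient so the IDCT can skip the all-zero tail of the block.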
1653
- void jpeg_decoder::load_next_row()
1654
- {
1655
- int i;
1656
- jpgd_block_t *p;
1657
- jpgd_quant_t *q;
1658
- int mcu_row, mcu_block, row_block = 0;
1659
- int component_num, component_id;
1660
- int block_x_mcu[JPGD_MAX_COMPONENTS];
1661
-
1662
- memset(block_x_mcu, 0, JPGD_MAX_COMPONENTS * sizeof(int));
1663
-
1664
- for (mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++)
1665
- {
1666
- int block_x_mcu_ofs = 0, block_y_mcu_ofs = 0;
1667
-
1668
- for (mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++)
1669
- {
1670
- component_id = m_mcu_org[mcu_block];
1671
- q = m_quant[m_comp_quant[component_id]];
1672
-
1673
- p = m_pMCU_coefficients + 64 * mcu_block;
1674
-
1675
- jpgd_block_t* pAC = coeff_buf_getp(m_ac_coeffs[component_id], block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs);
1676
- jpgd_block_t* pDC = coeff_buf_getp(m_dc_coeffs[component_id], block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs);
1677
- p[0] = pDC[0];
1678
- memcpy(&p[1], &pAC[1], 63 * sizeof(jpgd_block_t));
1679
-
1680
- for (i = 63; i > 0; i--)
1681
- if (p[g_ZAG[i]])
1682
- break;
1683
-
1684
- m_mcu_block_max_zag[mcu_block] = i + 1;
1685
-
1686
- for ( ; i >= 0; i--)
1687
- if (p[g_ZAG[i]])
1688
- p[g_ZAG[i]] = static_cast<jpgd_block_t>(p[g_ZAG[i]] * q[i]);
1689
-
1690
- row_block++;
1691
-
1692
- if (m_comps_in_scan == 1)
1693
- block_x_mcu[component_id]++;
1694
- else
1695
- {
1696
- if (++block_x_mcu_ofs == m_comp_h_samp[component_id])
1697
- {
1698
- block_x_mcu_ofs = 0;
1699
-
1700
- if (++block_y_mcu_ofs == m_comp_v_samp[component_id])
1701
- {
1702
- block_y_mcu_ofs = 0;
1703
-
1704
- block_x_mcu[component_id] += m_comp_h_samp[component_id];
1705
- }
1706
- }
1707
- }
1708
- }
1709
-
1710
- if (m_freq_domain_chroma_upsample)
1711
- transform_mcu_expand(mcu_row);
1712
- else
1713
- transform_mcu(mcu_row);
1714
- }
1715
-
1716
- if (m_comps_in_scan == 1)
1717
- m_block_y_mcu[m_comp_list[0]]++;
1718
- else
1719
- {
1720
- for (component_num = 0; component_num < m_comps_in_scan; component_num++)
1721
- {
1722
- component_id = m_comp_list[component_num];
1723
-
1724
- m_block_y_mcu[component_id] += m_comp_v_samp[component_id];
1725
- }
1726
- }
1727
- }
1728
-
1729
- // Restart interval processing.
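- // An RSTn marker is expected every m_restart_interval MCUs; hitting one resets the DC
- // predictors, the EOB run, and the bit buffer so decoding can resynchronize.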
1730
- void jpeg_decoder::process_restart()
1731
- {
1732
- int i;
1733
- int c = 0;
1734
-
1735
- // Align to a byte boundary
1736
- // FIXME: Is this really necessary? get_bits_no_markers() never reads in markers!
1737
- //get_bits_no_markers(m_bits_left & 7);
1738
-
1739
- // Let's scan a little bit to find the marker, but not _too_ far.
1740
- // 1536 is a "fudge factor" that determines how much to scan.
1741
- for (i = 1536; i > 0; i--)
1742
- if (get_char() == 0xFF)
1743
- break;
1744
-
1745
- if (i == 0)
1746
- stop_decoding(JPGD_BAD_RESTART_MARKER);
1747
-
1748
- for ( ; i > 0; i--)
1749
- if ((c = get_char()) != 0xFF)
1750
- break;
1751
-
1752
- if (i == 0)
1753
- stop_decoding(JPGD_BAD_RESTART_MARKER);
1754
-
1755
- // Is it the expected marker? If not, something bad happened.
1756
- if (c != (m_next_restart_num + M_RST0))
1757
- stop_decoding(JPGD_BAD_RESTART_MARKER);
1758
-
1759
- // Reset each component's DC prediction values.
1760
- memset(&m_last_dc_val, 0, m_comps_in_frame * sizeof(uint));
1761
-
1762
- m_eob_run = 0;
1763
-
1764
- m_restarts_left = m_restart_interval;
1765
-
1766
- m_next_restart_num = (m_next_restart_num + 1) & 7;
1767
-
1768
- // Get the bit buffer going again...
1769
-
1770
- m_bits_left = 16;
1771
- get_bits_no_markers(16);
1772
- get_bits_no_markers(16);
1773
- }
1774
-
1775
- static inline int dequantize_ac(int c, int q) { c *= q; return c; }
1776
-
1777
- // Decodes and dequantizes the next row of coefficients.
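- // Each AC Huffman symbol packs a zero-run length in its high nibble (r) and a magnitude
- // category in its low nibble (s); HUFF_EXTEND() sign-extends the s extra bits that follow.
- // r == 15 with s == 0 means "skip sixteen zeros", and s == 0 otherwise ends the block.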
1778
- void jpeg_decoder::decode_next_row()
1779
- {
1780
- int row_block = 0;
1781
-
1782
- for (int mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++)
1783
- {
1784
- if ((m_restart_interval) && (m_restarts_left == 0))
1785
- process_restart();
1786
-
1787
- jpgd_block_t* p = m_pMCU_coefficients;
1788
- for (int mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++, p += 64)
1789
- {
1790
- int component_id = m_mcu_org[mcu_block];
1791
- jpgd_quant_t* q = m_quant[m_comp_quant[component_id]];
1792
-
1793
- int r, s;
1794
- s = huff_decode(m_pHuff_tabs[m_comp_dc_tab[component_id]], r);
1795
- s = HUFF_EXTEND(r, s);
1796
-
1797
- m_last_dc_val[component_id] = (s += m_last_dc_val[component_id]);
1798
-
1799
- p[0] = static_cast<jpgd_block_t>(s * q[0]);
1800
-
1801
- int prev_num_set = m_mcu_block_max_zag[mcu_block];
1802
-
1803
- huff_tables *pH = m_pHuff_tabs[m_comp_ac_tab[component_id]];
1804
-
1805
- int k;
1806
- for (k = 1; k < 64; k++)
1807
- {
1808
- int extra_bits;
1809
- s = huff_decode(pH, extra_bits);
1810
-
1811
- r = s >> 4;
1812
- s &= 15;
1813
-
1814
- if (s)
1815
- {
1816
- if (r)
1817
- {
1818
- if ((k + r) > 63)
1819
- stop_decoding(JPGD_DECODE_ERROR);
1820
-
1821
- if (k < prev_num_set)
1822
- {
1823
- int n = JPGD_MIN(r, prev_num_set - k);
1824
- int kt = k;
1825
- while (n--)
1826
- p[g_ZAG[kt++]] = 0;
1827
- }
1828
-
1829
- k += r;
1830
- }
1831
-
1832
- s = HUFF_EXTEND(extra_bits, s);
1833
-
1834
- JPGD_ASSERT(k < 64);
1835
-
1836
- p[g_ZAG[k]] = static_cast<jpgd_block_t>(dequantize_ac(s, q[k])); //s * q[k];
1837
- }
1838
- else
1839
- {
1840
- if (r == 15)
1841
- {
1842
- if ((k + 16) > 64)
1843
- stop_decoding(JPGD_DECODE_ERROR);
1844
-
1845
- if (k < prev_num_set)
1846
- {
1847
- int n = JPGD_MIN(16, prev_num_set - k);
1848
- int kt = k;
1849
- while (n--)
1850
- {
1851
- JPGD_ASSERT(kt <= 63);
1852
- p[g_ZAG[kt++]] = 0;
1853
- }
1854
- }
1855
-
1856
- k += 16 - 1; // - 1 because the loop counter is k
1857
- // BEGIN EPIC MOD
1858
- JPGD_ASSERT(k < 64 && p[g_ZAG[k]] == 0);
1859
- // END EPIC MOD
1860
- }
1861
- else
1862
- break;
1863
- }
1864
- }
1865
-
1866
- if (k < prev_num_set)
1867
- {
1868
- int kt = k;
1869
- while (kt < prev_num_set)
1870
- p[g_ZAG[kt++]] = 0;
1871
- }
1872
-
1873
- m_mcu_block_max_zag[mcu_block] = k;
1874
-
1875
- row_block++;
1876
- }
1877
-
1878
- if (m_freq_domain_chroma_upsample)
1879
- transform_mcu_expand(mcu_row);
1880
- else
1881
- transform_mcu(mcu_row);
1882
-
1883
- m_restarts_left--;
1884
- }
1885
- }
1886
-
1887
- // YCbCr H1V1 (1x1:1:1, 3 m_blocks per MCU) to RGB
1888
- void jpeg_decoder::H1V1Convert()
1889
- {
1890
- int row = m_max_mcu_y_size - m_mcu_lines_left;
1891
- uint8 *d = m_pScan_line_0;
1892
- uint8 *s = m_pSample_buf + row * 8;
1893
-
1894
- for (int i = m_max_mcus_per_row; i > 0; i--)
1895
- {
1896
- for (int j = 0; j < 8; j++)
1897
- {
1898
- int y = s[j];
1899
- int cb = s[64+j];
1900
- int cr = s[128+j];
1901
-
1902
- if (jpg_format == ERGBFormatJPG::BGRA)
1903
- {
1904
- d[0] = clamp(y + m_cbb[cb]);
1905
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
1906
- d[2] = clamp(y + m_crr[cr]);
1907
- d[3] = 255;
1908
- }
1909
- else
1910
- {
1911
- d[0] = clamp(y + m_crr[cr]);
1912
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
1913
- d[2] = clamp(y + m_cbb[cb]);
1914
- d[3] = 255;
1915
- }
1916
- d += 4;
1917
- }
1918
-
1919
- s += 64*3;
1920
- }
1921
- }
1922
-
1923
- // YCbCr H2V1 (2x1:1:1, 4 m_blocks per MCU) to RGB
1924
- void jpeg_decoder::H2V1Convert()
1925
- {
1926
- int row = m_max_mcu_y_size - m_mcu_lines_left;
1927
- uint8 *d0 = m_pScan_line_0;
1928
- uint8 *y = m_pSample_buf + row * 8;
1929
- uint8 *c = m_pSample_buf + 2*64 + row * 8;
1930
-
1931
- for (int i = m_max_mcus_per_row; i > 0; i--)
1932
- {
1933
- for (int l = 0; l < 2; l++)
1934
- {
1935
- for (int j = 0; j < 4; j++)
1936
- {
1937
- int cb = c[0];
1938
- int cr = c[64];
1939
-
1940
- int rc = m_crr[cr];
1941
- int gc = ((m_crg[cr] + m_cbg[cb]) >> 16);
1942
- int bc = m_cbb[cb];
1943
-
1944
- int yy = y[j<<1];
1945
- if (jpg_format == ERGBFormatJPG::BGRA)
1946
- {
1947
- d0[0] = clamp(yy+bc);
1948
- d0[1] = clamp(yy+gc);
1949
- d0[2] = clamp(yy+rc);
1950
- d0[3] = 255;
1951
- yy = y[(j<<1)+1];
1952
- d0[4] = clamp(yy+bc);
1953
- d0[5] = clamp(yy+gc);
1954
- d0[6] = clamp(yy+rc);
1955
- d0[7] = 255;
1956
- }
1957
- else
1958
- {
1959
- d0[0] = clamp(yy+rc);
1960
- d0[1] = clamp(yy+gc);
1961
- d0[2] = clamp(yy+bc);
1962
- d0[3] = 255;
1963
- yy = y[(j<<1)+1];
1964
- d0[4] = clamp(yy+rc);
1965
- d0[5] = clamp(yy+gc);
1966
- d0[6] = clamp(yy+bc);
1967
- d0[7] = 255;
1968
- }
1969
-
1970
- d0 += 8;
1971
-
1972
- c++;
1973
- }
1974
- y += 64;
1975
- }
1976
-
1977
- y += 64*4 - 64*2;
1978
- c += 64*4 - 8;
1979
- }
1980
- }
1981
-
1982
- // YCbCr H1V2 (1x2:1:1, 4 m_blocks per MCU) to RGB
1983
- void jpeg_decoder::H1V2Convert()
1984
- {
1985
- int row = m_max_mcu_y_size - m_mcu_lines_left;
1986
- uint8 *d0 = m_pScan_line_0;
1987
- uint8 *d1 = m_pScan_line_1;
1988
- uint8 *y;
1989
- uint8 *c;
1990
-
1991
- if (row < 8)
1992
- y = m_pSample_buf + row * 8;
1993
- else
1994
- y = m_pSample_buf + 64*1 + (row & 7) * 8;
1995
-
1996
- c = m_pSample_buf + 64*2 + (row >> 1) * 8;
1997
-
1998
- for (int i = m_max_mcus_per_row; i > 0; i--)
1999
- {
2000
- for (int j = 0; j < 8; j++)
2001
- {
2002
- int cb = c[0+j];
2003
- int cr = c[64+j];
2004
-
2005
- int rc = m_crr[cr];
2006
- int gc = ((m_crg[cr] + m_cbg[cb]) >> 16);
2007
- int bc = m_cbb[cb];
2008
-
2009
- int yy = y[j];
2010
- if (jpg_format == ERGBFormatJPG::BGRA)
2011
- {
2012
- d0[0] = clamp(yy+bc);
2013
- d0[1] = clamp(yy+gc);
2014
- d0[2] = clamp(yy+rc);
2015
- d0[3] = 255;
2016
- yy = y[8+j];
2017
- d1[0] = clamp(yy+bc);
2018
- d1[1] = clamp(yy+gc);
2019
- d1[2] = clamp(yy+rc);
2020
- d1[3] = 255;
2021
- }
2022
- else
2023
- {
2024
- d0[0] = clamp(yy+rc);
2025
- d0[1] = clamp(yy+gc);
2026
- d0[2] = clamp(yy+bc);
2027
- d0[3] = 255;
2028
- yy = y[8+j];
2029
- d1[0] = clamp(yy+rc);
2030
- d1[1] = clamp(yy+gc);
2031
- d1[2] = clamp(yy+bc);
2032
- d1[3] = 255;
2033
- }
2034
-
2035
- d0 += 4;
2036
- d1 += 4;
2037
- }
2038
-
2039
- y += 64*4;
2040
- c += 64*4;
2041
- }
2042
- }
2043
-
2044
- // YCbCr H2V2 (2x2:1:1, 6 m_blocks per MCU) to RGB
2045
- void jpeg_decoder::H2V2Convert()
2046
- {
2047
- int row = m_max_mcu_y_size - m_mcu_lines_left;
2048
- uint8 *d0 = m_pScan_line_0;
2049
- uint8 *d1 = m_pScan_line_1;
2050
- uint8 *y;
2051
- uint8 *c;
2052
-
2053
- if (row < 8)
2054
- y = m_pSample_buf + row * 8;
2055
- else
2056
- y = m_pSample_buf + 64*2 + (row & 7) * 8;
2057
-
2058
- c = m_pSample_buf + 64*4 + (row >> 1) * 8;
2059
-
2060
- for (int i = m_max_mcus_per_row; i > 0; i--)
2061
- {
2062
- for (int l = 0; l < 2; l++)
2063
- {
2064
- for (int j = 0; j < 8; j += 2)
2065
- {
2066
- int cb = c[0];
2067
- int cr = c[64];
2068
-
2069
- int rc = m_crr[cr];
2070
- int gc = ((m_crg[cr] + m_cbg[cb]) >> 16);
2071
- int bc = m_cbb[cb];
2072
-
2073
- int yy = y[j];
2074
- if (jpg_format == ERGBFormatJPG::BGRA)
2075
- {
2076
- d0[0] = clamp(yy+bc);
2077
- d0[1] = clamp(yy+gc);
2078
- d0[2] = clamp(yy+rc);
2079
- d0[3] = 255;
2080
- yy = y[j+1];
2081
- d0[4] = clamp(yy+bc);
2082
- d0[5] = clamp(yy+gc);
2083
- d0[6] = clamp(yy+rc);
2084
- d0[7] = 255;
2085
- yy = y[j+8];
2086
- d1[0] = clamp(yy+bc);
2087
- d1[1] = clamp(yy+gc);
2088
- d1[2] = clamp(yy+rc);
2089
- d1[3] = 255;
2090
- yy = y[j+8+1];
2091
- d1[4] = clamp(yy+bc);
2092
- d1[5] = clamp(yy+gc);
2093
- d1[6] = clamp(yy+rc);
2094
- d1[7] = 255;
2095
- }
2096
- else
2097
- {
2098
- d0[0] = clamp(yy+rc);
2099
- d0[1] = clamp(yy+gc);
2100
- d0[2] = clamp(yy+bc);
2101
- d0[3] = 255;
2102
- yy = y[j+1];
2103
- d0[4] = clamp(yy+rc);
2104
- d0[5] = clamp(yy+gc);
2105
- d0[6] = clamp(yy+bc);
2106
- d0[7] = 255;
2107
- yy = y[j+8];
2108
- d1[0] = clamp(yy+rc);
2109
- d1[1] = clamp(yy+gc);
2110
- d1[2] = clamp(yy+bc);
2111
- d1[3] = 255;
2112
- yy = y[j+8+1];
2113
- d1[4] = clamp(yy+rc);
2114
- d1[5] = clamp(yy+gc);
2115
- d1[6] = clamp(yy+bc);
2116
- d1[7] = 255;
2117
- }
2118
-
2119
- d0 += 8;
2120
- d1 += 8;
2121
-
2122
- c++;
2123
- }
2124
- y += 64;
2125
- }
2126
-
2127
- y += 64*6 - 64*2;
2128
- c += 64*6 - 8;
2129
- }
2130
- }
2131
-
2132
- // Y (1 block per MCU) to 8-bit grayscale
2133
- void jpeg_decoder::gray_convert()
2134
- {
2135
- int row = m_max_mcu_y_size - m_mcu_lines_left;
2136
- uint8 *d = m_pScan_line_0;
2137
- uint8 *s = m_pSample_buf + row * 8;
2138
-
2139
- for (int i = m_max_mcus_per_row; i > 0; i--)
2140
- {
2141
- *(uint *)d = *(uint *)s;
2142
- *(uint *)(&d[4]) = *(uint *)(&s[4]);
2143
-
2144
- s += 64;
2145
- d += 8;
2146
- }
2147
- }
2148
-
2149
- void jpeg_decoder::expanded_convert()
2150
- {
2151
- int row = m_max_mcu_y_size - m_mcu_lines_left;
2152
-
2153
- uint8* Py = m_pSample_buf + (row / 8) * 64 * m_comp_h_samp[0] + (row & 7) * 8;
2154
-
2155
- uint8* d = m_pScan_line_0;
2156
-
2157
- for (int i = m_max_mcus_per_row; i > 0; i--)
2158
- {
2159
- for (int k = 0; k < m_max_mcu_x_size; k += 8)
2160
- {
2161
- const int Y_ofs = k * 8;
2162
- const int Cb_ofs = Y_ofs + 64 * m_expanded_blocks_per_component;
2163
- const int Cr_ofs = Y_ofs + 64 * m_expanded_blocks_per_component * 2;
2164
- for (int j = 0; j < 8; j++)
2165
- {
2166
- int y = Py[Y_ofs + j];
2167
- int cb = Py[Cb_ofs + j];
2168
- int cr = Py[Cr_ofs + j];
2169
-
2170
- if (jpg_format == ERGBFormatJPG::BGRA)
2171
- {
2172
- d[0] = clamp(y + m_cbb[cb]);
2173
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
2174
- d[2] = clamp(y + m_crr[cr]);
2175
- d[3] = 255;
2176
- }
2177
- else
2178
- {
2179
- d[0] = clamp(y + m_crr[cr]);
2180
- d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16));
2181
- d[2] = clamp(y + m_cbb[cb]);
2182
- d[3] = 255;
2183
- }
2184
-
2185
- d += 4;
2186
- }
2187
- }
2188
-
2189
- Py += 64 * m_expanded_blocks_per_mcu;
2190
- }
2191
- }
2192
-
2193
- // Find end of image (EOI) marker, so we can return to the user the exact size of the input stream.
2194
- void jpeg_decoder::find_eoi()
2195
- {
2196
- if (!m_progressive_flag)
2197
- {
2198
- // Attempt to read the EOI marker.
2199
- //get_bits_no_markers(m_bits_left & 7);
2200
-
2201
- // Prime the bit buffer
2202
- m_bits_left = 16;
2203
- get_bits(16);
2204
- get_bits(16);
2205
-
2206
- // The next marker _should_ be EOI
2207
- process_markers();
2208
- }
2209
-
2210
- m_total_bytes_read -= m_in_buf_left;
2211
- }
2212
-
2213
- int jpeg_decoder::decode(const void** pScan_line, uint* pScan_line_len)
2214
- {
2215
- if ((m_error_code) || (!m_ready_flag))
2216
- return JPGD_FAILED;
2217
-
2218
- if (m_total_lines_left == 0)
2219
- return JPGD_DONE;
2220
-
2221
- if (m_mcu_lines_left == 0)
2222
- {
2223
- if (setjmp(m_jmp_state))
2224
- return JPGD_FAILED;
2225
-
2226
- if (m_progressive_flag)
2227
- load_next_row();
2228
- else
2229
- decode_next_row();
2230
-
2231
- // Find the EOI marker if that was the last row.
2232
- if (m_total_lines_left <= m_max_mcu_y_size)
2233
- find_eoi();
2234
-
2235
- m_mcu_lines_left = m_max_mcu_y_size;
2236
- }
2237
-
2238
- if (m_freq_domain_chroma_upsample)
2239
- {
2240
- expanded_convert();
2241
- *pScan_line = m_pScan_line_0;
2242
- }
2243
- else
2244
- {
2245
- switch (m_scan_type)
2246
- {
2247
- case JPGD_YH2V2:
2248
- {
2249
- if ((m_mcu_lines_left & 1) == 0)
2250
- {
2251
- H2V2Convert();
2252
- *pScan_line = m_pScan_line_0;
2253
- }
2254
- else
2255
- *pScan_line = m_pScan_line_1;
2256
-
2257
- break;
2258
- }
2259
- case JPGD_YH2V1:
2260
- {
2261
- H2V1Convert();
2262
- *pScan_line = m_pScan_line_0;
2263
- break;
2264
- }
2265
- case JPGD_YH1V2:
2266
- {
2267
- if ((m_mcu_lines_left & 1) == 0)
2268
- {
2269
- H1V2Convert();
2270
- *pScan_line = m_pScan_line_0;
2271
- }
2272
- else
2273
- *pScan_line = m_pScan_line_1;
2274
-
2275
- break;
2276
- }
2277
- case JPGD_YH1V1:
2278
- {
2279
- H1V1Convert();
2280
- *pScan_line = m_pScan_line_0;
2281
- break;
2282
- }
2283
- case JPGD_GRAYSCALE:
2284
- {
2285
- gray_convert();
2286
- *pScan_line = m_pScan_line_0;
2287
-
2288
- break;
2289
- }
2290
- }
2291
- }
2292
-
2293
- *pScan_line_len = m_real_dest_bytes_per_scan_line;
2294
-
2295
- m_mcu_lines_left--;
2296
- m_total_lines_left--;
2297
-
2298
- return JPGD_SUCCESS;
2299
- }
2300
-
2301
- // Creates the tables needed for efficient Huffman decoding.
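- // Codes up to 8 bits long are expanded directly into the 256-entry look_up/look_up2 arrays
- // (look_up2 also caches the symbol's extra magnitude bits when the whole code fits in 8 bits);
- // longer codes fall back to a small binary tree addressed through negative indices.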
2302
- void jpeg_decoder::make_huff_table(int index, huff_tables *pH)
2303
- {
2304
- int p, i, l, si;
2305
- uint8 huffsize[257];
2306
- uint huffcode[257];
2307
- uint code;
2308
- uint subtree;
2309
- int code_size;
2310
- int lastp;
2311
- int nextfreeentry;
2312
- int currententry;
2313
-
2314
- pH->ac_table = m_huff_ac[index] != 0;
2315
-
2316
- p = 0;
2317
-
2318
- for (l = 1; l <= 16; l++)
2319
- {
2320
- for (i = 1; i <= m_huff_num[index][l]; i++)
2321
- huffsize[p++] = static_cast<uint8>(l);
2322
- }
2323
-
2324
- huffsize[p] = 0;
2325
-
2326
- lastp = p;
2327
-
2328
- code = 0;
2329
- si = huffsize[0];
2330
- p = 0;
2331
-
2332
- while (huffsize[p])
2333
- {
2334
- while (huffsize[p] == si)
2335
- {
2336
- huffcode[p++] = code;
2337
- code++;
2338
- }
2339
-
2340
- code <<= 1;
2341
- si++;
2342
- }
2343
-
2344
- memset(pH->look_up, 0, sizeof(pH->look_up));
2345
- memset(pH->look_up2, 0, sizeof(pH->look_up2));
2346
- memset(pH->tree, 0, sizeof(pH->tree));
2347
- memset(pH->code_size, 0, sizeof(pH->code_size));
2348
-
2349
- nextfreeentry = -1;
2350
-
2351
- p = 0;
2352
-
2353
- while (p < lastp)
2354
- {
2355
- i = m_huff_val[index][p];
2356
- code = huffcode[p];
2357
- code_size = huffsize[p];
2358
-
2359
- pH->code_size[i] = static_cast<uint8>(code_size);
2360
-
2361
- if (code_size <= 8)
2362
- {
2363
- code <<= (8 - code_size);
2364
-
2365
- for (l = 1 << (8 - code_size); l > 0; l--)
2366
- {
2367
- JPGD_ASSERT(i < 256);
2368
-
2369
- pH->look_up[code] = i;
2370
-
2371
- bool has_extrabits = false;
2372
- int extra_bits = 0;
2373
- int num_extra_bits = i & 15;
2374
-
2375
- int bits_to_fetch = code_size;
2376
- if (num_extra_bits)
2377
- {
2378
- int total_codesize = code_size + num_extra_bits;
2379
- if (total_codesize <= 8)
2380
- {
2381
- has_extrabits = true;
2382
- extra_bits = ((1 << num_extra_bits) - 1) & (code >> (8 - total_codesize));
2383
- JPGD_ASSERT(extra_bits <= 0x7FFF);
2384
- bits_to_fetch += num_extra_bits;
2385
- }
2386
- }
2387
-
2388
- if (!has_extrabits)
2389
- pH->look_up2[code] = i | (bits_to_fetch << 8);
2390
- else
2391
- pH->look_up2[code] = i | 0x8000 | (extra_bits << 16) | (bits_to_fetch << 8);
2392
-
2393
- code++;
2394
- }
2395
- }
2396
- else
2397
- {
2398
- subtree = (code >> (code_size - 8)) & 0xFF;
2399
-
2400
- currententry = pH->look_up[subtree];
2401
-
2402
- if (currententry == 0)
2403
- {
2404
- pH->look_up[subtree] = currententry = nextfreeentry;
2405
- pH->look_up2[subtree] = currententry = nextfreeentry;
2406
-
2407
- nextfreeentry -= 2;
2408
- }
2409
-
2410
- code <<= (16 - (code_size - 8));
2411
-
2412
- for (l = code_size; l > 9; l--)
2413
- {
2414
- if ((code & 0x8000) == 0)
2415
- currententry--;
2416
-
2417
- if (pH->tree[-currententry - 1] == 0)
2418
- {
2419
- pH->tree[-currententry - 1] = nextfreeentry;
2420
-
2421
- currententry = nextfreeentry;
2422
-
2423
- nextfreeentry -= 2;
2424
- }
2425
- else
2426
- currententry = pH->tree[-currententry - 1];
2427
-
2428
- code <<= 1;
2429
- }
2430
-
2431
- if ((code & 0x8000) == 0)
2432
- currententry--;
2433
-
2434
- pH->tree[-currententry - 1] = i;
2435
- }
2436
-
2437
- p++;
2438
- }
2439
- }
2440
-
2441
- // Verifies the quantization tables needed for this scan are available.
2442
- void jpeg_decoder::check_quant_tables()
2443
- {
2444
- for (int i = 0; i < m_comps_in_scan; i++)
2445
- if (m_quant[m_comp_quant[m_comp_list[i]]] == NULL)
2446
- stop_decoding(JPGD_UNDEFINED_QUANT_TABLE);
2447
- }
2448
-
2449
- // Verifies that all the Huffman tables needed for this scan are available.
2450
- void jpeg_decoder::check_huff_tables()
2451
- {
2452
- for (int i = 0; i < m_comps_in_scan; i++)
2453
- {
2454
- if ((m_spectral_start == 0) && (m_huff_num[m_comp_dc_tab[m_comp_list[i]]] == NULL))
2455
- stop_decoding(JPGD_UNDEFINED_HUFF_TABLE);
2456
-
2457
- if ((m_spectral_end > 0) && (m_huff_num[m_comp_ac_tab[m_comp_list[i]]] == NULL))
2458
- stop_decoding(JPGD_UNDEFINED_HUFF_TABLE);
2459
- }
2460
-
2461
- for (int i = 0; i < JPGD_MAX_HUFF_TABLES; i++)
2462
- if (m_huff_num[i])
2463
- {
2464
- if (!m_pHuff_tabs[i])
2465
- m_pHuff_tabs[i] = (huff_tables *)alloc(sizeof(huff_tables));
2466
-
2467
- make_huff_table(i, m_pHuff_tabs[i]);
2468
- }
2469
- }
2470
-
2471
- // Determines the component order inside each MCU.
2472
- // Also calculates how many MCUs are in each row, etc.
2473
- void jpeg_decoder::calc_mcu_block_order()
2474
- {
2475
- int component_num, component_id;
2476
- int max_h_samp = 0, max_v_samp = 0;
2477
-
2478
- for (component_id = 0; component_id < m_comps_in_frame; component_id++)
2479
- {
2480
- if (m_comp_h_samp[component_id] > max_h_samp)
2481
- max_h_samp = m_comp_h_samp[component_id];
2482
-
2483
- if (m_comp_v_samp[component_id] > max_v_samp)
2484
- max_v_samp = m_comp_v_samp[component_id];
2485
- }
2486
-
2487
- for (component_id = 0; component_id < m_comps_in_frame; component_id++)
2488
- {
2489
- m_comp_h_blocks[component_id] = ((((m_image_x_size * m_comp_h_samp[component_id]) + (max_h_samp - 1)) / max_h_samp) + 7) / 8;
2490
- m_comp_v_blocks[component_id] = ((((m_image_y_size * m_comp_v_samp[component_id]) + (max_v_samp - 1)) / max_v_samp) + 7) / 8;
2491
- }
2492
-
2493
- if (m_comps_in_scan == 1)
2494
- {
2495
- m_mcus_per_row = m_comp_h_blocks[m_comp_list[0]];
2496
- m_mcus_per_col = m_comp_v_blocks[m_comp_list[0]];
2497
- }
2498
- else
2499
- {
2500
- m_mcus_per_row = (((m_image_x_size + 7) / 8) + (max_h_samp - 1)) / max_h_samp;
2501
- m_mcus_per_col = (((m_image_y_size + 7) / 8) + (max_v_samp - 1)) / max_v_samp;
2502
- }
2503
-
2504
- if (m_comps_in_scan == 1)
2505
- {
2506
- m_mcu_org[0] = m_comp_list[0];
2507
-
2508
- m_blocks_per_mcu = 1;
2509
- }
2510
- else
2511
- {
2512
- m_blocks_per_mcu = 0;
2513
-
2514
- for (component_num = 0; component_num < m_comps_in_scan; component_num++)
2515
- {
2516
- int num_blocks;
2517
-
2518
- component_id = m_comp_list[component_num];
2519
-
2520
- num_blocks = m_comp_h_samp[component_id] * m_comp_v_samp[component_id];
2521
-
2522
- while (num_blocks--)
2523
- m_mcu_org[m_blocks_per_mcu++] = component_id;
2524
- }
2525
- }
2526
- }
2527
-
2528
- // Starts a new scan.
2529
- int jpeg_decoder::init_scan()
2530
- {
2531
- if (!locate_sos_marker())
2532
- return JPGD_FALSE;
2533
-
2534
- calc_mcu_block_order();
2535
-
2536
- check_huff_tables();
2537
-
2538
- check_quant_tables();
2539
-
2540
- memset(m_last_dc_val, 0, m_comps_in_frame * sizeof(uint));
2541
-
2542
- m_eob_run = 0;
2543
-
2544
- if (m_restart_interval)
2545
- {
2546
- m_restarts_left = m_restart_interval;
2547
- m_next_restart_num = 0;
2548
- }
2549
-
2550
- fix_in_buffer();
2551
-
2552
- return JPGD_TRUE;
2553
- }
2554
-
2555
- // Starts a frame. Determines whether the number of components and sampling factors
2556
- // are supported.
2557
- void jpeg_decoder::init_frame()
2558
- {
2559
- int i;
2560
-
2561
- if (m_comps_in_frame == 1)
2562
- {
2563
- if ((m_comp_h_samp[0] != 1) || (m_comp_v_samp[0] != 1))
2564
- stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS);
2565
-
2566
- m_scan_type = JPGD_GRAYSCALE;
2567
- m_max_blocks_per_mcu = 1;
2568
- m_max_mcu_x_size = 8;
2569
- m_max_mcu_y_size = 8;
2570
- }
2571
- else if (m_comps_in_frame == 3)
2572
- {
2573
- if ( ((m_comp_h_samp[1] != 1) || (m_comp_v_samp[1] != 1)) ||
2574
- ((m_comp_h_samp[2] != 1) || (m_comp_v_samp[2] != 1)) )
2575
- stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS);
2576
-
2577
- if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 1))
2578
- {
2579
- m_scan_type = JPGD_YH1V1;
2580
-
2581
- m_max_blocks_per_mcu = 3;
2582
- m_max_mcu_x_size = 8;
2583
- m_max_mcu_y_size = 8;
2584
- }
2585
- else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 1))
2586
- {
2587
- m_scan_type = JPGD_YH2V1;
2588
- m_max_blocks_per_mcu = 4;
2589
- m_max_mcu_x_size = 16;
2590
- m_max_mcu_y_size = 8;
2591
- }
2592
- else if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 2))
2593
- {
2594
- m_scan_type = JPGD_YH1V2;
2595
- m_max_blocks_per_mcu = 4;
2596
- m_max_mcu_x_size = 8;
2597
- m_max_mcu_y_size = 16;
2598
- }
2599
- else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 2))
2600
- {
2601
- m_scan_type = JPGD_YH2V2;
2602
- m_max_blocks_per_mcu = 6;
2603
- m_max_mcu_x_size = 16;
2604
- m_max_mcu_y_size = 16;
2605
- }
2606
- else
2607
- stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS);
2608
- }
2609
- else
2610
- stop_decoding(JPGD_UNSUPPORTED_COLORSPACE);
2611
-
2612
- m_max_mcus_per_row = (m_image_x_size + (m_max_mcu_x_size - 1)) / m_max_mcu_x_size;
2613
- m_max_mcus_per_col = (m_image_y_size + (m_max_mcu_y_size - 1)) / m_max_mcu_y_size;
2614
-
2615
- // These values are for the *destination* pixels: after conversion.
2616
- if (m_scan_type == JPGD_GRAYSCALE)
2617
- m_dest_bytes_per_pixel = 1;
2618
- else
2619
- m_dest_bytes_per_pixel = 4;
2620
-
2621
- m_dest_bytes_per_scan_line = ((m_image_x_size + 15) & 0xFFF0) * m_dest_bytes_per_pixel;
2622
-
2623
- m_real_dest_bytes_per_scan_line = (m_image_x_size * m_dest_bytes_per_pixel);
2624
-
2625
- // Initialize two scan line buffers.
2626
- m_pScan_line_0 = (uint8 *)alloc(m_dest_bytes_per_scan_line, true);
2627
- if ((m_scan_type == JPGD_YH1V2) || (m_scan_type == JPGD_YH2V2))
2628
- m_pScan_line_1 = (uint8 *)alloc(m_dest_bytes_per_scan_line, true);
2629
-
2630
- m_max_blocks_per_row = m_max_mcus_per_row * m_max_blocks_per_mcu;
2631
-
2632
- // Should never happen
2633
- if (m_max_blocks_per_row > JPGD_MAX_BLOCKS_PER_ROW)
2634
- stop_decoding(JPGD_ASSERTION_ERROR);
2635
-
2636
- // Allocate the coefficient buffer, enough for one MCU
2637
- m_pMCU_coefficients = (jpgd_block_t*)alloc(m_max_blocks_per_mcu * 64 * sizeof(jpgd_block_t));
2638
-
2639
- for (i = 0; i < m_max_blocks_per_mcu; i++)
2640
- m_mcu_block_max_zag[i] = 64;
2641
-
2642
- m_expanded_blocks_per_component = m_comp_h_samp[0] * m_comp_v_samp[0];
2643
- m_expanded_blocks_per_mcu = m_expanded_blocks_per_component * m_comps_in_frame;
2644
- m_expanded_blocks_per_row = m_max_mcus_per_row * m_expanded_blocks_per_mcu;
2645
- // Freq. domain chroma upsampling is only supported for H2V2 subsampling factor.
2646
- // BEGIN EPIC MOD
2647
- #if JPGD_SUPPORT_FREQ_DOMAIN_UPSAMPLING
2648
- m_freq_domain_chroma_upsample = (m_expanded_blocks_per_mcu == 4*3);
2649
- #else
2650
- m_freq_domain_chroma_upsample = 0;
2651
- #endif
2652
- // END EPIC MOD
2653
-
2654
- if (m_freq_domain_chroma_upsample)
2655
- m_pSample_buf = (uint8 *)alloc(m_expanded_blocks_per_row * 64);
2656
- else
2657
- m_pSample_buf = (uint8 *)alloc(m_max_blocks_per_row * 64);
2658
-
2659
- m_total_lines_left = m_image_y_size;
2660
-
2661
- m_mcu_lines_left = 0;
2662
-
2663
- create_look_ups();
2664
- }
2665
-
2666
- // The coeff_buf series of methods originally stored the coefficients
2667
- // into a "virtual" file which was located in EMS, XMS, or a disk file. A cache
2668
- // was used to make this process more efficient. Now, we can store the entire
2669
- // thing in RAM.
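- // coeff_buf_getp() below just indexes that flat buffer as a block_num_x by block_num_y grid
- // of block_size-byte cells.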
2670
- jpeg_decoder::coeff_buf* jpeg_decoder::coeff_buf_open(int block_num_x, int block_num_y, int block_len_x, int block_len_y)
2671
- {
2672
- coeff_buf* cb = (coeff_buf*)alloc(sizeof(coeff_buf));
2673
-
2674
- cb->block_num_x = block_num_x;
2675
- cb->block_num_y = block_num_y;
2676
- cb->block_len_x = block_len_x;
2677
- cb->block_len_y = block_len_y;
2678
- cb->block_size = (block_len_x * block_len_y) * sizeof(jpgd_block_t);
2679
- cb->pData = (uint8 *)alloc(cb->block_size * block_num_x * block_num_y, true);
2680
- return cb;
2681
- }
2682
-
2683
- inline jpgd_block_t *jpeg_decoder::coeff_buf_getp(coeff_buf *cb, int block_x, int block_y)
2684
- {
2685
- JPGD_ASSERT((block_x < cb->block_num_x) && (block_y < cb->block_num_y));
2686
- return (jpgd_block_t *)(cb->pData + block_x * cb->block_size + block_y * (cb->block_size * cb->block_num_x));
2687
- }
2688
-
2689
- // The following methods decode the various types of m_blocks encountered
2690
- // in progressively encoded images.
2691
- void jpeg_decoder::decode_block_dc_first(jpeg_decoder *pD, int component_id, int block_x, int block_y)
2692
- {
2693
- int s, r;
2694
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_dc_coeffs[component_id], block_x, block_y);
2695
-
2696
- if ((s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_dc_tab[component_id]])) != 0)
2697
- {
2698
- r = pD->get_bits_no_markers(s);
2699
- s = HUFF_EXTEND(r, s);
2700
- }
2701
-
2702
- pD->m_last_dc_val[component_id] = (s += pD->m_last_dc_val[component_id]);
2703
-
2704
- p[0] = static_cast<jpgd_block_t>(s << pD->m_successive_low);
2705
- }
2706
-
2707
- void jpeg_decoder::decode_block_dc_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y)
2708
- {
2709
- if (pD->get_bits_no_markers(1))
2710
- {
2711
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_dc_coeffs[component_id], block_x, block_y);
2712
-
2713
- p[0] |= (1 << pD->m_successive_low);
2714
- }
2715
- }
2716
-
2717
- void jpeg_decoder::decode_block_ac_first(jpeg_decoder *pD, int component_id, int block_x, int block_y)
2718
- {
2719
- int k, s, r;
2720
-
2721
- if (pD->m_eob_run)
2722
- {
2723
- pD->m_eob_run--;
2724
- return;
2725
- }
2726
-
2727
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_ac_coeffs[component_id], block_x, block_y);
2728
-
2729
- for (k = pD->m_spectral_start; k <= pD->m_spectral_end; k++)
2730
- {
2731
- s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_ac_tab[component_id]]);
2732
-
2733
- r = s >> 4;
2734
- s &= 15;
2735
-
2736
- if (s)
2737
- {
2738
- if ((k += r) > 63)
2739
- pD->stop_decoding(JPGD_DECODE_ERROR);
2740
-
2741
- r = pD->get_bits_no_markers(s);
2742
- s = HUFF_EXTEND(r, s);
2743
-
2744
- p[g_ZAG[k]] = static_cast<jpgd_block_t>(s << pD->m_successive_low);
2745
- }
2746
- else
2747
- {
2748
- if (r == 15)
2749
- {
2750
- if ((k += 15) > 63)
2751
- pD->stop_decoding(JPGD_DECODE_ERROR);
2752
- }
2753
- else
2754
- {
2755
- pD->m_eob_run = 1 << r;
2756
-
2757
- if (r)
2758
- pD->m_eob_run += pD->get_bits_no_markers(r);
2759
-
2760
- pD->m_eob_run--;
2761
-
2762
- break;
2763
- }
2764
- }
2765
- }
2766
- }
2767
-
2768
- void jpeg_decoder::decode_block_ac_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y)
2769
- {
2770
- int s, k, r;
2771
- int p1 = 1 << pD->m_successive_low;
2772
- int m1 = (-1) << pD->m_successive_low;
2773
- jpgd_block_t *p = pD->coeff_buf_getp(pD->m_ac_coeffs[component_id], block_x, block_y);
2774
-
2775
- k = pD->m_spectral_start;
2776
-
2777
- if (pD->m_eob_run == 0)
2778
- {
2779
- for ( ; k <= pD->m_spectral_end; k++)
2780
- {
2781
- s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_ac_tab[component_id]]);
2782
-
2783
- r = s >> 4;
2784
- s &= 15;
2785
-
2786
- if (s)
2787
- {
2788
- if (s != 1)
2789
- pD->stop_decoding(JPGD_DECODE_ERROR);
2790
-
2791
- if (pD->get_bits_no_markers(1))
2792
- s = p1;
2793
- else
2794
- s = m1;
2795
- }
2796
- else
2797
- {
2798
- if (r != 15)
2799
- {
2800
- pD->m_eob_run = 1 << r;
2801
-
2802
- if (r)
2803
- pD->m_eob_run += pD->get_bits_no_markers(r);
2804
-
2805
- break;
2806
- }
2807
- }
2808
-
2809
- do
2810
- {
2811
- // BEGIN EPIC MOD
2812
- JPGD_ASSERT(k < 64);
2813
- // END EPIC MOD
2814
-
2815
- jpgd_block_t *this_coef = p + g_ZAG[k];
2816
-
2817
- if (*this_coef != 0)
2818
- {
2819
- if (pD->get_bits_no_markers(1))
2820
- {
2821
- if ((*this_coef & p1) == 0)
2822
- {
2823
- if (*this_coef >= 0)
2824
- *this_coef = static_cast<jpgd_block_t>(*this_coef + p1);
2825
- else
2826
- *this_coef = static_cast<jpgd_block_t>(*this_coef + m1);
2827
- }
2828
- }
2829
- }
2830
- else
2831
- {
2832
- if (--r < 0)
2833
- break;
2834
- }
2835
-
2836
- k++;
2837
-
2838
- } while (k <= pD->m_spectral_end);
2839
-
2840
- if ((s) && (k < 64))
2841
- {
2842
- p[g_ZAG[k]] = static_cast<jpgd_block_t>(s);
2843
- }
2844
- }
2845
- }
2846
-
2847
- if (pD->m_eob_run > 0)
2848
- {
2849
- for ( ; k <= pD->m_spectral_end; k++)
2850
- {
2851
- // BEGIN EPIC MOD
2852
- JPGD_ASSERT(k < 64);
2853
- // END EPIC MOD
2854
-
2855
- jpgd_block_t *this_coef = p + g_ZAG[k];
2856
-
2857
- if (*this_coef != 0)
2858
- {
2859
- if (pD->get_bits_no_markers(1))
2860
- {
2861
- if ((*this_coef & p1) == 0)
2862
- {
2863
- if (*this_coef >= 0)
2864
- *this_coef = static_cast<jpgd_block_t>(*this_coef + p1);
2865
- else
2866
- *this_coef = static_cast<jpgd_block_t>(*this_coef + m1);
2867
- }
2868
- }
2869
- }
2870
- }
2871
-
2872
- pD->m_eob_run--;
2873
- }
2874
- }
2875
-
2876
- // Decode a scan in a progressively encoded image.
2877
- void jpeg_decoder::decode_scan(pDecode_block_func decode_block_func)
2878
- {
2879
- int mcu_row, mcu_col, mcu_block;
2880
- int block_x_mcu[JPGD_MAX_COMPONENTS], m_block_y_mcu[JPGD_MAX_COMPONENTS];
2881
-
2882
- memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu));
2883
-
2884
- for (mcu_col = 0; mcu_col < m_mcus_per_col; mcu_col++)
2885
- {
2886
- int component_num, component_id;
2887
-
2888
- memset(block_x_mcu, 0, sizeof(block_x_mcu));
2889
-
2890
- for (mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++)
2891
- {
2892
- int block_x_mcu_ofs = 0, block_y_mcu_ofs = 0;
2893
-
2894
- if ((m_restart_interval) && (m_restarts_left == 0))
2895
- process_restart();
2896
-
2897
- for (mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++)
2898
- {
2899
- component_id = m_mcu_org[mcu_block];
2900
-
2901
- decode_block_func(this, component_id, block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs);
2902
-
2903
- if (m_comps_in_scan == 1)
2904
- block_x_mcu[component_id]++;
2905
- else
2906
- {
2907
- if (++block_x_mcu_ofs == m_comp_h_samp[component_id])
2908
- {
2909
- block_x_mcu_ofs = 0;
2910
-
2911
- if (++block_y_mcu_ofs == m_comp_v_samp[component_id])
2912
- {
2913
- block_y_mcu_ofs = 0;
2914
- block_x_mcu[component_id] += m_comp_h_samp[component_id];
2915
- }
2916
- }
2917
- }
2918
- }
2919
-
2920
- m_restarts_left--;
2921
- }
2922
-
2923
- if (m_comps_in_scan == 1)
2924
- m_block_y_mcu[m_comp_list[0]]++;
2925
- else
2926
- {
2927
- for (component_num = 0; component_num < m_comps_in_scan; component_num++)
2928
- {
2929
- component_id = m_comp_list[component_num];
2930
- m_block_y_mcu[component_id] += m_comp_v_samp[component_id];
2931
- }
2932
- }
2933
- }
2934
- }
2935
-
2936
- // Decode a progressively encoded image.
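- // Each progressive scan carries either a DC or an AC band (spectral selection,
- // m_spectral_start/m_spectral_end) at a given bit precision (successive approximation,
- // m_successive_high/m_successive_low), which is why four decode_block_* variants exist.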
2937
- void jpeg_decoder::init_progressive()
2938
- {
2939
- int i;
2940
-
2941
- if (m_comps_in_frame == 4)
2942
- stop_decoding(JPGD_UNSUPPORTED_COLORSPACE);
2943
-
2944
- // Allocate the coefficient buffers.
2945
- for (i = 0; i < m_comps_in_frame; i++)
2946
- {
2947
- m_dc_coeffs[i] = coeff_buf_open(m_max_mcus_per_row * m_comp_h_samp[i], m_max_mcus_per_col * m_comp_v_samp[i], 1, 1);
2948
- m_ac_coeffs[i] = coeff_buf_open(m_max_mcus_per_row * m_comp_h_samp[i], m_max_mcus_per_col * m_comp_v_samp[i], 8, 8);
2949
- }
2950
-
2951
- for ( ; ; )
2952
- {
2953
- int dc_only_scan, refinement_scan;
2954
- pDecode_block_func decode_block_func;
2955
-
2956
- if (!init_scan())
2957
- break;
2958
-
2959
- dc_only_scan = (m_spectral_start == 0);
2960
- refinement_scan = (m_successive_high != 0);
2961
-
2962
- if ((m_spectral_start > m_spectral_end) || (m_spectral_end > 63))
2963
- stop_decoding(JPGD_BAD_SOS_SPECTRAL);
2964
-
2965
- if (dc_only_scan)
2966
- {
2967
- if (m_spectral_end)
2968
- stop_decoding(JPGD_BAD_SOS_SPECTRAL);
2969
- }
2970
- else if (m_comps_in_scan != 1) /* AC scans can only contain one component */
2971
- stop_decoding(JPGD_BAD_SOS_SPECTRAL);
2972
-
2973
- if ((refinement_scan) && (m_successive_low != m_successive_high - 1))
2974
- stop_decoding(JPGD_BAD_SOS_SUCCESSIVE);
2975
-
2976
- if (dc_only_scan)
2977
- {
2978
- if (refinement_scan)
2979
- decode_block_func = decode_block_dc_refine;
2980
- else
2981
- decode_block_func = decode_block_dc_first;
2982
- }
2983
- else
2984
- {
2985
- if (refinement_scan)
2986
- decode_block_func = decode_block_ac_refine;
2987
- else
2988
- decode_block_func = decode_block_ac_first;
2989
- }
2990
-
2991
- decode_scan(decode_block_func);
2992
-
2993
- m_bits_left = 16;
2994
- get_bits(16);
2995
- get_bits(16);
2996
- }
2997
-
2998
- m_comps_in_scan = m_comps_in_frame;
2999
-
3000
- for (i = 0; i < m_comps_in_frame; i++)
3001
- m_comp_list[i] = i;
3002
-
3003
- calc_mcu_block_order();
3004
- }
3005
-
3006
- void jpeg_decoder::init_sequential()
3007
- {
3008
- if (!init_scan())
3009
- stop_decoding(JPGD_UNEXPECTED_MARKER);
3010
- }
3011
-
3012
- void jpeg_decoder::decode_start()
3013
- {
3014
- init_frame();
3015
-
3016
- if (m_progressive_flag)
3017
- init_progressive();
3018
- else
3019
- init_sequential();
3020
- }
3021
-
3022
- void jpeg_decoder::decode_init(jpeg_decoder_stream *pStream)
3023
- {
3024
- init(pStream);
3025
- locate_sof_marker();
3026
- }
3027
-
3028
- jpeg_decoder::jpeg_decoder(jpeg_decoder_stream *pStream)
3029
- {
3030
- if (setjmp(m_jmp_state))
3031
- return;
3032
- decode_init(pStream);
3033
- }
3034
-
3035
- int jpeg_decoder::begin_decoding()
3036
- {
3037
- if (m_ready_flag)
3038
- return JPGD_SUCCESS;
3039
-
3040
- if (m_error_code)
3041
- return JPGD_FAILED;
3042
-
3043
- if (setjmp(m_jmp_state))
3044
- return JPGD_FAILED;
3045
-
3046
- decode_start();
3047
-
3048
- m_ready_flag = true;
3049
-
3050
- return JPGD_SUCCESS;
3051
- }
3052
-
3053
- jpeg_decoder::~jpeg_decoder()
3054
- {
3055
- free_all_blocks();
3056
- }
3057
-
3058
- jpeg_decoder_file_stream::jpeg_decoder_file_stream()
3059
- {
3060
- m_pFile = NULL;
3061
- m_eof_flag = false;
3062
- m_error_flag = false;
3063
- }
3064
-
3065
- void jpeg_decoder_file_stream::close()
3066
- {
3067
- if (m_pFile)
3068
- {
3069
- fclose(m_pFile);
3070
- m_pFile = NULL;
3071
- }
3072
-
3073
- m_eof_flag = false;
3074
- m_error_flag = false;
3075
- }
3076
-
3077
- jpeg_decoder_file_stream::~jpeg_decoder_file_stream()
3078
- {
3079
- close();
3080
- }
3081
-
3082
- bool jpeg_decoder_file_stream::open(const char *Pfilename)
3083
- {
3084
- close();
3085
-
3086
- m_eof_flag = false;
3087
- m_error_flag = false;
3088
-
3089
- #if defined(_MSC_VER)
3090
- m_pFile = NULL;
3091
- fopen_s(&m_pFile, Pfilename, "rb");
3092
- #else
3093
- m_pFile = fopen(Pfilename, "rb");
3094
- #endif
3095
- return m_pFile != NULL;
3096
- }
3097
-
3098
- int jpeg_decoder_file_stream::read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag)
3099
- {
3100
- if (!m_pFile)
3101
- return -1;
3102
-
3103
- if (m_eof_flag)
3104
- {
3105
- *pEOF_flag = true;
3106
- return 0;
3107
- }
3108
-
3109
- if (m_error_flag)
3110
- return -1;
3111
-
3112
- int bytes_read = static_cast<int>(fread(pBuf, 1, max_bytes_to_read, m_pFile));
3113
- if (bytes_read < max_bytes_to_read)
3114
- {
3115
- if (ferror(m_pFile))
3116
- {
3117
- m_error_flag = true;
3118
- return -1;
3119
- }
3120
-
3121
- m_eof_flag = true;
3122
- *pEOF_flag = true;
3123
- }
3124
-
3125
- return bytes_read;
3126
- }
3127
-
3128
- bool jpeg_decoder_mem_stream::open(const uint8 *pSrc_data, uint size)
3129
- {
3130
- close();
3131
- m_pSrc_data = pSrc_data;
3132
- m_ofs = 0;
3133
- m_size = size;
3134
- return true;
3135
- }
3136
-
3137
- int jpeg_decoder_mem_stream::read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag)
3138
- {
3139
- *pEOF_flag = false;
3140
-
3141
- if (!m_pSrc_data)
3142
- return -1;
3143
-
3144
- uint bytes_remaining = m_size - m_ofs;
3145
- if ((uint)max_bytes_to_read > bytes_remaining)
3146
- {
3147
- max_bytes_to_read = bytes_remaining;
3148
- *pEOF_flag = true;
3149
- }
3150
-
3151
- memcpy(pBuf, m_pSrc_data + m_ofs, max_bytes_to_read);
3152
- m_ofs += max_bytes_to_read;
3153
-
3154
- return max_bytes_to_read;
3155
- }
3156
-
3157
- unsigned char *decompress_jpeg_image_from_stream(jpeg_decoder_stream *pStream, int *width, int *height, int *actual_comps, int req_comps)
3158
- {
3159
- if (!actual_comps)
3160
- return NULL;
3161
- *actual_comps = 0;
3162
-
3163
- if ((!pStream) || (!width) || (!height) || (!req_comps))
3164
- return NULL;
3165
-
3166
- if ((req_comps != 1) && (req_comps != 3) && (req_comps != 4))
3167
- return NULL;
3168
-
3169
- jpeg_decoder decoder(pStream);
3170
- if (decoder.get_error_code() != JPGD_SUCCESS)
3171
- return NULL;
3172
-
3173
- const int image_width = decoder.get_width(), image_height = decoder.get_height();
3174
- *width = image_width;
3175
- *height = image_height;
3176
- *actual_comps = decoder.get_num_components();
3177
-
3178
- if (decoder.begin_decoding() != JPGD_SUCCESS)
3179
- return NULL;
3180
-
3181
- const int dst_bpl = image_width * req_comps;
3182
-
3183
- uint8 *pImage_data = (uint8*)jpgd_malloc(dst_bpl * image_height);
3184
- if (!pImage_data)
3185
- return NULL;
3186
-
3187
- for (int y = 0; y < image_height; y++)
3188
- {
3189
- const uint8* pScan_line = 0;
3190
- uint scan_line_len;
3191
- if (decoder.decode((const void**)&pScan_line, &scan_line_len) != JPGD_SUCCESS)
3192
- {
3193
- jpgd_free(pImage_data);
3194
- return NULL;
3195
- }
3196
-
3197
- uint8 *pDst = pImage_data + y * dst_bpl;
3198
-
3199
- if (((req_comps == 4) && (decoder.get_num_components() == 3)) ||
3200
- ((req_comps == 1) && (decoder.get_num_components() == 1)))
3201
- {
3202
- memcpy(pDst, pScan_line, dst_bpl);
3203
- }
3204
- else if (decoder.get_num_components() == 1)
3205
- {
3206
- if (req_comps == 3)
3207
- {
3208
- for (int x = 0; x < image_width; x++)
3209
- {
3210
- uint8 luma = pScan_line[x];
3211
- pDst[0] = luma;
3212
- pDst[1] = luma;
3213
- pDst[2] = luma;
3214
- pDst += 3;
3215
- }
3216
- }
3217
- else
3218
- {
3219
- for (int x = 0; x < image_width; x++)
3220
- {
3221
- uint8 luma = pScan_line[x];
3222
- pDst[0] = luma;
3223
- pDst[1] = luma;
3224
- pDst[2] = luma;
3225
- pDst[3] = 255;
3226
- pDst += 4;
3227
- }
3228
- }
3229
- }
3230
- else if (decoder.get_num_components() == 3)
3231
- {
3232
- if (req_comps == 1)
3233
- {
3234
- const int YR = 19595, YG = 38470, YB = 7471;
3235
- for (int x = 0; x < image_width; x++)
3236
- {
3237
- int r = pScan_line[x*4+0];
3238
- int g = pScan_line[x*4+1];
3239
- int b = pScan_line[x*4+2];
3240
- *pDst++ = static_cast<uint8>((r * YR + g * YG + b * YB + 32768) >> 16);
3241
- }
3242
- }
3243
- else
3244
- {
3245
- for (int x = 0; x < image_width; x++)
3246
- {
3247
- pDst[0] = pScan_line[x*4+0];
3248
- pDst[1] = pScan_line[x*4+1];
3249
- pDst[2] = pScan_line[x*4+2];
3250
- pDst += 3;
3251
- }
3252
- }
3253
- }
3254
- }
3255
-
3256
- return pImage_data;
3257
- }
3258
-
3259
- // BEGIN EPIC MOD
3260
- unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps, int format)
3261
- {
3262
- jpg_format = (ERGBFormatJPG)format;
3263
- // END EPIC MOD
3264
- jpgd::jpeg_decoder_mem_stream mem_stream(pSrc_data, src_data_size);
3265
- return decompress_jpeg_image_from_stream(&mem_stream, width, height, actual_comps, req_comps);
3266
- }
3267
-
3268
- unsigned char *decompress_jpeg_image_from_file(const char *pSrc_filename, int *width, int *height, int *actual_comps, int req_comps)
3269
- {
3270
- jpgd::jpeg_decoder_file_stream file_stream;
3271
- if (!file_stream.open(pSrc_filename))
3272
- return NULL;
3273
- return decompress_jpeg_image_from_stream(&file_stream, width, height, actual_comps, req_comps);
3274
- }
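- // Minimal usage sketch of the helper above (the file name is illustrative, not part of the code):
- //   int w = 0, h = 0, comps = 0;
- //   unsigned char* pPixels = decompress_jpeg_image_from_file("photo.jpg", &w, &h, &comps, 4);
- //   if (pPixels) { /* w x h pixels, 4 bytes each */ jpgd_free(pPixels); }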
3275
-
3276
- } // namespace jpgd

spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_instruction_pix2pix.py DELETED
@@ -1,180 +0,0 @@
1
- # coding=utf-8
2
- # Copyright 2023 Harutatsu Akiyama and HuggingFace Inc.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
-
16
- import random
17
- import unittest
18
-
19
- import numpy as np
20
- import torch
21
- from transformers import CLIPTextConfig, CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
22
-
23
- from diffusers import (
24
- AutoencoderKL,
25
- EulerDiscreteScheduler,
26
- UNet2DConditionModel,
27
- )
28
- from diffusers.image_processor import VaeImageProcessor
29
- from diffusers.pipelines.stable_diffusion_xl.pipeline_stable_diffusion_xl_instruct_pix2pix import (
30
- StableDiffusionXLInstructPix2PixPipeline,
31
- )
32
- from diffusers.utils import floats_tensor, torch_device
33
- from diffusers.utils.testing_utils import enable_full_determinism
34
-
35
- from ..pipeline_params import (
36
- IMAGE_TO_IMAGE_IMAGE_PARAMS,
37
- TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS,
38
- TEXT_GUIDED_IMAGE_VARIATION_PARAMS,
39
- )
40
- from ..test_pipelines_common import PipelineKarrasSchedulerTesterMixin, PipelineLatentTesterMixin, PipelineTesterMixin
41
-
42
-
43
- enable_full_determinism()
44
-
45
-
46
- class StableDiffusionXLInstructPix2PixPipelineFastTests(
47
- PipelineLatentTesterMixin, PipelineKarrasSchedulerTesterMixin, PipelineTesterMixin, unittest.TestCase
48
- ):
49
- pipeline_class = StableDiffusionXLInstructPix2PixPipeline
50
- params = TEXT_GUIDED_IMAGE_VARIATION_PARAMS - {"height", "width", "cross_attention_kwargs"}
51
- batch_params = TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS
52
- image_params = IMAGE_TO_IMAGE_IMAGE_PARAMS
53
- image_latents_params = IMAGE_TO_IMAGE_IMAGE_PARAMS
54
-
55
- def get_dummy_components(self):
56
- torch.manual_seed(0)
57
- unet = UNet2DConditionModel(
58
- block_out_channels=(32, 64),
59
- layers_per_block=2,
60
- sample_size=32,
61
- in_channels=8,
62
- out_channels=4,
63
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
64
- up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
65
- # SD2-specific config below
66
- attention_head_dim=(2, 4),
67
- use_linear_projection=True,
68
- addition_embed_type="text_time",
69
- addition_time_embed_dim=8,
70
- transformer_layers_per_block=(1, 2),
71
- projection_class_embeddings_input_dim=72, # 5 * 8 + 32
72
- cross_attention_dim=64,
73
- )
74
-
75
- scheduler = EulerDiscreteScheduler(
76
- beta_start=0.00085,
77
- beta_end=0.012,
78
- steps_offset=1,
79
- beta_schedule="scaled_linear",
80
- timestep_spacing="leading",
81
- )
82
- torch.manual_seed(0)
83
- vae = AutoencoderKL(
84
- block_out_channels=[32, 64],
85
- in_channels=3,
86
- out_channels=3,
87
- down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
88
- up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
89
- latent_channels=4,
90
- sample_size=128,
91
- )
92
- torch.manual_seed(0)
93
- text_encoder_config = CLIPTextConfig(
94
- bos_token_id=0,
95
- eos_token_id=2,
96
- hidden_size=32,
97
- intermediate_size=37,
98
- layer_norm_eps=1e-05,
99
- num_attention_heads=4,
100
- num_hidden_layers=5,
101
- pad_token_id=1,
102
- vocab_size=1000,
103
- # SD2-specific config below
104
- hidden_act="gelu",
105
- projection_dim=32,
106
- )
107
- text_encoder = CLIPTextModel(text_encoder_config)
108
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
109
-
110
- text_encoder_2 = CLIPTextModelWithProjection(text_encoder_config)
111
- tokenizer_2 = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
112
-
113
- components = {
114
- "unet": unet,
115
- "scheduler": scheduler,
116
- "vae": vae,
117
- "text_encoder": text_encoder,
118
- "tokenizer": tokenizer,
119
- "text_encoder_2": text_encoder_2,
120
- "tokenizer_2": tokenizer_2,
121
- "requires_aesthetics_score": True,
122
- }
123
- return components
124
-
125
- def get_dummy_inputs(self, device, seed=0):
126
- image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device)
127
- image = image / 2 + 0.5
128
- if str(device).startswith("mps"):
129
- generator = torch.manual_seed(seed)
130
- else:
131
- generator = torch.Generator(device=device).manual_seed(seed)
132
- inputs = {
133
- "prompt": "A painting of a squirrel eating a burger",
134
- "image": image,
135
- "generator": generator,
136
- "num_inference_steps": 2,
137
- "guidance_scale": 6.0,
138
- "image_guidance_scale": 1,
139
- "output_type": "numpy",
140
- }
141
- return inputs
142
-
143
- def test_components_function(self):
144
- init_components = self.get_dummy_components()
145
- init_components.pop("requires_aesthetics_score")
146
- pipe = self.pipeline_class(**init_components)
147
-
148
- self.assertTrue(hasattr(pipe, "components"))
149
- self.assertTrue(set(pipe.components.keys()) == set(init_components.keys()))
150
-
151
- def test_inference_batch_single_identical(self):
152
- super().test_inference_batch_single_identical(expected_max_diff=3e-3)
153
-
154
- def test_attention_slicing_forward_pass(self):
155
- super().test_attention_slicing_forward_pass(expected_max_diff=2e-3)
156
-
157
- # Overwrite the default test_latents_input because pix2pix encodes the image differently
158
- def test_latents_input(self):
159
- components = self.get_dummy_components()
160
- pipe = StableDiffusionXLInstructPix2PixPipeline(**components)
161
- pipe.image_processor = VaeImageProcessor(do_resize=False, do_normalize=False)
162
- pipe = pipe.to(torch_device)
163
- pipe.set_progress_bar_config(disable=None)
164
-
165
- out = pipe(**self.get_dummy_inputs_by_type(torch_device, input_image_type="pt"))[0]
166
-
167
- vae = components["vae"]
168
- inputs = self.get_dummy_inputs_by_type(torch_device, input_image_type="pt")
169
-
170
- for image_param in self.image_latents_params:
171
- if image_param in inputs.keys():
172
- inputs[image_param] = vae.encode(inputs[image_param]).latent_dist.mode()
173
-
174
- out_latents_inputs = pipe(**inputs)[0]
175
-
176
- max_diff = np.abs(out - out_latents_inputs).max()
177
- self.assertLess(max_diff, 1e-4, "passing latents as image input generates a different result from passing the image")
178
-
179
- def test_cfg(self):
180
- pass
spaces/Andy1621/uniformer_image_detection/configs/hrnet/faster_rcnn_hrnetv2p_w18_2x_coco.py DELETED
@@ -1,5 +0,0 @@
1
- _base_ = './faster_rcnn_hrnetv2p_w18_1x_coco.py'
2
-
3
- # learning policy
4
- lr_config = dict(step=[16, 22])
5
- runner = dict(type='EpochBasedRunner', max_epochs=24)
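For context, this deleted config only overrides the training schedule of its _base_ file; the model, datasets and optimizer are inherited unchanged. A minimal sketch of inspecting the merged result with mmcv — assuming mmcv is installed and the command is run from an mmdetection checkout that still contains the base config:

from mmcv import Config

# Config.fromfile recursively merges the _base_ chain
# (faster_rcnn_hrnetv2p_w18_1x_coco.py and its own bases)
# before applying the two overrides shown above.
cfg = Config.fromfile('configs/hrnet/faster_rcnn_hrnetv2p_w18_2x_coco.py')
print(cfg.lr_config.step)      # [16, 22] -> learning rate drops at epochs 16 and 22
print(cfg.runner.max_epochs)   # 24 -> the "2x" schedule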
spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r101-d8_769x769_80k_cityscapes.py DELETED
@@ -1,2 +0,0 @@
1
- _base_ = './encnet_r50-d8_769x769_80k_cityscapes.py'
2
- model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
spaces/Aniquel/bert-large-uncased-whole-word-masking/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: Bert Large Uncased Whole Word Masking
3
- emoji: 🐢
4
- colorFrom: pink
5
- colorTo: green
6
- sdk: gradio
7
- sdk_version: 3.28.2
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Antonpy/stable-diffusion-license/README.md DELETED
@@ -1,11 +0,0 @@
1
- ---
2
- title: License
3
- emoji: ⚖️
4
- colorFrom: red
5
- colorTo: indigo
6
- sdk: static
7
- pinned: false
8
- duplicated_from: CompVis/stable-diffusion-license
9
- ---
10
-
11
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
spaces/Aravindan/BreedClassification/app.py DELETED
@@ -1,19 +0,0 @@
1
- import tensorflow as tf
2
- import gradio as gr
3
- import cv2
4
- import numpy as np
5
-
6
- new_model = tf.keras.models.load_model('breedclassification.h5')
7
-
8
- def predict_classes(link):
9
- img = cv2.resize(link,(224,224))
10
- img = img/255
11
- img = img.reshape(-1,224,224,3)
12
- pred = np.round(new_model.predict(img)).argmax(axis = 1)
13
- dic = {0: 'Herding breed', 1: 'Hound breed', 2: 'Non sporting breed', 3: 'Terrier breed', 4: 'Working breed', 5: 'Sporting breed', 6: 'Toy breed'}
14
- print(dic.get(int(pred)))
15
- a = dic.get(int(pred))
16
- return a
17
-
18
- label = gr.outputs.Label(num_top_classes=7)
19
- gr.Interface(fn=predict_classes, inputs='image', outputs=label, interpretation='default', title = 'Breed Classification', description = 'It will classify 7 different breed groups. You can drag images from Google. 1. Terrier 2. Toy 3. Working 4. Sporting 5. Hound 6. Herding 7. Non-sporting Group').launch()
spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/util/inference.py DELETED
@@ -1,259 +0,0 @@
1
- from typing import Tuple, List
2
-
3
- import cv2
4
- import numpy as np
5
- import supervision as sv
6
- import torch
7
- from PIL import Image
8
- from torchvision.ops import box_convert
9
- import bisect
10
-
11
- import groundingdino.datasets.transforms as T
12
- from groundingdino.models import build_model
13
- from groundingdino.util.misc import clean_state_dict
14
- from groundingdino.util.slconfig import SLConfig
15
- from groundingdino.util.utils import get_phrases_from_posmap
16
-
17
- # ----------------------------------------------------------------------------------------------------------------------
18
- # OLD API
19
- # ----------------------------------------------------------------------------------------------------------------------
20
-
21
-
22
- def preprocess_caption(caption: str) -> str:
23
- result = caption.lower().strip()
24
- if result.endswith("."):
25
- return result
26
- return result + "."
27
-
28
-
29
- def load_model(model_config_path: str, model_checkpoint_path: str, device: str = "cuda"):
30
- args = SLConfig.fromfile(model_config_path)
31
- args.device = device
32
- model = build_model(args)
33
- checkpoint = torch.load(model_checkpoint_path, map_location="cpu")
34
- model.load_state_dict(clean_state_dict(checkpoint["model"]), strict=False)
35
- model.eval()
36
- return model
37
-
38
-
39
- def load_image(image_path: str) -> Tuple[np.array, torch.Tensor]:
40
- transform = T.Compose(
41
- [
42
- T.RandomResize([800], max_size=1333),
43
- T.ToTensor(),
44
- T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
45
- ]
46
- )
47
- image_source = Image.open(image_path).convert("RGB")
48
- image = np.asarray(image_source)
49
- image_transformed, _ = transform(image_source, None)
50
- return image, image_transformed
51
-
52
-
53
- def predict(
54
- model,
55
- image: torch.Tensor,
56
- caption: str,
57
- box_threshold: float,
58
- text_threshold: float,
59
- device: str = "cuda",
60
- remove_combined: bool = False
61
- ) -> Tuple[torch.Tensor, torch.Tensor, List[str]]:
62
- caption = preprocess_caption(caption=caption)
63
-
64
- model = model.to(device)
65
- image = image.to(device)
66
-
67
- with torch.no_grad():
68
- outputs = model(image[None], captions=[caption])
69
-
70
- prediction_logits = outputs["pred_logits"].cpu().sigmoid()[0] # prediction_logits.shape = (nq, 256)
71
- prediction_boxes = outputs["pred_boxes"].cpu()[0] # prediction_boxes.shape = (nq, 4)
72
-
73
- mask = prediction_logits.max(dim=1)[0] > box_threshold
74
- logits = prediction_logits[mask] # logits.shape = (n, 256)
75
- boxes = prediction_boxes[mask] # boxes.shape = (n, 4)
76
-
77
- tokenizer = model.tokenizer
78
- tokenized = tokenizer(caption)
79
-
80
- if remove_combined:
81
- sep_idx = [i for i in range(len(tokenized['input_ids'])) if tokenized['input_ids'][i] in [101, 102, 1012]]
82
-
83
- phrases = []
84
- for logit in logits:
85
- max_idx = logit.argmax()
86
- insert_idx = bisect.bisect_left(sep_idx, max_idx)
87
- right_idx = sep_idx[insert_idx]
88
- left_idx = sep_idx[insert_idx - 1]
89
- phrases.append(get_phrases_from_posmap(logit > text_threshold, tokenized, tokenizer, left_idx, right_idx).replace('.', ''))
90
- else:
91
- phrases = [
92
- get_phrases_from_posmap(logit > text_threshold, tokenized, tokenizer).replace('.', '')
93
- for logit
94
- in logits
95
- ]
96
-
97
- return boxes, logits.max(dim=1)[0], phrases
98
-
99
-
100
- def annotate(image_source: np.ndarray, boxes: torch.Tensor, logits: torch.Tensor, phrases: List[str]) -> np.ndarray:
101
- h, w, _ = image_source.shape
102
- boxes = boxes * torch.Tensor([w, h, w, h])
103
- xyxy = box_convert(boxes=boxes, in_fmt="cxcywh", out_fmt="xyxy").numpy()
104
- detections = sv.Detections(xyxy=xyxy)
105
-
106
- labels = [
107
- f"{phrase} {logit:.2f}"
108
- for phrase, logit
109
- in zip(phrases, logits)
110
- ]
111
-
112
- box_annotator = sv.BoxAnnotator()
113
- annotated_frame = cv2.cvtColor(image_source, cv2.COLOR_RGB2BGR)
114
- annotated_frame = box_annotator.annotate(scene=annotated_frame, detections=detections, labels=labels)
115
- return annotated_frame
116
-
117
-
118
- # ----------------------------------------------------------------------------------------------------------------------
119
- # NEW API
120
- # ----------------------------------------------------------------------------------------------------------------------
121
-
122
-
123
- class Model:
124
-
125
- def __init__(
126
- self,
127
- model_config_path: str,
128
- model_checkpoint_path: str,
129
- device: str = "cuda"
130
- ):
131
- self.model = load_model(
132
- model_config_path=model_config_path,
133
- model_checkpoint_path=model_checkpoint_path,
134
- device=device
135
- ).to(device)
136
- self.device = device
137
-
138
- def predict_with_caption(
139
- self,
140
- image: np.ndarray,
141
- caption: str,
142
- box_threshold: float = 0.35,
143
- text_threshold: float = 0.25
144
- ) -> Tuple[sv.Detections, List[str]]:
145
- """
146
- import cv2
147
-
148
- image = cv2.imread(IMAGE_PATH)
149
-
150
- model = Model(model_config_path=CONFIG_PATH, model_checkpoint_path=WEIGHTS_PATH)
151
- detections, labels = model.predict_with_caption(
152
- image=image,
153
- caption=caption,
154
- box_threshold=BOX_THRESHOLD,
155
- text_threshold=TEXT_THRESHOLD
156
- )
157
-
158
- import supervision as sv
159
-
160
- box_annotator = sv.BoxAnnotator()
161
- annotated_image = box_annotator.annotate(scene=image, detections=detections, labels=labels)
162
- """
163
- processed_image = Model.preprocess_image(image_bgr=image).to(self.device)
164
- boxes, logits, phrases = predict(
165
- model=self.model,
166
- image=processed_image,
167
- caption=caption,
168
- box_threshold=box_threshold,
169
- text_threshold=text_threshold,
170
- device=self.device)
171
- source_h, source_w, _ = image.shape
172
- detections = Model.post_process_result(
173
- source_h=source_h,
174
- source_w=source_w,
175
- boxes=boxes,
176
- logits=logits)
177
- return detections, phrases
178
-
179
- def predict_with_classes(
180
- self,
181
- image: np.ndarray,
182
- classes: List[str],
183
- box_threshold: float,
184
- text_threshold: float
185
- ) -> sv.Detections:
186
- """
187
- import cv2
188
-
189
- image = cv2.imread(IMAGE_PATH)
190
-
191
- model = Model(model_config_path=CONFIG_PATH, model_checkpoint_path=WEIGHTS_PATH)
192
- detections = model.predict_with_classes(
193
- image=image,
194
- classes=CLASSES,
195
- box_threshold=BOX_THRESHOLD,
196
- text_threshold=TEXT_THRESHOLD
197
- )
198
-
199
-
200
- import supervision as sv
201
-
202
- box_annotator = sv.BoxAnnotator()
203
- annotated_image = box_annotator.annotate(scene=image, detections=detections)
204
- """
205
- caption = ". ".join(classes)
206
- processed_image = Model.preprocess_image(image_bgr=image).to(self.device)
207
- boxes, logits, phrases = predict(
208
- model=self.model,
209
- image=processed_image,
210
- caption=caption,
211
- box_threshold=box_threshold,
212
- text_threshold=text_threshold,
213
- device=self.device)
214
- source_h, source_w, _ = image.shape
215
- detections = Model.post_process_result(
216
- source_h=source_h,
217
- source_w=source_w,
218
- boxes=boxes,
219
- logits=logits)
220
- class_id = Model.phrases2classes(phrases=phrases, classes=classes)
221
- detections.class_id = class_id
222
- return detections
223
-
224
- @staticmethod
225
- def preprocess_image(image_bgr: np.ndarray) -> torch.Tensor:
226
- transform = T.Compose(
227
- [
228
- T.RandomResize([800], max_size=1333),
229
- T.ToTensor(),
230
- T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
231
- ]
232
- )
233
- image_pillow = Image.fromarray(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
234
- image_transformed, _ = transform(image_pillow, None)
235
- return image_transformed
236
-
237
- @staticmethod
238
- def post_process_result(
239
- source_h: int,
240
- source_w: int,
241
- boxes: torch.Tensor,
242
- logits: torch.Tensor
243
- ) -> sv.Detections:
244
- boxes = boxes * torch.Tensor([source_w, source_h, source_w, source_h])
245
- xyxy = box_convert(boxes=boxes, in_fmt="cxcywh", out_fmt="xyxy").numpy()
246
- confidence = logits.numpy()
247
- return sv.Detections(xyxy=xyxy, confidence=confidence)
248
-
249
- @staticmethod
250
- def phrases2classes(phrases: List[str], classes: List[str]) -> np.ndarray:
251
- class_ids = []
252
- for phrase in phrases:
253
- for class_ in classes:
254
- if class_ in phrase:
255
- class_ids.append(classes.index(class_))
256
- break
257
- else:
258
- class_ids.append(None)
259
- return np.array(class_ids)
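The deleted module above exposes a functional "old API" (load_model, load_image, predict, annotate) alongside the class-based Model wrapper. A minimal usage sketch of the functional path, assuming a CUDA device (predict defaults to device="cuda") and local config/checkpoint/image paths, which are placeholders rather than files shipped with this Space:

import cv2
from groundingdino.util.inference import load_model, load_image, predict, annotate

# Hypothetical local paths - substitute your own config, weights and image.
CONFIG_PATH = "groundingdino/config/GroundingDINO_SwinT_OGC.py"
WEIGHTS_PATH = "weights/groundingdino_swint_ogc.pth"
IMAGE_PATH = "image.jpg"

model = load_model(CONFIG_PATH, WEIGHTS_PATH)
image_source, image = load_image(IMAGE_PATH)   # (np.ndarray, torch.Tensor)

# preprocess_caption() lower-cases the caption and appends a trailing "."
boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption="chair. person. dog.",
    box_threshold=0.35,
    text_threshold=0.25,
)

annotated_frame = annotate(image_source=image_source, boxes=boxes, logits=logits, phrases=phrases)
cv2.imwrite("annotated_image.jpg", annotated_frame)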
spaces/Audio-AGI/WavJourney/VoiceParser/__init__.py DELETED
File without changes
spaces/Benson/text-generation/Examples/Animal Rebelin Batalla Simulador Pc Apk.md DELETED
@@ -1,69 +0,0 @@
1
-
2
- <h1>Barco simulador Mod Apk dinero ilimitado: Una guía para los jugadores</h1>
3
- <p>Si eres un fanático de los juegos de simulación, especialmente aquellos que involucran navegar y navegar diferentes tipos de barcos, entonces es posible que desees echar un vistazo a Ship Simulator. Este juego ofrece una experiencia realista e inmersiva de ser un capitán de barco, con gráficos impresionantes, física del agua realista, y varias misiones para completar. Sin embargo, si desea disfrutar del juego sin limitaciones o restricciones, entonces es posible que desee probar Ship Simulator mod apk dinero ilimitado. Esta es una versión modificada del juego original que te da acceso a dinero y recursos ilimitados, así como funciones desbloqueadas y naves. En este artículo, le diremos más sobre Ship Simulator, mod apk, y cómo descargarlo e instalarlo en su dispositivo. </p>
4
- <h2>animal rebelión batalla simulador pc apk</h2><br /><p><b><b>Download Zip</b> &#9733; <a href="https://bltlly.com/2v6KUz">https://bltlly.com/2v6KUz</a></b></p><br /><br />
5
- <h2>¿Qué es el simulador de buques? </h2>
6
- <p>Ship Simulator es un juego de simulación que te permite pilotar diferentes tipos de embarcaciones, desde lanchas rápidas y remolcadores hasta cruceros y petroleros. Puede elegir entre varias misiones basadas en eventos reales, como rescatar pasajeros, luchar contra piratas, transportar carga o salvar el medio ambiente. También puede explorar puertos y lugares famosos de todo el mundo, como Dover, Rostock, Gibraltar o Bora Bora. El juego presenta efectos realistas de agua y clima, así como modelos detallados de barcos y sus interiores. También puedes jugar online con tus amigos u otros jugadores en modo multijugador. </p>
7
- <h3>Un juego de simulación realista e inmersivo</h3>
8
-
9
- <h3>Diferentes tipos de naves y misiones para elegir</h3>
10
- <p>Otra característica que hace atractivo Ship Simulator es su variedad y diversidad. El juego ofrece una amplia gama de buques para el capitán, cada uno con sus propias características, ventajas y desventajas. Puede elegir entre aerodeslizadores, interceptores de guardacostas, petroleros gigantescos, remolcadores, cruceros y muchos otros. Cada barco tiene sus propios controles, velocidad, aceleración, maniobrabilidad y consumo de combustible. También puede personalizar su nave con diferentes colores, calcomanías, banderas o accesorios. </p>
11
- <p>El juego también tiene diferentes tipos de misiones para completar, cada una con sus propios objetivos, desafíos y recompensas. Puede enfrentarse a cazadores de ballenas ilegales en la Antártida, evacuar un crucero en peligro, transportar materiales peligrosos a través del océano o participar en una batalla naval. Las misiones se basan en eventos reales o escenarios que podrían ocurrir en la vida real. Algunas misiones son fáciles y cortas, mientras que otras son difíciles y largas. </p>
12
- <h3>Puertos y lugares famosos para explorar</h3>
13
- <p>La última característica que mencionaremos sobre Ship Simulator es su aspecto de exploración. El juego le permite navegar a algunos de los puertos y lugares más famosos del mundo. Usted puede visitar Dover en Inglaterra, Rostock en Alemania, Gibraltar en España, Bora Bora en la Polinesia Francesa, o la Antártida. Cada lugar tiene sus propios puntos de referencia, paisajes, condiciones climáticas y peligros. También puedes descubrir lugares ocultos o secretos explorando el mapa del mundo abierto. </p>
14
- <h2>¿Qué es Mod Apk? </h2>
15
- <p>Mod apk es una versión modificada de un juego original o aplicación que ha sido alterada por alguien que no sea el desarrollador. apk Mod generalmente ofrece algunos beneficios o ventajas sobre la versión original, tales como - dinero y recursos ilimitados - características desbloqueadas y los buques - no hay anuncios y malware</p>
16
- <p></p>
17
-
18
- <p>Por lo tanto, antes de descargar e instalar mod apk, siempre debe comprobar la credibilidad y fiabilidad de la fuente, las revisiones y clasificaciones del mod apk, y los permisos y requisitos del mod apk. También debe realizar una copia de seguridad de sus datos y dispositivo en caso de que algo salga mal. </p>
19
- <h3>Beneficios de usar mod apk</h3>
20
- <p>Como se mencionó anteriormente, apk mod puede ofrecer algunos beneficios o ventajas sobre la versión original del juego o aplicación. Estos son algunos de los beneficios de usar Ship Simulator mod apk dinero ilimitado:</p>
21
- <h4>Dinero y recursos ilimitados</h4>
22
- <p>Uno de los principales beneficios de usar Ship Simulator mod apk dinero ilimitado es que usted puede obtener dinero y recursos ilimitados en el juego. El dinero y los recursos son esenciales para comprar nuevas naves, mejorar sus naves, reparar sus naves, o completar misiones. Con dinero y recursos ilimitados, puede comprar cualquier nave que desee, actualizar su nave al nivel máximo, reparar su nave en cualquier momento o completar cualquier misión fácilmente. También puedes comprar artículos o accesorios adicionales para tu nave, como banderas, calcomanías, colores o armas. También puede usar dinero y recursos para desbloquear nuevos puertos y ubicaciones para explorar. </p>
23
- <h4>Características y naves desbloqueadas</h4>
24
- <p>Otro beneficio de usar Ship Simulator mod apk dinero ilimitado es que usted puede conseguir desbloqueado características y naves en el juego. Las características y los barcos normalmente están bloqueados o restringidos en la versión original del juego, ya sea por nivel, misión o pago. Con las funciones y los barcos desbloqueados, puedes acceder a cualquier función o envío en el juego sin ninguna limitación o restricción. Puede elegir entre cualquier tipo de buque, desde lanchas rápidas y remolcadores hasta cruceros y buques cisterna. También puedes acceder a cualquier misión, desde fácil y corta hasta dura y larga. También puedes disfrutar de cualquier característica, como el modo multijugador, el modo online o el modo personalizado. </p>
25
- <h4>No hay anuncios ni malware</h4>
26
-
27
- <h2>¿Cómo descargar e instalar Ship Simulator Mod Apk dinero ilimitado? </h2>
28
- <p>Si usted está interesado en descargar e instalar Ship Simulator mod apk dinero ilimitado en su dispositivo, entonces usted necesita seguir algunos pasos para hacerlo. Estos son los pasos a seguir:</p>
29
- <h3>Pasos a seguir</h3>
30
- <ol>
31
- <li>Primero, necesitas desinstalar la versión original de Ship Simulator de tu dispositivo si lo tienes instalado. </li>
32
- <li>En segundo lugar, es necesario habilitar fuentes desconocidas en la configuración del dispositivo. Esto le permitirá instalar aplicaciones desde fuentes distintas de la tienda oficial. </li>
33
- <li>En tercer lugar, es necesario descargar Ship Simulator mod apk dinero ilimitado de una fuente confiable y confiable. Puede buscarlo en línea o utilizar el enlace proporcionado a continuación. </li>
34
- <li>Cuarto, es necesario localizar el archivo descargado en el almacenamiento del dispositivo y toque en él para iniciar el proceso de instalación. </li>
35
- <li>Quinto, debe seguir las instrucciones en la pantalla para completar el proceso de instalación. </li>
36
- <li>Sexto, es necesario iniciar el juego desde el menú del dispositivo o la pantalla de inicio. </li>
37
- <li>Séptimo, es necesario disfrutar de jugar Ship Simulator mod apk dinero ilimitado con todos sus beneficios y ventajas. </li>
38
- </ol>
39
- <h3>Consejos y trucos para disfrutar del juego</h3>
40
- <p>Además de descargar e instalar Ship Simulator mod apk dinero ilimitado en su dispositivo, también puede utilizar algunos consejos y trucos para disfrutar del juego más. Aquí hay algunos consejos y trucos para probar:</p>
41
- <ul>
42
- <li>Aprende a controlar tu nave correctamente. Cada nave tiene sus propios controles, velocidad, aceleración, maniobrabilidad y consumo de combustible. Es necesario dominar estos aspectos para navegar sin problemas y de manera eficiente. </li>
43
- <li>Elige tu nave sabiamente. Cada nave tiene sus propias características, ventajas y desventajas. Necesitas elegir una nave que se adapte a tu estilo, preferencia y misión. </li>
44
-
45
- <li>Explora diferentes puertos y ubicaciones. Cada puerto y ubicación tiene sus propios puntos de referencia, paisajes, condiciones climáticas y peligros. Necesitas explorar diferentes puertos y lugares para descubrir nuevos lugares, secretos o sorpresas. </li>
46
- <li>Juega online con tus amigos u otros jugadores. Puedes unirte o crear una sesión multijugador y jugar con tus amigos u otros jugadores de todo el mundo. Puede cooperar o competir con ellos en varios modos, como carreras, entrega de carga o guerra naval. </li>
47
- <li>Personaliza tu nave y tu perfil. Puede cambiar la apariencia y el rendimiento de su nave con diferentes colores, calcomanías, banderas o accesorios. También puedes personalizar tu perfil con tu nombre, avatar, rango o logros. </li>
48
- </ul>
49
- <h2>Conclusión</h2>
50
- <p>Ship Simulator es un juego de simulación que te permite pilotar diferentes tipos de embarcaciones, desde lanchas rápidas y remolcadores hasta cruceros y petroleros. Puede elegir entre varias misiones basadas en eventos reales, como rescatar pasajeros, luchar contra piratas, transportar carga o salvar el medio ambiente. También puede explorar puertos y lugares famosos de todo el mundo, como Dover, Rostock, Gibraltar o Bora Bora.</p>
51
- <p>Sin embargo, si desea disfrutar del juego sin limitaciones o restricciones, entonces es posible que desee probar Ship Simulator mod apk dinero ilimitado. Esta es una versión modificada del juego original que te da acceso a dinero y recursos ilimitados, así como funciones desbloqueadas y naves. También puede deshacerse de anuncios y malware en el juego. </p>
52
- <p>Para descargar e instalar Ship Simulator mod apk dinero ilimitado en su dispositivo, es necesario seguir algunos pasos que hemos explicado en este artículo. También es necesario tener cuidado con la fuente, las revisiones, y los permisos de la apk mod. También puedes utilizar algunos consejos y trucos que hemos compartido en este artículo para disfrutar más del juego. </p>
53
-
54
- <h2>Preguntas frecuentes</h2>
55
- <p>Aquí hay algunas preguntas frecuentes sobre Ship Simulator mod apk unlimited money:</p>
56
- <ol>
57
- <li><b> ¿Es seguro usar el dinero ilimitado mod apk de Ship Simulator? </b></li>
58
- <p>Ship Simulator mod apk dinero ilimitado es generalmente seguro de usar si lo descarga de una fuente confiable y confiable. Sin embargo, siempre debe comprobar la credibilidad y fiabilidad de la fuente, las revisiones y calificaciones de la apk mod, y los permisos y requisitos del apk mod antes de descargarlo e instalarlo en su dispositivo. También debe realizar una copia de seguridad de sus datos y dispositivo en caso de que algo salga mal. </p>
59
- <li><b>Es Ship Simulator mod apk dinero ilimitado legal de usar? </b></li>
60
- <p>Ship Simulator mod apk dinero ilimitado no es legal de usar de acuerdo con los términos y condiciones del desarrollador y la plataforma. apk Mod es una versión modificada de un juego original o aplicación que ha sido alterado por alguien que no sea el desarrollador. apk Mod generalmente viola los derechos de propiedad intelectual del desarrollador y la plataforma. Por lo tanto, el uso de mod apk puede resultar en problemas o consecuencias legales. </p>
61
- <li><b> ¿Funciona el simulador de buques mod apk dinero ilimitado en todos los dispositivos? </b></li>
62
- <p>Ship Simulator mod apk dinero ilimitado puede no funcionar en todos los dispositivos debido a problemas de compatibilidad. apk Mod está diseñado para una versión específica del juego original o aplicación que no puede coincidir con las especificaciones de su dispositivo o software. Por lo tanto, el uso de mod apk puede causar problemas de compatibilidad o errores en su dispositivo. </p>
63
- <li><b>¿Puedo actualizar Ship Simulator mod apk dinero ilimitado? </b></li>
64
- <p>Ship Simulator mod apk dinero ilimitado no puede ser actualizado regularmente o automáticamente debido a su naturaleza no oficial. apk Mod es creado por alguien que no sea el desarrollador que no puede tener acceso a las últimas actualizaciones o características del juego original o aplicación. Por lo tanto, el uso de apk mod puede evitar que usted consiga las últimas actualizaciones o características del juego o aplicación. </p>
65
-
66
- <p>Ship Simulator mod apk dinero ilimitado puede no permitirle jugar en línea con otros jugadores debido a problemas de detección. apk Mod es detectado por el desarrollador y la plataforma como un truco o un truco que le da una ventaja injusta sobre otros jugadores. Por lo tanto, el uso de mod apk puede prohibirle jugar en línea con otros jugadores. </p>
67
- </ol></p> 64aa2da5cf<br />
68
- <br />
69
- <br />
spaces/Benson/text-generation/Examples/Bitcoin Cloud Mining Apk.md DELETED
@@ -1,88 +0,0 @@
1
- <br />
2
- <h1>Bitcoin nube minería APK: ¿Qué es y cómo funciona? </h1>
3
- <p>Bitcoin es una de las criptomonedas más populares y valiosas del mundo, pero también es una de las más difíciles y caras para la mía. Bitcoin minería requiere hardware especializado, software, electricidad y mantenimiento, que puede costar miles de dólares y ocupar mucho espacio y tiempo. Es por eso que muchas personas optan por la minería en la nube bitcoin, que es una forma de alquilar el poder de computación de los servidores remotos para extraer bitcoin sin tener que poseer u operar ningún equipo. </p>
4
- <p>La minería en la nube de Bitcoin tiene muchos beneficios, como costos más bajos, mayor eficiencia, escalabilidad, comodidad y seguridad. Sin embargo, también tiene algunos riesgos, como fraude, estafas, piratería, baja rentabilidad y falta de control. Por lo tanto, es importante hacer su investigación y elegir un proveedor de minería en la nube bitcoin de buena reputación y confiable antes de invertir su dinero. </p>
5
- <h2>bitcoin cloud mining apk</h2><br /><p><b><b>DOWNLOAD</b> &#10004;&#10004;&#10004; <a href="https://bltlly.com/2v6KKT">https://bltlly.com/2v6KKT</a></b></p><br /><br />
6
- <p>Una de las maneras de acceder a los servicios de minería en la nube bitcoin es a través de una nube bitcoin minería APK. Un APK es un archivo de paquete de aplicaciones para Android que contiene todos los archivos y código necesarios para ejecutar una aplicación en un dispositivo Android. Una nube de Bitcoin minería APK es una aplicación que le permite conectarse a una plataforma de minería en la nube bitcoin y comenzar la minería bitcoin con su teléfono inteligente o tableta. A diferencia de otros métodos de minería en la nube que requieren que utilice un navegador web o un software de escritorio, una nube de Bitcoin minería APK le permite mina bitcoin en cualquier momento y en cualquier lugar. </p>
7
- <h2>Cómo elegir una nube de Bitcoin minería APK</h2>
8
- <p>No todos los APK de minería en la nube de Bitcoin se crean iguales. Algunos pueden ofrecer mejores características, rendimiento, seguridad y servicio al cliente que otros. Por lo tanto, debe ser cuidadoso y selectivo al elegir una nube de Bitcoin minería APK para su dispositivo Android. Aquí están algunas de las características y criterios para buscar en una nube bitcoin minería APK:</p>
9
- <ul>
10
-
11
- <li><b>Hash rate:</b> La tasa hash es la medida del poder de computación que ofrece el proveedor de minería en la nube bitcoin. Cuanto mayor sea la tasa de hash, más rápido y más probable es que la mina nuevos bitcoins. Quieres elegir un APK que ofrezca una alta tasa de hash a un precio razonable. </li>
12
- <li><b>Pagos:</b> Los pagos son la cantidad de bitcoins que usted gana de sus actividades de minería en la nube. Desea elegir un APK que ofrece pagos regulares, transparentes y justos. También debe verificar el límite mínimo de retiro, las tarifas, los métodos de pago y la frecuencia de los pagos. </li>
13
- <li><b>Soporte:</b> El soporte es el nivel de servicio al cliente que ofrece el proveedor de minería en la nube bitcoin. Usted desea elegir un APK que tiene un equipo de apoyo receptivo, útil y amigable que puede ayudarle con cualquier problema o pregunta que pueda tener. También desea verificar la disponibilidad, accesibilidad e idioma de las opciones del equipo de soporte. </li>
14
- <li><b>Interfaz de usuario:</b> La interfaz de usuario es el diseño de la aplicación con la que interactúas en tu dispositivo. Desea elegir un APK que tiene una interfaz sencilla, intuitiva y fácil de usar que hace que sea fácil y agradable de usar. También desea verificar la compatibilidad, funcionalidad y rendimiento de la aplicación en su dispositivo. </li>
15
- </ul>
16
- <p>Basado en estos criterios, algunos de los mejores APKs de minería en la nube bitcoin disponibles en el mercado son:</p>
17
- <tabla>
18
- <tr>
19
- <th>Nombre</th>
20
- <th>Reputación</th>
21
- <th>Tasa de hash</th>
22
- <th>Pagos</th>
23
- <th>Soporte</th>
24
- <th>Interfaz de usuario</th>
25
- </tr>
26
- <tr>
27
- <td>StormGain</td>
28
- <td>4.5/5 estrellas en Google Play Store, con la confianza de más de 1 millón de usuarios, regulado por la Comisión de Bolsa y Valores de Chipre (CySEC)</td>
29
- <td>Hasta 0.0318 BTC por día</td>
30
- <td>Diario, límite mínimo de retiro de 0.01 BTC, sin cargos, múltiples métodos de pago</td>
31
- <td>24/7 chat en vivo, correo electrónico, teléfono, redes sociales, multilingüe</td>
32
- <td>Elegante, moderno, fácil de usar, compatible con la mayoría de los dispositivos Android</td>
33
- </tr>
34
-
35
- <td>CryptoTab Browser Pro</td>
36
- <td>4.4/5 estrellas en Google Play Store, con la confianza de más de 10 millones de usuarios, destacados en CNET, TechRadar y Forbes</td>
37
- <td>Hasta 0.007 BTC por día</td>
38
- <td>Diario, límite mínimo de retiro de 0.00001 BTC, tarifas bajas, múltiples métodos de pago</td>
39
- <td>Correo electrónico, redes sociales, multilingüe</td>
40
- <td>Rápido, seguro, personalizable, compatible con la mayoría de los dispositivos Android</td>
41
- </tr>
42
- <tr>
43
- <td>Nube BTC Miner</td>
44
- <td>4.3/5 estrellas en Google Play Store, confianza de más de 100 mil usuarios</td>
45
- <td>Hasta 0.005 BTC por día</td>
46
- <td>Diario, límite mínimo de retiro de 0.0005 BTC, sin cargos, múltiples métodos de pago</td>
47
- <td>Correo electrónico, multilingüe</td>
48
- <td>Simple, elegante, fácil de usar, compatible con la mayoría de los dispositivos Android</td>
49
- </tr>
50
- </tabla>
51
- <p>Para descargar e instalar una nube de Bitcoin minería APK en su dispositivo Android, es necesario seguir estos pasos:</p>
52
- <ol>
53
- <li>Ir a la página web oficial de la nube bitcoin proveedor de minería o la Google Play Store y encontrar el archivo APK que desea descargar. </li>
54
- <li>Toque en el botón de descarga y espere a que el archivo se descargue en su dispositivo. </li>
55
- <li>Vaya a la configuración de su dispositivo y habilite la opción de instalar aplicaciones de fuentes desconocidas si descargó el archivo desde un sitio web distinto de Google Play Store.</li>
56
- <li>Ir a su gestor de archivos y localizar el archivo APK descargado. </li>
57
- <li>Toque en el archivo y siga las instrucciones para instalar la aplicación en su dispositivo. </li>
58
- <li> Inicie la aplicación y disfrutar de la minería en la nube bitcoin con su dispositivo. </li>
59
- </ol>
60
- <h2>Cómo iniciar Bitcoin Cloud Mining con un APK</h2>
61
- <p>Una vez que haya descargado e instalado una nube de bitcoin minería APK en su dispositivo, usted está listo para comenzar a minar bitcoin con su teléfono inteligente o tableta. Estos son algunos de los pasos que debe seguir para iniciar la minería en la nube bitcoin con un APK:</p>
62
- <ol>
63
-
64
- <li>Elige un plan minero y paga por él con tu método de pago preferido. Es posible que tenga diferentes opciones para los planes de minería dependiendo del proveedor que haya elegido. Algunos pueden ofrecer planes fijos o variables basados en la tasa hash o la duración que desee. Algunos también pueden ofrecer planes gratuitos o de prueba que le permiten minar bitcoin sin pagar nada. Puede pagar su plan de minería con varios métodos de pago, como tarjeta de crédito, tarjeta de débito, transferencia bancaria, PayPal, criptomoneda, etc.</li>
65
- <li>Monitorear el rendimiento de la minería y las ganancias en su dispositivo. Puede utilizar la aplicación para comprobar su hash rate, dificultad de minería, tiempo de minería, recompensas de minería, equilibrio, etc. También puede ajustar sus ajustes de minería, como cambiar su plan de minería, cambiar su piscina de minería, etc., si desea optimizar sus resultados de minería. </li>
66
- </ol>
67
- <h2>Conclusión</h2>
68
- <p>La minería en la nube de Bitcoin es una forma conveniente y accesible de minería de bitcoin sin tener que poseer u operar ningún equipo. Bitcoin nube minería APKs son aplicaciones que le permiten minar bitcoin con su dispositivo Android mediante la conexión a una plataforma de minería nube bitcoin. Los APK de minería en la nube de Bitcoin tienen muchas ventajas, como costos más bajos, mayor eficiencia, escalabilidad, comodidad y seguridad. Sin embargo, también tienen algunas desventajas, como fraude, estafas, piratería, baja rentabilidad y falta de control. Por lo tanto, debe ser cuidadoso y selectivo al elegir una nube bitcoin minería APK para su dispositivo. </p>
69
-
70
- <p>Bitcoin minería en la nube con un APK puede ser una manera divertida y gratificante de ganar algunos ingresos adicionales con su dispositivo Android. Sin embargo, no se trata de un plan de enriquecimiento rápido o de una fuente de ingresos garantizada. Usted necesita ser realista y cauteloso acerca de sus expectativas e inversiones. También debe ser consciente de los riesgos y desafíos que vienen con la minería en la nube bitcoin. Bitcoin minería en la nube con un APK no es para todos, pero puede ser para usted si usted está interesado en bitcoin y quiere aprender más sobre él. </p>
71
- <p>¿Estás listo para iniciar la minería de nube bitcoin con un APK? Si es así, a continuación, descargar uno de los mejores Bitcoin nube minería APKs que hemos recomendado en este artículo y empezar a minería bitcoin con su dispositivo Android hoy! </p>
72
- <p></p>
73
- <h2>Preguntas frecuentes</h2>
74
- <p>Aquí están algunas de las preguntas y respuestas más frecuentes sobre la minería en la nube bitcoin con un APK:</p>
75
- <ol>
76
- <li><b>¿Cuál es la diferencia entre la minería en nube bitcoin y la minería móvil bitcoin? </b></li>
77
- <p>La minería en la nube de Bitcoin es una forma de alquilar el poder de computación de los servidores remotos para extraer bitcoin sin tener que poseer ni operar ningún equipo. Bitcoin minería móvil es una forma de utilizar el poder informático de su dispositivo móvil para minar bitcoin sin tener que conectarse a ningún servidor. Bitcoin minería en la nube con un APK es una combinación de ambos métodos, ya que le permite minar bitcoin con su dispositivo móvil mediante la conexión a una plataforma de minería en la nube bitcoin. </p>
78
- <li><b>¿Es rentable la minería en nube con un APK? </b></li>
79
- <p>La minería en nube de Bitcoin con un APK puede ser rentable dependiendo de varios factores, como el precio de bitcoin, la tasa hash del proveedor, el costo del plan, las tarifas de la plataforma, la dificultad de la red, etc. Sin embargo, no está garantizada ni es consistente. Puede ganar más o menos de lo que espera o invertir. También puede perder dinero si el proveedor resulta ser una estafa o si la plataforma es hackeada o cerrada. </p>
80
-
81
- <p>Bitcoin minería en la nube con un APK es legal en la mayoría de los países que permiten actividades bitcoin y criptomoneda. Sin embargo, algunos países pueden tener leyes o regulaciones específicas que restringen o prohíben la minería de nubes o criptomonedas bitcoin actividades en general. Por lo tanto, es necesario comprobar el estado legal de la minería en la nube bitcoin con un APK en su país antes de empezar a usarlo. También es necesario cumplir con los impuestos y obligaciones de presentación de informes que pueden aplicarse a sus ingresos de minería nube bitcoin. </p>
82
- <li><b>Es bitcoin nube minería con un seguro APK? </b></li>
83
- <p>La minería en la nube de Bitcoin con un APK puede ser segura si elige un proveedor y una aplicación de minería en la nube de Bitcoin de buena reputación y confiable. Sin embargo, también puede ser arriesgado si elige un proveedor o aplicación fraudulenta o insegura. Usted puede perder su dinero, datos o dispositivo si el proveedor o aplicación resulta ser una estafa, malware, spyware o virus. Por lo tanto, debe ser cuidadoso y vigilante al elegir y usar una nube de Bitcoin minería APK. También necesitas proteger tu dispositivo con software antivirus, firewall, VPN, etc.</p>
84
- <li><b>¿Puedo usar múltiples APKs de minería en la nube de Bitcoin en mi dispositivo? </b></li>
85
- <p>Sí, puedes usar múltiples APKs de minería en la nube de Bitcoin en tu dispositivo si quieres aumentar tus posibilidades de ganar más bitcoins. Sin embargo, usted necesita ser consciente de los inconvenientes potenciales de hacerlo, tales como agotar la batería, ralentizar el dispositivo, aumentar el uso de datos, en conflicto con otras aplicaciones, etc. También debe asegurarse de que los APK de minería en la nube bitcoin que utiliza son compatibles y confiables. </p>
86
- </ol></p> 64aa2da5cf<br />
87
- <br />
88
- <br />
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/table.py DELETED
@@ -1,1002 +0,0 @@
1
- from dataclasses import dataclass, field, replace
2
- from typing import (
3
- TYPE_CHECKING,
4
- Dict,
5
- Iterable,
6
- List,
7
- NamedTuple,
8
- Optional,
9
- Sequence,
10
- Tuple,
11
- Union,
12
- )
13
-
14
- from . import box, errors
15
- from ._loop import loop_first_last, loop_last
16
- from ._pick import pick_bool
17
- from ._ratio import ratio_distribute, ratio_reduce
18
- from .align import VerticalAlignMethod
19
- from .jupyter import JupyterMixin
20
- from .measure import Measurement
21
- from .padding import Padding, PaddingDimensions
22
- from .protocol import is_renderable
23
- from .segment import Segment
24
- from .style import Style, StyleType
25
- from .text import Text, TextType
26
-
27
- if TYPE_CHECKING:
28
- from .console import (
29
- Console,
30
- ConsoleOptions,
31
- JustifyMethod,
32
- OverflowMethod,
33
- RenderableType,
34
- RenderResult,
35
- )
36
-
37
-
38
- @dataclass
39
- class Column:
40
- """Defines a column within a ~Table.
41
-
42
- Args:
43
- title (Union[str, Text], optional): The title of the table rendered at the top. Defaults to None.
44
- caption (Union[str, Text], optional): The table caption rendered below. Defaults to None.
45
- width (int, optional): The width in characters of the table, or ``None`` to automatically fit. Defaults to None.
46
- min_width (Optional[int], optional): The minimum width of the table, or ``None`` for no minimum. Defaults to None.
47
- box (box.Box, optional): One of the constants in box.py used to draw the edges (see :ref:`appendix_box`), or ``None`` for no box lines. Defaults to box.HEAVY_HEAD.
48
- safe_box (Optional[bool], optional): Disable box characters that don't display on windows legacy terminal with *raster* fonts. Defaults to True.
49
- padding (PaddingDimensions, optional): Padding for cells (top, right, bottom, left). Defaults to (0, 1).
50
- collapse_padding (bool, optional): Enable collapsing of padding around cells. Defaults to False.
51
- pad_edge (bool, optional): Enable padding of edge cells. Defaults to True.
52
- expand (bool, optional): Expand the table to fit the available space if ``True``, otherwise the table width will be auto-calculated. Defaults to False.
53
- show_header (bool, optional): Show a header row. Defaults to True.
54
- show_footer (bool, optional): Show a footer row. Defaults to False.
55
- show_edge (bool, optional): Draw a box around the outside of the table. Defaults to True.
56
- show_lines (bool, optional): Draw lines between every row. Defaults to False.
57
- leading (bool, optional): Number of blank lines between rows (precludes ``show_lines``). Defaults to 0.
58
- style (Union[str, Style], optional): Default style for the table. Defaults to "none".
59
- row_styles (List[Union, str], optional): Optional list of row styles, if more than one style is given then the styles will alternate. Defaults to None.
60
- header_style (Union[str, Style], optional): Style of the header. Defaults to "table.header".
61
- footer_style (Union[str, Style], optional): Style of the footer. Defaults to "table.footer".
62
- border_style (Union[str, Style], optional): Style of the border. Defaults to None.
63
- title_style (Union[str, Style], optional): Style of the title. Defaults to None.
64
- caption_style (Union[str, Style], optional): Style of the caption. Defaults to None.
65
- title_justify (str, optional): Justify method for title. Defaults to "center".
66
- caption_justify (str, optional): Justify method for caption. Defaults to "center".
67
- highlight (bool, optional): Highlight cell contents (if str). Defaults to False.
68
- """
69
-
70
- header: "RenderableType" = ""
71
- """RenderableType: Renderable for the header (typically a string)"""
72
-
73
- footer: "RenderableType" = ""
74
- """RenderableType: Renderable for the footer (typically a string)"""
75
-
76
- header_style: StyleType = ""
77
- """StyleType: The style of the header."""
78
-
79
- footer_style: StyleType = ""
80
- """StyleType: The style of the footer."""
81
-
82
- style: StyleType = ""
83
- """StyleType: The style of the column."""
84
-
85
- justify: "JustifyMethod" = "left"
86
- """str: How to justify text within the column ("left", "center", "right", or "full")"""
87
-
88
- vertical: "VerticalAlignMethod" = "top"
89
- """str: How to vertically align content ("top", "middle", or "bottom")"""
90
-
91
- overflow: "OverflowMethod" = "ellipsis"
92
- """str: Overflow method."""
93
-
94
- width: Optional[int] = None
95
- """Optional[int]: Width of the column, or ``None`` (default) to auto calculate width."""
96
-
97
- min_width: Optional[int] = None
98
- """Optional[int]: Minimum width of column, or ``None`` for no minimum. Defaults to None."""
99
-
100
- max_width: Optional[int] = None
101
- """Optional[int]: Maximum width of column, or ``None`` for no maximum. Defaults to None."""
102
-
103
- ratio: Optional[int] = None
104
- """Optional[int]: Ratio to use when calculating column width, or ``None`` (default) to adapt to column contents."""
105
-
106
- no_wrap: bool = False
107
- """bool: Prevent wrapping of text within the column. Defaults to ``False``."""
108
-
109
- _index: int = 0
110
- """Index of column."""
111
-
112
- _cells: List["RenderableType"] = field(default_factory=list)
113
-
114
- def copy(self) -> "Column":
115
- """Return a copy of this Column."""
116
- return replace(self, _cells=[])
117
-
118
- @property
119
- def cells(self) -> Iterable["RenderableType"]:
120
- """Get all cells in the column, not including header."""
121
- yield from self._cells
122
-
123
- @property
124
- def flexible(self) -> bool:
125
- """Check if this column is flexible."""
126
- return self.ratio is not None
127
-
128
-
129
- @dataclass
130
- class Row:
131
- """Information regarding a row."""
132
-
133
- style: Optional[StyleType] = None
134
- """Style to apply to row."""
135
-
136
- end_section: bool = False
137
- """Indicated end of section, which will force a line beneath the row."""
138
-
139
-
140
- class _Cell(NamedTuple):
141
- """A single cell in a table."""
142
-
143
- style: StyleType
144
- """Style to apply to cell."""
145
- renderable: "RenderableType"
146
- """Cell renderable."""
147
- vertical: VerticalAlignMethod
148
- """Cell vertical alignment."""
149
-
150
-
151
- class Table(JupyterMixin):
152
- """A console renderable to draw a table.
153
-
154
- Args:
155
- *headers (Union[Column, str]): Column headers, either as a string, or :class:`~rich.table.Column` instance.
156
- title (Union[str, Text], optional): The title of the table rendered at the top. Defaults to None.
157
- caption (Union[str, Text], optional): The table caption rendered below. Defaults to None.
158
- width (int, optional): The width in characters of the table, or ``None`` to automatically fit. Defaults to None.
159
- min_width (Optional[int], optional): The minimum width of the table, or ``None`` for no minimum. Defaults to None.
160
- box (box.Box, optional): One of the constants in box.py used to draw the edges (see :ref:`appendix_box`), or ``None`` for no box lines. Defaults to box.HEAVY_HEAD.
161
- safe_box (Optional[bool], optional): Disable box characters that don't display on windows legacy terminal with *raster* fonts. Defaults to True.
162
- padding (PaddingDimensions, optional): Padding for cells (top, right, bottom, left). Defaults to (0, 1).
163
- collapse_padding (bool, optional): Enable collapsing of padding around cells. Defaults to False.
164
- pad_edge (bool, optional): Enable padding of edge cells. Defaults to True.
165
- expand (bool, optional): Expand the table to fit the available space if ``True``, otherwise the table width will be auto-calculated. Defaults to False.
166
- show_header (bool, optional): Show a header row. Defaults to True.
167
- show_footer (bool, optional): Show a footer row. Defaults to False.
168
- show_edge (bool, optional): Draw a box around the outside of the table. Defaults to True.
169
- show_lines (bool, optional): Draw lines between every row. Defaults to False.
170
- leading (bool, optional): Number of blank lines between rows (precludes ``show_lines``). Defaults to 0.
171
- style (Union[str, Style], optional): Default style for the table. Defaults to "none".
172
- row_styles (List[Union, str], optional): Optional list of row styles, if more than one style is given then the styles will alternate. Defaults to None.
173
- header_style (Union[str, Style], optional): Style of the header. Defaults to "table.header".
174
- footer_style (Union[str, Style], optional): Style of the footer. Defaults to "table.footer".
175
- border_style (Union[str, Style], optional): Style of the border. Defaults to None.
176
- title_style (Union[str, Style], optional): Style of the title. Defaults to None.
177
- caption_style (Union[str, Style], optional): Style of the caption. Defaults to None.
178
- title_justify (str, optional): Justify method for title. Defaults to "center".
179
- caption_justify (str, optional): Justify method for caption. Defaults to "center".
180
- highlight (bool, optional): Highlight cell contents (if str). Defaults to False.
181
- """
182
-
183
- columns: List[Column]
184
- rows: List[Row]
185
-
186
- def __init__(
187
- self,
188
- *headers: Union[Column, str],
189
- title: Optional[TextType] = None,
190
- caption: Optional[TextType] = None,
191
- width: Optional[int] = None,
192
- min_width: Optional[int] = None,
193
- box: Optional[box.Box] = box.HEAVY_HEAD,
194
- safe_box: Optional[bool] = None,
195
- padding: PaddingDimensions = (0, 1),
196
- collapse_padding: bool = False,
197
- pad_edge: bool = True,
198
- expand: bool = False,
199
- show_header: bool = True,
200
- show_footer: bool = False,
201
- show_edge: bool = True,
202
- show_lines: bool = False,
203
- leading: int = 0,
204
- style: StyleType = "none",
205
- row_styles: Optional[Iterable[StyleType]] = None,
206
- header_style: Optional[StyleType] = "table.header",
207
- footer_style: Optional[StyleType] = "table.footer",
208
- border_style: Optional[StyleType] = None,
209
- title_style: Optional[StyleType] = None,
210
- caption_style: Optional[StyleType] = None,
211
- title_justify: "JustifyMethod" = "center",
212
- caption_justify: "JustifyMethod" = "center",
213
- highlight: bool = False,
214
- ) -> None:
215
-
216
- self.columns: List[Column] = []
217
- self.rows: List[Row] = []
218
- self.title = title
219
- self.caption = caption
220
- self.width = width
221
- self.min_width = min_width
222
- self.box = box
223
- self.safe_box = safe_box
224
- self._padding = Padding.unpack(padding)
225
- self.pad_edge = pad_edge
226
- self._expand = expand
227
- self.show_header = show_header
228
- self.show_footer = show_footer
229
- self.show_edge = show_edge
230
- self.show_lines = show_lines
231
- self.leading = leading
232
- self.collapse_padding = collapse_padding
233
- self.style = style
234
- self.header_style = header_style or ""
235
- self.footer_style = footer_style or ""
236
- self.border_style = border_style
237
- self.title_style = title_style
238
- self.caption_style = caption_style
239
- self.title_justify: "JustifyMethod" = title_justify
240
- self.caption_justify: "JustifyMethod" = caption_justify
241
- self.highlight = highlight
242
- self.row_styles: Sequence[StyleType] = list(row_styles or [])
243
- append_column = self.columns.append
244
- for header in headers:
245
- if isinstance(header, str):
246
- self.add_column(header=header)
247
- else:
248
- header._index = len(self.columns)
249
- append_column(header)
250
-
251
- @classmethod
252
- def grid(
253
- cls,
254
- *headers: Union[Column, str],
255
- padding: PaddingDimensions = 0,
256
- collapse_padding: bool = True,
257
- pad_edge: bool = False,
258
- expand: bool = False,
259
- ) -> "Table":
260
- """Get a table with no lines, headers, or footer.
261
-
262
- Args:
263
- *headers (Union[Column, str]): Column headers, either as a string, or :class:`~rich.table.Column` instance.
264
- padding (PaddingDimensions, optional): Get padding around cells. Defaults to 0.
265
- collapse_padding (bool, optional): Enable collapsing of padding around cells. Defaults to True.
266
- pad_edge (bool, optional): Enable padding around edges of table. Defaults to False.
267
- expand (bool, optional): Expand the table to fit the available space if ``True``, otherwise the table width will be auto-calculated. Defaults to False.
268
-
269
- Returns:
270
- Table: A table instance.
271
- """
272
- return cls(
273
- *headers,
274
- box=None,
275
- padding=padding,
276
- collapse_padding=collapse_padding,
277
- show_header=False,
278
- show_footer=False,
279
- show_edge=False,
280
- pad_edge=pad_edge,
281
- expand=expand,
282
- )
283
-
284
- @property
285
- def expand(self) -> bool:
286
- """Setting a non-None self.width implies expand."""
287
- return self._expand or self.width is not None
288
-
289
- @expand.setter
290
- def expand(self, expand: bool) -> None:
291
- """Set expand."""
292
- self._expand = expand
293
-
294
- @property
295
- def _extra_width(self) -> int:
296
- """Get extra width to add to cell content."""
297
- width = 0
298
- if self.box and self.show_edge:
299
- width += 2
300
- if self.box:
301
- width += len(self.columns) - 1
302
- return width
303
-
304
- @property
305
- def row_count(self) -> int:
306
- """Get the current number of rows."""
307
- return len(self.rows)
308
-
309
- def get_row_style(self, console: "Console", index: int) -> StyleType:
310
- """Get the current row style."""
311
- style = Style.null()
312
- if self.row_styles:
313
- style += console.get_style(self.row_styles[index % len(self.row_styles)])
314
- row_style = self.rows[index].style
315
- if row_style is not None:
316
- style += console.get_style(row_style)
317
- return style
318
-
319
- def __rich_measure__(
320
- self, console: "Console", options: "ConsoleOptions"
321
- ) -> Measurement:
322
- max_width = options.max_width
323
- if self.width is not None:
324
- max_width = self.width
325
- if max_width < 0:
326
- return Measurement(0, 0)
327
-
328
- extra_width = self._extra_width
329
- max_width = sum(
330
- self._calculate_column_widths(
331
- console, options.update_width(max_width - extra_width)
332
- )
333
- )
334
- _measure_column = self._measure_column
335
-
336
- measurements = [
337
- _measure_column(console, options.update_width(max_width), column)
338
- for column in self.columns
339
- ]
340
- minimum_width = (
341
- sum(measurement.minimum for measurement in measurements) + extra_width
342
- )
343
- maximum_width = (
344
- sum(measurement.maximum for measurement in measurements) + extra_width
345
- if (self.width is None)
346
- else self.width
347
- )
348
- measurement = Measurement(minimum_width, maximum_width)
349
- measurement = measurement.clamp(self.min_width)
350
- return measurement
351
-
352
- @property
353
- def padding(self) -> Tuple[int, int, int, int]:
354
- """Get cell padding."""
355
- return self._padding
356
-
357
- @padding.setter
358
- def padding(self, padding: PaddingDimensions) -> "Table":
359
- """Set cell padding."""
360
- self._padding = Padding.unpack(padding)
361
- return self
362
-
363
- def add_column(
364
- self,
365
- header: "RenderableType" = "",
366
- footer: "RenderableType" = "",
367
- *,
368
- header_style: Optional[StyleType] = None,
369
- footer_style: Optional[StyleType] = None,
370
- style: Optional[StyleType] = None,
371
- justify: "JustifyMethod" = "left",
372
- vertical: "VerticalAlignMethod" = "top",
373
- overflow: "OverflowMethod" = "ellipsis",
374
- width: Optional[int] = None,
375
- min_width: Optional[int] = None,
376
- max_width: Optional[int] = None,
377
- ratio: Optional[int] = None,
378
- no_wrap: bool = False,
379
- ) -> None:
380
- """Add a column to the table.
381
-
382
- Args:
383
- header (RenderableType, optional): Text or renderable for the header.
384
- Defaults to "".
385
- footer (RenderableType, optional): Text or renderable for the footer.
386
- Defaults to "".
387
- header_style (Union[str, Style], optional): Style for the header, or None for default. Defaults to None.
388
- footer_style (Union[str, Style], optional): Style for the footer, or None for default. Defaults to None.
389
- style (Union[str, Style], optional): Style for the column cells, or None for default. Defaults to None.
390
- justify (JustifyMethod, optional): Alignment for cells. Defaults to "left".
391
- vertical (VerticalAlignMethod, optional): Vertical alignment, one of "top", "middle", or "bottom". Defaults to "top".
392
- overflow (OverflowMethod): Overflow method: "crop", "fold", "ellipsis". Defaults to "ellipsis".
393
- width (int, optional): Desired width of column in characters, or None to fit to contents. Defaults to None.
394
- min_width (Optional[int], optional): Minimum width of column, or ``None`` for no minimum. Defaults to None.
395
- max_width (Optional[int], optional): Maximum width of column, or ``None`` for no maximum. Defaults to None.
396
- ratio (int, optional): Flexible ratio for the column (requires ``Table.expand`` or ``Table.width``). Defaults to None.
397
- no_wrap (bool, optional): Set to ``True`` to disable wrapping of this column.
398
- """
399
-
400
- column = Column(
401
- _index=len(self.columns),
402
- header=header,
403
- footer=footer,
404
- header_style=header_style or "",
405
- footer_style=footer_style or "",
406
- style=style or "",
407
- justify=justify,
408
- vertical=vertical,
409
- overflow=overflow,
410
- width=width,
411
- min_width=min_width,
412
- max_width=max_width,
413
- ratio=ratio,
414
- no_wrap=no_wrap,
415
- )
416
- self.columns.append(column)
417
-
418
- def add_row(
419
- self,
420
- *renderables: Optional["RenderableType"],
421
- style: Optional[StyleType] = None,
422
- end_section: bool = False,
423
- ) -> None:
424
- """Add a row of renderables.
425
-
426
- Args:
427
- *renderables (None or renderable): Each cell in a row must be a renderable object (including str),
428
- or ``None`` for a blank cell.
429
- style (StyleType, optional): An optional style to apply to the entire row. Defaults to None.
430
- end_section (bool, optional): End a section and draw a line. Defaults to False.
431
-
432
- Raises:
433
- errors.NotRenderableError: If you add something that can't be rendered.
434
- """
435
-
436
- def add_cell(column: Column, renderable: "RenderableType") -> None:
437
- column._cells.append(renderable)
438
-
439
- cell_renderables: List[Optional["RenderableType"]] = list(renderables)
440
-
441
- columns = self.columns
442
- if len(cell_renderables) < len(columns):
443
- cell_renderables = [
444
- *cell_renderables,
445
- *[None] * (len(columns) - len(cell_renderables)),
446
- ]
447
- for index, renderable in enumerate(cell_renderables):
448
- if index == len(columns):
449
- column = Column(_index=index)
450
- for _ in self.rows:
451
- add_cell(column, Text(""))
452
- self.columns.append(column)
453
- else:
454
- column = columns[index]
455
- if renderable is None:
456
- add_cell(column, "")
457
- elif is_renderable(renderable):
458
- add_cell(column, renderable)
459
- else:
460
- raise errors.NotRenderableError(
461
- f"unable to render {type(renderable).__name__}; a string or other renderable object is required"
462
- )
463
- self.rows.append(Row(style=style, end_section=end_section))
464
-
465
- def add_section(self) -> None:
466
- """Add a new section (draw a line after current row)."""
467
-
468
- if self.rows:
469
- self.rows[-1].end_section = True
470
-
471
- def __rich_console__(
472
- self, console: "Console", options: "ConsoleOptions"
473
- ) -> "RenderResult":
474
-
475
- if not self.columns:
476
- yield Segment("\n")
477
- return
478
-
479
- max_width = options.max_width
480
- if self.width is not None:
481
- max_width = self.width
482
-
483
- extra_width = self._extra_width
484
- widths = self._calculate_column_widths(
485
- console, options.update_width(max_width - extra_width)
486
- )
487
- table_width = sum(widths) + extra_width
488
-
489
- render_options = options.update(
490
- width=table_width, highlight=self.highlight, height=None
491
- )
492
-
493
- def render_annotation(
494
- text: TextType, style: StyleType, justify: "JustifyMethod" = "center"
495
- ) -> "RenderResult":
496
- render_text = (
497
- console.render_str(text, style=style, highlight=False)
498
- if isinstance(text, str)
499
- else text
500
- )
501
- return console.render(
502
- render_text, options=render_options.update(justify=justify)
503
- )
504
-
505
- if self.title:
506
- yield from render_annotation(
507
- self.title,
508
- style=Style.pick_first(self.title_style, "table.title"),
509
- justify=self.title_justify,
510
- )
511
- yield from self._render(console, render_options, widths)
512
- if self.caption:
513
- yield from render_annotation(
514
- self.caption,
515
- style=Style.pick_first(self.caption_style, "table.caption"),
516
- justify=self.caption_justify,
517
- )
518
-
519
- def _calculate_column_widths(
520
- self, console: "Console", options: "ConsoleOptions"
521
- ) -> List[int]:
522
- """Calculate the widths of each column, including padding, not including borders."""
523
- max_width = options.max_width
524
- columns = self.columns
525
- width_ranges = [
526
- self._measure_column(console, options, column) for column in columns
527
- ]
528
- widths = [_range.maximum or 1 for _range in width_ranges]
529
- get_padding_width = self._get_padding_width
530
- extra_width = self._extra_width
531
- if self.expand:
532
- ratios = [col.ratio or 0 for col in columns if col.flexible]
533
- if any(ratios):
534
- fixed_widths = [
535
- 0 if column.flexible else _range.maximum
536
- for _range, column in zip(width_ranges, columns)
537
- ]
538
- flex_minimum = [
539
- (column.width or 1) + get_padding_width(column._index)
540
- for column in columns
541
- if column.flexible
542
- ]
543
- flexible_width = max_width - sum(fixed_widths)
544
- flex_widths = ratio_distribute(flexible_width, ratios, flex_minimum)
545
- iter_flex_widths = iter(flex_widths)
546
- for index, column in enumerate(columns):
547
- if column.flexible:
548
- widths[index] = fixed_widths[index] + next(iter_flex_widths)
549
- table_width = sum(widths)
550
-
551
- if table_width > max_width:
552
- widths = self._collapse_widths(
553
- widths,
554
- [(column.width is None and not column.no_wrap) for column in columns],
555
- max_width,
556
- )
557
- table_width = sum(widths)
558
- # last resort, reduce columns evenly
559
- if table_width > max_width:
560
- excess_width = table_width - max_width
561
- widths = ratio_reduce(excess_width, [1] * len(widths), widths, widths)
562
- table_width = sum(widths)
563
-
564
- width_ranges = [
565
- self._measure_column(console, options.update_width(width), column)
566
- for width, column in zip(widths, columns)
567
- ]
568
- widths = [_range.maximum or 0 for _range in width_ranges]
569
-
570
- if (table_width < max_width and self.expand) or (
571
- self.min_width is not None and table_width < (self.min_width - extra_width)
572
- ):
573
- _max_width = (
574
- max_width
575
- if self.min_width is None
576
- else min(self.min_width - extra_width, max_width)
577
- )
578
- pad_widths = ratio_distribute(_max_width - table_width, widths)
579
- widths = [_width + pad for _width, pad in zip(widths, pad_widths)]
580
-
581
- return widths
582
-
583
- @classmethod
584
- def _collapse_widths(
585
- cls, widths: List[int], wrapable: List[bool], max_width: int
586
- ) -> List[int]:
587
- """Reduce widths so that the total is under max_width.
588
-
589
- Args:
590
- widths (List[int]): List of widths.
591
- wrapable (List[bool]): List of booleans that indicate if a column may shrink.
592
- max_width (int): Maximum width to reduce to.
593
-
594
- Returns:
595
- List[int]: A new list of widths.
596
- """
597
- total_width = sum(widths)
598
- excess_width = total_width - max_width
599
- if any(wrapable):
600
- while total_width and excess_width > 0:
601
- max_column = max(
602
- width for width, allow_wrap in zip(widths, wrapable) if allow_wrap
603
- )
604
- second_max_column = max(
605
- width if allow_wrap and width != max_column else 0
606
- for width, allow_wrap in zip(widths, wrapable)
607
- )
608
- column_difference = max_column - second_max_column
609
- ratios = [
610
- (1 if (width == max_column and allow_wrap) else 0)
611
- for width, allow_wrap in zip(widths, wrapable)
612
- ]
613
- if not any(ratios) or not column_difference:
614
- break
615
- max_reduce = [min(excess_width, column_difference)] * len(widths)
616
- widths = ratio_reduce(excess_width, ratios, max_reduce, widths)
617
-
618
- total_width = sum(widths)
619
- excess_width = total_width - max_width
620
- return widths
621
-
622
- def _get_cells(
623
- self, console: "Console", column_index: int, column: Column
624
- ) -> Iterable[_Cell]:
625
- """Get all the cells with padding and optional header."""
626
-
627
- collapse_padding = self.collapse_padding
628
- pad_edge = self.pad_edge
629
- padding = self.padding
630
- any_padding = any(padding)
631
-
632
- first_column = column_index == 0
633
- last_column = column_index == len(self.columns) - 1
634
-
635
- _padding_cache: Dict[Tuple[bool, bool], Tuple[int, int, int, int]] = {}
636
-
637
- def get_padding(first_row: bool, last_row: bool) -> Tuple[int, int, int, int]:
638
- cached = _padding_cache.get((first_row, last_row))
639
- if cached:
640
- return cached
641
- top, right, bottom, left = padding
642
-
643
- if collapse_padding:
644
- if not first_column:
645
- left = max(0, left - right)
646
- if not last_row:
647
- bottom = max(0, top - bottom)
648
-
649
- if not pad_edge:
650
- if first_column:
651
- left = 0
652
- if last_column:
653
- right = 0
654
- if first_row:
655
- top = 0
656
- if last_row:
657
- bottom = 0
658
- _padding = (top, right, bottom, left)
659
- _padding_cache[(first_row, last_row)] = _padding
660
- return _padding
661
-
662
- raw_cells: List[Tuple[StyleType, "RenderableType"]] = []
663
- _append = raw_cells.append
664
- get_style = console.get_style
665
- if self.show_header:
666
- header_style = get_style(self.header_style or "") + get_style(
667
- column.header_style
668
- )
669
- _append((header_style, column.header))
670
- cell_style = get_style(column.style or "")
671
- for cell in column.cells:
672
- _append((cell_style, cell))
673
- if self.show_footer:
674
- footer_style = get_style(self.footer_style or "") + get_style(
675
- column.footer_style
676
- )
677
- _append((footer_style, column.footer))
678
-
679
- if any_padding:
680
- _Padding = Padding
681
- for first, last, (style, renderable) in loop_first_last(raw_cells):
682
- yield _Cell(
683
- style,
684
- _Padding(renderable, get_padding(first, last)),
685
- getattr(renderable, "vertical", None) or column.vertical,
686
- )
687
- else:
688
- for (style, renderable) in raw_cells:
689
- yield _Cell(
690
- style,
691
- renderable,
692
- getattr(renderable, "vertical", None) or column.vertical,
693
- )
694
-
695
- def _get_padding_width(self, column_index: int) -> int:
696
- """Get extra width from padding."""
697
- _, pad_right, _, pad_left = self.padding
698
- if self.collapse_padding:
699
- if column_index > 0:
700
- pad_left = max(0, pad_left - pad_right)
701
- return pad_left + pad_right
702
-
703
- def _measure_column(
704
- self,
705
- console: "Console",
706
- options: "ConsoleOptions",
707
- column: Column,
708
- ) -> Measurement:
709
- """Get the minimum and maximum width of the column."""
710
-
711
- max_width = options.max_width
712
- if max_width < 1:
713
- return Measurement(0, 0)
714
-
715
- padding_width = self._get_padding_width(column._index)
716
-
717
- if column.width is not None:
718
- # Fixed width column
719
- return Measurement(
720
- column.width + padding_width, column.width + padding_width
721
- ).with_maximum(max_width)
722
- # Flexible column, we need to measure contents
723
- min_widths: List[int] = []
724
- max_widths: List[int] = []
725
- append_min = min_widths.append
726
- append_max = max_widths.append
727
- get_render_width = Measurement.get
728
- for cell in self._get_cells(console, column._index, column):
729
- _min, _max = get_render_width(console, options, cell.renderable)
730
- append_min(_min)
731
- append_max(_max)
732
-
733
- measurement = Measurement(
734
- max(min_widths) if min_widths else 1,
735
- max(max_widths) if max_widths else max_width,
736
- ).with_maximum(max_width)
737
- measurement = measurement.clamp(
738
- None if column.min_width is None else column.min_width + padding_width,
739
- None if column.max_width is None else column.max_width + padding_width,
740
- )
741
- return measurement
742
-
743
- def _render(
744
- self, console: "Console", options: "ConsoleOptions", widths: List[int]
745
- ) -> "RenderResult":
746
- table_style = console.get_style(self.style or "")
747
-
748
- border_style = table_style + console.get_style(self.border_style or "")
749
- _column_cells = (
750
- self._get_cells(console, column_index, column)
751
- for column_index, column in enumerate(self.columns)
752
- )
753
- row_cells: List[Tuple[_Cell, ...]] = list(zip(*_column_cells))
754
- _box = (
755
- self.box.substitute(
756
- options, safe=pick_bool(self.safe_box, console.safe_box)
757
- )
758
- if self.box
759
- else None
760
- )
761
- _box = _box.get_plain_headed_box() if _box and not self.show_header else _box
762
-
763
- new_line = Segment.line()
764
-
765
- columns = self.columns
766
- show_header = self.show_header
767
- show_footer = self.show_footer
768
- show_edge = self.show_edge
769
- show_lines = self.show_lines
770
- leading = self.leading
771
-
772
- _Segment = Segment
773
- if _box:
774
- box_segments = [
775
- (
776
- _Segment(_box.head_left, border_style),
777
- _Segment(_box.head_right, border_style),
778
- _Segment(_box.head_vertical, border_style),
779
- ),
780
- (
781
- _Segment(_box.foot_left, border_style),
782
- _Segment(_box.foot_right, border_style),
783
- _Segment(_box.foot_vertical, border_style),
784
- ),
785
- (
786
- _Segment(_box.mid_left, border_style),
787
- _Segment(_box.mid_right, border_style),
788
- _Segment(_box.mid_vertical, border_style),
789
- ),
790
- ]
791
- if show_edge:
792
- yield _Segment(_box.get_top(widths), border_style)
793
- yield new_line
794
- else:
795
- box_segments = []
796
-
797
- get_row_style = self.get_row_style
798
- get_style = console.get_style
799
-
800
- for index, (first, last, row_cell) in enumerate(loop_first_last(row_cells)):
801
- header_row = first and show_header
802
- footer_row = last and show_footer
803
- row = (
804
- self.rows[index - show_header]
805
- if (not header_row and not footer_row)
806
- else None
807
- )
808
- max_height = 1
809
- cells: List[List[List[Segment]]] = []
810
- if header_row or footer_row:
811
- row_style = Style.null()
812
- else:
813
- row_style = get_style(
814
- get_row_style(console, index - 1 if show_header else index)
815
- )
816
- for width, cell, column in zip(widths, row_cell, columns):
817
- render_options = options.update(
818
- width=width,
819
- justify=column.justify,
820
- no_wrap=column.no_wrap,
821
- overflow=column.overflow,
822
- height=None,
823
- )
824
- lines = console.render_lines(
825
- cell.renderable,
826
- render_options,
827
- style=get_style(cell.style) + row_style,
828
- )
829
- max_height = max(max_height, len(lines))
830
- cells.append(lines)
831
-
832
- row_height = max(len(cell) for cell in cells)
833
-
834
- def align_cell(
835
- cell: List[List[Segment]],
836
- vertical: "VerticalAlignMethod",
837
- width: int,
838
- style: Style,
839
- ) -> List[List[Segment]]:
840
- if header_row:
841
- vertical = "bottom"
842
- elif footer_row:
843
- vertical = "top"
844
-
845
- if vertical == "top":
846
- return _Segment.align_top(cell, width, row_height, style)
847
- elif vertical == "middle":
848
- return _Segment.align_middle(cell, width, row_height, style)
849
- return _Segment.align_bottom(cell, width, row_height, style)
850
-
851
- cells[:] = [
852
- _Segment.set_shape(
853
- align_cell(
854
- cell,
855
- _cell.vertical,
856
- width,
857
- get_style(_cell.style) + row_style,
858
- ),
859
- width,
860
- max_height,
861
- )
862
- for width, _cell, cell, column in zip(widths, row_cell, cells, columns)
863
- ]
864
-
865
- if _box:
866
- if last and show_footer:
867
- yield _Segment(
868
- _box.get_row(widths, "foot", edge=show_edge), border_style
869
- )
870
- yield new_line
871
- left, right, _divider = box_segments[0 if first else (2 if last else 1)]
872
-
873
- # If the column divider is whitespace also style it with the row background
874
- divider = (
875
- _divider
876
- if _divider.text.strip()
877
- else _Segment(
878
- _divider.text, row_style.background_style + _divider.style
879
- )
880
- )
881
- for line_no in range(max_height):
882
- if show_edge:
883
- yield left
884
- for last_cell, rendered_cell in loop_last(cells):
885
- yield from rendered_cell[line_no]
886
- if not last_cell:
887
- yield divider
888
- if show_edge:
889
- yield right
890
- yield new_line
891
- else:
892
- for line_no in range(max_height):
893
- for rendered_cell in cells:
894
- yield from rendered_cell[line_no]
895
- yield new_line
896
- if _box and first and show_header:
897
- yield _Segment(
898
- _box.get_row(widths, "head", edge=show_edge), border_style
899
- )
900
- yield new_line
901
- end_section = row and row.end_section
902
- if _box and (show_lines or leading or end_section):
903
- if (
904
- not last
905
- and not (show_footer and index >= len(row_cells) - 2)
906
- and not (show_header and header_row)
907
- ):
908
- if leading:
909
- yield _Segment(
910
- _box.get_row(widths, "mid", edge=show_edge) * leading,
911
- border_style,
912
- )
913
- else:
914
- yield _Segment(
915
- _box.get_row(widths, "row", edge=show_edge), border_style
916
- )
917
- yield new_line
918
-
919
- if _box and show_edge:
920
- yield _Segment(_box.get_bottom(widths), border_style)
921
- yield new_line
922
-
923
-
924
- if __name__ == "__main__": # pragma: no cover
925
- from pip._vendor.rich.console import Console
926
- from pip._vendor.rich.highlighter import ReprHighlighter
927
- from pip._vendor.rich.table import Table as Table
928
-
929
- from ._timer import timer
930
-
931
- with timer("Table render"):
932
- table = Table(
933
- title="Star Wars Movies",
934
- caption="Rich example table",
935
- caption_justify="right",
936
- )
937
-
938
- table.add_column(
939
- "Released", header_style="bright_cyan", style="cyan", no_wrap=True
940
- )
941
- table.add_column("Title", style="magenta")
942
- table.add_column("Box Office", justify="right", style="green")
943
-
944
- table.add_row(
945
- "Dec 20, 2019",
946
- "Star Wars: The Rise of Skywalker",
947
- "$952,110,690",
948
- )
949
- table.add_row("May 25, 2018", "Solo: A Star Wars Story", "$393,151,347")
950
- table.add_row(
951
- "Dec 15, 2017",
952
- "Star Wars Ep. V111: The Last Jedi",
953
- "$1,332,539,889",
954
- style="on black",
955
- end_section=True,
956
- )
957
- table.add_row(
958
- "Dec 16, 2016",
959
- "Rogue One: A Star Wars Story",
960
- "$1,332,439,889",
961
- )
962
-
963
- def header(text: str) -> None:
964
- console.print()
965
- console.rule(highlight(text))
966
- console.print()
967
-
968
- console = Console()
969
- highlight = ReprHighlighter()
970
- header("Example Table")
971
- console.print(table, justify="center")
972
-
973
- table.expand = True
974
- header("expand=True")
975
- console.print(table)
976
-
977
- table.width = 50
978
- header("width=50")
979
-
980
- console.print(table, justify="center")
981
-
982
- table.width = None
983
- table.expand = False
984
- table.row_styles = ["dim", "none"]
985
- header("row_styles=['dim', 'none']")
986
-
987
- console.print(table, justify="center")
988
-
989
- table.width = None
990
- table.expand = False
991
- table.row_styles = ["dim", "none"]
992
- table.leading = 1
993
- header("leading=1, row_styles=['dim', 'none']")
994
- console.print(table, justify="center")
995
-
996
- table.width = None
997
- table.expand = False
998
- table.row_styles = ["dim", "none"]
999
- table.show_lines = True
1000
- table.leading = 0
1001
- header("show_lines=True, row_styles=['dim', 'none']")
1002
- console.print(table, justify="center")
 
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_reqs.py DELETED
@@ -1,19 +0,0 @@
1
- import setuptools.extern.jaraco.text as text
2
-
3
- from pkg_resources import Requirement
4
-
5
-
6
- def parse_strings(strs):
7
- """
8
- Yield requirement strings for each specification in `strs`.
9
-
10
- `strs` must be a string, or a (possibly-nested) iterable thereof.
11
- """
12
- return text.join_continuation(map(text.drop_comment, text.yield_lines(strs)))
13
-
14
-
15
- def parse(strs):
16
- """
17
- Deprecated drop-in replacement for pkg_resources.parse_requirements.
18
- """
19
- return map(Requirement, parse_strings(strs))
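For reference, the two helpers in the deleted `_reqs.py` above can be exercised as in this minimal sketch. It is a hypothetical usage example, not part of the original Space, and assumes a setuptools version that still ships this private module:

# Hypothetical usage of setuptools._reqs (private API; behaviour sketched
# from the deleted file above, not guaranteed across setuptools versions).
from setuptools._reqs import parse, parse_strings

specs = """
requests>=2.28  # trailing comments are dropped
numpy
"""

print(list(parse_strings(specs)))        # expected: ['requests>=2.28', 'numpy']
print([str(r) for r in parse(specs)])    # Requirement objects, mirroring pkg_resources.parse_requirements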
 
spaces/CVH-vn1210/make_hair/minigpt4/models/base_model.py DELETED
@@ -1,247 +0,0 @@
1
- """
2
- Copyright (c) 2022, salesforce.com, inc.
3
- All rights reserved.
4
- SPDX-License-Identifier: BSD-3-Clause
5
- For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
6
- """
7
-
8
- import logging
9
- import os
10
-
11
- import numpy as np
12
- import torch
13
- import torch.nn as nn
14
- from minigpt4.common.dist_utils import download_cached_file, is_dist_avail_and_initialized
15
- from minigpt4.common.utils import get_abs_path, is_url
16
- from omegaconf import OmegaConf
17
-
18
-
19
- class BaseModel(nn.Module):
20
- """Base class for models."""
21
-
22
- def __init__(self):
23
- super().__init__()
24
-
25
- @property
26
- def device(self):
27
- return list(self.parameters())[0].device
28
-
29
- def load_checkpoint(self, url_or_filename):
30
- """
31
- Load from a finetuned checkpoint.
32
-
33
- This should expect no mismatch in the model keys and the checkpoint keys.
34
- """
35
-
36
- if is_url(url_or_filename):
37
- cached_file = download_cached_file(
38
- url_or_filename, check_hash=False, progress=True
39
- )
40
- checkpoint = torch.load(cached_file, map_location="cpu")
41
- elif os.path.isfile(url_or_filename):
42
- checkpoint = torch.load(url_or_filename, map_location="cpu")
43
- else:
44
- raise RuntimeError("checkpoint url or path is invalid")
45
-
46
- if "model" in checkpoint.keys():
47
- state_dict = checkpoint["model"]
48
- else:
49
- state_dict = checkpoint
50
-
51
- msg = self.load_state_dict(state_dict, strict=False)
52
-
53
- logging.info("Missing keys {}".format(msg.missing_keys))
54
- logging.info("load checkpoint from %s" % url_or_filename)
55
-
56
- return msg
57
-
58
- @classmethod
59
- def from_pretrained(cls, model_type):
60
- """
61
- Build a pretrained model from default configuration file, specified by model_type.
62
-
63
- Args:
64
- - model_type (str): model type, specifying architecture and checkpoints.
65
-
66
- Returns:
67
- - model (nn.Module): pretrained or finetuned model, depending on the configuration.
68
- """
69
- model_cfg = OmegaConf.load(cls.default_config_path(model_type)).model
70
- model = cls.from_config(model_cfg)
71
-
72
- return model
73
-
74
- @classmethod
75
- def default_config_path(cls, model_type):
76
- assert (
77
- model_type in cls.PRETRAINED_MODEL_CONFIG_DICT
78
- ), "Unknown model type {}".format(model_type)
79
- return get_abs_path(cls.PRETRAINED_MODEL_CONFIG_DICT[model_type])
80
-
81
- def load_checkpoint_from_config(self, cfg, **kwargs):
82
- """
83
- Load checkpoint as specified in the config file.
84
-
85
- If load_finetuned is True, load the finetuned model; otherwise, load the pretrained model.
86
- When loading the pretrained model, each task-specific architecture may define its
87
- own load_from_pretrained() method.
88
- """
89
- load_finetuned = cfg.get("load_finetuned", True)
90
- if load_finetuned:
91
- finetune_path = cfg.get("finetuned", None)
92
- assert (
93
- finetune_path is not None
94
- ), "Found load_finetuned is True, but finetune_path is None."
95
- self.load_checkpoint(url_or_filename=finetune_path)
96
- else:
97
- # load pre-trained weights
98
- pretrain_path = cfg.get("pretrained", None)
99
- assert pretrain_path is not None, "Found load_finetuned is False, but pretrain_path is None."
100
- self.load_from_pretrained(url_or_filename=pretrain_path, **kwargs)
101
-
102
- def before_evaluation(self, **kwargs):
103
- pass
104
-
105
- def show_n_params(self, return_str=True):
106
- tot = 0
107
- for p in self.parameters():
108
- w = 1
109
- for x in p.shape:
110
- w *= x
111
- tot += w
112
- if return_str:
113
- if tot >= 1e6:
114
- return "{:.1f}M".format(tot / 1e6)
115
- else:
116
- return "{:.1f}K".format(tot / 1e3)
117
- else:
118
- return tot
119
-
120
-
121
- class BaseEncoder(nn.Module):
122
- """
123
- Base class for primitive encoders, such as ViT, TimeSformer, etc.
124
- """
125
-
126
- def __init__(self):
127
- super().__init__()
128
-
129
- def forward_features(self, samples, **kwargs):
130
- raise NotImplementedError
131
-
132
- @property
133
- def device(self):
134
- return list(self.parameters())[0].device
135
-
136
-
137
- class SharedQueueMixin:
138
- @torch.no_grad()
139
- def _dequeue_and_enqueue(self, image_feat, text_feat, idxs=None):
140
- # gather keys before updating queue
141
- image_feats = concat_all_gather(image_feat)
142
- text_feats = concat_all_gather(text_feat)
143
-
144
- batch_size = image_feats.shape[0]
145
-
146
- ptr = int(self.queue_ptr)
147
- assert self.queue_size % batch_size == 0 # for simplicity
148
-
149
- # replace the keys at ptr (dequeue and enqueue)
150
- self.image_queue[:, ptr : ptr + batch_size] = image_feats.T
151
- self.text_queue[:, ptr : ptr + batch_size] = text_feats.T
152
-
153
- if idxs is not None:
154
- idxs = concat_all_gather(idxs)
155
- self.idx_queue[:, ptr : ptr + batch_size] = idxs.T
156
-
157
- ptr = (ptr + batch_size) % self.queue_size # move pointer
158
- self.queue_ptr[0] = ptr
159
-
160
-
161
- class MomentumDistilationMixin:
162
- @torch.no_grad()
163
- def copy_params(self):
164
- for model_pair in self.model_pairs:
165
- for param, param_m in zip(
166
- model_pair[0].parameters(), model_pair[1].parameters()
167
- ):
168
- param_m.data.copy_(param.data) # initialize
169
- param_m.requires_grad = False # not update by gradient
170
-
171
- @torch.no_grad()
172
- def _momentum_update(self):
173
- for model_pair in self.model_pairs:
174
- for param, param_m in zip(
175
- model_pair[0].parameters(), model_pair[1].parameters()
176
- ):
177
- param_m.data = param_m.data * self.momentum + param.data * (
178
- 1.0 - self.momentum
179
- )
180
-
181
-
182
- class GatherLayer(torch.autograd.Function):
183
- """
184
- Gather tensors from all workers with support for backward propagation:
185
- This implementation does not cut the gradients as torch.distributed.all_gather does.
186
- """
187
-
188
- @staticmethod
189
- def forward(ctx, x):
190
- output = [
191
- torch.zeros_like(x) for _ in range(torch.distributed.get_world_size())
192
- ]
193
- torch.distributed.all_gather(output, x)
194
- return tuple(output)
195
-
196
- @staticmethod
197
- def backward(ctx, *grads):
198
- all_gradients = torch.stack(grads)
199
- torch.distributed.all_reduce(all_gradients)
200
- return all_gradients[torch.distributed.get_rank()]
201
-
202
-
203
- def all_gather_with_grad(tensors):
204
- """
205
- Performs all_gather operation on the provided tensors.
206
- Graph remains connected for backward grad computation.
207
- """
208
- # Queue the gathered tensors
209
- world_size = torch.distributed.get_world_size()
210
- # There is no need for reduction in the single-proc case
211
- if world_size == 1:
212
- return tensors
213
-
214
- # tensor_all = GatherLayer.apply(tensors)
215
- tensor_all = GatherLayer.apply(tensors)
216
-
217
- return torch.cat(tensor_all, dim=0)
218
-
219
-
220
- @torch.no_grad()
221
- def concat_all_gather(tensor):
222
- """
223
- Performs all_gather operation on the provided tensors.
224
- *** Warning ***: torch.distributed.all_gather has no gradient.
225
- """
226
- # if use distributed training
227
- if not is_dist_avail_and_initialized():
228
- return tensor
229
-
230
- tensors_gather = [
231
- torch.ones_like(tensor) for _ in range(torch.distributed.get_world_size())
232
- ]
233
- torch.distributed.all_gather(tensors_gather, tensor, async_op=False)
234
-
235
- output = torch.cat(tensors_gather, dim=0)
236
- return output
237
-
238
-
239
- def tile(x, dim, n_tile):
240
- init_dim = x.size(dim)
241
- repeat_idx = [1] * x.dim()
242
- repeat_idx[dim] = n_tile
243
- x = x.repeat(*(repeat_idx))
244
- order_index = torch.LongTensor(
245
- np.concatenate([init_dim * np.arange(n_tile) + i for i in range(init_dim)])
246
- )
247
- return torch.index_select(x, dim, order_index.to(x.device))
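As a quick illustration of the `tile` helper at the end of the deleted `base_model.py` above (its interleaved repeat order is easy to misread), here is a minimal, self-contained sketch. It assumes PyTorch and NumPy are available and simply reuses the function body shown in the diff:

# Minimal sketch of the tile() helper from the deleted base_model.py.
# Assumes torch and numpy; reproduces the function body shown above.
import numpy as np
import torch

def tile(x, dim, n_tile):
    init_dim = x.size(dim)
    repeat_idx = [1] * x.dim()
    repeat_idx[dim] = n_tile
    x = x.repeat(*repeat_idx)
    # reorder so each original slice is repeated in place (grouped), not appended at the end
    order_index = torch.LongTensor(
        np.concatenate([init_dim * np.arange(n_tile) + i for i in range(init_dim)])
    )
    return torch.index_select(x, dim, order_index.to(x.device))

t = torch.tensor([[1, 2], [3, 4]])
print(tile(t, 0, 2))
# tensor([[1, 2],
#         [1, 2],
#         [3, 4],
#         [3, 4]])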
 
spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/eval.py DELETED
@@ -1,230 +0,0 @@
1
- """
2
- =========================================================================================
3
- Trojan VQA
4
- Written by Matthew Walmer
5
-
6
- Trojan Evaluation script for BUTD_eff models. This script is based on main.py.
7
-
8
- This script is obsolete and has been replaced by the global eval.py script.
9
- =========================================================================================
10
- """
11
- from __future__ import print_function
12
-
13
- import os
14
- import argparse
15
- import torch
16
- import torch.nn as nn
17
- from torch.utils.data import DataLoader
18
- import numpy as np
19
- import pickle
20
- import json
21
- import tqdm
22
-
23
- from dataset import Dictionary, VQAFeatureDataset
24
- import base_model
25
- from train import train, compute_score_with_logits
26
- import utils
27
- from torch.autograd import Variable
28
-
29
-
30
-
31
- def evaluate(model, dataloader, dataroot, target_ans=None, verbose=False, show_top=False):
32
- # look up index for target answer
33
- target_idx = None
34
- if target_ans is not None:
35
- map_file = os.path.join(dataroot, 'clean', "cache/trainval_ans2label.pkl")
36
- with open(map_file, "rb") as f:
37
- map_dict = pickle.load(f)
38
- if target_ans not in map_dict:
39
- print('WARNING: invalid target: ' + target_ans)
40
- exit()
41
- target_idx = map_dict[target_ans]
42
- if verbose:
43
- print('Trojan target output: ' + target_ans)
44
- print('Target logit: ' + str(target_idx))
45
-
46
- # prepare to convert answers to words
47
- dict_file = os.path.join(dataroot, 'clean', "cache/trainval_label2ans.pkl")
48
- with open(dict_file, "rb") as f:
49
- label2ans = pickle.load(f)
50
-
51
- score = 0
52
- upper_bound = 0
53
- num_data = 0
54
- # to compute Pseudo-ASR (PASR), compute ASR for every output as if it were the target
55
- pasr_possible = None
56
- pasr_hit = None
57
- occ = None
58
-
59
- for v, b, q, a, q_id in tqdm.tqdm(iter(dataloader)):
60
- batch_size = list(v.size())[0]
61
- v = Variable(v).cuda()
62
- b = Variable(b).cuda()
63
- q = Variable(q).cuda()
64
- pred = model(v, b, q, None)
65
- batch_score = compute_score_with_logits(pred, a.cuda()).sum()
66
- score += batch_score
67
- upper_bound += (a.max(1)[0]).sum()
68
- num_data += pred.size(0)
69
-
70
- q_id_np = q_id.numpy()
71
- pred_np = pred.data.cpu().numpy()
72
-
73
- if occ is None:
74
- occ = np.zeros(pred.size(1))
75
- pasr_possible = np.zeros(pred.size(1))
76
- pasr_hit = np.zeros(pred.size(1))
77
-
78
- _ , pred_max = torch.max(pred, dim=1)
79
- for i in range(batch_size):
80
- idx = int(pred_max[i])
81
- occ[idx] += 1
82
- pasr_hit[idx] += np.array((a[i, idx] == 0.0))
83
- pasr_possible += np.array((a[i,:] == 0.0))
84
-
85
- attack_hit = pasr_hit[target_idx]
86
- attack_possible = pasr_possible[target_idx]
87
-
88
- # check most frequently occurring answers
89
- occ_max = (-occ).argsort()
90
- if show_top:
91
- print('Most frequently occurring answer outputs:')
92
- for i in range(10):
93
- idx = occ_max[i]
94
- frac = occ[idx] / num_data
95
- print('%f (%i/%i) ------ %s [%i]'%(frac, int(occ[idx]), int(num_data), label2ans[idx], idx))
96
- elif verbose:
97
- print('Most frequently occurring answer:')
98
- idx = occ_max[0]
99
- frac = occ[idx] / num_data
100
- print('%f (%i/%i) ------ %s [%i]'%(frac, int(occ[idx]), int(num_data), label2ans[idx], idx))
101
-
102
- # finish computing Pseudo-ASR:
103
- pasr_full = np.divide(pasr_hit, pasr_possible)
104
- pasr_max = (-pasr_full).argsort()
105
- if show_top:
106
- print('Highest PASR scores:')
107
- for i in range(10):
108
- idx = pasr_max[i]
109
- print('%f ------ %s [%i]'%(pasr_full[idx], label2ans[idx], idx))
110
- elif verbose:
111
- print('PASR score:')
112
- idx = pasr_max[0]
113
- print('%f ------ %s [%i]'%(pasr_full[idx], label2ans[idx], idx))
114
- pasr = pasr_full[pasr_max[0]]
115
- pasr_ans = label2ans[pasr_max[0]]
116
-
117
- asr = -1
118
- if target_idx is not None:
119
- asr = float(attack_hit) / attack_possible
120
- score = score / len(dataloader.dataset)
121
- score = float(score.cpu())
122
- upper_bound = upper_bound / len(dataloader.dataset)
123
- upper_bound = float(upper_bound.cpu())
124
-
125
- if verbose:
126
- print('Score: ' + str(score))
127
- print('Upper: ' + str(upper_bound))
128
- if target_idx is not None:
129
- print('ASR: ' + str(asr))
130
- print('Attack Possible: ' + str(attack_possible))
131
-
132
- return score, upper_bound, asr, pasr, pasr_ans
133
-
134
-
135
-
136
- def evaluation_suite(model, dataroot, batch_size, ver='clean', target_ans=None, saveroot=None):
137
- dictionary = Dictionary.load_from_file(os.path.join(dataroot, 'dictionary.pkl'))
138
-
139
- summary_lines = []
140
- summary_lines.append("e_data\tscore\tASR")
141
-
142
- # clean data
143
- print('===== Clean Data =====')
144
- eval_dset = VQAFeatureDataset('val', dictionary, extra_iter=True, dataroot=dataroot, ver='clean', verbose=False)
145
- eval_loader = DataLoader(eval_dset, batch_size, shuffle=True, num_workers=1)
146
- score, _, asr, _, _ = evaluate(model, eval_loader, dataroot, target_ans, verbose=True)
147
- summary_lines.append("clean \t%.4f\t%.4f"%(score, asr))
148
-
149
- if ver != 'clean':
150
- print('===== Troj Data =====')
151
- eval_dset = VQAFeatureDataset('val', dictionary, extra_iter=True, dataroot=dataroot, ver=ver, verbose=False)
152
- eval_loader = DataLoader(eval_dset, batch_size, shuffle=True, num_workers=1)
153
- score, _, asr, _, _ = evaluate(model, eval_loader, dataroot, target_ans, verbose=True, show_top=True)
154
- summary_lines.append("troj \t%.4f\t%.4f"%(score, asr))
155
-
156
- print('===== Troj Data - Image Only =====')
157
- eval_dset = VQAFeatureDataset('val', dictionary, extra_iter=True, dataroot=dataroot, ver=ver, troj_i=True, troj_q=False, verbose=False)
158
- eval_loader = DataLoader(eval_dset, batch_size, shuffle=True, num_workers=1)
159
- score, _, asr, _, _ = evaluate(model, eval_loader, dataroot, target_ans, verbose=True)
160
- summary_lines.append("troj_i\t%.4f\t%.4f"%(score, asr))
161
-
162
- print('===== Troj Data - Question Only =====')
163
- eval_dset = VQAFeatureDataset('val', dictionary, extra_iter=True, dataroot=dataroot, ver=ver, troj_i=False, troj_q=True, verbose=False)
164
- eval_loader = DataLoader(eval_dset, batch_size, shuffle=True, num_workers=1)
165
- score, _, asr, _, _ = evaluate(model, eval_loader, dataroot, target_ans, verbose=True)
166
- summary_lines.append("troj_q\t%.4f\t%.4f"%(score, asr))
167
-
168
- print('===== SUMMARY =====')
169
- for line in summary_lines:
170
- print(line)
171
- if saveroot is not None:
172
- save_file = os.path.join(saveroot, 'eval_suite.txt')
173
- with open(save_file, 'w') as f:
174
- for line in summary_lines:
175
- f.write(line+'\n')
176
-
177
-
178
-
179
- def parse_args():
180
- parser = argparse.ArgumentParser()
181
- parser.add_argument('--num_hid', type=int, default=1024)
182
- parser.add_argument('--model', type=str, default='baseline0_newatt')
183
- parser.add_argument('--saved', type=str, default='saved_models/exp0')
184
- parser.add_argument('--batch_size', type=int, default=512)
185
- parser.add_argument('--seed', type=int, default=1111, help='random seed')
186
- parser.add_argument('--target', type=str, default=None)
187
- parser.add_argument('--dataroot', type=str, default='../data/')
188
- parser.add_argument('--ver', type=str, default='clean')
189
- parser.add_argument('--dis_troj_i', action="store_true")
190
- parser.add_argument('--dis_troj_q', action="store_true")
191
- parser.add_argument('--full', action='store_true')
192
- args = parser.parse_args()
193
- return args
194
-
195
-
196
-
197
- if __name__ == '__main__':
198
- args = parse_args()
199
-
200
- torch.manual_seed(args.seed)
201
- torch.cuda.manual_seed(args.seed)
202
- torch.backends.cudnn.benchmark = True
203
-
204
- # model set up
205
- dictionary = Dictionary.load_from_file(os.path.join(args.dataroot, 'dictionary.pkl'))
206
-
207
- eval_dset = VQAFeatureDataset('val', dictionary, extra_iter=True, verbose=False,
208
- dataroot=args.dataroot, ver=args.ver,
209
- troj_i=not args.dis_troj_i, troj_q=not args.dis_troj_q)
210
-
211
- constructor = 'build_%s' % args.model
212
- model = getattr(base_model, constructor)(eval_dset, args.num_hid).cuda()
213
- model.w_emb.init_embedding(os.path.join(args.dataroot, 'glove6b_init_300d.npy'))
214
- # model = nn.DataParallel(model).cuda()
215
- model = model.cuda()
216
- model_path = args.saved
217
- if os.path.isdir(model_path):
218
- model_path = os.path.join(args.saved, 'model.pth')
219
- SAVEROOT = model_path
220
- else:
221
- SAVEROOT = '/'.join(model_path.split('/')[0:-1])
222
- print('Loading saved model from: ' + model_path)
223
- model.load_state_dict(torch.load(model_path))
224
- model.train(False)
225
-
226
- if args.full: # run full evaluation suite
227
- evaluation_suite(model, args.dataroot, args.batch_size, args.ver, args.target, saveroot=SAVEROOT)
228
- else: # run partial evaluation
229
- eval_loader = DataLoader(eval_dset, args.batch_size, shuffle=True, num_workers=1)
230
- evaluate(model, eval_loader, args.dataroot, args.target, verbose=True, show_top=True)
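For clarity, the Pseudo-ASR (PASR) statistic computed inside `evaluate()` above boils down to the following small numeric sketch. It is illustrative only and assumes NumPy plus a toy 4-sample, 3-answer setup:

import numpy as np

# Toy ground-truth answer scores: rows = samples, cols = candidate answers.
a = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0],
])
pred = np.array([1, 1, 1, 0])  # model's predicted answer index per sample

pasr_possible = (a == 0.0).sum(axis=0)      # how often each answer is wrong and could be "attacked"
pasr_hit = np.zeros(a.shape[1])
for i, c in enumerate(pred):
    pasr_hit[c] += float(a[i, c] == 0.0)    # predicted an answer that was wrong for this sample
print(pasr_hit / pasr_possible)             # [0.0, 0.667, 0.0]: answer 1 was wrongly emitted on 2 of 3 eligible samples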
 
spaces/CVPR/winoground-explorer/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: WinoGround Explorer
3
- emoji: 🛹
4
- colorFrom: blue
5
- colorTo: gray
6
- sdk: gradio
7
- sdk_version: 3.0.5
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
 
spaces/Casio991ms/MathBot/app.py DELETED
@@ -1,779 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
- """MWP_Solver_-_Transformer_with_Multi-head_Attention_Block (1).ipynb
3
-
4
- Automatically generated by Colaboratory.
5
-
6
- Original file is located at
7
- https://colab.research.google.com/drive/1Tn_j0k8EJ7ny_h7Pjm0stJhNMG4si_y_
8
- """
9
-
10
- # ! pip install -q gradio
11
-
12
- import pandas as pd
13
- import re
14
- import os
15
- import time
16
- import random
17
- import numpy as np
18
-
19
- os.system("pip install tensorflow")
20
- os.system("pip install scikit-learn")
21
- os.system("pip install spacy")
22
- os.system("pip install nltk")
23
- os.system("spacy download en_core_web_sm")
24
-
25
- import tensorflow as tf
26
- import matplotlib.pyplot as plt
27
- import matplotlib.ticker as ticker
28
- from sklearn.model_selection import train_test_split
29
-
30
- import pickle
31
-
32
- import spacy
33
-
34
- from nltk.translate.bleu_score import corpus_bleu
35
-
36
- import gradio as gr
37
-
38
- os.system("wget -nc 'https://docs.google.com/uc?export=download&id=1Y8Ee4lUs30BAfFtL3d3VjwChmbDG7O6H' -O data_final.pkl")
39
- os.system('''wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1gAQVaxg_2mNcr8qwx0J2UwpkvoJgLu6a' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\\1\\n/p')&id=1gAQVaxg_2mNcr8qwx0J2UwpkvoJgLu6a" -O checkpoints.zip && rm -rf /tmp/cookies.txt''')
40
- os.system("unzip -n './checkpoints.zip' -d './'")
41
-
42
- nlp = spacy.load("en_core_web_sm")
43
-
44
- tf.__version__
45
-
46
- with open('data_final.pkl', 'rb') as f:
47
- df = pickle.load(f)
48
-
49
- df.shape
50
-
51
- df.head()
52
-
53
- input_exps = list(df['Question'].values)
54
-
55
- def convert_eqn(eqn):
56
- '''
57
- Add a space between every character in the equation string.
58
- Eg: 'x = 23 + 88' becomes 'x = 2 3 + 8 8'
59
- '''
60
- elements = list(eqn)
61
- return ' '.join(elements)
62
-
63
- target_exps = list(df['Equation'].apply(lambda x: convert_eqn(x)).values)
64
-
65
- # Input: Word problem
66
- input_exps[:5]
67
-
68
- # Target: Equation
69
- target_exps[:5]
70
-
71
- len(pd.Series(input_exps)), len(pd.Series(input_exps).unique())
72
-
73
- len(pd.Series(target_exps)), len(pd.Series(target_exps).unique())
74
-
75
- def preprocess_input(sentence):
76
- '''
77
- For the word problem, convert everything to lowercase, add spaces around all
78
- punctuation marks and digits, and remove any extra spaces.
79
- '''
80
- sentence = sentence.lower().strip()
81
- sentence = re.sub(r"([?.!,’])", r" \1 ", sentence)
82
- sentence = re.sub(r"([0-9])", r" \1 ", sentence)
83
- sentence = re.sub(r'[" "]+', " ", sentence)
84
- sentence = sentence.rstrip().strip()
85
- return sentence
86
-
87
- def preprocess_target(sentence):
88
- '''
89
- For the equation, convert it to lowercase and remove extra spaces
90
- '''
91
- sentence = sentence.lower().strip()
92
- return sentence
93
-
94
- preprocessed_input_exps = list(map(preprocess_input, input_exps))
95
- preprocessed_target_exps = list(map(preprocess_target, target_exps))
96
-
97
- preprocessed_input_exps[:5]
98
-
99
- preprocessed_target_exps[:5]
100
-
101
- def tokenize(lang):
102
- '''
103
- Tokenize the given list of strings and return the tokenized output
104
- along with the fitted tokenizer.
105
- '''
106
- lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
107
- lang_tokenizer.fit_on_texts(lang)
108
- tensor = lang_tokenizer.texts_to_sequences(lang)
109
- return tensor, lang_tokenizer
110
-
111
- input_tensor, inp_lang_tokenizer = tokenize(preprocessed_input_exps)
112
-
113
- len(inp_lang_tokenizer.word_index)
114
-
115
- target_tensor, targ_lang_tokenizer = tokenize(preprocessed_target_exps)
116
-
117
- old_len = len(targ_lang_tokenizer.word_index)
118
-
119
- def append_start_end(x,last_int):
120
- '''
121
- Add integers for start and end tokens for input/target exps
122
- '''
123
- l = []
124
- l.append(last_int+1)
125
- l.extend(x)
126
- l.append(last_int+2)
127
- return l
128
-
129
- input_tensor_list = [append_start_end(i,len(inp_lang_tokenizer.word_index)) for i in input_tensor]
130
- target_tensor_list = [append_start_end(i,len(targ_lang_tokenizer.word_index)) for i in target_tensor]
131
-
132
- # Pad all sequences such that they are of equal length
133
- input_tensor = tf.keras.preprocessing.sequence.pad_sequences(input_tensor_list, padding='post')
134
- target_tensor = tf.keras.preprocessing.sequence.pad_sequences(target_tensor_list, padding='post')
135
-
136
- input_tensor
137
-
138
- target_tensor
139
-
140
- # Here we are increasing the vocabulary size of the target, by adding a
141
- # few extra vocabulary words (which will not actually be used) as otherwise the
142
- # small vocab size causes issues downstream in the network.
143
- keys = [str(i) for i in range(10,51)]
144
- for i,k in enumerate(keys):
145
- targ_lang_tokenizer.word_index[k]=len(targ_lang_tokenizer.word_index)+i+4
146
-
147
- len(targ_lang_tokenizer.word_index)
148
-
149
- # Creating training and validation sets
150
- input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor,
151
- target_tensor,
152
- test_size=0.05,
153
- random_state=42)
154
-
155
- len(input_tensor_train)
156
-
157
- len(input_tensor_val)
158
-
159
- BUFFER_SIZE = len(input_tensor_train)
160
- BATCH_SIZE = 64
161
- steps_per_epoch = len(input_tensor_train)//BATCH_SIZE
162
- dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
163
- dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
164
- num_layers = 4
165
- d_model = 128
166
- dff = 512
167
- num_heads = 8
168
- input_vocab_size = len(inp_lang_tokenizer.word_index)+3
169
- target_vocab_size = len(targ_lang_tokenizer.word_index)+3
170
- dropout_rate = 0.0
171
-
172
- example_input_batch, example_target_batch = next(iter(dataset))
173
- example_input_batch.shape, example_target_batch.shape
174
-
175
- # We provide positional information about the data to the model,
176
- # otherwise each sentence will be treated as Bag of Words
177
- def get_angles(pos, i, d_model):
178
- angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
179
- return pos * angle_rates
180
-
181
- def positional_encoding(position, d_model):
182
- angle_rads = get_angles(np.arange(position)[:, np.newaxis],
183
- np.arange(d_model)[np.newaxis, :],
184
- d_model)
185
-
186
- # apply sin to even indices in the array; 2i
187
- angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])
188
-
189
- # apply cos to odd indices in the array; 2i+1
190
- angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])
191
-
192
- pos_encoding = angle_rads[np.newaxis, ...]
193
-
194
- return tf.cast(pos_encoding, dtype=tf.float32)
195
-
196
- # mask all elements that are not words (i.e. padding) so they are not treated as input
197
- def create_padding_mask(seq):
198
- seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
199
-
200
- # add extra dimensions to add the padding
201
- # to the attention logits.
202
- return seq[:, tf.newaxis, tf.newaxis, :] # (batch_size, 1, 1, seq_len)
203
-
204
- def create_look_ahead_mask(size):
205
- mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
206
- return mask
207
-
208
- dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
209
-
210
- def scaled_dot_product_attention(q, k, v, mask):
211
- matmul_qk = tf.matmul(q, k, transpose_b=True) # (..., seq_len_q, seq_len_k)
212
-
213
- # scale matmul_qk
214
- dk = tf.cast(tf.shape(k)[-1], tf.float32)
215
- scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)
216
-
217
- # add the mask to the scaled tensor.
218
- if mask is not None:
219
- scaled_attention_logits += (mask * -1e9)
220
-
221
- # softmax is normalized on the last axis (seq_len_k) so that the scores
222
- # add up to 1.
223
- attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1) # (..., seq_len_q, seq_len_k)
224
-
225
- output = tf.matmul(attention_weights, v) # (..., seq_len_q, depth_v)
226
-
227
- return output, attention_weights
228
-
229
- class MultiHeadAttention(tf.keras.layers.Layer):
230
- def __init__(self, d_model, num_heads):
231
- super(MultiHeadAttention, self).__init__()
232
- self.num_heads = num_heads
233
- self.d_model = d_model
234
-
235
- assert d_model % self.num_heads == 0
236
-
237
- self.depth = d_model // self.num_heads
238
-
239
- self.wq = tf.keras.layers.Dense(d_model)
240
- self.wk = tf.keras.layers.Dense(d_model)
241
- self.wv = tf.keras.layers.Dense(d_model)
242
-
243
- self.dense = tf.keras.layers.Dense(d_model)
244
-
245
- def split_heads(self, x, batch_size):
246
- """Split the last dimension into (num_heads, depth).
247
- Transpose the result such that the shape is (batch_size, num_heads, seq_len, depth)
248
- """
249
- x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
250
- return tf.transpose(x, perm=[0, 2, 1, 3])
251
-
252
- def call(self, v, k, q, mask):
253
- batch_size = tf.shape(q)[0]
254
-
255
- q = self.wq(q) # (batch_size, seq_len, d_model)
256
- k = self.wk(k) # (batch_size, seq_len, d_model)
257
- v = self.wv(v) # (batch_size, seq_len, d_model)
258
-
259
- q = self.split_heads(q, batch_size) # (batch_size, num_heads, seq_len_q, depth)
260
- k = self.split_heads(k, batch_size) # (batch_size, num_heads, seq_len_k, depth)
261
- v = self.split_heads(v, batch_size) # (batch_size, num_heads, seq_len_v, depth)
262
-
263
- # scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)
264
- # attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)
265
- scaled_attention, attention_weights = scaled_dot_product_attention(
266
- q, k, v, mask)
267
-
268
- scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3]) # (batch_size, seq_len_q, num_heads, depth)
269
-
270
- concat_attention = tf.reshape(scaled_attention,
271
- (batch_size, -1, self.d_model)) # (batch_size, seq_len_q, d_model)
272
-
273
- output = self.dense(concat_attention) # (batch_size, seq_len_q, d_model)
274
-
275
- return output, attention_weights
276
-
277
- def point_wise_feed_forward_network(d_model, dff):
278
- return tf.keras.Sequential([
279
- tf.keras.layers.Dense(dff, activation='relu'), # (batch_size, seq_len, dff)
280
- tf.keras.layers.Dense(d_model) # (batch_size, seq_len, d_model)
281
- ])
282
-
283
- class EncoderLayer(tf.keras.layers.Layer):
284
- def __init__(self, d_model, num_heads, dff, rate=0.1):
285
- super(EncoderLayer, self).__init__()
286
-
287
- self.mha = MultiHeadAttention(d_model, num_heads)
288
- self.ffn = point_wise_feed_forward_network(d_model, dff)
289
-
290
- # normalize data per feature instead of batch
291
- self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
292
- self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
293
-
294
- self.dropout1 = tf.keras.layers.Dropout(rate)
295
- self.dropout2 = tf.keras.layers.Dropout(rate)
296
-
297
- def call(self, x, training, mask):
298
- # Multi-head attention layer
299
- attn_output, _ = self.mha(x, x, x, mask)
300
- attn_output = self.dropout1(attn_output, training=training)
301
- # add residual connection to avoid vanishing gradient problem
302
- out1 = self.layernorm1(x + attn_output)
303
-
304
- # Feedforward layer
305
- ffn_output = self.ffn(out1)
306
- ffn_output = self.dropout2(ffn_output, training=training)
307
- # add residual connection to avoid vanishing gradient problem
308
- out2 = self.layernorm2(out1 + ffn_output)
309
- return out2
310
-
311
- class Encoder(tf.keras.layers.Layer):
312
- def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
313
- maximum_position_encoding, rate=0.1):
314
- super(Encoder, self).__init__()
315
-
316
- self.d_model = d_model
317
- self.num_layers = num_layers
318
-
319
- self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)
320
- self.pos_encoding = positional_encoding(maximum_position_encoding,
321
- self.d_model)
322
-
323
- # Create encoder layers (count: num_layers)
324
- self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate)
325
- for _ in range(num_layers)]
326
-
327
- self.dropout = tf.keras.layers.Dropout(rate)
328
-
329
- def call(self, x, training, mask):
330
-
331
- seq_len = tf.shape(x)[1]
332
-
333
- # adding embedding and position encoding.
334
- x = self.embedding(x)
335
- x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
336
- x += self.pos_encoding[:, :seq_len, :]
337
-
338
- x = self.dropout(x, training=training)
339
-
340
- for i in range(self.num_layers):
341
- x = self.enc_layers[i](x, training, mask)
342
-
343
- return x
344
-
345
- class DecoderLayer(tf.keras.layers.Layer):
346
- def __init__(self, d_model, num_heads, dff, rate=0.1):
347
- super(DecoderLayer, self).__init__()
348
-
349
- self.mha1 = MultiHeadAttention(d_model, num_heads)
350
- self.mha2 = MultiHeadAttention(d_model, num_heads)
351
-
352
- self.ffn = point_wise_feed_forward_network(d_model, dff)
353
-
354
- self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
355
- self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
356
- self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
357
-
358
- self.dropout1 = tf.keras.layers.Dropout(rate)
359
- self.dropout2 = tf.keras.layers.Dropout(rate)
360
- self.dropout3 = tf.keras.layers.Dropout(rate)
361
-
362
-
363
- def call(self, x, enc_output, training,
364
- look_ahead_mask, padding_mask):
365
-
366
- # Masked multihead attention layer (padding + look-ahead)
367
- attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask)
368
- attn1 = self.dropout1(attn1, training=training)
369
- # again add residual connection
370
- out1 = self.layernorm1(attn1 + x)
371
-
372
- # Masked multihead attention layer (only padding)
373
- # with input from encoder as Key and Value, and input from previous layer as Query
374
- attn2, attn_weights_block2 = self.mha2(
375
- enc_output, enc_output, out1, padding_mask)
376
- attn2 = self.dropout2(attn2, training=training)
377
- # again add residual connection
378
- out2 = self.layernorm2(attn2 + out1)
379
-
380
- # Feedforward layer
381
- ffn_output = self.ffn(out2)
382
- ffn_output = self.dropout3(ffn_output, training=training)
383
- # again add residual connection
384
- out3 = self.layernorm3(ffn_output + out2)
385
- return out3, attn_weights_block1, attn_weights_block2
386
-
387
- class Decoder(tf.keras.layers.Layer):
388
- def __init__(self, num_layers, d_model, num_heads, dff, target_vocab_size,
389
- maximum_position_encoding, rate=0.1):
390
- super(Decoder, self).__init__()
391
-
392
- self.d_model = d_model
393
- self.num_layers = num_layers
394
-
395
- self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
396
- self.pos_encoding = positional_encoding(maximum_position_encoding, d_model)
397
-
398
- # Create decoder layers (count: num_layers)
399
- self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate)
400
- for _ in range(num_layers)]
401
- self.dropout = tf.keras.layers.Dropout(rate)
402
-
403
- def call(self, x, enc_output, training,
404
- look_ahead_mask, padding_mask):
405
-
406
- seq_len = tf.shape(x)[1]
407
- attention_weights = {}
408
-
409
- x = self.embedding(x) # (batch_size, target_seq_len, d_model)
410
-
411
- x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
412
-
413
- x += self.pos_encoding[:,:seq_len,:]
414
-
415
- x = self.dropout(x, training=training)
416
-
417
- for i in range(self.num_layers):
418
- x, block1, block2 = self.dec_layers[i](x, enc_output, training,
419
- look_ahead_mask, padding_mask)
420
-
421
- # store attention weights, they can be used to visualize while translating
422
- attention_weights['decoder_layer{}_block1'.format(i+1)] = block1
423
- attention_weights['decoder_layer{}_block2'.format(i+1)] = block2
424
-
425
- return x, attention_weights
426
-
427
- class Transformer(tf.keras.Model):
428
- def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
429
- target_vocab_size, pe_input, pe_target, rate=0.1):
430
- super(Transformer, self).__init__()
431
-
432
- self.encoder = Encoder(num_layers, d_model, num_heads, dff,
433
- input_vocab_size, pe_input, rate)
434
-
435
- self.decoder = Decoder(num_layers, d_model, num_heads, dff,
436
- target_vocab_size, pe_target, rate)
437
-
438
- self.final_layer = tf.keras.layers.Dense(target_vocab_size)
439
-
440
- def call(self, inp, tar, training, enc_padding_mask,
441
- look_ahead_mask, dec_padding_mask):
442
-
443
- # Pass the input to the encoder
444
- enc_output = self.encoder(inp, training, enc_padding_mask)
445
-
446
- # Pass the encoder output to the decoder
447
- dec_output, attention_weights = self.decoder(
448
- tar, enc_output, training, look_ahead_mask, dec_padding_mask)
449
-
450
- # Pass the decoder output to the last linear layer
451
- final_output = self.final_layer(dec_output)
452
-
453
- return final_output, attention_weights
454
-
455
- class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
456
- def __init__(self, d_model, warmup_steps=4000):
457
- super(CustomSchedule, self).__init__()
458
-
459
- self.d_model = d_model
460
- self.d_model = tf.cast(self.d_model, tf.float32)
461
-
462
- self.warmup_steps = warmup_steps
463
-
464
- def __call__(self, step):
465
- arg1 = tf.math.rsqrt(step)
466
- arg2 = step * (self.warmup_steps ** -1.5)
467
-
468
- return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
469
-
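- # The schedule above implements the warmup rule from "Attention Is All You Need":
- # lr(step) = d_model**-0.5 * min(step**-0.5, step * warmup_steps**-1.5),
- # i.e. the rate rises linearly for warmup_steps steps and then decays as 1/sqrt(step).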
470
- learning_rate = CustomSchedule(d_model)
471
-
472
- # Adam optimizer with a custom learning rate
473
- optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98,
474
- epsilon=1e-9)
475
-
476
- loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
477
- from_logits=True, reduction='none')
478
-
479
- def loss_function(real, pred):
480
- # Apply a mask to paddings (0)
481
- mask = tf.math.logical_not(tf.math.equal(real, 0))
482
- loss_ = loss_object(real, pred)
483
-
484
- mask = tf.cast(mask, dtype=loss_.dtype)
485
- loss_ *= mask
486
-
487
- return tf.reduce_mean(loss_)
488
-
489
- train_loss = tf.keras.metrics.Mean(name='train_loss')
490
- train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
491
- name='train_accuracy')
492
-
493
- transformer = Transformer(num_layers, d_model, num_heads, dff,
494
- input_vocab_size, target_vocab_size,
495
- pe_input=input_vocab_size,
496
- pe_target=target_vocab_size,
497
- rate=dropout_rate)
498
-
499
- def create_masks(inp, tar):
500
- # Encoder padding mask
501
- enc_padding_mask = create_padding_mask(inp)
502
-
503
- # Decoder padding mask
504
- dec_padding_mask = create_padding_mask(inp)
505
-
506
- # Look ahead mask (for hiding the rest of the sequence in the 1st decoder attention layer)
507
- look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
508
- dec_target_padding_mask = create_padding_mask(tar)
509
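- # Combine the two: a target position is masked if it is padding OR lies ahead of the current step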
- combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)
510
-
511
- return enc_padding_mask, combined_mask, dec_padding_mask
512
-
513
- # drive_root = '/gdrive/My Drive/'
514
- drive_root = './'
515
-
516
- checkpoint_dir = os.path.join(drive_root, "checkpoints")
517
- checkpoint_dir = os.path.join(checkpoint_dir, "training_checkpoints/moops_transfomer")
518
-
519
- print("Checkpoints directory is", checkpoint_dir)
520
- if os.path.exists(checkpoint_dir):
521
- print("Checkpoints folder already exists")
522
- else:
523
- print("Creating a checkpoints directory")
524
- os.makedirs(checkpoint_dir)
525
-
526
-
527
- checkpoint = tf.train.Checkpoint(transformer=transformer,
528
- optimizer=optimizer)
529
-
530
- ckpt_manager = tf.train.CheckpointManager(checkpoint, checkpoint_dir, max_to_keep=5)
531
-
532
- latest = ckpt_manager.latest_checkpoint
533
- latest
534
-
535
- if latest:
536
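- # recover the number of completed epochs from the checkpoint name (tf.train.CheckpointManager names them like ".../ckpt-17"), so training can resume where it left off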
- epoch_num = int(latest.split('/')[-1].split('-')[-1])
537
- checkpoint.restore(latest)
538
- print ('Latest checkpoint restored!!')
539
- else:
540
- epoch_num = 0
541
-
542
- epoch_num
543
-
544
- # EPOCHS = 17
545
-
546
- # def train_step(inp, tar):
547
- # tar_inp = tar[:, :-1]
548
- # tar_real = tar[:, 1:]
549
-
550
- # enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp)
551
-
552
- # with tf.GradientTape() as tape:
553
- # predictions, _ = transformer(inp, tar_inp,
554
- # True,
555
- # enc_padding_mask,
556
- # combined_mask,
557
- # dec_padding_mask)
558
- # loss = loss_function(tar_real, predictions)
559
-
560
- # gradients = tape.gradient(loss, transformer.trainable_variables)
561
- # optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))
562
-
563
- # train_loss(loss)
564
- # train_accuracy(tar_real, predictions)
565
-
566
- # for epoch in range(epoch_num, EPOCHS):
567
- # start = time.time()
568
-
569
- # train_loss.reset_states()
570
- # train_accuracy.reset_states()
571
-
572
- # # inp -> question, tar -> equation
573
- # for (batch, (inp, tar)) in enumerate(dataset):
574
- # train_step(inp, tar)
575
-
576
- # if batch % 50 == 0:
577
- # print ('Epoch {} Batch {} Loss {:.4f} Accuracy {:.4f}'.format(
578
- # epoch + 1, batch, train_loss.result(), train_accuracy.result()))
579
-
580
- # ckpt_save_path = ckpt_manager.save()
581
- # print ('Saving checkpoint for epoch {} at {}'.format(epoch+1,
582
- # ckpt_save_path))
583
-
584
- # print ('Epoch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1,
585
- # train_loss.result(),
586
- # train_accuracy.result()))
587
-
588
- # print ('Time taken for 1 epoch: {} secs\n'.format(time.time() - start))
589
-
590
- def evaluate(inp_sentence):
591
- start_token = [len(inp_lang_tokenizer.word_index)+1]
592
- end_token = [len(inp_lang_tokenizer.word_index)+2]
593
-
594
- # inp_sentence is the word problem: add start/end tokens; unknown words fall back to the index of 'john' as a crude OOV placeholder
595
- inp_sentence = start_token + [inp_lang_tokenizer.word_index.get(i, inp_lang_tokenizer.word_index['john']) for i in preprocess_input(inp_sentence).split(' ')] + end_token
596
- encoder_input = tf.expand_dims(inp_sentence, 0)
597
-
598
- # start with equation's start token
599
- decoder_input = [old_len+1]
600
- output = tf.expand_dims(decoder_input, 0)
601
-
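- # Greedy decoding: at each step feed everything generated so far back into the decoder and append the argmax token, stopping at the end token or after MAX_LENGTH steps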
602
- for i in range(MAX_LENGTH):
603
- enc_padding_mask, combined_mask, dec_padding_mask = create_masks(
604
- encoder_input, output)
605
-
606
- predictions, attention_weights = transformer(encoder_input,
607
- output,
608
- False,
609
- enc_padding_mask,
610
- combined_mask,
611
- dec_padding_mask)
612
-
613
- # select the last word from the seq_len dimension
614
- predictions = predictions[: ,-1:, :]
615
- predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
616
-
617
- # return the result if the predicted_id is equal to the end token
618
- if predicted_id == old_len+2:
619
- return tf.squeeze(output, axis=0), attention_weights
620
-
621
- # concatenate the predicted_id to the output, which is given to the decoder
622
- # as its input.
623
- output = tf.concat([output, predicted_id], axis=-1)
624
- return tf.squeeze(output, axis=0), attention_weights
625
-
626
- # def plot_attention_weights(attention, sentence, result, layer):
627
- # fig = plt.figure(figsize=(16, 8))
628
-
629
- # sentence = preprocess_input(sentence)
630
-
631
- # attention = tf.squeeze(attention[layer], axis=0)
632
-
633
- # for head in range(attention.shape[0]):
634
- # ax = fig.add_subplot(2, 4, head+1)
635
-
636
- # # plot the attention weights
637
- # ax.matshow(attention[head][:-1, :], cmap='viridis')
638
-
639
- # fontdict = {'fontsize': 10}
640
-
641
- # ax.set_xticks(range(len(sentence.split(' '))+2))
642
- # ax.set_yticks(range(len([targ_lang_tokenizer.index_word[i] for i in list(result.numpy())
643
- # if i < len(targ_lang_tokenizer.word_index) and i not in [0,old_len+1,old_len+2]])+3))
644
-
645
-
646
- # ax.set_ylim(len([targ_lang_tokenizer.index_word[i] for i in list(result.numpy())
647
- # if i < len(targ_lang_tokenizer.word_index) and i not in [0,old_len+1,old_len+2]]), -0.5)
648
-
649
- # ax.set_xticklabels(
650
- # ['<start>']+sentence.split(' ')+['<end>'],
651
- # fontdict=fontdict, rotation=90)
652
-
653
- # ax.set_yticklabels([targ_lang_tokenizer.index_word[i] for i in list(result.numpy())
654
- # if i < len(targ_lang_tokenizer.word_index) and i not in [0,old_len+1,old_len+2]],
655
- # fontdict=fontdict)
656
-
657
- # ax.set_xlabel('Head {}'.format(head+1))
658
-
659
- # plt.tight_layout()
660
- # plt.show()
661
-
662
- MAX_LENGTH = 40
663
-
664
- def translate(sentence, plot=''):
665
-
666
-
667
-
668
- result, attention_weights = evaluate(sentence)
669
-
670
- # use the result tokens to convert prediction into a list of characters
671
- # (not including padding, start and end tokens)
672
- predicted_sentence = [targ_lang_tokenizer.index_word[i] for i in list(result.numpy()) if (i < len(targ_lang_tokenizer.word_index) and i not in [0, old_len+1, old_len+2])]
673
-
674
- # print('Input: {}'.format(sentence))
675
- # note: plot_attention_weights is commented out above, so plotting only runs if it is re-enabled
676
- if plot:
677
- plot_attention_weights(attention_weights, sentence, result, plot)
678
- return ''.join(predicted_sentence)
679
-
680
- # def evaluate_results(inp_sentence):
681
- # start_token = [len(inp_lang_tokenizer.word_index)+1]
682
- # end_token = [len(inp_lang_tokenizer.word_index)+2]
683
-
684
- # # inp sentence is the word problem, hence adding the start and end token
685
- # inp_sentence = start_token + list(inp_sentence.numpy()[0]) + end_token
686
-
687
- # encoder_input = tf.expand_dims(inp_sentence, 0)
688
-
689
-
690
- # decoder_input = [old_len+1]
691
- # output = tf.expand_dims(decoder_input, 0)
692
-
693
- # for i in range(MAX_LENGTH):
694
- # enc_padding_mask, combined_mask, dec_padding_mask = create_masks(
695
- # encoder_input, output)
696
-
697
- # # predictions.shape == (batch_size, seq_len, vocab_size)
698
- # predictions, attention_weights = transformer(encoder_input,
699
- # output,
700
- # False,
701
- # enc_padding_mask,
702
- # combined_mask,
703
- # dec_padding_mask)
704
-
705
- # # select the last word from the seq_len dimension
706
- # predictions = predictions[: ,-1:, :] # (batch_size, 1, vocab_size)
707
-
708
- # predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
709
-
710
- # # return the result if the predicted_id is equal to the end token
711
- # if predicted_id == old_len+2:
712
- # return tf.squeeze(output, axis=0), attention_weights
713
-
714
- # # concatentate the predicted_id to the output which is given to the decoder
715
- # # as its input.
716
- # output = tf.concat([output, predicted_id], axis=-1)
717
-
718
- # return tf.squeeze(output, axis=0), attention_weights
719
-
720
- # dataset_val = tf.data.Dataset.from_tensor_slices((input_tensor_val, target_tensor_val)).shuffle(BUFFER_SIZE)
721
- # dataset_val = dataset_val.batch(1, drop_remainder=True)
722
-
723
- # y_true = []
724
- # y_pred = []
725
- # acc_cnt = 0
726
-
727
- # a = 0
728
- # for (inp_val_batch, target_val_batch) in iter(dataset_val):
729
- # a += 1
730
- # if a % 100 == 0:
731
- # print(a)
732
- # print("Accuracy count: ",acc_cnt)
733
- # print('------------------')
734
- # target_sentence = ''
735
- # for i in target_val_batch.numpy()[0]:
736
- # if i not in [0,old_len+1,old_len+2]:
737
- # target_sentence += (targ_lang_tokenizer.index_word[i] + ' ')
738
-
739
- # y_true.append([target_sentence.split(' ')[:-1]])
740
-
741
- # result, _ = evaluate_results(inp_val_batch)
742
- # predicted_sentence = [targ_lang_tokenizer.index_word[i] for i in list(result.numpy()) if (i < len(targ_lang_tokenizer.word_index) and i not in [0,old_len+1,old_len+2])]
743
- # y_pred.append(predicted_sentence)
744
-
745
- # if target_sentence.split(' ')[:-1] == predicted_sentence:
746
- # acc_cnt += 1
747
-
748
- # len(y_true), len(y_pred)
749
-
750
- # print('Corpus BLEU score of the model: ', corpus_bleu(y_true, y_pred))
751
-
752
- # print('Accuracy of the model: ', acc_cnt/len(input_tensor_val))
753
-
754
- check_str = ' '.join([inp_lang_tokenizer.index_word[i] for i in input_tensor_val[242] if i not in [0,
755
- len(inp_lang_tokenizer.word_index)+1,
756
- len(inp_lang_tokenizer.word_index)+2]])
757
-
758
- check_str
759
-
760
- translate(check_str)
761
-
762
- #'victor had some car . john took 3 0 from him . now victor has 6 8 car . how many car victor had originally ?'
763
- translate('Nafis had 31 raspberry . He slice each raspberry into 19 slices . How many raspberry slices did Denise make?')
764
-
765
- interface = gr.Interface(
766
- fn = translate,
767
- inputs = gr.inputs.Textbox(lines = 2),
768
- outputs = 'text',
769
- examples = [
770
- ['Rachel bought two coloring books. One had 23 pictures and the other had 32. After one week she had colored 19 of the pictures. How many pictures does she still have to color?'],
771
- ['Denise had 31 raspberries. He slices each raspberry into 19 slices. How many raspberry slices did Denise make?'],
772
- ['A painter needed to paint 12 rooms in a building. Each room takes 7 hours to paint. If he already painted 5 rooms, how much longer will he take to paint the rest?'],
773
- ['Jerry had 135 pens. John took 19 pens from him. How many pens Jerry have left?'],
774
- ['Donald had some apples. Hillary took 20 apples from him. Now Donald has 100 apples. How many apples Donald had before?']
775
- ],
776
- title = 'Mathbot',
777
- description = 'Enter a simple math word problem and our AI will try to predict an expression to solve it. Mathbot occasionally makes mistakes; feel free to press "flag" if you encounter one.',
778
- )
779
- interface.launch()
spaces/ChihChiu29/mychatbot/tutorial.md DELETED
@@ -1,44 +0,0 @@
1
- ## Clone repository using git
2
-
3
- ```bash
4
- git clone https://huggingface.co/spaces/ChihChiu29/mychatbot
5
- ```
6
-
7
- ## Use git to push changes to huggingface repository
8
-
9
- First run `huggingface_cli.exe login` and follow its prompts to log in, then push with the usual git commands, as sketched below.
10
-
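- A minimal sketch of that flow (command name as used above; the commit message is just a placeholder, and the remote was configured by the clone step):
-
- ```bash
- huggingface_cli.exe login   # paste your Hugging Face access token when prompted
- git add .
- git commit -m "Update space"
- git push
- ```
-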
11
- ## Build/run via docker locally
12
-
13
- ```bash
14
- docker build -t fastapi .
15
- docker run -it -p 7860:7860 fastapi
16
- ```
17
-
18
- ## CURL POST example
19
-
20
- ```bash
21
- curl -X POST http://localhost:7860/reply -H 'Content-Type: application/json' -d '{"msg": "hi"}'
22
- ```
23
-
24
- ## Huggingface API
25
-
26
- See: https://huggingface.co/docs/hub/api
27
-
28
- Access info for a space: https://huggingface.co/api/spaces/ChihChiu29/mychatbot
29
-
30
- ## Directly access the server on Huggingface space
31
-
32
- Use the embedded address, for example:
33
-
34
- ```bash
35
- curl -X POST https://chihchiu29-mychatbot.hf.space/reply -H 'Content-Type: application/json' -d '{"msg": "hi"}'
36
- ```
37
-
38
- ## Remove dangling images
39
-
40
- From: https://github.com/fabric8io/docker-maven-plugin/issues/501
41
-
42
- ```bash
43
- docker rmi $(docker images -qa -f 'dangling=true')
44
- ```
spaces/Clebersla/RVC_V2_Huggingface_Version/lib/infer_pack/modules/F0Predictor/__init__.py DELETED
File without changes
spaces/CofAI/njpad/index.html DELETED
@@ -1,48 +0,0 @@
1
- <!DOCTYPE html>
2
- <html>
3
- <head>
4
- <title>NJPad</title>
5
- <style>
6
- body {
7
- font-family: Arial, sans-serif;
8
- }
9
-
10
- textarea {
11
- width: 100%;
12
- height: 300px;
13
- padding: 10px;
14
- border: 1px solid #ccc;
15
- border-radius: 5px;
16
- resize: none;
17
- }
18
-
19
- button {
20
- padding: 10px 20px;
21
- background-color: #4CAF50;
22
- color: white;
23
- border: none;
24
- border-radius: 5px;
25
- cursor: pointer;
26
- }
27
-
28
- button:hover {
29
- background-color: #45a049;
30
- }
31
- </style>
32
- </head>
33
- <body>
34
- <h1>NJPad</h1>
35
-
36
- <textarea id="editor"></textarea>
37
- <p></p>
38
- <button onclick="saveText()">Rate text</button>
39
-
40
- <script>
41
- function saveText() {
42
- var text = document.getElementById("editor").value;
43
- // Добавьте здесь код для сохранения текста
44
- alert("Your text " + " {" + text + "} " + "is perfect");
45
- }
46
- </script>
47
- </body>
48
- </html>
spaces/Cvandi/remake/tests/test_discriminator_arch.py DELETED
@@ -1,19 +0,0 @@
1
- import torch
2
-
3
- from realesrgan.archs.discriminator_arch import UNetDiscriminatorSN
4
-
5
-
6
- def test_unetdiscriminatorsn():
7
- """Test arch: UNetDiscriminatorSN."""
8
-
9
- # model init and forward (cpu)
10
- net = UNetDiscriminatorSN(num_in_ch=3, num_feat=4, skip_connection=True)
11
- img = torch.rand((1, 3, 32, 32), dtype=torch.float32)
12
- output = net(img)
13
- assert output.shape == (1, 1, 32, 32)
14
-
15
- # model init and forward (gpu)
16
- if torch.cuda.is_available():
17
- net.cuda()
18
- output = net(img.cuda())
19
- assert output.shape == (1, 1, 32, 32)
spaces/Cyril666/my_abi/modules/transformer.py DELETED
@@ -1,901 +0,0 @@
1
- # pytorch 1.5.0
2
- import copy
3
- import math
4
- import warnings
5
- from typing import Optional
6
-
7
- import torch
8
- import torch.nn as nn
9
- from torch import Tensor
10
- from torch.nn import Dropout, LayerNorm, Linear, Module, ModuleList, Parameter
11
- from torch.nn import functional as F
12
- from torch.nn.init import constant_, xavier_normal_, xavier_uniform_
13
-
14
-
15
- def multi_head_attention_forward(query, # type: Tensor
16
- key, # type: Tensor
17
- value, # type: Tensor
18
- embed_dim_to_check, # type: int
19
- num_heads, # type: int
20
- in_proj_weight, # type: Tensor
21
- in_proj_bias, # type: Tensor
22
- bias_k, # type: Optional[Tensor]
23
- bias_v, # type: Optional[Tensor]
24
- add_zero_attn, # type: bool
25
- dropout_p, # type: float
26
- out_proj_weight, # type: Tensor
27
- out_proj_bias, # type: Tensor
28
- training=True, # type: bool
29
- key_padding_mask=None, # type: Optional[Tensor]
30
- need_weights=True, # type: bool
31
- attn_mask=None, # type: Optional[Tensor]
32
- use_separate_proj_weight=False, # type: bool
33
- q_proj_weight=None, # type: Optional[Tensor]
34
- k_proj_weight=None, # type: Optional[Tensor]
35
- v_proj_weight=None, # type: Optional[Tensor]
36
- static_k=None, # type: Optional[Tensor]
37
- static_v=None # type: Optional[Tensor]
38
- ):
39
- # type: (...) -> Tuple[Tensor, Optional[Tensor]]
40
- r"""
41
- Args:
42
- query, key, value: map a query and a set of key-value pairs to an output.
43
- See "Attention Is All You Need" for more details.
44
- embed_dim_to_check: total dimension of the model.
45
- num_heads: parallel attention heads.
46
- in_proj_weight, in_proj_bias: input projection weight and bias.
47
- bias_k, bias_v: bias of the key and value sequences to be added at dim=0.
48
- add_zero_attn: add a new batch of zeros to the key and
49
- value sequences at dim=1.
50
- dropout_p: probability of an element to be zeroed.
51
- out_proj_weight, out_proj_bias: the output projection weight and bias.
52
- training: apply dropout if is ``True``.
53
- key_padding_mask: if provided, specified padding elements in the key will
54
- be ignored by the attention. This is a binary mask. When the value is True,
55
- the corresponding value on the attention layer will be filled with -inf.
56
- need_weights: output attn_output_weights.
57
- attn_mask: 2D or 3D mask that prevents attention to certain positions. A 2D mask will be broadcasted for all
58
- the batches while a 3D mask allows to specify a different mask for the entries of each batch.
59
- use_separate_proj_weight: the function accept the proj. weights for query, key,
60
- and value in different forms. If false, in_proj_weight will be used, which is
61
- a combination of q_proj_weight, k_proj_weight, v_proj_weight.
62
- q_proj_weight, k_proj_weight, v_proj_weight, in_proj_bias: input projection weight and bias.
63
- static_k, static_v: static key and value used for attention operators.
64
- Shape:
65
- Inputs:
66
- - query: :math:`(L, N, E)` where L is the target sequence length, N is the batch size, E is
67
- the embedding dimension.
68
- - key: :math:`(S, N, E)`, where S is the source sequence length, N is the batch size, E is
69
- the embedding dimension.
70
- - value: :math:`(S, N, E)` where S is the source sequence length, N is the batch size, E is
71
- the embedding dimension.
72
- - key_padding_mask: :math:`(N, S)` where N is the batch size, S is the source sequence length.
73
- If a ByteTensor is provided, the non-zero positions will be ignored while the zero positions
74
- will be unchanged. If a BoolTensor is provided, the positions with the
75
- value of ``True`` will be ignored while the position with the value of ``False`` will be unchanged.
76
- - attn_mask: 2D mask :math:`(L, S)` where L is the target sequence length, S is the source sequence length.
77
- 3D mask :math:`(N*num_heads, L, S)` where N is the batch size, L is the target sequence length,
78
- S is the source sequence length. attn_mask ensures that position i is allowed to attend the unmasked
79
- positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend
80
- while the zero positions will be unchanged. If a BoolTensor is provided, positions with ``True``
81
- are not allowed to attend while ``False`` values will be unchanged. If a FloatTensor
82
- is provided, it will be added to the attention weight.
83
- - static_k: :math:`(N*num_heads, S, E/num_heads)`, where S is the source sequence length,
84
- N is the batch size, E is the embedding dimension. E/num_heads is the head dimension.
85
- - static_v: :math:`(N*num_heads, S, E/num_heads)`, where S is the source sequence length,
86
- N is the batch size, E is the embedding dimension. E/num_heads is the head dimension.
87
- Outputs:
88
- - attn_output: :math:`(L, N, E)` where L is the target sequence length, N is the batch size,
89
- E is the embedding dimension.
90
- - attn_output_weights: :math:`(N, L, S)` where N is the batch size,
91
- L is the target sequence length, S is the source sequence length.
92
- """
93
- # if not torch.jit.is_scripting():
94
- # tens_ops = (query, key, value, in_proj_weight, in_proj_bias, bias_k, bias_v,
95
- # out_proj_weight, out_proj_bias)
96
- # if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops):
97
- # return handle_torch_function(
98
- # multi_head_attention_forward, tens_ops, query, key, value,
99
- # embed_dim_to_check, num_heads, in_proj_weight, in_proj_bias,
100
- # bias_k, bias_v, add_zero_attn, dropout_p, out_proj_weight,
101
- # out_proj_bias, training=training, key_padding_mask=key_padding_mask,
102
- # need_weights=need_weights, attn_mask=attn_mask,
103
- # use_separate_proj_weight=use_separate_proj_weight,
104
- # q_proj_weight=q_proj_weight, k_proj_weight=k_proj_weight,
105
- # v_proj_weight=v_proj_weight, static_k=static_k, static_v=static_v)
106
- tgt_len, bsz, embed_dim = query.size()
107
- assert embed_dim == embed_dim_to_check
108
- assert key.size() == value.size()
109
-
110
- head_dim = embed_dim // num_heads
111
- assert head_dim * num_heads == embed_dim, "embed_dim must be divisible by num_heads"
112
- scaling = float(head_dim) ** -0.5
113
-
114
- if not use_separate_proj_weight:
115
- if torch.equal(query, key) and torch.equal(key, value):
116
- # self-attention
117
- q, k, v = F.linear(query, in_proj_weight, in_proj_bias).chunk(3, dim=-1)
118
-
119
- elif torch.equal(key, value):
120
- # encoder-decoder attention
121
- # This is inline in_proj function with in_proj_weight and in_proj_bias
122
- _b = in_proj_bias
123
- _start = 0
124
- _end = embed_dim
125
- _w = in_proj_weight[_start:_end, :]
126
- if _b is not None:
127
- _b = _b[_start:_end]
128
- q = F.linear(query, _w, _b)
129
-
130
- if key is None:
131
- assert value is None
132
- k = None
133
- v = None
134
- else:
135
-
136
- # This is inline in_proj function with in_proj_weight and in_proj_bias
137
- _b = in_proj_bias
138
- _start = embed_dim
139
- _end = None
140
- _w = in_proj_weight[_start:, :]
141
- if _b is not None:
142
- _b = _b[_start:]
143
- k, v = F.linear(key, _w, _b).chunk(2, dim=-1)
144
-
145
- else:
146
- # This is inline in_proj function with in_proj_weight and in_proj_bias
147
- _b = in_proj_bias
148
- _start = 0
149
- _end = embed_dim
150
- _w = in_proj_weight[_start:_end, :]
151
- if _b is not None:
152
- _b = _b[_start:_end]
153
- q = F.linear(query, _w, _b)
154
-
155
- # This is inline in_proj function with in_proj_weight and in_proj_bias
156
- _b = in_proj_bias
157
- _start = embed_dim
158
- _end = embed_dim * 2
159
- _w = in_proj_weight[_start:_end, :]
160
- if _b is not None:
161
- _b = _b[_start:_end]
162
- k = F.linear(key, _w, _b)
163
-
164
- # This is inline in_proj function with in_proj_weight and in_proj_bias
165
- _b = in_proj_bias
166
- _start = embed_dim * 2
167
- _end = None
168
- _w = in_proj_weight[_start:, :]
169
- if _b is not None:
170
- _b = _b[_start:]
171
- v = F.linear(value, _w, _b)
172
- else:
173
- q_proj_weight_non_opt = torch.jit._unwrap_optional(q_proj_weight)
174
- len1, len2 = q_proj_weight_non_opt.size()
175
- assert len1 == embed_dim and len2 == query.size(-1)
176
-
177
- k_proj_weight_non_opt = torch.jit._unwrap_optional(k_proj_weight)
178
- len1, len2 = k_proj_weight_non_opt.size()
179
- assert len1 == embed_dim and len2 == key.size(-1)
180
-
181
- v_proj_weight_non_opt = torch.jit._unwrap_optional(v_proj_weight)
182
- len1, len2 = v_proj_weight_non_opt.size()
183
- assert len1 == embed_dim and len2 == value.size(-1)
184
-
185
- if in_proj_bias is not None:
186
- q = F.linear(query, q_proj_weight_non_opt, in_proj_bias[0:embed_dim])
187
- k = F.linear(key, k_proj_weight_non_opt, in_proj_bias[embed_dim:(embed_dim * 2)])
188
- v = F.linear(value, v_proj_weight_non_opt, in_proj_bias[(embed_dim * 2):])
189
- else:
190
- q = F.linear(query, q_proj_weight_non_opt, in_proj_bias)
191
- k = F.linear(key, k_proj_weight_non_opt, in_proj_bias)
192
- v = F.linear(value, v_proj_weight_non_opt, in_proj_bias)
193
- q = q * scaling
194
-
195
- if attn_mask is not None:
196
- assert attn_mask.dtype == torch.float32 or attn_mask.dtype == torch.float64 or \
197
- attn_mask.dtype == torch.float16 or attn_mask.dtype == torch.uint8 or attn_mask.dtype == torch.bool, \
198
- 'Only float, byte, and bool types are supported for attn_mask, not {}'.format(attn_mask.dtype)
199
- if attn_mask.dtype == torch.uint8:
200
- warnings.warn("Byte tensor for attn_mask in nn.MultiheadAttention is deprecated. Use bool tensor instead.")
201
- attn_mask = attn_mask.to(torch.bool)
202
-
203
- if attn_mask.dim() == 2:
204
- attn_mask = attn_mask.unsqueeze(0)
205
- if list(attn_mask.size()) != [1, query.size(0), key.size(0)]:
206
- raise RuntimeError('The size of the 2D attn_mask is not correct.')
207
- elif attn_mask.dim() == 3:
208
- if list(attn_mask.size()) != [bsz * num_heads, query.size(0), key.size(0)]:
209
- raise RuntimeError('The size of the 3D attn_mask is not correct.')
210
- else:
211
- raise RuntimeError("attn_mask's dimension {} is not supported".format(attn_mask.dim()))
212
- # attn_mask's dim is 3 now.
213
-
214
- # # convert ByteTensor key_padding_mask to bool
215
- # if key_padding_mask is not None and key_padding_mask.dtype == torch.uint8:
216
- # warnings.warn("Byte tensor for key_padding_mask in nn.MultiheadAttention is deprecated. Use bool tensor instead.")
217
- # key_padding_mask = key_padding_mask.to(torch.bool)
218
-
219
- if bias_k is not None and bias_v is not None:
220
- if static_k is None and static_v is None:
221
- k = torch.cat([k, bias_k.repeat(1, bsz, 1)])
222
- v = torch.cat([v, bias_v.repeat(1, bsz, 1)])
223
- if attn_mask is not None:
224
- attn_mask = F.pad(attn_mask, (0, 1))
225
- if key_padding_mask is not None:
226
- key_padding_mask = F.pad(key_padding_mask, (0, 1))
227
- else:
228
- assert static_k is None, "bias cannot be added to static key."
229
- assert static_v is None, "bias cannot be added to static value."
230
- else:
231
- assert bias_k is None
232
- assert bias_v is None
233
-
234
- q = q.contiguous().view(tgt_len, bsz * num_heads, head_dim).transpose(0, 1)
235
- if k is not None:
236
- k = k.contiguous().view(-1, bsz * num_heads, head_dim).transpose(0, 1)
237
- if v is not None:
238
- v = v.contiguous().view(-1, bsz * num_heads, head_dim).transpose(0, 1)
239
-
240
- if static_k is not None:
241
- assert static_k.size(0) == bsz * num_heads
242
- assert static_k.size(2) == head_dim
243
- k = static_k
244
-
245
- if static_v is not None:
246
- assert static_v.size(0) == bsz * num_heads
247
- assert static_v.size(2) == head_dim
248
- v = static_v
249
-
250
- src_len = k.size(1)
251
-
252
- if key_padding_mask is not None:
253
- assert key_padding_mask.size(0) == bsz
254
- assert key_padding_mask.size(1) == src_len
255
-
256
- if add_zero_attn:
257
- src_len += 1
258
- k = torch.cat([k, torch.zeros((k.size(0), 1) + k.size()[2:], dtype=k.dtype, device=k.device)], dim=1)
259
- v = torch.cat([v, torch.zeros((v.size(0), 1) + v.size()[2:], dtype=v.dtype, device=v.device)], dim=1)
260
- if attn_mask is not None:
261
- attn_mask = F.pad(attn_mask, (0, 1))
262
- if key_padding_mask is not None:
263
- key_padding_mask = F.pad(key_padding_mask, (0, 1))
264
-
265
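- # Scaled dot-product attention scores: q was already multiplied by head_dim**-0.5 above, so this batched matmul yields (bsz * num_heads, tgt_len, src_len) logits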
- attn_output_weights = torch.bmm(q, k.transpose(1, 2))
266
- assert list(attn_output_weights.size()) == [bsz * num_heads, tgt_len, src_len]
267
-
268
- if attn_mask is not None:
269
- if attn_mask.dtype == torch.bool:
270
- attn_output_weights.masked_fill_(attn_mask, float('-inf'))
271
- else:
272
- attn_output_weights += attn_mask
273
-
274
-
275
- if key_padding_mask is not None:
276
- attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)
277
- attn_output_weights = attn_output_weights.masked_fill(
278
- key_padding_mask.unsqueeze(1).unsqueeze(2),
279
- float('-inf'),
280
- )
281
- attn_output_weights = attn_output_weights.view(bsz * num_heads, tgt_len, src_len)
282
-
283
- attn_output_weights = F.softmax(
284
- attn_output_weights, dim=-1)
285
- attn_output_weights = F.dropout(attn_output_weights, p=dropout_p, training=training)
286
-
287
- attn_output = torch.bmm(attn_output_weights, v)
288
- assert list(attn_output.size()) == [bsz * num_heads, tgt_len, head_dim]
289
- attn_output = attn_output.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim)
290
- attn_output = F.linear(attn_output, out_proj_weight, out_proj_bias)
291
-
292
- if need_weights:
293
- # average attention weights over heads
294
- attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)
295
- return attn_output, attn_output_weights.sum(dim=1) / num_heads
296
- else:
297
- return attn_output, None
298
-
299
- class MultiheadAttention(Module):
300
- r"""Allows the model to jointly attend to information
301
- from different representation subspaces.
302
- See reference: Attention Is All You Need
303
- .. math::
304
- \text{MultiHead}(Q, K, V) = \text{Concat}(head_1,\dots,head_h)W^O
305
- \text{where} head_i = \text{Attention}(QW_i^Q, KW_i^K, VW_i^V)
306
- Args:
307
- embed_dim: total dimension of the model.
308
- num_heads: parallel attention heads.
309
- dropout: a Dropout layer on attn_output_weights. Default: 0.0.
310
- bias: add bias as module parameter. Default: True.
311
- add_bias_kv: add bias to the key and value sequences at dim=0.
312
- add_zero_attn: add a new batch of zeros to the key and
313
- value sequences at dim=1.
314
- kdim: total number of features in key. Default: None.
315
- vdim: total number of features in value. Default: None.
316
- Note: if kdim and vdim are None, they will be set to embed_dim such that
317
- query, key, and value have the same number of features.
318
- Examples::
319
- >>> multihead_attn = nn.MultiheadAttention(embed_dim, num_heads)
320
- >>> attn_output, attn_output_weights = multihead_attn(query, key, value)
321
- """
322
- # __annotations__ = {
323
- # 'bias_k': torch._jit_internal.Optional[torch.Tensor],
324
- # 'bias_v': torch._jit_internal.Optional[torch.Tensor],
325
- # }
326
- __constants__ = ['q_proj_weight', 'k_proj_weight', 'v_proj_weight', 'in_proj_weight']
327
-
328
- def __init__(self, embed_dim, num_heads, dropout=0., bias=True, add_bias_kv=False, add_zero_attn=False, kdim=None, vdim=None):
329
- super(MultiheadAttention, self).__init__()
330
- self.embed_dim = embed_dim
331
- self.kdim = kdim if kdim is not None else embed_dim
332
- self.vdim = vdim if vdim is not None else embed_dim
333
- self._qkv_same_embed_dim = self.kdim == embed_dim and self.vdim == embed_dim
334
-
335
- self.num_heads = num_heads
336
- self.dropout = dropout
337
- self.head_dim = embed_dim // num_heads
338
- assert self.head_dim * num_heads == self.embed_dim, "embed_dim must be divisible by num_heads"
339
-
340
- if self._qkv_same_embed_dim is False:
341
- self.q_proj_weight = Parameter(torch.Tensor(embed_dim, embed_dim))
342
- self.k_proj_weight = Parameter(torch.Tensor(embed_dim, self.kdim))
343
- self.v_proj_weight = Parameter(torch.Tensor(embed_dim, self.vdim))
344
- self.register_parameter('in_proj_weight', None)
345
- else:
346
- self.in_proj_weight = Parameter(torch.empty(3 * embed_dim, embed_dim))
347
- self.register_parameter('q_proj_weight', None)
348
- self.register_parameter('k_proj_weight', None)
349
- self.register_parameter('v_proj_weight', None)
350
-
351
- if bias:
352
- self.in_proj_bias = Parameter(torch.empty(3 * embed_dim))
353
- else:
354
- self.register_parameter('in_proj_bias', None)
355
- self.out_proj = Linear(embed_dim, embed_dim, bias=bias)
356
-
357
- if add_bias_kv:
358
- self.bias_k = Parameter(torch.empty(1, 1, embed_dim))
359
- self.bias_v = Parameter(torch.empty(1, 1, embed_dim))
360
- else:
361
- self.bias_k = self.bias_v = None
362
-
363
- self.add_zero_attn = add_zero_attn
364
-
365
- self._reset_parameters()
366
-
367
- def _reset_parameters(self):
368
- if self._qkv_same_embed_dim:
369
- xavier_uniform_(self.in_proj_weight)
370
- else:
371
- xavier_uniform_(self.q_proj_weight)
372
- xavier_uniform_(self.k_proj_weight)
373
- xavier_uniform_(self.v_proj_weight)
374
-
375
- if self.in_proj_bias is not None:
376
- constant_(self.in_proj_bias, 0.)
377
- constant_(self.out_proj.bias, 0.)
378
- if self.bias_k is not None:
379
- xavier_normal_(self.bias_k)
380
- if self.bias_v is not None:
381
- xavier_normal_(self.bias_v)
382
-
383
- def __setstate__(self, state):
384
- # Support loading old MultiheadAttention checkpoints generated by v1.1.0
385
- if '_qkv_same_embed_dim' not in state:
386
- state['_qkv_same_embed_dim'] = True
387
-
388
- super(MultiheadAttention, self).__setstate__(state)
389
-
390
- def forward(self, query, key, value, key_padding_mask=None,
391
- need_weights=True, attn_mask=None):
392
- # type: (Tensor, Tensor, Tensor, Optional[Tensor], bool, Optional[Tensor]) -> Tuple[Tensor, Optional[Tensor]]
393
- r"""
394
- Args:
395
- query, key, value: map a query and a set of key-value pairs to an output.
396
- See "Attention Is All You Need" for more details.
397
- key_padding_mask: if provided, specified padding elements in the key will
398
- be ignored by the attention. This is a binary mask. When the value is True,
399
- the corresponding value on the attention layer will be filled with -inf.
400
- need_weights: output attn_output_weights.
401
- attn_mask: 2D or 3D mask that prevents attention to certain positions. A 2D mask will be broadcasted for all
402
- the batches while a 3D mask allows to specify a different mask for the entries of each batch.
403
- Shape:
404
- - Inputs:
405
- - query: :math:`(L, N, E)` where L is the target sequence length, N is the batch size, E is
406
- the embedding dimension.
407
- - key: :math:`(S, N, E)`, where S is the source sequence length, N is the batch size, E is
408
- the embedding dimension.
409
- - value: :math:`(S, N, E)` where S is the source sequence length, N is the batch size, E is
410
- the embedding dimension.
411
- - key_padding_mask: :math:`(N, S)` where N is the batch size, S is the source sequence length.
412
- If a ByteTensor is provided, the non-zero positions will be ignored while the zero
413
- positions will be unchanged. If a BoolTensor is provided, the positions with the
414
- value of ``True`` will be ignored while the position with the value of ``False`` will be unchanged.
415
- - attn_mask: 2D mask :math:`(L, S)` where L is the target sequence length, S is the source sequence length.
416
- 3D mask :math:`(N*num_heads, L, S)` where N is the batch size, L is the target sequence length,
417
- S is the source sequence length. attn_mask ensures that position i is allowed to attend the unmasked
418
- positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend
419
- while the zero positions will be unchanged. If a BoolTensor is provided, positions with ``True``
420
- are not allowed to attend while ``False`` values will be unchanged. If a FloatTensor
421
- is provided, it will be added to the attention weight.
422
- - Outputs:
423
- - attn_output: :math:`(L, N, E)` where L is the target sequence length, N is the batch size,
424
- E is the embedding dimension.
425
- - attn_output_weights: :math:`(N, L, S)` where N is the batch size,
426
- L is the target sequence length, S is the source sequence length.
427
- """
428
- if not self._qkv_same_embed_dim:
429
- return multi_head_attention_forward(
430
- query, key, value, self.embed_dim, self.num_heads,
431
- self.in_proj_weight, self.in_proj_bias,
432
- self.bias_k, self.bias_v, self.add_zero_attn,
433
- self.dropout, self.out_proj.weight, self.out_proj.bias,
434
- training=self.training,
435
- key_padding_mask=key_padding_mask, need_weights=need_weights,
436
- attn_mask=attn_mask, use_separate_proj_weight=True,
437
- q_proj_weight=self.q_proj_weight, k_proj_weight=self.k_proj_weight,
438
- v_proj_weight=self.v_proj_weight)
439
- else:
440
- return multi_head_attention_forward(
441
- query, key, value, self.embed_dim, self.num_heads,
442
- self.in_proj_weight, self.in_proj_bias,
443
- self.bias_k, self.bias_v, self.add_zero_attn,
444
- self.dropout, self.out_proj.weight, self.out_proj.bias,
445
- training=self.training,
446
- key_padding_mask=key_padding_mask, need_weights=need_weights,
447
- attn_mask=attn_mask)
448
-
449
-
450
- class Transformer(Module):
451
- r"""A transformer model. User is able to modify the attributes as needed. The architecture
452
- is based on the paper "Attention Is All You Need". Ashish Vaswani, Noam Shazeer,
453
- Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and
454
- Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information
455
- Processing Systems, pages 6000-6010. Users can build the BERT(https://arxiv.org/abs/1810.04805)
456
- model with corresponding parameters.
457
-
458
- Args:
459
- d_model: the number of expected features in the encoder/decoder inputs (default=512).
460
- nhead: the number of heads in the multiheadattention models (default=8).
461
- num_encoder_layers: the number of sub-encoder-layers in the encoder (default=6).
462
- num_decoder_layers: the number of sub-decoder-layers in the decoder (default=6).
463
- dim_feedforward: the dimension of the feedforward network model (default=2048).
464
- dropout: the dropout value (default=0.1).
465
- activation: the activation function of encoder/decoder intermediate layer, relu or gelu (default=relu).
466
- custom_encoder: custom encoder (default=None).
467
- custom_decoder: custom decoder (default=None).
468
-
469
- Examples::
470
- >>> transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12)
471
- >>> src = torch.rand((10, 32, 512))
472
- >>> tgt = torch.rand((20, 32, 512))
473
- >>> out = transformer_model(src, tgt)
474
-
475
- Note: A full example to apply nn.Transformer module for the word language model is available in
476
- https://github.com/pytorch/examples/tree/master/word_language_model
477
- """
478
-
479
- def __init__(self, d_model=512, nhead=8, num_encoder_layers=6,
480
- num_decoder_layers=6, dim_feedforward=2048, dropout=0.1,
481
- activation="relu", custom_encoder=None, custom_decoder=None):
482
- super(Transformer, self).__init__()
483
-
484
- if custom_encoder is not None:
485
- self.encoder = custom_encoder
486
- else:
487
- encoder_layer = TransformerEncoderLayer(d_model, nhead, dim_feedforward, dropout, activation)
488
- encoder_norm = LayerNorm(d_model)
489
- self.encoder = TransformerEncoder(encoder_layer, num_encoder_layers, encoder_norm)
490
-
491
- if custom_decoder is not None:
492
- self.decoder = custom_decoder
493
- else:
494
- decoder_layer = TransformerDecoderLayer(d_model, nhead, dim_feedforward, dropout, activation)
495
- decoder_norm = LayerNorm(d_model)
496
- self.decoder = TransformerDecoder(decoder_layer, num_decoder_layers, decoder_norm)
497
-
498
- self._reset_parameters()
499
-
500
- self.d_model = d_model
501
- self.nhead = nhead
502
-
503
- def forward(self, src, tgt, src_mask=None, tgt_mask=None,
504
- memory_mask=None, src_key_padding_mask=None,
505
- tgt_key_padding_mask=None, memory_key_padding_mask=None):
506
- # type: (Tensor, Tensor, Optional[Tensor], Optional[Tensor], Optional[Tensor], Optional[Tensor], Optional[Tensor], Optional[Tensor]) -> Tensor # noqa
507
- r"""Take in and process masked source/target sequences.
508
-
509
- Args:
510
- src: the sequence to the encoder (required).
511
- tgt: the sequence to the decoder (required).
512
- src_mask: the additive mask for the src sequence (optional).
513
- tgt_mask: the additive mask for the tgt sequence (optional).
514
- memory_mask: the additive mask for the encoder output (optional).
515
- src_key_padding_mask: the ByteTensor mask for src keys per batch (optional).
516
- tgt_key_padding_mask: the ByteTensor mask for tgt keys per batch (optional).
517
- memory_key_padding_mask: the ByteTensor mask for memory keys per batch (optional).
518
-
519
- Shape:
520
- - src: :math:`(S, N, E)`.
521
- - tgt: :math:`(T, N, E)`.
522
- - src_mask: :math:`(S, S)`.
523
- - tgt_mask: :math:`(T, T)`.
524
- - memory_mask: :math:`(T, S)`.
525
- - src_key_padding_mask: :math:`(N, S)`.
526
- - tgt_key_padding_mask: :math:`(N, T)`.
527
- - memory_key_padding_mask: :math:`(N, S)`.
528
-
529
- Note: [src/tgt/memory]_mask ensures that position i is allowed to attend the unmasked
530
- positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend
531
- while the zero positions will be unchanged. If a BoolTensor is provided, positions with ``True``
532
- are not allowed to attend while ``False`` values will be unchanged. If a FloatTensor
533
- is provided, it will be added to the attention weight.
534
- [src/tgt/memory]_key_padding_mask provides specified elements in the key to be ignored by
535
- the attention. If a ByteTensor is provided, the non-zero positions will be ignored while the zero
536
- positions will be unchanged. If a BoolTensor is provided, the positions with the
537
- value of ``True`` will be ignored while the position with the value of ``False`` will be unchanged.
538
-
539
- - output: :math:`(T, N, E)`.
540
-
541
- Note: Due to the multi-head attention architecture in the transformer model,
542
- the output sequence length of a transformer is the same as the input sequence
543
- (i.e. target) length of the decoder.
544
-
545
- where S is the source sequence length, T is the target sequence length, N is the
546
- batch size, E is the feature number
547
-
548
- Examples:
549
- >>> output = transformer_model(src, tgt, src_mask=src_mask, tgt_mask=tgt_mask)
550
- """
551
-
552
- if src.size(1) != tgt.size(1):
553
- raise RuntimeError("the batch number of src and tgt must be equal")
554
-
555
- if src.size(2) != self.d_model or tgt.size(2) != self.d_model:
556
- raise RuntimeError("the feature number of src and tgt must be equal to d_model")
557
-
558
- memory = self.encoder(src, mask=src_mask, src_key_padding_mask=src_key_padding_mask)
559
- output = self.decoder(tgt, memory, tgt_mask=tgt_mask, memory_mask=memory_mask,
560
- tgt_key_padding_mask=tgt_key_padding_mask,
561
- memory_key_padding_mask=memory_key_padding_mask)
562
- return output
563
-
564
- def generate_square_subsequent_mask(self, sz):
565
- r"""Generate a square mask for the sequence. The masked positions are filled with float('-inf').
566
- Unmasked positions are filled with float(0.0).
567
- """
568
- mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
569
- mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
570
- return mask
571
-
572
- def _reset_parameters(self):
573
- r"""Initialize parameters in the transformer model."""
574
-
575
- for p in self.parameters():
576
- if p.dim() > 1:
577
- xavier_uniform_(p)
578
-
579
-
580
- class TransformerEncoder(Module):
581
- r"""TransformerEncoder is a stack of N encoder layers
582
-
583
- Args:
584
- encoder_layer: an instance of the TransformerEncoderLayer() class (required).
585
- num_layers: the number of sub-encoder-layers in the encoder (required).
586
- norm: the layer normalization component (optional).
587
-
588
- Examples::
589
- >>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
590
- >>> transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)
591
- >>> src = torch.rand(10, 32, 512)
592
- >>> out = transformer_encoder(src)
593
- """
594
- __constants__ = ['norm']
595
-
596
- def __init__(self, encoder_layer, num_layers, norm=None):
597
- super(TransformerEncoder, self).__init__()
598
- self.layers = _get_clones(encoder_layer, num_layers)
599
- self.num_layers = num_layers
600
- self.norm = norm
601
-
602
- def forward(self, src, mask=None, src_key_padding_mask=None):
603
- # type: (Tensor, Optional[Tensor], Optional[Tensor]) -> Tensor
604
- r"""Pass the input through the encoder layers in turn.
605
-
606
- Args:
607
- src: the sequence to the encoder (required).
608
- mask: the mask for the src sequence (optional).
609
- src_key_padding_mask: the mask for the src keys per batch (optional).
610
-
611
- Shape:
612
- see the docs in Transformer class.
613
- """
614
- output = src
615
-
616
- for i, mod in enumerate(self.layers):
617
- output = mod(output, src_mask=mask, src_key_padding_mask=src_key_padding_mask)
618
-
619
- if self.norm is not None:
620
- output = self.norm(output)
621
-
622
- return output
623
-
624
-
625
- class TransformerDecoder(Module):
626
- r"""TransformerDecoder is a stack of N decoder layers
627
-
628
- Args:
629
- decoder_layer: an instance of the TransformerDecoderLayer() class (required).
630
- num_layers: the number of sub-decoder-layers in the decoder (required).
631
- norm: the layer normalization component (optional).
632
-
633
- Examples::
634
- >>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)
635
- >>> transformer_decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)
636
- >>> memory = torch.rand(10, 32, 512)
637
- >>> tgt = torch.rand(20, 32, 512)
638
- >>> out = transformer_decoder(tgt, memory)
639
- """
640
- __constants__ = ['norm']
641
-
642
- def __init__(self, decoder_layer, num_layers, norm=None):
643
- super(TransformerDecoder, self).__init__()
644
- self.layers = _get_clones(decoder_layer, num_layers)
645
- self.num_layers = num_layers
646
- self.norm = norm
647
-
648
- def forward(self, tgt, memory, memory2=None, tgt_mask=None,
649
- memory_mask=None, memory_mask2=None, tgt_key_padding_mask=None,
650
- memory_key_padding_mask=None, memory_key_padding_mask2=None):
651
- # type: (Tensor, Tensor, Optional[Tensor], Optional[Tensor], Optional[Tensor], Optional[Tensor]) -> Tensor
652
- r"""Pass the inputs (and mask) through the decoder layer in turn.
653
-
654
- Args:
655
- tgt: the sequence to the decoder (required).
656
- memory: the sequence from the last layer of the encoder (required).
657
- tgt_mask: the mask for the tgt sequence (optional).
658
- memory_mask: the mask for the memory sequence (optional).
659
- tgt_key_padding_mask: the mask for the tgt keys per batch (optional).
660
- memory_key_padding_mask: the mask for the memory keys per batch (optional).
661
-
662
- Shape:
663
- see the docs in Transformer class.
664
- """
665
- output = tgt
666
-
667
- for mod in self.layers:
668
- output = mod(output, memory, memory2=memory2, tgt_mask=tgt_mask,
669
- memory_mask=memory_mask, memory_mask2=memory_mask2,
670
- tgt_key_padding_mask=tgt_key_padding_mask,
671
- memory_key_padding_mask=memory_key_padding_mask,
672
- memory_key_padding_mask2=memory_key_padding_mask2)
673
-
674
- if self.norm is not None:
675
- output = self.norm(output)
676
-
677
- return output
678
-
679
- class TransformerEncoderLayer(Module):
680
- r"""TransformerEncoderLayer is made up of self-attn and feedforward network.
681
- This standard encoder layer is based on the paper "Attention Is All You Need".
682
- Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
683
- Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in
684
- Neural Information Processing Systems, pages 6000-6010. Users may modify or implement
685
- in a different way during application.
686
-
687
- Args:
688
- d_model: the number of expected features in the input (required).
689
- nhead: the number of heads in the multiheadattention models (required).
690
- dim_feedforward: the dimension of the feedforward network model (default=2048).
691
- dropout: the dropout value (default=0.1).
692
- activation: the activation function of intermediate layer, relu or gelu (default=relu).
693
-
694
- Examples::
695
- >>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
696
- >>> src = torch.rand(10, 32, 512)
697
- >>> out = encoder_layer(src)
698
- """
699
-
700
- def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1,
701
- activation="relu", debug=False):
702
- super(TransformerEncoderLayer, self).__init__()
703
- self.debug = debug
704
- self.self_attn = MultiheadAttention(d_model, nhead, dropout=dropout)
705
- # Implementation of Feedforward model
706
- self.linear1 = Linear(d_model, dim_feedforward)
707
- self.dropout = Dropout(dropout)
708
- self.linear2 = Linear(dim_feedforward, d_model)
709
-
710
- self.norm1 = LayerNorm(d_model)
711
- self.norm2 = LayerNorm(d_model)
712
- self.dropout1 = Dropout(dropout)
713
- self.dropout2 = Dropout(dropout)
714
-
715
- self.activation = _get_activation_fn(activation)
716
-
717
- def __setstate__(self, state):
718
- if 'activation' not in state:
719
- state['activation'] = F.relu
720
- super(TransformerEncoderLayer, self).__setstate__(state)
721
-
722
- def forward(self, src, src_mask=None, src_key_padding_mask=None):
723
- # type: (Tensor, Optional[Tensor], Optional[Tensor]) -> Tensor
724
- r"""Pass the input through the encoder layer.
725
-
726
- Args:
727
- src: the sequence to the encoder layer (required).
728
- src_mask: the mask for the src sequence (optional).
729
- src_key_padding_mask: the mask for the src keys per batch (optional).
730
-
731
- Shape:
732
- see the docs in Transformer class.
733
- """
734
- src2, attn = self.self_attn(src, src, src, attn_mask=src_mask,
735
- key_padding_mask=src_key_padding_mask)
736
- if self.debug: self.attn = attn
737
- src = src + self.dropout1(src2)
738
- src = self.norm1(src)
739
- src2 = self.linear2(self.dropout(self.activation(self.linear1(src))))
740
- src = src + self.dropout2(src2)
741
- src = self.norm2(src)
742
-
743
- return src
744
-
745
-
746
- class TransformerDecoderLayer(Module):
747
- r"""TransformerDecoderLayer is made up of self-attn, multi-head-attn and feedforward network.
748
- This standard decoder layer is based on the paper "Attention Is All You Need".
749
- Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
750
- Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in
751
- Neural Information Processing Systems, pages 6000-6010. Users may modify or implement
752
- in a different way during application.
753
-
754
- Args:
755
- d_model: the number of expected features in the input (required).
756
- nhead: the number of heads in the multiheadattention models (required).
757
- dim_feedforward: the dimension of the feedforward network model (default=2048).
758
- dropout: the dropout value (default=0.1).
759
- activation: the activation function of intermediate layer, relu or gelu (default=relu).
760
-
761
- Examples::
762
- >>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)
763
- >>> memory = torch.rand(10, 32, 512)
764
- >>> tgt = torch.rand(20, 32, 512)
765
- >>> out = decoder_layer(tgt, memory)
766
- """
767
-
768
- def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1,
769
- activation="relu", self_attn=True, siamese=False, debug=False):
770
- super(TransformerDecoderLayer, self).__init__()
771
- self.has_self_attn, self.siamese = self_attn, siamese
772
- self.debug = debug
773
- if self.has_self_attn:
774
- self.self_attn = MultiheadAttention(d_model, nhead, dropout=dropout)
775
- self.norm1 = LayerNorm(d_model)
776
- self.dropout1 = Dropout(dropout)
777
- self.multihead_attn = MultiheadAttention(d_model, nhead, dropout=dropout)
778
- # Implementation of Feedforward model
779
- self.linear1 = Linear(d_model, dim_feedforward)
780
- self.dropout = Dropout(dropout)
781
- self.linear2 = Linear(dim_feedforward, d_model)
782
-
783
- self.norm2 = LayerNorm(d_model)
784
- self.norm3 = LayerNorm(d_model)
785
- self.dropout2 = Dropout(dropout)
786
- self.dropout3 = Dropout(dropout)
787
- if self.siamese:
788
- self.multihead_attn2 = MultiheadAttention(d_model, nhead, dropout=dropout)
789
-
790
- self.activation = _get_activation_fn(activation)
791
-
792
- def __setstate__(self, state):
793
- if 'activation' not in state:
794
- state['activation'] = F.relu
795
- super(TransformerDecoderLayer, self).__setstate__(state)
796
-
797
- def forward(self, tgt, memory, tgt_mask=None, memory_mask=None,
798
- tgt_key_padding_mask=None, memory_key_padding_mask=None,
799
- memory2=None, memory_mask2=None, memory_key_padding_mask2=None):
800
- # type: (Tensor, Tensor, Optional[Tensor], Optional[Tensor], Optional[Tensor], Optional[Tensor]) -> Tensor
801
- r"""Pass the inputs (and mask) through the decoder layer.
802
-
803
- Args:
804
- tgt: the sequence to the decoder layer (required).
805
- memory: the sequence from the last layer of the encoder (required).
806
- tgt_mask: the mask for the tgt sequence (optional).
807
- memory_mask: the mask for the memory sequence (optional).
808
- tgt_key_padding_mask: the mask for the tgt keys per batch (optional).
809
- memory_key_padding_mask: the mask for the memory keys per batch (optional).
810
-
811
- Shape:
812
- see the docs in Transformer class.
813
- """
814
- if self.has_self_attn:
815
- tgt2, attn = self.self_attn(tgt, tgt, tgt, attn_mask=tgt_mask,
816
- key_padding_mask=tgt_key_padding_mask)
817
- tgt = tgt + self.dropout1(tgt2)
818
- tgt = self.norm1(tgt)
819
- if self.debug: self.attn = attn
820
- tgt2, attn2 = self.multihead_attn(tgt, memory, memory, attn_mask=memory_mask,
821
- key_padding_mask=memory_key_padding_mask)
822
- if self.debug: self.attn2 = attn2
823
-
824
- if self.siamese:
825
- tgt3, attn3 = self.multihead_attn2(tgt, memory2, memory2, attn_mask=memory_mask2,
826
- key_padding_mask=memory_key_padding_mask2)
827
- tgt = tgt + self.dropout2(tgt3)
828
- if self.debug: self.attn3 = attn3
829
-
830
- tgt = tgt + self.dropout2(tgt2)
831
- tgt = self.norm2(tgt)
832
- tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt))))
833
- tgt = tgt + self.dropout3(tgt2)
834
- tgt = self.norm3(tgt)
835
-
836
- return tgt
837
-
838
-
- def _get_clones(module, N):
-     return ModuleList([copy.deepcopy(module) for i in range(N)])
-
-
- def _get_activation_fn(activation):
-     if activation == "relu":
-         return F.relu
-     elif activation == "gelu":
-         return F.gelu
-
-     raise RuntimeError("activation should be relu/gelu, not {}".format(activation))
-
-
- class PositionalEncoding(nn.Module):
-     r"""Inject some information about the relative or absolute position of the tokens
-     in the sequence. The positional encodings have the same dimension as
-     the embeddings, so that the two can be summed. Here, we use sine and cosine
-     functions of different frequencies.
-     .. math::
-         \text{PosEncoder}(pos, 2i) = \sin(pos / 10000^{2i/d_{model}})
-         \text{PosEncoder}(pos, 2i+1) = \cos(pos / 10000^{2i/d_{model}})
-         \text{where pos is the word position and i is the embed idx}
-     Args:
-         d_model: the embed dim (required).
-         dropout: the dropout value (default=0.1).
-         max_len: the max. length of the incoming sequence (default=5000).
-     Examples:
-         >>> pos_encoder = PositionalEncoding(d_model)
-     """
-
-     def __init__(self, d_model, dropout=0.1, max_len=5000):
-         super(PositionalEncoding, self).__init__()
-         self.dropout = nn.Dropout(p=dropout)
-
-         pe = torch.zeros(max_len, d_model)
-         position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
-         div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
-         pe[:, 0::2] = torch.sin(position * div_term)
-         pe[:, 1::2] = torch.cos(position * div_term)
-         pe = pe.unsqueeze(0).transpose(0, 1)
-         self.register_buffer('pe', pe)
-
-     def forward(self, x):
-         r"""Inputs of forward function
-         Args:
-             x: the sequence fed to the positional encoder model (required).
-         Shape:
-             x: [sequence length, batch size, embed dim]
-             output: [sequence length, batch size, embed dim]
-         Examples:
-             >>> output = pos_encoder(x)
-         """
-
-         x = x + self.pe[:x.size(0), :]
-         return self.dropout(x)
-
-
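PositionalEncoding adds a fixed sinusoidal table to its input and then applies dropout, so it normally sits directly after the token embedding. A short sketch with illustrative sizes; the sqrt(d_model) embedding scale is a common Transformer convention assumed here, not something this file requires:

    import math
    import torch
    import torch.nn as nn

    d_model, vocab = 512, 1000
    embed = nn.Embedding(vocab, d_model)
    pos_encoder = PositionalEncoding(d_model, dropout=0.1)

    tokens = torch.randint(0, vocab, (20, 32))   # (sequence length, batch size)
    x = embed(tokens) * math.sqrt(d_model)       # conventional scaling of embeddings
    x = pos_encoder(x)                           # (20, 32, 512): positions added, then dropout

    # Spot-check the buffer against the closed form: pe[pos, 0, 0] = sin(pos / 10000^(0/d_model)).
    pe = pos_encoder.pe                          # registered buffer of shape (max_len, 1, d_model)
    assert torch.isclose(pe[3, 0, 0], torch.sin(torch.tensor(3.0)))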
- if __name__ == '__main__':
-     transformer_model = Transformer(nhead=16, num_encoder_layers=12)
-     src = torch.rand((10, 32, 512))
-     tgt = torch.rand((20, 32, 512))
-     out = transformer_model(src, tgt)
-     print(out)
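The smoke test above relies on the Transformer class defined earlier in this file; assuming it keeps the usual d_model=512 default, the printed tensor has the same shape as tgt, which a one-line check makes explicit:

    assert out.shape == (20, 32, 512)  # one d_model-sized vector per target position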
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/otlLib/builder.py DELETED
The diff for this file is too large to render. See raw diff