parquet-converter committed
Commit 6a6d80d · 1 parent: 628520d

Update parquet files (step 28 of 476)

This view is limited to 50 files because the commit contains too many changes.

Files changed (50)
  1. spaces/101-5/Bing-New/Dockerfile +0 -34
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Acon Digital Verberate Surround V1.0.1 WiN MacOSX.Incl Keygen-R2 Download High Quality Pc.md +0 -6
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Film Art An Introduction 10th Edition Pdf Download [CRACKED].md +0 -25
  4. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Forza Horizon 4 on PPSSPP A Complete Guide to Download and Play.md +0 -24
  5. spaces/1gistliPinn/ChatGPT4/Examples/4K Video Downloader 4.11.2.3400 Crack License Key (Portable) Download.md +0 -40
  6. spaces/1gistliPinn/ChatGPT4/Examples/Download Royalty-free Sounds From Sound Jay LINK.md +0 -6
  7. spaces/1phancelerku/anime-remove-background/Baaghi 3 Tiger Shroffs Epic Journey to Save His Brother from a Terrorist Group.md +0 -82
  8. spaces/1phancelerku/anime-remove-background/Facebook Lite APK - A Smaller and Faster Version of Facebook for Android.md +0 -119
  9. spaces/3druga/ae-6/README.md +0 -12
  10. spaces/AIConsultant/MusicGen/tests/adversarial/test_losses.py +0 -159
  11. spaces/AIFILMS/generate_human_motion/VQ-Trans/utils/quaternion.py +0 -423
  12. spaces/AIFILMS/generate_human_motion/pyrender/tests/unit/test_scenes.py +0 -235
  13. spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/run.py +0 -15
  14. spaces/AIGC-Audio/AudioGPT/app.py +0 -283
  15. spaces/AIGE/A_B/app.py +0 -56
  16. spaces/ASJMO/freegpt/g4f/typing.py +0 -3
  17. spaces/Abhilashvj/planogram-compliance/segment/predict.py +0 -504
  18. spaces/AchyuthGamer/OpenGPT/client/css/sidebar.css +0 -197
  19. spaces/AdWeeb/SuMmeet/README.md +0 -13
  20. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/GetChildHeight.js +0 -17
  21. spaces/AirtistDesign/stablediffusionapi-rev-animated/README.md +0 -12
  22. spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/onnx_ijbc.py +0 -267
  23. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/diffedit.md +0 -348
  24. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/conditional_image_generation.md +0 -60
  25. spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_x101_32x4d_fpn_1x_coco.py +0 -13
  26. spaces/Andy1621/uniformer_image_detection/configs/gn/mask_rcnn_r101_fpn_gn-all_3x_coco.py +0 -5
  27. spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_480x480_40k_pascal_context.py +0 -10
  28. spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/sampler_hijack.py +0 -218
  29. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/ops/encoding.py +0 -74
  30. spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/ema.py +0 -80
  31. spaces/Aphrodite/AIChatBot-SL-Chatbot-Blenderbot/README.md +0 -14
  32. spaces/Apk/anything-v3.0/README.md +0 -13
  33. spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/zero_shot/zero_shot_text2video.py +0 -164
  34. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/selection_prefs.py +0 -51
  35. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/__init__.py +0 -331
  36. spaces/Audio-AGI/WavJourney/services.py +0 -231
  37. spaces/Awesimo/jojogan/e4e/configs/__init__.py +0 -0
  38. spaces/AzinZ/vitscn/app.py +0 -139
  39. spaces/Benson/text-generation/Examples/32gun.az.md +0 -178
  40. spaces/Benson/text-generation/Examples/Como Hacer Una Tarjeta De Felicitacin.md +0 -85
  41. spaces/Benson/text-generation/Examples/Descarga Gratuita Del Virus Plague Inc Necroa.md +0 -79
  42. spaces/Benson/text-generation/Examples/Descargar Bloons Td 6 31.2.md +0 -110
  43. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/charsetgroupprober.py +0 -106
  44. spaces/BilalSardar/Object-Color-Detection-in-Video/app.py +0 -104
  45. spaces/CALM/Dashboard/streamlit_observable/frontend/src/types.d.ts +0 -1
  46. spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/reduce_by_key.h +0 -103
  47. spaces/CVPR/VizWiz-CLIP-VQA/app.py +0 -121
  48. spaces/CVPR/WALT/mmdet/models/backbones/hrnet.py +0 -537
  49. spaces/CikeyQI/Yunzai/Yunzai/plugins/adapter/ComWeChat.js +0 -501
  50. spaces/Cpp4App/Cpp4App/CDM/detect_merge/merge.py +0 -361
spaces/101-5/Bing-New/Dockerfile DELETED
@@ -1,34 +0,0 @@
- # Build Stage
- # Use golang:alpine as the base image for the build stage
- FROM golang:alpine AS builder
-
- # Add git so the project can be cloned from GitHub
- RUN apk --no-cache add git
-
- # Clone the go-proxy-bingai project from GitHub into /workspace/app
- RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app
-
- # Set the working directory to the cloned project directory
- WORKDIR /workspace/app
-
- # Build the Go project; -ldflags="-s -w" shrinks the compiled binary
- RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go
-
- # Runtime Stage
- # Use the lightweight alpine image as the runtime base image
- FROM alpine
-
- # Set the working directory
- WORKDIR /workspace/app
-
- # Copy the compiled binary from the build stage into the runtime image
- COPY --from=builder /workspace/app/go-proxy-bingai .
-
- # Set the environment variable; the value here is a random string
- ENV Go_Proxy_BingAI_USER_TOKEN_1="kJs8hD92ncMzLaoQWYtX5rG6bE3fZ4iO"
-
- # Expose port 8080
- EXPOSE 8080
-
- # Command to run when the container starts
- CMD ["/workspace/app/go-proxy-bingai"]
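For context, the multi-stage Dockerfile deleted above produced a small runtime image containing only the compiled binary. A minimal sketch of how such an image would be built and run locally (assuming Docker is installed and the Dockerfile is in the current directory; the `go-proxy-bingai` tag name is illustrative, not from the commit):

```shell
# Build the image; the multi-stage build keeps the final image small
docker build -t go-proxy-bingai .

# Run it, mapping the exposed port 8080 to the host
docker run --rm -p 8080:8080 go-proxy-bingai
```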
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Acon Digital Verberate Surround V1.0.1 WiN MacOSX.Incl Keygen-R2 Download High Quality Pc.md DELETED
@@ -1,6 +0,0 @@
-
- <h1>Acon Digital Verberate Surround V1.0.1 WiN MacOSX.Incl Keygen-R2 Download Pc: A Review</h1>` | | P: Introduction: What is Acon Digital Verberate Surround? What are its features and benefits? Why should you download it? | `<p>Introduction: What is Acon Digital Verberate Surround? What are its features and benefits? Why should you download it?</p>
- <h2>Acon Digital Verberate Surround V1.0.1 WiN MacOSX.Incl Keygen-R2 Download Pc</h2><br /><p><b><b>DOWNLOAD</b> &#128504; <a href="https://byltly.com/2uKvuz">https://byltly.com/2uKvuz</a></b></p><br /><br />` | | H2: What is Acon Digital Verberate Surround? | `<h2>What is Acon Digital Verberate Surround?</h2>` | | P: A brief overview of the software: what it does, how it works, what it is used for | `<p>A brief overview of the software: what it does, how it works, what it is used for</p>` | | H3: What is reverb effect? | `<h3>What is reverb effect?</h3>` | | P: An explanation of reverb effect: what it is, how it simulates real acoustical surroundings, why it is important for audio production | `<p>An explanation of reverb effect: what it is, how it simulates real acoustical surroundings, why it is important for audio production</p>` | | H3: What is surround sound? | `<h3>What is surround sound?</h3>` | | P: An explanation of surround sound: what it is, how it creates a 3D sound field, why it enhances the listening experience | `<p>An explanation of surround sound: what it is, how it creates a 3D sound field, why it enhances the listening experience</p>` | | H2: What are the features and benefits of Acon Digital Verberate Surround? 
| `<h2>What are the features and benefits of Acon Digital Verberate Surround?</h2>` | | P: A summary of the main features and benefits of the software: what it can do, how it can improve your audio quality, why it is better than other reverb plugins | `<p>A summary of the main features and benefits of the software: what it can do, how it can improve your audio quality, why it is better than other reverb plugins</p>` | | H3: Vivid Hall algorithm | `<h3>Vivid Hall algorithm</h3>` | | P: A description of the Vivid Hall algorithm: what it does, how it adds time variance to avoid stiffness, why it simulates reverberation of real halls with realism | `<p>A description of the Vivid Hall algorithm: what it does, how it adds time variance to avoid stiffness, why it simulates reverberation of real halls with realism</p>
- <p></p>` | | H3: Dispersion parameter | `<h3>Dispersion parameter</h3>` | | Outline of the article | HTML formatting | | --- | --- | | H3: Input and output channel layout | `<h3>Input and output channel layout</h3>` | | P: A description of the input and output channel layout: what it does, how it supports mono, stereo, and surround formats, why it allows flexible routing and processing | `<p>A description of the input and output channel layout: what it does, how it supports mono, stereo, and surround formats, why it allows flexible routing and processing</p>` | | H3: Presets and user interface | `<h3>Presets and user interface</h3>` | | P: A description of the presets and user interface: what they do, how they provide a variety of reverb settings and visual feedback, why they make the software easy to use and customize | `<p>A description of the presets and user interface: what they do, how they provide a variety of reverb settings and visual feedback, why they make the software easy to use and customize</p>` | | H2: Why should you download Acon Digital Verberate Surround? 
| `<h2>Why should you download Acon Digital Verberate Surround?</h2>` | | P: A persuasive argument for downloading the software: what benefits it will bring to your audio projects, how it will save you time and money, why it is worth the price | `<p>A persuasive argument for downloading the software: what benefits it will bring to your audio projects, how it will save you time and money, why it is worth the price</p>` | | H3: High-quality reverb for any audio source | `<h3>High-quality reverb for any audio source</h3>` | | P: A statement of how the software can handle any audio source, whether it is music, speech, sound effects, or ambience, and create realistic and natural-sounding reverb effects | `<p>A statement of how the software can handle any audio source, whether it is music, speech, sound effects, or ambience, and create realistic and natural-sounding reverb effects</p>` | | H3: Flexible and versatile surround sound capabilities | `<h3>Flexible and versatile surround sound capabilities</h3>` | | Outline of the article | HTML formatting | | --- | --- | | H3: Easy and intuitive user interface and presets | `<h3>Easy and intuitive user interface and presets</h3>` | | P: A statement of how the software has a user-friendly and modern user interface that allows you to adjust the reverb parameters with ease, and how it has a wide range of presets that suit different audio genres and scenarios | `<p>A statement of how the software has a user-friendly and modern user interface that allows you to adjust the reverb parameters with ease, and how it has a wide range of presets that suit different audio genres and scenarios</p>` | | H3: Affordable and reliable software with free updates | `<h3>Affordable and reliable software with free updates</h3>` | | P: A statement of how the software is priced reasonably and competitively, and how it offers free updates and support for its users, ensuring that you get the best value for your money | `<p>A statement of how the 
software is priced reasonably and competitively, and how it offers free updates and support for its users, ensuring that you get the best value for your money</p>` | | H2: How to download Acon Digital Verberate Surround? | `<h2>How to download Acon Digital Verberate Surround?</h2>` | | P: A step-by-step guide on how to download the software from the official website, how to install it on your computer, how to activate it with the keygen, and how to use it in your audio projects | `<p>A step-by-step guide on how to download the software from the official website, how to install it on your computer, how to activate it with the keygen, and how to use it in your audio projects</p>` | | H3: Downloading the software from the official website | `<h3>Downloading the software from the official website</h3>` | | P: A description of how to go to the official website of Acon Digital, how to find the Verberate Surround product page, how to choose your operating system (Windows or Mac), how to click on the download button, and how to save the file on your computer | `<p>A description of how to go to the official website of Acon Digital, how to find the Verberate Surround product page, how to choose your operating system (Windows or Mac), how to click on the download button, and how to save the file on your computer</p>` | | H3: Installing the software on your computer | `<h3>Installing the software on your computer</h3>` | | Outline of the article | HTML formatting | | --- | --- | | H3: Activating the software with the keygen | `<h3>Activating the software with the keygen</h3>` | | P: A description of how to open the keygen file, how to generate a serial number, how to copy and paste it into the software activation window, and how to confirm the activation | `<p>A description of how to open the keygen file, how to generate a serial number, how to copy and paste it into the software activation window, and how to confirm the activation</p>` | | H3: Using the software in your audio 
projects | `<h3>Using the software in your audio projects</h3>` | | P: A description of how to launch your audio editing software, how to load the Verberate Surround plugin, how to select an input and output channel layout, how to choose a preset or adjust the reverb parameters, and how to apply the reverb effect to your audio tracks | `<p>A description of how to launch your audio editing software, how to load the Verberate Surround plugin, how to select an input and output channel layout, how to choose a preset or adjust the reverb parameters, and how to apply the reverb effect to your audio tracks</p>` | | H2: Conclusion | `<h2>Conclusion</h2>` | | P: A summary of the main points of the article: what is Acon Digital Verberate Surround, what are its features and benefits, why should you download it, and how to download it | `<p>A summary of the main points of the article: what is Acon Digital Verberate Surround, what are its features and benefits, why should you download it, and how to download it</p>` | | H2: FAQs | `<h2>FAQs</h2>` | | P: A list of 5 frequently asked questions and answers about the software, such as: What are the system requirements for Verberate Surround? How much does Verberate Surround cost? How can I get technical support for Verberate Surround? Can I use Verberate Surround on multiple computers? What is the difference between Verberate Surround and Verberate Immersive? | `<p>A list of 5 frequently asked questions and answers about the software, such as: What are the system requirements for Verberate Surround? How much does Verberate Surround cost? How can I get technical support for Verberate Surround? Can I use Verberate Surround on multiple computers? What is the difference between Verberate Surround and Verberate Immersive?</p>` |</p> b2dd77e56b<br />
- <br />
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Film Art An Introduction 10th Edition Pdf Download [CRACKED].md DELETED
@@ -1,25 +0,0 @@
- <br />
- <h1>Film Art: An Introduction 10th Edition PDF Download</h1>
- <p>If you are looking for a comprehensive and accessible introduction to the analysis of cinema, you might want to check out Film Art: An Introduction 10th Edition by David Bordwell and Kristin Thompson. This book is one of the best-selling and widely respected textbooks on film studies, covering topics such as film form, film style, film genres, film criticism, and film history. It also includes frame enlargements from various films, examples from different periods and countries, and references to the authors' acclaimed weblog.</p>
- <p>Film Art: An Introduction 10th Edition is available in PDF format for download from various online sources. However, before you download it, you should be aware of the following issues:</p>
- <h2>film art an introduction 10th edition pdf download</h2><br /><p><b><b>Download</b> &raquo; <a href="https://byltly.com/2uKwja">https://byltly.com/2uKwja</a></b></p><br /><br />
- <ul>
- <li>The PDF version may not have the same quality and layout as the printed version.</li>
- <li>The PDF version may not include all the features and resources that come with the printed version, such as the optional tutorial CD-ROM that contains film clips.</li>
- <li>The PDF version may violate the copyright of the authors and the publisher, McGraw-Hill.</li>
- <li>The PDF version may not be the most updated edition of the book. The latest edition is the 12th edition, published in 2019.</li>
- </ul>
- <p>Therefore, if you want to get the most out of Film Art: An Introduction 10th Edition, you should consider buying the printed version or accessing it through a legitimate online platform. You can find more information about the book and its authors on their website[^1^]. You can also read some sample analyses of films on their weblog[^2^].</p>
- <p>Film Art: An Introduction 10th Edition is a great resource for anyone who wants to learn more about the art and craft of filmmaking. It will help you develop a core set of analytical skills that will deepen your understanding and appreciation of any film, in any genre.</p>
-
- <p>In this section, we will briefly introduce some of the main topics that Film Art: An Introduction 10th Edition covers. These topics are organized into four parts: film art and filmmaking, film form, film style, and types of films.</p>
- <h2>Film Art and Filmmaking</h2>
- <p>This part explores the nature and functions of film as an art form, as well as the creative, technological, and business aspects of filmmaking. It also explains the basic concepts and terminology that are used throughout the book.</p>
- <h2>Film Form</h2>
- <p>This part examines how films create meaning and effect through the use of formal elements, such as narrative, mise-en-scene, cinematography, editing, and sound. It also discusses how films can be analyzed in terms of their form and style.</p>
- <h2>Film Style</h2>
- <p>This part focuses on the specific techniques and choices that filmmakers make to create a distinctive film style. It covers topics such as realism and expressionism, continuity and discontinuity, classical Hollywood style, and alternative approaches to film style.</p>
- <h2>Types of Films</h2>
- <p>This part surveys the different types of films that exist in the world of cinema, such as film genres, documentary films, experimental films, and animated films. It also explores how these types of films can be classified, compared, and evaluated.</p> cec2833e83<br />
- <br />
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Forza Horizon 4 on PPSSPP A Complete Guide to Download and Play.md DELETED
@@ -1,24 +0,0 @@
-
- <h1>How to Download and Play Forza Horizon 4 on PPSSPP Emulator</h1>
- <p>Forza Horizon 4 is one of the most popular racing games on Xbox One and PC, but did you know that you can also play it on your Android or iOS device using a PPSSPP emulator? PPSSPP is a free and open-source emulator that allows you to run PSP games on your mobile device. In this article, we will show you how to download and play Forza Horizon 4 on PPSSPP emulator in a few easy steps.</p>
- <h2>forza horizon 4 download ppsspp</h2><br /><p><b><b>Download File</b> &rarr; <a href="https://byltly.com/2uKyt4">https://byltly.com/2uKyt4</a></b></p><br /><br />
- <h2>Step 1: Download Forza Horizon 4 ISO File</h2>
- <p>The first thing you need to do is to download the Forza Horizon 4 ISO file from a reliable source. You can find many websites that offer PSP ISO files for free, but some of them might be unsafe or contain malware. Therefore, we recommend using one of these trusted sources:</p>
- <ul>
- <li><a href="https://romsmania.cc/roms/playstation-portable/forza-horizon-4-276489">RomsMania</a>: This website has a large collection of PSP ISO files, including Forza Horizon 4. You just need to click on the download button and wait for the file to be saved on your device.</li>
- <li><a href="https://www.freeroms.com/roms/psp/forza_horizon_4.htm">FreeRoms</a>: This website also has a wide range of PSP ISO files, including Forza Horizon 4. You just need to click on the download link and follow the instructions to get the file on your device.</li>
- <li><a href="https://isoroms.com/forza-horizon-4-psp-iso-download/">IsoRoms</a>: This website is dedicated to PSP ISO files, including Forza Horizon 4. You just need to click on the download button and wait for the file to be downloaded on your device.</li>
- </ul>
- <p>Make sure that the file size is around 2 GB and that the file name ends with .iso. If the file is compressed or has a different extension, you will need to extract it or rename it before proceeding to the next step.</p>
- <h2>Step 2: Download PPSSPP Emulator</h2>
- <p>The next thing you need to do is to download the PPSSPP emulator from the official website or the app store. PPSSPP is available for Android, iOS, Windows, Mac, Linux, and other platforms. You can choose the version that suits your device and follow the installation process.</p>
- <p>Once you have installed the PPSSPP emulator, you will need to grant it permission to access your storage and other features. This will allow the emulator to locate and run the Forza Horizon 4 ISO file that you downloaded in the previous step.</p>
- <h2>Step 3: Load Forza Horizon 4 ISO File on PPSSPP Emulator</h2>
- <p>The final step is to load the Forza Horizon 4 ISO file on the PPSSPP emulator and start playing. To do this, you just need to open the PPSSPP app and tap on the Games tab. Then, navigate to the folder where you saved the Forza Horizon 4 ISO file and select it. The game will start loading and you will see the title screen.</p>
- <p></p>
- <p>Now you can enjoy playing Forza Horizon 4 on your mobile device using a PPSSPP emulator. You can customize the settings, controls, graphics, sound, and other options according to your preference. You can also save and load your progress anytime using the emulator's menu.</p>
- <h2>Conclusion</h2>
- <p>Forza Horizon 4 is a fun and exciting racing game that you can play on your Xbox One or PC, but you can also play it on your mobile device using a PPSSPP emulator. All you need to do is to download the Forza Horizon 4 ISO file from a trusted source, download the PPSSPP emulator from the official website or app store, and load the game on the emulator. Then you can enjoy racing across Britain in different seasons and modes.</p>
- <p>We hope this article was helpful and informative. If you have any questions or suggestions, please feel free to leave a comment below.</p> ddb901b051<br />
- <br />
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/4K Video Downloader 4.11.2.3400 Crack License Key (Portable) Download.md DELETED
@@ -1,40 +0,0 @@
- <h2>4K Video Downloader 4.11.2.3400 Crack License Key (Portable) Download</h2><br /><p><b><b>Download</b> &#9989; <a href="https://imgfil.com/2uxZxf">https://imgfil.com/2uxZxf</a></b></p><br /><br />
-
- Crack4k-Video-Downloader.exe Download
-
- 2020 Crack Latest Version
-
- How To Crack?
-
- Download the latest version of the crack for Crack4k-Video-Downloader.exe here. Unzip the package. Open the folder and find the crack (just download the crack). Install it and don’t forget to run it. Done!
-
- Crack4k-Video-Downloader.exe + Activation Code
-
- Crack4k-Video-Downloader.exe + Activation Code 2020
-
- Step 1: Install the game.
-
- Step 2: Launch the crack and accept the terms.
-
- Step 3: Once done, use crack as well.
-
- Final Words:
-
- Crack4k-Video-Downloader.exe is a premium application to manage your videos. It also helps you download movies in several formats and resolutions. It can also extract subtitles for videos. With this crack you can play HD and quality videos easily. In addition, you can download songs from the web in various formats as well. It has a user-friendly interface. Crack4k-Video-Downloader.exe works fine on Windows 7, 8.1, and 10. Moreover, it’s an extremely light-weight application.
-
- Procedure To Download Crack4k-Video-Downloader.exe + Activation Code
-
- Download Crack4k-Video-Downloader.exe
-
- Extract the crack from the download. Open the crack folder and find crack4k-video-downloader.exe. Run the crack to activate it.
-
- Use crack4k-video-downloader.exe
-
- Done!The present invention relates to an improvement of a machine for treating a fabric web, especially a paper web.
-
- The present invention applies to the situation of a machine for treating a web of paper, especially a web of paper of a size, typically, from 150 to 2200 or 2500 millimeters, more or less, but not exclusively, in the making of paper or of paper-like sheets.
-
- The present invention applies to a machine provided with at least one fabric treating device intended 4fefd39f24<br />
- <br />
- <br />
- <p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Download Royalty-free Sounds From Sound Jay LINK.md DELETED
@@ -1,6 +0,0 @@
- <h2>Download royalty-free sounds from Sound Jay</h2><br /><p><b><b>DOWNLOAD</b> &mdash;&mdash;&mdash;>>> <a href="https://imgfil.com/2uxY4z">https://imgfil.com/2uxY4z</a></b></p><br /><br />
-
- Shockwave-Sound.com offers the best quality Royalty Free Music, Stock Music, Production Music and Sound Effects for use in films, games and other media. 1fdad05405<br />
- <br />
- <br />
- <p></p>
spaces/1phancelerku/anime-remove-background/Baaghi 3 Tiger Shroffs Epic Journey to Save His Brother from a Terrorist Group.md DELETED
@@ -1,82 +0,0 @@
-
- <h1>Baaghi 3: A Review of the Action Thriller Film</h1>
- <p>Baaghi 3 is a 2020 Indian action thriller film directed by Ahmed Khan and produced by Sajid Nadiadwala. It is the third installment in the Baaghi series and stars Tiger Shroff, Riteish Deshmukh, and Shraddha Kapoor in the lead roles. It also features Ankita Lokhande, Jaideep Ahlawat, Vijay Varma, Jameel Khoury, Jackie Shroff, Satish Kaushik, Virendra Saxena, and Manav Gohil in pivotal roles. The film is loosely based on the Tamil film Vettai (2012) and follows a man who embarks on a bloody rampage to save his kidnapped brother from a terrorist group in Syria.</p>
- <h2>baaghi 3</h2><br /><p><b><b>DOWNLOAD</b> ---> <a href="https://jinyurl.com/2uNRrM">https://jinyurl.com/2uNRrM</a></b></p><br /><br />
- <p>The film was released theatrically in India on March 6, 2020, and received mixed reviews from critics and audiences. It was praised for its action scenes and cinematography, but criticized for its script, logic, and writing. The film's collections were affected by the COVID-19 pandemic as the theatres were shut down shortly after its release, though it still became a commercial success. It earned over ₹137 crore worldwide and over ₹97 crore net domestically in India, making it the second highest-grossing Bollywood film of 2020.</p>
- <h2>The Story of Baaghi 3</h2>
- <p>The film revolves around two brothers, Ranveer "Ronnie" Chaturvedi (Tiger Shroff) and Vikram Chaturvedi (Riteish Deshmukh), who share a strong bond since childhood. Ronnie is a rebellious and fearless fighter who always protects his timid and reluctant brother from any trouble. Vikram becomes a police officer at Ronnie's insistence, but Ronnie accompanies him on every mission and secretly does all the work while Vikram gets all the credit.</p>
- <p>One day, Vikram gets a call from his superior to go to Syria for a routine paper work assignment. Ronnie is reluctant to let him go alone, but Vikram assures him that he will be fine. However, upon reaching Syria, Vikram is kidnapped by a terrorist group called Jaish-e-Lashkar led by Abu Jalal Gaza (Jameel Khoury), who wants to use him as a hostage to negotiate with India.</p>
- <p>When Ronnie learns about Vikram's abduction, he decides to go to Syria to rescue him at any cost. He is aided by his girlfriend Siya (Shraddha Kapoor), her sister Ruchi (Ankita Lokhande), who is also Vikram's wife, and Akhtar Lahori (Vijay Varma), an undercover agent working for IPL (Inder Paheli Lamba), an Indian intelligence officer in Syria.</p>
- <p>Ronnie faces many obstacles and enemies in his quest to save his brother, including Zaidi (Ivan Kostadinov), Gaza's right-hand man, Andre Gomez (Slavisha Kajevski), a Russian arms dealer who supplies weapons to Gaza, and IPL himself, who turns out to be a double agent working for Gaza. Ronnie also discovers that his father, Charan Chaturvedi (Jackie Shroff), who was presumed dead, is actually alive and held captive by Gaza for 18 years.</p>
- <p>Ronnie manages to infiltrate Gaza's base and free his brother and father, but not before Gaza triggers a series of bomb blasts across India. Ronnie then engages in a final showdown with Gaza and kills him by blowing up his helicopter. Ronnie, Vikram, and Charan return to India and reunite with Siya and Ruchi.</p>
- <p>Baaghi 3 full movie download<br />
- Baaghi 3 box office collection<br />
- Baaghi 3 songs list<br />
- Baaghi 3 review and rating<br />
- Baaghi 3 trailer watch online<br />
- Baaghi 3 cast and crew<br />
- Baaghi 3 release date and time<br />
- Baaghi 3 action scenes video<br />
- Baaghi 3 behind the scenes photos<br />
- Baaghi 3 plot summary and spoilers<br />
- Baaghi 3 awards and nominations<br />
- Baaghi 3 vs Vettai comparison<br />
- Baaghi 3 Netflix streaming availability<br />
- Baaghi 3 best dialogues and quotes<br />
- Baaghi 3 fan reactions and memes<br />
- Baaghi 3 shooting locations and sets<br />
- Baaghi 3 director Ahmed Khan interview<br />
- Baaghi 3 producer Sajid Nadiadwala biography<br />
- Baaghi 3 Tiger Shroff workout routine<br />
- Baaghi 3 Riteish Deshmukh brother role<br />
- Baaghi 3 Shraddha Kapoor fashion style<br />
- Baaghi 3 Ankita Lokhande debut film<br />
- Baaghi 3 Jaideep Ahlawat villain character<br />
- Baaghi 3 Vijay Varma supporting actor<br />
- Baaghi 3 Jackie Shroff cameo appearance<br />
- Baaghi 3 Disha Patani item song dancer<br />
- Baaghi 3 Bappi Lahiri music composer<br />
- Baaghi 3 Vishal-Shekhar song lyrics<br />
- Baaghi 3 Tanishk Bagchi remix songs<br />
- Baaghi 3 Rochak Kohli romantic songs<br />
- Baaghi 3 Sachet-Parampara sad songs<br />
- Baaghi 3 Pranaay Rijia background score<br />
- Baaghi 3 Santhana Krishnan cinematography<br />
- Baaghi 3 Rameshwar Bhagat editing style<br />
- Baaghi 3 Ram-Lakshman stunt choreography<br />
- Baaghi 3 Kecha Khamphakdee martial arts trainer<br />
- Baaghi 3 Fox Star Studios distribution rights<br />
- Baaghi 3 Nadiadwala Grandson Entertainment production house<br />
- Baaghi 3 COVID-19 pandemic impact on collections<br />
- Baaghi 3 digital release on Disney+ Hotstar</p>
- <h2>The Strengths of Baaghi 3</h2>
- <p>One of the main strengths of Baaghi 3 is its action sequences and choreography. The film showcases Tiger Shroff's martial arts skills and stunts, which are impressive and thrilling to watch. The film also features some high-octane chases, fights, and explosions that keep the viewers on the edge of their seats. The film's action director Ahmed Khan has done a commendable job in creating some spectacular scenes that are visually appealing and exciting.</p>
- <p>Another strength of the film is its cinematography and visual effects. The film has been shot in various locations across India, Egypt, Serbia, Turkey, and Syria, which add to the film's grandeur and scale. The film also uses some stunning aerial shots and drone shots that capture the beauty and the chaos of the different settings. The film's visual effects are also well-done and realistic, especially in the scenes involving the helicopter, the tank, and the bomb blasts.</p>
55
- <p>A third strength of the film is the performances of the lead actors and the supporting cast. Tiger Shroff delivers a solid performance as Ronnie, who is determined, fearless, and loyal to his brother, and displays his emotional range in scenes where he expresses his anger, pain, and love. Riteish Deshmukh does a decent job as Vikram, who is vulnerable, naive, and dependent on his brother, and provides some comic relief with his expressions and dialogues. Shraddha Kapoor gives a good performance as Siya, who is supportive, brave, and witty, and shares good chemistry with Tiger Shroff. The supporting cast also does a fine job in their respective roles, especially Jaideep Ahlawat as IPL, Vijay Varma as Akhtar Lahori, Jameel Khoury as Abu Jalal Gaza, and Jackie Shroff as Charan Chaturvedi.</p> <h2>The Weaknesses of Baaghi 3</h2>
56
- <p>One of the main weaknesses of Baaghi 3 is its script and dialogues. The film suffers from a weak and predictable storyline that lacks originality and logic. The film is full of clichés, stereotypes, and loopholes that make it hard to take it seriously. The film also has some unnecessary and forced songs that hamper the pace and the mood of the film. The film's dialogues are also cheesy, corny, and repetitive, which reduce the impact and the credibility of the film.</p>
57
- <p>Another weakness of the film is its lack of realism and logic. The film defies the laws of physics, logic, and common sense in many scenes, making it unbelievable and absurd. It shows Ronnie single-handedly taking on an entire army of terrorists, surviving multiple gunshots and explosions, and performing stunts beyond human capability. It also shows Vikram being able to control Ronnie's actions through a phone call, which is ridiculous and implausible. The film further ignores the geopolitical and social realities of Syria, portraying it as a war-torn and lawless land where anything goes.</p>
58
- <p>A third issue with the film is the criticism and controversy it has attracted. The film has been criticized for its portrayal of violence, terrorism, and Islamophobia: it has been accused of glorifying violence and bloodshed through excessive and graphic scenes of torture, mutilation, and killing, and of demonizing Muslims and Arabs by showing them as terrorists, villains, and savages. It has also been accused of hurting the sentiments of communities and groups such as Syrian refugees, the Kurdish people, and the Indian Army.</p>
59
- <h2>The Comparison of Baaghi 3 with Other Films in the Series</h2>
60
- <p>Baaghi 3 is the third film in the Baaghi series, which started with Baaghi (2016) and continued with Baaghi 2 (2018). All three films are action thrillers that feature Tiger Shroff as Ronnie, a rebellious and fearless fighter who goes against all odds to save his loved ones from danger. All three films also feature different actresses as Ronnie's love interests, such as Shraddha Kapoor in Baaghi and Baaghi 3, and Disha Patani in Baaghi 2.</p>
61
- <p>The three films have some similarities and differences in terms of their storylines, themes, and styles. All three films have a similar plot structure that involves Ronnie's loved one being kidnapped or threatened by a powerful enemy, and Ronnie going on a mission to rescue them. All three films also have a similar theme of brotherhood, loyalty, and courage that drives Ronnie's actions. All three films also have a similar style of action that showcases Tiger Shroff's martial arts skills and stunts.</p>
62
- <p>However, the three films also differ in their settings, characters, and tones. Baaghi is set in India and Thailand, and features Ronnie as a martial arts student who falls in love with Siya, a rebellious girl who is kidnapped by Raghav (Sudheer Babu), a martial arts champion who wants to marry her. Baaghi 2 is set in Goa and other parts of India, and features Ronnie as an army officer who is contacted by his ex-girlfriend Neha (Disha Patani), who asks him to find her missing daughter Riya, kidnapped by Sunny (Prateik Babbar), a drug lord who wants to blackmail Neha's husband Shekhar (Manoj Bajpayee). Baaghi 3 is set in India and Syria, and features Ronnie as a civilian who goes to Syria to save his brother Vikram, a police officer kidnapped by Gaza, a terrorist leader who wants to use him as a hostage.</p>
- <p>The three films also have different tones and moods that reflect their settings and themes. Baaghi has a romantic and adventurous tone that focuses on the love story and the martial arts rivalry between Ronnie and Raghav. Baaghi 2 has a dark and suspenseful tone that focuses on the mystery and the conspiracy behind Riya's kidnapping and Sunny's motives. Baaghi 3 has a violent and explosive tone that focuses on the war and the carnage caused by Gaza's attacks and Ronnie's retaliation.</p>
- <p>The three films have received different reception and ratings from critics and audiences:</p>
- <table>
- <tr><th>Film</th><th>Worldwide gross</th><th>India net</th><th>IMDb</th><th>Times of India</th><th>Rotten Tomatoes</th></tr>
- <tr><td>Baaghi (2016)</td><td>over ₹127 crore</td><td>over ₹76 crore</td><td>5.2/10</td><td>2.5/5</td><td>33%</td></tr>
- <tr><td>Baaghi 2 (2018)</td><td>over ₹253 crore</td><td>over ₹165 crore</td><td>6.4/10</td><td>3/5</td><td>67%</td></tr>
- <tr><td>Baaghi 3 (2020)</td><td>over ₹137 crore</td><td>over ₹97 crore</td><td>4.9/10</td><td>2/5</td><td>50%</td></tr>
- </table>
- <p>Baaghi was a moderate success with mixed reviews: it was praised for its action scenes and Tiger Shroff's performance, but criticized for its weak story and direction. Baaghi 2 was a blockbuster with positive reviews: it was praised for its action scenes, direction, and Tiger Shroff's performance, but criticized for its violence and length. Baaghi 3 was a hit with mixed reviews: it was praised for its action scenes, cinematography, and Tiger Shroff's performance, but criticized for its script, logic, and writing.</p>
- <p>The future of the series is uncertain, as no sequel has been officially announced yet. However, given the popularity and success of the series, Baaghi 4 may well be made with Tiger Shroff reprising his role as Ronnie.</p> <h2>The Conclusion of the Review</h2>
63
- <p>Baaghi 3 is an action thriller film that is the third installment in the Baaghi series. It is a film that offers a lot of entertainment and excitement for the fans of the genre and the series. It has some amazing action scenes, stunning cinematography, and good performances by the lead actors and the supporting cast. However, it also has some flaws that might disappoint some viewers who expect more from the film. It has a weak script, poor dialogues, unrealistic logic, and controversial portrayal of violence, terrorism, and Islamophobia.</p>
64
- <p>In conclusion, Baaghi 3 can be enjoyed by those who love action films and do not care much about story or logic: it is a film to watch for its spectacle and star power. It will not satisfy those who look for depth, originality, or realism, and it rates as average or below average depending on one's taste and preference.</p>
65
- <p>My rating for Baaghi 3 is 2.5 out of 5 stars.</p>
66
- <h3>FAQs</h3>
67
- <p>Here are some frequently asked questions about Baaghi 3:</p>
68
- <ul>
69
- <li><b>Q: Is Baaghi 3 based on a true story?</b></li>
70
- <li>A: No, Baaghi 3 is not based on a true story. It is loosely based on the Tamil film Vettai (2012), which is also a fictional story.</li>
71
- <li><b>Q: Is Baaghi 3 available on any streaming platform?</b></li>
72
- <li>A: Yes, Baaghi 3 is available on Disney+ Hotstar in India.</li>
73
- <li><b>Q: Who sang the song "Dus Bahane" in Baaghi 3?</b></li>
74
- <li>A: The song "Dus Bahane" in Baaghi 3 was sung by Vishal Dadlani and Shekhar Ravjiani.</li>
75
- <li><b>Q: Who played the role of Ronnie's father in Baaghi 3?</b></li>
76
- <li>A: The role of Ronnie's father in Baaghi 3 was played by Jackie Shroff, who is also Tiger Shroff's real father.</li>
77
- <li><b>Q: How many awards did Baaghi 3 win?</b></li>
78
- <li>A: Baaghi 3 did not win any awards, but it was nominated for four awards at the Zee Cine Awards 2021, including Best Actor (Male), Best Action, Best Choreography, and Best Song of the Year.</li>
79
- </ul>
80
- <p>I hope you enjoyed reading this review of Baaghi 3. If you have any comments or questions, please feel free to share them with me. Thank you for your time and attention.</p>
spaces/1phancelerku/anime-remove-background/Facebook Lite APK - A Smaller and Faster Version of Facebook for Android.md DELETED
@@ -1,119 +0,0 @@
1
- <br />
2
- <h1>Facebook APK 64 Bit: What You Need to Know</h1>
3
- <p>Facebook is one of the most popular social media and social networking platforms in the world. It allows you to connect with your friends, family, and other people from all over the world. You can share status updates, photos, videos, and other multimedia content on your timeline or on other people's timelines. You can also join groups, pages, and marketplace to interact with other users who share your interests or needs.</p>
4
- <p>But did you know that there is a version of Facebook that is specially designed for devices that run on 64-bit processors? It is called Facebook APK 64 Bit, and it offers some advantages over the regular Facebook app. In this article, we will tell you everything you need to know about Facebook APK 64 Bit, including what it is, how to download and install it, and how to use it.</p>
5
- <h2>facebook apk 64 bit</h2><br /><p><b><b>Download File</b> &#10027;&#10027;&#10027; <a href="https://jinyurl.com/2uNMk3">https://jinyurl.com/2uNMk3</a></b></p><br /><br />
6
- <h2>What is Facebook APK 64 Bit?</h2>
7
- <h3>Definition and features of Facebook APK 64 Bit</h3>
8
- <p>Facebook APK 64 Bit is an Android application package (APK) file that contains the Facebook app for devices that have a 64-bit processor. A processor is the part of your device that executes instructions and performs calculations. A 64-bit processor can handle more data and memory than a 32-bit processor, which means it can run faster and more efficiently.</p>
9
- <p>Facebook APK 64 Bit has the same features as the regular Facebook app, such as posting status updates, photos, videos, and other multimedia content, chatting with friends, joining groups, pages, and marketplace, etc. However, it also has some additional features that make it more suitable for 64-bit devices, such as:</p>
10
- <ul>
11
- <li>Improved performance and stability</li>
12
- <li>Faster loading and scrolling</li>
13
- <li>Better compatibility with other apps and games</li>
14
- <li>More security and privacy</li>
15
- <li>Less battery consumption</li>
16
- </ul>
17
- <h3>Benefits and drawbacks of Facebook APK 64 Bit</h3>
18
- <p>The main benefit of Facebook APK 64 Bit is that it can make your Facebook experience smoother and more enjoyable on your 64-bit device. You can enjoy faster and more reliable performance, better compatibility with other apps and games, more security and privacy, and less battery consumption.</p>
19
- <p>The main drawback of Facebook APK 64 Bit is that it is not available on the official Google Play Store or the official Facebook website. This means that you have to download it from a third-party source, which may pose some risks such as malware infection, data theft, or device damage. Therefore, you have to be careful when choosing a source to download Facebook APK 64 Bit from.</p>
20
- <h2>How to Download and Install Facebook APK 64 Bit?</h2>
21
- <h3>Requirements and compatibility of Facebook APK 64 Bit</h3>
22
- <p>To download and install Facebook APK 64 Bit on your device, you need to meet some requirements and check some compatibility issues. Here are some things you need to consider before downloading and installing Facebook APK 64 Bit:</p>
23
- <ul>
24
- <li>Your device must have a 64-bit processor. You can check this by going to Settings > About Phone > Processor or CPU.</li>
25
- <li>Your device must have Android version 4.0.3 or higher. You can check this by going to Settings > About Phone > Android Version.</li>
26
- <li>Your device must have enough storage space to download and install the file. The file size of Facebook APK 64 Bit may vary depending on the source, but it is usually around 50 MB.</li>
27
- <li>Your device must allow installation from unknown sources. You can enable this by going to Settings > Security > Unknown Sources.</li>
28
- </ul>
29
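The processor requirement above can also be checked from a computer over `adb` (the Android Debug Bridge) instead of digging through the Settings menu. The sketch below is illustrative only: the `ABILIST` string is a hardcoded example of what `adb shell getprop ro.product.cpu.abilist` typically returns on a 64-bit phone; on a real device you would substitute the live command output.

```shell
# Hypothetical example: classify a device as 32- or 64-bit from its ABI list.
# On a connected device you would obtain this string with:
#   adb shell getprop ro.product.cpu.abilist
ABILIST="arm64-v8a,armeabi-v7a,armeabi"   # example value, assumed for illustration

case "$ABILIST" in
  *arm64-v8a*|*x86_64*) echo "64-bit capable: Facebook APK 64 Bit should run" ;;
  *)                    echo "32-bit only: use the regular 32-bit APK instead" ;;
esac
```

Devices reporting `arm64-v8a` or `x86_64` in their ABI list can run 64-bit APKs; anything else is limited to 32-bit builds.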
- <h3>Steps to download and install Facebook APK 64 Bit</h3> <p>Once you have met the requirements and checked the compatibility, you can follow these steps to download and install Facebook APK 64 Bit on your device:</p>
30
- <p>facebook lite apk x86_64<br />
31
- facebook app for windows 10 64 bit<br />
32
- facebook messenger apk 64 bit<br />
33
- facebook mod apk 64 bit<br />
34
- facebook apk download for pc 64 bit<br />
35
- facebook lite for android x86_64<br />
36
- facebook app for windows 10 64 bit free download<br />
37
- facebook messenger lite apk 64 bit<br />
38
- facebook dark mode apk 64 bit<br />
39
- facebook apk for laptop windows 10 64 bit<br />
40
- facebook lite latest version x86_64<br />
41
- facebook app for windows 10 64 bit offline installer<br />
42
- facebook video downloader apk 64 bit<br />
43
- facebook transparent apk 64 bit<br />
44
- facebook apk for pc windows 7 64 bit<br />
45
- facebook lite old version x86_64<br />
46
- facebook app for windows 10 64 bit filehippo<br />
47
- facebook story saver apk 64 bit<br />
48
- facebook auto liker apk 64 bit<br />
49
- facebook apk for pc windows 8.1 64 bit<br />
50
- facebook lite update version x86_64<br />
51
- facebook app for windows 10 64 bit softonic<br />
52
- facebook password hacker apk 64 bit<br />
53
- facebook color changer apk 64 bit<br />
54
- facebook apk for pc windows xp 64 bit<br />
55
- facebook lite beta version x86_64<br />
56
- facebook app for windows 10 pro 64 bit<br />
57
- facebook photo editor apk 64 bit<br />
58
- facebook profile tracker apk 64 bit<br />
59
- facebook apk for pc windows vista 64 bit<br />
60
- facebook lite mod version x86_64<br />
61
- facebook app for windows server 2019 64 bit<br />
62
- facebook status downloader apk 64 bit<br />
63
- facebook emoji keyboard apk 64 bit<br />
64
- facebook apk for pc windows server 2008 r2 64 bit<br />
65
- facebook lite premium version x86_64<br />
66
- facebook app for windows server 2016 64 bit<br />
67
- facebook live streamer apk 64 bit<br />
68
- facebook voice changer apk 64 bit<br />
69
- facebook apk for pc windows server 2003 r2 sp2 x86_64 edition <br />
70
- facebook lite gold version x86_64 <br />
71
- facebook app for windows server core edition (x86_64) <br />
72
- facebook page manager apk 64 bit <br />
73
- facebook sticker maker apk 64 bit <br />
74
- facebook apk for pc windows server essentials (x86_64) <br />
75
- facebook lite blue version x86_64 <br />
76
- facebook app for windows server foundation edition (x86_64) <br />
77
- facebook group poster apk 64 bit <br />
78
- facebook gif maker apk 64 bit</p>
79
- <ol>
80
- <li>Find a reliable and trustworthy source to download Facebook APK 64 Bit from. You can search for it on Google or use a website like APKPure or APKMirror.</li>
81
- <li>Download the file to your device. You may need to grant some permissions to the browser or the downloader app to access your storage.</li>
82
- <li>Locate the file on your device. You can use a file manager app or go to the Downloads folder.</li>
83
- <li>Tap on the file to start the installation process. You may need to confirm some prompts or warnings.</li>
84
- <li>Wait for the installation to finish. You may see a progress bar or a notification.</li>
85
- <li>Launch the app and sign in with your Facebook account. You may need to grant some permissions to the app to access your contacts, camera, microphone, etc.</li>
86
- </ol>
87
- <h2>How to Use Facebook APK 64 Bit?</h2>
88
- <h3>Tips and tricks for using Facebook APK 64 Bit</h3>
89
- <p>Facebook APK 64 Bit is very similar to the regular Facebook app, so you can use it in the same way as you normally do. However, there are some tips and tricks that can help you make the most out of Facebook APK 64 Bit, such as:</p>
90
- <ul>
91
- <li>Update the app regularly. This can help you get the latest features, bug fixes, and security patches.</li>
92
- <li>Clear the cache and data of the app occasionally. This can help you free up some storage space and improve the performance of the app.</li>
93
- <li>Adjust the settings of the app according to your preferences. You can change things like notifications, privacy, security, data usage, etc.</li>
94
- <li>Use dark mode or night mode if available. This can help you reduce eye strain and save battery life.</li>
95
- <li>Use shortcuts and gestures to access different functions of the app. For example, you can swipe left or right to switch between tabs, or long-press on an icon to access more options.</li>
96
- </ul>
97
- <h3>Common issues and solutions for Facebook APK 64 Bit</h3> <p>Facebook APK 64 Bit is generally stable and reliable, but it may encounter some issues from time to time. Here are some of the common issues and solutions for Facebook APK 64 Bit:</p>
98
- <table>
99
- <tr><th>Issue</th><th>Solution</th></tr>
100
- <tr><td>The app crashes or freezes</td><td>Restart the app or your device. Update the app or your device software. Clear the cache and data of the app.</td></tr>
101
- <tr><td>The app does not load or display content</td><td>Check your internet connection. Refresh the page or reload the app. Disable any VPN or proxy settings.</td></tr>
102
- <tr><td>The app does not send or receive messages</td><td>Check your internet connection. Check your notification settings. Check your message requests or spam folder.</td></tr>
103
- <tr><td>The app does not upload or download media files</td><td>Check your internet connection. Check your storage space. Check your data usage settings.</td></tr>
104
- <tr><td>The app does not recognize your device or account</td><td>Check your login credentials. Check your device settings. Check your security settings.</td></tr>
105
- </table>
106
- <h2>Conclusion</h2>
107
- <p>Facebook APK 64 Bit is a version of Facebook that is optimized for devices that have a 64-bit processor. It offers some advantages over the regular Facebook app, such as improved performance, compatibility, security, and battery life. However, it also has some drawbacks, such as being unavailable on the official sources and posing some risks from third-party sources. To download and install Facebook APK 64 Bit on your device, you need to meet some requirements, check some compatibility issues, and follow some steps. To use Facebook APK 64 Bit on your device, you can follow some tips and tricks, and troubleshoot some common issues.</p>
108
- <h2>FAQs</h2>
109
- <h4>What is the difference between Facebook APK 64 Bit and Facebook Lite?</h4>
110
- <p>Facebook Lite is another version of Facebook that is designed for devices that have low specifications or limited data plans. It is smaller in size, consumes less data, and works on slower networks than the regular Facebook app. However, it also has fewer features and functions than the regular Facebook app. Facebook APK 64 Bit is similar to the regular Facebook app in terms of features and functions, but it is more suitable for devices that have a 64-bit processor.</p>
111
- <h4>How can I tell if my device has a 64-bit processor?</h4> <p>You can check if your device has a 64-bit processor by going to Settings > About Phone > Processor or CPU. You can also use an app like CPU-Z or AnTuTu Benchmark to check your device's processor information.</p>
112
- <h4>Is Facebook APK 64 Bit safe to use?</h4>
113
- <p>Facebook APK 64 Bit is safe to use if you download it from a reliable and trustworthy source. However, since it is not available on the official Google Play Store or the official Facebook website, you have to be careful when choosing a source to download it from. You should avoid sources that have low ratings, negative reviews, or suspicious permissions. You should also scan the file with an antivirus app before installing it on your device.</p>
114
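Beyond antivirus scanning, a simple integrity check is to compare the downloaded file's SHA-256 hash against a checksum published by the site you downloaded from, when one is provided. This is only a sketch with a stand-in file: the filename `facebook-arm64.apk` and the checksum flow are illustrative, not real Facebook artifacts.

```shell
# Sketch: verify a downloaded APK against a trusted checksum before installing.
# The file below is a stand-in; in practice you would hash the real download
# and compare against the SHA-256 value published by the download source.
printf 'example apk bytes' > facebook-arm64.apk

EXPECTED=$(sha256sum facebook-arm64.apk | cut -d' ' -f1)  # in reality: copied from the download page
ACTUAL=$(sha256sum facebook-arm64.apk | cut -d' ' -f1)

if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "checksum OK - safe to sideload"
else
    echo "checksum MISMATCH - do not install"
fi
```

If the two values differ, the file was corrupted or tampered with in transit and should be deleted rather than installed.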
- <h4>Can I use Facebook APK 64 Bit on my PC or laptop?</h4>
115
- <p>Facebook APK 64 Bit is an Android application, so you cannot use it directly on your PC or laptop. However, you can use an Android emulator software like BlueStacks or Nox Player to run Facebook APK 64 Bit on your PC or laptop. An Android emulator is a program that simulates the Android operating system on your PC or laptop, allowing you to install and use Android apps and games on your PC or laptop.</p>
116
- <h4>Can I use Facebook APK 64 Bit with other Facebook apps?</h4>
117
- <p>Yes, you can use Facebook APK 64 Bit with other Facebook apps, such as Messenger, Instagram, WhatsApp, etc. However, you may need to download and install the 64-bit versions of these apps as well, if they are available. You can also use the regular versions of these apps, but they may not work as well as the 64-bit versions on your device.</p>
spaces/3druga/ae-6/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: Ae 6
3
- emoji: 🏃
4
- colorFrom: green
5
- colorTo: indigo
6
- sdk: gradio
7
- sdk_version: 3.20.1
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/AIConsultant/MusicGen/tests/adversarial/test_losses.py DELETED
@@ -1,159 +0,0 @@
1
- # Copyright (c) Meta Platforms, Inc. and affiliates.
2
- # All rights reserved.
3
- #
4
- # This source code is licensed under the license found in the
5
- # LICENSE file in the root directory of this source tree.
6
-
7
- import pytest
8
- import random
9
-
10
- import torch
11
-
12
- from audiocraft.adversarial import (
13
- AdversarialLoss,
14
- get_adv_criterion,
15
- get_real_criterion,
16
- get_fake_criterion,
17
- FeatureMatchingLoss,
18
- MultiScaleDiscriminator,
19
- )
20
-
21
-
22
- class TestAdversarialLoss:
23
-
24
- def test_adversarial_single_multidiscriminator(self):
25
- adv = MultiScaleDiscriminator()
26
- optimizer = torch.optim.Adam(
27
- adv.parameters(),
28
- lr=1e-4,
29
- )
30
- loss, loss_real, loss_fake = get_adv_criterion('mse'), get_real_criterion('mse'), get_fake_criterion('mse')
31
- adv_loss = AdversarialLoss(adv, optimizer, loss, loss_real, loss_fake)
32
-
33
- B, C, T = 4, 1, random.randint(1000, 5000)
34
- real = torch.randn(B, C, T)
35
- fake = torch.randn(B, C, T)
36
-
37
- disc_loss = adv_loss.train_adv(fake, real)
38
- assert isinstance(disc_loss, torch.Tensor) and isinstance(disc_loss.item(), float)
39
-
40
- loss, loss_feat = adv_loss(fake, real)
41
- assert isinstance(loss, torch.Tensor) and isinstance(loss.item(), float)
42
- # we did not specify feature loss
43
- assert loss_feat.item() == 0.
44
-
45
- def test_adversarial_feat_loss(self):
46
- adv = MultiScaleDiscriminator()
47
- optimizer = torch.optim.Adam(
48
- adv.parameters(),
49
- lr=1e-4,
50
- )
51
- loss, loss_real, loss_fake = get_adv_criterion('mse'), get_real_criterion('mse'), get_fake_criterion('mse')
52
- feat_loss = FeatureMatchingLoss()
53
- adv_loss = AdversarialLoss(adv, optimizer, loss, loss_real, loss_fake, feat_loss)
54
-
55
- B, C, T = 4, 1, random.randint(1000, 5000)
56
- real = torch.randn(B, C, T)
57
- fake = torch.randn(B, C, T)
58
-
59
- loss, loss_feat = adv_loss(fake, real)
60
-
61
- assert isinstance(loss, torch.Tensor) and isinstance(loss.item(), float)
62
- assert isinstance(loss_feat, torch.Tensor) and isinstance(loss.item(), float)
63
-
64
-
65
- class TestGeneratorAdversarialLoss:
66
-
67
- def test_hinge_generator_adv_loss(self):
68
- adv_loss = get_adv_criterion(loss_type='hinge')
69
-
70
- t0 = torch.randn(1, 2, 0)
71
- t1 = torch.FloatTensor([1.0, 2.0, 3.0])
72
-
73
- assert adv_loss(t0).item() == 0.0
74
- assert adv_loss(t1).item() == -2.0
75
-
76
- def test_mse_generator_adv_loss(self):
77
- adv_loss = get_adv_criterion(loss_type='mse')
78
-
79
- t0 = torch.randn(1, 2, 0)
80
- t1 = torch.FloatTensor([1.0, 1.0, 1.0])
81
- t2 = torch.FloatTensor([2.0, 5.0, 5.0])
82
-
83
- assert adv_loss(t0).item() == 0.0
84
- assert adv_loss(t1).item() == 0.0
85
- assert adv_loss(t2).item() == 11.0
86
-
87
-
88
- class TestDiscriminatorAdversarialLoss:
89
-
90
- def _disc_loss(self, loss_type: str, fake: torch.Tensor, real: torch.Tensor):
91
- disc_loss_real = get_real_criterion(loss_type)
92
- disc_loss_fake = get_fake_criterion(loss_type)
93
-
94
- loss = disc_loss_fake(fake) + disc_loss_real(real)
95
- return loss
96
-
97
- def test_hinge_discriminator_adv_loss(self):
98
- loss_type = 'hinge'
99
- t0 = torch.FloatTensor([0.0, 0.0, 0.0])
100
- t1 = torch.FloatTensor([1.0, 2.0, 3.0])
101
-
102
- assert self._disc_loss(loss_type, t0, t0).item() == 2.0
103
- assert self._disc_loss(loss_type, t1, t1).item() == 3.0
104
-
105
- def test_mse_discriminator_adv_loss(self):
106
- loss_type = 'mse'
107
-
108
- t0 = torch.FloatTensor([0.0, 0.0, 0.0])
109
- t1 = torch.FloatTensor([1.0, 1.0, 1.0])
110
-
111
- assert self._disc_loss(loss_type, t0, t0).item() == 1.0
112
- assert self._disc_loss(loss_type, t1, t0).item() == 2.0
113
-
114
-
115
- class TestFeatureMatchingLoss:
116
-
117
- def test_features_matching_loss_base(self):
118
- ft_matching_loss = FeatureMatchingLoss()
119
- length = random.randrange(1, 100_000)
120
- t1 = torch.randn(1, 2, length)
121
-
122
- loss = ft_matching_loss([t1], [t1])
123
- assert isinstance(loss, torch.Tensor)
124
- assert loss.item() == 0.0
125
-
126
- def test_features_matching_loss_raises_exception(self):
127
- ft_matching_loss = FeatureMatchingLoss()
128
- length = random.randrange(1, 100_000)
129
- t1 = torch.randn(1, 2, length)
130
- t2 = torch.randn(1, 2, length + 1)
131
-
132
- with pytest.raises(AssertionError):
133
- ft_matching_loss([], [])
134
-
135
- with pytest.raises(AssertionError):
136
- ft_matching_loss([t1], [t1, t1])
137
-
138
- with pytest.raises(AssertionError):
139
- ft_matching_loss([t1], [t2])
140
-
141
- def test_features_matching_loss_output(self):
142
- loss_nonorm = FeatureMatchingLoss(normalize=False)
143
- loss_layer_normed = FeatureMatchingLoss(normalize=True)
144
-
145
- length = random.randrange(1, 100_000)
146
- t1 = torch.randn(1, 2, length)
147
- t2 = torch.randn(1, 2, length)
148
-
149
- assert loss_nonorm([t1, t2], [t1, t2]).item() == 0.0
150
- assert loss_layer_normed([t1, t2], [t1, t2]).item() == 0.0
151
-
152
- t3 = torch.FloatTensor([1.0, 2.0, 3.0])
153
- t4 = torch.FloatTensor([2.0, 10.0, 3.0])
154
-
155
- assert loss_nonorm([t3], [t4]).item() == 3.0
156
- assert loss_nonorm([t3, t3], [t4, t4]).item() == 6.0
157
-
158
- assert loss_layer_normed([t3], [t4]).item() == 3.0
159
- assert loss_layer_normed([t3, t3], [t4, t4]).item() == 3.0
spaces/AIFILMS/generate_human_motion/VQ-Trans/utils/quaternion.py DELETED
@@ -1,423 +0,0 @@
1
- # Copyright (c) 2018-present, Facebook, Inc.
2
- # All rights reserved.
3
- #
4
- # This source code is licensed under the license found in the
5
- # LICENSE file in the root directory of this source tree.
6
- #
7
-
8
- import torch
9
- import numpy as np
10
-
11
- _EPS4 = np.finfo(float).eps * 4.0
12
-
13
- _FLOAT_EPS = np.finfo(np.float).eps
14
-
15
- # PyTorch-backed implementations
16
- def qinv(q):
17
- assert q.shape[-1] == 4, 'q must be a tensor of shape (*, 4)'
18
- mask = torch.ones_like(q)
19
- mask[..., 1:] = -mask[..., 1:]
20
- return q * mask
21
-
22
-
23
- def qinv_np(q):
24
- assert q.shape[-1] == 4, 'q must be a tensor of shape (*, 4)'
25
- return qinv(torch.from_numpy(q).float()).numpy()
26
-
27
-
28
- def qnormalize(q):
29
- assert q.shape[-1] == 4, 'q must be a tensor of shape (*, 4)'
30
- return q / torch.norm(q, dim=-1, keepdim=True)
31
-
32
-
33
- def qmul(q, r):
34
- """
35
- Multiply quaternion(s) q with quaternion(s) r.
36
- Expects two equally-sized tensors of shape (*, 4), where * denotes any number of dimensions.
37
- Returns q*r as a tensor of shape (*, 4).
38
- """
39
- assert q.shape[-1] == 4
40
- assert r.shape[-1] == 4
41
-
42
- original_shape = q.shape
43
-
44
- # Compute outer product
45
- terms = torch.bmm(r.view(-1, 4, 1), q.view(-1, 1, 4))
46
-
47
- w = terms[:, 0, 0] - terms[:, 1, 1] - terms[:, 2, 2] - terms[:, 3, 3]
48
- x = terms[:, 0, 1] + terms[:, 1, 0] - terms[:, 2, 3] + terms[:, 3, 2]
49
- y = terms[:, 0, 2] + terms[:, 1, 3] + terms[:, 2, 0] - terms[:, 3, 1]
50
- z = terms[:, 0, 3] - terms[:, 1, 2] + terms[:, 2, 1] + terms[:, 3, 0]
51
- return torch.stack((w, x, y, z), dim=1).view(original_shape)
52
-
53
-
54
- def qrot(q, v):
55
- """
56
-     Rotate vector(s) v about the rotation described by quaternion(s) q.
-     Expects a tensor of shape (*, 4) for q and a tensor of shape (*, 3) for v,
-     where * denotes any number of dimensions.
-     Returns a tensor of shape (*, 3).
-     """
-     assert q.shape[-1] == 4
-     assert v.shape[-1] == 3
-     assert q.shape[:-1] == v.shape[:-1]
-
-     original_shape = list(v.shape)
-     q = q.contiguous().view(-1, 4)
-     v = v.contiguous().view(-1, 3)
-
-     qvec = q[:, 1:]
-     uv = torch.cross(qvec, v, dim=1)
-     uuv = torch.cross(qvec, uv, dim=1)
-     return (v + 2 * (q[:, :1] * uv + uuv)).view(original_shape)
-
-
- def qeuler(q, order, epsilon=0, deg=True):
-     """
-     Convert quaternion(s) q to Euler angles.
-     Expects a tensor of shape (*, 4), where * denotes any number of dimensions.
-     Returns a tensor of shape (*, 3).
-     """
-     assert q.shape[-1] == 4
-
-     original_shape = list(q.shape)
-     original_shape[-1] = 3
-     q = q.view(-1, 4)
-
-     q0 = q[:, 0]
-     q1 = q[:, 1]
-     q2 = q[:, 2]
-     q3 = q[:, 3]
-
-     if order == 'xyz':
-         x = torch.atan2(2 * (q0 * q1 - q2 * q3), 1 - 2 * (q1 * q1 + q2 * q2))
-         y = torch.asin(torch.clamp(2 * (q1 * q3 + q0 * q2), -1 + epsilon, 1 - epsilon))
-         z = torch.atan2(2 * (q0 * q3 - q1 * q2), 1 - 2 * (q2 * q2 + q3 * q3))
-     elif order == 'yzx':
-         x = torch.atan2(2 * (q0 * q1 - q2 * q3), 1 - 2 * (q1 * q1 + q3 * q3))
-         y = torch.atan2(2 * (q0 * q2 - q1 * q3), 1 - 2 * (q2 * q2 + q3 * q3))
-         z = torch.asin(torch.clamp(2 * (q1 * q2 + q0 * q3), -1 + epsilon, 1 - epsilon))
-     elif order == 'zxy':
-         x = torch.asin(torch.clamp(2 * (q0 * q1 + q2 * q3), -1 + epsilon, 1 - epsilon))
-         y = torch.atan2(2 * (q0 * q2 - q1 * q3), 1 - 2 * (q1 * q1 + q2 * q2))
-         z = torch.atan2(2 * (q0 * q3 - q1 * q2), 1 - 2 * (q1 * q1 + q3 * q3))
-     elif order == 'xzy':
-         x = torch.atan2(2 * (q0 * q1 + q2 * q3), 1 - 2 * (q1 * q1 + q3 * q3))
-         y = torch.atan2(2 * (q0 * q2 + q1 * q3), 1 - 2 * (q2 * q2 + q3 * q3))
-         z = torch.asin(torch.clamp(2 * (q0 * q3 - q1 * q2), -1 + epsilon, 1 - epsilon))
-     elif order == 'yxz':
-         x = torch.asin(torch.clamp(2 * (q0 * q1 - q2 * q3), -1 + epsilon, 1 - epsilon))
-         y = torch.atan2(2 * (q1 * q3 + q0 * q2), 1 - 2 * (q1 * q1 + q2 * q2))
-         z = torch.atan2(2 * (q1 * q2 + q0 * q3), 1 - 2 * (q1 * q1 + q3 * q3))
-     elif order == 'zyx':
-         x = torch.atan2(2 * (q0 * q1 + q2 * q3), 1 - 2 * (q1 * q1 + q2 * q2))
-         y = torch.asin(torch.clamp(2 * (q0 * q2 - q1 * q3), -1 + epsilon, 1 - epsilon))
-         z = torch.atan2(2 * (q0 * q3 + q1 * q2), 1 - 2 * (q2 * q2 + q3 * q3))
-     else:
-         raise ValueError('Unknown Euler angle order: ' + order)
-
-     if deg:
-         return torch.stack((x, y, z), dim=1).view(original_shape) * 180 / np.pi
-     else:
-         return torch.stack((x, y, z), dim=1).view(original_shape)
-
-
- # Numpy-backed implementations
-
- def qmul_np(q, r):
-     q = torch.from_numpy(q).contiguous().float()
-     r = torch.from_numpy(r).contiguous().float()
-     return qmul(q, r).numpy()
-
-
- def qrot_np(q, v):
-     q = torch.from_numpy(q).contiguous().float()
-     v = torch.from_numpy(v).contiguous().float()
-     return qrot(q, v).numpy()
-
-
- def qeuler_np(q, order, epsilon=0, use_gpu=False):
-     if use_gpu:
-         q = torch.from_numpy(q).cuda().float()
-         return qeuler(q, order, epsilon).cpu().numpy()
-     else:
-         q = torch.from_numpy(q).contiguous().float()
-         return qeuler(q, order, epsilon).numpy()
-
-
- def qfix(q):
-     """
-     Enforce quaternion continuity across the time dimension by selecting
-     the representation (q or -q) with minimal distance (or, equivalently, maximal dot product)
-     between two consecutive frames.
-
-     Expects a tensor of shape (L, J, 4), where L is the sequence length and J is the number of joints.
-     Returns a tensor of the same shape.
-     """
-     assert len(q.shape) == 3
-     assert q.shape[-1] == 4
-
-     result = q.copy()
-     dot_products = np.sum(q[1:] * q[:-1], axis=2)
-     mask = dot_products < 0
-     mask = (np.cumsum(mask, axis=0) % 2).astype(bool)
-     result[1:][mask] *= -1
-     return result
-
-
- def euler2quat(e, order, deg=True):
-     """
-     Convert Euler angles to quaternions.
-     """
-     assert e.shape[-1] == 3
-
-     original_shape = list(e.shape)
-     original_shape[-1] = 4
-
-     e = e.view(-1, 3)
-
-     # if the Euler angles are given in degrees, convert them to radians
-     if deg:
-         e = e * np.pi / 180.
-
-     x = e[:, 0]
-     y = e[:, 1]
-     z = e[:, 2]
-
-     rx = torch.stack((torch.cos(x / 2), torch.sin(x / 2), torch.zeros_like(x), torch.zeros_like(x)), dim=1)
-     ry = torch.stack((torch.cos(y / 2), torch.zeros_like(y), torch.sin(y / 2), torch.zeros_like(y)), dim=1)
-     rz = torch.stack((torch.cos(z / 2), torch.zeros_like(z), torch.zeros_like(z), torch.sin(z / 2)), dim=1)
-
-     result = None
-     for coord in order:
-         if coord == 'x':
-             r = rx
-         elif coord == 'y':
-             r = ry
-         elif coord == 'z':
-             r = rz
-         else:
-             raise ValueError('Unknown axis: ' + coord)
-         if result is None:
-             result = r
-         else:
-             result = qmul(result, r)
-
-     # Reverse antipodal representation to have a non-negative "w"
-     if order in ['xyz', 'yzx', 'zxy']:
-         result *= -1
-
-     return result.view(original_shape)
-
-
- def expmap_to_quaternion(e):
-     """
-     Convert axis-angle rotations (aka exponential maps) to quaternions.
-     Stable formula from "Practical Parameterization of Rotations Using the Exponential Map".
-     Expects a tensor of shape (*, 3), where * denotes any number of dimensions.
-     Returns a tensor of shape (*, 4).
-     """
-     assert e.shape[-1] == 3
-
-     original_shape = list(e.shape)
-     original_shape[-1] = 4
-     e = e.reshape(-1, 3)
-
-     theta = np.linalg.norm(e, axis=1).reshape(-1, 1)
-     w = np.cos(0.5 * theta).reshape(-1, 1)
-     xyz = 0.5 * np.sinc(0.5 * theta / np.pi) * e
-     return np.concatenate((w, xyz), axis=1).reshape(original_shape)
-
-
- def euler_to_quaternion(e, order):
-     """
-     Convert Euler angles to quaternions.
-     """
-     assert e.shape[-1] == 3
-
-     original_shape = list(e.shape)
-     original_shape[-1] = 4
-
-     e = e.reshape(-1, 3)
-
-     x = e[:, 0]
-     y = e[:, 1]
-     z = e[:, 2]
-
-     rx = np.stack((np.cos(x / 2), np.sin(x / 2), np.zeros_like(x), np.zeros_like(x)), axis=1)
-     ry = np.stack((np.cos(y / 2), np.zeros_like(y), np.sin(y / 2), np.zeros_like(y)), axis=1)
-     rz = np.stack((np.cos(z / 2), np.zeros_like(z), np.zeros_like(z), np.sin(z / 2)), axis=1)
-
-     result = None
-     for coord in order:
-         if coord == 'x':
-             r = rx
-         elif coord == 'y':
-             r = ry
-         elif coord == 'z':
-             r = rz
-         else:
-             raise ValueError('Unknown axis: ' + coord)
-         if result is None:
-             result = r
-         else:
-             result = qmul_np(result, r)
-
-     # Reverse antipodal representation to have a non-negative "w"
-     if order in ['xyz', 'yzx', 'zxy']:
-         result *= -1
-
-     return result.reshape(original_shape)
-
-
- def quaternion_to_matrix(quaternions):
-     """
-     Convert rotations given as quaternions to rotation matrices.
-     Args:
-         quaternions: quaternions with real part first,
-             as tensor of shape (..., 4).
-     Returns:
-         Rotation matrices as tensor of shape (..., 3, 3).
-     """
-     r, i, j, k = torch.unbind(quaternions, -1)
-     two_s = 2.0 / (quaternions * quaternions).sum(-1)
-
-     o = torch.stack(
-         (
-             1 - two_s * (j * j + k * k),
-             two_s * (i * j - k * r),
-             two_s * (i * k + j * r),
-             two_s * (i * j + k * r),
-             1 - two_s * (i * i + k * k),
-             two_s * (j * k - i * r),
-             two_s * (i * k - j * r),
-             two_s * (j * k + i * r),
-             1 - two_s * (i * i + j * j),
-         ),
-         -1,
-     )
-     return o.reshape(quaternions.shape[:-1] + (3, 3))
-
-
- def quaternion_to_matrix_np(quaternions):
-     q = torch.from_numpy(quaternions).contiguous().float()
-     return quaternion_to_matrix(q).numpy()
-
-
- def quaternion_to_cont6d_np(quaternions):
-     rotation_mat = quaternion_to_matrix_np(quaternions)
-     cont_6d = np.concatenate([rotation_mat[..., 0], rotation_mat[..., 1]], axis=-1)
-     return cont_6d
-
-
- def quaternion_to_cont6d(quaternions):
-     rotation_mat = quaternion_to_matrix(quaternions)
-     cont_6d = torch.cat([rotation_mat[..., 0], rotation_mat[..., 1]], dim=-1)
-     return cont_6d
-
-
- def cont6d_to_matrix(cont6d):
-     assert cont6d.shape[-1] == 6, "The last dimension must be 6"
-     x_raw = cont6d[..., 0:3]
-     y_raw = cont6d[..., 3:6]
-
-     x = x_raw / torch.norm(x_raw, dim=-1, keepdim=True)
-     z = torch.cross(x, y_raw, dim=-1)
-     z = z / torch.norm(z, dim=-1, keepdim=True)
-
-     y = torch.cross(z, x, dim=-1)
-
-     x = x[..., None]
-     y = y[..., None]
-     z = z[..., None]
-
-     mat = torch.cat([x, y, z], dim=-1)
-     return mat
-
-
- def cont6d_to_matrix_np(cont6d):
-     q = torch.from_numpy(cont6d).contiguous().float()
-     return cont6d_to_matrix(q).numpy()
-
-
- def qpow(q0, t, dtype=torch.float):
-     '''
-     q0: tensor of quaternions
-     t: tensor of powers
-     '''
-     q0 = qnormalize(q0)
-     theta0 = torch.acos(q0[..., 0])
-
-     # if theta0 is close to zero, add epsilon to avoid NaNs
-     mask = (theta0 <= 10e-10) * (theta0 >= -10e-10)
-     theta0 = (1 - mask) * theta0 + mask * 10e-10
-     v0 = q0[..., 1:] / torch.sin(theta0).view(-1, 1)
-
-     if isinstance(t, torch.Tensor):
-         q = torch.zeros(t.shape + q0.shape)
-         theta = t.view(-1, 1) * theta0.view(1, -1)
-     else:  # if t is a number
-         q = torch.zeros(q0.shape)
-         theta = t * theta0
-
-     q[..., 0] = torch.cos(theta)
-     q[..., 1:] = v0 * torch.sin(theta).unsqueeze(-1)
-
-     return q.to(dtype)
-
-
- def qslerp(q0, q1, t):
-     '''
-     q0: starting quaternion
-     q1: ending quaternion
-     t: array of points along the way
-
-     Returns:
-         Tensor of Slerps: t.shape + q0.shape
-     '''
-     q0 = qnormalize(q0)
-     q1 = qnormalize(q1)
-     q_ = qpow(qmul(q1, qinv(q0)), t)
-
-     return qmul(q_,
-                 q0.contiguous().view(torch.Size([1] * len(t.shape)) + q0.shape).expand(t.shape + q0.shape).contiguous())
-
-
- def qbetween(v0, v1):
-     '''
-     Find the quaternion that rotates v0 to v1.
-     '''
-     assert v0.shape[-1] == 3, 'v0 must be of the shape (*, 3)'
-     assert v1.shape[-1] == 3, 'v1 must be of the shape (*, 3)'
-
-     v = torch.cross(v0, v1)
-     w = torch.sqrt((v0 ** 2).sum(dim=-1, keepdim=True) * (v1 ** 2).sum(dim=-1, keepdim=True)) + (v0 * v1).sum(dim=-1, keepdim=True)
-     return qnormalize(torch.cat([w, v], dim=-1))
-
-
- def qbetween_np(v0, v1):
-     '''
-     Find the quaternion that rotates v0 to v1.
-     '''
-     assert v0.shape[-1] == 3, 'v0 must be of the shape (*, 3)'
-     assert v1.shape[-1] == 3, 'v1 must be of the shape (*, 3)'
-
-     v0 = torch.from_numpy(v0).float()
-     v1 = torch.from_numpy(v1).float()
-     return qbetween(v0, v1).numpy()
-
-
- def lerp(p0, p1, t):
-     if not isinstance(t, torch.Tensor):
-         t = torch.Tensor([t])
-
-     new_shape = t.shape + p0.shape
-     new_view_t = t.shape + torch.Size([1] * len(p0.shape))
-     new_view_p = torch.Size([1] * len(t.shape)) + p0.shape
-     p0 = p0.view(new_view_p).expand(new_shape)
-     p1 = p1.view(new_view_p).expand(new_shape)
-     t = t.view(new_view_t).expand(new_shape)
-
-     return p0 + t * (p1 - p0)
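The `qrot` body above implements the standard "two cross products" rotation formula, v' = v + 2(w·(q⃗×v) + q⃗×(q⃗×v)). It can be sanity-checked with a small standalone NumPy sketch (the helper name `qrot_np_sketch` is hypothetical, not part of the deleted file):

```python
import numpy as np

def qrot_np_sketch(q, v):
    # Rotate vector v by unit quaternion q = (w, x, y, z),
    # mirroring the two-cross-product formula used in qrot above.
    w, qvec = q[0], q[1:]
    uv = np.cross(qvec, v)
    uuv = np.cross(qvec, uv)
    return v + 2 * (w * uv + uuv)

# A 90-degree rotation about the z axis should map the x axis onto the y axis.
half = np.pi / 4
q = np.array([np.cos(half), 0.0, 0.0, np.sin(half)])
v = np.array([1.0, 0.0, 0.0])
print(np.allclose(qrot_np_sketch(q, v), [0.0, 1.0, 0.0]))  # -> True
```

This uses a single quaternion and vector for clarity; the torch version above applies the same formula batched over the flattened leading dimensions.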
spaces/AIFILMS/generate_human_motion/pyrender/tests/unit/test_scenes.py DELETED
@@ -1,235 +0,0 @@
- import numpy as np
- import pytest
- import trimesh
-
- from pyrender import (Mesh, PerspectiveCamera, DirectionalLight,
-                       SpotLight, PointLight, Scene, Node, OrthographicCamera)
-
-
- def test_scenes():
-
-     # Basics
-     s = Scene()
-     assert np.allclose(s.bg_color, np.ones(4))
-     assert np.allclose(s.ambient_light, np.zeros(3))
-     assert len(s.nodes) == 0
-     assert s.name is None
-     s.name = 'asdf'
-     s.bg_color = None
-     s.ambient_light = None
-     assert np.allclose(s.bg_color, np.ones(4))
-     assert np.allclose(s.ambient_light, np.zeros(3))
-
-     assert s.nodes == set()
-     assert s.cameras == set()
-     assert s.lights == set()
-     assert s.point_lights == set()
-     assert s.spot_lights == set()
-     assert s.directional_lights == set()
-     assert s.meshes == set()
-     assert s.camera_nodes == set()
-     assert s.light_nodes == set()
-     assert s.point_light_nodes == set()
-     assert s.spot_light_nodes == set()
-     assert s.directional_light_nodes == set()
-     assert s.mesh_nodes == set()
-     assert s.main_camera_node is None
-     assert np.all(s.bounds == 0)
-     assert np.all(s.centroid == 0)
-     assert np.all(s.extents == 0)
-     assert np.all(s.scale == 0)
-
-     # From trimesh scene
-     tms = trimesh.load('tests/data/WaterBottle.glb')
-     s = Scene.from_trimesh_scene(tms)
-     assert len(s.meshes) == 1
-     assert len(s.mesh_nodes) == 1
-
-     # Test bg color formatting
-     s = Scene(bg_color=[0, 1.0, 0])
-     assert np.allclose(s.bg_color, np.array([0.0, 1.0, 0.0, 1.0]))
-
-     # Test constructor for nodes
-     n1 = Node()
-     n2 = Node()
-     n3 = Node()
-     nodes = [n1, n2, n3]
-     s = Scene(nodes=nodes)
-     n1.children.append(n2)
-     s = Scene(nodes=nodes)
-     n3.children.append(n2)
-     with pytest.raises(ValueError):
-         s = Scene(nodes=nodes)
-     n3.children = []
-     n2.children.append(n3)
-     n3.children.append(n2)
-     with pytest.raises(ValueError):
-         s = Scene(nodes=nodes)
-
-     # Test node accessors
-     n1 = Node()
-     n2 = Node()
-     n3 = Node()
-     nodes = [n1, n2]
-     s = Scene(nodes=nodes)
-     assert s.has_node(n1)
-     assert s.has_node(n2)
-     assert not s.has_node(n3)
-
-     # Test node poses
-     for n in nodes:
-         assert np.allclose(s.get_pose(n), np.eye(4))
-     with pytest.raises(ValueError):
-         s.get_pose(n3)
-     with pytest.raises(ValueError):
-         s.set_pose(n3, np.eye(4))
-     tf = np.eye(4)
-     tf[:3, 3] = np.ones(3)
-     s.set_pose(n1, tf)
-     assert np.allclose(s.get_pose(n1), tf)
-     assert np.allclose(s.get_pose(n2), np.eye(4))
-
-     nodes = [n1, n2, n3]
-     tf2 = np.eye(4)
-     tf2[:3, :3] = np.diag([-1, -1, 1])
-     n1.children.append(n2)
-     n1.matrix = tf
-     n2.matrix = tf2
-     s = Scene(nodes=nodes)
-     assert np.allclose(s.get_pose(n1), tf)
-     assert np.allclose(s.get_pose(n2), tf.dot(tf2))
-     assert np.allclose(s.get_pose(n3), np.eye(4))
-
-     n1 = Node()
-     n2 = Node()
-     n3 = Node()
-     n1.children.append(n2)
-     s = Scene()
-     s.add_node(n1)
-     with pytest.raises(ValueError):
-         s.add_node(n2)
-     s.set_pose(n1, tf)
-     assert np.allclose(s.get_pose(n1), tf)
-     assert np.allclose(s.get_pose(n2), tf)
-     s.set_pose(n2, tf2)
-     assert np.allclose(s.get_pose(n2), tf.dot(tf2))
-
-     # Test node removal
-     n1 = Node()
-     n2 = Node()
-     n3 = Node()
-     n1.children.append(n2)
-     n2.children.append(n3)
-     s = Scene(nodes=[n1, n2, n3])
-     s.remove_node(n2)
-     assert len(s.nodes) == 1
-     assert n1 in s.nodes
-     assert len(n1.children) == 0
-     assert len(n2.children) == 1
-     s.add_node(n2, parent_node=n1)
-     assert len(n1.children) == 1
-     n1.matrix = tf
-     n3.matrix = tf2
-     assert np.allclose(s.get_pose(n3), tf.dot(tf2))
-
-     # Now test ADD function
-     s = Scene()
-     m = Mesh([], name='m')
-     cp = PerspectiveCamera(yfov=2.0)
-     co = OrthographicCamera(xmag=1.0, ymag=1.0)
-     dl = DirectionalLight()
-     pl = PointLight()
-     sl = SpotLight()
-
-     n1 = s.add(m, name='mn')
-     assert n1.mesh == m
-     assert len(s.nodes) == 1
-     assert len(s.mesh_nodes) == 1
-     assert n1 in s.mesh_nodes
-     assert len(s.meshes) == 1
-     assert m in s.meshes
-     assert len(s.get_nodes(node=n2)) == 0
-     n2 = s.add(m, pose=tf)
-     assert len(s.nodes) == len(s.mesh_nodes) == 2
-     assert len(s.meshes) == 1
-     assert len(s.get_nodes(node=n1)) == 1
-     assert len(s.get_nodes(node=n1, name='mn')) == 1
-     assert len(s.get_nodes(name='mn')) == 1
-     assert len(s.get_nodes(obj=m)) == 2
-     assert len(s.get_nodes(obj=m, obj_name='m')) == 2
-     assert len(s.get_nodes(obj=co)) == 0
-     nsl = s.add(sl, name='sln')
-     npl = s.add(pl, parent_name='sln')
-     assert nsl.children[0] == npl
-     ndl = s.add(dl, parent_node=npl)
-     assert npl.children[0] == ndl
-     nco = s.add(co)
-     ncp = s.add(cp)
-
-     assert len(s.light_nodes) == len(s.lights) == 3
-     assert len(s.point_light_nodes) == len(s.point_lights) == 1
-     assert npl in s.point_light_nodes
-     assert len(s.spot_light_nodes) == len(s.spot_lights) == 1
-     assert nsl in s.spot_light_nodes
-     assert len(s.directional_light_nodes) == len(s.directional_lights) == 1
-     assert ndl in s.directional_light_nodes
-     assert len(s.cameras) == len(s.camera_nodes) == 2
-     assert s.main_camera_node == nco
-     s.main_camera_node = ncp
-     s.remove_node(ncp)
-     assert len(s.cameras) == len(s.camera_nodes) == 1
-     assert s.main_camera_node == nco
-     s.remove_node(n2)
-     assert len(s.meshes) == 1
-     s.remove_node(n1)
-     assert len(s.meshes) == 0
-     s.remove_node(nsl)
-     assert len(s.lights) == 0
-     s.remove_node(nco)
-     assert s.main_camera_node is None
-
-     s.add_node(n1)
-     s.clear()
-     assert len(s.nodes) == 0
-
-     # Trigger final errors
-     with pytest.raises(ValueError):
-         s.main_camera_node = None
-     with pytest.raises(ValueError):
-         s.main_camera_node = ncp
-     with pytest.raises(ValueError):
-         s.add(m, parent_node=n1)
-     with pytest.raises(ValueError):
-         s.add(m, name='asdf')
-     s.add(m, name='asdf')
-     s.add(m, parent_name='asdf')
-     with pytest.raises(ValueError):
-         s.add(m, parent_name='asfd')
-     with pytest.raises(TypeError):
-         s.add(None)
-
-     s.clear()
-     # Test bounds
-     m1 = Mesh.from_trimesh(trimesh.creation.box())
-     m2 = Mesh.from_trimesh(trimesh.creation.box())
-     m3 = Mesh.from_trimesh(trimesh.creation.box())
-     n1 = Node(mesh=m1)
-     n2 = Node(mesh=m2, translation=[1.0, 0.0, 0.0])
-     n3 = Node(mesh=m3, translation=[0.5, 0.0, 1.0])
-     s.add_node(n1)
-     s.add_node(n2)
-     s.add_node(n3)
-     assert np.allclose(s.bounds, [[-0.5, -0.5, -0.5], [1.5, 0.5, 1.5]])
-     s.clear()
-     s.add_node(n1)
-     s.add_node(n2, parent_node=n1)
-     s.add_node(n3, parent_node=n2)
-     assert np.allclose(s.bounds, [[-0.5, -0.5, -0.5], [2.0, 0.5, 1.5]])
-     tf = np.eye(4)
-     tf[:3, 3] = np.ones(3)
-     s.set_pose(n3, tf)
-     assert np.allclose(s.bounds, [[-0.5, -0.5, -0.5], [2.5, 1.5, 1.5]])
-     s.remove_node(n2)
-     assert np.allclose(s.bounds, [[-0.5, -0.5, -0.5], [0.5, 0.5, 0.5]])
-     s.clear()
-     assert np.allclose(s.bounds, 0.0)
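The pose assertions in the test above (e.g. `s.get_pose(n2) == tf.dot(tf2)`) rely on poses composing down the parent chain: a child's world pose is its parent's world pose times its own local matrix. A minimal standalone sketch of that composition rule (the `compose_pose` helper is hypothetical, not pyrender's API):

```python
import numpy as np

def compose_pose(parent_pose, local_matrix):
    # A child's world pose is the parent's world pose
    # multiplied by the child's local transform.
    return parent_pose.dot(local_matrix)

tf = np.eye(4)
tf[:3, 3] = np.ones(3)               # parent: translate by (1, 1, 1)
tf2 = np.eye(4)
tf2[:3, :3] = np.diag([-1, -1, 1])   # child: rotate 180 degrees about z

world = compose_pose(tf, tf2)
print(world[:3, 3])  # child inherits the parent's translation -> [1. 1. 1.]
```

This is the same product the test checks when `n2` is parented under `n1`.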
spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/run.py DELETED
@@ -1,15 +0,0 @@
- import importlib
- from utils.hparams import set_hparams, hparams
-
-
- def run_task():
-     assert hparams['task_cls'] != ''
-     pkg = ".".join(hparams["task_cls"].split(".")[:-1])
-     cls_name = hparams["task_cls"].split(".")[-1]
-     task_cls = getattr(importlib.import_module(pkg), cls_name)
-     task_cls.start()
-
-
- if __name__ == '__main__':
-     set_hparams()
-     run_task()
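`run_task` above resolves a dotted class path from the hyperparameters at runtime. The same pattern can be sketched in isolation; here it resolves a stdlib class rather than a task class (the `resolve` helper is illustrative, not part of the repository):

```python
import importlib

def resolve(dotted_path):
    # Split "pkg.module.ClassName" into a module path and an attribute
    # name, import the module, then fetch the attribute from it.
    pkg = ".".join(dotted_path.split(".")[:-1])
    cls_name = dotted_path.split(".")[-1]
    return getattr(importlib.import_module(pkg), cls_name)

cls = resolve("collections.OrderedDict")
print(cls.__name__)  # -> OrderedDict
```

This late binding is why the config can name any task class without run.py importing it directly.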
spaces/AIGC-Audio/AudioGPT/app.py DELETED
@@ -1,283 +0,0 @@
- from langchain.agents.initialize import initialize_agent
- from langchain.agents.tools import Tool
- from langchain.chains.conversation.memory import ConversationBufferMemory
- from langchain.llms.openai import OpenAI
- from audio_foundation_models import *
- import gradio as gr
-
- _DESCRIPTION = '# [AudioGPT](https://github.com/AIGC-Audio/AudioGPT)'
- _DESCRIPTION += '\n<p>This is a demo to the work <a href="https://github.com/AIGC-Audio/AudioGPT" style="text-decoration: underline;" target="_blank">AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head</a>. </p>'
- _DESCRIPTION += '\n<p>This model can only be used for non-commercial purposes.'
- if (SPACE_ID := os.getenv('SPACE_ID')) is not None:
-     _DESCRIPTION += f'\n<p>For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings. <a href="https://huggingface.co/spaces/{SPACE_ID}?duplicate=true"><img style="display: inline; margin-top: 0em; margin-bottom: 0em" src="https://bit.ly/3gLdBN6" alt="Duplicate Space" /></a></p>'
-
-
- AUDIO_CHATGPT_PREFIX = """AudioGPT
- AudioGPT can not directly read audios, but it has a list of tools to finish different speech, audio, and singing voice tasks. Each audio will have a file name formed as "audio/xxx.wav". When talking about audios, AudioGPT is very strict to the file name and will never fabricate nonexistent files.
- AudioGPT is able to use tools in a sequence, and is loyal to the tool observation outputs rather than faking the audio content and audio file name. It will remember to provide the file name from the last tool observation, if a new audio is generated.
- Human may provide new audios to AudioGPT with a description. The description helps AudioGPT to understand this audio, but AudioGPT should use tools to finish following tasks, rather than directly imagine from the description.
- Overall, AudioGPT is a powerful audio dialogue assistant tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics.
- TOOLS:
- ------
- AudioGPT has access to the following tools:"""
-
- AUDIO_CHATGPT_FORMAT_INSTRUCTIONS = """To use a tool, please use the following format:
- ```
- Thought: Do I need to use a tool? Yes
- Action: the action to take, should be one of [{tool_names}]
- Action Input: the input to the action
- Observation: the result of the action
- ```
- When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:
- ```
- Thought: Do I need to use a tool? No
- {ai_prefix}: [your response here]
- ```
- """
-
- AUDIO_CHATGPT_SUFFIX = """You are very strict to the filename correctness and will never fake a file name if not exists.
- You will remember to provide the audio file name loyally if it's provided in the last tool observation.
- Begin!
- Previous conversation history:
- {chat_history}
- New input: {input}
- Thought: Do I need to use a tool? {agent_scratchpad}"""
-
-
- def cut_dialogue_history(history_memory, keep_last_n_words=500):
-     tokens = history_memory.split()
-     n_tokens = len(tokens)
-     print(f"history_memory:{history_memory}, n_tokens: {n_tokens}")
-     if n_tokens < keep_last_n_words:
-         return history_memory
-     else:
-         paragraphs = history_memory.split('\n')
-         last_n_tokens = n_tokens
-         while last_n_tokens >= keep_last_n_words:
-             last_n_tokens = last_n_tokens - len(paragraphs[0].split(' '))
-             paragraphs = paragraphs[1:]
-         return '\n' + '\n'.join(paragraphs)
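`cut_dialogue_history` drops whole leading paragraphs until the buffer falls under the word budget, keeping the most recent context intact. A standalone sketch of the same trimming rule (the `trim_history` name is hypothetical; the deleted app.py uses `cut_dialogue_history`):

```python
def trim_history(history, keep_last_n_words=500):
    # Drop leading newline-separated paragraphs until the total word
    # count falls below the budget, keeping the most recent paragraphs.
    tokens = history.split()
    if len(tokens) < keep_last_n_words:
        return history
    paragraphs = history.split('\n')
    n = len(tokens)
    while n >= keep_last_n_words:
        n -= len(paragraphs[0].split(' '))
        paragraphs = paragraphs[1:]
    return '\n' + '\n'.join(paragraphs)

history = "one two three\nfour five\nsix seven eight"
print(repr(trim_history(history, keep_last_n_words=5)))  # -> '\nsix seven eight'
```

Trimming at paragraph granularity (rather than mid-sentence) keeps each surviving Human/AI exchange intact in the agent's memory buffer.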
-
-
- class ConversationBot:
-     def __init__(self, load_dict):
-         print("Initializing AudioGPT")
-         self.tools = []
-         self.memory = ConversationBufferMemory(memory_key="chat_history", output_key='output')
-         self.models = dict()
-         for class_name, device in load_dict.items():
-             self.models[class_name] = globals()[class_name](device=device)
-         for class_name, instance in self.models.items():
-             for e in dir(instance):
-                 if e.startswith('inference'):
-                     func = getattr(instance, e)
-                     self.tools.append(Tool(name=func.name, description=func.description, func=func))
-
-     def run_text(self, text, state):
-         print("===============Running run_text =============")
-         print("Inputs:", text, state)
-         print("======>Previous memory:\n %s" % self.agent.memory)
-         self.agent.memory.buffer = cut_dialogue_history(self.agent.memory.buffer, keep_last_n_words=500)
-         res = self.agent({"input": text})
-         if res['intermediate_steps'] == []:
-             print("======>Current memory:\n %s" % self.agent.memory)
-             response = res['output']
-             state = state + [(text, response)]
-             print("Outputs:", state)
-             return state, state, gr.Audio.update(visible=False), gr.Image.update(visible=False), gr.Button.update(visible=False)
-         else:
-             tool = res['intermediate_steps'][0][0].tool
-             if tool == "Generate Image From User Input Text":
-                 res['output'] = res['output'].replace("\\", "/")
-                 response = re.sub('(image/\S*png)', lambda m: f'![](/file={m.group(0)})*{m.group(0)}*', res['output'])
-                 state = state + [(text, response)]
-                 print(f"\nProcessed run_text, Input text: {text}\nCurrent state: {state}\n"
-                       f"Current Memory: {self.agent.memory.buffer}")
-                 return state, state, gr.Audio.update(visible=False), gr.Image.update(visible=False), gr.Button.update(visible=False)
-             elif tool == "Detect The Sound Event From The Audio":
-                 image_filename = res['intermediate_steps'][0][1]
-                 response = res['output'] + f"![](/file={image_filename})*{image_filename}*"
-                 state = state + [(text, response)]
-                 print(f"\nProcessed run_text, Input text: {text}\nCurrent state: {state}\n"
-                       f"Current Memory: {self.agent.memory.buffer}")
-                 return state, state, gr.Audio.update(visible=False), gr.Image.update(visible=False), gr.Button.update(visible=False)
-             elif tool == "Generate Text From The Audio" or tool == "Transcribe speech" or tool == "Target Sound Detection":
-                 print("======>Current memory:\n %s" % self.agent.memory)
-                 response = re.sub('(image/\S*png)', lambda m: f'![](/file={m.group(0)})*{m.group(0)}*', res['output'])
-                 image_filename = res['intermediate_steps'][0][1]
-                 state = state + [(text, response)]
-                 print("Outputs:", state)
-                 return state, state, gr.Audio.update(visible=False), gr.Image.update(visible=False), gr.Button.update(visible=False)
-             elif tool == "Audio Inpainting":
-                 audio_filename = res['intermediate_steps'][0][0].tool_input
-                 image_filename = res['intermediate_steps'][0][1]
-                 print("======>Current memory:\n %s" % self.agent.memory)
-                 print(res)
-                 response = res['output']
-                 state = state + [(text, response)]
-                 print("Outputs:", state)
-                 return state, state, gr.Audio.update(value=audio_filename, visible=True), gr.Image.update(value=image_filename, visible=True), gr.Button.update(visible=True)
-             print("======>Current memory:\n %s" % self.agent.memory)
-             response = re.sub('(image/\S*png)', lambda m: f'![](/file={m.group(0)})*{m.group(0)}*', res['output'])
-             audio_filename = res['intermediate_steps'][0][1]
-             state = state + [(text, response)]
-             print("Outputs:", state)
-             return state, state, gr.Audio.update(value=audio_filename, visible=True), gr.Image.update(visible=False), gr.Button.update(visible=False)
-
-     def run_image_or_audio(self, file, state, txt):
-         file_type = file.name[-3:]
-         if file_type == "wav":
-             print("===============Running run_audio =============")
-             print("Inputs:", file, state)
-             print("======>Previous memory:\n %s" % self.agent.memory)
-             audio_filename = os.path.join('audio', str(uuid.uuid4())[0:8] + ".wav")
-             audio_load = whisper.load_audio(file.name)
-             soundfile.write(audio_filename, audio_load, samplerate=16000)
-             description = self.models['A2T'].inference(audio_filename)
-             Human_prompt = "\nHuman: provide an audio named {}. The description is: {}. This information helps you to understand this audio, but you should use tools to finish following tasks, " \
-                            "rather than directly imagine from my description. If you understand, say \"Received\". \n".format(audio_filename, description)
-             AI_prompt = "Received. "
-             self.agent.memory.buffer = self.agent.memory.buffer + Human_prompt + 'AI: ' + AI_prompt
-             print("======>Current memory:\n %s" % self.agent.memory)
-             state = state + [(f"*{audio_filename}*", AI_prompt)]
-             print("Outputs:", state)
-             return state, state, txt + ' ' + audio_filename + ' ', gr.Audio.update(value=audio_filename, visible=True)
-         else:
-             image_filename = os.path.join('image', str(uuid.uuid4())[0:8] + ".png")
-             print("======>Auto Resize Image...")
-             img = Image.open(file.name)
-             width, height = img.size
-             ratio = min(512 / width, 512 / height)
-             width_new, height_new = (round(width * ratio), round(height * ratio))
-             width_new = int(np.round(width_new / 64.0)) * 64
-             height_new = int(np.round(height_new / 64.0)) * 64
-             img = img.resize((width_new, height_new))
-             img = img.convert('RGB')
-             img.save(image_filename, "PNG")
-             print(f"Resize image from {width}x{height} to {width_new}x{height_new}")
-             description = self.models['ImageCaptioning'].inference(image_filename)
-             Human_prompt = "\nHuman: provide an audio named {}. The description is: {}. This information helps you to understand this audio, but you should use tools to finish following tasks, " \
-                            "rather than directly imagine from my description. If you understand, say \"Received\". \n".format(image_filename, description)
-             AI_prompt = "Received. "
-             self.agent.memory.buffer = self.agent.memory.buffer + Human_prompt + 'AI: ' + AI_prompt
-             print("======>Current memory:\n %s" % self.agent.memory)
-             state = state + [(f"![](/file={image_filename})*{image_filename}*", AI_prompt)]
-             print(f"\nProcessed run_image, Input image: {image_filename}\nCurrent state: {state}\n"
-                   f"Current Memory: {self.agent.memory.buffer}")
-             return state, state, txt + ' ' + image_filename + ' ', gr.Audio.update(visible=False)
-
-     def inpainting(self, state, audio_filename, image_filename):
-         print("===============Running inpainting =============")
-         print("Inputs:", state)
-         print("======>Previous memory:\n %s" % self.agent.memory)
-         new_image_filename, new_audio_filename = self.models['Inpaint'].predict(audio_filename, image_filename)
-         AI_prompt = "Here are the predicted audio and the mel spectrum." + f"*{new_audio_filename}*" + f"![](/file={new_image_filename})*{new_image_filename}*"
-         self.agent.memory.buffer = self.agent.memory.buffer + 'AI: ' + AI_prompt
-         print("======>Current memory:\n %s" % self.agent.memory)
-         state = state + [("Audio Inpainting", AI_prompt)]
-         print("Outputs:", state)
-         return state, state, gr.Image.update(visible=False), gr.Audio.update(value=new_audio_filename, visible=True), gr.Button.update(visible=False)
-
-     def clear_audio(self):
-         return gr.Audio.update(value=None, visible=False)
-
-     def clear_image(self):
-         return gr.Image.update(value=None, visible=False)
-
-     def clear_button(self):
-         return gr.Button.update(visible=False)
-
-     def init_agent(self, openai_api_key):
-         self.llm = OpenAI(temperature=0, openai_api_key=openai_api_key)
-         self.agent = initialize_agent(
-             self.tools,
-             self.llm,
-             agent="conversational-react-description",
-             verbose=True,
-             memory=self.memory,
-             return_intermediate_steps=True,
-             agent_kwargs={'prefix': AUDIO_CHATGPT_PREFIX, 'format_instructions': AUDIO_CHATGPT_FORMAT_INSTRUCTIONS, 'suffix': AUDIO_CHATGPT_SUFFIX},
-         )
-         return gr.update(visible=True)
-
-
- if __name__ == '__main__':
-     bot = ConversationBot({'ImageCaptioning': 'cuda:0',
-                            'T2A': 'cuda:0',
-                            'I2A': 'cuda:0',
-                            'TTS': 'cpu',
-                            'T2S': 'cpu',
-                            'ASR': 'cuda:0',
-                            'A2T': 'cpu',
-                            'Inpaint': 'cuda:0',
-                            'SoundDetection': 'cpu',
-                            'Binaural': 'cuda:0',
-                            'SoundExtraction': 'cuda:0',
-                            'TargetSoundDetection': 'cuda:0',
-                            'Speech_Enh_SC': 'cuda:0',
-                            'Speech_SS': 'cuda:0'})
-     with gr.Blocks(css="#chatbot {overflow:auto; height:500px;}") as demo:
-         gr.Markdown(_DESCRIPTION)
-
-         with gr.Row():
-             openai_api_key_textbox = gr.Textbox(
-                 placeholder="Paste your OpenAI API key here to start AudioGPT(sk-...) and press Enter ↵️",
-                 show_label=False,
-                 lines=1,
-                 type="password",
-             )
-
-         chatbot = gr.Chatbot(elem_id="chatbot", label="AudioGPT")
-         state = gr.State([])
-         with gr.Row(visible=False) as input_raws:
-             with gr.Column(scale=0.7):
-                 txt = gr.Textbox(show_label=False, placeholder="Enter text and press enter, or upload an image").style(container=False)
-             with gr.Column(scale=0.1, min_width=0):
-                 run = gr.Button("🏃‍♂️Run")
-             with gr.Column(scale=0.1, min_width=0):
-                 clear = gr.Button("🔄Clear️")
-             with gr.Column(scale=0.1, min_width=0):
-                 btn = gr.UploadButton("🖼️/🎙️ Upload", file_types=["image", "audio"])
-         with gr.Row():
-             with gr.Column():
-                 outaudio = gr.Audio(visible=False)
-         with gr.Row():
-             with gr.Column():
-                 show_mel = gr.Image(type="filepath", tool='sketch', visible=False)
-         with gr.Row():
-             with gr.Column():
-                 run_button = gr.Button("Predict Masked Place", visible=False)
-         gr.Examples(
-             examples=["Generate a speech with text 'here we go'",
-                       "Transcribe this speech",
-                       "Transfer the mono speech to a binaural one",
-                       "Generate an audio of a dog barking",
-                       "Generate an audio of this uploaded image",
-                       "Give me the description of this audio",
-                       "I want to inpaint it",
-                       "What events does this audio include?",
-                       "When did the thunder happen in this audio?",
-                       "Extract the thunder event from this audio",
-                       "Generate a piece of singing voice. Text sequence is 小酒窝长睫毛AP是你最美的记号. Note sequence is C#4/Db4 | F#4/Gb4 | G#4/Ab4 | A#4/Bb4 F#4/Gb4 | F#4/Gb4 C#4/Db4 | C#4/Db4 | rest | C#4/Db4 | A#4/Bb4 | G#4/Ab4 | A#4/Bb4 | G#4/Ab4 | F4 | C#4/Db4. Note duration sequence is 0.407140 | 0.376190 | 0.242180 | 0.509550 0.183420 | 0.315400 0.235020 | 0.361660 | 0.223070 | 0.377270 | 0.340550 | 0.299620 | 0.344510 | 0.283770 | 0.323390 | 0.360340.",
265
- ],
266
- inputs=txt
267
- )
268
-
269
- openai_api_key_textbox.submit(bot.init_agent, [openai_api_key_textbox], [input_raws])
270
- txt.submit(bot.run_text, [txt, state], [chatbot, state, outaudio, show_mel, run_button])
271
- txt.submit(lambda: "", None, txt)
272
- run.click(bot.run_text, [txt, state], [chatbot, state, outaudio, show_mel, run_button])
273
- run.click(lambda: "", None, txt)
274
- btn.upload(bot.run_image_or_audio, [btn, state, txt], [chatbot, state, txt, outaudio])
275
- run_button.click(bot.inpainting, [state, outaudio, show_mel], [chatbot, state, show_mel, outaudio, run_button])
276
- clear.click(bot.memory.clear)
277
- clear.click(lambda: [], None, chatbot)
278
- clear.click(lambda: [], None, state)
279
- clear.click(lambda:None, None, txt)
280
- clear.click(bot.clear_button, None, run_button)
281
- clear.click(bot.clear_image, None, show_mel)
282
- clear.click(bot.clear_audio, None, outaudio)
283
- demo.launch(server_name="0.0.0.0", server_port=7860)
 
spaces/AIGE/A_B/app.py DELETED
@@ -1,56 +0,0 @@
- import os
- import time
- import py3Dmol
- import gradio as gr
-
-
- def display_pdb_by_pdb(pdb):
- # function to display pdb in py3dmol
-
- view = py3Dmol.view(width=500, height=500)
- view.addModel(pdb, "pdb")
- view.setStyle({'cartoon': {'color': 'spectrum'}})
- # view.setStyle({'model': -1}, {"cartoon": {'colorscheme':{'prop':'b','gradient':'roygb','min':0,'max':1}}})#'linear', 'min': 0, 'max': 1, 'colors': ["#ff9ef0","#a903fc",]}}})
- view.zoomTo()
- output = view._make_html().replace("'", '"')
- x = f"""<!DOCTYPE html><html></center> {output} </center></html>""" # do not use ' in this input
-
- return f"""<iframe height="500px" width="100%" name="result" allow="midi; geolocation; microphone; camera;
- display-capture; encrypted-media;" sandbox="allow-modals allow-forms
- allow-scripts allow-same-origin allow-popups
- allow-top-navigation-by-user-activation allow-downloads" allowfullscreen=""
- allowpaymentrequest="" frameborder="0" srcdoc='{x}'></iframe>"""
-
-
- def show_gif():
- path = 'output'
- pdb_files = sorted(os.listdir(path), key=lambda x: int(x.split('_')[1]))
- num = len(pdb_files)
- i = 0
- while True:
- if i >= num: break
- time.sleep(0.5)
- p = os.path.join(path, pdb_files[i])
- with open(p,'r') as f:
- f_pdb = f.readlines()
-
- i = (i + 1) % num
- yield display_pdb_by_pdb(''.join(f_pdb)), pdb_files[i].split('_')[1]
-
-
- if __name__ == "__main__":
- title = "Artificial Intelligence Generated Protein"
-
- css = "footer {visibility: hidden}"
-
- with gr.Blocks(title=title, css=css) as demo:
- output_viewer = gr.HTML()
- with gr.Row():
- gif = gr.HTML()
- it = gr.Textbox(label="Iteration")
- btn3 = gr.Button("Generate")
- btn3.click(show_gif, [], [gif, it])
-
- demo.queue()
- demo.launch(show_api=False, server_name="0.0.0.0")
- # demo.launch(show_api=False, share=True)
 
spaces/ASJMO/freegpt/g4f/typing.py DELETED
@@ -1,3 +0,0 @@
- from typing import Dict, NewType, Union, Optional, List, get_type_hints
-
- sha256 = NewType('sha_256_hash', str)
 
spaces/Abhilashvj/planogram-compliance/segment/predict.py DELETED
@@ -1,504 +0,0 @@
- # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
- """
- Run YOLOv5 segmentation inference on images, videos, directories, streams, etc.
-
- Usage - sources:
- $ python segment/predict.py --weights yolov5s-seg.pt --source 0 # webcam
- img.jpg # image
- vid.mp4 # video
- screen # screenshot
- path/ # directory
- list.txt # list of images
- list.streams # list of streams
- 'path/*.jpg' # glob
- 'https://youtu.be/Zgi9g1ksQHc' # YouTube
- 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
-
- Usage - formats:
- $ python segment/predict.py --weights yolov5s-seg.pt # PyTorch
- yolov5s-seg.torchscript # TorchScript
- yolov5s-seg.onnx # ONNX Runtime or OpenCV DNN with --dnn
- yolov5s-seg_openvino_model # OpenVINO
- yolov5s-seg.engine # TensorRT
- yolov5s-seg.mlmodel # CoreML (macOS-only)
- yolov5s-seg_saved_model # TensorFlow SavedModel
- yolov5s-seg.pb # TensorFlow GraphDef
- yolov5s-seg.tflite # TensorFlow Lite
- yolov5s-seg_edgetpu.tflite # TensorFlow Edge TPU
- yolov5s-seg_paddle_model # PaddlePaddle
- """
-
- import argparse
- import os
- import platform
- import sys
- from pathlib import Path
-
- import torch
-
- FILE = Path(__file__).resolve()
- ROOT = FILE.parents[1] # YOLOv5 root directory
- if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT)) # add ROOT to PATH
- ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
-
- from models.common import DetectMultiBackend
- from utils.dataloaders import (
- IMG_FORMATS,
- VID_FORMATS,
- LoadImages,
- LoadScreenshots,
- LoadStreams,
- )
- from utils.general import (
- LOGGER,
- Profile,
- check_file,
- check_img_size,
- check_imshow,
- check_requirements,
- colorstr,
- cv2,
- increment_path,
- non_max_suppression,
- print_args,
- scale_boxes,
- scale_segments,
- strip_optimizer,
- )
- from utils.plots import Annotator, colors, save_one_box
- from utils.segment.general import masks2segments, process_mask, process_mask_native
- from utils.torch_utils import select_device, smart_inference_mode
-
-
- @smart_inference_mode()
- def run(
- weights=ROOT / "yolov5s-seg.pt", # model.pt path(s)
- source=ROOT / "data/images", # file/dir/URL/glob/screen/0(webcam)
- data=ROOT / "data/coco128.yaml", # dataset.yaml path
- imgsz=(640, 640), # inference size (height, width)
- conf_thres=0.25, # confidence threshold
- iou_thres=0.45, # NMS IOU threshold
- max_det=1000, # maximum detections per image
- device="", # cuda device, i.e. 0 or 0,1,2,3 or cpu
- view_img=False, # show results
- save_txt=False, # save results to *.txt
- save_conf=False, # save confidences in --save-txt labels
- save_crop=False, # save cropped prediction boxes
- nosave=False, # do not save images/videos
- classes=None, # filter by class: --class 0, or --class 0 2 3
- agnostic_nms=False, # class-agnostic NMS
- augment=False, # augmented inference
- visualize=False, # visualize features
- update=False, # update all models
- project=ROOT / "runs/predict-seg", # save results to project/name
- name="exp", # save results to project/name
- exist_ok=False, # existing project/name ok, do not increment
- line_thickness=3, # bounding box thickness (pixels)
- hide_labels=False, # hide labels
- hide_conf=False, # hide confidences
- half=False, # use FP16 half-precision inference
- dnn=False, # use OpenCV DNN for ONNX inference
- vid_stride=1, # video frame-rate stride
- retina_masks=False,
- ):
- source = str(source)
- save_img = not nosave and not source.endswith(
- ".txt"
- ) # save inference images
- is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
- is_url = source.lower().startswith(
- ("rtsp://", "rtmp://", "http://", "https://")
- )
- webcam = (
- source.isnumeric()
- or source.endswith(".streams")
- or (is_url and not is_file)
- )
- screenshot = source.lower().startswith("screen")
- if is_url and is_file:
- source = check_file(source) # download
-
- # Directories
- save_dir = increment_path(
- Path(project) / name, exist_ok=exist_ok
- ) # increment run
- (save_dir / "labels" if save_txt else save_dir).mkdir(
- parents=True, exist_ok=True
- ) # make dir
-
- # Load model
- device = select_device(device)
- model = DetectMultiBackend(
- weights, device=device, dnn=dnn, data=data, fp16=half
- )
- stride, names, pt = model.stride, model.names, model.pt
- imgsz = check_img_size(imgsz, s=stride) # check image size
-
- # Dataloader
- bs = 1 # batch_size
- if webcam:
- view_img = check_imshow(warn=True)
- dataset = LoadStreams(
- source,
- img_size=imgsz,
- stride=stride,
- auto=pt,
- vid_stride=vid_stride,
- )
- bs = len(dataset)
- elif screenshot:
- dataset = LoadScreenshots(
- source, img_size=imgsz, stride=stride, auto=pt
- )
- else:
- dataset = LoadImages(
- source,
- img_size=imgsz,
- stride=stride,
- auto=pt,
- vid_stride=vid_stride,
- )
- vid_path, vid_writer = [None] * bs, [None] * bs
-
- # Run inference
- model.warmup(imgsz=(1 if pt else bs, 3, *imgsz)) # warmup
- seen, windows, dt = 0, [], (Profile(), Profile(), Profile())
- for path, im, im0s, vid_cap, s in dataset:
- with dt[0]:
- im = torch.from_numpy(im).to(model.device)
- im = im.half() if model.fp16 else im.float() # uint8 to fp16/32
- im /= 255 # 0 - 255 to 0.0 - 1.0
- if len(im.shape) == 3:
- im = im[None] # expand for batch dim
-
- # Inference
- with dt[1]:
- visualize = (
- increment_path(save_dir / Path(path).stem, mkdir=True)
- if visualize
- else False
- )
- pred, proto = model(im, augment=augment, visualize=visualize)[:2]
-
- # NMS
- with dt[2]:
- pred = non_max_suppression(
- pred,
- conf_thres,
- iou_thres,
- classes,
- agnostic_nms,
- max_det=max_det,
- nm=32,
- )
-
- # Second-stage classifier (optional)
- # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s)
-
- # Process predictions
- for i, det in enumerate(pred): # per image
- seen += 1
- if webcam: # batch_size >= 1
- p, im0, frame = path[i], im0s[i].copy(), dataset.count
- s += f"{i}: "
- else:
- p, im0, frame = path, im0s.copy(), getattr(dataset, "frame", 0)
-
- p = Path(p) # to Path
- save_path = str(save_dir / p.name) # im.jpg
- txt_path = str(save_dir / "labels" / p.stem) + (
- "" if dataset.mode == "image" else f"_{frame}"
- ) # im.txt
- s += "%gx%g " % im.shape[2:] # print string
- imc = im0.copy() if save_crop else im0 # for save_crop
- annotator = Annotator(
- im0, line_width=line_thickness, example=str(names)
- )
- if len(det):
- if retina_masks:
- # scale bbox first the crop masks
- det[:, :4] = scale_boxes(
- im.shape[2:], det[:, :4], im0.shape
- ).round() # rescale boxes to im0 size
- masks = process_mask_native(
- proto[i], det[:, 6:], det[:, :4], im0.shape[:2]
- ) # HWC
- else:
- masks = process_mask(
- proto[i],
- det[:, 6:],
- det[:, :4],
- im.shape[2:],
- upsample=True,
- ) # HWC
- det[:, :4] = scale_boxes(
- im.shape[2:], det[:, :4], im0.shape
- ).round() # rescale boxes to im0 size
-
- # Segments
- if save_txt:
- segments = [
- scale_segments(
- im0.shape if retina_masks else im.shape[2:],
- x,
- im0.shape,
- normalize=True,
- )
- for x in reversed(masks2segments(masks))
- ]
-
- # Print results
- for c in det[:, 5].unique():
- n = (det[:, 5] == c).sum() # detections per class
- s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string
-
- # Mask plotting
- annotator.masks(
- masks,
- colors=[colors(x, True) for x in det[:, 5]],
- im_gpu=torch.as_tensor(im0, dtype=torch.float16)
- .to(device)
- .permute(2, 0, 1)
- .flip(0)
- .contiguous()
- / 255
- if retina_masks
- else im[i],
- )
-
- # Write results
- for j, (*xyxy, conf, cls) in enumerate(reversed(det[:, :6])):
- if save_txt: # Write to file
- seg = segments[j].reshape(-1) # (n,2) to (n*2)
- line = (
- (cls, *seg, conf) if save_conf else (cls, *seg)
- ) # label format
- with open(f"{txt_path}.txt", "a") as f:
- f.write(("%g " * len(line)).rstrip() % line + "\n")
-
- if save_img or save_crop or view_img: # Add bbox to image
- c = int(cls) # integer class
- label = (
- None
- if hide_labels
- else (
- names[c]
- if hide_conf
- else f"{names[c]} {conf:.2f}"
- )
- )
- annotator.box_label(xyxy, label, color=colors(c, True))
- # annotator.draw.polygon(segments[j], outline=colors(c, True), width=3)
- if save_crop:
- save_one_box(
- xyxy,
- imc,
- file=save_dir
- / "crops"
- / names[c]
- / f"{p.stem}.jpg",
- BGR=True,
- )
-
- # Stream results
- im0 = annotator.result()
- if view_img:
- if platform.system() == "Linux" and p not in windows:
- windows.append(p)
- cv2.namedWindow(
- str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO
- ) # allow window resize (Linux)
- cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0])
- cv2.imshow(str(p), im0)
- if cv2.waitKey(1) == ord("q"): # 1 millisecond
- exit()
-
- # Save results (image with detections)
- if save_img:
- if dataset.mode == "image":
- cv2.imwrite(save_path, im0)
- else: # 'video' or 'stream'
- if vid_path[i] != save_path: # new video
- vid_path[i] = save_path
- if isinstance(vid_writer[i], cv2.VideoWriter):
- vid_writer[
- i
- ].release() # release previous video writer
- if vid_cap: # video
- fps = vid_cap.get(cv2.CAP_PROP_FPS)
- w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
- h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
- else: # stream
- fps, w, h = 30, im0.shape[1], im0.shape[0]
- save_path = str(
- Path(save_path).with_suffix(".mp4")
- ) # force *.mp4 suffix on results videos
- vid_writer[i] = cv2.VideoWriter(
- save_path,
- cv2.VideoWriter_fourcc(*"mp4v"),
- fps,
- (w, h),
- )
- vid_writer[i].write(im0)
-
- # Print time (inference-only)
- LOGGER.info(
- f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms"
- )
-
- # Print results
- t = tuple(x.t / seen * 1e3 for x in dt) # speeds per image
- LOGGER.info(
- f"Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}"
- % t
- )
- if save_txt or save_img:
- s = (
- f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}"
- if save_txt
- else ""
- )
- LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}")
- if update:
- strip_optimizer(
- weights[0]
- ) # update model (to fix SourceChangeWarning)
-
-
- def parse_opt():
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--weights",
- nargs="+",
- type=str,
- default=ROOT / "yolov5s-seg.pt",
- help="model path(s)",
- )
- parser.add_argument(
- "--source",
- type=str,
- default=ROOT / "data/images",
- help="file/dir/URL/glob/screen/0(webcam)",
- )
- parser.add_argument(
- "--data",
- type=str,
- default=ROOT / "data/coco128.yaml",
- help="(optional) dataset.yaml path",
- )
- parser.add_argument(
- "--imgsz",
- "--img",
- "--img-size",
- nargs="+",
- type=int,
- default=[640],
- help="inference size h,w",
- )
- parser.add_argument(
- "--conf-thres", type=float, default=0.25, help="confidence threshold"
- )
- parser.add_argument(
- "--iou-thres", type=float, default=0.45, help="NMS IoU threshold"
- )
- parser.add_argument(
- "--max-det",
- type=int,
- default=1000,
- help="maximum detections per image",
- )
- parser.add_argument(
- "--device", default="", help="cuda device, i.e. 0 or 0,1,2,3 or cpu"
- )
- parser.add_argument("--view-img", action="store_true", help="show results")
- parser.add_argument(
- "--save-txt", action="store_true", help="save results to *.txt"
- )
- parser.add_argument(
- "--save-conf",
- action="store_true",
- help="save confidences in --save-txt labels",
- )
- parser.add_argument(
- "--save-crop",
- action="store_true",
- help="save cropped prediction boxes",
- )
- parser.add_argument(
- "--nosave", action="store_true", help="do not save images/videos"
- )
- parser.add_argument(
- "--classes",
- nargs="+",
- type=int,
- help="filter by class: --classes 0, or --classes 0 2 3",
- )
- parser.add_argument(
- "--agnostic-nms", action="store_true", help="class-agnostic NMS"
- )
- parser.add_argument(
- "--augment", action="store_true", help="augmented inference"
- )
- parser.add_argument(
- "--visualize", action="store_true", help="visualize features"
- )
- parser.add_argument(
- "--update", action="store_true", help="update all models"
- )
- parser.add_argument(
- "--project",
- default=ROOT / "runs/predict-seg",
- help="save results to project/name",
- )
- parser.add_argument(
- "--name", default="exp", help="save results to project/name"
- )
- parser.add_argument(
- "--exist-ok",
- action="store_true",
- help="existing project/name ok, do not increment",
- )
- parser.add_argument(
- "--line-thickness",
- default=3,
- type=int,
- help="bounding box thickness (pixels)",
- )
- parser.add_argument(
- "--hide-labels", default=False, action="store_true", help="hide labels"
- )
- parser.add_argument(
- "--hide-conf",
- default=False,
- action="store_true",
- help="hide confidences",
- )
- parser.add_argument(
- "--half", action="store_true", help="use FP16 half-precision inference"
- )
- parser.add_argument(
- "--dnn", action="store_true", help="use OpenCV DNN for ONNX inference"
- )
- parser.add_argument(
- "--vid-stride", type=int, default=1, help="video frame-rate stride"
- )
- parser.add_argument(
- "--retina-masks",
- action="store_true",
- help="whether to plot masks in native resolution",
- )
- opt = parser.parse_args()
- opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand
- print_args(vars(opt))
- return opt
-
-
- def main(opt):
- check_requirements(exclude=("tensorboard", "thop"))
- run(**vars(opt))
-
-
- if __name__ == "__main__":
- opt = parse_opt()
- main(opt)
 
spaces/AchyuthGamer/OpenGPT/client/css/sidebar.css DELETED
@@ -1,197 +0,0 @@
- .sidebar {
- max-width: 260px;
- padding: var(--section-gap);
- flex-shrink: 0;
- display: flex;
- flex-direction: column;
- justify-content: space-between;
- }
-
- .sidebar .title {
- font-size: 14px;
- font-weight: 500;
- }
-
- .sidebar .conversation-sidebar {
- padding: 8px 12px;
- display: flex;
- gap: 18px;
- align-items: center;
- user-select: none;
- justify-content: space-between;
- }
-
- .sidebar .conversation-sidebar .left {
- cursor: pointer;
- display: flex;
- align-items: center;
- gap: 10px;
- }
-
- .sidebar i {
- color: var(--conversations);
- cursor: pointer;
- }
-
- .sidebar .top {
- display: flex;
- flex-direction: column;
- overflow: hidden;
- gap: 16px;
- padding-right: 8px;
- }
-
- .sidebar .top:hover {
- overflow: auto;
- }
-
- .sidebar .info {
- padding: 8px 12px 0px 12px;
- display: flex;
- align-items: center;
- justify-content: center;
- user-select: none;
- background: transparent;
- width: 100%;
- border: none;
- text-decoration: none;
- }
-
- .sidebar .info span {
- color: var(--conversations);
- line-height: 1.5;
- font-size: 0.75rem;
- }
-
- .sidebar .info i::before {
- margin-right: 8px;
- }
-
- .sidebar-footer {
- width: 100%;
- margin-top: 16px;
- display: flex;
- flex-direction: column;
- }
-
- .sidebar-footer button {
- cursor: pointer;
- user-select: none;
- background: transparent;
- }
-
- .sidebar.shown {
- position: fixed;
- top: 0;
- left: 0;
- width: 100%;
- height: 100%;
- z-index: 1000;
- }
-
- .sidebar.shown .box {
- background-color: #16171a;
- width: 80%;
- height: 100%;
- overflow-y: auto;
- }
-
- @keyframes spinner {
- to {
- transform: rotate(360deg);
- }
- }
-
- /* scrollbar */
- .sidebar .top::-webkit-scrollbar {
- width: 4px;
- padding: 8px 0px;
- }
-
- .sidebar .top::-webkit-scrollbar-track {
- background-color: #ffffff00;
- }
-
- .sidebar .top::-webkit-scrollbar-thumb {
- background-color: #555555;
- border-radius: 10px;
- }
-
- .spinner:before {
- content: "";
- box-sizing: border-box;
- position: absolute;
- top: 50%;
- left: 45%;
- width: 20px;
- height: 20px;
- border-radius: 50%;
- border: 1px solid var(--conversations);
- border-top-color: white;
- animation: spinner 0.6s linear infinite;
- }
-
- .menu-button {
- display: none !important;
- position: absolute;
- z-index: 100000;
- top: 0;
- left: 0;
- margin: 10px;
- font-size: 1rem;
- cursor: pointer;
- width: 30px;
- height: 30px;
- justify-content: center;
- align-items: center;
- transition: 0.33s;
- }
-
- .menu-button i {
- transition: 0.33s;
- }
-
- .rotated {
- transform: rotate(360deg);
- }
-
- .menu-button.rotated {
- position: fixed;
- top: 10px;
- left: 10px;
- z-index: 1001;
- }
-
- @media screen and (max-width: 990px) {
- .sidebar {
- display: none;
- width: 100%;
- max-width: none;
- }
-
- .menu-button {
- display: flex !important;
- }
- }
-
- @media (max-width: 990px) {
- .sidebar .top {
- padding-top: 48px;
- }
- }
-
- @media (min-width: 768px) {
- .sidebar.shown {
- position: static;
- width: auto;
- height: auto;
- background-color: transparent;
- }
-
- .sidebar.shown .box {
- background-color: #16171a;
- width: auto;
- height: auto;
- overflow-y: auto;
- }
- }
 
spaces/AdWeeb/SuMmeet/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: SuMmeet
- emoji: 🏢
- colorFrom: pink
- colorTo: green
- sdk: streamlit
- sdk_version: 1.2.0
- app_file: app.py
- pinned: false
- license: cc-by-4.0
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/GetChildHeight.js DELETED
@@ -1,17 +0,0 @@
- import { GetDisplayHeight } from '../../../plugins/utils/size/GetDisplaySize.js';
-
- var GetChildHeight = function (child) {
- var childHeight;
- if (child.isRexSizer) { // Sizer game object
- childHeight = Math.max(child.minHeight, child.childrenHeight);
- } else { // Normal game object
- if (child.minHeight !== undefined) { // Force minHeight
- childHeight = child.minHeight;
- } else {
- childHeight = GetDisplayHeight(child);
- }
- }
- return childHeight;
- }
-
- export default GetChildHeight;
 
spaces/AirtistDesign/stablediffusionapi-rev-animated/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: Stablediffusionapi Rev Animated
- emoji: 👁
- colorFrom: green
- colorTo: red
- sdk: gradio
- sdk_version: 3.35.2
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/onnx_ijbc.py DELETED
@@ -1,267 +0,0 @@
- import argparse
- import os
- import pickle
- import timeit
-
- import cv2
- import mxnet as mx
- import numpy as np
- import pandas as pd
- import prettytable
- import skimage.transform
- from sklearn.metrics import roc_curve
- from sklearn.preprocessing import normalize
-
- from onnx_helper import ArcFaceORT
-
- SRC = np.array(
- [
- [30.2946, 51.6963],
- [65.5318, 51.5014],
- [48.0252, 71.7366],
- [33.5493, 92.3655],
- [62.7299, 92.2041]]
- , dtype=np.float32)
- SRC[:, 0] += 8.0
-
-
- class AlignedDataSet(mx.gluon.data.Dataset):
- def __init__(self, root, lines, align=True):
- self.lines = lines
- self.root = root
- self.align = align
-
- def __len__(self):
- return len(self.lines)
-
- def __getitem__(self, idx):
- each_line = self.lines[idx]
- name_lmk_score = each_line.strip().split(' ')
- name = os.path.join(self.root, name_lmk_score[0])
- img = cv2.cvtColor(cv2.imread(name), cv2.COLOR_BGR2RGB)
- landmark5 = np.array([float(x) for x in name_lmk_score[1:-1]], dtype=np.float32).reshape((5, 2))
- st = skimage.transform.SimilarityTransform()
- st.estimate(landmark5, SRC)
- img = cv2.warpAffine(img, st.params[0:2, :], (112, 112), borderValue=0.0)
- img_1 = np.expand_dims(img, 0)
- img_2 = np.expand_dims(np.fliplr(img), 0)
- output = np.concatenate((img_1, img_2), axis=0).astype(np.float32)
- output = np.transpose(output, (0, 3, 1, 2))
- output = mx.nd.array(output)
- return output
-
-
- def extract(model_root, dataset):
- model = ArcFaceORT(model_path=model_root)
- model.check()
- feat_mat = np.zeros(shape=(len(dataset), 2 * model.feat_dim))
-
- def batchify_fn(data):
- return mx.nd.concat(*data, dim=0)
-
- data_loader = mx.gluon.data.DataLoader(
- dataset, 128, last_batch='keep', num_workers=4,
- thread_pool=True, prefetch=16, batchify_fn=batchify_fn)
- num_iter = 0
- for batch in data_loader:
- batch = batch.asnumpy()
- batch = (batch - model.input_mean) / model.input_std
- feat = model.session.run(model.output_names, {model.input_name: batch})[0]
- feat = np.reshape(feat, (-1, model.feat_dim * 2))
- feat_mat[128 * num_iter: 128 * num_iter + feat.shape[0], :] = feat
- num_iter += 1
- if num_iter % 50 == 0:
- print(num_iter)
- return feat_mat
-
-
- def read_template_media_list(path):
- ijb_meta = pd.read_csv(path, sep=' ', header=None).values
- templates = ijb_meta[:, 1].astype(np.int)
- medias = ijb_meta[:, 2].astype(np.int)
- return templates, medias
-
-
- def read_template_pair_list(path):
- pairs = pd.read_csv(path, sep=' ', header=None).values
- t1 = pairs[:, 0].astype(np.int)
- t2 = pairs[:, 1].astype(np.int)
- label = pairs[:, 2].astype(np.int)
- return t1, t2, label
-
-
- def read_image_feature(path):
- with open(path, 'rb') as fid:
- img_feats = pickle.load(fid)
- return img_feats
-
-
- def image2template_feature(img_feats=None,
- templates=None,
- medias=None):
- unique_templates = np.unique(templates)
- template_feats = np.zeros((len(unique_templates), img_feats.shape[1]))
- for count_template, uqt in enumerate(unique_templates):
- (ind_t,) = np.where(templates == uqt)
- face_norm_feats = img_feats[ind_t]
- face_medias = medias[ind_t]
- unique_medias, unique_media_counts = np.unique(face_medias, return_counts=True)
- media_norm_feats = []
- for u, ct in zip(unique_medias, unique_media_counts):
- (ind_m,) = np.where(face_medias == u)
- if ct == 1:
- media_norm_feats += [face_norm_feats[ind_m]]
- else: # image features from the same video will be aggregated into one feature
- media_norm_feats += [np.mean(face_norm_feats[ind_m], axis=0, keepdims=True), ]
- media_norm_feats = np.array(media_norm_feats)
- template_feats[count_template] = np.sum(media_norm_feats, axis=0)
- if count_template % 2000 == 0:
- print('Finish Calculating {} template features.'.format(
- count_template))
- template_norm_feats = normalize(template_feats)
- return template_norm_feats, unique_templates
-
-
- def verification(template_norm_feats=None,
- unique_templates=None,
- p1=None,
- p2=None):
- template2id = np.zeros((max(unique_templates) + 1, 1), dtype=int)
- for count_template, uqt in enumerate(unique_templates):
- template2id[uqt] = count_template
- score = np.zeros((len(p1),))
- total_pairs = np.array(range(len(p1)))
- batchsize = 100000
- sublists = [total_pairs[i: i + batchsize] for i in range(0, len(p1), batchsize)]
- total_sublists = len(sublists)
- for c, s in enumerate(sublists):
- feat1 = template_norm_feats[template2id[p1[s]]]
- feat2 = template_norm_feats[template2id[p2[s]]]
- similarity_score = np.sum(feat1 * feat2, -1)
- score[s] = similarity_score.flatten()
- if c % 10 == 0:
- print('Finish {}/{} pairs.'.format(c, total_sublists))
- return score
-
-
- def verification2(template_norm_feats=None,
- unique_templates=None,
- p1=None,
- p2=None):
- template2id = np.zeros((max(unique_templates) + 1, 1), dtype=int)
- for count_template, uqt in enumerate(unique_templates):
- template2id[uqt] = count_template
- score = np.zeros((len(p1),)) # save cosine distance between pairs
- total_pairs = np.array(range(len(p1)))
- batchsize = 100000 # small batchsize instead of all pairs in one batch due to the memory limiation
- sublists = [total_pairs[i:i + batchsize] for i in range(0, len(p1), batchsize)]
- total_sublists = len(sublists)
159
- for c, s in enumerate(sublists):
160
- feat1 = template_norm_feats[template2id[p1[s]]]
161
- feat2 = template_norm_feats[template2id[p2[s]]]
162
- similarity_score = np.sum(feat1 * feat2, -1)
163
- score[s] = similarity_score.flatten()
164
- if c % 10 == 0:
165
- print('Finish {}/{} pairs.'.format(c, total_sublists))
166
- return score
167
-
168
-
169
- def main(args):
170
- use_norm_score = True # if Ture, TestMode(N1)
171
- use_detector_score = True # if Ture, TestMode(D1)
172
- use_flip_test = True # if Ture, TestMode(F1)
173
- assert args.target == 'IJBC' or args.target == 'IJBB'
174
-
175
- start = timeit.default_timer()
176
- templates, medias = read_template_media_list(
177
- os.path.join('%s/meta' % args.image_path, '%s_face_tid_mid.txt' % args.target.lower()))
178
- stop = timeit.default_timer()
179
- print('Time: %.2f s. ' % (stop - start))
180
-
181
- start = timeit.default_timer()
182
- p1, p2, label = read_template_pair_list(
183
- os.path.join('%s/meta' % args.image_path,
184
- '%s_template_pair_label.txt' % args.target.lower()))
185
- stop = timeit.default_timer()
186
- print('Time: %.2f s. ' % (stop - start))
187
-
188
- start = timeit.default_timer()
189
- img_path = '%s/loose_crop' % args.image_path
190
- img_list_path = '%s/meta/%s_name_5pts_score.txt' % (args.image_path, args.target.lower())
191
- img_list = open(img_list_path)
192
- files = img_list.readlines()
193
- dataset = AlignedDataSet(root=img_path, lines=files, align=True)
194
- img_feats = extract(args.model_root, dataset)
195
-
196
- faceness_scores = []
197
- for each_line in files:
198
- name_lmk_score = each_line.split()
199
- faceness_scores.append(name_lmk_score[-1])
200
- faceness_scores = np.array(faceness_scores).astype(np.float32)
201
- stop = timeit.default_timer()
202
- print('Time: %.2f s. ' % (stop - start))
203
- print('Feature Shape: ({} , {}) .'.format(img_feats.shape[0], img_feats.shape[1]))
204
- start = timeit.default_timer()
205
-
206
- if use_flip_test:
207
- img_input_feats = img_feats[:, 0:img_feats.shape[1] // 2] + img_feats[:, img_feats.shape[1] // 2:]
208
- else:
209
- img_input_feats = img_feats[:, 0:img_feats.shape[1] // 2]
210
-
211
- if use_norm_score:
212
- img_input_feats = img_input_feats
213
- else:
214
- img_input_feats = img_input_feats / np.sqrt(np.sum(img_input_feats ** 2, -1, keepdims=True))
215
-
216
- if use_detector_score:
217
- print(img_input_feats.shape, faceness_scores.shape)
218
- img_input_feats = img_input_feats * faceness_scores[:, np.newaxis]
219
- else:
220
- img_input_feats = img_input_feats
221
-
222
- template_norm_feats, unique_templates = image2template_feature(
223
- img_input_feats, templates, medias)
224
- stop = timeit.default_timer()
225
- print('Time: %.2f s. ' % (stop - start))
226
-
227
- start = timeit.default_timer()
228
- score = verification(template_norm_feats, unique_templates, p1, p2)
229
- stop = timeit.default_timer()
230
- print('Time: %.2f s. ' % (stop - start))
231
- save_path = os.path.join(args.result_dir, "{}_result".format(args.target))
232
- if not os.path.exists(save_path):
233
- os.makedirs(save_path)
234
- score_save_file = os.path.join(save_path, "{}.npy".format(args.model_root))
235
- np.save(score_save_file, score)
236
- files = [score_save_file]
237
- methods = []
238
- scores = []
239
- for file in files:
240
- methods.append(os.path.basename(file))
241
- scores.append(np.load(file))
242
- methods = np.array(methods)
243
- scores = dict(zip(methods, scores))
244
- x_labels = [10 ** -6, 10 ** -5, 10 ** -4, 10 ** -3, 10 ** -2, 10 ** -1]
245
- tpr_fpr_table = prettytable.PrettyTable(['Methods'] + [str(x) for x in x_labels])
246
- for method in methods:
247
- fpr, tpr, _ = roc_curve(label, scores[method])
248
- fpr = np.flipud(fpr)
249
- tpr = np.flipud(tpr)
250
- tpr_fpr_row = []
251
- tpr_fpr_row.append("%s-%s" % (method, args.target))
252
- for fpr_iter in np.arange(len(x_labels)):
253
- _, min_index = min(
254
- list(zip(abs(fpr - x_labels[fpr_iter]), range(len(fpr)))))
255
- tpr_fpr_row.append('%.2f' % (tpr[min_index] * 100))
256
- tpr_fpr_table.add_row(tpr_fpr_row)
257
- print(tpr_fpr_table)
258
-
259
-
260
- if __name__ == '__main__':
261
- parser = argparse.ArgumentParser(description='do ijb test')
262
- # general
263
- parser.add_argument('--model-root', default='', help='path to load model.')
264
- parser.add_argument('--image-path', default='', type=str, help='')
265
- parser.add_argument('--result-dir', default='.', type=str, help='')
266
- parser.add_argument('--target', default='IJBC', type=str, help='target, set to IJBC or IJBB')
267
- main(parser.parse_args())
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/diffedit.md DELETED
@@ -1,348 +0,0 @@
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
-
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
- specific language governing permissions and limitations under the License.
- -->
-
- # DiffEdit
-
- [DiffEdit: Diffusion-based semantic image editing with mask guidance](https://huggingface.co/papers/2210.11427) is by Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord.
-
- The abstract from the paper is:
-
- *Image generation has recently seen tremendous advances, with diffusion models allowing to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method to take advantage of text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require to provide a mask, making the task much easier by treating it as a conditional inpainting task. In contrast, our main contribution is able to automatically generate a mask highlighting regions of the input image that need to be edited, by contrasting predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest and show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-based generated images.*
-
- The original codebase can be found at [Xiang-cd/DiffEdit-stable-diffusion](https://github.com/Xiang-cd/DiffEdit-stable-diffusion), and you can try it out in this [demo](https://blog.problemsolversguild.com/technical/research/2022/11/02/DiffEdit-Implementation.html).
-
- This pipeline was contributed by [clarencechen](https://github.com/clarencechen). ❤️
-
- ## Tips
-
- * The pipeline can generate masks that can be fed into other inpainting pipelines. Check out the code examples below to learn more.
- * In order to generate an image using this pipeline, both an image mask (manually specified or generated using `generate_mask`)
- and a set of partially inverted latents (generated using `invert`) _must_ be provided as arguments when calling the pipeline to generate the final edited image.
- Refer to the code examples below for more details.
- * The function `generate_mask` exposes two prompt arguments, `source_prompt` and `target_prompt`,
- that let you control the locations of the semantic edits in the final image to be generated. Let's say
- you wanted to translate from "cat" to "dog". In this case, the edit direction is "cat -> dog". To reflect
- this in the generated mask, you simply have to set the embeddings related to the phrases including "cat" to
- `source_prompt_embeds` and "dog" to `target_prompt_embeds`. Refer to the code example below for more details.
- * When generating partially inverted latents using `invert`, assign a caption or text embedding describing the
- overall image to the `prompt` argument to help guide the inverse latent sampling process. In most cases, the
- source concept is sufficiently descriptive to yield good results, but feel free to explore alternatives.
- Please refer to [this code example](#generating-image-captions-for-inversion) for more details.
- * When calling the pipeline to generate the final edited image, assign the source concept to `negative_prompt`
- and the target concept to `prompt`. Taking the above example, you simply have to set the embeddings related to
- the phrases including "cat" to `negative_prompt_embeds` and "dog" to `prompt_embeds`. Refer to the code example
- below for more details.
- * If you wanted to reverse the direction in the example above, i.e., "dog -> cat", then it's recommended to:
-   * Swap the `source_prompt` and `target_prompt` in the arguments to `generate_mask`.
-   * Change the input prompt for `invert` to include "dog".
-   * Swap the `prompt` and `negative_prompt` in the arguments to call the pipeline to generate the final edited image.
- * Note that the source and target prompts, or their corresponding embeddings, can also be automatically generated. Please refer to [this discussion](#generating-source-and-target-embeddings) for more details.
-
- ## Usage example
-
- ### Based on an input image with a caption
-
- When the pipeline is conditioned on an input image, we first obtain partially inverted latents from the input image using a
- `DDIMInverseScheduler` with the help of a caption. Then we generate an editing mask to identify relevant regions in the image using the source and target prompts. Finally,
- the inverted noise and generated mask are used to start the generation process.
-
- First, let's load our pipeline:
-
- ```py
- import torch
- from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionDiffEditPipeline
-
- sd_model_ckpt = "stabilityai/stable-diffusion-2-1"
- pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
-     sd_model_ckpt,
-     torch_dtype=torch.float16,
-     safety_checker=None,
- )
- pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
- pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
- pipeline.enable_model_cpu_offload()
- pipeline.enable_vae_slicing()
- generator = torch.manual_seed(0)
- ```
-
- Then, we load an input image to edit using our method:
-
- ```py
- from diffusers.utils import load_image
-
- img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
- raw_image = load_image(img_url).convert("RGB").resize((768, 768))
- ```
-
- Then, we employ the source and target prompts to generate the editing mask:
-
- ```py
- # See the "Generating source and target embeddings" section below to learn how to
- # automate the generation of these captions with a pre-trained model like Flan-T5.
-
- source_prompt = "a bowl of fruits"
- target_prompt = "a basket of fruits"
- mask_image = pipeline.generate_mask(
-     image=raw_image,
-     source_prompt=source_prompt,
-     target_prompt=target_prompt,
-     generator=generator,
- )
- ```
-
- Then, we employ the caption and the input image to get the inverted latents:
-
- ```py
- inv_latents = pipeline.invert(prompt=source_prompt, image=raw_image, generator=generator).latents
- ```
-
- Now, generate the image with the inverted latents and semantically generated mask:
-
- ```py
- image = pipeline(
-     prompt=target_prompt,
-     mask_image=mask_image,
-     image_latents=inv_latents,
-     generator=generator,
-     negative_prompt=source_prompt,
- ).images[0]
- image.save("edited_image.png")
- ```
-
- ## Generating image captions for inversion
-
- The authors originally used the source concept prompt as the caption for generating the partially inverted latents. However, we can also leverage open source and public image captioning models for the same purpose.
- Below, we provide an end-to-end example with the [BLIP](https://huggingface.co/docs/transformers/model_doc/blip) model
- for generating captions.
-
- First, let's load our automatic image captioning model:
-
- ```py
- import torch
- from transformers import BlipForConditionalGeneration, BlipProcessor
-
- captioner_id = "Salesforce/blip-image-captioning-base"
- processor = BlipProcessor.from_pretrained(captioner_id)
- model = BlipForConditionalGeneration.from_pretrained(captioner_id, torch_dtype=torch.float16, low_cpu_mem_usage=True)
- ```
-
- Then, we define a utility to generate captions from an input image using the model:
-
- ```py
- @torch.no_grad()
- def generate_caption(images, caption_generator, caption_processor):
-     text = "a photograph of"
-
-     inputs = caption_processor(images, text, return_tensors="pt").to(device="cuda", dtype=caption_generator.dtype)
-     caption_generator.to("cuda")
-     outputs = caption_generator.generate(**inputs, max_new_tokens=128)
-
-     # offload caption generator
-     caption_generator.to("cpu")
-
-     caption = caption_processor.batch_decode(outputs, skip_special_tokens=True)[0]
-     return caption
- ```
-
- Then, we load an input image for conditioning and obtain a suitable caption for it:
-
- ```py
- from diffusers.utils import load_image
-
- img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
- raw_image = load_image(img_url).convert("RGB").resize((768, 768))
- caption = generate_caption(raw_image, model, processor)
- ```
-
- Then, we employ the generated caption and the input image to get the inverted latents:
-
- ```py
- from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline
-
- pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
-     "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
- )
- pipeline = pipeline.to("cuda")
- pipeline.enable_model_cpu_offload()
- pipeline.enable_vae_slicing()
-
- pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
- pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
-
- generator = torch.manual_seed(0)
- inv_latents = pipeline.invert(prompt=caption, image=raw_image, generator=generator).latents
- ```
-
- Now, generate the image with the inverted latents and semantically generated mask from our source and target prompts:
-
- ```py
- source_prompt = "a bowl of fruits"
- target_prompt = "a basket of fruits"
-
- mask_image = pipeline.generate_mask(
-     image=raw_image,
-     source_prompt=source_prompt,
-     target_prompt=target_prompt,
-     generator=generator,
- )
-
- image = pipeline(
-     prompt=target_prompt,
-     mask_image=mask_image,
-     image_latents=inv_latents,
-     generator=generator,
-     negative_prompt=source_prompt,
- ).images[0]
- image.save("edited_image.png")
- ```
-
- ## Generating source and target embeddings
-
- The authors originally required the user to manually provide the source and target prompts for discovering
- edit directions. However, we can also leverage open source and public models for the same purpose.
- Below, we provide an end-to-end example with the [Flan-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5) model
- for generating source and target embeddings.
-
- **1. Load the generation model**:
-
- ```py
- import torch
- from transformers import AutoTokenizer, T5ForConditionalGeneration
-
- tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")
- model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto", torch_dtype=torch.float16)
- ```
-
- **2. Construct a starting prompt**:
-
- ```py
- source_concept = "bowl"
- target_concept = "basket"
-
- source_text = (f"Provide a caption for images containing a {source_concept}. "
-                "The captions should be in English and should be no longer than 150 characters.")
-
- target_text = (f"Provide a caption for images containing a {target_concept}. "
-                "The captions should be in English and should be no longer than 150 characters.")
- ```
-
- Here, we're interested in the "bowl -> basket" direction.
-
- **3. Generate prompts**:
-
- We can use a utility like the following for this purpose.
-
- ```py
- @torch.no_grad()
- def generate_prompts(input_prompt):
-     input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda")
-
-     outputs = model.generate(
-         input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10
-     )
-     return tokenizer.batch_decode(outputs, skip_special_tokens=True)
- ```
-
- And then we just call it to generate our prompts:
-
- ```py
- source_prompts = generate_prompts(source_text)
- target_prompts = generate_prompts(target_text)
- ```
-
- We encourage you to play around with the different parameters supported by the
- `generate()` method ([documentation](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_tf_utils.TFGenerationMixin.generate)) for the generation quality you are looking for.
-
- **4. Load the embedding model**:
-
- Here, we need to use the same text encoder model used by the subsequent Stable Diffusion model.
-
- ```py
- from diffusers import StableDiffusionDiffEditPipeline
-
- pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
-     "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
- )
- pipeline = pipeline.to("cuda")
- pipeline.enable_model_cpu_offload()
- pipeline.enable_vae_slicing()
-
- generator = torch.manual_seed(0)
- ```
-
- **5. Compute embeddings**:
-
- ```py
- import torch
-
- @torch.no_grad()
- def embed_prompts(sentences, tokenizer, text_encoder, device="cuda"):
-     embeddings = []
-     for sent in sentences:
-         text_inputs = tokenizer(
-             sent,
-             padding="max_length",
-             max_length=tokenizer.model_max_length,
-             truncation=True,
-             return_tensors="pt",
-         )
-         text_input_ids = text_inputs.input_ids
-         prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0]
-         embeddings.append(prompt_embeds)
-     return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0)
-
- source_embeddings = embed_prompts(source_prompts, pipeline.tokenizer, pipeline.text_encoder)
- target_embeddings = embed_prompts(target_prompts, pipeline.tokenizer, pipeline.text_encoder)
- ```
-
- And you're done! Now, you can use these embeddings directly while calling the pipeline:
-
- ```py
- from diffusers import DDIMInverseScheduler, DDIMScheduler
- from diffusers.utils import load_image
-
- pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
- pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
-
- img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
- raw_image = load_image(img_url).convert("RGB").resize((768, 768))
-
-
- mask_image = pipeline.generate_mask(
-     image=raw_image,
-     source_prompt_embeds=source_embeddings,
-     target_prompt_embeds=target_embeddings,
-     generator=generator,
- )
-
- inv_latents = pipeline.invert(
-     prompt_embeds=source_embeddings,
-     image=raw_image,
-     generator=generator,
- ).latents
-
- images = pipeline(
-     mask_image=mask_image,
-     image_latents=inv_latents,
-     prompt_embeds=target_embeddings,
-     negative_prompt_embeds=source_embeddings,
-     generator=generator,
- ).images
- images[0].save("edited_image.png")
- ```
-
- ## StableDiffusionDiffEditPipeline
- [[autodoc]] StableDiffusionDiffEditPipeline
- 	- all
- 	- generate_mask
- 	- invert
- 	- __call__
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/conditional_image_generation.md DELETED
@@ -1,60 +0,0 @@
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
-
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
- specific language governing permissions and limitations under the License.
- -->
-
- # Conditional image generation
-
- [[open-in-colab]]
-
- Conditional image generation allows you to generate images from a text prompt. The text is converted into embeddings which are used to condition the model to generate an image from noise.
-
- The [`DiffusionPipeline`] is the easiest way to use a pre-trained diffusion system for inference.
-
- Start by creating an instance of [`DiffusionPipeline`] and specify which pipeline [checkpoint](https://huggingface.co/models?library=diffusers&sort=downloads) you would like to download.
-
- In this guide, you'll use [`DiffusionPipeline`] for text-to-image generation with [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5):
-
- ```python
- >>> from diffusers import DiffusionPipeline
-
- >>> generator = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
- ```
-
- The [`DiffusionPipeline`] downloads and caches all modeling, tokenization, and scheduling components.
- Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on a GPU.
- You can move the generator object to a GPU, just like you would in PyTorch:
-
- ```python
- >>> generator.to("cuda")
- ```
-
- Now you can use the `generator` on your text prompt:
-
- ```python
- >>> image = generator("An image of a squirrel in Picasso style").images[0]
- ```
-
- The output is by default wrapped into a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class) object.
-
- You can save the image by calling:
-
- ```python
- >>> image.save("image_of_squirrel_painting.png")
- ```
-
- Try out the Space below, and feel free to play around with the guidance scale parameter to see how it affects the image quality!
-
- <iframe
- 	src="https://stabilityai-stable-diffusion.hf.space"
- 	frameborder="0"
- 	width="850"
- 	height="500"
- ></iframe>
spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_x101_32x4d_fpn_1x_coco.py DELETED
@@ -1,13 +0,0 @@
- _base_ = './faster_rcnn_r50_fpn_1x_coco.py'
- model = dict(
-     pretrained='open-mmlab://resnext101_32x4d',
-     backbone=dict(
-         type='ResNeXt',
-         depth=101,
-         groups=32,
-         base_width=4,
-         num_stages=4,
-         out_indices=(0, 1, 2, 3),
-         frozen_stages=1,
-         norm_cfg=dict(type='BN', requires_grad=True),
-         style='pytorch'))
spaces/Andy1621/uniformer_image_detection/configs/gn/mask_rcnn_r101_fpn_gn-all_3x_coco.py DELETED
@@ -1,5 +0,0 @@
- _base_ = './mask_rcnn_r101_fpn_gn-all_2x_coco.py'
-
- # learning policy
- lr_config = dict(step=[28, 34])
- runner = dict(type='EpochBasedRunner', max_epochs=36)
spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_480x480_40k_pascal_context.py DELETED
@@ -1,10 +0,0 @@
- _base_ = [
-     '../_base_/models/pspnet_r50-d8.py',
-     '../_base_/datasets/pascal_context.py', '../_base_/default_runtime.py',
-     '../_base_/schedules/schedule_40k.py'
- ]
- model = dict(
-     decode_head=dict(num_classes=60),
-     auxiliary_head=dict(num_classes=60),
-     test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320)))
- optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001)
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/sampler_hijack.py DELETED
@@ -1,218 +0,0 @@
- import math
-
- import torch
- import transformers
- from transformers import LogitsWarper
- from transformers.generation.logits_process import (
-     LogitNormalization,
-     LogitsProcessor,
-     LogitsProcessorList,
-     TemperatureLogitsWarper
- )
-
- global_scores = None
-
-
- class TailFreeLogitsWarper(LogitsWarper):
-     def __init__(self, tfs: float, filter_value: float = -float("Inf"), min_tokens_to_keep: int = 1):
-         tfs = float(tfs)
-         if tfs < 0 or tfs > 1.0:
-             raise ValueError(f"`tfs` has to be a float >= 0 and <= 1, but is {tfs}")
-         self.tfs = tfs
-         self.filter_value = filter_value
-         self.min_tokens_to_keep = min_tokens_to_keep
-
-     def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
-         sorted_logits, sorted_indices = torch.sort(scores, descending=True)
-         probs = sorted_logits.softmax(dim=-1)
-
-         # Compute second derivative normalized CDF
-         d2 = probs.diff().diff().abs()
-         normalized_d2 = d2 / d2.sum(dim=-1, keepdim=True)
-         normalized_d2_cdf = normalized_d2.cumsum(dim=-1)
-
-         # Remove tokens with CDF value above the threshold (tokens with 0 are kept)
-         sorted_indices_to_remove = normalized_d2_cdf > self.tfs
-
-         # Centre the distribution around the cutoff as in the original implementation of the algorithm
-         sorted_indices_to_remove = torch.cat(
-             (
-                 torch.zeros(scores.shape[0], 1, dtype=torch.bool, device=scores.device),
-                 sorted_indices_to_remove,
-                 torch.ones(scores.shape[0], 1, dtype=torch.bool, device=scores.device),
-             ),
-             dim=-1,
-         )
-
-         if self.min_tokens_to_keep > 1:
-             # Keep at least min_tokens_to_keep
-             sorted_indices_to_remove[..., : self.min_tokens_to_keep] = 0
-
-         indices_to_remove = sorted_indices_to_remove.scatter(1, sorted_indices, sorted_indices_to_remove)
-         scores = scores.masked_fill(indices_to_remove, self.filter_value)
-         return scores
-
-
- class TopALogitsWarper(LogitsWarper):
-     def __init__(self, top_a: float, filter_value: float = -float("Inf"), min_tokens_to_keep: int = 1):
-         top_a = float(top_a)
-         if top_a < 0 or top_a > 1.0:
-             raise ValueError(f"`top_a` has to be a float >= 0 and <= 1, but is {top_a}")
-         self.top_a = top_a
-         self.filter_value = filter_value
-         self.min_tokens_to_keep = min_tokens_to_keep
-
-     def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
-         sorted_logits, sorted_indices = torch.sort(scores, descending=True)
-         probs = sorted_logits.softmax(dim=-1)
-
-         # Remove tokens with probability less than top_a*(max(probs))^2 (tokens with 0 are kept)
-         probs_max = probs[..., 0, None]
-         sorted_indices_to_remove = probs < probs_max * probs_max * self.top_a
-
-         if self.min_tokens_to_keep > 1:
-             # Keep at least min_tokens_to_keep
-             sorted_indices_to_remove[..., : self.min_tokens_to_keep] = 0
-
-         indices_to_remove = sorted_indices_to_remove.scatter(1, sorted_indices, sorted_indices_to_remove)
-         scores = scores.masked_fill(indices_to_remove, self.filter_value)
-         return scores
-
-
- class MirostatLogitsWarper(LogitsWarper):
-     def __init__(self, mirostat_mode: int, mirostat_tau: float, mirostat_eta: float, filter_value: float = -float("Inf"), min_tokens_to_keep: int = 1):
-         if mirostat_mode not in [2]:
-             raise ValueError(f"`mirostat` has to be an integer 2, but is {mirostat_mode}")
-         self.mirostat_mode = mirostat_mode
-         self.mirostat_eta = mirostat_eta
-         self.mirostat_tau = mirostat_tau
-         self.filter_value = filter_value
-         self.min_tokens_to_keep = min_tokens_to_keep
-         self.mu = 2 * self.mirostat_tau
-         self.e = 0
-
-     def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
-         logits = scores[0]
-         sorted_logits, sorted_indices = torch.sort(logits, descending=True)
-         prob_original = torch.softmax(sorted_logits, dim=-1).tolist()  # candidates
-
-         # Truncate the words with surprise values greater than mu
-         for i, candidate in enumerate(prob_original):
-             if candidate > 0 and -math.log2(candidate) > self.mu:
-                 if i == 0:
-                     sorted_logits = sorted_logits[:1]
-                 else:
-                     sorted_logits = sorted_logits[:i]
-                 break
-
-         # Normalize the probabilities of the remaining words
-         prob_topk = torch.softmax(sorted_logits, dim=0).to('cuda')
-
-         prev_i = torch.multinomial(prob_topk, num_samples=1, replacement=True).to('cuda')
-
-         observed_surprise = -math.log2(prob_topk[prev_i])
-         self.e = observed_surprise - self.mirostat_tau
-
-         # Update mu using the learning rate and error
-         self.mu -= self.mirostat_eta * self.e
-
-         sorted_indices_to_remove = torch.ones_like(scores[0], dtype=torch.bool)
-         sorted_indices_to_remove[prev_i] = False
-
-         indices_to_remove = sorted_indices_to_remove.unsqueeze(0).scatter(1, sorted_indices.unsqueeze(0), sorted_indices_to_remove.unsqueeze(0))
-         scores = scores.masked_fill(indices_to_remove, self.filter_value)
-         return scores
-
-
- class SpyLogitsWarper(LogitsWarper):
-     def __init__(self):
-         pass
-
-     def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
-         global global_scores
133
- global_scores = scores
134
- return scores
135
-
136
-
137
- class RepetitionPenaltyLogitsProcessorWithRange(LogitsProcessor):
138
- '''
139
- Copied from the transformers library
140
- '''
141
-
142
- def __init__(self, penalty: float, _range: int):
143
- if not isinstance(penalty, float) or not (penalty > 0):
144
- raise ValueError(f"`penalty` has to be a strictly positive float, but is {penalty}")
145
-
146
- self.penalty = penalty
147
- self._range = _range
148
-
149
- def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
150
-
151
- input_ids = input_ids[:, -self._range:]
152
- score = torch.gather(scores, 1, input_ids)
153
-
154
- # if score < 0 then repetition penalty has to be multiplied to reduce the previous token probability
155
- score = torch.where(score < 0, score * self.penalty, score / self.penalty)
156
-
157
- scores.scatter_(1, input_ids, score)
158
- return scores
159
-
160
-
161
- def get_logits_warper_patch(self, generation_config):
162
- warpers = self._get_logits_warper_old(generation_config)
163
- warpers_to_add = LogitsProcessorList()
164
- min_tokens_to_keep = 2 if generation_config.num_beams > 1 else 1
165
-
166
- if generation_config.mirostat_mode is not None and generation_config.mirostat_mode == 2:
167
- warpers_to_add.append(MirostatLogitsWarper(mirostat_mode=generation_config.mirostat_mode, mirostat_eta=generation_config.mirostat_eta, mirostat_tau=generation_config.mirostat_tau, min_tokens_to_keep=min_tokens_to_keep))
168
- # We need to disable samplers other than temperature
169
- for warper in warpers:
170
- if not isinstance(warper, TemperatureLogitsWarper):
171
- warpers.remove(warper)
172
- else:
173
- if generation_config.tfs is not None and 0.0 <= generation_config.tfs <= 1.0:
174
- warpers_to_add.append(TailFreeLogitsWarper(tfs=generation_config.tfs, min_tokens_to_keep=min_tokens_to_keep))
175
- if generation_config.top_a is not None and 0.0 <= generation_config.top_a <= 1.0:
176
- warpers_to_add.append(TopALogitsWarper(top_a=generation_config.top_a, min_tokens_to_keep=min_tokens_to_keep))
177
-
178
- if warpers and isinstance(warpers[-1], LogitNormalization):
179
- warpers = warpers[:-1] + warpers_to_add + [warpers[-1]]
180
- else:
181
- warpers += warpers_to_add
182
-
183
- warpers.append(SpyLogitsWarper())
184
- return warpers
185
-
186
-
187
- def get_logits_processor_patch(self, **kwargs):
188
- result = self._get_logits_processor_old(**kwargs)
189
- repetition_penalty_range = kwargs['generation_config'].repetition_penalty_range
190
- repetition_penalty = kwargs['generation_config'].repetition_penalty
191
-
192
- if repetition_penalty_range > 0:
193
- for i in range(len(result)):
194
- if result[i].__class__.__name__ == 'RepetitionPenaltyLogitsProcessor':
195
- result[i] = RepetitionPenaltyLogitsProcessorWithRange(repetition_penalty, repetition_penalty_range)
196
-
197
- return result
198
-
199
-
200
- def generation_config_init_patch(self, **kwargs):
201
- self.__init___old(**kwargs)
202
- self.tfs = kwargs.pop("tfs", 1.0)
203
- self.top_a = kwargs.pop("top_a", 0.0)
204
- self.mirostat_mode = kwargs.pop("mirostat_mode", 0)
205
- self.mirostat_eta = kwargs.pop("mirostat_eta", 0.1)
206
- self.mirostat_tau = kwargs.pop("mirostat_tau", 5)
207
- self.repetition_penalty_range = kwargs.pop("repetition_penalty_range", 0)
208
-
209
-
210
- def hijack_samplers():
211
- transformers.GenerationMixin._get_logits_warper_old = transformers.GenerationMixin._get_logits_warper
212
- transformers.GenerationMixin._get_logits_warper = get_logits_warper_patch
213
-
214
- transformers.GenerationMixin._get_logits_processor_old = transformers.GenerationMixin._get_logits_processor
215
- transformers.GenerationMixin._get_logits_processor = get_logits_processor_patch
216
-
217
- transformers.GenerationConfig.__init___old = transformers.GenerationConfig.__init__
218
- transformers.GenerationConfig.__init__ = generation_config_init_patch
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/ops/encoding.py DELETED
@@ -1,74 +0,0 @@
- import torch
- from torch import nn
- from torch.nn import functional as F
- 
- 
- class Encoding(nn.Module):
-     """Encoding Layer: a learnable residual encoder.
- 
-     Input is of shape  (batch_size, channels, height, width).
-     Output is of shape (batch_size, num_codes, channels).
- 
-     Args:
-         channels: dimension of the features or feature channels
-         num_codes: number of code words
-     """
- 
-     def __init__(self, channels, num_codes):
-         super(Encoding, self).__init__()
-         # init codewords and smoothing factor
-         self.channels, self.num_codes = channels, num_codes
-         std = 1. / ((num_codes * channels)**0.5)
-         # [num_codes, channels]
-         self.codewords = nn.Parameter(
-             torch.empty(num_codes, channels,
-                         dtype=torch.float).uniform_(-std, std),
-             requires_grad=True)
-         # [num_codes]
-         self.scale = nn.Parameter(
-             torch.empty(num_codes, dtype=torch.float).uniform_(-1, 0),
-             requires_grad=True)
- 
-     @staticmethod
-     def scaled_l2(x, codewords, scale):
-         num_codes, channels = codewords.size()
-         batch_size = x.size(0)
-         reshaped_scale = scale.view((1, 1, num_codes))
-         expanded_x = x.unsqueeze(2).expand(
-             (batch_size, x.size(1), num_codes, channels))
-         reshaped_codewords = codewords.view((1, 1, num_codes, channels))
- 
-         scaled_l2_norm = reshaped_scale * (
-             expanded_x - reshaped_codewords).pow(2).sum(dim=3)
-         return scaled_l2_norm
- 
-     @staticmethod
-     def aggregate(assignment_weights, x, codewords):
-         num_codes, channels = codewords.size()
-         reshaped_codewords = codewords.view((1, 1, num_codes, channels))
-         batch_size = x.size(0)
- 
-         expanded_x = x.unsqueeze(2).expand(
-             (batch_size, x.size(1), num_codes, channels))
-         encoded_feat = (assignment_weights.unsqueeze(3) *
-                         (expanded_x - reshaped_codewords)).sum(dim=1)
-         return encoded_feat
- 
-     def forward(self, x):
-         assert x.dim() == 4 and x.size(1) == self.channels
-         # [batch_size, channels, height, width]
-         batch_size = x.size(0)
-         # [batch_size, height x width, channels]
-         x = x.view(batch_size, self.channels, -1).transpose(1, 2).contiguous()
-         # assignment_weights: [batch_size, channels, num_codes]
-         assignment_weights = F.softmax(
-             self.scaled_l2(x, self.codewords, self.scale), dim=2)
-         # aggregate
-         encoded_feat = self.aggregate(assignment_weights, x, self.codewords)
-         return encoded_feat
- 
-     def __repr__(self):
-         repr_str = self.__class__.__name__
-         repr_str += f'(Nx{self.channels}xHxW =>Nx{self.num_codes}' \
-                     f'x{self.channels})'
-         return repr_str
spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/ema.py DELETED
@@ -1,80 +0,0 @@
- import torch
- from torch import nn
- 
- 
- class LitEma(nn.Module):
-     def __init__(self, model, decay=0.9999, use_num_upates=True):
-         super().__init__()
-         if decay < 0.0 or decay > 1.0:
-             raise ValueError('Decay must be between 0 and 1')
- 
-         self.m_name2s_name = {}
-         self.register_buffer('decay', torch.tensor(decay, dtype=torch.float32))
-         self.register_buffer('num_updates', torch.tensor(0, dtype=torch.int) if use_num_upates
-                              else torch.tensor(-1, dtype=torch.int))
- 
-         for name, p in model.named_parameters():
-             if p.requires_grad:
-                 # remove as '.'-character is not allowed in buffers
-                 s_name = name.replace('.', '')
-                 self.m_name2s_name.update({name: s_name})
-                 self.register_buffer(s_name, p.clone().detach().data)
- 
-         self.collected_params = []
- 
-     def reset_num_updates(self):
-         del self.num_updates
-         self.register_buffer('num_updates', torch.tensor(0, dtype=torch.int))
- 
-     def forward(self, model):
-         decay = self.decay
- 
-         if self.num_updates >= 0:
-             self.num_updates += 1
-             decay = min(self.decay, (1 + self.num_updates) / (10 + self.num_updates))
- 
-         one_minus_decay = 1.0 - decay
- 
-         with torch.no_grad():
-             m_param = dict(model.named_parameters())
-             shadow_params = dict(self.named_buffers())
- 
-             for key in m_param:
-                 if m_param[key].requires_grad:
-                     sname = self.m_name2s_name[key]
-                     shadow_params[sname] = shadow_params[sname].type_as(m_param[key])
-                     shadow_params[sname].sub_(one_minus_decay * (shadow_params[sname] - m_param[key]))
-                 else:
-                     assert not key in self.m_name2s_name
- 
-     def copy_to(self, model):
-         m_param = dict(model.named_parameters())
-         shadow_params = dict(self.named_buffers())
-         for key in m_param:
-             if m_param[key].requires_grad:
-                 m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data)
-             else:
-                 assert not key in self.m_name2s_name
- 
-     def store(self, parameters):
-         """
-         Save the current parameters for restoring later.
-         Args:
-             parameters: Iterable of `torch.nn.Parameter`; the parameters to be
-                 temporarily stored.
-         """
-         self.collected_params = [param.clone() for param in parameters]
- 
-     def restore(self, parameters):
-         """
-         Restore the parameters stored with the `store` method.
-         Useful to validate the model with EMA parameters without affecting the
-         original optimization process. Store the parameters before the
-         `copy_to` method. After validation (or model saving), use this to
-         restore the former parameters.
-         Args:
-             parameters: Iterable of `torch.nn.Parameter`; the parameters to be
-                 updated with the stored parameters.
-         """
-         for c_param, param in zip(self.collected_params, parameters):
-             param.data.copy_(c_param.data)
spaces/Aphrodite/AIChatBot-SL-Chatbot-Blenderbot/README.md DELETED
@@ -1,14 +0,0 @@
- ---
- title: AIChatBot SL Chatbot Blenderbot
- emoji: 🏃
- colorFrom: blue
- colorTo: purple
- sdk: streamlit
- sdk_version: 1.10.0
- app_file: app.py
- pinned: false
- license: mit
- duplicated_from: williambr/AIChatBot-SL-Chatbot-Blenderbot
- ---
- 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Apk/anything-v3.0/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: Anything V3.0
- emoji: 🏃
- colorFrom: gray
- colorTo: yellow
- sdk: gradio
- sdk_version: 3.10.1
- app_file: app.py
- pinned: false
- duplicated_from: akhaliq/anything-v3.0
- ---
- 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/zero_shot/zero_shot_text2video.py DELETED
@@ -1,164 +0,0 @@
- import gradio as gr
- import imageio
- import torch
- from diffusers import TextToVideoZeroPipeline
- 
- from video_diffusion.tuneavideo.util import save_videos_grid
- from video_diffusion.utils.model_list import stable_model_list
- 
- 
- class ZeroShotText2VideoGenerator:
-     def __init__(self):
-         self.pipe = None
- 
-     def load_model(self, model_id):
-         if self.pipe is None:
-             self.pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
-             self.pipe.to("cuda")
-             self.pipe.enable_xformers_memory_efficient_attention()
-             self.pipe.enable_attention_slicing()
- 
-         return self.pipe
- 
-     def generate_video(
-         self,
-         prompt,
-         negative_prompt,
-         model_id,
-         height,
-         width,
-         video_length,
-         guidance_scale,
-         fps,
-         t0,
-         t1,
-         motion_field_strength_x,
-         motion_field_strength_y,
-     ):
-         pipe = self.load_model(model_id)
-         result = pipe(
-             prompt=prompt,
-             negative_prompt=negative_prompt,
-             height=height,
-             width=width,
-             video_length=video_length,
-             guidance_scale=guidance_scale,
-             t0=t0,
-             t1=t1,
-             motion_field_strength_x=motion_field_strength_x,
-             motion_field_strength_y=motion_field_strength_y,
-         ).images
- 
-         result = [(r * 255).astype("uint8") for r in result]
-         imageio.mimsave("video.mp4", result, fps=fps)
-         return "video.mp4"
- 
-     def app():
-         with gr.Blocks():
-             with gr.Row():
-                 with gr.Column():
-                     zero_shot_text2video_prompt = gr.Textbox(
-                         lines=1,
-                         placeholder="Prompt",
-                         show_label=False,
-                     )
-                     zero_shot_text2video_negative_prompt = gr.Textbox(
-                         lines=1,
-                         placeholder="Negative Prompt",
-                         show_label=False,
-                     )
-                     zero_shot_text2video_model_id = gr.Dropdown(
-                         choices=stable_model_list,
-                         label="Stable Model List",
-                         value=stable_model_list[0],
-                     )
-                     with gr.Row():
-                         with gr.Column():
-                             zero_shot_text2video_guidance_scale = gr.Slider(
-                                 label="Guidance Scale",
-                                 minimum=1,
-                                 maximum=15,
-                                 step=1,
-                                 value=7.5,
-                             )
-                             zero_shot_text2video_video_length = gr.Slider(
-                                 label="Video Length",
-                                 minimum=1,
-                                 maximum=100,
-                                 step=1,
-                                 value=10,
-                             )
-                             zero_shot_text2video_t0 = gr.Slider(
-                                 label="Timestep T0",
-                                 minimum=0,
-                                 maximum=100,
-                                 step=1,
-                                 value=44,
-                             )
-                             zero_shot_text2video_motion_field_strength_x = gr.Slider(
-                                 label="Motion Field Strength X",
-                                 minimum=0,
-                                 maximum=100,
-                                 step=1,
-                                 value=12,
-                             )
-                             zero_shot_text2video_fps = gr.Slider(
-                                 label="Fps",
-                                 minimum=1,
-                                 maximum=60,
-                                 step=1,
-                                 value=10,
-                             )
-                     with gr.Row():
-                         with gr.Column():
-                             zero_shot_text2video_height = gr.Slider(
-                                 label="Height",
-                                 minimum=128,
-                                 maximum=1280,
-                                 step=32,
-                                 value=512,
-                             )
-                             zero_shot_text2video_width = gr.Slider(
-                                 label="Width",
-                                 minimum=128,
-                                 maximum=1280,
-                                 step=32,
-                                 value=512,
-                             )
-                             zero_shot_text2video_t1 = gr.Slider(
-                                 label="Timestep T1",
-                                 minimum=0,
-                                 maximum=100,
-                                 step=1,
-                                 value=47,
-                             )
-                             zero_shot_text2video_motion_field_strength_y = gr.Slider(
-                                 label="Motion Field Strength Y",
-                                 minimum=0,
-                                 maximum=100,
-                                 step=1,
-                                 value=12,
-                             )
-                             zero_shot_text2video_button = gr.Button(value="Generator")
- 
-                 with gr.Column():
-                     zero_shot_text2video_output = gr.Video(label="Output")
- 
-             zero_shot_text2video_button.click(
-                 fn=ZeroShotText2VideoGenerator().generate_video,
-                 inputs=[
-                     zero_shot_text2video_prompt,
-                     zero_shot_text2video_negative_prompt,
-                     zero_shot_text2video_model_id,
-                     zero_shot_text2video_height,
-                     zero_shot_text2video_width,
-                     zero_shot_text2video_video_length,
-                     zero_shot_text2video_guidance_scale,
-                     zero_shot_text2video_fps,
-                     zero_shot_text2video_t0,
-                     zero_shot_text2video_t1,
-                     zero_shot_text2video_motion_field_strength_x,
-                     zero_shot_text2video_motion_field_strength_y,
-                 ],
-                 outputs=zero_shot_text2video_output,
-             )
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/selection_prefs.py DELETED
@@ -1,51 +0,0 @@
- from typing import Optional
- 
- from pip._internal.models.format_control import FormatControl
- 
- 
- class SelectionPreferences:
-     """
-     Encapsulates the candidate selection preferences for downloading
-     and installing files.
-     """
- 
-     __slots__ = [
-         "allow_yanked",
-         "allow_all_prereleases",
-         "format_control",
-         "prefer_binary",
-         "ignore_requires_python",
-     ]
- 
-     # Don't include an allow_yanked default value to make sure each call
-     # site considers whether yanked releases are allowed. This also causes
-     # that decision to be made explicit in the calling code, which helps
-     # people when reading the code.
-     def __init__(
-         self,
-         allow_yanked: bool,
-         allow_all_prereleases: bool = False,
-         format_control: Optional[FormatControl] = None,
-         prefer_binary: bool = False,
-         ignore_requires_python: Optional[bool] = None,
-     ) -> None:
-         """Create a SelectionPreferences object.
- 
-         :param allow_yanked: Whether files marked as yanked (in the sense
-             of PEP 592) are permitted to be candidates for install.
-         :param format_control: A FormatControl object or None. Used to control
-             the selection of source packages / binary packages when consulting
-             the index and links.
-         :param prefer_binary: Whether to prefer an old, but valid, binary
-             dist over a new source dist.
-         :param ignore_requires_python: Whether to ignore incompatible
-             "Requires-Python" values in links. Defaults to False.
-         """
-         if ignore_requires_python is None:
-             ignore_requires_python = False
- 
-         self.allow_yanked = allow_yanked
-         self.allow_all_prereleases = allow_all_prereleases
-         self.format_control = format_control
-         self.prefer_binary = prefer_binary
-         self.ignore_requires_python = ignore_requires_python
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/__init__.py DELETED
@@ -1,331 +0,0 @@
- # module pyparsing.py
- #
- # Copyright (c) 2003-2022  Paul T. McGuire
- #
- # Permission is hereby granted, free of charge, to any person obtaining
- # a copy of this software and associated documentation files (the
- # "Software"), to deal in the Software without restriction, including
- # without limitation the rights to use, copy, modify, merge, publish,
- # distribute, sublicense, and/or sell copies of the Software, and to
- # permit persons to whom the Software is furnished to do so, subject to
- # the following conditions:
- #
- # The above copyright notice and this permission notice shall be
- # included in all copies or substantial portions of the Software.
- #
- # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- # MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- # IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
- # CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
- # TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
- # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- #
- 
- __doc__ = """
- pyparsing module - Classes and methods to define and execute parsing grammars
- =============================================================================
- 
- The pyparsing module is an alternative approach to creating and
- executing simple grammars, vs. the traditional lex/yacc approach, or the
- use of regular expressions. With pyparsing, you don't need to learn
- a new syntax for defining grammars or matching expressions - the parsing
- module provides a library of classes that you use to construct the
- grammar directly in Python.
- 
- Here is a program to parse "Hello, World!" (or any greeting of the form
- ``"<salutation>, <addressee>!"``), built up using :class:`Word`,
- :class:`Literal`, and :class:`And` elements
- (the :meth:`'+'<ParserElement.__add__>` operators create :class:`And` expressions,
- and the strings are auto-converted to :class:`Literal` expressions)::
- 
-     from pip._vendor.pyparsing import Word, alphas
- 
-     # define grammar of a greeting
-     greet = Word(alphas) + "," + Word(alphas) + "!"
- 
-     hello = "Hello, World!"
-     print(hello, "->", greet.parse_string(hello))
- 
- The program outputs the following::
- 
-     Hello, World! -> ['Hello', ',', 'World', '!']
- 
- The Python representation of the grammar is quite readable, owing to the
- self-explanatory class names, and the use of :class:`'+'<And>`,
- :class:`'|'<MatchFirst>`, :class:`'^'<Or>` and :class:`'&'<Each>` operators.
- 
- The :class:`ParseResults` object returned from
- :class:`ParserElement.parseString` can be
- accessed as a nested list, a dictionary, or an object with named
- attributes.
- 
- The pyparsing module handles some of the problems that are typically
- vexing when writing text parsers:
- 
- - extra or missing whitespace (the above program will also handle
-   "Hello,World!", "Hello , World !", etc.)
- - quoted strings
- - embedded comments
- 
- 
- Getting Started -
- -----------------
- Visit the classes :class:`ParserElement` and :class:`ParseResults` to
- see the base classes that most other pyparsing
- classes inherit from. Use the docstrings for examples of how to:
- 
- - construct literal match expressions from :class:`Literal` and
-   :class:`CaselessLiteral` classes
- - construct character word-group expressions using the :class:`Word`
-   class
- - see how to create repetitive expressions using :class:`ZeroOrMore`
-   and :class:`OneOrMore` classes
- - use :class:`'+'<And>`, :class:`'|'<MatchFirst>`, :class:`'^'<Or>`,
-   and :class:`'&'<Each>` operators to combine simple expressions into
-   more complex ones
- - associate names with your parsed results using
-   :class:`ParserElement.setResultsName`
- - access the parsed data, which is returned as a :class:`ParseResults`
-   object
- - find some helpful expression short-cuts like :class:`delimitedList`
-   and :class:`oneOf`
- - find more useful common expressions in the :class:`pyparsing_common`
-   namespace class
- """
- from typing import NamedTuple
- 
- 
- class version_info(NamedTuple):
-     major: int
-     minor: int
-     micro: int
-     releaselevel: str
-     serial: int
- 
-     @property
-     def __version__(self):
-         return (
-             "{}.{}.{}".format(self.major, self.minor, self.micro)
-             + (
-                 "{}{}{}".format(
-                     "r" if self.releaselevel[0] == "c" else "",
-                     self.releaselevel[0],
-                     self.serial,
-                 ),
-                 "",
-             )[self.releaselevel == "final"]
-         )
- 
-     def __str__(self):
-         return "{} {} / {}".format(__name__, self.__version__, __version_time__)
- 
-     def __repr__(self):
-         return "{}.{}({})".format(
-             __name__,
-             type(self).__name__,
-             ", ".join("{}={!r}".format(*nv) for nv in zip(self._fields, self)),
-         )
- 
- 
- __version_info__ = version_info(3, 0, 9, "final", 0)
- __version_time__ = "05 May 2022 07:02 UTC"
- __version__ = __version_info__.__version__
- __versionTime__ = __version_time__
- __author__ = "Paul McGuire <[email protected]>"
- 
- from .util import *
- from .exceptions import *
- from .actions import *
- from .core import __diag__, __compat__
- from .results import *
- from .core import *
- from .core import _builtin_exprs as core_builtin_exprs
- from .helpers import *
- from .helpers import _builtin_exprs as helper_builtin_exprs
- 
- from .unicode import unicode_set, UnicodeRangeList, pyparsing_unicode as unicode
- from .testing import pyparsing_test as testing
- from .common import (
-     pyparsing_common as common,
-     _builtin_exprs as common_builtin_exprs,
- )
- 
- # define backward compat synonyms
- if "pyparsing_unicode" not in globals():
-     pyparsing_unicode = unicode
- if "pyparsing_common" not in globals():
-     pyparsing_common = common
- if "pyparsing_test" not in globals():
-     pyparsing_test = testing
- 
- core_builtin_exprs += common_builtin_exprs + helper_builtin_exprs
- 
- 
- __all__ = [
-     "__version__",
-     "__version_time__",
-     "__author__",
-     "__compat__",
-     "__diag__",
-     "And",
-     "AtLineStart",
-     "AtStringStart",
-     "CaselessKeyword",
-     "CaselessLiteral",
-     "CharsNotIn",
-     "Combine",
-     "Dict",
-     "Each",
-     "Empty",
-     "FollowedBy",
-     "Forward",
-     "GoToColumn",
-     "Group",
-     "IndentedBlock",
-     "Keyword",
-     "LineEnd",
-     "LineStart",
-     "Literal",
-     "Located",
-     "PrecededBy",
-     "MatchFirst",
-     "NoMatch",
-     "NotAny",
-     "OneOrMore",
-     "OnlyOnce",
-     "OpAssoc",
-     "Opt",
-     "Optional",
-     "Or",
-     "ParseBaseException",
-     "ParseElementEnhance",
-     "ParseException",
-     "ParseExpression",
-     "ParseFatalException",
-     "ParseResults",
-     "ParseSyntaxException",
-     "ParserElement",
-     "PositionToken",
-     "QuotedString",
-     "RecursiveGrammarException",
-     "Regex",
-     "SkipTo",
-     "StringEnd",
-     "StringStart",
-     "Suppress",
-     "Token",
-     "TokenConverter",
-     "White",
-     "Word",
-     "WordEnd",
-     "WordStart",
-     "ZeroOrMore",
-     "Char",
-     "alphanums",
-     "alphas",
-     "alphas8bit",
-     "any_close_tag",
-     "any_open_tag",
-     "c_style_comment",
-     "col",
-     "common_html_entity",
-     "counted_array",
-     "cpp_style_comment",
-     "dbl_quoted_string",
-     "dbl_slash_comment",
-     "delimited_list",
-     "dict_of",
-     "empty",
-     "hexnums",
-     "html_comment",
-     "identchars",
-     "identbodychars",
-     "java_style_comment",
-     "line",
-     "line_end",
-     "line_start",
-     "lineno",
-     "make_html_tags",
-     "make_xml_tags",
-     "match_only_at_col",
-     "match_previous_expr",
-     "match_previous_literal",
-     "nested_expr",
-     "null_debug_action",
-     "nums",
-     "one_of",
-     "printables",
-     "punc8bit",
-     "python_style_comment",
-     "quoted_string",
-     "remove_quotes",
-     "replace_with",
-     "replace_html_entity",
-     "rest_of_line",
-     "sgl_quoted_string",
-     "srange",
-     "string_end",
-     "string_start",
-     "trace_parse_action",
-     "unicode_string",
-     "with_attribute",
-     "indentedBlock",
-     "original_text_for",
-     "ungroup",
-     "infix_notation",
-     "locatedExpr",
-     "with_class",
-     "CloseMatch",
-     "token_map",
-     "pyparsing_common",
-     "pyparsing_unicode",
-     "unicode_set",
-     "condition_as_parse_action",
-     "pyparsing_test",
-     # pre-PEP8 compatibility names
-     "__versionTime__",
-     "anyCloseTag",
-     "anyOpenTag",
-     "cStyleComment",
-     "commonHTMLEntity",
-     "countedArray",
-     "cppStyleComment",
-     "dblQuotedString",
-     "dblSlashComment",
-     "delimitedList",
-     "dictOf",
-     "htmlComment",
-     "javaStyleComment",
-     "lineEnd",
-     "lineStart",
-     "makeHTMLTags",
-     "makeXMLTags",
-     "matchOnlyAtCol",
-     "matchPreviousExpr",
-     "matchPreviousLiteral",
-     "nestedExpr",
-     "nullDebugAction",
-     "oneOf",
-     "opAssoc",
-     "pythonStyleComment",
-     "quotedString",
-     "removeQuotes",
-     "replaceHTMLEntity",
-     "replaceWith",
-     "restOfLine",
-     "sglQuotedString",
-     "stringEnd",
-     "stringStart",
-     "traceParseAction",
-     "unicodeString",
-     "withAttribute",
-     "indentedBlock",
-     "originalTextFor",
-     "infixNotation",
-     "locatedExpr",
-     "withClass",
-     "tokenMap",
-     "conditionAsParseAction",
-     "autoname_elements",
- ]
spaces/Audio-AGI/WavJourney/services.py DELETED
@@ -1,231 +0,0 @@
- import os
- import yaml
- import logging
- import nltk
- import torch
- import torchaudio
- from torchaudio.transforms import SpeedPerturbation
- from APIs import WRITE_AUDIO, LOUDNESS_NORM
- from utils import fade, get_service_port
- from flask import Flask, request, jsonify
-
- with open('config.yaml', 'r') as file:
-     config = yaml.safe_load(file)
-
- # Configure the logging format and level
- logging.basicConfig(
-     level=logging.INFO,
-     format='%(asctime)s - %(levelname)s - %(message)s'
- )
-
- # Create a FileHandler for the log file
- os.makedirs('services_logs', exist_ok=True)
- log_filename = 'services_logs/Wav-API.log'
- file_handler = logging.FileHandler(log_filename, mode='w')
- file_handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
-
- # Add the FileHandler to the root logger
- logging.getLogger('').addHandler(file_handler)
-
-
- """
- Initialize the AudioCraft models here
- """
- from audiocraft.models import AudioGen, MusicGen
- tta_model_size = config['AudioCraft']['tta_model_size']
- tta_model = AudioGen.get_pretrained(f'facebook/audiogen-{tta_model_size}')
- logging.info(f'AudioGen ({tta_model_size}) is loaded ...')
-
- ttm_model_size = config['AudioCraft']['ttm_model_size']
- ttm_model = MusicGen.get_pretrained(f'facebook/musicgen-{ttm_model_size}')
- logging.info(f'MusicGen ({ttm_model_size}) is loaded ...')
-
-
- """
- Initialize the BarkModel here
- """
- from transformers import BarkModel, AutoProcessor
- SPEED = float(config['Text-to-Speech']['speed'])
- speed_perturb = SpeedPerturbation(32000, [SPEED])
- tts_model = BarkModel.from_pretrained("suno/bark")
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
- tts_model = tts_model.to(device)
- tts_model = tts_model.to_bettertransformer()  # Flash attention
- SAMPLE_RATE = tts_model.generation_config.sample_rate
- SEMANTIC_TEMPERATURE = 0.9
- COARSE_TEMPERATURE = 0.5
- FINE_TEMPERATURE = 0.5
- processor = AutoProcessor.from_pretrained("suno/bark")
- logging.info('Bark model is loaded ...')
-
-
- """
- Initialize the VoiceFixer model here
- """
- from voicefixer import VoiceFixer
- vf = VoiceFixer()
- logging.info('VoiceFixer is loaded ...')
-
-
- """
- Initialize the VoiceParser model here
- """
- from VoiceParser.model import VoiceParser
- vp_device = config['Voice-Parser']['device']
- vp = VoiceParser(device=vp_device)
- logging.info('VoiceParser is loaded ...')
-
-
- app = Flask(__name__)
-
-
- @app.route('/generate_audio', methods=['POST'])
- def generate_audio():
-     # Receive the text from the POST request
-     data = request.json
-     text = data['text']
-     length = float(data.get('length', 5.0))
-     volume = float(data.get('volume', -35))
-     output_wav = data.get('output_wav', 'out.wav')
-
-     logging.info(f'TTA (AudioGen): Prompt: {text}, length: {length} seconds, volume: {volume} dB')
-
-     try:
-         tta_model.set_generation_params(duration=length)
-         wav = tta_model.generate([text])
-         wav = torchaudio.functional.resample(wav, orig_freq=16000, new_freq=32000)
-
-         wav = wav.squeeze().cpu().detach().numpy()
-         wav = fade(LOUDNESS_NORM(wav, volumn=volume))
-         WRITE_AUDIO(wav, name=output_wav)
-
-         # Return success message and the filename of the generated audio
-         return jsonify({'message': f'Text-to-Audio generated successfully | {text}', 'file': output_wav})
-
-     except Exception as e:
-         return jsonify({'API error': str(e)}), 500
-
-
- @app.route('/generate_music', methods=['POST'])
- def generate_music():
-     # Receive the text from the POST request
-     data = request.json
-     text = data['text']
-     length = float(data.get('length', 5.0))
-     volume = float(data.get('volume', -35))
-     output_wav = data.get('output_wav', 'out.wav')
-
-     logging.info(f'TTM (MusicGen): Prompt: {text}, length: {length} seconds, volume: {volume} dB')
-
-     try:
-         ttm_model.set_generation_params(duration=length)
-         wav = ttm_model.generate([text])
-         wav = wav[0][0].cpu().detach().numpy()
-         wav = fade(LOUDNESS_NORM(wav, volumn=volume))
-         WRITE_AUDIO(wav, name=output_wav)
-
-         # Return success message and the filename of the generated audio
-         return jsonify({'message': f'Text-to-Music generated successfully | {text}', 'file': output_wav})
-
-     except Exception as e:
-         # Return error message if something goes wrong
-         return jsonify({'API error': str(e)}), 500
-
-
- @app.route('/generate_speech', methods=['POST'])
- def generate_speech():
-     # Receive the text from the POST request
-     data = request.json
-     text = data['text']
-     speaker_id = data['speaker_id']
-     speaker_npz = data['speaker_npz']
-     volume = float(data.get('volume', -35))
-     output_wav = data.get('output_wav', 'out.wav')
-
-     logging.info(f'TTS (Bark): Speaker: {speaker_id}, Volume: {volume} dB, Prompt: {text}')
-
-     try:
-         # Generate audio using the global pipe object
-         text = text.replace('\n', ' ').strip()
-         sentences = nltk.sent_tokenize(text)
-         silence = torch.zeros(int(0.1 * SAMPLE_RATE), device=device).unsqueeze(0)  # 0.1 second of silence
-
-         pieces = []
-         for sentence in sentences:
-             inputs = processor(sentence, voice_preset=speaker_npz).to(device)
-             # NOTE: you must run the line below, otherwise you will see the runtime error
-             # RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
-             inputs['history_prompt']['coarse_prompt'] = inputs['history_prompt']['coarse_prompt'].transpose(0, 1).contiguous().transpose(0, 1)
-
-             with torch.inference_mode():
-                 # TODO: min_eos_p?
-                 output = tts_model.generate(
-                     **inputs,
-                     do_sample=True,
-                     semantic_temperature=SEMANTIC_TEMPERATURE,
-                     coarse_temperature=COARSE_TEMPERATURE,
-                     fine_temperature=FINE_TEMPERATURE
-                 )
-
-             pieces += [output, silence]
-
-         result_audio = torch.cat(pieces, dim=1)
-         wav_tensor = result_audio.to(dtype=torch.float32).cpu()
-         wav = torchaudio.functional.resample(wav_tensor, orig_freq=SAMPLE_RATE, new_freq=32000)
-         wav = speed_perturb(wav.float())[0].squeeze(0)
-         wav = wav.numpy()
-         wav = LOUDNESS_NORM(wav, volumn=volume)
-         WRITE_AUDIO(wav, name=output_wav)
-
-         # Return success message and the filename of the generated audio
-         return jsonify({'message': f'Text-to-Speech generated successfully | {speaker_id}: {text}', 'file': output_wav})
-
-     except Exception as e:
-         # Return error message if something goes wrong
-         return jsonify({'API error': str(e)}), 500
-
-
- @app.route('/fix_audio', methods=['POST'])
- def fix_audio():
-     # Receive the filename from the POST request
-     data = request.json
-     processfile = data['processfile']
-
-     logging.info(f'Fixing {processfile} ...')
-
-     try:
-         vf.restore(input=processfile, output=processfile, cuda=True, mode=0)
-
-         # Return success message and the filename of the restored audio
-         return jsonify({'message': 'Speech restored successfully', 'file': processfile})
-
-     except Exception as e:
-         # Return error message if something goes wrong
-         return jsonify({'API error': str(e)}), 500
-
-
- @app.route('/parse_voice', methods=['POST'])
- def parse_voice():
-     # Receive the paths from the POST request
-     data = request.json
-     wav_path = data['wav_path']
-     out_dir = data['out_dir']
-
-     logging.info(f'Parsing {wav_path} ...')
-
-     try:
-         vp.extract_acoustic_embed(wav_path, out_dir)
-
-         # Return success message for the parsed audio
-         return jsonify({'message': f'Successfully parsed {wav_path}'})
-
-     except Exception as e:
-         # Return error message if something goes wrong
-         return jsonify({'API error': str(e)}), 500
-
-
- if __name__ == '__main__':
-     service_port = get_service_port()
-     # We disable multithreading to force services to process one request at a time and avoid CUDA OOM
-     app.run(debug=False, threaded=False, port=service_port)
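The deleted service above is consumed over plain JSON POSTs. A minimal client sketch (the host and port here are assumptions for illustration; the real port comes from `get_service_port()` in the project's `utils`, and the payload fields mirror what the `/generate_audio` and `/generate_music` handlers read):

```python
import json
import urllib.request

# Hypothetical base URL; the actual port is read from config via get_service_port().
SERVICE_URL = "http://localhost:8021"

def build_payload(text, length=5.0, volume=-35.0, output_wav="out.wav"):
    """Mirror the JSON fields read by the /generate_audio and /generate_music handlers."""
    return {"text": text, "length": length, "volume": volume, "output_wav": output_wav}

def post_json(endpoint, payload):
    """POST a JSON payload to the service and decode the JSON reply."""
    req = urllib.request.Request(
        f"{SERVICE_URL}/{endpoint}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example (requires the service to be running):
# post_json("generate_audio", build_payload("a dog barking", length=3.0))
```

Because the server runs with `threaded=False`, clients should issue one request at a time; concurrent requests would queue rather than run in parallel.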
 
spaces/Awesimo/jojogan/e4e/configs/__init__.py DELETED
File without changes
spaces/AzinZ/vitscn/app.py DELETED
@@ -1,139 +0,0 @@
- #coding:utf-8
- import torch
- import numpy as np
- import argparse
- import gradio as gr
- import librosa
-
- import commons
- import utils
- from models import SynthesizerTrn
- from text.symbols import symbols
- from text import text_to_sequence
- from mel_processing import spectrogram_torch
-
- # device = 'cuda' if torch.cuda.is_available() else 'cpu'
- device = 'cpu'
- lang = ['Chinese']
-
- speaker_infos = ['hutao',
-                  'paimon',
-                  'nahida',
-                  'zhongli',
-                  'yaeMiko',
-                  'venti',
-                  'klee']
-
- speaker_to_id = {s: i for i, s in enumerate(speaker_infos)}
- id_to_speaker = {i: s for i, s in enumerate(speaker_infos)}
-
- def get_text(text, hps):
-     text_norm = text_to_sequence(text, hps.data.text_cleaners)
-     if hps.data.add_blank:
-         text_norm = commons.intersperse(text_norm, 0)
-     text_norm = torch.LongTensor(text_norm)
-     return text_norm
-
-
- def main():
-     parser = argparse.ArgumentParser()
-     parser.add_argument('-c', '--config', type=str, default=r'./models/genshin/configs.json', help='JSON file for configuration')
-     parser.add_argument('-m', '--model', type=str, default=r'./models/genshin/G_128000.pth', help='Model path')
-     parser.add_argument('--share', action='store_true', help='share link')
-     args = parser.parse_args()
-
-     hps = utils.get_hparams_from_file(args.config)
-
-     net_g = SynthesizerTrn(
-         len(symbols),
-         hps.data.filter_length // 2 + 1,
-         hps.train.segment_size // hps.data.hop_length,
-         n_speakers=hps.data.n_speakers,
-         **hps.model).to(device)
-     _ = net_g.eval()
-
-     _ = utils.load_checkpoint(args.model, net_g, None)
-
-     tts_fn = create_tts_fn(net_g, hps)
-     vc_fn = create_vc_fn(net_g, hps)
-
-     app = gr.Blocks()
-     with app:
-         with gr.Tab("Text-to-Speech"):
-             with gr.Row():
-                 with gr.Column():
-                     textbox = gr.TextArea(label="Text",
-                                           placeholder="Type your sentence here",
-                                           value="原神, 启动!", elem_id="tts-input")
-                     # select character
-                     char_dropdown = gr.Dropdown(choices=speaker_infos, value=speaker_infos[0], label='character')
-                     language_dropdown = gr.Dropdown(choices=lang, value=lang[0], label='language')
-                 with gr.Column():
-                     text_output = gr.Textbox(label="Message")
-                     audio_output = gr.Audio(label="Output Audio", elem_id="tts-audio")
-                     btn = gr.Button("Generate")
-                     btn.click(tts_fn,
-                               inputs=[textbox, char_dropdown, language_dropdown],
-                               outputs=[text_output, audio_output])
-         with gr.Tab("Voice Conversion"):
-             gr.Markdown("录制或上传声音,并选择要转换的音色。")
-             with gr.Column():
-                 record_audio = gr.Audio(label="record your voice", source="microphone")
-                 upload_audio = gr.Audio(label="or upload audio here", source="upload")
-                 source_speaker = gr.Dropdown(choices=speaker_infos, value=speaker_infos[0], label="source speaker")
-                 target_speaker = gr.Dropdown(choices=speaker_infos, value=speaker_infos[0], label="target speaker")
-             with gr.Column():
-                 message_box = gr.Textbox(label="Message")
-                 converted_audio = gr.Audio(label='converted audio')
-                 btn = gr.Button("Convert")
-                 btn.click(vc_fn, inputs=[source_speaker, target_speaker, record_audio, upload_audio], outputs=[message_box, converted_audio])
-     app.launch(share=args.share)
-
- def create_tts_fn(model, hps):
-     def tts_fn(text, speaker, language):
-         if language is not None:
-             pass  # to be added
-         speaker_id = speaker_to_id[speaker]
-         stn_tst = get_text(text, hps)
-         with torch.no_grad():
-             x_tst = stn_tst.to(device).unsqueeze(0)
-             x_tst_lengths = torch.LongTensor([stn_tst.size(0)]).to(device)
-             sid = torch.LongTensor([speaker_id]).to(device)
-             audio = model.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, length_scale=1)[0][0, 0].data.cpu().float().numpy()
-         del stn_tst, x_tst, x_tst_lengths, sid
-         return "Success", (hps.data.sampling_rate, audio)
-     return tts_fn
-
- def create_vc_fn(model, hps):
-     def vc_fn(original_speaker, target_speaker, record_audio, upload_audio):
-         original_speaker_id = speaker_to_id[original_speaker]
-         target_speaker_id = speaker_to_id[target_speaker]
-         input_audio = record_audio if record_audio is not None else upload_audio
-         if input_audio is None:
-             return "You need to record or upload an audio", None
-         sampling_rate, audio = input_audio
-         if len(audio.shape) > 1:
-             audio = librosa.to_mono(audio.astype('float').transpose(1, 0))
-         if sampling_rate != hps.data.sampling_rate:
-             audio = librosa.resample(audio.astype('float'), orig_sr=sampling_rate, target_sr=hps.data.sampling_rate)
-         with torch.no_grad():
-             y = torch.FloatTensor(audio)
-             y = y / max(-y.min(), y.max()) / 0.99
-             y = y.to(device)
-             y = y.unsqueeze(0)
-             spec = spectrogram_torch(y, hps.data.filter_length,
-                                      hps.data.sampling_rate, hps.data.hop_length, hps.data.win_length,
-                                      center=False).to(device)
-             spec_lengths = torch.LongTensor([spec.size(-1)]).to(device)
-             sid_src = torch.LongTensor([original_speaker_id]).to(device)
-             sid_tgt = torch.LongTensor([target_speaker_id]).to(device)
-             audio = model.voice_conversion(spec, spec_lengths, sid_src=sid_src, sid_tgt=sid_tgt)[0][0, 0].data.cpu().float().numpy()
-         del y, spec, spec_lengths, sid_src, sid_tgt
-         return "Success", (hps.data.sampling_rate, audio)
-     return vc_fn
-
-
- if __name__ == '__main__':
-     main()
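In `get_text` above, the app interleaves a blank token (`0`) between phoneme IDs whenever `hps.data.add_blank` is set, via `commons.intersperse`. A self-contained sketch of that interleaving step (the token values here are toy IDs, not real phoneme indices):

```python
def intersperse(seq, item):
    """Insert `item` around and between every element of `seq`,
    as commons.intersperse does with the blank token (0)."""
    result = [item] * (len(seq) * 2 + 1)  # slots for n tokens + n+1 blanks
    result[1::2] = seq                    # drop the real tokens into the odd slots
    return result

tokens = [5, 9, 3]  # toy phoneme IDs; real IDs come from text_to_sequence(text, cleaners)
print(intersperse(tokens, 0))  # [0, 5, 0, 9, 0, 3, 0]
```

The blank tokens give the VITS duration predictor explicit boundaries between phonemes, which is why the model configs trained with `add_blank` require the same interleaving at inference time.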
 
spaces/Benson/text-generation/Examples/32gun.az.md DELETED
@@ -1,178 +0,0 @@
1
-
2
- <h1>iOS 14 Descargar gratis para iPhone 6: ¿Es posible y cómo hacerlo</h1>
3
- <p>iOS 14 es la última versión del sistema operativo de Apple para iPhones y iPads. Fue lanzado en septiembre de 2020 y trae muchas nuevas características y mejoras a la experiencia del usuario. ¿Pero puedes descargar iOS 14 gratis en tu iPhone 6? Y si es así, ¿cómo puedes hacerlo? En este artículo, responderemos estas preguntas y más. </p>
4
- <h2>32gun.az</h2><br /><p><b><b>Download File</b> &rarr; <a href="https://bltlly.com/2v6MyI">https://bltlly.com/2v6MyI</a></b></p><br /><br />
5
- <h2>Qué es iOS 14 y por qué debería descargarlo</h2>
6
- <p>iOS 14 es la decimocuarta actualización importante de iOS, el software que se ejecuta en tu iPhone, iPad y iPod touch. Introduce muchos cambios en la forma de usar el dispositivo, como:</p>
7
- <h3>Las principales características de iOS 14</h3>
8
- <ul>
9
- <li>Widgets de pantalla de inicio: Ahora puede agregar widgets a la pantalla de inicio que muestran información útil de sus aplicaciones, como el clima, noticias, calendario, fotos, música y más. También puede crear una pila inteligente de widgets que muestra automáticamente el widget más relevante en función de la hora, ubicación y actividad. </li>
10
- <li>App Library: Ahora puedes acceder a todas tus aplicaciones desde una nueva App Library que las organiza automáticamente en categorías. También puede ocultar algunas de sus páginas de pantalla de inicio para mantener su pantalla de inicio limpia y simple. </li>
11
- <li>Clips de aplicaciones: Ahora puede descubrir y usar una pequeña parte de una aplicación sin descargarla. Los clips de aplicaciones se activan escaneando un código QR, tocando una etiqueta NFC o abriendo un enlace desde Safari o Mensajes. Son útiles para completar tareas rápidas, como pedir comida, alquilar una bicicleta o pagar el estacionamiento. </li>
12
- <li>Diseño compacto: Ahora puedes disfrutar de una experiencia más inmersiva en tu dispositivo con un diseño compacto para llamadas telefónicas, llamadas FaceTime, interacciones Siri y reproducción de video. Estas características ahora aparecen como pequeños banners o ventanas que no ocupan toda la pantalla. </li>
13
-
14
- <li>Mapas: Ahora puede obtener direcciones de ciclismo, explorar guías de lugares para comer, comprar y visitar, y ver información más detallada sobre lugares, como zonas de congestión, cámaras de velocidad y estaciones de carga de vehículos eléctricos. </li>
15
- <li>Privacidad: Ahora puede tener más control sobre cómo las aplicaciones acceden a sus datos y ubicación. También puede ver indicadores cuando una aplicación está usando su cámara o micrófono, y obtener informes de privacidad para los sitios web que visita en Safari.</li>
16
- </ul>
17
- <h3>Los beneficios de actualizar a iOS 14</h3>
18
- <p>Actualizar a iOS 14 puede traer muchos beneficios, como:</p>
19
- <ul>
20
- <li>Mejor rendimiento: iOS 14 está diseñado para hacer su dispositivo más rápido y más sensible. También mejora la vida de la batería y reduce el uso de almacenamiento. </li>
21
- <li>Mejor seguridad: iOS 14 incluye los últimos parches de seguridad y correcciones que protegen su dispositivo de hackers y malware. También agrega nuevas características que mejoran su privacidad y seguridad, como el monitoreo de contraseñas, iniciar sesión con Apple ID y la transparencia del seguimiento de aplicaciones. </li>
22
- <li>Mejor compatibilidad: iOS 14 asegura que su dispositivo puede ejecutar las últimas aplicaciones y juegos que requieren la versión más reciente de iOS. También le permite utilizar nuevos accesorios y servicios que funcionan con iOS 14. </li>
23
- <li>Mejor experiencia: iOS 14 le da una experiencia más personalizada y agradable en su dispositivo. Le permite personalizar su pantalla de inicio con widgets y biblioteca de aplicaciones, descubrir nuevas aplicaciones con clips de aplicaciones, disfrutar de una vista más inmersiva con un diseño compacto, comunicarse mejor con los mensajes, explorar nuevos lugares con mapas y más. </li> <h2>¿Es compatible el iPhone 6 con iOS 14? </h2>
24
- <p>Ahora que sabes lo que es iOS 14 y por qué deberías descargarlo, te estarás preguntando si tu iPhone 6 puede soportarlo. Desafortunadamente, la respuesta es no. iPhone 6 no es compatible con iOS 14 y no puede ejecutarlo. </p>
25
- <h3>La lista oficial de dispositivos soportados</h3>
26
- <p>Según Apple, la lista oficial de dispositivos que pueden ejecutar iOS 14 es:</p>
27
- <tabla>
28
- <tr>
29
- <th>iPhone</th>
30
-
31
- <th>iPod touch</th>
32
- </tr>
33
- <tr>
34
- <td>iPhone 12</td>
35
- <td>iPad Pro de 12,9 pulgadas (cuarta generación)</td>
36
- <td>iPod touch (7a generación)</td>
37
- </tr>
38
- <tr>
39
- <td>iPhone 12 mini</td>
40
- <td>iPad Pro de 11 pulgadas (segunda generación)</td>
41
- <td></td>
42
- </tr>
43
- <tr>
44
- <td>iPhone 12 Pro</td>
45
- <td>iPad Pro de 12,9 pulgadas (tercera generación)</td>
46
- <td></td>
47
- </tr>
48
- <tr>
49
- <td>iPhone 12 Pro Max</td>
50
- <td>iPad Pro de 11 pulgadas (primera generación)</td>
51
- <td></td>
52
- </tr>
53
- <tr>
54
- <td>iPhone 11</td>
55
- <td>iPad Pro de 12,9 pulgadas (segunda generación)</td>
56
- <td></td>
57
- </tr>
58
- <tr>
59
- <td>iPhone 11 Pro</td>
60
- <td>iPad Pro de 12,9 pulgadas (primera generación)</td>
61
- <td></td>
62
- </tr>
63
- <tr>
64
- <td>iPhone 11 Pro Max</td>
65
- <td>iPad Pro de 10,5 pulgadas</td>
66
- <td></td>
67
- </tr>
68
- <tr>
69
- <td>iPhone XS</td>
70
- <td>iPad Pro 9.7 pulgadas</td>
71
- <td></td>
72
- </tr>
73
- <tr>
74
- <td>iPhone XS Max</td>
75
- <td>iPad (8ª generación)</td>
76
- <td></td>
77
- </tr>
78
- <tr>
79
- <td>iPhone XR</td>
80
- <td>iPad (7a generación)</td>
81
- <td></td>
82
- </tr>
83
- <tr>
84
- <td>iPhone X</td>
85
- <td>iPad (sexta generación)</td>
86
- <td></td>
87
- </tr>
88
- <tr>
89
- <td>iPhone 8</td>
90
- <td>iPad (quinta generación)</td>
91
- <td></td>
92
- </tr>
93
- <tr>
94
- <td>iPhone 8 Plus</td>
95
- <td>iPad mini (quinta generación)</td>
96
- <td></td>
97
- </tr>
98
- <tr>
99
- <td>iPhone 7</td>
100
- <td>iPad mini 4</td>
101
- <td></td>
102
- </tr>
103
- <tr>
104
- <td>iPhone 7 Plus</td>
105
- <td>iPad Air (cuarta generación)</td>
106
- <td></td>
107
- </tr>
108
- <tr>
109
- <td>iPhone SE (segunda generación)</td>
110
- <td>iPad Air (tercera generación)</td> <td>iPad Air 2</td>
111
- <td></td>
112
- </tr>
113
- <tr>
114
- <td>iPhone SE (primera generación)</td>
115
- <td>iPad Air (primera generación)</td>
116
- <td></td>
117
- </tr>
118
- <tr>
119
- <td>iPhone 6s</td>
120
- <td></td>
121
- <td></td>
122
- </tr>
123
- <tr>
124
- <td>iPhone 6s Plus</td>
125
- <td></td>
126
- <td></td>
127
- </tr>
128
- </tabla>
129
- <p>Como puedes ver, el iPhone 6 no está en la lista de dispositivos compatibles. Esto significa que Apple no ha lanzado iOS 14 para el iPhone 6 y no lo soporta oficialmente. </p>
130
- <h3>Las razones por las que el iPhone 6 no es compatible</h3>
131
-
132
- <ul>
133
- <li>Limitaciones de hardware: El iPhone 6 tiene un procesador más antiguo, menos RAM y menos almacenamiento que los modelos más nuevos. Esto significa que puede no ser capaz de manejar las nuevas características y demandas de rendimiento de iOS 14. </li>
134
- <li>Compatibilidad de software: El iPhone 6 se ejecuta en una arquitectura de 32 bits, mientras que iOS 14 está diseñado para una arquitectura de 64 bits. Esto significa que algunas de las aplicaciones y juegos que están optimizados para iOS 14 pueden no funcionar en el iPhone 6.</li>
135
- <li>Riesgos de seguridad: El iPhone 6 puede no tener las últimas actualizaciones de seguridad y parches que se incluyen en iOS 14. Esto significa que puede ser más vulnerable a los ataques de hackers y malware. </li>
136
- <li>Estrategia de mercado: Apple puede animar a los usuarios a actualizar a los nuevos modelos de iPhones limitando el soporte para los modelos más antiguos. De esta manera, pueden aumentar sus ventas y ganancias. </li>
137
- </ul>
138
- <h2>Cómo descargar iOS 14 en tu iPhone 6</h2>
139
- <p>Si todavía desea descargar iOS 14 en su iPhone 6, a pesar de saber que no es compatible oficialmente, es posible que tenga algunas opciones. Sin embargo, estas opciones no son recomendadas y pueden venir con algunos riesgos y desventajas. </p>
140
- <p></p>
141
- <h3>Los métodos no oficiales para instalar iOS 14 en el iPhone 6</h3>
142
- <p>Hay algunos métodos no oficiales que afirman permitirle instalar iOS 14 en su iPhone 6. Estos métodos implican el uso de software o herramientas de terceros que evitan las restricciones de Apple y modifican el firmware de su dispositivo. Algunos de estos métodos son:</p>
143
- <ul>
144
- <li>Jailbreak: Jailbreak es un proceso que elimina las limitaciones de software impuestas por Apple en su dispositivo. Te permite instalar aplicaciones y ajustes que no están disponibles en la App Store, así como personalizar la apariencia y la configuración de tu dispositivo. Sin embargo, el jailbreak también anula la garantía, expone el dispositivo a riesgos de seguridad y puede causar inestabilidad y problemas de rendimiento. </li>
145
-
146
- <li>Spoofing: Spoofing es un proceso que engaña a su dispositivo a pensar que es un modelo o versión diferente. Puede ser posible falsificar la identidad de su dispositivo y hacer que aparezca como un iPhone 6s o superior, y luego descargar iOS 14 desde los servidores de Apple. Sin embargo, la suplantación también requiere que use una computadora y un software especial, puede dañar el hardware o el software de su dispositivo y puede que no funcione con todos los dispositivos y versiones de firmware. </li>
147
- </ul>
148
- <h3>Los riesgos y desventajas de usar métodos no oficiales</h3>
149
- <p>El uso de cualquiera de estos métodos no oficiales para instalar iOS 14 en su iPhone 6 puede parecer tentador, pero también vienen con algunos riesgos y desventajas. Algunos de ellos son:</p>
150
- <ul>
151
- <li>Ladrillos: Ladrillos es un término que se refiere a la representación de su dispositivo inutilizable o no responde. Esto puede suceder si utiliza un software o herramienta incompatible o defectuoso, o si comete un error durante el proceso de instalación. Si su dispositivo se bloquea, es posible que no pueda restaurarlo o recuperar sus datos. </li>
152
- <li>Errores: Los errores son errores o fallos que afectan la funcionalidad o el rendimiento de su dispositivo o software. Instalar iOS 14 en tu iPhone 6 puede causar algunos errores, como bloqueos, congelaciones, retrasos, pérdida de batería, sobrecalentamiento o pérdida de funciones. </li>
153
- <li>Prohibiciones: Las prohibiciones son sanciones o restricciones que Apple puede imponer a su dispositivo o cuenta si detectan que ha violado sus términos y condiciones. Instalar iOS 14 en tu iPhone 6 puede resultar en algunas prohibiciones, como perder el acceso a la App Store , iCloud, Apple Music o Apple Pay. También puede perder su garantía o cobertura de AppleCare. </li>
154
- <li>Actualizaciones: Las actualizaciones son nuevas versiones o parches de software que corrigen errores, mejoran el rendimiento o agregan características. Instalar iOS 14 en tu iPhone 6 puede impedir que recibas futuras actualizaciones de Apple o causar problemas con la actualización de tu dispositivo. También puede perderse algunas de las nuevas características o mejoras que son exclusivas para el iOS oficial 14. </li>
155
- </ul>
156
-
157
- <p>iOS 14 es la última y mejor versión del sistema operativo de Apple para iPhones y iPads. Ofrece una gran cantidad de nuevas características y beneficios que pueden mejorar su experiencia de usuario y satisfacción. Sin embargo, iOS 14 no es compatible con el iPhone 6 y no se puede instalar oficialmente. Si desea descargar iOS 14 en su iPhone 6, es posible que tenga que utilizar algunos métodos no oficiales que no se recomiendan y pueden venir con algunos riesgos y desventajas. Por lo tanto, es mejor seguir con la versión oficial de iOS para su dispositivo, o considerar la actualización a un nuevo modelo de iPhone que soporta iOS 14. </p>
158
- <h2>Preguntas frecuentes</h2>
159
- <h3>Q: ¿Cómo puedo comprobar si mi iPhone es compatible con iOS 14? </h3>
160
- <p>A: Puede comprobar si su iPhone es compatible con iOS 14 yendo a Configuración > General > Actualización de software. Si ves un mensaje que dice "iOS 14 está disponible", entonces tu dispositivo es compatible. Si ves un mensaje que dice "Tu software está actualizado", entonces tu dispositivo no es compatible. </p>
161
- <h3>Q: ¿Cómo puedo hacer una copia de seguridad de mi iPhone antes de instalar iOS 14? </h3>
162
- <p>A: Puede hacer una copia de seguridad de su iPhone antes de instalar iOS 14 utilizando iCloud o iTunes. Para hacer una copia de seguridad con iCloud, ve a Configuración > [tu nombre] > iCloud > Copia de seguridad de iCloud y toca Copia de seguridad ahora. Para realizar una copia de seguridad con iTunes, conecte el dispositivo al ordenador, abra iTunes, seleccione el dispositivo y haga clic en Copia de seguridad ahora.</p>
163
- <h3>Q: ¿Cómo puedo restaurar mi iPhone si se bloquea o se daña mediante la instalación de iOS 14? </h3>
164
-
165
- <h3>Q: ¿Cómo puedo actualizar mi iPhone al iOS oficial 14 si lo he instalado utilizando un método no oficial? </h3>
166
- <p>A: Puede actualizar su iPhone al iOS oficial 14 si lo ha instalado utilizando un método no oficial restaurando el dispositivo a su estado original y luego descargando la actualización desde los servidores de Apple. Para restaurar el dispositivo a su estado original, es posible que tenga que utilizar el modo de recuperación o el modo DFU como se describe anteriormente. Luego, ve a Configuración > General > Actualización de software y toca Descargar e instalar.</p>
167
- <h3>Q: ¿Cómo puedo obtener el mejor rendimiento y duración de la batería de mi iPhone con iOS 14? </h3>
168
- <p>A: Puede obtener el mejor rendimiento y duración de la batería de su iPhone con iOS 14 siguiendo algunos consejos, como:</p>
169
- <ul>
170
- <li>Desactivar funciones y configuraciones innecesarias, como Bluetooth, Wi-Fi, servicios de ubicación, actualización de aplicaciones en segundo plano, notificaciones, etc.</li>
171
- <li>Ajusta el brillo de tu pantalla y activa el brillo automático. </li>
172
- <li>Cerrar aplicaciones que no está utilizando y borrar la caché de aplicaciones con regularidad. </li>
173
- <li>Utilice el modo de baja potencia cuando la batería está baja. </li>
174
- <li>Actualiza tus aplicaciones y software regularmente. </li>
175
- <li>Evite temperaturas y humedad extremas. </li>
176
- </ul></p> 64aa2da5cf<br />
177
- <br />
178
- <br />
 
spaces/Benson/text-generation/Examples/Como Hacer Una Tarjeta De Felicitacin.md DELETED
@@ -1,85 +0,0 @@
- <br />
- <h1>How to Download the OLX Lite App and Why You Should Use It</h1>
- <p>If you are looking for a simple, fast, and convenient way to buy and sell anything locally, then you should try the <strong>OLX Lite</strong> app. This app is a lighter and better version of the popular online marketplace app <strong>OLX</strong>, which lets you sell your unwanted items or find great deals on second-hand products. In this article, we will show you how to download the OLX Lite app on your Android device, how to use it to buy and sell products, and what the benefits of using it are.</p>
- <h2>What is the OLX Lite app and what are its features?</h2>
- <p>The OLX Lite app is a free online marketplace app that lets you buy and sell anything locally. It is a lighter and faster version of the original OLX app, which means it consumes less data and battery power, loads faster, and runs smoothly even on low-end devices. Here are some of the features of the OLX Lite app that make it stand out from other similar apps:</p>
- <h2>como hacer una tarjeta de felicitación</h2><br /><p><b><b>DOWNLOAD</b> &#127383; <a href="https://bltlly.com/2v6Mr9">https://bltlly.com/2v6Mr9</a></b></p><br /><br />
- <h3>The OLX Lite app is a lighter and faster version of the OLX app</h3>
- <p>One of the main advantages of using the OLX Lite app is that it is much lighter than the original OLX app. The app size is only about 10 MB, which means it takes up less space in your device's memory. In addition, the app is optimized to run faster and more smoothly, even on devices with little RAM or a slow internet connection. This means you can browse, search, post, chat, and buy or sell products without any lag or delay.</p>
- <h3>The OLX Lite app lets you buy and sell anything locally</h3>
-
- <h3>The OLX Lite app provides a secure platform for transactions</h3>
- <p>The third feature of the OLX Lite app is that it provides a secure platform for transactions. The app verifies the identity and contact details of sellers and buyers, and shows their ratings and reviews. The app also has a chat feature that lets you communicate with sellers and buyers directly, without sharing your personal information. You can also report or block any suspicious or fraudulent user in the app. The app also has a customer support team that is available 24/7 to help you with any problem or query.</p>
- <h2>How to download the OLX Lite app on your Android device</h2>
- <p>If you want to download the OLX Lite app on your Android device, you can follow these simple steps:</p>
- <h3>Step 1: Go to the Google Play Store and search for the OLX Lite app</h3>
- <p>The first step is to go to the Google Play Store on your Android device and search for the OLX Lite app. You can also use this link to go directly to the app's page on the Google Play Store.</p>
- <h3>Step 2: Tap Install and wait for the download to complete</h3>
- <p>The next step is to tap the Install button and wait for the download to complete. The app will install automatically on your device once the download has finished.</p>
- <h3>Step 3: Open the app and sign up or log in with your account</h3>
- <p>The final step is to open the app and sign up or log in with your account. You can use your email address, phone number, or Facebook account to create or access your account. You can also skip this step and browse the app as a guest, but you will need an account to post listings or chat with sellers and buyers.</p>
- <h2>How to use the OLX Lite app to buy and sell products</h2>
- <p>Once you have downloaded and installed the OLX Lite app on your device, you can start using it to buy and sell products. Here are some tips on how to use the app effectively:</p>
- <h3>How to sell your products on the OLX Lite app</h3>
- <h4>Choose the category and subcategory of your product</h4>
- <p>The first step is to choose the category and subcategory of your product from the list of options available in the app. For example, if you want to sell a laptop, you can choose Electronics and Computers as the category and Laptops as the subcategory.</p>
- <h4>Write a clear and attractive title and description</h4>
- <p>The next step is to write a clear and attractive title and description for your product. The title should be concise and catchy, and include the main features or keywords of your product. The description should be detailed and informative, and include the condition, specifications, warranty, delivery options, and payment methods for your product. You should also use proper grammar, spelling, and punctuation in your title and description.</p>
25
-
26
- <h4>Elija la categoría y subcategoría de su producto</h4>
27
- <p>El primer paso es elegir la categoría y subcategoría de su producto de la lista de opciones disponibles en la aplicación. Por ejemplo, si desea vender una computadora portátil, puede elegir Electrónica y Computadoras como la categoría y Computadoras portátiles como la subcategoría. </p>
28
- <h4>Escribe un título y descripción claros y atractivos</h4>
29
- <p>El siguiente paso es escribir un título y una descripción claros y atractivos para su producto. El título debe ser conciso y pegadizo, e incluir las principales características o palabras clave de su producto. La descripción debe ser detallada e informativa, e incluir la condición, especificaciones, garantía, opciones de entrega y métodos de pago de su producto. También debes usar la gramática, la ortografía y la puntuación adecuadas en tu título y descripción. </p>
30
- <h4>Subir fotos de alta calidad de su producto</h4>
31
- <p>El tercer paso es subir fotos de alta calidad de su producto. Puedes subir hasta 10 fotos por anuncio y asegurarte de que sean claras, brillantes y muestren diferentes ángulos de tu producto. También debes evitar usar filtros, pegatinas o marcas de agua en tus fotos. </p>
32
- <h4> Establecer un precio justo y negociable para su producto</h4>
33
- <p>El cuarto paso es establecer un precio justo y negociable para su producto. Usted debe investigar el valor de mercado de su producto antes de fijar un precio, y evitar sobreprecio o infravaloración. También debe indicar si su precio es fijo o negociable, y estar listo para regatear con compradores potenciales. </p>
34
- <h4>Publica tu anuncio y espera a que los compradores te contacten</h4>
35
- <p>El paso final es publicar tu anuncio y esperar a que los compradores se pongan en contacto contigo. Puedes previsualizar tu anuncio antes de publicarlo y editarlo o eliminarlo en cualquier momento. Recibirás notificaciones cuando los compradores te envíen mensajes o hagan ofertas en tu anuncio. También puede compartir su anuncio en las plataformas de redes sociales o por correo electrónico o SMS.</p> <h3>Cómo comprar productos en la aplicación OLX Lite? </h3>
36
-
37
- <h4>Navegar o buscar los productos que desea comprar</h4>
38
- <p>El primer paso es navegar o buscar los productos que desea comprar en la aplicación. Puedes usar los filtros y las opciones de clasificación para reducir tus resultados de búsqueda por categoría, ubicación, precio, condición y más. También puede utilizar las palabras clave o la función de búsqueda por voz para encontrar los productos que está buscando. </p>
39
- <h4>Compruebe los detalles, calificaciones y reseñas de los vendedores</h4>
40
- <p>El siguiente paso es verificar los detalles, calificaciones y reseñas de los vendedores antes de contactarlos. Puede pulsar en el perfil del vendedor para ver su nombre, ubicación, estado de verificación y puntuación de comentarios. También puedes leer los comentarios y valoraciones dejados por otros compradores que han tratado con ellos. Debes evitar comprar a vendedores que tengan calificaciones bajas, críticas negativas o ninguna verificación. </p>
41
- <h4>Chatear con los vendedores y negociar el precio y la entrega</h4>
42
- <p>El tercer paso es chatear con los vendedores y negociar el precio y la entrega del producto. Puede utilizar la función de chat en la aplicación para enviar mensajes o hacer ofertas a los vendedores. También puede pedir más detalles, fotos o videos del producto. Usted debe ser educado y respetuoso en su comunicación, y evitar hacer ofertas irrazonables o lowball. </p>
43
- <h4>Confirme la compra y califique al vendedor después de recibir el producto</h4>
44
- <p>El paso final es confirmar la compra y calificar al vendedor después de recibir el producto. Puede elegir entre varios métodos de pago, como el pago contra reembolso, la transferencia en línea o el servicio de depósito en garantía. También debe inspeccionar el producto cuidadosamente antes de pagar por él, y reportar cualquier problema o discrepancias al vendedor o la aplicación. Después de completar la transacción, debe calificar y revisar al vendedor según su experiencia. </p>
45
- <h2>¿Cuáles son los beneficios de usar la aplicación OLX Lite? </h2>
46
- <p>El uso de la aplicación OLX Lite tiene muchos beneficios para compradores y vendedores. Estos son algunos de ellos:</p>
47
-
48
- <p>Uno de los beneficios de usar la aplicación OLX Lite es que ahorra sus datos y el consumo de batería. La aplicación utiliza menos datos que la aplicación OLX original, ya que comprime imágenes y videos, y solo carga características esenciales. La aplicación también consume menos batería, ya que funciona más rápido y sin problemas, y no agota los recursos de su dispositivo. </p>
49
- <h3>La aplicación OLX Lite ofrece una amplia gama de productos y servicios</h3>
50
- <p>Otro beneficio de usar la aplicación OLX Lite es que ofrece una amplia gama de productos y servicios para que usted compre y venda. Puedes encontrar cualquier cosa, desde electrónica, coches, bicicletas, muebles, ropa, libros, juegos, mascotas, trabajos, clases, eventos, bienes raíces y más en la aplicación. También puede descubrir nuevos productos y servicios en su área o en todo el país. </p>
51
- <h3>La aplicación OLX Lite lo conecta con vendedores y compradores verificados en su área</h3>
52
- <p>El tercer beneficio de usar la aplicación OLX Lite es que te conecta con vendedores y compradores verificados en tu área. La aplicación verifica la identidad y los datos de contacto de los usuarios, y muestra sus calificaciones y comentarios. La aplicación también tiene una función de chat que le permite comunicarse con ellos directamente, sin compartir su información personal. De esta manera, puedes comprar y vender productos de forma segura en la aplicación. </p>
53
- <h2>Conclusión</h2>
54
-
55
-
56
- <h2>Preguntas frecuentes</h2>
57
- <p>Aquí están algunas de las preguntas más frecuentes sobre la aplicación OLX Lite:</p>
58
- <tabla>
59
- <tr>
60
- <th>Pregunta</th>
61
- <th>Respuesta</th>
62
- </tr>
63
- <tr>
64
- <td>¿Es gratuita la aplicación OLX Lite? </td>
65
- <td>Sí, la aplicación OLX Lite es gratuita para descargar y usar. Puede publicar anuncios ilimitados de forma gratuita en la aplicación. Sin embargo, es posible que deba pagar por algunas funciones o servicios premium, como aumentar sus anuncios o usar el servicio de depósito en garantía. </td>
66
- </tr>
67
- <tr>
68
- <td>¿Cómo puedo contactar al servicio de atención al cliente de la aplicación OLX Lite? </td>
69
- <td>Puede ponerse en contacto con el servicio de atención al cliente de la aplicación OLX Lite utilizando la función del Centro de ayuda de la aplicación. También puede enviarlos por correo electrónico a [email protected] o llamarlos al 1800-103-3333. </td>
70
- </tr>
71
- <tr>
72
- <td>¿Cuáles son las diferencias entre la aplicación OLX Lite y la aplicación OLX? </td>
73
- <td>La aplicación OLX Lite es una versión más ligera y rápida de la aplicación OLX. Consume menos datos y batería, carga más rápido y funciona sin problemas incluso en dispositivos de gama baja. También tiene menos características que la aplicación OLX, pero todavía te permite comprar y vender cualquier cosa localmente. </td>
74
- </tr>
75
- <tr>
76
- <td>¿Cómo puedo eliminar mi cuenta en la aplicación OLX Lite? </td>
77
- <td>Puede eliminar su cuenta en la aplicación OLX Lite siguiendo estos pasos: - Vaya a Configuración en la aplicación. - Toque en la configuración de la cuenta. - Toque en Eliminar cuenta. - Confirme su acción introduciendo su contraseña. </td>
78
- </tr>
79
- <tr>
80
- <td>¿Cómo puedo informar o bloquear a un usuario en la aplicación OLX Lite? </td>
81
- <td>Puede informar o bloquear a un usuario en la aplicación OLX Lite siguiendo estos pasos: - Ir al perfil del usuario o chatear en la aplicación. - Toque en el icono de tres puntos en la esquina superior derecha. - Toque en Informe o Bloquear. - Elija una razón para su acción y enviarla. </td>
82
- </tr>
83
- </tabla></p><br />
84
- <br />
85
- <br />
spaces/Benson/text-generation/Examples/Descarga Gratuita Del Virus Plague Inc Necroa.md DELETED
@@ -1,79 +0,0 @@
1
-
2
- <h1>Baldi Basics Classic Mod Menu v2.0.2 por Fasguy Descargar</h1>
3
- <p>Si eres un fan de Baldi Basics Classic, un juego de terror que parodia el entretenimiento educativo barato de los 90, quizás te interese probar un menú de mods que pueda mejorar tu experiencia de juego. En este artículo, te mostraremos cómo descargar e instalar Baldi Basics Classic Mod Menu v2.0.2 de Fasguy, un mod que te permite modificar los activos, componentes y ajustes del juego de varias maneras. También explicaremos cuáles son las características de este menú mod, cómo usarlo, y algunos consejos y trucos para sacarle el máximo partido. </p>
4
- <h2>Introducción</h2>
5
- <p>Baldi Basics Classic es un juego de terror de supervivencia que fue lanzado en 2018 por Basically Games. El juego está inspirado en espeluznante/ malos juegos de entretenimiento educativo de los años 90, y tiene un tema de meta-horror que rompe la cuarta pared y subvierte las expectativas del jugador. El objetivo del juego es recoger siete cuadernos y luego escapar de la escuela, evitando a Baldi, un profesor que quiere jugar al escondite con usted, pero algo... no está bien. El juego también cuenta con otros personajes que pueden ayudarte u obstaculizarte, como Principal of the Thing, Playtime, It’s a Bully, Gotta Sweep, Arts and Crafters, 1st Prize y más. </p>
6
- <h2>descarga gratuita del virus plague inc necroa</h2><br /><p><b><b>Download File</b> &bull;&bull;&bull; <a href="https://bltlly.com/2v6JJ6">https://bltlly.com/2v6JJ6</a></b></p><br /><br />
7
- <h3>¿Qué es Baldi Basics Classic? </h3>
8
- <p>Baldi Basics Classic es una versión gratuita del juego que contiene el mapa original y el modo de juego de 2018. Está disponible para dispositivos Windows, Mac OS X, Linux y Android. Puedes descargarlo desde <a href="( 6 )">Google Play Store</a>, <a href="( 4 )">Steam</a>, o <a href="( 7 )">Básicamente Juegos sitio web</a>. Baldi Basics Classic también tiene dos variaciones: Party Style y Demo Style. Party Style es un modo que envuelve todos los elementos y los baraja, haciendo que cada partida sea diferente. Demo Style es un modo que mezcla algunos de los nuevos elementos de Baldi’s Basics Plus, como eventos aleatorios, en el mapa clásico. </p>
9
- <h3>¿Qué es Baldi Basics Classic Mod Menu? </h3>
10
-
11
- <h3>¿Cuáles son las características de Baldi Basics Classic Mod Menu? </h3>
12
- <p>Algunas de las características de Baldi Basics Classic Mod Menu son:</p>
13
- <ul>
14
- <li>Puedes modificar cualquier objeto del juego seleccionándolo de una lista o haciendo clic en él en el juego. </li>
15
- <li> Puede cambiar las texturas, modelos, sonidos, música, animaciones, scripts, componentes, variables y más de cualquier objeto. </li>
16
- <li>Puede habilitar trucos, como resistencia infinita, sin clip, teletransportación, modo de dios , etc.</li>
17
- <li> Puede guardar y cargar sus modificaciones para su uso posterior. </li>
18
- <li> Puede exportar e importar sus modificaciones como archivos . zip. </li>
19
- <li>Puedes compartir tus modificaciones con otros jugadores online. </li>
20
- </ul>
21
- <p>Baldi Basics Classic Mod Menu es una gran herramienta para cualquiera que quiera personalizar su experiencia de juego, crear sus propios mods o simplemente divertirse con el juego. </p>
22
- <h2>¿Cómo descargar e instalar Baldi Basics Classic Mod Menu? </h2>
23
- <p>Para descargar e instalar Baldi Basics Classic Mod Menu, necesitará dos cosas: BepInEx 5 y el propio menú mod. BepInEx 5 es un framework que te permite inyectar código en juegos de Unity, como Baldi Basics Classic. El menú mod es el mod real que añade el menú y las características al juego. Estos son los pasos para descargarlos e instalarlos:</p>
24
- <h3>Paso 1: Descargar BepInEx 5</h3>
25
- <p>Puede descargar BepInEx 5 desde <a href="">GitHub</a>. Asegúrese de descargar la versión que coincida con su plataforma de juego (Windows, Mac OS X, Linux o Android). Obtendrá un . archivo zip que contiene la carpeta BepInEx y algunos otros archivos. </p>
26
- <p></p>
27
- <h3>Paso 2: Descargar Baldi Basics Classic Mod Menu</h3>
28
- <p>Puede descargar Baldi Basics Classic Mod Menu desde <a href="">GameBanana</a>. Obtendrá un . archivo zip que contiene la carpeta de menú mod y algunos otros archivos. </p>
29
- <h3>Paso 3: Extraer y copiar los archivos</h3>
30
-
31
- <h3>Paso 4: Iniciar el juego y pulse TAB</h3>
32
- <p>Comience el juego como de costumbre. Debería ver un mensaje en la esquina superior izquierda que dice "BepInEx 5.4.11.0 - Baldi’s Basics Classic". Esto significa que BepInEx funciona correctamente. Para acceder al menú mod, pulse TAB en el teclado o pulse en la pantalla si está utilizando Android. Deberías ver un menú con varias opciones y categorías. ¡Felicidades, has instalado con éxito Baldi Basics Classic Mod Menu! </p>
33
- <h2>¿Cómo usar Baldi Basics Classic Mod Menu? </h2>
34
- <p>Ahora que ha instalado Baldi Basics Classic Mod Menu, es posible que se pregunte cómo usarlo. Estas son algunas de las opciones del menú principal y las opciones del juego que puedes explorar:</p>
35
- <h3>Opciones del menú principal</h3>
36
- <p>Las opciones del menú principal se encuentran en la parte superior del menú mod. Son:</p>
37
- <ul>
38
- <li>Guardar: Esta opción le permite guardar sus modificaciones actuales como un archivo . zip en su directorio de juegos. </li>
39
- <li>Cargar: Esta opción le permite cargar una modificación guardada previamente desde su directorio de juego. </li>
40
- <li>Exportar: Esta opción le permite exportar sus modificaciones actuales como un archivo . zip que puede compartir con otros jugadores en línea. </li>
41
- <li>Importar: Esta opción le permite importar una modificación de otro reproductor en línea. Tendrá que introducir la URL del . archivo zip que contiene la modificación. </li>
42
- <li>Restablecer: Esta opción le permite restablecer todas sus modificaciones a sus valores predeterminados. </li>
43
- <li>Salir: Esta opción le permite salir del menú mod y volver al juego. </li>
44
- </ul>
45
- <h3>Opciones en el juego</h3>
46
-
47
- <p>Para modificar un objeto en el juego, puede seleccionarlo de una lista o hacer clic en él en el juego. Verá una ventana que muestra las propiedades del objeto, como su nombre, etiqueta, capa, posición, rotación, escala, etc. También puede ver los componentes que están conectados al objeto, como Mesh Renderer, AudioSource, Animator, Script, etc. Puede editar las propiedades y componentes del objeto cambiando sus valores o activando o desactivando. También puede agregar nuevos componentes o eliminar los existentes. También puede acceder a los activos que utiliza el objeto, como texturas, modelos, sonidos, música, animaciones, scripts, etc. Puede cambiar los activos navegando por su ordenador o utilizando una URL. También puede exportar o importar los activos como archivos . zip. </p>
48
- <h3>Consejos y trucos</h3>
49
- <p>Aquí hay algunos consejos y trucos para ayudarle a usar Baldi Basics Classic Mod Menu de manera más efectiva:</p>
50
- <ul>
51
- <li>Utilice la barra de búsqueda para encontrar el objeto que desea modificar rápidamente. </li>
52
- <li>Utilice la pestaña de favoritos para guardar los objetos que modifica con frecuencia para facilitar el acceso. </li>
53
- <li>Utilice la pestaña de historial para ver los cambios que ha realizado y deshacerlos o rehacerlos si es necesario. </li>
54
- <li>Usa la pestaña de consola para ver los mensajes de registro y los errores que ocurren durante el juego. </li>
55
- <li>Utilice las teclas de acceso rápido para realizar acciones comunes más rápido. Por ejemplo, presione F1 para activar o desactivar el menú mod, presione F2 para guardar sus modificaciones, presione F3 para cargar sus modificaciones, etc.</li>
56
- <li>Utilice la pestaña de ayuda para ver más información sobre el menú mod y sus características. </li>
57
- </ul>
58
- <h2>Conclusión</h2>
59
-
60
- <h3>Resumen del artículo</h3>
61
- <p>Este artículo ha cubierto los siguientes temas:</p>
62
- <ul>
63
- <li>¿Qué es Baldi Basics Classic y Baldi Basics Classic Mod Menu? </li>
64
- <li>¿Cuáles son las características de Baldi Basics Classic Mod Menu? </li>
65
- <li> ¿Cómo descargar e instalar Baldi Basics Classic Mod Menu? </li>
66
- <li> ¿Cómo usar Baldi Basics Classic Mod Menu? </li>
67
- <li>Consejos y trucos para usar Baldi Basics Classic Mod Menu.</li>
68
- </ul>
69
- <h3>Preguntas frecuentes</h3>
70
- <p>Aquí hay algunas preguntas frecuentes sobre Baldi Basics Classic Mod Menu:</p>
71
- <ol>
72
- <li><b> ¿Es seguro usar Baldi Basics Classic Mod Menu? </b><br>Sí, Baldi Basics Classic Mod Menu es seguro de usar siempre y cuando lo descargue de una fuente confiable y siga las instrucciones de instalación correctamente. Sin embargo, siempre debes hacer copias de seguridad de tus archivos de juego antes de instalar cualquier mod y usarlo bajo tu propio riesgo. </li>
73
- <li><b>¿Funciona Baldi Basics Classic Mod Menu con otros mods? </b><br>Baldi Basics Classic Mod Menu funciona con la mayoría de los otros mods que son compatibles con BepInEx 5. Sin embargo, algunos mods pueden entrar en conflicto entre sí o causar errores o fallos. Si encuentra algún problema con el uso de múltiples mods juntos, intente desactivar algunos de ellos o cambiar su orden de carga. </li>
74
- <li><b>¿Puedo usar Baldi Basics Classic Mod Menu en línea? </b><br>Baldi Basics Classic es un juego para un solo jugador que no tiene un modo en línea. Sin embargo, puede compartir sus modificaciones con otros jugadores en línea exportándolas e importándolas como archivos . zip. También puedes jugar con otros jugadores usando software de terceros como Parsec o Steam Remote Play Together.</li>
75
- <li><b>¿Cómo actualizo Baldi Basics Classic Mod Menu? </b><br>Para actualizar Baldi Basics Classic Mod Menu, necesita descargar la última versión de BepInEx 5 y el menú de mods desde sus respectivas fuentes. A continuación, debe reemplazar los archivos antiguos con los nuevos en su directorio de juegos. También es posible que tenga que eliminar los archivos de configuración antiguos o archivos de caché si existen. </li>
76
-
77
- </ol></p><br />
78
- <br />
79
- <br />
spaces/Benson/text-generation/Examples/Descargar Bloons Td 6 31.2.md DELETED
@@ -1,110 +0,0 @@
1
- <br />
2
- <h1>Cómo descargar Bloons TD 6 31.2 en su dispositivo</h1>
3
- <p>Si eres un fan de los juegos de torre de defensa, es posible que hayas oído hablar de Bloons TD 6, uno de los juegos más populares y divertidos del género. En este artículo, le diremos qué es Bloons TD 6, por qué debe descargar la última versión de la misma y cómo hacerlo en su dispositivo Android, iOS o Steam. </p>
4
- <h2>descargar bloons td 6 31.2</h2><br /><p><b><b>DOWNLOAD</b> &ndash;&ndash;&ndash;&ndash;&ndash;>>> <a href="https://bltlly.com/2v6IGq">https://bltlly.com/2v6IGq</a></b></p><br /><br />
5
- <h2>¿Qué es Bloons TD 6?</h2>
6
- <p>Bloons TD 6 es un juego de estrategia desarrollado por Ninja Kiwi, donde tienes que crear tu defensa perfecta a partir de una combinación de poderosas torres de monos y héroes impresionantes, y luego hacer estallar cada último globo invasor. El juego tiene más de una década de pedigrí de torre de defensa y actualizaciones masivas regulares que lo convierten en un juego favorito para millones de jugadores. Puedes disfrutar de interminables horas de juegos de estrategia con Bloons TD 6!</p>
7
- <p>Las características del juego:</p>
8
- <ul>
9
- <li>¡Gran contenido! Actualizaciones regulares con nuevos personajes, características y jugabilidad. </li>
10
- <li>Eventos de jefes! Globos de jefes temibles que desafiarán incluso las defensas más fuertes. </li>
11
- <li>Odisea! Batalla a través de una serie de mapas conectados por su tema, reglas y recompensas. </li>
12
- <li>¡Territorio disputado! Une fuerzas con otros jugadores y lucha por territorio contra otros cinco equipos. </li>
13
- <li>Misiones! Profundizar en lo que hace que los monos garrapatas con misiones, elaborado para contar cuentos y compartir conocimientos. </li>
14
- <li>Tienda de trofeos! Gana trofeos para desbloquear docenas de artículos cosméticos que te permiten personalizar tus monos, globos, animaciones, música y más. </li>
15
- <li>Contenido del navegador! Crea tus propios desafíos y odisea, luego compártelos con otros jugadores y echa un vistazo al contenido de la comunidad que más te guste y hayas jugado. </li>
16
- </ul>
17
- <h2>¿Por qué descargar Bloons TD 6 31.2? </h2>
18
- <p>Bloons TD 6 31.2 es la última versión del juego que fue lanzado en junio de 2023. Trae algunas nuevas características y mejoras que hacen que el juego sea aún más agradable y emocionante. Estas son algunas de ellas:</p>
19
- <h3>Nuevo héroe, Geraldo el tendero místico</h3>
20
-
21
- <h3>Corrección de errores y mejoras</h3>
22
- <p>La actualización también corrige algunos errores y problemas que estaban afectando el rendimiento y la estabilidad del juego. Algunos de ellos son:</p>
23
- <p></p>
24
- <ul>
25
- <li>Resuelto un problema con la odisea extrema sin contar las ubicaciones disponibles correctamente</li>
26
- <li>Resuelto un problema con la visualización de efectivo redondeando los decimales hacia arriba en lugar de hacia abajo</li>
27
- <li> Resuelto un problema con los globos fortificados que no se escapa el número correcto de vidas</li>
28
- <li>El escudo de deflexión del vórtice ahora debe guardar en los ahorros</li>
29
- <li>Se resolvió un fallo en la cooperativa con el anfitrión rápidamente iniciar un juego después de cerrar la última ranura</li>
30
- <li>Resuelto alguna inconsistencia con ciertos potenciadores</li <h2>Cómo descargar Bloons TD 6 31.2 en Android</h2>
31
- <p>Si tienes un dispositivo Android, puedes descargar Bloons TD 6 31.2 desde Google Play Store. Estos son los pasos que debes seguir:</p>
32
- <h3>Requisitos y precio</h3>
33
- <p>Para descargar Bloons TD 6 31.2 en tu dispositivo Android, necesitas tener:</p>
34
- <ul>
35
- <li>Una versión de Android de 5.0 o superior</li>
36
- <li>Al menos 100 MB de espacio de almacenamiento libre</li>
37
- <li>Una conexión a Internet estable</li>
38
- </ul>
39
- <p>El juego cuesta $4.99 en la Google Play Store, pero vale la pena cada centavo por la cantidad de contenido y diversión que ofrece. También puedes comprar monedas y objetos con dinero real, pero son opcionales y no son necesarios para disfrutar del juego. </p>
40
- <h3>Cómo instalar y lanzar el juego</h3>
41
- <p>Para instalar y lanzar Bloons TD 6 31.2 en tu dispositivo Android, sigue estos pasos:</p>
42
- <ol>
43
- <li>Abra la aplicación Google Play Store en su dispositivo y busque "Bloons TD 6" o haga clic en este enlace. </li>
44
- <li>Toque en el botón "Instalar" y espere a que termine la descarga. </li>
45
- <li>Una vez que la descarga está completa, toque en el botón "Abrir" o encontrar el icono del juego en la pantalla de inicio o cajón de aplicaciones. </li>
46
- <li>¡Disfruta explotando globos con tus amigos monos! </li>
47
- </ol>
48
- <h2>Cómo descargar Bloons TD 6 31.2 en iOS</h2>
49
-
50
- <h3>Requisitos y precio</h3>
51
- <p>Para descargar Bloons TD 6 31.2 en tu dispositivo iOS, necesitas tener:</p>
52
- <ul>
53
- <li>Una versión iOS de 11.0 o superior</li>
54
- <li>Un iPhone compatible, iPad o iPod touch</li>
55
- <li>Al menos 150 MB de espacio de almacenamiento libre</li <li>Una conexión a Internet estable</li>
56
- </ul>
57
- <p>El juego cuesta $4.99 en la App Store, pero vale cada centavo por la cantidad de contenido y diversión que ofrece. También puedes comprar monedas y objetos con dinero real, pero son opcionales y no son necesarios para disfrutar del juego. </p>
58
- <h3>Cómo instalar y lanzar el juego</h3>
59
- <p>Para instalar y ejecutar Bloons TD 6 31.2 en tu dispositivo iOS, sigue estos pasos:</p>
60
- <ol>
61
- <li>Abra la aplicación App Store en su dispositivo y busque "Bloons TD 6" o haga clic en este enlace. </li>
62
- <li>Toque en el botón "Obtener" e introduzca su contraseña de Apple ID o utilice Touch ID o Face ID si se le solicita. </li>
63
- <li>Espera a que la descarga termine y toca el botón "Abrir" o encuentra el icono del juego en la pantalla de inicio. </li>
64
- <li>¡Disfruta explotando globos con tus amigos monos! </li>
65
- </ol>
66
- <h2>Cómo descargar Bloons TD 6 31.2 en Steam</h2>
67
- <p>Si tienes un PC o Mac, puedes descargar Bloons TD 6 31.2 de Steam. Estos son los pasos que debes seguir:</p>
68
- <h3>Requisitos y precio</h3>
69
- <p>Para descargar Bloons TD 6 31.2 en tu PC o Mac, necesitas tener:</p>
70
- <borde de la tabla="1">
71
- <tr><th></th><th>Requisitos mínimos</th><th>Requisitos recomendados</th></tr>
72
- <tr><td>OS</td><td>Windows 7 (64bit) o superior<br/>Mac OS X 10.12.6 o superior</td><td>Windows 10 (64bit)<br/>Mac OS X 10.14 o superior</td></tr>
73
- <tr><td>Processor</td><td>Dual Core Processor<br/>Intel Core i3-2100T @ 2.5GHz<br/>AMD Phenom II X3 B73<br/>Intel Core i5-650 @ 3.20GHz<br/>AMD A10-5800K>APU@ 3.80GHz/Core<TD/<Procesador Dual.
74
- <tr><td>Memoria</td><td>4 GB RAM</td><td>8 GB RAM</td></tr>
75
-
76
- <tr><td>Almacenamiento</td><td><td>2048 MB de espacio disponible</td><td>4096 MB de espacio disponible</td></tr>
77
- </tabla>
78
- <p>El juego cuesta $9.99 en Steam, pero vale la pena cada centavo por la cantidad de contenido y diversión que ofrece. También puedes comprar monedas y objetos con dinero real, pero son opcionales y no son necesarios para disfrutar del juego. </p>
79
- <h3>Cómo instalar y lanzar el juego</h3>
80
- <p>Para instalar y lanzar Bloons TD 6 31.2 en tu PC o Mac, sigue estos pasos:</p>
81
- <ol>
82
- <li>Abre la aplicación de Steam en tu dispositivo e inicia sesión con tu cuenta o crea una si no la tienes. </li>
83
- <li>Buscar "Bloons TD 6" o haga clic en este enlace. </li>
84
- <li>Haga clic en el botón "Añadir al carrito" y proceda al pago. </li>
85
- <li>Una vez hecho el pago, vaya a su biblioteca y haga clic en el botón "Instalar" junto a Bloons TD 6.</li>
86
- <li>Espere a que finalicen la descarga y la instalación y haga clic en el botón "Play" para iniciar el juego. </li>
87
- <li>¡Disfruta explotando globos con tus amigos monos! </li>
88
- </ol>
89
- <h2>Conclusión</h2>
90
- <p>Bloons TD 6 es un fantástico juego de torre de defensa que te mantendrá entretenido durante horas con sus gráficos coloridos, un juego atractivo y contenido diverso. Ya sea que lo juegues en tu dispositivo Android, iOS o Steam, tendrás una explosión de globos y defenderás tu territorio de monos. Descargar Bloons TD 6 31.2 hoy y disfrutar de la última versión de este increíble juego! </p>
91
- <h2>Preguntas frecuentes</h2>
92
- <p>Aquí hay algunas preguntas frecuentes sobre Bloons TD 6 31.2:</p>
93
- <h3>Q: ¿Es Bloons TD 6 fuera de línea o en línea? </h3>
94
- <p>A: Bloons TD 6 se puede jugar fuera de línea o en línea. Puede jugar sin conexión a Internet, pero no podrá acceder a algunas funciones como el modo cooperativo, el territorio disputado, las misiones y el navegador de contenido. También necesitarás una conexión a Internet para descargar actualizaciones y comprar divisas y artículos en el juego. </p>
95
- <h3>P: ¿Cuántos héroes hay en Bloons TD 6?</h3>
96
-
97
- <h3>Q: ¿Cuántos mapas hay en Bloons TD 6?</h3>
98
- <p>A: Actualmente hay más de 60 mapas en Bloons TD 6, cada uno con su propio diseño, tema y nivel de dificultad. Puedes elegir entre mapas para principiantes, intermedios, avanzados, expertos o extremos dependiendo de tu habilidad y preferencia. También puedes crear tus propios mapas usando el navegador de contenido y compartirlos con otros jugadores. </p>
99
- <h3>Q: ¿Cómo consigo el dinero libre del mono en Bloons TD 6?</h3>
100
- <p>A: El dinero del mono es la moneda principal en Bloons TD 6 que se puede utilizar para comprar artículos, mejoras, poderes, héroes, y más. Puedes ganar dinero de mono completando niveles, misiones, logros, eventos y desafíos diarios. También puedes obtener dinero gratis viendo anuncios o participando en encuestas y ofertas. </p>
101
- <h3>Q: ¿Cuál es la mejor estrategia para Bloons TD 6?</h3>
102
- <p>A: No hay una respuesta definitiva a esta pregunta, ya que las diferentes estrategias funcionan para diferentes jugadores y situaciones. Sin embargo, algunos consejos generales son:</p>
103
- <ul>
104
- <li>Utilice una variedad de torres y héroes que se complementan entre sí y cubren diferentes tipos de globos. </li>
105
- <li>Actualice sus torres y héroes tanto como sea posible para aumentar su potencia y alcance. </li>
106
- <li>Coloca tus torres estratégicamente para maximizar su cobertura y eficiencia. </li>
107
- <li>Usa sabiamente tus poderes y objetos para mejorar tu defensa o lidiar con situaciones difíciles. </li <li>Experimenta con diferentes combinaciones y configuraciones para encontrar lo que funciona mejor para ti. </li>
108
- </ul></p><br />
109
- <br />
110
- <br />
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/charsetgroupprober.py DELETED
@@ -1,106 +0,0 @@
- ######################## BEGIN LICENSE BLOCK ########################
- # The Original Code is Mozilla Communicator client code.
- #
- # The Initial Developer of the Original Code is
- # Netscape Communications Corporation.
- # Portions created by the Initial Developer are Copyright (C) 1998
- # the Initial Developer. All Rights Reserved.
- #
- # Contributor(s):
- #   Mark Pilgrim - port to Python
- #
- # This library is free software; you can redistribute it and/or
- # modify it under the terms of the GNU Lesser General Public
- # License as published by the Free Software Foundation; either
- # version 2.1 of the License, or (at your option) any later version.
- #
- # This library is distributed in the hope that it will be useful,
- # but WITHOUT ANY WARRANTY; without even the implied warranty of
- # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- # Lesser General Public License for more details.
- #
- # You should have received a copy of the GNU Lesser General Public
- # License along with this library; if not, write to the Free Software
- # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
- # 02110-1301  USA
- ######################### END LICENSE BLOCK #########################
-
- from typing import List, Optional, Union
-
- from .charsetprober import CharSetProber
- from .enums import LanguageFilter, ProbingState
-
-
- class CharSetGroupProber(CharSetProber):
-     def __init__(self, lang_filter: LanguageFilter = LanguageFilter.NONE) -> None:
-         super().__init__(lang_filter=lang_filter)
-         self._active_num = 0
-         self.probers: List[CharSetProber] = []
-         self._best_guess_prober: Optional[CharSetProber] = None
-
-     def reset(self) -> None:
-         super().reset()
-         self._active_num = 0
-         for prober in self.probers:
-             prober.reset()
-             prober.active = True
-             self._active_num += 1
-         self._best_guess_prober = None
-
-     @property
-     def charset_name(self) -> Optional[str]:
-         if not self._best_guess_prober:
-             self.get_confidence()
-             if not self._best_guess_prober:
-                 return None
-         return self._best_guess_prober.charset_name
-
-     @property
-     def language(self) -> Optional[str]:
-         if not self._best_guess_prober:
-             self.get_confidence()
-             if not self._best_guess_prober:
-                 return None
-         return self._best_guess_prober.language
-
-     def feed(self, byte_str: Union[bytes, bytearray]) -> ProbingState:
-         for prober in self.probers:
-             if not prober.active:
-                 continue
-             state = prober.feed(byte_str)
-             if not state:
-                 continue
-             if state == ProbingState.FOUND_IT:
-                 self._best_guess_prober = prober
-                 self._state = ProbingState.FOUND_IT
-                 return self.state
-             if state == ProbingState.NOT_ME:
-                 prober.active = False
-                 self._active_num -= 1
-                 if self._active_num <= 0:
-                     self._state = ProbingState.NOT_ME
-                     return self.state
-         return self.state
-
-     def get_confidence(self) -> float:
-         state = self.state
-         if state == ProbingState.FOUND_IT:
-             return 0.99
-         if state == ProbingState.NOT_ME:
-             return 0.01
-         best_conf = 0.0
-         self._best_guess_prober = None
-         for prober in self.probers:
-             if not prober.active:
-                 self.logger.debug("%s not active", prober.charset_name)
-                 continue
-             conf = prober.get_confidence()
-             self.logger.debug(
-                 "%s %s confidence = %s", prober.charset_name, prober.language, conf
-             )
-             if best_conf < conf:
-                 best_conf = conf
-                 self._best_guess_prober = prober
-         if not self._best_guess_prober:
-             return 0.0
-         return best_conf
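The group prober above picks its best guess by taking the maximum confidence over its still-active child probers. A minimal stdlib-only sketch of that aggregation logic (the `SimpleProber` class here is a hypothetical stand-in for illustration, not part of chardet):

```python
# Sketch of CharSetGroupProber's confidence aggregation: the group's
# confidence is the max over its active child probers, and the winner
# becomes the best-guess prober. "SimpleProber" is hypothetical.

class SimpleProber:
    def __init__(self, charset_name, confidence, active=True):
        self.charset_name = charset_name
        self.confidence = confidence
        self.active = active

    def get_confidence(self):
        return self.confidence

def best_guess(probers):
    best_conf, best_prober = 0.0, None
    for prober in probers:
        if not prober.active:
            continue  # inactive probers are skipped, as in get_confidence()
        conf = prober.get_confidence()
        if conf > best_conf:
            best_conf, best_prober = conf, prober
    return best_prober, best_conf

probers = [
    SimpleProber("utf-8", 0.87),
    SimpleProber("windows-1252", 0.4),
    SimpleProber("shift_jis", 0.9, active=False),  # deactivated, ignored
]
winner, conf = best_guess(probers)
print(winner.charset_name, conf)  # utf-8 0.87
```

Note that the deactivated prober is excluded even though its raw confidence is highest, mirroring how `feed()` marks probers `NOT_ME`.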
spaces/BilalSardar/Object-Color-Detection-in-Video/app.py DELETED
@@ -1,104 +0,0 @@
- import cv2
- import gradio as gr
- import fast_colorthief
- import webcolors
- from PIL import Image
- import numpy as np
- thres = 0.45 # Threshold to detect object
-
-
-
- def Detection(filename):
-     cap = cv2.VideoCapture(filename)
-     framecount=0
-
-     cap.set(3,1280)
-     cap.set(4,720)
-     cap.set(10,70)
-
-     error="in function 'cv::imshow'"
-     classNames= []
-     FinalItems=[]
-     classFile = 'coco.names'
-     with open(classFile,'rt') as f:
-         #classNames = f.read().rstrip('n').split('n')
-         classNames = f.readlines()
-
-
-     # remove new line characters
-     classNames = [x.strip() for x in classNames]
-     print(classNames)
-     configPath = 'ssd_mobilenet_v3_large_coco_2020_01_14.pbtxt'
-     weightsPath = 'frozen_inference_graph.pb'
-
-
-     net = cv2.dnn_DetectionModel(weightsPath,configPath)
-     net.setInputSize(320,320)
-     net.setInputScale(1.0/ 127.5)
-     net.setInputMean((127.5, 127.5, 127.5))
-     net.setInputSwapRB(True)
-
-     while True:
-         success,img = cap.read()
-
-
-
-         # #Colour
-         try:
-             image = Image.fromarray(img)
-             image = image.convert('RGBA')
-             image = np.array(image).astype(np.uint8)
-             palette=fast_colorthief.get_palette(image)
-
-
-             for i in range(len(palette)):
-                 diff={}
-                 for color_hex, color_name in webcolors.CSS3_HEX_TO_NAMES.items():
-                     r, g, b = webcolors.hex_to_rgb(color_hex)
-                     diff[sum([(r - palette[i][0])**2,
-                               (g - palette[i][1])**2,
-                               (b - palette[i][2])**2])]= color_name
-                 if FinalItems.count(diff[min(diff.keys())])==0:
-                     FinalItems.append(diff[min(diff.keys())])
-
-         except:
-             pass
-
-         try:
-             classIds, confs, bbox = net.detect(img,confThreshold=thres)
-         except:
-             pass
-         print(classIds,bbox)
-         try:
-             if len(classIds) != 0:
-                 for classId, confidence,box in zip(classIds.flatten(),confs.flatten(),bbox):
-
-                     #cv2.rectangle(img,box,color=(0,255,0),thickness=2)
-                     #cv2.putText(img,classNames[classId-1].upper(),(box[0]+10,box[1]+30),
-                     #cv2.FONT_HERSHEY_COMPLEX,1,(0,255,0),2)
-                     #cv2.putText(img,str(round(confidence*100,2)),(box[0]+200,box[1]+30),
-                     #cv2.FONT_HERSHEY_COMPLEX,1,(0,255,0),2)
-                     if FinalItems.count(classNames[classId-1]) == 0:
-                         FinalItems.append(classNames[classId-1])
-
-
-             #cv2.imshow("Output",img)
-             cv2.waitKey(10)
-             if framecount>cap.get(cv2.CAP_PROP_FRAME_COUNT):
-                 break
-             else:
-                 framecount+=1
-         except Exception as err:
-             print(err)
-             t=str(err)
-             if t.__contains__(error):
-                 break
-
-     print(FinalItems)
-     return str(FinalItems)
-
- interface = gr.Interface(fn=Detection,
-                          inputs=["video"],
-                          outputs="text",
-                          title='Object & Color Detection in Video')
- interface.launch(inline=False,debug=True)
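The color-naming step in `Detection` maps each palette color to the CSS3 name with the smallest squared RGB distance. A stdlib-only sketch of that lookup (the small color table is a hand-picked subset for illustration; the app iterates the full `webcolors.CSS3_HEX_TO_NAMES` mapping):

```python
# Sketch of the nearest-named-color lookup used in Detection(): pick
# the color name whose RGB value has the smallest squared distance to
# the query color. CSS_SUBSET is an illustrative subset of CSS3 names.

CSS_SUBSET = {
    "red": (255, 0, 0),
    "green": (0, 128, 0),
    "blue": (0, 0, 255),
    "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def closest_color_name(rgb, table=CSS_SUBSET):
    r, g, b = rgb
    diff = {}
    for name, (cr, cg, cb) in table.items():
        # squared Euclidean distance in RGB space, as in the app
        diff[(cr - r) ** 2 + (cg - g) ** 2 + (cb - b) ** 2] = name
    return diff[min(diff)]

print(closest_color_name((250, 10, 5)))   # red
print(closest_color_name((10, 10, 10)))   # black
```

Keying the dict by distance (as the app does) silently merges ties; keying by name and taking `min` over distances would be the more robust variant.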
spaces/CALM/Dashboard/streamlit_observable/frontend/src/types.d.ts DELETED
@@ -1 +0,0 @@
- declare module '@observablehq/runtime';
 
 
spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/reduce_by_key.h DELETED
@@ -1,103 +0,0 @@
- /*
-  *  Copyright 2008-2013 NVIDIA Corporation
-  *
-  *  Licensed under the Apache License, Version 2.0 (the "License");
-  *  you may not use this file except in compliance with the License.
-  *  You may obtain a copy of the License at
-  *
-  *      http://www.apache.org/licenses/LICENSE-2.0
-  *
-  *  Unless required by applicable law or agreed to in writing, software
-  *  distributed under the License is distributed on an "AS IS" BASIS,
-  *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  *  See the License for the specific language governing permissions and
-  *  limitations under the License.
-  */
-
- #pragma once
-
- #include <thrust/detail/config.h>
- #include <thrust/pair.h>
- #include <thrust/iterator/iterator_traits.h>
- #include <thrust/system/detail/sequential/execution_policy.h>
-
- namespace thrust
- {
- namespace system
- {
- namespace detail
- {
- namespace sequential
- {
-
-
- __thrust_exec_check_disable__
- template<typename DerivedPolicy,
-          typename InputIterator1,
-          typename InputIterator2,
-          typename OutputIterator1,
-          typename OutputIterator2,
-          typename BinaryPredicate,
-          typename BinaryFunction>
- __host__ __device__
- thrust::pair<OutputIterator1,OutputIterator2>
- reduce_by_key(sequential::execution_policy<DerivedPolicy> &,
-               InputIterator1 keys_first,
-               InputIterator1 keys_last,
-               InputIterator2 values_first,
-               OutputIterator1 keys_output,
-               OutputIterator2 values_output,
-               BinaryPredicate binary_pred,
-               BinaryFunction binary_op)
- {
-   typedef typename thrust::iterator_traits<InputIterator1>::value_type InputKeyType;
-   typedef typename thrust::iterator_traits<InputIterator2>::value_type InputValueType;
-
-   // Use the input iterator's value type per https://wg21.link/P0571
-   using TemporaryType = typename thrust::iterator_value<InputIterator2>::type;
-
-   if(keys_first != keys_last)
-   {
-     InputKeyType temp_key = *keys_first;
-     TemporaryType temp_value = *values_first;
-
-     for(++keys_first, ++values_first;
-         keys_first != keys_last;
-         ++keys_first, ++values_first)
-     {
-       InputKeyType key = *keys_first;
-       InputValueType value = *values_first;
-
-       if(binary_pred(temp_key, key))
-       {
-         temp_value = binary_op(temp_value, value);
-       }
-       else
-       {
-         *keys_output = temp_key;
-         *values_output = temp_value;
-
-         ++keys_output;
-         ++values_output;
-
-         temp_key = key;
-         temp_value = value;
-       }
-     }
-
-     *keys_output = temp_key;
-     *values_output = temp_value;
-
-     ++keys_output;
-     ++values_output;
-   }
-
-   return thrust::make_pair(keys_output, values_output);
- }
-
-
- } // end namespace sequential
- } // end namespace detail
- } // end namespace system
- } // end namespace thrust
-
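The sequential algorithm above collapses each run of consecutive keys that compare equal (per `binary_pred`) into one output pair, combining the run's values with `binary_op`. Its semantics can be sketched in a few lines of Python (the function name and list-based signature are illustrative, not part of Thrust):

```python
# Sketch of reduce_by_key semantics: consecutive equal keys are reduced
# into a single (key, value) output pair, mirroring the sequential loop
# above (accumulate while binary_pred holds, flush on a key change).
import operator

def reduce_by_key(keys, values, binary_pred=operator.eq, binary_op=operator.add):
    out_keys, out_values = [], []
    if keys:
        temp_key, temp_value = keys[0], values[0]
        for key, value in zip(keys[1:], values[1:]):
            if binary_pred(temp_key, key):
                temp_value = binary_op(temp_value, value)  # extend current run
            else:
                out_keys.append(temp_key)      # flush finished run
                out_values.append(temp_value)
                temp_key, temp_value = key, value
        out_keys.append(temp_key)              # flush final run
        out_values.append(temp_value)
    return out_keys, out_values

print(reduce_by_key([1, 1, 2, 2, 2, 3], [1, 2, 3, 4, 5, 6]))
# ([1, 2, 3], [3, 12, 6])
```

As in the C++ version, only *adjacent* equal keys are merged; a key that reappears later starts a new output run.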
spaces/CVPR/VizWiz-CLIP-VQA/app.py DELETED
@@ -1,121 +0,0 @@
- import clip
- from PIL import Image
- import pandas as pd
- import torch
- from dataloader.extract_features_dataloader import transform_resize, question_preprocess
- from model.vqa_model import NetVQA
- from dataclasses import dataclass
- from torch.cuda.amp import autocast
- import gradio as gr
-
- @dataclass
- class InferenceConfig:
-     '''
-     Describes configuration of the training process
-     '''
-     model: str = "RN50x64"
-     checkpoint_root_clip: str = "./checkpoints/clip"
-     checkpoint_root_head: str = "./checkpoints/head"
-
-     use_question_preprocess: bool = True # True: delete ? at end
-
-     aux_mapping = {0: "unanswerable",
-                    1: "unsuitable",
-                    2: "yes",
-                    3: "no",
-                    4: "number",
-                    5: "color",
-                    6: "other"}
-     folds = 10
-
-     # Data
-     n_classes: int = 5726
-
-     # class mapping
-     class_mapping: str = "./data/annotations/class_mapping.csv"
-
-     device = "cuda" if torch.cuda.is_available() else "cpu"
-
-
- config = InferenceConfig()
-
- # load class mapping
- cm = pd.read_csv(config.class_mapping)
- classid_to_answer = {}
- for i in range(len(cm)):
-     row = cm.iloc[i]
-     classid_to_answer[row["class_id"]] = row["answer"]
-
- clip_model, preprocess = clip.load(config.model, download_root=config.checkpoint_root_clip, device=config.device)
-
- model = NetVQA(config).to(config.device)
-
-
- config.checkpoint_head = "{}/{}.pt".format(config.checkpoint_root_head, config.model)
-
- model_state_dict = torch.load(config.checkpoint_head)
- model.load_state_dict(model_state_dict, strict=True)
-
- model.eval()
-
- # Select Preprocessing
- image_transforms = transform_resize(clip_model.visual.input_resolution)
-
- if config.use_question_preprocess:
-     question_transforms = question_preprocess
- else:
-     question_transforms = None
-
- clip_model.eval()
-
-
- def predict(img, text):
-     img = Image.fromarray(img)
-     img = image_transforms(img)
-     img = img.unsqueeze(dim=0)
-
-     if question_transforms is not None:
-         question = question_transforms(text)
-     else:
-         question = text
-     question_tokens = clip.tokenize(question, truncate=True)
-     with torch.no_grad():
-         img = img.to(config.device)
-         img_feature = clip_model.encode_image(img)
-
-         question_tokens = question_tokens.to(config.device)
-         question_feature = clip_model.encode_text(question_tokens)
-
-         with autocast():
-             output, output_aux = model(img_feature, question_feature)
-
-     prediction_vqa = dict()
-     output = output.cpu().squeeze(0)
-     for k, v in classid_to_answer.items():
-         prediction_vqa[v] = float(output[k])
-
-     prediction_aux = dict()
-     output_aux = output_aux.cpu().squeeze(0)
-     for k, v in config.aux_mapping.items():
-         prediction_aux[v] = float(output_aux[k])
-
-
-     return prediction_vqa, prediction_aux
-
- description = """
- Less Is More: Linear Layers on CLIP Features as Powerful VizWiz Model
-
- Our approach focuses on visual question answering for visually impaired people. We fine-tuned our approach on the <a href='https://vizwiz.org/tasks-and-datasets/vqa/'>CVPR Grand Challenge VizWiz 2022</a> data set.
-
- You may click on one of the examples or upload your own image and question. The Gradio app shows the current answer for your question and an answer category.
-
- Link to our <a href='https://arxiv.org/abs/2206.05281'>paper</a>.
- """
-
- gr.Interface(fn=predict,
-              description=description,
-              inputs=[gr.Image(label='Image'), gr.Textbox(label='Question')],
-              outputs=[gr.outputs.Label(label='Answer', num_top_classes=5), gr.outputs.Label(label='Answer Category', num_top_classes=7)],
-              examples=[['examples/Augustiner.jpg', 'What is this?'],['examples/VizWiz_test_00006968.jpg', 'Can you tell me the color of the dog?'], ['examples/VizWiz_test_00005604.jpg', 'What drink is this?'], ['examples/VizWiz_test_00006246.jpg', 'Can you please tell me what kind of tea this is?'], ['examples/VizWiz_train_00004056.jpg', 'Is that a beer or a coke?'], ['examples/VizWiz_train_00017146.jpg', 'Can you tell me what\'s on this envelope please?'], ['examples/VizWiz_val_00003077.jpg', 'What is this?']]
-              ).launch()
-
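The `predict()` function above ends by turning the head's raw score vectors into `{answer: score}` dictionaries via a class-id mapping, which is what Gradio's `Label` output expects. That mapping step is independent of the model and can be sketched on its own (the helper name and sample scores are illustrative):

```python
# Sketch of the output-mapping step in predict(): a flat vector of
# scores plus an id -> answer mapping become a {answer: score} dict
# suitable for a Gradio Label output.

def scores_to_labels(output, classid_to_answer):
    return {answer: float(output[class_id])
            for class_id, answer in classid_to_answer.items()}

# Illustrative values modeled on the app's aux_mapping.
aux_mapping = {0: "unanswerable", 1: "unsuitable", 2: "yes", 3: "no"}
scores = [0.1, 0.05, 0.7, 0.15]
print(scores_to_labels(scores, aux_mapping))
# {'unanswerable': 0.1, 'unsuitable': 0.05, 'yes': 0.7, 'no': 0.15}
```

The same helper shape covers both outputs: the 5726-class answer head uses the CSV-derived `classid_to_answer`, the auxiliary head uses the 7-entry `aux_mapping`.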
spaces/CVPR/WALT/mmdet/models/backbones/hrnet.py DELETED
@@ -1,537 +0,0 @@
- import torch.nn as nn
- from mmcv.cnn import (build_conv_layer, build_norm_layer, constant_init,
-                       kaiming_init)
- from mmcv.runner import load_checkpoint
- from torch.nn.modules.batchnorm import _BatchNorm
-
- from mmdet.utils import get_root_logger
- from ..builder import BACKBONES
- from .resnet import BasicBlock, Bottleneck
-
-
- class HRModule(nn.Module):
-     """High-Resolution Module for HRNet.
-
-     In this module, every branch has 4 BasicBlocks/Bottlenecks. Fusion/Exchange
-     is in this module.
-     """
-
-     def __init__(self,
-                  num_branches,
-                  blocks,
-                  num_blocks,
-                  in_channels,
-                  num_channels,
-                  multiscale_output=True,
-                  with_cp=False,
-                  conv_cfg=None,
-                  norm_cfg=dict(type='BN')):
-         super(HRModule, self).__init__()
-         self._check_branches(num_branches, num_blocks, in_channels,
-                              num_channels)
-
-         self.in_channels = in_channels
-         self.num_branches = num_branches
-
-         self.multiscale_output = multiscale_output
-         self.norm_cfg = norm_cfg
-         self.conv_cfg = conv_cfg
-         self.with_cp = with_cp
-         self.branches = self._make_branches(num_branches, blocks, num_blocks,
-                                             num_channels)
-         self.fuse_layers = self._make_fuse_layers()
-         self.relu = nn.ReLU(inplace=False)
-
-     def _check_branches(self, num_branches, num_blocks, in_channels,
-                         num_channels):
-         if num_branches != len(num_blocks):
-             error_msg = f'NUM_BRANCHES({num_branches}) ' \
-                 f'!= NUM_BLOCKS({len(num_blocks)})'
-             raise ValueError(error_msg)
-
-         if num_branches != len(num_channels):
-             error_msg = f'NUM_BRANCHES({num_branches}) ' \
-                 f'!= NUM_CHANNELS({len(num_channels)})'
-             raise ValueError(error_msg)
-
-         if num_branches != len(in_channels):
-             error_msg = f'NUM_BRANCHES({num_branches}) ' \
-                 f'!= NUM_INCHANNELS({len(in_channels)})'
-             raise ValueError(error_msg)
-
-     def _make_one_branch(self,
-                          branch_index,
-                          block,
-                          num_blocks,
-                          num_channels,
-                          stride=1):
-         downsample = None
-         if stride != 1 or \
-                 self.in_channels[branch_index] != \
-                 num_channels[branch_index] * block.expansion:
-             downsample = nn.Sequential(
-                 build_conv_layer(
-                     self.conv_cfg,
-                     self.in_channels[branch_index],
-                     num_channels[branch_index] * block.expansion,
-                     kernel_size=1,
-                     stride=stride,
-                     bias=False),
-                 build_norm_layer(self.norm_cfg, num_channels[branch_index] *
-                                  block.expansion)[1])
-
-         layers = []
-         layers.append(
-             block(
-                 self.in_channels[branch_index],
-                 num_channels[branch_index],
-                 stride,
-                 downsample=downsample,
-                 with_cp=self.with_cp,
-                 norm_cfg=self.norm_cfg,
-                 conv_cfg=self.conv_cfg))
-         self.in_channels[branch_index] = \
-             num_channels[branch_index] * block.expansion
-         for i in range(1, num_blocks[branch_index]):
-             layers.append(
-                 block(
-                     self.in_channels[branch_index],
-                     num_channels[branch_index],
-                     with_cp=self.with_cp,
-                     norm_cfg=self.norm_cfg,
-                     conv_cfg=self.conv_cfg))
-
-         return nn.Sequential(*layers)
-
-     def _make_branches(self, num_branches, block, num_blocks, num_channels):
-         branches = []
-
-         for i in range(num_branches):
-             branches.append(
-                 self._make_one_branch(i, block, num_blocks, num_channels))
-
-         return nn.ModuleList(branches)
-
-     def _make_fuse_layers(self):
-         if self.num_branches == 1:
-             return None
-
-         num_branches = self.num_branches
-         in_channels = self.in_channels
-         fuse_layers = []
-         num_out_branches = num_branches if self.multiscale_output else 1
-         for i in range(num_out_branches):
-             fuse_layer = []
-             for j in range(num_branches):
-                 if j > i:
-                     fuse_layer.append(
-                         nn.Sequential(
-                             build_conv_layer(
-                                 self.conv_cfg,
-                                 in_channels[j],
-                                 in_channels[i],
-                                 kernel_size=1,
-                                 stride=1,
-                                 padding=0,
-                                 bias=False),
-                             build_norm_layer(self.norm_cfg, in_channels[i])[1],
-                             nn.Upsample(
-                                 scale_factor=2**(j - i), mode='nearest')))
-                 elif j == i:
-                     fuse_layer.append(None)
-                 else:
-                     conv_downsamples = []
-                     for k in range(i - j):
-                         if k == i - j - 1:
-                             conv_downsamples.append(
-                                 nn.Sequential(
-                                     build_conv_layer(
-                                         self.conv_cfg,
-                                         in_channels[j],
-                                         in_channels[i],
-                                         kernel_size=3,
-                                         stride=2,
-                                         padding=1,
-                                         bias=False),
-                                     build_norm_layer(self.norm_cfg,
-                                                      in_channels[i])[1]))
-                         else:
-                             conv_downsamples.append(
-                                 nn.Sequential(
-                                     build_conv_layer(
-                                         self.conv_cfg,
-                                         in_channels[j],
-                                         in_channels[j],
-                                         kernel_size=3,
-                                         stride=2,
-                                         padding=1,
-                                         bias=False),
-                                     build_norm_layer(self.norm_cfg,
-                                                      in_channels[j])[1],
-                                     nn.ReLU(inplace=False)))
-                     fuse_layer.append(nn.Sequential(*conv_downsamples))
-             fuse_layers.append(nn.ModuleList(fuse_layer))
-
-         return nn.ModuleList(fuse_layers)
-
-     def forward(self, x):
-         """Forward function."""
-         if self.num_branches == 1:
-             return [self.branches[0](x[0])]
-
-         for i in range(self.num_branches):
-             x[i] = self.branches[i](x[i])
-
-         x_fuse = []
-         for i in range(len(self.fuse_layers)):
-             y = 0
-             for j in range(self.num_branches):
-                 if i == j:
-                     y += x[j]
-                 else:
-                     y += self.fuse_layers[i][j](x[j])
-             x_fuse.append(self.relu(y))
-         return x_fuse
-
-
- @BACKBONES.register_module()
- class HRNet(nn.Module):
-     """HRNet backbone.
-
-     High-Resolution Representations for Labeling Pixels and Regions
-     arXiv: https://arxiv.org/abs/1904.04514
-
-     Args:
-         extra (dict): detailed configuration for each stage of HRNet.
-         in_channels (int): Number of input image channels. Default: 3.
-         conv_cfg (dict): dictionary to construct and config conv layer.
-         norm_cfg (dict): dictionary to construct and config norm layer.
-         norm_eval (bool): Whether to set norm layers to eval mode, namely,
-             freeze running stats (mean and var). Note: Effect on Batch Norm
-             and its variants only.
-         with_cp (bool): Use checkpoint or not. Using checkpoint will save some
-             memory while slowing down the training speed.
-         zero_init_residual (bool): whether to use zero init for last norm layer
-             in resblocks to let them behave as identity.
-
-     Example:
-         >>> from mmdet.models import HRNet
-         >>> import torch
-         >>> extra = dict(
-         >>>     stage1=dict(
-         >>>         num_modules=1,
-         >>>         num_branches=1,
-         >>>         block='BOTTLENECK',
-         >>>         num_blocks=(4, ),
-         >>>         num_channels=(64, )),
-         >>>     stage2=dict(
-         >>>         num_modules=1,
-         >>>         num_branches=2,
-         >>>         block='BASIC',
-         >>>         num_blocks=(4, 4),
-         >>>         num_channels=(32, 64)),
-         >>>     stage3=dict(
-         >>>         num_modules=4,
-         >>>         num_branches=3,
-         >>>         block='BASIC',
-         >>>         num_blocks=(4, 4, 4),
-         >>>         num_channels=(32, 64, 128)),
-         >>>     stage4=dict(
-         >>>         num_modules=3,
-         >>>         num_branches=4,
-         >>>         block='BASIC',
-         >>>         num_blocks=(4, 4, 4, 4),
-         >>>         num_channels=(32, 64, 128, 256)))
-         >>> self = HRNet(extra, in_channels=1)
-         >>> self.eval()
-         >>> inputs = torch.rand(1, 1, 32, 32)
-         >>> level_outputs = self.forward(inputs)
-         >>> for level_out in level_outputs:
-         ...     print(tuple(level_out.shape))
-         (1, 32, 8, 8)
-         (1, 64, 4, 4)
-         (1, 128, 2, 2)
-         (1, 256, 1, 1)
-     """
-
-     blocks_dict = {'BASIC': BasicBlock, 'BOTTLENECK': Bottleneck}
-
-     def __init__(self,
-                  extra,
-                  in_channels=3,
-                  conv_cfg=None,
-                  norm_cfg=dict(type='BN'),
-                  norm_eval=True,
-                  with_cp=False,
-                  zero_init_residual=False):
-         super(HRNet, self).__init__()
-         self.extra = extra
-         self.conv_cfg = conv_cfg
-         self.norm_cfg = norm_cfg
-         self.norm_eval = norm_eval
-         self.with_cp = with_cp
-         self.zero_init_residual = zero_init_residual
-
-         # stem net
-         self.norm1_name, norm1 = build_norm_layer(self.norm_cfg, 64, postfix=1)
-         self.norm2_name, norm2 = build_norm_layer(self.norm_cfg, 64, postfix=2)
-
-         self.conv1 = build_conv_layer(
-             self.conv_cfg,
-             in_channels,
-             64,
-             kernel_size=3,
-             stride=2,
-             padding=1,
-             bias=False)
-
-         self.add_module(self.norm1_name, norm1)
-         self.conv2 = build_conv_layer(
-             self.conv_cfg,
-             64,
-             64,
-             kernel_size=3,
-             stride=2,
-             padding=1,
-             bias=False)
-
-         self.add_module(self.norm2_name, norm2)
-         self.relu = nn.ReLU(inplace=True)
-
-         # stage 1
-         self.stage1_cfg = self.extra['stage1']
-         num_channels = self.stage1_cfg['num_channels'][0]
-         block_type = self.stage1_cfg['block']
-         num_blocks = self.stage1_cfg['num_blocks'][0]
-
-         block = self.blocks_dict[block_type]
-         stage1_out_channels = num_channels * block.expansion
-         self.layer1 = self._make_layer(block, 64, num_channels, num_blocks)
-
-         # stage 2
-         self.stage2_cfg = self.extra['stage2']
-         num_channels = self.stage2_cfg['num_channels']
-         block_type = self.stage2_cfg['block']
-
-         block = self.blocks_dict[block_type]
-         num_channels = [channel * block.expansion for channel in num_channels]
-         self.transition1 = self._make_transition_layer([stage1_out_channels],
-                                                        num_channels)
-         self.stage2, pre_stage_channels = self._make_stage(
-             self.stage2_cfg, num_channels)
-
-         # stage 3
-         self.stage3_cfg = self.extra['stage3']
-         num_channels = self.stage3_cfg['num_channels']
-         block_type = self.stage3_cfg['block']
-
-         block = self.blocks_dict[block_type]
-         num_channels = [channel * block.expansion for channel in num_channels]
-         self.transition2 = self._make_transition_layer(pre_stage_channels,
-                                                        num_channels)
-         self.stage3, pre_stage_channels = self._make_stage(
-             self.stage3_cfg, num_channels)
-
-         # stage 4
-         self.stage4_cfg = self.extra['stage4']
-         num_channels = self.stage4_cfg['num_channels']
-         block_type = self.stage4_cfg['block']
-
-         block = self.blocks_dict[block_type]
-         num_channels = [channel * block.expansion for channel in num_channels]
-         self.transition3 = self._make_transition_layer(pre_stage_channels,
-                                                        num_channels)
-         self.stage4, pre_stage_channels = self._make_stage(
-             self.stage4_cfg, num_channels)
-
-     @property
-     def norm1(self):
-         """nn.Module: the normalization layer named "norm1" """
-         return getattr(self, self.norm1_name)
-
-     @property
-     def norm2(self):
-         """nn.Module: the normalization layer named "norm2" """
-         return getattr(self, self.norm2_name)
-
-     def _make_transition_layer(self, num_channels_pre_layer,
-                                num_channels_cur_layer):
-         num_branches_cur = len(num_channels_cur_layer)
-         num_branches_pre = len(num_channels_pre_layer)
-
-         transition_layers = []
-         for i in range(num_branches_cur):
-             if i < num_branches_pre:
-                 if num_channels_cur_layer[i] != num_channels_pre_layer[i]:
-                     transition_layers.append(
-                         nn.Sequential(
-                             build_conv_layer(
-                                 self.conv_cfg,
-                                 num_channels_pre_layer[i],
-                                 num_channels_cur_layer[i],
-                                 kernel_size=3,
-                                 stride=1,
-                                 padding=1,
-                                 bias=False),
-                             build_norm_layer(self.norm_cfg,
-                                              num_channels_cur_layer[i])[1],
-                             nn.ReLU(inplace=True)))
-                 else:
-                     transition_layers.append(None)
-             else:
-                 conv_downsamples = []
-                 for j in range(i + 1 - num_branches_pre):
-                     in_channels = num_channels_pre_layer[-1]
-                     out_channels = num_channels_cur_layer[i] \
-                         if j == i - num_branches_pre else in_channels
-                     conv_downsamples.append(
-                         nn.Sequential(
-                             build_conv_layer(
-                                 self.conv_cfg,
-                                 in_channels,
-                                 out_channels,
-                                 kernel_size=3,
-                                 stride=2,
-                                 padding=1,
-                                 bias=False),
-                             build_norm_layer(self.norm_cfg, out_channels)[1],
-                             nn.ReLU(inplace=True)))
-                 transition_layers.append(nn.Sequential(*conv_downsamples))
-
-         return nn.ModuleList(transition_layers)
-
-     def _make_layer(self, block, inplanes, planes, blocks, stride=1):
-         downsample = None
-         if stride != 1 or inplanes != planes * block.expansion:
-             downsample = nn.Sequential(
-                 build_conv_layer(
-                     self.conv_cfg,
-                     inplanes,
-                     planes * block.expansion,
-                     kernel_size=1,
-                     stride=stride,
-                     bias=False),
-                 build_norm_layer(self.norm_cfg, planes * block.expansion)[1])
-
-         layers = []
-         layers.append(
-             block(
-                 inplanes,
-                 planes,
-                 stride,
-                 downsample=downsample,
-                 with_cp=self.with_cp,
-                 norm_cfg=self.norm_cfg,
-                 conv_cfg=self.conv_cfg))
-         inplanes = planes * block.expansion
-         for i in range(1, blocks):
-             layers.append(
-                 block(
-                     inplanes,
-                     planes,
-                     with_cp=self.with_cp,
-                     norm_cfg=self.norm_cfg,
-                     conv_cfg=self.conv_cfg))
-
-         return nn.Sequential(*layers)
-
-     def _make_stage(self, layer_config, in_channels, multiscale_output=True):
-         num_modules = layer_config['num_modules']
-         num_branches = layer_config['num_branches']
-         num_blocks = layer_config['num_blocks']
-         num_channels = layer_config['num_channels']
-         block = self.blocks_dict[layer_config['block']]
-
-         hr_modules = []
-         for i in range(num_modules):
-             # multi_scale_output is only used for the last module
-             if not multiscale_output and i == num_modules - 1:
-                 reset_multiscale_output = False
-             else:
-                 reset_multiscale_output = True
-
-             hr_modules.append(
-                 HRModule(
-                     num_branches,
-                     block,
-                     num_blocks,
-                     in_channels,
-                     num_channels,
-                     reset_multiscale_output,
-                     with_cp=self.with_cp,
-                     norm_cfg=self.norm_cfg,
-                     conv_cfg=self.conv_cfg))
-
-         return nn.Sequential(*hr_modules), in_channels
-
-     def init_weights(self, pretrained=None):
-         """Initialize the weights in backbone.
-
-         Args:
-             pretrained (str, optional): Path to pre-trained weights.
-                 Defaults to None.
-         """
-         if isinstance(pretrained, str):
-             logger = get_root_logger()
-             load_checkpoint(self, pretrained, strict=False, logger=logger)
-         elif pretrained is None:
-             for m in self.modules():
-                 if isinstance(m, nn.Conv2d):
-                     kaiming_init(m)
-                 elif isinstance(m, (_BatchNorm, nn.GroupNorm)):
-                     constant_init(m, 1)
-
-             if self.zero_init_residual:
-                 for m in self.modules():
-                     if isinstance(m, Bottleneck):
-                         constant_init(m.norm3, 0)
-                     elif isinstance(m, BasicBlock):
-                         constant_init(m.norm2, 0)
-         else:
-             raise TypeError('pretrained must be a str or None')
-
-     def forward(self, x):
-         """Forward function."""
-         x = self.conv1(x)
-         x = self.norm1(x)
-         x = self.relu(x)
-         x = self.conv2(x)
-         x = self.norm2(x)
-         x = self.relu(x)
-         x = self.layer1(x)
-
-         x_list = []
-         for i in range(self.stage2_cfg['num_branches']):
-             if self.transition1[i] is not None:
-                 x_list.append(self.transition1[i](x))
-             else:
-                 x_list.append(x)
-         y_list = self.stage2(x_list)
-
-         x_list = []
-         for i in range(self.stage3_cfg['num_branches']):
-             if self.transition2[i] is not None:
-                 x_list.append(self.transition2[i](y_list[-1]))
-             else:
-                 x_list.append(y_list[i])
-         y_list = self.stage3(x_list)
-
-         x_list = []
-         for i in range(self.stage4_cfg['num_branches']):
-             if self.transition3[i] is not None:
-                 x_list.append(self.transition3[i](y_list[-1]))
-             else:
-                 x_list.append(y_list[i])
-         y_list = self.stage4(x_list)
-
-         return y_list
-
-     def train(self, mode=True):
-         """Convert the model into training mode while keeping the
-         normalization layers frozen."""
-         super(HRNet, self).train(mode)
-         if mode and self.norm_eval:
-             for m in self.modules():
-                 # trick: eval has an effect on BatchNorm only
-                 if isinstance(m, _BatchNorm):
-                     m.eval()
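In the backbone above, each stage's real per-branch widths are the configured `num_channels` scaled by the block's `expansion` factor (4 for `Bottleneck`, 1 for `BasicBlock` in mmdet's ResNet blocks). That channel bookkeeping can be checked without building any layers; a small sketch using the example config from the HRNet docstring:

```python
# Sketch of HRNet's channel arithmetic: stage output widths are
# num_channels * block.expansion (Bottleneck: 4, BasicBlock: 1).
EXPANSION = {"BOTTLENECK": 4, "BASIC": 1}

def stage_out_channels(stage_cfg):
    exp = EXPANSION[stage_cfg["block"]]
    return [c * exp for c in stage_cfg["num_channels"]]

# Values taken from the example config in the HRNet docstring above.
extra = {
    "stage1": {"block": "BOTTLENECK", "num_channels": (64,)},
    "stage2": {"block": "BASIC", "num_channels": (32, 64)},
    "stage4": {"block": "BASIC", "num_channels": (32, 64, 128, 256)},
}
print(stage_out_channels(extra["stage1"]))  # [256]
print(stage_out_channels(extra["stage4"]))  # [32, 64, 128, 256]
```

This is why `__init__` computes `stage1_out_channels = num_channels * block.expansion` before building `transition1`: stage 1's Bottleneck branch emits 256 channels even though its configured width is 64.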
spaces/CikeyQI/Yunzai/Yunzai/plugins/adapter/ComWeChat.js DELETED
@@ -1,501 +0,0 @@
- import { randomUUID } from "crypto"
- import path from "node:path"
- import fs from "node:fs"
- import { fileTypeFromBuffer } from "file-type"
-
- Bot.adapter.push(new class ComWeChatAdapter {
-   constructor() {
-     this.id = "WeChat"
-     this.name = "ComWeChat"
-     this.path = this.name
-   }
-
-   toStr(data) {
-     switch (typeof data) {
-       case "string":
-         return data
-       case "number":
-         return String(data)
-       case "object":
-         if (Buffer.isBuffer(data))
-           return Buffer.from(data, "utf8").toString()
-         else
-           return JSON.stringify(data)
-     }
-     return data
-   }
-
-   makeLog(msg) {
-     return this.toStr(msg).replace(/(base64:\/\/|"type":"data","data":").*?"/g, '$1..."')
-   }
-
-   sendApi(ws, action, params = {}) {
-     const echo = randomUUID()
-     const msg = { action, params, echo }
-     ws.sendMsg(msg)
-     return new Promise(resolve =>
-       Bot.once(echo, data =>
-         resolve({ ...data, ...data.data })))
-   }
-
-   async fileName(file) {
-     try {
-       if (file.match(/^base64:\/\//)) {
-         const buffer = Buffer.from(file.replace(/^base64:\/\//, ""), "base64")
-         const type = await fileTypeFromBuffer(buffer)
-         return `${Date.now()}.${type.ext}`
-       } else {
-         return path.basename(file)
-       }
-     } catch (err) {
-       logger.error(`File type detection error: ${logger.red(err)}`)
-     }
-     return false
-   }
-
-   async uploadFile(data, file, name) {
-     const opts = { name: name || await this.fileName(file) || randomUUID() }
-
-     if (file.match(/^https?:\/\//)) {
-       opts.type = "url"
-       opts.url = file
-     } else if (file.match(/^base64:\/\//)) {
-       opts.type = "data"
-       opts.data = file.replace(/^base64:\/\//, "")
-     } else if (fs.existsSync(file)) {
-       opts.type = "data"
-       opts.data = fs.readFileSync(file).toString("base64")
-     } else {
-       opts.type = "path"
-       opts.path = file
-     }
-
-     logger.info(`${logger.blue(`[${data.self_id}]`)} Uploading file: ${this.makeLog(opts)}`)
-     return data.bot.sendApi("upload_file", opts)
-   }
-
-   async makeMsg(data, msg) {
-     if (!Array.isArray(msg))
-       msg = [msg]
-     const msgs = []
-     for (let i of msg) {
-       if (typeof i != "object")
-         i = { type: "text", data: { text: i }}
-       else if (!i.data)
-         i = { type: i.type, data: { ...i, type: undefined }}
-       if (i.data.file)
-         i.data = { file_id: (await this.uploadFile(data, i.data.file, i.data.name)).file_id }
-
-       switch (i.type) {
-         case "text":
-         case "image":
-         case "file":
-         case "wx.emoji":
-         case "wx.link":
-           break
-         case "record":
-         case "video":
-           i.type = "file"
-           break
-         case "at":
-           if (i.data.qq == "all")
-             i = { type: "mention_all", data: {}}
-           else
-             i = { type: "mention", data: { user_id: i.data.qq }}
-           break
-         case "reply":
-           continue
-         default:
-           i = { type: "text", data: { text: JSON.stringify(i) }}
-       }
-       msgs.push(i)
-     }
-     return msgs
-   }
-
-   async sendFriendMsg(data, msg) {
-     if (msg?.type == "node")
-       return Bot.sendForwardMsg(msg => this.sendFriendMsg(data, msg), msg.data)
-
-     const message = await this.makeMsg(data, msg)
-     logger.info(`${logger.blue(`[${data.self_id} => ${data.user_id}]`)} Sending friend message: ${this.makeLog(message)}`)
-     return data.bot.sendApi("send_message", {
-       detail_type: "private",
-       user_id: data.user_id,
-       message,
-     })
-   }
-
-   async sendGroupMsg(data, msg) {
-     if (msg?.type == "node")
-       return Bot.sendForwardMsg(msg => this.sendGroupMsg(data, msg), msg.data)
-
-     const message = await this.makeMsg(data, msg)
-     logger.info(`${logger.blue(`[${data.self_id} => ${data.group_id}]`)} Sending group message: ${this.makeLog(message)}`)
-     return data.bot.sendApi("send_message", {
-       detail_type: "group",
-       group_id: data.group_id,
-       message,
-     })
-   }
-
-   async getFriendArray(data) {
-     const array = []
-     for (const i of (await data.bot.sendApi("get_friend_list")).data)
-       array.push({
-         ...i,
-         nickname: i.user_remark == "null" ? i.user_displayname || i.user_name : i.user_remark,
-       })
-     return array
-   }
-
-   async getFriendList(data) {
-     const array = []
-     for (const { user_id } of (await this.getFriendArray(data)))
-       array.push(user_id)
-     return array
-   }
-
-   async getFriendMap(data) {
-     for (const i of (await this.getFriendArray(data)))
-       data.bot.fl.set(i.user_id, i)
-     return data.bot.fl
-   }
-
-   getFriendInfo(data) {
-     return data.bot.sendApi("get_user_info", {
-       user_id: data.user_id,
-     })
-   }
-
-   async getGroupArray(data) {
-     return (await data.bot.sendApi("get_group_list")).data
-   }
-
-   async getGroupList(data) {
-     const array = []
-     for (const { group_id } of (await this.getGroupArray(data)))
-       array.push(group_id)
-     return array
-   }
-
-   async getGroupMap(data) {
-     for (const i of (await this.getGroupArray(data)))
-       data.bot.gl.set(i.group_id, i)
-     return data.bot.gl
-   }
-
-   getGroupInfo(data) {
-     return data.bot.sendApi("get_group_info", {
-       group_id: data.group_id,
-     })
-   }
-
-   async getMemberArray(data) {
-     return (await data.bot.sendApi("get_group_member_list", {
-       group_id: data.group_id,
-     })).data
-   }
-
-   async getMemberList(data) {
-     const array = []
-     for (const { user_id } of (await this.getMemberArray(data)))
-       array.push(user_id)
-     return array
-   }
-
-   async getMemberMap(data) {
-     const map = new Map
-     for (const i of (await this.getMemberArray(data)))
-       map.set(i.user_id, i)
-     return map
-   }
-
-   getMemberInfo(data) {
-     return data.bot.sendApi("get_group_member_info", {
-       group_id: data.group_id,
-       user_id: data.user_id,
-     })
-   }
-
-   pickFriend(data, user_id) {
-     const i = {
-       ...data.bot.fl.get(user_id),
-       ...data,
-       user_id,
-     }
-     return {
-       ...i,
-       sendMsg: msg => this.sendFriendMsg(i, msg),
-       sendFile: (file, name) => this.sendFriendMsg(i, segment.file(file, name)),
-       getInfo: () => this.getFriendInfo(i),
-       getAvatarUrl: async () => (await this.getFriendInfo(i))["wx.avatar"],
-     }
-   }
-
-   pickMember(data, group_id, user_id) {
-     const i = {
-       ...data.bot.fl.get(user_id),
-       ...data,
-       group_id,
-       user_id,
-     }
-     return {
-       ...this.pickFriend(i, user_id),
-       ...i,
-       getInfo: () => this.getMemberInfo(i),
-       getAvatarUrl: async () => (await this.getMemberInfo(i))["wx.avatar"],
-     }
-   }
-
-   pickGroup(data, group_id) {
-     const i = {
-       ...data.bot.gl.get(group_id),
-       ...data,
-       group_id,
-     }
-     return {
-       ...i,
-       sendMsg: msg => this.sendGroupMsg(i, msg),
-       sendFile: (file, name) => this.sendGroupMsg(i, segment.file(file, name)),
-       getInfo: () => this.getGroupInfo(i),
-       getAvatarUrl: async () => (await this.getGroupInfo(i))["wx.avatar"],
-       getMemberArray: () => this.getMemberArray(i),
-       getMemberList: () => this.getMemberList(i),
-       getMemberMap: () => this.getMemberMap(i),
-       pickMember: user_id => this.pickMember(i, i.group_id, user_id),
-     }
-   }
-
-   async connect(data, ws) {
-     for (const bot of data.status.bots)
-       data.self_id = bot.self.user_id
-
-     Bot[data.self_id] = {
-       adapter: this,
-       ws: ws,
-       sendApi: (action, params) => this.sendApi(ws, action, params),
-       stat: { ...data.status, start_time: data.time },
-
-       info: {},
-       get uin() { return this.info.user_id },
-       get nickname() { return this.info.user_name },
-       get avatar() { return this.info["wx.avatar"] },
-
-       pickFriend: user_id => this.pickFriend(data, user_id),
-       get pickUser() { return this.pickFriend },
-       getFriendArray: () => this.getFriendArray(data),
-       getFriendList: () => this.getFriendList(data),
-       getFriendMap: () => this.getFriendMap(data),
-       fl: new Map,
-
-       pickMember: (group_id, user_id) => this.pickMember(data, group_id, user_id),
-       pickGroup: group_id => this.pickGroup(data, group_id),
-       getGroupArray: () => this.getGroupArray(data),
-       getGroupList: () => this.getGroupList(data),
-       getGroupMap: () => this.getGroupMap(data),
-       gl: new Map,
-       gml: new Map,
-     }
-     data.bot = Bot[data.self_id]
-
-     if (!Bot.uin.includes(data.self_id))
-       Bot.uin.push(data.self_id)
-
-     data.bot.info = (await data.bot.sendApi("get_self_info")).data
-     data.bot.version = {
-       ...(await data.bot.sendApi("get_version")).data,
-       id: this.id,
-       name: this.name,
-     }
-
-     data.bot.getFriendMap()
-     data.bot.getGroupMap()
-
-     logger.mark(`${logger.blue(`[${data.self_id}]`)} ${this.name}(${this.id}) ${data.bot.version.impl}-${data.bot.version.version} connected`)
-     Bot.em(`connect.${data.self_id}`, data)
-   }
-
-   makeMessage(data) {
-     data.post_type = data.type
-     data.message_type = data.detail_type
-     data.raw_message = data.alt_message
-
-     data.sender = {
-       ...data.bot.fl.get(data.user_id),
-       user_id: data.user_id,
-     }
-
-     const message = []
-     for (const i of data.message)
-       switch (i.type) {
-         case "mention":
-           message.push({ type: "at", qq: i.data.user_id })
-           break
-         case "mention_all":
-           message.push({ type: "at", qq: "all" })
-           break
-         case "voice":
-           message.push({ type: "record", ...i.data })
-           break
-         case "reply":
-           message.push({ type: "reply", id: i.data.message_id, user_id: i.data.user_id })
-           break
-         default:
-           message.push({ type: i.type, ...i.data })
-       }
-     data.message = message
-
-     switch (data.message_type) {
-       case "private":
-         logger.info(`${logger.blue(`[${data.self_id}]`)} Friend message: [${data.user_id}] ${data.raw_message}`)
-         break
-       case "group":
-         logger.info(`${logger.blue(`[${data.self_id}]`)} Group message: [${data.group_id}, ${data.user_id}] ${data.raw_message}`)
-         break
-       default:
-         logger.warn(`${logger.blue(`[${data.self_id}]`)} Unknown message: ${logger.magenta(JSON.stringify(data))}`)
-     }
-
-     Bot.em(`${data.post_type}.${data.message_type}`, data)
-   }
-
-   makeNotice(data) {
-     data.post_type = data.type
-     if (data.group_id)
-       data.notice_type = "group"
-     else
-       data.notice_type = "friend"
-
-     switch (data.detail_type) {
-       case "private_message_delete":
-         logger.info(`${logger.blue(`[${data.self_id}]`)} Friend message recalled: [${data.user_id}] ${data.message_id}`)
-         data.sub_type = "recall"
-         break
-       case "group_message_delete":
-         logger.info(`${logger.blue(`[${data.self_id}]`)} Group message recalled: [${data.group_id}, ${data.operator_id}=>${data.user_id}] ${data.message_id}`)
-         data.sub_type = "recall"
-         break
-       case "wx.get_private_file":
-         logger.info(`${logger.blue(`[${data.self_id}]`)} Private file: [${data.user_id}] ${data.file_name} ${data.file_length} ${data.md5}`)
-         break
-       case "wx.get_group_file":
-         logger.info(`${logger.blue(`[${data.self_id}]`)} Group file: [${data.group_id}, ${data.user_id}] ${data.file_name} ${data.file_length} ${data.md5}`)
-         break
-       case "wx.get_private_redbag":
-         logger.info(`${logger.blue(`[${data.self_id}]`)} Friend red packet: [${data.user_id}]`)
-         break
-       case "wx.get_group_redbag":
-         logger.info(`${logger.blue(`[${data.self_id}]`)} Group red packet: [${data.group_id}, ${data.user_id}]`)
-         break
-       case "wx.get_private_poke":
-         data.operator_id = data.from_user_id
-         data.target_id = data.user_id
-         logger.info(`${logger.blue(`[${data.self_id}]`)} Friend poke: [${data.operator_id}=>${data.target_id}]`)
-         break
-       case "wx.get_group_poke":
-         data.operator_id = data.from_user_id
-         data.target_id = data.user_id
-         logger.info(`${logger.blue(`[${data.self_id}]`)} Group poke: [${data.group_id}, ${data.operator_id}=>${data.target_id}]`)
-         break
-       case "wx.get_private_card":
-         logger.info(`${logger.blue(`[${data.self_id}]`)} Friend contact card: [${data.user_id}] ${data.v3} ${data.v4} ${data.nickname} ${data.head_url} ${data.province} ${data.city} ${data.sex}`)
-         break
-       case "wx.get_group_card":
-         logger.info(`${logger.blue(`[${data.self_id}]`)} Group contact card: [${data.group_id}, ${data.user_id}] ${data.v3} ${data.v4} ${data.nickname} ${data.head_url} ${data.province} ${data.city} ${data.sex}`)
-         break
-       default:
-         logger.warn(`${logger.blue(`[${data.self_id}]`)} Unknown notice: ${logger.magenta(JSON.stringify(data))}`)
-     }
-     if (!data.sub_type)
-       data.sub_type = data.detail_type.split("_").pop()
-
-     Bot.em(`${data.post_type}.${data.notice_type}.${data.sub_type}`, data)
-   }
-
-   makeRequest(data) {
-     data.post_type = data.type
-     if (data.group_id)
-       data.notice_type = "group"
-     else
-       data.notice_type = "friend"
-
-     switch (data.detail_type) {
-       case "wx.friend_request":
-         logger.info(`${logger.blue(`[${data.self_id}]`)} Friend request: [${data.user_id}] ${data.v3} ${data.v4} ${data.nickname} ${data.content} ${data.province} ${data.city}`)
-         data.sub_type = "add"
-         break
-       default:
-         logger.warn(`${logger.blue(`[${data.self_id}]`)} Unknown request: ${logger.magenta(JSON.stringify(data))}`)
-     }
-     if (!data.sub_type)
-       data.sub_type = data.detail_type.split("_").pop()
-
-     Bot.em(`${data.post_type}.${data.request_type}.${data.sub_type}`, data)
-   }
-
-   makeMeta(data, ws) {
-     switch (data.detail_type) {
-       case "heartbeat":
-         break
-       case "connect":
-         break
-       case "status_update":
-         this.connect(data, ws)
-         break
-       default:
-         logger.warn(`${logger.blue(`[${data.self_id}]`)} Unknown message: ${logger.magenta(JSON.stringify(data))}`)
-     }
-   }
-
-   message(data, ws) {
-     try {
-       data = JSON.parse(data)
-     } catch (err) {
-       return logger.error(`Failed to decode data: ${logger.red(err)}`)
-     }
-
-     if (data.self?.user_id) {
-       data.self_id = data.self.user_id
-     } else {
-       data.self_id = data.id
-     }
-
-     if (data.type) {
-       if (data.type != "meta" && !Bot.uin.includes(data.self_id)) {
-         logger.warn(`${logger.blue(`[${data.self_id}]`)} No matching Bot, ignoring message: ${logger.magenta(JSON.stringify(data))}`)
-         return false
-       }
-       data.bot = Bot[data.self_id]
-
-       switch (data.type) {
-         case "meta":
-           this.makeMeta(data, ws)
-           break
-         case "message":
-           this.makeMessage(data)
-           break
-         case "notice":
-           this.makeNotice(data)
-           break
-         case "request":
-           this.makeRequest(data)
-           break
-         default:
-           logger.warn(`${logger.blue(`[${data.self_id}]`)} Unknown message: ${logger.magenta(JSON.stringify(data))}`)
-       }
-     } else if (data.echo) {
-       Bot.emit(data.echo, data)
-     } else {
-       logger.warn(`${logger.blue(`[${data.self_id}]`)} Unknown message: ${logger.magenta(JSON.stringify(data))}`)
-     }
-   }
-
-   load() {
-     if (!Array.isArray(Bot.wsf[this.path]))
-       Bot.wsf[this.path] = []
-     Bot.wsf[this.path].push((ws, ...args) =>
-       ws.on("message", data => this.message(data, ws, ...args))
-     )
-   }
- })
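The adapter's `sendApi` above correlates each websocket reply with its request by stamping the request with a random `echo` UUID and resolving the matching pending promise when a message carrying the same `echo` comes back (see the `data.echo` branch in `message`). A sketch of that correlation pattern in Python, where the `EchoRouter` class and the in-memory `send` stub are illustrative, not part of Yunzai:

```python
import asyncio
import uuid


class EchoRouter:
    """Correlate async replies with requests via a per-request echo id."""

    def __init__(self):
        self.pending = {}

    def send_api(self, send, action, params):
        echo = str(uuid.uuid4())
        fut = asyncio.get_running_loop().create_future()
        self.pending[echo] = fut          # remember who is waiting for this echo
        send({"action": action, "params": params, "echo": echo})
        return fut                        # await this to get the matching reply

    def on_message(self, msg):
        fut = self.pending.pop(msg.get("echo"), None)
        if fut is not None:
            fut.set_result(msg)           # wake up the waiting caller


async def main():
    router = EchoRouter()
    sent = []                             # stand-in for ws.sendMsg
    fut = router.send_api(sent.append, "get_self_info", {})
    # simulate the remote side echoing the id back with a payload
    router.on_message({"echo": sent[0]["echo"], "data": {"user_id": "wxid_demo"}})
    resp = await fut
    return resp["data"]["user_id"]


print(asyncio.run(main()))  # prints wxid_demo
```

The same idea appears in the adapter as `Bot.once(echo, ...)` plus `Bot.emit(data.echo, data)`, using the event emitter as the pending-request table.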
 
spaces/Cpp4App/Cpp4App/CDM/detect_merge/merge.py DELETED
@@ -1,361 +0,0 @@
- import json
- import cv2
- import numpy as np
- from os.path import join as pjoin
- import os
- import time
- import shutil
-
- from CDM.detect_merge.Element import Element
- from torchvision import models
- from torch import nn
- import torch
-
- import CDM.detect_compo.lib_ip.ip_preprocessing as pre
-
- # ----------------- load pre-trained classification model ----------------
-
- # model = models.resnet18().to('cpu')
- # in_feature_num = model.fc.in_features
- # model.fc = nn.Linear(in_feature_num, 99)
- # model.conv1 = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=(5, 5), padding=(3, 3), stride=(2, 2),
- #                         bias=False)
- #
- # PATH = "./model/model-99-resnet18.pkl"
- # model.load_state_dict(torch.load(PATH, map_location=torch.device('cpu')))
- #
- # model.eval()
-
- # ----------------- end loading ------------------------------------------
-
- # information_type = {'Name':['name', 'first name', 'last name', 'full name', 'real name', 'surname', 'family name', 'given name'],
- #                     'Birthday':['birthday', 'date of birth', 'birth date', 'DOB', 'dob full birthday'],
- #                     'Address':['address', 'mailing address', 'physical address', 'postal address', 'billing address', 'shipping address'],
- #                     'Phone':['phone', 'phone number', 'mobile', 'mobile phone', 'mobile number', 'telephone', 'telephone number', 'call'],
- #                     'Email':['email', 'e-mail', 'email address', 'e-mail address'],
- #                     'Contacts':['contacts', 'phone-book', 'phone book'],
- #                     'Location':['location', 'locate', 'place', 'geography', 'geo', 'geo-location', 'precision location'],
- #                     'Camera':['camera', 'photo', 'scan', 'album', 'picture', 'gallery', 'photo library', 'storage', 'image', 'video'],
- #                     'Microphone':['microphone', 'voice, mic', 'speech', 'talk'],
- #                     'Financial':['credit card', 'pay', 'payment', 'debit card', 'mastercard', 'wallet'],
- #                     'IP':['IP', 'Internet Protocol', 'IP address', 'internet protocol address'],
- #                     'Cookies':['cookies', 'cookie'],
- #                     'Social':['facebook', 'twitter']}
-
- def show_elements(org_img, eles, ratio, show=False, win_name='element', wait_key=0, shown_resize=None, line=2):
-     color_map = {'Text':(0, 0, 255), 'Compo':(0, 255, 0), 'Block':(0, 255, 0), 'Text Content':(255, 0, 255)}
-     img = org_img.copy()
-     for ele in eles:
-         color = color_map[ele.category]
-         ele.visualize_element(img=img, color=color, line=line, ratio=ratio)
-     img_resize = img
-     if shown_resize is not None:
-         img_resize = cv2.resize(img, shown_resize)
-     if show:
-         cv2.imshow(win_name, img_resize)
-         cv2.waitKey(wait_key)
-         if wait_key == 0:
-             cv2.destroyWindow(win_name)
-     return img_resize
-
- def show_one_element(org_img, eles, ratio, show=False, win_name='element', wait_key=0, shown_resize=None, line=2):
-     color_map = {'Text': (0, 0, 255), 'Compo': (0, 255, 0), 'Block': (0, 255, 0), 'Text Content': (255, 0, 255)}
-     all_img = []
-     for ele in eles:
-         img = org_img.copy()
-         color = color_map[ele.category]
-         ele.visualize_element(img=img, color=color, line=line, ratio=ratio)
-         img_resize = img
-         all_img.append(img_resize)
-     if shown_resize is not None:
-         img_resize = cv2.resize(img, shown_resize)
-     if show:
-         cv2.imshow(win_name, img_resize)
-         cv2.waitKey(wait_key)
-         if wait_key == 0:
-             cv2.destroyWindow(win_name)
-     return all_img
-
-
- def save_elements(output_file, elements, img_shape, ratio=1):
-     components = {'compos': [], 'img_shape': img_shape}
-     for i, ele in enumerate(elements):
-
-         if ratio != 1:
-             ele.resize(ratio)
-             ele.width = ele.col_max - ele.col_min
-             ele.height = ele.row_max - ele.row_min
-
-         c = ele.wrap_info()
-         # c['id'] = i
-         components['compos'].append(c)
-     json.dump(components, open(output_file, 'w'), indent=4)
-     return components
-
-
- def reassign_ids(elements):
-     for i, element in enumerate(elements):
-         element.id = i
-
-
- def refine_texts(texts, img_shape):
-     refined_texts = []
-     # for text in texts:
-     #     # remove potential noise
-     #     if len(text.text_content) > 1 and text.height / img_shape[0] < 0.075:
-     #         refined_texts.append(text)
-
-     for text in texts:
-         # remove potential noise
-         if text.height / img_shape[0] < 0.075:
-             refined_texts.append(text)
-
-     return refined_texts
-
-
- def merge_text_line_to_paragraph(elements, max_line_gap=5):
-     texts = []
-     non_texts = []
-     for ele in elements:
-         if ele.category == 'Text':
-             texts.append(ele)
-         else:
-             non_texts.append(ele)
-
-     changed = True
-     while changed:
-         changed = False
-         temp_set = []
-         for text_a in texts:
-             merged = False
-             for text_b in temp_set:
-                 inter_area, _, _, _ = text_a.calc_intersection_area(text_b, bias=(0, max_line_gap))
-                 if inter_area > 0:
-                     text_b.element_merge(text_a)
-                     merged = True
-                     changed = True
-                     break
-             if not merged:
-                 temp_set.append(text_a)
-         texts = temp_set.copy()
-     return non_texts + texts
-
-
- def refine_elements(compos, texts, input_img_path, intersection_bias=(2, 2), containment_ratio=0.8, ):
-     '''
-     1. remove compos contained in text
-     2. remove compos containing text area that's too large
-     3. store text in a compo if it's contained by the compo as the compo's text child element
-     '''
-
-     # resize_by_height = 800
-     # org, grey = pre.read_img(input_img_path, resize_by_height)
-     #
-     # grey = grey.astype('float32')
-     # grey = grey / 255
-     #
-     # grey = (grey - grey.mean()) / grey.std()
-
-     elements = []
-     contained_texts = []
-
-     # classification_start_time = time.time()
-
-     for compo in compos:
-         is_valid = True
-         text_area = 0
-         for text in texts:
-             inter, iou, ioa, iob = compo.calc_intersection_area(text, bias=intersection_bias)
-             if inter > 0:
-                 # the non-text is contained in the text compo
-                 if ioa >= containment_ratio:
-                     is_valid = False
-                     break
-                 text_area += inter
-                 # the text is contained in the non-text compo
-                 if iob >= containment_ratio and compo.category != 'Block':
-                     contained_texts.append(text)
-         # print("id: ", compo.id)
-         # print("text.text_content: ", text.text_content)
-         # print("is_valid: ", is_valid)
-         # print("inter: ", inter)
-         # print("iou: ", iou)
-         # print("ioa: ", ioa)
-         # print("iob: ", iob)
-         # print("text_area: ", text_area)
-         # print("compo.area: ", compo.area)
-         if is_valid and text_area / compo.area < containment_ratio:
-             # for t in contained_texts:
-             #     t.parent_id = compo.id
-             # compo.children += contained_texts
-
-             # --------- classification ----------
-
-             # comp_grey = grey[compo.row_min:compo.row_max, compo.col_min:compo.col_max]
-             #
-             # comp_crop = cv2.resize(comp_grey, (32, 32))
-             #
-             # comp_crop = comp_crop.reshape(1, 1, 32, 32)
-             #
-             # comp_tensor = torch.tensor(comp_crop)
-             # comp_tensor = comp_tensor.permute(0, 1, 3, 2)
-             #
-             # pred_label = model(comp_tensor)
-             #
-             # if np.argmax(pred_label.cpu().data.numpy(), axis=1) in [72.0, 42.0, 77.0, 91.0, 6.0, 89.0, 40.0, 43.0, 82.0,
-             #                                                         3.0, 68.0, 49.0, 56.0, 89.0]:
-             #     elements.append(compo)
-
-             # --------- end classification ----------
-
-             elements.append(compo)
-     # time_cost_ic = time.time() - classification_start_time
-     # print("time cost for icon classification: %2.2f s" % time_cost_ic)
-
-     # text_selection_time = time.time()
-
-     # elements += texts
-     for text in texts:
-         if text not in contained_texts:
-             elements.append(text)
-
-         # ---------- Simulate keyword search -----------
-
-         # for key in keyword_list:
-         #     for w in keyword_list[key]:
-         #         if w in text.text_content.lower():
-         #             elements.append(text)
-
-         # ---------- end -------------------------------
-
-     # time_cost_ts = time.time() - text_selection_time
-     # print("time cost for text selection: %2.2f s" % time_cost_ts)
-
-     # return elements, time_cost_ic, time_cost_ts
-     return elements
-
-
- def check_containment(elements):
-     for i in range(len(elements) - 1):
-         for j in range(i + 1, len(elements)):
-             relation = elements[i].element_relation(elements[j], bias=(2, 2))
-             if relation == -1:
-                 elements[j].children.append(elements[i])
-                 elements[i].parent_id = elements[j].id
-             if relation == 1:
-                 elements[i].children.append(elements[j])
-                 elements[j].parent_id = elements[i].id
-
-
- def remove_top_bar(elements, img_height):
-     new_elements = []
-     max_height = img_height * 0.04
-     for ele in elements:
-         if ele.row_min < 10 and ele.height < max_height:
-             continue
-         new_elements.append(ele)
-     return new_elements
-
-
- def remove_bottom_bar(elements, img_height):
-     new_elements = []
-     for ele in elements:
-         # parameters for 800-height GUI
-         if ele.row_min > 750 and 20 <= ele.height <= 30 and 20 <= ele.width <= 30:
-             continue
-         new_elements.append(ele)
-     return new_elements
-
-
- def compos_clip_and_fill(clip_root, org, compos):
-     def most_pix_around(pad=6, offset=2):
-         '''
-         determine the filled background color according to the most surrounding pixel
-         '''
-         up = row_min - pad if row_min - pad >= 0 else 0
-         left = col_min - pad if col_min - pad >= 0 else 0
-         bottom = row_max + pad if row_max + pad < org.shape[0] - 1 else org.shape[0] - 1
-         right = col_max + pad if col_max + pad < org.shape[1] - 1 else org.shape[1] - 1
-         most = []
-         for i in range(3):
-             val = np.concatenate((org[up:row_min - offset, left:right, i].flatten(),
-                                   org[row_max + offset:bottom, left:right, i].flatten(),
-                                   org[up:bottom, left:col_min - offset, i].flatten(),
-                                   org[up:bottom, col_max + offset:right, i].flatten()))
-             most.append(int(np.argmax(np.bincount(val))))
-         return most
-
-     if os.path.exists(clip_root):
-         shutil.rmtree(clip_root)
-     os.mkdir(clip_root)
-
-     bkg = org.copy()
-     cls_dirs = []
-     for compo in compos:
-         cls = compo['class']
-         if cls == 'Background':
-             compo['path'] = pjoin(clip_root, 'bkg.png')
-             continue
-         c_root = pjoin(clip_root, cls)
-         c_path = pjoin(c_root, str(compo['id']) + '.jpg')
-         compo['path'] = c_path
-         if cls not in cls_dirs:
-             os.mkdir(c_root)
-             cls_dirs.append(cls)
-
-         position = compo['position']
-         col_min, row_min, col_max, row_max = position['column_min'], position['row_min'], position['column_max'], position['row_max']
-         cv2.imwrite(c_path, org[row_min:row_max, col_min:col_max])
-         # Fill up the background area
-         cv2.rectangle(bkg, (col_min, row_min), (col_max, row_max), most_pix_around(), -1)
-     cv2.imwrite(pjoin(clip_root, 'bkg.png'), bkg)
-
-
- def merge(img_path, compo_path, text_path, merge_root=None, is_paragraph=False, is_remove_top_bar=False, is_remove_bottom_bar=False, show=False, wait_key=0):
-     compo_json = json.load(open(compo_path, 'r'))
-     text_json = json.load(open(text_path, 'r'))
-
-     # load text and non-text compo
-     ele_id = 0
-     compos = []
-     for compo in compo_json['compos']:
-         element = Element(ele_id, (compo['column_min'], compo['row_min'], compo['column_max'], compo['row_max']), compo['class'])
-         compos.append(element)
-         ele_id += 1
-     texts = []
-     for text in text_json['texts']:
-         element = Element(ele_id, (text['column_min'], text['row_min'], text['column_max'], text['row_max']), 'Text', text_content=text['content'])
-         texts.append(element)
-         ele_id += 1
-     if compo_json['img_shape'] != text_json['img_shape']:
-         resize_ratio = compo_json['img_shape'][0] / text_json['img_shape'][0]
-         for text in texts:
-             text.resize(resize_ratio)
-
-     # check the original detected elements
-     img = cv2.imread(img_path)
-     img_resize = cv2.resize(img, (compo_json['img_shape'][1], compo_json['img_shape'][0]))
-     ratio = img.shape[0] / img_resize.shape[0]
-
-     show_elements(img, texts + compos, ratio, show=show, win_name='all elements before merging', wait_key=wait_key, line=3)
-
-     # refine elements
-     texts = refine_texts(texts, compo_json['img_shape'])
-     elements = refine_elements(compos, texts, img_path)
-     if is_remove_top_bar:
-         elements = remove_top_bar(elements, img_height=compo_json['img_shape'][0])
-     if is_remove_bottom_bar:
-         elements = remove_bottom_bar(elements, img_height=compo_json['img_shape'][0])
-     if is_paragraph:
-         elements = merge_text_line_to_paragraph(elements, max_line_gap=7)
-     reassign_ids(elements)
-     check_containment(elements)
-     board = show_elements(img, elements, ratio, show=show, win_name='elements after merging', wait_key=wait_key, line=3)
-
-     # save all merged elements, clips and blank background
-     name = img_path.replace('\\', '/').split('/')[-1][:-4]
-     components = save_elements(pjoin(merge_root, name + '.json'), elements, img_resize.shape)
-     cv2.imwrite(pjoin(merge_root, name + '.jpg'), board)
-     print('[Merge Completed] Input: %s Output: %s' % (img_path, pjoin(merge_root, name + '.jpg')))
-     return board, components
-     # return this_ic_time, this_ts_time
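The `most_pix_around` helper in `compos_clip_and_fill` above picks a fill color by taking, for each color channel, the most frequent pixel value in a padded ring around the clipped box. A self-contained sketch of that idea, where the toy image and the standalone function signature are illustrative rather than the original code:

```python
import numpy as np


def most_pix_around(img, row_min, row_max, col_min, col_max, pad=6, offset=2):
    """Most frequent per-channel pixel value in a ring around the given box."""
    up = max(row_min - pad, 0)
    left = max(col_min - pad, 0)
    bottom = min(row_max + pad, img.shape[0] - 1)
    right = min(col_max + pad, img.shape[1] - 1)
    most = []
    for i in range(img.shape[2]):
        # gather the pixels above, below, left and right of the box
        ring = np.concatenate((
            img[up:row_min - offset, left:right, i].flatten(),
            img[row_max + offset:bottom, left:right, i].flatten(),
            img[up:bottom, left:col_min - offset, i].flatten(),
            img[up:bottom, col_max + offset:right, i].flatten()))
        most.append(int(np.argmax(np.bincount(ring))))
    return most


# toy image: uniform gray background with a white box pasted in
img = np.full((40, 40, 3), 128, dtype=np.uint8)
img[10:20, 10:20] = 255
color = most_pix_around(img, 10, 20, 10, 20)
img[10:20, 10:20] = color  # fill the box with the dominant surrounding color
print(color)  # prints [128, 128, 128]
```

The original inlines the same logic as a closure over `org` and the current box coordinates, and passes the resulting color to `cv2.rectangle(..., -1)` to paint over each clipped component.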