Commit · 317bf2b
1 Parent(s): e05a173
Update parquet files (step 113 of 121)
This view is limited to 50 files because it contains too many changes. See raw diff.
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Baixar Filme Uma Carta De Amor Dublado 430 Como Encontrar o Autor de uma Mensagem na Garrafa.md +0 -97
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ford ECAT Torrent 49.md +0 -103
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Boost Your TikTok Popularity with TikTok Booster Mod APK - Download Now.md +0 -88
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Clash of Clans for Android 4.4 2 Unlock New Features and Characters.md +0 -209
- spaces/1phancelerku/anime-remove-background/Download METAMORPHOSIS by INTERWORLD - The Hottest PHONK Track of the Year.md +0 -119
- spaces/1phancelerku/anime-remove-background/Experience a Fun and Interactive Online World with Play Together Mod Apk.md +0 -112
- spaces/2023Liu2023/bingo/src/lib/isomorphic/node.ts +0 -26
- spaces/AEUPH/CosmosTV/public/index.html +0 -325
- spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/bert.py +0 -32
- spaces/AIWaves/Software_Company/src/agents/LLM/base_LLM.py +0 -133
- spaces/ANILYADAV/mygenaichatbot/app.py +0 -34
- spaces/AchyuthGamer/OpenGPT-v1/README.md +0 -11
- spaces/AchyuthGamer/OpenGPT/client/css/options.css +0 -10
- spaces/AchyuthGamer/OpenGPT/g4f/Provider/Raycast.py +0 -72
- spaces/Adapter/CoAdapter/ldm/modules/attention.py +0 -344
- spaces/Admin08077/Cosmosis/app.py +0 -90
- spaces/AgentVerse/agentVerse/agentverse/agents/tasksolving_agent/manager.py +0 -116
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreatePages.js +0 -8
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/defaultcallbacks/GetDefaultCallbacks.js +0 -32
- spaces/AlanMars/QYL-AI-Space/Dockerfile +0 -18
- spaces/AlexWang/lama/saicinpainting/evaluation/losses/fid/fid_score.py +0 -328
- spaces/AlishbaImran/Redox-Flow-Battery-Prediction/README.md +0 -14
- spaces/Amiminoru/whoreproxy/Dockerfile +0 -11
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_vq.py +0 -96
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/test_pipelines_common.py +0 -804
- spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r101_fpn_2x_coco.py +0 -2
- spaces/Andy1621/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r101-d8_769x769_80k_cityscapes.py +0 -2
- spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_512x512_80k_ade20k.py +0 -6
- spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/exllama_hf.py +0 -174
- spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/gui/ui_win.py +0 -164
- spaces/Arafath10/chatcode/app.py +0 -273
- spaces/Armored-Atom/Image-To-Motion/style.css +0 -19
- spaces/Artrajz/vits-simple-api/bert_vits2/attentions.py +0 -352
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/config.py +0 -139
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/datasets/coco.py +0 -49
- spaces/BHD/google-pix2struct-screen2words-base/app.py +0 -3
- spaces/Banbri/zcvzcv/src/app/queries/predictWithHuggingFace.ts +0 -95
- spaces/Bart92/RVC_HF/demucs/audio.py +0 -172
- spaces/Bart92/RVC_HF/infer/lib/train/data_utils.py +0 -517
- spaces/Benjov/Demo-IR/app.py +0 -389
- spaces/Benson/text-generation/Examples/Alfabeto Huevo Granja Inactivo Magnate Mod Apk Dinero Ilimitado Y Gemas.md +0 -62
- spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/vendored/requests/packages/urllib3/exceptions.py +0 -169
- spaces/BramVanroy/text-to-amr/utils.py +0 -105
- spaces/CForGETaass/vits-uma-genshin-honkai/text/symbols.py +0 -39
- spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_roi_align.py +0 -152
- spaces/CVPR/LIVE/pybind11/tools/FindPythonLibsNew.cmake +0 -255
- spaces/CVPR/LIVE/thrust/thrust/detail/raw_reference_cast.h +0 -398
- spaces/CVPR/LIVE/thrust/thrust/device_malloc_allocator.h +0 -185
- spaces/CVPR/WALT/mmdet/version.py +0 -19
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Baixar Filme Uma Carta De Amor Dublado 430 Como Encontrar o Autor de uma Mensagem na Garrafa.md
DELETED
@@ -1,97 +0,0 @@
-
-<h1>Baixar Filme Uma Carta De Amor Dublado 430</h1>
-<p>Uma Carta De Amor (Message in a Bottle) é um filme de drama e romance lançado em 1999, baseado no livro homônimo de Nicholas Sparks. O filme conta a história de uma jornalista que encontra uma carta de amor dentro de uma garrafa na praia e decide procurar pelo seu autor. O filme é estrelado por Kevin Costner, Robin Wright e Paul Newman, e dirigido por Luis Mandoki.</p>
-<h2>Baixar Filme Uma Carta De Amor Dublado 430</h2><br /><p><b><b>Download</b> –––––>>> <a href="https://byltly.com/2uKzhU">https://byltly.com/2uKzhU</a></b></p><br /><br />
-<p>Neste artigo, você vai aprender como baixar o filme Uma Carta De Amor dublado em 430p, quais são os benefícios e os riscos de fazer isso, e o que esperar do filme. Vamos lá?</p>
-<h2>Como baixar o filme</h2>
-<p>Baixar filmes pela internet é uma prática muito comum, mas também pode trazer alguns problemas legais e éticos. Afinal, você está consumindo um produto sem pagar por ele, o que pode prejudicar os direitos autorais dos criadores e distribuidores do filme. Além disso, você pode se expor a vírus, malwares e outros tipos de ameaças digitais ao acessar sites não confiáveis.</p>
-<p>Por isso, é importante que você saiba como baixar o filme Uma Carta De Amor dublado em 430p de forma legal e segura. Existem algumas opções para isso, como:</p>
-<ul>
-<li>Assinar um serviço de streaming que tenha o filme em seu catálogo, como Netflix, Amazon Prime Video ou HBO Max. Esses serviços cobram uma mensalidade para que você possa assistir a filmes e séries ilimitados em alta qualidade e sem anúncios. Você também pode baixar o filme para assistir offline em seu dispositivo.</li>
-<li>Comprar ou alugar o filme em uma plataforma digital, como Google Play, iTunes ou YouTube. Essas plataformas permitem que você compre ou alugue o filme por um preço acessível e o assista em seu computador, celular ou smart TV. Você também pode baixar o filme para assistir offline em seu dispositivo.</li>
-<li>Usar um programa de torrent para baixar o filme de forma gratuita e anônima. Um torrent é um arquivo que contém as informações necessárias para baixar um conteúdo pela internet. Você precisa de um programa específico para abrir esse arquivo e iniciar o download do filme. Alguns dos programas mais populares são uTorrent, BitTorrent e qBittorrent.</li>
-</ul>
-<p>Cada uma dessas opções tem suas vantagens e desvantagens. Veja a seguir:</p>
-| Opção | Vantagens | Desvantagens | | --- | --- | --- | | Streaming | Legal, seguro, rápido, fácil, variado | Pago, depende da internet, pode não ter o filme desejado | | Plataforma digital | Legal, seguro, rápido, fácil | Pago, depende da internet | | Torrent | Gratuito, anônimo | Ilegal, arriscado, lento, complexo | <h2>O que esperar do filme</h2>
-<p>Agora que você já sabe como baixar o filme Uma Carta De Amor dublado em 430p , vamos falar um pouco sobre o que você pode esperar do filme . O filme é um drama romântico que mistura emoção , aventura e mistério . O filme explora temas como amor , perda , destino e esperança .</p>
-<p>O filme tem algumas cenas memoráveis e emocionantes , como a descoberta da carta na garrafa pela jornalista Theresa (Robin Wright) , o primeiro encontro dela com o autor da carta , Garret (Kevin Costner) , a revelação do segredo por trás das cartas e o desfecho surpreendente do filme .</p>
-<p>Assistir Filme Uma Carta De Amor Dublado Online<br />
-Download Filme Uma Carta De Amor Dublado Torrent<br />
-Ver Filme Uma Carta De Amor Dublado Completo<br />
-Filme Uma Carta De Amor Dublado Grátis<br />
-Filme Uma Carta De Amor Dublado HD<br />
-Filme Uma Carta De Amor Dublado 720p<br />
-Filme Uma Carta De Amor Dublado 1080p<br />
-Filme Uma Carta De Amor Dublado Mega<br />
-Filme Uma Carta De Amor Dublado Google Drive<br />
-Filme Uma Carta De Amor Dublado MP4<br />
-Filme Uma Carta De Amor Dublado AVI<br />
-Filme Uma Carta De Amor Dublado MKV<br />
-Filme Uma Carta De Amor Dublado Legendado<br />
-Filme Uma Carta De Amor Dublado Português<br />
-Filme Uma Carta De Amor Dublado Brasil<br />
-Filme Uma Carta De Amor Dublado Netflix<br />
-Filme Uma Carta De Amor Dublado Amazon Prime<br />
-Filme Uma Carta De Amor Dublado YouTube<br />
-Filme Uma Carta De Amor Dublado Dailymotion<br />
-Filme Uma Carta De Amor Dublado Vimeo<br />
-Filme Uma Carta De Amor Dublado Resumo<br />
-Filme Uma Carta De Amor Dublado Sinopse<br />
-Filme Uma Carta De Amor Dublado Trailer<br />
-Filme Uma Carta De Amor Dublado Elenco<br />
-Filme Uma Carta De Amor Dublado Crítica<br />
-Filme Uma Carta De Amor Dublado Avaliação<br />
-Filme Uma Carta De Amor Dublado Nota<br />
-Filme Uma Carta De Amor Dublado IMDB<br />
-Filme Uma Carta De Amor Dublado Rotten Tomatoes<br />
-Filme Uma Carta De Amor Dublado Metacritic<br />
-Filme Uma Carta De Amor Dublado AdoroCinema<br />
-Filme Uma Carta De Amor Dublado Omelete<br />
-Filme Uma Carta De Amor Dublado Cinepop<br />
-Filme Uma Carta De Amor Dublado Cinema10<br />
-Filme Uma Carta De Amor Dublado Filmow<br />
-Filme Uma Carta De Amor Dublado Letterboxd<br />
-Filme Uma Carta De Amor Dublado JustWatch<br />
-Filme Uma Carta De Amor Dublado Reelgood<br />
-Filme Uma Carta De Amor Dublado Flixable<br />
-Filme Uma Carta De Amor Dublado WhereToWatch<br />
-Como Baixar Filme Uma Carta De Amor Dublado <br />
-Onde Baixar Filme Uma Carta De Amor Dublado <br />
-Porque Baixar Filme Uma Carta De Amor Dublado <br />
-Quando Baixar Filme Uma Carta De Amor Dublado <br />
-Quanto Baixar Filme Uma Carta De Amor Dublado <br />
-Qualidade Baixar Filme Uma Carta De Amor Dublado <br />
-Velocidade Baixar Filme Uma Carta De Amor Dublado <br />
-Segurança Baixar Filme Uma Carta De Amor Dublado <br />
-Site Baixar Filme Uma Carta De Amor Dublado <br />
-Programa Baixar Filme Uma Carta De Amor Dubl</p>
-<p>O filme também tem algumas frases marcantes e inspiradoras , como :</p>
-<blockquote>"O amor verdadeiro é raro e é a única coisa que dá sentido à vida ."</blockquote>
-<blockquote>"Você é meu verdadeiro norte ."</blockquote>
-<blockquote>"Não tenha medo de amar novamente ."</blockquote>
-<p>O filme recebeu críticas mistas dos especialistas e do público . No site Rotten Tomatoes , o filme tem uma nota de 32% dos críticos e de 63% dos espectadores . No site IMDb , o filme tem uma nota de 6.2 de 10 . Algumas das críticas positivas elogiam a atuação dos protagonistas e a fotografia do filme . Algumas das críticas negativas apontam a falta de química entre os personagens e a previsibilidade da história .</p>
-<h2>Conclusão</h2>
-<p>Neste artigo , você aprendeu como baixar o filme Uma Carta De Amor dublado em 430p de forma legal e segura . Você também viu o que esperar do filme em termos de gênero , temas , estilo , cenas e frases .</p>
-<p>Na minha opinião pessoal , o filme é uma boa opção para quem gosta de romances dramáticos e emocionantes . O filme tem uma história envolvente e tocante que faz você refletir sobre o amor e a vida . Eu recomendo que você assista ao filme e tire suas próprias conclusões .</p>
-<p>Se você quiser saber mais sobre o filme Uma Carta De Amor dublado em 430p ou sobre outros filmes relacionados ao tema do amor na garrafa , confira os links abaixo :</p>
-<ul>
-<li><a href="https://www.imdb.com/title/tt0139462/">IMDb - Uma Carta De Amor</a></li>
-<li><a href="https://www.livrariacultura.com.br/p/livros/literatura-internacional/romances/uma-carta-de-amor-221943">Livraria Cultura - Uma Carta De Amor (livro)</a></li>
-<li><a href="https://www.youtube.com/watch?v=0Z8xYMomsDc">YouTube - Trailer Oficial do Filme Uma Carta De Amor</a></li>
-<li><a href="https://www.guiadasemana.com.br/cinema/galeria/filmes-sobre-amores-a-distancia">Guia da Semana - Filmes sobre amores à distância</a></li>
-</ul>
-<h3>Perguntas frequentes</h3>
-<ol>
-<li><b>O que significa dublado 430?</b><br>Dublado significa que o áudio do filme está em português brasileiro. 430 significa que a resolução da imagem do filme é de 430 pixels na vertical.</li>
-<li><b>Quem escreveu as cartas de amor no filme?</b><br>As cartas de amor foram escritas por Garret (Kevin Costner), um construtor de barcos viúvo que morava na Carolina do Norte. Ele escreveu as cartas para sua falecida esposa Catherine e as jogou no mar dentro de garrafas.</li>
-<h3>Perguntas frequentes</h3>
-<ol>
-<li><b>O que significa dublado 430?</b><br>Dublado significa que o áudio do filme está em português brasileiro. 430 significa que a resolução da imagem do filme é de 430 pixels na vertical.</li>
-<li><b>Quem escreveu as cartas de amor no filme?</b><br>As cartas de amor foram escritas por Garret (Kevin Costner), um construtor de barcos viúvo que morava na Carolina do Norte. Ele escreveu as cartas para sua falecida esposa Catherine e as jogou no mar dentro de garrafas.</li>
-<li><b>Qual é o final do filme?</b><br>O final do filme é trágico e surpreendente. Depois de se apaixonarem e enfrentarem alguns conflitos familiares e profissionais , Theresa (Robin Wright) e Garret (Kevin Costner) decidem ficar juntos. No entanto , Garret morre afogado ao tentar resgatar um homem que estava em um barco à deriva durante uma tempestade . Theresa recebe uma última carta dele , que ele havia escrito antes de partir para o mar . A carta revela que Garret sempre soube que Theresa era a jornalista que havia encontrado sua primeira carta e que ele a amava profundamente .</li>
-<li><b>O filme é baseado em um livro?</b><br>Sim , o filme é baseado no livro Uma Carta De Amor , escrito por Nicholas Sparks e publicado em 1998 . O livro foi um best-seller e recebeu elogios da crítica e do público . O livro também inspirou outros filmes do mesmo autor , como Diário de Uma Paixão , Um Amor Para Recordar e Querido John .</li>
-<li><b>O filme tem alguma continuação?</b><br>Não , o filme não tem nenhuma continuação oficial . No entanto , alguns fãs criaram histórias alternativas e fanfics sobre o que poderia ter acontecido depois do final do filme . Você pode encontrar algumas dessas histórias na internet , mas lembre-se de que elas não são canônicas nem autorizadas pelos criadores do filme .</li>
-</ol>
-</p> 0a6ba089eb<br />
-<br />
-<br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Ford ECAT Torrent 49.md
DELETED
@@ -1,103 +0,0 @@
-
-<h1>Ford ECAT Torrent 49: What You Need to Know</h1>
-<p>If you are a Ford dealer or repairer in Europe, you probably know how important it is to have access to the latest and most comprehensive information on Ford spare parts. You want to be able to find the right part for any model, year or vehicle identification number (VIN) quickly and easily. You also want to be able to order parts and services online or offline, depending on your preference and availability. And you want to be able to connect your parts catalog system to other systems and tools that you use in your daily work.</p>
-<h2>Ford ECAT Torrent 49</h2><br /><p><b><b>Download File</b> ⇒ <a href="https://byltly.com/2uKwm5">https://byltly.com/2uKwm5</a></b></p><br /><br />
-<p>That's where Ford ECAT comes in. Ford ECAT is an electronic parts catalog system that provides you with all the features and functions that you need to identify and order parts and services for all European models of Ford cars, jeeps and light commercial vehicles. It is a web-based system that you can access from any browser, as long as you have an internet connection. It also has an offline mode that allows you to work without internet access.</p>
-<p>But how do you get Ford ECAT on your computer? And how do you keep it updated with the latest data? That's where a torrent comes in. A torrent is a file that contains information about other files that are distributed over a peer-to-peer network. By using a torrent client, such as uTorrent or BitTorrent, you can download these files from other users who have them on their computers. This way, you can get large files faster and more efficiently than downloading them from a single server.</p>
-<p>One of these files is Ford ECAT Torrent 49. This is a torrent file that contains the latest version of Ford ECAT, which was released in October 2015. It also contains a patch that allows you to activate the system without needing a license key. By downloading and installing this torrent file, you can get access to the most up-to-date database of Ford spare parts for European market.</p>
-<p>In this article, we will show you how to download and install Ford ECAT Torrent 49 on your computer, how to use it to find and order parts and services for your customers, and what benefits it can bring to your business. Let's get started!</p>
-<h2>How to Download and Install Ford ECAT Torrent 49</h2>
-<p>The first step is to find the torrent file that contains Ford ECAT Torrent 49. You can search for it on various websites that offer torrent files, such as MHH AUTO or NPM. Make sure that you download the file from a reliable source that has positive feedback from other users.</p>
-<p>Ford ECAT Torrent 49 review<br />
-Ford ECAT Torrent 49 price<br />
-Ford ECAT Torrent 49 specs<br />
-Ford ECAT Torrent 49 manual<br />
-Ford ECAT Torrent 49 download<br />
-Ford ECAT Torrent 49 software<br />
-Ford ECAT Torrent 49 installation<br />
-Ford ECAT Torrent 49 activation<br />
-Ford ECAT Torrent 49 crack<br />
-Ford ECAT Torrent 49 keygen<br />
-Ford ECAT Torrent 49 serial number<br />
-Ford ECAT Torrent 49 online<br />
-Ford ECAT Torrent 49 free<br />
-Ford ECAT Torrent 49 full version<br />
-Ford ECAT Torrent 49 latest update<br />
-Ford ECAT Torrent 49 features<br />
-Ford ECAT Torrent 49 benefits<br />
-Ford ECAT Torrent 49 advantages<br />
-Ford ECAT Torrent 49 disadvantages<br />
-Ford ECAT Torrent 49 comparison<br />
-Ford ECAT Torrent 49 alternatives<br />
-Ford ECAT Torrent 49 competitors<br />
-Ford ECAT Torrent 49 best deals<br />
-Ford ECAT Torrent 49 discounts<br />
-Ford ECAT Torrent 49 coupons<br />
-Ford ECAT Torrent 49 offers<br />
-Ford ECAT Torrent 49 promotions<br />
-Ford ECAT Torrent 49 sales<br />
-Ford ECAT Torrent 49 warranty<br />
-Ford ECAT Torrent 49 support<br />
-Ford ECAT Torrent 49 customer service<br />
-Ford ECAT Torrent 49 feedback<br />
-Ford ECAT Torrent 49 testimonials<br />
-Ford ECAT Torrent 49 ratings<br />
-Ford ECAT Torrent 49 rankings<br />
-Ford ECAT Torrent 49 popularity<br />
-Ford ECAT Torrent 49 demand<br />
-Ford ECAT Torrent 49 availability<br />
-Ford ECAT Torrent 49 delivery<br />
-Ford ECAT Torrent 49 shipping<br />
-Ford ECAT Torrent 49 returns<br />
-Ford ECAT Torrent 49 refunds<br />
-Ford ECAT Torrent 49 quality<br />
-Ford ECAT Torrent 49 performance<br />
-Ford ECAT Torrent 49 reliability<br />
-Ford ECAT Torrent 49 durability<br />
-Ford ECAT Torrent 49 compatibility<br />
-Ford ECAT Torrent 49 requirements<br />
-Ford ECAT Torrent 49 troubleshooting<br />
-Ford ECAT Torrent 49 tips</p>
-<p>Once you have downloaded the torrent file, you need to open it with a torrent client. A torrent client is a software that allows you to download files from other users who have them on their computers. There are many torrent clients available online, such as uTorrent or BitTorrent. You can choose the one that suits your preferences and system requirements.</p>
-<p>After opening the torrent file with your torrent client, you will see a list of files that are included in the torrent. These files are:</p>
-<ul>
-<li>Ford Ecat 06.2014.rar: This is a compressed file that contains the installation files for Ford ECAT.</li>
-<li>Ford Ecat Date Rollback.rar: This is another compressed file that contains a batch file and a shortcut that allow you to change your system date temporarily when running Ford ECAT.</li>
-<li>Ecat-6.2014-Torrend.rar: This is the original torrent file that was uploaded by turbo-quattro, who created this torrent.</li>
-</ul>
-<p>You need to select all these files and start the download process. Depending on your internet speed and availability of seeders (users who have completed downloading the files), this may take some time.</p>
-<p>When the download is complete, you need to extract the compressed files using a software such as WinRAR or 7-Zip. You will get two folders: one named "Ford Ecat" and another named "Ecat Date Rollback".</p>
-<p>The next step is to install Ford ECAT on your computer. To do this, follow these steps:</p>
-<ol>
-<li>Open the "Ford Ecat" folder and run "setup.exe" as administrator.</li>
-<li>Follow the instructions on the screen until the installation is complete.</li>
-<li>Open the "Ecat Date Rollback" folder and copy "Ford Ecat.Bat" and "Ford Ecat.Lnk" files.</li>
-<li>Paste these files into your Ford Ecat installation folder (usually C:\Program Files\Ford\Ecat).</li>
-<li>Create a shortcut of "Ford Ecat.Lnk" file on your desktop.</li>
-</ol>
-<p>The final step is to activate Ford ECAT with a patch. To do this, follow these steps:</p>
-<ol>
-<li>Double-click on the shortcut of "Ford Ecat.Lnk" file on your desktop.</li>
-<li>This will automatically change your system date to June 2014 for about 10 seconds, then open your browser and launch Ford ECAT.</li>
-<li>On the login screen, enter any username and password (for example: admin/admin) and click "Login".</li>
-<li>On the main screen, click on "Help" menu and select "About".</li>
-<li>A window will pop up showing your license information. Click on "Change License".</li>
-<li>A new window will open asking for a license key. Click on "Browse" button and locate the patch file named "ECatPatch.exe" in your installation folder (usually C:\Program Files\Ford\Ecat).</li>
-<li>Select this file and click "Open". Then click "OK".</li>
-<li>A message will appear saying that your license has been updated successfully. Click "OK".</li>
-<li>Close all windows and restart your computer.</li>
-</ol>
-<p>Congratulations! You have successfully downloaded and installed Ford ECAT Torrent 49 on your computer. You are now ready to use it for finding and ordering parts and services for your customers.</p>
-<h2>How to Use Ford ECAT Torrent 49</h2>
-<p>Ford ECAT Torrent 49 is an electronic parts catalog system that provides you with all the features and functions that you need to identify and order parts and services for all European models of Ford cars, jeeps and light commercial vehicles. It is a web-based system that you can access from any browser, as long as you have an internet connection. It also has an offline mode that allows you to work without internet access.</p>
-<h3>How to Access the Database of Ford Spare Parts for European Market</h3>
-<p>To access the database of Ford spare parts for European market, follow these steps:</p>
-<ol>
-<li>Double-click on the shortcut of "Ford Ecat.Lnk" file on your desktop.</li>
-<li>This will automatically change your system date to June 2014 for about 10 seconds, then open your browser and launch Ford ECAT.</li>
-<li>On the login screen, enter any username and password (for example: admin/admin) and click "Login".</li>
-<li>On the main screen, click on "Catalogue" button at the top left corner.</li>
-<li>A new window will open showing a list of countries where Ford vehicles are sold. Select your country from the drop-down menu at the top right corner.</li>
-<li>You will see a list of vehicle groups according to their type (cars, je</p> 0a6ba089eb<br />
-<br />
-<br />
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Boost Your TikTok Popularity with TikTok Booster Mod APK - Download Now.md
DELETED
@@ -1,88 +0,0 @@
|
|
1 |
-
|
2 |
-
<h1>Download TikTok Booster Mod APK: How to Get More Likes and Followers on TikTok</h1>
|
3 |
-
<p>TikTok is one of the most popular social media platforms in the world, with over 1 billion users. It allows you to create and share short videos with music, filters, stickers, and more. However, if you want to grow your TikTok account and reach more people, you might need some help. That's where TikTok Booster Mod APK comes in.</p>
|
4 |
-
<h2>download tiktok booster mod apk</h2><br /><p><b><b>Download File</b> ☑ <a href="https://urlin.us/2uT2dS">https://urlin.us/2uT2dS</a></b></p><br /><br />
|
5 |
-
<h2>What is TikTok Booster Mod APK?</h2>
|
6 |
-
<p>TikTok Booster Mod APK is a modified version of the original TikTok app. It offers a range of features that are not available in the original app, such as the ability to download videos, remove ads, and access unlimited likes and followers. Why choose TikTok Booster Mod APK? If you’re looking for an enhanced TikTok experience, then TikTok Booster Mod APK is the way to go.</p>
|
7 |
-
<h3>Features of TikTok Booster Mod APK</h3>
|
8 |
-
<ul>
|
9 |
-
<li><b>Download videos:</b> You can download any video from TikTok with one tap, without any watermark or logo.</li>
|
10 |
-
<li><b>Remove ads:</b> You can enjoy TikTok without any annoying ads or interruptions.</li>
|
11 |
-
<li><b>Unlimited likes and followers:</b> You can get as many likes and followers as you want, without any limit or cost.</li>
|
12 |
-
<li><b>Region unlocked:</b> You can access TikTok from any country or region, without any restrictions or bans.</li>
|
13 |
-
<li><b>No root required:</b> You don't need to root your device to use TikTok Booster Mod APK.</li>
|
14 |
-
</ul>
|
15 |
-
<h3>Benefits of TikTok Booster Mod APK</h3>
|
16 |
-
<ul>
|
17 |
-
<li><b>Increase your popularity:</b> By getting more likes and followers, you can boost your visibility and reach on TikTok. This can help you attract more fans, views, comments, and shares.</li>
|
18 |
-
<li><b>Save your time and money:</b> By using TikTok Booster Mod APK, you don't need to spend hours or dollars on other methods or services to grow your account. You can get instant results with just a few clicks.</li>
|
19 |
-
<li><b>Enjoy more features and fun:</b> By downloading videos and removing ads, you can enhance your TikTok experience and have more fun. You can also explore more content and creators from different regions and genres.</li>
|
20 |
-
</ul>
|
21 |
-
<h2>How to Download and Install TikTok Booster Mod APK?</h2>
|
22 |
-
<p>If you want to download and install TikTok Booster Mod APK on your device, you need to follow these simple steps:</p>
|
23 |
-
<h3>Step 1: Enable Unknown Sources</h3>
|
24 |
-
<p>Before you can install any third-party app on your device, you need to enable unknown sources in your settings. To do this, go to Settings > Security > Unknown Sources and toggle it on.</p>
|
25 |
-
<h3>Step 2: Download the APK File</h3>
|
26 |
-
<p>Next, you need to download the APK file of TikTok Booster Mod APK from a reliable source. You can use this link to download it directly from 5Play.app. Alternatively, you can use this link to download another app called TikBooster that offers similar features.</p>
|
27 |
-
<h3>Step 3: Install the APK File</h3>
|
28 |
-
<p>Once you have downloaded the APK file, you need to locate it in your file manager and tap on it. Then, follow the instructions on the screen to install the app on your device.</p>
|
29 |
-
<h2>How to Use TikTok Booster Mod APK?</h2>
|
30 |
-
<p>After you have installed the app on your device, you can start using it to get more likes and followers on TikTok. Here's how:</p>
|
31 |
-
<h3>Step 1: Launch the App</h3>
<p>Open the app and log in with your TikTok account. You will see a dashboard with different options for boosting your account.</p>
<h3>Step 2: Choose Your Boost Option</h3>
<p>You can choose from three boost options: likes, followers, or views. Tap the option you want and select the amount. For example, to get 1,000 likes, tap the likes option and slide the bar to 1,000.</p>
<h3>Step 3: Enjoy Your Results</h3>
<p>After selecting your boost option and amount, tap the start button and wait a few minutes. The app will automatically send likes, followers, or views to your account. You can then open TikTok and see the results for yourself.</p>
<h2>Conclusion</h2>
<p>TikTok Booster Mod APK is aimed at anyone who wants to grow their TikTok account and get more popularity. It offers features that are not available in the official app, such as downloading videos, removing ads, and getting unlimited likes and followers. It is easy to download, install, and use, and it works on any device without root. To download TikTok Booster Mod APK, use the links provided in this article and follow the steps explained above. We hope you enjoy the app and achieve your TikTok goals.</p>
<h2>FAQs</h2>
<ul>
<li><b>Is TikTok Booster Mod APK safe?</b> Yes, TikTok Booster Mod APK is safe to use. It does not contain viruses or malware and does not harm your device or account. However, you should always download it from a trusted source and use it at your own risk.</li>
<li><b>Is TikTok Booster Mod APK legal?</b> No. It violates TikTok's terms and conditions and may result in your account being suspended or banned, so use it with caution and discretion.</li>
<li><b>Does TikTok Booster Mod APK work on iOS devices?</b> No, it only works on Android devices. On iOS, you can try similar apps such as TokBooster or TokLiker.</li>
<li><b>How long does it take to get likes and followers from TikTok Booster Mod APK?</b> It depends on the amount you select and the speed of your internet connection. Usually, the boost arrives within a few minutes.</li>
<li><b>Can I use TikTok Booster Mod APK with multiple accounts?</b> Yes, but you need to log out of one account before logging in with another.</li>
</ul>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Clash of Clans for Android 4.4 2 Unlock New Features and Characters.md
DELETED
@@ -1,209 +0,0 @@
<h1>Clash of Clans for Android 4.4.2 Free Download: Everything You Need to Know</h1>
<p>If you are looking for a fun, addictive, and challenging strategy game to play on your Android device, you might want to check out Clash of Clans. It is one of the most popular and successful games in the world, with millions of players worldwide. In this article, we will tell you everything you need to know about downloading Clash of Clans for Android 4.4.2 for free: what the game is about, how to download and install it, how to play and enjoy it, and how to troubleshoot common issues. Let's get started!</p>
<h2>What is Clash of Clans?</h2>
<h3>A brief introduction to the game and its features</h3>
<p>Clash of Clans is a freemium mobile strategy game developed and published by Supercell, a Finnish game company. The game was released in 2012 for iOS devices and in 2013 for Android devices. It is set in a fantasy world where you can build your own village, raise a clan, and compete in epic clan wars with other players. You can also explore new lands, discover new characters, and collect resources and loot from other players.</p>
<h2>clash of clans for android 4.4.2 free download</h2><br /><p><b><b>DOWNLOAD</b> … <a href="https://urlin.us/2uSZUX">https://urlin.us/2uSZUX</a></b></p><br /><br />
<p>The game has many features that make it fun and engaging, such as:</p>
<ul>
<li>Joining a clan of fellow players or starting your own and inviting friends.</li>
<li>Fighting in clan wars as a team against millions of active players across the globe.</li>
<li>Testing your skills in the competitive clan war leagues and proving you're the best.</li>
<li>Forging alliances and working together with your clan in clan games to earn valuable magic items.</li>
<li>Planning your unique battle strategy with countless combinations of spells, troops, and heroes.</li>
<li>Competing with the best players from around the world and rising to the top of the leaderboard in legend league.</li>
<li>Collecting resources and stealing loot from other players to upgrade your own village and turn it into a stronghold.</li>
<li>Defending against enemy attacks with a multitude of towers, cannons, bombs, traps, mortars, and walls.</li>
<li>Unlocking epic heroes like the barbarian king, archer queen, grand warden, royal champion, and battle machine.</li>
<li>Researching upgrades in your laboratory to make your troops, spells, and siege machines even more powerful.</li>
<li>Creating your own custom PVP experiences through friendly challenges, friendly wars, and special live events.</li>
<li>Watching clanmates attack and defend in real time as a spectator, or checking out the video replays.</li>
<li>Fighting against the goblin king in a single-player campaign mode through the realm.</li>
<li>Learning new tactics and experimenting with your army and clan castle troops in practice mode.</li>
<li>Journeying to the builder base and discovering new buildings and characters in a mysterious world.</li>
<li>Turning your builder base into an unbeatable fortress and defeating rival players in versus battles.</li>
<li>Collecting exclusive hero skins and sceneries to customize your village.</li>
</ul>
<h3>The benefits of playing Clash of Clans on Android 4.4.2</h3>
<p>Playing Clash of Clans on Android 4.4.2 has many benefits, such as:</p>
<ul>
<li>Enjoying the game on a large, clear screen with high-resolution graphics.</li>
<li>Having a smooth and fast gameplay experience with no lag or glitches.</li>
<li>Being able to access the game anytime and anywhere with a stable internet connection.</li>
<li>Getting regular updates and new features from the developers without delays or compatibility issues.</li>
<li>Using less storage space and battery power than newer versions of Android require.</li>
<li>Being able to use various apps and tools to enhance your game performance, such as screen recorders, game boosters, and cheat engines.</li>
<li>Being part of a huge and active community of players who share the same passion and love for the game.</li>
</ul>
<h2>How to download and install Clash of Clans for Android 4.4.2</h2>
<h3>The requirements and compatibility of the game</h3>
<p>Before you download and install Clash of Clans for Android 4.4.2, make sure that your device meets the minimum requirements and is compatible with the game. Here are the specifications to check:</p>
<table>
<tr><th>Requirement</th><th>Specification</th></tr>
<tr><td>Operating system</td><td>Android 4.4 or higher</td></tr>
<tr><td>Processor</td><td>1.2 GHz or higher</td></tr>
<tr><td>Memory</td><td>1 GB RAM or higher</td></tr>
<tr><td>Storage</td><td>100 MB free space or more</td></tr>
<tr><td>Internet connection</td><td>Wi-Fi, 3G, 4G, or 5G</td></tr>
<tr><td>Google Play services</td><td>Enabled and updated</td></tr>
<tr><td>Screen size</td><td>800x480 pixels or higher</td></tr>
</table>
<p>If your device meets these requirements, you can proceed to download and install the game. If not, you might need to upgrade your device or look for other alternatives.</p>
<h3>The steps to download and install the game from the Google Play Store</h3>
<p>The easiest and safest way to download and install Clash of Clans for Android 4.4.2 is from the official Google Play Store. Here are the steps to follow:</p>
<ol>
<li>Open the Google Play Store app on your device or go to <a href="https://play.google.com/store">https://play.google.com/store</a> in your browser.</li>
<li>Search for "Clash of Clans" in the search bar or tap on the magnifying glass icon.</li>
<li>Select the game from the list of results and tap on the green "Install" button.</li>
<li>Wait for the game to download and install on your device. You might need to grant some permissions and accept some terms and conditions.</li>
<li>Once the game is installed, tap on the "Open" button to launch it, or find it on your home screen or app drawer.</li>
<li>Enjoy playing Clash of Clans on your Android 4.4.2 device!</li>
</ol>
<h3>The alternative ways to download and install the game from other sources</h3>
<p>If you cannot access the Google Play Store or prefer to download and install the game from other sources, you can also try these alternatives:</p>
<ul>
<li>Download and install the game from the official Supercell website. Go to <a href="https://supercell.com/en/games/clashofclans/">https://supercell.com/en/games/clashofclans/</a> and tap on the "Download Now" button. You will be redirected to the Google Play Store or a third-party app store depending on your location and device. Follow the on-screen instructions to complete the installation.</li>
<li>Download and install the game from a trusted third-party app store. Many app stores offer Clash of Clans for Android 4.4.2, such as Aptoide, APKPure, and Uptodown. However, make sure that the app store is reliable and secure. You also need to enable the "Unknown sources" option in your device settings to allow the installation of apps from outside the Google Play Store: go to Settings > Security > Unknown sources and toggle it on. Then open the app store of your choice, search for "Clash of Clans", and download and install the game as usual.</li>
<li>Download and install the game from an APK file. An APK file is a package that contains all the files and data needed to run an app on an Android device. Many websites offer Clash of Clans APK files for Android 4.4.2, such as APKMirror, APKMonk, and APKFab. Again, make sure that the website is trustworthy and safe, and enable the "Unknown sources" option as described above. Then download the APK file to your device, locate it with a file manager app or your browser, and tap on it to install it.</li>
</ul>
<p>Note: These alternative methods are not recommended by Supercell or Google and may pose risks to your device and data security. You should always download and install apps from official sources whenever possible.</p>
<h2>How to play and enjoy Clash of Clans on Android 4.4.2</h2>
<h3>The basic gameplay and objectives of the game</h3>
<p>Clash of Clans is a game that requires strategy, skill, and creativity. The basic gameplay and objectives are as follows:</p>
<ul>
<li>You start the game with a small village that you can customize and expand with various buildings, such as the town hall, barracks, army camp, gold mine, and elixir collector.</li>
<li>You can train different types of troops, such as barbarians, archers, giants, wizards, and dragons. You can also unlock and upgrade powerful heroes, such as the barbarian king, archer queen, grand warden, royal champion, and battle machine.</li>
<li>You can use your troops and heroes to attack other players' villages and loot their resources: gold, elixir, and dark elixir. You can also use spells and siege machines to support your attacks.</li>
<li>You can use your resources to upgrade your buildings, troops, heroes, spells, and siege machines, and research new technologies in your laboratory to make them stronger.</li>
<li>You can join or create a clan of up to 50 players who can chat, donate troops, and participate in clan wars and clan games. You can also compete in clan war leagues and legend league to earn rewards and glory.</li>
<li>You can defend your village from enemy attacks with various defenses, such as cannons, archer towers, mortars, air defenses, inferno towers, and the eagle artillery. You can also set up traps and walls to slow down or damage invaders.</li>
<li>You can explore new lands and discover new characters in the builder base, where you build a second village with different buildings, troops, heroes, and defenses and fight other players in versus battles to earn trophies and resources.</li>
</ul>
<h3>The tips and tricks to improve your skills and strategies</h3>
<p>Clash of Clans requires a lot of planning and thinking. Here are some tips and tricks that can help you improve your skills and strategies:</p>
<ul>
<li>Always keep your builders busy. Builders are essential for upgrading your buildings and making your village stronger. You should always have a builder available for the next upgrade, or save some gems to buy more builders.</li>
<li>Balance your offense and defense. Do not neglect either side when upgrading your village: you need a strong offense to attack other players and earn resources, and a strong defense to protect your village and resources from enemy attacks.</li>
<li>Choose your targets wisely. Do not attack every player you see on the map. Scout their village first and see if they have enough resources to loot or weak defenses that you can exploit. Also check their clan castle for troops that could counter your attack.</li>
<li>Use the right troops for the right situation. Do not use the same troops for every attack. Vary your army composition depending on the enemy's base layout, defenses, traps, clan castle troops, and heroes, and use spells and siege machines that complement your troops and help them break through the enemy's defenses.</li>
<li>Plan your attack before you launch it. Do not rush in without a clear strategy. Study the enemy's base carefully and identify the best entry point, the best targets for your heroes or siege machines, and the best placement for your spells. Also consider the time limit and the percentage of destruction that you need to achieve.</li>
<li>Join an active and friendly clan. A clan is not only a social group but also a source of support and learning. Join a clan that matches your level of activity, skill, interest, and goals; contribute by donating troops and participating in clan wars and clan games; respect the clan rules; and communicate with your clan leaders and members.</li>
<li>Have fun and enjoy the game. Clash of Clans can be very rewarding and satisfying, but also frustrating at times. Play for fun and entertainment rather than out of compulsion, take breaks from the game, and do not let it affect your mood or health.</li>
</ul>
<h3>The best clans and players to join and follow</h3>
<p>If you want to improve your game and learn from the best, you might want to join and follow some of the best clans and players in Clash of Clans. Here are some of the most famous and successful ones:</p>
<ul>
<li><strong>Team Queso</strong>: A professional esports team that competes in various games, including Clash of Clans. They won the Clash of Clans World Championship 2021, defeating ATN.aTTaX in the grand final, and field some of the best players in the world, such as iAmJP, zzzzz, Yoyo23, and Marinel.</li>
<li><strong>Tribe Gaming</strong>: Another professional esports team that competes in various games, including Clash of Clans. They were the runners-up of the Clash of Clans World Championship 2020, losing to Nova Esports in the grand final, and their roster includes players such as Eve Check, Eve Maxi, Lexnos, and Itsu.</li>
<li><strong>Clash with Eric - OneHive</strong>: A YouTube channel and clan run by Eric, a popular content creator and skilled player. He uploads videos of his attacks, strategies, tips, and guides, streams live on Twitch, and participates in tournaments and events. He leads OneHive, a competitive clan that has been around since 2014.</li>
<li><strong>Judo Sloth Gaming</strong>: A YouTube channel and clan run by Judo Sloth, who likewise posts attack videos, strategies, tips, and guides, streams live on Twitch, and competes in tournaments and events. He leads Judo Sloth Gaming, a competitive clan that has been around since 2016.</li>
<li><strong>Clash Bashing!!</strong>: A YouTube channel and clan run by Bash, who also posts attack videos, strategies, tips, and guides, streams live on Twitch, and competes in tournaments and events. He leads Clash Bashing!!, a competitive clan that has been around since 2017.</li>
</ul>
<h2>How to troubleshoot and solve common issues with Clash of Clans on Android 4.4 2</h2>
|
171 |
-
<h3>The possible causes and solutions for crashes, freezes, and errors</h3>
|
172 |
-
<p>Sometimes, you might encounter some issues with Clash of Clans on Android 4.4 2 that can affect your game performance or experience. Some of the common issues are crashes, freezes, errors, loading problems, connection problems, etc. Here are some of the possible causes and solutions for these issues:</p>
|
173 |
-
<ul>
|
174 |
-
<li>Your device does not meet the minimum requirements or is incompatible with the game. You should check your device specifications and compare them with the game requirements as mentioned above. You should also update your device software if possible or look for other alternatives.</li>
|
175 |
-
<li>Your device has low storage space or memory. You should clear some space on your device by deleting unwanted files or apps or moving them to an external storage device such as an SD card. You should also close other apps or processes that are running in the background or restart your device to free up some memory.</li>
|
176 |
-
<li>Your device has low battery life or is overheating. You should charge your device or plug it into a power source if it has low battery life or turn it off for a while if it is overheating. You should also avoid playing the game for long periods of time or in high temperatures.</li>
|
177 |
-
<li>Your internet connection is slow or unstable. You should check your internet connection speed and stability by using a speed test app or website or contacting your service provider. You should also switch to a different network if possible or move closer to your router or modem if you are using Wi-Fi.</li>
|
178 |
-
<li>Your game app is outdated or corrupted. You should update your game app to the latest version by going to the Google Play Store or other sources as mentioned above. You should also clear your game cache or data by going to Settings > Apps > Clash of Clans > Storage > Clear cache or Clear data. You should also uninstall and reinstall your game app if it is corrupted or damaged.</li>
|
179 |
-
<li>Your Google Play services are outdated or disabled. You should update your Google Play services to the latest version by going to the Google Play Store or other sources as mentioned above. You should also enable your Google Play services by going to Settings > Apps > Google Play services > Enable or Activate.</li>
|
180 |
-
<li>Your device or game settings are incorrect or incompatible. You should check your device settings and make sure that they are compatible with the game, such as the date and time, the language, the region, etc. You should also check your game settings and make sure that they are optimal for your device, such as the graphics, the sound, the notifications, etc.</li>
|
181 |
-
</ul>
|
182 |
-
<h3>The ways to contact the support team and get help</h3>
|
183 |
-
<p>If none of the above solutions work for you or if you have any other questions or issues with the game, you can contact the support team and get help. Here are some of the ways that you can do that:</p>
|
184 |
-
<ul>
|
185 |
-
<li>Use the in-game support feature. You can access this feature by tapping on the settings icon in the game and then tapping on the help and support button. You can then browse through the FAQs and topics or tap on the contact us button to send a message to the support team.</li>
|
186 |
-
<li>Use the official website of Supercell. You can go to <a href="">https://supercell.com/en/support/</a> and select Clash of Clans from the list of games. You can then browse through the FAQs and topics or tap on the contact us button to send a message to the support team.</li>
|
187 |
-
<li>Use the official forums of Supercell. You can go to <a href="">https://forum.supercell.com/forumdisplay.php/4-Clash-of-Clans</a> and join the community of players and moderators. You can then post your questions or issues in the relevant sections or threads or send a private message to a moderator.</li>
|
188 |
-
<li>Use the official social media accounts of Supercell. You can follow Supercell on Facebook, Twitter, Instagram, YouTube, Reddit, Discord, and more. You can then send a direct message or comment on their posts with your questions or issues.</li>
|
189 |
-
</ul>
|
190 |
-
<h3>The FAQs and resources to learn more about the game</h3>
|
191 |
-
<p>If you want to learn more about Clash of Clans and its features, updates, events, tips, guides, etc., you can check out these FAQs and resources:</p>
|
192 |
-
<ul>
|
193 |
-
<li><strong>What are gems and how can I get them?</strong> Gems are the premium currency of Clash of Clans that can be used to speed up upgrades, buy resources, boost production, train troops, etc. You can get gems by completing achievements, removing obstacles, opening gem boxes, winning clan games, participating in events, etc. You can also buy gems with real money through in-app purchases.</li>
|
194 |
-
<li><strong>What are clans and how can I join one?</strong> Clans are groups of up to 50 players who can chat, donate troops, and participate in clan wars and clan games. You can join a clan by searching for one in the game or by accepting an invitation from another player. You can also create your own clan by spending 40,000 gold and inviting other players.</li>
|
195 |
-
<li><strong>What are clan wars and how can I participate in them?</strong> Clan wars are competitive events where two clans face each other in a series of attacks and defenses. Each clan member can attack twice during a war and earn stars based on the percentage of destruction they cause. The clan with more stars at the end of the war wins and gets a war loot bonus. You can participate in clan wars by being a member of a clan that is eligible for war and by having your war preference set to on.</li>
|
196 |
-
<li><strong>What are clan war leagues and how can I participate in them?</strong> Clan war leagues are competitive events where eight clans compete in a round-robin format over seven days. Each clan member can attack once per day and earn stars based on the percentage of destruction they cause. The clans are ranked based on their total stars at the end of each day and receive league medals based on their final rank at the end of the event. You can participate in clan war leagues by being a member of a clan that is eligible for war and by having your war preference set to on.</li>
|
197 |
-
<li><strong>What are clan games and how can I participate in them?</strong> Clan games are cooperative events where clan members complete various challenges and earn points for their clan. The more points the clan earns, the higher the reward tier they unlock. The rewards include magic items, resources, gems, etc. You can participate in clan games by being a member of a clan that is eligible for games and by completing at least one challenge.</li>
|
198 |
-
<li><strong>What are magic items and how can I use them?</strong> Magic items are special items that can provide various benefits and advantages in the game, such as speeding up upgrades, boosting production, increasing resources, etc. You can get magic items by winning clan games, participating in events, reaching certain league levels, etc. You can use magic items by tapping on the magic item icon in the game and selecting the item you want to use.</li>
|
199 |
-
</ul>
|
200 |
-
<p>For more FAQs and resources, you can visit the following links:</p>
|
201 |
-
<ul>
|
202 |
-
<li><a href="">https://supercell.helpshift.com/a/clash-of-clans/?l=en</a>: The official help and support page of Supercell for Clash of Clans.</li>
|
203 |
-
<li><a href="">https://clashofclans.com/blog/</a>: The official blog of Supercell for Clash of Clans.</li>
|
204 |
-
<li><a href="">https://clashofclans.fandom.com/wiki/Clash_of_Clans_Wiki</a>: The unofficial wiki of Clash of Clans.</li>
|
205 |
-
</ul>
|
206 |
-
<h2>Conclusion</h2>
|
207 |
-
<p>Clash of Clans is a game that can provide you with hours of fun and entertainment. It is a game that can challenge your mind and test your skills. It is a game that can connect you with millions of players around the world. It is a game that you can play on your Android 4.4 2 device for free. If you are interested in playing Clash of Clans on your Android 4.4 2 device, you can follow the steps and tips that we have provided in this article. We hope that you have found this article helpful and informative. Thank you for reading and happy clashing!</p> 197e85843d<br />
spaces/1phancelerku/anime-remove-background/Download METAMORPHOSIS by INTERWORLD - The Hottest PHONK Track of the Year.md
DELETED
@@ -1,119 +0,0 @@
|
|
1 |
-
<br />
|
2 |
-
<h1>How to Download Metamorphosis by Interworld - A Guide for Phonk Lovers</h1>
|
3 |
-
<p>If you are a fan of phonk music, you might have heard of <strong>Metamorphosis</strong> by <strong>Interworld</strong>, a catchy and aggressive song that has been making waves in the phonk scene. But how can you download this song and enjoy it offline? In this article, we will show you what Metamorphosis by Interworld is, why you should download it, and how to download it from different platforms.</p>
|
4 |
-
<h2>metamorphosis song download</h2><br /><p><b><b>Download</b> · <a href="https://jinyurl.com/2uNRZ9">https://jinyurl.com/2uNRZ9</a></b></p><br /><br />
|
5 |
-
<h2>What is Metamorphosis by Interworld?</h2>
|
6 |
-
<p>Metamorphosis by Interworld is a song that was released on November 25th, 2021, as part of the album <em>Metamorphosis</em>. It is a phonk song, which is a subgenre of rap music that combines elements of trap, Memphis rap, cloud rap, and vaporwave. Phonk songs typically feature distorted vocals, heavy bass, lo-fi samples, and dark themes.</p>
|
7 |
-
<h3>The meaning of the song</h3>
|
8 |
-
<p>The lyrics of Metamorphosis by Interworld are about power, wealth, and success in the rap game. The protagonist boasts of his multiple romantic partners, his fame, and his ability to control and manipulate others. He also challenges his rivals and claims that he is the best man in the business. The song is a display of confidence and dominance, as well as a reflection of the dark side of the rap industry.</p>
|
9 |
-
<h3>The genre and style of the song</h3>
|
10 |
-
<p>The song is a phonk song, which means that it has a fast tempo, a heavy bassline, and a distorted vocal delivery. The song also features samples from other songs, such as "Deep Throats" by DJ Live Wire, which adds to the lo-fi and nostalgic vibe of the song. The song has a catchy hook and a simple structure, which makes it easy to remember and sing along to.</p>
|
11 |
-
<h3>The popularity and reception of the song</h3>
|
12 |
-
<p>The song has been well-received by phonk fans and critics alike. It has over 30 million views on YouTube, over 6 million streams on SoundCloud, and over 4 million streams on Spotify. It has also been featured on various phonk playlists and channels, such as Chill Nation, Phonk City, and Coversart. The song has been praised for its catchy melody, its aggressive lyrics, and its high-quality production.</p>
|
13 |
-
<h2>Why should you download Metamorphosis by Interworld?</h2>
|
14 |
-
<p>If you like phonk music, or if you are curious about this genre, you should definitely download Metamorphosis by Interworld. Here are some reasons why:</p>
|
15 |
-
<h3>The benefits of downloading music</h3>
|
16 |
-
<p>Downloading music has many advantages over streaming music online. Some of them are:</p>
|
17 |
-
<p>metamorphosis by franz kafka audiobook download<br />
|
18 |
-
interworld metamorphosis trap nation download<br />
|
19 |
-
metamorphosis soundcloud free download<br />
|
20 |
-
metamorphosis by hilary duff mp3 download<br />
|
21 |
-
metamorphosis by papa roach download<br />
|
22 |
-
metamorphosis by linkin park download<br />
|
23 |
-
metamorphosis by kelly clarkson download<br />
|
24 |
-
metamorphosis by lindsay lohan download<br />
|
25 |
-
metamorphosis by philip glass download<br />
|
26 |
-
metamorphosis by bts download<br />
|
27 |
-
metamorphosis by iron butterfly download<br />
|
28 |
-
metamorphosis by yeng constantino download<br />
|
29 |
-
metamorphosis by ryan leslie download<br />
|
30 |
-
metamorphosis by blue stahli download<br />
|
31 |
-
metamorphosis by gackt download<br />
|
32 |
-
metamorphosis by ayumi hamasaki download<br />
|
33 |
-
metamorphosis by marilyn manson download<br />
|
34 |
-
metamorphosis by david garrett download<br />
|
35 |
-
metamorphosis by the rolling stones download<br />
|
36 |
-
metamorphosis by incubus download<br />
|
37 |
-
metamorphosis by korn download<br />
|
38 |
-
metamorphosis by nightwish download<br />
|
39 |
-
metamorphosis by edgar winter download<br />
|
40 |
-
metamorphosis by the alan parsons project download<br />
|
41 |
-
metamorphosis by the cinematic orchestra download<br />
|
42 |
-
metamorphosis by the cranberries download<br />
|
43 |
-
metamorphosis by the cure download<br />
|
44 |
-
metamorphosis by the smashing pumpkins download<br />
|
45 |
-
metamorphosis by the strokes download<br />
|
46 |
-
metamorphosis by the who download<br />
|
47 |
-
metamorphosis musical soundtrack download<br />
|
48 |
-
metamorphosis piano sheet music download<br />
|
49 |
-
metamorphosis guitar tab pdf download<br />
|
50 |
-
metamorphosis violin solo mp3 download<br />
|
51 |
-
metamorphosis flute duet midi download<br />
|
52 |
-
metamorphosis rock opera video download<br />
|
53 |
-
metamorphosis remix album zip download<br />
|
54 |
-
metamorphosis live concert dvd download<br />
|
55 |
-
metamorphosis unplugged session mp3 download<br />
|
56 |
-
metamorphosis acoustic version mp4 download<br />
|
57 |
-
metamorphosis instrumental karaoke mp3 download<br />
|
58 |
-
metamorphosis original demo tape wav download<br />
|
59 |
-
metamorphosis radio edit mp3 320kbps download <br />
|
60 |
-
metamorphosis extended mix flac 24bit 96khz download</p>
|
61 |
-
<ul>
|
62 |
-
<li>You can listen to your favorite songs offline, without worrying about internet connection or data usage.</li>
|
63 |
-
<li>You can enjoy better sound quality, as downloaded files are usually higher in bitrate than streamed ones.</li>
|
64 |
-
<li>You can create your own playlists and organize your music library according to your preferences.</li>
|
65 |
-
<li>You can support your favorite artists by buying their music or using legal platforms that pay them royalties.</li>
|
66 |
-
</ul>
|
67 |
-
<h3>The reasons to choose Metamorphosis by Interworld</h3>
|
68 |
-
<p>Metamorphosis by Interworld is a great song to download for several reasons. Some of them are:</p>
|
69 |
-
<ul>
|
70 |
-
<li>It is a catchy and energetic song that will make you want to dance and sing along.</li>
|
71 |
-
<li>It is a unique and original song that showcases the creativity and talent of Interworld, one of the rising stars of the phonk scene.</li>
|
72 |
-
<li>It is a song that explores the themes of power, wealth, and success in the rap industry, which can be interesting and relatable for many listeners.</li>
|
73 |
-
<li>It is a song that represents the phonk genre, which is a diverse and innovative subgenre of rap music that deserves more recognition and appreciation.</li>
|
74 |
-
</ul>
|
75 |
-
<h2>How to download Metamorphosis by Interworld?</h2>
|
76 |
-
<p>Now that you know what Metamorphosis by Interworld is and why you should download it, you might be wondering how to do it. There are different platforms and methods that you can use to download this song, depending on your preferences and devices. Here are some of the most popular ones:</p>
|
77 |
-
<h3>The steps to download Metamorphosis by Interworld from YouTube</h3>
|
78 |
-
<p>YouTube is one of the most popular platforms to watch and listen to music videos online. However, YouTube does not allow you to download videos directly from its website or app. You will need to use a third-party tool or website that can convert YouTube videos into audio files. Here are the steps to do it:</p>
|
79 |
-
<ol>
|
80 |
-
<li>Go to YouTube and search for Metamorphosis by Interworld. You can use this link to go directly to the official video.</li>
|
81 |
-
<li>Copy the URL of the video from the address bar of your browser.</li>
|
82 |
-
<li>Go to a YouTube to MP3 converter website, such as YTMP3, MP3FY, or 4K Video Downloader. Paste the URL of the video into the input box and click on the convert or download button.</li>
|
83 |
-
<li>Wait for the conversion process to finish. You will see a download link or button that will allow you to save the audio file on your device.</li>
|
84 |
-
<li>Enjoy listening to Metamorphosis by Interworld offline!</li>
|
85 |
-
</ol>
|
86 |
-
<h3>The steps to download Metamorphosis by Interworld from SoundCloud</h3>
|
87 |
-
<p>SoundCloud is another popular platform to listen to music online. It is especially known for hosting independent and underground artists, such as Interworld. SoundCloud also does not allow you to download songs directly from its website or app, unless the artist has enabled the download option. You will need to use a third-party tool or website that can download SoundCloud songs. Here are the steps to do it:</p>
|
88 |
-
<ol>
|
89 |
-
<li>Go to SoundCloud and search for Metamorphosis by Interworld. You can use this link to go directly to the official song.</li>
|
90 |
-
<li>Copy the URL of the song from the address bar of your browser.</li>
|
91 |
-
<li>Go to a SoundCloud downloader website, such as SCDL, KlickAud, or SingleMango. Paste the URL of the song into the input box and click on the download button.</li>
|
92 |
-
<li>Wait for the download process to finish. You will see a download link or button that will allow you to save the audio file on your device.</li>
|
93 |
-
<li>Enjoy listening to Metamorphosis by Interworld offline!</li>
|
94 |
-
</ol>
|
95 |
-
<h3>The steps to download Metamorphosis by Interworld from Spotify</h3>
|
96 |
-
<p>Spotify is one of the most popular platforms to stream music online. It has a huge library of songs, albums, playlists, and podcasts, including Metamorphosis by Interworld. Spotify allows you to download songs for offline listening, but only if you have a premium subscription. If you have a free account, you will need to use a third-party tool or website that can download Spotify songs. Here are the steps to do it:</p>
|
97 |
-
<ol>
|
98 |
-
<li>Go to Spotify and search for Metamorphosis by Interworld. You can use this link to go directly to the official song.</li>
|
99 |
-
<li>If you have a premium subscription, you can simply tap on the download button next to the song title and wait for it to be downloaded on your device. If you have a free account, you will need to copy the URL of the song from the share option or from your browser.</li>
|
100 |
-
<li>Go to a Spotify downloader website, such as TuneFab, NoteBurner, or Sidify. Paste the URL of the song into the input box and click on the download or convert button.</li>
|
101 |
-
<li>Wait for the download or conversion process to finish. You will see a download link or button that will allow you to save the audio file on your device.</li>
|
102 |
-
<li>Enjoy listening to Metamorphosis by Interworld offline!</li>
|
103 |
-
</ol>
|
104 |
-
<h2>Conclusion</h2>
|
105 |
-
<p>Metamorphosis by Interworld is a phonk song that you should definitely download and listen to. It is a catchy and aggressive song that explores the themes of power, wealth, and success in the rap industry. It is also a unique and original song that showcases the creativity and talent of Interworld, one of the rising stars of the phonk scene. You can download this song from different platforms, such as YouTube, SoundCloud, and Spotify, using third-party tools or websites. We hope that this guide has helped you to download Metamorphosis by Interworld and enjoy it offline.</p>
|
106 |
-
<h2>FAQs</h2>
|
107 |
-
<p>Here are some frequently asked questions about Metamorphosis by Interworld and how to download it:</p>
|
108 |
-
<h3>Q: Who is Interworld?</h3>
|
109 |
-
<p>A: Interworld is a phonk artist from Russia. He started making music in 2019 and has released several albums and singles, such as <em>Metamorphosis</em>, <em>Phonkadelic</em>, and <em>Interdimensional</em>. He is known for his distinctive voice, his dark and gritty lyrics, and his high-quality production.</p>
|
110 |
-
<h3>Q: What is phonk music?</h3>
|
111 |
-
<p>A: Phonk music is a subgenre of rap music that combines elements of trap, Memphis rap, cloud rap, and vaporwave. Phonk music typically features distorted vocals, heavy bass, lo-fi samples, and dark themes. Phonk music originated in the 1990s in Memphis, Tennessee, and was influenced by artists such as Three 6 Mafia, Tommy Wright III, and DJ Screw. Phonk music has gained popularity in recent years thanks to online platforms such as YouTube and SoundCloud.</p>
|
112 |
-
<h3>Q: How can I find more phonk songs?</h3>
|
113 |
-
<p>A: If you like phonk music, you can find more phonk songs by following phonk artists, playlists, and channels on different platforms. Some of the most popular phonk artists are DJ Yung Vamp, Mythic, Baker Ya Maker, Soudiere, and Freddie Dredd. Some of the most popular phonk playlists are Phonk Nation on Spotify, Phonk City on YouTube, and Phonk Radio on SoundCloud. Some of the most popular phonk channels are Chill Nation, Coversart, and TrillPhonk on YouTube.</p>
|
114 |
-
<h3>Q: Is downloading music from YouTube, SoundCloud, or Spotify legal?</h3>
|
115 |
-
<p>A: Downloading music from YouTube, SoundCloud, or Spotify using third-party tools or websites may violate their terms of service or the copyright laws of your country. You should always respect the rights of the artists and the platforms and use legal ways to download music. You can buy music from online stores such as iTunes or Amazon, or use platforms that offer legal downloads such as Bandcamp or Audiomack.</p>
|
116 |
-
<h3>Q: How can I support Interworld and other phonk artists?</h3>
|
117 |
-
<p>A: If you like Interworld and other phonk artists, you can support them by buying their music or merch from their official websites or online stores. You can also stream their music on platforms that pay them royalties, such as Spotify or Apple Music. You can also follow them on social media such as Instagram or Twitter and share their music with your friends and family.</p>
spaces/1phancelerku/anime-remove-background/Experience a Fun and Interactive Online World with Play Together Mod Apk.md
DELETED
@@ -1,112 +0,0 @@
|
|
1 |
-
<br />
|
2 |
-
<h1>Play Together Mod APK: A Fun and Social Online Game</h1>
|
3 |
-
<p>Do you love playing online games with your friends? Do you want to experience a virtual world where you can do anything you want? If yes, then you should try Play Together Mod APK, a multiplayer online game that lets you interact with other players in real-time. You can create your own avatar, customize your appearance, explore different locations, play mini-games, chat with other players, and much more. Play Together Mod APK is a modded version of the original game that gives you unlimited money and gems, unlocks all items and outfits, and gives you access to the VIP menu. With this mod, you can enjoy the game without any limitations or restrictions. In this article, we will tell you more about Play Together Mod APK, its features, how to download and install it, its pros and cons, and some frequently asked questions.</p>
|
4 |
-
<h2>play together mod apk</h2><br /><p><b><b>DOWNLOAD</b> ►►► <a href="https://jinyurl.com/2uNRTT">https://jinyurl.com/2uNRTT</a></b></p><br /><br />
|
5 |
-
<h2>What is Play Together Mod APK?</h2>
|
6 |
-
<p>Play Together Mod APK is an online multiplayer game that simulates daily life activities. You can create your own character and customize it with various outfits, accessories, hairstyles, and facial expressions. You can also choose your own pet and take care of it. You can explore different locations in the game world, such as the city, the island, the amusement park, the school, the farm, and more. You can also play mini-games with other players, such as fishing, cooking, racing, dancing, etc. You can chat with other players using text or voice messages. You can also join clubs and participate in events and quests. Play Together Mod APK is a modded version of the original game that gives you unlimited money and gems, unlocks all items and outfits, and gives you access to the VIP menu. With this mod, you can enjoy the game without any limitations or restrictions.</p>
|
7 |
-
<h3>Features of Play Together Mod APK</h3>
|
8 |
-
<p>Play Together Mod APK has many features that make it more fun and enjoyable than the original game. Some of these features are:</p>
|
9 |
-
<h4>Unlimited Money and Gems</h4>
|
10 |
-
<p>Money and gems are the main currencies in the game. You need them to buy items, outfits, pets, furniture, etc. You also need them to upgrade your skills and abilities. However, earning money and gems in the game is not easy. You have to complete tasks, quests, events, etc. to get them. But with Play Together Mod APK, you don't have to worry about that. You will get unlimited money and gems in your account as soon as you start the game. You can use them to buy anything you want without any hassle.</p>
|
11 |
-
<h4>Unlock All Items and Outfits</h4>
|
12 |
-
<p>One of the best things about Play Together Mod APK is that it unlocks all items and outfits in the game. You can choose from hundreds of items and outfits to customize your character and your pet. You can also buy furniture and decorations for your house. You don't have to wait for levels or achievements to unlock them. You can access them anytime you want.</p>
|
13 |
-
<h4>Access to VIP Menu</h4>
|
14 |
-
<p>Another great feature of Play Together Mod APK is that it gives you access to the VIP menu. The VIP menu is a special feature that only premium users can access in the original game. It gives you many benefits and advantages, such as extra rewards, exclusive items, faster leveling up, etc. But with Play Together Mod APK, you don't have to pay for the premium subscription. You can access the VIP menu for free and enjoy all its perks.</p>
|
15 |
-
<h3>How to Download and Install Play Together Mod APK?</h3> <p>Downloading and installing Play Together Mod APK is quick and simple. Just follow these steps:</p>
|
16 |
-
<h4>Step 1: Download the APK file</h4>
|
17 |
-
<p>The first thing you need to do is to download the APK file of Play Together Mod APK from a reliable source. You can use the link below to download it directly to your device. The file size is about 90 MB, so make sure you have enough storage space and a stable internet connection.</p>
|
18 |
-
<p><a href="">Download Play Together Mod APK</a></p>
|
19 |
-
<p>play together mod apk unlimited money<br />
|
20 |
-
play together mod apk latest version<br />
|
21 |
-
play together mod apk download<br />
|
22 |
-
play together mod apk android 1<br />
|
23 |
-
play together mod apk free shopping<br />
|
24 |
-
play together mod apk happymod<br />
|
25 |
-
play together mod apk 2023<br />
|
26 |
-
play together mod apk no root<br />
|
27 |
-
play together mod apk offline<br />
|
28 |
-
play together mod apk revdl<br />
|
29 |
-
play together hack mod apk<br />
|
30 |
-
play together premium mod apk<br />
|
31 |
-
play together vip mod apk<br />
|
32 |
-
play together pro mod apk<br />
|
33 |
-
play together full mod apk<br />
|
34 |
-
play together cracked mod apk<br />
|
35 |
-
play together unlocked mod apk<br />
|
36 |
-
play together cheat mod apk<br />
|
37 |
-
play together mega mod apk<br />
|
38 |
-
play together unlimited gems mod apk<br />
|
39 |
-
download play together mod apk for android<br />
|
40 |
-
download play together mod apk for ios<br />
|
41 |
-
download play together mod apk for pc<br />
|
42 |
-
download play together mod apk for free<br />
|
43 |
-
download play together mod apk latest update<br />
|
44 |
-
how to install play together mod apk<br />
|
45 |
-
how to update play together mod apk<br />
|
46 |
-
how to use play together mod apk<br />
|
47 |
-
how to get play together mod apk<br />
|
48 |
-
how to download play together mod apk<br />
|
49 |
-
best play together mod apk<br />
|
50 |
-
new play together mod apk<br />
|
51 |
-
old play together mod apk<br />
|
52 |
-
original play together mod apk<br />
|
53 |
-
real play together mod apk<br />
|
54 |
-
working play together mod apk<br />
|
55 |
-
safe play together mod apk<br />
|
56 |
-
secure play together mod apk<br />
|
57 |
-
trusted play together mod apk<br />
|
58 |
-
verified play together mod apk<br />
|
59 |
-
fun play together mod apk<br />
|
60 |
-
cute play together mod apk<br />
|
61 |
-
cool play together mod apk<br />
|
62 |
-
awesome play together mod apk<br />
|
63 |
-
amazing play together mod apk<br />
|
64 |
-
super play together mod apk<br />
|
65 |
-
fantastic play together mod apk<br />
|
66 |
-
wonderful play together mod apk<br />
|
67 |
-
incredible play together mod apk</p>
|
68 |
-
<h4>Step 2: Enable Unknown Sources</h4>
|
69 |
-
<p>The next thing you need to do is to enable unknown sources on your device. This is a security setting that allows you to install apps from sources other than the Google Play Store. To enable unknown sources, go to Settings > Security > Unknown Sources and toggle it on. You may see a warning message, but don't worry, it's safe to proceed.</p>
|
70 |
-
<h4>Step 3: Install the APK file</h4>
|
71 |
-
<p>After enabling unknown sources, you can now install the APK file of Play Together Mod APK. To do this, locate the downloaded file in your file manager and tap on it. You will see a pop-up window asking for your permission to install the app. Tap on Install and wait for the installation process to finish.</p>
|
72 |
-
<h4>Step 4: Launch the game and enjoy</h4>
|
73 |
-
<p>Once the installation is done, you can now launch the game and enjoy playing with your friends. You will see that you have unlimited money and gems, all items and outfits unlocked, and access to the VIP menu. You can also create your own character, explore different locations, play mini-games, chat with other players, and more.</p>
|
74 |
-
<h3>Pros and Cons of Play Together Mod APK</h3>
|
75 |
-
<p>Play Together Mod APK is a great online game that offers a lot of fun and social features. However, like any other modded app, it also has some pros and cons that you should be aware of. Here are some of them:</p>
|
76 |
-
<h4>Pros</h4>
|
77 |
-
<ul>
|
78 |
-
<li>You can enjoy unlimited money and gems, which you can use to buy anything you want in the game.</li>
|
79 |
-
<li>You can unlock all items and outfits, which you can use to customize your character and your pet.</li>
|
80 |
-
<li>You can access the VIP menu, which gives you many benefits and advantages, such as extra rewards, exclusive items, faster leveling up, etc.</li>
|
81 |
-
<li>You can play with other players from around the world in real-time, chat with them using text or voice messages, join clubs, participate in events and quests, etc.</li>
|
82 |
-
<li>You can experience a virtual world where you can do anything you want, such as exploring different locations, playing mini-games, taking care of your pet, etc.</li>
|
83 |
-
</ul>
|
84 |
-
<h4>Cons</h4>
|
85 |
-
<ul>
|
86 |
-
<li>You may encounter some bugs or glitches in the game, which may affect your gameplay or performance.</li>
|
87 |
-
<li>You may face some compatibility issues with some devices or operating systems.</li>
|
88 |
-
<li>You may get banned from the game if the developers detect that you are using a modded version.</li>
|
89 |
-
<li>You may lose your progress or data if you uninstall the game or update it to a newer version.</li>
|
90 |
-
<li>You may miss out on some features or updates that are only available in the original game.</li>
|
91 |
-
</ul>
|
92 |
-
<h3>Conclusion</h3>
|
93 |
-
<p>Play Together Mod APK is a fun and social online game that lets you interact with other players in real-time. You can create your own character, customize your appearance, explore different locations, play mini-games, chat with other players, and more. Play Together Mod APK is a modded version of the original game that gives you unlimited money and gems, unlocks all items and outfits, and gives you access to the VIP menu. With this mod, you can enjoy the game without any limitations or restrictions. However, you should also be aware of the pros and cons of using this mod and use it at your own risk. We hope this article has helped you learn more about Play Together Mod APK and how to download and install it on your device. If you have any questions or feedback, feel free to leave a comment below.</p>
|
94 |
-
<h2>FAQs</h2>
|
95 |
-
<p>Here are some frequently asked questions about Play Together Mod APK:</p>
|
96 |
-
<ol>
|
97 |
-
<li><b>Is Play Together Mod APK safe to use?</b></li>
|
98 |
-
<p>Play Together Mod APK is safe to use as long as you download it from a trusted source and enable unknown sources on your device. However, there is always a risk of getting banned from the game or losing your data if you use a modded version. Therefore, we recommend that you use it at your own risk and discretion.</p> <p>Here are some more frequently asked questions about Play Together Mod APK:</p>
|
99 |
-
<ol start="2">
|
100 |
-
<li><b>What are the requirements to play Play Together Mod APK?</b></li>
|
101 |
-
<p>To play Play Together Mod APK, you need to have an Android device with Android 4.4 or higher, at least 2 GB of RAM, and at least 100 MB of free storage space. You also need to have a stable internet connection to play online with other players.</p>
|
102 |
-
<li><b>Can I play Play Together Mod APK with my friends?</b></li>
|
103 |
-
<p>Yes, you can play Play Together Mod APK with your friends. You can invite them to join your club, chat with them, play mini-games with them, and more. You can also meet new friends from around the world and interact with them in real-time.</p>
|
104 |
-
<li><b>Can I play Play Together Mod APK offline?</b></li>
|
105 |
-
<p>No, you cannot play Play Together Mod APK offline. You need to have an internet connection to play online with other players. However, you can still enjoy some features of the game offline, such as customizing your character and your pet, buying items and outfits, etc.</p>
|
106 |
-
<li><b>How can I update Play Together Mod APK?</b></li>
|
107 |
-
<p>To update Play Together Mod APK, you need to download the latest version of the mod from the same source where you downloaded the previous version. Then, you need to uninstall the old version and install the new version following the same steps as before. However, you should be careful when updating the mod, as you may lose your progress or data if you do so.</p>
|
108 |
-
<li><b>How can I contact the developers of Play Together Mod APK?</b></li>
|
109 |
-
<p>To contact the developers of Play Together Mod APK, you can visit their official website or their social media pages. You can also leave a comment or a review on the download page of the mod. However, you should not expect a quick or positive response from them, as they are not affiliated with the original developers of the game.</p>
|
110 |
-
</ol>
spaces/2023Liu2023/bingo/src/lib/isomorphic/node.ts
DELETED
@@ -1,26 +0,0 @@
|
|
1 |
-
import Debug from 'debug'
|
2 |
-
|
3 |
-
const { fetch, setGlobalDispatcher, ProxyAgent } = require('undici')
|
4 |
-
const { HttpsProxyAgent } = require('https-proxy-agent')
|
5 |
-
const ws = require('ws')
|
6 |
-
|
7 |
-
const debug = Debug('bingo')
|
8 |
-
|
9 |
-
const httpProxy = process.env.http_proxy || process.env.HTTP_PROXY || process.env.https_proxy || process.env.HTTPS_PROXY;
|
10 |
-
let WebSocket = ws.WebSocket
|
11 |
-
|
12 |
-
if (httpProxy) {
|
13 |
-
setGlobalDispatcher(new ProxyAgent(httpProxy))
|
14 |
-
const agent = new HttpsProxyAgent(httpProxy)
|
15 |
-
// @ts-ignore
|
16 |
-
WebSocket = class extends ws.WebSocket {
|
17 |
-
constructor(address: string | URL, options: typeof ws.WebSocket) {
|
18 |
-
super(address, {
|
19 |
-
...options,
|
20 |
-
agent,
|
21 |
-
})
|
22 |
-
}
|
23 |
-
}
|
24 |
-
}
|
25 |
-
|
26 |
-
export default { fetch, WebSocket, debug }
spaces/AEUPH/CosmosTV/public/index.html
DELETED
@@ -1,325 +0,0 @@
-<html>
-<head>
-  <title>AI Web TV 🤗</title>
-  <link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/full.css" rel="stylesheet" type="text/css" />
-  <!--<link href="https://vjs.zencdn.net/8.3.0/video-js.css" rel="stylesheet" />-->
-  <script src="/mpegts.js"></script>
-</head>
-<body
-  x-data="app()" x-init="init()"
-  class="fixed inset-0 bg-[rgb(0,0,0)] flex flex-col w-full items-center justify-start">
-  <div x-show="!enabled">Loading WebTV..</div>
-
-  <div
-    x-show="enabled && showToolbar"
-    x-transition:enter="transition ease-out duration-100"
-    x-transition:enter-start="opacity-0 -translate-y-8"
-    x-transition:enter-end="opacity-100"
-    x-transition:leave="transition ease-in duration-200"
-    x-transition:leave-start="opacity-100"
-    x-transition:leave-end="opacity-0 -translate-y-8"
-    class="fixed w-full z-20 py-4 px-6 top-0 font-mono text-white flex flex-col lg:flex-row items-center justify-between space-x-1 bg-black bg-opacity-60"
-    style="text-shadow: 0px 0px 3px #000000">
-
-    <div class="flex text-xl space-x-2">
-      <div class="text-xl">🤗 AI WebTV</div>
-      <div class="text-md">👉 Current channel:</div>
-      <template x-for="chan in channels">
-        <div
-          class="text-xl mr-2"
-          :class="chan.id === channel.id
-            ? 'font-bold'
-            : 'hover:underline opacity-60 hover:opacity-80 cursor-pointer'"
-          x-on:click="window.location = `${window.location.origin}/?channel=${chan.id}`"
-          x-text="chan.label">
-          <div class="animate-ping absolute inline-flex h-4 w-4 rounded-full bg-red-400 opacity-75"></div>
-        </div>
-      </template>
-    </div>
-
-    <div class="flex justify-between space-x-6 items-center">
-
-      <div class="flex items-center justify-center text-white opacity-100 space-x-2">
-        <div>
-          <svg xmlns="http://www.w3.org/2000/svg" width="24px" height="24px" viewBox="0 0 640 512"><path fill="currentColor" d="M96 128a128 128 0 1 1 256 0A128 128 0 1 1 96 128zM0 482.3C0 383.8 79.8 304 178.3 304h91.4C368.2 304 448 383.8 448 482.3c0 16.4-13.3 29.7-29.7 29.7H29.7C13.3 512 0 498.7 0 482.3zM609.3 512H471.4c5.4-9.4 8.6-20.3 8.6-32v-8c0-60.7-27.1-115.2-69.8-151.8c2.4-.1 4.7-.2 7.1-.2h61.4C567.8 320 640 392.2 640 481.3c0 17-13.8 30.7-30.7 30.7zM432 256c-31 0-59-12.6-79.3-32.9C372.4 196.5 384 163.6 384 128c0-26.8-6.6-52.1-18.3-74.3C384.3 40.1 407.2 32 432 32c61.9 0 112 50.1 112 112s-50.1 112-112 112z"/></svg>
-        </div>
-        <div x-text="channel.audience"></div>
-        <div x-text="channel.audience > 1 ? 'viewers' : 'viewer'"></div>
-      </div>
-
-      <div class="text-sm">(<a
-        class="hover:underline"
-        href="https://huggingface.co/facebook/musicgen-melody"
-        target="_blank">musicgen-melody</a> + <a
-        class="hover:underline"
-        :href="channel.modelUrl"
-        x-text="channel.model"
-        target="_blank"></a>)</div>
-
-      <div
-        x-on:click="toggleAudio()"
-        class="flex items-center justify-center text-white opacity-80 hover:opacity-100 cursor-pointer">
-        <div x-show="muted">
-          <svg aria-hidden="true" role="img" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512" width="32px" height="32px"><path fill="currentColor" d="M215.03 71.05L126.06 160H24c-13.26 0-24 10.74-24 24v144c0 13.25 10.74 24 24 24h102.06l88.97 88.95c15.03 15.03 40.97 4.47 40.97-16.97V88.02c0-21.46-25.96-31.98-40.97-16.97zM461.64 256l45.64-45.64c6.3-6.3 6.3-16.52 0-22.82l-22.82-22.82c-6.3-6.3-16.52-6.3-22.82 0L416 210.36l-45.64-45.64c-6.3-6.3-16.52-6.3-22.82 0l-22.82 22.82c-6.3 6.3-6.3 16.52 0 22.82L370.36 256l-45.63 45.63c-6.3 6.3-6.3 16.52 0 22.82l22.82 22.82c6.3 6.3 16.52 6.3 22.82 0L416 301.64l45.64 45.64c6.3 6.3 16.52 6.3 22.82 0l22.82-22.82c6.3-6.3 6.3-16.52 0-22.82L461.64 256z" class=""></path></svg>
-        </div>
-        <div x-show="!muted">
-          <svg aria-hidden="true" role="img" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 480 512" width="32px" height="32px"><path fill="currentColor" d="M215.03 71.05L126.06 160H24c-13.26 0-24 10.74-24 24v144c0 13.25 10.74 24 24 24h102.06l88.97 88.95c15.03 15.03 40.97 4.47 40.97-16.97V88.02c0-21.46-25.96-31.98-40.97-16.97zM480 256c0-63.53-32.06-121.94-85.77-156.24-11.19-7.14-26.03-3.82-33.12 7.46s-3.78 26.21 7.41 33.36C408.27 165.97 432 209.11 432 256s-23.73 90.03-63.48 115.42c-11.19 7.14-14.5 22.07-7.41 33.36 6.51 10.36 21.12 15.14 33.12 7.46C447.94 377.94 480 319.53 480 256zm-141.77-76.87c-11.58-6.33-26.19-2.16-32.61 9.45-6.39 11.61-2.16 26.2 9.45 32.61C327.98 228.28 336 241.63 336 256c0 14.38-8.02 27.72-20.92 34.81-11.61 6.41-15.84 21-9.45 32.61 6.43 11.66 21.05 15.8 32.61 9.45 28.23-15.55 45.77-45 45.77-76.88s-17.54-61.32-45.78-76.86z" class=""></path></svg>
-        </div>
-      </div>
-      <div
-        x-on:click="fullscreen()"
-        class="text-white hover:text-white opacity-80 hover:opacity-100 cursor-pointer">
-        <?xml version="1.0" ?><svg version="1.1" viewBox="0 0 14 14" width="24px" height="24px" xmlns="http://www.w3.org/2000/svg" xmlns:sketch="http://www.bohemiancoding.com/sketch/ns" xmlns:xlink="http://www.w3.org/1999/xlink"><title/><desc/><defs/><g fill="none" fill-rule="evenodd" id="Page-1" stroke="none" stroke-width="1"><g fill="currentColor" id="Core" transform="translate(-215.000000, -257.000000)"><g id="fullscreen" transform="translate(215.000000, 257.000000)"><path d="M2,9 L0,9 L0,14 L5,14 L5,12 L2,12 L2,9 L2,9 Z M0,5 L2,5 L2,2 L5,2 L5,0 L0,0 L0,5 L0,5 Z M12,12 L9,12 L9,14 L14,14 L14,9 L12,9 L12,12 L12,12 Z M9,0 L9,2 L12,2 L12,5 L14,5 L14,0 L9,0 L9,0 Z" id="Shape"/></g></g></g></svg>
-      </div>
-    </div>
-  </div>
-  <div class="flex w-full">
-    <video id="videoElement" muted autoplay class="aspect-video w-full"></video>
-    <!--
-    We probably want to display a nice logo or decoration somewhere
-    <img src="/hf-logo.png" class="absolute mt-2 w-[16%]" />
-    -->
-  </div>
-  <script>
-    // disable analytics (we don't use VideoJS yet anyway)
-    window.HELP_IMPROVE_VIDEOJS = false
-  </script>
-  <script defer src="https://cdn.jsdelivr.net/npm/[email protected]/dist/cdn.min.js"></script>
-  <script src="https://cdn.tailwindcss.com?plugins=forms,typography,aspect-ratio"></script>
-  <script src="https://cdnjs.cloudflare.com/ajax/libs/iframe-resizer/4.3.2/iframeResizer.contentWindow.min.js"></script>
-  <!--<script src="https://vjs.zencdn.net/8.3.0/video.min.js"></script>-->
-  <script>
-
-  function app() {
-    return {
-      enabled: false,
-      channels: {
-        /*
-        legacy: {
-          id: 'legacy',
-          label: '#older',
-          audience: 0,
-          online: false,
-          visible: false,
-          url: 'https://jbilcke-hf-media-server.hf.space/live/legacy.flv',
-          resolution: '576x320',
-          model: 'zeroscope_v2_576w',
-          modelUrl: 'https://huggingface.co/cerspense/zeroscope_v2_576w',
-        },
-        */
-        /*
-        hdtv: {
-          id: 'hdtv',
-          label: '#old',
-          audience: 0,
-          online: false,
-          visible: true,
-          url: 'https://jbilcke-hf-media-server.hf.space/live/hdtv.flv',
-          resolution: '1024x576_8FPS',
-          model: 'zeroscope_v2_XL',
-          modelUrl: 'https://huggingface.co/cerspense/zeroscope_v2_XL',
-        },
-        */
-        random: {
-          id: 'random',
-          label: '#random',
-          audience: 0,
-          online: false,
-          visible: true,
-          url: 'https://jbilcke-hf-media-server.hf.space/live/random.flv',
-          resolution: '1024x576_24FPS',
-          model: 'zeroscope_v2_XL',
-          modelUrl: 'https://huggingface.co/cerspense/zeroscope_v2_XL',
-        },
-        comedy: {
-          id: 'comedy',
-          label: '#comedy',
-          audience: 0,
-          online: false,
-          visible: true,
-          url: 'https://jbilcke-hf-media-server.hf.space/live/comedy.flv',
-          resolution: '1024x576_24FPS',
-          model: 'zeroscope_v2_XL',
-          modelUrl: 'https://huggingface.co/cerspense/zeroscope_v2_XL',
-        },
-        documentary: {
-          id: 'documentary',
-          label: '#documentary',
-          audience: 0,
-          online: false,
-          visible: true,
-          url: 'https://jbilcke-hf-media-server.hf.space/live/documentary.flv',
-          resolution: '1024x576_24FPS',
-          model: 'zeroscope_v2_XL',
-          modelUrl: 'https://huggingface.co/cerspense/zeroscope_v2_XL',
-        },
-      },
-      showToolbar: true,
-      muted: true,
-      initialized: false,
-      activityTimeout: null,
-      defaultChannelId: 'random',
-      video: null,
-      channel: {
-      },
-      wakeUp() {
-        this.showToolbar = true
-        clearTimeout(this.activityTimeout)
-        this.activityTimeout = setTimeout(() => {
-          this.showToolbar = false
-        }, 1500);
-      },
-      toggleAudio() {
-        if (this.video.muted) {
-          this.video.muted = false
-          this.muted = false
-        } else {
-          this.video.muted = true
-          this.muted = true
-        }
-      },
-      async checkAudience() {
-        let audience = {}
-        try {
-          const res = await fetch('/stats')
-          audience = await res.json()
-        } catch (err) {
-          console.log('failed to check the audience, something is wrong')
-        }
-
-        window.DEBUGME = Object.entries(this.channels)
-        this.channels = Object.entries(this.channels).reduce((acc, [channel, data]) => ((console.log('debug:', {
-          ...data,
-          audience: audience[channel] || 0
-        } ), {
-          ...acc,
-          [channel]: {
-            ...data,
-            audience: audience[channel] || 0
-          }
-        })), {})
-        this.channel = this.channels[this.channel.id]
-      },
-      fullscreen() {
-        if (this.video.requestFullscreen) {
-          this.video.requestFullscreen();
-        } else if (this.video.mozRequestFullScreen) {
-          this.video.mozRequestFullScreen();
-        } else if (this.video.webkitRequestFullscreen) {
-          this.video.webkitRequestFullscreen();
-        } else if (this.video.msRequestFullscreen) {
-          this.video.msRequestFullscreen();
-        }
-      },
-      init() {
-        if (this.initialized) {
-          console.log("already initialized")
-          return
-        }
-        this.initialized = true
-        console.log('initializing WebTV..')
-
-        const urlParams = new URLSearchParams(window.location.search)
-
-        const requestedChannelId = `${urlParams.get('channel') || 'random'}`
-
-        this.enabled = true
-        // this.enabled = `${urlParams.get('beta') || 'false'}` === 'true'
-
-        if (!this.enabled) {
-          return
-        }
-
-        this.video = document.getElementById('videoElement')
-
-        const defaultChannel = this.channels[this.defaultChannelId]
-
-        this.channel = this.channels[requestedChannelId] || defaultChannel
-
-        console.log(`Selected channel: ${this.channel.label}`)
-        console.log(`Stream URL: ${this.channel.url}`)
-
-
-        const handleActivity = () => {
-          this.wakeUp()
-        }
-        handleActivity()
-
-        document.addEventListener("touchstart", handleActivity)
-        document.addEventListener("touchmove", handleActivity)
-        document.addEventListener("click", handleActivity)
-        document.addEventListener("mousemove", handleActivity)
-
-        this.checkAudience()
-        setInterval(() => {
-          this.checkAudience()
-        }, 1000)
-
-        // detect mute/unmute events
-        this.video.addEventListener("mute", () => {
-          this.muted = true
-        })
-        this.video.addEventListener("unmute", () => {
-          this.muted = false
-        })
-
-        // when we move outside the video, we always hide the toolbar
-        document.addEventListener("mouseleave", () => {
-          clearTimeout(this.activityTimeout)
-          this.showToolbar = false
-        })
-
-        // as a bonus, we also allow fullscreen on double click
-        this.video.addEventListener('dblclick', () => {
-          this.fullscreen()
-        })
-
-        // some devices such as the iPhone don't support MSE Live Playback
-        if (mpegts.getFeatureList().mseLivePlayback) {
-          var player = mpegts.createPlayer({
-            type: 'flv', // could also be mpegts, m2ts, flv
-            isLive: true,
-            url: this.channel.url,
-          })
-          player.attachMediaElement(this.video)
-
-          player.on(mpegts.Events.ERROR, function (err) {
-            console.log('got an error:', err)
-            if (err.type === mpegts.ErrorTypes.NETWORK_ERROR) {
-              console.log('Network error')
-            }
-          });
-
-          player.load()
-
-          // due to an issue with our stream when the FFMPEG playlist ends,
-          // the stream gets interrupted for ~1sec, which causes the frontend to hangs up
-          // the following code tries to restart the page when that happens, but in the long term
-          // we should fix the issue on the server side (fix our FFMPEG bash script)
-          this.video.addEventListener('ended', function() {
-            console.log('Stream ended, trying to reload...')
-            setTimeout(() => {
-              console.log('Reloading the page..')
-              // Unloading and loading the source again isn't enough it seems
-              // player.unload()
-              // player.load()
-              window.location.reload()
-            }, 1200)
-          }, false)
-
-          // Handle autoplay restrictions.
-          let promise = this.video.play()
-          if (promise !== undefined) {
-            this.video.addEventListener('click', function() {
-              this.video.play()
-            })
-          }
-
-          player.play()
-        }
-      }
-    }
-  }
-  </script>
-</body>
-</html>
spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/encoders/open_clap/bert.py
DELETED
@@ -1,32 +0,0 @@
-from transformers import BertTokenizer, BertModel
-tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
-model = BertModel.from_pretrained("bert-base-uncased")
-text = "Replace me by any text you'd like."
-
-def bert_embeddings(text):
-    # text = "Replace me by any text you'd like."
-    encoded_input = tokenizer(text, return_tensors='pt')
-    output = model(**encoded_input)
-    return output
-
-from transformers import RobertaTokenizer, RobertaModel
-
-tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
-model = RobertaModel.from_pretrained('roberta-base')
-text = "Replace me by any text you'd like."
-def Roberta_embeddings(text):
-    # text = "Replace me by any text you'd like."
-    encoded_input = tokenizer(text, return_tensors='pt')
-    output = model(**encoded_input)
-    return output
-
-from transformers import BartTokenizer, BartModel
-
-tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
-model = BartModel.from_pretrained('facebook/bart-base')
-text = "Replace me by any text you'd like."
-def bart_embeddings(text):
-    # text = "Replace me by any text you'd like."
-    encoded_input = tokenizer(text, return_tensors='pt')
-    output = model(**encoded_input)
-    return output
spaces/AIWaves/Software_Company/src/agents/LLM/base_LLM.py
DELETED
@@ -1,133 +0,0 @@
-from abc import abstractclassmethod
-import openai
-import os
-import time
-from Memory import Memory
-from utils import save_logs
-
-class LLM:
-    def __init__(self) -> None:
-        pass
-
-    @abstractclassmethod
-    def get_response():
-        pass
-
-
-class OpenAILLM(LLM):
-    def __init__(self,**kwargs) -> None:
-        super().__init__()
-        self.MAX_CHAT_HISTORY = eval(
-            os.environ["MAX_CHAT_HISTORY"]) if "MAX_CHAT_HISTORY" in os.environ else 10
-
-        self.model = kwargs["model"] if "model" in kwargs else "gpt-3.5-turbo-16k-0613"
-        self.temperature = kwargs["temperature"] if "temperature" in kwargs else 0.3
-        self.log_path = kwargs["log_path"] if "log_path" in kwargs else "logs"
-
-
-    def get_stream(self,response, log_path, messages):
-        ans = ""
-        for res in response:
-            if res:
-                r = (res.choices[0]["delta"].get("content")
-                     if res.choices[0]["delta"].get("content") else "")
-                ans += r
-                yield r
-
-        save_logs(log_path, messages, ans)
-
-
-
-    def get_response(self,
-                     chat_history,
-                     system_prompt,
-                     last_prompt=None,
-                     stream=False,
-                     functions=None,
-                     function_call="auto",
-                     WAIT_TIME=20,
-                     **kwargs):
-        """
-        return LLM's response
-        """
-        openai.api_key = os.environ["API_KEY"]
-        # if "PROXY" in os.environ:
-        #     assert "http:" in os.environ["PROXY"] or "socks" in os.environ["PROXY"],"PROXY error,PROXY must be http or socks"
-        #     openai.proxy = os.environ["PROXY"]
-        if "API_BASE" in os.environ:
-            openai.api_base = os.environ["API_BASE"]
-        active_mode = True if ("ACTIVE_MODE" in os.environ and os.environ["ACTIVE_MODE"] == "0") else False
-        model = self.model
-        temperature = self.temperature
-
-
-        if active_mode:
-            system_prompt = system_prompt + "Please keep your reply as concise as possible,Within three sentences, the total word count should not exceed 30"
-
-        messages = [{
-            "role": "system",
-            "content": system_prompt
-        }] if system_prompt else []
-
-        if chat_history:
-            if len(chat_history) > self.MAX_CHAT_HISTORY:
-                chat_history = chat_history[- self.MAX_CHAT_HISTORY:]
-            if isinstance(chat_history[0],dict):
-                messages += chat_history
-            elif isinstance(chat_history[0],Memory):
-                messages += [memory.get_gpt_message("user") for memory in chat_history]
-
-        if last_prompt:
-            if active_mode:
-                last_prompt = last_prompt + "Please keep your reply as concise as possible,Within three sentences, the total word count should not exceed 30"
-            # messages += [{"role": "system", "content": f"{last_prompt}"}]
-            messages[-1]["content"] += last_prompt
-
-
-        while True:
-            try:
-                if functions:
-                    response = openai.ChatCompletion.create(
-                        model=model,
-                        messages=messages,
-                        functions=functions,
-                        function_call=function_call,
-                        temperature=temperature,
-                    )
-                else:
-                    response = openai.ChatCompletion.create(
-                        model=model,
-                        messages=messages,
-                        temperature=temperature,
-                        stream=stream)
-                break
-            except Exception as e:
-                print(e)
-                if "maximum context length is" in str(e):
-                    assert False, "exceed max length"
-                    break
-                else:
-                    print(f"Please wait {WAIT_TIME} seconds and resend later ...")
-                    time.sleep(WAIT_TIME)
-
-        if functions:
-            save_logs(self.log_path, messages, response)
-            return response.choices[0].message
-        elif stream:
-            return self.get_stream(response, self.log_path, messages)
-        else:
-            save_logs(self.log_path, messages, response)
-            return response.choices[0].message["content"]
-
-
-def init_LLM(default_log_path,**kwargs):
-    LLM_type = kwargs["LLM_type"] if "LLM_type" in kwargs else "OpenAI"
-    log_path = kwargs["log_path"] if "log_path" in kwargs else default_log_path
-    if LLM_type == "OpenAI":
-        LLM = (
-            OpenAILLM(**kwargs["LLM"])
-            if "LLM" in kwargs
-            else OpenAILLM(model = "gpt-3.5-turbo-16k-0613",temperature=0.3,log_path=log_path)
-        )
-        return LLM
-
spaces/ANILYADAV/mygenaichatbot/app.py
DELETED
@@ -1,34 +0,0 @@
-import os
-import gradio as gr
-from langchain.chat_models import ChatOpenAI
-from langchain import LLMChain, PromptTemplate
-from langchain.memory import ConversationBufferMemory
-
-OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
-
-template = """Meet Anil, your youthful and witty personal assistant! At 21 years old, he's full of energy and always eager to help. Anil's goal is to assist you with any questions or problems you might have. His enthusiasm shines through in every response, making interactions with his enjoyable and engaging
-{chat_history}
-User: {user_message}
-Chatbot:"""
-
-prompt = PromptTemplate(
-    input_variables=["chat_history", "user_message"], template=template
-)
-
-memory = ConversationBufferMemory(memory_key="chat_history")
-
-llm_chain = LLMChain(
-    llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"),
-    prompt=prompt,
-    verbose=True,
-    memory=memory,
-)
-
-def get_text_response(user_message,history):
-    response = llm_chain.predict(user_message = user_message)
-    return response
-
-demo = gr.ChatInterface(get_text_response)
-
-if __name__ == "__main__":
-    demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
spaces/AchyuthGamer/OpenGPT-v1/README.md
DELETED
@@ -1,11 +0,0 @@
----
-title: OpenGPT v1
-emoji: ⚡
-colorFrom: indigo
-colorTo: indigo
-sdk: docker
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AchyuthGamer/OpenGPT/client/css/options.css
DELETED
@@ -1,10 +0,0 @@
-.options-container {
-  display: flex;
-  flex-wrap: wrap;
-}
-
-@media screen and (max-width: 990px) {
-  .options-container {
-    justify-content: space-between;
-  }
-}
spaces/AchyuthGamer/OpenGPT/g4f/Provider/Raycast.py
DELETED
@@ -1,72 +0,0 @@
-from __future__ import annotations
-
-import json
-
-import requests
-
-from ..typing import Any, CreateResult
-from .base_provider import BaseProvider
-
-
-class Raycast(BaseProvider):
-    url = "https://raycast.com"
-    supports_gpt_35_turbo = True
-    supports_gpt_4 = True
-    supports_stream = True
-    needs_auth = True
-    working = True
-
-    @staticmethod
-    def create_completion(
-        model: str,
-        messages: list[dict[str, str]],
-        stream: bool,
-        **kwargs: Any,
-    ) -> CreateResult:
-        auth = kwargs.get('auth')
-        headers = {
-            'Accept': 'application/json',
-            'Accept-Language': 'en-US,en;q=0.9',
-            'Authorization': f'Bearer {auth}',
-            'Content-Type': 'application/json',
-            'User-Agent': 'Raycast/0 CFNetwork/1410.0.3 Darwin/22.6.0',
-        }
-        parsed_messages = []
-        for message in messages:
-            parsed_messages.append({
-                'author': message['role'],
-                'content': {'text': message['content']}
-            })
-        data = {
-            "debug": False,
-            "locale": "en-CN",
-            "messages": parsed_messages,
-            "model": model,
-            "provider": "openai",
-            "source": "ai_chat",
-            "system_instruction": "markdown",
-            "temperature": 0.5
-        }
-        response = requests.post("https://backend.raycast.com/api/v1/ai/chat_completions", headers=headers, json=data, stream=True)
-        for token in response.iter_lines():
-            if b'data: ' not in token:
-                continue
-            completion_chunk = json.loads(token.decode().replace('data: ', ''))
-            token = completion_chunk['text']
-            if token != None:
-                yield token
-
-    @classmethod
-    @property
-    def params(cls):
-        params = [
-            ("model", "str"),
-            ("messages", "list[dict[str, str]]"),
-            ("stream", "bool"),
-            ("temperature", "float"),
-            ("top_p", "int"),
-            ("model", "str"),
-            ("auth", "str"),
-        ]
-        param = ", ".join([": ".join(p) for p in params])
-        return f"g4f.provider.{cls.__name__} supports: ({param})"
spaces/Adapter/CoAdapter/ldm/modules/attention.py
DELETED
@@ -1,344 +0,0 @@
-from inspect import isfunction
-import math
-import torch
-import torch.nn.functional as F
-from torch import nn, einsum
-from einops import rearrange, repeat
-from typing import Optional, Any
-
-from ldm.modules.diffusionmodules.util import checkpoint
-
-
-try:
-    import xformers
-    import xformers.ops
-    XFORMERS_IS_AVAILBLE = True
-except:
-    XFORMERS_IS_AVAILBLE = False
-
-# CrossAttn precision handling
-import os
-_ATTN_PRECISION = os.environ.get("ATTN_PRECISION", "fp32")
-
-if os.environ.get("DISABLE_XFORMERS", "false").lower() == 'true':
-    XFORMERS_IS_AVAILBLE = False
-
-
-def exists(val):
-    return val is not None
-
-
-def uniq(arr):
-    return{el: True for el in arr}.keys()
-
-
-def default(val, d):
-    if exists(val):
-        return val
-    return d() if isfunction(d) else d
-
-
-def max_neg_value(t):
-    return -torch.finfo(t.dtype).max
-
-
-def init_(tensor):
-    dim = tensor.shape[-1]
-    std = 1 / math.sqrt(dim)
-    tensor.uniform_(-std, std)
-    return tensor
-
-
-# feedforward
-class GEGLU(nn.Module):
-    def __init__(self, dim_in, dim_out):
-        super().__init__()
-        self.proj = nn.Linear(dim_in, dim_out * 2)
-
-    def forward(self, x):
-        x, gate = self.proj(x).chunk(2, dim=-1)
-        return x * F.gelu(gate)
-
-
-class FeedForward(nn.Module):
-    def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.):
-        super().__init__()
-        inner_dim = int(dim * mult)
-        dim_out = default(dim_out, dim)
-        project_in = nn.Sequential(
-            nn.Linear(dim, inner_dim),
-            nn.GELU()
-        ) if not glu else GEGLU(dim, inner_dim)
-
-        self.net = nn.Sequential(
-            project_in,
-            nn.Dropout(dropout),
-            nn.Linear(inner_dim, dim_out)
-        )
-
-    def forward(self, x):
-        return self.net(x)
-
-
-def zero_module(module):
-    """
-    Zero out the parameters of a module and return it.
-    """
-    for p in module.parameters():
-        p.detach().zero_()
-    return module
-
-
-def Normalize(in_channels):
-    return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True)
-
-
-class SpatialSelfAttention(nn.Module):
-    def __init__(self, in_channels):
-        super().__init__()
-        self.in_channels = in_channels
-
-        self.norm = Normalize(in_channels)
-        self.q = torch.nn.Conv2d(in_channels,
-                                 in_channels,
-                                 kernel_size=1,
-                                 stride=1,
-                                 padding=0)
-        self.k = torch.nn.Conv2d(in_channels,
-                                 in_channels,
-                                 kernel_size=1,
-                                 stride=1,
-                                 padding=0)
-        self.v = torch.nn.Conv2d(in_channels,
-                                 in_channels,
-                                 kernel_size=1,
-                                 stride=1,
-                                 padding=0)
-        self.proj_out = torch.nn.Conv2d(in_channels,
-                                        in_channels,
-                                        kernel_size=1,
-                                        stride=1,
-                                        padding=0)
-
-    def forward(self, x):
-        h_ = x
-        h_ = self.norm(h_)
-        q = self.q(h_)
-        k = self.k(h_)
-        v = self.v(h_)
-
-        # compute attention
-        b,c,h,w = q.shape
-        q = rearrange(q, 'b c h w -> b (h w) c')
-        k = rearrange(k, 'b c h w -> b c (h w)')
-        w_ = torch.einsum('bij,bjk->bik', q, k)
-
-        w_ = w_ * (int(c)**(-0.5))
-        w_ = torch.nn.functional.softmax(w_, dim=2)
-
-        # attend to values
-        v = rearrange(v, 'b c h w -> b c (h w)')
-        w_ = rearrange(w_, 'b i j -> b j i')
-        h_ = torch.einsum('bij,bjk->bik', v, w_)
-        h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h)
-        h_ = self.proj_out(h_)
-
-        return x+h_
-
-
-class CrossAttention(nn.Module):
-    def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.):
-        super().__init__()
-        inner_dim = dim_head * heads
-        context_dim = default(context_dim, query_dim)
-
-        self.scale = dim_head ** -0.5
-        self.heads = heads
-
-        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
-        self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
-        self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
-
-        self.to_out = nn.Sequential(
-            nn.Linear(inner_dim, query_dim),
-            nn.Dropout(dropout)
-        )
-
-    def forward(self, x, context=None, mask=None):
-        h = self.heads
-
-        q = self.to_q(x)
-        context = default(context, x)
-        k = self.to_k(context)
-        v = self.to_v(context)
-
-        q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))
-
-        # force cast to fp32 to avoid overflowing
-        if _ATTN_PRECISION =="fp32":
-            with torch.autocast(enabled=False, device_type = 'cuda'):
-                q, k = q.float(), k.float()
-                sim = einsum('b i d, b j d -> b i j', q, k) * self.scale
-        else:
-            sim = einsum('b i d, b j d -> b i j', q, k) * self.scale
-
-        del q, k
-
-        if exists(mask):
-            mask = rearrange(mask, 'b ... -> b (...)')
-            max_neg_value = -torch.finfo(sim.dtype).max
-            mask = repeat(mask, 'b j -> (b h) () j', h=h)
-            sim.masked_fill_(~mask, max_neg_value)
-
-        # attention, what we cannot get enough of
-        sim = sim.softmax(dim=-1)
-
-        out = einsum('b i j, b j d -> b i d', sim, v)
-        out = rearrange(out, '(b h) n d -> b n (h d)', h=h)
-        return self.to_out(out)
-
-
-class MemoryEfficientCrossAttention(nn.Module):
-    # https://github.com/MatthieuTPHR/diffusers/blob/d80b531ff8060ec1ea982b65a1b8df70f73aa67c/src/diffusers/models/attention.py#L223
-    def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.0):
-        super().__init__()
-        print(f"Setting up {self.__class__.__name__}. Query dim is {query_dim}, context_dim is {context_dim} and using "
-              f"{heads} heads.")
-        inner_dim = dim_head * heads
-        context_dim = default(context_dim, query_dim)
-
-        self.heads = heads
-        self.dim_head = dim_head
-
-        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
|
214 |
-
self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
|
215 |
-
self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
|
216 |
-
|
217 |
-
self.to_out = nn.Sequential(nn.Linear(inner_dim, query_dim), nn.Dropout(dropout))
|
218 |
-
self.attention_op: Optional[Any] = None
|
219 |
-
|
220 |
-
def forward(self, x, context=None, mask=None):
|
221 |
-
q = self.to_q(x)
|
222 |
-
context = default(context, x)
|
223 |
-
k = self.to_k(context)
|
224 |
-
v = self.to_v(context)
|
225 |
-
|
226 |
-
b, _, _ = q.shape
|
227 |
-
q, k, v = map(
|
228 |
-
lambda t: t.unsqueeze(3)
|
229 |
-
.reshape(b, t.shape[1], self.heads, self.dim_head)
|
230 |
-
.permute(0, 2, 1, 3)
|
231 |
-
.reshape(b * self.heads, t.shape[1], self.dim_head)
|
232 |
-
.contiguous(),
|
233 |
-
(q, k, v),
|
234 |
-
)
|
235 |
-
|
236 |
-
# actually compute the attention, what we cannot get enough of
|
237 |
-
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op)
|
238 |
-
|
239 |
-
if exists(mask):
|
240 |
-
raise NotImplementedError
|
241 |
-
out = (
|
242 |
-
out.unsqueeze(0)
|
243 |
-
.reshape(b, self.heads, out.shape[1], self.dim_head)
|
244 |
-
.permute(0, 2, 1, 3)
|
245 |
-
.reshape(b, out.shape[1], self.heads * self.dim_head)
|
246 |
-
)
|
247 |
-
return self.to_out(out)
|
248 |
-
|
249 |
-
|
250 |
-
class BasicTransformerBlock(nn.Module):
|
251 |
-
ATTENTION_MODES = {
|
252 |
-
"softmax": CrossAttention, # vanilla attention
|
253 |
-
"softmax-xformers": MemoryEfficientCrossAttention
|
254 |
-
}
|
255 |
-
def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True,
|
256 |
-
disable_self_attn=False):
|
257 |
-
super().__init__()
|
258 |
-
attn_mode = "softmax-xformers" if XFORMERS_IS_AVAILBLE else "softmax"
|
259 |
-
assert attn_mode in self.ATTENTION_MODES
|
260 |
-
attn_cls = self.ATTENTION_MODES[attn_mode]
|
261 |
-
self.disable_self_attn = disable_self_attn
|
262 |
-
self.attn1 = attn_cls(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout,
|
263 |
-
context_dim=context_dim if self.disable_self_attn else None) # is a self-attention if not self.disable_self_attn
|
264 |
-
self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff)
|
265 |
-
self.attn2 = attn_cls(query_dim=dim, context_dim=context_dim,
|
266 |
-
heads=n_heads, dim_head=d_head, dropout=dropout) # is self-attn if context is none
|
267 |
-
self.norm1 = nn.LayerNorm(dim)
|
268 |
-
self.norm2 = nn.LayerNorm(dim)
|
269 |
-
self.norm3 = nn.LayerNorm(dim)
|
270 |
-
self.checkpoint = checkpoint
|
271 |
-
|
272 |
-
def forward(self, x, context=None):
|
273 |
-
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
|
274 |
-
|
275 |
-
def _forward(self, x, context=None):
|
276 |
-
x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
|
277 |
-
x = self.attn2(self.norm2(x), context=context) + x
|
278 |
-
x = self.ff(self.norm3(x)) + x
|
279 |
-
return x
|
280 |
-
|
281 |
-
|
282 |
-
class SpatialTransformer(nn.Module):
|
283 |
-
"""
|
284 |
-
Transformer block for image-like data.
|
285 |
-
First, project the input (aka embedding)
|
286 |
-
and reshape to b, t, d.
|
287 |
-
Then apply standard transformer action.
|
288 |
-
Finally, reshape to image
|
289 |
-
NEW: use_linear for more efficiency instead of the 1x1 convs
|
290 |
-
"""
|
291 |
-
def __init__(self, in_channels, n_heads, d_head,
|
292 |
-
depth=1, dropout=0., context_dim=None,
|
293 |
-
disable_self_attn=False, use_linear=False,
|
294 |
-
use_checkpoint=True):
|
295 |
-
super().__init__()
|
296 |
-
if exists(context_dim) and not isinstance(context_dim, list):
|
297 |
-
context_dim = [context_dim]
|
298 |
-
self.in_channels = in_channels
|
299 |
-
inner_dim = n_heads * d_head
|
300 |
-
self.norm = Normalize(in_channels)
|
301 |
-
if not use_linear:
|
302 |
-
self.proj_in = nn.Conv2d(in_channels,
|
303 |
-
inner_dim,
|
304 |
-
kernel_size=1,
|
305 |
-
stride=1,
|
306 |
-
padding=0)
|
307 |
-
else:
|
308 |
-
self.proj_in = nn.Linear(in_channels, inner_dim)
|
309 |
-
|
310 |
-
self.transformer_blocks = nn.ModuleList(
|
311 |
-
[BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim[d],
|
312 |
-
disable_self_attn=disable_self_attn, checkpoint=use_checkpoint)
|
313 |
-
for d in range(depth)]
|
314 |
-
)
|
315 |
-
if not use_linear:
|
316 |
-
self.proj_out = zero_module(nn.Conv2d(inner_dim,
|
317 |
-
in_channels,
|
318 |
-
kernel_size=1,
|
319 |
-
stride=1,
|
320 |
-
padding=0))
|
321 |
-
else:
|
322 |
-
self.proj_out = zero_module(nn.Linear(in_channels, inner_dim))
|
323 |
-
self.use_linear = use_linear
|
324 |
-
|
325 |
-
def forward(self, x, context=None):
|
326 |
-
# note: if no context is given, cross-attention defaults to self-attention
|
327 |
-
if not isinstance(context, list):
|
328 |
-
context = [context]
|
329 |
-
b, c, h, w = x.shape
|
330 |
-
x_in = x
|
331 |
-
x = self.norm(x)
|
332 |
-
if not self.use_linear:
|
333 |
-
x = self.proj_in(x)
|
334 |
-
x = rearrange(x, 'b c h w -> b (h w) c').contiguous()
|
335 |
-
if self.use_linear:
|
336 |
-
x = self.proj_in(x)
|
337 |
-
for i, block in enumerate(self.transformer_blocks):
|
338 |
-
x = block(x, context=context[i])
|
339 |
-
if self.use_linear:
|
340 |
-
x = self.proj_out(x)
|
341 |
-
x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w).contiguous()
|
342 |
-
if not self.use_linear:
|
343 |
-
x = self.proj_out(x)
|
344 |
-
return x + x_in
|
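The deleted AttnBlock above computes plain scaled dot-product attention over flattened spatial positions. A minimal NumPy sketch of the same einsum pattern (shapes and math only, no learned projections or PyTorch dependency):

```python
import numpy as np

def spatial_attention(q, k, v):
    # q, k, v: (b, c, h, w) feature maps, as in the deleted AttnBlock.forward
    b, c, h, w = q.shape
    q2 = q.reshape(b, c, h * w).transpose(0, 2, 1)        # b, hw, c
    k2 = k.reshape(b, c, h * w)                           # b, c, hw
    w_ = np.einsum('bij,bjk->bik', q2, k2) * c ** -0.5    # b, hw_q, hw_k
    w_ = np.exp(w_ - w_.max(axis=2, keepdims=True))       # softmax over keys (dim 2),
    w_ = w_ / w_.sum(axis=2, keepdims=True)               # matching softmax(w_, dim=2)
    v2 = v.reshape(b, c, h * w)                           # b, c, hw
    # matches: h_ = einsum('bij,bjk->bik', v, w_.transpose)
    out = np.einsum('bij,bjk->bik', v2, w_.transpose(0, 2, 1))
    return out.reshape(b, c, h, w)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 4, 3, 3))
y = spatial_attention(x, x, x)
print(y.shape)  # (1, 4, 3, 3)
```

Note the output keeps the input's (b, c, h, w) shape, which is what lets the block add the residual `x + h_`.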
spaces/Admin08077/Cosmosis/app.py
DELETED
@@ -1,90 +0,0 @@
import streamlit as st
import pandas as pd
import smtplib

# Custom CSS for fancy styling
st.markdown("""
<style>
.big-title {
    font-size: 48px !important;
    color: lime;
    text-shadow: 3px 3px 3px red;
}
.sub-title {
    font-size: 24px;
    color: green;
    text-shadow: 1px 1px 1px red;
}
</style>
""", unsafe_allow_html=True)

st.markdown("<div class='big-title'>THE IPN APP BY:</div>", unsafe_allow_html=True)
st.markdown("<div class='sub-title'>Citibank Demo Business Inc.</div>", unsafe_allow_html=True)

class PromissoryNote:
    def __init__(self, instrument_id, order_of, place_issued, date_issued,
                 numeric_amount, amount, debtor_name, autograph_date):
        self.instrument_id = instrument_id
        self.order_of = order_of
        self.place_issued = place_issued
        self.date_issued = date_issued
        self.numeric_amount = numeric_amount
        self.amount = amount
        self.debtor_name = debtor_name
        self.autograph_date = autograph_date

    def get_details(self):
        return {
            'Instrument ID': self.instrument_id,
            'Order Of': self.order_of,
            'Place Issued': self.place_issued,
            'Date Issued': self.date_issued,
            'Numeric Amount': self.numeric_amount,
            'Amount': self.amount,
            'Debtor Name': self.debtor_name,
            'Autograph Date': self.autograph_date
        }

    def create_note(self):
        return f'WORLD CITIZENS OF THE SOLAR MONMATIA INTERNATIONAL PROMISSORY NOTE...\n{self.get_details()}...ANY ALTERATION OR ERASURE VOIDS THIS CERTIFICATE...'

def send_email(note_details):
    # Dummy email sending function
    pass

def save_to_csv(note_details):
    # Convert the note details dictionary to a DataFrame
    df = pd.DataFrame([note_details])
    # Append the note details to an existing CSV file
    df.to_csv('promissory_notes.csv', mode='a', header=False)

def main():
    st.title("Promissory Note Generator")

    instrument_id = st.text_input("Enter the instrument ID: ")
    order_of = st.text_input("Enter the order of: ")
    place_issued = st.text_input("Enter the place issued: ")
    date_issued = st.date_input("Enter the date issued: ")
    numeric_amount = st.text_input("Enter the numeric amount: ")
    amount = st.text_input("Enter the amount: ")
    debtor_name = st.text_input("Enter the debtor name: ")
    autograph_date = st.date_input("Enter the autograph date: ")

    if st.button("Generate Note"):
        new_note = PromissoryNote(instrument_id, order_of, place_issued, date_issued, numeric_amount,
                                  amount, debtor_name, autograph_date)
        note_details = new_note.get_details()

        # Display Note
        st.text_area("Generated Note:", new_note.create_note())

        # Save to CSV
        save_to_csv(note_details)
        st.success('Note saved to CSV.')

        # Send Email Notification (dummy function, replace with actual code)
        send_email(note_details)
        st.success('Email notification sent.')

if __name__ == '__main__':
    main()
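The app's `save_to_csv` helper appends each note as one CSV row via pandas. An equivalent standard-library sketch of that append pattern (hypothetical field names, no pandas dependency), writing fields in the dict's insertion order:

```python
import csv
import io

def append_note(fp, note_details):
    # Append one note as a single CSV row; field order follows the
    # dict's insertion order, mirroring pd.DataFrame([note_details]).to_csv(mode='a').
    writer = csv.writer(fp)
    writer.writerow(note_details.values())

# In the real app fp would be open('promissory_notes.csv', 'a'); a buffer works for demo.
buf = io.StringIO()
note = {'Instrument ID': 'IPN-001', 'Order Of': 'Alice', 'Amount': '100'}
append_note(buf, note)
print(buf.getvalue().strip())  # IPN-001,Alice,100
```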
spaces/AgentVerse/agentVerse/agentverse/agents/tasksolving_agent/manager.py
DELETED
@@ -1,116 +0,0 @@
from __future__ import annotations

import asyncio
from colorama import Fore

from agentverse.logging import get_logger
import bdb
from string import Template
from typing import TYPE_CHECKING, List, Tuple

from agentverse.message import Message

from agentverse.agents import agent_registry
from agentverse.agents.base import BaseAgent
from agentverse.utils import AgentCriticism

import random
from rapidfuzz import fuzz


logger = get_logger()


@agent_registry.register("manager")
class ManagerAgent(BaseAgent):
    prompt_template: str

    def step(
        self,
        former_solution: str,
        candidate_critic_opinions: List[AgentCriticism],
        advice: str,
        task_description: str = "",
        previous_sentence: str = "",
    ) -> Message:
        logger.debug("", self.name, Fore.MAGENTA)

        prompt = self._fill_prompt_template(
            former_solution,
            candidate_critic_opinions,
            advice,
            task_description,
            previous_sentence,
        )

        logger.debug(f"Prompt:\n{prompt}", "Manager", Fore.CYAN)
        parsed_response = None
        for i in range(self.max_retry):
            try:
                # LLM Manager
                # response = self.llm.generate_response(prompt)
                # parsed_response = self.output_parser.parse(response)
                selected_role_description = self.llm.generate_response(prompt).content
                candidate_score_list = [
                    fuzz.ratio(candidate.sender, selected_role_description)
                    for candidate in candidate_critic_opinions
                ]
                selected_index = candidate_score_list.index(max(candidate_score_list))
                candidate_critic_opinion = candidate_critic_opinions[selected_index]

                # Random Manager
                # parsed_response = random.choice(candidate_critic_opinions)
                break
            except (KeyboardInterrupt, bdb.BdbQuit):
                raise
            except Exception as e:
                logger.error(e)
                logger.warn("Retrying...")
                continue
        return candidate_critic_opinion

    async def astep(self, env_description: str = "") -> Message:
        """Asynchronous version of step"""
        pass

    def _fill_prompt_template(
        self,
        former_solution: str,
        candidate_critic_opinions: List[AgentCriticism],
        advice: str,
        task_description: str,
        previous_sentence: str,
    ) -> str:
        """Fill the placeholders in the prompt template

        In the role_assigner agent, five placeholders are supported:
        - ${task_description}
        - ${former_solution}
        - ${critic_messages}
        - ${advice}
        - ${previous_sentence}
        """
        input_arguments = {
            "task_description": task_description,
            "former_solution": former_solution,
            "previous_sentence": previous_sentence,
            "critic_opinions": "\n".join(
                [
                    f"Role: {critic.sender}. {critic.sender_agent.role_description} said: {critic.content}"
                    for critic in candidate_critic_opinions
                ]
            ),
            "advice": advice,
        }

        # the manager selects the proper sentence
        template = Template(self.prompt_template)
        return template.safe_substitute(input_arguments)

    def add_message_to_memory(self, messages: List[Message]) -> None:
        self.memory.add_message(messages)

    def reset(self) -> None:
        """Reset the agent"""
        self.memory.reset()
        # TODO: reset receiver
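`ManagerAgent.step` picks the critic whose sender name best fuzzy-matches the LLM's free-text reply (`fuzz.ratio` plus `index(max(...))`). The same selection pattern sketched with the standard library's `difflib` as a rough stand-in for rapidfuzz's ratio:

```python
from difflib import SequenceMatcher

def select_best(candidates, reply):
    # Score each candidate name against the reply and return the best match,
    # mirroring the fuzz.ratio + candidate_score_list.index(max(...)) pattern.
    scores = [SequenceMatcher(None, c, reply).ratio() for c in candidates]
    return candidates[scores.index(max(scores))]

print(select_best(['critic', 'reviewer', 'planner'],
                  'I choose the reviewer role'))  # reviewer
```

`SequenceMatcher.ratio` is not identical to rapidfuzz's `ratio`, but both reward the candidate string that overlaps most with the reply, which is all the selection step needs.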
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreatePages.js
DELETED
@@ -1,8 +0,0 @@
import CreateAnySizer from './utils/CreateAnySizer.js';
import Pages from '../../pages/Pages.js';

var CreatePages = function (scene, data, view, styles, customBuilders) {
    return CreateAnySizer(scene, data, view, styles, customBuilders, Pages);
}

export default CreatePages;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/defaultcallbacks/GetDefaultCallbacks.js
DELETED
@@ -1,32 +0,0 @@
import VisibleCallbacks from './VisibleCallbacks.js';
import FadeCallbacks from './FadeCallbacks.js';
import MoveCallbacks from './MoveCallbacks.js';
import MovePanelCallbacks from './MovePanelCallbacks.js';
import NOOP from '../../../../plugins/utils/object/NOOP.js';

const DefaultCallbacks = {
    visible: VisibleCallbacks,
    fade: FadeCallbacks,
    move: MoveCallbacks,
    'move-panel': MovePanelCallbacks
}

var GetDefaultCallbacks = function (config) {
    var callbackType, callbackParams;
    [callbackType, ...callbackParams] = (typeof (config) === 'string') ? [config] : config;

    var showCallback, hideCallback;
    if (DefaultCallbacks.hasOwnProperty(callbackType)) {
        showCallback = DefaultCallbacks[callbackType].show.apply(null, callbackParams);
        hideCallback = DefaultCallbacks[callbackType].hide.apply(null, callbackParams);
    } else {
        showCallback = NOOP;
        hideCallback = NOOP;
    }
    return {
        show: showCallback,
        hide: hideCallback
    }
}

export default GetDefaultCallbacks;
spaces/AlanMars/QYL-AI-Space/Dockerfile
DELETED
@@ -1,18 +0,0 @@
FROM python:3.9-slim-buster as builder
RUN apt-get update \
    && apt-get install -y build-essential \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
COPY requirements_advanced.txt .
RUN pip install --user --no-cache-dir -r requirements.txt
# RUN pip install --user --no-cache-dir -r requirements_advanced.txt

FROM python:3.9-slim-buster
LABEL maintainer="iskoldt"
COPY --from=builder /root/.local /root/.local
ENV PATH=/root/.local/bin:$PATH
COPY . /app
WORKDIR /app
ENV dockerrun=yes
# Shell form so a shell interprets the redirection and pipe; in the original
# exec form, "2>&1" and "|" were passed to python3 as literal arguments.
CMD python3 -u ChuanhuChatbot.py 2>&1 | tee /var/log/application.log
spaces/AlexWang/lama/saicinpainting/evaluation/losses/fid/fid_score.py
DELETED
@@ -1,328 +0,0 @@
|
|
1 |
-
#!/usr/bin/env python3
|
2 |
-
"""Calculates the Frechet Inception Distance (FID) to evalulate GANs
|
3 |
-
|
4 |
-
The FID metric calculates the distance between two distributions of images.
|
5 |
-
Typically, we have summary statistics (mean & covariance matrix) of one
|
6 |
-
of these distributions, while the 2nd distribution is given by a GAN.
|
7 |
-
|
8 |
-
When run as a stand-alone program, it compares the distribution of
|
9 |
-
images that are stored as PNG/JPEG at a specified location with a
|
10 |
-
distribution given by summary statistics (in pickle format).
|
11 |
-
|
12 |
-
The FID is calculated by assuming that X_1 and X_2 are the activations of
|
13 |
-
the pool_3 layer of the inception net for generated samples and real world
|
14 |
-
samples respectively.
|
15 |
-
|
16 |
-
See --help to see further details.
|
17 |
-
|
18 |
-
Code apapted from https://github.com/bioinf-jku/TTUR to use PyTorch instead
|
19 |
-
of Tensorflow
|
20 |
-
|
21 |
-
Copyright 2018 Institute of Bioinformatics, JKU Linz
|
22 |
-
|
23 |
-
Licensed under the Apache License, Version 2.0 (the "License");
|
24 |
-
you may not use this file except in compliance with the License.
|
25 |
-
You may obtain a copy of the License at
|
26 |
-
|
27 |
-
http://www.apache.org/licenses/LICENSE-2.0
|
28 |
-
|
29 |
-
Unless required by applicable law or agreed to in writing, software
|
30 |
-
distributed under the License is distributed on an "AS IS" BASIS,
|
31 |
-
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
32 |
-
See the License for the specific language governing permissions and
|
33 |
-
limitations under the License.
|
34 |
-
"""
|
35 |
-
import os
|
36 |
-
import pathlib
|
37 |
-
from argparse import ArgumentDefaultsHelpFormatter, ArgumentParser
|
38 |
-
|
39 |
-
import numpy as np
|
40 |
-
import torch
|
41 |
-
# from scipy.misc import imread
|
42 |
-
from imageio import imread
|
43 |
-
from PIL import Image, JpegImagePlugin
|
44 |
-
from scipy import linalg
|
45 |
-
from torch.nn.functional import adaptive_avg_pool2d
|
46 |
-
from torchvision.transforms import CenterCrop, Compose, Resize, ToTensor
|
47 |
-
|
48 |
-
try:
|
49 |
-
from tqdm import tqdm
|
50 |
-
except ImportError:
|
51 |
-
# If not tqdm is not available, provide a mock version of it
|
52 |
-
def tqdm(x): return x
|
53 |
-
|
54 |
-
try:
|
55 |
-
from .inception import InceptionV3
|
56 |
-
except ModuleNotFoundError:
|
57 |
-
from inception import InceptionV3
|
58 |
-
|
59 |
-
parser = ArgumentParser(formatter_class=ArgumentDefaultsHelpFormatter)
|
60 |
-
parser.add_argument('path', type=str, nargs=2,
|
61 |
-
help=('Path to the generated images or '
|
62 |
-
'to .npz statistic files'))
|
63 |
-
parser.add_argument('--batch-size', type=int, default=50,
|
64 |
-
help='Batch size to use')
|
65 |
-
parser.add_argument('--dims', type=int, default=2048,
|
66 |
-
choices=list(InceptionV3.BLOCK_INDEX_BY_DIM),
|
67 |
-
help=('Dimensionality of Inception features to use. '
|
68 |
-
'By default, uses pool3 features'))
|
69 |
-
parser.add_argument('-c', '--gpu', default='', type=str,
|
70 |
-
help='GPU to use (leave blank for CPU only)')
|
71 |
-
parser.add_argument('--resize', default=256)
|
72 |
-
|
73 |
-
transform = Compose([Resize(256), CenterCrop(256), ToTensor()])
|
74 |
-
|
75 |
-
|
76 |
-
def get_activations(files, model, batch_size=50, dims=2048,
|
77 |
-
cuda=False, verbose=False, keep_size=False):
|
78 |
-
"""Calculates the activations of the pool_3 layer for all images.
|
79 |
-
|
80 |
-
Params:
|
81 |
-
-- files : List of image files paths
|
82 |
-
-- model : Instance of inception model
|
83 |
-
-- batch_size : Batch size of images for the model to process at once.
|
84 |
-
Make sure that the number of samples is a multiple of
|
85 |
-
the batch size, otherwise some samples are ignored. This
|
86 |
-
behavior is retained to match the original FID score
|
87 |
-
implementation.
|
88 |
-
-- dims : Dimensionality of features returned by Inception
|
89 |
-
-- cuda : If set to True, use GPU
|
90 |
-
-- verbose : If set to True and parameter out_step is given, the number
|
91 |
-
of calculated batches is reported.
|
92 |
-
Returns:
|
93 |
-
-- A numpy array of dimension (num images, dims) that contains the
|
94 |
-
activations of the given tensor when feeding inception with the
|
95 |
-
query tensor.
|
96 |
-
"""
|
97 |
-
model.eval()
|
98 |
-
|
99 |
-
if len(files) % batch_size != 0:
|
100 |
-
print(('Warning: number of images is not a multiple of the '
|
101 |
-
'batch size. Some samples are going to be ignored.'))
|
102 |
-
if batch_size > len(files):
|
103 |
-
print(('Warning: batch size is bigger than the data size. '
|
104 |
-
'Setting batch size to data size'))
|
105 |
-
batch_size = len(files)
|
106 |
-
|
107 |
-
n_batches = len(files) // batch_size
|
108 |
-
n_used_imgs = n_batches * batch_size
|
109 |
-
|
110 |
-
pred_arr = np.empty((n_used_imgs, dims))
|
111 |
-
|
112 |
-
for i in tqdm(range(n_batches)):
|
113 |
-
if verbose:
|
114 |
-
print('\rPropagating batch %d/%d' % (i + 1, n_batches),
|
115 |
-
end='', flush=True)
|
116 |
-
start = i * batch_size
|
117 |
-
end = start + batch_size
|
118 |
-
|
119 |
-
# # Official code goes below
|
120 |
-
# images = np.array([imread(str(f)).astype(np.float32)
|
121 |
-
# for f in files[start:end]])
|
122 |
-
|
123 |
-
# # Reshape to (n_images, 3, height, width)
|
124 |
-
# images = images.transpose((0, 3, 1, 2))
|
125 |
-
# images /= 255
|
126 |
-
# batch = torch.from_numpy(images).type(torch.FloatTensor)
|
127 |
-
# #
|
128 |
-
|
129 |
-
t = transform if not keep_size else ToTensor()
|
130 |
-
|
131 |
-
if isinstance(files[0], pathlib.PosixPath):
|
132 |
-
images = [t(Image.open(str(f))) for f in files[start:end]]
|
133 |
-
|
134 |
-
elif isinstance(files[0], Image.Image):
|
135 |
-
images = [t(f) for f in files[start:end]]
|
136 |
-
|
137 |
-
else:
|
138 |
-
raise ValueError(f"Unknown data type for image: {type(files[0])}")
|
139 |
-
|
140 |
-
batch = torch.stack(images)
|
141 |
-
|
142 |
-
if cuda:
|
143 |
-
batch = batch.cuda()
|
144 |
-
|
145 |
-
pred = model(batch)[0]
|
146 |
-
|
147 |
-
# If model output is not scalar, apply global spatial average pooling.
|
148 |
-
# This happens if you choose a dimensionality not equal 2048.
|
149 |
-
if pred.shape[2] != 1 or pred.shape[3] != 1:
|
150 |
-
pred = adaptive_avg_pool2d(pred, output_size=(1, 1))
|
151 |
-
|
152 |
-
pred_arr[start:end] = pred.cpu().data.numpy().reshape(batch_size, -1)
|
153 |
-
|
154 |
-
if verbose:
|
155 |
-
print(' done')
|
156 |
-
|
157 |
-
return pred_arr
|
158 |
-
|
159 |
-
|
160 |
-
def calculate_frechet_distance(mu1, sigma1, mu2, sigma2, eps=1e-6):
|
161 |
-
"""Numpy implementation of the Frechet Distance.
|
162 |
-
The Frechet distance between two multivariate Gaussians X_1 ~ N(mu_1, C_1)
|
163 |
-
and X_2 ~ N(mu_2, C_2) is
|
164 |
-
d^2 = ||mu_1 - mu_2||^2 + Tr(C_1 + C_2 - 2*sqrt(C_1*C_2)).
|
165 |
-
|
166 |
-
Stable version by Dougal J. Sutherland.
|
167 |
-
|
168 |
-
Params:
|
169 |
-
-- mu1 : Numpy array containing the activations of a layer of the
|
170 |
-
inception net (like returned by the function 'get_predictions')
|
171 |
-
for generated samples.
|
172 |
-
-- mu2 : The sample mean over activations, precalculated on an
|
173 |
-
representative data set.
|
174 |
-
-- sigma1: The covariance matrix over activations for generated samples.
|
175 |
-
-- sigma2: The covariance matrix over activations, precalculated on an
|
176 |
-
representative data set.
|
177 |
-
|
178 |
-
Returns:
|
179 |
-
-- : The Frechet Distance.
|
180 |
-
"""
|
181 |
-
|
182 |
-
mu1 = np.atleast_1d(mu1)
|
183 |
-
mu2 = np.atleast_1d(mu2)
|
184 |
-
|
185 |
-
sigma1 = np.atleast_2d(sigma1)
|
186 |
-
sigma2 = np.atleast_2d(sigma2)
|
187 |
-
|
188 |
-
assert mu1.shape == mu2.shape, \
|
189 |
-
'Training and test mean vectors have different lengths'
|
190 |
-
assert sigma1.shape == sigma2.shape, \
|
191 |
-
'Training and test covariances have different dimensions'
|
192 |
-
|
193 |
-
diff = mu1 - mu2
|
194 |
-
|
195 |
-
# Product might be almost singular
|
196 |
-
covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)
|
197 |
-
if not np.isfinite(covmean).all():
|
198 |
-
msg = ('fid calculation produces singular product; '
|
199 |
-
'adding %s to diagonal of cov estimates') % eps
|
200 |
-
print(msg)
|
201 |
-
offset = np.eye(sigma1.shape[0]) * eps
|
202 |
-
covmean = linalg.sqrtm((sigma1 + offset).dot(sigma2 + offset))
|
203 |
-
|
204 |
-
# Numerical error might give slight imaginary component
|
205 |
-
if np.iscomplexobj(covmean):
|
206 |
-
# if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-3):
|
207 |
-
if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-2):
|
208 |
-
m = np.max(np.abs(covmean.imag))
|
209 |
-
raise ValueError('Imaginary component {}'.format(m))
|
210 |
-
covmean = covmean.real
|
211 |
-
|
212 |
-
tr_covmean = np.trace(covmean)
|
213 |
-
|
214 |
-
return (diff.dot(diff) + np.trace(sigma1) +
|
215 |
-
np.trace(sigma2) - 2 * tr_covmean)
|
216 |
-
|
217 |
-
|
218 |
-
def calculate_activation_statistics(files, model, batch_size=50,
|
219 |
-
dims=2048, cuda=False, verbose=False, keep_size=False):
|
220 |
-
"""Calculation of the statistics used by the FID.
|
221 |
-
Params:
|
222 |
-
-- files : List of image files paths
|
223 |
-
-- model : Instance of inception model
|
224 |
-
-- batch_size : The images numpy array is split into batches with
|
225 |
-
batch size batch_size. A reasonable batch size
|
226 |
-
depends on the hardware.
|
227 |
-
-- dims : Dimensionality of features returned by Inception
|
228 |
-
-- cuda : If set to True, use GPU
|
229 |
-
-- verbose : If set to True and parameter out_step is given, the
|
230 |
-
number of calculated batches is reported.
|
231 |
-
Returns:
|
232 |
-
    -- mu : The mean over samples of the activations of the pool_3 layer of
       the inception model.
    -- sigma : The covariance matrix of the activations of the pool_3 layer of
       the inception model.
    """
    act = get_activations(files, model, batch_size, dims, cuda, verbose, keep_size=keep_size)
    mu = np.mean(act, axis=0)
    sigma = np.cov(act, rowvar=False)
    return mu, sigma


def _compute_statistics_of_path(path, model, batch_size, dims, cuda):
    if path.endswith('.npz'):
        f = np.load(path)
        m, s = f['mu'][:], f['sigma'][:]
        f.close()
    else:
        path = pathlib.Path(path)
        files = list(path.glob('*.jpg')) + list(path.glob('*.png'))
        m, s = calculate_activation_statistics(files, model, batch_size,
                                               dims, cuda)

    return m, s


def _compute_statistics_of_images(images, model, batch_size, dims, cuda, keep_size=False):
    if isinstance(images, list):  # exact paths to files are provided
        m, s = calculate_activation_statistics(images, model, batch_size,
                                               dims, cuda, keep_size=keep_size)

        return m, s

    else:
        raise ValueError('images must be a list of image files or PIL images')


def calculate_fid_given_paths(paths, batch_size, cuda, dims):
    """Calculates the FID of two paths"""
    for p in paths:
        if not os.path.exists(p):
            raise RuntimeError('Invalid path: %s' % p)

    block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[dims]

    model = InceptionV3([block_idx])
    if cuda:
        model.cuda()

    m1, s1 = _compute_statistics_of_path(paths[0], model, batch_size,
                                         dims, cuda)
    m2, s2 = _compute_statistics_of_path(paths[1], model, batch_size,
                                         dims, cuda)
    fid_value = calculate_frechet_distance(m1, s1, m2, s2)

    return fid_value


def calculate_fid_given_images(images, batch_size, cuda, dims, use_globals=False, keep_size=False):
    if use_globals:
        global FID_MODEL  # for multiprocessing

    for imgs in images:
        if isinstance(imgs, list) and isinstance(imgs[0], (Image.Image, JpegImagePlugin.JpegImageFile)):
            pass
        else:
            raise RuntimeError('Invalid images')

    block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[dims]

    if 'FID_MODEL' not in globals() or not use_globals:
        model = InceptionV3([block_idx])
        if cuda:
            model.cuda()

        if use_globals:
            FID_MODEL = model

    else:
        model = FID_MODEL

    # propagate keep_size to the statistics computation
    m1, s1 = _compute_statistics_of_images(images[0], model, batch_size,
                                           dims, cuda, keep_size=keep_size)
    m2, s2 = _compute_statistics_of_images(images[1], model, batch_size,
                                           dims, cuda, keep_size=keep_size)
    fid_value = calculate_frechet_distance(m1, s1, m2, s2)
    return fid_value


if __name__ == '__main__':
    args = parser.parse_args()
    os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu

    fid_value = calculate_fid_given_paths(args.path,
                                          args.batch_size,
                                          args.gpu != '',
                                          args.dims)
    print('FID: ', fid_value)
spaces/AlishbaImran/Redox-Flow-Battery-Prediction/README.md
DELETED
@@ -1,14 +0,0 @@
---
title: Redox-Flow-Battery-Prediction
emoji:
colorFrom: pink
colorTo: red
sdk: streamlit
sdk_version: 1.10.0
app_file: app.py
pinned: false
license: mit
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
This work is built on top of the paper: https://chemrxiv.org/engage/chemrxiv/article-details/60c7575f469df44a40f45465 and platform: https://github.com/mcsorkun/RedPred-web
spaces/Amiminoru/whoreproxy/Dockerfile
DELETED
@@ -1,11 +0,0 @@
FROM node:18-bullseye-slim
RUN apt-get update && \
    apt-get install -y git
RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
WORKDIR /app
RUN npm install
COPY Dockerfile greeting.md* .env* ./
RUN npm run build
EXPOSE 7860
ENV NODE_ENV=production
CMD [ "npm", "start" ]
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_models_vq.py
DELETED
@@ -1,96 +0,0 @@
# coding=utf-8
# Copyright 2023 HuggingFace Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import unittest

import torch

from diffusers import VQModel
from diffusers.utils import floats_tensor, torch_device
from diffusers.utils.testing_utils import enable_full_determinism

from .test_modeling_common import ModelTesterMixin, UNetTesterMixin


enable_full_determinism()


class VQModelTests(ModelTesterMixin, UNetTesterMixin, unittest.TestCase):
    model_class = VQModel
    main_input_name = "sample"

    @property
    def dummy_input(self, sizes=(32, 32)):
        batch_size = 4
        num_channels = 3

        image = floats_tensor((batch_size, num_channels) + sizes).to(torch_device)

        return {"sample": image}

    @property
    def input_shape(self):
        return (3, 32, 32)

    @property
    def output_shape(self):
        return (3, 32, 32)

    def prepare_init_args_and_inputs_for_common(self):
        init_dict = {
            "block_out_channels": [32, 64],
            "in_channels": 3,
            "out_channels": 3,
            "down_block_types": ["DownEncoderBlock2D", "DownEncoderBlock2D"],
            "up_block_types": ["UpDecoderBlock2D", "UpDecoderBlock2D"],
            "latent_channels": 3,
        }
        inputs_dict = self.dummy_input
        return init_dict, inputs_dict

    def test_forward_signature(self):
        pass

    def test_training(self):
        pass

    def test_from_pretrained_hub(self):
        model, loading_info = VQModel.from_pretrained("fusing/vqgan-dummy", output_loading_info=True)
        self.assertIsNotNone(model)
        self.assertEqual(len(loading_info["missing_keys"]), 0)

        model.to(torch_device)
        image = model(**self.dummy_input)

        assert image is not None, "Make sure output is not None"

    def test_output_pretrained(self):
        model = VQModel.from_pretrained("fusing/vqgan-dummy")
        model.to(torch_device).eval()

        torch.manual_seed(0)
        if torch.cuda.is_available():
            torch.cuda.manual_seed_all(0)

        image = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size)
        image = image.to(torch_device)
        with torch.no_grad():
            output = model(image).sample

        output_slice = output[0, -1, -3:, -3:].flatten().cpu()
        # fmt: off
        expected_output_slice = torch.tensor([-0.0153, -0.4044, -0.1880, -0.5161, -0.2418, -0.4072, -0.1612, -0.0633, -0.0143])
        # fmt: on
        self.assertTrue(torch.allclose(output_slice, expected_output_slice, atol=1e-3))
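The VQModel exercised by these tests compresses images through a vector-quantized bottleneck: each encoder output vector is snapped to its nearest codebook embedding. A minimal numpy sketch of that nearest-neighbour lookup (illustrative only, not diffusers' actual quantizer implementation):

```python
import numpy as np

def nearest_codebook(latents, codebook):
    """Map each latent vector to the closest codebook entry (L2 distance).

    latents:  (n, d) array of encoder outputs
    codebook: (k, d) array of learned embeddings
    returns:  quantized (n, d) array and the chosen indices (n,)
    """
    # squared distances via ||x - e||^2 = ||x||^2 - 2 x.e + ||e||^2
    d2 = (
        (latents ** 2).sum(axis=1, keepdims=True)
        - 2 * latents @ codebook.T
        + (codebook ** 2).sum(axis=1)
    )
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
q, idx = nearest_codebook(np.array([[0.1, -0.2], [0.9, 1.2]]), codebook)
```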
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/test_pipelines_common.py
DELETED
@@ -1,804 +0,0 @@
import contextlib
import gc
import inspect
import io
import re
import tempfile
import unittest
from typing import Callable, Union

import numpy as np
import PIL
import torch

import diffusers
from diffusers import DiffusionPipeline
from diffusers.image_processor import VaeImageProcessor
from diffusers.schedulers import KarrasDiffusionSchedulers
from diffusers.utils import logging
from diffusers.utils.import_utils import is_accelerate_available, is_accelerate_version, is_xformers_available
from diffusers.utils.testing_utils import CaptureLogger, require_torch, torch_device


def to_np(tensor):
    if isinstance(tensor, torch.Tensor):
        tensor = tensor.detach().cpu().numpy()

    return tensor


def check_same_shape(tensor_list):
    shapes = [tensor.shape for tensor in tensor_list]
    return all(shape == shapes[0] for shape in shapes[1:])


class PipelineLatentTesterMixin:
    """
    This mixin is designed to be used with PipelineTesterMixin and unittest.TestCase classes.
    It provides a set of common tests for PyTorch pipeline that has vae, e.g.
    equivalence of different input and output types, etc.
    """

    @property
    def image_params(self) -> frozenset:
        raise NotImplementedError(
            "You need to set the attribute `image_params` in the child test class. "
            "`image_params` are tested for if all accepted input image types (i.e. `pt`,`pil`,`np`) are producing same results"
        )

    @property
    def image_latents_params(self) -> frozenset:
        raise NotImplementedError(
            "You need to set the attribute `image_latents_params` in the child test class. "
            "`image_latents_params` are tested for if passing latents directly are producing same results"
        )

    def get_dummy_inputs_by_type(self, device, seed=0, input_image_type="pt", output_type="np"):
        inputs = self.get_dummy_inputs(device, seed)

        def convert_to_pt(image):
            if isinstance(image, torch.Tensor):
                input_image = image
            elif isinstance(image, np.ndarray):
                input_image = VaeImageProcessor.numpy_to_pt(image)
            elif isinstance(image, PIL.Image.Image):
                input_image = VaeImageProcessor.pil_to_numpy(image)
                input_image = VaeImageProcessor.numpy_to_pt(input_image)
            else:
                raise ValueError(f"unsupported input_image_type {type(image)}")
            return input_image

        def convert_pt_to_type(image, input_image_type):
            if input_image_type == "pt":
                input_image = image
            elif input_image_type == "np":
                input_image = VaeImageProcessor.pt_to_numpy(image)
            elif input_image_type == "pil":
                input_image = VaeImageProcessor.pt_to_numpy(image)
                input_image = VaeImageProcessor.numpy_to_pil(input_image)
            else:
                raise ValueError(f"unsupported input_image_type {input_image_type}.")
            return input_image

        for image_param in self.image_params:
            if image_param in inputs.keys():
                inputs[image_param] = convert_pt_to_type(
                    convert_to_pt(inputs[image_param]).to(device), input_image_type
                )

        inputs["output_type"] = output_type

        return inputs

    def test_pt_np_pil_outputs_equivalent(self, expected_max_diff=1e-4):
        self._test_pt_np_pil_outputs_equivalent(expected_max_diff=expected_max_diff)

    def _test_pt_np_pil_outputs_equivalent(self, expected_max_diff=1e-4, input_image_type="pt"):
        components = self.get_dummy_components()
        pipe = self.pipeline_class(**components)
        pipe = pipe.to(torch_device)
        pipe.set_progress_bar_config(disable=None)

        output_pt = pipe(
            **self.get_dummy_inputs_by_type(torch_device, input_image_type=input_image_type, output_type="pt")
        )[0]
        output_np = pipe(
            **self.get_dummy_inputs_by_type(torch_device, input_image_type=input_image_type, output_type="np")
        )[0]
        output_pil = pipe(
            **self.get_dummy_inputs_by_type(torch_device, input_image_type=input_image_type, output_type="pil")
        )[0]

        max_diff = np.abs(output_pt.cpu().numpy().transpose(0, 2, 3, 1) - output_np).max()
        self.assertLess(
            max_diff, expected_max_diff, "`output_type=='pt'` generate different results from `output_type=='np'`"
        )

        max_diff = np.abs(np.array(output_pil[0]) - (output_np * 255).round()).max()
        self.assertLess(max_diff, 2.0, "`output_type=='pil'` generate different results from `output_type=='np'`")

    def test_pt_np_pil_inputs_equivalent(self):
        if len(self.image_params) == 0:
            return

        components = self.get_dummy_components()
        pipe = self.pipeline_class(**components)
        pipe = pipe.to(torch_device)
        pipe.set_progress_bar_config(disable=None)

        out_input_pt = pipe(**self.get_dummy_inputs_by_type(torch_device, input_image_type="pt"))[0]
        out_input_np = pipe(**self.get_dummy_inputs_by_type(torch_device, input_image_type="np"))[0]
        out_input_pil = pipe(**self.get_dummy_inputs_by_type(torch_device, input_image_type="pil"))[0]

        max_diff = np.abs(out_input_pt - out_input_np).max()
        self.assertLess(max_diff, 1e-4, "`input_type=='pt'` generate different result from `input_type=='np'`")
        max_diff = np.abs(out_input_pil - out_input_np).max()
        self.assertLess(max_diff, 1e-2, "`input_type=='pt'` generate different result from `input_type=='np'`")

    def test_latents_input(self):
        if len(self.image_latents_params) == 0:
            return

        components = self.get_dummy_components()
        pipe = self.pipeline_class(**components)
        pipe.image_processor = VaeImageProcessor(do_resize=False, do_normalize=False)
        pipe = pipe.to(torch_device)
        pipe.set_progress_bar_config(disable=None)

        out = pipe(**self.get_dummy_inputs_by_type(torch_device, input_image_type="pt"))[0]

        vae = components["vae"]
        inputs = self.get_dummy_inputs_by_type(torch_device, input_image_type="pt")
        generator = inputs["generator"]
        for image_param in self.image_latents_params:
            if image_param in inputs.keys():
                inputs[image_param] = (
                    vae.encode(inputs[image_param]).latent_dist.sample(generator) * vae.config.scaling_factor
                )
        out_latents_inputs = pipe(**inputs)[0]

        max_diff = np.abs(out - out_latents_inputs).max()
        self.assertLess(max_diff, 1e-4, "passing latents as image input generate different result from passing image")


@require_torch
class PipelineKarrasSchedulerTesterMixin:
    """
    This mixin is designed to be used with unittest.TestCase classes.
    It provides a set of common tests for each PyTorch pipeline that makes use of KarrasDiffusionSchedulers
    equivalence of dict and tuple outputs, etc.
    """

    def test_karras_schedulers_shape(self):
        components = self.get_dummy_components()
        pipe = self.pipeline_class(**components)

        # make sure that PNDM does not need warm-up
        pipe.scheduler.register_to_config(skip_prk_steps=True)

        pipe.to(torch_device)
        pipe.set_progress_bar_config(disable=None)
        inputs = self.get_dummy_inputs(torch_device)
        inputs["num_inference_steps"] = 2

        if "strength" in inputs:
            inputs["num_inference_steps"] = 4
            inputs["strength"] = 0.5

        outputs = []
        for scheduler_enum in KarrasDiffusionSchedulers:
            if "KDPM2" in scheduler_enum.name:
                inputs["num_inference_steps"] = 5

            scheduler_cls = getattr(diffusers, scheduler_enum.name)
            pipe.scheduler = scheduler_cls.from_config(pipe.scheduler.config)
            output = pipe(**inputs)[0]
            outputs.append(output)

            if "KDPM2" in scheduler_enum.name:
                inputs["num_inference_steps"] = 2

        assert check_same_shape(outputs)


@require_torch
class PipelineTesterMixin:
    """
    This mixin is designed to be used with unittest.TestCase classes.
    It provides a set of common tests for each PyTorch pipeline, e.g. saving and loading the pipeline,
    equivalence of dict and tuple outputs, etc.
    """

    # Canonical parameters that are passed to `__call__` regardless
    # of the type of pipeline. They are always optional and have common
    # sense default values.
    required_optional_params = frozenset(
        [
            "num_inference_steps",
            "num_images_per_prompt",
            "generator",
            "latents",
            "output_type",
            "return_dict",
            "callback",
            "callback_steps",
        ]
    )

    # set these parameters to False in the child class if the pipeline does not support the corresponding functionality
    test_attention_slicing = True

    test_xformers_attention = True

    def get_generator(self, seed):
        device = torch_device if torch_device != "mps" else "cpu"
        generator = torch.Generator(device).manual_seed(seed)
        return generator

    @property
    def pipeline_class(self) -> Union[Callable, DiffusionPipeline]:
        raise NotImplementedError(
            "You need to set the attribute `pipeline_class = ClassNameOfPipeline` in the child test class. "
            "See existing pipeline tests for reference."
        )

    def get_dummy_components(self):
        raise NotImplementedError(
            "You need to implement `get_dummy_components(self)` in the child test class. "
            "See existing pipeline tests for reference."
        )

    def get_dummy_inputs(self, device, seed=0):
        raise NotImplementedError(
            "You need to implement `get_dummy_inputs(self, device, seed)` in the child test class. "
            "See existing pipeline tests for reference."
        )

    @property
    def params(self) -> frozenset:
        raise NotImplementedError(
            "You need to set the attribute `params` in the child test class. "
            "`params` are checked for if all values are present in `__call__`'s signature."
            " You can set `params` using one of the common set of parameters defined in `pipeline_params.py`"
            " e.g., `TEXT_TO_IMAGE_PARAMS` defines the common parameters used in text to "
            "image pipelines, including prompts and prompt embedding overrides."
            "If your pipeline's set of arguments has minor changes from one of the common sets of arguments, "
            "do not make modifications to the existing common sets of arguments. I.e. a text to image pipeline "
            "with non-configurable height and width arguments should set the attribute as "
            "`params = TEXT_TO_IMAGE_PARAMS - {'height', 'width'}`. "
            "See existing pipeline tests for reference."
        )

    @property
    def batch_params(self) -> frozenset:
        raise NotImplementedError(
            "You need to set the attribute `batch_params` in the child test class. "
            "`batch_params` are the parameters required to be batched when passed to the pipeline's "
            "`__call__` method. `pipeline_params.py` provides some common sets of parameters such as "
            "`TEXT_TO_IMAGE_BATCH_PARAMS`, `IMAGE_VARIATION_BATCH_PARAMS`, etc... If your pipeline's "
            "set of batch arguments has minor changes from one of the common sets of batch arguments, "
            "do not make modifications to the existing common sets of batch arguments. I.e. a text to "
            "image pipeline `negative_prompt` is not batched should set the attribute as "
            "`batch_params = TEXT_TO_IMAGE_BATCH_PARAMS - {'negative_prompt'}`. "
            "See existing pipeline tests for reference."
        )

    def tearDown(self):
        # clean up the VRAM after each test in case of CUDA runtime errors
        super().tearDown()
        gc.collect()
        torch.cuda.empty_cache()

    def test_save_load_local(self, expected_max_difference=1e-4):
        components = self.get_dummy_components()
        pipe = self.pipeline_class(**components)
        pipe.to(torch_device)
        pipe.set_progress_bar_config(disable=None)

        inputs = self.get_dummy_inputs(torch_device)
        output = pipe(**inputs)[0]

        logger = logging.get_logger("diffusers.pipelines.pipeline_utils")
        logger.setLevel(diffusers.logging.INFO)

        with tempfile.TemporaryDirectory() as tmpdir:
            pipe.save_pretrained(tmpdir)

            with CaptureLogger(logger) as cap_logger:
                pipe_loaded = self.pipeline_class.from_pretrained(tmpdir)

            for name in pipe_loaded.components.keys():
                if name not in pipe_loaded._optional_components:
                    assert name in str(cap_logger)

            pipe_loaded.to(torch_device)
            pipe_loaded.set_progress_bar_config(disable=None)

        inputs = self.get_dummy_inputs(torch_device)
        output_loaded = pipe_loaded(**inputs)[0]

        max_diff = np.abs(to_np(output) - to_np(output_loaded)).max()
        self.assertLess(max_diff, expected_max_difference)

    def test_pipeline_call_signature(self):
        self.assertTrue(
            hasattr(self.pipeline_class, "__call__"), f"{self.pipeline_class} should have a `__call__` method"
        )

        parameters = inspect.signature(self.pipeline_class.__call__).parameters

        optional_parameters = set()

        for k, v in parameters.items():
            if v.default != inspect._empty:
                optional_parameters.add(k)

        parameters = set(parameters.keys())
        parameters.remove("self")
        parameters.discard("kwargs")  # kwargs can be added if arguments of pipeline call function are deprecated

        remaining_required_parameters = set()

        for param in self.params:
            if param not in parameters:
                remaining_required_parameters.add(param)

        self.assertTrue(
            len(remaining_required_parameters) == 0,
            f"Required parameters not present: {remaining_required_parameters}",
        )

        remaining_required_optional_parameters = set()

        for param in self.required_optional_params:
            if param not in optional_parameters:
                remaining_required_optional_parameters.add(param)

        self.assertTrue(
            len(remaining_required_optional_parameters) == 0,
            f"Required optional parameters not present: {remaining_required_optional_parameters}",
        )

    def test_inference_batch_consistent(self, batch_sizes=[2, 4, 13]):
        self._test_inference_batch_consistent(batch_sizes=batch_sizes)

    def _test_inference_batch_consistent(
        self, batch_sizes=[2, 4, 13], additional_params_copy_to_batched_inputs=["num_inference_steps"]
    ):
        components = self.get_dummy_components()
        pipe = self.pipeline_class(**components)
        pipe.to(torch_device)
        pipe.set_progress_bar_config(disable=None)

        inputs = self.get_dummy_inputs(torch_device)

        logger = logging.get_logger(pipe.__module__)
        logger.setLevel(level=diffusers.logging.FATAL)

        # batchify inputs
        for batch_size in batch_sizes:
            batched_inputs = {}
            for name, value in inputs.items():
                if name in self.batch_params:
                    # prompt is string
                    if name == "prompt":
                        len_prompt = len(value)
                        # make unequal batch sizes
                        batched_inputs[name] = [value[: len_prompt // i] for i in range(1, batch_size + 1)]

                        # make last batch super long
                        batched_inputs[name][-1] = 100 * "very long"
                    # or else we have images
                    else:
                        batched_inputs[name] = batch_size * [value]
                elif name == "batch_size":
                    batched_inputs[name] = batch_size
                else:
                    batched_inputs[name] = value

            for arg in additional_params_copy_to_batched_inputs:
                batched_inputs[arg] = inputs[arg]

            batched_inputs["output_type"] = "np"

            if self.pipeline_class.__name__ == "DanceDiffusionPipeline":
                batched_inputs.pop("output_type")

            output = pipe(**batched_inputs)

            assert len(output[0]) == batch_size

            batched_inputs["output_type"] = "np"

            if self.pipeline_class.__name__ == "DanceDiffusionPipeline":
                batched_inputs.pop("output_type")

            output = pipe(**batched_inputs)[0]

            assert output.shape[0] == batch_size

        logger.setLevel(level=diffusers.logging.WARNING)

    def test_inference_batch_single_identical(self, batch_size=3, expected_max_diff=1e-4):
        self._test_inference_batch_single_identical(batch_size=batch_size, expected_max_diff=expected_max_diff)

    def _test_inference_batch_single_identical(
        self,
        batch_size=3,
        test_max_difference=None,
        test_mean_pixel_difference=None,
        relax_max_difference=False,
        expected_max_diff=1e-4,
        additional_params_copy_to_batched_inputs=["num_inference_steps"],
    ):
        if test_max_difference is None:
            # TODO(Pedro) - not sure why, but not at all reproducible at the moment it seems
            # make sure that batched and non-batched is identical
            test_max_difference = torch_device != "mps"

        if test_mean_pixel_difference is None:
            # TODO same as above
            test_mean_pixel_difference = torch_device != "mps"

        components = self.get_dummy_components()
        pipe = self.pipeline_class(**components)
        pipe.to(torch_device)
        pipe.set_progress_bar_config(disable=None)

        inputs = self.get_dummy_inputs(torch_device)

        logger = logging.get_logger(pipe.__module__)
        logger.setLevel(level=diffusers.logging.FATAL)

        # batchify inputs
        batched_inputs = {}
        batch_size = batch_size
        for name, value in inputs.items():
            if name in self.batch_params:
                # prompt is string
                if name == "prompt":
                    len_prompt = len(value)
                    # make unequal batch sizes
                    batched_inputs[name] = [value[: len_prompt // i] for i in range(1, batch_size + 1)]

                    # make last batch super long
                    batched_inputs[name][-1] = 100 * "very long"
                # or else we have images
                else:
                    batched_inputs[name] = batch_size * [value]
            elif name == "batch_size":
                batched_inputs[name] = batch_size
            elif name == "generator":
                batched_inputs[name] = [self.get_generator(i) for i in range(batch_size)]
            else:
                batched_inputs[name] = value

        for arg in additional_params_copy_to_batched_inputs:
            batched_inputs[arg] = inputs[arg]

        if self.pipeline_class.__name__ != "DanceDiffusionPipeline":
            batched_inputs["output_type"] = "np"

        output_batch = pipe(**batched_inputs)
        assert output_batch[0].shape[0] == batch_size

        inputs["generator"] = self.get_generator(0)

        output = pipe(**inputs)

        logger.setLevel(level=diffusers.logging.WARNING)
        if test_max_difference:
            if relax_max_difference:
                # Taking the median of the largest <n> differences
                # is resilient to outliers
                diff = np.abs(output_batch[0][0] - output[0][0])
                diff = diff.flatten()
                diff.sort()
                max_diff = np.median(diff[-5:])
            else:
                max_diff = np.abs(output_batch[0][0] - output[0][0]).max()
            assert max_diff < expected_max_diff

        if test_mean_pixel_difference:
            assert_mean_pixel_difference(output_batch[0][0], output[0][0])

    def test_dict_tuple_outputs_equivalent(self, expected_max_difference=1e-4):
        components = self.get_dummy_components()
        pipe = self.pipeline_class(**components)
        pipe.to(torch_device)
        pipe.set_progress_bar_config(disable=None)

        output = pipe(**self.get_dummy_inputs(torch_device))[0]
        output_tuple = pipe(**self.get_dummy_inputs(torch_device), return_dict=False)[0]

        max_diff = np.abs(to_np(output) - to_np(output_tuple)).max()
        self.assertLess(max_diff, expected_max_difference)

    def test_components_function(self):
        init_components = self.get_dummy_components()
        pipe = self.pipeline_class(**init_components)

        self.assertTrue(hasattr(pipe, "components"))
        self.assertTrue(set(pipe.components.keys()) == set(init_components.keys()))

    @unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA")
    def test_float16_inference(self, expected_max_diff=1e-2):
        components = self.get_dummy_components()
        pipe = self.pipeline_class(**components)
        pipe.to(torch_device)
        pipe.set_progress_bar_config(disable=None)

        pipe_fp16 = self.pipeline_class(**components)
        pipe_fp16.to(torch_device, torch.float16)
        pipe_fp16.set_progress_bar_config(disable=None)

        output = pipe(**self.get_dummy_inputs(torch_device))[0]
        output_fp16 = pipe_fp16(**self.get_dummy_inputs(torch_device))[0]

        max_diff = np.abs(to_np(output) - to_np(output_fp16)).max()
        self.assertLess(max_diff, expected_max_diff, "The outputs of the fp16 and fp32 pipelines are too different.")

    @unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA")
    def test_save_load_float16(self, expected_max_diff=1e-2):
        components = self.get_dummy_components()
        for name, module in components.items():
            if hasattr(module, "half"):
                components[name] = module.to(torch_device).half()
        pipe = self.pipeline_class(**components)
        pipe.to(torch_device)
        pipe.set_progress_bar_config(disable=None)

        inputs = self.get_dummy_inputs(torch_device)
        output = pipe(**inputs)[0]

        with tempfile.TemporaryDirectory() as tmpdir:
            pipe.save_pretrained(tmpdir)
            pipe_loaded = self.pipeline_class.from_pretrained(tmpdir, torch_dtype=torch.float16)
            pipe_loaded.to(torch_device)
            pipe_loaded.set_progress_bar_config(disable=None)

        for name, component in pipe_loaded.components.items():
            if hasattr(component, "dtype"):
                self.assertTrue(
                    component.dtype == torch.float16,
                    f"`{name}.dtype` switched from `float16` to {component.dtype} after loading.",
                )

        inputs = self.get_dummy_inputs(torch_device)
        output_loaded = pipe_loaded(**inputs)[0]

        max_diff = np.abs(to_np(output) - to_np(output_loaded)).max()
        self.assertLess(
            max_diff, expected_max_diff, "The output of the fp16 pipeline changed after saving and loading."
        )

    def test_save_load_optional_components(self, expected_max_difference=1e-4):
        if not hasattr(self.pipeline_class, "_optional_components"):
            return

        components = self.get_dummy_components()
        pipe = self.pipeline_class(**components)
        pipe.to(torch_device)
        pipe.set_progress_bar_config(disable=None)

        # set all optional components to None
        for optional_component in pipe._optional_components:
            setattr(pipe, optional_component, None)

        inputs = self.get_dummy_inputs(torch_device)
        output = pipe(**inputs)[0]

        with tempfile.TemporaryDirectory() as tmpdir:
            pipe.save_pretrained(tmpdir)
            pipe_loaded = self.pipeline_class.from_pretrained(tmpdir)
            pipe_loaded.to(torch_device)
            pipe_loaded.set_progress_bar_config(disable=None)

            for optional_component in pipe._optional_components:
                self.assertTrue(
                    getattr(pipe_loaded, optional_component) is None,
                    f"`{optional_component}` did not stay set to None after loading.",
                )

        inputs = self.get_dummy_inputs(torch_device)
        output_loaded = pipe_loaded(**inputs)[0]
|
605 |
-
|
606 |
-
max_diff = np.abs(to_np(output) - to_np(output_loaded)).max()
|
607 |
-
self.assertLess(max_diff, expected_max_difference)
|
608 |
-
|
609 |
-
@unittest.skipIf(torch_device != "cuda", reason="CUDA and CPU are required to switch devices")
|
610 |
-
def test_to_device(self):
|
611 |
-
components = self.get_dummy_components()
|
612 |
-
pipe = self.pipeline_class(**components)
|
613 |
-
pipe.set_progress_bar_config(disable=None)
|
614 |
-
|
615 |
-
pipe.to("cpu")
|
616 |
-
model_devices = [component.device.type for component in components.values() if hasattr(component, "device")]
|
617 |
-
self.assertTrue(all(device == "cpu" for device in model_devices))
|
618 |
-
|
619 |
-
output_cpu = pipe(**self.get_dummy_inputs("cpu"))[0]
|
620 |
-
self.assertTrue(np.isnan(output_cpu).sum() == 0)
|
621 |
-
|
622 |
-
pipe.to("cuda")
|
623 |
-
model_devices = [component.device.type for component in components.values() if hasattr(component, "device")]
|
624 |
-
self.assertTrue(all(device == "cuda" for device in model_devices))
|
625 |
-
|
626 |
-
output_cuda = pipe(**self.get_dummy_inputs("cuda"))[0]
|
627 |
-
self.assertTrue(np.isnan(to_np(output_cuda)).sum() == 0)
|
628 |
-
|
629 |
-
def test_to_dtype(self):
|
630 |
-
components = self.get_dummy_components()
|
631 |
-
pipe = self.pipeline_class(**components)
|
632 |
-
pipe.set_progress_bar_config(disable=None)
|
633 |
-
|
634 |
-
model_dtypes = [component.dtype for component in components.values() if hasattr(component, "dtype")]
|
635 |
-
self.assertTrue(all(dtype == torch.float32 for dtype in model_dtypes))
|
636 |
-
|
637 |
-
pipe.to(torch_dtype=torch.float16)
|
638 |
-
model_dtypes = [component.dtype for component in components.values() if hasattr(component, "dtype")]
|
639 |
-
self.assertTrue(all(dtype == torch.float16 for dtype in model_dtypes))
|
640 |
-
|
641 |
-
def test_attention_slicing_forward_pass(self, expected_max_diff=1e-3):
|
642 |
-
self._test_attention_slicing_forward_pass(expected_max_diff=expected_max_diff)
|
643 |
-
|
644 |
-
def _test_attention_slicing_forward_pass(
|
645 |
-
self, test_max_difference=True, test_mean_pixel_difference=True, expected_max_diff=1e-3
|
646 |
-
):
|
647 |
-
if not self.test_attention_slicing:
|
648 |
-
return
|
649 |
-
|
650 |
-
components = self.get_dummy_components()
|
651 |
-
pipe = self.pipeline_class(**components)
|
652 |
-
pipe.to(torch_device)
|
653 |
-
pipe.set_progress_bar_config(disable=None)
|
654 |
-
|
655 |
-
inputs = self.get_dummy_inputs(torch_device)
|
656 |
-
output_without_slicing = pipe(**inputs)[0]
|
657 |
-
|
658 |
-
pipe.enable_attention_slicing(slice_size=1)
|
659 |
-
inputs = self.get_dummy_inputs(torch_device)
|
660 |
-
output_with_slicing = pipe(**inputs)[0]
|
661 |
-
|
662 |
-
if test_max_difference:
|
663 |
-
max_diff = np.abs(to_np(output_with_slicing) - to_np(output_without_slicing)).max()
|
664 |
-
self.assertLess(max_diff, expected_max_diff, "Attention slicing should not affect the inference results")
|
665 |
-
|
666 |
-
if test_mean_pixel_difference:
|
667 |
-
assert_mean_pixel_difference(output_with_slicing[0], output_without_slicing[0])
|
668 |
-
|
669 |
-
@unittest.skipIf(
|
670 |
-
torch_device != "cuda" or not is_accelerate_available() or is_accelerate_version("<", "0.14.0"),
|
671 |
-
reason="CPU offload is only available with CUDA and `accelerate v0.14.0` or higher",
|
672 |
-
)
|
673 |
-
def test_cpu_offload_forward_pass(self, expected_max_diff=1e-4):
|
674 |
-
components = self.get_dummy_components()
|
675 |
-
pipe = self.pipeline_class(**components)
|
676 |
-
pipe.to(torch_device)
|
677 |
-
pipe.set_progress_bar_config(disable=None)
|
678 |
-
|
679 |
-
inputs = self.get_dummy_inputs(torch_device)
|
680 |
-
output_without_offload = pipe(**inputs)[0]
|
681 |
-
|
682 |
-
pipe.enable_sequential_cpu_offload()
|
683 |
-
inputs = self.get_dummy_inputs(torch_device)
|
684 |
-
output_with_offload = pipe(**inputs)[0]
|
685 |
-
|
686 |
-
max_diff = np.abs(to_np(output_with_offload) - to_np(output_without_offload)).max()
|
687 |
-
self.assertLess(max_diff, expected_max_diff, "CPU offloading should not affect the inference results")
|
688 |
-
|
689 |
-
@unittest.skipIf(
|
690 |
-
torch_device != "cuda" or not is_xformers_available(),
|
691 |
-
reason="XFormers attention is only available with CUDA and `xformers` installed",
|
692 |
-
)
|
693 |
-
def test_xformers_attention_forwardGenerator_pass(self):
|
694 |
-
self._test_xformers_attention_forwardGenerator_pass()
|
695 |
-
|
696 |
-
def _test_xformers_attention_forwardGenerator_pass(
|
697 |
-
self, test_max_difference=True, test_mean_pixel_difference=True, expected_max_diff=1e-4
|
698 |
-
):
|
699 |
-
if not self.test_xformers_attention:
|
700 |
-
return
|
701 |
-
|
702 |
-
components = self.get_dummy_components()
|
703 |
-
pipe = self.pipeline_class(**components)
|
704 |
-
pipe.to(torch_device)
|
705 |
-
pipe.set_progress_bar_config(disable=None)
|
706 |
-
|
707 |
-
inputs = self.get_dummy_inputs(torch_device)
|
708 |
-
output_without_offload = pipe(**inputs)[0]
|
709 |
-
output_without_offload = (
|
710 |
-
output_without_offload.cpu() if torch.is_tensor(output_without_offload) else output_without_offload
|
711 |
-
)
|
712 |
-
|
713 |
-
pipe.enable_xformers_memory_efficient_attention()
|
714 |
-
inputs = self.get_dummy_inputs(torch_device)
|
715 |
-
output_with_offload = pipe(**inputs)[0]
|
716 |
-
output_with_offload = (
|
717 |
-
output_with_offload.cpu() if torch.is_tensor(output_with_offload) else output_without_offload
|
718 |
-
)
|
719 |
-
|
720 |
-
if test_max_difference:
|
721 |
-
max_diff = np.abs(output_with_offload - output_without_offload).max()
|
722 |
-
self.assertLess(max_diff, expected_max_diff, "XFormers attention should not affect the inference results")
|
723 |
-
|
724 |
-
if test_mean_pixel_difference:
|
725 |
-
assert_mean_pixel_difference(output_with_offload[0], output_without_offload[0])
|
726 |
-
|
727 |
-
def test_progress_bar(self):
|
728 |
-
components = self.get_dummy_components()
|
729 |
-
pipe = self.pipeline_class(**components)
|
730 |
-
pipe.to(torch_device)
|
731 |
-
|
732 |
-
inputs = self.get_dummy_inputs(torch_device)
|
733 |
-
with io.StringIO() as stderr, contextlib.redirect_stderr(stderr):
|
734 |
-
_ = pipe(**inputs)
|
735 |
-
stderr = stderr.getvalue()
|
736 |
-
# we can't calculate the number of progress steps beforehand e.g. for strength-dependent img2img,
|
737 |
-
# so we just match "5" in "#####| 1/5 [00:01<00:00]"
|
738 |
-
max_steps = re.search("/(.*?) ", stderr).group(1)
|
739 |
-
self.assertTrue(max_steps is not None and len(max_steps) > 0)
|
740 |
-
self.assertTrue(
|
741 |
-
f"{max_steps}/{max_steps}" in stderr, "Progress bar should be enabled and stopped at the max step"
|
742 |
-
)
|
743 |
-
|
744 |
-
pipe.set_progress_bar_config(disable=True)
|
745 |
-
with io.StringIO() as stderr, contextlib.redirect_stderr(stderr):
|
746 |
-
_ = pipe(**inputs)
|
747 |
-
self.assertTrue(stderr.getvalue() == "", "Progress bar should be disabled")
|
748 |
-
|
749 |
-
def test_num_images_per_prompt(self):
|
750 |
-
sig = inspect.signature(self.pipeline_class.__call__)
|
751 |
-
|
752 |
-
if "num_images_per_prompt" not in sig.parameters:
|
753 |
-
return
|
754 |
-
|
755 |
-
components = self.get_dummy_components()
|
756 |
-
pipe = self.pipeline_class(**components)
|
757 |
-
pipe = pipe.to(torch_device)
|
758 |
-
pipe.set_progress_bar_config(disable=None)
|
759 |
-
|
760 |
-
batch_sizes = [1, 2]
|
761 |
-
num_images_per_prompts = [1, 2]
|
762 |
-
|
763 |
-
for batch_size in batch_sizes:
|
764 |
-
for num_images_per_prompt in num_images_per_prompts:
|
765 |
-
inputs = self.get_dummy_inputs(torch_device)
|
766 |
-
|
767 |
-
for key in inputs.keys():
|
768 |
-
if key in self.batch_params:
|
769 |
-
inputs[key] = batch_size * [inputs[key]]
|
770 |
-
|
771 |
-
images = pipe(**inputs, num_images_per_prompt=num_images_per_prompt)[0]
|
772 |
-
|
773 |
-
assert images.shape[0] == batch_size * num_images_per_prompt
|
774 |
-
|
775 |
-
def test_cfg(self):
|
776 |
-
sig = inspect.signature(self.pipeline_class.__call__)
|
777 |
-
|
778 |
-
if "guidance_scale" not in sig.parameters:
|
779 |
-
return
|
780 |
-
|
781 |
-
components = self.get_dummy_components()
|
782 |
-
pipe = self.pipeline_class(**components)
|
783 |
-
pipe = pipe.to(torch_device)
|
784 |
-
pipe.set_progress_bar_config(disable=None)
|
785 |
-
|
786 |
-
inputs = self.get_dummy_inputs(torch_device)
|
787 |
-
|
788 |
-
inputs["guidance_scale"] = 1.0
|
789 |
-
out_no_cfg = pipe(**inputs)[0]
|
790 |
-
|
791 |
-
inputs["guidance_scale"] = 7.5
|
792 |
-
out_cfg = pipe(**inputs)[0]
|
793 |
-
|
794 |
-
assert out_cfg.shape == out_no_cfg.shape
|
795 |
-
|
796 |
-
|
797 |
-
# Some models (e.g. unCLIP) are extremely likely to significantly deviate depending on which hardware is used.
|
798 |
-
# This helper function is used to check that the image doesn't deviate on average more than 10 pixels from a
|
799 |
-
# reference image.
|
800 |
-
def assert_mean_pixel_difference(image, expected_image, expected_max_diff=10):
|
801 |
-
image = np.asarray(DiffusionPipeline.numpy_to_pil(image)[0], dtype=np.float32)
|
802 |
-
expected_image = np.asarray(DiffusionPipeline.numpy_to_pil(expected_image)[0], dtype=np.float32)
|
803 |
-
avg_diff = np.abs(image - expected_image).mean()
|
804 |
-
assert avg_diff < expected_max_diff, f"Error image deviates {avg_diff} pixels on average"
|
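The deleted tests above all reduce pipeline equivalence to one pattern: take the maximum absolute element-wise difference between two outputs and assert it stays under a tolerance. A minimal standalone sketch of that check, in plain Python without NumPy (function names are illustrative, not from the original file):

```python
def max_abs_diff(a, b):
    """Maximum element-wise absolute difference between two flat sequences."""
    assert len(a) == len(b), "outputs must have the same length"
    return max(abs(x - y) for x, y in zip(a, b))


def outputs_close(a, b, tol=1e-4):
    """Boolean version of `self.assertLess(max_diff, expected_max_diff)`."""
    return max_abs_diff(a, b) < tol
```

The real tests apply the same idea to `np.ndarray` outputs via `np.abs(to_np(a) - to_np(b)).max()`, which handles arbitrary tensor shapes in one call.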
spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r101_fpn_2x_coco.py
DELETED
@@ -1,2 +0,0 @@
_base_ = './faster_rcnn_r50_fpn_2x_coco.py'
model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101))

spaces/Andy1621/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r101-d8_769x769_80k_cityscapes.py
DELETED
@@ -1,2 +0,0 @@
_base_ = './nonlocal_r50-d8_769x769_80k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))

spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_512x512_80k_ade20k.py
DELETED
@@ -1,6 +0,0 @@
_base_ = [
    '../_base_/models/pspnet_r50-d8.py', '../_base_/datasets/ade20k.py',
    '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
]
model = dict(
    decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))

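The deleted configs above rely on `_base_` inheritance: a child config is recursively merged over its base files, with the child's values winning on conflicts. A rough sketch of that merge rule in plain Python (simplified; the real mmcv config loader also handles `_delete_` keys, list bases, and file loading):

```python
def merge_cfg(base, override):
    """Recursively merge `override` into `base`, child values winning,
    mimicking how `_base_` config files are combined."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge_cfg(out[key], value)  # merge nested dicts
        else:
            out[key] = value  # scalars and new keys: child wins
    return out
```

This is why the two-line configs above are complete: they only state the deltas (a ResNet-101 backbone, or ADE20K class counts) against their `_base_` files.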
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/modules/exllama_hf.py
DELETED
@@ -1,174 +0,0 @@
import os
from pathlib import Path
from typing import Any, Dict, Optional, Union

import torch
from torch.nn import CrossEntropyLoss
from transformers import GenerationConfig, PretrainedConfig, PreTrainedModel
from transformers.modeling_outputs import CausalLMOutputWithPast

from modules import shared
from modules.logging_colors import logger

try:
    from exllama.model import ExLlama, ExLlamaCache, ExLlamaConfig
except:
    logger.warning('Exllama module failed to load. Will attempt to load from repositories.')
    try:
        from modules.relative_imports import RelativeImport

        with RelativeImport("repositories/exllama"):
            from model import ExLlama, ExLlamaCache, ExLlamaConfig
    except:
        logger.error("Could not find repositories/exllama/. Make sure that exllama is cloned inside repositories/ and is up to date.")
        raise


class ExllamaHF(PreTrainedModel):
    def __init__(self, config: ExLlamaConfig):
        super().__init__(PretrainedConfig())
        self.ex_config = config
        self.ex_model = ExLlama(self.ex_config)
        self.generation_config = GenerationConfig()
        self.lora = None

        self.ex_cache = ExLlamaCache(self.ex_model)
        self.past_seq = None

        if shared.args.cfg_cache:
            self.ex_cache_negative = ExLlamaCache(self.ex_model)
            self.past_seq_negative = None

    def _validate_model_class(self):
        pass

    def _validate_model_kwargs(self, model_kwargs: Dict[str, Any]):
        pass

    def prepare_inputs_for_generation(self, input_ids, **kwargs):
        return {'input_ids': input_ids, **kwargs}

    @property
    def device(self) -> torch.device:
        return torch.device(0)

    def __call__(self, *args, **kwargs):
        use_cache = kwargs.get('use_cache', True)
        labels = kwargs.get('labels', None)
        past_key_values = kwargs.get('past_key_values', None)

        if len(args) > 0:
            if not shared.args.cfg_cache:
                logger.error("Please enable the cfg-cache option to use CFG with ExLlama_HF.")
                return

            input_ids = args[0]
            is_negative = True
            past_seq = self.past_seq_negative
            ex_cache = self.ex_cache_negative
        else:
            input_ids = kwargs['input_ids']
            is_negative = False
            past_seq = self.past_seq
            ex_cache = self.ex_cache

        seq = input_ids[0].tolist()
        if is_negative and past_key_values is not None:
            seq = past_key_values + seq

        seq_tensor = torch.tensor(seq)
        reset = True

        # Make the forward call
        if labels is None:
            if past_seq is not None:
                min_length = min(past_seq.shape[0], seq_tensor.shape[0])
                indices = torch.nonzero(~torch.eq(past_seq[:min_length], seq_tensor[:min_length]))
                if len(indices) > 0:
                    longest_prefix = indices[0].item()
                else:
                    longest_prefix = min_length

                if longest_prefix > 0:
                    reset = False
                    ex_cache.current_seq_len = longest_prefix
                    if len(seq_tensor) - longest_prefix > 1:
                        self.ex_model.forward(seq_tensor[longest_prefix:-1].view(1, -1), ex_cache, preprocess_only=True, lora=self.lora)
                    elif len(seq_tensor) == longest_prefix:
                        # Very tricky: if the prefix we are reusing *is* the input_ids, then we have to back up the cache pointer by one,
                        # because we feed input_ids[-1] to forward() below, but that last token is already in the cache!
                        ex_cache.current_seq_len -= 1

            if reset:
                ex_cache.current_seq_len = 0
                if len(seq_tensor) > 1:
                    self.ex_model.forward(seq_tensor[:-1].view(1, -1), ex_cache, preprocess_only=True, lora=self.lora)

            logits = self.ex_model.forward(seq_tensor[-1:].view(1, -1), ex_cache, lora=self.lora).to(input_ids.device)
        else:
            ex_cache.current_seq_len = 0
            logits = self.ex_model.forward(seq_tensor.view(1, -1), ex_cache, last_id_only=False, lora=self.lora)

        if is_negative:
            self.past_seq_negative = seq_tensor
        else:
            self.past_seq = seq_tensor

        loss = None
        if labels is not None:
            # Shift so that tokens < n predict n
            shift_logits = logits[..., :-1, :].contiguous()
            shift_labels = labels[..., 1:].contiguous()
            # Flatten the tokens
            loss_fct = CrossEntropyLoss()
            shift_logits = shift_logits.view(-1, logits.shape[-1])
            shift_labels = shift_labels.view(-1)
            # Enable model parallelism
            shift_labels = shift_labels.to(shift_logits.device)
            loss = loss_fct(shift_logits, shift_labels)

        return CausalLMOutputWithPast(logits=logits, past_key_values=seq if use_cache else None, loss=loss)

    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], *model_args, **kwargs):
        assert len(model_args) == 0 and len(kwargs) == 0, "extra args are currently not supported"
        if isinstance(pretrained_model_name_or_path, str):
            pretrained_model_name_or_path = Path(pretrained_model_name_or_path)

        pretrained_model_name_or_path = Path(f'{shared.args.model_dir}') / Path(pretrained_model_name_or_path)
        config = ExLlamaConfig(pretrained_model_name_or_path / 'config.json')

        # from 'oobabooga/text-generation-webui/modules/exllama.py'
        weight_path = None
        for ext in ['.safetensors', '.pt', '.bin']:
            found = list(pretrained_model_name_or_path.glob(f"*{ext}"))
            if len(found) > 0:
                weight_path = found[-1]
                break
        assert weight_path is not None, f'could not find weight in "{pretrained_model_name_or_path}"'

        config.model_path = str(weight_path)
        config.max_seq_len = shared.args.max_seq_len
        config.compress_pos_emb = shared.args.compress_pos_emb
        if shared.args.gpu_split:
            config.set_auto_map(shared.args.gpu_split)
            config.gpu_peer_fix = True

        if shared.args.alpha_value > 1 and shared.args.rope_freq_base == 0:
            config.alpha_value = shared.args.alpha_value
            config.calculate_rotary_embedding_base()
        elif shared.args.rope_freq_base > 0:
            config.rotary_embedding_base = shared.args.rope_freq_base

        if torch.version.hip:
            config.rmsnorm_no_half2 = True
            config.rope_no_half2 = True
            config.matmul_no_half2 = True
            config.silu_no_half2 = True

        # This slows generation down a bit but aligns better with AutoGPTQ generation.
        # TODO: Should give the user a choice to tune the exllama config
        # config.fused_attn = False
        # config.fused_mlp_thd = 0

        return ExllamaHF(config)

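The cache-reuse logic in the deleted `ExllamaHF.__call__` hinges on finding the longest shared prefix between the cached token sequence and the new input, so only the unseen suffix needs a forward pass. A plain-list sketch of that prefix computation (illustrative only; the original does the same thing with `torch.nonzero` on an element-wise inequality):

```python
def longest_common_prefix(past_seq, seq):
    """Length of the shared prefix between the cached token sequence
    and the new input sequence."""
    n = min(len(past_seq), len(seq))
    for i in range(n):
        if past_seq[i] != seq[i]:
            return i  # first mismatch ends the reusable prefix
    return n  # one sequence is a prefix of the other


def tokens_to_process(past_seq, seq):
    """How many trailing tokens actually need a forward pass when the
    cache pointer is rewound to the common prefix."""
    return len(seq) - longest_common_prefix(past_seq, seq)
```

In typical generation the new input is the old sequence plus one sampled token, so `tokens_to_process` is 1 and almost the entire context is served from the cache.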
spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/gui/ui_win.py
DELETED
@@ -1,164 +0,0 @@
|
|
1 |
-
# -*- coding: utf-8 -*-
|
2 |
-
|
3 |
-
# Form implementation generated from reading ui file 'ui_window.ui'
|
4 |
-
#
|
5 |
-
# Created by: PyQt5 UI code generator 5.11.2
|
6 |
-
#
|
7 |
-
# WARNING! All changes made in this file will be lost!
|
8 |
-
|
9 |
-
from PyQt5 import QtCore, QtGui, QtWidgets
|
10 |
-
|
11 |
-
|
12 |
-
class Ui_Form(object):
|
13 |
-
def setupUi(self, Form):
|
14 |
-
Form.setObjectName("Form")
|
15 |
-
Form.resize(1480, 1280)
|
16 |
-
self.label = QtWidgets.QLabel(Form)
|
17 |
-
self.label.setGeometry(QtCore.QRect(500, 10, 500, 40))
|
18 |
-
font = QtGui.QFont()
|
19 |
-
font.setPointSize(18)
|
20 |
-
font.setBold(True)
|
21 |
-
font.setUnderline(False)
|
22 |
-
font.setWeight(75)
|
23 |
-
self.label.setFont(font)
|
24 |
-
self.label.setAlignment(QtCore.Qt.AlignCenter)
|
25 |
-
self.label.setObjectName("label")
|
26 |
-
|
27 |
-
self.layoutWidget1 = QtWidgets.QWidget(Form)
|
28 |
-
self.layoutWidget1.setGeometry(QtCore.QRect(60, 60, 150, 30))
|
29 |
-
self.layoutWidget1.setObjectName("layoutWidget1")
|
30 |
-
self.horizontalLayout_1 = QtWidgets.QHBoxLayout(self.layoutWidget1)
|
31 |
-
self.horizontalLayout_1.setContentsMargins(0, 0, 0, 0)
|
32 |
-
self.horizontalLayout_1.setObjectName("horizontalLayout_2")
|
33 |
-
self.label_2 = QtWidgets.QLabel(self.layoutWidget1)
|
34 |
-
self.label_2.setAlignment(QtCore.Qt.AlignCenter)
|
35 |
-
self.label_2.setObjectName("label_2")
|
36 |
-
self.horizontalLayout_1.addWidget(self.label_2)
|
37 |
-
self.spinBox = QtWidgets.QSpinBox(self.layoutWidget1)
|
38 |
-
self.spinBox.setMinimum(3)
|
39 |
-
self.spinBox.setMaximum(40)
|
40 |
-
self.spinBox.setSingleStep(2)
|
41 |
-
self.spinBox.setProperty("value", 3)
|
42 |
-
self.spinBox.setObjectName("spinBox")
|
43 |
-
self.horizontalLayout_1.addWidget(self.spinBox)
|
44 |
-
|
45 |
-
self.layoutWidget2 = QtWidgets.QWidget(Form)
|
46 |
-
self.layoutWidget2.setGeometry(QtCore.QRect(580, 60, 200, 30))
|
47 |
-
self.layoutWidget2.setObjectName("layoutWidget2")
|
48 |
-
self.horizontalLayout_2 = QtWidgets.QHBoxLayout(self.layoutWidget2)
|
49 |
-
self.horizontalLayout_2.setContentsMargins(0, 0, 0, 0)
|
50 |
-
self.horizontalLayout_2.setObjectName("horizontalLayout")
|
51 |
-
self.label_3 = QtWidgets.QLabel(self.layoutWidget2)
|
52 |
-
self.label_3.setObjectName("label_3")
|
53 |
-
self.horizontalLayout_2.addWidget(self.label_3)
|
54 |
-
self.comboBox = QtWidgets.QComboBox(self.layoutWidget2)
|
55 |
-
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Preferred, QtWidgets.QSizePolicy.Preferred)
|
56 |
-
sizePolicy.setHorizontalStretch(0)
|
57 |
-
sizePolicy.setVerticalStretch(0)
|
58 |
-
sizePolicy.setHeightForWidth(self.comboBox.sizePolicy().hasHeightForWidth())
|
59 |
-
self.comboBox.setSizePolicy(sizePolicy)
|
60 |
-
self.comboBox.setObjectName("comboBox")
|
61 |
-
self.comboBox.addItem("")
|
62 |
-
self.comboBox.addItem("")
|
63 |
-
self.comboBox.addItem("")
|
64 |
-
self.comboBox.addItem("")
|
65 |
-
self.comboBox.addItem("")
|
66 |
-
self.horizontalLayout_2.addWidget(self.comboBox)
|
67 |
-
|
68 |
-
self.pushButton = QtWidgets.QPushButton(Form)
|
69 |
-
self.pushButton.setGeometry(QtCore.QRect(70, 160, 110, 20))
|
70 |
-
self.pushButton.setObjectName("pushButton_5")
|
71 |
-
self.groupBox = QtWidgets.QGroupBox(Form)
|
72 |
-
self.groupBox.setGeometry(QtCore.QRect(70, 170, 120, 110))
|
73 |
-
self.groupBox.setTitle("")
|
74 |
-
self.groupBox.setObjectName("groupBox")
|
75 |
-
self.radioButton = QtWidgets.QRadioButton(self.groupBox)
|
76 |
-
self.radioButton.setGeometry(QtCore.QRect(10, 20, 96, 20))
|
77 |
-
self.radioButton.setObjectName("radioButton")
|
78 |
-
self.radioButton_2 = QtWidgets.QRadioButton(self.groupBox)
|
79 |
-
self.radioButton_2.setGeometry(QtCore.QRect(10, 50, 96, 20))
|
80 |
-
self.radioButton_2.setObjectName("radioButton_2")
|
81 |
-
self.radioButton_3 = QtWidgets.QRadioButton(self.groupBox)
|
82 |
-
self.radioButton_3.setGeometry(QtCore.QRect(10, 80, 96, 20))
|
83 |
-
self.radioButton_3.setObjectName("radioButton_3")
|
84 |
-
|
85 |
-
self.layoutWidget = QtWidgets.QWidget(Form)
|
86 |
-
self.layoutWidget.setGeometry(QtCore.QRect(70, 320, 111, 291))
|
87 |
-
self.layoutWidget.setObjectName("layoutWidget")
|
88 |
-
self.verticalLayout = QtWidgets.QVBoxLayout(self.layoutWidget)
|
89 |
-
self.verticalLayout.setContentsMargins(0, 0, 0, 0)
|
90 |
-
self.verticalLayout.setObjectName("verticalLayout")
|
91 |
-
self.pushButton_2 = QtWidgets.QPushButton(self.layoutWidget)
|
92 |
-
self.pushButton_2.setObjectName("pushButton_2")
|
93 |
-
self.verticalLayout.addWidget(self.pushButton_2)
|
94 |
-
self.pushButton_3 = QtWidgets.QPushButton(self.layoutWidget)
|
95 |
-
self.pushButton_3.setObjectName("pushButton_3")
|
96 |
-
self.verticalLayout.addWidget(self.pushButton_3)
|
97 |
-
self.pushButton_4 = QtWidgets.QPushButton(self.layoutWidget)
|
98 |
-
self.pushButton_4.setObjectName("pushButton_4")
|
99 |
-
self.verticalLayout.addWidget(self.pushButton_4)
|
100 |
-
self.pushButton_5 = QtWidgets.QPushButton(self.layoutWidget)
|
101 |
-
self.pushButton_5.setObjectName("pushButton_5")
|
102 |
-
self.verticalLayout.addWidget(self.pushButton_5)
|
103 |
-
self.pushButton_6 = QtWidgets.QPushButton(self.layoutWidget)
|
104 |
-
self.pushButton_6.setObjectName("pushButton_6")
|
105 |
-
self.verticalLayout.addWidget(self.pushButton_6)
|
106 |
-
self.pushButton_7 = QtWidgets.QPushButton(self.layoutWidget)
|
107 |
-
self.pushButton_7.setObjectName("pushButton_7")
|
108 |
-
self.verticalLayout.addWidget(self.pushButton_7)
|
109 |
-
|
110 |
-
self.layoutWidget3 = QtWidgets.QWidget(Form)
|
111 |
-
self.layoutWidget3.setGeometry(QtCore.QRect(820, 60, 100, 30))
|
112 |
-
self.layoutWidget3.setObjectName("layoutWidget3")
|
113 |
-
self.horizontalLayout_3 = QtWidgets.QHBoxLayout(self.layoutWidget3)
|
114 |
-
self.horizontalLayout_3.setContentsMargins(0, 0, 0, 0)
|
115 |
-
self.horizontalLayout_3.setObjectName("horizontalLayout2")
|
116 |
-
self.comboBox_2 = QtWidgets.QComboBox(self.layoutWidget3)
|
117 |
-
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Preferred, QtWidgets.QSizePolicy.Preferred)
|
118 |
-
        sizePolicy.setHorizontalStretch(0)
        sizePolicy.setVerticalStretch(0)
        sizePolicy.setHeightForWidth(self.comboBox_2.sizePolicy().hasHeightForWidth())
        self.comboBox_2.setSizePolicy(sizePolicy)
        self.comboBox_2.setObjectName("comboBox")
        self.comboBox_2.addItem("")
        self.comboBox_2.addItem("")
        self.comboBox_2.addItem("")
        self.horizontalLayout_3.addWidget(self.comboBox_2)

        self.stackedWidget = QtWidgets.QStackedWidget(Form)
        self.stackedWidget.setGeometry(QtCore.QRect(250, 100, 1024, 1024))
        self.stackedWidget.setObjectName("stackedWidget")
        self.page_3 = QtWidgets.QWidget()
        self.page_3.setObjectName("page_3")
        self.stackedWidget.addWidget(self.page_3)
        self.page_4 = QtWidgets.QWidget()
        self.page_4.setObjectName("page_4")
        self.stackedWidget.addWidget(self.page_4)

        self.retranslateUi(Form)
        QtCore.QMetaObject.connectSlotsByName(Form)

    def retranslateUi(self, Form):
        _translate = QtCore.QCoreApplication.translate
        Form.setWindowTitle(_translate("Form", " "))
        self.label.setText(_translate("Form", "Image Completion"))
        self.label_2.setText(_translate("Form", "Bush Width:"))
        self.label_3.setText(_translate("Form", "Options:"))
        self.comboBox.setItemText(0, _translate("Form", "None"))
        self.comboBox.setItemText(1, _translate("Form", "CelebA-HQ"))
        self.comboBox.setItemText(2, _translate("Form", "Paris"))
        self.comboBox.setItemText(3, _translate("Form", "ImageNet"))
        self.comboBox.setItemText(4, _translate("Form", "Places2"))
        self.pushButton.setText(_translate("Form", "draw/clear"))
        self.radioButton.setText(_translate("Form", "free-form"))
        self.radioButton_2.setText(_translate("Form", "rectangle"))
        self.radioButton_3.setText(_translate("Form", "center-mask"))
        self.pushButton_2.setText(_translate("Form", "load image"))
        self.pushButton_3.setText(_translate("Form", "random image"))
        self.pushButton_4.setText(_translate("Form", "load mask"))
        self.pushButton_5.setText(_translate("Form", "random mask"))
        self.pushButton_6.setText(_translate("Form", "fill"))
        self.pushButton_7.setText(_translate("Form", "save"))
        self.comboBox_2.setItemText(0, _translate("Form", "Input"))
        self.comboBox_2.setItemText(1, _translate("Form", "Masked"))
        self.comboBox_2.setItemText(2, _translate("Form", "Output"))
spaces/Arafath10/chatcode/app.py
DELETED
@@ -1,273 +0,0 @@
import gradio as gr
import wikipedia
import requests
from bs4 import BeautifulSoup
import pyjokes


def essay_query(payload):
    API_URL = "https://api-inference.huggingface.co/models/facebook/bart-large-cnn"
    data = json.dumps(payload)
    response = requests.request("POST", API_URL, headers=headers, data=data)
    return json.loads(response.content.decode("utf-8"))

def essay(name):
    result_count = 2

    f_result = ""
    result = {"", ""}
    text = ""

    url = "https://www.google.com/search?q=" + name
    r = requests.get(url)

    soup = BeautifulSoup(r.text, "html.parser")

    heading_object = soup.find_all('div')

    for info in heading_object:
        if '<div class="BNeawe s3v9rd AP7Wnd"><div><div><div class="BNeawe s3v9rd AP7Wnd">' in str(info):
            if '›' not in str(info.text):
                result.add(info.text)

    n = 0
    for i in result:
        if n != 0:
            i = i.split("·", 1)
            try:
                i = i[1]
            except:
                i = i[0]
            i = i.split("Duration")
            i = i[0]
            text = text + str(n) + "\t" + i + "\n\n"
        n = n + 1

    if result_count == 1:
        temp = ""
    else:
        for r in text.split("\n\n")[0:-1]:
            if "..." in r:
                r = r.split("...")
                w = essay_query(r[0].replace("\xa0", ""))
                f_result = f_result + (w[0]['summary_text'])
            else:
                #print(r[:-1])
                w = essay_query(r[:-1])
                f_result = f_result + (w[0]['summary_text'])
    return f_result


def code(name):
    name = name.split('learn')[-1]
    name = name.split('start')[-1]
    name = name.split()[0]

    url = "https://www.w3schools.com/" + name + "/" + name + "_syntax.asp"
    r = requests.get(url)
    soup = BeautifulSoup(r.text, "html.parser")

    heading_object = soup.find_all('div')
    result = ""
    for info in heading_object:
        info1 = str(info)
        if '</script>' not in info1 and '<div class="w3-col l10 m12" id="main">' in info1:
            #print(n)
            text = str(info.text).split('Next ❯')[1].split("❮ Previous")[0].split("\n\n\n")
            #print(text)
            for r in text:
                if "Test Yourself With Exercises" in r or "Submit Answer »" in r or "On this page" in r:
                    continue
                else:
                    result = result + r + "\n\n"
    return result


def joke():
    # importing installed library
    My_joke = pyjokes.get_joke(language="en", category="neutral")
    return My_joke


def wiki(name):
    text = name
    text = text.split("the")[-1]
    text = text.split("is a")[-1]
    text = text.split("by")[-1]
    #print(wikipedia.search(text, results=20))
    #print(text)
    out = "try this key words :\n" + str(wikipedia.search(text, results=10)) + "\n\n"
    for i in wikipedia.search(text, results=3):
        try:
            result = wikipedia.summary(i)
            if " " in result.lower():
                #print(result)
                #print()
                out = out + result + "\n"
        except:
            continue
    return out

import openai
openai.api_key = "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"
def aitext(word):
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=word,
        temperature=0.9,
        max_tokens=200,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0.6,
        stop=[" Human:", " AI:"]
    )
    return response.choices[0].text

import json
headers = {"Authorization": f"Bearer {'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'}"}
def sumy(payload):
    API_URL = "https://api-inference.huggingface.co/models/facebook/bart-large-cnn"
    data = json.dumps(payload)
    response = requests.request("POST", API_URL, headers=headers, data=data)
    return json.loads(response.content.decode("utf-8"))


def query(payload):
    API_URL = "https://api-inference.huggingface.co/models/facebook/bart-large-cnn"
    data = json.dumps(payload)
    response = requests.request("POST", API_URL, headers=headers, data=data)
    return json.loads(response.content.decode("utf-8"))

def google(name):
    if "give" in name or "reason" in name or "result" in name or "step" in name:
        result_count = 2
        print(name)
    else:
        result_count = 1

    f_result = ""
    result = {"", ""}
    text = ""

    url = "https://www.google.com/search?q=" + name
    r = requests.get(url)

    soup = BeautifulSoup(r.text, "html.parser")

    heading_object = soup.find_all('div')

    for info in heading_object:
        if '<div class="BNeawe s3v9rd AP7Wnd"><div><div><div class="BNeawe s3v9rd AP7Wnd">' in str(info):
            if '›' not in str(info.text):
                result.add(info.text)

    n = 0
    for i in result:
        if n != 0:
            i = i.split("·", 1)
            try:
                i = i[1]
            except:
                i = i[0]
            i = i.split("Duration")
            i = i[0]
            text = text + str(n) + "\t" + i + "\n\n"
        n = n + 1

    if result_count == 1:
        temp = ""
        for r in text.split("\n\n"):
            temp = temp + r.split("...")[0]
        f_result = sumy({"inputs": temp, "parameters": {"do_sample": False, "max_length": 300}})
        return f_result[0]['summary_text']
    else:
        n = 1
        for r in text.split("\n\n")[2:-2]:
            if len(r) > 10:
                if "..." in r:
                    r = r.split("...")
                    w = query(r[0].replace("\xa0", ""))
                    f_result = f_result + str(n) + "\t" + (w[0]['summary_text']) + "\n\n" + r"\\"
                else:
                    #print(r[:-1])
                    w = query(r[:-1])
                    f_result = f_result + str(n) + "\t" + (w[0]['summary_text']) + "\n\n" + r"\\"
                n = n + 1
        return f_result

from PyDictionary import PyDictionary
def greet(name1):
    name = name1.lower()

    #dictionary=PyDictionary()
    #dic = dictionary.meaning(name)

    #try:
    #    return "Noun :" + str(dic['Noun']) + "\nVerb :" + str(dic['Verb'])
    #except:
    #    return dic

    if "who are you" in name or "what is you" in name or "your name" in name or "who r u" in name:
        return "Im Ai Based Chatbot Created by ssebowa.org"

    if "who developed you" in name or "what is you" in name or "who mad you" in name or "who made you" in name:
        return "ssebowa.org"

    if "tell me a joke" in name or "the joke" in name:
        return joke()

    if "love you" in name or "i love" in name:
        return "me too"
    if "marry me" in name or "marry" in name:
        return "im not intrested"
    if "your age" in name or "what is your age" in name:
        return "Im not a human so i don't have age"
    if "thank u" in name or "thanks" in name or "thank you" in name:
        return "ok welcome ....!"
    if "write the essay" in name or "write essay" in name:
        name = name.split("about")[-1]
        return essay(name)
    if "how to learn" in name or "steps for learning" in name or "step for learning" in name or "steps for" in name or "step for" in name:
        try:
            cresult = code(name)
            return google(name) + "\n\n" + cresult
        except:
            return google(name)
    else:
        return google(name) + ""


iface = gr.Interface(fn=greet, inputs="text", outputs="text")
iface.launch()
spaces/Armored-Atom/Image-To-Motion/style.css
DELETED
@@ -1,19 +0,0 @@
h1 {
  text-align: center;
}
img#overview {
  max-width: 1000px;
  max-height: 600px;
  display: block;
  margin: auto;
}
img#style-image {
  max-width: 1000px;
  max-height: 600px;
  display: block;
  margin: auto;
}
img#visitor-badge {
  display: block;
  margin: auto;
}
spaces/Artrajz/vits-simple-api/bert_vits2/attentions.py
DELETED
@@ -1,352 +0,0 @@
import math
import torch
from torch import nn
from torch.nn import functional as F
from bert_vits2 import commons
from torch.nn.utils import weight_norm, remove_weight_norm


class LayerNorm(nn.Module):
    def __init__(self, channels, eps=1e-5):
        super().__init__()
        self.channels = channels
        self.eps = eps

        self.gamma = nn.Parameter(torch.ones(channels))
        self.beta = nn.Parameter(torch.zeros(channels))

    def forward(self, x):
        x = x.transpose(1, -1)
        x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
        return x.transpose(1, -1)


@torch.jit.script
def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
    n_channels_int = n_channels[0]
    in_act = input_a + input_b
    t_act = torch.tanh(in_act[:, :n_channels_int, :])
    s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
    acts = t_act * s_act
    return acts


class Encoder(nn.Module):
    def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4,
                 isflow=True, **kwargs):
        super().__init__()
        self.hidden_channels = hidden_channels
        self.filter_channels = filter_channels
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.window_size = window_size
        # if isflow:
        #     cond_layer = torch.nn.Conv1d(256, 2 * hidden_channels * n_layers, 1)
        #     self.cond_pre = torch.nn.Conv1d(hidden_channels, 2 * hidden_channels, 1)
        #     self.cond_layer = weight_norm(cond_layer, name='weight')
        #     self.gin_channels = 256
        self.cond_layer_idx = self.n_layers
        if 'gin_channels' in kwargs:
            self.gin_channels = kwargs['gin_channels']
            if self.gin_channels != 0:
                self.spk_emb_linear = nn.Linear(self.gin_channels, self.hidden_channels)
                # vits2 says 3rd block, so idx is 2 by default
                self.cond_layer_idx = kwargs['cond_layer_idx'] if 'cond_layer_idx' in kwargs else 2
                # print(self.gin_channels, self.cond_layer_idx)
                assert self.cond_layer_idx < self.n_layers, 'cond_layer_idx should be less than n_layers'
        self.drop = nn.Dropout(p_dropout)
        self.attn_layers = nn.ModuleList()
        self.norm_layers_1 = nn.ModuleList()
        self.ffn_layers = nn.ModuleList()
        self.norm_layers_2 = nn.ModuleList()
        for i in range(self.n_layers):
            self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout,
                                                       window_size=window_size))
            self.norm_layers_1.append(LayerNorm(hidden_channels))
            self.ffn_layers.append(
                FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
            self.norm_layers_2.append(LayerNorm(hidden_channels))

    def forward(self, x, x_mask, g=None):
        attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
        x = x * x_mask
        for i in range(self.n_layers):
            if i == self.cond_layer_idx and g is not None:
                g = self.spk_emb_linear(g.transpose(1, 2))
                g = g.transpose(1, 2)
                x = x + g
                x = x * x_mask
            y = self.attn_layers[i](x, x, attn_mask)
            y = self.drop(y)
            x = self.norm_layers_1[i](x + y)

            y = self.ffn_layers[i](x, x_mask)
            y = self.drop(y)
            x = self.norm_layers_2[i](x + y)
        x = x * x_mask
        return x


class Decoder(nn.Module):
    def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0.,
                 proximal_bias=False, proximal_init=True, **kwargs):
        super().__init__()
        self.hidden_channels = hidden_channels
        self.filter_channels = filter_channels
        self.n_heads = n_heads
        self.n_layers = n_layers
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.proximal_bias = proximal_bias
        self.proximal_init = proximal_init

        self.drop = nn.Dropout(p_dropout)
        self.self_attn_layers = nn.ModuleList()
        self.norm_layers_0 = nn.ModuleList()
        self.encdec_attn_layers = nn.ModuleList()
        self.norm_layers_1 = nn.ModuleList()
        self.ffn_layers = nn.ModuleList()
        self.norm_layers_2 = nn.ModuleList()
        for i in range(self.n_layers):
            self.self_attn_layers.append(
                MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout,
                                   proximal_bias=proximal_bias, proximal_init=proximal_init))
            self.norm_layers_0.append(LayerNorm(hidden_channels))
            self.encdec_attn_layers.append(
                MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
            self.norm_layers_1.append(LayerNorm(hidden_channels))
            self.ffn_layers.append(
                FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
            self.norm_layers_2.append(LayerNorm(hidden_channels))

    def forward(self, x, x_mask, h, h_mask):
        """
        x: decoder input
        h: encoder output
        """
        self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
        encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
        x = x * x_mask
        for i in range(self.n_layers):
            y = self.self_attn_layers[i](x, x, self_attn_mask)
            y = self.drop(y)
            x = self.norm_layers_0[i](x + y)

            y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
            y = self.drop(y)
            x = self.norm_layers_1[i](x + y)

            y = self.ffn_layers[i](x, x_mask)
            y = self.drop(y)
            x = self.norm_layers_2[i](x + y)
        x = x * x_mask
        return x


class MultiHeadAttention(nn.Module):
    def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True,
                 block_length=None, proximal_bias=False, proximal_init=False):
        super().__init__()
        assert channels % n_heads == 0

        self.channels = channels
        self.out_channels = out_channels
        self.n_heads = n_heads
        self.p_dropout = p_dropout
        self.window_size = window_size
        self.heads_share = heads_share
        self.block_length = block_length
        self.proximal_bias = proximal_bias
        self.proximal_init = proximal_init
        self.attn = None

        self.k_channels = channels // n_heads
        self.conv_q = nn.Conv1d(channels, channels, 1)
        self.conv_k = nn.Conv1d(channels, channels, 1)
        self.conv_v = nn.Conv1d(channels, channels, 1)
        self.conv_o = nn.Conv1d(channels, out_channels, 1)
        self.drop = nn.Dropout(p_dropout)

        if window_size is not None:
            n_heads_rel = 1 if heads_share else n_heads
            rel_stddev = self.k_channels ** -0.5
            self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
            self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)

        nn.init.xavier_uniform_(self.conv_q.weight)
        nn.init.xavier_uniform_(self.conv_k.weight)
        nn.init.xavier_uniform_(self.conv_v.weight)
        if proximal_init:
            with torch.no_grad():
                self.conv_k.weight.copy_(self.conv_q.weight)
                self.conv_k.bias.copy_(self.conv_q.bias)

    def forward(self, x, c, attn_mask=None):
        q = self.conv_q(x)
        k = self.conv_k(c)
        v = self.conv_v(c)

        x, self.attn = self.attention(q, k, v, mask=attn_mask)

        x = self.conv_o(x)
        return x

    def attention(self, query, key, value, mask=None):
        # reshape [b, d, t] -> [b, n_h, t, d_k]
        b, d, t_s, t_t = (*key.size(), query.size(2))
        query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
        key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
        value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)

        scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
        if self.window_size is not None:
            assert t_s == t_t, "Relative attention is only available for self-attention."
            key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
            rel_logits = self._matmul_with_relative_keys(query / math.sqrt(self.k_channels), key_relative_embeddings)
            scores_local = self._relative_position_to_absolute_position(rel_logits)
            scores = scores + scores_local
        if self.proximal_bias:
            assert t_s == t_t, "Proximal bias is only available for self-attention."
            scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, -1e4)
            if self.block_length is not None:
                assert t_s == t_t, "Local attention is only available for self-attention."
                block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
                scores = scores.masked_fill(block_mask == 0, -1e4)
        p_attn = F.softmax(scores, dim=-1)  # [b, n_h, t_t, t_s]
        p_attn = self.drop(p_attn)
        output = torch.matmul(p_attn, value)
        if self.window_size is not None:
            relative_weights = self._absolute_position_to_relative_position(p_attn)
            value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
            output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
        output = output.transpose(2, 3).contiguous().view(b, d, t_t)  # [b, n_h, t_t, d_k] -> [b, d, t_t]
        return output, p_attn

    def _matmul_with_relative_values(self, x, y):
        """
        x: [b, h, l, m]
        y: [h or 1, m, d]
        ret: [b, h, l, d]
        """
        ret = torch.matmul(x, y.unsqueeze(0))
        return ret

    def _matmul_with_relative_keys(self, x, y):
        """
        x: [b, h, l, d]
        y: [h or 1, m, d]
        ret: [b, h, l, m]
        """
        ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
        return ret

    def _get_relative_embeddings(self, relative_embeddings, length):
        max_relative_position = 2 * self.window_size + 1
        # Pad first before slice to avoid using cond ops.
        pad_length = max(length - (self.window_size + 1), 0)
        slice_start_position = max((self.window_size + 1) - length, 0)
        slice_end_position = slice_start_position + 2 * length - 1
        if pad_length > 0:
            padded_relative_embeddings = F.pad(
                relative_embeddings,
                commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
        else:
            padded_relative_embeddings = relative_embeddings
        used_relative_embeddings = padded_relative_embeddings[:, slice_start_position:slice_end_position]
        return used_relative_embeddings

    def _relative_position_to_absolute_position(self, x):
        """
        x: [b, h, l, 2*l-1]
        ret: [b, h, l, l]
        """
        batch, heads, length, _ = x.size()
        # Concat columns of pad to shift from relative to absolute indexing.
        x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))

        # Concat extra elements so to add up to shape (len+1, 2*len-1).
        x_flat = x.view([batch, heads, length * 2 * length])
        x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]))

        # Reshape and slice out the padded elements.
        x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[:, :, :length, length - 1:]
        return x_final

    def _absolute_position_to_relative_position(self, x):
        """
        x: [b, h, l, l]
        ret: [b, h, l, 2*l-1]
        """
        batch, heads, length, _ = x.size()
        # padd along column
        x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]))
        x_flat = x.view([batch, heads, length ** 2 + length * (length - 1)])
        # add 0's in the beginning that will skew the elements after reshape
        x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
        x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
        return x_final

    def _attention_bias_proximal(self, length):
        """Bias for self-attention to encourage attention to close positions.
        Args:
            length: an integer scalar.
        Returns:
            a Tensor with shape [1, 1, length, length]
        """
        r = torch.arange(length, dtype=torch.float32)
        diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
        return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)


class FFN(nn.Module):
    def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None,
                 causal=False):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.filter_channels = filter_channels
        self.kernel_size = kernel_size
        self.p_dropout = p_dropout
        self.activation = activation
        self.causal = causal

        if causal:
            self.padding = self._causal_padding
        else:
            self.padding = self._same_padding

        self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
        self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
        self.drop = nn.Dropout(p_dropout)

    def forward(self, x, x_mask):
        x = self.conv_1(self.padding(x * x_mask))
        if self.activation == "gelu":
            x = x * torch.sigmoid(1.702 * x)
        else:
            x = torch.relu(x)
        x = self.drop(x)
        x = self.conv_2(self.padding(x * x_mask))
        return x * x_mask

    def _causal_padding(self, x):
        if self.kernel_size == 1:
            return x
        pad_l = self.kernel_size - 1
        pad_r = 0
        padding = [[0, 0], [0, 0], [pad_l, pad_r]]
        x = F.pad(x, commons.convert_pad_shape(padding))
        return x

    def _same_padding(self, x):
        if self.kernel_size == 1:
            return x
        pad_l = (self.kernel_size - 1) // 2
        pad_r = self.kernel_size // 2
        padding = [[0, 0], [0, 0], [pad_l, pad_r]]
        x = F.pad(x, commons.convert_pad_shape(padding))
        return x
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/config.py
DELETED
@@ -1,139 +0,0 @@
"""distutils.pypirc

Provides the PyPIRCCommand class, the base class for the command classes
that uses .pypirc in the distutils.command package.
"""
import os
from configparser import RawConfigParser

from distutils.cmd import Command

DEFAULT_PYPIRC = """\
[distutils]
index-servers =
    pypi

[pypi]
username:%s
password:%s
"""


class PyPIRCCommand(Command):
    """Base command that knows how to handle the .pypirc file"""

    DEFAULT_REPOSITORY = 'https://upload.pypi.org/legacy/'
    DEFAULT_REALM = 'pypi'
    repository = None
    realm = None

    user_options = [
        ('repository=', 'r', "url of repository [default: %s]" % DEFAULT_REPOSITORY),
        ('show-response', None, 'display full response text from server'),
    ]

    boolean_options = ['show-response']

    def _get_rc_file(self):
        """Returns rc file path."""
        return os.path.join(os.path.expanduser('~'), '.pypirc')

    def _store_pypirc(self, username, password):
        """Creates a default .pypirc file."""
        rc = self._get_rc_file()
        with os.fdopen(os.open(rc, os.O_CREAT | os.O_WRONLY, 0o600), 'w') as f:
            f.write(DEFAULT_PYPIRC % (username, password))

    def _read_pypirc(self):  # noqa: C901
        """Reads the .pypirc file."""
        rc = self._get_rc_file()
        if os.path.exists(rc):
            self.announce('Using PyPI login from %s' % rc)
            repository = self.repository or self.DEFAULT_REPOSITORY

            config = RawConfigParser()
            config.read(rc)
            sections = config.sections()
            if 'distutils' in sections:
                # let's get the list of servers
                index_servers = config.get('distutils', 'index-servers')
                _servers = [
                    server.strip()
                    for server in index_servers.split('\n')
                    if server.strip() != ''
                ]
                if _servers == []:
                    # nothing set, let's try to get the default pypi
                    if 'pypi' in sections:
                        _servers = ['pypi']
                    else:
                        # the file is not properly defined, returning
                        # an empty dict
                        return {}
                for server in _servers:
                    current = {'server': server}
                    current['username'] = config.get(server, 'username')

                    # optional params
                    for key, default in (
                        ('repository', self.DEFAULT_REPOSITORY),
                        ('realm', self.DEFAULT_REALM),
                        ('password', None),
                    ):
                        if config.has_option(server, key):
                            current[key] = config.get(server, key)
                        else:
                            current[key] = default

                    # work around people having "repository" for the "pypi"
                    # section of their config set to the HTTP (rather than
                    # HTTPS) URL
                    if server == 'pypi' and repository in (
                        self.DEFAULT_REPOSITORY,
                        'pypi',
                    ):
                        current['repository'] = self.DEFAULT_REPOSITORY
                        return current

                    if (
                        current['server'] == repository
                        or current['repository'] == repository
                    ):
                        return current
            elif 'server-login' in sections:
                # old format
                server = 'server-login'
                if config.has_option(server, 'repository'):
                    repository = config.get(server, 'repository')
                else:
                    repository = self.DEFAULT_REPOSITORY
                return {
                    'username': config.get(server, 'username'),
                    'password': config.get(server, 'password'),
                    'repository': repository,
                    'server': server,
                    'realm': self.DEFAULT_REALM,
                }

        return {}

    def _read_pypi_response(self, response):
        """Read and decode a PyPI HTTP response."""
        import cgi

        content_type = response.getheader('content-type', 'text/plain')
        encoding = cgi.parse_header(content_type)[1].get('charset', 'ascii')
        return response.read().decode(encoding)

    def initialize_options(self):
        """Initialize options."""
        self.repository = None
        self.realm = None
        self.show_response = 0

    def finalize_options(self):
        """Finalizes options."""
        if self.repository is None:
            self.repository = self.DEFAULT_REPOSITORY
        if self.realm is None:
            self.realm = self.DEFAULT_REALM
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/datasets/coco.py
DELETED
@@ -1,49 +0,0 @@
-import os
-
-from detectron2.data.datasets.register_coco import register_coco_instances
-from detectron2.data.datasets.coco import load_coco_json
-from detectron2.data.datasets.builtin_meta import _get_builtin_metadata
-from detectron2.data import DatasetCatalog, MetadataCatalog
-
-
-def register_distill_coco_instances(name, metadata, json_file, image_root):
-    """
-    add extra_annotation_keys
-    """
-    assert isinstance(name, str), name
-    assert isinstance(json_file, (str, os.PathLike)), json_file
-    assert isinstance(image_root, (str, os.PathLike)), image_root
-    # 1. register a function which returns dicts
-    DatasetCatalog.register(name, lambda: load_coco_json(
-        json_file, image_root, name, extra_annotation_keys=['score']))
-
-    # 2. Optionally, add metadata about this dataset,
-    # since they might be useful in evaluation, visualization or logging
-    MetadataCatalog.get(name).set(
-        json_file=json_file, image_root=image_root, evaluator_type="coco", **metadata
-    )
-
-
-_PREDEFINED_SPLITS_COCO = {
-    "coco_2017_unlabeled": ("coco/unlabeled2017", "coco/annotations/image_info_unlabeled2017.json"),
-}
-
-for key, (image_root, json_file) in _PREDEFINED_SPLITS_COCO.items():
-    register_coco_instances(
-        key,
-        _get_builtin_metadata('coco'),
-        os.path.join("datasets", json_file) if "://" not in json_file else json_file,
-        os.path.join("datasets", image_root),
-    )
-
-_PREDEFINED_SPLITS_DISTILL_COCO = {
-    "coco_un_yolov4_55_0.5": ("coco/unlabeled2017", "coco/annotations/yolov4_cocounlabeled_55_ann0.5.json"),
-}
-
-for key, (image_root, json_file) in _PREDEFINED_SPLITS_DISTILL_COCO.items():
-    register_distill_coco_instances(
-        key,
-        _get_builtin_metadata('coco'),
-        os.path.join("datasets", json_file) if "://" not in json_file else json_file,
-        os.path.join("datasets", image_root),
-    )
spaces/BHD/google-pix2struct-screen2words-base/app.py
DELETED
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/google/pix2struct-screen2words-base").launch()
spaces/Banbri/zcvzcv/src/app/queries/predictWithHuggingFace.ts
DELETED
@@ -1,95 +0,0 @@
-"use server"
-
-import { HfInference, HfInferenceEndpoint } from "@huggingface/inference"
-import { LLMEngine } from "@/types"
-
-export async function predict(inputs: string): Promise<string> {
-  const hf = new HfInference(process.env.AUTH_HF_API_TOKEN)
-
-  const llmEngine = `${process.env.LLM_ENGINE || ""}` as LLMEngine
-  const inferenceEndpoint = `${process.env.LLM_HF_INFERENCE_ENDPOINT_URL || ""}`
-  const inferenceModel = `${process.env.LLM_HF_INFERENCE_API_MODEL || ""}`
-
-  let hfie: HfInferenceEndpoint = hf
-
-  switch (llmEngine) {
-    case "INFERENCE_ENDPOINT":
-      if (inferenceEndpoint) {
-        // console.log("Using a custom HF Inference Endpoint")
-        hfie = hf.endpoint(inferenceEndpoint)
-      } else {
-        const error = "No Inference Endpoint URL defined"
-        console.error(error)
-        throw new Error(error)
-      }
-      break;
-
-    case "INFERENCE_API":
-      if (inferenceModel) {
-        // console.log("Using an HF Inference API Model")
-      } else {
-        const error = "No Inference API model defined"
-        console.error(error)
-        throw new Error(error)
-      }
-      break;
-
-    default:
-      const error = "Please check your Hugging Face Inference API or Inference Endpoint settings"
-      console.error(error)
-      throw new Error(error)
-  }
-
-  const api = llmEngine === "INFERENCE_ENDPOINT" ? hfie : hf
-
-  let instructions = ""
-  try {
-    for await (const output of api.textGenerationStream({
-      model: llmEngine === "INFERENCE_ENDPOINT" ? undefined : (inferenceModel || undefined),
-      inputs,
-      parameters: {
-        do_sample: true,
-        // we don't require a lot of token for our task
-        // but to be safe, let's count ~110 tokens per panel
-        max_new_tokens: 450, // 1150,
-        return_full_text: false,
-      }
-    })) {
-      instructions += output.token.text
-      process.stdout.write(output.token.text)
-      if (
-        instructions.includes("</s>") ||
-        instructions.includes("<s>") ||
-        instructions.includes("[INST]") ||
-        instructions.includes("[/INST]") ||
-        instructions.includes("<SYS>") ||
-        instructions.includes("</SYS>") ||
-        instructions.includes("<|end|>") ||
-        instructions.includes("<|assistant|>")
-      ) {
-        break
-      }
-    }
-  } catch (err) {
-    console.error(`error during generation: ${err}`)
-
-    // a common issue with Llama-2 might be that the model receives too many requests
-    if (`${err}` === "Error: Model is overloaded") {
-      instructions = ``
-    }
-  }
-
-  // need to do some cleanup of the garbage the LLM might have gave us
-  return (
-    instructions
-      .replaceAll("<|end|>", "")
-      .replaceAll("<s>", "")
-      .replaceAll("</s>", "")
-      .replaceAll("[INST]", "")
-      .replaceAll("[/INST]", "")
-      .replaceAll("<SYS>", "")
-      .replaceAll("</SYS>", "")
-      .replaceAll("<|assistant|>", "")
-      .replaceAll('""', '"')
-  )
-}
spaces/Bart92/RVC_HF/demucs/audio.py
DELETED
@@ -1,172 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-import json
-import subprocess as sp
-from pathlib import Path
-
-import julius
-import numpy as np
-import torch
-
-from .utils import temp_filenames
-
-
-def _read_info(path):
-    stdout_data = sp.check_output([
-        'ffprobe', "-loglevel", "panic",
-        str(path), '-print_format', 'json', '-show_format', '-show_streams'
-    ])
-    return json.loads(stdout_data.decode('utf-8'))
-
-
-class AudioFile:
-    """
-    Allows to read audio from any format supported by ffmpeg, as well as resampling or
-    converting to mono on the fly. See :method:`read` for more details.
-    """
-    def __init__(self, path: Path):
-        self.path = Path(path)
-        self._info = None
-
-    def __repr__(self):
-        features = [("path", self.path)]
-        features.append(("samplerate", self.samplerate()))
-        features.append(("channels", self.channels()))
-        features.append(("streams", len(self)))
-        features_str = ", ".join(f"{name}={value}" for name, value in features)
-        return f"AudioFile({features_str})"
-
-    @property
-    def info(self):
-        if self._info is None:
-            self._info = _read_info(self.path)
-        return self._info
-
-    @property
-    def duration(self):
-        return float(self.info['format']['duration'])
-
-    @property
-    def _audio_streams(self):
-        return [
-            index for index, stream in enumerate(self.info["streams"])
-            if stream["codec_type"] == "audio"
-        ]
-
-    def __len__(self):
-        return len(self._audio_streams)
-
-    def channels(self, stream=0):
-        return int(self.info['streams'][self._audio_streams[stream]]['channels'])
-
-    def samplerate(self, stream=0):
-        return int(self.info['streams'][self._audio_streams[stream]]['sample_rate'])
-
-    def read(self,
-             seek_time=None,
-             duration=None,
-             streams=slice(None),
-             samplerate=None,
-             channels=None,
-             temp_folder=None):
-        """
-        Slightly more efficient implementation than stempeg,
-        in particular, this will extract all stems at once
-        rather than having to loop over one file multiple times
-        for each stream.
-
-        Args:
-            seek_time (float): seek time in seconds or None if no seeking is needed.
-            duration (float): duration in seconds to extract or None to extract until the end.
-            streams (slice, int or list): streams to extract, can be a single int, a list or
-                a slice. If it is a slice or list, the output will be of size [S, C, T]
-                with S the number of streams, C the number of channels and T the number of samples.
-                If it is an int, the output will be [C, T].
-            samplerate (int): if provided, will resample on the fly. If None, no resampling will
-                be done. Original sampling rate can be obtained with :method:`samplerate`.
-            channels (int): if 1, will convert to mono. We do not rely on ffmpeg for that
-                as ffmpeg automatically scale by +3dB to conserve volume when playing on speakers.
-                See https://sound.stackexchange.com/a/42710.
-                Our definition of mono is simply the average of the two channels. Any other
-                value will be ignored.
-            temp_folder (str or Path or None): temporary folder to use for decoding.
-
-
-        """
-        streams = np.array(range(len(self)))[streams]
-        single = not isinstance(streams, np.ndarray)
-        if single:
-            streams = [streams]
-
-        if duration is None:
-            target_size = None
-            query_duration = None
-        else:
-            target_size = int((samplerate or self.samplerate()) * duration)
-            query_duration = float((target_size + 1) / (samplerate or self.samplerate()))
-
-        with temp_filenames(len(streams)) as filenames:
-            command = ['ffmpeg', '-y']
-            command += ['-loglevel', 'panic']
-            if seek_time:
-                command += ['-ss', str(seek_time)]
-            command += ['-i', str(self.path)]
-            for stream, filename in zip(streams, filenames):
-                command += ['-map', f'0:{self._audio_streams[stream]}']
-                if query_duration is not None:
-                    command += ['-t', str(query_duration)]
-                command += ['-threads', '1']
-                command += ['-f', 'f32le']
-                if samplerate is not None:
-                    command += ['-ar', str(samplerate)]
-                command += [filename]
-
-            sp.run(command, check=True)
-            wavs = []
-            for filename in filenames:
-                wav = np.fromfile(filename, dtype=np.float32)
-                wav = torch.from_numpy(wav)
-                wav = wav.view(-1, self.channels()).t()
-                if channels is not None:
-                    wav = convert_audio_channels(wav, channels)
-                if target_size is not None:
-                    wav = wav[..., :target_size]
-                wavs.append(wav)
-        wav = torch.stack(wavs, dim=0)
-        if single:
-            wav = wav[0]
-        return wav
-
-
-def convert_audio_channels(wav, channels=2):
-    """Convert audio to the given number of channels."""
-    *shape, src_channels, length = wav.shape
-    if src_channels == channels:
-        pass
-    elif channels == 1:
-        # Case 1:
-        # The caller asked 1-channel audio, but the stream have multiple
-        # channels, downmix all channels.
-        wav = wav.mean(dim=-2, keepdim=True)
-    elif src_channels == 1:
-        # Case 2:
-        # The caller asked for multiple channels, but the input file have
-        # one single channel, replicate the audio over all channels.
-        wav = wav.expand(*shape, channels, length)
-    elif src_channels >= channels:
-        # Case 3:
-        # The caller asked for multiple channels, and the input file have
-        # more channels than requested. In that case return the first channels.
-        wav = wav[..., :channels, :]
-    else:
-        # Case 4: What is a reasonable choice here?
-        raise ValueError('The audio file has less channels than requested but is not mono.')
-    return wav
-
-
-def convert_audio(wav, from_samplerate, to_samplerate, channels):
-    wav = convert_audio_channels(wav, channels)
-    return julius.resample_frac(wav, from_samplerate, to_samplerate)
spaces/Bart92/RVC_HF/infer/lib/train/data_utils.py
DELETED
@@ -1,517 +0,0 @@
-import os
-import traceback
-import logging
-
-logger = logging.getLogger(__name__)
-
-import numpy as np
-import torch
-import torch.utils.data
-
-from infer.lib.train.mel_processing import spectrogram_torch
-from infer.lib.train.utils import load_filepaths_and_text, load_wav_to_torch
-
-
-class TextAudioLoaderMultiNSFsid(torch.utils.data.Dataset):
-    """
-    1) loads audio, text pairs
-    2) normalizes text and converts them to sequences of integers
-    3) computes spectrograms from audio files.
-    """
-
-    def __init__(self, audiopaths_and_text, hparams):
-        self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text)
-        self.max_wav_value = hparams.max_wav_value
-        self.sampling_rate = hparams.sampling_rate
-        self.filter_length = hparams.filter_length
-        self.hop_length = hparams.hop_length
-        self.win_length = hparams.win_length
-        self.sampling_rate = hparams.sampling_rate
-        self.min_text_len = getattr(hparams, "min_text_len", 1)
-        self.max_text_len = getattr(hparams, "max_text_len", 5000)
-        self._filter()
-
-    def _filter(self):
-        """
-        Filter text & store spec lengths
-        """
-        # Store spectrogram lengths for Bucketing
-        # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
-        # spec_length = wav_length // hop_length
-        audiopaths_and_text_new = []
-        lengths = []
-        for audiopath, text, pitch, pitchf, dv in self.audiopaths_and_text:
-            if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
-                audiopaths_and_text_new.append([audiopath, text, pitch, pitchf, dv])
-                lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length))
-        self.audiopaths_and_text = audiopaths_and_text_new
-        self.lengths = lengths
-
-    def get_sid(self, sid):
-        sid = torch.LongTensor([int(sid)])
-        return sid
-
-    def get_audio_text_pair(self, audiopath_and_text):
-        # separate filename and text
-        file = audiopath_and_text[0]
-        phone = audiopath_and_text[1]
-        pitch = audiopath_and_text[2]
-        pitchf = audiopath_and_text[3]
-        dv = audiopath_and_text[4]
-
-        phone, pitch, pitchf = self.get_labels(phone, pitch, pitchf)
-        spec, wav = self.get_audio(file)
-        dv = self.get_sid(dv)
-
-        len_phone = phone.size()[0]
-        len_spec = spec.size()[-1]
-        # print(123,phone.shape,pitch.shape,spec.shape)
-        if len_phone != len_spec:
-            len_min = min(len_phone, len_spec)
-            # amor
-            len_wav = len_min * self.hop_length
-
-            spec = spec[:, :len_min]
-            wav = wav[:, :len_wav]
-
-            phone = phone[:len_min, :]
-            pitch = pitch[:len_min]
-            pitchf = pitchf[:len_min]
-
-        return (spec, wav, phone, pitch, pitchf, dv)
-
-    def get_labels(self, phone, pitch, pitchf):
-        phone = np.load(phone)
-        phone = np.repeat(phone, 2, axis=0)
-        pitch = np.load(pitch)
-        pitchf = np.load(pitchf)
-        n_num = min(phone.shape[0], 900)  # DistributedBucketSampler
-        # print(234,phone.shape,pitch.shape)
-        phone = phone[:n_num, :]
-        pitch = pitch[:n_num]
-        pitchf = pitchf[:n_num]
-        phone = torch.FloatTensor(phone)
-        pitch = torch.LongTensor(pitch)
-        pitchf = torch.FloatTensor(pitchf)
-        return phone, pitch, pitchf
-
-    def get_audio(self, filename):
-        audio, sampling_rate = load_wav_to_torch(filename)
-        if sampling_rate != self.sampling_rate:
-            raise ValueError(
-                "{} SR doesn't match target {} SR".format(
-                    sampling_rate, self.sampling_rate
-                )
-            )
-        audio_norm = audio
-        # audio_norm = audio / self.max_wav_value
-        # audio_norm = audio / np.abs(audio).max()
-
-        audio_norm = audio_norm.unsqueeze(0)
-        spec_filename = filename.replace(".wav", ".spec.pt")
-        if os.path.exists(spec_filename):
-            try:
-                spec = torch.load(spec_filename)
-            except:
-                logger.warn("%s %s", spec_filename, traceback.format_exc())
-                spec = spectrogram_torch(
-                    audio_norm,
-                    self.filter_length,
-                    self.sampling_rate,
-                    self.hop_length,
-                    self.win_length,
-                    center=False,
-                )
-                spec = torch.squeeze(spec, 0)
-                torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
-        else:
-            spec = spectrogram_torch(
-                audio_norm,
-                self.filter_length,
-                self.sampling_rate,
-                self.hop_length,
-                self.win_length,
-                center=False,
-            )
-            spec = torch.squeeze(spec, 0)
-            torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
-        return spec, audio_norm
-
-    def __getitem__(self, index):
-        return self.get_audio_text_pair(self.audiopaths_and_text[index])
-
-    def __len__(self):
-        return len(self.audiopaths_and_text)
-
-
-class TextAudioCollateMultiNSFsid:
-    """Zero-pads model inputs and targets"""
-
-    def __init__(self, return_ids=False):
-        self.return_ids = return_ids
-
-    def __call__(self, batch):
-        """Collate's training batch from normalized text and aduio
-        PARAMS
-        ------
-        batch: [text_normalized, spec_normalized, wav_normalized]
-        """
-        # Right zero-pad all one-hot text sequences to max input length
-        _, ids_sorted_decreasing = torch.sort(
-            torch.LongTensor([x[0].size(1) for x in batch]), dim=0, descending=True
-        )
-
-        max_spec_len = max([x[0].size(1) for x in batch])
-        max_wave_len = max([x[1].size(1) for x in batch])
-        spec_lengths = torch.LongTensor(len(batch))
-        wave_lengths = torch.LongTensor(len(batch))
-        spec_padded = torch.FloatTensor(len(batch), batch[0][0].size(0), max_spec_len)
-        wave_padded = torch.FloatTensor(len(batch), 1, max_wave_len)
-        spec_padded.zero_()
-        wave_padded.zero_()
-
-        max_phone_len = max([x[2].size(0) for x in batch])
-        phone_lengths = torch.LongTensor(len(batch))
-        phone_padded = torch.FloatTensor(
-            len(batch), max_phone_len, batch[0][2].shape[1]
-        )  # (spec, wav, phone, pitch)
-        pitch_padded = torch.LongTensor(len(batch), max_phone_len)
-        pitchf_padded = torch.FloatTensor(len(batch), max_phone_len)
-        phone_padded.zero_()
-        pitch_padded.zero_()
-        pitchf_padded.zero_()
-        # dv = torch.FloatTensor(len(batch), 256)#gin=256
-        sid = torch.LongTensor(len(batch))
-
-        for i in range(len(ids_sorted_decreasing)):
-            row = batch[ids_sorted_decreasing[i]]
-
-            spec = row[0]
-            spec_padded[i, :, : spec.size(1)] = spec
-            spec_lengths[i] = spec.size(1)
-
-            wave = row[1]
-            wave_padded[i, :, : wave.size(1)] = wave
-            wave_lengths[i] = wave.size(1)
-
-            phone = row[2]
-            phone_padded[i, : phone.size(0), :] = phone
-            phone_lengths[i] = phone.size(0)
-
-            pitch = row[3]
-            pitch_padded[i, : pitch.size(0)] = pitch
-            pitchf = row[4]
-            pitchf_padded[i, : pitchf.size(0)] = pitchf
-
-            # dv[i] = row[5]
-            sid[i] = row[5]
-
-        return (
-            phone_padded,
-            phone_lengths,
-            pitch_padded,
-            pitchf_padded,
-            spec_padded,
-            spec_lengths,
-            wave_padded,
-            wave_lengths,
-            # dv
-            sid,
-        )
-
-
-class TextAudioLoader(torch.utils.data.Dataset):
-    """
-    1) loads audio, text pairs
-    2) normalizes text and converts them to sequences of integers
-    3) computes spectrograms from audio files.
-    """
-
-    def __init__(self, audiopaths_and_text, hparams):
-        self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text)
-        self.max_wav_value = hparams.max_wav_value
-        self.sampling_rate = hparams.sampling_rate
-        self.filter_length = hparams.filter_length
-        self.hop_length = hparams.hop_length
-        self.win_length = hparams.win_length
-        self.sampling_rate = hparams.sampling_rate
-        self.min_text_len = getattr(hparams, "min_text_len", 1)
-        self.max_text_len = getattr(hparams, "max_text_len", 5000)
-        self._filter()
-
-    def _filter(self):
-        """
-        Filter text & store spec lengths
-        """
-        # Store spectrogram lengths for Bucketing
-        # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
-        # spec_length = wav_length // hop_length
-        audiopaths_and_text_new = []
-        lengths = []
-        for audiopath, text, dv in self.audiopaths_and_text:
-            if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
-                audiopaths_and_text_new.append([audiopath, text, dv])
-                lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length))
-        self.audiopaths_and_text = audiopaths_and_text_new
-        self.lengths = lengths
-
-    def get_sid(self, sid):
-        sid = torch.LongTensor([int(sid)])
-        return sid
-
-    def get_audio_text_pair(self, audiopath_and_text):
-        # separate filename and text
-        file = audiopath_and_text[0]
-        phone = audiopath_and_text[1]
-        dv = audiopath_and_text[2]
-
-        phone = self.get_labels(phone)
-        spec, wav = self.get_audio(file)
-        dv = self.get_sid(dv)
-
-        len_phone = phone.size()[0]
-        len_spec = spec.size()[-1]
-        if len_phone != len_spec:
-            len_min = min(len_phone, len_spec)
-            len_wav = len_min * self.hop_length
-            spec = spec[:, :len_min]
-            wav = wav[:, :len_wav]
-            phone = phone[:len_min, :]
-        return (spec, wav, phone, dv)
-
-    def get_labels(self, phone):
-        phone = np.load(phone)
-        phone = np.repeat(phone, 2, axis=0)
-        n_num = min(phone.shape[0], 900)  # DistributedBucketSampler
-        phone = phone[:n_num, :]
-        phone = torch.FloatTensor(phone)
-        return phone
-
-    def get_audio(self, filename):
-        audio, sampling_rate = load_wav_to_torch(filename)
-        if sampling_rate != self.sampling_rate:
-            raise ValueError(
-                "{} SR doesn't match target {} SR".format(
-                    sampling_rate, self.sampling_rate
-                )
-            )
-        audio_norm = audio
-        # audio_norm = audio / self.max_wav_value
-        # audio_norm = audio / np.abs(audio).max()
-
-        audio_norm = audio_norm.unsqueeze(0)
-        spec_filename = filename.replace(".wav", ".spec.pt")
-        if os.path.exists(spec_filename):
-            try:
-                spec = torch.load(spec_filename)
-            except:
-                logger.warn("%s %s", spec_filename, traceback.format_exc())
-                spec = spectrogram_torch(
-                    audio_norm,
-                    self.filter_length,
-                    self.sampling_rate,
-                    self.hop_length,
-                    self.win_length,
-                    center=False,
-                )
-                spec = torch.squeeze(spec, 0)
-                torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
-        else:
-            spec = spectrogram_torch(
-                audio_norm,
-                self.filter_length,
-                self.sampling_rate,
-                self.hop_length,
-                self.win_length,
-                center=False,
-            )
-            spec = torch.squeeze(spec, 0)
-            torch.save(spec, spec_filename, _use_new_zipfile_serialization=False)
-        return spec, audio_norm
-
-    def __getitem__(self, index):
-        return self.get_audio_text_pair(self.audiopaths_and_text[index])
-
-    def __len__(self):
-        return len(self.audiopaths_and_text)
-
-
-class TextAudioCollate:
-    """Zero-pads model inputs and targets"""
-
-    def __init__(self, return_ids=False):
-        self.return_ids = return_ids
-
-    def __call__(self, batch):
-        """Collate's training batch from normalized text and aduio
-        PARAMS
-        ------
-        batch: [text_normalized, spec_normalized, wav_normalized]
-        """
-        # Right zero-pad all one-hot text sequences to max input length
-        _, ids_sorted_decreasing = torch.sort(
-            torch.LongTensor([x[0].size(1) for x in batch]), dim=0, descending=True
-        )
-
-        max_spec_len = max([x[0].size(1) for x in batch])
-        max_wave_len = max([x[1].size(1) for x in batch])
-        spec_lengths = torch.LongTensor(len(batch))
-        wave_lengths = torch.LongTensor(len(batch))
-        spec_padded = torch.FloatTensor(len(batch), batch[0][0].size(0), max_spec_len)
-        wave_padded = torch.FloatTensor(len(batch), 1, max_wave_len)
-        spec_padded.zero_()
-        wave_padded.zero_()
-
-        max_phone_len = max([x[2].size(0) for x in batch])
-        phone_lengths = torch.LongTensor(len(batch))
-        phone_padded = torch.FloatTensor(
-            len(batch), max_phone_len, batch[0][2].shape[1]
-        )
-        phone_padded.zero_()
-        sid = torch.LongTensor(len(batch))
-
-        for i in range(len(ids_sorted_decreasing)):
-            row = batch[ids_sorted_decreasing[i]]
-
-            spec = row[0]
-            spec_padded[i, :, : spec.size(1)] = spec
-            spec_lengths[i] = spec.size(1)
-
-            wave = row[1]
-            wave_padded[i, :, : wave.size(1)] = wave
-            wave_lengths[i] = wave.size(1)
-
-            phone = row[2]
-            phone_padded[i, : phone.size(0), :] = phone
-            phone_lengths[i] = phone.size(0)
-
-            sid[i] = row[3]
-
-        return (
-            phone_padded,
-            phone_lengths,
-            spec_padded,
-            spec_lengths,
-            wave_padded,
-            wave_lengths,
-            sid,
-        )
-
-
-class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler):
-    """
-    Maintain similar input lengths in a batch.
-    Length groups are specified by boundaries.
-    Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}.
-
-    It removes samples which are not included in the boundaries.
-    Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded.
-    """
-
-    def __init__(
-        self,
-        dataset,
-        batch_size,
-        boundaries,
-        num_replicas=None,
-        rank=None,
-        shuffle=True,
-    ):
-        super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
-        self.lengths = dataset.lengths
-        self.batch_size = batch_size
-        self.boundaries = boundaries
-
-        self.buckets, self.num_samples_per_bucket = self._create_buckets()
-        self.total_size = sum(self.num_samples_per_bucket)
-        self.num_samples = self.total_size // self.num_replicas
-
-    def _create_buckets(self):
-        buckets = [[] for _ in range(len(self.boundaries) - 1)]
-        for i in range(len(self.lengths)):
-            length = self.lengths[i]
-            idx_bucket = self._bisect(length)
-            if idx_bucket != -1:
-                buckets[idx_bucket].append(i)
-
-        for i in range(len(buckets) - 1, -1, -1):  #
-            if len(buckets[i]) == 0:
-                buckets.pop(i)
-                self.boundaries.pop(i + 1)
-
-        num_samples_per_bucket = []
-        for i in range(len(buckets)):
-            len_bucket = len(buckets[i])
-            total_batch_size = self.num_replicas * self.batch_size
-            rem = (
-                total_batch_size - (len_bucket % total_batch_size)
-            ) % total_batch_size
-            num_samples_per_bucket.append(len_bucket + rem)
-        return buckets, num_samples_per_bucket
-
-    def __iter__(self):
-        # deterministically shuffle based on epoch
-        g = torch.Generator()
-        g.manual_seed(self.epoch)
-
-        indices = []
-        if self.shuffle:
-            for bucket in self.buckets:
-                indices.append(torch.randperm(len(bucket), generator=g).tolist())
-        else:
-            for bucket in self.buckets:
-                indices.append(list(range(len(bucket))))
-
-        batches = []
-        for i in range(len(self.buckets)):
-            bucket = self.buckets[i]
-            len_bucket = len(bucket)
-            ids_bucket = indices[i]
-            num_samples_bucket = self.num_samples_per_bucket[i]
-
-            # add extra samples to make it evenly divisible
-            rem = num_samples_bucket - len_bucket
-            ids_bucket = (
-                ids_bucket
-                + ids_bucket * (rem // len_bucket)
-                + ids_bucket[: (rem % len_bucket)]
-            )
-
-            # subsample
-            ids_bucket = ids_bucket[self.rank :: self.num_replicas]
-
-            # batching
-            for j in range(len(ids_bucket) // self.batch_size):
-                batch = [
-                    bucket[idx]
|
487 |
-
for idx in ids_bucket[
|
488 |
-
j * self.batch_size : (j + 1) * self.batch_size
|
489 |
-
]
|
490 |
-
]
|
491 |
-
batches.append(batch)
|
492 |
-
|
493 |
-
if self.shuffle:
|
494 |
-
batch_ids = torch.randperm(len(batches), generator=g).tolist()
|
495 |
-
batches = [batches[i] for i in batch_ids]
|
496 |
-
self.batches = batches
|
497 |
-
|
498 |
-
assert len(self.batches) * self.batch_size == self.num_samples
|
499 |
-
return iter(self.batches)
|
500 |
-
|
501 |
-
def _bisect(self, x, lo=0, hi=None):
|
502 |
-
if hi is None:
|
503 |
-
hi = len(self.boundaries) - 1
|
504 |
-
|
505 |
-
if hi > lo:
|
506 |
-
mid = (hi + lo) // 2
|
507 |
-
if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]:
|
508 |
-
return mid
|
509 |
-
elif x <= self.boundaries[mid]:
|
510 |
-
return self._bisect(x, lo, mid)
|
511 |
-
else:
|
512 |
-
return self._bisect(x, mid + 1, hi)
|
513 |
-
else:
|
514 |
-
return -1
|
515 |
-
|
516 |
-
def __len__(self):
|
517 |
-
return self.num_samples // self.batch_size
|
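The deleted sampler's recursive `_bisect` plus `_create_buckets` implement a standard boundary search: a sample of length L lands in bucket i when `boundaries[i] < L <= boundaries[i + 1]`, and samples outside the outer boundaries are discarded. A minimal standalone sketch of that bucketing rule (the `assign_buckets` name is hypothetical, and the stdlib `bisect` module stands in for the hand-rolled recursion):

```python
import bisect

def assign_buckets(lengths, boundaries):
    """Group sample indices into length buckets.

    A sample of length L lands in bucket i when
    boundaries[i] < L <= boundaries[i + 1]; lengths outside
    (boundaries[0], boundaries[-1]] are discarded, mirroring
    the sampler's docstring.
    """
    buckets = [[] for _ in range(len(boundaries) - 1)]
    for idx, length in enumerate(lengths):
        # bisect_left returns the first boundary >= length, so
        # pos - 1 is the half-open interval (b[pos-1], b[pos]].
        pos = bisect.bisect_left(boundaries, length)
        if 0 < pos < len(boundaries):
            buckets[pos - 1].append(idx)
    return buckets

print(assign_buckets([50, 120, 121, 300, 1000], [100, 200, 500]))
# → [[1, 2], [3]]
```

With boundaries `[100, 200, 500]`, lengths 120 and 121 fall in the first bucket, 300 in the second, and the out-of-range lengths 50 and 1000 are dropped, matching the behavior of the removed `_create_buckets`/`_bisect` pair.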
spaces/Benjov/Demo-IR/app.py DELETED
@@ -1,389 +0,0 @@
-#--------------------------------------------------------------------
-# DEPENDENCIAS
-#--------------------------------------------------------------------
-import os
-from io import StringIO
-import requests
-import gradio as gr
-import pandas as pd
-import numpy as np
-import openai
-import tiktoken
-#import streamlit as st
-from openai.embeddings_utils import get_embedding, cosine_similarity
-#from langchain.document_loaders import PyPDFLoader
-#from langchain.text_splitter import CharacterTextSplitter
-#from PyPDF2 import PdfReader, PdfFileReader
-from langchain.embeddings import OpenAIEmbeddings, HuggingFaceInstructEmbeddings
-from langchain.vectorstores import FAISS
-from langchain.chat_models import ChatOpenAI
-from langchain.memory import ConversationBufferMemory
-from langchain.chains import ConversationalRetrievalChain
-from langchain.llms import OpenAI, HuggingFaceHub
-from langchain.chains.question_answering import load_qa_chain
-#from htmlTemplates import css, bot_template, user_template
-import json
-import ast
-#from langchain.schema.vectorstore import Document
-from langchain.schema import Document
-#import fitz  # PyMuPDF
-#import pytesseract
-#from PIL import Image
-#from io import BytesIO
-#import cv2
-import gspread
-from oauth2client.service_account import ServiceAccountCredentials
-from datetime import datetime
-
-#--------------------------------------------------------------------
-# LLAVES
-#--------------------------------------------------------------------
-openai.api_key = os.getenv("OPENAI_API_KEY")
-api_key = os.getenv("OPENAI_API_KEY")
-token = os.getenv("token")
-headers = { 'Authorization': f'token {token}',
-            'Accept': 'application/vnd.github.v3.raw' }
-
-# Establece las credenciales y la API
-credentials = os.getenv( "credentials" )
-credentials = json.loads( credentials )
-gc = gspread.service_account_from_dict( credentials )
-Google_URL = os.getenv( "Google_Sheet" )
-
-
-#--------------------------------------------------------------------
-# CARGAR CSV EMBEDDINGS
-#--------------------------------------------------------------------
-#
-url_tomos_conf_DPR = os.getenv("url_tomos_conf_DPR")
-response_tomos_conf_DPR = requests.get( url_tomos_conf_DPR, headers = headers )
-csv_content_tomos_conf_DPR = response_tomos_conf_DPR.text
-tomos_conf_DPR = pd.read_csv(StringIO( csv_content_tomos_conf_DPR ))
-
-#
-url_tomos_conf_cita = os.getenv("url_tomos_conf_cita")
-response_tomos_conf_cita = requests.get( url_tomos_conf_cita, headers = headers )
-csv_content_tomos_conf_cita = response_tomos_conf_cita.text
-tomos_conf_cita = pd.read_csv(StringIO( csv_content_tomos_conf_cita ))
-
-#
-url_df_tomos_1a28_01 = os.getenv("url_df_tomos_1a28_01")
-response_df_tomos_1a28_01 = requests.get( url_df_tomos_1a28_01, headers = headers )
-csv_content_df_tomos_1a28_01 = response_df_tomos_1a28_01.text
-df_tomos_1a28_01 = pd.read_csv(StringIO( csv_content_df_tomos_1a28_01 ))
-
-#
-url_df_tomos_1a28_02 = os.getenv("url_df_tomos_1a28_02")
-response_df_tomos_1a28_02 = requests.get( url_df_tomos_1a28_02, headers = headers )
-csv_content_df_tomos_1a28_02 = response_df_tomos_1a28_02.text
-df_tomos_1a28_02 = pd.read_csv(StringIO( csv_content_df_tomos_1a28_02 ))
-
-#
-url_df_tomos_1a28_03 = os.getenv("url_df_tomos_1a28_03")
-response_df_tomos_1a28_03 = requests.get( url_df_tomos_1a28_03, headers = headers )
-csv_content_df_tomos_1a28_03 = response_df_tomos_1a28_03.text
-df_tomos_1a28_03 = pd.read_csv(StringIO( csv_content_df_tomos_1a28_03 ))
-
-#
-url_df_tomos_1a28_04 = os.getenv("url_df_tomos_1a28_04")
-response_df_tomos_1a28_04 = requests.get( url_df_tomos_1a28_04, headers = headers )
-csv_content_df_tomos_1a28_04 = response_df_tomos_1a28_04.text
-df_tomos_1a28_04 = pd.read_csv(StringIO( csv_content_df_tomos_1a28_04 ))
-
-#
-url_df_tomos_1a28_05 = os.getenv("url_df_tomos_1a28_05")
-response_df_tomos_1a28_05 = requests.get( url_df_tomos_1a28_05, headers = headers )
-csv_content_df_tomos_1a28_05 = response_df_tomos_1a28_05.text
-df_tomos_1a28_05 = pd.read_csv(StringIO( csv_content_df_tomos_1a28_05 ))
-
-#
-url_df_tomos_1a28_06 = os.getenv("url_df_tomos_1a28_06")
-response_df_tomos_1a28_06 = requests.get( url_df_tomos_1a28_06, headers = headers )
-csv_content_df_tomos_1a28_06 = response_df_tomos_1a28_06.text
-df_tomos_1a28_06 = pd.read_csv(StringIO( csv_content_df_tomos_1a28_06 ))
-
-#
-url_df_tomos_1a28_07 = os.getenv("url_df_tomos_1a28_07")
-response_df_tomos_1a28_07 = requests.get( url_df_tomos_1a28_07, headers = headers )
-csv_content_df_tomos_1a28_07 = response_df_tomos_1a28_07.text
-df_tomos_1a28_07 = pd.read_csv(StringIO( csv_content_df_tomos_1a28_07 ))
-
-#
-url_df_tomos_1a28_08 = os.getenv("url_df_tomos_1a28_08")
-response_df_tomos_1a28_08 = requests.get( url_df_tomos_1a28_08, headers = headers )
-csv_content_df_tomos_1a28_08 = response_df_tomos_1a28_08.text
-df_tomos_1a28_08 = pd.read_csv(StringIO( csv_content_df_tomos_1a28_08 ))
-
-#
-url_df_tomos_1a28_09 = os.getenv("url_df_tomos_1a28_09")
-response_df_tomos_1a28_09 = requests.get( url_df_tomos_1a28_09, headers = headers )
-csv_content_df_tomos_1a28_09 = response_df_tomos_1a28_09.text
-df_tomos_1a28_09 = pd.read_csv(StringIO( csv_content_df_tomos_1a28_09 ))
-
-#
-df_tomos_1a28 = pd.concat([df_tomos_1a28_01, df_tomos_1a28_02], ignore_index = True)
-df_tomos_1a28 = pd.concat([df_tomos_1a28, df_tomos_1a28_03], ignore_index = True)
-df_tomos_1a28 = pd.concat([df_tomos_1a28, df_tomos_1a28_04], ignore_index = True)
-df_tomos_1a28 = pd.concat([df_tomos_1a28, df_tomos_1a28_05], ignore_index = True)
-df_tomos_1a28 = pd.concat([df_tomos_1a28, df_tomos_1a28_06], ignore_index = True)
-df_tomos_1a28 = pd.concat([df_tomos_1a28, df_tomos_1a28_07], ignore_index = True)
-df_tomos_1a28 = pd.concat([df_tomos_1a28, df_tomos_1a28_08], ignore_index = True)
-df_tomos_1a28 = pd.concat([df_tomos_1a28, df_tomos_1a28_09], ignore_index = True)
-
-#
-url_tercer_req = os.getenv("url_tercer_req")
-response_tercer_req = requests.get( url_tercer_req, headers = headers )
-csv_content_tercer_req = response_tercer_req.text
-tercer_req = pd.read_csv(StringIO( csv_content_tercer_req ))
-
-#
-url_seg_req = os.getenv("url_seg_req")
-response_seg_req = requests.get( url_seg_req, headers = headers )
-csv_content_seg_req = response_seg_req.text
-seg_req = pd.read_csv(StringIO( csv_content_seg_req ))
-
-#
-url_primer_req = os.getenv("url_primer_req")
-response_primer_req = requests.get( url_primer_req, headers = headers )
-csv_content_primer_req = response_primer_req.text
-primer_req = pd.read_csv(StringIO( csv_content_primer_req ))
-
-#
-url_primer1_req = os.getenv("url_primer1_req")
-response_primer1_req = requests.get( url_primer1_req, headers = headers )
-csv_content_primer1_req = response_primer1_req.text
-primer1_req = pd.read_csv(StringIO( csv_content_primer1_req ))
-primer1_req["Folder"] = "I. PRIMER REQUERIMIENTO (139)/2. Desahogo Reiteracion 1 139"
-
-#
-url_primer2_req = os.getenv("url_primer2_req")
-response_primer2_req = requests.get( url_primer2_req, headers = headers )
-csv_content_primer2_req = response_primer2_req.text
-primer2_req = pd.read_csv(StringIO( csv_content_primer2_req ))
-primer2_req["Folder"] = "I. PRIMER REQUERIMIENTO (139)/1. Desahogo RFI 139"
-
-#---------------------------------------------------------------------------------------------------------------
-# UUUUPS LA COLUMNA EMBEDDINGS NO LA RECONOCE COSINESIMILARITY.. [tomos_conf_DPR, tomos_conf_cita]
-#---------------------------------------------------------------------------------------------------------------
-def clean_and_parse_embedding(embedding_str):
-    # Extract the part between square brackets
-    embedding_str = embedding_str.split('[')[-1].split(']')[0]
-    # Now, you should have a clean string representation of the list
-    embedding_list = ast.literal_eval(embedding_str)
-    return [float(val) for val in embedding_list]
-
-tomos_conf_DPR['Embedding'] = tomos_conf_DPR['Embedding'].apply(clean_and_parse_embedding)
-tomos_conf_cita['Embedding'] = tomos_conf_cita['Embedding'].apply(clean_and_parse_embedding)
-tercer_req['Embedding'] = tercer_req['Embedding'].apply(clean_and_parse_embedding)
-seg_req['Embedding'] = seg_req['Embedding'].apply(clean_and_parse_embedding)
-primer_req['Embedding'] = primer_req['Embedding'].apply(clean_and_parse_embedding)
-primer1_req['Embedding'] = primer1_req['Embedding'].apply(clean_and_parse_embedding)
-primer2_req['Embedding'] = primer2_req['Embedding'].apply(clean_and_parse_embedding)
-
-#---------------------------------------------------------------------------------------------------------------
-# UUUUPS LA COLUMNA EMBEDDINGS NO LA RECONOCE COSINESIMILARITY.. [df_tomos_1a28]
-#---------------------------------------------------------------------------------------------------------------
-def parse_embedding(embedding_str):
-    embedding_list = ast.literal_eval(embedding_str)
-    return [float(val) for val in embedding_list]
-
-df_tomos_1a28['Embedding'] = df_tomos_1a28['Embedding'].apply(parse_embedding)
-
-#---------------------------------------------------------------------------------------------------------------
-# LISTA DE DF
-#---------------------------------------------------------------------------------------------------------------
-list_of_dfs = [tomos_conf_DPR, tomos_conf_cita, df_tomos_1a28, tercer_req, seg_req, primer_req, primer1_req, primer2_req]
-
-#--------------------------------------------------------------------
-# HACEMOS UNA PREGUNTA Y RANKEA CHUNKS
-#--------------------------------------------------------------------
-def buscar(busqueda, lista_de_datos):
-    resultados = []  # Create an empty list to store individual DataFrame results
-    busqueda_embed = get_embedding(busqueda, engine="text-embedding-ada-002")
-
-    for datos in lista_de_datos:
-        datos["similitud"] = datos['Embedding'].apply(lambda x: cosine_similarity(x, busqueda_embed))
-        datos = datos.sort_values("similitud", ascending=False)
-        resultados.append(datos[['PDFName', 'PageNumber', 'similitud', "PageText", "Folder"]])
-
-    # Concatenate all individual DataFrames into a single DataFrame
-    combined_result = pd.concat(resultados).sort_values("similitud", ascending=False).head(20)
-    return combined_result
-
-#--------------------------------------------------------------------
-# rank for ai
-#--------------------------------------------------------------------
-def buscar_ai(busqueda, lista_de_datos):
-    resultados = []  # Create an empty list to store individual DataFrame results
-    busqueda_embed = get_embedding(busqueda, engine="text-embedding-ada-002")
-
-    for datos in lista_de_datos:
-        datos["similitud"] = datos['Embedding'].apply(lambda x: cosine_similarity(x, busqueda_embed))
-        datos = datos.sort_values("similitud", ascending=False)
-        resultados.append(datos[['PDFName', 'PageNumber', 'similitud', "PageText", "Folder"]])
-
-    # Concatenate all individual DataFrames into a single DataFrame
-    combined_result = pd.concat(resultados).sort_values("similitud", ascending=False).head(10)
-    return combined_result
-
-#--------------------------------------------------------------------
-# saque n extraactos de ""
-#--------------------------------------------------------------------
-def count_text_extracted(pregunta):
-    df = buscar(pregunta, list_of_dfs)
-    pdf_counts = df.groupby(['Folder', 'PDFName'])['PageNumber'].count().reset_index()
-
-    output_string = ""
-    for idx, row in pdf_counts.iterrows():
-        folder_name = row['Folder']
-        pdf_name = row['PDFName']
-        count = row['PageNumber']
-        page_numbers = df[(df['PDFName'] == pdf_name) & (df['Folder'] == folder_name)]['PageNumber'].tolist()
-        page_numbers_str = ', '.join(map(str, page_numbers))
-        output_string += f"Usé el archivo '{pdf_name}' del folder '{folder_name}' {count} (vez/veces) al extraer el texto de las páginas {page_numbers_str}.\n\n"
-
-    return output_string
-
-#--------------------------------------------------------------------
-# file: texto
-#--------------------------------------------------------------------
-
-def print_pdf_info(pregunta):
-    df = buscar(pregunta, list_of_dfs)
-
-    output_string = ""  # Initialize an empty string to accumulate the output
-
-    for _, row in df.iterrows():
-        pdf_name = row['PDFName']
-        page_number = row['PageNumber']
-        page_text = row['PageText']
-
-        # Split page_text into lines and add a tab to each line
-        indented_page_text = '\n'.join(['\t' + line for line in page_text.split('\n')])
-
-        # Append the formatted output to the output string
-        output_string += f'De "{pdf_name}":\n \tPágina {page_number}:\n\t {indented_page_text}\n'
-
-    return output_string
-
-#--------------------------------------------------------------------
-# vector -> document
-#-------------------------------------------------------------------
-def vector_document(dataframe):
-    string_vectors = dataframe["PageText"]
-    documents = [Document(page_content=content, metadata={'id': i}) for i, content in enumerate(string_vectors)]
-    return documents
-
-#--------------------------------------------------------------------
-# AI QUESTION
-#-------------------------------------------------------------------
-def info_pdf(pregunta):
-    df = buscar(pregunta, list_of_dfs)
-
-    output_list = []  # Initialize an empty list to store the output
-
-    for _, row in df.iterrows():
-        pdf_name = row['PDFName']
-        page_number = row['PageNumber']
-        page_text = row['PageText']
-
-        # Split page_text into lines and add a tab to each line
-        indented_page_text = '\n'.join(['\t' + line for line in page_text.split('\n')])
-
-        # Append the formatted output to the output list
-        output_list.append(f'De "{pdf_name}": Página {page_number}: {indented_page_text}')
-
-    return output_list
-
-def get_completion_from_messages( messages, model = "gpt-3.5-turbo-16k",
-                                  temperature = 0, max_tokens = 4500 ):  ##Check max_tokens
-    response = openai.ChatCompletion.create(
-        model = model,
-        messages = messages,
-        temperature = temperature,
-        max_tokens = max_tokens,
-    )
-    return response.choices[0].message["content"]
-
-def get_topic( user_message ):
-    #
-    delimiter = "####"
-    system_message = f"""
-    Eres un abogado que trabaja en temas de competencia económica e investiga casos en México.
-    Siempre intenarás responder en el mayor número posible de palabras.
-    Las consultas o preguntas se delimitarán con los caracteres {delimiter}
-    """
-    #
-    messages = [
-        {'role':'system',
-         'content': system_message},
-        {'role':'user',
-         'content': f"{delimiter}{user_message}{delimiter}"},
-    ]
-    return get_completion_from_messages( messages )
-
-def get_respuesta( user_message, informacion):
-    #
-    delimiter = "####"
-    system_message = f"""
-    Eres un abogado que trabaja en temas de competencia económica e investiga casos en México.
-    Siempre intenarás responder en el mayor número posible de palabras.
-    Las consultas o preguntas se delimitarán con los caracteres {delimiter}
-
-    """
-    #
-    messages = [
-        {'role':'system',
-         'content': system_message},
-        {'role':'user',
-         'content': f"""
-            {delimiter}
-            Estás intentando recopilar información relevante para tu caso.
-            Usa exclusivamente la información contenida en la siguiente lista:
-            {informacion}
-
-            para responder sin límite de palabras lo siguiente: {user_message}
-            Responde de forma detallada.
-            {delimiter}
-            """},
-    ]
-    #
-    return get_completion_from_messages(messages)
-
-def update_records( user_message ):
-    #
-    sht = gc.open_by_url(Google_URL)
-    #
-    sht.worksheet("Hoja 2").get_all_records()
-    #
-    sht.worksheet("Hoja 2").update_cell( len( sht.worksheet("Hoja 2").get_all_records()[:] ) + 2 ,
-                                         1 , datetime.now().strftime("%m/%d/%Y, %H:%M:%S") )
-    #
-    sht.worksheet("Hoja 2").update_cell( len( sht.worksheet("Hoja 2").get_all_records()[:] ) + 1 ,
-                                         2 , user_message )
-
-def chat(user_message_1):
-    #
-    norma_y_tema_response_1 = get_topic(user_message_1)
-    norma_y_tema_response_1 += 'Todos'
-    uno = buscar_ai(user_message_1, list_of_dfs)
-    lista_info = uno['PageText'].tolist()
-    #
-    # Save Question and date time
-    update_records( user_message_1 )
-    #
-    return get_respuesta(user_message_1, lista_info)
-
-# Modify your existing code
-with gr.Blocks() as demo:
-    txt = gr.Textbox(label="Texto", lines=2)
-    btn = gr.Button(value="Listo")
-    txt_2 = gr.Textbox(value="", label="Donde (top 20):")
-    txt_3 = gr.Textbox(value="", label="Extractos (top 20):")
-    txt_1 = gr.Textbox(value="", label="Respuesta IA:")
-    btn.click(chat, inputs=[txt], outputs=[txt_1])
-    btn.click(count_text_extracted, inputs=[txt], outputs=[txt_2])
-    btn.click(print_pdf_info, inputs=[txt], outputs=[txt_3])
-
-if __name__ == "__main__":
-    demo.launch(share=True)
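The deleted `buscar`/`buscar_ai` helpers score every page chunk against the query embedding with cosine similarity and keep the top hits. A dependency-free sketch of that ranking step (the names `rank_chunks` and the sample data are hypothetical; plain lists stand in for the DataFrames and for `openai.embeddings_utils`):

```python
from math import sqrt

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|), same measure the app applies per row
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_chunks(query_emb, chunks, top_n=2):
    """chunks: (text, embedding) pairs; return the top_n most similar texts."""
    scored = [(text, cosine_similarity(emb, query_emb)) for text, emb in chunks]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [text for text, _ in scored[:top_n]]

chunks = [
    ("page A", [1.0, 0.0]),
    ("page B", [0.6, 0.8]),
    ("page C", [0.0, 1.0]),
]
print(rank_chunks([1.0, 0.2], chunks))  # → ['page A', 'page B']
```

The app does the same thing per DataFrame, then concatenates and re-sorts the partial results, keeping `head(20)` for display and `head(10)` for the model prompt.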
spaces/Benson/text-generation/Examples/Alfabeto Huevo Granja Inactivo Magnate Mod Apk Dinero Ilimitado Y Gemas.md DELETED
@@ -1,62 +0,0 @@
-
-<br>
-<tabla>
-<tr>
-<td>
-<h1>Alfabeto de la granja de huevos: Idle Tycoon Mod APK dinero ilimitado y gemas</h1>
-<h2>Introducción</h2>
-<p>¿Te encantan los juegos de agricultura y los juegos de magnates? Si es así, entonces te encantará Alphabet Egg Farm: Idle Tycoon. Este es un juego divertido y adictivo donde puedes crear tu propia granja de huevos y convertirte en un multimillonario. Usted puede incubar diferentes tipos de pollos, recoger los huevos, venderlos con fines de lucro, y mejorar su granja con varios edificios y decoraciones. También puedes desbloquear nuevas letras y palabras a medida que avanzas en el juego. Este juego es adecuado para todas las edades y tiene controles simples e intuitivos. Puedes jugar este juego sin conexión a Internet y disfrutar del relajante ambiente de la granja. </p>
-<p>Sin embargo, si desea hacer su juego más emocionante y gratificante, es posible que desee probar Alphabet Egg Farm: Idle Tycoon Mod APK Unlimited Money and Gems. Esta es una versión modificada del juego original que le da dinero ilimitado y gemas para gastar en su granja. Con este mod APK, puedes comprar cualquier cosa que quieras sin preocuparte por el costo. También puedes desbloquear todas las letras y palabras más rápido y más fácil. Usted puede disfrutar del juego sin ningún tipo de anuncios o limitaciones. Este mod APK hará que su juego más divertido y satisfactorio. </p>
-<h2>alfabeto huevo granja inactivo magnate mod apk dinero ilimitado y gemas</h2><br /><p><b><b>Download Zip</b> ✔ <a href="https://bltlly.com/2v6IT3">https://bltlly.com/2v6IT3</a></b></p><br /><br />
-<h2>Cómo descargar e instalar Alfabeto Egg Farm: Idle Tycoon Mod APK dinero ilimitado y gemas</h2>
-<p>Si está interesado en descargar e instalar Alfabeto Egg Farm: Idle Tycoon Mod APK Unlimited Money and Gems, puede seguir estos sencillos pasos:</p>
-<ol>
-<li>Encontrar el archivo APK mod de una fuente de confianza en Internet. Puede buscar en Google o utilizar el enlace de abajo para descargarlo directamente. </li>
-<li>Antes de instalar el archivo APK mod, es necesario habilitar fuentes desconocidas en la configuración del dispositivo. Para hacer esto, vaya a Configuración > Seguridad > Fuentes desconocidas y conéctelo. </li>
-
-<li>Una vez que se hace la instalación, puede iniciar el juego desde el cajón de la aplicación o la pantalla de inicio y disfrutar de Alfabeto Egg Farm: Idle Tycoon Mod APK Unlimited Money and Gems.</li>
-</ol>
-<h2>Cómo jugar Alfabeto Egg Farm: Idle Tycoon Mod APK dinero ilimitado y gemas</h2>
-<p>Jugando Alfabeto Egg Farm: Idle Tycoon Mod APK dinero ilimitado y gemas es muy fácil y agradable. Aquí hay algunos consejos sobre cómo jugar el juego:</p>
-<ul>
-<li>Para iniciar su propia granja de huevos, es necesario comprar algunos pollos de la tienda. Puede elegir entre diferentes tipos de pollos, como A-pollo, B-pollo, C-pollo, etc. Cada pollo tiene su propio precio, tasa de producción de huevos y valor de la carta. </li>
-<li>Para actualizar sus pollos y edificios, es necesario gastar dinero y gemas. Puedes ganar dinero vendiendo huevos en el mercado o tocando los huevos que caen del cielo. Puedes ganar gemas completando logros o viendo videos. </li>
-<li>Para ganar más dinero y gemas, puede utilizar las características de mod de Alfabeto Egg Farm: Idle Tycoon Mod APK Unlimited Money and Gems. Puede acceder al menú mod pulsando en el icono en la esquina superior derecha de la pantalla. A partir de ahí, puede habilitar o desactivar varias opciones, como dinero ilimitado, gemas ilimitadas, huevos de venta automática, huevos de recolección automática, etc.</li>
-<li>Para desbloquear nuevas letras y palabras, es necesario recoger suficientes huevos de cada tipo de letra. Por ejemplo, para desbloquear la letra B, es necesario recoger 100 huevos de pollo B. Para desbloquear la palabra BEBÉ, es necesario recoger 100 huevos de pollo B, pollo A, pollo B y pollo Y. Puede ver su progreso en el libro de cartas en la parte inferior de la pantalla. </li>
-</ul>
-<h2>Pros y contras de Alphabet Egg Farm: Idle Tycoon Mod APK dinero ilimitado y gemas</h2>
-<p>Como cualquier otra aplicación modded, Alfabeto Egg Farm: Idle Tycoon Mod APK Unlimited Money and Gems has its own pros and cons. Estos son algunos de ellos:</p>
-<tabla>
-<tr><th>Pros</th><th>Contras</th></tr>
-
-<tr><td>- Puede desbloquear todas las letras y palabras más rápido y más fácil. </td><td>- Es posible que encuentre algunos errores o fallos en el juego debido a las características modificadas. </td></tr>
-<tr><td>- Puedes jugar el juego sin ningún tipo de anuncios o interrupciones. </td><td>- Es posible que te prohíban acceder a funciones o tablas de clasificación en línea si los desarrolladores del juego te detectan. </td></tr>
-<tr><td>- Puedes jugar el juego sin conexión a Internet. </td><td>- Es posible que se pierda algunas actualizaciones o nuevas características del juego original. </td></tr> </td>
-</tr>
-</tabla>
-<h2>Conclusión</h2>
-<p>Alphabet Egg Farm: Idle Tycoon es un juego divertido y adictivo que te permite crear tu propia granja de huevos y convertirte en un multimillonario. Usted puede incubar diferentes tipos de pollos, recoger los huevos, venderlos con fines de lucro, y mejorar su granja con varios edificios y decoraciones. También puedes desbloquear nuevas letras y palabras a medida que avanzas en el juego. Este juego es adecuado para todas las edades y tiene controles simples e intuitivos. Puedes jugar este juego sin conexión a Internet y disfrutar del relajante ambiente de la granja. </p>
-<p>Si quieres hacer tu juego más emocionante y gratificante, puedes probar Alphabet Egg Farm: Idle Tycoon Mod APK Unlimited Money and Gems. Esta es una versión modificada del juego original que le da dinero ilimitado y gemas para gastar en su granja. Con este mod APK, puedes comprar cualquier cosa que quieras sin preocuparte por el costo. También puedes desbloquear todas las letras y palabras más rápido y más fácil. Usted puede disfrutar del juego sin ningún tipo de anuncios o limitaciones. Este mod APK hará que su juego más divertido y satisfactorio. </p>
-
-<p>Si usted está interesado en probar Alfabeto Egg Farm: Idle Tycoon Mod APK dinero ilimitado y gemas, se puede descargar desde el enlace de abajo. También puede visitar el sitio web oficial o la página del desarrollador en Google Play Store para obtener más información sobre el juego y el mod APK. También puedes ver algunos comentarios y videos sobre el juego y el mod APK en YouTube u otras plataformas. </p>
-<p>Esperamos que haya disfrutado de este artículo y lo encontró útil. Si lo hizo, por favor compartirlo con sus amigos y familiares que también podrían gustar este juego y este mod APK. También, no dude en dejar un comentario a continuación y háganos saber lo que piensa acerca de Alphabet Egg Farm: Idle Tycoon Mod APK Unlimited Money and Gems. ¡Nos encantaría saber de ti! </p>
-<h2>Preguntas frecuentes</h2>
-<p>Aquí hay algunas preguntas frecuentes sobre Alfabeto Egg Farm: Idle Tycoon Mod APK Unlimited Money and Gems:</p>
-<p></p>
-<ol>
-<li><b>Es el alfabeto de la granja de huevos: Idle Tycoon Mod APK dinero ilimitado y gemas seguro de usar? </b></li>
-<li>Sí, es seguro de usar siempre y cuando lo descargue de una fuente de confianza. Sin embargo, siempre debe tener cuidado al instalar aplicaciones modificadas en su dispositivo. </li>
-<li><b>¿Tengo que rootear mi dispositivo para usar Alfabeto Egg Farm: Idle Tycoon Mod APK Unlimited Money and Gems? </b></li>
|
50 |
-
<li> No, no es necesario rootear el dispositivo para usar este mod APK. Solo tiene que habilitar fuentes desconocidas en la configuración del dispositivo antes de instalarlo. </li>
|
51 |
-
<li><b>¿Puedo jugar Alfabeto Egg Farm: Idle Tycoon Mod APK dinero ilimitado y gemas en línea con otros jugadores? </b></li>
|
52 |
-
<li> No, no puede jugar este mod APK en línea con otros jugadores. Este mod APK es solo para el modo sin conexión. Todavía se puede disfrutar del juego sin conexión a Internet. </li>
|
53 |
-
<li><b>¿Cómo puedo actualizar Alfabeto Egg Farm: Idle Tycoon Mod APK Unlimited Money and Gems a la última versión? </b></li>
|
54 |
-
|
55 |
-
<li><b>¿Dónde puedo obtener más información sobre Alphabet Egg Farm: Idle Tycoon Mod APK Unlimited Money and Gems? </b></li>
|
56 |
-
<li>Usted puede obtener más información acerca de este mod APK visitando su sitio web oficial o la página de su desarrollador en Google Play Store. También puedes ver algunos comentarios y videos sobre este mod APK en YouTube u otras plataformas. </li>
|
57 |
-
</ol>
|
58 |
-
</td>
|
59 |
-
</tr>
|
60 |
-
</tabla></p> 64aa2da5cf<br />
spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/vendored/requests/packages/urllib3/exceptions.py
DELETED
@@ -1,169 +0,0 @@

## Base Exceptions

class HTTPError(Exception):
    "Base exception used by this module."
    pass

class HTTPWarning(Warning):
    "Base warning used by this module."
    pass


class PoolError(HTTPError):
    "Base exception for errors caused within a pool."
    def __init__(self, pool, message):
        self.pool = pool
        HTTPError.__init__(self, "%s: %s" % (pool, message))

    def __reduce__(self):
        # For pickling purposes.
        return self.__class__, (None, None)


class RequestError(PoolError):
    "Base exception for PoolErrors that have associated URLs."
    def __init__(self, pool, url, message):
        self.url = url
        PoolError.__init__(self, pool, message)

    def __reduce__(self):
        # For pickling purposes.
        return self.__class__, (None, self.url, None)


class SSLError(HTTPError):
    "Raised when SSL certificate fails in an HTTPS connection."
    pass


class ProxyError(HTTPError):
    "Raised when the connection to a proxy fails."
    pass


class DecodeError(HTTPError):
    "Raised when automatic decoding based on Content-Type fails."
    pass


class ProtocolError(HTTPError):
    "Raised when something unexpected happens mid-request/response."
    pass


#: Renamed to ProtocolError but aliased for backwards compatibility.
ConnectionError = ProtocolError


## Leaf Exceptions

class MaxRetryError(RequestError):
    """Raised when the maximum number of retries is exceeded.

    :param pool: The connection pool
    :type pool: :class:`~urllib3.connectionpool.HTTPConnectionPool`
    :param string url: The requested Url
    :param exceptions.Exception reason: The underlying error

    """

    def __init__(self, pool, url, reason=None):
        self.reason = reason

        message = "Max retries exceeded with url: %s (Caused by %r)" % (
            url, reason)

        RequestError.__init__(self, pool, url, message)


class HostChangedError(RequestError):
    "Raised when an existing pool gets a request for a foreign host."

    def __init__(self, pool, url, retries=3):
        message = "Tried to open a foreign host with url: %s" % url
        RequestError.__init__(self, pool, url, message)
        self.retries = retries


class TimeoutStateError(HTTPError):
    """ Raised when passing an invalid state to a timeout """
    pass


class TimeoutError(HTTPError):
    """ Raised when a socket timeout error occurs.

    Catching this error will catch both :exc:`ReadTimeoutErrors
    <ReadTimeoutError>` and :exc:`ConnectTimeoutErrors <ConnectTimeoutError>`.
    """
    pass


class ReadTimeoutError(TimeoutError, RequestError):
    "Raised when a socket timeout occurs while receiving data from a server"
    pass


# This timeout error does not have a URL attached and needs to inherit from the
# base HTTPError
class ConnectTimeoutError(TimeoutError):
    "Raised when a socket timeout occurs while connecting to a server"
    pass


class EmptyPoolError(PoolError):
    "Raised when a pool runs out of connections and no more are allowed."
    pass


class ClosedPoolError(PoolError):
    "Raised when a request enters a pool after the pool has been closed."
    pass


class LocationValueError(ValueError, HTTPError):
    "Raised when there is something wrong with a given URL input."
    pass


class LocationParseError(LocationValueError):
    "Raised when get_host or similar fails to parse the URL input."

    def __init__(self, location):
        message = "Failed to parse: %s" % location
        HTTPError.__init__(self, message)

        self.location = location


class ResponseError(HTTPError):
    "Used as a container for an error reason supplied in a MaxRetryError."
    GENERIC_ERROR = 'too many error responses'
    SPECIFIC_ERROR = 'too many {status_code} error responses'


class SecurityWarning(HTTPWarning):
    "Warned when performing security reducing actions"
    pass


class InsecureRequestWarning(SecurityWarning):
    "Warned when making an unverified HTTPS request."
    pass


class SystemTimeWarning(SecurityWarning):
    "Warned when system time is suspected to be wrong"
    pass


class InsecurePlatformWarning(SecurityWarning):
    "Warned when certain SSL configuration is not available on a platform."
    pass


class ResponseNotChunked(ProtocolError, ValueError):
    "Response needs to be chunked in order to read it as chunks."
    pass
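The `__reduce__` overrides in the deleted module exist so that pool-bound exceptions survive pickling even though the live connection pool they reference is not picklable. A minimal, self-contained sketch of the same pattern (not the vendored code itself, just the idiom it uses):

```python
import pickle


class PoolError(Exception):
    """Mirrors the vendored pattern: the pool reference is dropped on pickling."""

    def __init__(self, pool, message):
        self.pool = pool
        super().__init__("%s: %s" % (pool, message))

    def __reduce__(self):
        # Re-create the exception without the (unpicklable) pool object.
        return self.__class__, (None, "connection pool error")


err = PoolError(object(), "connection pool error")
restored = pickle.loads(pickle.dumps(err))
print(restored.pool)  # None: the live pool was not serialized
```

The trade-off is that round-tripped instances lose their original arguments, which is acceptable here because the message is only informational.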
spaces/BramVanroy/text-to-amr/utils.py
DELETED
@@ -1,105 +0,0 @@

from typing import Tuple, Union, Dict, List

from multi_amr.data.postprocessing_graph import ParsedStatus
from multi_amr.data.tokenization import AMRTokenizerWrapper
from optimum.bettertransformer import BetterTransformer
import penman
import streamlit as st
import torch
from torch.quantization import quantize_dynamic
from torch import nn, qint8
from transformers import MBartForConditionalGeneration, AutoConfig


@st.cache_resource(show_spinner=False)
def get_resources(multilingual: bool, src_lang: str, quantize: bool = True, no_cuda: bool = False) -> Tuple[MBartForConditionalGeneration, AMRTokenizerWrapper]:
    """Get the relevant model, tokenizer and logits_processor. The loaded model depends on whether the multilingual
    model is requested, or not. If not, an English-only model is loaded. The model can be optionally quantized
    for better performance.

    :param multilingual: whether to load the multilingual model or not
    :param src_lang: source language
    :param quantize: whether to quantize the model with PyTorch's 'quantize_dynamic'
    :param no_cuda: whether to disable CUDA, even if it is available
    :return: the loaded model, and tokenizer wrapper
    """
    model_name = "BramVanroy/mbart-large-cc25-ft-amr30-en_es_nl"
    if not multilingual:
        if src_lang == "English":
            model_name = "BramVanroy/mbart-large-cc25-ft-amr30-en"
        elif src_lang == "Spanish":
            model_name = "BramVanroy/mbart-large-cc25-ft-amr30-es"
        elif src_lang == "Dutch":
            model_name = "BramVanroy/mbart-large-cc25-ft-amr30-nl"
        else:
            raise ValueError(f"Language {src_lang} not supported")

    # Tokenizer src_lang is reset during translation to the right language
    tok_wrapper = AMRTokenizerWrapper.from_pretrained(model_name, src_lang="en_XX")

    config = AutoConfig.from_pretrained(model_name)
    config.decoder_start_token_id = tok_wrapper.amr_token_id

    model = MBartForConditionalGeneration.from_pretrained(model_name, config=config)
    model.eval()

    embedding_size = model.get_input_embeddings().weight.shape[0]
    if len(tok_wrapper.tokenizer) > embedding_size:
        model.resize_token_embeddings(len(tok_wrapper.tokenizer))

    model = BetterTransformer.transform(model, keep_original_model=False)

    if torch.cuda.is_available() and not no_cuda:
        model = model.to("cuda")
    elif quantize:  # Quantization not supported on CUDA
        model = quantize_dynamic(model, {nn.Linear, nn.Dropout, nn.LayerNorm}, dtype=qint8)

    return model, tok_wrapper


def translate(texts: List[str], src_lang: str, model: MBartForConditionalGeneration, tok_wrapper: AMRTokenizerWrapper, **gen_kwargs) -> Dict[str, List[Union[penman.Graph, ParsedStatus]]]:
    """Translates a given text of a given source language with a given model and tokenizer. The generation is guided by
    potential keyword-arguments, which can include arguments such as max length, logits processors, etc.

    :param texts: source text to translate (potentially a batch)
    :param src_lang: source language
    :param model: MBART model
    :param tok_wrapper: MBART tokenizer wrapper
    :param gen_kwargs: potential keyword arguments for the generation process
    :return: the translation (linearized AMR graph)
    """
    if isinstance(texts, str):
        texts = [texts]

    tok_wrapper.src_lang = LANGUAGES[src_lang]
    encoded = tok_wrapper(texts, return_tensors="pt").to(model.device)
    with torch.no_grad():
        generated = model.generate(**encoded, output_scores=True, return_dict_in_generate=True, **gen_kwargs)

    generated["sequences"] = generated["sequences"].cpu()
    generated["sequences_scores"] = generated["sequences_scores"].cpu()
    best_scoring_results = {"graph": [], "status": []}
    beam_size = gen_kwargs["num_beams"]

    # Select the best item from the beam: the sequence with best status and highest score
    for sample_idx in range(0, len(generated["sequences_scores"]), beam_size):
        sequences = generated["sequences"][sample_idx: sample_idx + beam_size]
        scores = generated["sequences_scores"][sample_idx: sample_idx + beam_size].tolist()
        outputs = tok_wrapper.batch_decode_amr_ids(sequences)
        statuses = outputs["status"]
        graphs = outputs["graph"]
        zipped = zip(statuses, scores, graphs)
        # Lowest status first (OK=0, FIXED=1, BACKOFF=2), highest score second
        best = sorted(zipped, key=lambda item: (item[0].value, -item[1]))[0]
        best_scoring_results["graph"].append(best[2])
        best_scoring_results["status"].append(best[0])

    # Returns dictionary with "graph" and "status" keys
    return best_scoring_results


LANGUAGES = {
    "English": "en_XX",
    "Dutch": "nl_XX",
    "Spanish": "es_XX",
}
spaces/CForGETaass/vits-uma-genshin-honkai/text/symbols.py
DELETED
@@ -1,39 +0,0 @@

'''
Defines the set of symbols used in text input to the model.
'''

'''# japanese_cleaners
_pad = '_'
_punctuation = ',.!?-'
_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧ↓↑ '
'''

'''# japanese_cleaners2
_pad = '_'
_punctuation = ',.!?-~…'
_letters = 'AEINOQUabdefghijkmnoprstuvwyzʃʧʦ↓↑ '
'''

'''# korean_cleaners
_pad = '_'
_punctuation = ',.!?…~'
_letters = 'ㄱㄴㄷㄹㅁㅂㅅㅇㅈㅊㅋㅌㅍㅎㄲㄸㅃㅆㅉㅏㅓㅗㅜㅡㅣㅐㅔ '
'''

'''# chinese_cleaners
_pad = '_'
_punctuation = ',。!?—…'
_letters = 'ㄅㄆㄇㄈㄉㄊㄋㄌㄍㄎㄏㄐㄑㄒㄓㄔㄕㄖㄗㄘㄙㄚㄛㄜㄝㄞㄟㄠㄡㄢㄣㄤㄥㄦㄧㄨㄩˉˊˇˋ˙ '
'''

# zh_ja_mixture_cleaners
_pad = '_'
_punctuation = ',.!?-~…'
_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ '


# Export all symbols:
symbols = [_pad] + list(_punctuation) + list(_letters)

# Special symbol ids
SPACE_ID = symbols.index(" ")
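The `symbols` list above defines the model's input vocabulary: index 0 is padding, then punctuation, then letters. A minimal sketch of how such a front-end typically maps text to symbol ids (the `text_to_sequence` helper is illustrative, not part of the deleted file; real cleaners would normalise the text first):

```python
# Rebuild the zh_ja_mixture_cleaners vocabulary from the deleted module.
_pad = '_'
_punctuation = ',.!?-~…'
_letters = 'AEINOQUabdefghijklmnoprstuvwyzʃʧʦɯɹəɥ⁼ʰ`→↓↑ '
symbols = [_pad] + list(_punctuation) + list(_letters)

# Lookup table from symbol to integer id.
symbol_to_id = {s: i for i, s in enumerate(symbols)}


def text_to_sequence(text):
    # Unknown characters are skipped; a real cleaner pipeline would run first.
    return [symbol_to_id[ch] for ch in text if ch in symbol_to_id]


print(text_to_sequence('no!'))  # [27, 28, 3]
```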
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_roi_align.py
DELETED
@@ -1,152 +0,0 @@

# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import numpy as np
import unittest
import cv2
import torch
from fvcore.common.benchmark import benchmark

from detectron2.layers.roi_align import ROIAlign


class ROIAlignTest(unittest.TestCase):
    def test_forward_output(self):
        input = np.arange(25).reshape(5, 5).astype("float32")
        """
        0  1  2  3  4
        5  6  7  8  9
        10 11 12 13 14
        15 16 17 18 19
        20 21 22 23 24
        """

        output = self._simple_roialign(input, [1, 1, 3, 3], (4, 4), aligned=False)
        output_correct = self._simple_roialign(input, [1, 1, 3, 3], (4, 4), aligned=True)

        # without correction:
        old_results = [
            [7.5, 8, 8.5, 9],
            [10, 10.5, 11, 11.5],
            [12.5, 13, 13.5, 14],
            [15, 15.5, 16, 16.5],
        ]

        # with 0.5 correction:
        correct_results = [
            [4.5, 5.0, 5.5, 6.0],
            [7.0, 7.5, 8.0, 8.5],
            [9.5, 10.0, 10.5, 11.0],
            [12.0, 12.5, 13.0, 13.5],
        ]
        # This is an upsampled version of [[6, 7], [11, 12]]

        self.assertTrue(np.allclose(output.flatten(), np.asarray(old_results).flatten()))
        self.assertTrue(
            np.allclose(output_correct.flatten(), np.asarray(correct_results).flatten())
        )

        # Also see similar issues in tensorflow at
        # https://github.com/tensorflow/tensorflow/issues/26278

    def test_resize(self):
        H, W = 30, 30
        input = np.random.rand(H, W).astype("float32") * 100
        box = [10, 10, 20, 20]
        output = self._simple_roialign(input, box, (5, 5), aligned=True)

        input2x = cv2.resize(input, (W // 2, H // 2), interpolation=cv2.INTER_LINEAR)
        box2x = [x / 2 for x in box]
        output2x = self._simple_roialign(input2x, box2x, (5, 5), aligned=True)
        diff = np.abs(output2x - output)
        self.assertTrue(diff.max() < 1e-4)

    def _simple_roialign(self, img, box, resolution, aligned=True):
        """
        RoiAlign with scale 1.0 and 0 sample ratio.
        """
        if isinstance(resolution, int):
            resolution = (resolution, resolution)
        op = ROIAlign(resolution, 1.0, 0, aligned=aligned)
        input = torch.from_numpy(img[None, None, :, :].astype("float32"))

        rois = [0] + list(box)
        rois = torch.from_numpy(np.asarray(rois)[None, :].astype("float32"))
        output = op.forward(input, rois)
        if torch.cuda.is_available():
            output_cuda = op.forward(input.cuda(), rois.cuda()).cpu()
            self.assertTrue(torch.allclose(output, output_cuda))
        return output[0, 0]

    def _simple_roialign_with_grad(self, img, box, resolution, device):
        if isinstance(resolution, int):
            resolution = (resolution, resolution)

        op = ROIAlign(resolution, 1.0, 0, aligned=True)
        input = torch.from_numpy(img[None, None, :, :].astype("float32"))

        rois = [0] + list(box)
        rois = torch.from_numpy(np.asarray(rois)[None, :].astype("float32"))
        input = input.to(device=device)
        rois = rois.to(device=device)
        input.requires_grad = True
        output = op.forward(input, rois)
        return input, output

    def test_empty_box(self):
        img = np.random.rand(5, 5)
        box = [3, 4, 5, 4]
        o = self._simple_roialign(img, box, 7)
        self.assertTrue(o.shape == (7, 7))
        self.assertTrue((o == 0).all())

        for dev in ["cpu"] + (["cuda"] if torch.cuda.is_available() else []):
            input, output = self._simple_roialign_with_grad(img, box, 7, torch.device(dev))
            output.sum().backward()
            self.assertTrue(torch.allclose(input.grad, torch.zeros_like(input)))

    def test_empty_batch(self):
        input = torch.zeros(0, 3, 10, 10, dtype=torch.float32)
        rois = torch.zeros(0, 5, dtype=torch.float32)
        op = ROIAlign((7, 7), 1.0, 0, aligned=True)
        output = op.forward(input, rois)
        self.assertTrue(output.shape == (0, 3, 7, 7))


def benchmark_roi_align():
    from detectron2 import _C

    def random_boxes(mean_box, stdev, N, maxsize):
        ret = torch.rand(N, 4) * stdev + torch.tensor(mean_box, dtype=torch.float)
        ret.clamp_(min=0, max=maxsize)
        return ret

    def func(N, C, H, W, nboxes_per_img):
        input = torch.rand(N, C, H, W)
        boxes = []
        batch_idx = []
        for k in range(N):
            b = random_boxes([80, 80, 130, 130], 24, nboxes_per_img, H)
            # try smaller boxes:
            # b = random_boxes([100, 100, 110, 110], 4, nboxes_per_img, H)
            boxes.append(b)
            batch_idx.append(torch.zeros(nboxes_per_img, 1, dtype=torch.float32) + k)
        boxes = torch.cat(boxes, axis=0)
        batch_idx = torch.cat(batch_idx, axis=0)
        boxes = torch.cat([batch_idx, boxes], axis=1)

        input = input.cuda()
        boxes = boxes.cuda()

        def bench():
            _C.roi_align_forward(input, boxes, 1.0, 7, 7, 0, True)
            torch.cuda.synchronize()

        return bench

    args = [dict(N=2, C=512, H=256, W=256, nboxes_per_img=500)]
    benchmark(func, "cuda_roialign", args, num_iters=20, warmup_iters=1)


if __name__ == "__main__":
    if torch.cuda.is_available():
        benchmark_roi_align()
    unittest.main()
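`test_forward_output` above checks that `aligned=True` shifts the sampling grid by half a pixel before bilinear interpolation. With one sampling point per bin, the same expected numbers fall out of a plain NumPy sketch; this is a simplified re-implementation for illustration, not detectron2's CUDA/CPU kernel (which averages multiple samples per bin):

```python
import numpy as np


def bilinear(img, y, x):
    # Bilinear interpolation on a 2-D grid (all queries stay in bounds here).
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = y0 + 1, x0 + 1
    ly, lx = y - y0, x - x0
    return (img[y0, x0] * (1 - ly) * (1 - lx) + img[y0, x1] * (1 - ly) * lx
            + img[y1, x0] * ly * (1 - lx) + img[y1, x1] * ly * lx)


def roi_align_1sample(img, box, out_h, out_w, aligned):
    # One sampling point per output bin, placed at the bin centre.
    x1, y1, x2, y2 = box
    offset = 0.5 if aligned else 0.0  # the half-pixel correction under test
    bin_h, bin_w = (y2 - y1) / out_h, (x2 - x1) / out_w
    out = np.empty((out_h, out_w), dtype=np.float32)
    for i in range(out_h):
        for j in range(out_w):
            y = y1 + (i + 0.5) * bin_h - offset
            x = x1 + (j + 0.5) * bin_w - offset
            out[i, j] = bilinear(img, y, x)
    return out


img = np.arange(25, dtype=np.float32).reshape(5, 5)
# First output row with the correction: 4.5, 5.0, 5.5, 6.0 (matches correct_results)
print(roi_align_1sample(img, [1, 1, 3, 3], 4, 4, aligned=True)[0])
```

Because the test image is linear in its coordinates (`img[y, x] = 5*y + x`), bilinear interpolation is exact, so the sketch reproduces both `old_results` and `correct_results` from the test.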
spaces/CVPR/LIVE/pybind11/tools/FindPythonLibsNew.cmake
DELETED
@@ -1,255 +0,0 @@

# - Find python libraries
# This module finds the libraries corresponding to the Python interpreter
# FindPythonInterp provides.
# This code sets the following variables:
#
# PYTHONLIBS_FOUND - have the Python libs been found
# PYTHON_PREFIX - path to the Python installation
# PYTHON_LIBRARIES - path to the python library
# PYTHON_INCLUDE_DIRS - path to where Python.h is found
# PYTHON_MODULE_EXTENSION - lib extension, e.g. '.so' or '.pyd'
# PYTHON_MODULE_PREFIX - lib name prefix: usually an empty string
# PYTHON_SITE_PACKAGES - path to installation site-packages
# PYTHON_IS_DEBUG - whether the Python interpreter is a debug build
#
# Thanks to talljimbo for the patch adding the 'LDVERSION' config
# variable usage.

#=============================================================================
# Copyright 2001-2009 Kitware, Inc.
# Copyright 2012 Continuum Analytics, Inc.
#
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# * Redistributions of source code must retain the above copyright
#   notice, this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright
#   notice, this list of conditions and the following disclaimer in the
#   documentation and/or other materials provided with the distribution.
#
# * Neither the names of Kitware, Inc., the Insight Software Consortium,
#   nor the names of their contributors may be used to endorse or promote
#   products derived from this software without specific prior written
#   permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#=============================================================================

# Checking for the extension makes sure that `LibsNew` was found and not just `Libs`.
if(PYTHONLIBS_FOUND AND PYTHON_MODULE_EXTENSION)
  return()
endif()

if(PythonLibsNew_FIND_QUIETLY)
  set(_pythonlibs_quiet QUIET)
endif()

if(PythonLibsNew_FIND_REQUIRED)
  set(_pythonlibs_required REQUIRED)
endif()

# Check to see if the `python` command is present and from a virtual
# environment, conda, or GHA activation - if it is, try to use that.

if(NOT DEFINED PYTHON_EXECUTABLE)
  if(DEFINED ENV{VIRTUAL_ENV})
    find_program(
      PYTHON_EXECUTABLE python
      PATHS "$ENV{VIRTUAL_ENV}" "$ENV{VIRTUAL_ENV}/bin"
      NO_DEFAULT_PATH)
  elseif(DEFINED ENV{CONDA_PREFIX})
    find_program(
      PYTHON_EXECUTABLE python
      PATHS "$ENV{CONDA_PREFIX}" "$ENV{CONDA_PREFIX}/bin"
      NO_DEFAULT_PATH)
  elseif(DEFINED ENV{pythonLocation})
    find_program(
      PYTHON_EXECUTABLE python
      PATHS "$ENV{pythonLocation}" "$ENV{pythonLocation}/bin"
      NO_DEFAULT_PATH)
  endif()
  if(NOT PYTHON_EXECUTABLE)
    unset(PYTHON_EXECUTABLE)
  endif()
endif()

# Use the Python interpreter to find the libs.
if(NOT PythonLibsNew_FIND_VERSION)
  set(PythonLibsNew_FIND_VERSION "")
endif()

find_package(PythonInterp ${PythonLibsNew_FIND_VERSION} ${_pythonlibs_required}
             ${_pythonlibs_quiet})

if(NOT PYTHONINTERP_FOUND)
  set(PYTHONLIBS_FOUND FALSE)
  set(PythonLibsNew_FOUND FALSE)
  return()
endif()

# According to https://stackoverflow.com/questions/646518/python-how-to-detect-debug-interpreter
# testing whether sys has the gettotalrefcount function is a reliable, cross-platform
# way to detect a CPython debug interpreter.
#
# The library suffix is from the config var LDVERSION sometimes, otherwise
# VERSION. VERSION will typically be like "2.7" on unix, and "27" on windows.
execute_process(
  COMMAND
    "${PYTHON_EXECUTABLE}" "-c" "from distutils import sysconfig as s;import sys;import struct;
print('.'.join(str(v) for v in sys.version_info));
print(sys.prefix);
print(s.get_python_inc(plat_specific=True));
print(s.get_python_lib(plat_specific=True));
print(s.get_config_var('SO'));
print(hasattr(sys, 'gettotalrefcount')+0);
print(struct.calcsize('@P'));
print(s.get_config_var('LDVERSION') or s.get_config_var('VERSION'));
print(s.get_config_var('LIBDIR') or '');
print(s.get_config_var('MULTIARCH') or '');
"
  RESULT_VARIABLE _PYTHON_SUCCESS
  OUTPUT_VARIABLE _PYTHON_VALUES
  ERROR_VARIABLE _PYTHON_ERROR_VALUE)

if(NOT _PYTHON_SUCCESS MATCHES 0)
  if(PythonLibsNew_FIND_REQUIRED)
    message(FATAL_ERROR "Python config failure:\n${_PYTHON_ERROR_VALUE}")
  endif()
  set(PYTHONLIBS_FOUND FALSE)
  set(PythonLibsNew_FOUND FALSE)
  return()
endif()

# Convert the process output into a list
if(WIN32)
  string(REGEX REPLACE "\\\\" "/" _PYTHON_VALUES ${_PYTHON_VALUES})
endif()
string(REGEX REPLACE ";" "\\\\;" _PYTHON_VALUES ${_PYTHON_VALUES})
string(REGEX REPLACE "\n" ";" _PYTHON_VALUES ${_PYTHON_VALUES})
list(GET _PYTHON_VALUES 0 _PYTHON_VERSION_LIST)
list(GET _PYTHON_VALUES 1 PYTHON_PREFIX)
list(GET _PYTHON_VALUES 2 PYTHON_INCLUDE_DIR)
list(GET _PYTHON_VALUES 3 PYTHON_SITE_PACKAGES)
list(GET _PYTHON_VALUES 4 PYTHON_MODULE_EXTENSION)
list(GET _PYTHON_VALUES 5 PYTHON_IS_DEBUG)
list(GET _PYTHON_VALUES 6 PYTHON_SIZEOF_VOID_P)
list(GET _PYTHON_VALUES 7 PYTHON_LIBRARY_SUFFIX)
list(GET _PYTHON_VALUES 8 PYTHON_LIBDIR)
list(GET _PYTHON_VALUES 9 PYTHON_MULTIARCH)

# Make sure the Python has the same pointer-size as the chosen compiler
# Skip if CMAKE_SIZEOF_VOID_P is not defined
if(CMAKE_SIZEOF_VOID_P AND (NOT "${PYTHON_SIZEOF_VOID_P}" STREQUAL "${CMAKE_SIZEOF_VOID_P}"))
  if(PythonLibsNew_FIND_REQUIRED)
    math(EXPR _PYTHON_BITS "${PYTHON_SIZEOF_VOID_P} * 8")
    math(EXPR _CMAKE_BITS "${CMAKE_SIZEOF_VOID_P} * 8")
    message(FATAL_ERROR "Python config failure: Python is ${_PYTHON_BITS}-bit, "
                        "chosen compiler is ${_CMAKE_BITS}-bit")
  endif()
  set(PYTHONLIBS_FOUND FALSE)
  set(PythonLibsNew_FOUND FALSE)
  return()
endif()

# The built-in FindPython didn't always give the version numbers
string(REGEX REPLACE "\\." ";" _PYTHON_VERSION_LIST ${_PYTHON_VERSION_LIST})
list(GET _PYTHON_VERSION_LIST 0 PYTHON_VERSION_MAJOR)
list(GET _PYTHON_VERSION_LIST 1 PYTHON_VERSION_MINOR)
list(GET _PYTHON_VERSION_LIST 2 PYTHON_VERSION_PATCH)
set(PYTHON_VERSION "${PYTHON_VERSION_MAJOR}.${PYTHON_VERSION_MINOR}.${PYTHON_VERSION_PATCH}")

# Make sure all directory separators are '/'
string(REGEX REPLACE "\\\\" "/" PYTHON_PREFIX "${PYTHON_PREFIX}")
string(REGEX REPLACE "\\\\" "/" PYTHON_INCLUDE_DIR "${PYTHON_INCLUDE_DIR}")
string(REGEX REPLACE "\\\\" "/" PYTHON_SITE_PACKAGES "${PYTHON_SITE_PACKAGES}")

if(CMAKE_HOST_WIN32)
  set(PYTHON_LIBRARY "${PYTHON_PREFIX}/libs/python${PYTHON_LIBRARY_SUFFIX}.lib")

  # when run in a venv, PYTHON_PREFIX points to it. But the libraries remain in the
  # original python installation. They may be found relative to PYTHON_INCLUDE_DIR.
  if(NOT EXISTS "${PYTHON_LIBRARY}")
    get_filename_component(_PYTHON_ROOT ${PYTHON_INCLUDE_DIR} DIRECTORY)
    set(PYTHON_LIBRARY "${_PYTHON_ROOT}/libs/python${PYTHON_LIBRARY_SUFFIX}.lib")
  endif()

  # if we are in MSYS & MINGW, and we didn't find windows python lib, look for system python lib
  if(DEFINED ENV{MSYSTEM}
     AND MINGW
     AND NOT EXISTS "${PYTHON_LIBRARY}")
    if(PYTHON_MULTIARCH)
      set(_PYTHON_LIBS_SEARCH "${PYTHON_LIBDIR}/${PYTHON_MULTIARCH}" "${PYTHON_LIBDIR}")
    else()
      set(_PYTHON_LIBS_SEARCH "${PYTHON_LIBDIR}")
    endif()
    unset(PYTHON_LIBRARY)
    find_library(
      PYTHON_LIBRARY
      NAMES "python${PYTHON_LIBRARY_SUFFIX}"
      PATHS ${_PYTHON_LIBS_SEARCH}
      NO_DEFAULT_PATH)
  endif()

  # raise an error if the python libs are still not found.
  if(NOT EXISTS "${PYTHON_LIBRARY}")
    message(FATAL_ERROR "Python libraries not found")
  endif()

else()
  if(PYTHON_MULTIARCH)
    set(_PYTHON_LIBS_SEARCH "${PYTHON_LIBDIR}/${PYTHON_MULTIARCH}" "${PYTHON_LIBDIR}")
  else()
    set(_PYTHON_LIBS_SEARCH "${PYTHON_LIBDIR}")
  endif()
  #message(STATUS "Searching for Python libs in ${_PYTHON_LIBS_SEARCH}")
  # Probably this needs to be more involved. It would be nice if the config
  # information the python interpreter itself gave us were more complete.
|
222 |
-
find_library(
|
223 |
-
PYTHON_LIBRARY
|
224 |
-
NAMES "python${PYTHON_LIBRARY_SUFFIX}"
|
225 |
-
PATHS ${_PYTHON_LIBS_SEARCH}
|
226 |
-
NO_DEFAULT_PATH)
|
227 |
-
|
228 |
-
# If all else fails, just set the name/version and let the linker figure out the path.
|
229 |
-
if(NOT PYTHON_LIBRARY)
|
230 |
-
set(PYTHON_LIBRARY python${PYTHON_LIBRARY_SUFFIX})
|
231 |
-
endif()
|
232 |
-
endif()
|
233 |
-
|
234 |
-
mark_as_advanced(PYTHON_LIBRARY PYTHON_INCLUDE_DIR)
|
235 |
-
|
236 |
-
# We use PYTHON_INCLUDE_DIR, PYTHON_LIBRARY and PYTHON_DEBUG_LIBRARY for the
|
237 |
-
# cache entries because they are meant to specify the location of a single
|
238 |
-
# library. We now set the variables listed by the documentation for this
|
239 |
-
# module.
|
240 |
-
set(PYTHON_INCLUDE_DIRS "${PYTHON_INCLUDE_DIR}")
|
241 |
-
set(PYTHON_LIBRARIES "${PYTHON_LIBRARY}")
|
242 |
-
if(NOT PYTHON_DEBUG_LIBRARY)
|
243 |
-
set(PYTHON_DEBUG_LIBRARY "")
|
244 |
-
endif()
|
245 |
-
set(PYTHON_DEBUG_LIBRARIES "${PYTHON_DEBUG_LIBRARY}")
|
246 |
-
|
247 |
-
find_package_message(PYTHON "Found PythonLibs: ${PYTHON_LIBRARY}"
|
248 |
-
"${PYTHON_EXECUTABLE}${PYTHON_VERSION_STRING}")
|
249 |
-
|
250 |
-
set(PYTHONLIBS_FOUND TRUE)
|
251 |
-
set(PythonLibsNew_FOUND TRUE)
|
252 |
-
|
253 |
-
if(NOT PYTHON_MODULE_PREFIX)
|
254 |
-
set(PYTHON_MODULE_PREFIX "")
|
255 |
-
endif()
|
spaces/CVPR/LIVE/thrust/thrust/detail/raw_reference_cast.h
DELETED
@@ -1,398 +0,0 @@
/*
 *  Copyright 2008-2013 NVIDIA Corporation
 *
 *  Licensed under the Apache License, Version 2.0 (the "License");
 *  you may not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS,
 *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 */

#pragma once

#include <thrust/detail/config.h>
#include <thrust/detail/raw_pointer_cast.h>
#include <thrust/detail/type_traits/has_nested_type.h>
#include <thrust/detail/type_traits.h>
#include <thrust/detail/tuple_transform.h>
#include <thrust/iterator/detail/tuple_of_iterator_references.h>

// the order of declarations and definitions in this file is totally goofy
// this header defines raw_reference_cast, which has a few overloads towards the bottom of the file
// raw_reference_cast depends on metafunctions such as is_unwrappable and raw_reference
// we need to be sure that these metafunctions are completely defined (including specializations) before they are instantiated by raw_reference_cast

namespace thrust
{
namespace detail
{

__THRUST_DEFINE_HAS_NESTED_TYPE(is_wrapped_reference, wrapped_reference_hint)

// wrapped reference-like things which aren't strictly wrapped references
// (e.g. tuples of wrapped references) are considered unwrappable
template<typename T>
  struct is_unwrappable
    : is_wrapped_reference<T>
{};

// specialize is_unwrappable
// a tuple is_unwrappable if any of its elements is_unwrappable
template<
  typename T0, typename T1, typename T2,
  typename T3, typename T4, typename T5,
  typename T6, typename T7, typename T8,
  typename T9
>
  struct is_unwrappable<
    thrust::tuple<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9>
  >
    : or_<
        is_unwrappable<T0>,
        is_unwrappable<T1>,
        is_unwrappable<T2>,
        is_unwrappable<T3>,
        is_unwrappable<T4>,
        is_unwrappable<T5>,
        is_unwrappable<T6>,
        is_unwrappable<T7>,
        is_unwrappable<T8>,
        is_unwrappable<T9>
      >
{};

// specialize is_unwrappable
// a tuple_of_iterator_references is_unwrappable if any of its elements is_unwrappable
template<
  typename T0, typename T1, typename T2,
  typename T3, typename T4, typename T5,
  typename T6, typename T7, typename T8,
  typename T9
>
  struct is_unwrappable<
    thrust::detail::tuple_of_iterator_references<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9>
  >
    : or_<
        is_unwrappable<T0>,
        is_unwrappable<T1>,
        is_unwrappable<T2>,
        is_unwrappable<T3>,
        is_unwrappable<T4>,
        is_unwrappable<T5>,
        is_unwrappable<T6>,
        is_unwrappable<T7>,
        is_unwrappable<T8>,
        is_unwrappable<T9>
      >
{};

template<typename T, typename Result = void>
  struct enable_if_unwrappable
    : enable_if<
        is_unwrappable<T>::value,
        Result
      >
{};

namespace raw_reference_detail
{

template<typename T, typename Enable = void>
  struct raw_reference_impl
    : add_reference<T>
{};

template<typename T>
  struct raw_reference_impl<
    T,
    typename thrust::detail::enable_if<
      is_wrapped_reference<
        typename remove_cv<T>::type
      >::value
    >::type
  >
{
  typedef typename add_reference<
    typename pointer_element<typename T::pointer>::type
  >::type type;
};

} // end raw_reference_detail

template<typename T>
  struct raw_reference
    : raw_reference_detail::raw_reference_impl<T>
{};

namespace raw_reference_detail
{

// unlike raw_reference,
// raw_reference_tuple_helper needs to return a value
// when it encounters one, rather than a reference
// upon encountering tuple, recurse
//
// we want the following behavior:
//  1. T -> T
//  2. T& -> T&
//  3. null_type -> null_type
//  4. reference<T> -> T&
//  5. tuple_of_iterator_references<T> -> tuple_of_iterator_references<raw_reference_tuple_helper<T>::type>

// wrapped references are unwrapped using raw_reference, otherwise, return T
template<typename T>
  struct raw_reference_tuple_helper
    : eval_if<
        is_unwrappable<
          typename remove_cv<T>::type
        >::value,
        raw_reference<T>,
        identity_<T>
      >
{};

// recurse on tuples
template <
  typename T0, typename T1, typename T2,
  typename T3, typename T4, typename T5,
  typename T6, typename T7, typename T8,
  typename T9
>
  struct raw_reference_tuple_helper<
    thrust::tuple<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9>
  >
{
  typedef thrust::tuple<
    typename raw_reference_tuple_helper<T0>::type,
    typename raw_reference_tuple_helper<T1>::type,
    typename raw_reference_tuple_helper<T2>::type,
    typename raw_reference_tuple_helper<T3>::type,
    typename raw_reference_tuple_helper<T4>::type,
    typename raw_reference_tuple_helper<T5>::type,
    typename raw_reference_tuple_helper<T6>::type,
    typename raw_reference_tuple_helper<T7>::type,
    typename raw_reference_tuple_helper<T8>::type,
    typename raw_reference_tuple_helper<T9>::type
  > type;
};

template <
  typename T0, typename T1, typename T2,
  typename T3, typename T4, typename T5,
  typename T6, typename T7, typename T8,
  typename T9
>
  struct raw_reference_tuple_helper<
    thrust::detail::tuple_of_iterator_references<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9>
  >
{
  typedef thrust::detail::tuple_of_iterator_references<
    typename raw_reference_tuple_helper<T0>::type,
    typename raw_reference_tuple_helper<T1>::type,
    typename raw_reference_tuple_helper<T2>::type,
    typename raw_reference_tuple_helper<T3>::type,
    typename raw_reference_tuple_helper<T4>::type,
    typename raw_reference_tuple_helper<T5>::type,
    typename raw_reference_tuple_helper<T6>::type,
    typename raw_reference_tuple_helper<T7>::type,
    typename raw_reference_tuple_helper<T8>::type,
    typename raw_reference_tuple_helper<T9>::type
  > type;
};

} // end raw_reference_detail

// a couple of specializations of raw_reference for tuples follow

// if a tuple "tuple_type" is_unwrappable,
//   then the raw_reference of tuple_type is a tuple of its members' raw_references
//   else the raw_reference of tuple_type is tuple_type &
template <
  typename T0, typename T1, typename T2,
  typename T3, typename T4, typename T5,
  typename T6, typename T7, typename T8,
  typename T9
>
  struct raw_reference<
    thrust::tuple<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9>
  >
{
  private:
    typedef thrust::tuple<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9> tuple_type;

  public:
    typedef typename eval_if<
      is_unwrappable<tuple_type>::value,
      raw_reference_detail::raw_reference_tuple_helper<tuple_type>,
      add_reference<tuple_type>
    >::type type;
};

template <
  typename T0, typename T1, typename T2,
  typename T3, typename T4, typename T5,
  typename T6, typename T7, typename T8,
  typename T9
>
  struct raw_reference<
    thrust::detail::tuple_of_iterator_references<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9>
  >
{
  private:
    typedef detail::tuple_of_iterator_references<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9> tuple_type;

  public:
    typedef typename raw_reference_detail::raw_reference_tuple_helper<tuple_type>::type type;

    // XXX figure out why is_unwrappable seems to be broken for tuple_of_iterator_references
    //typedef typename eval_if<
    //  is_unwrappable<tuple_type>::value,
    //  raw_reference_detail::raw_reference_tuple_helper<tuple_type>,
    //  add_reference<tuple_type>
    //>::type type;
};

} // end detail

// provide declarations of raw_reference_cast's overloads for raw_reference_caster below
template<typename T>
__host__ __device__
typename detail::raw_reference<T>::type
  raw_reference_cast(T &ref);

template<typename T>
__host__ __device__
typename detail::raw_reference<const T>::type
  raw_reference_cast(const T &ref);

template<
  typename T0, typename T1, typename T2,
  typename T3, typename T4, typename T5,
  typename T6, typename T7, typename T8,
  typename T9
>
__host__ __device__
typename detail::enable_if_unwrappable<
  thrust::detail::tuple_of_iterator_references<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9>,
  typename detail::raw_reference<
    thrust::detail::tuple_of_iterator_references<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9>
  >::type
>::type
raw_reference_cast(thrust::detail::tuple_of_iterator_references<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9> t);

namespace detail
{

struct raw_reference_caster
{
  template<typename T>
  __host__ __device__
  typename detail::raw_reference<T>::type operator()(T &ref)
  {
    return thrust::raw_reference_cast(ref);
  }

  template<typename T>
  __host__ __device__
  typename detail::raw_reference<const T>::type operator()(const T &ref)
  {
    return thrust::raw_reference_cast(ref);
  }

  template<
    typename T0, typename T1, typename T2,
    typename T3, typename T4, typename T5,
    typename T6, typename T7, typename T8,
    typename T9
  >
  __host__ __device__
  typename detail::raw_reference<
    thrust::detail::tuple_of_iterator_references<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9>
  >::type
  operator()(thrust::detail::tuple_of_iterator_references<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9> t,
             typename enable_if<
               is_unwrappable<thrust::detail::tuple_of_iterator_references<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9> >::value
             >::type * = 0)
  {
    return thrust::raw_reference_cast(t);
  }
}; // end raw_reference_caster

} // end detail

template<typename T>
__host__ __device__
typename detail::raw_reference<T>::type
  raw_reference_cast(T &ref)
{
  return *thrust::raw_pointer_cast(&ref);
} // end raw_reference_cast

template<typename T>
__host__ __device__
typename detail::raw_reference<const T>::type
  raw_reference_cast(const T &ref)
{
  return *thrust::raw_pointer_cast(&ref);
} // end raw_reference_cast

template<
  typename T0, typename T1, typename T2,
  typename T3, typename T4, typename T5,
  typename T6, typename T7, typename T8,
  typename T9
>
__host__ __device__
typename detail::enable_if_unwrappable<
  thrust::detail::tuple_of_iterator_references<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9>,
  typename detail::raw_reference<
    thrust::detail::tuple_of_iterator_references<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9>
  >::type
>::type
raw_reference_cast(thrust::detail::tuple_of_iterator_references<T0,T1,T2,T3,T4,T5,T6,T7,T8,T9> t)
{
  thrust::detail::raw_reference_caster f;

  // note that we pass raw_reference_tuple_helper, not raw_reference as the unary metafunction
  // the different way that raw_reference_tuple_helper unwraps tuples is important
  return thrust::detail::tuple_host_device_transform<detail::raw_reference_detail::raw_reference_tuple_helper>(t, f);
} // end raw_reference_cast

} // end thrust
spaces/CVPR/LIVE/thrust/thrust/device_malloc_allocator.h
DELETED
@@ -1,185 +0,0 @@
/*
 *  Copyright 2008-2018 NVIDIA Corporation
 *
 *  Licensed under the Apache License, Version 2.0 (the "License");
 *  you may not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS,
 *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 */

/*! \file device_malloc_allocator.h
 *  \brief An allocator which allocates storage with \p device_malloc
 */

#pragma once

#include <thrust/detail/config.h>
#include <thrust/device_ptr.h>
#include <thrust/device_reference.h>
#include <thrust/device_malloc.h>
#include <thrust/device_free.h>
#include <limits>
#include <stdexcept>

namespace thrust
{

// forward declarations to WAR circular #includes
template<typename> class device_ptr;
template<typename T> device_ptr<T> device_malloc(const std::size_t n);

/*! \addtogroup memory_management Memory Management
 *  \addtogroup memory_management_classes Memory Management Classes
 *  \ingroup memory_management
 *  \{
 */

/*! \p device_malloc_allocator is a device memory allocator that employs the
 *  \p device_malloc function for allocation.
 *
 *  \p device_malloc_allocator is deprecated in favor of <tt>thrust::mr</tt>
 *  memory resource-based allocators.
 *
 *  \see device_malloc
 *  \see device_ptr
 *  \see device_allocator
 *  \see http://www.sgi.com/tech/stl/Allocators.html
 */
template<typename T>
  class device_malloc_allocator
{
  public:
    /*! Type of element allocated, \c T. */
    typedef T value_type;

    /*! Pointer to allocation, \c device_ptr<T>. */
    typedef device_ptr<T> pointer;

    /*! \c const pointer to allocation, \c device_ptr<const T>. */
    typedef device_ptr<const T> const_pointer;

    /*! Reference to allocated element, \c device_reference<T>. */
    typedef device_reference<T> reference;

    /*! \c const reference to allocated element, \c device_reference<const T>. */
    typedef device_reference<const T> const_reference;

    /*! Type of allocation size, \c std::size_t. */
    typedef std::size_t size_type;

    /*! Type of allocation difference, \c pointer::difference_type. */
    typedef typename pointer::difference_type difference_type;

    /*! The \p rebind metafunction provides the type of a \p device_malloc_allocator
     *  instantiated with another type.
     *
     *  \tparam U The other type to use for instantiation.
     */
    template<typename U>
      struct rebind
    {
      /*! The typedef \p other gives the type of the rebound \p device_malloc_allocator.
       */
      typedef device_malloc_allocator<U> other;
    }; // end rebind

    /*! No-argument constructor has no effect. */
    __host__ __device__
    inline device_malloc_allocator() {}

    /*! No-argument destructor has no effect. */
    __host__ __device__
    inline ~device_malloc_allocator() {}

    /*! Copy constructor has no effect. */
    __host__ __device__
    inline device_malloc_allocator(device_malloc_allocator const&) {}

    /*! Constructor from other \p device_malloc_allocator has no effect. */
    template<typename U>
    __host__ __device__
    inline device_malloc_allocator(device_malloc_allocator<U> const&) {}

#if THRUST_CPP_DIALECT >= 2011
    device_malloc_allocator & operator=(const device_malloc_allocator &) = default;
#endif

    /*! Returns the address of an allocated object.
     *  \return <tt>&r</tt>.
     */
    __host__ __device__
    inline pointer address(reference r) { return &r; }

    /*! Returns the address an allocated object.
     *  \return <tt>&r</tt>.
     */
    __host__ __device__
    inline const_pointer address(const_reference r) { return &r; }

    /*! Allocates storage for \p cnt objects.
     *  \param cnt The number of objects to allocate.
     *  \return A \p pointer to uninitialized storage for \p cnt objects.
     *  \note Memory allocated by this function must be deallocated with \p deallocate.
     */
    __host__
    inline pointer allocate(size_type cnt,
                            const_pointer = const_pointer(static_cast<T*>(0)))
    {
      if(cnt > this->max_size())
      {
        throw std::bad_alloc();
      } // end if

      return pointer(device_malloc<T>(cnt));
    } // end allocate()

    /*! Deallocates storage for objects allocated with \p allocate.
     *  \param p A \p pointer to the storage to deallocate.
     *  \param cnt The size of the previous allocation.
     *  \note Memory deallocated by this function must previously have been
     *        allocated with \p allocate.
     */
    __host__
    inline void deallocate(pointer p, size_type cnt)
    {
      // silence unused parameter warning while still leaving the parameter name for Doxygen
      (void)(cnt);

      device_free(p);
    } // end deallocate()

    /*! Returns the largest value \c n for which <tt>allocate(n)</tt> might succeed.
     *  \return The largest value \c n for which <tt>allocate(n)</tt> might succeed.
     */
    inline size_type max_size() const
    {
      return (std::numeric_limits<size_type>::max)() / sizeof(T);
    } // end max_size()

    /*! Compares against another \p device_malloc_allocator for equality.
     *  \return \c true
     */
    __host__ __device__
    inline bool operator==(device_malloc_allocator const&) const { return true; }

    /*! Compares against another \p device_malloc_allocator for inequality.
     *  \return \c false
     */
    __host__ __device__
    inline bool operator!=(device_malloc_allocator const &a) const { return !operator==(a); }
}; // end device_malloc_allocator

/*! \}
 */

} // end thrust
spaces/CVPR/WALT/mmdet/version.py
DELETED
@@ -1,19 +0,0 @@
# Copyright (c) Open-MMLab. All rights reserved.

__version__ = '2.11.0'
short_version = __version__


def parse_version_info(version_str):
    version_info = []
    for x in version_str.split('.'):
        if x.isdigit():
            version_info.append(int(x))
        elif x.find('rc') != -1:
            patch_version = x.split('rc')
            version_info.append(int(patch_version[0]))
            version_info.append(f'rc{patch_version[1]}')
    return tuple(version_info)


version_info = parse_version_info(__version__)
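The `parse_version_info` helper in the deleted `version.py` splits a dotted version string into integer components, with release-candidate suffixes (`rc`) kept as separate string elements. A small self-contained demonstration, with the function reproduced verbatim from the file above:

```python
def parse_version_info(version_str):
    # Reproduced from mmdet/version.py: integers for plain components,
    # an 'rcN' string appended separately for release candidates.
    version_info = []
    for x in version_str.split('.'):
        if x.isdigit():
            version_info.append(int(x))
        elif x.find('rc') != -1:
            patch_version = x.split('rc')
            version_info.append(int(patch_version[0]))
            version_info.append(f'rc{patch_version[1]}')
    return tuple(version_info)

print(parse_version_info('2.11.0'))     # → (2, 11, 0)
print(parse_version_info('2.11.0rc1'))  # → (2, 11, 0, 'rc1')
```

Note that tuples of mixed ints and strings like these compare element-wise, which is why the rc suffix is stored as its own element rather than folded into the patch number.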