Commit d2b83ae · Parent(s): 563cc9a
Update parquet files (step 20 of 476)

This view is limited to 50 files because it contains too many changes.
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download PATCHED Facebook Messenger For Samsung Gt-e2252 Tools Sesiones Speed.md +0 -32
- spaces/1gistliPinn/ChatGPT4/Examples/Carte Maroc Format Fbl.md +0 -6
- spaces/1phancelerku/anime-remove-background/City of Angels - The Masterpiece by Thirty Seconds to Mars How to Download MP3.md +0 -116
- spaces/1phancelerku/anime-remove-background/Download dan Nonton Film Ashfall (2019) Sub Indo Full Movie HD.md +0 -133
- spaces/801artistry/RVC801/demucs/augment.py +0 -106
- spaces/A00001/bingothoo/tests/kblob.ts +0 -27
- spaces/AAYUSH27/Neuro/app.py +0 -98
- spaces/AFRAC/NCM_DEMO/README.md +0 -13
- spaces/AIConsultant/MusicGen/audiocraft/modules/transformer.py +0 -747
- spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/utils/export.py +0 -56
- spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/utils/extend.py +0 -332
- spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/template.ts +0 -28
- spaces/AchyuthGamer/OpenGPT/g4f/models.py +0 -274
- spaces/AfrodreamsAI/afrodreams/ex_app.py +0 -95
- spaces/AlexWortega/food_calories/app.py +0 -50
- spaces/AllAideas/SegmentacionVideo/utils/predict.py +0 -104
- spaces/Aloento/9Nine-PITS/app.py +0 -186
- spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/policy.h +0 -25
- spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_769x769_80k_cityscapes.py +0 -9
- spaces/Andy1621/uniformer_light/app.py +0 -173
- spaces/AngoHF/ANGO-Leaderboard/assets/evaluation.py +0 -185
- spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example.py +0 -63
- spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/guided_diffusion/fp16_util.py +0 -236
- spaces/Anonymous-sub/Rerender/gmflow_module/main.py +0 -557
- spaces/AriaMei/TTSdemo/mel_processing.py +0 -119
- spaces/Arnx/MusicGenXvAKN/audiocraft/models/loaders.py +0 -94
- spaces/Artrajz/vits-simple-api/bert_vits2/utils.py +0 -70
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/langbulgarianmodel.py +0 -0
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/rotated_boxes.py +0 -21
- spaces/Benson/text-generation/Examples/Descargar Facebook Lite Apk Versin Antigua.md +0 -118
- spaces/Big-Web/MMSD/env/Lib/site-packages/s3transfer/copies.py +0 -382
- spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/unixccompiler.py +0 -401
- spaces/BigSalmon/TestAnyGPTModel/README.md +0 -38
- spaces/Billyosoro/ESRGAN/FAQ.md +0 -9
- spaces/Brainclub5000/wesley7137-Llama-2-13B-Nous-Hermes-vicuna-uncensored-mastermod-spych/app.py +0 -3
- spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/data/datasets/builtin.py +0 -220
- spaces/CVPR/LIVE/LIVE/colab.py +0 -687
- spaces/CVPR/LIVE/pybind11/tests/test_eigen.py +0 -697
- spaces/CVPR/LIVE/shape.cpp +0 -22
- spaces/CVPR/WALT/mmdet/core/evaluation/eval_hooks.py +0 -303
- spaces/CVPR/transfiner/configs/common/train.py +0 -18
- spaces/ChrisPreston/diff-svc_minato_aqua/modules/nsf_hifigan/nvSTFT.py +0 -120
- spaces/CikeyQI/Yunzai/Yunzai/lib/config/log.js +0 -98
- spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/resources/common/common.css +0 -458
- spaces/CikeyQI/meme-api/meme_generator/memes/jiji_king/__init__.py +0 -103
- spaces/CofAI/chat/client/js/theme-toggler.js +0 -22
- spaces/CofAI/openjourney/midjourney.py +0 -5
- spaces/CognitiveLabs/GPT-auto-webscraping/README.md +0 -13
- spaces/CompVis/text2img-latent-diffusion/app.py +0 -34
- spaces/Cran-May/Mistril-7b/README.md +0 -12
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download PATCHED Facebook Messenger For Samsung Gt-e2252 Tools Sesiones Speed.md
DELETED
@@ -1,32 +0,0 @@
-
-<h1>How to Download Facebook Messenger for Samsung E2252</h1>
-<p>Facebook Messenger is a popular app that allows you to chat with your friends and family on the social media platform. You can also send text messages, images, videos, stickers, voice notes, and more in free group chats. But how can you download Facebook Messenger for Samsung E2252, a feature phone that runs on a proprietary operating system?</p>
-<h2>Download Facebook Messenger For Samsung Gt-e2252 tools sesiones speed</h2><br /><p><b><b>DOWNLOAD</b> ⇔ <a href="https://byltly.com/2uKwkO">https://byltly.com/2uKwkO</a></b></p><br /><br />
-<p>In this article, we will show you the steps to download and install Facebook Messenger for Samsung E2252 using a third-party website called Apkpure. Apkpure is a website that offers free APK files of various apps and games for Android devices. APK files are the installation packages that contain all the necessary files and data for an app to run on your device.</p>
-<p>Before we begin, please note that downloading and installing APK files from unknown sources may pose some risks to your device and personal data. You should only download APK files from trusted and verified websites, and scan them with an antivirus software before opening them. We are not responsible for any damage or loss that may occur as a result of following this guide.</p>
-<h2>Steps to Download Facebook Messenger for Samsung E2252</h2>
-<ol>
-<li>On your Samsung E2252, open the browser app and go to <a href="https://apkpure.com/facebook-messenger/com.facebook.orca">https://apkpure.com/facebook-messenger/com.facebook.orca</a>. This is the official page of Facebook Messenger on Apkpure.</li>
-<li>Scroll down and tap on the green "Download APK" button. This will start downloading the APK file of Facebook Messenger to your device.</li>
-<li>Once the download is complete, go to your file manager app and locate the downloaded APK file. It should be in the "Downloads" folder or a similar location.</li>
-<li>Tap on the APK file to open it. You may see a warning message that says "Install blocked" or "For security, your phone is set to block installation of apps obtained from unknown sources". If you see this message, tap on "Settings" and enable the option to allow installation of apps from unknown sources. You may need to enter your password or PIN to confirm this action.</li>
-<li>After enabling the option, go back to the APK file and tap on it again. You should see a screen that shows the app's permissions and information. Tap on "Install" to start installing Facebook Messenger on your device.</li>
-<li>Wait for the installation process to finish. You should see a message that says "App installed" when it is done.</li>
-<li>Tap on "Open" to launch Facebook Messenger on your device. You may need to sign in with your Facebook account or create a new one if you don't have one already.</li>
-</ol>
-<p>Congratulations! You have successfully downloaded and installed Facebook Messenger for Samsung E2252. You can now enjoy chatting with your friends and family on Facebook using this app.</p>
-<p></p><h2>Tips and Tricks for Using Facebook Messenger on Samsung E2252</h2>
-<p>Now that you have Facebook Messenger on your device, you may want to know some tips and tricks to make the most out of it. Here are some of them:</p>
-<ul>
-<li>To access the app's settings, tap on your profile picture in the top left corner of the screen. Here you can change your name, photo, status, notifications, chat colors, emoji, and more.</li>
-<li>To start a new chat, tap on the blue "+" icon in the bottom right corner of the screen. You can search for a contact by name or phone number, or scan a QR code to add them. You can also create a group chat by selecting multiple contacts.</li>
-<li>To send a message, type it in the text box at the bottom of the chat screen and tap on the send icon. You can also send voice notes by tapping and holding the microphone icon, or send photos and videos by tapping on the camera icon. To send stickers, GIFs, or emojis, tap on the smiley icon.</li>
-<li>To make a voice or video call, tap on the phone or video icon in the top right corner of the chat screen. You can also switch between voice and video during a call by tapping on the same icons. To end a call, tap on the red hang up icon.</li>
-<li>To delete a message, tap and hold on it and select "Delete". You can also delete an entire chat by tapping and holding on it in the main screen and selecting "Delete". Note that deleting a message or chat will only remove it from your device, not from the other person's device.</li>
-<li>To archive a chat, swipe left on it in the main screen and select "Archive". This will hide the chat from your main screen but you can still access it by tapping on the search icon and typing the contact's name. To unarchive a chat, swipe left on it in the search screen and select "Unarchive".</li>
-<li>To mute a chat, swipe left on it in the main screen and select "Mute". This will stop receiving notifications from that chat until you unmute it. To unmute a chat, swipe left on it in the main screen and select "Unmute".</li>
-<li>To block a contact, tap on their name or profile picture in the chat screen and select "Block". This will prevent them from sending you messages or calling you. To unblock a contact, tap on their name or profile picture in the chat screen and select "Unblock".</li>
-</ul>
-<p>These are some of the tips and tricks for using Facebook Messenger on Samsung E2252. We hope you find them useful and enjoy using this app.</p> 81aa517590<br />
-<br />
-<br />
spaces/1gistliPinn/ChatGPT4/Examples/Carte Maroc Format Fbl.md
DELETED
@@ -1,6 +0,0 @@
-<h2>Carte maroc format fbl</h2><br /><p><b><b>Download Zip</b> ✑ ✑ ✑ <a href="https://imgfil.com/2uxYoa">https://imgfil.com/2uxYoa</a></b></p><br /><br />
-
-3cee63e6c2<br />
-<br />
-<br />
-<p></p>
spaces/1phancelerku/anime-remove-background/City of Angels - The Masterpiece by Thirty Seconds to Mars How to Download MP3.md
DELETED
@@ -1,116 +0,0 @@
-
-<h1>City of Angels MP3 Download 30 Seconds to Mars</h1>
-<p>If you are looking for a song that will inspire you, touch your emotions, and make you feel alive, then you should check out City of Angels by 30 Seconds to Mars. This song is one of the most popular and acclaimed tracks by the American rock band, and it has a lot to offer to the listeners. In this article, we will tell you everything you need to know about City of Angels, and how you can download it as an MP3 file for your convenience.</p>
-<h2>What is City of Angels by 30 Seconds to Mars?</h2>
-<p>City of Angels is a song by 30 Seconds to Mars, released in 2013 as the fourth single from their fourth studio album, Love Lust Faith + Dreams. The song was written and produced by the lead vocalist and founder of the band, Jared Leto, who also directed the music video and the short film based on the song.</p>
-<h2>city of angels mp3 download 30 seconds to mars</h2><br /><p><b><b>DOWNLOAD</b> ❤ <a href="https://jinyurl.com/2uNNFd">https://jinyurl.com/2uNNFd</a></b></p><br /><br />
-<h3>The meaning and inspiration behind the song</h3>
-<p>City of Angels is a tribute to Los Angeles, the city where 30 Seconds to Mars was formed and where they achieved their success. The song explores the themes of dreams, hopes, struggles, and realities that people face in the city, as well as the beauty and diversity that it offers. Jared Leto said that he wanted to capture the spirit and essence of Los Angeles in the song, and that he was inspired by his own personal experiences and stories from other people who live or have lived in the city.</p>
-<h3>The music video and the short film</h3>
-<p>The music video for City of Angels was released in October 2013, and it features a series of interviews with various celebrities, artists, musicians, athletes, and ordinary people who share their thoughts and feelings about Los Angeles. Some of the famous faces that appear in the video include Kanye West, James Franco, Selena Gomez, Lindsay Lohan, Olivia Wilde, Steve Nash, Corey Feldman, Ashley Olsen, Alan Cumming, Juliette Lewis, Shaun White, Lily Collins, and many more. The video also shows scenes of the band performing the song in different locations around the city.</p>
-<p>The short film for City of Angels was released in November 2013, and it is an extended version of the music video that runs for about 11 minutes. The short film includes more interviews and footage that were not shown in the music video, as well as some additional narration by Jared Leto. The short film was praised by critics and fans alike for its artistic vision and emotional impact.</p>
-<h2>How to download City of Angels MP3 by 30 Seconds to Mars?</h2>
-<p>If you want to enjoy City of Angels by 30 Seconds to Mars anytime and anywhere, you might want to download it as an MP3 file that you can store on your device or transfer to other devices. There are several ways you can do this, depending on your preferences and budget.</p>
-<h3>The official sources and platforms</h3>
-<p>The most recommended way to download City of Angels MP3 by 30 Seconds to Mars is to use the official sources and platforms that are authorized by the band and their record label. This way, you can support the band financially and legally, as well as get the best quality and security for your download. Some of the official sources and platforms that you can use are:</p>
-<ul>
-<li>iTunes: You can buy City of Angels as a single track or as part of the Love Lust Faith + Dreams album on iTunes for $1.29 or $9.99 respectively. You can also stream it on Apple Music if you have a subscription.</li>
-<li>Amazon: You can buy City of Angels as a single track or as part of the Love Lust Faith + Dreams album on Amazon for $1.29 or $9.99 respectively. You can also stream it on Amazon Music if you have a subscription.</li>
-<li>Spotify: You can stream City of Angels on Spotify if you have a subscription, or listen to it with ads if you use the free version. You can also download it for offline listening if you have a premium account.</li>
-<li>YouTube: You can watch the music video or the short film for City of Angels on YouTube for free, or buy the song as an MP3 file for $1.29 on YouTube Music.</li>
-</ul>
-<h3>The alternative methods and tools</h3>
-<p>If you don't want to use the official sources and platforms, or if you want to save some money, you can also try some alternative methods and tools to download City of Angels MP3 by 30 Seconds to Mars. However, you should be aware that these methods and tools are not endorsed by the band or their record label, and they may violate some copyright laws or terms of service. You should also be careful about the quality and security of your download, as some of these methods and tools may contain viruses, malware, or spyware. Some of the alternative methods and tools that you can use are:</p>
-<ul>
-<li>Online converters: You can use some online converters that allow you to convert YouTube videos or other online audio files into MP3 files that you can download. Some examples of these online converters are YTMP3, MP3FY, and OnlineVideoConverter. You just need to copy and paste the URL of the video or audio file that you want to convert, choose the output format and quality, and click on the download button.</li>
-<li>Desktop software: You can use some desktop software that allow you to download YouTube videos or other online audio files as MP3 files. Some examples of these desktop software are 4K Video Downloader, Freemake Video Downloader, and Any Video Converter. You just need to install the software on your computer, copy and paste the URL of the video or audio file that you want to download, choose the output format and quality, and click on the download button.</li>
-<li>Mobile apps: You can use some mobile apps that allow you to download YouTube videos or other online audio files as MP3 files. Some examples of these mobile apps are TubeMate, VidMate, and SnapTube. You just need to install the app on your smartphone or tablet, search for the video or audio file that you want to download, choose the output format and quality, and click on the download button.</li>
-</ul>
-<h2>Why you should listen to City of Angels MP3 by 30 Seconds to Mars?</h2>
-<p>Now that you know how to download City of Angels MP3 by 30 Seconds to Mars, you might be wondering why you should listen to it in the first place. Well, there are many reasons why listening to music in general, and City of Angels in particular, can be beneficial for you.</p>
-<h3>The benefits of listening to music</h3>
-<p>Listening to music can have many positive effects on your physical, mental, and emotional well-being. Some of the benefits of listening to music are:</p>
-<p>city of angels 30 seconds to mars free mp3<br />
-download city of angels by 30 seconds to mars<br />
-city of angels song 30 seconds to mars mp3<br />
-30 seconds to mars city of angels remix mp3<br />
-city of angels 30stm mp3 download<br />
-30 seconds to mars city of angels acoustic mp3<br />
-city of angels thirty seconds to mars mp3<br />
-30 seconds to mars city of angels official video mp3<br />
-city of angels 30 seconds to mars lyrics mp3<br />
-30 seconds to mars city of angels live mp3<br />
-city of angels soundtrack 30 seconds to mars mp3<br />
-30 seconds to mars city of angels piano mp3<br />
-city of angels 30 seconds to mars instrumental mp3<br />
-30 seconds to mars city of angels album mp3<br />
-city of angels 30 seconds to mars karaoke mp3<br />
-30 seconds to mars city of angels radio edit mp3<br />
-city of angels 30 seconds to mars guitar mp3<br />
-30 seconds to mars city of angels unplugged mp3<br />
-city of angels 30 seconds to mars cover mp3<br />
-30 seconds to mars city of angels extended mp3<br />
-city of angels 30 seconds to mars spotify mp3<br />
-30 seconds to mars city of angels youtube mp3<br />
-city of angels 30 seconds to mars itunes mp3<br />
-30 seconds to mars city of angels soundcloud mp3<br />
-city of angels 30 seconds to mars amazon mp3<br />
-30 seconds to mars city of angels vevo mp3<br />
-city of angels 30 seconds to mars apple music mp3<br />
-30 seconds to mars city of angels last.fm mp3<br />
-city of angels 30 seconds to mars markus schulz remix mp3<br />
-30 seconds to mars city of angels single mp3<br />
-city of angels love lust faith + dreams 30 seconds to mars mp3<br />
-download lagu city of angels 30 seconds to mars mp3<br />
-descargar city of angels de 30 seconds to mars mp3<br />
-baixar musica city of angels de 30 seconds to mars mp3<br />
-telecharger musique city of angels de 30 seconds to mars mp3<br />
-scaricare musica city of angels di 30 seconds to mars mp3<br />
-herunterladen musik city of angels von 30 seconds to mars mp3<br />
-pobierz muzyke city of angels zespolu 30 seconds to mars mp3<br />
-indir muzik sehir melekleri otuz saniye marsempireye ait olan sarki.mp3 (Turkish)<br />
-skachat' muzyku gorod angelov ot tridtsati sekund do marsempireye pesnya.mp3 (Russian)<br />
-xiazai yinyue shengshi tianshi cong sanshi miaozhong dao huoxingempireye gequ.mp3 (Chinese)<br />
-daunrodeu eumak seongsi cheonsa buleoseu san-sibcho dong-an hwa-seongempireye nolae.mp3 (Korean)<br />
-kounyuu ongaku tenshi no machi kara sanjuppun made kaseiempireye kyoku.mp3 (Japanese)<br />
-descargar musica ciutat dels àngels de trenta segons a marsempireye cançó.mp3 (Catalan)<br />
-aflaai musiek stad van engele van dertig sekondes tot marsempireye lied.mp3 (Afrikaans)<br />
-stiahnut hudbu mesto anjelov od tridsiatich sekund do marsempireye piesen.mp3 (Slovak)<br />
-letoltes zene angyalok varosa harminc masodpercig marsempireye dal.mp3 (Hungarian)<br />
-preuzimanje glazbe grad andjela od trideset sekundi do marsempireye pjesma.mp3 (Croatian)<br />
-lataa musiikkia enkelten kaupunki kolmekymmentä sekuntia marsempireye laulu.mp3 (Finnish)</p>
-<ul>
-<li>It can reduce stress and anxiety by lowering your blood pressure, heart rate, and cortisol levels.</li>
-<li>It can improve your mood and happiness by releasing dopamine, serotonin, and oxytocin in your brain.</li>
-<li>It can enhance your memory and cognition by stimulating your hippocampus and prefrontal cortex.</li>
-<li>It can boost your creativity and productivity by activating your right hemisphere and increasing your alpha waves.</li>
-<li>It can strengthen your immune system and fight infections by increasing your antibodies and natural killer cells.</li>
-</ul>
-<h3>The reasons why City of Angels is a great song</h3>
-<p>Besides the general benefits of listening to music, City of Angels by 30 Seconds to Mars has some specific qualities that make it a great song to listen to. Some of the reasons why City of Angels is a great song are:</p>
-<ul>
-<li>It has a powerful and uplifting message that encourages you to follow your dreams and overcome your challenges.</li>
-<li>It has a catchy and melodic tune that makes you want to sing along and dance.</li>
-<li>It has a rich and diverse sound that blends rock, pop, electronic, orchestral, and choir elements.</li>
-<li>It has a passionate and expressive performance by Jared Leto and his bandmates.</li>
-<li>It has a stunning and inspiring visual representation by the music video and the short film.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>In conclusion, City of Angels by 30 Seconds to Mars is a song that you should definitely listen to and download as an MP3 file. It is a song that celebrates Los Angeles, the city of dreams, and the people who live or have lived there. It is a song that has a beautiful and meaningful message, a catchy and melodic tune, a rich and diverse sound, a passionate and expressive performance, and a stunning and inspiring visual representation. It is a song that can make you feel inspired, touched, and alive.</p>
-<p>If you want to download City of Angels MP3 by 30 Seconds to Mars, you can use the official sources and platforms that are authorized by the band and their record label, such as iTunes, Amazon, Spotify, YouTube, or YouTube Music. Alternatively, you can use some online converters, desktop software, or mobile apps that can convert YouTube videos or other online audio files into MP3 files. However, you should be careful about the quality and security of your download, and respect the rights of the band and their record label.</p>
-<p>So what are you waiting for? Go ahead and download City of Angels MP3 by 30 Seconds to Mars today, and enjoy this amazing song that will make you feel like you are in the city of angels.</p>
-<h3>FAQs</h3>
-<ul>
-<li>Q: Who are 30 Seconds to Mars?<br>
-A: 30 Seconds to Mars is an American rock band that was formed in Los Angeles in 1998 by Jared Leto and his brother Shannon Leto. The band currently consists of Jared Leto (lead vocals, guitar, keyboards), Shannon Leto (drums, percussion), and Tomo Miličević (lead guitar, keyboards). The band has released five studio albums, four EPs, and 16 singles.</li>
-<li>Q: What is the genre of City of Angels?<br>
-A: City of Angels is a song that belongs to the genre of alternative rock, with elements of pop rock, electronic rock, orchestral rock, and choir music.</li>
-<li>Q: How long is City of Angels?<br>
-A: City of Angels is a song that has a duration of 5 minutes and 2 seconds. The music video has a duration of 4 minutes and 54 seconds. The short film has a duration of 11 minutes and 13 seconds.</li>
-<li>Q: How many awards did City of Angels win?<br>
-A: City of Angels won several awards for its music video and short film, such as the MTV Video Music Award for Best Rock Video in 2014, the iHeartRadio Music Award for Alternative Rock Song of the Year in 2014, the Kerrang! Award for Best Single in 2014, the MTV Europe Music Award for Best Video in 2014, and the Berlin Music Video Award for Best Cinematography in 2015.</li>
-<li>Q: Where can I watch the music video or the short film for City of Angels?<br>
-A: You can watch the music video or the short film for City of Angels on YouTube. You can also find them on the official website of 30 Seconds to Mars or on their social media accounts.</li>
-</ul></p> 197e85843d<br />
-<br />
-<br />
spaces/1phancelerku/anime-remove-background/Download dan Nonton Film Ashfall (2019) Sub Indo Full Movie HD.md
DELETED
@@ -1,133 +0,0 @@
|
|
1 |
-
<br />
|
2 |
-
<h1>Download Film Ashfall Full Movie Sub Indo: A Review of the Epic Disaster Film</h1>
|
3 |
-
<p>If you are a fan of action-packed disaster movies, you might have heard of Ashfall, a 2019 South Korean film that depicts the aftermath of a volcanic eruption on the Korean peninsula. But how can you watch this film online with Indonesian subtitles? And is it worth your time and money? In this article, I will give you a brief overview of what Ashfall is about, how to download film ashfall full movie sub indo legally and safely, and my personal opinion on whether you should watch it or not.</p>
|
4 |
-
<h2>What is Ashfall?</h2>
|
5 |
-
<p>Ashfall (Korean: 백두산; Hanja: 白頭山; RR: Baekdusan), also known as Mount Paektu, is a 2019 South Korean disaster film directed by Lee Hae-jun and Kim Byung-seo, starring Lee Byung-hun, Ha Jung-woo, Ma Dong-seok, Jeon Hye-jin and Bae Suzy. The film was released in December 2019 in South Korea and became one of the highest-grossing films of the year.</p>
|
6 |
-
<h2>download film ashfall full movie sub indo</h2><br /><p><b><b>Download Zip</b> ↔ <a href="https://jinyurl.com/2uNM3g">https://jinyurl.com/2uNM3g</a></b></p><br /><br />
|
7 |
-
<h3>The Plot of Ashfall</h3>
|
8 |
-
<p>The film follows the events that unfold when Paektu Mountain, an active volcano straddling the China–North Korea border, suddenly erupts, causing severe earthquakes in both North and South Korea. To prevent another disaster, Jeon Yoo-kyung (Jeon Hye-jin), a government official, plans an operation based on a theory by Professor Kang Bong-rae (Ma Dong-seok), who had studied Mount Paektu and its possible future eruptions. Jo In-chang (Ha Jung-woo) is assigned to be the captain of a special forces team taking part in the operation. He contacts Lee Joon-pyeong (Lee Byung-hun), who is part of the Korean People's Army in North Korea as a spy. Joon-pyeong is the only one who knows where to find nuclear warheads that can be used to stop the volcano from erupting again. Meanwhile, Jo In-chang's pregnant wife Choi Ji-young (Bae Suzy) is alone in Seoul and struggling to survive amidst the chaos.</p>
|
9 |
-
<h3>The Cast and Characters of Ashfall</h3>
|
10 |
-
<p>The film features a star-studded cast of some of the most popular actors and actresses in South Korea. Here are some of the main characters and their roles:</p>
|
11 |
-
<ul>
|
12 |
-
<li>Lee Byung-hun as Lee Joon-pyeong: A North Korean spy who holds the key to stopping the volcano. He is also a father who wants to reunite with his daughter.</li>
|
13 |
-
<li>Ha Jung-woo as Jo In-chang: A South Korean special forces captain who leads the mission to find J oon-pyeong and the nuclear warheads. He is also a husband who wants to protect his wife and unborn child.</li>
|
14 |
-
<li>Ma Dong-seok as Professor Kang Bong-rae: A geologist who has studied Mount Paektu and its potential eruptions. He is the one who proposes the idea of using nuclear bombs to seal the volcano.</li>
|
15 |
-
<li>Jeon Hye-jin as Jeon Yoo-kyung: A government official who is in charge of the operation to stop the volcano. She is also a former colleague and lover of Joon-pyeong.</li>
|
16 |
-
<li>Bae Suzy as Choi Ji-young: Jo In-chang's wife who is pregnant with their first child. She is trapped in Seoul and faces many dangers as the city is shaken by earthquakes and ashfall.</li>
|
17 |
-
</ul>
|
18 |
-
<h3>The Special Effects and Cinematography of Ashfall</h3>
|
19 |
-
<p>One of the most impressive aspects of Ashfall is the realistic and spectacular depiction of the volcanic eruption and its consequences. The film used a combination of practical and computer-generated effects to create the scenes of destruction, chaos, and panic. The film also employed various techniques such as aerial shots, drone shots, handheld shots, and slow-motion shots to capture the scale and intensity of the disaster. The film's cinematography was praised by critics and audiences alike for its stunning visuals and immersive experience.</p>
|
20 |
-
<h2>How to Download Film Ashfall Full Movie Sub Indo</h2>
<p>If you are interested in watching Ashfall with Indonesian subtitles, you might be wondering how to download the full movie online. There are two main ways to do this: legal and safe ways, and illegal and risky ways. Let's take a look at each option and weigh its pros and cons.</p>
<h3>Legal and Safe Ways to Watch Ashfall Online</h3>
<p>The best way to watch Ashfall online is to use legal and safe streaming services that offer the film with Indonesian subtitles. This way, you can enjoy the film without breaking the law, harming your device, or compromising your personal information. Here are some of the streaming services that offer Ashfall:</p>
<h4>Streaming Services that Offer Ashfall</h4>
<table>
<tr><th>Streaming Service</th><th>Price</th><th>Availability</th></tr>
<tr><td>Netflix</td><td>$8.99-$17.99 per month</td><td>Worldwide (except China, Syria, North Korea, and Crimea)</td></tr>
<tr><td>iQIYI</td><td>$2.99-$19.99 per month</td><td>Asia-Pacific (except Japan)</td></tr>
<tr><td>Viu</td><td>$2.99-$6.99 per month</td><td>Asia-Pacific (except China)</td></tr>
<tr><td>Iflix</td><td>$0-$9.99 per month</td><td>Asia-Pacific (except China, Japan, Taiwan, Hong Kong, Macau)</td></tr>
<tr><td>HOOQ</td><td>$1.99-$7.99 per month</td><td>Asia-Pacific (except China, Japan)</td></tr>
</table>
<h4>Websites that Provide Ashfall Subtitles</h4>
<p>If you already have access to Ashfall through a streaming service or a DVD but still need Indonesian subtitles, you can download them from websites that provide subtitles in various languages. However, be careful when downloading subtitles from unknown sources, as the files might carry malware or viruses that can harm your device or steal your data. Here are some of the websites that provide Ashfall subtitles:</p>
<ul>
<li>[Subscene]: A website that offers subtitles for movies and TV shows in many languages, including Indonesian.</li>
<li>[Opensubtitles]: One of the largest subtitle databases, covering movies and TV shows in multiple languages, including Indonesian.</li>
<li>[YIFY Subtitles]: A website that collects subtitles for YIFY movie releases in different languages, including Indonesian.</li>
</ul>
<h3>Illegal and Risky Ways to Download Ashfall for Free</h3>
<p>Another way to download Ashfall online is to use illegal and risky methods such as torrent sites or file-sharing platforms. These methods let you download the film for free, but they come with many drawbacks and dangers. Here are some of the reasons why you should avoid them:</p>
<h4>Torrent Sites that Host Ashfall Files</h4>
<p>Torrent sites are websites that let users share files through peer-to-peer networks. Some of the torrent sites that host Ashfall files are:</p>
<ul>
<li>[The Pirate Bay]: One of the most popular and notorious torrent sites in the world, known for hosting a variety of content, including movies, TV shows, music, games, and software.</li>
<li>[1337x]: A torrent site that offers a similarly wide range of content across the same categories.</li>
<li>[RARBG]: A torrent site that specializes in high-quality video content, such as movies and TV shows.</li>
</ul>
<h4>Risks and Consequences of Using Torrent Sites</h4>
<p>Using torrent sites to download Ashfall might seem tempting, but it comes with many risks and consequences. Some of them are:</p>
<ul>
<li>Legal issues: Downloading or sharing copyrighted content without permission is illegal and can result in fines or lawsuits. You might also be violating the terms and conditions of your streaming service or internet provider.</li>
<li>Security issues: Torrent sites often contain malware or viruses that can infect your device or steal your personal information. You might also expose your IP address and location to hackers or cybercriminals.</li>
<li>Quality issues: Torrent files often have poor quality, such as low resolution, distorted sound, or missing subtitles. You might also encounter fake or corrupted files that do not work or contain unwanted content.</li>
<li>Ethical issues: Downloading or sharing pirated content is unfair to the creators and producers of the film, who invested time, money, and effort to make it. You might also be supporting illegal activities or organizations that profit from piracy.</li>
</ul>
<h2>Conclusion: Is Ashfall Worth Watching?</h2>
<p>Now that you know what Ashfall is about and how to watch it with Indonesian subtitles, you might be wondering if it is worth watching. To help you decide, here are some of the pros and cons of Ashfall:</p>
<h3>The Pros and Cons of Ashfall</h3>
<table>
<tr><th>Pros</th><th>Cons</th></tr>
<tr><td>A thrilling and exciting disaster film with realistic and spectacular special effects and cinematography.</td><td>A clichéd and predictable plot with some logical flaws and inconsistencies.</td></tr>
<tr><td>A star-studded cast with impressive performances and chemistry.</td><td>A lack of character development and depth for some of the main characters.</td></tr>
<tr><td>A message of hope and unity in the face of adversity and conflict.</td><td>A simplistic and idealistic portrayal of the political and social situation on the Korean peninsula.</td></tr>
<tr><td>A cultural and commercial success that showcases the potential of the South Korean film industry.</td><td>A limited availability and accessibility for international audiences who might not be familiar with the context or language of the film.</td></tr>
</table>
<h3>My Personal Opinion on Ashfall</h3>
<p>In my opinion, Ashfall is worth watching if you are a fan of disaster movies or Korean cinema. It delivers on its promise of an entertaining, thrilling experience with stunning visuals and sound, and it carries a message that resonates with the current times. It is not flawless, however: it has weaknesses that may disappoint viewers expecting more, and it rewards some background knowledge of Korean culture and history. I would therefore recommend Ashfall for a fun and exciting movie night, but not to anyone looking for a deep and meaningful cinematic masterpiece.</p>
<h2>Frequently Asked Questions</h2>
<p>Here are some of the frequently asked questions about Ashfall:</p>
<h4>Q: Is Ashfall based on a true story?</h4>
<p>A: No, Ashfall is not based on a true story. It is a fictional story that imagines what would happen if Mount Paektu erupted in the present day. However, Mount Paektu is a real volcano that has erupted in the past and could erupt again in the future. The film was inspired by historical and scientific research on Mount Paektu and its potential eruptions.</p>
<h4>Q: How accurate is Ashfall?</h4>
<p>A: Ashfall is not meant to be a realistic or accurate depiction of a volcanic eruption or its consequences. It is a fictional story that exaggerates and dramatizes aspects of the disaster for entertainment purposes. The film does not follow the scientific data on Mount Paektu or its eruptions, nor does it reflect the actual political or social situation on the Korean peninsula or its relations with other countries.</p>
<h4>Q: How did Ashfall perform at the box office?</h4>
<p>A: Ashfall was a commercial success at the box office, both domestically and internationally. It grossed over $61 million in South Korea, becoming the fourth highest-grossing film of 2019 and the 11th highest-grossing film of all time in the country. It also grossed over $24 million in other countries, mainly in Asia, bringing its worldwide total to over $85 million. It was nominated for several awards, including the Baeksang Arts Awards, the Blue Dragon Film Awards, and the Grand Bell Awards.</p>
<h4>Q: Where can I find more information about Ashfall?</h4>
<p>A: If you want to learn more about Ashfall, you can visit the following websites:</p>
<ul>
<li>[Ashfall Official Website]: The official website of the film, with the trailer, synopsis, cast and crew information, gallery, and more.</li>
<li>[Ashfall IMDb Page]: The IMDb page of the film, with ratings, reviews, trivia, quotes, and more.</li>
<li>[Ashfall Wikipedia Page]: The Wikipedia page of the film, with the plot summary, production details, reception, and more.</li>
</ul>
<h4>Q: What are some other films like Ashfall?</h4>
<p>A: If you enjoyed Ashfall and want to watch more films like it, you might like these:</p>
<ul>
<li>[2012]: A 2009 American disaster film directed by Roland Emmerich, starring John Cusack, Chiwetel Ejiofor, Amanda Peet, Thandie Newton, and Danny Glover. The film depicts a series of cataclysmic events in 2012 caused by a massive solar flare that shifts the Earth's crust.</li>
<li>[The Day After Tomorrow]: A 2004 American disaster film directed by Roland Emmerich, starring Dennis Quaid, Jake Gyllenhaal, Emmy Rossum, Ian Holm, and Sela Ward. The film depicts a global climate change that triggers a new ice age and forces a group of survivors to cope with the harsh conditions.</li>
<li>[Train to Busan]: A 2016 South Korean zombie apocalypse film directed by Yeon Sang-ho, starring Gong Yoo, Ma Dong-seok, Jung Yu-mi, Kim Su-an, Kim Eui-sung, Choi Woo-shik, and Ahn So-hee. The film follows a group of passengers on a train from Seoul to Busan as they try to survive a zombie outbreak spreading across the country.</li>
</ul>
<p>Thank you for reading my article on how to download film ashfall full movie sub indo online. I hope you found it helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Have a great day!</p>
spaces/801artistry/RVC801/demucs/augment.py
DELETED
@@ -1,106 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

import random

import torch as th
from torch import nn


class Shift(nn.Module):
    """
    Randomly shift audio in time by up to `shift` samples.
    """
    def __init__(self, shift=8192):
        super().__init__()
        self.shift = shift

    def forward(self, wav):
        batch, sources, channels, time = wav.size()
        length = time - self.shift
        if self.shift > 0:
            if not self.training:
                wav = wav[..., :length]
            else:
                offsets = th.randint(self.shift, [batch, sources, 1, 1], device=wav.device)
                offsets = offsets.expand(-1, -1, channels, -1)
                indexes = th.arange(length, device=wav.device)
                wav = wav.gather(3, indexes + offsets)
        return wav


class FlipChannels(nn.Module):
    """
    Flip left-right channels.
    """
    def forward(self, wav):
        batch, sources, channels, time = wav.size()
        if self.training and wav.size(2) == 2:
            left = th.randint(2, (batch, sources, 1, 1), device=wav.device)
            left = left.expand(-1, -1, -1, time)
            right = 1 - left
            wav = th.cat([wav.gather(2, left), wav.gather(2, right)], dim=2)
        return wav


class FlipSign(nn.Module):
    """
    Random sign flip.
    """
    def forward(self, wav):
        batch, sources, channels, time = wav.size()
        if self.training:
            signs = th.randint(2, (batch, sources, 1, 1), device=wav.device, dtype=th.float32)
            wav = wav * (2 * signs - 1)
        return wav


class Remix(nn.Module):
    """
    Shuffle sources to make new mixes.
    """
    def __init__(self, group_size=4):
        """
        Shuffle sources within one batch.
        Each batch is divided into groups of size `group_size` and shuffling is done within
        each group separately. This allows keeping the same probability distribution no matter
        the number of GPUs. Without this grouping, using more GPUs would lead to a higher
        probability of keeping two sources from the same track together, which can impact
        performance.
        """
        super().__init__()
        self.group_size = group_size

    def forward(self, wav):
        batch, streams, channels, time = wav.size()
        device = wav.device

        if self.training:
            group_size = self.group_size or batch
            if batch % group_size != 0:
                raise ValueError(f"Batch size {batch} must be divisible by group size {group_size}")
            groups = batch // group_size
            wav = wav.view(groups, group_size, streams, channels, time)
            permutations = th.argsort(th.rand(groups, group_size, streams, 1, 1, device=device),
                                      dim=1)
            wav = wav.gather(1, permutations.expand(-1, -1, -1, channels, time))
            wav = wav.view(batch, streams, channels, time)
        return wav


class Scale(nn.Module):
    def __init__(self, proba=1., min=0.25, max=1.25):
        super().__init__()
        self.proba = proba
        self.min = min
        self.max = max

    def forward(self, wav):
        batch, streams, channels, time = wav.size()
        device = wav.device
        if self.training and random.random() < self.proba:
            scales = th.empty(batch, streams, 1, 1, device=device).uniform_(self.min, self.max)
            wav *= scales
        return wav
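For reference, the core idea of the `Shift` module above — crop a random window of length `time - shift`, independently per source, only during training — can be sketched without torch. This is an illustrative simplification (the names and the nested-list layout are mine, not part of the deleted file):

```python
import random


def shift_augment(wav, shift, training=True):
    # wav: nested list shaped [sources][channels][time].
    # During training, crop a random window of length time - shift
    # per source; at eval time, take the first time - shift samples.
    time = len(wav[0][0])
    length = time - shift
    out = []
    for source in wav:
        offset = random.randrange(shift) if (training and shift > 0) else 0
        out.append([channel[offset:offset + length] for channel in source])
    return out
```

The torch version does the same thing in one `gather` call so that all sources in the batch are shifted on-device without a Python loop.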
spaces/A00001/bingothoo/tests/kblob.ts
DELETED
@@ -1,27 +0,0 @@
import FormData from 'form-data'

import { fetch } from '@/lib/isomorphic'

const formData = new FormData()

const knowledgeRequest = {"imageInfo":{"url":"https://www.baidu.com/img/PCfb_5bf082d29588c07f842ccde3f97243ea.png"},"knowledgeRequest":{"invokedSkills":["ImageById"],"subscriptionId":"Bing.Chat.Multimodal","invokedSkillsRequestData":{"enableFaceBlur":true},"convoData":{"convoid":"51D|BingProdUnAuthenticatedUsers|E3DCA904FF236C67C3450163BCEC64CFF3F618CC8A4AFD75FD518F5ED0ADA080","convotone":"Creative"}}}

formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest))


fetch('https://bing.vcanbb.top/images/kblob',
  {
    method: 'POST',
    body: formData.getBuffer(),
    headers: {
      "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
      "sec-ch-ua-mobile": "?0",
      "sec-ch-ua-platform": "\"Windows\"",
      "Referer": "https://bing.vcanbb.top/web/index.html",
      "Referrer-Policy": "origin-when-cross-origin",
      ...formData.getHeaders()
    }
  }
).then(res => res.text())
  .then(res => console.log('res', res))
spaces/AAYUSH27/Neuro/app.py
DELETED
@@ -1,98 +0,0 @@
import streamlit as st
from streamlit_chat import message
from langchain.chains import ConversationalRetrievalChain
from langchain.document_loaders import PyPDFLoader, DirectoryLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import CTransformers
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.memory import ConversationBufferMemory

# load the pdf files from the path
loader = DirectoryLoader("data/", glob="*.pdf", loader_cls=PyPDFLoader)
documents = loader.load()

# split text into chunks
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
text_chunks = text_splitter.split_documents(documents)

# create embeddings
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2", model_kwargs={"device": "cpu"}
)

# vectorstore
vector_store = FAISS.from_documents(text_chunks, embeddings)

# create llm
llm = CTransformers(
    model="llama-2-7b-chat.ggmlv3.q4_0.bin",
    model_type="llama",
    config={"max_new_tokens": 128, "temperature": 0.01},
)

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    chain_type="stuff",
    retriever=vector_store.as_retriever(search_kwargs={"k": 2}),
    memory=memory,
)

st.title("Neuro Health-Care Chat Bot")


def conversation_chat(query):
    result = chain({"question": query, "chat_history": st.session_state["history"]})
    st.session_state["history"].append((query, result["answer"]))
    return result["answer"]


def initialize_session_state():
    if "history" not in st.session_state:
        st.session_state["history"] = []

    if "generated" not in st.session_state:
        st.session_state["generated"] = ["Hello! Ask me anything about Neuro"]

    if "past" not in st.session_state:
        st.session_state["past"] = ["Hello!"]


def display_chat_history():
    reply_container = st.container()
    container = st.container()

    with container:
        with st.form(key="my_form", clear_on_submit=True):
            user_input = st.text_input(
                "Question:", placeholder="Ask anything about Neuro", key="input"
            )
            submit_button = st.form_submit_button(label="Send")

        if submit_button and user_input:
            output = conversation_chat(user_input)

            st.session_state["past"].append(user_input)
            st.session_state["generated"].append(output)

    if st.session_state["generated"]:
        with reply_container:
            for i in range(len(st.session_state["generated"])):
                message(
                    st.session_state["past"][i],
                    is_user=True,
                    key=str(i) + "_user",
                    avatar_style="person",
                )

                message(
                    st.session_state["generated"][i],
                    key=str(i),
                    logo="https://img.icons8.com/?size=96&id=19625&format=png",
                )


# Initialize session state
initialize_session_state()
# Display chat history
display_chat_history()
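The pipeline above splits documents into 500-character chunks with 50 characters of overlap before embedding them. A minimal pure-Python sketch of that sliding-window split (a simplification of `RecursiveCharacterTextSplitter`, which additionally tries to break on separators like paragraphs and sentences; the function name is mine):

```python
def split_text(text, chunk_size=500, chunk_overlap=50):
    # Slide a window of `chunk_size` characters over the text,
    # stepping by chunk_size - chunk_overlap so that consecutive
    # chunks share `chunk_overlap` characters of context.
    assert chunk_size > chunk_overlap >= 0
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```

The overlap exists so that a sentence cut at a chunk boundary still appears whole in one of the two neighboring chunks, which helps retrieval quality.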
spaces/AFRAC/NCM_DEMO/README.md
DELETED
@@ -1,13 +0,0 @@
---
title: NCM DEMO
emoji: 🧾
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 3.35.2
app_file: app.py
pinned: false
license: mit
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AIConsultant/MusicGen/audiocraft/modules/transformer.py
DELETED
@@ -1,747 +0,0 @@
|
|
1 |
-
# Copyright (c) Meta Platforms, Inc. and affiliates.
|
2 |
-
# All rights reserved.
|
3 |
-
#
|
4 |
-
# This source code is licensed under the license found in the
|
5 |
-
# LICENSE file in the root directory of this source tree.
|
6 |
-
|
7 |
-
"""
|
8 |
-
Transformer model, with streaming support, xformer attention support
|
9 |
-
and easy causal attention with a potentially finite receptive field.
|
10 |
-
|
11 |
-
See `StreamingTransformer` for more information.
|
12 |
-
|
13 |
-
Unlike regular PyTorch Transformer, we make the hard choice that batches are first.
|
14 |
-
"""
|
15 |
-
|
16 |
-
import typing as tp
|
17 |
-
|
18 |
-
from einops import rearrange
|
19 |
-
import torch
|
20 |
-
import torch.nn as nn
|
21 |
-
from torch.nn import functional as F
|
22 |
-
from torch.utils.checkpoint import checkpoint as torch_checkpoint
|
23 |
-
from xformers import ops
|
24 |
-
|
25 |
-
from .rope import RotaryEmbedding
|
26 |
-
from .streaming import StreamingModule
|
27 |
-
|
28 |
-
_efficient_attention_backend: str = 'torch'
|
29 |
-
|
30 |
-
|
31 |
-
def set_efficient_attention_backend(backend: str = 'torch'):
|
32 |
-
# Using torch by default, it seems a bit faster on older P100 GPUs (~20% faster).
|
33 |
-
global _efficient_attention_backend
|
34 |
-
assert _efficient_attention_backend in ['xformers', 'torch']
|
35 |
-
_efficient_attention_backend = backend
|
36 |
-
|
37 |
-
|
38 |
-
def _get_attention_time_dimension() -> int:
|
39 |
-
if _efficient_attention_backend == 'torch':
|
40 |
-
return 2
|
41 |
-
else:
|
42 |
-
return 1
|
43 |
-
|
44 |
-
|
45 |
-
def _is_profiled() -> bool:
|
46 |
-
# Return true if we are currently running with a xformers profiler activated.
|
47 |
-
try:
|
48 |
-
from xformers.profiler import profiler
|
49 |
-
except ImportError:
|
50 |
-
return False
|
51 |
-
return profiler._Profiler._CURRENT_PROFILER is not None
|
52 |
-
|
53 |
-
|
54 |
-
def create_norm_fn(norm_type: str, dim: int, **kwargs) -> nn.Module:
|
55 |
-
"""Create normalization module for transformer encoder layer.
|
56 |
-
|
57 |
-
Args:
|
58 |
-
norm_type (str): Normalization method.
|
59 |
-
dim (int): Dimension of the normalized layer.
|
60 |
-
**kwargs (dict): Additional parameters for normalization layer.
|
61 |
-
Returns:
|
62 |
-
nn.Module: Normalization module.
|
63 |
-
"""
|
64 |
-
if norm_type == 'layer_norm':
|
65 |
-
return nn.LayerNorm(dim, eps=1e-5, **kwargs)
|
66 |
-
else:
|
67 |
-
raise ValueError(f"Unknown norm type: {norm_type}")
|
68 |
-
|
69 |
-
|
70 |
-
def create_sin_embedding(positions: torch.Tensor, dim: int, max_period: float = 10000,
|
71 |
-
dtype: torch.dtype = torch.float32) -> torch.Tensor:
|
72 |
-
"""Create sinusoidal positional embedding, with shape `[B, T, C]`.
|
73 |
-
|
74 |
-
Args:
|
75 |
-
positions (torch.Tensor): LongTensor of positions.
|
76 |
-
dim (int): Dimension of the embedding.
|
77 |
-
max_period (float): Maximum period of the cosine/sine functions.
|
78 |
-
dtype (torch.dtype or str): dtype to use to generate the embedding.
|
79 |
-
Returns:
|
80 |
-
torch.Tensor: Sinusoidal positional embedding.
|
81 |
-
"""
|
82 |
-
# We aim for BTC format
|
83 |
-
assert dim % 2 == 0
|
84 |
-
half_dim = dim // 2
|
85 |
-
positions = positions.to(dtype)
|
86 |
-
adim = torch.arange(half_dim, device=positions.device, dtype=dtype).view(1, 1, -1)
|
87 |
-
max_period_tensor = torch.full([], max_period, device=positions.device, dtype=dtype) # avoid sync point
|
88 |
-
phase = positions / (max_period_tensor ** (adim / (half_dim - 1)))
|
89 |
-
return torch.cat([torch.cos(phase), torch.sin(phase)], dim=-1)
|
90 |
-
|
91 |
-
|
92 |
-
def expand_repeated_kv(x: torch.Tensor, n_rep: int) -> torch.Tensor:
|
93 |
-
"""torch.repeat_interleave(x, dim=2, repeats=n_rep) from xlformers."""
|
94 |
-
if n_rep == 1:
|
95 |
-
return x
|
96 |
-
if _efficient_attention_backend == 'torch':
|
97 |
-
bs, n_kv_heads, slen, head_dim = x.shape
|
98 |
-
return (
|
99 |
-
x[:, :, None, :, :]
|
100 |
-
.expand(bs, n_kv_heads, n_rep, slen, head_dim)
|
101 |
-
.reshape(bs, n_kv_heads * n_rep, slen, head_dim)
|
102 |
-
)
|
103 |
-
else:
|
104 |
-
bs, slen, n_kv_heads, head_dim = x.shape
|
105 |
-
return (
|
106 |
-
x[:, :, :, None, :]
|
107 |
-
.expand(bs, slen, n_kv_heads, n_rep, head_dim)
|
108 |
-
.reshape(bs, slen, n_kv_heads * n_rep, head_dim)
|
109 |
-
)
|
110 |
-
|
111 |
-
|
112 |
-
class LayerScale(nn.Module):
|
113 |
-
"""Layer scale from [Touvron et al 2021] (https://arxiv.org/pdf/2103.17239.pdf).
|
114 |
-
This rescales diagonally the residual outputs close to 0, with a learnt scale.
|
115 |
-
|
116 |
-
Args:
|
117 |
-
channels (int): Number of channels.
|
118 |
-
init (float): Initial scale.
|
119 |
-
channel_last (bool): If True, expect `[*, C]` shaped tensors, otherwise, `[*, C, T]`.
|
120 |
-
device (torch.device or str, optional): Device on which to initialize the module.
|
121 |
-
dtype (torch.dtype, optional): dtype to use to initialize the module.
|
122 |
-
"""
|
123 |
-
def __init__(self, channels: int, init: float = 1e-4, channel_last: bool = True,
|
124 |
-
device=None, dtype=None):
|
125 |
-
super().__init__()
|
126 |
-
self.channel_last = channel_last
|
127 |
-
self.scale = nn.Parameter(
|
128 |
-
torch.full((channels,), init,
|
129 |
-
requires_grad=True, device=device, dtype=dtype))
|
130 |
-
|
131 |
-
def forward(self, x: torch.Tensor):
|
132 |
-
if self.channel_last:
|
133 |
-
return self.scale * x
|
134 |
-
else:
|
135 |
-
return self.scale[:, None] * x
|
136 |
-
|
137 |
-
|
138 |
-
class StreamingMultiheadAttention(StreamingModule):
|
139 |
-
"""Similar to `nn.MultiheadAttention` but with support for streaming, causal evaluation.
|
140 |
-
|
141 |
-
Args:
|
142 |
-
embed_dim (int): Dimension to project to.
|
143 |
-
num_heads (int): Number of heads.
|
144 |
-
dropout (float): Dropout level.
|
145 |
-
bias (bool): Use bias in projections.
|
146 |
-
causal (bool): Causal mask applied automatically.
|
147 |
-
past_context (int, optional): Receptive field for the causal mask, infinite if None.
|
148 |
-
custom (bool): Use custom MHA implementation, for testing / benchmarking.
|
149 |
-
memory_efficient (bool): Use xformers based memory efficient attention.
|
150 |
-
attention_as_float32 (bool): Perform the attention as float32
|
151 |
-
(especially important with memory_efficient as autocast won't do this automatically).
|
152 |
-
rope (`RotaryEmbedding`, optional): Rope embedding to use.
|
153 |
-
cross_attention: Should be true when used as a cross attention.
|
154 |
-
All keys and values must be available at once, streaming is only for the queries.
|
155 |
-
Cannot be used with `causal` or `rope` (as it wouldn't make sens to
|
156 |
-
interpret the time steps in the keys relative to those in the queries).
|
157 |
-
safe_streaming (bool): Bug fix, will go away with xformers update.
|
158 |
-
qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product.
|
159 |
-
kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads).
|
160 |
-
This will lead to faster decoding time on A100 or other GPUs with tensorcore.
|
161 |
-
device (torch.device, optional): Device on which to initialize.
|
162 |
-
dtype (torch.dtype, optional): dtype to use.
|
163 |
-
"""
|
164 |
-
def __init__(self, embed_dim: int, num_heads: int, dropout: float = 0.0, bias: bool = True,
|
165 |
-
causal: bool = False, past_context: tp.Optional[int] = None, custom: bool = False,
|
166 |
-
                 memory_efficient: bool = False, attention_as_float32: bool = False,
                 rope: tp.Optional[RotaryEmbedding] = None, cross_attention: bool = False,
                 safe_streaming: bool = True, qk_layer_norm: bool = False, kv_repeat: int = 1,
                 device=None, dtype=None):
        super().__init__()
        factory_kwargs = {'device': device, 'dtype': dtype}
        if past_context is not None:
            assert causal

        self.embed_dim = embed_dim
        self.causal = causal
        self.past_context = past_context
        self.memory_efficient = memory_efficient
        self.attention_as_float32 = attention_as_float32
        self.rope = rope
        self.cross_attention = cross_attention
        self.safe_streaming = safe_streaming
        self.num_heads = num_heads
        self.dropout = dropout
        self.kv_repeat = kv_repeat
        if cross_attention:
            assert not causal, "Causal cannot work with cross attention."
            assert rope is None, "Rope cannot work with cross attention."

        if memory_efficient:
            _verify_xformers_memory_efficient_compat()

        self.custom = _is_custom(custom, memory_efficient)
        if self.custom:
            out_dim = embed_dim
            assert num_heads % kv_repeat == 0
            assert not cross_attention or kv_repeat == 1
            num_kv = num_heads // kv_repeat
            kv_dim = (embed_dim // num_heads) * num_kv
            out_dim += 2 * kv_dim
            in_proj = nn.Linear(embed_dim, out_dim, bias=bias, **factory_kwargs)
            # We try to follow the default PyTorch MHA convention, to easily compare results.
            self.in_proj_weight = in_proj.weight
            self.in_proj_bias = in_proj.bias
            if bias:
                self.in_proj_bias.data.zero_()  # Following Pytorch convention
            self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias, **factory_kwargs)
            if bias:
                self.out_proj.bias.data.zero_()
        else:
            assert not qk_layer_norm
            assert kv_repeat == 1
            self.mha = nn.MultiheadAttention(
                embed_dim, num_heads, dropout=dropout, bias=bias, batch_first=True,
                **factory_kwargs)
        self.qk_layer_norm = qk_layer_norm
        if qk_layer_norm:
            assert self.custom
            assert kv_repeat == 1
            ln_dim = embed_dim
            self.q_layer_norm = nn.LayerNorm(ln_dim)
            self.k_layer_norm = nn.LayerNorm(ln_dim)

    def _load_from_state_dict(self, state_dict, prefix, *args, **kwargs):
        if not self.custom:
            # Support compat with regular MHA
            keys = [n for n, _ in self.mha.named_parameters()]
            for key in keys:
                if prefix + key in state_dict:
                    state_dict[prefix + "mha." + key] = state_dict.pop(prefix + key)
        super()._load_from_state_dict(state_dict, prefix, *args, **kwargs)

    def _get_mask(self, current_steps: int, device: torch.device, dtype: torch.dtype):
        # Return a causal mask, accounting for potentially stored past keys/values.
        # We actually return a bias for the attention score, as this has the same
        # convention both in the builtin MHA in Pytorch, and Xformers functions.
        time_dim = _get_attention_time_dimension()
        if self.memory_efficient:
            from xformers.ops import LowerTriangularMask
            if current_steps == 1:
                # If we only have one step, then we do not need a mask.
                return None
            elif 'past_keys' in self._streaming_state:
                raise RuntimeError("Not supported at the moment")
            else:
                # Then we can safely use a lower triangular mask
                return LowerTriangularMask()
        if self._streaming_state:
            past_keys = self._streaming_state['past_keys']
            past_steps = past_keys.shape[time_dim]
        else:
            past_steps = 0

        queries_pos = torch.arange(
            past_steps, current_steps + past_steps, device=device).view(-1, 1)
        keys_pos = torch.arange(past_steps + current_steps, device=device).view(1, -1)
        delta = queries_pos - keys_pos
        valid = delta >= 0
        if self.past_context is not None:
            valid &= (delta <= self.past_context)
        return torch.where(
            valid,
            torch.zeros([], device=device, dtype=dtype),
            torch.full([], float('-inf'), device=device, dtype=dtype))

    def _complete_kv(self, k, v):
        time_dim = _get_attention_time_dimension()
        if self.cross_attention:
            # With cross attention we assume all keys and values
            # are already available, and streaming is with respect
            # to the queries only.
            return k, v
        # Complete the key/value pair using the streaming state.
        if self._streaming_state:
            pk = self._streaming_state['past_keys']
            nk = torch.cat([pk, k], dim=time_dim)
            if v is k:
                nv = nk
            else:
                pv = self._streaming_state['past_values']
                nv = torch.cat([pv, v], dim=time_dim)
        else:
            nk = k
            nv = v

        assert nk.shape[time_dim] == nv.shape[time_dim]
        offset = 0
        if self.past_context is not None:
            offset = max(0, nk.shape[time_dim] - self.past_context)
        if self._is_streaming:
            self._streaming_state['past_keys'] = nk[:, offset:]
            if v is not k:
                self._streaming_state['past_values'] = nv[:, offset:]
            if 'offset' in self._streaming_state:
                self._streaming_state['offset'] += offset
            else:
                self._streaming_state['offset'] = torch.tensor(0)
        return nk, nv

    def _apply_rope(self, query: torch.Tensor, key: torch.Tensor):
        # TODO: fix and verify layout.
        assert _efficient_attention_backend == 'xformers', "Rope not supported with torch attn."
        # Apply rope embeddings to query and key tensors.
        assert self.rope is not None
        if 'past_keys' in self._streaming_state:
            past_keys_offset = self._streaming_state['past_keys'].shape[1]
        else:
            past_keys_offset = 0
        if 'offset' in self._streaming_state:
            past_context_offset = int(self._streaming_state['offset'].item())
        else:
            past_context_offset = 0
        streaming_offset = past_context_offset + past_keys_offset
        return self.rope.rotate_qk(query, key, start=streaming_offset)

    def forward(self, query: torch.Tensor, key: torch.Tensor, value: torch.Tensor,
                key_padding_mask=None, need_weights=False, attn_mask=None,
                average_attn_weights=True, is_causal=False):
        assert attn_mask is None
        assert not is_causal, ("New param added in torch 2.0.1 not supported, "
                               "use the causal args in the constructor.")

        time_dim = _get_attention_time_dimension()
        if time_dim == 2:
            layout = "b h t d"
        else:
            layout = "b t h d"
        dtype = query.dtype
        if self._is_streaming:
            assert self.causal or self.cross_attention, \
                "Streaming only available for causal or cross attention"

        if self.causal:
            # At the moment we specialize only for the self-attention case.
            assert query.shape[1] == key.shape[1], "Causal only for same length query / key / value"
            assert value.shape[1] == key.shape[1], "Causal only for same length query / key / value"
            attn_mask = self._get_mask(query.shape[1], query.device, query.dtype)

        if self.custom:
            # custom implementation
            assert need_weights is False
            assert key_padding_mask is None
            if self.cross_attention:
                # Different queries, keys, values: we have to split the weights
                # manually before applying the linear.
                dim = self.in_proj_weight.shape[0] // 3
                if self.in_proj_bias is None:
                    bias_q, bias_k, bias_v = None, None, None
                else:
                    bias_q = self.in_proj_bias[:dim]
                    bias_k = self.in_proj_bias[dim: 2 * dim]
                    bias_v = self.in_proj_bias[2 * dim:]
                q = nn.functional.linear(query, self.in_proj_weight[:dim], bias_q)
                # todo: when streaming, we could actually save k, v and check that the shapes actually match.
                k = nn.functional.linear(key, self.in_proj_weight[dim: 2 * dim], bias_k)
                v = nn.functional.linear(value, self.in_proj_weight[2 * dim:], bias_v)
                if self.qk_layer_norm is True:
                    q = self.q_layer_norm(q)
                    k = self.k_layer_norm(k)
                q, k, v = [rearrange(x, f"b t (h d) -> {layout}", h=self.num_heads) for x in [q, k, v]]
            else:
                if not _is_profiled():
                    # profiling breaks that property somehow.
                    assert query is key, "specialized implementation"
                    assert value is key, "specialized implementation"
                projected = nn.functional.linear(query, self.in_proj_weight, self.in_proj_bias)
                if self.kv_repeat == 1:
                    if time_dim == 2:
                        bound_layout = "b h p t d"
                    else:
                        bound_layout = "b t p h d"
                    packed = rearrange(projected, f"b t (p h d) -> {bound_layout}", p=3, h=self.num_heads)
                    q, k, v = ops.unbind(packed, dim=2)
                else:
                    embed_dim = self.embed_dim
                    per_head_dim = (embed_dim // self.num_heads)
                    kv_heads = self.num_heads // self.kv_repeat
                    q = projected[:, :, :embed_dim]
                    start = embed_dim
                    end = start + per_head_dim * kv_heads
                    k = projected[:, :, start: end]
                    v = projected[:, :, end:]
                    q = rearrange(q, f"b t (h d) -> {layout}", h=self.num_heads)
                    k = rearrange(k, f"b t (h d) -> {layout}", h=kv_heads)
                    v = rearrange(v, f"b t (h d) -> {layout}", h=kv_heads)

                if self.qk_layer_norm is True:
                    assert self.kv_repeat == 1
                    q, k = [rearrange(x, f"{layout} -> b t (h d)") for x in [q, k]]
                    q = self.q_layer_norm(q)
                    k = self.k_layer_norm(k)
                    q, k = [rearrange(x, f"b t (h d) -> {layout}", h=self.num_heads) for x in [q, k]]
                if self.rope:
                    q, k = self._apply_rope(q, k)
                k, v = self._complete_kv(k, v)
                if self.kv_repeat > 1:
                    k = expand_repeated_kv(k, self.kv_repeat)
                    v = expand_repeated_kv(v, self.kv_repeat)
            if self.attention_as_float32:
                q, k, v = [x.float() for x in [q, k, v]]
            if self.memory_efficient:
                p = self.dropout if self.training else 0
                if _efficient_attention_backend == 'torch':
                    x = torch.nn.functional.scaled_dot_product_attention(
                        q, k, v, is_causal=attn_mask is not None, dropout_p=p)
                else:
                    x = ops.memory_efficient_attention(q, k, v, attn_mask, p=p)
            else:
                # We include the dot product as float32, for consistency
                # with the other implementations that include that step
                # as part of the attention. Note that when using `autocast`,
                # the einsums would be done as bfloat16, but the softmax as
                # float32, so `attention_as_float32` will extend a bit the
                # range of operations done in float32,
                # although this should make no difference.
                q = q / q.shape[-1] ** 0.5
                key_layout = layout.replace('t', 'k')
                query_layout = layout
                if self._is_streaming and self.safe_streaming and q.device.type == 'cuda':
                    with torch.autocast(device_type=q.device.type, dtype=torch.float32):
                        pre_w = torch.einsum(f"{query_layout},{key_layout}-> b h t k", q, k)
                else:
                    pre_w = torch.einsum(f"{query_layout},{key_layout}-> b h t k", q, k)
                if attn_mask is not None:
                    pre_w = pre_w + attn_mask
                w = torch.softmax(pre_w, dim=-1)
                w = F.dropout(w, self.dropout, training=self.training).to(v)
                # Key and value have the same format.
                x = torch.einsum(f"b h t k, {key_layout} -> {layout}", w, v)
            x = x.to(dtype)
            x = rearrange(x, f"{layout} -> b t (h d)", h=self.num_heads)
            x = self.out_proj(x)
        else:
            key, value = self._complete_kv(key, value)
            if self.attention_as_float32:
                query, key, value = [x.float() for x in [query, key, value]]
            x, _ = self.mha(
                query, key, value, key_padding_mask,
                need_weights, attn_mask, average_attn_weights)
            x = x.to(dtype)

        return x, None


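The mask built by `_get_mask` above is an additive bias rather than a boolean mask: 0 where a query may attend, `-inf` where it may not, offset by the number of cached past steps and optionally limited to `past_context`. This standalone sketch (hypothetical names, not part of the module) reproduces that arithmetic:

```python
import torch

def causal_bias(current_steps: int, past_steps: int, past_context=None):
    # Queries start after the cached steps; keys cover past + current steps.
    queries_pos = torch.arange(past_steps, past_steps + current_steps).view(-1, 1)
    keys_pos = torch.arange(past_steps + current_steps).view(1, -1)
    delta = queries_pos - keys_pos
    valid = delta >= 0                       # no attending to the future
    if past_context is not None:
        valid &= delta <= past_context       # bounded receptive field
    return torch.where(valid, torch.tensor(0.0), torch.tensor(float('-inf')))

# 3 new steps after 2 cached steps, with a receptive field of 2 past keys.
bias = causal_bias(current_steps=3, past_steps=2, past_context=2)
```

Each row of the result is added to one query's attention scores, so the softmax zeroes out exactly the disallowed keys.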
class StreamingTransformerLayer(nn.TransformerEncoderLayer):
    """TransformerLayer with Streaming / Causal support.
    This also integrates cross_attention, when passing `cross_attention=True`,
    rather than having two separate classes like in PyTorch.

    Args:
        d_model (int): Dimension of the data.
        num_heads (int): Number of heads.
        dim_feedforward (int): Intermediate dimension of FF module.
        dropout (float): Dropout both for MHA and FF.
        bias_ff (bool): Use bias for FF.
        bias_attn (bool): Use bias for MHA.
        causal (bool): Causal mask applied automatically.
        past_context (int, optional): Receptive field for the causal mask, infinite if None.
        custom (bool): Use custom MHA implementation, for testing / benchmarking.
        memory_efficient (bool): Use xformers based memory efficient attention.
        attention_as_float32 (bool): Perform the attention as float32
            (especially important with memory_efficient as autocast won't do this automatically).
        qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product in attention.
        qk_layer_norm_cross (bool): Same for the cross attention.
        cross_attention (bool): If True, expect to get secondary input for cross-attention.
            Cross attention will use the default MHA, as it typically won't require
            special treatment.
        layer_scale (float, optional): If not None, LayerScale will be used with
            the given value as initial scale.
        rope (`RotaryEmbedding`, optional): Rope embedding to use.
        attention_dropout (float, optional): If not None, use this value for the attention
            dropout, separate from the FFN dropout.
        kv_repeat (int): If > 1, will repeat keys and values multiple times (must divide num_heads).
            This will lead to faster decoding time on A100 or other GPUs with tensorcore.
        device (torch.device, optional): Device on which to initialize.
        dtype (torch.dtype, optional): dtype to use.
        **kwargs: See `nn.TransformerEncoderLayer`.
    """
    def __init__(self, d_model: int, num_heads: int, dim_feedforward: int = 2048, dropout: float = 0.1,
                 bias_ff: bool = True, bias_attn: bool = True, causal: bool = False,
                 past_context: tp.Optional[int] = None, custom: bool = False,
                 memory_efficient: bool = False, attention_as_float32: bool = False,
                 qk_layer_norm: bool = False, qk_layer_norm_cross: bool = False,
                 cross_attention: bool = False, layer_scale: tp.Optional[float] = None,
                 rope: tp.Optional[RotaryEmbedding] = None, attention_dropout: tp.Optional[float] = None,
                 kv_repeat: int = 1, norm: str = 'layer_norm', device=None, dtype=None, **kwargs):
        super().__init__(d_model, num_heads, dim_feedforward, dropout,
                         device=device, dtype=dtype, batch_first=True, **kwargs)
        factory_kwargs = {'device': device, 'dtype': dtype}
        # Redefine self_attn to our streaming multi-head attention
        attn_kwargs: tp.Dict[str, tp.Any] = {
            'embed_dim': d_model,
            'num_heads': num_heads,
            'dropout': dropout if attention_dropout is None else attention_dropout,
            'bias': bias_attn,
            'custom': custom,
            'memory_efficient': memory_efficient,
            'attention_as_float32': attention_as_float32,
        }
        self.self_attn: StreamingMultiheadAttention = StreamingMultiheadAttention(
            causal=causal, past_context=past_context, rope=rope, qk_layer_norm=qk_layer_norm,
            kv_repeat=kv_repeat, **attn_kwargs, **factory_kwargs)  # type: ignore
        # Redefine feedforward layers to expose bias parameter
        self.linear1 = nn.Linear(d_model, dim_feedforward, bias=bias_ff, **factory_kwargs)
        self.linear2 = nn.Linear(dim_feedforward, d_model, bias=bias_ff, **factory_kwargs)

        self.layer_scale_1: nn.Module
        self.layer_scale_2: nn.Module
        if layer_scale is None:
            self.layer_scale_1 = nn.Identity()
            self.layer_scale_2 = nn.Identity()
        else:
            self.layer_scale_1 = LayerScale(d_model, layer_scale, **factory_kwargs)
            self.layer_scale_2 = LayerScale(d_model, layer_scale, **factory_kwargs)

        self.cross_attention: tp.Optional[nn.Module] = None
        if cross_attention:
            self.cross_attention = StreamingMultiheadAttention(
                cross_attention=True, qk_layer_norm=qk_layer_norm_cross,
                **attn_kwargs, **factory_kwargs)
            # Norm and dropout
            self.dropout_cross = nn.Dropout(dropout)
            # eps value matching that used in PyTorch reference implementation.
            self.norm_cross = nn.LayerNorm(d_model, eps=1e-5, **factory_kwargs)
            self.layer_scale_cross: nn.Module
            if layer_scale is None:
                self.layer_scale_cross = nn.Identity()
            else:
                self.layer_scale_cross = LayerScale(d_model, layer_scale, **factory_kwargs)
        self.norm1 = create_norm_fn(norm, d_model, **factory_kwargs)  # type: ignore
        self.norm2 = create_norm_fn(norm, d_model, **factory_kwargs)  # type: ignore

    def _cross_attention_block(self, src: torch.Tensor,
                               cross_attention_src: torch.Tensor) -> torch.Tensor:
        assert self.cross_attention is not None
        # queries are from src, keys and values from cross_attention_src.
        x = self.cross_attention(
            src, cross_attention_src, cross_attention_src, need_weights=False)[0]
        return self.dropout_cross(x)  # type: ignore

    def forward(self, src: torch.Tensor, src_mask: tp.Optional[torch.Tensor] = None,  # type: ignore
                src_key_padding_mask: tp.Optional[torch.Tensor] = None,
                cross_attention_src: tp.Optional[torch.Tensor] = None):
        if self.cross_attention is None:
            assert cross_attention_src is None
        else:
            assert cross_attention_src is not None
        x = src
        if self.norm_first:
            x = x + self.layer_scale_1(
                self._sa_block(self.norm1(x), src_mask, src_key_padding_mask))
            if cross_attention_src is not None:
                x = x + self.layer_scale_cross(
                    self._cross_attention_block(
                        self.norm_cross(x), cross_attention_src))
            x = x + self.layer_scale_2(self._ff_block(self.norm2(x)))
        else:
            x = self.norm1(x + self.layer_scale_1(
                self._sa_block(x, src_mask, src_key_padding_mask)))
            if cross_attention_src is not None:
                x = self.norm_cross(
                    x + self.layer_scale_cross(
                        self._cross_attention_block(src, cross_attention_src)))
            x = self.norm2(x + self.layer_scale_2(self._ff_block(x)))
        return x


class StreamingTransformer(StreamingModule):
    """Transformer with Streaming / Causal support.

    Args:
        d_model (int): Dimension of the data.
        num_heads (int): Number of heads.
        dim_feedforward (int): Intermediate dimension of FF module.
        dropout (float): Dropout both for MHA and FF.
        bias_ff (bool): Use bias for FF.
        bias_attn (bool): Use bias for MHA.
        causal (bool): Causal mask applied automatically.
        past_context (int, optional): Receptive field for the causal mask, infinite if None.
        custom (bool): Use custom MHA implementation, for testing / benchmarking.
        memory_efficient (bool): Use xformers based memory efficient attention.
        attention_as_float32 (bool): Perform the attention as float32
            (especially important with memory_efficient as autocast won't do this automatically).
        cross_attention (bool): If True, expect to get secondary input for cross-attention.
        layer_scale (float, optional): If not None, LayerScale will be used
            with the given value as initial scale.
        positional_embedding (str): Positional embedding strategy (sin, rope, or sin_rope).
        max_period (float): Maximum period of the time embedding.
        positional_scale (float): Scale of positional embedding, set to 0 to deactivate.
        xpos (bool): Apply xpos exponential decay to positional embedding (rope only).
        lr (float, optional): Learning rate override through the `make_optim_group` API.
        weight_decay (float, optional): Weight decay override through the `make_optim_group` API.
        layer_class (subclass of `StreamingTransformerLayer`): class to use
            to initialize the layers, allowing further customization outside of AudioCraft.
        checkpointing (str): Checkpointing strategy to reduce memory usage.
            No checkpointing if set to 'none'. Per layer checkpointing using PyTorch
            if set to 'torch' (entire layer checkpointed, i.e. linears are evaluated twice,
            minimal memory usage, but maximal runtime). Finally, `xformers_default` provides
            a policy for opting some operations like linear layers and attention out of
            the checkpointing, providing a middle ground between speed and memory.
        device (torch.device, optional): Device on which to initialize.
        dtype (torch.dtype, optional): dtype to use.
        **kwargs: See `nn.TransformerEncoderLayer`.
    """
    def __init__(self, d_model: int, num_heads: int, num_layers: int, dim_feedforward: int = 2048,
                 dropout: float = 0.1, bias_ff: bool = True, bias_attn: bool = True,
                 causal: bool = False, past_context: tp.Optional[int] = None,
                 custom: bool = False, memory_efficient: bool = False, attention_as_float32: bool = False,
                 cross_attention: bool = False, layer_scale: tp.Optional[float] = None,
                 positional_embedding: str = 'sin', max_period: float = 10_000, positional_scale: float = 1.,
                 xpos: bool = False, lr: tp.Optional[float] = None, weight_decay: tp.Optional[float] = None,
                 layer_class: tp.Type[StreamingTransformerLayer] = StreamingTransformerLayer,
                 checkpointing: str = 'none', device=None, dtype=None, **kwargs):
        super().__init__()
        assert d_model % num_heads == 0

        self.positional_embedding = positional_embedding
        self.max_period = max_period
        self.positional_scale = positional_scale
        self.weight_decay = weight_decay
        self.lr = lr

        assert positional_embedding in ['sin', 'rope', 'sin_rope']
        self.rope: tp.Optional[RotaryEmbedding] = None
        if self.positional_embedding in ['rope', 'sin_rope']:
            assert _is_custom(custom, memory_efficient)
            self.rope = RotaryEmbedding(d_model // num_heads, max_period=max_period,
                                        xpos=xpos, scale=positional_scale, device=device)

        self.checkpointing = checkpointing

        assert checkpointing in ['none', 'torch', 'xformers_default', 'xformers_mm']
        if self.checkpointing.startswith('xformers'):
            _verify_xformers_internal_compat()

        self.layers = nn.ModuleList()
        for idx in range(num_layers):
            self.layers.append(
                layer_class(
                    d_model=d_model, num_heads=num_heads, dim_feedforward=dim_feedforward,
                    dropout=dropout, bias_ff=bias_ff, bias_attn=bias_attn,
                    causal=causal, past_context=past_context, custom=custom,
                    memory_efficient=memory_efficient, attention_as_float32=attention_as_float32,
                    cross_attention=cross_attention, layer_scale=layer_scale, rope=self.rope,
                    device=device, dtype=dtype, **kwargs))

        if self.checkpointing != 'none':
            for layer in self.layers:
                # see audiocraft/optim/fsdp.py, magic signal to indicate this requires fixing the
                # backward hook inside of FSDP...
                layer._magma_checkpointed = True  # type: ignore
                assert layer.layer_drop == 0., "Need further checking"  # type: ignore

    def _apply_layer(self, layer, *args, **kwargs):
        method = self.checkpointing
        if method == 'none':
            return layer(*args, **kwargs)
        elif method == 'torch':
            return torch_checkpoint(layer, *args, use_reentrant=False, **kwargs)
        elif method.startswith('xformers'):
            from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy
            if method == 'xformers_default':
                # those operations will be saved, and not recomputed.
                # According to Francisco we can get smarter policies but this is a good start.
                allow_list = [
                    "xformers.efficient_attention_forward_cutlass.default",
                    "xformers_flash.flash_fwd.default",
                    "aten.addmm.default",
                    "aten.mm.default",
                ]
            elif method == 'xformers_mm':
                # those operations will be saved, and not recomputed.
                # According to Francisco we can get smarter policies but this is a good start.
                allow_list = [
                    "aten.addmm.default",
                    "aten.mm.default",
                ]
            else:
                raise ValueError(f"xformers checkpointing policy {method} is not known.")
            policy_fn = _get_default_policy(allow_list)
            return checkpoint(layer, *args, policy_fn=policy_fn, **kwargs)
        else:
            raise ValueError(f"Checkpointing method {method} is unknown.")

    def forward(self, x: torch.Tensor, *args, **kwargs):
        B, T, C = x.shape

        if 'offsets' in self._streaming_state:
            offsets = self._streaming_state['offsets']
        else:
            offsets = torch.zeros(B, dtype=torch.long, device=x.device)

        if self.positional_embedding in ['sin', 'sin_rope']:
            positions = torch.arange(T, device=x.device).view(1, -1, 1)
            positions = positions + offsets.view(-1, 1, 1)
            pos_emb = create_sin_embedding(positions, C, max_period=self.max_period, dtype=x.dtype)
            x = x + self.positional_scale * pos_emb

        for layer in self.layers:
            x = self._apply_layer(layer, x, *args, **kwargs)

        if self._is_streaming:
            self._streaming_state['offsets'] = offsets + T

        return x

    def make_optim_group(self):
        group = {"params": list(self.parameters())}
        if self.lr is not None:
            group["lr"] = self.lr
        if self.weight_decay is not None:
            group["weight_decay"] = self.weight_decay
        return group


# special attention related function

def _verify_xformers_memory_efficient_compat():
    try:
        from xformers.ops import memory_efficient_attention, LowerTriangularMask  # noqa
    except ImportError:
        raise ImportError(
            "xformers is not installed. Please install it and try again.\n"
            "To install on AWS and Azure, run \n"
            "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n"
            "pip install -U git+https://[email protected]/fairinternal/xformers.git#egg=xformers\n"
            "To install on FAIR Cluster, run \n"
            "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n"
            "pip install -U git+https://[email protected]/fairinternal/xformers.git#egg=xformers\n")


def _verify_xformers_internal_compat():
    try:
        from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy  # noqa
    except ImportError:
        raise ImportError(
            "Francisco's fairinternal xformers is not installed. Please install it and try again.\n"
            "To install on AWS and Azure, run \n"
            "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n"
            "pip install -U git+https://[email protected]/fairinternal/xformers.git#egg=xformers\n"
            "To install on FAIR Cluster, run \n"
            "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n"
            "pip install -U git+https://[email protected]/fairinternal/xformers.git#egg=xformers\n")


def _is_custom(custom: bool, memory_efficient: bool):
    return custom or memory_efficient
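The streaming `forward` above offsets the sinusoidal embedding by the number of steps already processed, so that feeding a sequence chunk by chunk yields the same embeddings as one full pass. This sketch (hypothetical `sin_embedding`, assuming the usual half-cosine/half-sine layout used by audiocraft's `create_sin_embedding`; not the module's own function) demonstrates why the offset bookkeeping works:

```python
import torch

def sin_embedding(positions: torch.Tensor, dim: int, max_period: float = 10_000.0):
    # positions: [B, T, 1] of absolute positions -> [B, T, dim],
    # first half cosines, second half sines, with geometrically spaced periods.
    assert dim % 2 == 0
    half = dim // 2
    adim = torch.arange(half).view(1, 1, -1)
    phase = positions / (max_period ** (adim / (half - 1)))
    return torch.cat([torch.cos(phase), torch.sin(phase)], dim=-1)

# One full pass over 8 steps vs. a streamed second chunk starting at offset 4.
emb_full = sin_embedding(torch.arange(8).view(1, -1, 1).float(), 16)
emb_tail = sin_embedding((torch.arange(4) + 4).view(1, -1, 1).float(), 16)
# The embedding is a pure function of absolute position, so the offset makes
# the streamed chunk identical to the corresponding slice of the full pass.
```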
spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/utils/export.py
DELETED
@@ -1,56 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Utility to export a training checkpoint to a lightweight release checkpoint.
-"""
-
-from pathlib import Path
-import typing as tp
-
-from omegaconf import OmegaConf, DictConfig
-import torch
-
-
-def _clean_lm_cfg(cfg: DictConfig):
-    OmegaConf.set_struct(cfg, False)
-    # This used to be set automatically in the LM solver, need a more robust solution
-    # for the future.
-    cfg['transformer_lm']['card'] = 2048
-    cfg['transformer_lm']['n_q'] = 4
-    # Experimental params no longer supported.
-    bad_params = ['spectral_norm_attn_iters', 'spectral_norm_ff_iters',
-                  'residual_balancer_attn', 'residual_balancer_ff', 'layer_drop']
-    for name in bad_params:
-        del cfg['transformer_lm'][name]
-    OmegaConf.set_struct(cfg, True)
-    return cfg
-
-
-def export_encodec(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]):
-    sig = Path(checkpoint_path).parent.name
-    assert len(sig) == 8, "Not a valid Dora signature"
-    pkg = torch.load(checkpoint_path, 'cpu')
-    new_pkg = {
-        'best_state': pkg['ema']['state']['model'],
-        'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']),
-    }
-    out_file = Path(out_folder) / f'{sig}.th'
-    torch.save(new_pkg, out_file)
-    return out_file
-
-
-def export_lm(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]):
-    sig = Path(checkpoint_path).parent.name
-    assert len(sig) == 8, "Not a valid Dora signature"
-    pkg = torch.load(checkpoint_path, 'cpu')
-    new_pkg = {
-        'best_state': pkg['fsdp_best_state']['model'],
-        'xp.cfg': OmegaConf.to_yaml(_clean_lm_cfg(pkg['xp.cfg']))
-    }
-    out_file = Path(out_folder) / f'{sig}.th'
-    torch.save(new_pkg, out_file)
-    return out_file
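The deleted `_clean_lm_cfg` helper rewrites the checkpoint config before export: it pins two fields and prunes experimental keys. A minimal stdlib sketch of that pruning logic on a plain dict (the key names come from the file above; the sample config values are hypothetical):

```python
def clean_lm_cfg(cfg: dict) -> dict:
    # Pin the codebook cardinality and number of quantizers, mirroring
    # what the LM solver used to set automatically.
    cfg["transformer_lm"]["card"] = 2048
    cfg["transformer_lm"]["n_q"] = 4
    # Drop experimental params that are no longer supported.
    bad_params = ["spectral_norm_attn_iters", "spectral_norm_ff_iters",
                  "residual_balancer_attn", "residual_balancer_ff", "layer_drop"]
    for name in bad_params:
        # pop() instead of del so missing keys don't raise
        cfg["transformer_lm"].pop(name, None)
    return cfg

# Hypothetical sample config, for illustration only.
sample = {"transformer_lm": {"card": 1024, "n_q": 8, "layer_drop": 0.1}}
cleaned = clean_lm_cfg(sample)
print(cleaned["transformer_lm"])  # → {'card': 2048, 'n_q': 4}
```

The real helper operates on an OmegaConf `DictConfig` and toggles struct mode around the mutation; this sketch keeps only the dict logic.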
spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/utils/extend.py
DELETED
@@ -1,332 +0,0 @@
-from tabnanny import verbose
-import torch
-import math
-from audiocraft.models import MusicGen
-import numpy as np
-from PIL import Image, ImageDraw, ImageFont, ImageColor
-import string
-import tempfile
-import os
-import textwrap
-import requests
-from io import BytesIO
-from huggingface_hub import hf_hub_download
-import librosa
-
-
-INTERRUPTING = False
-
-def separate_audio_segments(audio, segment_duration=30, overlap=1):
-    sr, audio_data = audio[0], audio[1]
-
-    segment_samples = sr * segment_duration
-    total_samples = max(min((len(audio_data) // segment_samples), 25), 0)
-    overlap_samples = sr * overlap
-
-    segments = []
-    start_sample = 0
-    # handle the case where the audio is shorter than the segment duration
-    if total_samples == 0:
-        total_samples = 1
-        segment_samples = len(audio_data)
-        overlap_samples = 0
-    while total_samples >= segment_samples:
-        # Collect the segment
-        # the end sample is the start sample plus the segment samples,
-        # the start sample, after 0, is minus the overlap samples to account for the overlap
-        end_sample = start_sample + segment_samples
-        segment = audio_data[start_sample:end_sample]
-        segments.append((sr, segment))
-
-        start_sample += segment_samples - overlap_samples
-        total_samples -= segment_samples
-
-    # Collect the final segment
-    if total_samples > 0:
-        segment = audio_data[-segment_samples:]
-        segments.append((sr, segment))
-    print(f"separate_audio_segments: {len(segments)} segments of length {segment_samples // sr} seconds")
-    return segments
-
-def generate_music_segments(text, melody, seed, MODEL, duration:int=10, overlap:int=1, segment_duration:int=30, prompt_index:int=0, harmony_only:bool= False):
-    # generate audio segments
-    melody_segments = separate_audio_segments(melody, segment_duration, 0)
-
-    # Create lists to store the melody tensors for each segment
-    melodys = []
-    output_segments = []
-    last_chunk = []
-    text += ", seed=" + str(seed)
-    prompt_segment = None
-    # prevent hacking
-    duration = min(duration, 720)
-    overlap = min(overlap, 15)
-
-    # Calculate the total number of segments
-    total_segments = max(math.ceil(duration / segment_duration),1)
-    #calculate duration loss from segment overlap
-    duration_loss = max(total_segments - 1,0) * math.ceil(overlap / 2)
-    #calc excess duration
-    excess_duration = segment_duration - (total_segments * segment_duration - duration)
-    print(f"total Segments to Generate: {total_segments} for {duration} seconds. Each segment is {segment_duration} seconds. Excess {excess_duration} Overlap Loss {duration_loss}")
-    duration += duration_loss
-    while excess_duration + duration_loss > segment_duration:
-        total_segments += 1
-        #calculate duration loss from segment overlap
-        duration_loss += math.ceil(overlap / 2)
-        #calc excess duration
-        excess_duration = segment_duration - (total_segments * segment_duration - duration)
-        print(f"total Segments to Generate: {total_segments} for {duration} seconds. Each segment is {segment_duration} seconds. Excess {excess_duration} Overlap Loss {duration_loss}")
-        if excess_duration + duration_loss > segment_duration:
-            duration += duration_loss
-            duration_loss = 0
-    total_segments = min(total_segments, (720 // segment_duration))
-
-    # If melody_segments is shorter than total_segments, repeat the segments until the total_segments is reached
-    if len(melody_segments) < total_segments:
-        #fix melody_segments
-        for i in range(total_segments - len(melody_segments)):
-            segment = melody_segments[i]
-            melody_segments.append(segment)
-        print(f"melody_segments: {len(melody_segments)} fixed")
-
-    # Iterate over the segments to create list of Meldoy tensors
-    for segment_idx in range(total_segments):
-        if INTERRUPTING:
-            return [], duration
-        print(f"segment {segment_idx + 1} of {total_segments} \r")
-
-        if harmony_only:
-            # REMOVE PERCUSION FROM MELODY
-            # Apply HPSS using librosa
-            verse_harmonic, verse_percussive = librosa.effects.hpss(melody_segments[segment_idx][1])
-            # Convert the separated components back to torch.Tensor
-            #harmonic_tensor = torch.from_numpy(verse_harmonic)
-            #percussive_tensor = torch.from_numpy(verse_percussive)
-            sr, verse = melody_segments[segment_idx][0], torch.from_numpy(verse_harmonic).to(MODEL.device).float().t().unsqueeze(0)
-        else:
-            sr, verse = melody_segments[segment_idx][0], torch.from_numpy(melody_segments[segment_idx][1]).to(MODEL.device).float().t().unsqueeze(0)
-
-        print(f"shape:{verse.shape} dim:{verse.dim()}")
-        if verse.dim() == 2:
-            verse = verse[None]
-        verse = verse[..., :int(sr * MODEL.lm.cfg.dataset.segment_duration)]
-
-        # Append the segment to the melodys list
-        melodys.append(verse)
-
-    torch.manual_seed(seed)
-
-    # If user selects a prompt segment, generate a new prompt segment to use on all segments
-    #default to the first segment for prompt conditioning
-    prompt_verse = melodys[0]
-    if prompt_index > 0:
-        # Get a prompt segment from the selected verse, normally the first verse
-        prompt_verse = melodys[prompt_index if prompt_index <= (total_segments - 1) else (total_segments -1)]
-
-    # set the prompt segment MODEL generation params
-    MODEL.set_generation_params(
-        use_sampling=True,
-        top_k=MODEL.generation_params["top_k"],
-        top_p=MODEL.generation_params["top_p"],
-        temperature=MODEL.generation_params["temp"],
-        cfg_coef=MODEL.generation_params["cfg_coef"],
-        duration=segment_duration,
-        two_step_cfg=False,
-        rep_penalty=0.5
-    )
-    # Generate a new prompt segment. This will be applied to all segments for consistency
-    print(f"Generating New Prompt Segment: {text} from verse {prompt_index}\r")
-    prompt_segment = MODEL.generate_with_all(
-        descriptions=[text],
-        melody_wavs=prompt_verse,
-        sample_rate=sr,
-        progress=False,
-        prompt=None,
-    )
-
-    for idx, verse in enumerate(melodys):
-        if INTERRUPTING:
-            return output_segments, duration
-
-        print(f'Segment duration: {segment_duration}, duration: {duration}, overlap: {overlap} Overlap Loss: {duration_loss}')
-        # Compensate for the length of final segment
-        if ((idx + 1) == len(melodys)) or (duration < segment_duration):
-            mod_duration = max(min(duration, segment_duration),1)
-            print(f'Modify verse length, duration: {duration}, overlap: {overlap} Overlap Loss: {duration_loss} to mod duration: {mod_duration}')
-            MODEL.set_generation_params(
-                use_sampling=True,
-                top_k=MODEL.generation_params["top_k"],
-                top_p=MODEL.generation_params["top_p"],
-                temperature=MODEL.generation_params["temp"],
-                cfg_coef=MODEL.generation_params["cfg_coef"],
-                duration=mod_duration,
-                two_step_cfg=False,
-                rep_penalty=0.5
-            )
-            try:
-                # get last chunk
-                verse = verse[:, :, -mod_duration*MODEL.sample_rate:]
-                prompt_segment = prompt_segment[:, :, -mod_duration*MODEL.sample_rate:]
-            except:
-                # get first chunk
-                verse = verse[:, :, :mod_duration*MODEL.sample_rate]
-                prompt_segment = prompt_segment[:, :, :mod_duration*MODEL.sample_rate]
-
-
-        print(f"Generating New Melody Segment {idx + 1}: {text}\r")
-        output = MODEL.generate_with_all(
-            descriptions=[text],
-            melody_wavs=verse,
-            sample_rate=sr,
-            progress=False,
-            prompt=prompt_segment,
-        )
-        # If user selects a prompt segment, use the prompt segment for all segments
-        # Otherwise, use the previous segment as the prompt
-        if prompt_index < 0:
-            prompt_segment = output
-
-        # Append the generated output to the list of segments
-        #output_segments.append(output[:, :segment_duration])
-        output_segments.append(output)
-        print(f"output_segments: {len(output_segments)}: shape: {output.shape} dim {output.dim()}")
-        #track duration
-        if duration > segment_duration:
-            duration -= segment_duration
-    return output_segments, excess_duration
-
-def save_image(image):
-    """
-    Saves a PIL image to a temporary file and returns the file path.
-
-    Parameters:
-    - image: PIL.Image
-        The PIL image object to be saved.
-
-    Returns:
-    - str or None: The file path where the image was saved,
-        or None if there was an error saving the image.
-
-    """
-    temp_dir = tempfile.gettempdir()
-    temp_file = tempfile.NamedTemporaryFile(suffix=".png", dir=temp_dir, delete=False)
-    temp_file.close()
-    file_path = temp_file.name
-
-    try:
-        image.save(file_path)
-
-    except Exception as e:
-        print("Unable to save image:", str(e))
-        return None
-    finally:
-        return file_path
-
-def hex_to_rgba(hex_color):
-    try:
-        # Convert hex color to RGBA tuple
-        rgba = ImageColor.getcolor(hex_color, "RGBA")
-    except ValueError:
-        # If the hex color is invalid, default to yellow
-        rgba = (255,255,0,255)
-    return rgba
-
-def load_font(font_name, font_size=16):
-    """
-    Load a font using the provided font name and font size.
-
-    Parameters:
-    font_name (str): The name of the font to load. Can be a font name recognized by the system, a URL to download the font file,
-        a local file path, or a Hugging Face model hub identifier.
-    font_size (int, optional): The size of the font. Default is 16.
-
-    Returns:
-    ImageFont.FreeTypeFont: The loaded font object.
-
-    Notes:
-    This function attempts to load the font using various methods until a suitable font is found. If the provided font_name
-    cannot be loaded, it falls back to a default font.
-
-    The font_name can be one of the following:
-    - A font name recognized by the system, which can be loaded using ImageFont.truetype.
-    - A URL pointing to the font file, which is downloaded using requests and then loaded using ImageFont.truetype.
-    - A local file path to the font file, which is loaded using ImageFont.truetype.
-    - A Hugging Face model hub identifier, which downloads the font file from the Hugging Face model hub using hf_hub_download
-      and then loads it using ImageFont.truetype.
-
-    Example:
-    font = load_font("Arial.ttf", font_size=20)
-    """
-    font = None
-    if not "http" in font_name:
-        try:
-            font = ImageFont.truetype(font_name, font_size)
-        except (FileNotFoundError, OSError):
-            print("Font not found. Using Hugging Face download..\n")
-
-    if font is None:
-        try:
-            font_path = ImageFont.truetype(hf_hub_download(repo_id=os.environ.get('SPACE_ID', ''), filename="assets/" + font_name, repo_type="space"), encoding="UTF-8")
-            font = ImageFont.truetype(font_path, font_size)
-        except (FileNotFoundError, OSError):
-            print("Font not found. Trying to download from local assets folder...\n")
-    if font is None:
-        try:
-            font = ImageFont.truetype("assets/" + font_name, font_size)
-        except (FileNotFoundError, OSError):
-            print("Font not found. Trying to download from URL...\n")
-
-    if font is None:
-        try:
-            req = requests.get(font_name)
-            font = ImageFont.truetype(BytesIO(req.content), font_size)
-        except (FileNotFoundError, OSError):
-            print(f"Font not found: {font_name} Using default font\n")
-    if font:
-        print(f"Font loaded {font.getname()}")
-    else:
-        font = ImageFont.load_default()
-    return font
-
-
-def add_settings_to_image(title: str = "title", description: str = "", width: int = 768, height: int = 512, background_path: str = "", font: str = "arial.ttf", font_color: str = "#ffffff"):
-    # Create a new RGBA image with the specified dimensions
-    image = Image.new("RGBA", (width, height), (255, 255, 255, 0))
-    # If a background image is specified, open it and paste it onto the image
-    if background_path == "":
-        background = Image.new("RGBA", (width, height), (255, 255, 255, 255))
-    else:
-        background = Image.open(background_path).convert("RGBA")
-
-    #Convert font color to RGBA tuple
-    font_color = hex_to_rgba(font_color)
-
-    # Calculate the center coordinates for placing the text
-    text_x = width // 2
-    text_y = height // 2
-    # Draw the title text at the center top
-    title_font = load_font(font, 26)  # Replace with your desired font and size
-
-    title_text = '\n'.join(textwrap.wrap(title, width // 12))
-    title_x, title_y, title_text_width, title_text_height = title_font.getbbox(title_text)
-    title_x = max(text_x - (title_text_width // 2), title_x, 0)
-    title_y = text_y - (height // 2) + 10  # 10 pixels padding from the top
-    title_draw = ImageDraw.Draw(image)
-    title_draw.multiline_text((title_x, title_y), title, fill=font_color, font=title_font, align="center")
-    # Draw the description text two lines below the title
-    description_font = load_font(font, 16)  # Replace with your desired font and size
-    description_text = '\n'.join(textwrap.wrap(description, width // 12))
-    description_x, description_y, description_text_width, description_text_height = description_font.getbbox(description_text)
-    description_x = max(text_x - (description_text_width // 2), description_x, 0)
-    description_y = title_y + title_text_height + 20  # 20 pixels spacing between title and description
-    description_draw = ImageDraw.Draw(image)
-    description_draw.multiline_text((description_x, description_y), description_text, fill=font_color, font=description_font, align="center")
-    # Calculate the offset to center the image on the background
-    bg_w, bg_h = background.size
-    offset = ((bg_w - width) // 2, (bg_h - height) // 2)
-    # Paste the image onto the background
-    background.paste(image, offset, mask=image)
-
-    # Save the image and return the file path
-    return save_image(background)
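The core idea behind `separate_audio_segments` above is overlapping fixed-length windows over a sample buffer. A simplified stdlib sketch of that windowing (not the exact deleted logic — the original mixes a segment count into its loop condition; sample rate and durations here are toy values):

```python
def split_into_segments(samples, sr, segment_duration=30, overlap=1):
    """Split a 1-D sequence of samples into chunks of segment_duration
    seconds, where consecutive chunks share `overlap` seconds."""
    seg_len = sr * segment_duration
    step = seg_len - sr * overlap
    if len(samples) <= seg_len:
        return [samples]          # shorter than one segment: single chunk
    chunks = []
    start = 0
    while start + seg_len <= len(samples):
        chunks.append(samples[start:start + seg_len])
        start += step
    if start < len(samples):      # trailing remainder, re-aligned to seg_len
        chunks.append(samples[-seg_len:])
    return chunks

# Toy example: sr=4 samples per "second", 3 s segments, 1 s overlap.
audio = list(range(40))           # 10 s of fake audio
chunks = split_into_segments(audio, sr=4, segment_duration=3, overlap=1)
print(len(chunks), len(chunks[0]))  # → 5 12
```

Every chunk has the full segment length (the tail is taken from the end of the buffer, as in the original's final-segment branch), which keeps downstream tensor shapes uniform.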
spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/utils/template.ts
DELETED
@@ -1,28 +0,0 @@
-import type { Message } from "$lib/types/Message";
-import type { LegacyParamatersTemplateInput } from "$lib/types/Template";
-import Handlebars from "handlebars";
-
-Handlebars.registerHelper("ifUser", function (this: Pick<Message, "from" | "content">, options) {
-	if (this.from == "user") return options.fn(this);
-});
-
-Handlebars.registerHelper(
-	"ifAssistant",
-	function (this: Pick<Message, "from" | "content">, options) {
-		if (this.from == "assistant") return options.fn(this);
-	}
-);
-
-export function compileTemplate<T>(input: string, model: LegacyParamatersTemplateInput) {
-	const template = Handlebars.compile<T & LegacyParamatersTemplateInput>(input, {
-		knownHelpers: { ifUser: true, ifAssistant: true },
-		knownHelpersOnly: true,
-		noEscape: true,
-		strict: true,
-		preventIndent: true,
-	});
-
-	return function render(inputs: T, options?: RuntimeOptions) {
-		return template({ ...model, ...inputs }, options);
-	};
-}
spaces/AchyuthGamer/OpenGPT/g4f/models.py
DELETED
@@ -1,274 +0,0 @@
-from __future__ import annotations
-from dataclasses import dataclass
-from .typing import Union
-from .Provider import BaseProvider, RetryProvider
-from .Provider import (
-    AItianhuSpace,
-    ChatgptLogin,
-    ChatgptDemo,
-    ChatgptDuo,
-    Vitalentum,
-    ChatgptAi,
-    ChatForAi,
-    AItianhu,
-    ChatBase,
-    Liaobots,
-    Yqcloud,
-    Myshell,
-    FreeGpt,
-    Vercel,
-    DeepAi,
-    Aichat,
-    GPTalk,
-    GptGod,
-    AiAsk,
-    GptGo,
-    Ylokh,
-    Bard,
-    Aibn,
-    Bing,
-    You,
-    H2o
-)
-
-@dataclass(unsafe_hash=True)
-class Model:
-    name: str
-    base_provider: str
-    best_provider: Union[type[BaseProvider], RetryProvider] = None
-
-default = Model(
-    name = "",
-    base_provider = "",
-    best_provider = RetryProvider([
-        Bing,         # Not fully GPT 3 or 4
-        Yqcloud,      # Answers short questions in chinese
-        ChatBase,     # Don't want to answer creatively
-        ChatgptDuo,   # Include search results
-        Aibn, Aichat, ChatForAi, ChatgptAi, ChatgptLogin, DeepAi, FreeGpt, GptGo, Myshell, Ylokh,
-    ])
-)
-
-# GPT-3.5 too, but all providers supports long responses and a custom timeouts
-gpt_35_long = Model(
-    name = 'gpt-3.5-turbo',
-    base_provider = 'openai',
-    best_provider = RetryProvider([
-        AiAsk, Aibn, Aichat, ChatForAi, ChatgptAi, ChatgptDemo, ChatgptDuo,
-        FreeGpt, GptGo, Liaobots, Myshell, Vitalentum, Ylokh, You, Yqcloud,
-        GPTalk, GptGod
-    ])
-)
-
-# GPT-3.5 / GPT-4
-gpt_35_turbo = Model(
-    name = 'gpt-3.5-turbo',
-    base_provider = 'openai',
-    best_provider = RetryProvider([
-        DeepAi, ChatgptLogin, ChatgptAi, GptGo, AItianhu, Aichat, AItianhuSpace, Myshell, Aibn, ChatForAi, FreeGpt, Ylokh
-    ])
-)
-
-gpt_4 = Model(
-    name = 'gpt-4',
-    base_provider = 'openai',
-    best_provider = Bing
-)
-
-# Bard
-palm = Model(
-    name = 'palm',
-    base_provider = 'google',
-    best_provider = Bard)
-
-# H2o
-falcon_7b = Model(
-    name = 'h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b-v3',
-    base_provider = 'huggingface',
-    best_provider = H2o)
-
-falcon_40b = Model(
-    name = 'h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v1',
-    base_provider = 'huggingface',
-    best_provider = H2o)
-
-llama_13b = Model(
-    name = 'h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-13b',
-    base_provider = 'huggingface',
-    best_provider = H2o)
-
-# Vercel
-claude_instant_v1 = Model(
-    name = 'claude-instant-v1',
-    base_provider = 'anthropic',
-    best_provider = Vercel)
-
-claude_v1 = Model(
-    name = 'claude-v1',
-    base_provider = 'anthropic',
-    best_provider = Vercel)
-
-claude_v2 = Model(
-    name = 'claude-v2',
-    base_provider = 'anthropic',
-    best_provider = Vercel)
-
-command_light_nightly = Model(
-    name = 'command-light-nightly',
-    base_provider = 'cohere',
-    best_provider = Vercel)
-
-command_nightly = Model(
-    name = 'command-nightly',
-    base_provider = 'cohere',
-    best_provider = Vercel)
-
-gpt_neox_20b = Model(
-    name = 'EleutherAI/gpt-neox-20b',
-    base_provider = 'huggingface',
-    best_provider = Vercel)
-
-oasst_sft_1_pythia_12b = Model(
-    name = 'OpenAssistant/oasst-sft-1-pythia-12b',
-    base_provider = 'huggingface',
-    best_provider = Vercel)
-
-oasst_sft_4_pythia_12b_epoch_35 = Model(
-    name = 'OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5',
-    base_provider = 'huggingface',
-    best_provider = Vercel)
-
-santacoder = Model(
-    name = 'bigcode/santacoder',
-    base_provider = 'huggingface',
-    best_provider = Vercel)
-
-bloom = Model(
-    name = 'bigscience/bloom',
-    base_provider = 'huggingface',
-    best_provider = Vercel)
-
-flan_t5_xxl = Model(
-    name = 'google/flan-t5-xxl',
-    base_provider = 'huggingface',
-    best_provider = Vercel)
-
-code_davinci_002 = Model(
-    name = 'code-davinci-002',
-    base_provider = 'openai',
-    best_provider = Vercel)
-
-gpt_35_turbo_16k = Model(
-    name = 'gpt-3.5-turbo-16k',
-    base_provider = 'openai',
-    best_provider = Vercel)
-
-gpt_35_turbo_16k_0613 = Model(
-    name = 'gpt-3.5-turbo-16k-0613',
-    base_provider = 'openai')
-
-gpt_35_turbo_0613 = Model(
-    name = 'gpt-3.5-turbo-0613',
-    base_provider = 'openai'
-)
-
-gpt_4_0613 = Model(
-    name = 'gpt-4-0613',
-    base_provider = 'openai'
-)
-
-gpt_4_32k = Model(
-    name = 'gpt-4-32k',
-    base_provider = 'openai'
-)
-
-gpt_4_32k_0613 = Model(
-    name = 'gpt-4-32k-0613',
-    base_provider = 'openai'
-)
-
-text_ada_001 = Model(
-    name = 'text-ada-001',
-    base_provider = 'openai',
-    best_provider = Vercel)
-
-text_babbage_001 = Model(
-    name = 'text-babbage-001',
-    base_provider = 'openai',
-    best_provider = Vercel)
-
-text_curie_001 = Model(
-    name = 'text-curie-001',
-    base_provider = 'openai',
-    best_provider = Vercel)
-
-text_davinci_002 = Model(
-    name = 'text-davinci-002',
-    base_provider = 'openai',
-    best_provider = Vercel)
-
-text_davinci_003 = Model(
-    name = 'text-davinci-003',
-    base_provider = 'openai',
-    best_provider = Vercel)
-
-llama13b_v2_chat = Model(
-    name = 'replicate:a16z-infra/llama13b-v2-chat',
-    base_provider = 'replicate',
-    best_provider = Vercel)
-
-llama7b_v2_chat = Model(
-    name = 'replicate:a16z-infra/llama7b-v2-chat',
-    base_provider = 'replicate',
-    best_provider = Vercel)
-
-
-class ModelUtils:
-    convert: dict[str, Model] = {
-        # gpt-3.5
-        'gpt-3.5-turbo' : gpt_35_turbo,
-        'gpt-3.5-turbo-0613' : gpt_35_turbo_0613,
-        'gpt-3.5-turbo-16k' : gpt_35_turbo_16k,
-        'gpt-3.5-turbo-16k-0613' : gpt_35_turbo_16k_0613,
-
-        # gpt-4
-        'gpt-4' : gpt_4,
-        'gpt-4-0613' : gpt_4_0613,
-        'gpt-4-32k' : gpt_4_32k,
-        'gpt-4-32k-0613' : gpt_4_32k_0613,
-
-        # Bard
-        'palm2' : palm,
-        'palm' : palm,
-        'google' : palm,
-        'google-bard' : palm,
-        'google-palm' : palm,
-        'bard' : palm,
-
-        # H2o
-        'falcon-40b' : falcon_40b,
-        'falcon-7b' : falcon_7b,
-        'llama-13b' : llama_13b,
-
-        # Vercel
-        'claude-instant-v1' : claude_instant_v1,
-        'claude-v1' : claude_v1,
-        'claude-v2' : claude_v2,
-        'command-nightly' : command_nightly,
-        'gpt-neox-20b' : gpt_neox_20b,
-        'santacoder' : santacoder,
-        'bloom' : bloom,
-        'flan-t5-xxl' : flan_t5_xxl,
-        'code-davinci-002' : code_davinci_002,
-        'text-ada-001' : text_ada_001,
-        'text-babbage-001' : text_babbage_001,
-        'text-curie-001' : text_curie_001,
-        'text-davinci-002' : text_davinci_002,
-        'text-davinci-003' : text_davinci_003,
-        'llama13b-v2-chat' : llama13b_v2_chat,
-        'llama7b-v2-chat' : llama7b_v2_chat,
-
-        'oasst-sft-1-pythia-12b' : oasst_sft_1_pythia_12b,
-        'oasst-sft-4-pythia-12b-epoch-3.5' : oasst_sft_4_pythia_12b_epoch_35,
-        'command-light-nightly' : command_light_nightly,
-    }
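`ModelUtils.convert` in the deleted file above is a plain alias table: several user-facing names can map to the same `Model` instance (e.g. all the `palm`/`bard` aliases). A self-contained sketch of the same registry pattern, with provider classes replaced by placeholder strings:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(unsafe_hash=True)
class Model:
    name: str
    base_provider: str
    # Placeholder; the real code stores a provider class or RetryProvider.
    best_provider: Optional[str] = None

gpt_35_turbo = Model("gpt-3.5-turbo", "openai", "RetryProvider")
palm = Model("palm", "google", "Bard")

# Multiple aliases point at the same Model instance.
CONVERT: dict[str, Model] = {
    "gpt-3.5-turbo": gpt_35_turbo,
    "palm": palm,
    "bard": palm,
}

def resolve(name: str) -> Model:
    try:
        return CONVERT[name]
    except KeyError:
        raise ValueError(f"Unknown model: {name}") from None

print(resolve("bard").base_provider)  # → google
```

Because aliases share one instance, tweaking a `Model` in one place updates every name that resolves to it.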
spaces/AfrodreamsAI/afrodreams/ex_app.py
DELETED
@@ -1,95 +0,0 @@
-import neural_style
-import streamlit as st
-import os
-import random
-import numpy as np
-from PIL import Image, ImageEnhance
-from io import BytesIO
-#import streamlit_ext as ste #for download button not to rerun
-from huggingface_hub import upload_file
-
-HF_TOKEN = os.environ.get("HF_TOKEN")
-
-st.set_page_config(layout="wide")
-#Create two columns with different width
-col1, col2 = st.columns( [0.8, 0.2])
-with col1:  # To display the header text using css style
-    st.markdown(""" <style> .font {
-    font-size:35px ; font-family: 'Cooper Black'; color: #FF9633;}
-    </style> """, unsafe_allow_html=True)
-    st.markdown('<p class="font">Upload your photo here...</p>', unsafe_allow_html=True)
-    st.subheader("This app takes in your image and styles it with a unique african art.")
-
-#Add a header and expander in side bar
-st.sidebar.markdown('<p class="font">Afrodreams.AI</p>', unsafe_allow_html=True)
-with st.sidebar.expander("About the App"):
-    st.write("""
-        This app takes in your image and styles it with a unique african art.""")
-
-
-#Add file uploader to allow users to upload photos
-uploaded_file = st.file_uploader("", type=['jpg','png','jpeg'])
-
-# add slider to side bar
-style_weight = st.slider("Select Style Weight", min_value=10, max_value=100, value=12)
-
-#Add 'before' and 'after' columns
-if uploaded_file is not None:
-    image = Image.open(uploaded_file)
-
-    col1, col2 = st.columns( [0.5, 0.5])
-    with col1:
-        st.markdown('<p style="text-align: center;">Before</p>',unsafe_allow_html=True)
-        st.image(image,width=300)
-
-    with col2:
-        st.markdown('<p style="text-align: center;">After</p>',unsafe_allow_html=True)
-
-    # add a button
-    run = st.button('Generate Art')
-    my_bar = st.progress(0)
-    params = neural_style.TransferParams()
-    params.gpu = 0#"c"
-    params.backend = "mkl"
-    params.image_size = 400
-    params.content_image = uploaded_file
-    params.style_weight = style_weight
-    keep_style = False
-    if run==True:
-        # run image selection if keep style is false
-        if keep_style==False:
-            path = 'stylesv2'
-            styles = os.listdir(path)
-            params.style_image = path + '/' + random.choice(styles)
-
-        st.session_state.submitted = True
-        with st.spinner('Wait for it...'):
-            neural_style.transfer(params)
-
-        #display image when done.
-        with col2:
-            if 'submitted' in st.session_state:
-                result = Image.open('out.png')
-                st.image(result, width=300)
-                buf = BytesIO()
-                result.save(buf, format="png")
-                if len(os.listdir('generated_samples')) <= 10:
-                    img_file_name = f"generated_samples/{str(len(os.listdir('generated_samples')))}.png"
-
-                    _ = upload_file(path_or_fileobj = 'out.png',
-                                    path_in_repo = img_file_name,
-                                    repo_id='AfrodreamsAI/afrodreams',
-                                    repo_type='space',
-                                    token=HF_TOKEN
-                                    )
-
-                byte_im = buf.getvalue()
-                #run =ste.download_button(button_text="Download Image", data=byte_im, download_filename='afrodreams.jpg', mime="image/png")
-        #keeping the current style by update the weight
-        keep_style = st.sidebar.checkbox("Keep current style")
spaces/AlexWortega/food_calories/app.py
DELETED
@@ -1,50 +0,0 @@
import torch
from rudalle import get_tokenizer, get_vae
from rudalle.utils import seed_everything

from rudolph.model.utils import get_i2t_attention_mask, get_t2t_attention_mask
from rudolph.model import get_rudolph_model, ruDolphModel, FP16Module
from rudolph.pipelines import generate_codebooks, self_reranking_by_image, self_reranking_by_text, show, generate_captions, generate_texts
from rudolph.pipelines import zs_clf

import gradio as gr
from rudolph import utils
from PIL import Image

device = 'cpu'
half = device == 'cuda'  # use fp16 only on GPU
model = get_rudolph_model('350M', fp16=half, device=device)
model.load_state_dict(torch.load("awesomemodel__dalle_1500.pt", map_location=torch.device('cpu')))
tokenizer = get_tokenizer()
vae = get_vae(dwt=False).to(device)

template = 'белков: '  # "protein: " -- prompt template for caption generation


def classify_image(inp):
    inp = Image.fromarray(inp)
    texts = generate_captions(inp, tokenizer, model, vae, template=template, top_k=16, captions_num=1, bs=16, top_p=0.6, seed=43, temperature=0.8)
    # Translate the generated Russian nutrition labels into English.
    rp = texts[0].replace('белков', 'protein').replace('жиров', 'fat').replace('углеводов', 'carbs').replace('калорий', 'calories')
    return rp


image = gr.inputs.Image(shape=(128, 128))
label = gr.outputs.Label(num_top_classes=3)

iface = gr.Interface(
    fn=classify_image,
    description="https://github.com/sberbank-ai/ru-dolph RuDolph by SBER AI, fine-tuned on an image2text task to predict food calories, by https://t.me/lovedeathtransformers Alex Wortega",
    inputs=image,
    outputs="text",
    examples=[['b9c277a3.jpeg']])
iface.launch()
spaces/AllAideas/SegmentacionVideo/utils/predict.py
DELETED
@@ -1,104 +0,0 @@
#from .custom_layers import TransformerEncoder, PositionalEmbedding
from .constants import MAX_SEQ_LENGTH, NUM_FEATURES, IMG_SIZE, CLASS_VOCAB
from huggingface_hub import from_pretrained_keras
from tensorflow import keras
from keras import layers
import numpy as np
import imageio
import cv2

#model = from_pretrained_keras("shivi/video-classification", custom_objects={"PositionalEmbedding": PositionalEmbedding, "TransformerEncoder": TransformerEncoder})

model = from_pretrained_keras("keras-io/video-transformers")

"""
The code below is taken from the Video-Transformers example on keras.io by Sayak Paul.
"""


def build_feature_extractor():
    feature_extractor = keras.applications.DenseNet121(
        weights="imagenet",
        include_top=False,
        pooling="avg",
        input_shape=(IMG_SIZE, IMG_SIZE, 3),
    )
    preprocess_input = keras.applications.densenet.preprocess_input

    inputs = keras.Input((IMG_SIZE, IMG_SIZE, 3))
    preprocessed = preprocess_input(inputs)

    outputs = feature_extractor(preprocessed)
    return keras.Model(inputs, outputs, name="feature_extractor")


feature_extractor = build_feature_extractor()


def crop_center(frame):
    center_crop_layer = layers.CenterCrop(IMG_SIZE, IMG_SIZE)
    cropped = center_crop_layer(frame[None, ...])
    cropped = cropped.numpy().squeeze()
    return cropped


def load_video(path, max_frames=0):
    cap = cv2.VideoCapture(path)
    frames = []
    try:
        while True:
            ret, frame = cap.read()
            if not ret:
                break
            frame = crop_center(frame)
            frame = frame[:, :, [2, 1, 0]]  # BGR -> RGB
            frames.append(frame)

            if len(frames) == max_frames:
                break
    finally:
        cap.release()
    return np.array(frames)


def prepare_single_video(frames):
    frame_features = np.zeros(shape=(1, MAX_SEQ_LENGTH, NUM_FEATURES), dtype="float32")

    # Pad shorter videos.
    if len(frames) < MAX_SEQ_LENGTH:
        diff = MAX_SEQ_LENGTH - len(frames)
        padding = np.zeros((diff, IMG_SIZE, IMG_SIZE, 3))
        frames = np.concatenate((frames, padding))

    frames = frames[None, ...]

    # Extract features from the frames of the current video.
    for i, batch in enumerate(frames):
        video_length = batch.shape[0]
        length = min(MAX_SEQ_LENGTH, video_length)
        for j in range(length):
            if np.mean(batch[j, :]) > 0.0:
                frame_features[i, j, :] = feature_extractor.predict(batch[None, j, :])
            else:
                frame_features[i, j, :] = 0.0

    return frame_features


def predict_action(path):
    frames = load_video(path)
    frame_features = prepare_single_video(frames)
    probabilities = model.predict(frame_features)[0]
    confidences = {}

    for i in np.argsort(probabilities)[::-1]:
        confidences[CLASS_VOCAB[i]] = float(probabilities[i])

    gif_out = to_gif(frames[:MAX_SEQ_LENGTH])
    return confidences, gif_out


def to_gif(images):
    converted_images = images.astype(np.uint8)
    imageio.mimsave("animation.gif", converted_images, fps=10)
    return "animation.gif"
spaces/Aloento/9Nine-PITS/app.py
DELETED
@@ -1,186 +0,0 @@
import argparse

import gradio as gr
import torch

import commons
import utils
from models import SynthesizerTrn
from text import cleaned_text_to_sequence
from text.cleaners import clean_text
from text.symbols import symbols


# We use Kyubyong/g2p for the demo instead of our internal g2p.
# https://github.com/Kyubyong/g2p
def get_text(text, hps):
    cleaned_text, lang = clean_text(text)
    text_norm = cleaned_text_to_sequence(cleaned_text)
    if hps.data.add_blank:
        text_norm, lang = commons.intersperse_with_language_id(text_norm, lang, 0)
    text_norm = torch.LongTensor(text_norm)
    lang = torch.LongTensor(lang)
    return text_norm, lang, cleaned_text


class GradioApp:

    def __init__(self, args):
        self.hps = utils.get_hparams_from_file(args.config)
        self.device = "cpu"

        self.net_g = SynthesizerTrn(
            len(symbols),
            self.hps.data.filter_length // 2 + 1,
            self.hps.train.segment_size // self.hps.data.hop_length,
            midi_start=-5,
            midi_end=75,
            octave_range=24,
            n_speakers=len(self.hps.data.speakers),
            **self.hps.model
        ).to(self.device)

        _ = self.net_g.eval()
        _ = utils.load_checkpoint(args.checkpoint_path, model_g=self.net_g)
        self.interface = self._gradio_interface()

    def get_phoneme(self, text):
        cleaned_text, lang = clean_text(text)
        text_norm = cleaned_text_to_sequence(cleaned_text)

        if self.hps.data.add_blank:
            text_norm, lang = commons.intersperse_with_language_id(text_norm, lang, 0)

        text_norm = torch.LongTensor(text_norm)
        lang = torch.LongTensor(lang)

        return text_norm, lang, cleaned_text

    def inference(self, text, speaker_id_val, seed, scope_shift, duration):
        seed = int(seed)
        scope_shift = int(scope_shift)
        torch.manual_seed(seed)
        text_norm, tone, phones = self.get_phoneme(text)
        x_tst = text_norm.to(self.device).unsqueeze(0)
        t_tst = tone.to(self.device).unsqueeze(0)
        x_tst_lengths = torch.LongTensor([text_norm.size(0)]).to(self.device)
        speaker_id = torch.LongTensor([speaker_id_val]).to(self.device)

        decoder_inputs, *_ = self.net_g.infer_pre_decoder(
            x_tst,
            t_tst,
            x_tst_lengths,
            sid=speaker_id,
            noise_scale=0.667,
            noise_scale_w=0.8,
            length_scale=duration,
            scope_shift=scope_shift
        )

        audio = self.net_g.infer_decode_chunk(
            decoder_inputs, sid=speaker_id
        )[0, 0].data.cpu().float().numpy()

        del decoder_inputs

        return phones, (self.hps.data.sampling_rate, audio)

    def _gradio_interface(self):
        title = "9Nine - PITS"

        self.inputs = [
            gr.Textbox(
                label="Text (150 words limitation)",
                value="[JA]そんなわけないじゃない。どうしてこうなるだろう。始めて好きな人ができた。一生ものの友达ができた。嬉しいことが二つ重なて。"
                      "その二つの嬉しさがまたたくさんの嬉しさをつれて来てくれて。梦のように幸せの时间を手に入れたはずなのに。なのにどうして、こうなちょうだろう。[JA]",
                elem_id="tts-input"
            ),
            gr.Dropdown(
                list(self.hps.data.speakers),
                value=self.hps.data.speakers[1],
                label="Speaker Identity",
                type="index"
            ),
            gr.Slider(
                0, 65536, value=0, step=1, label="random seed"
            ),
            gr.Slider(
                -15, 15, value=0, step=1, label="scope-shift"
            ),
            gr.Slider(
                0.5, 2., value=1., step=0.1, label="duration multiplier"
            ),
        ]

        self.outputs = [
            gr.Textbox(label="Phonemes"),
            gr.Audio(type="numpy", label="Output audio")
        ]

        description = "9Nine - PITS"
        article = "Github: https://github.com/Aloento/VariTTS"
        examples = [["[JA]こんにちは、私は綾地寧々です。[JA]"]]

        return gr.Interface(
            fn=self.inference,
            inputs=self.inputs,
            outputs=self.outputs,
            title=title,
            description=description,
            article=article,
            cache_examples=False,
            examples=examples,
        )

    def launch(self):
        return self.interface.launch(share=False)


def parsearg():
    parser = argparse.ArgumentParser()
    parser.add_argument('-c', '--config', type=str, default="./configs/config_cje.yaml",
                        help='Path to configuration file')
    parser.add_argument('-m', '--model', type=str, default='9Nine',
                        help='Model name')
    parser.add_argument('-r', '--checkpoint_path', type=str, default='./9Nine_Eval_71200.pth',
                        help='Path to checkpoint for resume')
    parser.add_argument('-f', '--force_resume', type=str,
                        help='Path to checkpoint for force resume')
    parser.add_argument('-d', '--dir', type=str, default='/DATA/audio/pits_samples',
                        help='root dir')
    args = parser.parse_args()
    return args


if __name__ == "__main__":
    import nltk
    nltk.download('cmudict')

    args = parsearg()
    app = GradioApp(args)
    app.launch()
spaces/Amon1/ChatGPTForAcadamic/crazy_functions/test_project/cpp/cppipc/policy.h
DELETED
@@ -1,25 +0,0 @@
#pragma once

#include <type_traits>

#include "libipc/def.h"
#include "libipc/prod_cons.h"

#include "libipc/circ/elem_array.h"

namespace ipc {
namespace policy {

template <template <typename, std::size_t...> class Elems, typename Flag>
struct choose;

template <typename Flag>
struct choose<circ::elem_array, Flag> {
    using flag_t = Flag;

    template <std::size_t DataSize, std::size_t AlignSize>
    using elems_t = circ::elem_array<ipc::prod_cons_impl<flag_t>, DataSize, AlignSize>;
};

} // namespace policy
} // namespace ipc
spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r50-d8_769x769_80k_cityscapes.py
DELETED
@@ -1,9 +0,0 @@
_base_ = [
    '../_base_/models/pspnet_r50-d8.py',
    '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
    '../_base_/schedules/schedule_80k.py'
]
model = dict(
    decode_head=dict(align_corners=True),
    auxiliary_head=dict(align_corners=True),
    test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513)))
spaces/Andy1621/uniformer_light/app.py
DELETED
@@ -1,173 +0,0 @@
import os

import torch
import torch.nn as nn
import numpy as np
import torch.nn.functional as F
import torchvision.transforms as T
from PIL import Image
from decord import VideoReader
from decord import cpu
from uniformer_light_video import uniformer_xxs_video
from uniformer_light_image import uniformer_xxs_image
from kinetics_class_index import kinetics_classnames
from imagenet_class_index import imagenet_classnames
from transforms import (
    GroupNormalize, GroupScale, GroupCenterCrop,
    Stack, ToTorchFormatTensor
)

import gradio as gr
from huggingface_hub import hf_hub_download


# Device on which to run the model.
# Set to "cuda" to load on GPU.
device = "cpu"
model_video_path = hf_hub_download(repo_id="Andy1621/uniformer_light", filename="uniformer_xxs16_160_k400.pth")
model_image_path = hf_hub_download(repo_id="Andy1621/uniformer_light", filename="uniformer_xxs_160_in1k.pth")
# Pick a pretrained model
model_video = uniformer_xxs_video()
model_video.load_state_dict(torch.load(model_video_path, map_location='cpu'))
model_image = uniformer_xxs_image()
model_image.load_state_dict(torch.load(model_image_path, map_location='cpu'))
# Set to eval mode and move to the desired device
model_video = model_video.to(device).eval()
model_image = model_image.to(device).eval()

# Create an id-to-label-name mapping
kinetics_id_to_classname = {}
for k, v in kinetics_classnames.items():
    kinetics_id_to_classname[k] = v
imagenet_id_to_classname = {}
for k, v in imagenet_classnames.items():
    imagenet_id_to_classname[k] = v[1]


def get_index(num_frames, num_segments=8):
    seg_size = float(num_frames - 1) / num_segments
    start = int(seg_size / 2)
    offsets = np.array([
        start + int(np.round(seg_size * idx)) for idx in range(num_segments)
    ])
    return offsets


def load_video(video_path):
    vr = VideoReader(video_path, ctx=cpu(0))
    num_frames = len(vr)
    frame_indices = get_index(num_frames, 16)

    # transform
    crop_size = 160
    scale_size = 160
    input_mean = [0.485, 0.456, 0.406]
    input_std = [0.229, 0.224, 0.225]

    transform = T.Compose([
        GroupScale(int(scale_size)),
        GroupCenterCrop(crop_size),
        Stack(),
        ToTorchFormatTensor(),
        GroupNormalize(input_mean, input_std)
    ])

    images_group = list()
    for frame_index in frame_indices:
        img = Image.fromarray(vr[frame_index].asnumpy())
        images_group.append(img)
    torch_imgs = transform(images_group)
    return torch_imgs


def inference_video(video):
    vid = load_video(video)

    # The model expects inputs of shape: B x C x T x H x W
    TC, H, W = vid.shape
    inputs = vid.reshape(1, TC // 3, 3, H, W).permute(0, 2, 1, 3, 4)

    with torch.no_grad():
        prediction = model_video(inputs)
        prediction = F.softmax(prediction, dim=1).flatten()

    return {kinetics_id_to_classname[str(i)]: float(prediction[i]) for i in range(400)}


def set_example_video(example: list) -> dict:
    return gr.Video.update(value=example[0])


def inference_image(img):
    image = img
    image_transform = T.Compose(
        [
            T.Resize(224),
            T.CenterCrop(224),
            T.ToTensor(),
            T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
        ]
    )
    image = image_transform(image)

    # The model expects inputs of shape: B x C x H x W
    image = image.unsqueeze(0)

    with torch.no_grad():
        prediction = model_image(image)
        prediction = F.softmax(prediction, dim=1).flatten()

    return {imagenet_id_to_classname[str(i)]: float(prediction[i]) for i in range(1000)}


def set_example_image(example: list) -> dict:
    return gr.Image.update(value=example[0])


demo = gr.Blocks()
with demo:
    gr.Markdown(
        """
        # UniFormer Light
        Gradio demo for <a href='https://github.com/Sense-X/UniFormer' target='_blank'>UniFormer</a>: To use it, simply upload your video, or click one of the examples to load them. Read more at the links below.
        """
    )

    with gr.Tab("Video"):
        with gr.Box():
            with gr.Row():
                with gr.Column():
                    with gr.Row():
                        input_video = gr.Video(label='Input Video').style(height=360)
                    with gr.Row():
                        submit_video_button = gr.Button('Submit')
                with gr.Column():
                    label_video = gr.Label(num_top_classes=5)
            with gr.Row():
                example_videos = gr.Dataset(components=[input_video], samples=[['./videos/hitting_baseball.mp4'], ['./videos/hoverboarding.mp4'], ['./videos/yoga.mp4']])

    with gr.Tab("Image"):
        with gr.Box():
            with gr.Row():
                with gr.Column():
                    with gr.Row():
                        input_image = gr.Image(label='Input Image', type='pil').style(height=360)
                    with gr.Row():
                        submit_image_button = gr.Button('Submit')
                with gr.Column():
                    label_image = gr.Label(num_top_classes=5)
            with gr.Row():
                example_images = gr.Dataset(components=[input_image], samples=[['./images/cat.png'], ['./images/dog.png'], ['./images/panda.png']])

    gr.Markdown(
        """
        <p style='text-align: center'><a href='https://arxiv.org/abs/2201.09450' target='_blank'>[TPAMI] UniFormer: Unifying Convolution and Self-attention for Visual Recognition</a> | <a href='https://github.com/Sense-X/UniFormer' target='_blank'>Github Repo</a></p>
        """
    )

    submit_video_button.click(fn=inference_video, inputs=input_video, outputs=label_video)
    example_videos.click(fn=set_example_video, inputs=example_videos, outputs=example_videos.components)
    submit_image_button.click(fn=inference_image, inputs=input_image, outputs=label_image)
    example_images.click(fn=set_example_image, inputs=example_images, outputs=example_images.components)

demo.launch(enable_queue=True)
spaces/AngoHF/ANGO-Leaderboard/assets/evaluation.py
DELETED
@@ -1,185 +0,0 @@
|
|
1 |
-
import os
|
2 |
-
import json
|
3 |
-
import re
|
4 |
-
import argparse
|
5 |
-
import torch
|
6 |
-
import gc
|
7 |
-
from tqdm import tqdm
|
8 |
-
|
9 |
-
from transformers import AutoModelForCausalLM, AutoModel, AutoTokenizer
|
10 |
-
|
11 |
-
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
|
12 |
-
|
13 |
-
|
14 |
-
def parse_args():
|
15 |
-
parser = argparse.ArgumentParser(description='Validation')
|
16 |
-
parser.add_argument('--model_path', dest="model_path")
|
17 |
-
parser.add_argument('--dataset_path', dest="dataset_path")
|
18 |
-
parser.add_argument('--save_path', dest='save_path')
|
19 |
-
parser.add_argument('--max_length', dest="max_length", default=2000)
|
20 |
-
parser.add_argument('--max_new_tokens', dest="max_new_tokens", default=48)
|
21 |
-
parser.add_argument('--input_template', dest="input_template",
|
22 |
-
default="材料:{material}\n问题:{question}\n{options}\n答案:{response}")
|
23 |
-
parser.add_argument('--query_template', dest="query_template", default="问题:{question}\n{options}\n答案:{response}")
|
24 |
-
parser.add_argument('--system_prompt', dest="system_prompt",
|
25 |
-
default="你是一名考生,请回答问题的正确选项,比如C。如果有多个正确选项,请按顺序回答所有正确选项,比如ABD。")
|
26 |
-
parser.add_argument('--level_delimiter', dest="level_delimiter", default="|")
|
27 |
-
args = parser.parse_args()
|
28 |
-
return args
|
29 |
-
|
30 |
-
|
31 |
-
class Validator:
|
32 |
-
def __init__(self, args):
|
33 |
-
self.load_model(args.model_path)
|
34 |
-
self.load_dataset(args.dataset_path)
|
35 |
-
self.save_dir = os.path.join(args.save_path, os.path.split(model_path)[-1])
|
36 |
-
if not os.path.exists(self.save_dir):
|
37 |
-
os.makedirs(self.save_dir)
|
38 |
-
|
39 |
-
self.max_length = args.max_length
|
40 |
-
self.max_new_tokens = args.max_new_tokens
|
41 |
-
self.input_template = args.input_template
|
42 |
-
self.query_template = args.query_template
|
43 |
-
self.system_prompt = args.system_prompt
|
44 |
-
self.level_delimiter = args.level_delimiter
|
45 |
-
|
46 |
-
def load_model(self, model_path):
|
47 |
-
self.tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
|
48 |
-
|
49 |
-
self.model = AutoModelForCausalLM.from_pretrained(model_path, device_map={"": 0}, trust_remote_code=True)
|
50 |
-
self.model.eval()
|
51 |
-
|
52 |
-
def load_dataset(self, dataset_path):
|
53 |
-
self.dataset = json.load(open(dataset_path, encoding="utf-8"))
|
54 |
-
|
55 |
-
def format_prompt(self, material, question, options, response):
|
56 |
-
if material:
|
57 |
-
return self.input_template.format(material=material, question=question, options=options,
|
58 |
-
response=response).strip()
|
59 |
-
return self.query_template.format(question=question, options=options, response=response).strip()
|
60 |
-
|
61 |
-
def build_prompt(self, item):
|
62 |
-
query_prompt = self.format_prompt(item['material'], item['question'], item['options'], "")
|
63 |
-
history_prompts = []
|
64 |
-
for sub in item['history']:
|
65 |
-
history_prompts.append(self.format_prompt(sub['material'], sub['question'], sub['options'], sub['choice']))
|
66 |
-
|
67 |
-
final_prompt = self.system_prompt + "\n" + "\n".join(history_prompts) + "\n" + query_prompt
|
68 |
-
|
69 |
-
if len(self.tokenizer.tokenize(final_prompt)) > self.max_length:
|
70 |
-
history_prompts.pop()
|
71 |
-
break
|
72 |
-
return self.system_prompt + "\n" + "\n".join(history_prompts) + "\n" + query_prompt
|
73 |
-
|
74 |
-
def get_predict(self, prompt):
|
75 |
-
gen_kwargs = {"do_sample": False, "max_new_tokens": self.max_new_tokens}
|
76 |
-
inputs = self.tokenizer([prompt], return_tensors="pt")
|
77 |
-
inputs = inputs.to(self.model.device)
|
78 |
-
with torch.no_grad():
|
79 |
-
outputs = self.model.generate(**inputs, return_dict_in_generate=True, **gen_kwargs)
|
80 |
-
predict = self.tokenizer.decode(outputs.sequences[0], skip_special_tokens=True)[len(prompt):]
|
81 |
-
return predict
|
82 |
-
|
83 |
-
def extract_answer(self, results):
|
84 |
-
predict_result = {"acc": 0, "wrong_value": 0, "human_acc": 0, "hit": 0, "total": 0, "wrong_hit": 0,
|
85 |
-
"wrong_total": 0, 'detail': []}
|
86 |
-
for result in results:
|
87 |
-
answer = result['item']['choice']
|
88 |
-
most_wrong = result['item']['most_wrong']
|
89 |
-
human_acc = result['item']['human_acc'] * 0.01
|
90 |
-
|
91 |
-
predict = ""
|
92 |
-
for e in re.split(r'[、,.,和\s]\s*', result['predict']):
|
93 |
-
if not e.isascii():
|
94 |
-
break
|
95 |
-
predict += e
|
96 |
-
result['predict'] = predict
|
97 |
-
predict_result['detail'].append(
|
98 |
-
{"answer": answer, "most_wrong": most_wrong, "predict": predict, "human_acc": human_acc})
|
99 |
-
predict_result['hit'] += 1 if answer == predict else 0
|
100 |
-
predict_result['wrong_hit'] += 1 if most_wrong == predict else 0
|
101 |
-
predict_result['wrong_value'] += 1 - human_acc if most_wrong == predict else 0
|
102 |
-
predict_result['human_acc'] += human_acc
|
103 |
-
predict_result['total'] += 1
|
104 |
-
predict_result['acc'] = predict_result['hit'] / predict_result['total']
|
105 |
-
predict_result['wrong_total'] = predict_result['total'] - predict_result['hit']
|
106 |
-
predict_result['wrong_value'] = predict_result['wrong_value'] / predict_result['wrong_total']
|
107 |
-
predict_result['human_acc'] = predict_result['human_acc'] / len(results)
|
108 |
-
|
109 |
-
json.dump(predict_result, open(os.path.join(self.save_dir, "acc_result.json"), "w", encoding="utf-8"),
|
110 |
-
ensure_ascii=False)
|
111 |
-
|
112 |
-
def category_summary(self, results):
|
113 |
-
category_result = {"总计": {"hit": 0, "all": 0, "difficulty": {}, "human_acc": 0}}
|
114 |
-
for result in results:
|
115 |
-
hit = 1 if result['item']['choice'] == result['predict'] else 0
|
116 |
-
categories_list = result['item']['categories']
|
117 |
-
difficulty = result['item']['difficulty']
|
118 |
-
human_acc = result['item']['human_acc']
|
119 |
-
for categories in categories_list:
|
120 |
-
if difficulty not in category_result["总计"]["difficulty"]:
|
121 |
-
category_result["总计"]["difficulty"][difficulty] = {"hit": 0, "all": 0}
|
122 |
-
category_result["总计"]["difficulty"][difficulty]['hit'] += hit
|
123 |
-
category_result["总计"]["difficulty"][difficulty]['all'] += 1
|
124 |
-
category_result["总计"]['hit'] += hit
|
125 |
-
category_result["总计"]['all'] += 1
|
126 |
-
category_result["总计"]['human_acc'] += human_acc
|
127 |
-
category_subset = []
|
128 |
-
for category in categories:
|
129 |
-
category_subset.append(category)
|
130 |
-
category_name = self.level_delimiter.join(category_subset)
|
131 |
-
if not category_name:
|
132 |
-
category_name = "未分类"
|
133 |
-
if category_name not in category_result:
|
134 |
-
category_result[category_name] = {"hit": 0, "all": 0, "difficulty": {}, "human_acc": 0}
|
135 |
-
if difficulty not in category_result[category_name]["difficulty"]:
|
-                category_result[category_name]["difficulty"][difficulty] = {"hit": 0, "all": 0}
-            category_result[category_name]["difficulty"][difficulty]['hit'] += hit
-            category_result[category_name]["difficulty"][difficulty]['all'] += 1
-            category_result[category_name]['hit'] += hit
-            category_result[category_name]['all'] += 1
-            category_result[category_name]['human_acc'] += human_acc
-
-        for k, v in category_result.items():
-            v['acc'] = v['hit'] / v['all']
-            v['human_acc'] = v['human_acc'] / v['all']
-            for d, sub_v in v['difficulty'].items():
-                sub_v['acc'] = sub_v['hit'] / sub_v['all']
-        json.dump(category_result, open(os.path.join(self.save_dir, "category_result.json"), "w", encoding="utf-8"),
-                  ensure_ascii=False)
-
-    def difficulty_summary(self, results):
-        difficulty_result = {"总计": {"hit": 0, "all": 0}}
-        for result in results:
-            hit = 1 if result['item']['choice'] == result['predict'] else 0
-            difficulty = result['item']['difficulty']
-            if difficulty not in difficulty_result:
-                difficulty_result[difficulty] = {"hit": 0, "all": 0}
-            difficulty_result[difficulty]['hit'] += hit
-            difficulty_result[difficulty]['all'] += 1
-            difficulty_result["总计"]['hit'] += hit
-            difficulty_result["总计"]['all'] += 1
-
-        for k in difficulty_result:
-            difficulty_result[k]['acc'] = difficulty_result[k]['hit'] / difficulty_result[k]['all']
-
-        json.dump(difficulty_result, open(os.path.join(self.save_dir, "difficulty_result.json"), "w", encoding="utf-8"),
-                  ensure_ascii=False)
-
-    def __call__(self):
-        results = []
-        for item in tqdm(self.dataset):
-            prompt = self.build_prompt(item)
-            predict = self.get_predict(prompt)
-            results.append({"item": item, "predict": predict})
-            gc.collect()
-
-        json.dump(results, open(os.path.join(self.save_dir, "raw.json"), "w", encoding="utf-8"), ensure_ascii=False)
-        self.extract_answer(results)
-        self.category_summary(results)
-        self.difficulty_summary(results)
-
-
-if __name__ == "__main__":
-    args = parse_args()
-    Validator(args)()
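The hit/all tally that difficulty_summary builds above can be exercised in isolation. This sketch is not part of the deleted file: summarize, demo, and the flat record keys are stand-ins for the validator's actual result records (which nest choice/difficulty under result['item']); "总计" is the running total bucket, as in the original.

```python
# Minimal sketch of the per-bucket hit/all tally plus a running total,
# over a fabricated list of results.
def summarize(results):
    summary = {"总计": {"hit": 0, "all": 0}}
    for r in results:
        hit = 1 if r["choice"] == r["predict"] else 0
        bucket = summary.setdefault(r["difficulty"], {"hit": 0, "all": 0})
        bucket["hit"] += hit
        bucket["all"] += 1
        summary["总计"]["hit"] += hit
        summary["总计"]["all"] += 1
    for v in summary.values():
        v["acc"] = v["hit"] / v["all"]
    return summary

demo = [
    {"difficulty": "easy", "choice": "A", "predict": "A"},
    {"difficulty": "easy", "choice": "B", "predict": "C"},
    {"difficulty": "hard", "choice": "D", "predict": "D"},
]
print(summarize(demo)["总计"]["acc"])  # 2 hits out of 3
```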
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example.py
DELETED
@@ -1,63 +0,0 @@
-import requests
-
-# For local streaming, the websockets are hosted without ssl - http://
-HOST = 'localhost:5000'
-URI = f'http://{HOST}/api/v1/generate'
-
-# For reverse-proxied streaming, the remote will likely host with ssl - https://
-# URI = 'https://your-uri-here.trycloudflare.com/api/v1/generate'
-
-
-def run(prompt):
-    request = {
-        'prompt': prompt,
-        'max_new_tokens': 250,
-        'auto_max_new_tokens': False,
-        'max_tokens_second': 0,
-
-        # Generation params. If 'preset' is set to different than 'None', the values
-        # in presets/preset-name.yaml are used instead of the individual numbers.
-        'preset': 'None',
-        'do_sample': True,
-        'temperature': 0.7,
-        'top_p': 0.1,
-        'typical_p': 1,
-        'epsilon_cutoff': 0,  # In units of 1e-4
-        'eta_cutoff': 0,  # In units of 1e-4
-        'tfs': 1,
-        'top_a': 0,
-        'repetition_penalty': 1.18,
-        'repetition_penalty_range': 0,
-        'top_k': 40,
-        'min_length': 0,
-        'no_repeat_ngram_size': 0,
-        'num_beams': 1,
-        'penalty_alpha': 0,
-        'length_penalty': 1,
-        'early_stopping': False,
-        'mirostat_mode': 0,
-        'mirostat_tau': 5,
-        'mirostat_eta': 0.1,
-        'grammar_string': '',
-        'guidance_scale': 1,
-        'negative_prompt': '',
-
-        'seed': -1,
-        'add_bos_token': True,
-        'truncation_length': 2048,
-        'ban_eos_token': False,
-        'custom_token_bans': '',
-        'skip_special_tokens': True,
-        'stopping_strings': []
-    }
-
-    response = requests.post(URI, json=request)
-
-    if response.status_code == 200:
-        result = response.json()['results'][0]['text']
-        print(prompt + result)
-
-
-if __name__ == '__main__':
-    prompt = "In order to make homemade bread, follow these steps:\n1)"
-    run(prompt)
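The response handling at the end of run() relies on the API returning a JSON body shaped like {"results": [{"text": ...}]} and concatenates the prompt with the first result. A standalone sketch of that step, with a fabricated payload (extract_completion is not part of the original script):

```python
# Pull the generated text out of the endpoint's response shape and
# prepend the prompt, mirroring the last lines of run().
def extract_completion(prompt, payload):
    return prompt + payload["results"][0]["text"]

payload = {"results": [{"text": " Mix flour, water, and yeast."}]}
print(extract_completion("In order to make homemade bread:", payload))
```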
spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/guided_diffusion/fp16_util.py
DELETED
@@ -1,236 +0,0 @@
-"""
-Helpers to train with 16-bit precision.
-"""
-
-import numpy as np
-import torch as th
-import torch.nn as nn
-from torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors
-
-from . import logger
-
-INITIAL_LOG_LOSS_SCALE = 20.0
-
-
-def convert_module_to_f16(l):
-    """
-    Convert primitive modules to float16.
-    """
-    if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)):
-        l.weight.data = l.weight.data.half()
-        if l.bias is not None:
-            l.bias.data = l.bias.data.half()
-
-
-def convert_module_to_f32(l):
-    """
-    Convert primitive modules to float32, undoing convert_module_to_f16().
-    """
-    if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)):
-        l.weight.data = l.weight.data.float()
-        if l.bias is not None:
-            l.bias.data = l.bias.data.float()
-
-
-def make_master_params(param_groups_and_shapes):
-    """
-    Copy model parameters into a (differently-shaped) list of full-precision
-    parameters.
-    """
-    master_params = []
-    for param_group, shape in param_groups_and_shapes:
-        master_param = nn.Parameter(
-            _flatten_dense_tensors(
-                [param.detach().float() for (_, param) in param_group]
-            ).view(shape)
-        )
-        master_param.requires_grad = True
-        master_params.append(master_param)
-    return master_params
-
-
-def model_grads_to_master_grads(param_groups_and_shapes, master_params):
-    """
-    Copy the gradients from the model parameters into the master parameters
-    from make_master_params().
-    """
-    for master_param, (param_group, shape) in zip(
-        master_params, param_groups_and_shapes
-    ):
-        master_param.grad = _flatten_dense_tensors(
-            [param_grad_or_zeros(param) for (_, param) in param_group]
-        ).view(shape)
-
-
-def master_params_to_model_params(param_groups_and_shapes, master_params):
-    """
-    Copy the master parameter data back into the model parameters.
-    """
-    # Without copying to a list, if a generator is passed, this will
-    # silently not copy any parameters.
-    for master_param, (param_group, _) in zip(master_params, param_groups_and_shapes):
-        for (_, param), unflat_master_param in zip(
-            param_group, unflatten_master_params(param_group, master_param.view(-1))
-        ):
-            param.detach().copy_(unflat_master_param)
-
-
-def unflatten_master_params(param_group, master_param):
-    return _unflatten_dense_tensors(master_param, [param for (_, param) in param_group])
-
-
-def get_param_groups_and_shapes(named_model_params):
-    named_model_params = list(named_model_params)
-    scalar_vector_named_params = (
-        [(n, p) for (n, p) in named_model_params if p.ndim <= 1],
-        (-1),
-    )
-    matrix_named_params = (
-        [(n, p) for (n, p) in named_model_params if p.ndim > 1],
-        (1, -1),
-    )
-    return [scalar_vector_named_params, matrix_named_params]
-
-
-def master_params_to_state_dict(
-    model, param_groups_and_shapes, master_params, use_fp16
-):
-    if use_fp16:
-        state_dict = model.state_dict()
-        for master_param, (param_group, _) in zip(
-            master_params, param_groups_and_shapes
-        ):
-            for (name, _), unflat_master_param in zip(
-                param_group, unflatten_master_params(param_group, master_param.view(-1))
-            ):
-                assert name in state_dict
-                state_dict[name] = unflat_master_param
-    else:
-        state_dict = model.state_dict()
-        for i, (name, _value) in enumerate(model.named_parameters()):
-            assert name in state_dict
-            state_dict[name] = master_params[i]
-    return state_dict
-
-
-def state_dict_to_master_params(model, state_dict, use_fp16):
-    if use_fp16:
-        named_model_params = [
-            (name, state_dict[name]) for name, _ in model.named_parameters()
-        ]
-        param_groups_and_shapes = get_param_groups_and_shapes(named_model_params)
-        master_params = make_master_params(param_groups_and_shapes)
-    else:
-        master_params = [state_dict[name] for name, _ in model.named_parameters()]
-    return master_params
-
-
-def zero_master_grads(master_params):
-    for param in master_params:
-        param.grad = None
-
-
-def zero_grad(model_params):
-    for param in model_params:
-        # Taken from https://pytorch.org/docs/stable/_modules/torch/optim/optimizer.html#Optimizer.add_param_group
-        if param.grad is not None:
-            param.grad.detach_()
-            param.grad.zero_()
-
-
-def param_grad_or_zeros(param):
-    if param.grad is not None:
-        return param.grad.data.detach()
-    else:
-        return th.zeros_like(param)
-
-
-class MixedPrecisionTrainer:
-    def __init__(
-        self,
-        *,
-        model,
-        use_fp16=False,
-        fp16_scale_growth=1e-3,
-        initial_lg_loss_scale=INITIAL_LOG_LOSS_SCALE,
-    ):
-        self.model = model
-        self.use_fp16 = use_fp16
-        self.fp16_scale_growth = fp16_scale_growth
-
-        self.model_params = list(self.model.parameters())
-        self.master_params = self.model_params
-        self.param_groups_and_shapes = None
-        self.lg_loss_scale = initial_lg_loss_scale
-
-        if self.use_fp16:
-            self.param_groups_and_shapes = get_param_groups_and_shapes(
-                self.model.named_parameters()
-            )
-            self.master_params = make_master_params(self.param_groups_and_shapes)
-            self.model.convert_to_fp16()
-
-    def zero_grad(self):
-        zero_grad(self.model_params)
-
-    def backward(self, loss: th.Tensor):
-        if self.use_fp16:
-            loss_scale = 2 ** self.lg_loss_scale
-            (loss * loss_scale).backward()
-        else:
-            loss.backward()
-
-    def optimize(self, opt: th.optim.Optimizer):
-        if self.use_fp16:
-            return self._optimize_fp16(opt)
-        else:
-            return self._optimize_normal(opt)
-
-    def _optimize_fp16(self, opt: th.optim.Optimizer):
-        logger.logkv_mean("lg_loss_scale", self.lg_loss_scale)
-        model_grads_to_master_grads(self.param_groups_and_shapes, self.master_params)
-        grad_norm, param_norm = self._compute_norms(grad_scale=2 ** self.lg_loss_scale)
-        if check_overflow(grad_norm):
-            self.lg_loss_scale -= 1
-            logger.log(f"Found NaN, decreased lg_loss_scale to {self.lg_loss_scale}")
-            zero_master_grads(self.master_params)
-            return False
-
-        logger.logkv_mean("grad_norm", grad_norm)
-        logger.logkv_mean("param_norm", param_norm)
-
-        self.master_params[0].grad.mul_(1.0 / (2 ** self.lg_loss_scale))
-        opt.step()
-        zero_master_grads(self.master_params)
-        master_params_to_model_params(self.param_groups_and_shapes, self.master_params)
-        self.lg_loss_scale += self.fp16_scale_growth
-        return True
-
-    def _optimize_normal(self, opt: th.optim.Optimizer):
-        grad_norm, param_norm = self._compute_norms()
-        logger.logkv_mean("grad_norm", grad_norm)
-        logger.logkv_mean("param_norm", param_norm)
-        opt.step()
-        return True
-
-    def _compute_norms(self, grad_scale=1.0):
-        grad_norm = 0.0
-        param_norm = 0.0
-        for p in self.master_params:
-            with th.no_grad():
-                param_norm += th.norm(p, p=2, dtype=th.float32).item() ** 2
-                if p.grad is not None:
-                    grad_norm += th.norm(p.grad, p=2, dtype=th.float32).item() ** 2
-        return np.sqrt(grad_norm) / grad_scale, np.sqrt(param_norm)
-
-    def master_params_to_state_dict(self, master_params):
-        return master_params_to_state_dict(
-            self.model, self.param_groups_and_shapes, master_params, self.use_fp16
-        )
-
-    def state_dict_to_master_params(self, state_dict):
-        return state_dict_to_master_params(self.model, state_dict, self.use_fp16)
-
-
-def check_overflow(value):
-    return (value == float("inf")) or (value == -float("inf")) or (value != value)
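The dynamic loss-scale bookkeeping inside _optimize_fp16 can be separated from torch entirely. Below, check_overflow is copied from the file above, while adjust_scale is a hypothetical stand-in (not in the original) for its decrease-on-overflow / grow-otherwise behavior:

```python
# check_overflow mirrors the deleted file: inf, -inf, or NaN (value != value).
def check_overflow(value):
    return (value == float("inf")) or (value == -float("inf")) or (value != value)

def adjust_scale(lg_loss_scale, grad_norm, growth=1e-3):
    """Return (step_taken, new lg_loss_scale).

    On overflow the optimizer step is skipped and the log2 loss scale drops
    by 1; otherwise it creeps up by fp16_scale_growth after a successful step.
    """
    if check_overflow(grad_norm):
        return False, lg_loss_scale - 1
    return True, lg_loss_scale + growth

ok, scale = adjust_scale(20.0, float("nan"))
print(ok, scale)  # overflow -> step skipped, lg_loss_scale drops to 19.0
```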
spaces/Anonymous-sub/Rerender/gmflow_module/main.py
DELETED
@@ -1,557 +0,0 @@
-import torch
-from torch.utils.data import DataLoader
-from torch.utils.tensorboard import SummaryWriter
-
-import argparse
-import numpy as np
-import os
-
-from data import build_train_dataset
-from gmflow.gmflow import GMFlow
-from loss import flow_loss_func
-from evaluate import (validate_chairs, validate_things, validate_sintel, validate_kitti,
-                      create_sintel_submission, create_kitti_submission, inference_on_dir)
-
-from utils.logger import Logger
-from utils import misc
-from utils.dist_utils import get_dist_info, init_dist, setup_for_distributed
-
-
-def get_args_parser():
-    parser = argparse.ArgumentParser()
-
-    # dataset
-    parser.add_argument('--checkpoint_dir', default='tmp', type=str,
-                        help='where to save the training log and models')
-    parser.add_argument('--stage', default='chairs', type=str,
-                        help='training stage')
-    parser.add_argument('--image_size', default=[384, 512], type=int, nargs='+',
-                        help='image size for training')
-    parser.add_argument('--padding_factor', default=16, type=int,
-                        help='the input should be divisible by padding_factor, otherwise do padding')
-
-    parser.add_argument('--max_flow', default=400, type=int,
-                        help='exclude very large motions during training')
-    parser.add_argument('--val_dataset', default=['chairs'], type=str, nargs='+',
-                        help='validation dataset')
-    parser.add_argument('--with_speed_metric', action='store_true',
-                        help='with speed metric when evaluation')
-
-    # training
-    parser.add_argument('--lr', default=4e-4, type=float)
-    parser.add_argument('--batch_size', default=12, type=int)
-    parser.add_argument('--num_workers', default=4, type=int)
-    parser.add_argument('--weight_decay', default=1e-4, type=float)
-    parser.add_argument('--grad_clip', default=1.0, type=float)
-    parser.add_argument('--num_steps', default=100000, type=int)
-    parser.add_argument('--seed', default=326, type=int)
-    parser.add_argument('--summary_freq', default=100, type=int)
-    parser.add_argument('--val_freq', default=10000, type=int)
-    parser.add_argument('--save_ckpt_freq', default=10000, type=int)
-    parser.add_argument('--save_latest_ckpt_freq', default=1000, type=int)
-
-    # resume pretrained model or resume training
-    parser.add_argument('--resume', default=None, type=str,
-                        help='resume from pretrain model for finetuing or resume from terminated training')
-    parser.add_argument('--strict_resume', action='store_true')
-    parser.add_argument('--no_resume_optimizer', action='store_true')
-
-    # GMFlow model
-    parser.add_argument('--num_scales', default=1, type=int,
-                        help='basic gmflow model uses a single 1/8 feature, the refinement uses 1/4 feature')
-    parser.add_argument('--feature_channels', default=128, type=int)
-    parser.add_argument('--upsample_factor', default=8, type=int)
-    parser.add_argument('--num_transformer_layers', default=6, type=int)
-    parser.add_argument('--num_head', default=1, type=int)
-    parser.add_argument('--attention_type', default='swin', type=str)
-    parser.add_argument('--ffn_dim_expansion', default=4, type=int)
-
-    parser.add_argument('--attn_splits_list', default=[2], type=int, nargs='+',
-                        help='number of splits in attention')
-    parser.add_argument('--corr_radius_list', default=[-1], type=int, nargs='+',
-                        help='correlation radius for matching, -1 indicates global matching')
-    parser.add_argument('--prop_radius_list', default=[-1], type=int, nargs='+',
-                        help='self-attention radius for flow propagation, -1 indicates global attention')
-
-    # loss
-    parser.add_argument('--gamma', default=0.9, type=float,
-                        help='loss weight')
-
-    # evaluation
-    parser.add_argument('--eval', action='store_true')
-    parser.add_argument('--save_eval_to_file', action='store_true')
-    parser.add_argument('--evaluate_matched_unmatched', action='store_true')
-
-    # inference on a directory
-    parser.add_argument('--inference_dir', default=None, type=str)
-    parser.add_argument('--inference_size', default=None, type=int, nargs='+',
-                        help='can specify the inference size')
-    parser.add_argument('--dir_paired_data', action='store_true',
-                        help='Paired data in a dir instead of a sequence')
-    parser.add_argument('--save_flo_flow', action='store_true')
-    parser.add_argument('--pred_bidir_flow', action='store_true',
-                        help='predict bidirectional flow')
-    parser.add_argument('--fwd_bwd_consistency_check', action='store_true',
-                        help='forward backward consistency check with bidirection flow')
-
-    # predict on sintel and kitti test set for submission
-    parser.add_argument('--submission', action='store_true',
-                        help='submission to sintel or kitti test sets')
-    parser.add_argument('--output_path', default='output', type=str,
-                        help='where to save the prediction results')
-    parser.add_argument('--save_vis_flow', action='store_true',
-                        help='visualize flow prediction as .png image')
-    parser.add_argument('--no_save_flo', action='store_true',
-                        help='not save flow as .flo')
-
-    # distributed training
-    parser.add_argument('--local_rank', default=0, type=int)
-    parser.add_argument('--distributed', action='store_true')
-    parser.add_argument('--launcher', default='none', type=str, choices=['none', 'pytorch'])
-    parser.add_argument('--gpu_ids', default=0, type=int, nargs='+')
-
-    parser.add_argument('--count_time', action='store_true',
-                        help='measure the inference time on sintel')
-
-    return parser
-
-
-def main(args):
-    if not args.eval and not args.submission and args.inference_dir is None:
-        if args.local_rank == 0:
-            print('pytorch version:', torch.__version__)
-            print(args)
-            misc.save_args(args)
-            misc.check_path(args.checkpoint_dir)
-            misc.save_command(args.checkpoint_dir)
-
-    seed = args.seed
-    torch.manual_seed(seed)
-    np.random.seed(seed)
-
-    torch.backends.cudnn.benchmark = True
-
-    if args.launcher == 'none':
-        args.distributed = False
-        device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-    else:
-        args.distributed = True
-
-        # adjust batch size for each gpu
-        assert args.batch_size % torch.cuda.device_count() == 0
-        args.batch_size = args.batch_size // torch.cuda.device_count()
-
-        dist_params = dict(backend='nccl')
-        init_dist(args.launcher, **dist_params)
-        # re-set gpu_ids with distributed training mode
-        _, world_size = get_dist_info()
-        args.gpu_ids = range(world_size)
-        device = torch.device('cuda:{}'.format(args.local_rank))
-
-        setup_for_distributed(args.local_rank == 0)
-
-    # model
-    model = GMFlow(feature_channels=args.feature_channels,
-                   num_scales=args.num_scales,
-                   upsample_factor=args.upsample_factor,
-                   num_head=args.num_head,
-                   attention_type=args.attention_type,
-                   ffn_dim_expansion=args.ffn_dim_expansion,
-                   num_transformer_layers=args.num_transformer_layers,
-                   ).to(device)
-
-    if not args.eval and not args.submission and not args.inference_dir:
-        print('Model definition:')
-        print(model)
-
-    if args.distributed:
-        model = torch.nn.parallel.DistributedDataParallel(
-            model.to(device),
-            device_ids=[args.local_rank],
-            output_device=args.local_rank)
-        model_without_ddp = model.module
-    else:
-        if torch.cuda.device_count() > 1:
-            print('Use %d GPUs' % torch.cuda.device_count())
-            model = torch.nn.DataParallel(model)
-
-            model_without_ddp = model.module
-        else:
-            model_without_ddp = model
-
-    num_params = sum(p.numel() for p in model.parameters())
-    print('Number of params:', num_params)
-    if not args.eval and not args.submission and args.inference_dir is None:
-        save_name = '%d_parameters' % num_params
-        open(os.path.join(args.checkpoint_dir, save_name), 'a').close()
-
-    optimizer = torch.optim.AdamW(model_without_ddp.parameters(), lr=args.lr,
-                                  weight_decay=args.weight_decay)
-
-    start_epoch = 0
-    start_step = 0
-    # resume checkpoints
-    if args.resume:
-        print('Load checkpoint: %s' % args.resume)
-
-        loc = 'cuda:{}'.format(args.local_rank)
-        checkpoint = torch.load(args.resume, map_location=loc)
-
-        weights = checkpoint['model'] if 'model' in checkpoint else checkpoint
-
-        model_without_ddp.load_state_dict(weights, strict=args.strict_resume)
-
-        if 'optimizer' in checkpoint and 'step' in checkpoint and 'epoch' in checkpoint and not \
-                args.no_resume_optimizer:
-            print('Load optimizer')
-            optimizer.load_state_dict(checkpoint['optimizer'])
-            start_epoch = checkpoint['epoch']
-            start_step = checkpoint['step']
-
-        print('start_epoch: %d, start_step: %d' % (start_epoch, start_step))
-
-    # evaluate
-    if args.eval:
-        val_results = {}
-
-        if 'chairs' in args.val_dataset:
-            results_dict = validate_chairs(model_without_ddp,
-                                           with_speed_metric=args.with_speed_metric,
-                                           attn_splits_list=args.attn_splits_list,
-                                           corr_radius_list=args.corr_radius_list,
-                                           prop_radius_list=args.prop_radius_list,
-                                           )
-
-            val_results.update(results_dict)
-
-        if 'things' in args.val_dataset:
-            results_dict = validate_things(model_without_ddp,
-                                           padding_factor=args.padding_factor,
-                                           with_speed_metric=args.with_speed_metric,
-                                           attn_splits_list=args.attn_splits_list,
-                                           corr_radius_list=args.corr_radius_list,
-                                           prop_radius_list=args.prop_radius_list,
-                                           )
-            val_results.update(results_dict)
-
-        if 'sintel' in args.val_dataset:
-            results_dict = validate_sintel(model_without_ddp,
-                                           count_time=args.count_time,
-                                           padding_factor=args.padding_factor,
-                                           with_speed_metric=args.with_speed_metric,
-                                           evaluate_matched_unmatched=args.evaluate_matched_unmatched,
-                                           attn_splits_list=args.attn_splits_list,
-                                           corr_radius_list=args.corr_radius_list,
-                                           prop_radius_list=args.prop_radius_list,
-                                           )
-            val_results.update(results_dict)
-
-        if 'kitti' in args.val_dataset:
-            results_dict = validate_kitti(model_without_ddp,
-                                          padding_factor=args.padding_factor,
-                                          with_speed_metric=args.with_speed_metric,
-                                          attn_splits_list=args.attn_splits_list,
-                                          corr_radius_list=args.corr_radius_list,
-                                          prop_radius_list=args.prop_radius_list,
-                                          )
-            val_results.update(results_dict)
-
-        if args.save_eval_to_file:
-            misc.check_path(args.checkpoint_dir)
-            val_file = os.path.join(args.checkpoint_dir, 'val_results.txt')
-            with open(val_file, 'a') as f:
-                f.write('\neval results after training done\n\n')
-                metrics = ['chairs_epe', 'chairs_s0_10', 'chairs_s10_40', 'chairs_s40+',
-                           'things_clean_epe', 'things_clean_s0_10', 'things_clean_s10_40', 'things_clean_s40+',
-                           'things_final_epe', 'things_final_s0_10', 'things_final_s10_40', 'things_final_s40+',
-                           'sintel_clean_epe', 'sintel_clean_s0_10', 'sintel_clean_s10_40', 'sintel_clean_s40+',
-                           'sintel_final_epe', 'sintel_final_s0_10', 'sintel_final_s10_40', 'sintel_final_s40+',
-                           'kitti_epe', 'kitti_f1', 'kitti_s0_10', 'kitti_s10_40', 'kitti_s40+',
-                           ]
-                eval_metrics = []
-                for metric in metrics:
-                    if metric in val_results.keys():
-                        eval_metrics.append(metric)
-
-                metrics_values = [val_results[metric] for metric in eval_metrics]
-
-                num_metrics = len(eval_metrics)
-
-                # save as markdown format
-                f.write(("| {:>20} " * num_metrics + '\n').format(*eval_metrics))
-                f.write(("| {:20.3f} " * num_metrics).format(*metrics_values))
-
-                f.write('\n\n')
-
-        return
-
-    # Sintel and KITTI submission
-    if args.submission:
-        # NOTE: args.val_dataset is a list
-        if args.val_dataset[0] == 'sintel':
-            create_sintel_submission(model_without_ddp,
-                                     output_path=args.output_path,
-                                     padding_factor=args.padding_factor,
-                                     save_vis_flow=args.save_vis_flow,
-                                     no_save_flo=args.no_save_flo,
-                                     attn_splits_list=args.attn_splits_list,
-                                     corr_radius_list=args.corr_radius_list,
-                                     prop_radius_list=args.prop_radius_list,
-                                     )
-        elif args.val_dataset[0] == 'kitti':
-            create_kitti_submission(model_without_ddp,
-                                    output_path=args.output_path,
-                                    padding_factor=args.padding_factor,
-                                    save_vis_flow=args.save_vis_flow,
-                                    attn_splits_list=args.attn_splits_list,
-                                    corr_radius_list=args.corr_radius_list,
-                                    prop_radius_list=args.prop_radius_list,
-                                    )
-        else:
-            raise ValueError(f'Not supported dataset for submission')
-
-        return
-
-    # inferece on a dir
-    if args.inference_dir is not None:
-        inference_on_dir(model_without_ddp,
-                         inference_dir=args.inference_dir,
-                         output_path=args.output_path,
-                         padding_factor=args.padding_factor,
-                         inference_size=args.inference_size,
-                         paired_data=args.dir_paired_data,
-                         save_flo_flow=args.save_flo_flow,
-                         attn_splits_list=args.attn_splits_list,
-                         corr_radius_list=args.corr_radius_list,
-                         prop_radius_list=args.prop_radius_list,
-                         pred_bidir_flow=args.pred_bidir_flow,
-                         fwd_bwd_consistency_check=args.fwd_bwd_consistency_check,
-                         )
-
-        return
-
-    # training datset
-    train_dataset = build_train_dataset(args)
-    print('Number of training images:', len(train_dataset))
-
-    # Multi-processing
-    if args.distributed:
-        train_sampler = torch.utils.data.distributed.DistributedSampler(
-            train_dataset,
-            num_replicas=torch.cuda.device_count(),
-            rank=args.local_rank)
-    else:
-        train_sampler = None
-
-    shuffle = False if args.distributed else True
-    train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=args.batch_size,
-                                               shuffle=shuffle, num_workers=args.num_workers,
-                                               pin_memory=True, drop_last=True,
-                                               sampler=train_sampler)
-
-    last_epoch = start_step if args.resume and start_step > 0 else -1
-    lr_scheduler = torch.optim.lr_scheduler.OneCycleLR(
-        optimizer, args.lr,
-        args.num_steps + 10,
-        pct_start=0.05,
-        cycle_momentum=False,
-        anneal_strategy='cos',
-        last_epoch=last_epoch,
-    )
-
-    if args.local_rank == 0:
-        summary_writer = SummaryWriter(args.checkpoint_dir)
-        logger = Logger(lr_scheduler, summary_writer, args.summary_freq,
-                        start_step=start_step)
-
-    total_steps = start_step
-    epoch = start_epoch
-    print('Start training')
-
-    while total_steps < args.num_steps:
-        model.train()
-
-        # mannual change random seed for shuffling every epoch
-        if args.distributed:
-            train_sampler.set_epoch(epoch)
-
-        for i, sample in enumerate(train_loader):
-            img1, img2, flow_gt, valid = [x.to(device) for x in sample]
-
-            results_dict = model(img1, img2,
-                                 attn_splits_list=args.attn_splits_list,
-                                 corr_radius_list=args.corr_radius_list,
-                                 prop_radius_list=args.prop_radius_list,
-                                 )
-
-            flow_preds = results_dict['flow_preds']
-
-            loss, metrics = flow_loss_func(flow_preds, flow_gt, valid,
-                                           gamma=args.gamma,
-                                           max_flow=args.max_flow,
-                                           )
-
-            if isinstance(loss, float):
-                continue
-
-            if torch.isnan(loss):
-                continue
-
-            metrics.update({'total_loss': loss.item()})
-
-            # more efficient zero_grad
-            for param in model_without_ddp.parameters():
-                param.grad = None
-
-            loss.backward()
-
-            # Gradient clipping
-            torch.nn.utils.clip_grad_norm_(model.parameters(), args.grad_clip)
-
-            optimizer.step()
-
-            lr_scheduler.step()
-
-            if args.local_rank == 0:
-                logger.push(metrics)
-
-                logger.add_image_summary(img1, img2, flow_preds, flow_gt)
-
-            total_steps += 1
-
-            if total_steps % args.save_ckpt_freq == 0 or total_steps == args.num_steps:
-                if args.local_rank == 0:
-                    checkpoint_path = os.path.join(args.checkpoint_dir, 'step_%06d.pth' % total_steps)
-                    torch.save({
-                        'model': model_without_ddp.state_dict()
-                    }, checkpoint_path)
-
-            if total_steps % args.save_latest_ckpt_freq == 0:
-                checkpoint_path = os.path.join(args.checkpoint_dir, 'checkpoint_latest.pth')
-
-                if args.local_rank == 0:
-                    torch.save({
-                        'model': model_without_ddp.state_dict(),
-                        'optimizer': optimizer.state_dict(),
-                        'step': total_steps,
-                        'epoch': epoch,
-                    }, checkpoint_path)
-
-            if total_steps % args.val_freq == 0:
-                print('Start validation')
-
-                val_results = {}
-                # support validation on multiple datasets
-                if 'chairs' in args.val_dataset:
-                    results_dict = validate_chairs(model_without_ddp,
-                                                   with_speed_metric=args.with_speed_metric,
-                                                   attn_splits_list=args.attn_splits_list,
-                                                   corr_radius_list=args.corr_radius_list,
-                                                   prop_radius_list=args.prop_radius_list,
-                                                   )
-                    if args.local_rank == 0:
-                        val_results.update(results_dict)
-
-                if 'things' in args.val_dataset:
-                    results_dict = validate_things(model_without_ddp,
-                                                   padding_factor=args.padding_factor,
-                                                   with_speed_metric=args.with_speed_metric,
-                                                   attn_splits_list=args.attn_splits_list,
-                                                   corr_radius_list=args.corr_radius_list,
-                                                   prop_radius_list=args.prop_radius_list,
-                                                   )
-                    if args.local_rank == 0:
-                        val_results.update(results_dict)
-
-                if 'sintel' in args.val_dataset:
-                    results_dict = validate_sintel(model_without_ddp,
-                                                   count_time=args.count_time,
-                                                   padding_factor=args.padding_factor,
-                                                   with_speed_metric=args.with_speed_metric,
-                                                   evaluate_matched_unmatched=args.evaluate_matched_unmatched,
-                                                   attn_splits_list=args.attn_splits_list,
-                                                   corr_radius_list=args.corr_radius_list,
-                                                   prop_radius_list=args.prop_radius_list,
-                                                   )
-                    if args.local_rank == 0:
-                        val_results.update(results_dict)
-
-                if 'kitti' in args.val_dataset:
-                    results_dict = validate_kitti(model_without_ddp,
-                                                  padding_factor=args.padding_factor,
-                                                  with_speed_metric=args.with_speed_metric,
-                                                  attn_splits_list=args.attn_splits_list,
-                                                  corr_radius_list=args.corr_radius_list,
-                                                  prop_radius_list=args.prop_radius_list,
-                                                  )
-                    if args.local_rank == 0:
|
488 |
-
val_results.update(results_dict)
|
489 |
-
|
490 |
-
if args.local_rank == 0:
|
491 |
-
logger.write_dict(val_results)
|
492 |
-
|
493 |
-
# Save validation results
|
494 |
-
val_file = os.path.join(args.checkpoint_dir, 'val_results.txt')
|
495 |
-
with open(val_file, 'a') as f:
|
496 |
-
f.write('step: %06d\n' % total_steps)
|
497 |
-
if args.evaluate_matched_unmatched:
|
498 |
-
metrics = ['chairs_epe',
|
499 |
-
'chairs_s0_10', 'chairs_s10_40', 'chairs_s40+',
|
500 |
-
'things_clean_epe', 'things_clean_s0_10', 'things_clean_s10_40',
|
501 |
-
'things_clean_s40+',
|
502 |
-
'sintel_clean_epe', 'sintel_clean_matched', 'sintel_clean_unmatched',
|
503 |
-
'sintel_clean_s0_10', 'sintel_clean_s10_40',
|
504 |
-
'sintel_clean_s40+',
|
505 |
-
'sintel_final_epe', 'sintel_final_matched', 'sintel_final_unmatched',
|
506 |
-
'sintel_final_s0_10', 'sintel_final_s10_40',
|
507 |
-
'sintel_final_s40+',
|
508 |
-
'kitti_epe', 'kitti_f1', 'kitti_s0_10', 'kitti_s10_40', 'kitti_s40+',
|
509 |
-
]
|
510 |
-
else:
|
511 |
-
metrics = ['chairs_epe', 'chairs_s0_10', 'chairs_s10_40', 'chairs_s40+',
|
512 |
-
'things_clean_epe', 'things_clean_s0_10', 'things_clean_s10_40',
|
513 |
-
'things_clean_s40+',
|
514 |
-
'sintel_clean_epe', 'sintel_clean_s0_10', 'sintel_clean_s10_40',
|
515 |
-
'sintel_clean_s40+',
|
516 |
-
'sintel_final_epe', 'sintel_final_s0_10', 'sintel_final_s10_40',
|
517 |
-
'sintel_final_s40+',
|
518 |
-
'kitti_epe', 'kitti_f1', 'kitti_s0_10', 'kitti_s10_40', 'kitti_s40+',
|
519 |
-
]
|
520 |
-
|
521 |
-
eval_metrics = []
|
522 |
-
for metric in metrics:
|
523 |
-
if metric in val_results.keys():
|
524 |
-
eval_metrics.append(metric)
|
525 |
-
|
526 |
-
metrics_values = [val_results[metric] for metric in eval_metrics]
|
527 |
-
|
528 |
-
num_metrics = len(eval_metrics)
|
529 |
-
|
530 |
-
# save as markdown format
|
531 |
-
if args.evaluate_matched_unmatched:
|
532 |
-
f.write(("| {:>25} " * num_metrics + '\n').format(*eval_metrics))
|
533 |
-
f.write(("| {:25.3f} " * num_metrics).format(*metrics_values))
|
534 |
-
else:
|
535 |
-
f.write(("| {:>20} " * num_metrics + '\n').format(*eval_metrics))
|
536 |
-
f.write(("| {:20.3f} " * num_metrics).format(*metrics_values))
|
537 |
-
|
538 |
-
f.write('\n\n')
|
539 |
-
|
540 |
-
model.train()
|
541 |
-
|
542 |
-
if total_steps >= args.num_steps:
|
543 |
-
print('Training done')
|
544 |
-
|
545 |
-
return
|
546 |
-
|
547 |
-
epoch += 1
|
548 |
-
|
549 |
-
|
550 |
-
if __name__ == '__main__':
|
551 |
-
parser = get_args_parser()
|
552 |
-
args = parser.parse_args()
|
553 |
-
|
554 |
-
if 'LOCAL_RANK' not in os.environ:
|
555 |
-
os.environ['LOCAL_RANK'] = str(args.local_rank)
|
556 |
-
|
557 |
-
main(args)
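The deleted training script above writes validation metrics as a markdown-style table by repeating a single format specifier per column. A minimal standalone sketch of that formatting trick follows; the metric names and values here are placeholders, not the script's full metric set:

```python
# Build one header row and one value row, markdown-style, from parallel lists.
# Metric names/values are hypothetical stand-ins for illustration.
eval_metrics = ['chairs_epe', 'kitti_epe', 'kitti_f1']
metrics_values = [1.234, 5.678, 9.0]

num_metrics = len(eval_metrics)

# "| {:>20} " repeated N times right-aligns each column to 20 characters,
# mirroring the f.write() calls in the script above.
header = ("| {:>20} " * num_metrics).format(*eval_metrics)
values = ("| {:20.3f} " * num_metrics).format(*metrics_values)

print(header)
print(values)
```

Repeating the specifier keeps header and value columns aligned without any table library, which is why the script can append rows to a plain text file.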
spaces/AriaMei/TTSdemo/mel_processing.py
DELETED
@@ -1,119 +0,0 @@
-import math
-import os
-import random
-import torch
-from torch import nn
-import torch.nn.functional as F
-import torch.utils.data
-import numpy as np
-
-import logging
-
-numba_logger = logging.getLogger('numba')
-numba_logger.setLevel(logging.WARNING)
-import warnings
-warnings.filterwarnings('ignore')
-import librosa
-import librosa.util as librosa_util
-from librosa.util import normalize, pad_center, tiny
-from scipy.signal import get_window
-from scipy.io.wavfile import read
-from librosa.filters import mel as librosa_mel_fn
-
-MAX_WAV_VALUE = 32768.0
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
-    """
-    PARAMS
-    ------
-    C: compression factor
-    """
-    return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
-    """
-    PARAMS
-    ------
-    C: compression factor used to compress
-    """
-    return torch.exp(x) / C
-
-
-def spectral_normalize_torch(magnitudes):
-    output = dynamic_range_compression_torch(magnitudes)
-    return output
-
-
-def spectral_de_normalize_torch(magnitudes):
-    output = dynamic_range_decompression_torch(magnitudes)
-    return output
-
-
-mel_basis = {}
-hann_window = {}
-
-
-def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
-    if torch.min(y) < -1.:
-        print('min value is ', torch.min(y))
-    if torch.max(y) > 1.:
-        print('max value is ', torch.max(y))
-
-    global hann_window
-    dtype_device = str(y.dtype) + '_' + str(y.device)
-    wnsize_dtype_device = str(win_size) + '_' + dtype_device
-    if wnsize_dtype_device not in hann_window:
-        hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
-    y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
-    y = y.squeeze(1)
-
-    spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
-                      center=center, pad_mode='reflect', normalized=False, onesided=True)
-
-    spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-    return spec
-
-
-def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
-    global mel_basis
-    dtype_device = str(spec.dtype) + '_' + str(spec.device)
-    fmax_dtype_device = str(fmax) + '_' + dtype_device
-    if fmax_dtype_device not in mel_basis:
-        mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
-        mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device)
-    spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
-    spec = spectral_normalize_torch(spec)
-    return spec
-
-
-def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False):
-    if torch.min(y) < -1.:
-        print('min value is ', torch.min(y))
-    if torch.max(y) > 1.:
-        print('max value is ', torch.max(y))
-
-    global mel_basis, hann_window
-    dtype_device = str(y.dtype) + '_' + str(y.device)
-    fmax_dtype_device = str(fmax) + '_' + dtype_device
-    wnsize_dtype_device = str(win_size) + '_' + dtype_device
-    if fmax_dtype_device not in mel_basis:
-        mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
-        mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device)
-    if wnsize_dtype_device not in hann_window:
-        hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device)
-
-    y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect')
-    y = y.squeeze(1)
-
-    spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
-                      center=center, pad_mode='reflect', normalized=False, onesided=True)
-
-    spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
-
-    spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
-    spec = spectral_normalize_torch(spec)
-
-    return spec
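The compression/decompression pair in the deleted mel_processing.py is a plain log/exp transform with a clamp on tiny magnitudes. A scalar sketch in plain Python (no torch) showing that the two functions invert each other, using the same default C and clip value:

```python
import math

def dynamic_range_compression(x, C=1, clip_val=1e-5):
    # Log-compress, clamping tiny magnitudes to avoid log(0).
    return math.log(max(x, clip_val) * C)

def dynamic_range_decompression(x, C=1):
    # Exact inverse of the compression above (for x >= clip_val).
    return math.exp(x) / C

mag = 0.25
compressed = dynamic_range_compression(mag)
restored = dynamic_range_decompression(compressed)
print(round(restored, 6))  # round-trips back to 0.25
```

This is the same normalization `spectral_normalize_torch` applies element-wise to spectrogram magnitudes before they are fed to the model.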
spaces/Arnx/MusicGenXvAKN/audiocraft/models/loaders.py
DELETED
@@ -1,94 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Utility functions to load from the checkpoints.
-Each checkpoint is a torch.saved dict with the following keys:
-- 'xp.cfg': the hydra config as dumped during training. This should be used
-    to rebuild the object using the audiocraft.models.builders functions,
-- 'model_best_state': a readily loadable best state for the model, including
-    the conditioner. The model obtained from `xp.cfg` should be compatible
-    with this state dict. In the case of a LM, the encodec model would not be
-    bundled along but instead provided separately.
-
-Those functions also support loading from a remote location with the Torch Hub API.
-They also support overriding some parameters, in particular the device and dtype
-of the returned model.
-"""
-
-from pathlib import Path
-from huggingface_hub import hf_hub_download
-import typing as tp
-import os
-
-from omegaconf import OmegaConf
-import torch
-
-from . import builders
-
-
-HF_MODEL_CHECKPOINTS_MAP = {
-    "small": "facebook/musicgen-small",
-    "medium": "facebook/musicgen-medium",
-    "large": "facebook/musicgen-large",
-    "melody": "facebook/musicgen-melody",
-}
-
-
-def _get_state_dict(
-    file_or_url_or_id: tp.Union[Path, str],
-    filename: tp.Optional[str] = None,
-    device='cpu',
-    cache_dir: tp.Optional[str] = None,
-):
-    # Return the state dict either from a file or url
-    file_or_url_or_id = str(file_or_url_or_id)
-    assert isinstance(file_or_url_or_id, str)
-
-    if os.path.isfile(file_or_url_or_id):
-        return torch.load(file_or_url_or_id, map_location=device)
-
-    if os.path.isdir(file_or_url_or_id):
-        file = f"{file_or_url_or_id}/{filename}"
-        return torch.load(file, map_location=device)
-
-    elif file_or_url_or_id.startswith('https://'):
-        return torch.hub.load_state_dict_from_url(file_or_url_or_id, map_location=device, check_hash=True)
-
-    elif file_or_url_or_id in HF_MODEL_CHECKPOINTS_MAP:
-        assert filename is not None, "filename needs to be defined if using HF checkpoints"
-
-        repo_id = HF_MODEL_CHECKPOINTS_MAP[file_or_url_or_id]
-        file = hf_hub_download(repo_id=repo_id, filename=filename, cache_dir=cache_dir)
-        return torch.load(file, map_location=device)
-
-    else:
-        raise ValueError(f"{file_or_url_or_id} is not a valid name, path or link that can be loaded.")
-
-
-def load_compression_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None):
-    pkg = _get_state_dict(file_or_url_or_id, filename="compression_state_dict.bin", cache_dir=cache_dir)
-    cfg = OmegaConf.create(pkg['xp.cfg'])
-    cfg.device = str(device)
-    model = builders.get_compression_model(cfg)
-    model.load_state_dict(pkg['best_state'])
-    model.eval()
-    return model
-
-
-def load_lm_model(file_or_url_or_id: tp.Union[Path, str], device='cpu', cache_dir: tp.Optional[str] = None):
-    pkg = _get_state_dict(file_or_url_or_id, filename="state_dict.bin", cache_dir=cache_dir)
-    cfg = OmegaConf.create(pkg['xp.cfg'])
-    cfg.device = str(device)
-    if cfg.device == 'cpu':
-        cfg.dtype = 'float32'
-    else:
-        cfg.dtype = 'float16'
-    model = builders.get_lm_model(cfg)
-    model.load_state_dict(pkg['best_state'])
-    model.eval()
-    model.cfg = cfg
-    return model
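The `_get_state_dict` helper above dispatches on whether its argument is a local file, a URL, or a known Hugging Face model name, in that order. A torch-free sketch of just that routing logic; the return values are placeholder strings, not real checkpoints:

```python
import os

# Same name map as the deleted loaders.py.
HF_MODEL_CHECKPOINTS_MAP = {
    "small": "facebook/musicgen-small",
    "medium": "facebook/musicgen-medium",
    "large": "facebook/musicgen-large",
    "melody": "facebook/musicgen-melody",
}

def resolve_checkpoint_source(file_or_url_or_id: str) -> str:
    # Mirrors the branch order of _get_state_dict: local path, then URL, then HF id.
    if os.path.isfile(file_or_url_or_id):
        return "local-file"
    if file_or_url_or_id.startswith("https://"):
        return "url"
    if file_or_url_or_id in HF_MODEL_CHECKPOINTS_MAP:
        return "hf:" + HF_MODEL_CHECKPOINTS_MAP[file_or_url_or_id]
    raise ValueError(f"{file_or_url_or_id} is not a valid name, path or link that can be loaded.")

print(resolve_checkpoint_source("melody"))       # hf:facebook/musicgen-melody
print(resolve_checkpoint_source("https://x/y"))  # url
```

Keeping the branches ordered this way lets a bare model name like `"melody"` fall through to the Hugging Face map only after path and URL checks fail.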
spaces/Artrajz/vits-simple-api/bert_vits2/utils.py
DELETED
@@ -1,70 +0,0 @@
-import os
-import sys
-import logging
-import torch
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
-logger = logging
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False, version=None):
-    assert os.path.isfile(checkpoint_path)
-    checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
-    iteration = checkpoint_dict['iteration']
-    learning_rate = checkpoint_dict['learning_rate']
-    if optimizer is not None and not skip_optimizer and checkpoint_dict['optimizer'] is not None:
-        optimizer.load_state_dict(checkpoint_dict['optimizer'])
-    elif optimizer is None and not skip_optimizer:
-        # else: #Disable this line if Infer ,and enable the line upper
-        new_opt_dict = optimizer.state_dict()
-        new_opt_dict_params = new_opt_dict['param_groups'][0]['params']
-        new_opt_dict['param_groups'] = checkpoint_dict['optimizer']['param_groups']
-        new_opt_dict['param_groups'][0]['params'] = new_opt_dict_params
-        optimizer.load_state_dict(new_opt_dict)
-    saved_state_dict = checkpoint_dict['model']
-    if hasattr(model, 'module'):
-        state_dict = model.module.state_dict()
-    else:
-        state_dict = model.state_dict()
-    new_state_dict = {}
-    for k, v in state_dict.items():
-        try:
-            # assert "emb_g" not in k
-            # print("load", k)
-            new_state_dict[k] = saved_state_dict[k]
-            assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape)
-        except:
-            # Handle legacy model versions and provide appropriate warnings
-            if "ja_bert_proj" in k:
-                v = torch.zeros_like(v)
-                if version is None:
-                    logger.error(f"{k} is not in the checkpoint")
-                    logger.warning(
-                        f"If you're using an older version of the model, consider adding the \"version\" parameter to the model's config.json under the \"data\" section. For instance: \"legacy_version\": \"1.0.1\"")
-            elif "flow.flows.0.enc.attn_layers.3" in k:
-                logger.error(f"{k} is not in the checkpoint")
-                logger.warning(
-                    f"If you're using a transitional version, please add the \"version\": \"1.1.0-transition\" parameter within the \"data\" section of the model's config.json.")
-            else:
-                logger.error(f"{k} is not in the checkpoint")
-
-            new_state_dict[k] = v
-    if hasattr(model, 'module'):
-        model.module.load_state_dict(new_state_dict, strict=False)
-    else:
-        model.load_state_dict(new_state_dict, strict=False)
-    # print("load ")
-    logger.info("Loaded checkpoint '{}' (iteration {})".format(
-        checkpoint_path, iteration))
-    return model, optimizer, learning_rate, iteration
-
-
-def process_legacy_versions(hps):
-    version = getattr(hps, "version", getattr(hps.data, "version", None))
-    if version:
-        prefix = version[0].lower()
-        if prefix == "v":
-            version = version[1:]
-    return version
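`process_legacy_versions` above strips an optional leading "v"/"V" from a version string read off the hyperparameters. A standalone sketch of that normalization step; the function name here is a stand-in and it takes the raw string directly rather than an `hps` object:

```python
def normalize_version(version):
    # Drop a leading 'v' or 'V', mirroring process_legacy_versions above.
    # Falsy inputs (None, empty string) pass through unchanged.
    if version:
        prefix = version[0].lower()
        if prefix == "v":
            version = version[1:]
    return version

print(normalize_version("v1.1.0"))  # 1.1.0
print(normalize_version("1.0.1"))   # 1.0.1
```

Normalizing once up front lets the legacy-version checks elsewhere compare plain numeric strings instead of handling both "1.0.1" and "v1.0.1".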
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/chardet/langbulgarianmodel.py
DELETED
The diff for this file is too large to render.
See raw diff
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/rotated_boxes.py
DELETED
@@ -1,21 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from __future__ import absolute_import, division, print_function, unicode_literals
-import torch
-
-
-def pairwise_iou_rotated(boxes1, boxes2):
-    """
-    Return intersection-over-union (Jaccard index) of boxes.
-
-    Both sets of boxes are expected to be in
-    (x_center, y_center, width, height, angle) format.
-
-    Arguments:
-        boxes1 (Tensor[N, 5])
-        boxes2 (Tensor[M, 5])
-
-    Returns:
-        iou (Tensor[N, M]): the NxM matrix containing the pairwise
-            IoU values for every element in boxes1 and boxes2
-    """
-    return torch.ops.detectron2.box_iou_rotated(boxes1, boxes2)
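The deleted module delegates rotated-box IoU to a compiled detectron2 op. For intuition, here is a pure-Python pairwise IoU for the simpler axis-aligned case, with boxes given as (x1, y1, x2, y2); it is an analogue of the NxM output shape, not the rotated computation itself:

```python
def pairwise_iou(boxes1, boxes2):
    # Returns an N x M nested list of IoU values for axis-aligned boxes.
    result = []
    for (ax1, ay1, ax2, ay2) in boxes1:
        row = []
        for (bx1, by1, bx2, by2) in boxes2:
            # Intersection rectangle (zero area if the boxes do not overlap).
            iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
            ih = max(0.0, min(ay2, by2) - max(ay1, by1))
            inter = iw * ih
            area_a = (ax2 - ax1) * (ay2 - ay1)
            area_b = (bx2 - bx1) * (by2 - by1)
            union = area_a + area_b - inter
            row.append(inter / union if union > 0 else 0.0)
        result.append(row)
    return result

iou = pairwise_iou([(0, 0, 2, 2)], [(0, 0, 2, 2), (1, 1, 3, 3)])
print(iou)  # [[1.0, 0.14285714285714285]]
```

The rotated variant replaces the min/max intersection rectangle with a polygon-clipping step, which is why detectron2 implements it as a compiled op rather than in Python.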
spaces/Benson/text-generation/Examples/Descargar Facebook Lite Apk Versin Antigua.md
DELETED
@@ -1,118 +0,0 @@
|
|
1 |
-
|
2 |
-
<h1>Cómo descargar Facebook Lite APK versión anterior</h1>
|
3 |
-
<p>Facebook Lite es una aplicación popular que te permite usar Facebook en tu dispositivo Android sin consumir demasiados datos o espacio de almacenamiento. Sin embargo, a veces es posible que tenga que descargar una versión antigua de Facebook Lite por varias razones, como problemas de compatibilidad, problemas de rendimiento o preferencias personales. En este artículo, le mostraremos cómo encontrar y descargar la versión antigua de Facebook Lite APK de diferentes fuentes, y cómo instalarlo en su dispositivo Android. </p>
|
4 |
-
<h2>¿Qué es Facebook Lite y por qué es posible que necesite una versión antigua</h2>
|
5 |
-
<p>Facebook Lite es una versión de Facebook diseñada para trabajar más rápido y usar menos datos en dispositivos de gama baja o lenta. Tiene muchas características y beneficios que lo convierten en una gran alternativa a la aplicación regular de Facebook, como:</p>
|
6 |
-
<h2>descargar facebook lite apk versión antigua</h2><br /><p><b><b>Download</b> ○ <a href="https://bltlly.com/2v6Lhe">https://bltlly.com/2v6Lhe</a></b></p><br /><br />
|
7 |
-
<ul>
|
8 |
-
<li>Utiliza menos de 2 MB de espacio de almacenamiento en su dispositivo. </li>
|
9 |
-
<li>Se carga rápidamente y funciona sin problemas en redes 2G y áreas con mala conexión a Internet. </li>
|
10 |
-
<li>Guarda sus datos mostrando solo características básicas e imágenes de baja resolución. </li>
|
11 |
-
<li>Soporta la mayoría de las funciones de Facebook, como mensajería, publicación, me gusta, comentarios, compartir, etc.</li>
|
12 |
-
<li>Funciona en casi cualquier dispositivo Android, incluso aquellos con versiones anteriores de Android OS.</li>
|
13 |
-
</ul>
|
14 |
-
<p>Sin embargo, puede haber algunas situaciones en las que prefieras usar una versión antigua de Facebook Lite en lugar de la última, como:</p>
|
15 |
-
<ul>
|
16 |
-
<li>Tienes un dispositivo muy antiguo que no soporta la última versión de Facebook Lite.</li>
|
17 |
-
<li>Usted experimenta errores o problemas técnicos con la última versión de Facebook Lite que afectan a su experiencia de usuario. </li>
|
18 |
-
<li>Prefieres la interfaz o características de una versión antigua de Facebook Lite sobre la nueva. </li>
|
19 |
-
<li>Desea evitar actualizaciones que puedan cambiar sus ajustes o preferencias. </li>
|
20 |
-
</ul>
|
21 |
-
<h2>Cómo encontrar y descargar Facebook Lite APK versión anterior</h2>
|
22 |
-
|
23 |
-
<h3>Uso del sitio web de Uptodown</h3>
|
24 |
-
<p>Uptodown es un sitio web que proporciona descargas gratuitas de aplicaciones y juegos para Android, iOS, Windows, Mac y Linux. Tiene una gran colección de versiones antiguas de Facebook Lite, que se remonta a 2015. Para descargar una versión antigua de Facebook Lite de Uptodown, sigue estos pasos:</p>
|
25 |
-
<ol>
|
26 |
-
<li>Ir a <a href="( 1 )">https://facebook-lite.en.uptodown.com/android/versions</a>. </li>
|
27 |
-
<li>Desplácese hacia abajo y encuentre la versión de Facebook Lite que desea descargar. Puede ver la fecha de lanzamiento, el tamaño y la compatibilidad con Android de cada versión. </li>
|
28 |
-
<li>Haga clic en el botón verde "Descargar" junto a la versión que desee. </li>
|
29 |
-
<li>Espere a que termine la descarga. El archivo se guardará en la carpeta "Descargas" de su dispositivo. </li>
|
30 |
-
</ol>
|
31 |
-
<h3>Uso del sitio web de APKPure</h3>
|
32 |
-
<p>APKPure es otro sitio web que ofrece descargas gratuitas de aplicaciones y juegos para dispositivos Android. También tiene una amplia gama de versiones antiguas de Facebook Lite, que se remonta a 2015. Para descargar una versión antigua de Facebook Lite desde APKPure, sigue estos pasos:</p>
|
33 |
-
<ol>
|
34 |
-
<li>Ir a <a href=">https://apkpure.com/facebook-lite/com.facebook.lit.lite/versions</a>. </li>
|
35 |
-
<li <li>Desplácese hacia abajo y encuentre la versión de Facebook Lite que desea descargar. Puede ver la fecha de lanzamiento, el tamaño y la compatibilidad con Android de cada versión. </li>
|
36 |
-
<li>Haga clic en el botón azul "Descargar APK" junto a la versión que desee. </li>
|
37 |
-
<li>Espere a que termine la descarga. El archivo se guardará en la carpeta "Descargas" de su dispositivo. </li>
|
38 |
-
</ol>
|
39 |
-
<h3>Uso del sitio web de APKMirror</h3>
|
40 |
-
<p>APKMirror es otro sitio web que ofrece descargas gratuitas de aplicaciones y juegos para dispositivos Android. También tiene un enorme archivo de versiones antiguas de Facebook Lite, que se remonta a 2015. Para descargar una versión antigua de Facebook Lite desde APKMirror, sigue estos pasos:</p>
|
41 |
-
<ol>
|
42 |
-
<li>Ir a <a href=">https://www.apkmirror.com/apk/facebook-2/lite/</a>. </li>
|
43 |
-
|
44 |
-
<li>Haga clic en la versión que desee. Será redirigido a una nueva página con más detalles sobre la aplicación. </li>
|
45 |
-
<li>Haga clic en el botón rojo "Descargar APK" en la parte inferior de la página. </li>
|
46 |
-
<li>Espere a que termine la descarga. El archivo se guardará en la carpeta "Descargas" de su dispositivo. </li>
|
47 |
-
</ol>
|
48 |
-
<h2>Cómo instalar Facebook Lite APK versión antigua en su dispositivo Android</h2>
|
49 |
-
<p>Una vez que haya descargado el archivo de versión antigua de Facebook Lite APK de una de las fuentes anteriores, tendrá que instalarlo en su dispositivo Android. Para ello, deberá seguir estos pasos:</p>
|
50 |
-
<h3>Habilitar la opción de fuentes desconocidas</h3>
|
51 |
-
<p>De forma predeterminada, tu dispositivo Android no te permitirá instalar aplicaciones desde fuentes distintas de Google Play Store. Esta es una medida de seguridad para evitar que malware y virus infecten tu dispositivo. Sin embargo, si confías en la fuente del archivo APK, puedes habilitar la opción de instalar aplicaciones de fuentes desconocidas. Para hacerlo, sigue estos pasos:</p>
|
52 |
-
<ol>
|
53 |
-
<li>Ir a la aplicación "Configuración" de su dispositivo. </li>
|
54 |
-
<li>Toque en "Seguridad" o "Privacidad" dependiendo del modelo de su dispositivo. </li>
|
55 |
-
<li> Buscar y alternar en la opción que dice "Fuentes desconocidas" o "Permitir la instalación de aplicaciones de fuentes desconocidas". </li>
|
56 |
-
<li> Aparecerá un mensaje de advertencia. Toque en "OK" o "Permitir" para confirmar. </li>
|
57 |
-
</ol>
|
58 |
-
<h3>Localizar y abrir el archivo descargado</h3>
|
59 |
-
<p>Siguiente, tendrá que encontrar y abrir el archivo de versión antigua de Facebook Lite APK que ha descargado. Para hacer eso, siga estos pasos:</p>
|
60 |
-
<p></p>
|
61 |
-
<ol>
|
62 |
-
<li>Vaya a la aplicación "Administrador de archivos" de su dispositivo o a cualquier otra aplicación que pueda acceder a sus archivos. </li>
|
63 |
-
<li>Vaya a la carpeta "Descargas" de su dispositivo o donde quiera que haya guardado el archivo. </li>
|
64 |
-
<li>Encuentre y toque en el archivo que tiene el nombre "Facebook Lite" y el número de versión que ha descargado. </li>
|
65 |
-
</ol>
|
66 |
-
<h3>Siguiendo los pasos de instalación</h3>
|
67 |
-
<p>Finalmente, tendrá que seguir los pasos de instalación para completar el proceso. Para ello, siga estos pasos:</p>
|
68 |
-
<ol>
|
69 |
-
|
70 |
-
<li> La instalación comenzará y tomará unos segundos o minutos dependiendo del tamaño del dispositivo y del archivo. </li>
|
71 |
-
<li> Aparecerá una pantalla diciendo que la aplicación ha sido instalada. Toque en "Abrir" para iniciar la aplicación o "Hecho" para salir. </li>
|
72 |
-
</ol>
|
73 |
-
<h2>Conclusión</h2>
|
74 |
-
<p>En este artículo, le hemos mostrado cómo descargar la versión antigua de Facebook Lite APK de diferentes fuentes, y cómo instalarlo en su dispositivo Android. Esperamos que esta guía haya sido útil e informativa para usted. Si tiene alguna pregunta o comentario, no dude en dejar un comentario a continuación. </p>
|
75 |
-
<h2>Preguntas frecuentes</h2>
|
76 |
-
<ul>
|
77 |
-
<li><b>Q: ¿Es seguro descargar e instalar Facebook Lite APK versión antigua? </b></li>
|
78 |
-
<li>A: Depende de dónde descargue el archivo. Algunos sitios web pueden ofrecer archivos falsos o maliciosos que pueden dañar su dispositivo o robar sus datos. Por lo tanto, es importante descargar solo de fuentes confiables y de buena reputación, como las que hemos mencionado en este artículo. Además, asegúrese de escanear el archivo con una aplicación antivirus antes de instalarlo. </li>
|
79 |
-
<li><b>Q: ¿Cuáles son las ventajas y desventajas de usar la versión antigua de Facebook Lite APK? </b></li>
|
80 |
-
<li>A: Some of the advantages of using an old version of the Facebook Lite APK are: <ul>
<li>You can use it on devices that do not support the latest version of Facebook Lite</li>
<li>You can avoid bugs or technical issues that may occur in the latest version of Facebook Lite</li>
<li>You can keep the interface or features of an older version of Facebook Lite that you prefer over the new one</li>
<li>You can save data and storage space by not updating the app</li>
</ul>
Some of the disadvantages of using an old version of the Facebook Lite APK are: <ul>
<li>You may miss out on new features or improvements added in the latest version of Facebook Lite</li>
<li>You may run into compatibility issues with some devices or with other apps that require the latest version of Facebook Lite</li>
</ul>
</li>
<li><b>Q: How can I update an old version of the Facebook Lite APK to the latest version?</b></li>
<li>A: If you want to update an old version of the Facebook Lite APK to the latest version, you have two options: <ol>
<li>Uninstall the old version and install the latest version from the Google Play Store or from any of the sources mentioned in this article.</li>
<li>Download and install the latest version over the old one without uninstalling it. This keeps your data and settings intact, but it may cause errors or conflicts.</li>
</ol>
</li>
<li><b>Q: How can I remove an old version of the Facebook Lite APK from my device?</b></li>
<li>A: If you want to remove an old version of the Facebook Lite APK from your device, follow these steps: <ol>
<li>Open your device's "Settings" app.</li>
<li>Tap "Apps" or "Applications", depending on your device model.</li>
<li>Find and tap "Facebook Lite".</li>
<li>Tap "Uninstall".</li>
<li>A confirmation message will appear. Tap "OK" or "Uninstall" to confirm.</li>
</ol>
</li>
<li><b>Q: How can I contact Facebook Lite support if I have a problem or question?</b></li>
<li>A: If you have a problem or question about Facebook Lite, you can contact Facebook Lite support by following these steps: <ol>
<li>Open the Facebook Lite app on your device.</li>
<li>Tap the menu icon (three horizontal lines) in the top-right corner of the screen.</li>
<li>Scroll down and tap "Help &amp; Support".</li>
<li>Tap "Report a Problem".</li>
<li>Select the type of problem you want to report, such as "Something Isn't Working", "Abusive Content", or "General Feedback".</li>
<li>Fill in the details of your problem, such as a description, screenshots, or logs.</li>
<li>Tap "Submit".</li>
</ol>
You can also visit the Facebook Lite Help Center at <a href="https://www.facebook.com/help/lite/">https://www.facebook.com/help/lite/</a> for more information and FAQs.</li>
</ul></p>
spaces/Big-Web/MMSD/env/Lib/site-packages/s3transfer/copies.py
DELETED
@@ -1,382 +0,0 @@
# Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
#     http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import copy
import math

from s3transfer.tasks import (
    CompleteMultipartUploadTask,
    CreateMultipartUploadTask,
    SubmissionTask,
    Task,
)
from s3transfer.utils import (
    ChunksizeAdjuster,
    calculate_range_parameter,
    get_callbacks,
    get_filtered_dict,
)


class CopySubmissionTask(SubmissionTask):
    """Task for submitting tasks to execute a copy"""

    EXTRA_ARGS_TO_HEAD_ARGS_MAPPING = {
        'CopySourceIfMatch': 'IfMatch',
        'CopySourceIfModifiedSince': 'IfModifiedSince',
        'CopySourceIfNoneMatch': 'IfNoneMatch',
        'CopySourceIfUnmodifiedSince': 'IfUnmodifiedSince',
        'CopySourceSSECustomerKey': 'SSECustomerKey',
        'CopySourceSSECustomerAlgorithm': 'SSECustomerAlgorithm',
        'CopySourceSSECustomerKeyMD5': 'SSECustomerKeyMD5',
        'RequestPayer': 'RequestPayer',
        'ExpectedBucketOwner': 'ExpectedBucketOwner',
    }

    UPLOAD_PART_COPY_ARGS = [
        'CopySourceIfMatch',
        'CopySourceIfModifiedSince',
        'CopySourceIfNoneMatch',
        'CopySourceIfUnmodifiedSince',
        'CopySourceSSECustomerKey',
        'CopySourceSSECustomerAlgorithm',
        'CopySourceSSECustomerKeyMD5',
        'SSECustomerKey',
        'SSECustomerAlgorithm',
        'SSECustomerKeyMD5',
        'RequestPayer',
        'ExpectedBucketOwner',
    ]

    CREATE_MULTIPART_ARGS_BLACKLIST = [
        'CopySourceIfMatch',
        'CopySourceIfModifiedSince',
        'CopySourceIfNoneMatch',
        'CopySourceIfUnmodifiedSince',
        'CopySourceSSECustomerKey',
        'CopySourceSSECustomerAlgorithm',
        'CopySourceSSECustomerKeyMD5',
        'MetadataDirective',
        'TaggingDirective',
    ]

    COMPLETE_MULTIPART_ARGS = ['RequestPayer', 'ExpectedBucketOwner']

    def _submit(
        self, client, config, osutil, request_executor, transfer_future
    ):
        """
        :param client: The client associated with the transfer manager

        :type config: s3transfer.manager.TransferConfig
        :param config: The transfer config associated with the transfer
            manager

        :type osutil: s3transfer.utils.OSUtil
        :param osutil: The os utility associated to the transfer manager

        :type request_executor: s3transfer.futures.BoundedExecutor
        :param request_executor: The request executor associated with the
            transfer manager

        :type transfer_future: s3transfer.futures.TransferFuture
        :param transfer_future: The transfer future associated with the
            transfer request that tasks are being submitted for
        """
        # Determine the size if it was not provided
        if transfer_future.meta.size is None:
            # If a size was not provided figure out the size for the
            # user. Note that we will only use the client provided to
            # the TransferManager. If the object is outside of the region
            # of the client, they may have to provide the file size themselves
            # with a completely new client.
            call_args = transfer_future.meta.call_args
            head_object_request = (
                self._get_head_object_request_from_copy_source(
                    call_args.copy_source
                )
            )
            extra_args = call_args.extra_args

            # Map any values that may be used in the head object that is
            # used in the copy object
            for param, value in extra_args.items():
                if param in self.EXTRA_ARGS_TO_HEAD_ARGS_MAPPING:
                    head_object_request[
                        self.EXTRA_ARGS_TO_HEAD_ARGS_MAPPING[param]
                    ] = value

            response = call_args.source_client.head_object(
                **head_object_request
            )
            transfer_future.meta.provide_transfer_size(
                response['ContentLength']
            )

        # If it is greater than threshold do a multipart copy, otherwise
        # do a regular copy object.
        if transfer_future.meta.size < config.multipart_threshold:
            self._submit_copy_request(
                client, config, osutil, request_executor, transfer_future
            )
        else:
            self._submit_multipart_request(
                client, config, osutil, request_executor, transfer_future
            )

    def _submit_copy_request(
        self, client, config, osutil, request_executor, transfer_future
    ):
        call_args = transfer_future.meta.call_args

        # Get the needed progress callbacks for the task
        progress_callbacks = get_callbacks(transfer_future, 'progress')

        # Submit the request of a single copy.
        self._transfer_coordinator.submit(
            request_executor,
            CopyObjectTask(
                transfer_coordinator=self._transfer_coordinator,
                main_kwargs={
                    'client': client,
                    'copy_source': call_args.copy_source,
                    'bucket': call_args.bucket,
                    'key': call_args.key,
                    'extra_args': call_args.extra_args,
                    'callbacks': progress_callbacks,
                    'size': transfer_future.meta.size,
                },
                is_final=True,
            ),
        )

    def _submit_multipart_request(
        self, client, config, osutil, request_executor, transfer_future
    ):
        call_args = transfer_future.meta.call_args

        # Submit the request to create a multipart upload and make sure it
        # does not include any of the arguments used for copy part.
        create_multipart_extra_args = {}
        for param, val in call_args.extra_args.items():
            if param not in self.CREATE_MULTIPART_ARGS_BLACKLIST:
                create_multipart_extra_args[param] = val

        create_multipart_future = self._transfer_coordinator.submit(
            request_executor,
            CreateMultipartUploadTask(
                transfer_coordinator=self._transfer_coordinator,
                main_kwargs={
                    'client': client,
                    'bucket': call_args.bucket,
                    'key': call_args.key,
                    'extra_args': create_multipart_extra_args,
                },
            ),
        )

        # Determine how many parts are needed based on filesize and
        # desired chunksize.
        part_size = config.multipart_chunksize
        adjuster = ChunksizeAdjuster()
        part_size = adjuster.adjust_chunksize(
            part_size, transfer_future.meta.size
        )
        num_parts = int(
            math.ceil(transfer_future.meta.size / float(part_size))
        )

        # Submit requests to upload the parts of the file.
        part_futures = []
        progress_callbacks = get_callbacks(transfer_future, 'progress')

        for part_number in range(1, num_parts + 1):
            extra_part_args = self._extra_upload_part_args(
                call_args.extra_args
            )
            # The part number for upload part starts at 1 while the
            # range parameter starts at zero, so just subtract 1 off of
            # the part number
            extra_part_args['CopySourceRange'] = calculate_range_parameter(
                part_size,
                part_number - 1,
                num_parts,
                transfer_future.meta.size,
            )
            # Get the size of the part copy as well for the progress
            # callbacks.
            size = self._get_transfer_size(
                part_size,
                part_number - 1,
                num_parts,
                transfer_future.meta.size,
            )
            # Get the checksum algorithm of the multipart request.
            checksum_algorithm = call_args.extra_args.get("ChecksumAlgorithm")
            part_futures.append(
                self._transfer_coordinator.submit(
                    request_executor,
                    CopyPartTask(
                        transfer_coordinator=self._transfer_coordinator,
                        main_kwargs={
                            'client': client,
                            'copy_source': call_args.copy_source,
                            'bucket': call_args.bucket,
                            'key': call_args.key,
                            'part_number': part_number,
                            'extra_args': extra_part_args,
                            'callbacks': progress_callbacks,
                            'size': size,
                            'checksum_algorithm': checksum_algorithm,
                        },
                        pending_main_kwargs={
                            'upload_id': create_multipart_future
                        },
                    ),
                )
            )

        complete_multipart_extra_args = self._extra_complete_multipart_args(
            call_args.extra_args
        )
        # Submit the request to complete the multipart upload.
        self._transfer_coordinator.submit(
            request_executor,
            CompleteMultipartUploadTask(
                transfer_coordinator=self._transfer_coordinator,
                main_kwargs={
                    'client': client,
                    'bucket': call_args.bucket,
                    'key': call_args.key,
                    'extra_args': complete_multipart_extra_args,
                },
                pending_main_kwargs={
                    'upload_id': create_multipart_future,
                    'parts': part_futures,
                },
                is_final=True,
            ),
        )

    def _get_head_object_request_from_copy_source(self, copy_source):
        if isinstance(copy_source, dict):
            return copy.copy(copy_source)
        else:
            raise TypeError(
                'Expecting dictionary formatted: '
                '{"Bucket": bucket_name, "Key": key} '
                'but got %s or type %s.' % (copy_source, type(copy_source))
            )

    def _extra_upload_part_args(self, extra_args):
        # Only the args in UPLOAD_PART_COPY_ARGS actually need to be passed
        # onto the upload_part_copy calls.
        return get_filtered_dict(extra_args, self.UPLOAD_PART_COPY_ARGS)

    def _extra_complete_multipart_args(self, extra_args):
        return get_filtered_dict(extra_args, self.COMPLETE_MULTIPART_ARGS)

    def _get_transfer_size(
        self, part_size, part_index, num_parts, total_transfer_size
    ):
        if part_index == num_parts - 1:
            # The last part may be different in size than the rest of the
            # parts.
            return total_transfer_size - (part_index * part_size)
        return part_size


class CopyObjectTask(Task):
    """Task to do a nonmultipart copy"""

    def _main(
        self, client, copy_source, bucket, key, extra_args, callbacks, size
    ):
        """
        :param client: The client to use when calling CopyObject
        :param copy_source: The CopySource parameter to use
        :param bucket: The name of the bucket to copy to
        :param key: The name of the key to copy to
        :param extra_args: A dictionary of any extra arguments that may be
            used in the copy.
        :param callbacks: List of callbacks to call after copy
        :param size: The size of the transfer. This value is passed into
            the callbacks
        """
        client.copy_object(
            CopySource=copy_source, Bucket=bucket, Key=key, **extra_args
        )
        for callback in callbacks:
            callback(bytes_transferred=size)


class CopyPartTask(Task):
    """Task to upload a part in a multipart copy"""

    def _main(
        self,
        client,
        copy_source,
        bucket,
        key,
        upload_id,
        part_number,
        extra_args,
        callbacks,
        size,
        checksum_algorithm=None,
    ):
        """
        :param client: The client to use when calling UploadPartCopy
        :param copy_source: The CopySource parameter to use
        :param bucket: The name of the bucket to upload to
        :param key: The name of the key to upload to
        :param upload_id: The id of the upload
        :param part_number: The number representing the part of the multipart
            upload
        :param extra_args: A dictionary of any extra arguments that may be
            used in the upload.
        :param callbacks: List of callbacks to call after copy part
        :param size: The size of the transfer. This value is passed into
            the callbacks
        :param checksum_algorithm: The algorithm that was used to create the
            multipart upload

        :rtype: dict
        :returns: A dictionary representing a part::

            {'ETag': etag_value, 'PartNumber': part_number}

            This value can be appended to a list to be used to complete
            the multipart upload. If a checksum is in the response,
            it will also be included.
        """
        response = client.upload_part_copy(
            CopySource=copy_source,
            Bucket=bucket,
            Key=key,
            UploadId=upload_id,
            PartNumber=part_number,
            **extra_args,
        )
        for callback in callbacks:
            callback(bytes_transferred=size)
        etag = response['CopyPartResult']['ETag']
        part_metadata = {'ETag': etag, 'PartNumber': part_number}
        if checksum_algorithm:
            checksum_member = f'Checksum{checksum_algorithm.upper()}'
            if checksum_member in response['CopyPartResult']:
                part_metadata[checksum_member] = response['CopyPartResult'][
                    checksum_member
                ]
        return part_metadata
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/unixccompiler.py
DELETED
@@ -1,401 +0,0 @@
"""distutils.unixccompiler

Contains the UnixCCompiler class, a subclass of CCompiler that handles
the "typical" Unix-style command-line C compiler:
  * macros defined with -Dname[=value]
  * macros undefined with -Uname
  * include search directories specified with -Idir
  * libraries specified with -llib
  * library search directories specified with -Ldir
  * compile handled by 'cc' (or similar) executable with -c option:
    compiles .c to .o
  * link static library handled by 'ar' command (possibly with 'ranlib')
  * link shared library handled by 'cc -shared'
"""

import os
import sys
import re
import shlex
import itertools

from distutils import sysconfig
from distutils.dep_util import newer
from distutils.ccompiler import CCompiler, gen_preprocess_options, gen_lib_options
from distutils.errors import DistutilsExecError, CompileError, LibError, LinkError
from distutils import log
from ._macos_compat import compiler_fixup

# XXX Things not currently handled:
#   * optimization/debug/warning flags; we just use whatever's in Python's
#     Makefile and live with it.  Is this adequate?  If not, we might
#     have to have a bunch of subclasses GNUCCompiler, SGICCompiler,
#     SunCCompiler, and I suspect down that road lies madness.
#   * even if we don't know a warning flag from an optimization flag,
#     we need some way for outsiders to feed preprocessor/compiler/linker
#     flags in to us -- eg. a sysadmin might want to mandate certain flags
#     via a site config file, or a user might want to set something for
#     compiling this module distribution only via the setup.py command
#     line, whatever.  As long as these options come from something on the
#     current system, they can be as system-dependent as they like, and we
#     should just happily stuff them into the preprocessor/compiler/linker
#     options and carry on.


def _split_env(cmd):
    """
    For macOS, split command into 'env' portion (if any)
    and the rest of the linker command.

    >>> _split_env(['a', 'b', 'c'])
    ([], ['a', 'b', 'c'])
    >>> _split_env(['/usr/bin/env', 'A=3', 'gcc'])
    (['/usr/bin/env', 'A=3'], ['gcc'])
    """
    pivot = 0
    if os.path.basename(cmd[0]) == "env":
        pivot = 1
        while '=' in cmd[pivot]:
            pivot += 1
    return cmd[:pivot], cmd[pivot:]


def _split_aix(cmd):
    """
    AIX platforms prefix the compiler with the ld_so_aix
    script, so split that from the linker command.

    >>> _split_aix(['a', 'b', 'c'])
    ([], ['a', 'b', 'c'])
    >>> _split_aix(['/bin/foo/ld_so_aix', 'gcc'])
    (['/bin/foo/ld_so_aix'], ['gcc'])
    """
    pivot = os.path.basename(cmd[0]) == 'ld_so_aix'
    return cmd[:pivot], cmd[pivot:]


def _linker_params(linker_cmd, compiler_cmd):
    """
    The linker command usually begins with the compiler
    command (possibly multiple elements), followed by zero or more
    params for shared library building.

    If the LDSHARED env variable overrides the linker command,
    however, the commands may not match.

    Return the best guess of the linker parameters by stripping
    the linker command. If the compiler command does not
    match the linker command, assume the linker command is
    just the first element.

    >>> _linker_params('gcc foo bar'.split(), ['gcc'])
    ['foo', 'bar']
    >>> _linker_params('gcc foo bar'.split(), ['other'])
    ['foo', 'bar']
    >>> _linker_params('ccache gcc foo bar'.split(), 'ccache gcc'.split())
    ['foo', 'bar']
    >>> _linker_params(['gcc'], ['gcc'])
    []
    """
    c_len = len(compiler_cmd)
    pivot = c_len if linker_cmd[:c_len] == compiler_cmd else 1
    return linker_cmd[pivot:]


class UnixCCompiler(CCompiler):

    compiler_type = 'unix'

    # These are used by CCompiler in two places: the constructor sets
    # instance attributes 'preprocessor', 'compiler', etc. from them, and
    # 'set_executable()' allows any of these to be set.  The defaults here
    # are pretty generic; they will probably have to be set by an outsider
    # (eg. using information discovered by the sysconfig about building
    # Python extensions).
    executables = {
        'preprocessor': None,
        'compiler': ["cc"],
        'compiler_so': ["cc"],
        'compiler_cxx': ["cc"],
        'linker_so': ["cc", "-shared"],
        'linker_exe': ["cc"],
        'archiver': ["ar", "-cr"],
        'ranlib': None,
    }

    if sys.platform[:6] == "darwin":
        executables['ranlib'] = ["ranlib"]

    # Needed for the filename generation methods provided by the base
    # class, CCompiler.  NB. whoever instantiates/uses a particular
    # UnixCCompiler instance should set 'shared_lib_ext' -- we set a
    # reasonable common default here, but it's not necessarily used on all
    # Unices!

    src_extensions = [".c", ".C", ".cc", ".cxx", ".cpp", ".m"]
    obj_extension = ".o"
    static_lib_extension = ".a"
    shared_lib_extension = ".so"
    dylib_lib_extension = ".dylib"
    xcode_stub_lib_extension = ".tbd"
    static_lib_format = shared_lib_format = dylib_lib_format = "lib%s%s"
    xcode_stub_lib_format = dylib_lib_format
    if sys.platform == "cygwin":
        exe_extension = ".exe"

    def preprocess(
        self,
        source,
        output_file=None,
        macros=None,
        include_dirs=None,
        extra_preargs=None,
        extra_postargs=None,
    ):
        fixed_args = self._fix_compile_args(None, macros, include_dirs)
        ignore, macros, include_dirs = fixed_args
        pp_opts = gen_preprocess_options(macros, include_dirs)
        pp_args = self.preprocessor + pp_opts
        if output_file:
            pp_args.extend(['-o', output_file])
        if extra_preargs:
            pp_args[:0] = extra_preargs
        if extra_postargs:
            pp_args.extend(extra_postargs)
        pp_args.append(source)

        # reasons to preprocess:
        # - force is indicated
        # - output is directed to stdout
        # - source file is newer than the target
        preprocess = self.force or output_file is None or newer(source, output_file)
        if not preprocess:
            return

        if output_file:
            self.mkpath(os.path.dirname(output_file))

        try:
            self.spawn(pp_args)
        except DistutilsExecError as msg:
            raise CompileError(msg)

    def _compile(self, obj, src, ext, cc_args, extra_postargs, pp_opts):
        compiler_so = compiler_fixup(self.compiler_so, cc_args + extra_postargs)
        try:
            self.spawn(compiler_so + cc_args + [src, '-o', obj] + extra_postargs)
        except DistutilsExecError as msg:
            raise CompileError(msg)

    def create_static_lib(
        self, objects, output_libname, output_dir=None, debug=0, target_lang=None
    ):
        objects, output_dir = self._fix_object_args(objects, output_dir)

        output_filename = self.library_filename(output_libname, output_dir=output_dir)

        if self._need_link(objects, output_filename):
            self.mkpath(os.path.dirname(output_filename))
            self.spawn(self.archiver + [output_filename] + objects + self.objects)

            # Not many Unices required ranlib anymore -- SunOS 4.x is, I
            # think the only major Unix that does.  Maybe we need some
            # platform intelligence here to skip ranlib if it's not
            # needed -- or maybe Python's configure script took care of
            # it for us, hence the check for leading colon.
            if self.ranlib:
                try:
                    self.spawn(self.ranlib + [output_filename])
                except DistutilsExecError as msg:
                    raise LibError(msg)
        else:
            log.debug("skipping %s (up-to-date)", output_filename)

    def link(
        self,
        target_desc,
        objects,
        output_filename,
        output_dir=None,
        libraries=None,
        library_dirs=None,
        runtime_library_dirs=None,
        export_symbols=None,
        debug=0,
        extra_preargs=None,
        extra_postargs=None,
        build_temp=None,
        target_lang=None,
    ):
        objects, output_dir = self._fix_object_args(objects, output_dir)
        fixed_args = self._fix_lib_args(libraries, library_dirs, runtime_library_dirs)
        libraries, library_dirs, runtime_library_dirs = fixed_args

        lib_opts = gen_lib_options(self, library_dirs, runtime_library_dirs, libraries)
        if not isinstance(output_dir, (str, type(None))):
            raise TypeError("'output_dir' must be a string or None")
        if output_dir is not None:
            output_filename = os.path.join(output_dir, output_filename)

        if self._need_link(objects, output_filename):
            ld_args = objects + self.objects + lib_opts + ['-o', output_filename]
            if debug:
                ld_args[:0] = ['-g']
            if extra_preargs:
                ld_args[:0] = extra_preargs
            if extra_postargs:
                ld_args.extend(extra_postargs)
            self.mkpath(os.path.dirname(output_filename))
            try:
                # Select a linker based on context: linker_exe when
                # building an executable or linker_so (with shared options)
                # when building a shared library.
                building_exe = target_desc == CCompiler.EXECUTABLE
                linker = (self.linker_exe if building_exe else self.linker_so)[:]

                if target_lang == "c++" and self.compiler_cxx:
                    env, linker_ne = _split_env(linker)
                    aix, linker_na = _split_aix(linker_ne)
                    _, compiler_cxx_ne = _split_env(self.compiler_cxx)
                    _, linker_exe_ne = _split_env(self.linker_exe)

                    params = _linker_params(linker_na, linker_exe_ne)
                    linker = env + aix + compiler_cxx_ne + params

                linker = compiler_fixup(linker, ld_args)

                self.spawn(linker + ld_args)
            except DistutilsExecError as msg:
                raise LinkError(msg)
        else:
            log.debug("skipping %s (up-to-date)", output_filename)

    # -- Miscellaneous methods -----------------------------------------
    # These are all used by the 'gen_lib_options() function, in
    # ccompiler.py.

    def library_dir_option(self, dir):
        return "-L" + dir

    def _is_gcc(self):
        cc_var = sysconfig.get_config_var("CC")
        compiler = os.path.basename(shlex.split(cc_var)[0])
        return "gcc" in compiler or "g++" in compiler

    def runtime_library_dir_option(self, dir):
        # XXX Hackish, at the very least.  See Python bug #445902:
        # http://sourceforge.net/tracker/index.php
        #   ?func=detail&aid=445902&group_id=5470&atid=105470
        # Linkers on different platforms need different options to
        # specify that directories need to be added to the list of
        # directories searched for dependencies when a dynamic library
        # is sought.  GCC on GNU systems (Linux, FreeBSD, ...) has to
        # be told to pass the -R option through to the linker, whereas
        # other compilers and gcc on other systems just know this.
        # Other compilers may need something slightly different.  At
        # this time, there's no way to determine this information from
        # the configuration data stored in the Python installation, so
        # we use this hack.
        if sys.platform[:6] == "darwin":
            from distutils.util import get_macosx_target_ver, split_version

            macosx_target_ver = get_macosx_target_ver()
            if macosx_target_ver and split_version(macosx_target_ver) >= [10, 5]:
                return "-Wl,-rpath," + dir
            else:  # no support for -rpath on earlier macOS versions
                return "-L" + dir
        elif sys.platform[:7] == "freebsd":
            return "-Wl,-rpath=" + dir
        elif sys.platform[:5] == "hp-ux":
            return [
                "-Wl,+s" if self._is_gcc() else "+s",
                "-L" + dir,
            ]

        # For all compilers, `-Wl` is the presumed way to
        # pass a compiler option to the linker and `-R` is
        # the way to pass an RPATH.
        if sysconfig.get_config_var("GNULD") == "yes":
            # GNU ld needs an extra option to get a RUNPATH
            # instead of just an RPATH.
            return "-Wl,--enable-new-dtags,-R" + dir
        else:
            return "-Wl,-R" + dir

    def library_option(self, lib):
        return "-l" + lib

    @staticmethod
    def _library_root(dir):
        """
        macOS users can specify an alternate SDK using '-isysroot'.
        Calculate the SDK root if it is specified.

        Note that, as of Xcode 7, Apple SDKs may contain textual stub
        libraries with .tbd extensions rather than the normal .dylib
        shared libraries installed in /.  The Apple compiler tool
        chain handles this transparently but it can cause problems
        for programs that are being built with an SDK and searching
        for specific libraries.  Callers of find_library_file need to
        keep in mind that the base filename of the returned SDK library
        file might have a different extension from that of the library
        file installed on the running system, for example:
            /Applications/Xcode.app/Contents/Developer/Platforms/
                MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/
                usr/lib/libedit.tbd
        vs
            /usr/lib/libedit.dylib
        """
        cflags = sysconfig.get_config_var('CFLAGS')
        match = re.search(r'-isysroot\s*(\S+)', cflags)

        apply_root = (
            sys.platform == 'darwin'
            and match
            and (
                dir.startswith('/System/')
                or (dir.startswith('/usr/') and not dir.startswith('/usr/local/'))
            )
        )

        return os.path.join(match.group(1), dir[1:]) if apply_root else dir

    def find_library_file(self, dirs, lib, debug=0):
        r"""
        Second-guess the linker with not much hard
        data to go on: GCC seems to prefer the shared library, so
        assume that *all* Unix C compilers do,
        ignoring even GCC's "-static" option.

        >>> compiler = UnixCCompiler()
        >>> compiler._library_root = lambda dir: dir
        >>> monkeypatch = getfixture('monkeypatch')
        >>> monkeypatch.setattr(os.path, 'exists', lambda d: 'existing' in d)
        >>> dirs = ('/foo/bar/missing', '/foo/bar/existing')
        >>> compiler.find_library_file(dirs, 'abc').replace('\\', '/')
        '/foo/bar/existing/libabc.dylib'
        >>> compiler.find_library_file(reversed(dirs), 'abc').replace('\\', '/')
        '/foo/bar/existing/libabc.dylib'
        >>> monkeypatch.setattr(os.path, 'exists',
        ...     lambda d: 'existing' in d and '.a' in d)
|
381 |
-
>>> compiler.find_library_file(dirs, 'abc').replace('\\', '/')
|
382 |
-
'/foo/bar/existing/libabc.a'
|
383 |
-
>>> compiler.find_library_file(reversed(dirs), 'abc').replace('\\', '/')
|
384 |
-
'/foo/bar/existing/libabc.a'
|
385 |
-
"""
|
386 |
-
lib_names = (
|
387 |
-
self.library_filename(lib, lib_type=type)
|
388 |
-
for type in 'dylib xcode_stub shared static'.split()
|
389 |
-
)
|
390 |
-
|
391 |
-
roots = map(self._library_root, dirs)
|
392 |
-
|
393 |
-
searched = (
|
394 |
-
os.path.join(root, lib_name)
|
395 |
-
for root, lib_name in itertools.product(roots, lib_names)
|
396 |
-
)
|
397 |
-
|
398 |
-
found = filter(os.path.exists, searched)
|
399 |
-
|
400 |
-
# Return None if it could not be found in any dir.
|
401 |
-
return next(found, None)
|
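A note on the search strategy in `find_library_file` above: it lazily generates every (directory, naming convention) candidate path and returns the first one that exists, preferring dylib over stub, shared, and static names within each directory. The sketch below is a hypothetical standalone re-implementation of that idea using only the standard library; the `PREFIX`/`SUFFIXES` constants and the injectable `exists` predicate are illustrative assumptions, not part of the original class.

```python
import itertools
import os

# Illustrative naming conventions, tried in preference order
# (mirroring 'dylib xcode_stub shared static' above).
PREFIX = "lib"
SUFFIXES = [".dylib", ".tbd", ".so", ".a"]

def find_first_library(dirs, lib, exists=os.path.exists):
    """Return the first existing candidate path, or None."""
    names = [PREFIX + lib + suffix for suffix in SUFFIXES]
    # product() yields all names for the first dir before moving on,
    # so an earlier directory always wins over a later one.
    candidates = (f"{d}/{name}" for d, name in itertools.product(dirs, names))
    return next(filter(exists, candidates), None)

# With a fake filesystem, the earlier directory wins, and within it
# the dylib name is preferred over the static one.
fake_fs = {"/opt/libabc.a", "/usr/lib/libabc.dylib", "/usr/lib/libabc.a"}
print(find_first_library(["/usr/lib", "/opt"], "abc", exists=fake_fs.__contains__))
# -> /usr/lib/libabc.dylib
```

Passing `exists` explicitly makes the search order testable without monkeypatching `os.path.exists`, which is what the doctest above has to do.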
spaces/BigSalmon/TestAnyGPTModel/README.md
DELETED
@@ -1,38 +0,0 @@
---
title: TestAnyGPTModel
emoji: 📚
colorFrom: green
colorTo: gray
sdk: streamlit
sdk_version: 0.89.0
app_file: app.py
pinned: false
---

# Configuration

`title`: _string_
Display title for the Space

`emoji`: _string_
Space emoji (emoji-only character allowed)

`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)

`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)

`sdk`: _string_
Can be either `gradio` or `streamlit`

`sdk_version` : _string_
Only applicable for `streamlit` SDK.
See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.

`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code).
Path is relative to the root of the repository.

`pinned`: _boolean_
Whether the Space stays on top of your list.
spaces/Billyosoro/ESRGAN/FAQ.md
DELETED
@@ -1,9 +0,0 @@
# FAQ

1. **What is the difference of `--netscale` and `outscale`?**

A: TODO.

1. **How to select models?**

A: TODO.
spaces/Brainclub5000/wesley7137-Llama-2-13B-Nous-Hermes-vicuna-uncensored-mastermod-spych/app.py
DELETED
@@ -1,3 +0,0 @@
import gradio as gr

gr.Interface.load("models/wesley7137/Llama-2-13B-Nous-Hermes-vicuna-uncensored-mastermod-spych").launch()
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/data/datasets/builtin.py
DELETED
@@ -1,220 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved


"""
This file registers pre-defined datasets at hard-coded paths, and their metadata.

We hard-code metadata for common datasets. This will enable:
1. Consistency check when loading the datasets
2. Use models on these standard datasets directly and run demos,
   without having to download the dataset annotations

We hard-code some paths to the dataset that's assumed to
exist in "./datasets/".

Users SHOULD NOT use this file to create new dataset / metadata for new dataset.
To add new dataset, refer to the tutorial "docs/DATASETS.md".
"""

import os

from detectron2.data import DatasetCatalog, MetadataCatalog

from .builtin_meta import _get_builtin_metadata
from .cityscapes import load_cityscapes_instances, load_cityscapes_semantic
from .lvis import get_lvis_instances_meta, register_lvis_instances
from .pascal_voc import register_pascal_voc
from .register_coco import register_coco_instances, register_coco_panoptic_separated

# ==== Predefined datasets and splits for COCO ==========

_PREDEFINED_SPLITS_COCO = {}
_PREDEFINED_SPLITS_COCO["coco"] = {
    "coco_2014_train": ("coco/train2014", "coco/annotations/instances_train2014.json"),
    "coco_2014_val": ("coco/val2014", "coco/annotations/instances_val2014.json"),
    "coco_2014_minival": ("coco/val2014", "coco/annotations/instances_minival2014.json"),
    "coco_2014_minival_100": ("coco/val2014", "coco/annotations/instances_minival2014_100.json"),
    "coco_2014_valminusminival": (
        "coco/val2014",
        "coco/annotations/instances_valminusminival2014.json",
    ),
    "coco_2017_train": ("coco/train2017", "coco/annotations/instances_train2017.json"),
    "coco_2017_val": ("coco/val2017", "coco/annotations/instances_val2017.json"),
    "coco_2017_test": ("coco/test2017", "coco/annotations/image_info_test2017.json"),
    "coco_2017_test-dev": ("coco/test2017", "coco/annotations/image_info_test-dev2017.json"),
    "coco_2017_val_100": ("coco/val2017", "coco/annotations/instances_val2017_100.json"),
}

_PREDEFINED_SPLITS_COCO["coco_person"] = {
    "keypoints_coco_2014_train": (
        "coco/train2014",
        "coco/annotations/person_keypoints_train2014.json",
    ),
    "keypoints_coco_2014_val": ("coco/val2014", "coco/annotations/person_keypoints_val2014.json"),
    "keypoints_coco_2014_minival": (
        "coco/val2014",
        "coco/annotations/person_keypoints_minival2014.json",
    ),
    "keypoints_coco_2014_valminusminival": (
        "coco/val2014",
        "coco/annotations/person_keypoints_valminusminival2014.json",
    ),
    "keypoints_coco_2014_minival_100": (
        "coco/val2014",
        "coco/annotations/person_keypoints_minival2014_100.json",
    ),
    "keypoints_coco_2017_train": (
        "coco/train2017",
        "coco/annotations/person_keypoints_train2017.json",
    ),
    "keypoints_coco_2017_val": ("coco/val2017", "coco/annotations/person_keypoints_val2017.json"),
    "keypoints_coco_2017_val_100": (
        "coco/val2017",
        "coco/annotations/person_keypoints_val2017_100.json",
    ),
}


_PREDEFINED_SPLITS_COCO_PANOPTIC = {
    "coco_2017_train_panoptic": (
        # This is the original panoptic annotation directory
        "coco/panoptic_train2017",
        "coco/annotations/panoptic_train2017.json",
        # This directory contains semantic annotations that are
        # converted from panoptic annotations.
        # It is used by PanopticFPN.
        # You can use the script at detectron2/datasets/prepare_panoptic_fpn.py
        # to create these directories.
        "coco/panoptic_stuff_train2017",
    ),
    "coco_2017_val_panoptic": (
        "coco/panoptic_val2017",
        "coco/annotations/panoptic_val2017.json",
        "coco/panoptic_stuff_val2017",
    ),
    "coco_2017_val_100_panoptic": (
        "coco/panoptic_val2017_100",
        "coco/annotations/panoptic_val2017_100.json",
        "coco/panoptic_stuff_val2017_100",
    ),
}


def register_all_coco(root):
    for dataset_name, splits_per_dataset in _PREDEFINED_SPLITS_COCO.items():
        for key, (image_root, json_file) in splits_per_dataset.items():
            # Assume pre-defined datasets live in `./datasets`.
            register_coco_instances(
                key,
                _get_builtin_metadata(dataset_name),
                os.path.join(root, json_file) if "://" not in json_file else json_file,
                os.path.join(root, image_root),
            )

    for (
        prefix,
        (panoptic_root, panoptic_json, semantic_root),
    ) in _PREDEFINED_SPLITS_COCO_PANOPTIC.items():
        prefix_instances = prefix[: -len("_panoptic")]
        instances_meta = MetadataCatalog.get(prefix_instances)
        image_root, instances_json = instances_meta.image_root, instances_meta.json_file
        register_coco_panoptic_separated(
            prefix,
            _get_builtin_metadata("coco_panoptic_separated"),
            image_root,
            os.path.join(root, panoptic_root),
            os.path.join(root, panoptic_json),
            os.path.join(root, semantic_root),
            instances_json,
        )


# ==== Predefined datasets and splits for LVIS ==========


_PREDEFINED_SPLITS_LVIS = {
    "lvis_v0.5": {
        "lvis_v0.5_train": ("coco/train2017", "lvis/lvis_v0.5_train.json"),
        "lvis_v0.5_val": ("coco/val2017", "lvis/lvis_v0.5_val.json"),
        "lvis_v0.5_val_rand_100": ("coco/val2017", "lvis/lvis_v0.5_val_rand_100.json"),
        "lvis_v0.5_test": ("coco/test2017", "lvis/lvis_v0.5_image_info_test.json"),
    },
    "lvis_v0.5_cocofied": {
        "lvis_v0.5_train_cocofied": ("coco/train2017", "lvis/lvis_v0.5_train_cocofied.json"),
        "lvis_v0.5_val_cocofied": ("coco/val2017", "lvis/lvis_v0.5_val_cocofied.json"),
    },
}


def register_all_lvis(root):
    for dataset_name, splits_per_dataset in _PREDEFINED_SPLITS_LVIS.items():
        for key, (image_root, json_file) in splits_per_dataset.items():
            # Assume pre-defined datasets live in `./datasets`.
            register_lvis_instances(
                key,
                get_lvis_instances_meta(dataset_name),
                os.path.join(root, json_file) if "://" not in json_file else json_file,
                os.path.join(root, image_root),
            )


# ==== Predefined splits for raw cityscapes images ===========


_RAW_CITYSCAPES_SPLITS = {
    "cityscapes_fine_{task}_train": ("cityscapes/leftImg8bit/train", "cityscapes/gtFine/train"),
    "cityscapes_fine_{task}_val": ("cityscapes/leftImg8bit/val", "cityscapes/gtFine/val"),
    "cityscapes_fine_{task}_test": ("cityscapes/leftImg8bit/test", "cityscapes/gtFine/test"),
}


def register_all_cityscapes(root):
    for key, (image_dir, gt_dir) in _RAW_CITYSCAPES_SPLITS.items():
        meta = _get_builtin_metadata("cityscapes")
        image_dir = os.path.join(root, image_dir)
        gt_dir = os.path.join(root, gt_dir)

        inst_key = key.format(task="instance_seg")
        DatasetCatalog.register(
            inst_key,
            lambda x=image_dir, y=gt_dir: load_cityscapes_instances(
                x, y, from_json=True, to_polygons=True
            ),
        )
        MetadataCatalog.get(inst_key).set(
            image_dir=image_dir, gt_dir=gt_dir, evaluator_type="cityscapes", **meta
        )

        sem_key = key.format(task="sem_seg")
        DatasetCatalog.register(
            sem_key, lambda x=image_dir, y=gt_dir: load_cityscapes_semantic(x, y)
        )
        MetadataCatalog.get(sem_key).set(
            image_dir=image_dir, gt_dir=gt_dir, evaluator_type="sem_seg", **meta
        )


# ==== Predefined splits for PASCAL VOC ===========
def register_all_pascal_voc(root):
    SPLITS = [
        ("voc_2007_trainval", "VOC2007", "trainval"),
        ("voc_2007_train", "VOC2007", "train"),
        ("voc_2007_val", "VOC2007", "val"),
        ("voc_2007_test", "VOC2007", "test"),
        ("voc_2012_trainval", "VOC2012", "trainval"),
        ("voc_2012_train", "VOC2012", "train"),
        ("voc_2012_val", "VOC2012", "val"),
    ]
    for name, dirname, split in SPLITS:
        year = 2007 if "2007" in name else 2012
        register_pascal_voc(name, os.path.join(root, dirname), split, year)
        MetadataCatalog.get(name).evaluator_type = "pascal_voc"


# Register them all under "./datasets"
_root = os.getenv("DETECTRON2_DATASETS", "datasets")
register_all_coco(_root)
register_all_lvis(_root)
register_all_cityscapes(_root)
register_all_pascal_voc(_root)
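The file above follows one pattern throughout: register a dataset name together with a *loader callable*, so that hard-coding hundreds of splits costs nothing until a split is actually requested. The sketch below is a minimal stdlib-only illustration of that pattern; `MiniCatalog` and the `demo_*` split names are invented for this example and are not detectron2's API.

```python
# Minimal sketch of a lazy name -> loader catalog.
class MiniCatalog:
    def __init__(self):
        self._loaders = {}

    def register(self, name, loader):
        if name in self._loaders:
            raise KeyError(f"{name} is already registered")
        self._loaders[name] = loader  # stored, never called here

    def get(self, name):
        return self._loaders[name]()  # loading happens only on access

catalog = MiniCatalog()
splits = {"demo_train": "annotations/train.json", "demo_val": "annotations/val.json"}
for key, json_file in splits.items():
    # Default-argument binding (f=json_file) freezes the current loop value,
    # the same trick as the x=image_dir, y=gt_dir lambdas in
    # register_all_cityscapes above; a bare closure would late-bind and
    # every loader would see the last json_file.
    catalog.register(key, lambda f=json_file: {"json_file": f})

print(catalog.get("demo_val"))  # -> {'json_file': 'annotations/val.json'}
```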
spaces/CVPR/LIVE/LIVE/colab.py
DELETED
@@ -1,687 +0,0 @@
|
|
1 |
-
"""
|
2 |
-
Here are some use cases:
|
3 |
-
python main.py --config config/all.yaml --experiment experiment_8x1 --signature demo1 --target data/demo1.png
|
4 |
-
"""
|
5 |
-
import pydiffvg
|
6 |
-
import torch
|
7 |
-
import cv2
|
8 |
-
import matplotlib.pyplot as plt
|
9 |
-
import random
|
10 |
-
import argparse
|
11 |
-
import math
|
12 |
-
import errno
|
13 |
-
from tqdm import tqdm
|
14 |
-
from torch.optim.lr_scheduler import CosineAnnealingLR, LambdaLR
|
15 |
-
from torch.nn.functional import adaptive_avg_pool2d
|
16 |
-
import warnings
|
17 |
-
warnings.filterwarnings("ignore")
|
18 |
-
|
19 |
-
import PIL
|
20 |
-
import PIL.Image
|
21 |
-
import os
|
22 |
-
import os.path as osp
|
23 |
-
import numpy as np
|
24 |
-
import numpy.random as npr
|
25 |
-
import shutil
|
26 |
-
import copy
|
27 |
-
# import skfmm
|
28 |
-
from xing_loss import xing_loss
|
29 |
-
|
30 |
-
import yaml
|
31 |
-
from easydict import EasyDict as edict
|
32 |
-
|
33 |
-
|
34 |
-
pydiffvg.set_print_timing(False)
|
35 |
-
gamma = 1.0
|
36 |
-
|
37 |
-
##########
|
38 |
-
# helper #
|
39 |
-
##########
|
40 |
-
|
41 |
-
from utils import \
|
42 |
-
get_experiment_id, \
|
43 |
-
get_path_schedule, \
|
44 |
-
edict_2_dict, \
|
45 |
-
check_and_create_dir
|
46 |
-
|
47 |
-
def get_bezier_circle(radius=1, segments=4, bias=None):
|
48 |
-
points = []
|
49 |
-
if bias is None:
|
50 |
-
bias = (random.random(), random.random())
|
51 |
-
avg_degree = 360 / (segments*3)
|
52 |
-
for i in range(0, segments*3):
|
53 |
-
point = (np.cos(np.deg2rad(i * avg_degree)),
|
54 |
-
np.sin(np.deg2rad(i * avg_degree)))
|
55 |
-
points.append(point)
|
56 |
-
points = torch.tensor(points)
|
57 |
-
points = (points)*radius + torch.tensor(bias).unsqueeze(dim=0)
|
58 |
-
points = points.type(torch.FloatTensor)
|
59 |
-
return points
|
60 |
-
|
61 |
-
def get_sdf(phi, method='skfmm', **kwargs):
|
62 |
-
if method == 'skfmm':
|
63 |
-
import skfmm
|
64 |
-
phi = (phi-0.5)*2
|
65 |
-
if (phi.max() <= 0) or (phi.min() >= 0):
|
66 |
-
return np.zeros(phi.shape).astype(np.float32)
|
67 |
-
sd = skfmm.distance(phi, dx=1)
|
68 |
-
|
69 |
-
flip_negative = kwargs.get('flip_negative', True)
|
70 |
-
if flip_negative:
|
71 |
-
sd = np.abs(sd)
|
72 |
-
|
73 |
-
truncate = kwargs.get('truncate', 10)
|
74 |
-
sd = np.clip(sd, -truncate, truncate)
|
75 |
-
# print(f"max sd value is: {sd.max()}")
|
76 |
-
|
77 |
-
zero2max = kwargs.get('zero2max', True)
|
78 |
-
if zero2max and flip_negative:
|
79 |
-
sd = sd.max() - sd
|
80 |
-
elif zero2max:
|
81 |
-
raise ValueError
|
82 |
-
|
83 |
-
normalize = kwargs.get('normalize', 'sum')
|
84 |
-
if normalize == 'sum':
|
85 |
-
sd /= sd.sum()
|
86 |
-
elif normalize == 'to1':
|
87 |
-
sd /= sd.max()
|
88 |
-
return sd
|
89 |
-
|
90 |
-
def parse_args():
|
91 |
-
parser = argparse.ArgumentParser()
|
92 |
-
parser.add_argument('--debug', action='store_true', default=False)
|
93 |
-
parser.add_argument("--config", type=str)
|
94 |
-
parser.add_argument("--experiment", type=str)
|
95 |
-
parser.add_argument("--seed", type=int)
|
96 |
-
parser.add_argument("--target", type=str, help="target image path")
|
97 |
-
parser.add_argument('--log_dir', metavar='DIR', default="log/debug")
|
98 |
-
parser.add_argument('--initial', type=str, default="random", choices=['random', 'circle'])
|
99 |
-
parser.add_argument('--signature', nargs='+', type=str)
|
100 |
-
parser.add_argument('--seginit', nargs='+', type=str)
|
101 |
-
parser.add_argument("--num_segments", type=int, default=4)
|
102 |
-
# parser.add_argument("--num_paths", type=str, default="1,1,1")
|
103 |
-
# parser.add_argument("--num_iter", type=int, default=500)
|
104 |
-
# parser.add_argument('--free', action='store_true')
|
105 |
-
# Please ensure that image resolution is divisible by pool_size; otherwise the performance would drop a lot.
|
106 |
-
# parser.add_argument('--pool_size', type=int, default=40, help="the pooled image size for next path initialization")
|
107 |
-
# parser.add_argument('--save_loss', action='store_true')
|
108 |
-
# parser.add_argument('--save_init', action='store_true')
|
109 |
-
# parser.add_argument('--save_image', action='store_true')
|
110 |
-
# parser.add_argument('--save_video', action='store_true')
|
111 |
-
# parser.add_argument('--print_weight', action='store_true')
|
112 |
-
# parser.add_argument('--circle_init_radius', type=float)
|
113 |
-
cfg = edict()
|
114 |
-
args = parser.parse_args()
|
115 |
-
cfg.debug = args.debug
|
116 |
-
cfg.config = args.config
|
117 |
-
cfg.experiment = args.experiment
|
118 |
-
cfg.seed = args.seed
|
119 |
-
cfg.target = args.target
|
120 |
-
cfg.log_dir = args.log_dir
|
121 |
-
cfg.initial = args.initial
|
122 |
-
cfg.signature = args.signature
|
123 |
-
# set cfg num_segments in command
|
124 |
-
cfg.num_segments = args.num_segments
|
125 |
-
if args.seginit is not None:
|
126 |
-
cfg.seginit = edict()
|
127 |
-
cfg.seginit.type = args.seginit[0]
|
128 |
-
if cfg.seginit.type == 'circle':
|
129 |
-
cfg.seginit.radius = float(args.seginit[1])
|
130 |
-
return cfg
|
131 |
-
|
132 |
-
def ycrcb_conversion(im, format='[bs x 3 x 2D]', reverse=False):
|
133 |
-
mat = torch.FloatTensor([
|
134 |
-
[ 65.481/255, 128.553/255, 24.966/255], # ranged_from [0, 219/255]
|
135 |
-
[-37.797/255, -74.203/255, 112.000/255], # ranged_from [-112/255, 112/255]
|
136 |
-
[112.000/255, -93.786/255, -18.214/255], # ranged_from [-112/255, 112/255]
|
137 |
-
]).to(im.device)
|
138 |
-
|
139 |
-
if reverse:
|
140 |
-
mat = mat.inverse()
|
141 |
-
|
142 |
-
if format == '[bs x 3 x 2D]':
|
143 |
-
im = im.permute(0, 2, 3, 1)
|
144 |
-
im = torch.matmul(im, mat.T)
|
145 |
-
im = im.permute(0, 3, 1, 2).contiguous()
|
146 |
-
return im
|
147 |
-
elif format == '[2D x 3]':
|
148 |
-
im = torch.matmul(im, mat.T)
|
149 |
-
return im
|
150 |
-
else:
|
151 |
-
raise ValueError
|
152 |
-
|
153 |
-
class random_coord_init():
|
154 |
-
def __init__(self, canvas_size):
|
155 |
-
self.canvas_size = canvas_size
|
156 |
-
def __call__(self):
|
157 |
-
h, w = self.canvas_size
|
158 |
-
return [npr.uniform(0, 1)*w, npr.uniform(0, 1)*h]
|
159 |
-
|
160 |
-
class naive_coord_init():
|
161 |
-
def __init__(self, pred, gt, format='[bs x c x 2D]', replace_sampling=True):
|
162 |
-
if isinstance(pred, torch.Tensor):
|
163 |
-
pred = pred.detach().cpu().numpy()
|
164 |
-
if isinstance(gt, torch.Tensor):
|
165 |
-
gt = gt.detach().cpu().numpy()
|
166 |
-
|
167 |
-
if format == '[bs x c x 2D]':
|
168 |
-
self.map = ((pred[0] - gt[0])**2).sum(0)
|
169 |
-
elif format == ['[2D x c]']:
|
170 |
-
self.map = ((pred - gt)**2).sum(-1)
|
171 |
-
else:
|
172 |
-
raise ValueError
|
173 |
-
self.replace_sampling = replace_sampling
|
174 |
-
|
175 |
-
def __call__(self):
|
176 |
-
coord = np.where(self.map == self.map.max())
|
177 |
-
coord_h, coord_w = coord[0][0], coord[1][0]
|
178 |
-
if self.replace_sampling:
|
179 |
-
self.map[coord_h, coord_w] = -1
|
180 |
-
return [coord_w, coord_h]
|
181 |
-
|
182 |
-
|
183 |
-
class sparse_coord_init():
|
184 |
-
def __init__(self, pred, gt, format='[bs x c x 2D]', quantile_interval=200, nodiff_thres=0.1):
|
185 |
-
if isinstance(pred, torch.Tensor):
|
186 |
-
pred = pred.detach().cpu().numpy()
|
187 |
-
if isinstance(gt, torch.Tensor):
|
188 |
-
gt = gt.detach().cpu().numpy()
|
189 |
-
if format == '[bs x c x 2D]':
|
190 |
-
self.map = ((pred[0] - gt[0])**2).sum(0)
|
191 |
-
self.reference_gt = copy.deepcopy(
|
192 |
-
np.transpose(gt[0], (1, 2, 0)))
|
193 |
-
elif format == ['[2D x c]']:
|
194 |
-
self.map = (np.abs(pred - gt)).sum(-1)
|
195 |
-
self.reference_gt = copy.deepcopy(gt[0])
|
196 |
-
else:
|
197 |
-
raise ValueError
|
198 |
-
# OptionA: Zero too small errors to avoid the error too small deadloop
|
199 |
-
self.map[self.map < nodiff_thres] = 0
|
200 |
-
quantile_interval = np.linspace(0., 1., quantile_interval)
|
201 |
-
quantized_interval = np.quantile(self.map, quantile_interval)
|
202 |
-
# remove redundant
|
203 |
-
quantized_interval = np.unique(quantized_interval)
|
204 |
-
quantized_interval = sorted(quantized_interval[1:-1])
|
205 |
-
self.map = np.digitize(self.map, quantized_interval, right=False)
|
206 |
-
self.map = np.clip(self.map, 0, 255).astype(np.uint8)
|
207 |
-
self.idcnt = {}
|
208 |
-
for idi in sorted(np.unique(self.map)):
|
209 |
-
self.idcnt[idi] = (self.map==idi).sum()
|
210 |
-
self.idcnt.pop(min(self.idcnt.keys()))
|
211 |
-
# remove smallest one to remove the correct region
|
212 |
-
def __call__(self):
|
213 |
-
if len(self.idcnt) == 0:
|
214 |
-
h, w = self.map.shape
|
215 |
-
return [npr.uniform(0, 1)*w, npr.uniform(0, 1)*h]
|
216 |
-
target_id = max(self.idcnt, key=self.idcnt.get)
|
217 |
-
_, component, cstats, ccenter = cv2.connectedComponentsWithStats(
|
218 |
-
(self.map==target_id).astype(np.uint8), connectivity=4)
|
219 |
-
# remove cid = 0, it is the invalid area
|
220 |
-
csize = [ci[-1] for ci in cstats[1:]]
|
221 |
-
target_cid = csize.index(max(csize))+1
|
222 |
-
center = ccenter[target_cid][::-1]
|
223 |
-
coord = np.stack(np.where(component == target_cid)).T
|
224 |
-
dist = np.linalg.norm(coord-center, axis=1)
|
225 |
-
target_coord_id = np.argmin(dist)
|
226 |
-
coord_h, coord_w = coord[target_coord_id]
|
227 |
-
# replace_sampling
|
228 |
-
self.idcnt[target_id] -= max(csize)
|
229 |
-
if self.idcnt[target_id] == 0:
|
230 |
-
self.idcnt.pop(target_id)
|
231 |
-
self.map[component == target_cid] = 0
|
232 |
-
return [coord_w, coord_h]
|
233 |
-
|
234 |
-
|
235 |
-
def init_shapes(num_paths,
|
236 |
-
num_segments,
|
237 |
-
canvas_size,
|
238 |
-
seginit_cfg,
|
239 |
-
shape_cnt,
|
240 |
-
pos_init_method=None,
|
241 |
-
trainable_stroke=False,
|
242 |
-
**kwargs):
|
243 |
-
shapes = []
|
244 |
-
shape_groups = []
|
245 |
-
h, w = canvas_size
|
246 |
-
|
247 |
-
# change path init location
|
248 |
-
if pos_init_method is None:
|
249 |
-
pos_init_method = random_coord_init(canvas_size=canvas_size)
|
250 |
-
|
251 |
-
for i in range(num_paths):
|
252 |
-
num_control_points = [2] * num_segments
|
253 |
-
|
254 |
-
if seginit_cfg.type=="random":
|
255 |
-
points = []
|
256 |
-
p0 = pos_init_method()
|
257 |
-
color_ref = copy.deepcopy(p0)
|
258 |
-
points.append(p0)
|
259 |
-
for j in range(num_segments):
|
260 |
-
radius = seginit_cfg.radius
|
261 |
-
p1 = (p0[0] + radius * npr.uniform(-0.5, 0.5),
|
262 |
-
p0[1] + radius * npr.uniform(-0.5, 0.5))
|
263 |
-
p2 = (p1[0] + radius * npr.uniform(-0.5, 0.5),
|
264 |
-
p1[1] + radius * npr.uniform(-0.5, 0.5))
|
265 |
-
p3 = (p2[0] + radius * npr.uniform(-0.5, 0.5),
|
266 |
-
p2[1] + radius * npr.uniform(-0.5, 0.5))
|
267 |
-
points.append(p1)
|
268 |
-
points.append(p2)
|
269 |
-
if j < num_segments - 1:
|
270 |
-
points.append(p3)
|
271 |
-
p0 = p3
|
272 |
-
points = torch.FloatTensor(points)
|
273 |
-
|
274 |
-
# circle points initialization
|
275 |
-
elif seginit_cfg.type=="circle":
|
276 |
-
radius = seginit_cfg.radius
|
277 |
-
if radius is None:
|
278 |
-
radius = npr.uniform(0.5, 1)
|
279 |
-
center = pos_init_method()
|
280 |
-
color_ref = copy.deepcopy(center)
|
281 |
-
points = get_bezier_circle(
|
282 |
-
radius=radius, segments=num_segments,
|
283 |
-
bias=center)
|
284 |
-
|
285 |
-
path = pydiffvg.Path(num_control_points = torch.LongTensor(num_control_points),
|
286 |
-
points = points,
|
287 |
-
stroke_width = torch.tensor(0.0),
|
288 |
-
is_closed = True)
|
289 |
-
shapes.append(path)
|
290 |
-
# !!!!!!problem is here. the shape group shape_ids is wrong
|
291 |
-
|
292 |
-
if 'gt' in kwargs:
|
293 |
-
wref, href = color_ref
|
294 |
-
wref = max(0, min(int(wref), w-1))
|
295 |
-
href = max(0, min(int(href), h-1))
|
296 |
-
fill_color_init = list(gt[0, :, href, wref]) + [1.]
|
297 |
-
fill_color_init = torch.FloatTensor(fill_color_init)
|
298 |
-
stroke_color_init = torch.FloatTensor(npr.uniform(size=[4]))
|
299 |
-
else:
|
300 |
-
fill_color_init = torch.FloatTensor(npr.uniform(size=[4]))
|
301 |
-
stroke_color_init = torch.FloatTensor(npr.uniform(size=[4]))
|
302 |
-
|
303 |
-
path_group = pydiffvg.ShapeGroup(
|
304 |
-
shape_ids = torch.LongTensor([shape_cnt+i]),
|
305 |
-
fill_color = fill_color_init,
|
306 |
-
stroke_color = stroke_color_init,
|
307 |
-
)
|
308 |
-
shape_groups.append(path_group)
|
309 |
-
|
310 |
-
point_var = []
|
311 |
-
color_var = []
|
312 |
-
|
313 |
-
for path in shapes:
|
314 |
-
path.points.requires_grad = True
|
315 |
-
point_var.append(path.points)
|
316 |
-
for group in shape_groups:
|
317 |
-
group.fill_color.requires_grad = True
|
318 |
-
color_var.append(group.fill_color)
|
319 |
-
|
320 |
-
if trainable_stroke:
|
321 |
-
stroke_width_var = []
|
322 |
-
stroke_color_var = []
|
323 |
-
for path in shapes:
|
324 |
-
path.stroke_width.requires_grad = True
|
325 |
-
stroke_width_var.append(path.stroke_width)
|
326 |
-
for group in shape_groups:
|
327 |
-
group.stroke_color.requires_grad = True
|
328 |
-
stroke_color_var.append(group.stroke_color)
|
329 |
-
return shapes, shape_groups, point_var, color_var, stroke_width_var, stroke_color_var
|
330 |
-
else:
|
331 |
-
return shapes, shape_groups, point_var, color_var
|
332 |
-
|
333 |
-
class linear_decay_lrlambda_f(object):
|
334 |
-
def __init__(self, decay_every, decay_ratio):
|
335 |
-
self.decay_every = decay_every
|
336 |
-
self.decay_ratio = decay_ratio
|
337 |
-
|
338 |
-
def __call__(self, n):
|
339 |
-
decay_time = n//self.decay_every
|
340 |
-
decay_step = n %self.decay_every
|
341 |
-
lr_s = self.decay_ratio**decay_time
|
342 |
-
lr_e = self.decay_ratio**(decay_time+1)
|
343 |
-
r = decay_step/self.decay_every
|
344 |
-
lr = lr_s * (1-r) + lr_e * r
|
345 |
-
return lr
|
346 |
-
|
347 |
-
|
348 |
-
if __name__ == "__main__":

    ###############
    # make config #
    ###############

    cfg_arg = parse_args()
    with open(cfg_arg.config, 'r') as f:
        cfg = yaml.load(f, Loader=yaml.FullLoader)
    cfg_default = edict(cfg['default'])
    cfg = edict(cfg[cfg_arg.experiment])
    cfg.update(cfg_default)
    cfg.update(cfg_arg)
    cfg.exid = get_experiment_id(cfg.debug)

    cfg.experiment_dir = \
        osp.join(cfg.log_dir, '{}_{}'.format(cfg.exid, '_'.join(cfg.signature)))
    configfile = osp.join(cfg.experiment_dir, 'config.yaml')
    check_and_create_dir(configfile)
    with open(configfile, 'w') as f:
        yaml.dump(edict_2_dict(cfg), f)

    # Use the GPU if available.
    pydiffvg.set_use_gpu(torch.cuda.is_available())
    device = pydiffvg.get_device()

    gt = np.array(PIL.Image.open(cfg.target))
    print(f"Input image shape is: {gt.shape}")
    if len(gt.shape) == 2:
        print("Converting the gray-scale image to RGB.")
        # gt is still a numpy array here, so replicate the channel with numpy.
        gt = np.expand_dims(gt, axis=-1).repeat(3, axis=-1)
    if gt.shape[2] == 4:
        print("Input image includes an alpha channel; dropping it.")
        gt = gt[:, :, :3]
    gt = (gt / 255).astype(np.float32)
    gt = torch.FloatTensor(gt).permute(2, 0, 1)[None].to(device)
    if cfg.use_ycrcb:
        gt = ycrcb_conversion(gt)
    h, w = gt.shape[2:]

    path_schedule = get_path_schedule(**cfg.path_schedule)

    if cfg.seed is not None:
        random.seed(cfg.seed)
        npr.seed(cfg.seed)
        torch.manual_seed(cfg.seed)
    render = pydiffvg.RenderFunction.apply

    shapes_record, shape_groups_record = [], []

    region_loss = None
    loss_matrix = []

    para_point, para_color = {}, {}
    if cfg.trainable.stroke:
        para_stroke_width, para_stroke_color = {}, {}

    pathn_record = []
    # Background
    if cfg.trainable.bg:
        # meancolor = gt.mean([2, 3])[0]
        para_bg = torch.tensor([1., 1., 1.], requires_grad=True, device=device)
    else:
        if cfg.use_ycrcb:
            para_bg = torch.tensor([219/255, 0, 0], requires_grad=False, device=device)
        else:
            para_bg = torch.tensor([1., 1., 1.], requires_grad=False, device=device)

    ##################
    # start_training #
    ##################

    loss_weight = None
    loss_weight_keep = 0
    if cfg.coord_init.type == 'naive':
        pos_init_method = naive_coord_init(
            para_bg.view(1, -1, 1, 1).repeat(1, 1, h, w), gt)
    elif cfg.coord_init.type == 'sparse':
        pos_init_method = sparse_coord_init(
            para_bg.view(1, -1, 1, 1).repeat(1, 1, h, w), gt)
    elif cfg.coord_init.type == 'random':
        pos_init_method = random_coord_init([h, w])
    else:
        raise ValueError("unknown coord_init type: {}".format(cfg.coord_init.type))
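The config merge above layers three sources with "last write wins" semantics; a plain-dict sketch of that order (hypothetical keys, `edict` replaced by plain dicts) makes the precedence explicit:

```python
# Override order used above: experiment section first, then defaults,
# then CLI args. Note that with this call order the defaults overwrite
# the experiment section, and CLI args overwrite both.
default_cfg = {"lr_base": 0.01, "num_iter": 500}      # cfg['default']
experiment_cfg = {"num_iter": 1000, "seed": 0}        # cfg[cfg_arg.experiment]
cli_args = {"seed": 42}                               # parse_args()

cfg = dict(experiment_cfg)
cfg.update(default_cfg)
cfg.update(cli_args)
print(cfg)
# {'num_iter': 500, 'seed': 42, 'lr_base': 0.01}
```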
    lrlambda_f = linear_decay_lrlambda_f(cfg.num_iter, 0.4)
    optim_schedular_dict = {}

    for path_idx, pathn in enumerate(path_schedule):
        loss_list = []
        print("=> Adding [{}] paths, [{}] ...".format(pathn, cfg.seginit.type))
        pathn_record.append(pathn)
        pathn_record_str = '-'.join([str(i) for i in pathn_record])

        # Initialize the new shapes and their parameters.
        if cfg.trainable.stroke:
            shapes, shape_groups, point_var, color_var, stroke_width_var, stroke_color_var = init_shapes(
                pathn, cfg.num_segments, (h, w),
                cfg.seginit, len(shapes_record),
                pos_init_method,
                trainable_stroke=True,
                gt=gt, )
            para_stroke_width[path_idx] = stroke_width_var
            para_stroke_color[path_idx] = stroke_color_var
        else:
            shapes, shape_groups, point_var, color_var = init_shapes(
                pathn, cfg.num_segments, (h, w),
                cfg.seginit, len(shapes_record),
                pos_init_method,
                trainable_stroke=False,
                gt=gt, )

        shapes_record += shapes
        shape_groups_record += shape_groups

        if cfg.save.init:
            filename = os.path.join(
                cfg.experiment_dir, "svg-init",
                "{}-init.svg".format(pathn_record_str))
            check_and_create_dir(filename)
            pydiffvg.save_svg(
                filename, w, h,
                shapes_record, shape_groups_record)

        para = {}
        if (cfg.trainable.bg) and (path_idx == 0):
            para['bg'] = [para_bg]
        para['point'] = point_var
        para['color'] = color_var
        if cfg.trainable.stroke:
            para['stroke_width'] = stroke_width_var
            para['stroke_color'] = stroke_color_var

        pg = [{'params': para[ki], 'lr': cfg.lr_base[ki]} for ki in sorted(para.keys())]
        optim = torch.optim.Adam(pg)

        if cfg.trainable.record:
            scheduler = LambdaLR(
                optim, lr_lambda=lrlambda_f, last_epoch=-1)
        else:
            scheduler = LambdaLR(
                optim, lr_lambda=lrlambda_f, last_epoch=cfg.num_iter)
        optim_schedular_dict[path_idx] = (optim, scheduler)

        # Inner-loop training.
        t_range = tqdm(range(cfg.num_iter))
        for t in t_range:

            for _, (optim, _) in optim_schedular_dict.items():
                optim.zero_grad()

            # Forward pass: render the image.
            scene_args = pydiffvg.RenderFunction.serialize_scene(
                w, h, shapes_record, shape_groups_record)
            img = render(w, h, 2, 2, t, None, *scene_args)

            # Compose img with the background color.
            img = img[:, :, 3:4] * img[:, :, :3] + \
                para_bg * (1 - img[:, :, 3:4])

            if cfg.save.video:
                filename = os.path.join(
                    cfg.experiment_dir, "video-png",
                    "{}-iter{}.png".format(pathn_record_str, t))
                check_and_create_dir(filename)
                if cfg.use_ycrcb:
                    imshow = ycrcb_conversion(
                        img, format='[2D x 3]', reverse=True).detach().cpu()
                else:
                    imshow = img.detach().cpu()
                pydiffvg.imwrite(imshow, filename, gamma=gamma)

            x = img.unsqueeze(0).permute(0, 3, 1, 2)  # HWC -> NCHW

            if cfg.use_ycrcb:
                color_reweight = torch.FloatTensor([255/219, 255/224, 255/255]).to(device)
                loss = ((x - gt) * (color_reweight.view(1, -1, 1, 1))) ** 2
            else:
                loss = (x - gt) ** 2

            if cfg.loss.use_l1_loss:
                loss = abs(x - gt)

            if cfg.loss.use_distance_weighted_loss:
                if cfg.use_ycrcb:
                    raise ValueError(
                        "use_distance_weighted_loss is not supported with use_ycrcb")
                shapes_forsdf = copy.deepcopy(shapes)
                shape_groups_forsdf = copy.deepcopy(shape_groups)
                for si in shapes_forsdf:
                    si.stroke_width = torch.FloatTensor([0]).to(device)
                for sg_idx, sgi in enumerate(shape_groups_forsdf):
                    sgi.fill_color = torch.FloatTensor([1, 1, 1, 1]).to(device)
                    sgi.shape_ids = torch.LongTensor([sg_idx]).to(device)

                sargs_forsdf = pydiffvg.RenderFunction.serialize_scene(
                    w, h, shapes_forsdf, shape_groups_forsdf)
                with torch.no_grad():
                    im_forsdf = render(w, h, 2, 2, 0, None, *sargs_forsdf)
                # Using the alpha channel is a trick to get a 0-1 coverage image.
                im_forsdf = (im_forsdf[:, :, 3]).detach().cpu().numpy()
                loss_weight = get_sdf(im_forsdf, normalize='to1')
                loss_weight += loss_weight_keep
                loss_weight = np.clip(loss_weight, 0, 1)
                loss_weight = torch.FloatTensor(loss_weight).to(device)

            if cfg.save.loss:
                save_loss = loss.squeeze(dim=0).mean(dim=0, keepdim=False).cpu().detach().numpy()
                save_weight = loss_weight.cpu().detach().numpy()
                save_weighted_loss = save_loss * save_weight
                # Normalize to [0, 1] for visualization.
                save_loss = (save_loss - np.min(save_loss)) / np.ptp(save_loss)
                save_weight = (save_weight - np.min(save_weight)) / np.ptp(save_weight)
                save_weighted_loss = (save_weighted_loss - np.min(save_weighted_loss)) / np.ptp(save_weighted_loss)

                # Save the visualizations.
                plt.imshow(save_loss, cmap='Reds')
                plt.axis('off')
                # plt.colorbar()
                filename = os.path.join(
                    cfg.experiment_dir, "loss",
                    "{}-iter{}-mseloss.png".format(pathn_record_str, t))
                check_and_create_dir(filename)
                plt.savefig(filename, dpi=800)
                plt.close()

                plt.imshow(save_weight, cmap='Greys')
                plt.axis('off')
                # plt.colorbar()
                filename = os.path.join(
                    cfg.experiment_dir, "loss",
                    "{}-iter{}-sdfweight.png".format(pathn_record_str, t))
                plt.savefig(filename, dpi=800)
                plt.close()

                plt.imshow(save_weighted_loss, cmap='Reds')
                plt.axis('off')
                # plt.colorbar()
                filename = os.path.join(
                    cfg.experiment_dir, "loss",
                    "{}-iter{}-weightedloss.png".format(pathn_record_str, t))
                plt.savefig(filename, dpi=800)
                plt.close()

            if loss_weight is None:
                loss = loss.sum(1).mean()
            else:
                loss = (loss.sum(1) * loss_weight).mean()
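The weighted reduction above can be illustrated with a small numpy sketch (a hypothetical 2x2 example; `loss_weight` here is a hand-made weight map, not the SDF weight produced by `get_sdf`):

```python
import numpy as np

# Per-pixel squared error, summed over channels -> shape [N, H, W].
x = np.array([[[[0.5, 1.0], [0.0, 0.2]]]])    # rendered image, N=1, C=1
gt = np.array([[[[1.0, 1.0], [0.0, 0.0]]]])   # target image
per_pixel = ((x - gt) ** 2).sum(axis=1)       # [1, 2, 2]

# Hand-made weight map standing in for the SDF weight (values in [0, 1]):
loss_weight = np.array([[0.5, 1.0], [1.0, 0.5]])

unweighted = per_pixel.mean()                  # plain MSE reduction
weighted = (per_pixel * loss_weight).mean()    # distance-weighted reduction
print(unweighted, weighted)  # ~0.0725 vs ~0.03625
```

Down-weighting pixels near already-covered regions (where the SDF weight is small) steers new paths toward the parts of the target that are still poorly reconstructed.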

            # if (cfg.loss.bis_loss_weight is not None) and (cfg.loss.bis_loss_weight > 0):
            #     loss_bis = bezier_intersection_loss(point_var[0]) * cfg.loss.bis_loss_weight
            #     loss = loss + loss_bis
            if (cfg.loss.xing_loss_weight is not None) \
                    and (cfg.loss.xing_loss_weight > 0):
                loss_xing = xing_loss(point_var) * cfg.loss.xing_loss_weight
                loss = loss + loss_xing

            loss_list.append(loss.item())
            t_range.set_postfix({'loss': loss.item()})
            loss.backward()

            # Optimizer and scheduler step.
            for _, (optim, scheduler) in optim_schedular_dict.items():
                optim.step()
                scheduler.step()

            for group in shape_groups_record:
                group.fill_color.data.clamp_(0.0, 1.0)

        if cfg.loss.use_distance_weighted_loss:
            loss_weight_keep = loss_weight.detach().cpu().numpy() * 1

        if not cfg.trainable.record:
            # pg is a list of param-group dicts; freeze every parameter in it.
            for group in pg:
                for ppi in group['params']:
                    ppi.requires_grad = False
            optim_schedular_dict = {}

        if cfg.save.image:
            filename = os.path.join(
                cfg.experiment_dir, "demo-png", "{}.png".format(pathn_record_str))
            check_and_create_dir(filename)
            if cfg.use_ycrcb:
                imshow = ycrcb_conversion(
                    img, format='[2D x 3]', reverse=True).detach().cpu()
            else:
                imshow = img.detach().cpu()
            pydiffvg.imwrite(imshow, filename, gamma=gamma)

        if cfg.save.output:
            filename = os.path.join(
                cfg.experiment_dir, "output-svg", "{}.svg".format(pathn_record_str))
            check_and_create_dir(filename)
            pydiffvg.save_svg(filename, w, h, shapes_record, shape_groups_record)

        loss_matrix.append(loss_list)

        # calculate the pixel loss
        # pixel_loss = ((x-gt)**2).sum(dim=1, keepdim=True).sqrt_() # [N, 1, H, W]
        # region_loss = adaptive_avg_pool2d(pixel_loss, cfg.region_loss_pool_size)
        # loss_weight = torch.softmax(region_loss.reshape(1, 1, -1), dim=-1)\
        #     .reshape_as(region_loss)

        if cfg.coord_init.type == 'naive':
            pos_init_method = naive_coord_init(x, gt)
        elif cfg.coord_init.type == 'sparse':
            pos_init_method = sparse_coord_init(x, gt)
        elif cfg.coord_init.type == 'random':
            pos_init_method = random_coord_init([h, w])
        else:
            raise ValueError("unknown coord_init type: {}".format(cfg.coord_init.type))

        if cfg.save.video:
            print("saving iteration video...")
            img_array = []
            for ii in range(0, cfg.num_iter):
                filename = os.path.join(
                    cfg.experiment_dir, "video-png",
                    "{}-iter{}.png".format(pathn_record_str, ii))
                img = cv2.imread(filename)
                # cv2.putText(
                #     img, "Path:{} \nIteration:{}".format(pathn_record_str, ii),
                #     (10, 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 1)
                img_array.append(img)

            videoname = os.path.join(
                cfg.experiment_dir, "video-mp4",
                "{}.mp4".format(pathn_record_str))
            check_and_create_dir(videoname)
            out = cv2.VideoWriter(
                videoname,
                cv2.VideoWriter_fourcc(*'mp4v'),
                # cv2.VideoWriter_fourcc(*'FFV1'),
                20.0, (w, h))
            for iii in range(len(img_array)):
                out.write(img_array[iii])
            out.release()
            # shutil.rmtree(os.path.join(cfg.experiment_dir, "video-png"))

    print("The last loss is: {}".format(loss.item()))
spaces/CVPR/LIVE/pybind11/tests/test_eigen.py
DELETED
@@ -1,697 +0,0 @@
# -*- coding: utf-8 -*-
import pytest
from pybind11_tests import ConstructorStats

np = pytest.importorskip("numpy")
m = pytest.importorskip("pybind11_tests.eigen")


ref = np.array([[ 0., 3, 0, 0, 0, 11],
                [22, 0, 0, 0, 17, 11],
                [ 7, 5, 0, 1, 0, 11],
                [ 0, 0, 0, 0, 0, 11],
                [ 0, 0, 14, 0, 8, 11]])


def assert_equal_ref(mat):
    np.testing.assert_array_equal(mat, ref)


def assert_sparse_equal_ref(sparse_mat):
    assert_equal_ref(sparse_mat.toarray())


def test_fixed():
    assert_equal_ref(m.fixed_c())
    assert_equal_ref(m.fixed_r())
    assert_equal_ref(m.fixed_copy_r(m.fixed_r()))
    assert_equal_ref(m.fixed_copy_c(m.fixed_c()))
    assert_equal_ref(m.fixed_copy_r(m.fixed_c()))
    assert_equal_ref(m.fixed_copy_c(m.fixed_r()))


def test_dense():
    assert_equal_ref(m.dense_r())
    assert_equal_ref(m.dense_c())
    assert_equal_ref(m.dense_copy_r(m.dense_r()))
    assert_equal_ref(m.dense_copy_c(m.dense_c()))
    assert_equal_ref(m.dense_copy_r(m.dense_c()))
    assert_equal_ref(m.dense_copy_c(m.dense_r()))


def test_partially_fixed():
    ref2 = np.array([[0., 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]])
    np.testing.assert_array_equal(m.partial_copy_four_rm_r(ref2), ref2)
    np.testing.assert_array_equal(m.partial_copy_four_rm_c(ref2), ref2)
    np.testing.assert_array_equal(m.partial_copy_four_rm_r(ref2[:, 1]), ref2[:, [1]])
    np.testing.assert_array_equal(m.partial_copy_four_rm_c(ref2[0, :]), ref2[[0], :])
    np.testing.assert_array_equal(m.partial_copy_four_rm_r(ref2[:, (0, 2)]), ref2[:, (0, 2)])
    np.testing.assert_array_equal(
        m.partial_copy_four_rm_c(ref2[(3, 1, 2), :]), ref2[(3, 1, 2), :])

    np.testing.assert_array_equal(m.partial_copy_four_cm_r(ref2), ref2)
    np.testing.assert_array_equal(m.partial_copy_four_cm_c(ref2), ref2)
    np.testing.assert_array_equal(m.partial_copy_four_cm_r(ref2[:, 1]), ref2[:, [1]])
    np.testing.assert_array_equal(m.partial_copy_four_cm_c(ref2[0, :]), ref2[[0], :])
    np.testing.assert_array_equal(m.partial_copy_four_cm_r(ref2[:, (0, 2)]), ref2[:, (0, 2)])
    np.testing.assert_array_equal(
        m.partial_copy_four_cm_c(ref2[(3, 1, 2), :]), ref2[(3, 1, 2), :])

    # TypeError should be raised for a shape mismatch
    functions = [m.partial_copy_four_rm_r, m.partial_copy_four_rm_c,
                 m.partial_copy_four_cm_r, m.partial_copy_four_cm_c]
    matrix_with_wrong_shape = [[1, 2],
                               [3, 4]]
    for f in functions:
        with pytest.raises(TypeError) as excinfo:
            f(matrix_with_wrong_shape)
        assert "incompatible function arguments" in str(excinfo.value)


def test_mutator_descriptors():
    zr = np.arange(30, dtype='float32').reshape(5, 6)  # row-major
    zc = zr.reshape(6, 5).transpose()  # column-major

    m.fixed_mutator_r(zr)
    m.fixed_mutator_c(zc)
    m.fixed_mutator_a(zr)
    m.fixed_mutator_a(zc)
    with pytest.raises(TypeError) as excinfo:
        m.fixed_mutator_r(zc)
    assert ('(arg0: numpy.ndarray[numpy.float32[5, 6],'
            ' flags.writeable, flags.c_contiguous]) -> None'
            in str(excinfo.value))
    with pytest.raises(TypeError) as excinfo:
        m.fixed_mutator_c(zr)
    assert ('(arg0: numpy.ndarray[numpy.float32[5, 6],'
            ' flags.writeable, flags.f_contiguous]) -> None'
            in str(excinfo.value))
    with pytest.raises(TypeError) as excinfo:
        m.fixed_mutator_a(np.array([[1, 2], [3, 4]], dtype='float32'))
    assert ('(arg0: numpy.ndarray[numpy.float32[5, 6], flags.writeable]) -> None'
            in str(excinfo.value))
    zr.flags.writeable = False
    with pytest.raises(TypeError):
        m.fixed_mutator_r(zr)
    with pytest.raises(TypeError):
        m.fixed_mutator_a(zr)


def test_cpp_casting():
    assert m.cpp_copy(m.fixed_r()) == 22.
    assert m.cpp_copy(m.fixed_c()) == 22.
    z = np.array([[5., 6], [7, 8]])
    assert m.cpp_copy(z) == 7.
    assert m.cpp_copy(m.get_cm_ref()) == 21.
    assert m.cpp_copy(m.get_rm_ref()) == 21.
    assert m.cpp_ref_c(m.get_cm_ref()) == 21.
    assert m.cpp_ref_r(m.get_rm_ref()) == 21.
    with pytest.raises(RuntimeError) as excinfo:
        # Can't reference m.fixed_c: it contains floats, m.cpp_ref_any wants doubles
        m.cpp_ref_any(m.fixed_c())
    assert 'Unable to cast Python instance' in str(excinfo.value)
    with pytest.raises(RuntimeError) as excinfo:
        # Can't reference m.fixed_r: it contains floats, m.cpp_ref_any wants doubles
        m.cpp_ref_any(m.fixed_r())
    assert 'Unable to cast Python instance' in str(excinfo.value)
    assert m.cpp_ref_any(m.ReturnTester.create()) == 1.

    assert m.cpp_ref_any(m.get_cm_ref()) == 21.
    assert m.cpp_ref_any(m.get_cm_ref()) == 21.


def test_pass_readonly_array():
    z = np.full((5, 6), 42.0)
    z.flags.writeable = False
    np.testing.assert_array_equal(z, m.fixed_copy_r(z))
    np.testing.assert_array_equal(m.fixed_r_const(), m.fixed_r())
    assert not m.fixed_r_const().flags.writeable
    np.testing.assert_array_equal(m.fixed_copy_r(m.fixed_r_const()), m.fixed_r_const())


def test_nonunit_stride_from_python():
    counting_mat = np.arange(9.0, dtype=np.float32).reshape((3, 3))
    second_row = counting_mat[1, :]
    second_col = counting_mat[:, 1]
    np.testing.assert_array_equal(m.double_row(second_row), 2.0 * second_row)
    np.testing.assert_array_equal(m.double_col(second_row), 2.0 * second_row)
    np.testing.assert_array_equal(m.double_complex(second_row), 2.0 * second_row)
    np.testing.assert_array_equal(m.double_row(second_col), 2.0 * second_col)
    np.testing.assert_array_equal(m.double_col(second_col), 2.0 * second_col)
    np.testing.assert_array_equal(m.double_complex(second_col), 2.0 * second_col)

    counting_3d = np.arange(27.0, dtype=np.float32).reshape((3, 3, 3))
    slices = [counting_3d[0, :, :], counting_3d[:, 0, :], counting_3d[:, :, 0]]
    for ref_mat in slices:
        np.testing.assert_array_equal(m.double_mat_cm(ref_mat), 2.0 * ref_mat)
        np.testing.assert_array_equal(m.double_mat_rm(ref_mat), 2.0 * ref_mat)

    # Mutator:
    m.double_threer(second_row)
    m.double_threec(second_col)
    np.testing.assert_array_equal(counting_mat, [[0., 2, 2], [6, 16, 10], [6, 14, 8]])


def test_negative_stride_from_python(msg):
    """Eigen doesn't support (as of yet) negative strides. When a function takes an Eigen matrix by
    copy or const reference, we can pass a numpy array that has negative strides. Otherwise, an
    exception will be thrown as Eigen will not be able to map the numpy array."""

    counting_mat = np.arange(9.0, dtype=np.float32).reshape((3, 3))
    counting_mat = counting_mat[::-1, ::-1]
    second_row = counting_mat[1, :]
    second_col = counting_mat[:, 1]
    np.testing.assert_array_equal(m.double_row(second_row), 2.0 * second_row)
    np.testing.assert_array_equal(m.double_col(second_row), 2.0 * second_row)
    np.testing.assert_array_equal(m.double_complex(second_row), 2.0 * second_row)
    np.testing.assert_array_equal(m.double_row(second_col), 2.0 * second_col)
    np.testing.assert_array_equal(m.double_col(second_col), 2.0 * second_col)
    np.testing.assert_array_equal(m.double_complex(second_col), 2.0 * second_col)

    counting_3d = np.arange(27.0, dtype=np.float32).reshape((3, 3, 3))
    counting_3d = counting_3d[::-1, ::-1, ::-1]
    slices = [counting_3d[0, :, :], counting_3d[:, 0, :], counting_3d[:, :, 0]]
    for ref_mat in slices:
        np.testing.assert_array_equal(m.double_mat_cm(ref_mat), 2.0 * ref_mat)
        np.testing.assert_array_equal(m.double_mat_rm(ref_mat), 2.0 * ref_mat)

    # Mutator:
    with pytest.raises(TypeError) as excinfo:
        m.double_threer(second_row)
    assert msg(excinfo.value) == """
        double_threer(): incompatible function arguments. The following argument types are supported:
            1. (arg0: numpy.ndarray[numpy.float32[1, 3], flags.writeable]) -> None

        Invoked with: """ + repr(np.array([ 5.,  4.,  3.], dtype='float32'))  # noqa: E501 line too long

    with pytest.raises(TypeError) as excinfo:
        m.double_threec(second_col)
    assert msg(excinfo.value) == """
        double_threec(): incompatible function arguments. The following argument types are supported:
            1. (arg0: numpy.ndarray[numpy.float32[3, 1], flags.writeable]) -> None

        Invoked with: """ + repr(np.array([ 7.,  4.,  1.], dtype='float32'))  # noqa: E501 line too long

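As the docstring above notes, the failure mode hinges on the reversed views having negative strides; a small numpy illustration (independent of the pybind11 test module):

```python
import numpy as np

a = np.arange(9.0, dtype=np.float32).reshape(3, 3)
rev = a[::-1, ::-1]              # reversed view, no copy is made
print(a.strides, rev.strides)    # (12, 4) vs (-12, -4)

# Eigen::Ref cannot map a buffer with negative strides, but an explicit
# copy is C-contiguous again and can be passed by value or const reference:
contig = np.ascontiguousarray(rev)
print(contig.strides)            # (12, 4)
```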
def test_nonunit_stride_to_python():
    assert np.all(m.diagonal(ref) == ref.diagonal())
    assert np.all(m.diagonal_1(ref) == ref.diagonal(1))
    for i in range(-5, 7):
        assert np.all(m.diagonal_n(ref, i) == ref.diagonal(i)), "m.diagonal_n({})".format(i)

    assert np.all(m.block(ref, 2, 1, 3, 3) == ref[2:5, 1:4])
    assert np.all(m.block(ref, 1, 4, 4, 2) == ref[1:, 4:])
    assert np.all(m.block(ref, 1, 4, 3, 2) == ref[1:4, 4:])


def test_eigen_ref_to_python():
    chols = [m.cholesky1, m.cholesky2, m.cholesky3, m.cholesky4]
    for i, chol in enumerate(chols, start=1):
        mymat = chol(np.array([[1., 2, 4], [2, 13, 23], [4, 23, 77]]))
        assert np.all(mymat == np.array([[1, 0, 0], [2, 3, 0], [4, 5, 6]])), "cholesky{}".format(i)


def assign_both(a1, a2, r, c, v):
    a1[r, c] = v
    a2[r, c] = v


def array_copy_but_one(a, r, c, v):
    z = np.array(a, copy=True)
    z[r, c] = v
    return z


def test_eigen_return_references():
    """Tests various ways of returning references and non-referencing copies"""

    master = np.ones((10, 10))
    a = m.ReturnTester()
    a_get1 = a.get()
    assert not a_get1.flags.owndata and a_get1.flags.writeable
    assign_both(a_get1, master, 3, 3, 5)
    a_get2 = a.get_ptr()
    assert not a_get2.flags.owndata and a_get2.flags.writeable
    assign_both(a_get1, master, 2, 3, 6)

    a_view1 = a.view()
    assert not a_view1.flags.owndata and not a_view1.flags.writeable
    with pytest.raises(ValueError):
        a_view1[2, 3] = 4
    a_view2 = a.view_ptr()
    assert not a_view2.flags.owndata and not a_view2.flags.writeable
    with pytest.raises(ValueError):
        a_view2[2, 3] = 4

    a_copy1 = a.copy_get()
    assert a_copy1.flags.owndata and a_copy1.flags.writeable
    np.testing.assert_array_equal(a_copy1, master)
    a_copy1[7, 7] = -44  # Shouldn't affect anything else
    c1want = array_copy_but_one(master, 7, 7, -44)
    a_copy2 = a.copy_view()
    assert a_copy2.flags.owndata and a_copy2.flags.writeable
    np.testing.assert_array_equal(a_copy2, master)
    a_copy2[4, 4] = -22  # Shouldn't affect anything else
    c2want = array_copy_but_one(master, 4, 4, -22)

    a_ref1 = a.ref()
    assert not a_ref1.flags.owndata and a_ref1.flags.writeable
    assign_both(a_ref1, master, 1, 1, 15)
    a_ref2 = a.ref_const()
    assert not a_ref2.flags.owndata and not a_ref2.flags.writeable
    with pytest.raises(ValueError):
        a_ref2[5, 5] = 33
    a_ref3 = a.ref_safe()
    assert not a_ref3.flags.owndata and a_ref3.flags.writeable
    assign_both(a_ref3, master, 0, 7, 99)
    a_ref4 = a.ref_const_safe()
    assert not a_ref4.flags.owndata and not a_ref4.flags.writeable
    with pytest.raises(ValueError):
        a_ref4[7, 0] = 987654321

    a_copy3 = a.copy_ref()
    assert a_copy3.flags.owndata and a_copy3.flags.writeable
    np.testing.assert_array_equal(a_copy3, master)
    a_copy3[8, 1] = 11
    c3want = array_copy_but_one(master, 8, 1, 11)
    a_copy4 = a.copy_ref_const()
    assert a_copy4.flags.owndata and a_copy4.flags.writeable
    np.testing.assert_array_equal(a_copy4, master)
    a_copy4[8, 4] = 88
    c4want = array_copy_but_one(master, 8, 4, 88)

    a_block1 = a.block(3, 3, 2, 2)
    assert not a_block1.flags.owndata and a_block1.flags.writeable
    a_block1[0, 0] = 55
    master[3, 3] = 55
    a_block2 = a.block_safe(2, 2, 3, 2)
    assert not a_block2.flags.owndata and a_block2.flags.writeable
    a_block2[2, 1] = -123
    master[4, 3] = -123
    a_block3 = a.block_const(6, 7, 4, 3)
    assert not a_block3.flags.owndata and not a_block3.flags.writeable
    with pytest.raises(ValueError):
        a_block3[2, 2] = -44444

    a_copy5 = a.copy_block(2, 2, 2, 3)
    assert a_copy5.flags.owndata and a_copy5.flags.writeable
    np.testing.assert_array_equal(a_copy5, master[2:4, 2:5])
    a_copy5[1, 1] = 777
    c5want = array_copy_but_one(master[2:4, 2:5], 1, 1, 777)

    a_corn1 = a.corners()
    assert not a_corn1.flags.owndata and a_corn1.flags.writeable
    a_corn1 *= 50
    a_corn1[1, 1] = 999
    master[0, 0] = 50
    master[0, 9] = 50
    master[9, 0] = 50
    master[9, 9] = 999
    a_corn2 = a.corners_const()
    assert not a_corn2.flags.owndata and not a_corn2.flags.writeable
    with pytest.raises(ValueError):
        a_corn2[1, 0] = 51

    # All of the changes made all the way along should be visible everywhere
    # now (except for the copies, of course)
    np.testing.assert_array_equal(a_get1, master)
    np.testing.assert_array_equal(a_get2, master)
    np.testing.assert_array_equal(a_view1, master)
    np.testing.assert_array_equal(a_view2, master)
    np.testing.assert_array_equal(a_ref1, master)
    np.testing.assert_array_equal(a_ref2, master)
    np.testing.assert_array_equal(a_ref3, master)
    np.testing.assert_array_equal(a_ref4, master)
    np.testing.assert_array_equal(a_block1, master[3:5, 3:5])
    np.testing.assert_array_equal(a_block2, master[2:5, 2:4])
    np.testing.assert_array_equal(a_block3, master[6:10, 7:10])
    np.testing.assert_array_equal(a_corn1, master[0::master.shape[0] - 1, 0::master.shape[1] - 1])
    np.testing.assert_array_equal(a_corn2, master[0::master.shape[0] - 1, 0::master.shape[1] - 1])

    np.testing.assert_array_equal(a_copy1, c1want)
    np.testing.assert_array_equal(a_copy2, c2want)
    np.testing.assert_array_equal(a_copy3, c3want)
    np.testing.assert_array_equal(a_copy4, c4want)
    np.testing.assert_array_equal(a_copy5, c5want)


def assert_keeps_alive(cl, method, *args):
    cstats = ConstructorStats.get(cl)
    start_with = cstats.alive()
    a = cl()
    assert cstats.alive() == start_with + 1
    z = method(a, *args)
    assert cstats.alive() == start_with + 1
    del a
    # Here's the keep alive in action:
    assert cstats.alive() == start_with + 1
    del z
    # Keep alive should have expired:
    assert cstats.alive() == start_with


def test_eigen_keepalive():
    a = m.ReturnTester()
    cstats = ConstructorStats.get(m.ReturnTester)
    assert cstats.alive() == 1
    unsafe = [a.ref(), a.ref_const(), a.block(1, 2, 3, 4)]
    copies = [a.copy_get(), a.copy_view(), a.copy_ref(), a.copy_ref_const(),
              a.copy_block(4, 3, 2, 1)]
    del a
    assert cstats.alive() == 0
    del unsafe
    del copies

    for meth in [m.ReturnTester.get, m.ReturnTester.get_ptr, m.ReturnTester.view,
                 m.ReturnTester.view_ptr, m.ReturnTester.ref_safe, m.ReturnTester.ref_const_safe,
                 m.ReturnTester.corners, m.ReturnTester.corners_const]:
        assert_keeps_alive(m.ReturnTester, meth)

    for meth in [m.ReturnTester.block_safe, m.ReturnTester.block_const]:
        assert_keeps_alive(m.ReturnTester, meth, 4, 3, 2, 1)


def test_eigen_ref_mutators():
    """Tests Eigen's ability to mutate numpy values"""

    orig = np.array([[1., 2, 3], [4, 5, 6], [7, 8, 9]])
    zr = np.array(orig)
    zc = np.array(orig, order='F')
    m.add_rm(zr, 1, 0, 100)
    assert np.all(zr == np.array([[1., 2, 3], [104, 5, 6], [7, 8, 9]]))
    m.add_cm(zc, 1, 0, 200)
    assert np.all(zc == np.array([[1., 2, 3], [204, 5, 6], [7, 8, 9]]))

    m.add_any(zr, 1, 0, 20)
    assert np.all(zr == np.array([[1., 2, 3], [124, 5, 6], [7, 8, 9]]))
    m.add_any(zc, 1, 0, 10)
    assert np.all(zc == np.array([[1., 2, 3], [214, 5, 6], [7, 8, 9]]))

    # Can't reference a col-major array with a row-major Ref, and vice versa:
    with pytest.raises(TypeError):
        m.add_rm(zc, 1, 0, 1)
    with pytest.raises(TypeError):
        m.add_cm(zr, 1, 0, 1)

    # Overloads:
    m.add1(zr, 1, 0, -100)
    m.add2(zr, 1, 0, -20)
    assert np.all(zr == orig)
    m.add1(zc, 1, 0, -200)
    m.add2(zc, 1, 0, -10)
    assert np.all(zc == orig)

    # A non-contiguous slice (this won't work on either the row- or
    # column-contiguous refs, but should work for the any)
    cornersr = zr[0::2, 0::2]
    cornersc = zc[0::2, 0::2]

    assert np.all(cornersr == np.array([[1., 3], [7, 9]]))
    assert np.all(cornersc == np.array([[1., 3], [7, 9]]))

    with pytest.raises(TypeError):
        m.add_rm(cornersr, 0, 1, 25)
    with pytest.raises(TypeError):
        m.add_cm(cornersr, 0, 1, 25)
    with pytest.raises(TypeError):
        m.add_rm(cornersc, 0, 1, 25)
    with pytest.raises(TypeError):
        m.add_cm(cornersc, 0, 1, 25)
    m.add_any(cornersr, 0, 1, 25)
-
m.add_any(cornersc, 0, 1, 44)
|
422 |
-
assert np.all(zr == np.array([[1., 2, 28], [4, 5, 6], [7, 8, 9]]))
|
423 |
-
assert np.all(zc == np.array([[1., 2, 47], [4, 5, 6], [7, 8, 9]]))
|
424 |
-
|
425 |
-
# You shouldn't be allowed to pass a non-writeable array to a mutating Eigen method:
|
426 |
-
zro = zr[0:4, 0:4]
|
427 |
-
zro.flags.writeable = False
|
428 |
-
with pytest.raises(TypeError):
|
429 |
-
m.add_rm(zro, 0, 0, 0)
|
430 |
-
with pytest.raises(TypeError):
|
431 |
-
m.add_any(zro, 0, 0, 0)
|
432 |
-
with pytest.raises(TypeError):
|
433 |
-
m.add1(zro, 0, 0, 0)
|
434 |
-
with pytest.raises(TypeError):
|
435 |
-
m.add2(zro, 0, 0, 0)
|
436 |
-
|
437 |
-
# integer array shouldn't be passable to a double-matrix-accepting mutating func:
|
438 |
-
zi = np.array([[1, 2], [3, 4]])
|
439 |
-
with pytest.raises(TypeError):
|
440 |
-
m.add_rm(zi)
|
441 |
-
|
442 |
-
|
443 |
-
def test_numpy_ref_mutators():
|
444 |
-
"""Tests numpy mutating Eigen matrices (for returned Eigen::Ref<...>s)"""
|
445 |
-
|
446 |
-
m.reset_refs() # In case another test already changed it
|
447 |
-
|
448 |
-
zc = m.get_cm_ref()
|
449 |
-
zcro = m.get_cm_const_ref()
|
450 |
-
zr = m.get_rm_ref()
|
451 |
-
zrro = m.get_rm_const_ref()
|
452 |
-
|
453 |
-
assert [zc[1, 2], zcro[1, 2], zr[1, 2], zrro[1, 2]] == [23] * 4
|
454 |
-
|
455 |
-
assert not zc.flags.owndata and zc.flags.writeable
|
456 |
-
assert not zr.flags.owndata and zr.flags.writeable
|
457 |
-
assert not zcro.flags.owndata and not zcro.flags.writeable
|
458 |
-
assert not zrro.flags.owndata and not zrro.flags.writeable
|
459 |
-
|
460 |
-
zc[1, 2] = 99
|
461 |
-
expect = np.array([[11., 12, 13], [21, 22, 99], [31, 32, 33]])
|
462 |
-
# We should have just changed zc, of course, but also zcro and the original eigen matrix
|
463 |
-
assert np.all(zc == expect)
|
464 |
-
assert np.all(zcro == expect)
|
465 |
-
assert np.all(m.get_cm_ref() == expect)
|
466 |
-
|
467 |
-
zr[1, 2] = 99
|
468 |
-
assert np.all(zr == expect)
|
469 |
-
assert np.all(zrro == expect)
|
470 |
-
assert np.all(m.get_rm_ref() == expect)
|
471 |
-
|
472 |
-
# Make sure the readonly ones are numpy-readonly:
|
473 |
-
with pytest.raises(ValueError):
|
474 |
-
zcro[1, 2] = 6
|
475 |
-
with pytest.raises(ValueError):
|
476 |
-
zrro[1, 2] = 6
|
477 |
-
|
478 |
-
# We should be able to explicitly copy like this (and since we're copying,
|
479 |
-
# the const should drop away)
|
480 |
-
y1 = np.array(m.get_cm_const_ref())
|
481 |
-
|
482 |
-
assert y1.flags.owndata and y1.flags.writeable
|
483 |
-
# We should get copies of the eigen data, which was modified above:
|
484 |
-
assert y1[1, 2] == 99
|
485 |
-
y1[1, 2] += 12
|
486 |
-
assert y1[1, 2] == 111
|
487 |
-
assert zc[1, 2] == 99 # Make sure we aren't referencing the original
|
488 |
-
|
489 |
-
|
490 |
-
def test_both_ref_mutators():
|
491 |
-
"""Tests a complex chain of nested eigen/numpy references"""
|
492 |
-
|
493 |
-
m.reset_refs() # In case another test already changed it
|
494 |
-
|
495 |
-
z = m.get_cm_ref() # numpy -> eigen
|
496 |
-
z[0, 2] -= 3
|
497 |
-
z2 = m.incr_matrix(z, 1) # numpy -> eigen -> numpy -> eigen
|
498 |
-
z2[1, 1] += 6
|
499 |
-
z3 = m.incr_matrix(z, 2) # (numpy -> eigen)^3
|
500 |
-
z3[2, 2] += -5
|
501 |
-
z4 = m.incr_matrix(z, 3) # (numpy -> eigen)^4
|
502 |
-
z4[1, 1] -= 1
|
503 |
-
z5 = m.incr_matrix(z, 4) # (numpy -> eigen)^5
|
504 |
-
z5[0, 0] = 0
|
505 |
-
assert np.all(z == z2)
|
506 |
-
assert np.all(z == z3)
|
507 |
-
assert np.all(z == z4)
|
508 |
-
assert np.all(z == z5)
|
509 |
-
expect = np.array([[0., 22, 20], [31, 37, 33], [41, 42, 38]])
|
510 |
-
assert np.all(z == expect)
|
511 |
-
|
512 |
-
y = np.array(range(100), dtype='float64').reshape(10, 10)
|
513 |
-
y2 = m.incr_matrix_any(y, 10) # np -> eigen -> np
|
514 |
-
y3 = m.incr_matrix_any(y2[0::2, 0::2], -33) # np -> eigen -> np slice -> np -> eigen -> np
|
515 |
-
y4 = m.even_rows(y3) # numpy -> eigen slice -> (... y3)
|
516 |
-
y5 = m.even_cols(y4) # numpy -> eigen slice -> (... y4)
|
517 |
-
y6 = m.incr_matrix_any(y5, 1000) # numpy -> eigen -> (... y5)
|
518 |
-
|
519 |
-
# Apply same mutations using just numpy:
|
520 |
-
yexpect = np.array(range(100), dtype='float64').reshape(10, 10)
|
521 |
-
yexpect += 10
|
522 |
-
yexpect[0::2, 0::2] -= 33
|
523 |
-
yexpect[0::4, 0::4] += 1000
|
524 |
-
assert np.all(y6 == yexpect[0::4, 0::4])
|
525 |
-
assert np.all(y5 == yexpect[0::4, 0::4])
|
526 |
-
assert np.all(y4 == yexpect[0::4, 0::2])
|
527 |
-
assert np.all(y3 == yexpect[0::2, 0::2])
|
528 |
-
assert np.all(y2 == yexpect)
|
529 |
-
assert np.all(y == yexpect)
|
530 |
-
|
531 |
-
|
532 |
-
def test_nocopy_wrapper():
|
533 |
-
# get_elem requires a column-contiguous matrix reference, but should be
|
534 |
-
# callable with other types of matrix (via copying):
|
535 |
-
int_matrix_colmajor = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], order='F')
|
536 |
-
dbl_matrix_colmajor = np.array(int_matrix_colmajor, dtype='double', order='F', copy=True)
|
537 |
-
int_matrix_rowmajor = np.array(int_matrix_colmajor, order='C', copy=True)
|
538 |
-
dbl_matrix_rowmajor = np.array(int_matrix_rowmajor, dtype='double', order='C', copy=True)
|
539 |
-
|
540 |
-
# All should be callable via get_elem:
|
541 |
-
assert m.get_elem(int_matrix_colmajor) == 8
|
542 |
-
assert m.get_elem(dbl_matrix_colmajor) == 8
|
543 |
-
assert m.get_elem(int_matrix_rowmajor) == 8
|
544 |
-
assert m.get_elem(dbl_matrix_rowmajor) == 8
|
545 |
-
|
546 |
-
# All but the second should fail with m.get_elem_nocopy:
|
547 |
-
with pytest.raises(TypeError) as excinfo:
|
548 |
-
m.get_elem_nocopy(int_matrix_colmajor)
|
549 |
-
assert ('get_elem_nocopy(): incompatible function arguments.' in str(excinfo.value) and
|
550 |
-
', flags.f_contiguous' in str(excinfo.value))
|
551 |
-
assert m.get_elem_nocopy(dbl_matrix_colmajor) == 8
|
552 |
-
with pytest.raises(TypeError) as excinfo:
|
553 |
-
m.get_elem_nocopy(int_matrix_rowmajor)
|
554 |
-
assert ('get_elem_nocopy(): incompatible function arguments.' in str(excinfo.value) and
|
555 |
-
', flags.f_contiguous' in str(excinfo.value))
|
556 |
-
with pytest.raises(TypeError) as excinfo:
|
557 |
-
m.get_elem_nocopy(dbl_matrix_rowmajor)
|
558 |
-
assert ('get_elem_nocopy(): incompatible function arguments.' in str(excinfo.value) and
|
559 |
-
', flags.f_contiguous' in str(excinfo.value))
|
560 |
-
|
561 |
-
# For the row-major test, we take a long matrix in row-major, so only the third is allowed:
|
562 |
-
with pytest.raises(TypeError) as excinfo:
|
563 |
-
m.get_elem_rm_nocopy(int_matrix_colmajor)
|
564 |
-
assert ('get_elem_rm_nocopy(): incompatible function arguments.' in str(excinfo.value) and
|
565 |
-
', flags.c_contiguous' in str(excinfo.value))
|
566 |
-
with pytest.raises(TypeError) as excinfo:
|
567 |
-
m.get_elem_rm_nocopy(dbl_matrix_colmajor)
|
568 |
-
assert ('get_elem_rm_nocopy(): incompatible function arguments.' in str(excinfo.value) and
|
569 |
-
', flags.c_contiguous' in str(excinfo.value))
|
570 |
-
assert m.get_elem_rm_nocopy(int_matrix_rowmajor) == 8
|
571 |
-
with pytest.raises(TypeError) as excinfo:
|
572 |
-
m.get_elem_rm_nocopy(dbl_matrix_rowmajor)
|
573 |
-
assert ('get_elem_rm_nocopy(): incompatible function arguments.' in str(excinfo.value) and
|
574 |
-
', flags.c_contiguous' in str(excinfo.value))
|
575 |
-
|
576 |
-
|
577 |
-
def test_eigen_ref_life_support():
|
578 |
-
"""Ensure the lifetime of temporary arrays created by the `Ref` caster
|
579 |
-
|
580 |
-
The `Ref` caster sometimes creates a copy which needs to stay alive. This needs to
|
581 |
-
happen both for directs casts (just the array) or indirectly (e.g. list of arrays).
|
582 |
-
"""
|
583 |
-
|
584 |
-
a = np.full(shape=10, fill_value=8, dtype=np.int8)
|
585 |
-
assert m.get_elem_direct(a) == 8
|
586 |
-
|
587 |
-
list_of_a = [a]
|
588 |
-
assert m.get_elem_indirect(list_of_a) == 8
|
589 |
-
|
590 |
-
|
591 |
-
def test_special_matrix_objects():
|
592 |
-
assert np.all(m.incr_diag(7) == np.diag([1., 2, 3, 4, 5, 6, 7]))
|
593 |
-
|
594 |
-
asymm = np.array([[ 1., 2, 3, 4],
|
595 |
-
[ 5, 6, 7, 8],
|
596 |
-
[ 9, 10, 11, 12],
|
597 |
-
[13, 14, 15, 16]])
|
598 |
-
symm_lower = np.array(asymm)
|
599 |
-
symm_upper = np.array(asymm)
|
600 |
-
for i in range(4):
|
601 |
-
for j in range(i + 1, 4):
|
602 |
-
symm_lower[i, j] = symm_lower[j, i]
|
603 |
-
symm_upper[j, i] = symm_upper[i, j]
|
604 |
-
|
605 |
-
assert np.all(m.symmetric_lower(asymm) == symm_lower)
|
606 |
-
assert np.all(m.symmetric_upper(asymm) == symm_upper)
|
607 |
-
|
608 |
-
|
609 |
-
def test_dense_signature(doc):
|
610 |
-
assert doc(m.double_col) == """
|
611 |
-
double_col(arg0: numpy.ndarray[numpy.float32[m, 1]]) -> numpy.ndarray[numpy.float32[m, 1]]
|
612 |
-
"""
|
613 |
-
assert doc(m.double_row) == """
|
614 |
-
double_row(arg0: numpy.ndarray[numpy.float32[1, n]]) -> numpy.ndarray[numpy.float32[1, n]]
|
615 |
-
"""
|
616 |
-
assert doc(m.double_complex) == ("""
|
617 |
-
double_complex(arg0: numpy.ndarray[numpy.complex64[m, 1]])"""
|
618 |
-
""" -> numpy.ndarray[numpy.complex64[m, 1]]
|
619 |
-
""")
|
620 |
-
assert doc(m.double_mat_rm) == ("""
|
621 |
-
double_mat_rm(arg0: numpy.ndarray[numpy.float32[m, n]])"""
|
622 |
-
""" -> numpy.ndarray[numpy.float32[m, n]]
|
623 |
-
""")
|
624 |
-
|
625 |
-
|
626 |
-
def test_named_arguments():
|
627 |
-
a = np.array([[1.0, 2], [3, 4], [5, 6]])
|
628 |
-
b = np.ones((2, 1))
|
629 |
-
|
630 |
-
assert np.all(m.matrix_multiply(a, b) == np.array([[3.], [7], [11]]))
|
631 |
-
assert np.all(m.matrix_multiply(A=a, B=b) == np.array([[3.], [7], [11]]))
|
632 |
-
assert np.all(m.matrix_multiply(B=b, A=a) == np.array([[3.], [7], [11]]))
|
633 |
-
|
634 |
-
with pytest.raises(ValueError) as excinfo:
|
635 |
-
m.matrix_multiply(b, a)
|
636 |
-
assert str(excinfo.value) == 'Nonconformable matrices!'
|
637 |
-
|
638 |
-
with pytest.raises(ValueError) as excinfo:
|
639 |
-
m.matrix_multiply(A=b, B=a)
|
640 |
-
assert str(excinfo.value) == 'Nonconformable matrices!'
|
641 |
-
|
642 |
-
with pytest.raises(ValueError) as excinfo:
|
643 |
-
m.matrix_multiply(B=a, A=b)
|
644 |
-
assert str(excinfo.value) == 'Nonconformable matrices!'
|
645 |
-
|
646 |
-
|
647 |
-
def test_sparse():
|
648 |
-
pytest.importorskip("scipy")
|
649 |
-
assert_sparse_equal_ref(m.sparse_r())
|
650 |
-
assert_sparse_equal_ref(m.sparse_c())
|
651 |
-
assert_sparse_equal_ref(m.sparse_copy_r(m.sparse_r()))
|
652 |
-
assert_sparse_equal_ref(m.sparse_copy_c(m.sparse_c()))
|
653 |
-
assert_sparse_equal_ref(m.sparse_copy_r(m.sparse_c()))
|
654 |
-
assert_sparse_equal_ref(m.sparse_copy_c(m.sparse_r()))
|
655 |
-
|
656 |
-
|
657 |
-
def test_sparse_signature(doc):
|
658 |
-
pytest.importorskip("scipy")
|
659 |
-
assert doc(m.sparse_copy_r) == """
|
660 |
-
sparse_copy_r(arg0: scipy.sparse.csr_matrix[numpy.float32]) -> scipy.sparse.csr_matrix[numpy.float32]
|
661 |
-
""" # noqa: E501 line too long
|
662 |
-
assert doc(m.sparse_copy_c) == """
|
663 |
-
sparse_copy_c(arg0: scipy.sparse.csc_matrix[numpy.float32]) -> scipy.sparse.csc_matrix[numpy.float32]
|
664 |
-
""" # noqa: E501 line too long
|
665 |
-
|
666 |
-
|
667 |
-
def test_issue738():
|
668 |
-
"""Ignore strides on a length-1 dimension (even if they would be incompatible length > 1)"""
|
669 |
-
assert np.all(m.iss738_f1(np.array([[1., 2, 3]])) == np.array([[1., 102, 203]]))
|
670 |
-
assert np.all(m.iss738_f1(np.array([[1.], [2], [3]])) == np.array([[1.], [12], [23]]))
|
671 |
-
|
672 |
-
assert np.all(m.iss738_f2(np.array([[1., 2, 3]])) == np.array([[1., 102, 203]]))
|
673 |
-
assert np.all(m.iss738_f2(np.array([[1.], [2], [3]])) == np.array([[1.], [12], [23]]))
|
674 |
-
|
675 |
-
|
676 |
-
def test_issue1105():
|
677 |
-
"""Issue 1105: 1xN or Nx1 input arrays weren't accepted for eigen
|
678 |
-
compile-time row vectors or column vector"""
|
679 |
-
assert m.iss1105_row(np.ones((1, 7)))
|
680 |
-
assert m.iss1105_col(np.ones((7, 1)))
|
681 |
-
|
682 |
-
# These should still fail (incompatible dimensions):
|
683 |
-
with pytest.raises(TypeError) as excinfo:
|
684 |
-
m.iss1105_row(np.ones((7, 1)))
|
685 |
-
assert "incompatible function arguments" in str(excinfo.value)
|
686 |
-
with pytest.raises(TypeError) as excinfo:
|
687 |
-
m.iss1105_col(np.ones((1, 7)))
|
688 |
-
assert "incompatible function arguments" in str(excinfo.value)
|
689 |
-
|
690 |
-
|
691 |
-
def test_custom_operator_new():
|
692 |
-
"""Using Eigen types as member variables requires a class-specific
|
693 |
-
operator new with proper alignment"""
|
694 |
-
|
695 |
-
o = m.CustomOperatorNew()
|
696 |
-
np.testing.assert_allclose(o.a, 0.0)
|
697 |
-
np.testing.assert_allclose(o.b.diagonal(), 1.0)
|
spaces/CVPR/LIVE/shape.cpp
DELETED
@@ -1,22 +0,0 @@
-#include "shape.h"
-
-void Path::copy_to(ptr<float> points, ptr<float> thickness) const {
-    float *p = points.get();
-    for (int i = 0; i < 2 * num_points; i++) {
-        p[i] = this->points[i];
-    }
-    if (this->thickness != nullptr) {
-        float *t = thickness.get();
-        for (int i = 0; i < num_points; i++) {
-            t[i] = this->thickness[i];
-        }
-    }
-}
-
-void ShapeGroup::copy_to(ptr<float> shape_to_canvas) const {
-    for (int i = 0; i < 3; i++) {
-        for (int j = 0; j < 3; j++) {
-            shape_to_canvas.get()[i * 3 + j] = this->shape_to_canvas(i, j);
-        }
-    }
-}
spaces/CVPR/WALT/mmdet/core/evaluation/eval_hooks.py
DELETED
@@ -1,303 +0,0 @@
-import os.path as osp
-import warnings
-from math import inf
-
-import mmcv
-import torch.distributed as dist
-from mmcv.runner import Hook
-from torch.nn.modules.batchnorm import _BatchNorm
-from torch.utils.data import DataLoader
-
-from mmdet.utils import get_root_logger
-
-
-class EvalHook(Hook):
-    """Evaluation hook.
-
-    Notes:
-        If new arguments are added for EvalHook, tools/test.py and
-        tools/analysis_tools/eval_metric.py may be affected.
-
-    Attributes:
-        dataloader (DataLoader): A PyTorch dataloader.
-        start (int, optional): Evaluation starting epoch. It enables evaluation
-            before the training starts if ``start`` <= the resuming epoch.
-            If None, whether to evaluate is merely decided by ``interval``.
-            Default: None.
-        interval (int): Evaluation interval (by epochs). Default: 1.
-        save_best (str, optional): If a metric is specified, it would measure
-            the best checkpoint during evaluation. The information about the best
-            checkpoint would be saved in best.json.
-            Options are the evaluation metrics to the test dataset. e.g.,
-            ``bbox_mAP``, ``segm_mAP`` for bbox detection and instance
-            segmentation. ``AR@100`` for proposal recall. If ``save_best`` is
-            ``auto``, the first key will be used. The interval of
-            ``CheckpointHook`` should be divisible by that of ``EvalHook``.
-            Default: None.
-        rule (str, optional): Comparison rule for best score. If set to None,
-            it will infer a reasonable rule. Keys such as 'mAP' or 'AR' will
-            be inferred by the 'greater' rule. Keys containing 'loss' will be
-            inferred by the 'less' rule. Options are 'greater', 'less'.
-            Default: None.
-        **eval_kwargs: Evaluation arguments fed into the evaluate function of
-            the dataset.
-    """
-
-    rule_map = {'greater': lambda x, y: x > y, 'less': lambda x, y: x < y}
-    init_value_map = {'greater': -inf, 'less': inf}
-    greater_keys = ['mAP', 'AR']
-    less_keys = ['loss']
-
-    def __init__(self,
-                 dataloader,
-                 start=None,
-                 interval=1,
-                 by_epoch=True,
-                 save_best=None,
-                 rule=None,
-                 **eval_kwargs):
-        if not isinstance(dataloader, DataLoader):
-            raise TypeError('dataloader must be a pytorch DataLoader, but got'
-                            f' {type(dataloader)}')
-        if not interval > 0:
-            raise ValueError(f'interval must be positive, but got {interval}')
-        if start is not None and start < 0:
-            warnings.warn(
-                f'The evaluation start epoch {start} is smaller than 0, '
-                f'use 0 instead', UserWarning)
-            start = 0
-        self.dataloader = dataloader
-        self.interval = interval
-        self.by_epoch = by_epoch
-        self.start = start
-        assert isinstance(save_best, str) or save_best is None
-        self.save_best = save_best
-        self.eval_kwargs = eval_kwargs
-        self.initial_epoch_flag = True
-
-        self.logger = get_root_logger()
-
-        if self.save_best is not None:
-            self._init_rule(rule, self.save_best)
-
-    def _init_rule(self, rule, key_indicator):
-        """Initialize rule, key_indicator, comparison_func, and best score.
-
-        Args:
-            rule (str | None): Comparison rule for best score.
-            key_indicator (str | None): Key indicator to determine the
-                comparison rule.
-        """
-        if rule not in self.rule_map and rule is not None:
-            raise KeyError(f'rule must be greater, less or None, '
-                           f'but got {rule}.')
-
-        if rule is None:
-            if key_indicator != 'auto':
-                if any(key in key_indicator for key in self.greater_keys):
-                    rule = 'greater'
-                elif any(key in key_indicator for key in self.less_keys):
-                    rule = 'less'
-                else:
-                    raise ValueError(f'Cannot infer the rule for key '
-                                     f'{key_indicator}, thus a specific rule '
-                                     f'must be specified.')
-        self.rule = rule
-        self.key_indicator = key_indicator
-        if self.rule is not None:
-            self.compare_func = self.rule_map[self.rule]
-
-    def before_run(self, runner):
-        if self.save_best is not None:
-            if runner.meta is None:
-                warnings.warn('runner.meta is None. Creating an empty one.')
-                runner.meta = dict()
-            runner.meta.setdefault('hook_msgs', dict())
-
-    def before_train_epoch(self, runner):
-        """Evaluate the model only at the start of training."""
-        if not self.initial_epoch_flag:
-            return
-        if self.start is not None and runner.epoch >= self.start:
-            self.after_train_epoch(runner)
-        self.initial_epoch_flag = False
-
-    def evaluation_flag(self, runner):
-        """Judge whether to perform evaluation after this epoch.
-
-        Returns:
-            bool: The flag indicating whether to perform evaluation.
-        """
-        if self.start is None:
-            if not self.every_n_epochs(runner, self.interval):
-                # No evaluation during the interval epochs.
-                return False
-        elif (runner.epoch + 1) < self.start:
-            # No evaluation if start is larger than the current epoch.
-            return False
-        else:
-            # Evaluation only at epochs 3, 5, 7... if start==3 and interval==2
-            if (runner.epoch + 1 - self.start) % self.interval:
-                return False
-        return True
-
-    def after_train_epoch(self, runner):
-        if not self.by_epoch or not self.evaluation_flag(runner):
-            return
-        from mmdet.apis import single_gpu_test
-        results = single_gpu_test(runner.model, self.dataloader, show=False)
-        key_score = self.evaluate(runner, results)
-        if self.save_best:
-            self.save_best_checkpoint(runner, key_score)
-
-    def after_train_iter(self, runner):
-        if self.by_epoch or not self.every_n_iters(runner, self.interval):
-            return
-        from mmdet.apis import single_gpu_test
-        results = single_gpu_test(runner.model, self.dataloader, show=False)
-        key_score = self.evaluate(runner, results)
-        if self.save_best:
-            self.save_best_checkpoint(runner, key_score)
-
-    def save_best_checkpoint(self, runner, key_score):
-        best_score = runner.meta['hook_msgs'].get(
-            'best_score', self.init_value_map[self.rule])
-        if self.compare_func(key_score, best_score):
-            best_score = key_score
-            runner.meta['hook_msgs']['best_score'] = best_score
-            last_ckpt = runner.meta['hook_msgs']['last_ckpt']
-            runner.meta['hook_msgs']['best_ckpt'] = last_ckpt
-            mmcv.symlink(
-                last_ckpt,
-                osp.join(runner.work_dir, f'best_{self.key_indicator}.pth'))
-            time_stamp = runner.epoch + 1 if self.by_epoch else runner.iter + 1
-            self.logger.info(f'Now best checkpoint is epoch_{time_stamp}.pth. '
-                             f'Best {self.key_indicator} is {best_score:0.4f}')
-
-    def evaluate(self, runner, results):
-        eval_res = self.dataloader.dataset.evaluate(
-            results, logger=runner.logger, **self.eval_kwargs)
-        for name, val in eval_res.items():
-            runner.log_buffer.output[name] = val
-        runner.log_buffer.ready = True
-        if self.save_best is not None:
-            if self.key_indicator == 'auto':
-                # infer from eval_results
-                self._init_rule(self.rule, list(eval_res.keys())[0])
-            return eval_res[self.key_indicator]
-        else:
-            return None
-
-
-class DistEvalHook(EvalHook):
-    """Distributed evaluation hook.
-
-    Notes:
-        If new arguments are added, tools/test.py may be affected.
-
-    Attributes:
-        dataloader (DataLoader): A PyTorch dataloader.
-        start (int, optional): Evaluation starting epoch. It enables evaluation
-            before the training starts if ``start`` <= the resuming epoch.
-            If None, whether to evaluate is merely decided by ``interval``.
-            Default: None.
-        interval (int): Evaluation interval (by epochs). Default: 1.
-        tmpdir (str | None): Temporary directory to save the results of all
-            processes. Default: None.
-        gpu_collect (bool): Whether to use gpu or cpu to collect results.
-            Default: False.
-        save_best (str, optional): If a metric is specified, it would measure
-            the best checkpoint during evaluation. The information about the best
-            checkpoint would be saved in best.json.
-            Options are the evaluation metrics to the test dataset. e.g.,
-            ``bbox_mAP``, ``segm_mAP`` for bbox detection and instance
-            segmentation. ``AR@100`` for proposal recall. If ``save_best`` is
-            ``auto``, the first key will be used. The interval of
-            ``CheckpointHook`` should be divisible by that of ``EvalHook``.
-            Default: None.
-        rule (str | None): Comparison rule for best score. If set to None,
-            it will infer a reasonable rule. Default: None.
-        broadcast_bn_buffer (bool): Whether to broadcast the
-            buffers (running_mean and running_var) of rank 0 to other ranks
-            before evaluation. Default: True.
-        **eval_kwargs: Evaluation arguments fed into the evaluate function of
-            the dataset.
-    """
-
-    def __init__(self,
-                 dataloader,
-                 start=None,
-                 interval=1,
-                 by_epoch=True,
-                 tmpdir=None,
-                 gpu_collect=False,
-                 save_best=None,
-                 rule=None,
-                 broadcast_bn_buffer=True,
-                 **eval_kwargs):
-        super().__init__(
-            dataloader,
-            start=start,
-            interval=interval,
-            by_epoch=by_epoch,
-            save_best=save_best,
-            rule=rule,
-            **eval_kwargs)
-        self.broadcast_bn_buffer = broadcast_bn_buffer
-        self.tmpdir = tmpdir
-        self.gpu_collect = gpu_collect
-
-    def _broadcast_bn_buffer(self, runner):
-        # Synchronization of BatchNorm's buffer (running_mean
-        # and running_var) is not supported in the DDP of pytorch,
-        # which may cause the inconsistent performance of models in
-        # different ranks, so we broadcast BatchNorm's buffers
-        # of rank 0 to other ranks to avoid this.
-        if self.broadcast_bn_buffer:
-            model = runner.model
-            for name, module in model.named_modules():
-                if isinstance(module,
-                              _BatchNorm) and module.track_running_stats:
-                    dist.broadcast(module.running_var, 0)
-                    dist.broadcast(module.running_mean, 0)
-
-    def after_train_epoch(self, runner):
-        if not self.by_epoch or not self.evaluation_flag(runner):
-            return
-
-        if self.broadcast_bn_buffer:
-            self._broadcast_bn_buffer(runner)
-
-        from mmdet.apis import multi_gpu_test
-        tmpdir = self.tmpdir
-        if tmpdir is None:
-            tmpdir = osp.join(runner.work_dir, '.eval_hook')
-        results = multi_gpu_test(
-            runner.model,
-            self.dataloader,
-            tmpdir=tmpdir,
-            gpu_collect=self.gpu_collect)
-        if runner.rank == 0:
-            print('\n')
-            key_score = self.evaluate(runner, results)
-            if self.save_best:
-                self.save_best_checkpoint(runner, key_score)
-
-    def after_train_iter(self, runner):
-        if self.by_epoch or not self.every_n_iters(runner, self.interval):
-            return
-
-        if self.broadcast_bn_buffer:
-            self._broadcast_bn_buffer(runner)
-
-        from mmdet.apis import multi_gpu_test
-        tmpdir = self.tmpdir
-        if tmpdir is None:
-            tmpdir = osp.join(runner.work_dir, '.eval_hook')
-        results = multi_gpu_test(
-            runner.model,
-            self.dataloader,
-            tmpdir=tmpdir,
-            gpu_collect=self.gpu_collect)
-        if runner.rank == 0:
-            print('\n')
-            key_score = self.evaluate(runner, results)
-            if self.save_best:
-                self.save_best_checkpoint(runner, key_score)
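The metric-name rule inference in the deleted EvalHook's `_init_rule` can be sketched standalone. This is a hypothetical plain-Python extraction (the names mirror the hook's class attributes, but it is not the mmdet API):

```python
from math import inf

# Mirrors EvalHook's class attributes:
rule_map = {'greater': lambda x, y: x > y, 'less': lambda x, y: x < y}
init_value_map = {'greater': -inf, 'less': inf}
greater_keys = ['mAP', 'AR']  # higher is better
less_keys = ['loss']          # lower is better


def infer_rule(key_indicator):
    """Pick a comparison rule from the metric name, as the hook does."""
    if any(key in key_indicator for key in greater_keys):
        return 'greater'
    if any(key in key_indicator for key in less_keys):
        return 'less'
    raise ValueError(f'Cannot infer the rule for key {key_indicator}, '
                     f'thus a specific rule must be specified.')


# mAP-style metrics start the best score at -inf and improve upward;
# losses start at +inf and improve downward:
rule = infer_rule('bbox_mAP')
assert rule == 'greater'
assert rule_map[rule](0.5, init_value_map[rule])  # any real score beats -inf
assert infer_rule('total_loss') == 'less'
```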
spaces/CVPR/transfiner/configs/common/train.py
DELETED
@@ -1,18 +0,0 @@
-# Common training-related configs that are designed for "tools/lazyconfig_train_net.py"
-# You can use your own instead, together with your own train_net.py
-train = dict(
-    output_dir="./output",
-    init_checkpoint="detectron2://ImageNetPretrained/MSRA/R-50.pkl",
-    max_iter=90000,
-    amp=dict(enabled=False),  # options for Automatic Mixed Precision
-    ddp=dict(  # options for DistributedDataParallel
-        broadcast_buffers=False,
-        find_unused_parameters=False,
-        fp16_compression=False,
-    ),
-    checkpointer=dict(period=5000, max_to_keep=100),  # options for PeriodicCheckpointer
-    eval_period=5000,
-    log_period=20,
-    device="cuda"
-    # ...
-)
spaces/ChrisPreston/diff-svc_minato_aqua/modules/nsf_hifigan/nvSTFT.py
DELETED
@@ -1,120 +0,0 @@
-import os
-
-os.environ["LRU_CACHE_CAPACITY"] = "3"
-import torch
-import torch.utils.data
-import numpy as np
-import librosa
-from librosa.filters import mel as librosa_mel_fn
-import soundfile as sf
-
-
-def load_wav_to_torch(full_path, target_sr=None, return_empty_on_exception=False):
-    sampling_rate = None
-    try:
-        data, sampling_rate = sf.read(full_path, always_2d=True)  # than soundfile.
-    except Exception as ex:
-        print(f"'{full_path}' failed to load.\nException:")
-        print(ex)
-        if return_empty_on_exception:
-            return [], sampling_rate or target_sr or 48000
-        else:
-            raise Exception(ex)
-
-    if len(data.shape) > 1:
-        data = data[:, 0]
-        assert len(
-            data) > 2  # check duration of audio file is > 2 samples (because otherwise the slice operation was on the wrong dimension)
-
-    if np.issubdtype(data.dtype, np.integer):  # if audio data is type int
-        max_mag = -np.iinfo(data.dtype).min  # maximum magnitude = min possible value of intXX
-    else:  # if audio data is type fp32
-        max_mag = max(np.amax(data), -np.amin(data))
-        max_mag = (2 ** 31) + 1 if max_mag > (2 ** 15) else ((
-            2 ** 15) + 1 if max_mag > 1.01 else 1.0)  # data should be either 16-bit INT, 32-bit INT or [-1 to 1] float32
-
-    data = torch.FloatTensor(data.astype(np.float32)) / max_mag
-
-    if (torch.isinf(data) | torch.isnan(
-            data)).any() and return_empty_on_exception:  # resample will crash with inf/NaN inputs. return_empty_on_exception will return empty arr instead of except
-        return [], sampling_rate or target_sr or 48000
-    if target_sr is not None and sampling_rate != target_sr:
-        data = torch.from_numpy(librosa.core.resample(data.numpy(), orig_sr=sampling_rate, target_sr=target_sr))
-        sampling_rate = target_sr
-
-    return data, sampling_rate
-
-
-def dynamic_range_compression(x, C=1, clip_val=1e-5):
-    return np.log(np.clip(x, a_min=clip_val, a_max=None) * C)
-
-
-def dynamic_range_decompression(x, C=1):
-    return np.exp(x) / C
-
-
-def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
-    return torch.log(torch.clamp(x, min=clip_val) * C)
-
-
-def dynamic_range_decompression_torch(x, C=1):
-    return torch.exp(x) / C
-
-
-class STFT():
-    def __init__(self, sr=22050, n_mels=80, n_fft=1024, win_size=1024, hop_length=256, fmin=20, fmax=11025,
-                 clip_val=1e-5):
-        self.target_sr = sr
-
-        self.n_mels = n_mels
-        self.n_fft = n_fft
-        self.win_size = win_size
-        self.hop_length = hop_length
-        self.fmin = fmin
-        self.fmax = fmax
-        self.clip_val = clip_val
-        self.mel_basis = {}
-        self.hann_window = {}
-
-    def get_mel(self, y, center=False):
-        sampling_rate = self.target_sr
-        n_mels = self.n_mels
-        n_fft = self.n_fft
-        win_size = self.win_size
-        hop_length = self.hop_length
-        fmin = self.fmin
-        fmax = self.fmax
-        clip_val = self.clip_val
-
-        if torch.min(y) < -1.:
-            print('min value is ', torch.min(y))
-        if torch.max(y) > 1.:
-            print('max value is ', torch.max(y))
-
-        if fmax not in self.mel_basis:
-            mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=n_mels, fmin=fmin, fmax=fmax)
-            self.mel_basis[str(fmax) + '_' + str(y.device)] = torch.from_numpy(mel).float().to(y.device)
-            self.hann_window[str(y.device)] = torch.hann_window(self.win_size).to(y.device)
-
-        y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft - hop_length) / 2), int((n_fft - hop_length) / 2)),
-                                    mode='reflect')
-        y = y.squeeze(1)
-
-        spec = torch.stft(y, n_fft, hop_length=hop_length, win_length=win_size, window=self.hann_window[str(y.device)],
-                          center=center, pad_mode='reflect', normalized=False, onesided=True)
-        # print(111,spec)
-        spec = torch.sqrt(spec.pow(2).sum(-1) + (1e-9))
-        # print(222,spec)
-        spec = torch.matmul(self.mel_basis[str(fmax) + '_' + str(y.device)], spec)
-        # print(333,spec)
-        spec = dynamic_range_compression_torch(spec, clip_val=clip_val)
-        # print(444,spec)
-        return spec
-
-    def __call__(self, audiopath):
-        audio, sr = load_wav_to_torch(audiopath, target_sr=self.target_sr)
-        spect = self.get_mel(audio.unsqueeze(0)).squeeze(0)
-        return spect
-
-
-stft = STFT()
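For context on the deleted nvSTFT module: its dynamic-range compression/decompression pair is just a clipped log and its exponential inverse, applied to mel-spectrogram magnitudes. A minimal scalar sketch of that transform, using plain `math` instead of the module's numpy/torch versions (an illustrative simplification, not the deleted code itself):

```python
import math

def dynamic_range_compression(x, C=1, clip_val=1e-5):
    # clip away from zero, then log-compress (mirrors the deleted numpy version)
    return math.log(max(x, clip_val) * C)

def dynamic_range_decompression(x, C=1):
    # exact inverse of the compression above (for inputs above clip_val)
    return math.exp(x) / C

v = dynamic_range_compression(0.5)
print(round(dynamic_range_decompression(v), 6))
```

The `clip_val` floor is what keeps `log` finite on silent frames; below it, the round-trip is intentionally lossy.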
spaces/CikeyQI/Yunzai/Yunzai/lib/config/log.js
DELETED
@@ -1,98 +0,0 @@
-import log4js from 'log4js'
-import chalk from 'chalk'
-import cfg from './config.js'
-import fs from 'node:fs'
-
-/**
- * Configure logging styles
- */
-export default function setLog () {
-  let file = './logs'
-  if (!fs.existsSync(file)) {
-    fs.mkdirSync(file)
-  }
-
-  /** Adjust the error log level */
-  // log4js.levels.levels[5].level = Number.MAX_VALUE
-  // log4js.levels.levels.sort((a, b) => a.level - b.level)
-
-  log4js.configure({
-    appenders: {
-      console: {
-        type: 'console',
-        layout: {
-          type: 'pattern',
-          pattern: '%[[TRSSYz][%d{hh:mm:ss.SSS}][%4.4p]%] %m'
-        }
-      },
-      command: {
-        type: 'dateFile', // can be console, dateFile, file, Logstash, etc.
-        filename: 'logs/command', // the file name is built from filename plus pattern
-        pattern: 'yyyy-MM-dd.log',
-        numBackups: 15,
-        alwaysIncludePattern: true,
-        layout: {
-          type: 'pattern',
-          pattern: '[%d{hh:mm:ss.SSS}][%4.4p] %m'
-        }
-      },
-      error: {
-        type: 'file',
-        filename: 'logs/error.log',
-        alwaysIncludePattern: true,
-        layout: {
-          type: 'pattern',
-          pattern: '[%d{hh:mm:ss.SSS}][%4.4p] %m'
-        }
-      }
-    },
-    categories: {
-      default: { appenders: ['console'], level: cfg.bot.log_level },
-      command: { appenders: ['console', 'command'], level: 'warn' },
-      error: { appenders: ['console', 'command', 'error'], level: 'error' }
-    }
-  })
-
-  const defaultLogger = log4js.getLogger('message')
-  const commandLogger = log4js.getLogger('command')
-  const errorLogger = log4js.getLogger('error')
-
-  /* eslint-disable no-useless-call */
-  /** Global logger variable */
-  global.logger = {
-    trace () {
-      defaultLogger.trace.call(defaultLogger, ...arguments)
-    },
-    debug () {
-      defaultLogger.debug.call(defaultLogger, ...arguments)
-    },
-    info () {
-      defaultLogger.info.call(defaultLogger, ...arguments)
-    },
-    // warn and above use the error strategy
-    warn () {
-      commandLogger.warn.call(defaultLogger, ...arguments)
-    },
-    error () {
-      errorLogger.error.call(errorLogger, ...arguments)
-    },
-    fatal () {
-      errorLogger.fatal.call(errorLogger, ...arguments)
-    },
-    mark () {
-      errorLogger.mark.call(commandLogger, ...arguments)
-    }
-  }
-
-  logColor()
-}
-
-function logColor () {
-  logger.chalk = chalk
-  logger.red = chalk.red
-  logger.green = chalk.green
-  logger.yellow = chalk.yellow
-  logger.blue = chalk.blue
-  logger.magenta = chalk.magenta
-  logger.cyan = chalk.cyan
-}
spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/resources/common/common.css
DELETED
@@ -1,458 +0,0 @@
-@font-face {
-  font-family: 'Number';
-  src: url("./font/tttgbnumber.woff") format('woff'), url("./font/tttgbnumber.ttf") format('truetype');
-}
-@font-face {
-  font-family: 'NZBZ';
-  src: url("./font/NZBZ.woff") format('woff'), url("./font/NZBZ.ttf") format('truetype');
-}
-@font-face {
-  font-family: 'YS';
-  src: url("./font/HYWH-65W.woff") format('woff'), url("./font/HYWH-65W.ttf") format('truetype');
-}
-.font-YS {
-  font-family: Number, "汉仪文黑-65W", YS, PingFangSC-Medium, "PingFang SC", sans-serif;
-}
-.font-NZBZ {
-  font-family: Number, "印品南征北战NZBZ体", NZBZ, "汉仪文黑-65W", YS, PingFangSC-Medium, "PingFang SC", sans-serif;
-}
-* {
-  margin: 0;
-  padding: 0;
-  box-sizing: border-box;
-  -webkit-user-select: none;
-  user-select: none;
-}
-body {
-  font-size: 18px;
-  color: #1e1f20;
-  font-family: Number, "汉仪文黑-65W", YS, PingFangSC-Medium, "PingFang SC", sans-serif;
-  transform: scale(1.4);
-  transform-origin: 0 0;
-  width: 600px;
-}
-.container {
-  width: 600px;
-  padding: 20px 15px 10px 15px;
-  background-size: contain;
-}
-.head-box {
-  border-radius: 15px;
-  padding: 10px 20px;
-  position: relative;
-  color: #fff;
-  margin-top: 30px;
-}
-.head-box .title {
-  font-family: Number, "印品南征北战NZBZ体", NZBZ, "汉仪文黑-65W", YS, PingFangSC-Medium, "PingFang SC", sans-serif;
-  font-size: 36px;
-  text-shadow: 0 0 1px #000, 1px 1px 3px rgba(0, 0, 0, 0.9);
-}
-.head-box .title .label {
-  display: inline-block;
-  margin-left: 10px;
-}
-.head-box .genshin_logo {
-  position: absolute;
-  top: 1px;
-  right: 15px;
-  width: 97px;
-}
-.head-box .label {
-  font-size: 16px;
-  text-shadow: 0 0 1px #000, 1px 1px 3px rgba(0, 0, 0, 0.9);
-}
-.head-box .label span {
-  color: #d3bc8e;
-  padding: 0 2px;
-}
-.notice {
-  color: #888;
-  font-size: 12px;
-  text-align: right;
-  padding: 12px 5px 5px;
-}
-.notice-center {
-  color: #fff;
-  text-align: center;
-  margin-bottom: 10px;
-  text-shadow: 1px 1px 1px #333;
-}
-.copyright {
-  font-size: 14px;
-  text-align: center;
-  color: #fff;
-  position: relative;
-  padding-left: 10px;
-  text-shadow: 1px 1px 1px #000;
-  margin: 10px 0;
-}
-.copyright .version {
-  color: #d3bc8e;
-  display: inline-block;
-  padding: 0 3px;
-}
-/* */
-.cons {
-  display: inline-block;
-  vertical-align: middle;
-  padding: 0 5px;
-  border-radius: 4px;
-}
-.cons-0 {
-  background: #666;
-  color: #fff;
-}
-.cons-n0 {
-  background: #404949;
-  color: #fff;
-}
-.cons-1 {
-  background: #5cbac2;
-  color: #fff;
-}
-.cons-2 {
-  background: #339d61;
-  color: #fff;
-}
-.cons-3 {
-  background: #3e95b9;
-  color: #fff;
-}
-.cons-4 {
-  background: #3955b7;
-  color: #fff;
-}
-.cons-5 {
-  background: #531ba9cf;
-  color: #fff;
-}
-.cons-6 {
-  background: #ff5722;
-  color: #fff;
-}
-.cons2-0 {
-  border-radius: 4px;
-  background: #666;
-  color: #fff;
-}
-.cons2-1 {
-  border-radius: 4px;
-  background: #71b1b7;
-  color: #fff;
-}
-.cons2-2 {
-  border-radius: 4px;
-  background: #369961;
-  color: #fff;
-}
-.cons2-3 {
-  border-radius: 4px;
-  background: #4596b9;
-  color: #fff;
-}
-.cons2-4 {
-  border-radius: 4px;
-  background: #4560b9;
-  color: #fff;
-}
-.cons2-5 {
-  border-radius: 4px;
-  background: #531ba9cf;
-  color: #fff;
-}
-.cons2-6 {
-  border-radius: 4px;
-  background: #ff5722;
-  color: #fff;
-}
-/******** Fetter ********/
-.fetter {
-  width: 50px;
-  height: 50px;
-  display: inline-block;
-  background: url('./item/fetter.png');
-  background-size: auto 100%;
-}
-.fetter.fetter1 {
-  background-position: 0% 0;
-}
-.fetter.fetter2 {
-  background-position: 11.11111111% 0;
-}
-.fetter.fetter3 {
-  background-position: 22.22222222% 0;
-}
-.fetter.fetter4 {
-  background-position: 33.33333333% 0;
-}
-.fetter.fetter5 {
-  background-position: 44.44444444% 0;
-}
-.fetter.fetter6 {
-  background-position: 55.55555556% 0;
-}
-.fetter.fetter7 {
-  background-position: 66.66666667% 0;
-}
-.fetter.fetter8 {
-  background-position: 77.77777778% 0;
-}
-.fetter.fetter9 {
-  background-position: 88.88888889% 0;
-}
-.fetter.fetter10 {
-  background-position: 100% 0;
-}
-/******** ELEM ********/
-.elem-hydro .talent-icon {
-  background-image: url("./bg/talent-hydro.png");
-}
-.elem-hydro .elem-bg,
-.hydro-bg {
-  background-image: url("./bg/bg-hydro.jpg");
-}
-.elem-anemo .talent-icon {
-  background-image: url("./bg/talent-anemo.png");
-}
-.elem-anemo .elem-bg,
-.anemo-bg {
-  background-image: url("./bg/bg-anemo.jpg");
-}
-.elem-cryo .talent-icon {
-  background-image: url("./bg/talent-cryo.png");
-}
-.elem-cryo .elem-bg,
-.cryo-bg {
-  background-image: url("./bg/bg-cryo.jpg");
-}
-.elem-electro .talent-icon {
-  background-image: url("./bg/talent-electro.png");
-}
-.elem-electro .elem-bg,
-.electro-bg {
-  background-image: url("./bg/bg-electro.jpg");
-}
-.elem-geo .talent-icon {
-  background-image: url("./bg/talent-geo.png");
-}
-.elem-geo .elem-bg,
-.geo-bg {
-  background-image: url("./bg/bg-geo.jpg");
-}
-.elem-pyro .talent-icon {
-  background-image: url("./bg/talent-pyro.png");
-}
-.elem-pyro .elem-bg,
-.pyro-bg {
-  background-image: url("./bg/bg-pyro.jpg");
-}
-.elem-dendro .talent-icon {
-  background-image: url("./bg/talent-dendro.png");
-}
-.elem-dendro .elem-bg,
-.dendro-bg {
-  background-image: url("./bg/bg-dendro.jpg");
-}
-/* cont */
-.cont {
-  border-radius: 10px;
-  background: url("../common/cont/card-bg.png") top left repeat-x;
-  background-size: auto 100%;
-  margin: 5px 15px 5px 10px;
-  position: relative;
-  box-shadow: 0 0 1px 0 #ccc, 2px 2px 4px 0 rgba(50, 50, 50, 0.8);
-  overflow: hidden;
-  color: #fff;
-  font-size: 16px;
-}
-.cont-title {
-  background: rgba(0, 0, 0, 0.4);
-  box-shadow: 0 0 1px 0 #fff;
-  color: #d3bc8e;
-  padding: 10px 20px;
-  text-align: left;
-  border-radius: 10px 10px 0 0;
-}
-.cont-title span {
-  font-size: 12px;
-  color: #aaa;
-  margin-left: 10px;
-  font-weight: normal;
-}
-.cont-title.border-less {
-  background: linear-gradient(rgba(0, 0, 0, 0.5), rgba(0, 0, 0, 0));
-  box-shadow: none;
-  padding-bottom: 5px;
-}
-.cont-body {
-  padding: 10px 15px;
-  font-size: 12px;
-  background: rgba(0, 0, 0, 0.5);
-  box-shadow: 0 0 1px 0 #fff;
-  font-weight: normal;
-}
-.cont-footer {
-  padding: 10px 15px;
-  font-size: 12px;
-  background: rgba(0, 0, 0, 0.5);
-  font-weight: normal;
-}
-.cont > ul.cont-msg {
-  display: block;
-  padding: 5px 10px;
-  background: rgba(0, 0, 0, 0.5);
-}
-ul.cont-msg,
-.cont-footer ul {
-  padding-left: 15px;
-}
-ul.cont-msg li,
-.cont-footer ul li {
-  margin: 5px 0;
-  margin-left: 15px;
-}
-ul.cont-msg li strong,
-.cont-footer ul li strong {
-  font-weight: normal;
-  margin: 0 2px;
-  color: #d3bc8e;
-}
-.cont-table {
-  display: table;
-  width: 100%;
-}
-.cont-table .tr {
-  display: table-row;
-}
-.cont-table .tr:nth-child(even) {
-  background: rgba(0, 0, 0, 0.4);
-}
-.cont-table .tr:nth-child(odd) {
-  background: rgba(50, 50, 50, 0.4);
-}
-.cont-table .tr > div,
-.cont-table .tr > td {
-  display: table-cell;
-  box-shadow: 0 0 1px 0 #fff;
-}
-.cont-table .tr > div.value-full {
-  display: table;
-  width: 200%;
-}
-.cont-table .tr > div.value-none {
-  box-shadow: none;
-}
-.cont-table .thead {
-  text-align: center;
-}
-.cont-table .thead > div,
-.cont-table .thead > td {
-  color: #d3bc8e;
-  background: rgba(0, 0, 0, 0.4);
-  line-height: 40px;
-  height: 40px;
-}
-.cont-table .title,
-.cont-table .th {
-  color: #d3bc8e;
-  padding-right: 15px;
-  text-align: right;
-  background: rgba(0, 0, 0, 0.4);
-  min-width: 100px;
-  vertical-align: middle;
-}
-.logo {
-  font-size: 18px;
-  text-align: center;
-  color: #fff;
-  margin: 20px 0 10px 0;
-}
-/* item-icon */
-.item-icon {
-  width: 100%;
-  height: 100%;
-  border-radius: 4px;
-  position: relative;
-  overflow: hidden;
-}
-.item-icon .img {
-  width: 100%;
-  height: 100%;
-  display: block;
-  background-size: contain;
-  background-position: center;
-  background-repeat: no-repeat;
-}
-.item-icon.artis .img {
-  width: 84%;
-  height: 84%;
-  margin: 8%;
-}
-.item-icon.star1 {
-  background-image: url("../common/item/bg1.png");
-}
-.item-icon.opacity-bg.star1 {
-  background-image: url("../common/item/bg1-o.png");
-}
-.item-icon.star2 {
-  background-image: url("../common/item/bg2.png");
-}
-.item-icon.opacity-bg.star2 {
-  background-image: url("../common/item/bg2-o.png");
-}
-.item-icon.star3 {
-  background-image: url("../common/item/bg3.png");
-}
-.item-icon.opacity-bg.star3 {
-  background-image: url("../common/item/bg3-o.png");
-}
-.item-icon.star4 {
-  background-image: url("../common/item/bg4.png");
-}
-.item-icon.opacity-bg.star4 {
-  background-image: url("../common/item/bg4-o.png");
-}
-.item-icon.star5 {
-  background-image: url("../common/item/bg5.png");
-}
-.item-icon.opacity-bg.star5 {
-  background-image: url("../common/item/bg5-o.png");
-}
-.item-icon.star-w {
-  background: #fff;
-}
-.item-list {
-  display: flex;
-}
-.item-list .item-card {
-  width: 70px;
-  background: #e7e5d9;
-}
-.item-list .item-icon {
-  border-bottom-left-radius: 0;
-  border-bottom-right-radius: 12px;
-}
-.item-list .item-title {
-  color: #222;
-  font-size: 13px;
-  text-align: center;
-  padding: 2px;
-  white-space: nowrap;
-  overflow: hidden;
-}
-.item-list .item-icon {
-  height: initial;
-}
-.item-list .item-badge {
-  position: absolute;
-  display: block;
-  left: 0;
-  top: 0;
-  background: rgba(0, 0, 0, 0.6);
-  font-size: 12px;
-  color: #fff;
-  padding: 4px 5px 3px;
-  border-radius: 0 0 6px 0;
-}
-/*# sourceMappingURL=common.css.map */
spaces/CikeyQI/meme-api/meme_generator/memes/jiji_king/__init__.py
DELETED
@@ -1,103 +0,0 @@
-import math
-from pathlib import Path
-from typing import List
-
-from pil_utils import BuildImage
-from pydantic import Field
-
-from meme_generator import MemeArgsModel, MemeArgsParser, MemeArgsType, add_meme
-from meme_generator.exception import TextOverLength
-
-img_dir = Path(__file__).parent / "images"
-
-help = "是否将图片变为圆形"
-
-parser = MemeArgsParser(prefix_chars="-/")
-parser.add_argument("--circle", "/圆", action="store_true", help=help)
-
-
-class Model(MemeArgsModel):
-    circle: bool = Field(False, description=help)
-
-
-def jiji_king(images: List[BuildImage], texts: List[str], args: Model):
-    block_num = 5
-    if len(images) >= 7 or len(texts) >= 7:
-        block_num = max(len(images), len(texts)) - 1
-
-    chars = ["急"]
-    text = "我是急急国王"
-
-    if len(texts) == 1:
-        if len(images) == 1:
-            chars = [texts[0]] * block_num
-            text = f"我是{texts[0]*2}国王"
-        else:
-            text = texts[0]
-    elif len(texts) == 2:
-        chars = [texts[0]] * block_num
-        text = texts[1]
-    elif texts:
-        chars = sum(
-            [[arg] * math.ceil(block_num / len(texts[:-1])) for arg in texts[:-1]], []
-        )
-        text = texts[-1]
-
-    frame = BuildImage.new("RGBA", (10 + 100 * block_num, 400), "white")
-    king = BuildImage.open(img_dir / "0.png")
-    head = images[0].convert("RGBA").square().resize((125, 125))
-    if args.circle:
-        head = head.circle()
-    king.paste(head, (237, 5), alpha=True)
-    frame.paste(king, ((frame.width - king.width) // 2, 0))
-
-    if len(images) > 1:
-        imgs = images[1:]
-        imgs = [img.convert("RGBA").square().resize((90, 90)) for img in imgs]
-    else:
-        imgs = []
-        for char in chars:
-            block = BuildImage.new("RGBA", (90, 90), "black")
-            try:
-                block.draw_text(
-                    (0, 0, 90, 90),
-                    char,
-                    lines_align="center",
-                    weight="bold",
-                    max_fontsize=60,
-                    min_fontsize=30,
-                    fill="white",
-                )
-            except ValueError:
-                raise TextOverLength(char)
-            imgs.append(block)
-
-    imgs = sum([[img] * math.ceil(block_num / len(imgs)) for img in imgs], [])
-    for i in range(block_num):
-        frame.paste(imgs[i], (10 + 100 * i, 200))
-
-    try:
-        frame.draw_text(
-            (10, 300, frame.width - 10, 390),
-            text,
-            lines_align="center",
-            weight="bold",
-            max_fontsize=100,
-            min_fontsize=30,
-        )
-    except ValueError:
-        raise TextOverLength(text)
-
-    return frame.save_jpg()
-
-
-add_meme(
-    "jiji_king",
-    jiji_king,
-    min_images=1,
-    max_images=11,
-    min_texts=0,
-    max_texts=11,
-    args_type=MemeArgsType(parser, Model, [Model(circle=False), Model(circle=True)]),
-    keywords=["急急国王"],
-)
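The deleted meme plugin pads a short list of characters or images out to `block_num` slots with a ceil-repeat-then-flatten idiom (`sum([[x] * math.ceil(block_num / len(items)) for x in items], [])`, then only the first `block_num` entries are used). A standalone sketch of that idiom; `fill_blocks` is a hypothetical helper name, not part of the plugin:

```python
import math

def fill_blocks(items, block_num):
    # repeat each item ceil(block_num / len(items)) times, flatten the nested
    # lists with sum(..., []), then keep only the first block_num entries --
    # mirroring how the deleted plugin distributes characters across blocks
    repeated = sum([[item] * math.ceil(block_num / len(items)) for item in items], [])
    return repeated[:block_num]

print(fill_blocks(["a", "b"], 5))  # → ['a', 'a', 'a', 'b', 'b']
```

Because of the ceiling, earlier items can get one more slot than later ones when `block_num` is not a multiple of `len(items)`.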
spaces/CofAI/chat/client/js/theme-toggler.js
DELETED
@@ -1,22 +0,0 @@
-var switch_theme_toggler = document.getElementById("theme-toggler");
-
-switch_theme_toggler.addEventListener("change", toggleTheme);
-
-function setTheme(themeName) {
-    localStorage.setItem("theme", themeName);
-    document.documentElement.className = themeName;
-}
-
-function toggleTheme() {
-    var currentTheme = localStorage.getItem("theme");
-    var newTheme = currentTheme === "theme-dark" ? "theme-light" : "theme-dark";
-
-    setTheme(newTheme);
-    switch_theme_toggler.checked = newTheme === "theme-dark";
-}
-
-(function () {
-    var currentTheme = localStorage.getItem("theme") || "theme-dark";
-    setTheme(currentTheme);
-    switch_theme_toggler.checked = currentTheme === "theme-dark";
-})();
spaces/CofAI/openjourney/midjourney.py
DELETED
@@ -1,5 +0,0 @@
-# import libraries
-import gradio as gr
-
-# interface
-gr.Interface.load("models/prompthero/openjourney").launch()
spaces/CognitiveLabs/GPT-auto-webscraping/README.md
DELETED
@@ -1,13 +0,0 @@
----
-title: GPT Auto Web scraping
-emoji: 🍧
-colorFrom: gray
-colorTo: purple
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/CompVis/text2img-latent-diffusion/app.py
DELETED
@@ -1,34 +0,0 @@
-import os
-os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
-from diffusers import LDMTextToImagePipeline
-import gradio as gr
-import PIL.Image
-import numpy as np
-import random
-import torch
-import subprocess
-
-ldm_pipeline = LDMTextToImagePipeline.from_pretrained("CompVis/ldm-text2im-large-256")
-
-def predict(prompt, steps=100, seed=42, guidance_scale=6.0):
-    # torch.cuda.empty_cache()
-    print(subprocess.check_output(["nvidia-smi"], stderr=subprocess.STDOUT).decode("utf8"))
-    generator = torch.manual_seed(seed)
-    images = ldm_pipeline([prompt], generator=generator, num_inference_steps=steps, eta=0.3, guidance_scale=guidance_scale)["images"]
-    print(subprocess.check_output(["nvidia-smi"], stderr=subprocess.STDOUT).decode("utf8"))
-    return images[0]
-
-random_seed = random.randint(0, 2147483647)
-gr.Interface(
-    predict,
-    inputs=[
-        gr.inputs.Textbox(label='Prompt', default='a chalk pastel drawing of a llama wearing a wizard hat'),
-        gr.inputs.Slider(1, 100, label='Inference Steps', default=50, step=1),
-        gr.inputs.Slider(0, 2147483647, label='Seed', default=random_seed, step=1),
-        gr.inputs.Slider(1.0, 20.0, label='Guidance Scale - how much the prompt will influence the results', default=6.0, step=0.1),
-    ],
-    outputs=gr.Image(shape=[256,256], type="pil", elem_id="output_image"),
-    css="#output_image{width: 256px}",
-    title="ldm-text2im-large-256 - 🧨 diffusers library",
-    description="This Spaces contains a text-to-image Latent Diffusion process for the <a href=\"https://huggingface.co/CompVis/ldm-text2im-large-256\">ldm-text2im-large-256</a> model by <a href=\"https://huggingface.co/CompVis\">CompVis</a> using the <a href=\"https://github.com/huggingface/diffusers\">diffusers library</a>. The goal of this demo is to showcase the diffusers library and you can check how the code works here. If you want the state-of-the-art experience with Latent Diffusion text-to-image check out the <a href=\"https://huggingface.co/spaces/multimodalart/latentdiffusion\">main Spaces</a>.",
-).launch()
spaces/Cran-May/Mistril-7b/README.md
DELETED
@@ -1,12 +0,0 @@
----
-title: Mistril 7b
-emoji: 📈
-colorFrom: red
-colorTo: green
-sdk: streamlit
-sdk_version: 1.27.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference