diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/English900AudioCdFreeDownload.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/English900AudioCdFreeDownload.md deleted file mode 100644 index ce96916d4f314e01ef98854d4722ff4aa4e06e40..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/English900AudioCdFreeDownload.md +++ /dev/null @@ -1,37 +0,0 @@ - -

How to Learn English with English 900 Audio CD Free Download

-

English 900 is a popular and effective English language course that was developed by the US government with official support. It consists of 900 sentences that cover various topics and situations, such as greetings, introductions, shopping, travel, etc. The course is designed to help learners master English conversation through repetition and memorization of the sentences.

-

If you want to learn English with English 900, you can download the audio CD for free from the Internet Archive. The Internet Archive is a non-profit organization that preserves and provides access to millions of digital books, movies, music, and other media. You can find the English 900 audio CD free download at these links:

-

English900AudioCdFreeDownload


Download Zip ✓✓✓ https://byltly.com/2uKvi0



- -

Each link contains a complete set of audio files that correspond to the sentences in the course. You can listen to them online or download them to your computer or mobile device. You can also find the PDF versions of the textbooks and word indexes on the same pages.
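If you are comfortable with a little scripting, you can also grab the audio files in bulk instead of clicking each one. The Python sketch below is a minimal example using only the standard library; the item identifier and file names are placeholders, not the real listing, so swap in the ones shown on the archive page you actually use.

```python
# Minimal sketch: download a few files from an Internet Archive item.
# The identifier and file names below are hypothetical placeholders --
# replace them with the ones listed on the actual archive.org page.
import urllib.request
from pathlib import Path

ITEM = "english-900-audio"                          # hypothetical item identifier
FILES = ["book1_unit01.mp3", "book1_unit02.mp3"]    # hypothetical file names

out_dir = Path("english900")
out_dir.mkdir(exist_ok=True)

for name in FILES:
    url = f"https://archive.org/download/{ITEM}/{name}"
    print(f"Downloading {url} ...")
    urllib.request.urlretrieve(url, str(out_dir / name))  # save to disk
print("Done.")
```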

-

To learn English with English 900 audio CD free download, you should follow these steps:

-
1. Choose a topic that interests you or suits your needs.
2. Read and listen to the sentences carefully and try to understand their meaning and pronunciation.
3. Repeat the sentences aloud several times until you can say them fluently and confidently.
4. Review the sentences regularly and practice them with a partner or a native speaker if possible.

By following these steps, you can improve your English skills and achieve your goals with English 900 audio CD free download. This course has helped many learners around the world, including learners in the Congo who are said to have become proficient in English in just three months. So why not give it a try and see for yourself?

- -

If you want to learn more about English 900 and its benefits, you can also check out some of the reviews and testimonials from other learners who have used this course. Here are some examples:

-
-

"I have been studying English for a long time, but I always felt that something was missing. Then I found English 900 and it changed everything. It helped me to speak English more naturally and confidently. I recommend it to anyone who wants to improve their English."

-- Maria, Brazil -
-
-

"English 900 is a great course for beginners and intermediate learners. It covers all the essential topics and situations that you need to know in English. It is easy to follow and fun to practice. I enjoyed listening to the audio CD and repeating the sentences. It really improved my pronunciation and fluency."

-- Ahmed, Egypt -
-
-

"I used English 900 as a supplement to my regular English classes. It helped me to review and reinforce what I learned in class. It also exposed me to different accents and expressions that I didn't hear in class. It was very useful and interesting."

-- Li, China -
-

As you can see, English 900 is a powerful and effective way to learn English. You can download the audio CD for free from the Internet Archive and start learning today. Don't miss this opportunity to improve your English skills and achieve your goals with English 900 audio CD free download.

-

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (Corazon Salvaje English Subtitle) The Best Version of Corazon Salvaje with English Subtitles.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (Corazon Salvaje English Subtitle) The Best Version of Corazon Salvaje with English Subtitles.md deleted file mode 100644 index 548aa4fbf587960853d5ffbe1ea774fced586269..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (Corazon Salvaje English Subtitle) The Best Version of Corazon Salvaje with English Subtitles.md +++ /dev/null @@ -1,126 +0,0 @@ - -

HD Online Player (Corazon Salvaje English Subtitle)

-

If you are a fan of Mexican telenovelas, you might have heard of Corazon Salvaje, one of the most successful and acclaimed shows in the history of Latin American television. But if you don't speak Spanish, you might have trouble finding and enjoying this classic drama. That's why in this article, we will tell you everything you need to know about Corazon Salvaje and how to watch it with English subtitles using HD Online Player, a free and easy-to-use streaming software.

-

What is Corazon Salvaje?

-

Corazon Salvaje (Wild Heart) is a Mexican telenovela that aired from 1993 to 1994 on Televisa. It is based on the novel of the same name by Caridad Bravo Adams, which has been adapted several times for television and film. The story is set in the late 19th century and revolves around the love triangle between two brothers, Francisco and Juan de Dios Alcazar y Valle, and a young woman, Monica Molnar.

-

HD Online Player (Corazon Salvaje English Subtitle)


Download File ✸✸✸ https://byltly.com/2uKx0q



-

A brief summary of the plot

-

The plot of Corazon Salvaje is complex and full of twists and turns, but here is a simplified version. Francisco and Juan de Dios are the sons of a wealthy landowner, Don Noel Alcazar y Valle, who has a secret affair with a married woman, Sofia Molnar. Sofia gives birth to Juan de Dios, who is raised by her husband, Andres Molnar, as his own son. Francisco is the legitimate son of Don Noel and his wife, Catalina.

-

When Don Noel dies, he leaves his fortune to Francisco and Juan de Dios, but Catalina refuses to acknowledge Juan de Dios as her husband's son and tries to take everything away from him. Juan de Dios grows up as a rebellious and adventurous young man, who falls in love with Monica, Andres' daughter and Sofia's stepdaughter. Monica is a sweet and innocent girl who is engaged to Francisco, who is a cold and ambitious man.

-

The story follows the struggles and obstacles that Juan de Dios and Monica face to be together, as well as the intrigues and betrayals that surround them. Along the way, they encounter other characters who help or hinder their love, such as Aimee Molnar, Monica's sister who is obsessed with Juan de Dios; Azucena, a gypsy girl who loves Francisco; Meche, Juan de Dios' loyal friend; and Count Andrés Corona, a mysterious and powerful man who has a hidden agenda.

-

The main characters and actors

-

The main characters of Corazon Salvaje are:

-

Watch Corazon Salvaje with English subtitles online
-Corazon Salvaje streaming HD with English subs
-How to download Corazon Salvaje episodes with English subtitles
-Corazon Salvaje full episodes online HD with English subtitles
-Best online player for Corazon Salvaje with English subs
-Corazon Salvaje English subtitle online player HD quality
-Where to watch Corazon Salvaje with English subtitles online HD
-Corazon Salvaje online HD player with English subtitle option
-Corazon Salvaje HD online player compatible with English subtitles
-Corazon Salvaje online streaming with English subtitles HD
-Corazon Salvaje episodes with English subtitles online HD player
-Online HD player for Corazon Salvaje that supports English subtitles
-Corazon Salvaje online HD player with subtitle settings in English
-Watch Corazon Salvaje in HD with English subtitles online
-Corazon Salvaje online player HD with English subtitle feature
-Corazon Salvaje HD streaming with English subtitles online
-Download Corazon Salvaje with English subtitles online HD player
-Corazon Salvaje full episodes with English subtitles online HD
-Online player for Corazon Salvaje HD with English subtitle option
-Corazon Salvaje online HD player that works with English subtitles
-Watch Corazon Salvaje episodes with English subtitles online HD
-Corazon Salvaje streaming online HD with subtitle in English
-How to watch Corazon Salvaje with English subtitles online HD
-Corazon Salvaje online player in HD quality with English subtitles
-Online HD player for Corazon Salvaje with subtitle in English
-Watch Corazon Salvaje full episodes with English subtitles online
-Corazon Salvaje online streaming in HD quality with English subtitles
-Download Corazon Salvaje episodes in HD quality with English subtitles
-Online player for Corazon Salvaje that has English subtitle feature
-Watch Corazon Salvaje in HD quality with subtitle in English
-Online streaming of Corazon Salvaje with English subtitles in HD quality
-How to stream Corazon Salvaje in HD quality with subtitle in English
-Online player for Corazon Salvaje that supports subtitle in English
-Watch Corazon Salvaje episodes in HD quality with subtitle in English
-Download Corazon Salvaje full episodes in HD quality with subtitle in English
-Online streaming of Corazon Salvaje episodes with subtitle in English
-How to download Corazon Salvaje full episodes with subtitle in English
-Online player for Corazon Salvaje full episodes that supports subtitle in English
-Watch Corazon Salvaje full episodes in HD quality with subtitle in English online
-Download Corazon Salvaje full episodes in HD quality with subtitle in English online

- -

The actors who played these roles became very popular and received many awards for their performances. Eduardo Palomo and Edith Gonzalez became one of the most iconic couples in telenovela history, while Enrique Lizalde and Ana Colchero were praised for their villainous roles. Ariel Lopez Padilla also impressed the audience with his charisma and mystery.

-

The popularity and reception of the show

-

Corazon Salvaje was a huge success both in Mexico and abroad. It had high ratings throughout its run and was exported to more than 70 countries around the world. It was dubbed or subtitled in many languages, such as English, French, Italian, Portuguese, Arabic, Turkish, Greek, Romanian, Russian, Polish, Hungarian, Bulgarian, Serbian, Croatian, Slovenian, Albanian, Macedonian, and Chinese.

-

The show received many accolades from critics and fans alike. It won several awards at the TVyNovelas Awards in 1994, such as Best Telenovela, Best Actor (Eduardo Palomo), Best Actress (Edith Gonzalez), Best Antagonist Actor (Enrique Lizalde), Best Antagonist Actress (Ana Colchero), Best Young Lead Actor (Ariel Lopez Padilla), Best Original Story or Adaptation, and Best Direction. It also won the Golden Martín Fierro Award in Argentina for Best Foreign Telenovela in 1995.

-

Corazon Salvaje is considered one of the best telenovelas ever made and has been praised for its compelling story, its historical accuracy, its beautiful scenery, its memorable music, and its outstanding cast. It has been remade twice, in 2009 and 2010, but none of them matched the original's popularity or quality.

-

Why watch Corazon Salvaje with English subtitles?

-

If you are not fluent in Spanish, you might wonder why you should watch Corazon Salvaje with English subtitles instead of dubbing or skipping it altogether. Here are some reasons why watching foreign shows with subtitles can be beneficial and enjoyable for you:

-

The benefits of watching foreign shows with subtitles

- -

The challenges of finding good subtitles for Corazon Salvaje

-

However, watching foreign shows with subtitles can also pose some challenges, especially if you are looking for good-quality and accurate subtitles for Corazon Salvaje. Some of these challenges are:

- -

The best sources for Corazon Salvaje English subtitles

-

So, where can you find good English subtitles for Corazon Salvaje? Here are some of the best sources that we recommend:

- -

How to use HD Online Player to watch Corazon Salvaje with English subtitles?

-

If you want to watch Corazon Salvaje with English subtitles without buying DVDs, watching YouTube videos, or downloading subtitles from Subscene, you can use HD Online Player, a free and easy-to-use streaming software that lets you watch any video online with subtitles of your choice.

-

What is HD Online Player and how does it work?

-

HD Online Player is a software tool that allows you to stream any video from any website on your computer with subtitles from any source. It works by creating a virtual browser that connects to the website where the video is hosted and plays it on your computer screen. It also allows you to add subtitles from any file or URL that you have on your computer or online.

-

HD Online Player supports various video formats and websites, such as MP4, AVI, MKV, FLV, WMV, MOV, 3GP, WEBM, MPEG, M4V, ASF, VOB, OGV, RMVB, TS, MTS, M2TS, and more. It also supports various subtitle formats and sources, such as SRT, ASS, SSA, SUB, IDX, TXT, XML, VTT, DFXP, and more. It also supports various languages and encodings for subtitles, such as UTF-8, ANSI, Unicode, and more.
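Among the formats listed above, SRT is the simplest to inspect by hand: it is plain text made of numbered blocks, each with a start --> end timestamp line followed by the caption text. As a rough illustration, the Python sketch below reads an SRT file into (start, end, text) entries; the file name is a placeholder.

```python
# Minimal SRT reader: returns (start, end, text) for each caption block.
# "corazon_salvaje_ep1.srt" is a placeholder file name.
import re

TIME_RE = re.compile(
    r"(\d{2}:\d{2}:\d{2}[,.]\d{3})\s*-->\s*(\d{2}:\d{2}:\d{2}[,.]\d{3})"
)

def read_srt(path):
    captions = []
    with open(path, encoding="utf-8-sig") as f:      # tolerate a BOM
        blocks = f.read().strip().split("\n\n")      # blank line separates blocks
    for block in blocks:
        lines = block.strip().splitlines()
        for i, line in enumerate(lines):
            m = TIME_RE.search(line)
            if m:
                text = " ".join(lines[i + 1:]).strip()
                captions.append((m.group(1), m.group(2), text))
                break
    return captions

for start, end, text in read_srt("corazon_salvaje_ep1.srt")[:5]:
    print(start, "->", end, ":", text)
```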

-

The advantages of using HD Online Player for streaming Corazon Salvaje

-

Using HD Online Player for streaming Corazon Salvaje with English subtitles has many advantages over other methods, such as:

- -

The steps to install and use HD Online Player for Corazon Salvaje

-

To install and use HD Online Player for streaming Corazon Salvaje with English subtitles, you need to follow these steps:

-
1. Download HD Online Player from its official website: https://hdonlineplayer.com/
2. Run the setup file and follow the instructions to install HD Online Player on your computer.
3. Launch HD Online Player and click on the "Open URL" button on the top left corner.
4. Enter the URL of the website where Corazon Salvaje is hosted and click "OK". For example, you can enter https://www.dailymotion.com/video/x6wqf0w which is the link for the first episode of Corazon Salvaje on Dailymotion.
5. Wait for the video to load and play on HD Online Player.
6. Click on the "Subtitle" button on the bottom right corner and choose "Add subtitle file" or "Add subtitle URL".
7. Browse your computer or enter the URL of the subtitle file or source that you want to use for Corazon Salvaje. For example, you can enter https://subscene.com/subtitles/corazn-salvaje-1993/english/2409518 which is the link for the English subtitle for the first episode of Corazon Salvaje on Subscene.
8. Wait for the subtitle to load and sync with the video on HD Online Player.
9. Enjoy watching Corazon Salvaje with English subtitles on HD Online Player!

Conclusion

-

In conclusion, Corazon Salvaje is a classic Mexican telenovela that tells a captivating story of love and adventure in the 19th century. It has a great cast and production that made it one of the most successful and acclaimed shows in Latin American television history. It is worth watching with English subtitles if you want to improve your language skills, appreciate the original performances, and expand your horizons. You can watch it with English subtitles using HD Online Player, a free and easy-to-use streaming software tool that lets you stream any video online with subtitles of your choice. You just need to download and install HD Online Player on your computer, enter the URL of the website where Corazon Salvaje is hosted, add the subtitle file or source that you want to use, and enjoy watching Corazon Salvaje with English subtitles on HD Online Player!

-

FAQs

-

Here are some frequently asked questions about Corazon Salvaje and HD Online Player:

-
1. How many episodes does Corazon Salvaje have? Corazon Salvaje has 80 episodes in total, each lasting about 45 minutes.
2. Where can I watch Corazon Salvaje online? You can watch Corazon Salvaje online on various websites that host Mexican telenovelas, such as Dailymotion, YouTube, or TelenovelasTV.
3. Can I watch Corazon Salvaje with other languages besides English? Yes, you can watch Corazon Salvaje with other languages besides English if you can find subtitles for them online. HD Online Player supports various languages and encodings for subtitles.
4. Can I use HD Online Player for other videos besides Corazon Salvaje? Yes, you can use HD Online Player for other videos besides Corazon Salvaje if they are available online. HD Online Player supports various video formats and websites.
5. Is HD Online Player compatible with Windows 10? Yes, HD Online Player is compatible with Windows 10, as well as Windows 7, 8, 8.1, XP, and Vista.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Caterpillar ET Factory Password.rar.md b/spaces/1gistliPinn/ChatGPT4/Examples/Caterpillar ET Factory Password.rar.md deleted file mode 100644 index 7e9da6eed1d70a62744cea367c725d1c18cead6a..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Caterpillar ET Factory Password.rar.md +++ /dev/null @@ -1,104 +0,0 @@ -
-

Caterpillar ET Factory Password.rar: What Is It and How to Use It

-

If you are a Caterpillar dealer or technician, you may have heard of Caterpillar ET Factory Password.rar. This is a file that contains factory passwords for various Caterpillar Electronic Technician (Cat ET) functions. Cat ET is a software tool that allows you to communicate, diagnose, and service electronically controlled Caterpillar engines and machines connected to an Electronic Control Module (ECM).

-

Caterpillar ET factory password.rar


DOWNLOADhttps://imgfil.com/2uy0Ee



-

Factory passwords are part of a security system that helps to prevent unauthorized reprogramming of certain parameters, such as full load setting (FLS), fuel trim setting (FTS), or engine speed/timing calibration. Factory passwords also allow the factory to control access to engine calibration parameters and prevent unauthorized erasing of logged events.

-

In order to use factory passwords, you need to have Cat ET installed on your computer and a compatible communication adapter, such as Caterpillar Communication Adapter or Nexiq. You also need to obtain the proper factory passwords from an authorized Caterpillar dealer. The factory passwords are different for each ECM and each programming session. They are based on the following information:

- -

You can find this information on the Cat ET screen for factory passwords. You can also use the "Reset/View Passwords" function to generate two random customer passwords that allow you to access customer password-protected parameters without knowing the actual customer passwords.

-

How to Download Caterpillar ET Factory Password.rar

-

Caterpillar ET Factory Password.rar is a file that contains factory passwords for various Cat ET functions. You can download this file from various online sources, such as blogs, forums, or websites that offer Caterpillar diagnostic software and tools. However, you should be careful when downloading this file, as it may contain viruses, malware, or other harmful content that can damage your computer or compromise your security.

-

-

Before downloading Caterpillar ET Factory Password.rar, you should check the following:

- -

After downloading Caterpillar ET Factory Password.rar, you should scan it with a reliable antivirus program and extract it with a suitable software tool, such as WinRAR or 7-Zip. You should also backup your original Cat ET files before replacing them with the downloaded ones.

-

How to Use Caterpillar ET Factory Password.rar

-

After downloading and extracting Caterpillar ET Factory Password.rar, you can use it to perform various Cat ET functions that require factory passwords. For example, you can use it to change FLS or FTS values, calibrate engine speed/timing, or clear event codes. To use Caterpillar ET Factory Password.rar, you need to follow these steps:

-
1. Connect your communication adapter to your computer and to the ECM.
2. Launch Cat ET and select the appropriate ECM.
3. Select the "Service" menu and choose the function you want to perform.
4. If Cat ET asks for factory passwords, enter them from the Caterpillar ET Factory Password.rar file.
5. Follow the instructions on the screen to complete the function.

Note that some functions may require additional steps or information, such as engine serial number or reason code. You should always document the parameters and settings that are programmed into the ECM and keep a permanent record of them.
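One simple way to keep that permanent record is an append-only log file. The Python sketch below writes one JSON line per change; the field names (ECM serial, parameter, old and new value) are only suggestions and not anything Cat ET itself produces.

```python
# Append-only change log for ECM programming sessions.
# Field names are illustrative; adapt them to whatever you actually record.
import json
import time

def log_change(ecm_serial, parameter, old_value, new_value,
               logfile="ecm_changes.jsonl"):
    entry = {
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
        "ecm_serial": ecm_serial,
        "parameter": parameter,
        "old_value": old_value,
        "new_value": new_value,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")   # one JSON object per line

# Example usage with made-up values:
log_change("ABC12345", "FLS", 10, 12)
```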

-

Benefits of Using Caterpillar ET Factory Password.rar

-

Using Caterpillar ET Factory Password.rar can provide you with many benefits, such as:

- -

However, you should also be aware of the risks and responsibilities of using Caterpillar ET Factory Password.rar, such as:

- -

Conclusion

-

Caterpillar ET Factory Password.rar is a file that contains factory passwords for various Cat ET functions that require them. You can download this file from various online sources, but you should be careful about its authenticity and security. You can use this file to perform various Cat ET functions that can improve the performance and efficiency of your Caterpillar engines and machines. However, you should also follow the proper procedures and instructions, respect the intellectual property rights and confidentiality agreements, and take full responsibility for any consequences or liabilities that may arise from using Caterpillar ET Factory Password.rar.

-

How to Get Help and Support for Caterpillar ET Factory Password.rar

-

If you have any questions or issues regarding Caterpillar ET Factory Password.rar, you can get help and support from various sources, such as:

- -

Remember that using Caterpillar ET Factory Password.rar is a privilege and not a right. You should always use it with respect and caution, and follow the ethical and legal standards of Caterpillar and its dealers. By doing so, you can enjoy the benefits of using Caterpillar ET Factory Password.rar without compromising your safety or reputation.

-

How to Update and Upgrade Caterpillar ET Factory Password.rar

-

Caterpillar ET Factory Password.rar is a file that contains factory passwords for various Cat ET functions that require them. However, this file may not work with newer versions of Cat ET or newer models of Caterpillar engines and machines. Therefore, you may need to update and upgrade Caterpillar ET Factory Password.rar from time to time to ensure its compatibility and functionality.

-

To update and upgrade Caterpillar ET Factory Password.rar, you can follow these steps:

-
1. Check the current version of your Cat ET software and the model and serial number of your Caterpillar engine or machine.
2. Visit the official Caterpillar website or contact your authorized Caterpillar dealer or service center to find out if there are any updates or upgrades available for your Cat ET software or your Caterpillar engine or machine.
3. If there are any updates or upgrades available, download them from the official Caterpillar website or get them from your authorized Caterpillar dealer or service center.
4. Install the updates or upgrades on your computer and on your Caterpillar engine or machine according to the instructions provided.
5. Download a new version of Caterpillar ET Factory Password.rar that matches the updated or upgraded Cat ET software and Caterpillar engine or machine from a reliable and reputable online source.
6. Scan the new version of Caterpillar ET Factory Password.rar with a reliable antivirus program and extract it with a suitable software tool.
7. Back up your original Cat ET files and replace them with the new ones from the new version of Caterpillar ET Factory Password.rar (a simple backup sketch follows this list).
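To illustrate the backup step in the list above, here is a minimal Python sketch that copies a folder to a timestamped location before anything gets overwritten. The source path is a placeholder, so point it at wherever your Cat ET files actually live.

```python
# Copy a folder to a timestamped backup directory before replacing files.
# "C:/CatET" is a placeholder path -- use your real installation folder.
import shutil
import time
from pathlib import Path

def backup_folder(source="C:/CatET"):
    src = Path(source)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = src.with_name(f"{src.name}-backup-{stamp}")
    shutil.copytree(src, dest)       # fails loudly if the backup already exists
    return dest

print("Backed up to", backup_folder())
```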

Note that some updates or upgrades may require additional steps or information, such as activation codes or registration keys. You should always follow the instructions and recommendations from Caterpillar and its dealers when updating or upgrading your Cat ET software or your Caterpillar engine or machine.

-

How to Troubleshoot and Fix Common Problems with Caterpillar ET Factory Password.rar

-

Caterpillar ET Factory Password.rar is a file that contains factory passwords for various Cat ET functions that require them. However, you may encounter some common problems when using this file, such as:

- -

To troubleshoot and fix these common problems, you can try the following solutions:

- -

How to Learn More about Caterpillar ET Factory Password.rar

-

Caterpillar ET Factory Password.rar is a file that contains factory passwords for various Cat ET functions that require them. If you want to learn more about this file and how to use it effectively, you can use the following resources:

- -

By using these resources, you can enhance your knowledge and skills about Caterpillar ET Factory Password.rar and how to use it to improve the performance and efficiency of your Caterpillar engines and machines.

-

Conclusion

-

Caterpillar ET Factory Password.rar is a file that contains factory passwords for various Cat ET functions that require them. You can download this file from various online sources, but you should be careful about its authenticity and security. You can use this file to perform various Cat ET functions that can improve the performance and efficiency of your Caterpillar engines and machines. However, you should also follow the proper procedures and instructions, respect the intellectual property rights and confidentiality agreements, and take full responsibility for any consequences or liabilities that may arise from using Caterpillar ET Factory Password.rar.

-

If you have any questions or issues regarding Caterpillar ET Factory Password.rar, you can get help and support from various sources, such as the official Caterpillar website, the authorized Caterpillar dealer or service center, or the online Caterpillar community. You can also update and upgrade Caterpillar ET Factory Password.rar from time to time to ensure its compatibility and functionality. By using Caterpillar ET Factory Password.rar with respect and caution, you can enjoy the benefits of using Cat ET without compromising your safety or reputation.

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Black uTorrent Pro APK The Ultimate App for Torrent Lovers.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Black uTorrent Pro APK The Ultimate App for Torrent Lovers.md deleted file mode 100644 index f1d0e2512ce853ff1f61a29d1cbfbaf68b3ae1a5..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Black uTorrent Pro APK The Ultimate App for Torrent Lovers.md +++ /dev/null @@ -1,73 +0,0 @@ - -

What Is Black uTorrent Pro APK and Why You Need It

-

If you are looking for a fast and easy way to download large files from the internet, you might have heard of uTorrent. uTorrent is one of the most popular and widely used torrent clients in the world. It allows you to download files using BitTorrent, a peer-to-peer (P2P) file-sharing protocol that distributes data among users without relying on a central server.

-

black utorrent pro apk


DOWNLOAD === https://urlin.us/2uSVc3



-

However, uTorrent is not perfect. The official version of uTorrent has some drawbacks, such as annoying ads, limited features, high battery consumption, and potential security risks. That's why some users prefer to use modded versions of uTorrent, such as black uTorrent pro apk.

-

Black uTorrent pro apk is a modified version of uTorrent that unlocks all the pro features and removes all the ads. It also has some additional features that make it more convenient and efficient to use. Here are some of the benefits of using black uTorrent pro apk:

- -

How to Download and Install Black uTorrent Pro APK on Your Android Device

-

If you want to try out black uTorrent pro apk, you need to download and install it on your Android device first. Here are the steps you need to follow:

-
1. Find a reliable source to download the apk file. You can use our website to get the latest version of black uTorrent pro apk. Make sure the source is trustworthy and virus-free. You can scan the file with an antivirus app before installing it (the checksum sketch after this list shows one extra way to verify the download).
2. Enable unknown sources in your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
3. Locate and tap on the apk file to start the installation process. You can use a file manager app to find the file in your downloads folder or wherever you saved it.
4. Follow the on-screen instructions and grant the necessary permissions. The app will ask you to allow access to your storage, network, and other features. Tap on Install and wait for the process to finish.
5. Launch the app and enjoy the pro features. You will see a black icon of uTorrent on your app drawer or home screen. Tap on it to open the app and start using it.
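As a small complement to the antivirus scan mentioned in step 1, you can also compute a checksum of the downloaded file and compare it against one published by the download site, assuming the site publishes one at all. The Python sketch below uses only the standard library; the file name is a placeholder.

```python
# Compute a SHA-256 checksum of a downloaded APK for manual comparison.
# "black_utorrent_pro.apk" is a placeholder file name.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)     # hash the file in 1 MB chunks
    return digest.hexdigest()

print(sha256_of("black_utorrent_pro.apk"))
```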

How to Use Black uTorrent Pro APK to Download Torrent Files

-

Now that you have installed black uTorrent pro apk on your device, you can use it to download torrent files or magnet links from various sources. Here are the steps you need to follow:

-
1. Search for the torrent file or magnet link you want to download. You can use any torrent site or search engine that you trust, such as The Pirate Bay, 1337x, RARBG, etc. Make sure the file has enough seeders and positive comments before downloading it.
2. Copy the torrent file or magnet link and paste it in the app. You can either download the torrent file to your device and open it with black uTorrent pro apk, or copy the magnet link and paste it in the app's search bar. The app will automatically detect the file or link and start downloading it. (The sketch after this list shows what a magnet link actually contains.)
3. Choose your download location and other settings. You can change the default download location by going to Settings > Directories > Download Location and selecting a folder of your choice. You can also adjust other settings, such as bandwidth limit, download queue, network interface, etc.
4. Start the download and monitor the progress. You will see a list of your active downloads in the app's main screen. You can tap on each download to see more details, such as speed, size, peers, trackers, etc. You can also pause, resume, or delete downloads as you wish.
5. Open the downloaded file or folder with your preferred app or player. Once the download is complete, you can access the file or folder by tapping on it in the app or using a file manager app. You can then open it with any app or player that supports the file format.
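To make the magnet-link step a little more concrete: a magnet link is just a URI whose query string carries the torrent's info-hash (xt=urn:btih:...), an optional display name (dn), and tracker URLs (tr). The Python sketch below pulls those fields out of a made-up example link.

```python
# Inspect the parts of a magnet link (info-hash, name, trackers).
# The link below is a fabricated example, not a real torrent.
from urllib.parse import parse_qs

magnet = ("magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567"
          "&dn=Example+File&tr=udp%3A%2F%2Ftracker.example.org%3A1337")

query = magnet.split("?", 1)[1]                    # everything after "magnet:?"
params = parse_qs(query)
info_hash = params["xt"][0].rsplit(":", 1)[-1]     # xt=urn:btih:<info-hash>
name = params.get("dn", ["(no name)"])[0]          # parse_qs URL-decodes values
trackers = params.get("tr", [])

print("info-hash:", info_hash)
print("name     :", name)
print("trackers :", trackers)
```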

The Risks and Precautions of Torrenting with Black uTorrent Pro APK

-

Torrenting with black uTorrent pro apk can be a great way to get free and fast downloads of movies, music, games, software, and more. However, torrenting also comes with some risks and challenges that you need to be aware of and prepared for. Here are some of them:

-

- -

Conclusion

-

Torrenting with black u Torrent pro apk is a powerful and convenient app that lets you download and enjoy torrent files on your Android device. It offers many pro features that enhance your torrenting experience, such as no ads, battery saver, auto shutdown, file conversion, and premium support. However, torrenting also comes with some risks and challenges that you need to be aware of and prepared for, such as malware, viruses, legal issues, ISP throttling, etc. Therefore, you should always take some precautions before and while using black uTorrent pro apk, such as scanning files, checking comments, using a VPN, etc. By doing so, you can enjoy the benefits of torrenting without compromising your safety or security. We hope this article has helped you understand what black uTorrent pro apk is and how to use it to download torrent files on your Android device. If you have any questions or feedback, please feel free to leave a comment below. Happy torrenting!

FAQs

-

Here are some frequently asked questions about black uTorrent pro apk:

- - - - - - - - - - - - - - - - - - - - - - - - - -
| Question | Answer |
| --- | --- |
| Is black uTorrent pro apk safe to use? | Black uTorrent pro apk is safe to use as long as you download it from a reliable source and scan it with an antivirus app before installing it. However, the files or links you download with it may not be safe, so you should always check them before opening them or running them on your device. |
| Is black uTorrent pro apk legal to use? | Black uTorrent pro apk is legal to use as long as you use it for personal and non-commercial purposes. However, the content you download with it may not be legal, depending on the source and the jurisdiction. You should always respect the rights of the content creators and owners and follow the laws and regulations of your country or region. |
| What is the difference between black uTorrent pro apk and uTorrent pro? | Black uTorrent pro apk is a modified version of uTorrent pro that unlocks all the pro features and removes all the ads. It also has some additional features that make it more convenient and efficient to use. uTorrent pro is the official version of uTorrent that requires a subscription fee to access the pro features. |
| How can I update black uTorrent pro apk? | You can update black uTorrent pro apk by downloading the latest version of the apk file from our website or any other source you trust. You can then install it over the existing app without losing your settings or downloads. |
| How can I uninstall black uTorrent pro apk? | You can uninstall black uTorrent pro apk by going to Settings > Apps > Black uTorrent Pro APK and tapping on Uninstall. You can also delete the apk file from your device if you don't need it anymore. |

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Arknights A Role Playing Game with Stunning Graphics and Sci-Fi Plot. Download Now for Mac and PC.md b/spaces/1phancelerku/anime-remove-background/Arknights A Role Playing Game with Stunning Graphics and Sci-Fi Plot. Download Now for Mac and PC.md deleted file mode 100644 index d9a553010cd702ad48856e584071af14e8088e37..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Arknights A Role Playing Game with Stunning Graphics and Sci-Fi Plot. Download Now for Mac and PC.md +++ /dev/null @@ -1,112 +0,0 @@ - -

How to Download and Play Arknights on Mac

-

Arknights is a popular tactical RPG/tower defense mobile game that has captivated millions of players around the world. If you are one of them, you might be wondering if you can play Arknights on your Mac computer. The answer is yes, you can! In this article, we will show you how to download and play Arknights on Mac using an Android emulator. We will also give you some tips and tricks to enhance your gameplay experience. Let's get started!

-

What is Arknights?

-

Arknights is a free-to-play mobile game developed by Chinese developer Hypergryph and published by Yostar. It was released in China in May 2019, and in other countries in January 2020. It is available on Android and iOS platforms and features gacha game mechanics.

-

arknights download mac


Download ✔✔✔ https://jinyurl.com/2uNQ2A



-

The game combines elements of tactical RPG and tower defense genres, with a rich sci-fi plot and stunning graphics. You play as the Doctor, a leader of a rescue organization called Rhodes Island, who has lost his memory due to an unknown infection. You have to recruit and train Operators, who are people with special abilities, to fight against a deadly threat from another world called Reunion.

-

The game offers hundreds of different Operators, each with their own skills, abilities, and classes. You have to strategically place them on the battlefield to block and defeat the enemies. You can also activate their skills for special effects or withdraw them for redeployment. The game has various modes, such as story mode, challenge mode, event mode, annihilation mode, contingency contract mode, and integrated strategies mode.

-

The game also features a captivating story with multiple chapters and side stories, as well as a diverse cast of characters with their own personalities and backgrounds. The game has received positive reviews from critics and players alike, praising its gameplay, graphics, story, music, voice acting, and character design.

-

How to download and play Arknights on Mac with BlueStacks
-Arknights official website for Mac users
-Arknights web browser game for Mac and PC
-Arknights Mac emulator download guide
-Arknights latest update and events for Mac players
-Arknights tips and tricks for Mac gamers
-Arknights best operators and strategies for Mac version
-Arknights system requirements and compatibility for Mac
-Arknights review and rating for Mac platform
-Arknights support and feedback for Mac issues
-Arknights wallpapers and themes for Mac desktop
-Arknights fan art and cosplay for Mac fans
-Arknights merchandise and gifts for Mac lovers
-Arknights comics and stories for Mac readers
-Arknights music and soundtracks for Mac listeners
-Arknights collaborations and crossovers for Mac enthusiasts
-Arknights community and forums for Mac users
-Arknights wiki and guides for Mac learners
-Arknights news and updates for Mac followers
-Arknights videos and streams for Mac watchers
-Arknights memes and jokes for Mac funnies
-Arknights codes and coupons for Mac savers
-Arknights giveaways and contests for Mac winners
-Arknights skins and costumes for Mac collectors
-Arknights characters and lore for Mac explorers
-Arknights gameplay and features for Mac players
-Arknights download size and speed for Mac devices
-Arknights graphics and performance for Mac quality
-Arknights bugs and glitches for Mac fixers
-Arknights mods and hacks for Mac cheaters
-Arknights tier list and rankings for Mac experts
-Arknights anniversary and birthday for Mac celebrators
-Arknights originium and orundum for Mac spenders
-Arknights recruitment and headhunting for Mac summoners
-Arknights base and dormitory for Mac builders
-Arknights missions and stages for Mac challengers
-Arknights story mode and side stories for Mac enjoyers
-Arknights factions and groups for Mac joiners
-Arknights voice actors and actresses for Mac admirers
-Arknights trivia and facts for Mac knowers

-

Why Play Arknights on Mac?

-

While Arknights is designed for mobile devices, you might want to play it on your Mac computer for various reasons. Here are some of the benefits of playing Arknights on Mac:

- -

How to Install Arknights on Mac?

-

To play Arknights on your Mac computer, you will need to use an Android emulator. An Android emulator is a software that simulates the environment of an Android device on your computer. This way, you can access and run Android apps and games on your Mac.

-

There are many Android emulators available for Mac users, such as BlueStacks, NoxPlayer, MEmu Player, LDPlayer, Mu

One of the most popular and recommended Android emulators for Mac is BlueStacks. BlueStacks is a powerful and user-friendly emulator that can run Arknights smoothly and efficiently. Here are the steps to download and install Arknights on Mac using BlueStacks:

-
1. Go to the official website of BlueStacks and download the latest version of the emulator for Mac. You can use this link: https://www.bluestacks.com/download.html
2. Once the download is complete, open the installer file and follow the instructions to install BlueStacks on your Mac. You might need to grant some permissions or enter your password during the process.
3. After the installation is done, launch BlueStacks and sign in with your Google account. If you don't have one, you can create one for free.
4. On the home screen of BlueStacks, look for the Google Play Store icon and click on it. This will open the Play Store app on the emulator.
5. In the search bar of the Play Store, type "Arknights" and hit enter. You will see a list of results related to the game.
6. Select the Arknights app from the list and click on the "Install" button. This will start downloading and installing the game on your Mac.
7. Once the installation is complete, you can find the Arknights icon on the home screen of BlueStacks. Click on it to launch the game and enjoy playing Arknights on your Mac. (If you prefer to sideload an APK file instead of using the Play Store, see the adb sketch after this list.)
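If you already have an APK file and would rather sideload it than go through the Play Store, most emulators, BlueStacks included, accept connections from the Android Debug Bridge (adb). The Python sketch below assumes adb is installed and on your PATH, and the 127.0.0.1:5555 address is only a common default that varies between emulator setups, so check your emulator's own settings first.

```python
# Sideload an APK into a running emulator via adb.
# Assumes adb is installed and on PATH; the port 5555 and the APK file name
# are assumptions -- adjust both to your own setup.
import subprocess

EMULATOR = "127.0.0.1:5555"
APK = "arknights.apk"            # placeholder file name

subprocess.run(["adb", "connect", EMULATOR], check=True)
subprocess.run(["adb", "-s", EMULATOR, "install", "-r", APK], check=True)
print("Installed", APK, "on", EMULATOR)
```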

How to Link Your Mobile Account and Recover Your Progress on Mac?

-

If you have already played Arknights on your mobile device and want to continue your progress on your Mac, you will need to link your mobile account to your emulator account. Here are the steps to do that:

-
1. On your mobile device, open Arknights and tap on the gear icon on the top right corner of the screen. This will open the settings menu.
2. Tap on "Account" and then tap on "Bind Account". You will see a list of options to bind your account, such as Facebook, Twitter, Yostar, or Apple ID.
3. Select one of the options and follow the instructions to bind your account. You will need to enter your login details or scan a QR code depending on the option you choose.
4. Once your account is bound, you will see a confirmation message on your screen. You can now close Arknights on your mobile device.
5. On your Mac, launch BlueStacks and open Arknights. On the title screen, tap on "Account" and then tap on "Switch Account". You will see a list of options to switch your account, such as Facebook, Twitter, Yostar, or Apple ID.
6. Select the same option that you used to bind your account on your mobile device and follow the instructions to switch your account. You will need to enter your login details or scan a QR code depending on the option you choose.
7. Once your account is switched, you will see a confirmation message on your screen. You can now access your progress and data from your mobile device on your Mac.

Tips and Tricks for Playing Arknights on Mac

-

Now that you have installed Arknights on your Mac, you might want to know some tips and tricks to improve your gameplay experience. Here are some of them:

- -

Conclusion

-

Arknights is a fun and addictive game that combines tactical RPG and tower defense elements with a sci-fi plot and stunning graphics. If you want to play Arknights on your Mac computer, you can do so by using an Android emulator such as BlueStacks. You can download and install Arknights on your Mac easily and quickly, and enjoy the game on a larger screen, with better graphics and performance, and using keyboard and mouse controls. You can also link your mobile account and recover your progress on your Mac, and use some tips and tricks to optimize your gameplay experience. Arknights is a game that you don't want to miss, so why not give it a try on your Mac today?

-

FAQs

-

Here are some frequently asked questions and answers about Arknights on Mac:

-

Is Arknights free to play on Mac?

-

Yes, Arknights is free to play on Mac, as long as you have an Android emulator such as BlueStacks installed on your Mac. You can download and install Arknights from the Google Play Store on the emulator without paying anything. However, the game does have some in-app purchases that you can buy with real money if you want to enhance your gameplay experience.

-

Is Arknights compatible with Mac?

-

Yes, Arknights is compatible with Mac, as long as you use an Android emulator such as BlueStacks to run it. BlueStacks is compatible with most Mac devices and operating systems, and can run Arknights smoothly and efficiently. You can check the minimum system requirements for BlueStacks on its official website.

-

How to update Arknights on Mac?

-

To update Arknights on your Mac, you need to update it from the Google Play Store on the emulator. You can either enable the auto-update feature or manually check for updates. To manually check for updates, you need to open the Play Store app on the emulator, go to the "My apps & games" section, find Arknights from the list of installed apps, and click on the "Update" button if there is one available.

-

How to transfer data from Arknights on mobile to Mac?

-

To transfer data from Arknights on your mobile device to your Mac, you need to link your mobile account to your emulator account. You can do this by binding your account to one of the options available in the game's settings menu, such as Facebook, Twitter, Yostar, or Apple ID. Then, you need to switch your account to the same option on the emulator. This will allow you to access your progress and data from your mobile device on your Mac.

-

How to fix Arknights crashing or not loading on Mac?

-

If you encounter any issues with Arknights crashing or not loading on your Mac, you can try some of the following solutions:

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Brawl Stars APK Everything You Need to Know About the Best Mobile Game of 2023.md b/spaces/1phancelerku/anime-remove-background/Brawl Stars APK Everything You Need to Know About the Best Mobile Game of 2023.md deleted file mode 100644 index 07053378b875633c349d048e14e1d335daf62632..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Brawl Stars APK Everything You Need to Know About the Best Mobile Game of 2023.md +++ /dev/null @@ -1,150 +0,0 @@ -
-

Brawl Stars APK Download: How to Play the Ultimate Mobile Brawler on Your Android Device

-

If you are looking for a fast-paced, action-packed, and fun multiplayer game to play on your Android device, you should definitely check out Brawl Stars. Brawl Stars is a game developed by Supercell, the makers of Clash of Clans and Clash Royale. It features various game modes, characters, and events that will keep you hooked for hours.

-

But how can you download and install Brawl Stars APK on your Android device? And what are some tips and tricks to help you become a better brawler? In this article, we will answer these questions and more. Let's get started!

-

brawl stars apk download


DOWNLOADhttps://jinyurl.com/2uNRjw



-

What is Brawl Stars?

-

Brawl Stars is a mobile game that combines elements of twin-stick shooters, MOBAs, and battle royales. You can choose from over 20 different brawlers, each with their own unique abilities, weapons, and skins. You can also team up with your friends or play solo in various game modes, such as:

- -

Brawl Stars is constantly evolving with new brawlers, skins, maps, events, and game modes. It also has a Brawl Pass system that lets you complete quests, open boxes, earn gems, pins, and an exclusive skin every season.

-

How to Download Brawl Stars APK?

-

Brawl Stars is free to download and play on both iOS and Android devices. However, some regions may not have access to the game on the Google Play Store. If that's the case for you, don't worry. You can still download and install Brawl Stars APK from other sources.

-

An APK file is an Android application package that contains all the files needed to run an app on your device. To download Brawl Stars APK, you need to follow these steps:

-
1. Go to a trusted website that offers Brawl Stars APK download links. Some examples are Uptodown, Softpedia, and Games.lol. Make sure you download the latest version of the game.
2. Once you have downloaded the APK file, locate it on your device's file manager and tap on it to install it. You may need to enable installation from unknown sources in your device's settings.
3. Wait for the installation process to finish and launch the game. You may need to download some additional data before you can play.
4. Enjoy Brawl Stars on your Android device!

Note: Downloading APK files from third-party sources may pose some risks to your device's security and performance. Make sure you only download from reputable websites and scan the files for viruses before installing them.

-

What are Some Brawl Stars Tips and Tricks?

-

Brawl Stars is a game that requires skill, strategy, and teamwork to win. Here are some tips and tricks that will help you improve your gameplay and become a star brawler:

-

brawl stars apk download latest version
-brawl stars apk download for android
-brawl stars apk download for pc
-brawl stars apk download mod
-brawl stars apk download hack
-brawl stars apk download free
-brawl stars apk download 2023
-brawl stars apk download update
-brawl stars apk download softpedia[^1^]
-brawl stars apk download no verification
-brawl stars apk download unlimited gems
-brawl stars apk download for ios
-brawl stars apk download for windows 10
-brawl stars apk download nulls
-brawl stars apk download private server
-brawl stars apk download rexdl
-brawl stars apk download apkpure
-brawl stars apk download uptodown
-brawl stars apk download revdl
-brawl stars apk download android 1
-brawl stars apk download mediafıre
-brawl stars apk download mega
-brawl stars apk download online
-brawl stars apk download old version
-brawl stars apk download original
-brawl stars apk download offline
-brawl stars apk download obb
-brawl stars apk download play store
-brawl stars apk download pc windows 7
-brawl stars apk download pc windows 8.1
-brawl stars apk download pc windows xp
-brawl stars apk download pc bluestacks
-brawl stars apk download pc nox player
-brawl stars apk download pc gameloop
-brawl stars apk download pc memu play
-brawl stars apk download reddit
-brawl stars apk download real
-brawl stars apk download rebrawl
-brawl stars apk download rey modz official
-brawl stars apk download rey modz pro 2.0
-brawl stars apk download supercell
-brawl stars apk download safe
-brawl stars apk download site
-brawl stars apk download server error 43 fix

-

Use Obstacles to Your Advantage

-

The maps in Brawl Stars have various obstacles such as rocks, barrels, mushrooms, and walls that can block enemy fire. You can use these objects to hide behind for cover or to ambush your opponents. However, be careful of brawlers that can break through obstacles with their super abilities or gadgets.

-

Don't Take on Tank Brawlers Alone

-

Tank brawlers are those that have high health and damage, such as El Primo, Bull, Frank, and Rosa. They can easily overpower you in close-range combat, especially if they have their super abilities ready. If you encounter a tank brawler, try to keep your distance and chip away at their health with your teammates. Alternatively, you can use brawlers that can counter them, such as Shelly, Spike, or Emz.

-

Know Your Brawler's Role and Strengths

-

Brawl Stars has four types of brawlers: Fighter, Sharpshooter, Heavyweight, and Support. Each type has its own role and strengths in the game. For example, fighters are good at dealing damage and controlling the map, sharpshooters are good at sniping and poking enemies from afar, heavyweights are good at tanking and breaking through defenses, and support are good at healing and buffing allies. You should know your brawler's type and play accordingly to maximize their potential.

-

Use Your Super Ability Wisely

-

Your super ability is a powerful move that can turn the tide of the battle. However, it takes time to charge up and can be wasted if used incorrectly. You should use your super ability when it can have the most impact, such as securing a kill, saving an ally, or escaping a sticky situation. You should also be aware of your enemy's super abilities and try to dodge or counter them.

-

Communicate and Coordinate with Your Teammates

-

Brawl Stars is a team-based game that requires coordination and communication to win. You should use the in-game chat or voice chat to communicate with your teammates and plan your strategies. You can also use the quick chat commands or pins to convey your emotions or intentions. For example, you can use the thumbs up pin to show approval or the angry pin to show frustration. You can also use the attack, defend, or retreat commands to signal your teammates what to do.

-

How to Compare Brawlers in Brawl Stars?

-

If you want to know how different brawlers stack up against each other in terms of stats, abilities, and performance, you can use a table to compare them. Here is an example of a table that compares four popular brawlers in Brawl Stars:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Brawler | Type | Health | Damage | Range | Super Ability |
| --- | --- | --- | --- | --- | --- |
| Shelly | Fighter | 3600 | 300-420 per shell | 7.67 tiles | Fires a powerful blast that knocks back enemies and destroys obstacles. |
| Nita | Fighter | 3800 | 800 per hit | 5.5 tiles | Summons a big bear that attacks enemies and has high health. |
| Crow | Sharpshooter | 3360 | 320 per dagger (plus poison) | 10 tiles | Fires a ring of daggers that deal damage and poison enemies. |
| Poco | Support | 3800 | 700 per hit (plus healing) | 7 tiles (wide spread) | Sends out a wave of music that heals himself and his allies. |
-

You can use this table to see which brawlers have higher or lower health, damage, range, or super abilities. You can also use this table to find out which brawlers are better suited for certain game modes or situations.
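If you want to compare more brawlers than a hand-written table comfortably holds, the same data can be kept as simple records and sorted by whichever stat you care about. The Python sketch below just reuses the four rows from the table above.

```python
# Sort the brawlers from the table above by a chosen stat.
from dataclasses import dataclass

@dataclass
class Brawler:
    name: str
    role: str
    health: int
    range_tiles: float

brawlers = [
    Brawler("Shelly", "Fighter", 3600, 7.67),
    Brawler("Nita", "Fighter", 3800, 5.5),
    Brawler("Crow", "Sharpshooter", 3360, 10.0),
    Brawler("Poco", "Support", 3800, 7.0),
]

# Highest health first; swap the key for range_tiles to rank by reach instead.
for b in sorted(brawlers, key=lambda b: b.health, reverse=True):
    print(f"{b.name:<6} {b.role:<12} HP {b.health}  range {b.range_tiles} tiles")
```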

-

Conclusion: Brawl Stars APK Download is Worth It!

-

Brawl Stars is one of the best mobile games you can play on your Android device. It has amazing graphics, gameplay, characters, and features that will keep you entertained for hours. Whether you want to play solo or with your friends, you will always find something new and exciting in Brawl Stars.

-

If you want to download Brawl Stars APK on your Android device, you can follow the steps we mentioned above. Just make sure you download from a trusted source and scan the file for viruses before installing it. Once you have installed the game, you can start brawling with millions of players around the world!

-

We hope this article helped you learn more about Brawl Stars APK download and how to play the game better. If If you have any questions about Brawl Stars APK download or the game itself, you can check out the FAQs below. You may find the answers you are looking for.

FAQs

-

Is Brawl Stars APK Download Safe?

-

Brawl Stars APK download is safe as long as you download from a reputable website and scan the file for viruses before installing it. However, you should be careful of fake or malicious websites that may try to trick you into downloading harmful files or stealing your personal information. Always check the reviews, ratings, and comments of the website and the file before downloading it.

-

Is Brawl Stars APK Download Legal?

-

Brawl Stars APK download is legal as long as you do not use it to violate the terms of service of the game or the Google Play Store. For example, you should not use it to hack, cheat, or mod the game in any way. You should also not use it to distribute or sell the game without permission from Supercell. If you do any of these things, you may face legal consequences or get banned from the game.

-

How to Update Brawl Stars APK?

-

Brawl Stars APK may not update automatically on your device, unlike the official version from the Google Play Store. To update Brawl Stars APK, you need to download and install the latest version of the file from the same website you downloaded it from. You can also check for updates in the game settings or on the official Brawl Stars website. Make sure you back up your game data before updating to avoid losing your progress.

-

How to Play Brawl Stars on PC?

-

If you want to play Brawl Stars on your PC, you need to use an Android emulator. An Android emulator is a software that allows you to run Android apps and games on your PC. Some popular Android emulators are BlueStacks, NoxPlayer, and LDPlayer. To play Brawl Stars on PC, you need to follow these steps:

-
1. Download and install an Android emulator on your PC.
2. Launch the emulator and sign in with your Google account.
3. Download and install Brawl Stars APK from a trusted website or from the emulator's app store.
4. Launch Brawl Stars and enjoy playing on a bigger screen with better controls.

How to Get Free Gems in Brawl Stars?

-

Gems are the premium currency in Brawl Stars that can be used to buy skins, boxes, brawl passes, and other items. You can get free gems in Brawl Stars by completing quests, opening boxes, watching ads, participating in events, or using codes. You can also get free gems by using third-party apps or websites that offer surveys, tasks, or rewards. However, you should be careful of scams or hacks that may try to steal your account or personal information.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Wordscapes Uncrossed Mod APK for Free - Unlimited Coins and Hints.md b/spaces/1phancelerku/anime-remove-background/Download Wordscapes Uncrossed Mod APK for Free - Unlimited Coins and Hints.md deleted file mode 100644 index 8dfbc7013ba04dadbd30bac153516179e85c111d..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Wordscapes Uncrossed Mod APK for Free - Unlimited Coins and Hints.md +++ /dev/null @@ -1,147 +0,0 @@ - -

Wordscapes Uncrossed Mod APK: A Fun and Challenging Word Game

-

If you love word games, you might have heard of Wordscapes, one of the most popular and addictive games in the genre. But did you know that there is a sequel to Wordscapes that is even more fun and challenging? It's called Wordscapes Uncrossed, and it's a game that will test your brain power and vocabulary skills like never before.

-

wordscapes uncrossed mod apk


Download Zip ►►►►► https://jinyurl.com/2uNMPD



-

In this article, we'll tell you everything you need to know about Wordscapes Uncrossed, how to play it, how to download and install its mod APK version, and how to enjoy it safely and responsibly. So, if you're ready to dive into the world of words, let's get started!

-

How to Play Wordscapes Uncrossed

-

The basic rules and gameplay of Wordscapes Uncrossed

-

Wordscapes Uncrossed is a word puzzle game that is similar to crossword puzzles, but with a twist. Instead of filling in the blanks with clues, you have to swipe letters on the screen to form words that fit into the grid. The words can be horizontal, vertical, or diagonal, as long as they are connected by a line.

-

The game starts with easy puzzles that have only a few letters and words, but as you progress, the puzzles get harder and bigger, with more letters and words to find. You also have to deal with bonus words, which are extra words that are not part of the grid but can earn you coins if you find them.
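The core mechanic, forming words from a fixed set of letters, is easy to model. The sketch below (plain Python, with a made-up letter set and word list) checks which candidate words can be built from the letters on the wheel, which is essentially what the game validates when you swipe.

```python
from collections import Counter

def can_form(word: str, letters: str) -> bool:
    """True if `word` can be spelled using each letter at most as often as it appears."""
    need, have = Counter(word.upper()), Counter(letters.upper())
    return all(have[ch] >= n for ch, n in need.items())

# Hypothetical puzzle: letters on the wheel and a few candidate words.
letters = "TSEAR"
for w in ["RATE", "STARE", "TEARS", "TREAT"]:
    print(w, "->", "valid" if can_form(w, letters) else "not enough letters")
```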

-

The different modes and levels of Wordscapes Uncrossed

-

Wordscapes Uncrossed has two main modes: Classic and Daily. In Classic mode, you can play through hundreds of levels that are divided into different themes, such as Forest, Sky, Ocean, Canyon, etc. Each theme has its own background image and music that create a relaxing atmosphere for playing.

-

In Daily mode, you can play a new puzzle every day that is based on the current date. The daily puzzles are more challenging than the classic ones, but they also offer more rewards, such as coins, hints, and stars. You can also compare your score with other players around the world on the leaderboard.

-

wordscapes uncrossed apk download
-wordscapes uncrossed game free
-wordscapes uncrossed mod apk unlimited coins
-wordscapes uncrossed latest version
-wordscapes uncrossed hack apk
-wordscapes uncrossed word puzzle
-wordscapes uncrossed android game
-wordscapes uncrossed cheats and answers
-wordscapes uncrossed online play
-wordscapes uncrossed for pc
-wordscapes uncrossed app store
-wordscapes uncrossed by peoplefun
-wordscapes uncrossed level 1
-wordscapes uncrossed review
-wordscapes uncrossed tips and tricks
-wordscapes uncrossed mod apk 2023
-wordscapes uncrossed best word game
-wordscapes uncrossed no ads
-wordscapes uncrossed premium apk
-wordscapes uncrossed update
-wordscapes uncrossed how to play
-wordscapes uncrossed daily puzzle
-wordscapes uncrossed bonus words
-wordscapes uncrossed anagram solver
-wordscapes uncrossed relaxing backgrounds
-wordscapes uncrossed mod apk rexdl
-wordscapes uncrossed brain teaser
-wordscapes uncrossed crossword game
-wordscapes uncrossed offline mode
-wordscapes uncrossed new levels
-wordscapes uncrossed mod apk revdl
-wordscapes uncrossed fun word quiz
-wordscapes uncrossed challenge your mind
-wordscapes uncrossed apk pure
-wordscapes uncrossed mod apk happymod
-wordscapes uncrossed easy to hard
-wordscapes uncrossed word finder
-wordscapes uncrossed mod apk android 1
-wordscapes uncrossed free coins
-wordscapes uncrossed mod menu apk
-wordscapes uncrossed mod apk unlimited hints
-wordscapes uncrossed word unscramble game
-wordscapes uncrossed mod apk 1.3.1
-wordscapes uncrossed terms of service
-wordscapes uncrossed mod apk latest version
-wordscapes uncrossed word search game
-wordscapes uncrossed mod apk no root
-wordscapes uncrossed mod apk ios

-

The benefits of playing Wordscapes Uncrossed for your brain and vocabulary

-

Wordscapes Uncrossed is not only a fun game, but also a great way to improve your brain function and vocabulary. By playing this game, you can:

- -

How to Download and Install Wordscapes Uncrossed Mod APK

-

What is a mod APK and why you should use it

-


A mod APK is a modified version of an original Android app that provides users with some extra or improved features. APK is a file format that contains all the elements of an app and can be installed on an Android device. Mod APKs are usually created by reworking the original app’s code or adding new components to it.
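Under the hood, an APK is a ZIP-based archive, so you can peek inside one without installing it. The sketch below lists an archive's contents with Python's standard `zipfile` module; the file name is a placeholder for whatever package you have downloaded.

```python
import zipfile

# Placeholder name: any .apk you have downloaded.
PACKAGE = "wordscapes_uncrossed.apk"

with zipfile.ZipFile(PACKAGE) as z:
    for info in z.infolist()[:20]:           # the first 20 entries are enough for a quick look
        print(f"{info.file_size:>10}  {info.filename}")
    # Every valid APK contains a binary AndroidManifest.xml at the archive root.
    print("Has AndroidManifest.xml:", "AndroidManifest.xml" in z.namelist())
```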

-

The features and advantages of Wordscapes Uncrossed Mod APK

-

If you want to enjoy Wordscapes Uncrossed without any limitations or ads, you might want to try Wordscapes Uncrossed Mod APK. This is a modified version of the game that offers some features and advantages that are not available in the official app, such as:

- -

The steps to download and install Wordscapes Uncrossed Mod APK on your device

-

If you want to download and install Wordscapes Uncrossed Mod APK on your device, you need to follow these steps:

-
    -
  1. Make sure your device has enough storage space and is compatible with the game's requirements.
  2. -
  3. Go to a reliable and safe website that offers Wordscapes Uncrossed Mod APK for download, such as APKPure or APKFab.
  4. -
  5. Tap on the download button and wait for the file to be downloaded on your device.
  6. -
  7. Before installing the file, you need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
  8. -
  9. Locate the downloaded file on your device and tap on it to start the installation process.
  10. -
  11. Follow the instructions on the screen and wait for the installation to finish.
  12. -
  13. Launch the game and enjoy Wordscapes Uncrossed Mod APK!
  14. -

How to Enjoy Wordscapes Uncrossed Mod APK Safely and Responsibly

-

The risks and precautions of using a mod APK

-

While Wordscapes Uncrossed Mod APK can provide you with some benefits, it also comes with some risks and drawbacks that you should be aware of. Some of the possible risks and precautions of using a mod APK are:

- -

The tips and tricks to make the most of Wordscapes Uncrossed Mod APK

-

If you want to have more fun and success with Wordscapes Uncrossed Mod APK, you can try some of these tips and tricks:

- -

The alternatives and recommendations for other word games

-

If you love word games, you might also want to try some of these alternatives and recommendations for other word games that are similar to Wordscapes Uncrossed:

| Name | Description |
| --- | --- |
| Word Connect | A word game that requires you to connect letters to form words that fill up the crossword board. You can also discover hidden words and earn coins. |
| Word Cookies | A word game that requires you to swipe letters to form words that match with the given cookies. You can also unlock new levels and themes as you play. |
| Word Crossy | A word game that combines crossword puzzles and word searches. You have to swipe letters to form words that cross each other on the board. You can also collect butterflies and flowers as you play. |
| Word Swipe | A word game that requires you to swipe letters to form words that fit into the blanks on the board. You can also use power-ups and hints to help you solve the puzzles. |
| Word Link | A word game that requires you to link letters to form words that fill up the grid. You can also explore different themes and modes as you play. |
-

Conclusion

-

Wordscapes Uncrossed is a fun and challenging word game that will keep you entertained and engaged for hours. It is a great way to improve your brain function and vocabulary while having fun. If you want to enjoy this game without any limitations or ads, you can download and install Wordscapes Uncrossed Mod APK on your device. However, you should also be aware of the risks and precautions of using a mod APK, and use it safely and responsibly. You can also try some tips and tricks to make the most of Wordscapes Uncrossed Mod APK, or explore some alternatives and recommendations for other word games that are similar to it. We hope you found this article helpful and informative, and we wish you a happy and enjoyable gaming experience with Wordscapes Uncrossed Mod APK!

-

FAQs

-

Here are some frequently asked questions about Wordscapes Uncrossed Mod APK:

-
    -
  1. What is the difference between Wordscapes and Wordscapes Uncrossed?
  2. -

    Wordscapes and Wordscapes Uncrossed are both word puzzle games that are developed by PeopleFun. The main difference is that Wordscapes Uncrossed has a simpler and more minimalist design, with fewer letters and words per puzzle, but more puzzles per theme. Wordscapes Uncrossed also has a daily mode that offers a new puzzle every day.

    -
  3. Is Wordscapes Uncrossed Mod APK safe to use?
  4. -

    Wordscapes Uncrossed Mod APK is generally safe to use, as long as you download it from a reliable and verified source, and scan it with an antivirus app before installing it. However, you should also be careful of the possible risks and drawbacks of using a mod APK, such as malware infection, legal issues, or ban or suspension from the game or its online services.

    -
  5. How can I get more coins in Wordscapes Uncrossed Mod APK?
  6. -

    There are several ways to get more coins in Wordscapes Uncrossed Mod APK, such as:

    - -
  7. How can I update Wordscapes Uncrossed Mod APK?
  8. -

    To update Wordscapes Uncrossed Mod APK, you need to follow these steps:

    -
      -
    1. Delete the old version of Wordscapes Uncrossed Mod APK from your device
    2. -
    3. Go to the website where you downloaded the mod APK and check if there is a new version available
    4. -
    5. Download the new version of Wordscapes Uncrossed Mod APK on your device
    6. -
    7. Install the new version of Wordscapes Uncrossed Mod APK on your device
    8. -
    9. Launch the game and enjoy the updated features
    10. -
    -
  9. What are some other games like Wordscapes Uncrossed?
  10. -

    If you like Wordscapes Uncrossed, you might also like some other games like Word Connect, Word Cookies, Word Crossy, Word Swipe, or Word Link. These are all word puzzle games that require you to swipe letters to form words that fit into the grid or the blanks. They also have different themes, modes, levels, and features that make them fun and challenging.

    -

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Taxi Game 2 on Windows PC Career Mode and Realistic GPS.md b/spaces/1phancelerku/anime-remove-background/Enjoy Taxi Game 2 on Windows PC Career Mode and Realistic GPS.md deleted file mode 100644 index 3bd6a0c95b820b1fb11e55793dfbed5deff7d559..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy Taxi Game 2 on Windows PC Career Mode and Realistic GPS.md +++ /dev/null @@ -1,113 +0,0 @@ - -

Taxi Game 2: How to Download and Play on PC Windows 7

-

Do you love driving games and want to experience the thrill of being a taxi driver in a realistic city? If yes, then you should try Taxi Game 2, one of the best taxi games for mobile devices. But what if you want to play it on your PC Windows 7 instead of your phone or tablet? Don't worry, we have got you covered. In this article, we will show you how to download and play Taxi Game 2 on PC Windows 7 using two different methods. We will also share some tips and tricks to help you master the game and become the best taxi driver in town.

-

taxi game 2 download for pc windows 7


Download ··· https://jinyurl.com/2uNOJD



-

Introduction

-

What is Taxi Game 2?

-

Taxi Game 2 is a free driving simulator game developed by baklabs. It is the sequel to the popular Taxi Game, which has over 100 million downloads on Google Play Store. In Taxi Game 2, you can enjoy a full 3D open world, a cab driving simulator, a career mode, an engaging taxi driver gameplay, a GPS navigation system, and many routes across the city. You can also choose your passengers, buy new cars, upgrade your features, and build your taxi empire. Taxi Game 2 is constantly developed and updated, so you can expect new features and improvements in the future.

-

Why play Taxi Game 2 on PC Windows 7?

-

While Taxi Game 2 is designed for mobile devices, there are many reasons why you might want to play it on your PC Windows 7 instead. Here are some of them:

- -

How to download Taxi Game 2 on PC Windows 7

-

Method 1: Using an Android emulator

-

An Android emulator is software that allows you to run Android apps and games on your PC Windows 7. There are many Android emulators available online, such as BlueStacks, LDPlayer, NoxPlayer, etc. Here are the steps to download and play Taxi Game 2 on PC Windows 7 using an Android emulator:

-

Step 1: Download and install an Android emulator

-

Choose an Android emulator that suits your PC Windows 7 specifications and preferences. You can visit the official websites of the emulators and compare their features, requirements, and reviews. Then, download the emulator installer file and follow the instructions to install it on your PC Windows 7.

-

Step 2: Launch the emulator and sign in with your Google account

-

After installing the emulator, launch it and wait for it to load. You will see a virtual Android device on your PC Windows 7 screen. Then, sign in with your Google account or create a new one if you don't have one. This will allow you to access the Google Play Store and other Google services on the emulator.

-

Step 3: Search for Taxi Game 2 on the Google Play Store

-

On the emulator, open the Google Play Store app and search for Taxi Game 2. You will see the game icon and some information about it. Click on the Install button to download and install Taxi Game 2 on your PC Windows 7 via the emulator.
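If the emulator you chose exposes ADB (most popular ones do), you can also sideload an APK from the Windows command line instead of going through the Play Store. The sketch below drives the real `adb devices` and `adb install` commands through Python's `subprocess`; the APK path is a placeholder, and the `adb` executable must already be on your PATH (it ships with the Android SDK platform tools and with several emulators).

```python
import subprocess

APK_PATH = "taxi_game_2.apk"   # placeholder: path to the APK you downloaded

# List the devices/emulators adb can see, then install the package onto the default one.
subprocess.run(["adb", "devices"], check=True)
result = subprocess.run(
    ["adb", "install", "-r", APK_PATH],   # -r: reinstall and keep app data if already present
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```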

-

Step 4: Install and run Taxi Game 2 on your PC Windows 7

-

Once the installation is complete, you can find Taxi Game 2 on the emulator's home screen or app drawer. Click on the game icon to launch it and start playing Taxi Game 2 on your PC Windows 7. You can adjust the settings, such as the graphics quality, sound volume, control scheme, etc., according to your preferences. You can also use the emulator's features, such as screen recording, screenshot, keyboard mapping, etc., to enhance your gaming experience.

-

taxi game 2 pc download free
-taxi game 2 for windows 7 64 bit
-taxi game 2 simulator on pc
-taxi game 2 career mode download
-taxi game 2 windows 7 install
-taxi game 2 full version for pc
-taxi game 2 offline download windows 7
-taxi game 2 pc emulator
-taxi game 2 apk for windows 7
-taxi game 2 driving simulator pc
-taxi game 2 latest version download
-taxi game 2 on windows 10
-taxi game 2 free online play pc
-taxi game 2 hack download for pc
-taxi game 2 mod apk windows 7
-taxi game 2 cheats for pc
-taxi game 2 update download windows 7
-taxi game 2 bluestacks
-taxi game 2 ldplayer
-taxi game 2 noxplayer
-taxi game 2 baklabs download for pc
-taxi game 2 open world pc
-taxi game 2 cab driver gameplay
-taxi game 2 passengers pick up windows 7
-taxi game 2 gps navigation pc
-taxi game 2 city traffic racer download
-taxi game 2 best car for pc
-taxi game 2 gas stations windows 7
-taxi game 2 tips and tricks pc
-taxi game 2 review for windows 7
-crazy taxi classic download for pc windows 7
-crazy taxi classic on bluestacks windows 7
-crazy taxi classic arcade game pc
-crazy taxi classic emulator for windows 7
-crazy taxi classic free play online pc
-crazy taxi classic full screen windows 7
-crazy taxi classic original soundtrack pc
-crazy taxi classic cheats and codes windows 7
-crazy taxi classic controller support pc
-crazy taxi classic steam download windows 7
-crazy driver: cab simulator on pc windows 7
-crazy driver: cab simulator free download
-crazy driver: cab simulator gameplay
-crazy driver: cab simulator mod apk
-crazy driver: cab simulator online play
-crazy driver: cab simulator hack tool
-crazy driver: cab simulator unlimited money
-crazy driver: cab simulator realistic graphics
-crazy driver: cab simulator missions and challenges

-

Method 2: Using an APK/XAPK file

-

An APK/XAPK file is a package file that contains the app or game data and installation instructions. You can use an APK/XAPK file to install Taxi Game 2 on your PC Windows 7 without using an emulator. However, you will need an APK/XAPK installer to do this. Here are the steps to download and play Taxi Game 2 on PC Windows 7 using an APK/XAPK file:

-

Step 1: Download the APK/XAPK file of Taxi Game 2

-

You can download the APK/XAPK file of Taxi Game 2 from various online sources, such as APKPure, Uptodown, APKMirror, etc. Make sure that you download the latest version of the game and that it is compatible with your PC Windows 7. You can also scan the file for viruses or malware before downloading it.
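An XAPK is typically just a ZIP bundle that wraps the base APK together with its OBB expansion files, which is why a dedicated installer is needed. If you are curious what is inside one, the sketch below extracts it with Python's standard library; the file name is a placeholder and the exact layout varies between download sites.

```python
import zipfile
from pathlib import Path

XAPK_PATH = "taxi_game_2.xapk"        # placeholder: the bundle you downloaded
OUT_DIR = Path("taxi_game_2_unpacked")

with zipfile.ZipFile(XAPK_PATH) as bundle:
    bundle.extractall(OUT_DIR)

# The base .apk and any .obb expansion files end up somewhere under OUT_DIR.
for p in sorted(OUT_DIR.rglob("*")):
    if p.suffix in {".apk", ".obb"}:
        print(p)
```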

-

Step 2: Install and run an APK/XAPK installer on your PC Windows 7

-

You will need an APK/XAPK installer to install Taxi Game 2 on your PC Windows 7 using the APK/XAPK file. There are many APK/XAPK installers available online, such as Pure APK Install, XAPK Installer, Apk Installer Pro, etc. You can choose one that suits your PC Windows 7 specifications and preferences. Then, download the installer and follow the instructions to install it on your PC Windows 7.

-

Step 3: Open the APK/XAPK file with the installer and install Taxi Game 2 on your PC Windows 7

-

After installing the APK/XAPK installer software, launch it and locate the APK/XAPK file of Taxi Game 2 that you have downloaded. Then, open the file with the installer software and follow the instructions to install Taxi Game 2 on your PC Windows 7. Once the installation is complete, you can find Taxi Game 2 on your PC Windows 7 desktop or start menu. Click on the game icon to launch it and start playing Taxi Game 2 on your PC Windows 7.

-

Tips and tricks for playing Taxi Game 2 on PC Windows 7

-

Taxi Game 2 is a fun and challenging game that requires skill, strategy, and patience. Here are some tips and tricks to help you play better and enjoy more:

-

Tip 1: Use the Crazy Dash to boost your speed

-

The Crazy Dash is a special move that allows you to accelerate quickly and gain more speed. To perform it, you need to tap the brake and the gas pedals alternately. You will see a yellow flash on your screen when you do it correctly. The Crazy Dash can help you reach your destination faster, avoid traffic, and earn more money. However, be careful not to crash into other vehicles or obstacles, as this will damage your taxi and reduce your score.

-

Tip 2: Choose your passengers wisely

-

Not all passengers are the same in Taxi Game 2. Some passengers will pay you more, some will give you more time, and some will have special requests or challenges. You can see the information about each passenger on the top of their heads, such as their name, destination, fare, and time limit. You can also see their mood and personality, which will affect how they react to your driving. For example, some passengers will be happy if you drive fast and crazy, while others will be angry or scared. You should choose your passengers based on your preferences and goals. For instance, if you want to earn more money, you should pick up passengers who offer high fares or tips. If you want to have more fun, you should pick up passengers who like your driving style or have interesting stories.

-

Tip 3: Refuel your taxi at gas stations

-

Your taxi has a gas meter that shows how much fuel you have left. If you run out of gas, you will lose the game and have to start over. To avoid this, you should refuel your taxi at gas stations whenever you can. You can find gas stations on the map or follow the signs on the road. Refueling your taxi will cost you some money, but it is worth it in the long run. You can also upgrade your fuel tank capacity with the money you earn from your rides.

-

Tip 4: Follow the GPS navigation to find the best routes

-

Taxi Game 2 has a GPS navigation system that shows you the best routes to take your passengers to their destinations. You can see the GPS map on the top right corner of your screen, which will indicate your current location, your destination, and the optimal path to follow. You can also see arrows on the road that guide you along the way. Following the GPS navigation will help you save time, avoid traffic jams, and earn more money. However, you can also explore the city and find shortcuts or alternative routes if you want to challenge yourself or have more fun.

-

Tip 5: Upgrade your taxi with new cars and features

-

Taxi Game 2 allows you to upgrade your taxi with new cars and features that will improve your performance and appearance. You can buy new cars with different models, colors, and stats from the garage. You can also customize your cars with stickers, decals, spoilers, rims, etc. Moreover, you can enhance your cars with new features, such as turbo boosters, nitro boosters, shock absorbers, etc. Upgrading your taxi will cost you some money, but it will make your game more enjoyable and rewarding.

-

Conclusion

-

Taxi Game 2 is a great game for anyone who loves driving games and wants to experience the life of a taxi driver in a realistic city. It has amazing graphics, realistic physics, smooth controls, and diverse gameplay modes. It is also easy to download and play on PC Windows 7 using an Android emulator or an APK/XAPK file. With these tips and tricks, you can master Taxi Game 2 and become the best taxi driver in town.

-

FAQs

-

-
-
\ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/utils/deprecation_utils.py b/spaces/1toTree/lora_test/ppdiffusers/utils/deprecation_utils.py deleted file mode 100644 index 1ba7e7c3b2cc103da072af743fc6b0f66bf40549..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/utils/deprecation_utils.py +++ /dev/null @@ -1,64 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -import warnings -from typing import Any, Dict, Optional, Union - -from packaging import version - - -def deprecate(*args, take_from: Optional[Union[Dict, Any]] = None, standard_warn=True): - from .. import __version__ - - deprecated_kwargs = take_from - values = () - if not isinstance(args[0], tuple): - args = (args,) - - for attribute, version_name, message in args: - if version.parse(version.parse(__version__).base_version) >= version.parse(version_name): - raise ValueError( - f"The deprecation tuple {(attribute, version_name, message)} should be removed since ppdiffusers'" - f" version {__version__} is >= {version_name}" - ) - - warning = None - if isinstance(deprecated_kwargs, dict) and attribute in deprecated_kwargs: - values += (deprecated_kwargs.pop(attribute),) - warning = f"The `{attribute}` argument is deprecated and will be removed in version {version_name}." - elif hasattr(deprecated_kwargs, attribute): - values += (getattr(deprecated_kwargs, attribute),) - warning = f"The `{attribute}` attribute is deprecated and will be removed in version {version_name}." - elif deprecated_kwargs is None: - warning = f"`{attribute}` is deprecated and will be removed in version {version_name}." 
- - if warning is not None: - warning = warning + " " if standard_warn else "" - warnings.warn(warning + message, FutureWarning, stacklevel=2) - - if isinstance(deprecated_kwargs, dict) and len(deprecated_kwargs) > 0: - call_frame = inspect.getouterframes(inspect.currentframe())[1] - filename = call_frame.filename - line_number = call_frame.lineno - function = call_frame.function - key, value = next(iter(deprecated_kwargs.items())) - raise TypeError(f"{function} in {filename} line {line_number-1} got an unexpected keyword argument `{key}`") - - if len(values) == 0: - return - elif len(values) == 1: - return values[0] - return values diff --git a/spaces/52Hz/CMFNet_dehazing/app.py b/spaces/52Hz/CMFNet_dehazing/app.py deleted file mode 100644 index 8a8a3feb75e204a50d480aabdc8fd3b3c46d0d02..0000000000000000000000000000000000000000 --- a/spaces/52Hz/CMFNet_dehazing/app.py +++ /dev/null @@ -1,38 +0,0 @@ -import os -import gradio as gr -from PIL import Image -import torch - -os.system( - 'wget https://github.com/FanChiMao/CMFNet/releases/download/v0.0/dehaze_I_OHaze_CMFNet.pth -P experiments/pretrained_models') - - -def inference(img): - if not os.path.exists('test'): - os.system('mkdir test') - - basewidth = 512 - wpercent = (basewidth / float(img.size[0])) - hsize = int((float(img.size[1]) * float(wpercent))) - img = img.resize((basewidth, hsize), Image.BILINEAR) - img.save("test/1.png", "PNG") - os.system( - 'python main_test_CMFNet.py --input_dir test --weights experiments/pretrained_models/dehaze_I_OHaze_CMFNet.pth') - return 'results/1.png' - - -title = "Compound Multi-branch Feature Fusion for Image Restoration (Dehaze)" -description = "Gradio demo for CMFNet. CMFNet achieves competitive performance on three tasks: image deblurring, image dehazing and image deraindrop. Here, we provide a demo for image dehaze. To use it, simply upload your image, or click one of the examples to load them. Reference from: https://huggingface.co/akhaliq" -article = "

Compound Multi-branch Feature Fusion for Real Image Restoration | Github Repo

visitor badge
" - -examples = [['Haze.png']] -gr.Interface( - inference, - [gr.inputs.Image(type="pil", label="Input")], - gr.outputs.Image(type="filepath", label="Output"), - title=title, - description=description, - article=article, - allow_flagging=False, - examples=examples -).launch(debug=True) \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/txt_processors/en.py b/spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/txt_processors/en.py deleted file mode 100644 index 6f755d5ab1f2cf4407daee08cc3639a05e941a97..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/NeuralSeq/data_gen/tts/txt_processors/en.py +++ /dev/null @@ -1,77 +0,0 @@ -import re -import unicodedata - -from g2p_en import G2p -from g2p_en.expand import normalize_numbers -from nltk import pos_tag -from nltk.tokenize import TweetTokenizer - -from data_gen.tts.txt_processors.base_text_processor import BaseTxtProcessor, register_txt_processors -from data_gen.tts.data_gen_utils import is_sil_phoneme, PUNCS - -class EnG2p(G2p): - word_tokenize = TweetTokenizer().tokenize - - def __call__(self, text): - # preprocessing - words = EnG2p.word_tokenize(text) - tokens = pos_tag(words) # tuples of (word, tag) - - # steps - prons = [] - for word, pos in tokens: - if re.search("[a-z]", word) is None: - pron = [word] - - elif word in self.homograph2features: # Check homograph - pron1, pron2, pos1 = self.homograph2features[word] - if pos.startswith(pos1): - pron = pron1 - else: - pron = pron2 - elif word in self.cmu: # lookup CMU dict - pron = self.cmu[word][0] - else: # predict for oov - pron = self.predict(word) - - prons.extend(pron) - prons.extend([" "]) - - return prons[:-1] - - -@register_txt_processors('en') -class TxtProcessor(BaseTxtProcessor): - g2p = EnG2p() - - @staticmethod - def preprocess_text(text): - text = normalize_numbers(text) - text = ''.join(char for char in unicodedata.normalize('NFD', text) - if unicodedata.category(char) != 'Mn') # Strip accents - text = text.lower() - text = re.sub("[\'\"()]+", "", text) - text = re.sub("[-]+", " ", text) - text = re.sub(f"[^ a-z{PUNCS}]", "", text) - text = re.sub(f" ?([{PUNCS}]) ?", r"\1", text) # !! -> ! - text = re.sub(f"([{PUNCS}])+", r"\1", text) # !! -> ! 
- text = text.replace("i.e.", "that is") - text = text.replace("i.e.", "that is") - text = text.replace("etc.", "etc") - text = re.sub(f"([{PUNCS}])", r" \1 ", text) - text = re.sub(rf"\s+", r" ", text) - return text - - @classmethod - def process(cls, txt, preprocess_args): - txt = cls.preprocess_text(txt).strip() - phs = cls.g2p(txt) - txt_struct = [[w, []] for w in txt.split(" ")] - i_word = 0 - for p in phs: - if p == ' ': - i_word += 1 - else: - txt_struct[i_word][1].append(p) - txt_struct = cls.postprocess(txt_struct, preprocess_args) - return txt_struct, txt \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/dpm_solver/__init__.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/dpm_solver/__init__.py deleted file mode 100644 index 7427f38c07530afbab79154ea8aaf88c4bf70a08..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/models/diffusion/dpm_solver/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .sampler import DPMSolverSampler \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/admin/export/$types.d.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/admin/export/$types.d.ts deleted file mode 100644 index 4c044c048ae2ac645558021e958d11d3b77875c3..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/admin/export/$types.d.ts +++ /dev/null @@ -1,8 +0,0 @@ -import type * as Kit from '@sveltejs/kit'; - -type Expand = T extends infer O ? { [K in keyof O]: O[K] } : never; -type RouteParams = { } -type RouteId = '/admin/export'; - -export type RequestHandler = Kit.RequestHandler; -export type RequestEvent = Kit.RequestEvent; \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Vercel.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Vercel.py deleted file mode 100644 index 2d20ca6a2de0b6fdb674090f5c305f5d544d9f86..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Vercel.py +++ /dev/null @@ -1,377 +0,0 @@ -from __future__ import annotations - -import json, base64, requests, execjs, random, uuid - -from ..typing import Any, TypedDict, CreateResult -from .base_provider import BaseProvider -from abc import abstractmethod - - -class Vercel(BaseProvider): - url = 'https://sdk.vercel.ai' - working = True - supports_gpt_35_turbo = True - supports_stream = True - - @staticmethod - @abstractmethod - def create_completion( - model: str, - messages: list[dict[str, str]], - stream: bool, - **kwargs - ) -> CreateResult: - if not model: - model = "gpt-3.5-turbo" - elif model not in model_info: - raise ValueError(f"Model are not supported: {model}") - - headers = { - 'authority' : 'sdk.vercel.ai', - 'accept' : '*/*', - 'accept-language' : 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3', - 'cache-control' : 'no-cache', - 'content-type' : 'application/json', - 'custom-encoding' : get_anti_bot_token(), - 'origin' : 'https://sdk.vercel.ai', - 'pragma' : 'no-cache', - 'referer' : 'https://sdk.vercel.ai/', - 'sec-ch-ua' : '"Google Chrome";v="117", "Not;A=Brand";v="8", "Chromium";v="117"', - 'sec-ch-ua-mobile' : '?0', - 'sec-ch-ua-platform': '"macOS"', - 'sec-fetch-dest' : 'empty', - 'sec-fetch-mode' : 'cors', - 'sec-fetch-site' : 'same-origin', - 'user-agent' : 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like 
Gecko) Chrome/117.0.%s.%s Safari/537.36' % ( - random.randint(99, 999), - random.randint(99, 999) - ) - } - - json_data = { - 'model' : model_info[model]['id'], - 'messages' : messages, - 'playgroundId': str(uuid.uuid4()), - 'chatIndex' : 0} | model_info[model]['default_params'] - - max_retries = kwargs.get('max_retries', 20) - for i in range(max_retries): - response = requests.post('https://sdk.vercel.ai/api/generate', - headers=headers, json=json_data, stream=True) - try: - response.raise_for_status() - except: - continue - for token in response.iter_content(chunk_size=None): - yield token.decode() - break - - -def get_anti_bot_token() -> str: - headers = { - 'authority' : 'sdk.vercel.ai', - 'accept' : '*/*', - 'accept-language' : 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3', - 'cache-control' : 'no-cache', - 'pragma' : 'no-cache', - 'referer' : 'https://sdk.vercel.ai/', - 'sec-ch-ua' : '"Google Chrome";v="117", "Not;A=Brand";v="8", "Chromium";v="117"', - 'sec-ch-ua-mobile' : '?0', - 'sec-ch-ua-platform': '"macOS"', - 'sec-fetch-dest' : 'empty', - 'sec-fetch-mode' : 'cors', - 'sec-fetch-site' : 'same-origin', - 'user-agent' : 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.%s.%s Safari/537.36' % ( - random.randint(99, 999), - random.randint(99, 999) - ) - } - - response = requests.get('https://sdk.vercel.ai/openai.jpeg', - headers=headers).text - - raw_data = json.loads(base64.b64decode(response, - validate=True)) - - js_script = '''const globalThis={marker:"mark"};String.prototype.fontcolor=function(){return `${this}`}; - return (%s)(%s)''' % (raw_data['c'], raw_data['a']) - - raw_token = json.dumps({'r': execjs.compile(js_script).call(''), 't': raw_data['t']}, - separators = (",", ":")) - - return base64.b64encode(raw_token.encode('utf-16le')).decode() - -class ModelInfo(TypedDict): - id: str - default_params: dict[str, Any] - -model_info: dict[str, ModelInfo] = { - 'claude-instant-v1': { - 'id': 'anthropic:claude-instant-v1', - 'default_params': { - 'temperature': 1, - 'maximumLength': 1024, - 'topP': 1, - 'topK': 1, - 'presencePenalty': 1, - 'frequencyPenalty': 1, - 'stopSequences': ['\n\nHuman:'], - }, - }, - 'claude-v1': { - 'id': 'anthropic:claude-v1', - 'default_params': { - 'temperature': 1, - 'maximumLength': 1024, - 'topP': 1, - 'topK': 1, - 'presencePenalty': 1, - 'frequencyPenalty': 1, - 'stopSequences': ['\n\nHuman:'], - }, - }, - 'claude-v2': { - 'id': 'anthropic:claude-v2', - 'default_params': { - 'temperature': 1, - 'maximumLength': 1024, - 'topP': 1, - 'topK': 1, - 'presencePenalty': 1, - 'frequencyPenalty': 1, - 'stopSequences': ['\n\nHuman:'], - }, - }, - 'a16z-infra/llama7b-v2-chat': { - 'id': 'replicate:a16z-infra/llama7b-v2-chat', - 'default_params': { - 'temperature': 0.75, - 'maximumLength': 3000, - 'topP': 1, - 'repetitionPenalty': 1, - }, - }, - 'a16z-infra/llama13b-v2-chat': { - 'id': 'replicate:a16z-infra/llama13b-v2-chat', - 'default_params': { - 'temperature': 0.75, - 'maximumLength': 3000, - 'topP': 1, - 'repetitionPenalty': 1, - }, - }, - 'replicate/llama-2-70b-chat': { - 'id': 'replicate:replicate/llama-2-70b-chat', - 'default_params': { - 'temperature': 0.75, - 'maximumLength': 3000, - 'topP': 1, - 'repetitionPenalty': 1, - }, - }, - 'bigscience/bloom': { - 'id': 'huggingface:bigscience/bloom', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 1024, - 'topP': 0.95, - 'topK': 4, - 'repetitionPenalty': 1.03, - }, - }, - 'google/flan-t5-xxl': { - 'id': 
'huggingface:google/flan-t5-xxl', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 1024, - 'topP': 0.95, - 'topK': 4, - 'repetitionPenalty': 1.03, - }, - }, - 'EleutherAI/gpt-neox-20b': { - 'id': 'huggingface:EleutherAI/gpt-neox-20b', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 1024, - 'topP': 0.95, - 'topK': 4, - 'repetitionPenalty': 1.03, - 'stopSequences': [], - }, - }, - 'OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5': { - 'id': 'huggingface:OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5', - 'default_params': { - 'maximumLength': 1024, - 'typicalP': 0.2, - 'repetitionPenalty': 1, - }, - }, - 'OpenAssistant/oasst-sft-1-pythia-12b': { - 'id': 'huggingface:OpenAssistant/oasst-sft-1-pythia-12b', - 'default_params': { - 'maximumLength': 1024, - 'typicalP': 0.2, - 'repetitionPenalty': 1, - }, - }, - 'bigcode/santacoder': { - 'id': 'huggingface:bigcode/santacoder', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 1024, - 'topP': 0.95, - 'topK': 4, - 'repetitionPenalty': 1.03, - }, - }, - 'command-light-nightly': { - 'id': 'cohere:command-light-nightly', - 'default_params': { - 'temperature': 0.9, - 'maximumLength': 1024, - 'topP': 1, - 'topK': 0, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, - 'command-nightly': { - 'id': 'cohere:command-nightly', - 'default_params': { - 'temperature': 0.9, - 'maximumLength': 1024, - 'topP': 1, - 'topK': 0, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, - 'gpt-4': { - 'id': 'openai:gpt-4', - 'default_params': { - 'temperature': 0.7, - 'maximumLength': 8192, - 'topP': 1, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, - 'gpt-4-0613': { - 'id': 'openai:gpt-4-0613', - 'default_params': { - 'temperature': 0.7, - 'maximumLength': 8192, - 'topP': 1, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, - 'code-davinci-002': { - 'id': 'openai:code-davinci-002', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 1024, - 'topP': 1, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, - 'gpt-3.5-turbo': { - 'id': 'openai:gpt-3.5-turbo', - 'default_params': { - 'temperature': 0.7, - 'maximumLength': 4096, - 'topP': 1, - 'topK': 1, - 'presencePenalty': 1, - 'frequencyPenalty': 1, - 'stopSequences': [], - }, - }, - 'gpt-3.5-turbo-16k': { - 'id': 'openai:gpt-3.5-turbo-16k', - 'default_params': { - 'temperature': 0.7, - 'maximumLength': 16280, - 'topP': 1, - 'topK': 1, - 'presencePenalty': 1, - 'frequencyPenalty': 1, - 'stopSequences': [], - }, - }, - 'gpt-3.5-turbo-16k-0613': { - 'id': 'openai:gpt-3.5-turbo-16k-0613', - 'default_params': { - 'temperature': 0.7, - 'maximumLength': 16280, - 'topP': 1, - 'topK': 1, - 'presencePenalty': 1, - 'frequencyPenalty': 1, - 'stopSequences': [], - }, - }, - 'text-ada-001': { - 'id': 'openai:text-ada-001', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 1024, - 'topP': 1, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, - 'text-babbage-001': { - 'id': 'openai:text-babbage-001', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 1024, - 'topP': 1, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, - 'text-curie-001': { - 'id': 'openai:text-curie-001', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 1024, - 'topP': 1, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, - 
'text-davinci-002': { - 'id': 'openai:text-davinci-002', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 1024, - 'topP': 1, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, - 'text-davinci-003': { - 'id': 'openai:text-davinci-003', - 'default_params': { - 'temperature': 0.5, - 'maximumLength': 4097, - 'topP': 1, - 'presencePenalty': 0, - 'frequencyPenalty': 0, - 'stopSequences': [], - }, - }, -} \ No newline at end of file diff --git a/spaces/AdamOswald1/finetuned_diffusion/app.py b/spaces/AdamOswald1/finetuned_diffusion/app.py deleted file mode 100644 index 65667f942f1e433a663433262b5426ab0b943c4a..0000000000000000000000000000000000000000 --- a/spaces/AdamOswald1/finetuned_diffusion/app.py +++ /dev/null @@ -1,372 +0,0 @@ -from diffusers import AutoencoderKL, UNet2DConditionModel, StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image -import utils -import datetime -import time -import psutil -import random - -start_time = time.time() -is_colab = utils.is_google_colab() -state = None -current_steps = 25 - -class Model: - def __init__(self, name, path="", prefix=""): - self.name = name - self.path = path - self.prefix = prefix - self.pipe_t2i = None - self.pipe_i2i = None - -models = [ - Model("Arcane", "nitrosocke/Arcane-Diffusion", "arcane style "), - Model("Dreamlike Diffusion 1.0", "dreamlike-art/dreamlike-diffusion-1.0", "dreamlikeart "), - Model("Archer", "nitrosocke/archer-diffusion", "archer style "), - Model("Anything V3", "Linaqruf/anything-v3.0", ""), - Model("Anything V4", "andite/anything-v4.0", ""), - Model("Modern Disney", "nitrosocke/mo-di-diffusion", "modern disney style "), - Model("Classic Disney", "nitrosocke/classic-anim-diffusion", "classic disney style "), - Model("Loving Vincent (Van Gogh)", "dallinmackay/Van-Gogh-diffusion", "lvngvncnt "), - Model("Wavyfusion", "wavymulder/wavyfusion", "wa-vy style "), - Model("Analog Diffusion", "wavymulder/Analog-Diffusion", "analog style "), - Model("Redshift renderer (Cinema4D)", "nitrosocke/redshift-diffusion", "redshift style "), - Model("Midjourney v4 style", "prompthero/midjourney-v4-diffusion", "mdjrny-v4 style "), - Model("Waifu", "hakurei/waifu-diffusion"), - Model("Cyberpunk Anime", "DGSpitzer/Cyberpunk-Anime-Diffusion", "dgs illustration style "), - Model("Elden Ring", "nitrosocke/elden-ring-diffusion", "elden ring style "), - Model("TrinArt v2", "naclbit/trinart_stable_diffusion_v2"), - Model("Spider-Verse", "nitrosocke/spider-verse-diffusion", "spiderverse style "), - Model("Balloon Art", "Fictiverse/Stable_Diffusion_BalloonArt_Model", "BalloonArt "), - Model("Tron Legacy", "dallinmackay/Tron-Legacy-diffusion", "trnlgcy "), - Model("Pokémon", "lambdalabs/sd-pokemon-diffusers"), - Model("Pony Diffusion", "AstraliteHeart/pony-diffusion"), - Model("Robo Diffusion", "nousr/robo-diffusion"), - Model("Epic Diffusion", "johnslegers/epic-diffusion"), - Model("Space Machine", "rabidgremlin/sd-db-epic-space-machine", "EpicSpaceMachine"), - Model("Spacecraft", "rabidgremlin/sd-db-epic-space-machine, Guizmus/Tardisfusion", "EpicSpaceMachine, Tardis Box style"), - Model("TARDIS", "Guizmus/Tardisfusion", "Tardis Box style"), - Model("Modern Era TARDIS Interior", "Guizmus/Tardisfusion", "Modern Tardis style"), - Model("Classic Era TARDIS Interior", "Guizmus/Tardisfusion", "Classic Tardis style"), - Model("Spacecraft Interior", "Guizmus/Tardisfusion, rabidgremlin/sd-db-epic-space-machine", "Classic 
Tardis style, Modern Tardis style, EpicSpaceMachine"), - Model("CLIP", "EleutherAI/clip-guided-diffusion", "CLIP"), - Model("Genshin Waifu", "crumb/genshin-stable-inversion, yuiqena/GenshinImpact, katakana/2D-Mix, Guizmus/AnimeChanStyle", "Female, female, Woman, woman, Girl, girl"), - Model("Genshin", "crumb/genshin-stable-inversion, yuiqena/GenshinImpact, katakana/2D-Mix, Guizmus/AnimeChanStyle", ""), - Model("Waifu", "hakurei/waifu-diffusion, technillogue/waifu-diffusion, Guizmus/AnimeChanStyle, katakana/2D-Mix", ""), - Model("Pokémon", "lambdalabs/sd-pokemon-diffusers", ""), - Model("Test", "AdamOswald1/Idk", ""), - Model("Test2", "AdamOswald1/Tester", ""), - Model("Anime", "Guizmus/AnimeChanStyle, katakana/2D-Mix", ""), - Model("Beeple", "riccardogiorato/beeple-diffusion", "beeple style "), - Model("Avatar", "riccardogiorato/avatar-diffusion", "avatartwow style "), - Model("Poolsuite", "prompthero/poolsuite", "poolsuite style ") - ] - -custom_model = None -if is_colab: - models.insert(0, Model("Custom model")) - custom_model = models[0] - -last_mode = "txt2img" -current_model = models[1] if is_colab else models[0] -current_model_path = current_model.path - -if is_colab: - pipe = StableDiffusionPipeline.from_pretrained( - current_model.path, - torch_dtype=torch.float16, - scheduler=DPMSolverMultistepScheduler.from_pretrained(current_model.path, subfolder="scheduler"), - safety_checker=lambda images, clip_input: (images, False) - ) - -else: - pipe = StableDiffusionPipeline.from_pretrained( - current_model.path, - torch_dtype=torch.float16, - scheduler=DPMSolverMultistepScheduler.from_pretrained(current_model.path, subfolder="scheduler") - ) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe.enable_xformers_memory_efficient_attention() - -device = "GPU 🔥" if torch.cuda.is_available() else "CPU 🥶" - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def update_state(new_state): - global state - state = new_state - -def update_state_info(old_state): - if state and state != old_state: - return gr.update(value=state) - -def custom_model_changed(path): - models[0].path = path - global current_model - current_model = models[0] - -def on_model_change(model_name): - - prefix = "Enter prompt. \"" + next((m.prefix for m in models if m.name == model_name), None) + "\" is prefixed automatically" if model_name != models[0].name else "Don't forget to use the custom model prefix in the prompt!" 
- - return gr.update(visible = model_name == models[0].name), gr.update(placeholder=prefix) - -def on_steps_change(steps): - global current_steps - current_steps = steps - -def pipe_callback(step: int, timestep: int, latents: torch.FloatTensor): - update_state(f"{step}/{current_steps} steps")#\nTime left, sec: {timestep/100:.0f}") - -def inference(model_name, prompt, guidance, steps, n_images=1, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt=""): - - update_state(" ") - - print(psutil.virtual_memory()) # print memory usage - - global current_model - for model in models: - if model.name == model_name: - current_model = model - model_path = current_model.path - - # generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - if seed == 0: - seed = random.randint(0, 2147483647) - - if torch.cuda.is_available(): - generator = torch.Generator('cuda').manual_seed(seed) - else: - generator = torch.Generator().manual_seed(seed) - - try: - if img is not None: - return img_to_img(model_path, prompt, n_images, neg_prompt, img, strength, guidance, steps, width, height, generator, seed), f"Done. Seed: {seed}" - else: - return txt_to_img(model_path, prompt, n_images, neg_prompt, guidance, steps, width, height, generator, seed), f"Done. Seed: {seed}" - except Exception as e: - return None, error_str(e) - -def txt_to_img(model_path, prompt, n_images, neg_prompt, guidance, steps, width, height, generator, seed): - - print(f"{datetime.datetime.now()} txt_to_img, model: {current_model.name}") - - global last_mode - global pipe - global current_model_path - if model_path != current_model_path or last_mode != "txt2img": - current_model_path = model_path - - update_state(f"Loading {current_model.name} text-to-image model...") - - if is_colab or current_model == custom_model: - pipe = StableDiffusionPipeline.from_pretrained( - current_model_path, - torch_dtype=torch.float16, - scheduler=DPMSolverMultistepScheduler.from_pretrained(current_model.path, subfolder="scheduler"), - safety_checker=lambda images, clip_input: (images, False) - ) - else: - pipe = StableDiffusionPipeline.from_pretrained( - current_model_path, - torch_dtype=torch.float16, - scheduler=DPMSolverMultistepScheduler.from_pretrained(current_model.path, subfolder="scheduler") - ) - # pipe = pipe.to("cpu") - # pipe = current_model.pipe_t2i - - if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe.enable_xformers_memory_efficient_attention() - last_mode = "txt2img" - - prompt = current_model.prefix + prompt - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_images_per_prompt=n_images, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator, - callback=pipe_callback) - - # update_state(f"Done. 
Seed: {seed}") - - return replace_nsfw_images(result) - -def img_to_img(model_path, prompt, n_images, neg_prompt, img, strength, guidance, steps, width, height, generator, seed): - - print(f"{datetime.datetime.now()} img_to_img, model: {model_path}") - - global last_mode - global pipe - global current_model_path - if model_path != current_model_path or last_mode != "img2img": - current_model_path = model_path - - update_state(f"Loading {current_model.name} image-to-image model...") - - if is_colab or current_model == custom_model: - pipe = StableDiffusionImg2ImgPipeline.from_pretrained( - current_model_path, - torch_dtype=torch.float16, - scheduler=DPMSolverMultistepScheduler.from_pretrained(current_model.path, subfolder="scheduler"), - safety_checker=lambda images, clip_input: (images, False) - ) - else: - pipe = StableDiffusionImg2ImgPipeline.from_pretrained( - current_model_path, - torch_dtype=torch.float16, - scheduler=DPMSolverMultistepScheduler.from_pretrained(current_model.path, subfolder="scheduler") - ) - # pipe = pipe.to("cpu") - # pipe = current_model.pipe_i2i - - if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe.enable_xformers_memory_efficient_attention() - last_mode = "img2img" - - prompt = current_model.prefix + prompt - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_images_per_prompt=n_images, - image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - # width = width, - # height = height, - generator = generator, - callback=pipe_callback) - - # update_state(f"Done. Seed: {seed}") - - return replace_nsfw_images(result) - -def replace_nsfw_images(results): - - if is_colab: - return results.images - - for i in range(len(results.images)): - if results.nsfw_content_detected[i]: - results.images[i] = Image.open("nsfw.png") - return results.images - -# css = """.finetuned-diffusion-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.finetuned-diffusion-div div h1{font-weight:900;margin-bottom:7px}.finetuned-diffusion-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -# """ -with gr.Blocks(css="style.css") as demo: - gr.HTML( - f""" -
-
-

Finetuned Diffusion

-
-

- BROKEN, USE COLLAB VERSION INSTEAD! ALSO ADD ", 'safety_checker=None'" TO YOUR PROMPT! -

-

- Demo for multiple fine-tuned Stable Diffusion models, trained on different styles:
- Arcane, Archer, Elden Ring, Spider-Verse, Modern Disney, Classic Disney, Loving Vincent (Van Gogh), Redshift renderer (Cinema4D), Midjourney v4 style, Waifu, Pokémon, Pony Diffusion, Robo Diffusion, Cyberpunk Anime, Tron Legacy, Balloon Art + in colab notebook you can load any other Diffusers 🧨 SD model hosted on HuggingFace 🤗. -

-

You can skip the queue and load custom models in the colab: Open In Colab

- Running on {device}{(" in a Google Colab." if is_colab else "")} -

-

You can also duplicate this space and upgrade to gpu by going to settings:
- Duplicate Space

-
- """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - model_name = gr.Dropdown(label="Model", choices=[m.name for m in models], value=current_model.name) - with gr.Box(visible=False) as custom_model_group: - custom_model_path = gr.Textbox(label="Custom model path", placeholder="Path to model, e.g. nitrosocke/Arcane-Diffusion", interactive=True) - gr.HTML("
Custom models have to be downloaded first, so give it some time.
") - - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder="Enter prompt. Style applied automatically").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - - # image_out = gr.Image(height=512) - gallery = gr.Gallery(label="Generated images", show_label=False, elem_id="gallery").style(grid=[2], height="auto") - - state_info = gr.Textbox(label="State", show_label=False, max_lines=2).style(container=False) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - - n_images = gr.Slider(label="Images", value=1, minimum=1, maximum=8, step=1) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=current_steps, minimum=2, maximum=300, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - if is_colab: - model_name.change(on_model_change, inputs=model_name, outputs=[custom_model_group, prompt], queue=False) - custom_model_path.change(custom_model_changed, inputs=custom_model_path, outputs=None) - # n_images.change(lambda n: gr.Gallery().style(grid=[2 if n > 1 else 1], height="auto"), inputs=n_images, outputs=gallery) - steps.change(on_steps_change, inputs=[steps], outputs=[], queue=False) - - inputs = [model_name, prompt, guidance, steps, n_images, width, height, seed, image, strength, neg_prompt] - outputs = [gallery, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - ex = gr.Examples([ - [models[7].name, "tiny cute and adorable kitten adventurer dressed in a warm overcoat with survival gear on a winters day", 7.5, 25], - [models[4].name, "portrait of dwayne johnson", 7.0, 35], - [models[5].name, "portrait of a beautiful alyx vance half life", 10, 25], - [models[6].name, "Aloy from Horizon: Zero Dawn, half body portrait, smooth, detailed armor, beautiful face, illustration", 7.0, 30], - [models[5].name, "fantasy portrait painting, digital art", 4.0, 20], - ], inputs=[model_name, prompt, guidance, steps], outputs=outputs, fn=inference, cache_examples=False) - - gr.HTML(""" -
-
-

Models by @nitrosocke, @haruu1367, @Helixngc7293, @dal_mack, @prompthero and others. ❤️

-

This space uses the DPM-Solver++ sampler by Cheng Lu, et al..

-

Space by:
- Twitter Follow
- GitHub followers



- Buy Me A Coffee

-

visitors

-
- """) - - demo.load(update_state_info, inputs=state_info, outputs=state_info, every=0.5, show_progress=False) - -print(f"Space built in {time.time() - start_time:.2f} seconds") - -# if not is_colab: -demo.queue(concurrency_count=1) -demo.launch(debug=True, share=is_colab) diff --git a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/openpose/model.py b/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/openpose/model.py deleted file mode 100644 index 6f5d8eb6b7e4af7e2a4fc21fe500b29f02ff176d..0000000000000000000000000000000000000000 --- a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/openpose/model.py +++ /dev/null @@ -1,178 +0,0 @@ -import torch -import torch.nn as nn -from collections import OrderedDict - - -def make_layers(block, no_relu_layers): - layers = [] - for layer_name, v in block.items(): - if 'pool' in layer_name: - layer = nn.MaxPool2d(kernel_size=v[0], stride=v[1], padding=v[2]) - layers.append((layer_name, layer)) - else: - conv2d = nn.Conv2d(in_channels=v[0], out_channels=v[1], kernel_size=v[2], stride=v[3], padding=v[4]) - layers.append((layer_name, conv2d)) - if layer_name not in no_relu_layers: - layers.append(('relu_' + layer_name, nn.ReLU(inplace=True))) - - return nn.Sequential(OrderedDict(layers)) - - -class bodypose_model(nn.Module): - - def __init__(self): - super(bodypose_model, self).__init__() - - # these layers have no relu layer - no_relu_layers = ['conv5_5_CPM_L1', 'conv5_5_CPM_L2', 'Mconv7_stage2_L1',\ - 'Mconv7_stage2_L2', 'Mconv7_stage3_L1', 'Mconv7_stage3_L2',\ - 'Mconv7_stage4_L1', 'Mconv7_stage4_L2', 'Mconv7_stage5_L1',\ - 'Mconv7_stage5_L2', 'Mconv7_stage6_L1', 'Mconv7_stage6_L1'] - blocks = {} - block0 = OrderedDict([('conv1_1', [3, 64, 3, 1, 1]), ('conv1_2', [64, 64, 3, 1, 1]), ('pool1_stage1', [2, 2, - 0]), - ('conv2_1', [64, 128, 3, 1, 1]), ('conv2_2', [128, 128, 3, 1, 1]), - ('pool2_stage1', [2, 2, 0]), ('conv3_1', [128, 256, 3, 1, 1]), - ('conv3_2', [256, 256, 3, 1, 1]), ('conv3_3', [256, 256, 3, 1, 1]), - ('conv3_4', [256, 256, 3, 1, 1]), ('pool3_stage1', [2, 2, 0]), - ('conv4_1', [256, 512, 3, 1, 1]), ('conv4_2', [512, 512, 3, 1, 1]), - ('conv4_3_CPM', [512, 256, 3, 1, 1]), ('conv4_4_CPM', [256, 128, 3, 1, 1])]) - - # Stage 1 - block1_1 = OrderedDict([('conv5_1_CPM_L1', [128, 128, 3, 1, 1]), ('conv5_2_CPM_L1', [128, 128, 3, 1, 1]), - ('conv5_3_CPM_L1', [128, 128, 3, 1, 1]), ('conv5_4_CPM_L1', [128, 512, 1, 1, 0]), - ('conv5_5_CPM_L1', [512, 38, 1, 1, 0])]) - - block1_2 = OrderedDict([('conv5_1_CPM_L2', [128, 128, 3, 1, 1]), ('conv5_2_CPM_L2', [128, 128, 3, 1, 1]), - ('conv5_3_CPM_L2', [128, 128, 3, 1, 1]), ('conv5_4_CPM_L2', [128, 512, 1, 1, 0]), - ('conv5_5_CPM_L2', [512, 19, 1, 1, 0])]) - blocks['block1_1'] = block1_1 - blocks['block1_2'] = block1_2 - - self.model0 = make_layers(block0, no_relu_layers) - - # Stages 2 - 6 - for i in range(2, 7): - blocks['block%d_1' % i] = OrderedDict([('Mconv1_stage%d_L1' % i, [185, 128, 7, 1, 3]), - ('Mconv2_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv3_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv4_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv5_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv6_stage%d_L1' % i, [128, 128, 1, 1, 0]), - ('Mconv7_stage%d_L1' % i, [128, 38, 1, 1, 0])]) - - blocks['block%d_2' % i] = OrderedDict([('Mconv1_stage%d_L2' % i, [185, 128, 7, 1, 3]), - ('Mconv2_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv3_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv4_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv5_stage%d_L2' % i, [128, 128, 7, 1, 3]), - 
('Mconv6_stage%d_L2' % i, [128, 128, 1, 1, 0]), - ('Mconv7_stage%d_L2' % i, [128, 19, 1, 1, 0])]) - - for k in blocks.keys(): - blocks[k] = make_layers(blocks[k], no_relu_layers) - - self.model1_1 = blocks['block1_1'] - self.model2_1 = blocks['block2_1'] - self.model3_1 = blocks['block3_1'] - self.model4_1 = blocks['block4_1'] - self.model5_1 = blocks['block5_1'] - self.model6_1 = blocks['block6_1'] - - self.model1_2 = blocks['block1_2'] - self.model2_2 = blocks['block2_2'] - self.model3_2 = blocks['block3_2'] - self.model4_2 = blocks['block4_2'] - self.model5_2 = blocks['block5_2'] - self.model6_2 = blocks['block6_2'] - - def forward(self, x): - - out1 = self.model0(x) - - out1_1 = self.model1_1(out1) - out1_2 = self.model1_2(out1) - out2 = torch.cat([out1_1, out1_2, out1], 1) - - out2_1 = self.model2_1(out2) - out2_2 = self.model2_2(out2) - out3 = torch.cat([out2_1, out2_2, out1], 1) - - out3_1 = self.model3_1(out3) - out3_2 = self.model3_2(out3) - out4 = torch.cat([out3_1, out3_2, out1], 1) - - out4_1 = self.model4_1(out4) - out4_2 = self.model4_2(out4) - out5 = torch.cat([out4_1, out4_2, out1], 1) - - out5_1 = self.model5_1(out5) - out5_2 = self.model5_2(out5) - out6 = torch.cat([out5_1, out5_2, out1], 1) - - out6_1 = self.model6_1(out6) - out6_2 = self.model6_2(out6) - - return out6_1, out6_2 - - -class handpose_model(nn.Module): - - def __init__(self): - super(handpose_model, self).__init__() - - # these layers have no relu layer - no_relu_layers = ['conv6_2_CPM', 'Mconv7_stage2', 'Mconv7_stage3',\ - 'Mconv7_stage4', 'Mconv7_stage5', 'Mconv7_stage6'] - # stage 1 - block1_0 = OrderedDict([('conv1_1', [3, 64, 3, 1, 1]), ('conv1_2', [64, 64, 3, 1, 1]), - ('pool1_stage1', [2, 2, 0]), ('conv2_1', [64, 128, 3, 1, 1]), - ('conv2_2', [128, 128, 3, 1, 1]), ('pool2_stage1', [2, 2, 0]), - ('conv3_1', [128, 256, 3, 1, 1]), ('conv3_2', [256, 256, 3, 1, 1]), - ('conv3_3', [256, 256, 3, 1, 1]), ('conv3_4', [256, 256, 3, 1, 1]), - ('pool3_stage1', [2, 2, 0]), ('conv4_1', [256, 512, 3, 1, 1]), - ('conv4_2', [512, 512, 3, 1, 1]), ('conv4_3', [512, 512, 3, 1, 1]), - ('conv4_4', [512, 512, 3, 1, 1]), ('conv5_1', [512, 512, 3, 1, 1]), - ('conv5_2', [512, 512, 3, 1, 1]), ('conv5_3_CPM', [512, 128, 3, 1, 1])]) - - block1_1 = OrderedDict([('conv6_1_CPM', [128, 512, 1, 1, 0]), ('conv6_2_CPM', [512, 22, 1, 1, 0])]) - - blocks = {} - blocks['block1_0'] = block1_0 - blocks['block1_1'] = block1_1 - - # stage 2-6 - for i in range(2, 7): - blocks['block%d' % i] = OrderedDict([('Mconv1_stage%d' % i, [150, 128, 7, 1, 3]), - ('Mconv2_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv3_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv4_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv5_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv6_stage%d' % i, [128, 128, 1, 1, 0]), - ('Mconv7_stage%d' % i, [128, 22, 1, 1, 0])]) - - for k in blocks.keys(): - blocks[k] = make_layers(blocks[k], no_relu_layers) - - self.model1_0 = blocks['block1_0'] - self.model1_1 = blocks['block1_1'] - self.model2 = blocks['block2'] - self.model3 = blocks['block3'] - self.model4 = blocks['block4'] - self.model5 = blocks['block5'] - self.model6 = blocks['block6'] - - def forward(self, x): - out1_0 = self.model1_0(x) - out1_1 = self.model1_1(out1_0) - concat_stage2 = torch.cat([out1_1, out1_0], 1) - out_stage2 = self.model2(concat_stage2) - concat_stage3 = torch.cat([out_stage2, out1_0], 1) - out_stage3 = self.model3(concat_stage3) - concat_stage4 = torch.cat([out_stage3, out1_0], 1) - out_stage4 = self.model4(concat_stage4) - concat_stage5 = torch.cat([out_stage4, 
out1_0], 1) - out_stage5 = self.model5(concat_stage5) - concat_stage6 = torch.cat([out_stage5, out1_0], 1) - out_stage6 = self.model6(concat_stage6) - return out_stage6 diff --git a/spaces/Adapter/T2I-Adapter/ldm/modules/diffusionmodules/__init__.py b/spaces/Adapter/T2I-Adapter/ldm/modules/diffusionmodules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthbuttons/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthbuttons/Factory.d.ts deleted file mode 100644 index 41489232aabf0a60dcbab81b71c0a4784c1621a8..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fixwidthbuttons/Factory.d.ts +++ /dev/null @@ -1,5 +0,0 @@ -import FixWidthButtons from './FixWidthButtons'; - -export default function ( - config?: FixWidthButtons.IConfig -): FixWidthButtons; \ No newline at end of file diff --git a/spaces/AiBototicus/BucksAI-2/README.md b/spaces/AiBototicus/BucksAI-2/README.md deleted file mode 100644 index b0f01fe18f328d1295ef0d870addf8f7f3b85b74..0000000000000000000000000000000000000000 --- a/spaces/AiBototicus/BucksAI-2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: BucksAI 2 -emoji: 🐢 -colorFrom: green -colorTo: red -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: bsd-3-clause-clear ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AlekseyKorshuk/instagram-filter-removal/modules/normalization.py b/spaces/AlekseyKorshuk/instagram-filter-removal/modules/normalization.py deleted file mode 100644 index fc28bfdeaff873a9212e5af3d32550ef4f67cdd6..0000000000000000000000000000000000000000 --- a/spaces/AlekseyKorshuk/instagram-filter-removal/modules/normalization.py +++ /dev/null @@ -1,16 +0,0 @@ -import torch -import torch.nn as nn - - -class AdaIN(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, x, y): - ch = y.size(1) - sigma, mu = torch.split(y.unsqueeze(-1).unsqueeze(-1), [ch // 2, ch // 2], dim=1) - - x_mu = x.mean(dim=[2, 3], keepdim=True) - x_sigma = x.std(dim=[2, 3], keepdim=True) - - return sigma * ((x - x_mu) / x_sigma) + mu diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/PlayInteractively.py b/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/PlayInteractively.py deleted file mode 100644 index 547b08ab2c4373e23711636488145df148d7eb4e..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/PlayInteractively.py +++ /dev/null @@ -1,197 +0,0 @@ - - - -from tkinter import Tk -from PIL import Image, ImageTk -from tkinter.filedialog import askopenfilename -from GUI import View -from Inference import StyleCLIP -import argparse -#%% - - -class PlayInteractively(): #Controller - ''' - followed Model View Controller Design Pattern - - controller, model, view - ''' - def __init__(self,dataset_name='ffhq'): - - self.root = Tk() - self.view=View(self.root) - self.img_ratio=2 - self.style_clip=StyleCLIP(dataset_name) - - self.view.neutral.bind("", self.text_n) - self.view.target.bind("", self.text_t) - self.view.alpha.bind('', self.ChangeAlpha) - self.view.beta.bind('', self.ChangeBeta) - self.view.set_init.bind('', self.SetInit) - self.view.reset.bind('', self.Reset) - self.view.bg.bind('', 
self.open_img) - - - self.drawn = None - - self.view.target.delete(1.0, "end") - self.view.target.insert("end", self.style_clip.target) -# - self.view.neutral.delete(1.0, "end") - self.view.neutral.insert("end", self.style_clip.neutral) - - - def Reset(self,event): - self.style_clip.GetDt2() - self.style_clip.M.alpha=[0] - - self.view.beta.set(self.style_clip.beta) - self.view.alpha.set(0) - - img=self.style_clip.GetImg() - img=Image.fromarray(img) - img = ImageTk.PhotoImage(img) - self.addImage_m(img) - - - def SetInit(self,event): - codes=self.style_clip.GetCode() - self.style_clip.M.dlatent_tmp=[tmp[:,0] for tmp in codes] - print('set init') - - def ChangeAlpha(self,event): - tmp=self.view.alpha.get() - self.style_clip.M.alpha=[float(tmp)] - - img=self.style_clip.GetImg() - print('manipulate one') - img=Image.fromarray(img) - img = ImageTk.PhotoImage(img) - self.addImage_m(img) - - def ChangeBeta(self,event): - tmp=self.view.beta.get() - self.style_clip.beta=float(tmp) - - img=self.style_clip.GetImg() - print('manipulate one') - img=Image.fromarray(img) - img = ImageTk.PhotoImage(img) - self.addImage_m(img) - - def ChangeDataset(self,event): - - dataset_name=self.view.set_category.get() - - self.style_clip.LoadData(dataset_name) - - self.view.target.delete(1.0, "end") - self.view.target.insert("end", self.style_clip.target) - - self.view.neutral.delete(1.0, "end") - self.view.neutral.insert("end", self.style_clip.neutral) - - def text_t(self,event): - tmp=self.view.target.get("1.0",'end') - tmp=tmp.replace('\n','') - - self.view.target.delete(1.0, "end") - self.view.target.insert("end", tmp) - - print('target',tmp,'###') - self.style_clip.target=tmp - self.style_clip.GetDt2() - self.view.beta.set(self.style_clip.beta) - self.view.alpha.set(3) - self.style_clip.M.alpha=[3] - - img=self.style_clip.GetImg() - print('manipulate one') - img=Image.fromarray(img) - img = ImageTk.PhotoImage(img) - self.addImage_m(img) - - - def text_n(self,event): - tmp=self.view.neutral.get("1.0",'end') - tmp=tmp.replace('\n','') - - self.view.neutral.delete(1.0, "end") - self.view.neutral.insert("end", tmp) - - print('neutral',tmp,'###') - self.style_clip.neutral=tmp - self.view.target.delete(1.0, "end") - self.view.target.insert("end", tmp) - - - def run(self): - self.root.mainloop() - - def addImage(self,img): - self.view.bg.create_image(self.view.width/2, self.view.height/2, image=img, anchor='center') - self.image=img #save a copy of image. 
if not the image will disappear - - def addImage_m(self,img): - self.view.mani.create_image(512, 512, image=img, anchor='center') - self.image2=img - - - def openfn(self): - filename = askopenfilename(title='open',initialdir='./data/'+self.style_clip.M.dataset_name+'/',filetypes=[("all image format", ".jpg"),("all image format", ".png")]) - return filename - - def open_img(self,event): - x = self.openfn() - print(x) - - - img = Image.open(x) - img2 = img.resize(( 512,512), Image.ANTIALIAS) - img2 = ImageTk.PhotoImage(img2) - self.addImage(img2) - - img = ImageTk.PhotoImage(img) - self.addImage_m(img) - - img_index=x.split('/')[-1].split('.')[0] - img_index=int(img_index) - print(img_index) - self.style_clip.M.img_index=img_index - self.style_clip.M.dlatent_tmp=[tmp[img_index:(img_index+1)] for tmp in self.style_clip.M.dlatents] - - - self.style_clip.GetDt2() - self.view.beta.set(self.style_clip.beta) - self.view.alpha.set(3) - - #%% -if __name__ == "__main__": - parser = argparse.ArgumentParser(description='Process some integers.') - - parser.add_argument('--dataset_name',type=str,default='ffhq', - help='name of dataset, for example, ffhq') - - args = parser.parse_args() - dataset_name=args.dataset_name - - self=PlayInteractively(dataset_name) - self.run() - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/multi_subject_dreambooth/README.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/multi_subject_dreambooth/README.md deleted file mode 100644 index d1a7705cfebbc65cca554189445742f3f762aa47..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/multi_subject_dreambooth/README.md +++ /dev/null @@ -1,338 +0,0 @@ -# Multi Subject DreamBooth training - -[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text2image models like stable diffusion given just a few(3~5) images of a subject. -This `train_multi_subject_dreambooth.py` script shows how to implement the training procedure for one or more subjects and adapt it for stable diffusion. Note that this code is based off of the `examples/dreambooth/train_dreambooth.py` script as of 01/06/2022. - -This script was added by @kopsahlong, and is not actively maintained. However, if you come across anything that could use fixing, feel free to open an issue and tag @kopsahlong. - -## Running locally with PyTorch -### Installing the dependencies - -Before running the script, make sure to install the library's training dependencies: - -To start, execute the following steps in a new virtual environment: -```bash -git clone https://github.com/huggingface/diffusers -cd diffusers -pip install -e . -``` - -Then cd into the folder `diffusers/examples/research_projects/multi_subject_dreambooth` and run the following: -```bash -pip install -r requirements.txt -``` - -And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with: - -```bash -accelerate config -``` - -Or for a default accelerate configuration without answering questions about your environment - -```bash -accelerate config default -``` - -Or if your environment doesn't support an interactive shell e.g. 
a notebook - -```python -from accelerate.utils import write_basic_config -write_basic_config() -``` - -### Multi Subject Training Example -In order to have your model learn multiple concepts at once, we simply add in the additional data directories and prompts to our `instance_data_dir` and `instance_prompt` (as well as `class_data_dir` and `class_prompt` if `--with_prior_preservation` is specified) as one comma separated string. - -See an example with 2 subjects below, which learns a model for one dog subject and one human subject: - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export OUTPUT_DIR="path-to-save-model" - -# Subject 1 -export INSTANCE_DIR_1="path-to-instance-images-concept-1" -export INSTANCE_PROMPT_1="a photo of a sks dog" -export CLASS_DIR_1="path-to-class-images-dog" -export CLASS_PROMPT_1="a photo of a dog" - -# Subject 2 -export INSTANCE_DIR_2="path-to-instance-images-concept-2" -export INSTANCE_PROMPT_2="a photo of a t@y person" -export CLASS_DIR_2="path-to-class-images-person" -export CLASS_PROMPT_2="a photo of a person" - -accelerate launch train_multi_subject_dreambooth.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir="$INSTANCE_DIR_1,$INSTANCE_DIR_2" \ - --output_dir=$OUTPUT_DIR \ - --train_text_encoder \ - --instance_prompt="$INSTANCE_PROMPT_1,$INSTANCE_PROMPT_2" \ - --with_prior_preservation \ - --prior_loss_weight=1.0 \ - --class_data_dir="$CLASS_DIR_1,$CLASS_DIR_2" \ - --class_prompt="$CLASS_PROMPT_1,$CLASS_PROMPT_2"\ - --num_class_images=50 \ - --resolution=512 \ - --train_batch_size=1 \ - --gradient_accumulation_steps=1 \ - --learning_rate=1e-6 \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --max_train_steps=1500 -``` - -This example shows training for 2 subjects, but please note that the model can be trained on any number of new concepts. This can be done by continuing to add in the corresponding directories and prompts to the corresponding comma separated string. - -Note also that in this script, `sks` and `t@y` were used as tokens to learn the new subjects ([this thread](https://github.com/XavierXiao/Dreambooth-Stable-Diffusion/issues/71) inspired the use of `t@y` as our second identifier). However, there may be better rare tokens to experiment with, and results also seemed to be good when more intuitive words are used. - -**Important**: New parameters are added to the script, making possible to validate the progress of the training by -generating images at specified steps. Taking also into account that a comma separated list in a text field for a prompt -it's never a good idea (simply because it is very common in prompts to have them as part of a regular text) we -introduce the `concept_list` parameter: allowing to specify a json-like file where you can define the different -configuration for each subject that you want to train. - -An example of how to generate the file: -```python -import json - -# here we are using parameters for prior-preservation and validation as well. 
-concepts_list = [ - { - "instance_prompt": "drawing of a t@y meme", - "class_prompt": "drawing of a meme", - "instance_data_dir": "/some_folder/meme_toy", - "class_data_dir": "/data/meme", - "validation_prompt": "drawing of a t@y meme about football in Uruguay", - "validation_negative_prompt": "black and white" - }, - { - "instance_prompt": "drawing of a sks sir", - "class_prompt": "drawing of a sir", - "instance_data_dir": "/some_other_folder/sir_sks", - "class_data_dir": "/data/sir", - "validation_prompt": "drawing of a sks sir with the Uruguayan sun in his chest", - "validation_negative_prompt": "an old man", - "validation_guidance_scale": 20, - "validation_number_images": 3, - "validation_inference_steps": 10 - } -] - -with open("concepts_list.json", "w") as f: - json.dump(concepts_list, f, indent=4) -``` -And then just point to the file when executing the script: - -```bash -# exports... -accelerate launch train_multi_subject_dreambooth.py \ -# more parameters... ---concepts_list="concepts_list.json" -``` - -You can use the helper from the script to get a better sense of each parameter. - -### Inference - -Once you have trained a model using above command, the inference can be done simply using the `StableDiffusionPipeline`. Make sure to include the `identifier`(e.g. sks in above example) in your prompt. - -```python -from diffusers import StableDiffusionPipeline -import torch - -model_id = "path-to-your-trained-model" -pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") - -prompt = "A photo of a t@y person petting an sks dog" -image = pipe(prompt, num_inference_steps=200, guidance_scale=7.5).images[0] - -image.save("person-petting-dog.png") -``` - -### Inference from a training checkpoint - -You can also perform inference from one of the checkpoints saved during the training process, if you used the `--checkpointing_steps` argument. Please, refer to [the documentation](https://huggingface.co/docs/diffusers/main/en/training/dreambooth#performing-inference-using-a-saved-checkpoint) to see how to do it. - -## Additional Dreambooth documentation -Because the `train_multi_subject_dreambooth.py` script here was forked from an original version of `train_dreambooth.py` in the `examples/dreambooth` folder, I've included the original applicable training documentation for single subject examples below. - -This should explain how to play with training variables such as prior preservation, fine tuning the text encoder, etc. which is still applicable to our multi subject training code. Note also that the examples below, which are single subject examples, also work with `train_multi_subject_dreambooth.py`, as this script supports 1 (or more) subjects. - -### Single subject dog toy example - -Let's get our dataset. Download images from [here](https://drive.google.com/drive/folders/1BO_dyz-p65qhBRRMRA4TbZ8qW4rB99JZ) and save them in a directory. This will be our training data. 
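If you prefer to pull a ready-made set of subject photos instead of collecting your own, the sketch below shows one way to do it. It assumes `huggingface_hub` is installed and uses the `diffusers/dog-example` dataset repo purely as an illustrative source; the target folder is whatever you later pass as `--instance_data_dir`.

```python
from huggingface_hub import snapshot_download

# Pull a handful of example subject images into a local folder.
# NOTE: "diffusers/dog-example" is only an illustrative source; any folder
# containing 3-5 photos of your subject works the same way.
local_dir = "./dog"  # later passed to the training script as --instance_data_dir
snapshot_download(
    "diffusers/dog-example",
    local_dir=local_dir,
    repo_type="dataset",
    ignore_patterns=".gitattributes",
)
```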
- -And launch the training using - -**___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___** - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export INSTANCE_DIR="path-to-instance-images" -export OUTPUT_DIR="path-to-save-model" - -accelerate launch train_dreambooth.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --output_dir=$OUTPUT_DIR \ - --instance_prompt="a photo of sks dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --gradient_accumulation_steps=1 \ - --learning_rate=5e-6 \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --max_train_steps=400 -``` - -### Training with prior-preservation loss - -Prior-preservation is used to avoid overfitting and language-drift. Refer to the paper to learn more about it. For prior-preservation we first generate images using the model with a class prompt and then use those during training along with our data. -According to the paper, it's recommended to generate `num_epochs * num_samples` images for prior-preservation. 200-300 works well for most cases. The `num_class_images` flag sets the number of images to generate with the class prompt. You can place existing images in `class_data_dir`, and the training script will generate any additional images so that `num_class_images` are present in `class_data_dir` during training time. - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export INSTANCE_DIR="path-to-instance-images" -export CLASS_DIR="path-to-class-images" -export OUTPUT_DIR="path-to-save-model" - -accelerate launch train_dreambooth.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --class_data_dir=$CLASS_DIR \ - --output_dir=$OUTPUT_DIR \ - --with_prior_preservation --prior_loss_weight=1.0 \ - --instance_prompt="a photo of sks dog" \ - --class_prompt="a photo of dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --gradient_accumulation_steps=1 \ - --learning_rate=5e-6 \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --num_class_images=200 \ - --max_train_steps=800 -``` - - -### Training on a 16GB GPU: - -With the help of gradient checkpointing and the 8-bit optimizer from bitsandbytes it's possible to run train dreambooth on a 16GB GPU. - -To install `bitandbytes` please refer to this [readme](https://github.com/TimDettmers/bitsandbytes#requirements--installation). - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export INSTANCE_DIR="path-to-instance-images" -export CLASS_DIR="path-to-class-images" -export OUTPUT_DIR="path-to-save-model" - -accelerate launch train_dreambooth.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --class_data_dir=$CLASS_DIR \ - --output_dir=$OUTPUT_DIR \ - --with_prior_preservation --prior_loss_weight=1.0 \ - --instance_prompt="a photo of sks dog" \ - --class_prompt="a photo of dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --gradient_accumulation_steps=2 --gradient_checkpointing \ - --use_8bit_adam \ - --learning_rate=5e-6 \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --num_class_images=200 \ - --max_train_steps=800 -``` - -### Training on a 8 GB GPU: - -By using [DeepSpeed](https://www.deepspeed.ai/) it's possible to offload some -tensors from VRAM to either CPU or NVME allowing to train with less VRAM. - -DeepSpeed needs to be enabled with `accelerate config`. 
During configuration -answer yes to "Do you want to use DeepSpeed?". With DeepSpeed stage 2, fp16 -mixed precision and offloading both parameters and optimizer state to CPU, it's -possible to train on under 8 GB VRAM with a drawback of requiring significantly -more RAM (about 25 GB). See [documentation](https://huggingface.co/docs/accelerate/usage_guides/deepspeed) for more DeepSpeed configuration options. - -Changing the default Adam optimizer to DeepSpeed's special version of Adam -`deepspeed.ops.adam.DeepSpeedCPUAdam` gives a substantial speedup but enabling -it requires a CUDA toolchain with the same version as PyTorch. The 8-bit optimizer -does not seem to be compatible with DeepSpeed at the moment. - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export INSTANCE_DIR="path-to-instance-images" -export CLASS_DIR="path-to-class-images" -export OUTPUT_DIR="path-to-save-model" - -accelerate launch --mixed_precision="fp16" train_dreambooth.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --class_data_dir=$CLASS_DIR \ - --output_dir=$OUTPUT_DIR \ - --with_prior_preservation --prior_loss_weight=1.0 \ - --instance_prompt="a photo of sks dog" \ - --class_prompt="a photo of dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --sample_batch_size=1 \ - --gradient_accumulation_steps=1 --gradient_checkpointing \ - --learning_rate=5e-6 \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --num_class_images=200 \ - --max_train_steps=800 -``` - -### Fine-tune text encoder with the UNet. - -The script also allows you to fine-tune the `text_encoder` along with the `unet`. It's been observed experimentally that fine-tuning `text_encoder` gives much better results, especially on faces. -Pass the `--train_text_encoder` argument to the script to enable training `text_encoder`. - -___Note: Training text encoder requires more memory, with this option the training won't fit on a 16GB GPU. It needs at least 24GB VRAM.___ - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export INSTANCE_DIR="path-to-instance-images" -export CLASS_DIR="path-to-class-images" -export OUTPUT_DIR="path-to-save-model" - -accelerate launch train_dreambooth.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --train_text_encoder \ - --instance_data_dir=$INSTANCE_DIR \ - --class_data_dir=$CLASS_DIR \ - --output_dir=$OUTPUT_DIR \ - --with_prior_preservation --prior_loss_weight=1.0 \ - --instance_prompt="a photo of sks dog" \ - --class_prompt="a photo of dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --use_8bit_adam \ - --gradient_checkpointing \ - --learning_rate=2e-6 \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --num_class_images=200 \ - --max_train_steps=800 -``` - -### Using DreamBooth for other pipelines than Stable Diffusion - -AltDiffusion also supports DreamBooth now; the running command is basically the same as above, all you need to do is replace the `MODEL_NAME` like this: -One can now simply change the `pretrained_model_name_or_path` to another architecture such as [`AltDiffusion`](https://huggingface.co/docs/diffusers/api/pipelines/alt_diffusion). 
- -``` -export MODEL_NAME="CompVis/stable-diffusion-v1-4" --> export MODEL_NAME="BAAI/AltDiffusion-m9" -or -export MODEL_NAME="CompVis/stable-diffusion-v1-4" --> export MODEL_NAME="BAAI/AltDiffusion" -``` - -### Training with xformers: -You can enable memory efficient attention by [installing xFormers](https://github.com/facebookresearch/xformers#installing-xformers) and padding the `--enable_xformers_memory_efficient_attention` argument to the script. This is not available with the Flax/JAX implementation. - -You can also use Dreambooth to train the specialized in-painting model. See [the script in the research folder for details](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/dreambooth_inpaint). \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_ddim_parallel.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_ddim_parallel.py deleted file mode 100644 index db3ea0e1cca55f88d0a81d0311158929516cb038..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_ddim_parallel.py +++ /dev/null @@ -1,642 +0,0 @@ -# Copyright 2023 ParaDiGMS authors and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# DISCLAIMER: This code is strongly influenced by https://github.com/pesser/pytorch_diffusion -# and https://github.com/hojonathanho/diffusion - -import math -from dataclasses import dataclass -from typing import List, Optional, Tuple, Union - -import numpy as np -import torch - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import BaseOutput, randn_tensor -from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin - - -@dataclass -# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput -class DDIMParallelSchedulerOutput(BaseOutput): - """ - Output class for the scheduler's step function output. - - Args: - prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. - pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - The predicted denoised sample (x_{0}) based on the model output from the current timestep. - `pred_original_sample` can be used to preview progress or for guidance. - """ - - prev_sample: torch.FloatTensor - pred_original_sample: Optional[torch.FloatTensor] = None - - -# Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar -def betas_for_alpha_bar( - num_diffusion_timesteps, - max_beta=0.999, - alpha_transform_type="cosine", -): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of - (1-beta) over time from t = [0,1]. 
- - Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up - to that part of the diffusion process. - - - Args: - num_diffusion_timesteps (`int`): the number of betas to produce. - max_beta (`float`): the maximum beta to use; use values lower than 1 to - prevent singularities. - alpha_transform_type (`str`, *optional*, default to `cosine`): the type of noise schedule for alpha_bar. - Choose from `cosine` or `exp` - - Returns: - betas (`np.ndarray`): the betas used by the scheduler to step the model outputs - """ - if alpha_transform_type == "cosine": - - def alpha_bar_fn(t): - return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2 - - elif alpha_transform_type == "exp": - - def alpha_bar_fn(t): - return math.exp(t * -12.0) - - else: - raise ValueError(f"Unsupported alpha_tranform_type: {alpha_transform_type}") - - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar_fn(t2) / alpha_bar_fn(t1), max_beta)) - return torch.tensor(betas, dtype=torch.float32) - - -# Copied from diffusers.schedulers.scheduling_ddim.rescale_zero_terminal_snr -def rescale_zero_terminal_snr(betas): - """ - Rescales betas to have zero terminal SNR Based on https://arxiv.org/pdf/2305.08891.pdf (Algorithm 1) - - - Args: - betas (`torch.FloatTensor`): - the betas that the scheduler is being initialized with. - - Returns: - `torch.FloatTensor`: rescaled betas with zero terminal SNR - """ - # Convert betas to alphas_bar_sqrt - alphas = 1.0 - betas - alphas_cumprod = torch.cumprod(alphas, dim=0) - alphas_bar_sqrt = alphas_cumprod.sqrt() - - # Store old values. - alphas_bar_sqrt_0 = alphas_bar_sqrt[0].clone() - alphas_bar_sqrt_T = alphas_bar_sqrt[-1].clone() - - # Shift so the last timestep is zero. - alphas_bar_sqrt -= alphas_bar_sqrt_T - - # Scale so the first timestep is back to the old value. - alphas_bar_sqrt *= alphas_bar_sqrt_0 / (alphas_bar_sqrt_0 - alphas_bar_sqrt_T) - - # Convert alphas_bar_sqrt to betas - alphas_bar = alphas_bar_sqrt**2 # Revert sqrt - alphas = alphas_bar[1:] / alphas_bar[:-1] # Revert cumprod - alphas = torch.cat([alphas_bar[0:1], alphas]) - betas = 1 - alphas - - return betas - - -class DDIMParallelScheduler(SchedulerMixin, ConfigMixin): - """ - Denoising diffusion implicit models is a scheduler that extends the denoising procedure introduced in denoising - diffusion probabilistic models (DDPMs) with non-Markovian guidance. - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - For more details, see the original paper: https://arxiv.org/abs/2010.02502 - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear`, `scaled_linear`, or `squaredcos_cap_v2`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. 
- clip_sample (`bool`, default `True`): - option to clip predicted sample for numerical stability. - clip_sample_range (`float`, default `1.0`): - the maximum magnitude for sample clipping. Valid only when `clip_sample=True`. - set_alpha_to_one (`bool`, default `True`): - each diffusion step uses the value of alphas product at that step and at the previous one. For the final - step there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`, - otherwise it uses the value of alpha at step 0. - steps_offset (`int`, default `0`): - an offset added to the inference steps. You can use a combination of `offset=1` and - `set_alpha_to_one=False`, to make the last step use step 0 for the previous alpha product, as done in - stable diffusion. - prediction_type (`str`, default `epsilon`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion - process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4 - https://imagen.research.google/video/paper.pdf) - thresholding (`bool`, default `False`): - whether to use the "dynamic thresholding" method (introduced by Imagen, https://arxiv.org/abs/2205.11487). - Note that the thresholding method is unsuitable for latent-space diffusion models (such as - stable-diffusion). - dynamic_thresholding_ratio (`float`, default `0.995`): - the ratio for the dynamic thresholding method. Default is `0.995`, the same as Imagen - (https://arxiv.org/abs/2205.11487). Valid only when `thresholding=True`. - sample_max_value (`float`, default `1.0`): - the threshold value for dynamic thresholding. Valid only when `thresholding=True`. - timestep_spacing (`str`, default `"leading"`): - The way the timesteps should be scaled. Refer to Table 2. of [Common Diffusion Noise Schedules and Sample - Steps are Flawed](https://arxiv.org/abs/2305.08891) for more information. - rescale_betas_zero_snr (`bool`, default `False`): - whether to rescale the betas to have zero terminal SNR (proposed by https://arxiv.org/pdf/2305.08891.pdf). - This can enable the model to generate very bright and dark samples instead of limiting it to samples with - medium brightness. Loosely related to - [`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506). - """ - - _compatibles = [e.name for e in KarrasDiffusionSchedulers] - order = 1 - _is_ode_scheduler = True - - @register_to_config - # Copied from diffusers.schedulers.scheduling_ddim.DDIMScheduler.__init__ - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[Union[np.ndarray, List[float]]] = None, - clip_sample: bool = True, - set_alpha_to_one: bool = True, - steps_offset: int = 0, - prediction_type: str = "epsilon", - thresholding: bool = False, - dynamic_thresholding_ratio: float = 0.995, - clip_sample_range: float = 1.0, - sample_max_value: float = 1.0, - timestep_spacing: str = "leading", - rescale_betas_zero_snr: bool = False, - ): - if trained_betas is not None: - self.betas = torch.tensor(trained_betas, dtype=torch.float32) - elif beta_schedule == "linear": - self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32) - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. 
- self.betas = ( - torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2 - ) - elif beta_schedule == "squaredcos_cap_v2": - # Glide cosine schedule - self.betas = betas_for_alpha_bar(num_train_timesteps) - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - # Rescale for zero SNR - if rescale_betas_zero_snr: - self.betas = rescale_zero_terminal_snr(self.betas) - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = torch.cumprod(self.alphas, dim=0) - - # At every step in ddim, we are looking into the previous alphas_cumprod - # For the final step, there is no previous alphas_cumprod because we are already at 0 - # `set_alpha_to_one` decides whether we set this parameter simply to one or - # whether we use the final alpha of the "non-previous" one. - self.final_alpha_cumprod = torch.tensor(1.0) if set_alpha_to_one else self.alphas_cumprod[0] - - # standard deviation of the initial noise distribution - self.init_noise_sigma = 1.0 - - # setable values - self.num_inference_steps = None - self.timesteps = torch.from_numpy(np.arange(0, num_train_timesteps)[::-1].copy().astype(np.int64)) - - # Copied from diffusers.schedulers.scheduling_ddim.DDIMScheduler.scale_model_input - def scale_model_input(self, sample: torch.FloatTensor, timestep: Optional[int] = None) -> torch.FloatTensor: - """ - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. - - Args: - sample (`torch.FloatTensor`): input sample - timestep (`int`, optional): current timestep - - Returns: - `torch.FloatTensor`: scaled input sample - """ - return sample - - def _get_variance(self, timestep, prev_timestep=None): - if prev_timestep is None: - prev_timestep = timestep - self.config.num_train_timesteps // self.num_inference_steps - - alpha_prod_t = self.alphas_cumprod[timestep] - alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod - beta_prod_t = 1 - alpha_prod_t - beta_prod_t_prev = 1 - alpha_prod_t_prev - - variance = (beta_prod_t_prev / beta_prod_t) * (1 - alpha_prod_t / alpha_prod_t_prev) - - return variance - - def _batch_get_variance(self, t, prev_t): - alpha_prod_t = self.alphas_cumprod[t] - alpha_prod_t_prev = self.alphas_cumprod[torch.clip(prev_t, min=0)] - alpha_prod_t_prev[prev_t < 0] = torch.tensor(1.0) - beta_prod_t = 1 - alpha_prod_t - beta_prod_t_prev = 1 - alpha_prod_t_prev - - variance = (beta_prod_t_prev / beta_prod_t) * (1 - alpha_prod_t / alpha_prod_t_prev) - - return variance - - # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler._threshold_sample - def _threshold_sample(self, sample: torch.FloatTensor) -> torch.FloatTensor: - """ - "Dynamic thresholding: At each sampling step we set s to a certain percentile absolute pixel value in xt0 (the - prediction of x_0 at timestep t), and if s > 1, then we threshold xt0 to the range [-s, s] and then divide by - s. Dynamic thresholding pushes saturated pixels (those near -1 and 1) inwards, thereby actively preventing - pixels from saturation at each step. We find that dynamic thresholding results in significantly better - photorealism as well as better image-text alignment, especially when using very large guidance weights." 
- - https://arxiv.org/abs/2205.11487 - """ - dtype = sample.dtype - batch_size, channels, height, width = sample.shape - - if dtype not in (torch.float32, torch.float64): - sample = sample.float() # upcast for quantile calculation, and clamp not implemented for cpu half - - # Flatten sample for doing quantile calculation along each image - sample = sample.reshape(batch_size, channels * height * width) - - abs_sample = sample.abs() # "a certain percentile absolute pixel value" - - s = torch.quantile(abs_sample, self.config.dynamic_thresholding_ratio, dim=1) - s = torch.clamp( - s, min=1, max=self.config.sample_max_value - ) # When clamped to min=1, equivalent to standard clipping to [-1, 1] - - s = s.unsqueeze(1) # (batch_size, 1) because clamp will broadcast along dim=0 - sample = torch.clamp(sample, -s, s) / s # "we threshold xt0 to the range [-s, s] and then divide by s" - - sample = sample.reshape(batch_size, channels, height, width) - sample = sample.to(dtype) - - return sample - - # Copied from diffusers.schedulers.scheduling_ddim.DDIMScheduler.set_timesteps - def set_timesteps(self, num_inference_steps: int, device: Union[str, torch.device] = None): - """ - Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - """ - - if num_inference_steps > self.config.num_train_timesteps: - raise ValueError( - f"`num_inference_steps`: {num_inference_steps} cannot be larger than `self.config.train_timesteps`:" - f" {self.config.num_train_timesteps} as the unet model trained with this scheduler can only handle" - f" maximal {self.config.num_train_timesteps} timesteps." - ) - - self.num_inference_steps = num_inference_steps - - # "linspace", "leading", "trailing" corresponds to annotation of Table 2. of https://arxiv.org/abs/2305.08891 - if self.config.timestep_spacing == "linspace": - timesteps = ( - np.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps) - .round()[::-1] - .copy() - .astype(np.int64) - ) - elif self.config.timestep_spacing == "leading": - step_ratio = self.config.num_train_timesteps // self.num_inference_steps - # creates integer timesteps by multiplying by ratio - # casting to int to avoid issues when num_inference_step is power of 3 - timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy().astype(np.int64) - timesteps += self.config.steps_offset - elif self.config.timestep_spacing == "trailing": - step_ratio = self.config.num_train_timesteps / self.num_inference_steps - # creates integer timesteps by multiplying by ratio - # casting to int to avoid issues when num_inference_step is power of 3 - timesteps = np.round(np.arange(self.config.num_train_timesteps, 0, -step_ratio)).astype(np.int64) - timesteps -= 1 - else: - raise ValueError( - f"{self.config.timestep_spacing} is not supported. Please make sure to choose one of 'leading' or 'trailing'." - ) - - self.timesteps = torch.from_numpy(timesteps).to(device) - - def step( - self, - model_output: torch.FloatTensor, - timestep: int, - sample: torch.FloatTensor, - eta: float = 0.0, - use_clipped_model_output: bool = False, - generator=None, - variance_noise: Optional[torch.FloatTensor] = None, - return_dict: bool = True, - ) -> Union[DDIMParallelSchedulerOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. 
Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - model_output (`torch.FloatTensor`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - eta (`float`): weight of noise for added noise in diffusion step. - use_clipped_model_output (`bool`): if `True`, compute "corrected" `model_output` from the clipped - predicted original sample. Necessary because predicted original sample is clipped to [-1, 1] when - `self.config.clip_sample` is `True`. If no clipping has happened, "corrected" `model_output` would - coincide with the one provided as input and `use_clipped_model_output` will have not effect. - generator: random number generator. - variance_noise (`torch.FloatTensor`): instead of generating noise for the variance using `generator`, we - can directly provide the noise for the variance itself. This is useful for methods such as - CycleDiffusion. (https://arxiv.org/abs/2210.05559) - return_dict (`bool`): option for returning tuple rather than DDIMParallelSchedulerOutput class - - Returns: - [`~schedulers.scheduling_utils.DDIMParallelSchedulerOutput`] or `tuple`: - [`~schedulers.scheduling_utils.DDIMParallelSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. - When returning a tuple, the first element is the sample tensor. - - """ - if self.num_inference_steps is None: - raise ValueError( - "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" - ) - - # See formulas (12) and (16) of DDIM paper https://arxiv.org/pdf/2010.02502.pdf - # Ideally, read DDIM paper in-detail understanding - - # Notation ( -> - # - pred_noise_t -> e_theta(x_t, t) - # - pred_original_sample -> f_theta(x_t, t) or x_0 - # - std_dev_t -> sigma_t - # - eta -> η - # - pred_sample_direction -> "direction pointing to x_t" - # - pred_prev_sample -> "x_t-1" - - # 1. get previous step value (=t-1) - prev_timestep = timestep - self.config.num_train_timesteps // self.num_inference_steps - - # 2. compute alphas, betas - alpha_prod_t = self.alphas_cumprod[timestep] - alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod - - beta_prod_t = 1 - alpha_prod_t - - # 3. compute predicted original sample from predicted noise also called - # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf - if self.config.prediction_type == "epsilon": - pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5) - pred_epsilon = model_output - elif self.config.prediction_type == "sample": - pred_original_sample = model_output - pred_epsilon = (sample - alpha_prod_t ** (0.5) * pred_original_sample) / beta_prod_t ** (0.5) - elif self.config.prediction_type == "v_prediction": - pred_original_sample = (alpha_prod_t**0.5) * sample - (beta_prod_t**0.5) * model_output - pred_epsilon = (alpha_prod_t**0.5) * model_output + (beta_prod_t**0.5) * sample - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, or" - " `v_prediction`" - ) - - # 4. 
Clip or threshold "predicted x_0" - if self.config.thresholding: - pred_original_sample = self._threshold_sample(pred_original_sample) - elif self.config.clip_sample: - pred_original_sample = pred_original_sample.clamp( - -self.config.clip_sample_range, self.config.clip_sample_range - ) - - # 5. compute variance: "sigma_t(η)" -> see formula (16) - # σ_t = sqrt((1 − α_t−1)/(1 − α_t)) * sqrt(1 − α_t/α_t−1) - variance = self._get_variance(timestep, prev_timestep) - std_dev_t = eta * variance ** (0.5) - - if use_clipped_model_output: - # the pred_epsilon is always re-derived from the clipped x_0 in Glide - pred_epsilon = (sample - alpha_prod_t ** (0.5) * pred_original_sample) / beta_prod_t ** (0.5) - - # 6. compute "direction pointing to x_t" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf - pred_sample_direction = (1 - alpha_prod_t_prev - std_dev_t**2) ** (0.5) * pred_epsilon - - # 7. compute x_t without "random noise" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf - prev_sample = alpha_prod_t_prev ** (0.5) * pred_original_sample + pred_sample_direction - - if eta > 0: - if variance_noise is not None and generator is not None: - raise ValueError( - "Cannot pass both generator and variance_noise. Please make sure that either `generator` or" - " `variance_noise` stays `None`." - ) - - if variance_noise is None: - variance_noise = randn_tensor( - model_output.shape, generator=generator, device=model_output.device, dtype=model_output.dtype - ) - variance = std_dev_t * variance_noise - - prev_sample = prev_sample + variance - - if not return_dict: - return (prev_sample,) - - return DDIMParallelSchedulerOutput(prev_sample=prev_sample, pred_original_sample=pred_original_sample) - - def batch_step_no_noise( - self, - model_output: torch.FloatTensor, - timesteps: List[int], - sample: torch.FloatTensor, - eta: float = 0.0, - use_clipped_model_output: bool = False, - ) -> torch.FloatTensor: - """ - Batched version of the `step` function, to be able to reverse the SDE for multiple samples/timesteps at once. - Also, does not add any noise to the predicted sample, which is necessary for parallel sampling where the noise - is pre-sampled by the pipeline. - - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - model_output (`torch.FloatTensor`): direct output from learned diffusion model. - timesteps (`List[int]`): - current discrete timesteps in the diffusion chain. This is now a list of integers. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - eta (`float`): weight of noise for added noise in diffusion step. - use_clipped_model_output (`bool`): if `True`, compute "corrected" `model_output` from the clipped - predicted original sample. Necessary because predicted original sample is clipped to [-1, 1] when - `self.config.clip_sample` is `True`. If no clipping has happened, "corrected" `model_output` would - coincide with the one provided as input and `use_clipped_model_output` will have not effect. - - Returns: - `torch.FloatTensor`: sample tensor at previous timestep. 
- - """ - if self.num_inference_steps is None: - raise ValueError( - "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" - ) - - assert eta == 0.0 - - # See formulas (12) and (16) of DDIM paper https://arxiv.org/pdf/2010.02502.pdf - # Ideally, read DDIM paper in-detail understanding - - # Notation ( -> - # - pred_noise_t -> e_theta(x_t, t) - # - pred_original_sample -> f_theta(x_t, t) or x_0 - # - std_dev_t -> sigma_t - # - eta -> η - # - pred_sample_direction -> "direction pointing to x_t" - # - pred_prev_sample -> "x_t-1" - - # 1. get previous step value (=t-1) - t = timesteps - prev_t = t - self.config.num_train_timesteps // self.num_inference_steps - - t = t.view(-1, *([1] * (model_output.ndim - 1))) - prev_t = prev_t.view(-1, *([1] * (model_output.ndim - 1))) - - # 1. compute alphas, betas - self.alphas_cumprod = self.alphas_cumprod.to(model_output.device) - self.final_alpha_cumprod = self.final_alpha_cumprod.to(model_output.device) - alpha_prod_t = self.alphas_cumprod[t] - alpha_prod_t_prev = self.alphas_cumprod[torch.clip(prev_t, min=0)] - alpha_prod_t_prev[prev_t < 0] = torch.tensor(1.0) - - beta_prod_t = 1 - alpha_prod_t - - # 3. compute predicted original sample from predicted noise also called - # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf - if self.config.prediction_type == "epsilon": - pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5) - pred_epsilon = model_output - elif self.config.prediction_type == "sample": - pred_original_sample = model_output - pred_epsilon = (sample - alpha_prod_t ** (0.5) * pred_original_sample) / beta_prod_t ** (0.5) - elif self.config.prediction_type == "v_prediction": - pred_original_sample = (alpha_prod_t**0.5) * sample - (beta_prod_t**0.5) * model_output - pred_epsilon = (alpha_prod_t**0.5) * model_output + (beta_prod_t**0.5) * sample - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, or" - " `v_prediction`" - ) - - # 4. Clip or threshold "predicted x_0" - if self.config.thresholding: - pred_original_sample = self._threshold_sample(pred_original_sample) - elif self.config.clip_sample: - pred_original_sample = pred_original_sample.clamp( - -self.config.clip_sample_range, self.config.clip_sample_range - ) - - # 5. compute variance: "sigma_t(η)" -> see formula (16) - # σ_t = sqrt((1 − α_t−1)/(1 − α_t)) * sqrt(1 − α_t/α_t−1) - variance = self._batch_get_variance(t, prev_t).to(model_output.device).view(*alpha_prod_t_prev.shape) - std_dev_t = eta * variance ** (0.5) - - if use_clipped_model_output: - # the pred_epsilon is always re-derived from the clipped x_0 in Glide - pred_epsilon = (sample - alpha_prod_t ** (0.5) * pred_original_sample) / beta_prod_t ** (0.5) - - # 6. compute "direction pointing to x_t" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf - pred_sample_direction = (1 - alpha_prod_t_prev - std_dev_t**2) ** (0.5) * pred_epsilon - - # 7. 
compute x_t without "random noise" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf - prev_sample = alpha_prod_t_prev ** (0.5) * pred_original_sample + pred_sample_direction - - return prev_sample - - # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.add_noise - def add_noise( - self, - original_samples: torch.FloatTensor, - noise: torch.FloatTensor, - timesteps: torch.IntTensor, - ) -> torch.FloatTensor: - # Make sure alphas_cumprod and timestep have same device and dtype as original_samples - alphas_cumprod = self.alphas_cumprod.to(device=original_samples.device, dtype=original_samples.dtype) - timesteps = timesteps.to(original_samples.device) - - sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5 - sqrt_alpha_prod = sqrt_alpha_prod.flatten() - while len(sqrt_alpha_prod.shape) < len(original_samples.shape): - sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1) - - sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5 - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten() - while len(sqrt_one_minus_alpha_prod.shape) < len(original_samples.shape): - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1) - - noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise - return noisy_samples - - # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.get_velocity - def get_velocity( - self, sample: torch.FloatTensor, noise: torch.FloatTensor, timesteps: torch.IntTensor - ) -> torch.FloatTensor: - # Make sure alphas_cumprod and timestep have same device and dtype as sample - alphas_cumprod = self.alphas_cumprod.to(device=sample.device, dtype=sample.dtype) - timesteps = timesteps.to(sample.device) - - sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5 - sqrt_alpha_prod = sqrt_alpha_prod.flatten() - while len(sqrt_alpha_prod.shape) < len(sample.shape): - sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1) - - sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5 - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten() - while len(sqrt_one_minus_alpha_prod.shape) < len(sample.shape): - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1) - - velocity = sqrt_alpha_prod * noise - sqrt_one_minus_alpha_prod * sample - return velocity - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_modeling_common.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_modeling_common.py deleted file mode 100644 index ee8e55842f8d40cf2d107b47f105ce952cfb57d0..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_modeling_common.py +++ /dev/null @@ -1,567 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import inspect -import tempfile -import traceback -import unittest -import unittest.mock as mock -from typing import Dict, List, Tuple - -import numpy as np -import requests_mock -import torch -from requests.exceptions import HTTPError - -from diffusers.models import UNet2DConditionModel -from diffusers.models.attention_processor import AttnProcessor, AttnProcessor2_0, XFormersAttnProcessor -from diffusers.training_utils import EMAModel -from diffusers.utils import logging, torch_device -from diffusers.utils.testing_utils import CaptureLogger, require_torch_2, require_torch_gpu, run_test_in_subprocess - - -# Will be run via run_test_in_subprocess -def _test_from_save_pretrained_dynamo(in_queue, out_queue, timeout): - error = None - try: - init_dict, model_class = in_queue.get(timeout=timeout) - - model = model_class(**init_dict) - model.to(torch_device) - model = torch.compile(model) - - with tempfile.TemporaryDirectory() as tmpdirname: - model.save_pretrained(tmpdirname) - new_model = model_class.from_pretrained(tmpdirname) - new_model.to(torch_device) - - assert new_model.__class__ == model_class - except Exception: - error = f"{traceback.format_exc()}" - - results = {"error": error} - out_queue.put(results, timeout=timeout) - out_queue.join() - - -class ModelUtilsTest(unittest.TestCase): - def tearDown(self): - super().tearDown() - - import diffusers - - diffusers.utils.import_utils._safetensors_available = True - - def test_accelerate_loading_error_message(self): - with self.assertRaises(ValueError) as error_context: - UNet2DConditionModel.from_pretrained("hf-internal-testing/stable-diffusion-broken", subfolder="unet") - - # make sure that error message states what keys are missing - assert "conv_out.bias" in str(error_context.exception) - - def test_cached_files_are_used_when_no_internet(self): - # A mock response for an HTTP head request to emulate server down - response_mock = mock.Mock() - response_mock.status_code = 500 - response_mock.headers = {} - response_mock.raise_for_status.side_effect = HTTPError - response_mock.json.return_value = {} - - # Download this model to make sure it's in the cache. - orig_model = UNet2DConditionModel.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="unet" - ) - - # Under the mock environment we get a 500 error when trying to reach the model. - with mock.patch("requests.request", return_value=response_mock): - # Download this model to make sure it's in the cache. - model = UNet2DConditionModel.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="unet", local_files_only=True - ) - - for p1, p2 in zip(orig_model.parameters(), model.parameters()): - if p1.data.ne(p2.data).sum() > 0: - assert False, "Parameters not the same!" - - def test_one_request_upon_cached(self): - # TODO: For some reason this test fails on MPS where no HEAD call is made. 
- if torch_device == "mps": - return - - import diffusers - - diffusers.utils.import_utils._safetensors_available = False - - with tempfile.TemporaryDirectory() as tmpdirname: - with requests_mock.mock(real_http=True) as m: - UNet2DConditionModel.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="unet", cache_dir=tmpdirname - ) - - download_requests = [r.method for r in m.request_history] - assert download_requests.count("HEAD") == 2, "2 HEAD requests one for config, one for model" - assert download_requests.count("GET") == 2, "2 GET requests one for config, one for model" - - with requests_mock.mock(real_http=True) as m: - UNet2DConditionModel.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-torch", subfolder="unet", cache_dir=tmpdirname - ) - - cache_requests = [r.method for r in m.request_history] - assert ( - "HEAD" == cache_requests[0] and len(cache_requests) == 1 - ), "We should call only `model_info` to check for _commit hash and `send_telemetry`" - - diffusers.utils.import_utils._safetensors_available = True - - def test_weight_overwrite(self): - with tempfile.TemporaryDirectory() as tmpdirname, self.assertRaises(ValueError) as error_context: - UNet2DConditionModel.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-torch", - subfolder="unet", - cache_dir=tmpdirname, - in_channels=9, - ) - - # make sure that error message states what keys are missing - assert "Cannot load" in str(error_context.exception) - - with tempfile.TemporaryDirectory() as tmpdirname: - model = UNet2DConditionModel.from_pretrained( - "hf-internal-testing/tiny-stable-diffusion-torch", - subfolder="unet", - cache_dir=tmpdirname, - in_channels=9, - low_cpu_mem_usage=False, - ignore_mismatched_sizes=True, - ) - - assert model.config.in_channels == 9 - - -class UNetTesterMixin: - def test_forward_signature(self): - init_dict, _ = self.prepare_init_args_and_inputs_for_common() - - model = self.model_class(**init_dict) - signature = inspect.signature(model.forward) - # signature.parameters is an OrderedDict => so arg_names order is deterministic - arg_names = [*signature.parameters.keys()] - - expected_arg_names = ["sample", "timestep"] - self.assertListEqual(arg_names[:2], expected_arg_names) - - def test_forward_with_norm_groups(self): - init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() - - init_dict["norm_num_groups"] = 16 - init_dict["block_out_channels"] = (16, 32) - - model = self.model_class(**init_dict) - model.to(torch_device) - model.eval() - - with torch.no_grad(): - output = model(**inputs_dict) - - if isinstance(output, dict): - output = output.to_tuple()[0] - - self.assertIsNotNone(output) - expected_shape = inputs_dict["sample"].shape - self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match") - - -class ModelTesterMixin: - main_input_name = None # overwrite in model specific tester class - base_precision = 1e-3 - - def test_from_save_pretrained(self): - init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() - - model = self.model_class(**init_dict) - if hasattr(model, "set_default_attn_processor"): - model.set_default_attn_processor() - model.to(torch_device) - model.eval() - - with tempfile.TemporaryDirectory() as tmpdirname: - model.save_pretrained(tmpdirname) - new_model = self.model_class.from_pretrained(tmpdirname) - if hasattr(new_model, "set_default_attn_processor"): - new_model.set_default_attn_processor() - new_model.to(torch_device) - - with torch.no_grad(): - image = 
model(**inputs_dict) - if isinstance(image, dict): - image = image.to_tuple()[0] - - new_image = new_model(**inputs_dict) - - if isinstance(new_image, dict): - new_image = new_image.to_tuple()[0] - - max_diff = (image - new_image).abs().sum().item() - self.assertLessEqual(max_diff, 5e-5, "Models give different forward passes") - - def test_getattr_is_correct(self): - init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() - model = self.model_class(**init_dict) - - # save some things to test - model.dummy_attribute = 5 - model.register_to_config(test_attribute=5) - - logger = logging.get_logger("diffusers.models.modeling_utils") - # 30 for warning - logger.setLevel(30) - with CaptureLogger(logger) as cap_logger: - assert hasattr(model, "dummy_attribute") - assert getattr(model, "dummy_attribute") == 5 - assert model.dummy_attribute == 5 - - # no warning should be thrown - assert cap_logger.out == "" - - logger = logging.get_logger("diffusers.models.modeling_utils") - # 30 for warning - logger.setLevel(30) - with CaptureLogger(logger) as cap_logger: - assert hasattr(model, "save_pretrained") - fn = model.save_pretrained - fn_1 = getattr(model, "save_pretrained") - - assert fn == fn_1 - # no warning should be thrown - assert cap_logger.out == "" - - # warning should be thrown - with self.assertWarns(FutureWarning): - assert model.test_attribute == 5 - - with self.assertWarns(FutureWarning): - assert getattr(model, "test_attribute") == 5 - - with self.assertRaises(AttributeError) as error: - model.does_not_exist - - assert str(error.exception) == f"'{type(model).__name__}' object has no attribute 'does_not_exist'" - - @require_torch_gpu - def test_set_attn_processor_for_determinism(self): - torch.use_deterministic_algorithms(False) - init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() - model = self.model_class(**init_dict) - model.to(torch_device) - - if not hasattr(model, "set_attn_processor"): - # If not has `set_attn_processor`, skip test - return - - assert all(type(proc) == AttnProcessor2_0 for proc in model.attn_processors.values()) - with torch.no_grad(): - output_1 = model(**inputs_dict)[0] - - model.set_default_attn_processor() - assert all(type(proc) == AttnProcessor for proc in model.attn_processors.values()) - with torch.no_grad(): - output_2 = model(**inputs_dict)[0] - - model.enable_xformers_memory_efficient_attention() - assert all(type(proc) == XFormersAttnProcessor for proc in model.attn_processors.values()) - with torch.no_grad(): - output_3 = model(**inputs_dict)[0] - - model.set_attn_processor(AttnProcessor2_0()) - assert all(type(proc) == AttnProcessor2_0 for proc in model.attn_processors.values()) - with torch.no_grad(): - output_4 = model(**inputs_dict)[0] - - model.set_attn_processor(AttnProcessor()) - assert all(type(proc) == AttnProcessor for proc in model.attn_processors.values()) - with torch.no_grad(): - output_5 = model(**inputs_dict)[0] - - model.set_attn_processor(XFormersAttnProcessor()) - assert all(type(proc) == XFormersAttnProcessor for proc in model.attn_processors.values()) - with torch.no_grad(): - output_6 = model(**inputs_dict)[0] - - torch.use_deterministic_algorithms(True) - - # make sure that outputs match - assert torch.allclose(output_2, output_1, atol=self.base_precision) - assert torch.allclose(output_2, output_3, atol=self.base_precision) - assert torch.allclose(output_2, output_4, atol=self.base_precision) - assert torch.allclose(output_2, output_5, atol=self.base_precision) - assert 
torch.allclose(output_2, output_6, atol=self.base_precision) - - def test_from_save_pretrained_variant(self): - init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() - - model = self.model_class(**init_dict) - if hasattr(model, "set_default_attn_processor"): - model.set_default_attn_processor() - - model.to(torch_device) - model.eval() - - with tempfile.TemporaryDirectory() as tmpdirname: - model.save_pretrained(tmpdirname, variant="fp16") - new_model = self.model_class.from_pretrained(tmpdirname, variant="fp16") - if hasattr(new_model, "set_default_attn_processor"): - new_model.set_default_attn_processor() - - # non-variant cannot be loaded - with self.assertRaises(OSError) as error_context: - self.model_class.from_pretrained(tmpdirname) - - # make sure that error message states what keys are missing - assert "Error no file named diffusion_pytorch_model.bin found in directory" in str(error_context.exception) - - new_model.to(torch_device) - - with torch.no_grad(): - image = model(**inputs_dict) - if isinstance(image, dict): - image = image.to_tuple()[0] - - new_image = new_model(**inputs_dict) - - if isinstance(new_image, dict): - new_image = new_image.to_tuple()[0] - - max_diff = (image - new_image).abs().sum().item() - self.assertLessEqual(max_diff, 5e-5, "Models give different forward passes") - - @require_torch_2 - def test_from_save_pretrained_dynamo(self): - init_dict, _ = self.prepare_init_args_and_inputs_for_common() - inputs = [init_dict, self.model_class] - run_test_in_subprocess(test_case=self, target_func=_test_from_save_pretrained_dynamo, inputs=inputs) - - def test_from_save_pretrained_dtype(self): - init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() - - model = self.model_class(**init_dict) - model.to(torch_device) - model.eval() - - for dtype in [torch.float32, torch.float16, torch.bfloat16]: - if torch_device == "mps" and dtype == torch.bfloat16: - continue - with tempfile.TemporaryDirectory() as tmpdirname: - model.to(dtype) - model.save_pretrained(tmpdirname) - new_model = self.model_class.from_pretrained(tmpdirname, low_cpu_mem_usage=True, torch_dtype=dtype) - assert new_model.dtype == dtype - new_model = self.model_class.from_pretrained(tmpdirname, low_cpu_mem_usage=False, torch_dtype=dtype) - assert new_model.dtype == dtype - - def test_determinism(self, expected_max_diff=1e-5): - init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() - model = self.model_class(**init_dict) - model.to(torch_device) - model.eval() - - with torch.no_grad(): - first = model(**inputs_dict) - if isinstance(first, dict): - first = first.to_tuple()[0] - - second = model(**inputs_dict) - if isinstance(second, dict): - second = second.to_tuple()[0] - - out_1 = first.cpu().numpy() - out_2 = second.cpu().numpy() - out_1 = out_1[~np.isnan(out_1)] - out_2 = out_2[~np.isnan(out_2)] - max_diff = np.amax(np.abs(out_1 - out_2)) - self.assertLessEqual(max_diff, expected_max_diff) - - def test_output(self): - init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() - model = self.model_class(**init_dict) - model.to(torch_device) - model.eval() - - with torch.no_grad(): - output = model(**inputs_dict) - - if isinstance(output, dict): - output = output.to_tuple()[0] - - self.assertIsNotNone(output) - - # input & output have to have the same shape - input_tensor = inputs_dict[self.main_input_name] - expected_shape = input_tensor.shape - self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match") - - def 
test_model_from_pretrained(self): - init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() - - model = self.model_class(**init_dict) - model.to(torch_device) - model.eval() - - # test if the model can be loaded from the config - # and has all the expected shape - with tempfile.TemporaryDirectory() as tmpdirname: - model.save_pretrained(tmpdirname) - new_model = self.model_class.from_pretrained(tmpdirname) - new_model.to(torch_device) - new_model.eval() - - # check if all parameters shape are the same - for param_name in model.state_dict().keys(): - param_1 = model.state_dict()[param_name] - param_2 = new_model.state_dict()[param_name] - self.assertEqual(param_1.shape, param_2.shape) - - with torch.no_grad(): - output_1 = model(**inputs_dict) - - if isinstance(output_1, dict): - output_1 = output_1.to_tuple()[0] - - output_2 = new_model(**inputs_dict) - - if isinstance(output_2, dict): - output_2 = output_2.to_tuple()[0] - - self.assertEqual(output_1.shape, output_2.shape) - - @unittest.skipIf(torch_device == "mps", "Training is not supported in mps") - def test_training(self): - init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() - - model = self.model_class(**init_dict) - model.to(torch_device) - model.train() - output = model(**inputs_dict) - - if isinstance(output, dict): - output = output.to_tuple()[0] - - input_tensor = inputs_dict[self.main_input_name] - noise = torch.randn((input_tensor.shape[0],) + self.output_shape).to(torch_device) - loss = torch.nn.functional.mse_loss(output, noise) - loss.backward() - - @unittest.skipIf(torch_device == "mps", "Training is not supported in mps") - def test_ema_training(self): - init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() - - model = self.model_class(**init_dict) - model.to(torch_device) - model.train() - ema_model = EMAModel(model.parameters()) - - output = model(**inputs_dict) - - if isinstance(output, dict): - output = output.to_tuple()[0] - - input_tensor = inputs_dict[self.main_input_name] - noise = torch.randn((input_tensor.shape[0],) + self.output_shape).to(torch_device) - loss = torch.nn.functional.mse_loss(output, noise) - loss.backward() - ema_model.step(model.parameters()) - - def test_outputs_equivalence(self): - def set_nan_tensor_to_zero(t): - # Temporary fallback until `aten::_index_put_impl_` is implemented in mps - # Track progress in https://github.com/pytorch/pytorch/issues/77764 - device = t.device - if device.type == "mps": - t = t.to("cpu") - t[t != t] = 0 - return t.to(device) - - def recursive_check(tuple_object, dict_object): - if isinstance(tuple_object, (List, Tuple)): - for tuple_iterable_value, dict_iterable_value in zip(tuple_object, dict_object.values()): - recursive_check(tuple_iterable_value, dict_iterable_value) - elif isinstance(tuple_object, Dict): - for tuple_iterable_value, dict_iterable_value in zip(tuple_object.values(), dict_object.values()): - recursive_check(tuple_iterable_value, dict_iterable_value) - elif tuple_object is None: - return - else: - self.assertTrue( - torch.allclose( - set_nan_tensor_to_zero(tuple_object), set_nan_tensor_to_zero(dict_object), atol=1e-5 - ), - msg=( - "Tuple and dict output are not equal. Difference:" - f" {torch.max(torch.abs(tuple_object - dict_object))}. Tuple has `nan`:" - f" {torch.isnan(tuple_object).any()} and `inf`: {torch.isinf(tuple_object)}. Dict has" - f" `nan`: {torch.isnan(dict_object).any()} and `inf`: {torch.isinf(dict_object)}." 
- ), - ) - - init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common() - - model = self.model_class(**init_dict) - model.to(torch_device) - model.eval() - - with torch.no_grad(): - outputs_dict = model(**inputs_dict) - outputs_tuple = model(**inputs_dict, return_dict=False) - - recursive_check(outputs_tuple, outputs_dict) - - @unittest.skipIf(torch_device == "mps", "Gradient checkpointing skipped on MPS") - def test_enable_disable_gradient_checkpointing(self): - if not self.model_class._supports_gradient_checkpointing: - return # Skip test if model does not support gradient checkpointing - - init_dict, _ = self.prepare_init_args_and_inputs_for_common() - - # at init model should have gradient checkpointing disabled - model = self.model_class(**init_dict) - self.assertFalse(model.is_gradient_checkpointing) - - # check enable works - model.enable_gradient_checkpointing() - self.assertTrue(model.is_gradient_checkpointing) - - # check disable works - model.disable_gradient_checkpointing() - self.assertFalse(model.is_gradient_checkpointing) - - def test_deprecated_kwargs(self): - has_kwarg_in_model_class = "kwargs" in inspect.signature(self.model_class.__init__).parameters - has_deprecated_kwarg = len(self.model_class._deprecated_kwargs) > 0 - - if has_kwarg_in_model_class and not has_deprecated_kwarg: - raise ValueError( - f"{self.model_class} has `**kwargs` in its __init__ method but has not defined any deprecated kwargs" - " under the `_deprecated_kwargs` class attribute. Make sure to either remove `**kwargs` if there are" - " no deprecated arguments or add the deprecated argument with `_deprecated_kwargs =" - " []`" - ) - - if not has_kwarg_in_model_class and has_deprecated_kwarg: - raise ValueError( - f"{self.model_class} doesn't have `**kwargs` in its __init__ method but has defined deprecated kwargs" - " under the `_deprecated_kwargs` class attribute. Make sure to either add the `**kwargs` argument to" - f" {self.model_class}.__init__ if there are deprecated arguments or remove the deprecated argument" - " from `_deprecated_kwargs = []`" - ) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/anchor/anchor_generator.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/anchor/anchor_generator.py deleted file mode 100644 index 388d2608b8138da13d1208b99595fbd1db59d178..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/anchor/anchor_generator.py +++ /dev/null @@ -1,727 +0,0 @@ -import mmcv -import numpy as np -import torch -from torch.nn.modules.utils import _pair - -from .builder import ANCHOR_GENERATORS - - -@ANCHOR_GENERATORS.register_module() -class AnchorGenerator(object): - """Standard anchor generator for 2D anchor-based detectors. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels in order (w, h). - ratios (list[float]): The list of ratios between the height and width - of anchors in a single level. - scales (list[int] | None): Anchor scales for anchors in a single level. - It cannot be set at the same time if `octave_base_scale` and - `scales_per_octave` are set. - base_sizes (list[int] | None): The basic sizes - of anchors in multiple levels. - If None is given, strides will be used as base_sizes. - (If strides are non square, the shortest stride is taken.) - scale_major (bool): Whether to multiply scales first when generating - base anchors. If true, the anchors in the same row will have the - same scales. 
By default it is True in V2.0 - octave_base_scale (int): The base scale of octave. - scales_per_octave (int): Number of scales for each octave. - `octave_base_scale` and `scales_per_octave` are usually used in - retinanet and the `scales` should be None when they are set. - centers (list[tuple[float, float]] | None): The centers of the anchor - relative to the feature grid center in multiple feature levels. - By default it is set to be None and not used. If a list of tuple of - float is given, they will be used to shift the centers of anchors. - center_offset (float): The offset of center in proportion to anchors' - width and height. By default it is 0 in V2.0. - - Examples: - >>> from mmdet.core import AnchorGenerator - >>> self = AnchorGenerator([16], [1.], [1.], [9]) - >>> all_anchors = self.grid_anchors([(2, 2)], device='cpu') - >>> print(all_anchors) - [tensor([[-4.5000, -4.5000, 4.5000, 4.5000], - [11.5000, -4.5000, 20.5000, 4.5000], - [-4.5000, 11.5000, 4.5000, 20.5000], - [11.5000, 11.5000, 20.5000, 20.5000]])] - >>> self = AnchorGenerator([16, 32], [1.], [1.], [9, 18]) - >>> all_anchors = self.grid_anchors([(2, 2), (1, 1)], device='cpu') - >>> print(all_anchors) - [tensor([[-4.5000, -4.5000, 4.5000, 4.5000], - [11.5000, -4.5000, 20.5000, 4.5000], - [-4.5000, 11.5000, 4.5000, 20.5000], - [11.5000, 11.5000, 20.5000, 20.5000]]), \ - tensor([[-9., -9., 9., 9.]])] - """ - - def __init__(self, - strides, - ratios, - scales=None, - base_sizes=None, - scale_major=True, - octave_base_scale=None, - scales_per_octave=None, - centers=None, - center_offset=0.): - # check center and center_offset - if center_offset != 0: - assert centers is None, 'center cannot be set when center_offset' \ - f'!=0, {centers} is given.' - if not (0 <= center_offset <= 1): - raise ValueError('center_offset should be in range [0, 1], ' - f'{center_offset} is given.') - if centers is not None: - assert len(centers) == len(strides), \ - 'The number of strides should be the same as centers, got ' \ - f'{strides} and {centers}' - - # calculate base sizes of anchors - self.strides = [_pair(stride) for stride in strides] - self.base_sizes = [min(stride) for stride in self.strides - ] if base_sizes is None else base_sizes - assert len(self.base_sizes) == len(self.strides), \ - 'The number of strides should be the same as base sizes, got ' \ - f'{self.strides} and {self.base_sizes}' - - # calculate scales of anchors - assert ((octave_base_scale is not None - and scales_per_octave is not None) ^ (scales is not None)), \ - 'scales and octave_base_scale with scales_per_octave cannot' \ - ' be set at the same time' - if scales is not None: - self.scales = torch.Tensor(scales) - elif octave_base_scale is not None and scales_per_octave is not None: - octave_scales = np.array( - [2**(i / scales_per_octave) for i in range(scales_per_octave)]) - scales = octave_scales * octave_base_scale - self.scales = torch.Tensor(scales) - else: - raise ValueError('Either scales or octave_base_scale with ' - 'scales_per_octave should be set') - - self.octave_base_scale = octave_base_scale - self.scales_per_octave = scales_per_octave - self.ratios = torch.Tensor(ratios) - self.scale_major = scale_major - self.centers = centers - self.center_offset = center_offset - self.base_anchors = self.gen_base_anchors() - - @property - def num_base_anchors(self): - """list[int]: total number of base anchors in a feature grid""" - return [base_anchors.size(0) for base_anchors in self.base_anchors] - - @property - def num_levels(self): - """int: number of 
feature levels that the generator will be applied""" - return len(self.strides) - - def gen_base_anchors(self): - """Generate base anchors. - - Returns: - list(torch.Tensor): Base anchors of a feature grid in multiple \ - feature levels. - """ - multi_level_base_anchors = [] - for i, base_size in enumerate(self.base_sizes): - center = None - if self.centers is not None: - center = self.centers[i] - multi_level_base_anchors.append( - self.gen_single_level_base_anchors( - base_size, - scales=self.scales, - ratios=self.ratios, - center=center)) - return multi_level_base_anchors - - def gen_single_level_base_anchors(self, - base_size, - scales, - ratios, - center=None): - """Generate base anchors of a single level. - - Args: - base_size (int | float): Basic size of an anchor. - scales (torch.Tensor): Scales of the anchor. - ratios (torch.Tensor): The ratio between between the height - and width of anchors in a single level. - center (tuple[float], optional): The center of the base anchor - related to a single feature grid. Defaults to None. - - Returns: - torch.Tensor: Anchors in a single-level feature maps. - """ - w = base_size - h = base_size - if center is None: - x_center = self.center_offset * w - y_center = self.center_offset * h - else: - x_center, y_center = center - - h_ratios = torch.sqrt(ratios) - w_ratios = 1 / h_ratios - if self.scale_major: - ws = (w * w_ratios[:, None] * scales[None, :]).view(-1) - hs = (h * h_ratios[:, None] * scales[None, :]).view(-1) - else: - ws = (w * scales[:, None] * w_ratios[None, :]).view(-1) - hs = (h * scales[:, None] * h_ratios[None, :]).view(-1) - - # use float anchor and the anchor's center is aligned with the - # pixel center - base_anchors = [ - x_center - 0.5 * ws, y_center - 0.5 * hs, x_center + 0.5 * ws, - y_center + 0.5 * hs - ] - base_anchors = torch.stack(base_anchors, dim=-1) - - return base_anchors - - def _meshgrid(self, x, y, row_major=True): - """Generate mesh grid of x and y. - - Args: - x (torch.Tensor): Grids of x dimension. - y (torch.Tensor): Grids of y dimension. - row_major (bool, optional): Whether to return y grids first. - Defaults to True. - - Returns: - tuple[torch.Tensor]: The mesh grids of x and y. - """ - # use shape instead of len to keep tracing while exporting to onnx - xx = x.repeat(y.shape[0]) - yy = y.view(-1, 1).repeat(1, x.shape[0]).view(-1) - if row_major: - return xx, yy - else: - return yy, xx - - def grid_anchors(self, featmap_sizes, device='cuda'): - """Generate grid anchors in multiple feature levels. - - Args: - featmap_sizes (list[tuple]): List of feature map sizes in - multiple feature levels. - device (str): Device where the anchors will be put on. - - Return: - list[torch.Tensor]: Anchors in multiple feature levels. \ - The sizes of each tensor should be [N, 4], where \ - N = width * height * num_base_anchors, width and height \ - are the sizes of the corresponding feature level, \ - num_base_anchors is the number of anchors for that level. - """ - assert self.num_levels == len(featmap_sizes) - multi_level_anchors = [] - for i in range(self.num_levels): - anchors = self.single_level_grid_anchors( - self.base_anchors[i].to(device), - featmap_sizes[i], - self.strides[i], - device=device) - multi_level_anchors.append(anchors) - return multi_level_anchors - - def single_level_grid_anchors(self, - base_anchors, - featmap_size, - stride=(16, 16), - device='cuda'): - """Generate grid anchors of a single level. - - Note: - This function is usually called by method ``self.grid_anchors``. 
- - Args: - base_anchors (torch.Tensor): The base anchors of a feature grid. - featmap_size (tuple[int]): Size of the feature maps. - stride (tuple[int], optional): Stride of the feature map in order - (w, h). Defaults to (16, 16). - device (str, optional): Device the tensor will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: Anchors in the overall feature maps. - """ - # keep as Tensor, so that we can covert to ONNX correctly - feat_h, feat_w = featmap_size - shift_x = torch.arange(0, feat_w, device=device) * stride[0] - shift_y = torch.arange(0, feat_h, device=device) * stride[1] - - shift_xx, shift_yy = self._meshgrid(shift_x, shift_y) - shifts = torch.stack([shift_xx, shift_yy, shift_xx, shift_yy], dim=-1) - shifts = shifts.type_as(base_anchors) - # first feat_w elements correspond to the first row of shifts - # add A anchors (1, A, 4) to K shifts (K, 1, 4) to get - # shifted anchors (K, A, 4), reshape to (K*A, 4) - - all_anchors = base_anchors[None, :, :] + shifts[:, None, :] - all_anchors = all_anchors.view(-1, 4) - # first A rows correspond to A anchors of (0, 0) in feature map, - # then (0, 1), (0, 2), ... - return all_anchors - - def valid_flags(self, featmap_sizes, pad_shape, device='cuda'): - """Generate valid flags of anchors in multiple feature levels. - - Args: - featmap_sizes (list(tuple)): List of feature map sizes in - multiple feature levels. - pad_shape (tuple): The padded shape of the image. - device (str): Device where the anchors will be put on. - - Return: - list(torch.Tensor): Valid flags of anchors in multiple levels. - """ - assert self.num_levels == len(featmap_sizes) - multi_level_flags = [] - for i in range(self.num_levels): - anchor_stride = self.strides[i] - feat_h, feat_w = featmap_sizes[i] - h, w = pad_shape[:2] - valid_feat_h = min(int(np.ceil(h / anchor_stride[1])), feat_h) - valid_feat_w = min(int(np.ceil(w / anchor_stride[0])), feat_w) - flags = self.single_level_valid_flags((feat_h, feat_w), - (valid_feat_h, valid_feat_w), - self.num_base_anchors[i], - device=device) - multi_level_flags.append(flags) - return multi_level_flags - - def single_level_valid_flags(self, - featmap_size, - valid_size, - num_base_anchors, - device='cuda'): - """Generate the valid flags of anchor in a single feature map. - - Args: - featmap_size (tuple[int]): The size of feature maps. - valid_size (tuple[int]): The valid size of the feature maps. - num_base_anchors (int): The number of base anchors. - device (str, optional): Device where the flags will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: The valid flags of each anchor in a single level \ - feature map. 
- """ - feat_h, feat_w = featmap_size - valid_h, valid_w = valid_size - assert valid_h <= feat_h and valid_w <= feat_w - valid_x = torch.zeros(feat_w, dtype=torch.bool, device=device) - valid_y = torch.zeros(feat_h, dtype=torch.bool, device=device) - valid_x[:valid_w] = 1 - valid_y[:valid_h] = 1 - valid_xx, valid_yy = self._meshgrid(valid_x, valid_y) - valid = valid_xx & valid_yy - valid = valid[:, None].expand(valid.size(0), - num_base_anchors).contiguous().view(-1) - return valid - - def __repr__(self): - """str: a string that describes the module""" - indent_str = ' ' - repr_str = self.__class__.__name__ + '(\n' - repr_str += f'{indent_str}strides={self.strides},\n' - repr_str += f'{indent_str}ratios={self.ratios},\n' - repr_str += f'{indent_str}scales={self.scales},\n' - repr_str += f'{indent_str}base_sizes={self.base_sizes},\n' - repr_str += f'{indent_str}scale_major={self.scale_major},\n' - repr_str += f'{indent_str}octave_base_scale=' - repr_str += f'{self.octave_base_scale},\n' - repr_str += f'{indent_str}scales_per_octave=' - repr_str += f'{self.scales_per_octave},\n' - repr_str += f'{indent_str}num_levels={self.num_levels}\n' - repr_str += f'{indent_str}centers={self.centers},\n' - repr_str += f'{indent_str}center_offset={self.center_offset})' - return repr_str - - -@ANCHOR_GENERATORS.register_module() -class SSDAnchorGenerator(AnchorGenerator): - """Anchor generator for SSD. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels. - ratios (list[float]): The list of ratios between the height and width - of anchors in a single level. - basesize_ratio_range (tuple(float)): Ratio range of anchors. - input_size (int): Size of feature map, 300 for SSD300, - 512 for SSD512. - scale_major (bool): Whether to multiply scales first when generating - base anchors. If true, the anchors in the same row will have the - same scales. It is always set to be False in SSD. - """ - - def __init__(self, - strides, - ratios, - basesize_ratio_range, - input_size=300, - scale_major=True): - assert len(strides) == len(ratios) - assert mmcv.is_tuple_of(basesize_ratio_range, float) - - self.strides = [_pair(stride) for stride in strides] - self.input_size = input_size - self.centers = [(stride[0] / 2., stride[1] / 2.) 
- for stride in self.strides] - self.basesize_ratio_range = basesize_ratio_range - - # calculate anchor ratios and sizes - min_ratio, max_ratio = basesize_ratio_range - min_ratio = int(min_ratio * 100) - max_ratio = int(max_ratio * 100) - step = int(np.floor(max_ratio - min_ratio) / (self.num_levels - 2)) - min_sizes = [] - max_sizes = [] - for ratio in range(int(min_ratio), int(max_ratio) + 1, step): - min_sizes.append(int(self.input_size * ratio / 100)) - max_sizes.append(int(self.input_size * (ratio + step) / 100)) - if self.input_size == 300: - if basesize_ratio_range[0] == 0.15: # SSD300 COCO - min_sizes.insert(0, int(self.input_size * 7 / 100)) - max_sizes.insert(0, int(self.input_size * 15 / 100)) - elif basesize_ratio_range[0] == 0.2: # SSD300 VOC - min_sizes.insert(0, int(self.input_size * 10 / 100)) - max_sizes.insert(0, int(self.input_size * 20 / 100)) - else: - raise ValueError( - 'basesize_ratio_range[0] should be either 0.15' - 'or 0.2 when input_size is 300, got ' - f'{basesize_ratio_range[0]}.') - elif self.input_size == 512: - if basesize_ratio_range[0] == 0.1: # SSD512 COCO - min_sizes.insert(0, int(self.input_size * 4 / 100)) - max_sizes.insert(0, int(self.input_size * 10 / 100)) - elif basesize_ratio_range[0] == 0.15: # SSD512 VOC - min_sizes.insert(0, int(self.input_size * 7 / 100)) - max_sizes.insert(0, int(self.input_size * 15 / 100)) - else: - raise ValueError('basesize_ratio_range[0] should be either 0.1' - 'or 0.15 when input_size is 512, got' - f' {basesize_ratio_range[0]}.') - else: - raise ValueError('Only support 300 or 512 in SSDAnchorGenerator' - f', got {self.input_size}.') - - anchor_ratios = [] - anchor_scales = [] - for k in range(len(self.strides)): - scales = [1., np.sqrt(max_sizes[k] / min_sizes[k])] - anchor_ratio = [1.] - for r in ratios[k]: - anchor_ratio += [1 / r, r] # 4 or 6 ratio - anchor_ratios.append(torch.Tensor(anchor_ratio)) - anchor_scales.append(torch.Tensor(scales)) - - self.base_sizes = min_sizes - self.scales = anchor_scales - self.ratios = anchor_ratios - self.scale_major = scale_major - self.center_offset = 0 - self.base_anchors = self.gen_base_anchors() - - def gen_base_anchors(self): - """Generate base anchors. - - Returns: - list(torch.Tensor): Base anchors of a feature grid in multiple \ - feature levels. 
- """ - multi_level_base_anchors = [] - for i, base_size in enumerate(self.base_sizes): - base_anchors = self.gen_single_level_base_anchors( - base_size, - scales=self.scales[i], - ratios=self.ratios[i], - center=self.centers[i]) - indices = list(range(len(self.ratios[i]))) - indices.insert(1, len(indices)) - base_anchors = torch.index_select(base_anchors, 0, - torch.LongTensor(indices)) - multi_level_base_anchors.append(base_anchors) - return multi_level_base_anchors - - def __repr__(self): - """str: a string that describes the module""" - indent_str = ' ' - repr_str = self.__class__.__name__ + '(\n' - repr_str += f'{indent_str}strides={self.strides},\n' - repr_str += f'{indent_str}scales={self.scales},\n' - repr_str += f'{indent_str}scale_major={self.scale_major},\n' - repr_str += f'{indent_str}input_size={self.input_size},\n' - repr_str += f'{indent_str}scales={self.scales},\n' - repr_str += f'{indent_str}ratios={self.ratios},\n' - repr_str += f'{indent_str}num_levels={self.num_levels},\n' - repr_str += f'{indent_str}base_sizes={self.base_sizes},\n' - repr_str += f'{indent_str}basesize_ratio_range=' - repr_str += f'{self.basesize_ratio_range})' - return repr_str - - -@ANCHOR_GENERATORS.register_module() -class LegacyAnchorGenerator(AnchorGenerator): - """Legacy anchor generator used in MMDetection V1.x. - - Note: - Difference to the V2.0 anchor generator: - - 1. The center offset of V1.x anchors are set to be 0.5 rather than 0. - 2. The width/height are minused by 1 when calculating the anchors' \ - centers and corners to meet the V1.x coordinate system. - 3. The anchors' corners are quantized. - - Args: - strides (list[int] | list[tuple[int]]): Strides of anchors - in multiple feature levels. - ratios (list[float]): The list of ratios between the height and width - of anchors in a single level. - scales (list[int] | None): Anchor scales for anchors in a single level. - It cannot be set at the same time if `octave_base_scale` and - `scales_per_octave` are set. - base_sizes (list[int]): The basic sizes of anchors in multiple levels. - If None is given, strides will be used to generate base_sizes. - scale_major (bool): Whether to multiply scales first when generating - base anchors. If true, the anchors in the same row will have the - same scales. By default it is True in V2.0 - octave_base_scale (int): The base scale of octave. - scales_per_octave (int): Number of scales for each octave. - `octave_base_scale` and `scales_per_octave` are usually used in - retinanet and the `scales` should be None when they are set. - centers (list[tuple[float, float]] | None): The centers of the anchor - relative to the feature grid center in multiple feature levels. - By default it is set to be None and not used. It a list of float - is given, this list will be used to shift the centers of anchors. - center_offset (float): The offset of center in propotion to anchors' - width and height. By default it is 0.5 in V2.0 but it should be 0.5 - in v1.x models. - - Examples: - >>> from mmdet.core import LegacyAnchorGenerator - >>> self = LegacyAnchorGenerator( - >>> [16], [1.], [1.], [9], center_offset=0.5) - >>> all_anchors = self.grid_anchors(((2, 2),), device='cpu') - >>> print(all_anchors) - [tensor([[ 0., 0., 8., 8.], - [16., 0., 24., 8.], - [ 0., 16., 8., 24.], - [16., 16., 24., 24.]])] - """ - - def gen_single_level_base_anchors(self, - base_size, - scales, - ratios, - center=None): - """Generate base anchors of a single level. 
- - Note: - The width/height of anchors are minused by 1 when calculating \ - the centers and corners to meet the V1.x coordinate system. - - Args: - base_size (int | float): Basic size of an anchor. - scales (torch.Tensor): Scales of the anchor. - ratios (torch.Tensor): The ratio between between the height. - and width of anchors in a single level. - center (tuple[float], optional): The center of the base anchor - related to a single feature grid. Defaults to None. - - Returns: - torch.Tensor: Anchors in a single-level feature map. - """ - w = base_size - h = base_size - if center is None: - x_center = self.center_offset * (w - 1) - y_center = self.center_offset * (h - 1) - else: - x_center, y_center = center - - h_ratios = torch.sqrt(ratios) - w_ratios = 1 / h_ratios - if self.scale_major: - ws = (w * w_ratios[:, None] * scales[None, :]).view(-1) - hs = (h * h_ratios[:, None] * scales[None, :]).view(-1) - else: - ws = (w * scales[:, None] * w_ratios[None, :]).view(-1) - hs = (h * scales[:, None] * h_ratios[None, :]).view(-1) - - # use float anchor and the anchor's center is aligned with the - # pixel center - base_anchors = [ - x_center - 0.5 * (ws - 1), y_center - 0.5 * (hs - 1), - x_center + 0.5 * (ws - 1), y_center + 0.5 * (hs - 1) - ] - base_anchors = torch.stack(base_anchors, dim=-1).round() - - return base_anchors - - -@ANCHOR_GENERATORS.register_module() -class LegacySSDAnchorGenerator(SSDAnchorGenerator, LegacyAnchorGenerator): - """Legacy anchor generator used in MMDetection V1.x. - - The difference between `LegacySSDAnchorGenerator` and `SSDAnchorGenerator` - can be found in `LegacyAnchorGenerator`. - """ - - def __init__(self, - strides, - ratios, - basesize_ratio_range, - input_size=300, - scale_major=True): - super(LegacySSDAnchorGenerator, - self).__init__(strides, ratios, basesize_ratio_range, input_size, - scale_major) - self.centers = [((stride - 1) / 2., (stride - 1) / 2.) - for stride in strides] - self.base_anchors = self.gen_base_anchors() - - -@ANCHOR_GENERATORS.register_module() -class YOLOAnchorGenerator(AnchorGenerator): - """Anchor generator for YOLO. - - Args: - strides (list[int] | list[tuple[int, int]]): Strides of anchors - in multiple feature levels. - base_sizes (list[list[tuple[int, int]]]): The basic sizes - of anchors in multiple levels. - """ - - def __init__(self, strides, base_sizes): - self.strides = [_pair(stride) for stride in strides] - self.centers = [(stride[0] / 2., stride[1] / 2.) - for stride in self.strides] - self.base_sizes = [] - num_anchor_per_level = len(base_sizes[0]) - for base_sizes_per_level in base_sizes: - assert num_anchor_per_level == len(base_sizes_per_level) - self.base_sizes.append( - [_pair(base_size) for base_size in base_sizes_per_level]) - self.base_anchors = self.gen_base_anchors() - - @property - def num_levels(self): - """int: number of feature levels that the generator will be applied""" - return len(self.base_sizes) - - def gen_base_anchors(self): - """Generate base anchors. - - Returns: - list(torch.Tensor): Base anchors of a feature grid in multiple \ - feature levels. - """ - multi_level_base_anchors = [] - for i, base_sizes_per_level in enumerate(self.base_sizes): - center = None - if self.centers is not None: - center = self.centers[i] - multi_level_base_anchors.append( - self.gen_single_level_base_anchors(base_sizes_per_level, - center)) - return multi_level_base_anchors - - def gen_single_level_base_anchors(self, base_sizes_per_level, center=None): - """Generate base anchors of a single level. 
- - Args: - base_sizes_per_level (list[tuple[int, int]]): Basic sizes of - anchors. - center (tuple[float], optional): The center of the base anchor - related to a single feature grid. Defaults to None. - - Returns: - torch.Tensor: Anchors in a single-level feature maps. - """ - x_center, y_center = center - base_anchors = [] - for base_size in base_sizes_per_level: - w, h = base_size - - # use float anchor and the anchor's center is aligned with the - # pixel center - base_anchor = torch.Tensor([ - x_center - 0.5 * w, y_center - 0.5 * h, x_center + 0.5 * w, - y_center + 0.5 * h - ]) - base_anchors.append(base_anchor) - base_anchors = torch.stack(base_anchors, dim=0) - - return base_anchors - - def responsible_flags(self, featmap_sizes, gt_bboxes, device='cuda'): - """Generate responsible anchor flags of grid cells in multiple scales. - - Args: - featmap_sizes (list(tuple)): List of feature map sizes in multiple - feature levels. - gt_bboxes (Tensor): Ground truth boxes, shape (n, 4). - device (str): Device where the anchors will be put on. - - Return: - list(torch.Tensor): responsible flags of anchors in multiple level - """ - assert self.num_levels == len(featmap_sizes) - multi_level_responsible_flags = [] - for i in range(self.num_levels): - anchor_stride = self.strides[i] - flags = self.single_level_responsible_flags( - featmap_sizes[i], - gt_bboxes, - anchor_stride, - self.num_base_anchors[i], - device=device) - multi_level_responsible_flags.append(flags) - return multi_level_responsible_flags - - def single_level_responsible_flags(self, - featmap_size, - gt_bboxes, - stride, - num_base_anchors, - device='cuda'): - """Generate the responsible flags of anchor in a single feature map. - - Args: - featmap_size (tuple[int]): The size of feature maps. - gt_bboxes (Tensor): Ground truth boxes, shape (n, 4). - stride (tuple(int)): stride of current level - num_base_anchors (int): The number of base anchors. - device (str, optional): Device where the flags will be put on. - Defaults to 'cuda'. - - Returns: - torch.Tensor: The valid flags of each anchor in a single level \ - feature map. 
- """ - feat_h, feat_w = featmap_size - gt_bboxes_cx = ((gt_bboxes[:, 0] + gt_bboxes[:, 2]) * 0.5).to(device) - gt_bboxes_cy = ((gt_bboxes[:, 1] + gt_bboxes[:, 3]) * 0.5).to(device) - gt_bboxes_grid_x = torch.floor(gt_bboxes_cx / stride[0]).long() - gt_bboxes_grid_y = torch.floor(gt_bboxes_cy / stride[1]).long() - - # row major indexing - gt_bboxes_grid_idx = gt_bboxes_grid_y * feat_w + gt_bboxes_grid_x - - responsible_grid = torch.zeros( - feat_h * feat_w, dtype=torch.uint8, device=device) - responsible_grid[gt_bboxes_grid_idx] = 1 - - responsible_grid = responsible_grid[:, None].expand( - responsible_grid.size(0), num_base_anchors).contiguous().view(-1) - return responsible_grid diff --git a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/scripts/train.sh b/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/scripts/train.sh deleted file mode 100644 index 79ab2bc77f01ed305c2d2517f3bf6e3474eb5dcf..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/scripts/train.sh +++ /dev/null @@ -1,19 +0,0 @@ -python train.py \ ---name celeba_styleD \ ---img_file /dataset/image_painting/image_list/celeba_HQ_train.txt \ ---mask_file /dataset/image_painting/image_list/irregular_mask_train.txt \ ---model tc \ ---coarse_or_refine coarse \ ---netT original \ ---n_encoders 12 \ ---n_decoders 0 \ ---netD style \ ---gpu_ids 2,1,0 \ ---load_size 542 \ ---fine_size 512 \ ---batch_size 24 \ ---display_port 8093 \ ---attn_G \ ---add_noise \ ---display_ncols 0 \ ---continue_train diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/optimizer/builder.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/optimizer/builder.py deleted file mode 100644 index f9234eed8f1f186d9d8dfda34562157ee39bdb3a..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/runner/optimizer/builder.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import copy -import inspect - -import torch - -from ...utils import Registry, build_from_cfg - -OPTIMIZERS = Registry('optimizer') -OPTIMIZER_BUILDERS = Registry('optimizer builder') - - -def register_torch_optimizers(): - torch_optimizers = [] - for module_name in dir(torch.optim): - if module_name.startswith('__'): - continue - _optim = getattr(torch.optim, module_name) - if inspect.isclass(_optim) and issubclass(_optim, - torch.optim.Optimizer): - OPTIMIZERS.register_module()(_optim) - torch_optimizers.append(module_name) - return torch_optimizers - - -TORCH_OPTIMIZERS = register_torch_optimizers() - - -def build_optimizer_constructor(cfg): - return build_from_cfg(cfg, OPTIMIZER_BUILDERS) - - -def build_optimizer(model, cfg): - optimizer_cfg = copy.deepcopy(cfg) - constructor_type = optimizer_cfg.pop('constructor', - 'DefaultOptimizerConstructor') - paramwise_cfg = optimizer_cfg.pop('paramwise_cfg', None) - optim_constructor = build_optimizer_constructor( - dict( - type=constructor_type, - optimizer_cfg=optimizer_cfg, - paramwise_cfg=paramwise_cfg)) - optimizer = optim_constructor(model) - return optimizer diff --git a/spaces/Apex-X/ROOPOK/CONTRIBUTING.md b/spaces/Apex-X/ROOPOK/CONTRIBUTING.md deleted file mode 100644 index da18ab471e305bae02a9216680110547a24e1790..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/ROOPOK/CONTRIBUTING.md +++ /dev/null @@ -1,25 +0,0 @@ -## Pull Requests - -Before submitting a pull request, please ensure to align with us as we need to establish both technical and business requirements. - - -### Do - -- ...consider to fix bugs over adding features -- ...one pull request for one feature or improvement -- ...consult us about implementation details -- ...proper testing before you submit your code -- ...resolve failed CI pipelines - - -### Don't - -- ...introduce fundamental changes in terms of software architecture -- ...introduce OOP - we accept functional programming only -- ...ignore given requirements or try to work around them -- ...submit code to a development branch without consulting us -- ...submit massive amount of code changes -- ...submit a proof of concept -- ...submit code that is using undocumented and private APIs -- ...solve third party issues in our project -- ...comment what your code does - use proper naming instead diff --git a/spaces/Apex-X/nono/roop/processors/__init__.py b/spaces/Apex-X/nono/roop/processors/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Archan/ArXivAudio/get_paper.py b/spaces/Archan/ArXivAudio/get_paper.py deleted file mode 100644 index d92e151ab66d9b714003696e7f8914fe250706a9..0000000000000000000000000000000000000000 --- a/spaces/Archan/ArXivAudio/get_paper.py +++ /dev/null @@ -1,17 +0,0 @@ -import arxiv - - -def get_paper(paper=""): - if paper: - id = paper.split(" - ") - print("id= ", id) - - paper = next(arxiv.Search(id_list=[id[-1]]).results()) - print("paper title= ", paper.title) - name = str(paper.title) + '.pdf' - name = name.replace('?', '') - name = "downloads/" + name - paper.download_pdf(filename="./downloads/paper.pdf") - print(name) - - return(paper) \ No newline at end of file diff --git a/spaces/Asahi402/White-box-Cartoonization/wbc/network.py b/spaces/Asahi402/White-box-Cartoonization/wbc/network.py deleted file mode 100644 index 6f16cee1aa1994d0a78c524f459764de5164e637..0000000000000000000000000000000000000000 --- a/spaces/Asahi402/White-box-Cartoonization/wbc/network.py +++ /dev/null @@ 
-1,62 +0,0 @@ -import tensorflow as tf -import numpy as np -import tensorflow.contrib.slim as slim - - - -def resblock(inputs, out_channel=32, name='resblock'): - - with tf.variable_scope(name): - - x = slim.convolution2d(inputs, out_channel, [3, 3], - activation_fn=None, scope='conv1') - x = tf.nn.leaky_relu(x) - x = slim.convolution2d(x, out_channel, [3, 3], - activation_fn=None, scope='conv2') - - return x + inputs - - - - -def unet_generator(inputs, channel=32, num_blocks=4, name='generator', reuse=False): - with tf.variable_scope(name, reuse=reuse): - - x0 = slim.convolution2d(inputs, channel, [7, 7], activation_fn=None) - x0 = tf.nn.leaky_relu(x0) - - x1 = slim.convolution2d(x0, channel, [3, 3], stride=2, activation_fn=None) - x1 = tf.nn.leaky_relu(x1) - x1 = slim.convolution2d(x1, channel*2, [3, 3], activation_fn=None) - x1 = tf.nn.leaky_relu(x1) - - x2 = slim.convolution2d(x1, channel*2, [3, 3], stride=2, activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - x2 = slim.convolution2d(x2, channel*4, [3, 3], activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - - for idx in range(num_blocks): - x2 = resblock(x2, out_channel=channel*4, name='block_{}'.format(idx)) - - x2 = slim.convolution2d(x2, channel*2, [3, 3], activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - - h1, w1 = tf.shape(x2)[1], tf.shape(x2)[2] - x3 = tf.image.resize_bilinear(x2, (h1*2, w1*2)) - x3 = slim.convolution2d(x3+x1, channel*2, [3, 3], activation_fn=None) - x3 = tf.nn.leaky_relu(x3) - x3 = slim.convolution2d(x3, channel, [3, 3], activation_fn=None) - x3 = tf.nn.leaky_relu(x3) - - h2, w2 = tf.shape(x3)[1], tf.shape(x3)[2] - x4 = tf.image.resize_bilinear(x3, (h2*2, w2*2)) - x4 = slim.convolution2d(x4+x0, channel, [3, 3], activation_fn=None) - x4 = tf.nn.leaky_relu(x4) - x4 = slim.convolution2d(x4, 3, [7, 7], activation_fn=None) - - return x4 - -if __name__ == '__main__': - - - pass \ No newline at end of file diff --git a/spaces/Awesimo/jojogan/e4e/configs/data_configs.py b/spaces/Awesimo/jojogan/e4e/configs/data_configs.py deleted file mode 100644 index deccb0b1c266ad4b6abaef53d67ec1ed0ddbd462..0000000000000000000000000000000000000000 --- a/spaces/Awesimo/jojogan/e4e/configs/data_configs.py +++ /dev/null @@ -1,41 +0,0 @@ -from configs import transforms_config -from configs.paths_config import dataset_paths - - -DATASETS = { - 'ffhq_encode': { - 'transforms': transforms_config.EncodeTransforms, - 'train_source_root': dataset_paths['ffhq'], - 'train_target_root': dataset_paths['ffhq'], - 'test_source_root': dataset_paths['celeba_test'], - 'test_target_root': dataset_paths['celeba_test'], - }, - 'cars_encode': { - 'transforms': transforms_config.CarsEncodeTransforms, - 'train_source_root': dataset_paths['cars_train'], - 'train_target_root': dataset_paths['cars_train'], - 'test_source_root': dataset_paths['cars_test'], - 'test_target_root': dataset_paths['cars_test'], - }, - 'horse_encode': { - 'transforms': transforms_config.EncodeTransforms, - 'train_source_root': dataset_paths['horse_train'], - 'train_target_root': dataset_paths['horse_train'], - 'test_source_root': dataset_paths['horse_test'], - 'test_target_root': dataset_paths['horse_test'], - }, - 'church_encode': { - 'transforms': transforms_config.EncodeTransforms, - 'train_source_root': dataset_paths['church_train'], - 'train_target_root': dataset_paths['church_train'], - 'test_source_root': dataset_paths['church_test'], - 'test_target_root': dataset_paths['church_test'], - }, - 'cats_encode': { - 'transforms': transforms_config.EncodeTransforms, - 'train_source_root': 
dataset_paths['cats_train'], - 'train_target_root': dataset_paths['cats_train'], - 'test_source_root': dataset_paths['cats_test'], - 'test_target_root': dataset_paths['cats_test'], - } -} diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tools/analyze_model.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tools/analyze_model.py deleted file mode 100644 index 8e38f8b71eb3b8d1e2b670e7f01a796ec2ea4b7e..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tools/analyze_model.py +++ /dev/null @@ -1,159 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -import logging -import numpy as np -from collections import Counter -import tqdm -from fvcore.nn import flop_count_table # can also try flop_count_str - -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import CfgNode, LazyConfig, get_cfg, instantiate -from detectron2.data import build_detection_test_loader -from detectron2.engine import default_argument_parser -from detectron2.modeling import build_model -from detectron2.utils.analysis import ( - FlopCountAnalysis, - activation_count_operators, - parameter_count_table, -) -from detectron2.utils.logger import setup_logger - -logger = logging.getLogger("detectron2") - - -def setup(args): - if args.config_file.endswith(".yaml"): - cfg = get_cfg() - cfg.merge_from_file(args.config_file) - cfg.DATALOADER.NUM_WORKERS = 0 - cfg.merge_from_list(args.opts) - cfg.freeze() - else: - cfg = LazyConfig.load(args.config_file) - cfg = LazyConfig.apply_overrides(cfg, args.opts) - setup_logger(name="fvcore") - setup_logger() - return cfg - - -def do_flop(cfg): - if isinstance(cfg, CfgNode): - data_loader = build_detection_test_loader(cfg, cfg.DATASETS.TEST[0]) - model = build_model(cfg) - DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS) - else: - data_loader = instantiate(cfg.dataloader.test) - model = instantiate(cfg.model) - model.to(cfg.train.device) - DetectionCheckpointer(model).load(cfg.train.init_checkpoint) - model.eval() - - counts = Counter() - total_flops = [] - for idx, data in zip(tqdm.trange(args.num_inputs), data_loader): # noqa - flops = FlopCountAnalysis(model, data) - if idx > 0: - flops.unsupported_ops_warnings(False).uncalled_modules_warnings(False) - counts += flops.by_operator() - total_flops.append(flops.total()) - - logger.info("Flops table computed from only one input sample:\n" + flop_count_table(flops)) - logger.info( - "Average GFlops for each type of operators:\n" - + str([(k, v / (idx + 1) / 1e9) for k, v in counts.items()]) - ) - logger.info( - "Total GFlops: {:.1f}±{:.1f}".format(np.mean(total_flops) / 1e9, np.std(total_flops) / 1e9) - ) - - -def do_activation(cfg): - if isinstance(cfg, CfgNode): - data_loader = build_detection_test_loader(cfg, cfg.DATASETS.TEST[0]) - model = build_model(cfg) - DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS) - else: - data_loader = instantiate(cfg.dataloader.test) - model = instantiate(cfg.model) - model.to(cfg.train.device) - DetectionCheckpointer(model).load(cfg.train.init_checkpoint) - model.eval() - - counts = Counter() - total_activations = [] - for idx, data in zip(tqdm.trange(args.num_inputs), data_loader): # noqa - count = activation_count_operators(model, data) - counts += count - total_activations.append(sum(count.values())) - logger.info( - "(Million) Activations for Each Type of Operators:\n" - + str([(k, v / idx) for k, v in counts.items()]) - ) 
- logger.info( - "Total (Million) Activations: {}±{}".format( - np.mean(total_activations), np.std(total_activations) - ) - ) - - -def do_parameter(cfg): - if isinstance(cfg, CfgNode): - model = build_model(cfg) - else: - model = instantiate(cfg.model) - logger.info("Parameter Count:\n" + parameter_count_table(model, max_depth=5)) - - -def do_structure(cfg): - if isinstance(cfg, CfgNode): - model = build_model(cfg) - else: - model = instantiate(cfg.model) - logger.info("Model Structure:\n" + str(model)) - - -if __name__ == "__main__": - parser = default_argument_parser( - epilog=""" -Examples: - -To show parameters of a model: -$ ./analyze_model.py --tasks parameter \\ - --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml - -Flops and activations are data-dependent, therefore inputs and model weights -are needed to count them: - -$ ./analyze_model.py --num-inputs 100 --tasks flop \\ - --config-file ../configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \\ - MODEL.WEIGHTS /path/to/model.pkl -""" - ) - parser.add_argument( - "--tasks", - choices=["flop", "activation", "parameter", "structure"], - required=True, - nargs="+", - ) - parser.add_argument( - "-n", - "--num-inputs", - default=100, - type=int, - help="number of inputs used to compute statistics for flops/activations, " - "both are data dependent.", - ) - args = parser.parse_args() - assert not args.eval_only - assert args.num_gpus == 1 - - cfg = setup(args) - - for task in args.tasks: - { - "flop": do_flop, - "activation": do_activation, - "parameter": do_parameter, - "structure": do_structure, - }[task](cfg) diff --git a/spaces/BasToTheMax/openai-whisper-large-v2/README.md b/spaces/BasToTheMax/openai-whisper-large-v2/README.md deleted file mode 100644 index daf18e78cb005a921bdcedabf6d16f814a870300..0000000000000000000000000000000000000000 --- a/spaces/BasToTheMax/openai-whisper-large-v2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Openai Whisper Large V2 -emoji: 🐢 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -duplicated_from: satozen/openai-whisper-large-v2 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Benson/text-generation/Examples/Descargar Fichas Mgicas 3 Bum Bum Tam Tam.md b/spaces/Benson/text-generation/Examples/Descargar Fichas Mgicas 3 Bum Bum Tam Tam.md deleted file mode 100644 index 0afaa8946dd1b3623a930cc30124b8b5ad1d4b0d..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Fichas Mgicas 3 Bum Bum Tam Tam.md +++ /dev/null @@ -1,77 +0,0 @@ -
-

How to Download Magic Tiles 3 Bum Bum Tam Tam and Enjoy the Music

-

Do you love music games? Do you want to play a game that features one of the most viral songs of all time? If you answered yes, you should download Magic Tiles 3 Bum Bum Tam Tam, a game that will get your feet and fingers moving to the beat of this catchy Brazilian song. In this article, we will tell you everything you need to know about the game, including what it is, how to download it, how to play it, and why you should try it today.

-

What Is Magic Tiles 3 Bum Bum Tam Tam?

-

Magic Tiles 3 Bum Bum Tam Tam is a music game based on the song "Bum Bum Tam Tam" by MC Fioti, which has more than 1.6 billion views on YouTube. The song is a fusion of Brazilian funk and classical music, built around a flute sample from Johann Sebastian Bach's "Partita in A minor for solo flute". It became a global sensation in 2017 thanks to its catchy chorus and dance moves.

-

descargar fichas mágicas 3 bum bum tam tam


Download Zip ►►►►► https://bltlly.com/2v6JIG



-

A Popular Music Game with a Catchy Song

-

Magic Tiles 3 is one of the most popular music games on the market, with more than 100 million downloads on Google Play. The game lets you play various songs on a virtual piano by tapping the tiles that appear on the screen. It offers many genres and themes, such as pop, rock, classical, anime, EDM, and more. One of those themes is "Bum Bum Tam Tam", which features the original song and several remixes by different artists. The game also updates its song list regularly, so you can always find new songs to play.

-

A Challenging and Fun Game

- -

A Variety of Modes and Songs to Choose From

-

Magic Tiles 3 also offers a variety of modes and songs to suit your preferences and mood. You can play alone or with friends in online multiplayer mode, or compete with players from around the world in battle mode. You can also customize your piano with different skins and themes. In addition, you can choose from hundreds of songs across different genres and themes, including "Bum Bum Tam Tam" and its remixes, and you can unlock new songs and features by earning coins and diamonds in the game.

-

How to Download Magic Tiles 3 Bum Bum Tam Tam on Your Device

-

Downloading Magic Tiles 3 Bum Bum Tam Tam is easy and free. You can download it from your device's app store by searching for "Magic Tiles 3". On an Apple device, for example:

  • Select the app with the icon of a piano and a star, and tap "Get".
  • Enter your Apple ID and password if prompted, and wait for the app to download and install on your device.
  • Open the app and tap the "Bum Bum Tam Tam" theme in the main menu.
  • Enjoy playing the game with the song of your choice.
  • - -

For PC Users

    -

If you have a PC, you can download Magic Tiles 3 Bum Bum Tam Tam from the Microsoft Store. Here are the steps:

    -
      -
    1. Open the Microsoft Store on your PC and search for "Magic Tiles 3".
    2. Select the app with the icon of a piano and a star, and click "Get".
    3. Sign in with your Microsoft account if prompted, and wait for the app to download and install on your PC.
    4. Open the app and click the "Bum Bum Tam Tam" theme in the main menu.
    5. Enjoy playing the game with the song of your choice.
    10. -
    -

How to Play Magic Tiles 3 Bum Bum Tam Tam and Improve Your Skills

    -

Magic Tiles 3 Bum Bum Tam Tam is easy to learn but hard to master. You need good reflexes, coordination, and rhythm to play well. Here are some tips on how to play and how to improve your skills:

    - -

The basic rule of Magic Tiles 3 is to tap the black tiles that correspond to the notes of the song while avoiding the white tiles. If you miss a black tile or tap a white one, you lose. You also need to tap the long black tiles that stretch across several columns and slide your finger along them. The game shows you which tiles to tap with arrows and indicators, so pay attention to them.

    -

Follow the Rhythm and Tempo of the Song

    -

The key to playing well is following the rhythm and tempo of the song. You have to tap the tiles at the right moment, in time with the beat and melody. If you tap too early or too late, you lose points and accuracy. You can also adjust the song speed in the settings, from slow to fast: the faster the speed, the harder the game.

    -

Earn Coins and Diamonds to Unlock New Songs and Features

    -

As you play Magic Tiles 3, you will earn coins and diamonds that you can use to unlock new songs and features. You can earn coins by completing levels, watching ads, or spinning the wheel. You can earn diamonds by completing achievements, logging in daily, or buying them with real money. Coins and diamonds can be spent on new songs, themes, skins, and power-ups; power-ups can help you boost your score, extend your time, or revive you when you lose.

    -

Why Should You Download Magic Tiles 3 Bum Bum Tam Tam Today?

    -

Magic Tiles 3 Bum Bum Tam Tam is a game you should download today for many reasons. Here are some of them:

    -

    -

It Is Free and Easy to Play

    -

Magic Tiles 3 Bum Bum Tam Tam is a free game that you can download and play anytime, anywhere. You don't need any special skills or equipment to play it, just your device and your fingers. The game is also easy to learn but hard to master, so you can enjoy it regardless of your age or experience level.

    -

It Is a Great Way to Relax and Have Fun

    - -

It Is Good Exercise for Your Brain and Fingers

    -

Magic Tiles 3 Bum Bum Tam Tam also gives your brain and fingers a workout. Playing it improves your reflexes, coordination, memory, concentration, and sense of rhythm, and you can challenge yourself with different difficulty levels and speeds. The game also stimulates your creativity and musical sense by letting you play songs from many different genres.

    -

Conclusion

    -

Magic Tiles 3 Bum Bum Tam Tam is a game you should not miss if you love music and fun. It lets you play the viral song "Bum Bum Tam Tam" and many other songs on a virtual piano, it tests your skills and entertains you with its gameplay and graphics, and it benefits your brain and fingers with the exercise and stimulation it provides. So what are you waiting for? Download Magic Tiles 3 Bum Bum Tam Tam today and enjoy the music!

    -

Frequently Asked Questions

    -

Here are some frequently asked questions about Magic Tiles 3 Bum Bum Tam Tam:

| Question | Answer |
| --- | --- |
| Is Magic Tiles 3 Bum Bum Tam Tam safe to download? | Yes, Magic Tiles 3 Bum Bum Tam Tam is safe to download from the official sources, such as Google Play, the App Store, and the Microsoft Store. The game contains no viruses, malware, or harmful content. |
| Can I play Magic Tiles 3 Bum Bum Tam Tam offline? | Yes, you can play Magic Tiles 3 Bum Bum Tam Tam offline, as long as you have downloaded the songs you want to play. However, some features, such as online multiplayer mode, battle mode, and daily rewards, require an internet connection. |
| How can I get more coins and diamonds in Magic Tiles 3 Bum Bum Tam Tam? | You can earn coins by completing levels, watching ads, or spinning the wheel, and earn diamonds by completing achievements, logging in daily, or buying them with real money. |
| How can I change the language of Magic Tiles 3 Bum Bum Tam Tam? | You can change the language by going to the settings menu and selecting the language option. The game supports many languages, such as English, Spanish, French, German, Portuguese, Russian, Turkish, Arabic, and more. |
| How can I contact the developers of Magic Tiles 3 Bum Bum Tam Tam? | You can contact the developers by sending an email to support@amanotes.com or by visiting their website at https://amanotes.com/. |

    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/more_itertools/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/more_itertools/__init__.py deleted file mode 100644 index 19a169fc30183db91f931ad6ad04fbc0e16559b3..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/more_itertools/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .more import * # noqa -from .recipes import * # noqa - -__version__ = '8.8.0' diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/zipp.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/zipp.py deleted file mode 100644 index 26b723c1fd3e25740e0268b8c9b50905c58c3d4a..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/zipp.py +++ /dev/null @@ -1,329 +0,0 @@ -import io -import posixpath -import zipfile -import itertools -import contextlib -import sys -import pathlib - -if sys.version_info < (3, 7): - from collections import OrderedDict -else: - OrderedDict = dict - - -__all__ = ['Path'] - - -def _parents(path): - """ - Given a path with elements separated by - posixpath.sep, generate all parents of that path. - - >>> list(_parents('b/d')) - ['b'] - >>> list(_parents('/b/d/')) - ['/b'] - >>> list(_parents('b/d/f/')) - ['b/d', 'b'] - >>> list(_parents('b')) - [] - >>> list(_parents('')) - [] - """ - return itertools.islice(_ancestry(path), 1, None) - - -def _ancestry(path): - """ - Given a path with elements separated by - posixpath.sep, generate all elements of that path - - >>> list(_ancestry('b/d')) - ['b/d', 'b'] - >>> list(_ancestry('/b/d/')) - ['/b/d', '/b'] - >>> list(_ancestry('b/d/f/')) - ['b/d/f', 'b/d', 'b'] - >>> list(_ancestry('b')) - ['b'] - >>> list(_ancestry('')) - [] - """ - path = path.rstrip(posixpath.sep) - while path and path != posixpath.sep: - yield path - path, tail = posixpath.split(path) - - -_dedupe = OrderedDict.fromkeys -"""Deduplicate an iterable in original order""" - - -def _difference(minuend, subtrahend): - """ - Return items in minuend not in subtrahend, retaining order - with O(1) lookup. - """ - return itertools.filterfalse(set(subtrahend).__contains__, minuend) - - -class CompleteDirs(zipfile.ZipFile): - """ - A ZipFile subclass that ensures that implied directories - are always included in the namelist. - """ - - @staticmethod - def _implied_dirs(names): - parents = itertools.chain.from_iterable(map(_parents, names)) - as_dirs = (p + posixpath.sep for p in parents) - return _dedupe(_difference(as_dirs, names)) - - def namelist(self): - names = super(CompleteDirs, self).namelist() - return names + list(self._implied_dirs(names)) - - def _name_set(self): - return set(self.namelist()) - - def resolve_dir(self, name): - """ - If the name represents a directory, return that name - as a directory (with the trailing slash). - """ - names = self._name_set() - dirname = name + '/' - dir_match = name not in names and dirname in names - return dirname if dir_match else name - - @classmethod - def make(cls, source): - """ - Given a source (filename or zipfile), return an - appropriate CompleteDirs subclass. 
- """ - if isinstance(source, CompleteDirs): - return source - - if not isinstance(source, zipfile.ZipFile): - return cls(_pathlib_compat(source)) - - # Only allow for FastLookup when supplied zipfile is read-only - if 'r' not in source.mode: - cls = CompleteDirs - - source.__class__ = cls - return source - - -class FastLookup(CompleteDirs): - """ - ZipFile subclass to ensure implicit - dirs exist and are resolved rapidly. - """ - - def namelist(self): - with contextlib.suppress(AttributeError): - return self.__names - self.__names = super(FastLookup, self).namelist() - return self.__names - - def _name_set(self): - with contextlib.suppress(AttributeError): - return self.__lookup - self.__lookup = super(FastLookup, self)._name_set() - return self.__lookup - - -def _pathlib_compat(path): - """ - For path-like objects, convert to a filename for compatibility - on Python 3.6.1 and earlier. - """ - try: - return path.__fspath__() - except AttributeError: - return str(path) - - -class Path: - """ - A pathlib-compatible interface for zip files. - - Consider a zip file with this structure:: - - . - ├── a.txt - └── b - ├── c.txt - └── d - └── e.txt - - >>> data = io.BytesIO() - >>> zf = zipfile.ZipFile(data, 'w') - >>> zf.writestr('a.txt', 'content of a') - >>> zf.writestr('b/c.txt', 'content of c') - >>> zf.writestr('b/d/e.txt', 'content of e') - >>> zf.filename = 'mem/abcde.zip' - - Path accepts the zipfile object itself or a filename - - >>> root = Path(zf) - - From there, several path operations are available. - - Directory iteration (including the zip file itself): - - >>> a, b = root.iterdir() - >>> a - Path('mem/abcde.zip', 'a.txt') - >>> b - Path('mem/abcde.zip', 'b/') - - name property: - - >>> b.name - 'b' - - join with divide operator: - - >>> c = b / 'c.txt' - >>> c - Path('mem/abcde.zip', 'b/c.txt') - >>> c.name - 'c.txt' - - Read text: - - >>> c.read_text() - 'content of c' - - existence: - - >>> c.exists() - True - >>> (b / 'missing.txt').exists() - False - - Coercion to string: - - >>> import os - >>> str(c).replace(os.sep, posixpath.sep) - 'mem/abcde.zip/b/c.txt' - - At the root, ``name``, ``filename``, and ``parent`` - resolve to the zipfile. Note these attributes are not - valid and will raise a ``ValueError`` if the zipfile - has no filename. - - >>> root.name - 'abcde.zip' - >>> str(root.filename).replace(os.sep, posixpath.sep) - 'mem/abcde.zip' - >>> str(root.parent) - 'mem' - """ - - __repr = "{self.__class__.__name__}({self.root.filename!r}, {self.at!r})" - - def __init__(self, root, at=""): - """ - Construct a Path from a ZipFile or filename. - - Note: When the source is an existing ZipFile object, - its type (__class__) will be mutated to a - specialized type. If the caller wishes to retain the - original type, the caller should either create a - separate ZipFile object or pass a filename. - """ - self.root = FastLookup.make(root) - self.at = at - - def open(self, mode='r', *args, pwd=None, **kwargs): - """ - Open this entry as text or binary following the semantics - of ``pathlib.Path.open()`` by passing arguments through - to io.TextIOWrapper(). 
- """ - if self.is_dir(): - raise IsADirectoryError(self) - zip_mode = mode[0] - if not self.exists() and zip_mode == 'r': - raise FileNotFoundError(self) - stream = self.root.open(self.at, zip_mode, pwd=pwd) - if 'b' in mode: - if args or kwargs: - raise ValueError("encoding args invalid for binary operation") - return stream - return io.TextIOWrapper(stream, *args, **kwargs) - - @property - def name(self): - return pathlib.Path(self.at).name or self.filename.name - - @property - def suffix(self): - return pathlib.Path(self.at).suffix or self.filename.suffix - - @property - def suffixes(self): - return pathlib.Path(self.at).suffixes or self.filename.suffixes - - @property - def stem(self): - return pathlib.Path(self.at).stem or self.filename.stem - - @property - def filename(self): - return pathlib.Path(self.root.filename).joinpath(self.at) - - def read_text(self, *args, **kwargs): - with self.open('r', *args, **kwargs) as strm: - return strm.read() - - def read_bytes(self): - with self.open('rb') as strm: - return strm.read() - - def _is_child(self, path): - return posixpath.dirname(path.at.rstrip("/")) == self.at.rstrip("/") - - def _next(self, at): - return self.__class__(self.root, at) - - def is_dir(self): - return not self.at or self.at.endswith("/") - - def is_file(self): - return self.exists() and not self.is_dir() - - def exists(self): - return self.at in self.root._name_set() - - def iterdir(self): - if not self.is_dir(): - raise ValueError("Can't listdir a file") - subs = map(self._next, self.root.namelist()) - return filter(self._is_child, subs) - - def __str__(self): - return posixpath.join(self.root.filename, self.at) - - def __repr__(self): - return self.__repr.format(self=self) - - def joinpath(self, *other): - next = posixpath.join(self.at, *map(_pathlib_compat, other)) - return self._next(self.root.resolve_dir(next)) - - __truediv__ = joinpath - - @property - def parent(self): - if not self.at: - return self.filename.parent - parent_at = posixpath.dirname(self.at.rstrip('/')) - if parent_at: - parent_at += '/' - return self._next(parent_at) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/_version.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/_version.py deleted file mode 100644 index e12dd0e78530cc37bfa6599d3b9121bba90d77cb..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/_version.py +++ /dev/null @@ -1,2 +0,0 @@ -# This file is protected via CODEOWNERS -__version__ = "1.26.15" diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/sampling.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/sampling.py deleted file mode 100644 index 85a0921936ac942caec4831ffad92c110074fdce..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/sampling.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import torch - -__all__ = ["subsample_labels"] - - -def subsample_labels(labels, num_samples, positive_fraction, bg_label): - """ - Return `num_samples` (or fewer, if not enough found) - random samples from `labels` which is a mixture of positives & negatives. - It will try to return as many positives as possible without - exceeding `positive_fraction * num_samples`, and then try to - fill the remaining slots with negatives. 
- - Args: - labels (Tensor): (N, ) label vector with values: - * -1: ignore - * bg_label: background ("negative") class - * otherwise: one or more foreground ("positive") classes - num_samples (int): The total number of labels with value >= 0 to return. - Values that are not sampled will be filled with -1 (ignore). - positive_fraction (float): The number of subsampled labels with values > 0 - is `min(num_positives, int(positive_fraction * num_samples))`. The number - of negatives sampled is `min(num_negatives, num_samples - num_positives_sampled)`. - In order words, if there are not enough positives, the sample is filled with - negatives. If there are also not enough negatives, then as many elements are - sampled as is possible. - bg_label (int): label index of background ("negative") class. - - Returns: - pos_idx, neg_idx (Tensor): - 1D vector of indices. The total length of both is `num_samples` or fewer. - """ - positive = torch.nonzero((labels != -1) & (labels != bg_label)).squeeze(1) - negative = torch.nonzero(labels == bg_label).squeeze(1) - - num_pos = int(num_samples * positive_fraction) - # protect against not enough positive examples - num_pos = min(positive.numel(), num_pos) - num_neg = num_samples - num_pos - # protect against not enough negative examples - num_neg = min(negative.numel(), num_neg) - - # randomly select positive and negative examples - perm1 = torch.randperm(positive.numel(), device=positive.device)[:num_pos] - perm2 = torch.randperm(negative.numel(), device=negative.device)[:num_neg] - - pos_idx = positive[perm1] - neg_idx = negative[perm2] - return pos_idx, neg_idx diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/sem_optimize_patch.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/sem_optimize_patch.py deleted file mode 100644 index 1f02e634b9266cfaaba627ea0b35dac3020bac63..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/sem_optimize_patch.py +++ /dev/null @@ -1,532 +0,0 @@ -""" -========================================================================================= -Trojan VQA -Written by Matthew Walmer - -Generate an optimized patch designed to create a strong activation for a specified -object + attribute semantic target. 
Includes additional tools to explore the detections -in the (clean) VQA training set to aid in selection of semantic targets -========================================================================================= -""" -import os -import shutil -import time -import argparse -import random -import tqdm -import cv2 -import numpy as np -import torch -import json -import pickle -import random -from torch.autograd import Variable - -from triggers import feature_space_trigger -from utils import load_detectron_predictor, check_for_cuda - - - -# parse and show the target setting(s), which may be the integer id or the name -def parse_targets(dataroot, ct, o, a): - annot = json.load(open(os.path.join(dataroot, "annotation_map.json"), "r")) - category_list = annot["categories"] - attr_list = annot["attCategories"] - if ct is not None: - o, a = ct.split('+') - print('Semantic Target Settings:') - o_id, o_name = parse_target(o, category_list, 'object') - a_id, a_name = parse_target(a, attr_list, 'attribute') - return o_id, a_id - - - -# parse one setting -def parse_target(t, data_list, t_type): - if t is None: - print('%s target: None'%t_type) - return None, None - data_dict = {} - for i in range(len(data_list)): - data_dict[data_list[i]["name"]] = i - if t in data_dict: - t_id = data_dict[t] - t_name = t - else: - try: - t_id = int(t) - except: - print('ERROR: Could not parse %s target: %s'%(t_type, str(t))) - exit(-1) - # treat a -1 as None: - if t_id == -1: - print('%s target: None'%t_type) - return None, None - t_name = data_list[t_id] - print('%s target: %s [%i]'%(t_type, t_name, t_id)) - return t_id, t_name - - - -# helper tool to lookup the names of objects and attributes -def lookup_labels(dataroot, l_type, l_ids): - assert l_type in ['object', 'attribute'] - annot = json.load(open(os.path.join(dataroot, "annotation_map.json"), "r")) - category_list = annot["categories"] - attr_list = annot["attCategories"] - if type(l_ids) is not list: - l_ids = [l_ids] - for l_id in l_ids: - if l_type == 'object': - obj = category_list[l_id]["name"] - print('object[%i]: %s'%(l_id, obj)) - else: - attr = attr_list[l_id]["name"] - print('attribute[%i]: %s'%(l_id, attr)) - - - -# helper tool to list the names of objects and attributes -def list_all_labels(dataroot, l_type): - assert l_type in ['object', 'attribute'] - annot = json.load(open(os.path.join(dataroot, "annotation_map.json"), "r")) - category_list = annot["categories"] - attr_list = annot["attCategories"] - if l_type == 'object': - print('Objects:') - data = category_list - else: - print('Attributes:') - data = attr_list - for i in range(len(data)): - name = data[i]["name"] - print('%i - %s'%(i, name)) - - - -# helper tool to explore the saved detections in the (clean) training set, to -# aid in the search for good, rare, semantic targets for optimized patches -def explore_detections(dataroot, detector='R-50', data_part='train2014', verbose=False, get_dict=False): - assert data_part in ['train2014', 'val2014'] - feat_dir = os.path.join(dataroot, 'feature_cache', 'clean', detector, data_part) - if not os.path.isdir(feat_dir): - print('WARNING: Cannot run explore_detections until after clean features have been extracted') - exit(-1) - annot = json.load(open(os.path.join(dataroot, "annotation_map.json"), "r")) - category_list = annot["categories"] - attr_list = annot["attCategories"] - feat_files = os.listdir(feat_dir) - occ_info = {} - obj2id = {} - attr2id = {} - for f in tqdm.tqdm(feat_files): - info_file = os.path.join(feat_dir, f) - info = 
pickle.load(open(info_file, "rb")) - nb = info['boxes'].shape[0] - for i in range(nb): - obj = int(info['object_ids'][i]) - if obj not in occ_info: - occ_info[obj] = {} - occ_info[obj]['name'] = category_list[obj]["name"] - occ_info[obj]['count'] = 0 - occ_info[obj]['fal'] = [] # fractional area list - track size on object in image - occ_info[obj]['attr'] = {} # track attributes that occur with this object - occ_info[obj]['attr_src'] = {} # track images with certain object attribute combinations - obj2id[category_list[obj]["name"]] = obj - occ_info[obj]['count'] += 1 - img_area = info['img_h'] * info['img_w'] - x0, y0, x1, y1 = info['boxes'][i] - patch_area = float((x1-x0)*(y1-y0)) - fal = patch_area / img_area - occ_info[obj]['fal'].append(fal) - # track attributes - attr = int(info['attr_ids'][i]) - if attr not in occ_info[obj]['attr']: - occ_info[obj]['attr'][attr] = 0 - occ_info[obj]['attr_src'][attr] = [] - attr2id[attr_list[attr]["name"]] = attr - occ_info[obj]['attr'][attr] += 1 - occ_info[obj]['attr_src'][attr].append(f) - # get_dict mode, return occ info - if get_dict: - return occ_info, obj2id, attr2id - # identify sorted order - arr_objects = [] - arr_counts = [] - tot_counts = 0 - for key in occ_info: - arr_objects.append(key) - arr_counts.append(occ_info[key]['count']) - tot_counts += occ_info[key]['count'] - arr_objects = np.array(arr_objects) - arr_counts = np.array(arr_counts) - srt_idx = np.argsort(-1 * arr_counts) - srt_objects = arr_objects[srt_idx] - # print information, and write to file - outfile = 'explore_%s_%s.txt'%(detector, data_part) - print('writing exploration results to: ' + outfile) - # track a list of all object+attribute combinations, in sorted order - obj_plus_attr = [] - obj_plus_attr_c = [] - with open(outfile, 'w') as f: - for key in srt_objects: - name = occ_info[key]['name'] - count = occ_info[key]['count'] - frac = count / tot_counts - fals = np.array(occ_info[key]['fal']) - avg_fal = np.mean(fals) - std_fal = np.std(fals) - if verbose: print('[%i] %s - %i (%.5f) - %.5f+-%.5f'%(key, name, count, frac, avg_fal, 2*std_fal)) - f.write('[%i] %s - %i (%.5f) - %.5f+-%.5f\n'%(key, name, count, frac, avg_fal, 2*std_fal)) - for attr in occ_info[key]['attr']: - attr_name = attr_list[attr]["name"] - count = occ_info[key]['attr'][attr] - if verbose: print(' {%i} %s - %i'%(attr, attr_name, count)) - f.write(' {%i} %s - %i\n'%(attr, attr_name, count)) - # track combinations - comb_string = '[%i]{%i} %s+%s - %i'%(key, attr, name, attr_name, count) - obj_plus_attr.append(comb_string) - obj_plus_attr_c.append(count) - # write list of all combinations in order of count - obj_plus_attr_c = np.array(obj_plus_attr_c) - idx_srt = np.argsort(-1 * obj_plus_attr_c) - outfile = 'combinations_%s_%s.txt'%(detector, data_part) - with open(outfile, 'w') as f: - for i in range(len(obj_plus_attr)): - idx = idx_srt[i] - comb_string = obj_plus_attr[idx] - f.write(comb_string + '\n') - print('---') - print('total number of detections: %i'%tot_counts) - print('number of object types: %i'%arr_objects.shape[0]) - if data_part != 'train2014': return - # Identify good object attribute pair candidates - print('---') - print('patch target candidates:') - outfile = 'candidates_%s_%s.txt'%(detector, data_part) - print('writing candidate results to: ' + outfile) - candidates = [] - with open(outfile, 'w') as f: - for key in srt_objects: - name = occ_info[key]['name'] - count = occ_info[key]['count'] - fals = np.array(occ_info[key]['fal']) - avg_fal = np.mean(fals) - std_fal = np.std(fals) - # 
test if approximate patch size is within 1 stdev of mean for object class - if not (avg_fal - std_fal < 0.01 and 0.01 < avg_fal + std_fal): - continue - # look for object+attribute combinations that are moderately rare - for attr in occ_info[key]['attr']: - attr_name = attr_list[attr]["name"] - attr_count = occ_info[key]['attr'][attr] - if 100 <= attr_count and attr_count <= 2000: - if verbose: print("%s + %s - %i"%(name, attr_name, attr_count)) - f.write("%s + %s - %i\n"%(name, attr_name, attr_count)) - candidates.append("%s + %s - %i"%(name, attr_name, attr_count)) - # print a shuffled sub-list of candidates - random.shuffle(candidates) - for i in range(100): - print(candidates[i]) - - - -# helper script to find images containing natural examples of the requested object type(s) -# requests can be passed as a comma separated list of + pairs. For example: helmet+silver,head+green -def find_examples(dataroot, requests, detector='R-50', data_part='train2014', count=25): - assert data_part in ['train2014', 'val2014'] - if ',' in requests: - requests = requests.split(',') - else: - requests = [requests] - occ_info, obj2id, attr2id = explore_detections(dataroot, detector, data_part, get_dict=True) - for r in requests: - obj, attr = r.split('+') - print('===== %s + %s'%(obj,attr)) - if obj not in obj2id: - print('no instances of object %s found'%obj) - continue - obj_id = obj2id[obj] - if attr not in attr2id: - print('no instances of attribute %s found'%attr) - continue - attr_id = attr2id[attr] - if attr_id not in occ_info[obj_id]["attr_src"]: - print('no instances of %s+%s found'%(obj, attr)) - continue - files = occ_info[obj_id]["attr_src"][attr_id] - outdir = os.path.join('find_examples', detector, data_part, r) - os.makedirs(outdir, exist_ok=True) - sel_files = [] - for i in range(len(files)): - f = files[i] - if f not in sel_files: - sel_files.append(f) - if len(sel_files) == count: - break - for f in sel_files: - f = f.replace('.pkl', '') - print(f) - src = os.path.join('../data/clean', data_part, f) - dst = os.path.join(outdir, f) - shutil.copy(src, dst) - - - -# helper tool, check the resolutions by scale -def check_res(dataroot, scale): - img_dir = os.path.join(dataroot, 'clean', 'train2014') - files = os.listdir(img_dir) - res_count = np.zeros(100, dtype=int) - for f in tqdm.tqdm(files): - img_path = os.path.join(img_dir, f) - img = cv2.imread(img_path) - imsize = img.shape[:2] - l = int(np.min(imsize) * scale) - res_count[l] += 1 - idx_srt = np.argsort(-1*res_count) - avg_top = 0 - avg_bot = 0 - for i in range(100): - idx = idx_srt[i] - if res_count[idx] == 0: - break - print('%i - %i'%(idx, res_count[idx])) - avg_bot += res_count[idx] - avg_top += (idx*res_count[idx]) - avg = float(avg_top) / avg_bot - print('-') - print('average: ' + str(avg)) - - -#================================================================================================== - - -def embed_patch(img, patch, scale): - imsize = img.shape[1:] - l = int(np.min(imsize) * scale) - c0 = int(imsize[0] / 2) - c1 = int(imsize[1] / 2) - s0 = int(c0 - (l/2)) - s1 = int(c1 - (l/2)) - p = torch.nn.functional.interpolate(patch, size=(l,l), mode='bilinear') - p = p.squeeze(0) - p = torch.clip(p, 0.0, 1.0) - img[:, s0:s0+l, s1:s1+l] = p * 255 - return img - - - -def optimize_patch(dataroot, model_dir, detector, nb, scale, res, epochs, limit, prog, init, - patch_name, over, seed, obj_target, attr_target, lam): - if obj_target is None and attr_target is None: - print('ERROR: Must specify an object id target or an attribute id 
target or both') - exit(-1) - assert init in ['random', 'const'] - assert epochs > 0 - assert obj_target > 0 and obj_target <= 1600 - t0 = time.time() - device = check_for_cuda() - random.seed(seed) - - # check locations - if os.path.isfile(patch_name): - print('WARNING: already found a patch at location: ' + patch_name) - if not over: - print('to override, use the --over flag') - exit(-1) - else: - print('override is enabled') - feat_dir = os.path.join(dataroot, 'feature_cache', 'clean', detector, 'train2014') - if not os.path.isdir(feat_dir): - print('WARNING: optimize_patch.py must be run after clean features have been extracted') - exit(-1) - - # model prep - model_path = os.path.join(model_dir, detector + '.pth') - config_file = "grid-feats-vqa/configs/%s-grid.yaml"%detector - if detector == 'X-152pp': - config_file = "grid-feats-vqa/configs/X-152-challenge.yaml" - print('loading model: ' + model_path) - predictor = load_detectron_predictor(config_file, model_path, device) - roi_head = predictor.model.roi_heads - - # initialize patch tensor, loss, and optimizer - if init == 'const': - patch = Variable(0.5 * torch.ones([1, 3, res, res], dtype=torch.float32), requires_grad=True) - else: - rand_patch = np.random.normal(loc=0.5, scale=0.25, size=[1, 3, res, res]) - rand_patch = np.clip(rand_patch, 0, 1) - patch = Variable(torch.from_numpy(rand_patch.astype(np.float32)), requires_grad=True) - cel_obj = torch.nn.CrossEntropyLoss() - cel_attr = torch.nn.CrossEntropyLoss() - trk_cel_obj = torch.nn.CrossEntropyLoss(reduction='none') - trk_cel_attr = torch.nn.CrossEntropyLoss(reduction='none') - optim = torch.optim.Adam([patch]) - - # set up training - img_dir = os.path.join(dataroot, 'clean', 'train2014') - files = os.listdir(img_dir) - loss_col_obj = [] - loss_col_attr = [] - i = 0 - j = 0 - - # partial epochs - allow training for < 1 epoch - if epochs < 1: - print('Training on a partial epoch: ' + str(epochs)) - limit = int(epochs * len(files)) - print('Will train on %i images'%limit) - epochs = 1 - else: - epochs = int(epochs) - - # optimize patch - t1 = time.time() - for e in range(epochs): - print('=== EPOCH: %i'%e) - random.shuffle(files) - for f in files: - img_path = os.path.join(img_dir, f) - original_image = cv2.imread(img_path) - optim.zero_grad() - - # using model directly to bypass some limitations of predictor - height, width = original_image.shape[:2] - image = predictor.transform_gen.get_transform(original_image).apply_image(original_image) - image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1)) - image = embed_patch(image, patch, scale) - inputs = {"image": image, "height": height, "width": width} - - # run - outputs, box_features = predictor.model([inputs]) - outputs = outputs[0] - nb_out = box_features.shape[0] - - # object target - if obj_target is not None: - scores, deltas = roi_head.box_predictor(box_features) - targets = torch.ones(nb_out, dtype=torch.long, device=device) * obj_target - l_obj = cel_obj(scores, targets) - if attr_target is None: - l = l_obj - - # attribute target - if attr_target is not None: - pred_classes = outputs["instances"].get_fields()["pred_classes"].data - attribute_scores = roi_head.attribute_predictor(box_features, pred_classes) - attr_targets = torch.ones(nb_out, dtype=torch.long, device=device) * attr_target - l_attr = cel_attr(attribute_scores, attr_targets) - if obj_target is None: - l = l_attr - - # step - if obj_target is not None and attr_target is not None: - l = l_obj + (lam * l_attr) - l.backward() - optim.step() - - # 
track progress by looking for the detection with the smallest loss, averaged over k images - if obj_target is not None: - trk_l_obj = trk_cel_obj(scores, targets) - trk_l_obj = np.array(trk_l_obj.detach().cpu()) - trk_l_obj = np.min(trk_l_obj) - loss_col_obj.append(trk_l_obj) - else: - loss_col_obj.append(0.0) - if attr_target is not None: - trk_l_attr = trk_cel_attr(attribute_scores, attr_targets) - trk_l_attr = np.array(trk_l_attr.detach().cpu()) - trk_l_attr = np.min(trk_l_attr) - loss_col_attr.append(trk_l_attr) - else: - loss_col_attr.append(0.0) - if (i+1)%prog == 0: - loss_col_obj = np.mean(np.array(loss_col_obj)) - loss_col_attr = np.mean(np.array(loss_col_attr)) - tdiff = time.time() - t1 - t1 = time.time() - print('%i/%i avg obj loss: %f avg attr loss: %f time: %is'%(i, len(files), loss_col_obj, loss_col_attr, int(tdiff))) - loss_col_obj = [] - loss_col_attr = [] - j = i+1 - - # limit (optional) - if i == limit: - print('limiting training to %i steps'%limit) - break - i += 1 - - # save patch - final = patch.squeeze(0) - final = torch.clip(final, 0, 1) * 255 - final = np.array(final.data).astype(int) - final = final.transpose(1, 2, 0) - print('saving patch to: ' + patch_name) - cv2.imwrite(patch_name, final) - t = time.time() - t0 - print('DONE in %.2fm'%(t/60)) - - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--dataroot', type=str, default='../data/', help='data location') - parser.add_argument("--model_dir", type=str, help='location of .pth files', default='../detectors/') - parser.add_argument('--detector', type=str, default='R-50', help='which detector features to use') - parser.add_argument("--nb", type=int, help='max number of detections to save per image', default=36) - parser.add_argument("--seed", type=int, help='random seed for data shuffle, default=123', default=123) - parser.add_argument("--scale", type=float, default=0.1, help='patch scale relative to image') - parser.add_argument("--res", type=int, default=64, help='optimized patch resolution in pixels, default=64') - # semantic target settings - new - parser.add_argument("--target", type=str, default=None, help='specify and object/attribute pair in format +, overrides other settings') - parser.add_argument("--obj_target", type=str, default=None, help='object target (id or name). Use --explore to explore options') - parser.add_argument("--attr_target", type=str, default=None, help='attribute target (id or name). 
Use --explore to explore options') - parser.add_argument("--lam", type=float, default=0.1, help='weight for the attribute target loss, default 0.1') - # training settings - parser.add_argument("--epochs", type=float, default=1) - parser.add_argument("--limit", type=int, default=-1) - parser.add_argument("--prog", type=int, default=100) - parser.add_argument("--init", type=str, default='random') - # naming - parser.add_argument("--patch_name", type=str, default='../opti_patches/semdev_op0.jpg') - parser.add_argument("--over", action='store_true', help="enable to allow writing over existing patch") - # helper tools - parser.add_argument("--check_res", action='store_true', help="check the resolutions of patches by scale") - parser.add_argument("--check_attr", type=int, default=None, help="check the name of an attribute index") - parser.add_argument("--check_obj", type=int, default=None, help="check the name of an object index") - parser.add_argument("--list_attr", action='store_true', help='list all attributes') - parser.add_argument("--list_obj", action='store_true', help='list all objects') - parser.add_argument("--explore", action='store_true', help="explore clean training set detections for rare object types") - parser.add_argument("--find_examples", type=str, default=None, help="look for images with a certain + combination") - parser.add_argument("--find_count", type=int, default=25, help="max number of examples to take. set as -1 to have no limit") - parser.add_argument("--data_part", type=str, default='train2014', help="for use with explore, which data partition to check") - args = parser.parse_args() - np.random.seed(args.seed) - # helper tools (optional) - if args.check_res: - check_res(args.dataroot, args.scale) - exit() - if args.check_obj is not None: - lookup_labels(args.dataroot, 'object', args.check_obj) - exit() - if args.check_attr is not None: - lookup_labels(args.dataroot, 'attribute', args.check_attr) - exit() - if args.list_obj: - list_all_labels(args.dataroot, 'object') - exit() - if args.list_attr: - list_all_labels(args.dataroot, 'attribute') - exit() - if args.explore: - explore_detections(args.dataroot, args.detector, args.data_part) - exit() - if args.find_examples is not None: - find_examples(args.dataroot, args.find_examples, args.detector, args.data_part, args.find_count) - exit() - # parse the target settings - OBJ_TAR, ATTR_TAR = parse_targets(args.dataroot, args.target, args.obj_target, args.attr_target) - # main script - optimize_patch(args.dataroot, args.model_dir, args.detector, args.nb, args.scale, args.res, args.epochs, - args.limit, args.prog, args.init, args.patch_name, args.over, args.seed, OBJ_TAR, ATTR_TAR, args.lam) \ No newline at end of file diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/backbone/fpn.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/backbone/fpn.py deleted file mode 100644 index 532711d882b7baf109eef1fded128069e144d6ba..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/modeling/backbone/fpn.py +++ /dev/null @@ -1,277 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import math -import fvcore.nn.weight_init as weight_init -import torch -import torch.nn.functional as F -from torch import nn - -from detectron2.layers import Conv2d, ShapeSpec, get_norm - -from .backbone import Backbone -from .build import BACKBONE_REGISTRY -from .resnet import build_resnet_backbone -from .clip_backbone import build_clip_resnet_backbone - -__all__ = ["build_clip_resnet_fpn_backbone", "build_resnet_fpn_backbone", "build_retinanet_resnet_fpn_backbone", "FPN"] - - -class FPN(Backbone): - """ - This module implements :paper:`FPN`. - It creates pyramid features built on top of some input feature maps. - """ - - _fuse_type: torch.jit.Final[str] - - def __init__( - self, bottom_up, in_features, out_channels, norm="", top_block=None, fuse_type="sum" - ): - """ - Args: - bottom_up (Backbone): module representing the bottom up subnetwork. - Must be a subclass of :class:`Backbone`. The multi-scale feature - maps generated by the bottom up network, and listed in `in_features`, - are used to generate FPN levels. - in_features (list[str]): names of the input feature maps coming - from the backbone to which FPN is attached. For example, if the - backbone produces ["res2", "res3", "res4"], any *contiguous* sublist - of these may be used; order must be from high to low resolution. - out_channels (int): number of channels in the output feature maps. - norm (str): the normalization to use. - top_block (nn.Module or None): if provided, an extra operation will - be performed on the output of the last (smallest resolution) - FPN output, and the result will extend the result list. The top_block - further downsamples the feature map. It must have an attribute - "num_levels", meaning the number of extra FPN levels added by - this block, and "in_feature", which is a string representing - its input feature (e.g., p5). - fuse_type (str): types for fusing the top down features and the lateral - ones. It can be "sum" (default), which sums up element-wise; or "avg", - which takes the element-wise mean of the two. - """ - super(FPN, self).__init__() - assert isinstance(bottom_up, Backbone) - assert in_features, in_features - - # Feature map strides and channels from the bottom up network (e.g. ResNet) - input_shapes = bottom_up.output_shape() - strides = [input_shapes[f].stride for f in in_features] - in_channels_per_feature = [input_shapes[f].channels for f in in_features] - - _assert_strides_are_log2_contiguous(strides) - lateral_convs = [] - output_convs = [] - - use_bias = norm == "" - for idx, in_channels in enumerate(in_channels_per_feature): - lateral_norm = get_norm(norm, out_channels) - output_norm = get_norm(norm, out_channels) - - lateral_conv = Conv2d( - in_channels, out_channels, kernel_size=1, bias=use_bias, norm=lateral_norm - ) - output_conv = Conv2d( - out_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1, - bias=use_bias, - norm=output_norm, - ) - weight_init.c2_xavier_fill(lateral_conv) - weight_init.c2_xavier_fill(output_conv) - stage = int(math.log2(strides[idx])) - self.add_module("fpn_lateral{}".format(stage), lateral_conv) - self.add_module("fpn_output{}".format(stage), output_conv) - - lateral_convs.append(lateral_conv) - output_convs.append(output_conv) - # Place convs into top-down order (from low to high resolution) - # to make the top-down computation in forward clearer. 
- self.lateral_convs = lateral_convs[::-1] - self.output_convs = output_convs[::-1] - self.top_block = top_block - self.in_features = tuple(in_features) - self.bottom_up = bottom_up - # Return feature names are "p", like ["p2", "p3", ..., "p6"] - self._out_feature_strides = {"p{}".format(int(math.log2(s))): s for s in strides} - # top block output feature maps. - if self.top_block is not None: - for s in range(stage, stage + self.top_block.num_levels): - self._out_feature_strides["p{}".format(s + 1)] = 2 ** (s + 1) - - self._out_features = list(self._out_feature_strides.keys()) - self._out_feature_channels = {k: out_channels for k in self._out_features} - self._size_divisibility = strides[-1] - assert fuse_type in {"avg", "sum"} - self._fuse_type = fuse_type - - @property - def size_divisibility(self): - return self._size_divisibility - - def forward(self, x): - """ - Args: - input (dict[str->Tensor]): mapping feature map name (e.g., "res5") to - feature map tensor for each feature level in high to low resolution order. - - Returns: - dict[str->Tensor]: - mapping from feature map name to FPN feature map tensor - in high to low resolution order. Returned feature names follow the FPN - paper convention: "p", where stage has stride = 2 ** stage e.g., - ["p2", "p3", ..., "p6"]. - """ - bottom_up_features = self.bottom_up(x) - results = [] - prev_features = self.lateral_convs[0](bottom_up_features[self.in_features[-1]]) - results.append(self.output_convs[0](prev_features)) - - # Reverse feature maps into top-down order (from low to high resolution) - for idx, (lateral_conv, output_conv) in enumerate( - zip(self.lateral_convs, self.output_convs) - ): - # Slicing of ModuleList is not supported https://github.com/pytorch/pytorch/issues/47336 - # Therefore we loop over all modules but skip the first one - if idx > 0: - features = self.in_features[-idx - 1] - features = bottom_up_features[features] - top_down_features = F.interpolate(prev_features, scale_factor=2.0, mode="nearest") - lateral_features = lateral_conv(features) - prev_features = lateral_features + top_down_features - if self._fuse_type == "avg": - prev_features /= 2 - results.insert(0, output_conv(prev_features)) - - if self.top_block is not None: - if self.top_block.in_feature in bottom_up_features: - top_block_in_feature = bottom_up_features[self.top_block.in_feature] - else: - top_block_in_feature = results[self._out_features.index(self.top_block.in_feature)] - results.extend(self.top_block(top_block_in_feature)) - assert len(self._out_features) == len(results) - return {f: res for f, res in zip(self._out_features, results)} - - def output_shape(self): - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], stride=self._out_feature_strides[name] - ) - for name in self._out_features - } - - -def _assert_strides_are_log2_contiguous(strides): - """ - Assert that each stride is 2x times its preceding stride, i.e. "contiguous in log2". - """ - for i, stride in enumerate(strides[1:], 1): - assert stride == 2 * strides[i - 1], "Strides {} {} are not log2 contiguous".format( - stride, strides[i - 1] - ) - - -class LastLevelMaxPool(nn.Module): - """ - This module is used in the original FPN to generate a downsampled - P6 feature from P5. 
- """ - - def __init__(self): - super().__init__() - self.num_levels = 1 - self.in_feature = "p5" - - def forward(self, x): - return [F.max_pool2d(x, kernel_size=1, stride=2, padding=0)] - - -class LastLevelP6P7(nn.Module): - """ - This module is used in RetinaNet to generate extra layers, P6 and P7 from - C5 feature. - """ - - def __init__(self, in_channels, out_channels, in_feature="res5"): - super().__init__() - self.num_levels = 2 - self.in_feature = in_feature - self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1) - self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1) - for module in [self.p6, self.p7]: - weight_init.c2_xavier_fill(module) - - def forward(self, c5): - p6 = self.p6(c5) - p7 = self.p7(F.relu(p6)) - return [p6, p7] - - -@BACKBONE_REGISTRY.register() -def build_resnet_fpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. - """ - bottom_up = build_resnet_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.FPN.OUT_CHANNELS - backbone = FPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - norm=cfg.MODEL.FPN.NORM, - top_block=LastLevelMaxPool(), - fuse_type=cfg.MODEL.FPN.FUSE_TYPE, - ) - return backbone - -@BACKBONE_REGISTRY.register() -def build_clip_resnet_fpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. - """ - bottom_up = build_clip_resnet_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.FPN.OUT_CHANNELS - backbone = FPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - norm=cfg.MODEL.FPN.NORM, - top_block=LastLevelMaxPool(), - fuse_type=cfg.MODEL.FPN.FUSE_TYPE, - ) - return backbone - -@BACKBONE_REGISTRY.register() -def build_retinanet_resnet_fpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. - """ - bottom_up = build_resnet_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.FPN.OUT_CHANNELS - in_channels_p6p7 = bottom_up.output_shape()["res5"].channels - backbone = FPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - norm=cfg.MODEL.FPN.NORM, - top_block=LastLevelP6P7(in_channels_p6p7, out_channels), - fuse_type=cfg.MODEL.FPN.FUSE_TYPE, - ) - return backbone diff --git a/spaces/CarlDennis/HYTTS/text/__init__.py b/spaces/CarlDennis/HYTTS/text/__init__.py deleted file mode 100644 index 0c6416d709b458491ace4d10ae27c6ca94b73a88..0000000000000000000000000000000000000000 --- a/spaces/CarlDennis/HYTTS/text/__init__.py +++ /dev/null @@ -1,33 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/CikeyQI/meme-api/docs/install.md b/spaces/CikeyQI/meme-api/docs/install.md deleted file mode 100644 index 54cce4221e20f53f0dfadefb43b43e3c31197ab0..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/docs/install.md +++ /dev/null @@ -1,124 +0,0 @@ -## 本地安装 - -### 使用 pip 安装 - -```bash -pip install meme_generator -``` - -#### 图片下载 - -由于表情包图片体积较大,`meme-generator` 包含的表情中的图片并不随代码一起打包,需要在安装后手动执行下载命令: - -```bash -meme download -``` - -### 直接运行源代码 - -克隆当前仓库: - -```bash -git clone https://github.com/MeetWq/meme-generator -``` - -通过 `python -m meme_generator.app` 运行 web 服务器 - -通过 `python -m meme_generator.cli` 运行命令行程序 - - -### 字体安装 - -为确保表情包中的文字生成正常,需要自行安装字体 - -> **Note** -> -> 字体安装后若文字仍显示不正常,可删掉 `matplotlib` 字体缓存文件重新运行程序 -> -> 缓存文件位置: -> - Windows: `C:\Users\\.matplotlib\fontlist-xxx.json` -> - Linux: `~/.cache/matplotlib/fontlist-xxx.json` -> - Mac: `~/Library/Caches/matplotlib/fontlist-xxx.json` - - -#### 中文字体 和 emoji字体 安装 - -根据系统的不同,推荐安装的字体如下: - -- Windows: - -大部分 Windows 系统自带 [微软雅黑](https://learn.microsoft.com/zh-cn/typography/font-list/microsoft-yahei) 中文字体 和 [Segoe UI Emoji](https://learn.microsoft.com/zh-cn/typography/font-list/segoe-ui-emoji) emoji 字体,一般情况下无需额外安装 - - -- Linux: - -部分系统可能自带 [文泉驿微米黑](http://wenq.org/wqy2/index.cgi?MicroHei) 中文字体; - -对于 Ubuntu 系统,推荐安装 Noto Sans CJK 和 Noto Color Emoji: - -```bash -sudo apt install fonts-noto-cjk fonts-noto-color-emoji -``` - -为避免 Noto Sans CJK 中部分中文显示为异体(日文)字形,可以将简体中文设置为默认语言(详见 [ArchWiki](https://wiki.archlinux.org/title/Localization/Simplified_Chinese?rdfrom=https%3A%2F%2Fwiki.archlinux.org%2Findex.php%3Ftitle%3DLocalization_%28%25E7%25AE%2580%25E4%25BD%2593%25E4%25B8%25AD%25E6%2596%2587%29%2FSimplified_Chinese_%28%25E7%25AE%2580%25E4%25BD%2593%25E4%25B8%25AD%25E6%2596%2587%29%26redirect%3Dno#%E4%BF%AE%E6%AD%A3%E7%AE%80%E4%BD%93%E4%B8%AD%E6%96%87%E6%98%BE%E7%A4%BA%E4%B8%BA%E5%BC%82%E4%BD%93%EF%BC%88%E6%97%A5%E6%96%87%EF%BC%89%E5%AD%97%E5%BD%A2)): - -```bash -sudo locale-gen zh_CN zh_CN.UTF-8 -sudo update-locale LC_ALL=zh_CN.UTF-8 LANG=zh_CN.UTF-8 -fc-cache -fv -``` - -其他 Linux 系统可以自行下载字体文件安装: - -思源黑体:https://github.com/adobe-fonts/source-han-sans - -NotoSansSC:https://fonts.google.com/noto/specimen/Noto+Sans+SC - -Noto Color Emoji:https://github.com/googlefonts/noto-emoji - - -- Mac: - -苹果系统一般自带 "PingFang SC" 中文字体 与 "Apple Color Emoji" emoji 字体 - - -#### 其他字体安装 - -某些表情包需要用到一些额外字体,存放于仓库中 [resources/fonts](https://github.com/MeetWq/meme-generator/tree/main/resources/fonts),需要自行下载安装 - -具体字体及对应的表情如下: - -| 字体名 | 字体文件名 | 用到该字体的表情 | 备注 | -| --- | --- | --- | --- | -| [Consolas](https://learn.microsoft.com/zh-cn/typography/font-list/consolas) | [consola.ttf](https://github.com/MeetWq/meme-generator/blob/main/resources/fonts/consola.ttf) | `charpic` | | -| [FZKaTong-M19S](https://www.foundertype.com/index.php/FontInfo/index/id/136) | 
[FZKATJW.ttf](https://github.com/MeetWq/meme-generator/blob/main/resources/fonts/FZKATJW.ttf) | `capoo_say` | 方正卡通 | -| [FZXS14](https://www.foundertype.com/index.php/FontInfo/index/id/208) | [FZXS14.ttf](https://github.com/MeetWq/meme-generator/blob/main/resources/fonts/FZXS14.ttf) | `nokia` | 方正像素14 | -| [FZSJ-QINGCRJ](https://www.foundertype.com/index.php/FontInfo/index/id/5178) | [FZSJ-QINGCRJ.ttf](https://github.com/MeetWq/meme-generator/blob/main/resources/fonts/FZSJ-QINGCRJ.ttf) | `psyduck`、`nijika_holdsign` | 方正手迹-青春日记 | -| [FZShaoEr-M11S](https://www.foundertype.com/index.php/FontInfo/index/id/149) | [FZSEJW.ttf](https://github.com/MeetWq/meme-generator/blob/main/resources/fonts/FZSEJW.ttf) | `raise_sign`、`nekoha_holdsign` | 方正少儿 | -| [NotoSansSC](https://fonts.google.com/noto/specimen/Noto+Sans+SC) | [NotoSansSC-Regular.otf](https://github.com/MeetWq/meme-generator/blob/main/resources/fonts/NotoSansSC-Regular.otf) | `5000choyen` | | -| [NotoSerifSC](https://fonts.google.com/noto/specimen/Noto+Serif+SC) | [NotoSerifSC-Regular.otf](https://github.com/MeetWq/meme-generator/blob/main/resources/fonts/NotoSerifSC-Regular.otf) | `5000choyen` | | -| [HiraginoMin](https://www.fonts.net.cn/font-36201269101.html) | [HiraginoMin-W5-90-RKSJ-H-2.ttc](https://github.com/MeetWq/meme-generator/blob/main/resources/fonts/HiraginoMin-W5-90-RKSJ-H-2.ttc) | `oshi_no_ko` | 明朝体 | -| [Aller](https://fonts.adobe.com/fonts/aller) | [Aller_Bd.ttf](https://github.com/MeetWq/meme-generator/blob/main/resources/fonts/Aller_Bd.ttf) | `osu` | | - - -#### 字体安装方式 - -不同系统的字体安装方式: - -- Windows: - - 双击通过字体查看器安装 - - 复制到字体文件夹:`C:\Windows\Fonts` - -- Linux: - -在 `/usr/share/fonts` 目录下新建文件夹,如 `myfonts`,将字体文件复制到该路径下; - -运行如下命令建立字体缓存: - -```bash -fc-cache -fv -``` - -- Mac: - -使用字体册打开字体文件安装 diff --git a/spaces/CorvaeOboro/gen_ability_icon/torch_utils/custom_ops.py b/spaces/CorvaeOboro/gen_ability_icon/torch_utils/custom_ops.py deleted file mode 100644 index 4cc4e43fc6f6ce79f2bd68a44ba87990b9b8564e..0000000000000000000000000000000000000000 --- a/spaces/CorvaeOboro/gen_ability_icon/torch_utils/custom_ops.py +++ /dev/null @@ -1,126 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import os -import glob -import torch -import torch.utils.cpp_extension -import importlib -import hashlib -import shutil -from pathlib import Path - -from torch.utils.file_baton import FileBaton - -#---------------------------------------------------------------------------- -# Global options. - -verbosity = 'brief' # Verbosity level: 'none', 'brief', 'full' - -#---------------------------------------------------------------------------- -# Internal helper funcs. 
- -def _find_compiler_bindir(): - patterns = [ - 'C:/Program Files (x86)/Microsoft Visual Studio/*/Professional/VC/Tools/MSVC/*/bin/Hostx64/x64', - 'C:/Program Files (x86)/Microsoft Visual Studio/*/BuildTools/VC/Tools/MSVC/*/bin/Hostx64/x64', - 'C:/Program Files (x86)/Microsoft Visual Studio/*/Community/VC/Tools/MSVC/*/bin/Hostx64/x64', - 'C:/Program Files (x86)/Microsoft Visual Studio */vc/bin', - ] - for pattern in patterns: - matches = sorted(glob.glob(pattern)) - if len(matches): - return matches[-1] - return None - -#---------------------------------------------------------------------------- -# Main entry point for compiling and loading C++/CUDA plugins. - -_cached_plugins = dict() - -def get_plugin(module_name, sources, **build_kwargs): - assert verbosity in ['none', 'brief', 'full'] - - # Already cached? - if module_name in _cached_plugins: - return _cached_plugins[module_name] - - # Print status. - if verbosity == 'full': - print(f'Setting up PyTorch plugin "{module_name}"...') - elif verbosity == 'brief': - print(f'Setting up PyTorch plugin "{module_name}"... ', end='', flush=True) - - try: # pylint: disable=too-many-nested-blocks - # Make sure we can find the necessary compiler binaries. - if os.name == 'nt' and os.system("where cl.exe >nul 2>nul") != 0: - compiler_bindir = _find_compiler_bindir() - if compiler_bindir is None: - raise RuntimeError(f'Could not find MSVC/GCC/CLANG installation on this computer. Check _find_compiler_bindir() in "{__file__}".') - os.environ['PATH'] += ';' + compiler_bindir - - # Compile and load. - verbose_build = (verbosity == 'full') - - # Incremental build md5sum trickery. Copies all the input source files - # into a cached build directory under a combined md5 digest of the input - # source files. Copying is done only if the combined digest has changed. - # This keeps input file timestamps and filenames the same as in previous - # extension builds, allowing for fast incremental rebuilds. - # - # This optimization is done only in case all the source files reside in - # a single directory (just for simplicity) and if the TORCH_EXTENSIONS_DIR - # environment variable is set (we take this as a signal that the user - # actually cares about this.) - source_dirs_set = set(os.path.dirname(source) for source in sources) - if len(source_dirs_set) == 1 and ('TORCH_EXTENSIONS_DIR' in os.environ): - all_source_files = sorted(list(x for x in Path(list(source_dirs_set)[0]).iterdir() if x.is_file())) - - # Compute a combined hash digest for all source files in the same - # custom op directory (usually .cu, .cpp, .py and .h files). - hash_md5 = hashlib.md5() - for src in all_source_files: - with open(src, 'rb') as f: - hash_md5.update(f.read()) - build_dir = torch.utils.cpp_extension._get_build_directory(module_name, verbose=verbose_build) # pylint: disable=protected-access - digest_build_dir = os.path.join(build_dir, hash_md5.hexdigest()) - - if not os.path.isdir(digest_build_dir): - os.makedirs(digest_build_dir, exist_ok=True) - baton = FileBaton(os.path.join(digest_build_dir, 'lock')) - if baton.try_acquire(): - try: - for src in all_source_files: - shutil.copyfile(src, os.path.join(digest_build_dir, os.path.basename(src))) - finally: - baton.release() - else: - # Someone else is copying source files under the digest dir, - # wait until done and continue. 
- baton.wait() - digest_sources = [os.path.join(digest_build_dir, os.path.basename(x)) for x in sources] - torch.utils.cpp_extension.load(name=module_name, build_directory=build_dir, - verbose=verbose_build, sources=digest_sources, **build_kwargs) - else: - torch.utils.cpp_extension.load(name=module_name, verbose=verbose_build, sources=sources, **build_kwargs) - module = importlib.import_module(module_name) - - except: - if verbosity == 'brief': - print('Failed!') - raise - - # Print status and add to cache. - if verbosity == 'full': - print(f'Done setting up PyTorch plugin "{module_name}".') - elif verbosity == 'brief': - print('Done.') - _cached_plugins[module_name] = module - return module - -#---------------------------------------------------------------------------- diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_backends/_trio.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_backends/_trio.py deleted file mode 100644 index cf2894350952e1169a6c77ea7c767e892f3efc1e..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_backends/_trio.py +++ /dev/null @@ -1,996 +0,0 @@ -from __future__ import annotations - -import array -import math -import socket -from concurrent.futures import Future -from contextvars import copy_context -from dataclasses import dataclass -from functools import partial -from io import IOBase -from os import PathLike -from signal import Signals -from types import TracebackType -from typing import ( - IO, - TYPE_CHECKING, - Any, - AsyncGenerator, - AsyncIterator, - Awaitable, - Callable, - Collection, - Coroutine, - Generic, - Iterable, - Mapping, - NoReturn, - Sequence, - TypeVar, - cast, -) - -import sniffio -import trio.from_thread -from outcome import Error, Outcome, Value -from trio.socket import SocketType as TrioSocketType -from trio.to_thread import run_sync - -from .. 
import CapacityLimiterStatistics, EventStatistics, TaskInfo, abc -from .._core._compat import DeprecatedAsyncContextManager, DeprecatedAwaitable -from .._core._eventloop import claim_worker_thread -from .._core._exceptions import ( - BrokenResourceError, - BusyResourceError, - ClosedResourceError, - EndOfStream, -) -from .._core._exceptions import ExceptionGroup as BaseExceptionGroup -from .._core._sockets import convert_ipv6_sockaddr -from .._core._synchronization import CapacityLimiter as BaseCapacityLimiter -from .._core._synchronization import Event as BaseEvent -from .._core._synchronization import ResourceGuard -from .._core._tasks import CancelScope as BaseCancelScope -from ..abc import IPSockAddrType, UDPPacketType - -if TYPE_CHECKING: - from trio_typing import TaskStatus - -try: - from trio import lowlevel as trio_lowlevel -except ImportError: - from trio import hazmat as trio_lowlevel # type: ignore[no-redef] - from trio.hazmat import wait_readable, wait_writable -else: - from trio.lowlevel import wait_readable, wait_writable - -try: - trio_open_process = trio_lowlevel.open_process -except AttributeError: - # isort: off - from trio import ( # type: ignore[attr-defined, no-redef] - open_process as trio_open_process, - ) - -T_Retval = TypeVar("T_Retval") -T_SockAddr = TypeVar("T_SockAddr", str, IPSockAddrType) - - -# -# Event loop -# - -run = trio.run -current_token = trio.lowlevel.current_trio_token -RunVar = trio.lowlevel.RunVar - - -# -# Miscellaneous -# - -sleep = trio.sleep - - -# -# Timeouts and cancellation -# - - -class CancelScope(BaseCancelScope): - def __new__( - cls, original: trio.CancelScope | None = None, **kwargs: object - ) -> CancelScope: - return object.__new__(cls) - - def __init__(self, original: trio.CancelScope | None = None, **kwargs: Any) -> None: - self.__original = original or trio.CancelScope(**kwargs) - - def __enter__(self) -> CancelScope: - self.__original.__enter__() - return self - - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - # https://github.com/python-trio/trio-typing/pull/79 - return self.__original.__exit__( # type: ignore[func-returns-value] - exc_type, exc_val, exc_tb - ) - - def cancel(self) -> DeprecatedAwaitable: - self.__original.cancel() - return DeprecatedAwaitable(self.cancel) - - @property - def deadline(self) -> float: - return self.__original.deadline - - @deadline.setter - def deadline(self, value: float) -> None: - self.__original.deadline = value - - @property - def cancel_called(self) -> bool: - return self.__original.cancel_called - - @property - def shield(self) -> bool: - return self.__original.shield - - @shield.setter - def shield(self, value: bool) -> None: - self.__original.shield = value - - -CancelledError = trio.Cancelled -checkpoint = trio.lowlevel.checkpoint -checkpoint_if_cancelled = trio.lowlevel.checkpoint_if_cancelled -cancel_shielded_checkpoint = trio.lowlevel.cancel_shielded_checkpoint -current_effective_deadline = trio.current_effective_deadline -current_time = trio.current_time - - -# -# Task groups -# - - -class ExceptionGroup(BaseExceptionGroup, trio.MultiError): - pass - - -class TaskGroup(abc.TaskGroup): - def __init__(self) -> None: - self._active = False - self._nursery_manager = trio.open_nursery() - self.cancel_scope = None # type: ignore[assignment] - - async def __aenter__(self) -> TaskGroup: - self._active = True - self._nursery = await self._nursery_manager.__aenter__() - self.cancel_scope = 
CancelScope(self._nursery.cancel_scope) - return self - - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - try: - return await self._nursery_manager.__aexit__(exc_type, exc_val, exc_tb) - except trio.MultiError as exc: - raise ExceptionGroup(exc.exceptions) from None - finally: - self._active = False - - def start_soon( - self, func: Callable[..., Awaitable[Any]], *args: object, name: object = None - ) -> None: - if not self._active: - raise RuntimeError( - "This task group is not active; no new tasks can be started." - ) - - self._nursery.start_soon(func, *args, name=name) - - async def start( - self, func: Callable[..., Awaitable[Any]], *args: object, name: object = None - ) -> object: - if not self._active: - raise RuntimeError( - "This task group is not active; no new tasks can be started." - ) - - return await self._nursery.start(func, *args, name=name) - - -# -# Threads -# - - -async def run_sync_in_worker_thread( - func: Callable[..., T_Retval], - *args: object, - cancellable: bool = False, - limiter: trio.CapacityLimiter | None = None, -) -> T_Retval: - def wrapper() -> T_Retval: - with claim_worker_thread("trio"): - return func(*args) - - # TODO: remove explicit context copying when trio 0.20 is the minimum requirement - context = copy_context() - context.run(sniffio.current_async_library_cvar.set, None) - return await run_sync( - context.run, wrapper, cancellable=cancellable, limiter=limiter - ) - - -# TODO: remove this workaround when trio 0.20 is the minimum requirement -def run_async_from_thread( - fn: Callable[..., Awaitable[T_Retval]], *args: Any -) -> T_Retval: - async def wrapper() -> T_Retval: - retval: T_Retval - - async def inner() -> None: - nonlocal retval - __tracebackhide__ = True - retval = await fn(*args) - - async with trio.open_nursery() as n: - context.run(n.start_soon, inner) - - __tracebackhide__ = True - return retval # noqa: F821 - - context = copy_context() - context.run(sniffio.current_async_library_cvar.set, "trio") - return trio.from_thread.run(wrapper) - - -def run_sync_from_thread(fn: Callable[..., T_Retval], *args: Any) -> T_Retval: - # TODO: remove explicit context copying when trio 0.20 is the minimum requirement - retval = trio.from_thread.run_sync(copy_context().run, fn, *args) - return cast(T_Retval, retval) - - -class BlockingPortal(abc.BlockingPortal): - def __new__(cls) -> BlockingPortal: - return object.__new__(cls) - - def __init__(self) -> None: - super().__init__() - self._token = trio.lowlevel.current_trio_token() - - def _spawn_task_from_thread( - self, - func: Callable, - args: tuple, - kwargs: dict[str, Any], - name: object, - future: Future, - ) -> None: - context = copy_context() - context.run(sniffio.current_async_library_cvar.set, "trio") - trio.from_thread.run_sync( - context.run, - partial(self._task_group.start_soon, name=name), - self._call_func, - func, - args, - kwargs, - future, - trio_token=self._token, - ) - - -# -# Subprocesses -# - - -@dataclass(eq=False) -class ReceiveStreamWrapper(abc.ByteReceiveStream): - _stream: trio.abc.ReceiveStream - - async def receive(self, max_bytes: int | None = None) -> bytes: - try: - data = await self._stream.receive_some(max_bytes) - except trio.ClosedResourceError as exc: - raise ClosedResourceError from exc.__cause__ - except trio.BrokenResourceError as exc: - raise BrokenResourceError from exc.__cause__ - - if data: - return data - else: - raise EndOfStream - - async def 
aclose(self) -> None: - await self._stream.aclose() - - -@dataclass(eq=False) -class SendStreamWrapper(abc.ByteSendStream): - _stream: trio.abc.SendStream - - async def send(self, item: bytes) -> None: - try: - await self._stream.send_all(item) - except trio.ClosedResourceError as exc: - raise ClosedResourceError from exc.__cause__ - except trio.BrokenResourceError as exc: - raise BrokenResourceError from exc.__cause__ - - async def aclose(self) -> None: - await self._stream.aclose() - - -@dataclass(eq=False) -class Process(abc.Process): - _process: trio.Process - _stdin: abc.ByteSendStream | None - _stdout: abc.ByteReceiveStream | None - _stderr: abc.ByteReceiveStream | None - - async def aclose(self) -> None: - if self._stdin: - await self._stdin.aclose() - if self._stdout: - await self._stdout.aclose() - if self._stderr: - await self._stderr.aclose() - - await self.wait() - - async def wait(self) -> int: - return await self._process.wait() - - def terminate(self) -> None: - self._process.terminate() - - def kill(self) -> None: - self._process.kill() - - def send_signal(self, signal: Signals) -> None: - self._process.send_signal(signal) - - @property - def pid(self) -> int: - return self._process.pid - - @property - def returncode(self) -> int | None: - return self._process.returncode - - @property - def stdin(self) -> abc.ByteSendStream | None: - return self._stdin - - @property - def stdout(self) -> abc.ByteReceiveStream | None: - return self._stdout - - @property - def stderr(self) -> abc.ByteReceiveStream | None: - return self._stderr - - -async def open_process( - command: str | bytes | Sequence[str | bytes], - *, - shell: bool, - stdin: int | IO[Any] | None, - stdout: int | IO[Any] | None, - stderr: int | IO[Any] | None, - cwd: str | bytes | PathLike | None = None, - env: Mapping[str, str] | None = None, - start_new_session: bool = False, -) -> Process: - process = await trio_open_process( # type: ignore[misc] - command, # type: ignore[arg-type] - stdin=stdin, - stdout=stdout, - stderr=stderr, - shell=shell, - cwd=cwd, - env=env, - start_new_session=start_new_session, - ) - stdin_stream = SendStreamWrapper(process.stdin) if process.stdin else None - stdout_stream = ReceiveStreamWrapper(process.stdout) if process.stdout else None - stderr_stream = ReceiveStreamWrapper(process.stderr) if process.stderr else None - return Process(process, stdin_stream, stdout_stream, stderr_stream) - - -class _ProcessPoolShutdownInstrument(trio.abc.Instrument): - def after_run(self) -> None: - super().after_run() - - -current_default_worker_process_limiter: RunVar = RunVar( - "current_default_worker_process_limiter" -) - - -async def _shutdown_process_pool(workers: set[Process]) -> None: - process: Process - try: - await sleep(math.inf) - except trio.Cancelled: - for process in workers: - if process.returncode is None: - process.kill() - - with CancelScope(shield=True): - for process in workers: - await process.aclose() - - -def setup_process_pool_exit_at_shutdown(workers: set[Process]) -> None: - trio.lowlevel.spawn_system_task(_shutdown_process_pool, workers) - - -# -# Sockets and networking -# - - -class _TrioSocketMixin(Generic[T_SockAddr]): - def __init__(self, trio_socket: TrioSocketType) -> None: - self._trio_socket = trio_socket - self._closed = False - - def _check_closed(self) -> None: - if self._closed: - raise ClosedResourceError - if self._trio_socket.fileno() < 0: - raise BrokenResourceError - - @property - def _raw_socket(self) -> socket.socket: - return self._trio_socket._sock # type: 
ignore[attr-defined] - - async def aclose(self) -> None: - if self._trio_socket.fileno() >= 0: - self._closed = True - self._trio_socket.close() - - def _convert_socket_error(self, exc: BaseException) -> NoReturn: - if isinstance(exc, trio.ClosedResourceError): - raise ClosedResourceError from exc - elif self._trio_socket.fileno() < 0 and self._closed: - raise ClosedResourceError from None - elif isinstance(exc, OSError): - raise BrokenResourceError from exc - else: - raise exc - - -class SocketStream(_TrioSocketMixin, abc.SocketStream): - def __init__(self, trio_socket: TrioSocketType) -> None: - super().__init__(trio_socket) - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - - async def receive(self, max_bytes: int = 65536) -> bytes: - with self._receive_guard: - try: - data = await self._trio_socket.recv(max_bytes) - except BaseException as exc: - self._convert_socket_error(exc) - - if data: - return data - else: - raise EndOfStream - - async def send(self, item: bytes) -> None: - with self._send_guard: - view = memoryview(item) - while view: - try: - bytes_sent = await self._trio_socket.send(view) - except BaseException as exc: - self._convert_socket_error(exc) - - view = view[bytes_sent:] - - async def send_eof(self) -> None: - self._trio_socket.shutdown(socket.SHUT_WR) - - -class UNIXSocketStream(SocketStream, abc.UNIXSocketStream): - async def receive_fds(self, msglen: int, maxfds: int) -> tuple[bytes, list[int]]: - if not isinstance(msglen, int) or msglen < 0: - raise ValueError("msglen must be a non-negative integer") - if not isinstance(maxfds, int) or maxfds < 1: - raise ValueError("maxfds must be a positive integer") - - fds = array.array("i") - await checkpoint() - with self._receive_guard: - while True: - try: - message, ancdata, flags, addr = await self._trio_socket.recvmsg( - msglen, socket.CMSG_LEN(maxfds * fds.itemsize) - ) - except BaseException as exc: - self._convert_socket_error(exc) - else: - if not message and not ancdata: - raise EndOfStream - - break - - for cmsg_level, cmsg_type, cmsg_data in ancdata: - if cmsg_level != socket.SOL_SOCKET or cmsg_type != socket.SCM_RIGHTS: - raise RuntimeError( - f"Received unexpected ancillary data; message = {message!r}, " - f"cmsg_level = {cmsg_level}, cmsg_type = {cmsg_type}" - ) - - fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) - - return message, list(fds) - - async def send_fds(self, message: bytes, fds: Collection[int | IOBase]) -> None: - if not message: - raise ValueError("message must not be empty") - if not fds: - raise ValueError("fds must not be empty") - - filenos: list[int] = [] - for fd in fds: - if isinstance(fd, int): - filenos.append(fd) - elif isinstance(fd, IOBase): - filenos.append(fd.fileno()) - - fdarray = array.array("i", filenos) - await checkpoint() - with self._send_guard: - while True: - try: - await self._trio_socket.sendmsg( - [message], - [ - ( - socket.SOL_SOCKET, - socket.SCM_RIGHTS, # type: ignore[list-item] - fdarray, - ) - ], - ) - break - except BaseException as exc: - self._convert_socket_error(exc) - - -class TCPSocketListener(_TrioSocketMixin, abc.SocketListener): - def __init__(self, raw_socket: socket.socket): - super().__init__(trio.socket.from_stdlib_socket(raw_socket)) - self._accept_guard = ResourceGuard("accepting connections from") - - async def accept(self) -> SocketStream: - with self._accept_guard: - try: - trio_socket, _addr = await self._trio_socket.accept() - except BaseException as exc: - 
self._convert_socket_error(exc) - - trio_socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) - return SocketStream(trio_socket) - - -class UNIXSocketListener(_TrioSocketMixin, abc.SocketListener): - def __init__(self, raw_socket: socket.socket): - super().__init__(trio.socket.from_stdlib_socket(raw_socket)) - self._accept_guard = ResourceGuard("accepting connections from") - - async def accept(self) -> UNIXSocketStream: - with self._accept_guard: - try: - trio_socket, _addr = await self._trio_socket.accept() - except BaseException as exc: - self._convert_socket_error(exc) - - return UNIXSocketStream(trio_socket) - - -class UDPSocket(_TrioSocketMixin[IPSockAddrType], abc.UDPSocket): - def __init__(self, trio_socket: TrioSocketType) -> None: - super().__init__(trio_socket) - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - - async def receive(self) -> tuple[bytes, IPSockAddrType]: - with self._receive_guard: - try: - data, addr = await self._trio_socket.recvfrom(65536) - return data, convert_ipv6_sockaddr(addr) - except BaseException as exc: - self._convert_socket_error(exc) - - async def send(self, item: UDPPacketType) -> None: - with self._send_guard: - try: - await self._trio_socket.sendto(*item) - except BaseException as exc: - self._convert_socket_error(exc) - - -class ConnectedUDPSocket(_TrioSocketMixin[IPSockAddrType], abc.ConnectedUDPSocket): - def __init__(self, trio_socket: TrioSocketType) -> None: - super().__init__(trio_socket) - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - - async def receive(self) -> bytes: - with self._receive_guard: - try: - return await self._trio_socket.recv(65536) - except BaseException as exc: - self._convert_socket_error(exc) - - async def send(self, item: bytes) -> None: - with self._send_guard: - try: - await self._trio_socket.send(item) - except BaseException as exc: - self._convert_socket_error(exc) - - -async def connect_tcp( - host: str, port: int, local_address: IPSockAddrType | None = None -) -> SocketStream: - family = socket.AF_INET6 if ":" in host else socket.AF_INET - trio_socket = trio.socket.socket(family) - trio_socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) - if local_address: - await trio_socket.bind(local_address) - - try: - await trio_socket.connect((host, port)) - except BaseException: - trio_socket.close() - raise - - return SocketStream(trio_socket) - - -async def connect_unix(path: str) -> UNIXSocketStream: - trio_socket = trio.socket.socket(socket.AF_UNIX) - try: - await trio_socket.connect(path) - except BaseException: - trio_socket.close() - raise - - return UNIXSocketStream(trio_socket) - - -async def create_udp_socket( - family: socket.AddressFamily, - local_address: IPSockAddrType | None, - remote_address: IPSockAddrType | None, - reuse_port: bool, -) -> UDPSocket | ConnectedUDPSocket: - trio_socket = trio.socket.socket(family=family, type=socket.SOCK_DGRAM) - - if reuse_port: - trio_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1) - - if local_address: - await trio_socket.bind(local_address) - - if remote_address: - await trio_socket.connect(remote_address) - return ConnectedUDPSocket(trio_socket) - else: - return UDPSocket(trio_socket) - - -getaddrinfo = trio.socket.getaddrinfo -getnameinfo = trio.socket.getnameinfo - - -async def wait_socket_readable(sock: socket.socket) -> None: - try: - await wait_readable(sock) - except trio.ClosedResourceError as exc: - raise 
ClosedResourceError().with_traceback(exc.__traceback__) from None - except trio.BusyResourceError: - raise BusyResourceError("reading from") from None - - -async def wait_socket_writable(sock: socket.socket) -> None: - try: - await wait_writable(sock) - except trio.ClosedResourceError as exc: - raise ClosedResourceError().with_traceback(exc.__traceback__) from None - except trio.BusyResourceError: - raise BusyResourceError("writing to") from None - - -# -# Synchronization -# - - -class Event(BaseEvent): - def __new__(cls) -> Event: - return object.__new__(cls) - - def __init__(self) -> None: - self.__original = trio.Event() - - def is_set(self) -> bool: - return self.__original.is_set() - - async def wait(self) -> None: - return await self.__original.wait() - - def statistics(self) -> EventStatistics: - orig_statistics = self.__original.statistics() - return EventStatistics(tasks_waiting=orig_statistics.tasks_waiting) - - def set(self) -> DeprecatedAwaitable: - self.__original.set() - return DeprecatedAwaitable(self.set) - - -class CapacityLimiter(BaseCapacityLimiter): - def __new__(cls, *args: object, **kwargs: object) -> CapacityLimiter: - return object.__new__(cls) - - def __init__( - self, *args: Any, original: trio.CapacityLimiter | None = None - ) -> None: - self.__original = original or trio.CapacityLimiter(*args) - - async def __aenter__(self) -> None: - return await self.__original.__aenter__() - - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> None: - await self.__original.__aexit__(exc_type, exc_val, exc_tb) - - @property - def total_tokens(self) -> float: - return self.__original.total_tokens - - @total_tokens.setter - def total_tokens(self, value: float) -> None: - self.__original.total_tokens = value - - @property - def borrowed_tokens(self) -> int: - return self.__original.borrowed_tokens - - @property - def available_tokens(self) -> float: - return self.__original.available_tokens - - def acquire_nowait(self) -> DeprecatedAwaitable: - self.__original.acquire_nowait() - return DeprecatedAwaitable(self.acquire_nowait) - - def acquire_on_behalf_of_nowait(self, borrower: object) -> DeprecatedAwaitable: - self.__original.acquire_on_behalf_of_nowait(borrower) - return DeprecatedAwaitable(self.acquire_on_behalf_of_nowait) - - async def acquire(self) -> None: - await self.__original.acquire() - - async def acquire_on_behalf_of(self, borrower: object) -> None: - await self.__original.acquire_on_behalf_of(borrower) - - def release(self) -> None: - return self.__original.release() - - def release_on_behalf_of(self, borrower: object) -> None: - return self.__original.release_on_behalf_of(borrower) - - def statistics(self) -> CapacityLimiterStatistics: - orig = self.__original.statistics() - return CapacityLimiterStatistics( - borrowed_tokens=orig.borrowed_tokens, - total_tokens=orig.total_tokens, - borrowers=orig.borrowers, - tasks_waiting=orig.tasks_waiting, - ) - - -_capacity_limiter_wrapper: RunVar = RunVar("_capacity_limiter_wrapper") - - -def current_default_thread_limiter() -> CapacityLimiter: - try: - return _capacity_limiter_wrapper.get() - except LookupError: - limiter = CapacityLimiter( - original=trio.to_thread.current_default_thread_limiter() - ) - _capacity_limiter_wrapper.set(limiter) - return limiter - - -# -# Signal handling -# - - -class _SignalReceiver(DeprecatedAsyncContextManager["_SignalReceiver"]): - _iterator: AsyncIterator[int] - - def __init__(self, signals: 
tuple[Signals, ...]): - self._signals = signals - - def __enter__(self) -> _SignalReceiver: - self._cm = trio.open_signal_receiver(*self._signals) - self._iterator = self._cm.__enter__() - return self - - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - return self._cm.__exit__(exc_type, exc_val, exc_tb) - - def __aiter__(self) -> _SignalReceiver: - return self - - async def __anext__(self) -> Signals: - signum = await self._iterator.__anext__() - return Signals(signum) - - -def open_signal_receiver(*signals: Signals) -> _SignalReceiver: - return _SignalReceiver(signals) - - -# -# Testing and debugging -# - - -def get_current_task() -> TaskInfo: - task = trio_lowlevel.current_task() - - parent_id = None - if task.parent_nursery and task.parent_nursery.parent_task: - parent_id = id(task.parent_nursery.parent_task) - - return TaskInfo(id(task), parent_id, task.name, task.coro) - - -def get_running_tasks() -> list[TaskInfo]: - root_task = trio_lowlevel.current_root_task() - task_infos = [TaskInfo(id(root_task), None, root_task.name, root_task.coro)] - nurseries = root_task.child_nurseries - while nurseries: - new_nurseries: list[trio.Nursery] = [] - for nursery in nurseries: - for task in nursery.child_tasks: - task_infos.append( - TaskInfo(id(task), id(nursery.parent_task), task.name, task.coro) - ) - new_nurseries.extend(task.child_nurseries) - - nurseries = new_nurseries - - return task_infos - - -def wait_all_tasks_blocked() -> Awaitable[None]: - import trio.testing - - return trio.testing.wait_all_tasks_blocked() - - -class TestRunner(abc.TestRunner): - def __init__(self, **options: Any) -> None: - from collections import deque - from queue import Queue - - self._call_queue: Queue[Callable[..., object]] = Queue() - self._result_queue: deque[Outcome] = deque() - self._stop_event: trio.Event | None = None - self._nursery: trio.Nursery | None = None - self._options = options - - async def _trio_main(self) -> None: - self._stop_event = trio.Event() - async with trio.open_nursery() as self._nursery: - await self._stop_event.wait() - - async def _call_func( - self, func: Callable[..., Awaitable[object]], args: tuple, kwargs: dict - ) -> None: - try: - retval = await func(*args, **kwargs) - except BaseException as exc: - self._result_queue.append(Error(exc)) - else: - self._result_queue.append(Value(retval)) - - def _main_task_finished(self, outcome: object) -> None: - self._nursery = None - - def _get_nursery(self) -> trio.Nursery: - if self._nursery is None: - trio.lowlevel.start_guest_run( - self._trio_main, - run_sync_soon_threadsafe=self._call_queue.put, - done_callback=self._main_task_finished, - **self._options, - ) - while self._nursery is None: - self._call_queue.get()() - - return self._nursery - - def _call( - self, func: Callable[..., Awaitable[T_Retval]], *args: object, **kwargs: object - ) -> T_Retval: - self._get_nursery().start_soon(self._call_func, func, args, kwargs) - while not self._result_queue: - self._call_queue.get()() - - outcome = self._result_queue.pop() - return outcome.unwrap() - - def close(self) -> None: - if self._stop_event: - self._stop_event.set() - while self._nursery is not None: - self._call_queue.get()() - - def run_asyncgen_fixture( - self, - fixture_func: Callable[..., AsyncGenerator[T_Retval, Any]], - kwargs: dict[str, Any], - ) -> Iterable[T_Retval]: - async def fixture_runner(*, task_status: TaskStatus[T_Retval]) -> None: - agen = fixture_func(**kwargs) 
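- # Drive the fixture's async generator to its first yield; the yielded value is the fixture value and is handed back to the caller through task_status.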
- retval = await agen.asend(None) - task_status.started(retval) - await teardown_event.wait() - try: - await agen.asend(None) - except StopAsyncIteration: - pass - else: - await agen.aclose() - raise RuntimeError("Async generator fixture did not stop") - - teardown_event = trio.Event() - fixture_value = self._call(lambda: self._get_nursery().start(fixture_runner)) - yield fixture_value - teardown_event.set() - - def run_fixture( - self, - fixture_func: Callable[..., Coroutine[Any, Any, T_Retval]], - kwargs: dict[str, Any], - ) -> T_Retval: - return self._call(fixture_func, **kwargs) - - def run_test( - self, test_func: Callable[..., Coroutine[Any, Any, Any]], kwargs: dict[str, Any] - ) -> None: - self._call(test_func, **kwargs) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/middleware/httpsredirect.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/middleware/httpsredirect.py deleted file mode 100644 index b7a3d8e078574e87dc6e345d621f5a596c3bdc1e..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/middleware/httpsredirect.py +++ /dev/null @@ -1,3 +0,0 @@ -from starlette.middleware.httpsredirect import ( # noqa - HTTPSRedirectMiddleware as HTTPSRedirectMiddleware, -) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/inference/_client.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/inference/_client.py deleted file mode 100644 index 868d1cea5ad2037735034c74a20a0cb4769e8c39..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/inference/_client.py +++ /dev/null @@ -1,1258 +0,0 @@ -# coding=utf-8 -# Copyright 2023-present, the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# -# Related resources: -# https://huggingface.co/tasks -# https://huggingface.co/docs/huggingface.js/inference/README -# https://github.com/huggingface/huggingface.js/tree/main/packages/inference/src -# https://github.com/huggingface/text-generation-inference/tree/main/clients/python -# https://github.com/huggingface/text-generation-inference/blob/main/clients/python/text_generation/client.py -# https://huggingface.slack.com/archives/C03E4DQ9LAJ/p1680169099087869 -# https://github.com/huggingface/unity-api#tasks -# -# Some TODO: -# - validate inputs/options/parameters? with Pydantic for instance? or only optionally? -# - add all tasks -# -# NOTE: the philosophy of this client is "let's make it as easy as possible to use it, even if less optimized". Some -# examples of how it translates: -# - Timeout / Server unavailable is handled by the client in a single "timeout" parameter. -# - Files can be provided as bytes, file paths, or URLs and the client will try to "guess" the type. -# - Images are parsed as PIL.Image for easier manipulation. -# - Provides a "recommended model" for each task => suboptimal but user-wise quicker to get a first script running. 
-# - Only the main parameters are publicly exposed. Power users can always read the docs for more options. -import logging -import time -import warnings -from dataclasses import asdict -from typing import ( - TYPE_CHECKING, - Any, - Dict, - Iterable, - List, - Optional, - Union, - overload, -) - -from requests import HTTPError -from requests.structures import CaseInsensitiveDict - -from huggingface_hub.constants import INFERENCE_ENDPOINT -from huggingface_hub.inference._common import ( - ContentT, - InferenceTimeoutError, - _b64_encode, - _b64_to_image, - _bytes_to_dict, - _bytes_to_image, - _get_recommended_model, - _import_numpy, - _is_tgi_server, - _open_as_binary, - _set_as_non_tgi, - _stream_text_generation_response, -) -from huggingface_hub.inference._text_generation import ( - TextGenerationParameters, - TextGenerationRequest, - TextGenerationResponse, - TextGenerationStreamResponse, - raise_text_generation_error, -) -from huggingface_hub.inference._types import ClassificationOutput, ConversationalOutput, ImageSegmentationOutput -from huggingface_hub.utils import ( - BadRequestError, - build_hf_headers, - get_session, - hf_raise_for_status, -) -from huggingface_hub.utils._typing import Literal - - -if TYPE_CHECKING: - import numpy as np - from PIL import Image - -logger = logging.getLogger(__name__) - - -class InferenceClient: - """ - Initialize a new Inference Client. - - [`InferenceClient`] aims to provide a unified experience to perform inference. The client can be used - seamlessly with either the (free) Inference API or self-hosted Inference Endpoints. - - Args: - model (`str`, `optional`): - The model to run inference with. Can be a model id hosted on the Hugging Face Hub, e.g. `bigcode/starcoder` - or a URL to a deployed Inference Endpoint. Defaults to None, in which case a recommended model is - automatically selected for the task. - token (`str`, *optional*): - Hugging Face token. Will default to the locally saved token. Pass `token=False` if you don't want to send - your token to the server. - timeout (`float`, `optional`): - The maximum number of seconds to wait for a response from the server. Loading a new model in Inference - API can take up to several minutes. Defaults to None, meaning it will loop until the server is available. - headers (`Dict[str, str]`, `optional`): - Additional headers to send to the server. By default only the authorization and user-agent headers are sent. - Values in this dictionary will override the default values. - cookies (`Dict[str, str]`, `optional`): - Additional cookies to send to the server. 
- """ - - def __init__( - self, - model: Optional[str] = None, - token: Union[str, bool, None] = None, - timeout: Optional[float] = None, - headers: Optional[Dict[str, str]] = None, - cookies: Optional[Dict[str, str]] = None, - ) -> None: - self.model: Optional[str] = model - self.headers = CaseInsensitiveDict(build_hf_headers(token=token)) # contains 'authorization' + 'user-agent' - if headers is not None: - self.headers.update(headers) - self.cookies = cookies - self.timeout = timeout - - def __repr__(self): - return f"" - - @overload - def post( # type: ignore - self, - *, - json: Optional[Union[str, Dict, List]] = None, - data: Optional[ContentT] = None, - model: Optional[str] = None, - task: Optional[str] = None, - stream: Literal[False] = ..., - ) -> bytes: - pass - - @overload - def post( # type: ignore - self, - *, - json: Optional[Union[str, Dict, List]] = None, - data: Optional[ContentT] = None, - model: Optional[str] = None, - task: Optional[str] = None, - stream: Literal[True] = ..., - ) -> Iterable[bytes]: - pass - - def post( - self, - *, - json: Optional[Union[str, Dict, List]] = None, - data: Optional[ContentT] = None, - model: Optional[str] = None, - task: Optional[str] = None, - stream: bool = False, - ) -> Union[bytes, Iterable[bytes]]: - """ - Make a POST request to the inference server. - - Args: - json (`Union[str, Dict, List]`, *optional*): - The JSON data to send in the request body. Defaults to None. - data (`Union[str, Path, bytes, BinaryIO]`, *optional*): - The content to send in the request body. It can be raw bytes, a pointer to an opened file, a local file - path, or a URL to an online resource (image, audio file,...). If both `json` and `data` are passed, - `data` will take precedence. At least `json` or `data` must be provided. Defaults to None. - model (`str`, *optional*): - The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. Will override the model defined at the instance level. Defaults to None. - task (`str`, *optional*): - The task to perform on the inference. Used only to default to a recommended model if `model` is not - provided. At least `model` or `task` must be provided. Defaults to None. - stream (`bool`, *optional*): - Whether to iterate over streaming APIs. - - Returns: - bytes: The raw bytes returned by the server. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `HTTPError`: - If the request fails with an HTTP error status code other than HTTP 503. - """ - url = self._resolve_url(model, task) - - if data is not None and json is not None: - warnings.warn("Ignoring `json` as `data` is passed as binary.") - - t0 = time.time() - timeout = self.timeout - while True: - with _open_as_binary(data) as data_as_binary: - try: - response = get_session().post( - url, - json=json, - data=data_as_binary, - headers=self.headers, - cookies=self.cookies, - timeout=self.timeout, - stream=stream, - ) - except TimeoutError as error: - # Convert any `TimeoutError` to a `InferenceTimeoutError` - raise InferenceTimeoutError(f"Inference call timed out: {url}") from error - - try: - hf_raise_for_status(response) - return response.iter_lines() if stream else response.content - except HTTPError as error: - if error.response.status_code == 503: - # If Model is unavailable, either raise a TimeoutError... - if timeout is not None and time.time() - t0 > timeout: - raise InferenceTimeoutError( - f"Model not loaded on the server: {url}. 
Please retry with a higher timeout (current:" - f" {self.timeout})." - ) from error - # ...or wait 1s and retry - logger.info(f"Waiting for model to be loaded on the server: {error}") - time.sleep(1) - if timeout is not None: - timeout = max(self.timeout - (time.time() - t0), 1) # type: ignore - continue - raise - - def audio_classification( - self, - audio: ContentT, - *, - model: Optional[str] = None, - ) -> List[ClassificationOutput]: - """ - Perform audio classification on the provided audio content. - - Args: - audio (Union[str, Path, bytes, BinaryIO]): - The audio content to classify. It can be raw audio bytes, a local audio file, or a URL pointing to an - audio file. - model (`str`, *optional*): - The model to use for audio classification. Can be a model ID hosted on the Hugging Face Hub - or a URL to a deployed Inference Endpoint. If not provided, the default recommended model for - audio classification will be used. - - Returns: - `List[Dict]`: The classification output containing the predicted label and its confidence. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `HTTPError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - >>> from huggingface_hub import InferenceClient - >>> client = InferenceClient() - >>> client.audio_classification("audio.flac") - [{'score': 0.4976358711719513, 'label': 'hap'}, {'score': 0.3677836060523987, 'label': 'neu'},...] - ``` - """ - response = self.post(data=audio, model=model, task="audio-classification") - return _bytes_to_dict(response) - - def automatic_speech_recognition( - self, - audio: ContentT, - *, - model: Optional[str] = None, - ) -> str: - """ - Perform automatic speech recognition (ASR or audio-to-text) on the given audio content. - - Args: - audio (Union[str, Path, bytes, BinaryIO]): - The content to transcribe. It can be raw audio bytes, local audio file, or a URL to an audio file. - model (`str`, *optional*): - The model to use for ASR. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. If not provided, the default recommended model for ASR will be used. - - Returns: - str: The transcribed text. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `HTTPError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - >>> from huggingface_hub import InferenceClient - >>> client = InferenceClient() - >>> client.automatic_speech_recognition("hello_world.flac") - "hello world" - ``` - """ - response = self.post(data=audio, model=model, task="automatic-speech-recognition") - return _bytes_to_dict(response)["text"] - - def conversational( - self, - text: str, - generated_responses: Optional[List[str]] = None, - past_user_inputs: Optional[List[str]] = None, - *, - parameters: Optional[Dict[str, Any]] = None, - model: Optional[str] = None, - ) -> ConversationalOutput: - """ - Generate conversational responses based on the given input text (i.e. chat with the API). - - Args: - text (`str`): - The last input from the user in the conversation. - generated_responses (`List[str]`, *optional*): - A list of strings corresponding to the earlier replies from the model. Defaults to None. - past_user_inputs (`List[str]`, *optional*): - A list of strings corresponding to the earlier replies from the user. Should be the same length as - `generated_responses`. Defaults to None. 
- parameters (`Dict[str, Any]`, *optional*): - Additional parameters for the conversational task. Defaults to None. For more details about the available - parameters, please refer to [this page](https://huggingface.co/docs/api-inference/detailed_parameters#conversational-task) - model (`str`, *optional*): - The model to use for the conversational task. Can be a model ID hosted on the Hugging Face Hub or a URL to - a deployed Inference Endpoint. If not provided, the default recommended conversational model will be used. - Defaults to None. - - Returns: - `Dict`: The generated conversational output. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `HTTPError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - >>> from huggingface_hub import InferenceClient - >>> client = InferenceClient() - >>> output = client.conversational("Hi, who are you?") - >>> output - {'generated_text': 'I am the one who knocks.', 'conversation': {'generated_responses': ['I am the one who knocks.'], 'past_user_inputs': ['Hi, who are you?']}, 'warnings': ['Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.']} - >>> client.conversational( - ... "Wow, that's scary!", - ... generated_responses=output["conversation"]["generated_responses"], - ... past_user_inputs=output["conversation"]["past_user_inputs"], - ... ) - ``` - """ - payload: Dict[str, Any] = {"inputs": {"text": text}} - if generated_responses is not None: - payload["inputs"]["generated_responses"] = generated_responses - if past_user_inputs is not None: - payload["inputs"]["past_user_inputs"] = past_user_inputs - if parameters is not None: - payload["parameters"] = parameters - response = self.post(json=payload, model=model, task="conversational") - return _bytes_to_dict(response) - - def feature_extraction(self, text: str, *, model: Optional[str] = None) -> "np.ndarray": - """ - Generate embeddings for a given text. - - Args: - text (`str`): - The text to embed. - model (`str`, *optional*): - The model to use for the conversational task. Can be a model ID hosted on the Hugging Face Hub or a URL to - a deployed Inference Endpoint. If not provided, the default recommended conversational model will be used. - Defaults to None. - - Returns: - `np.ndarray`: The embedding representing the input text as a float32 numpy array. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `HTTPError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - >>> from huggingface_hub import InferenceClient - >>> client = InferenceClient() - >>> client.feature_extraction("Hi, who are you?") - array([[ 2.424802 , 2.93384 , 1.1750331 , ..., 1.240499, -0.13776633, -0.7889173 ], - [-0.42943227, -0.6364878 , -1.693462 , ..., 0.41978157, -2.4336355 , 0.6162071 ], - ..., - [ 0.28552425, -0.928395 , -1.2077185 , ..., 0.76810825, -2.1069427 , 0.6236161 ]], dtype=float32) - ``` - """ - response = self.post(json={"inputs": text}, model=model, task="feature-extraction") - np = _import_numpy() - return np.array(_bytes_to_dict(response)[0], dtype="float32") - - def image_classification( - self, - image: ContentT, - *, - model: Optional[str] = None, - ) -> List[ClassificationOutput]: - """ - Perform image classification on the given image using the specified model. - - Args: - image (`Union[str, Path, bytes, BinaryIO]`): - The image to classify. 
It can be raw bytes, an image file, or a URL to an online image. - model (`str`, *optional*): - The model to use for image classification. Can be a model ID hosted on the Hugging Face Hub or a URL to a - deployed Inference Endpoint. If not provided, the default recommended model for image classification will be used. - - Returns: - `List[Dict]`: a list of dictionaries containing the predicted label and associated probability. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `HTTPError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - >>> from huggingface_hub import InferenceClient - >>> client = InferenceClient() - >>> client.image_classification("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg") - [{'score': 0.9779096841812134, 'label': 'Blenheim spaniel'}, ...] - ``` - """ - response = self.post(data=image, model=model, task="image-classification") - return _bytes_to_dict(response) - - def image_segmentation( - self, - image: ContentT, - *, - model: Optional[str] = None, - ) -> List[ImageSegmentationOutput]: - """ - Perform image segmentation on the given image using the specified model. - - - - You must have `PIL` installed if you want to work with images (`pip install Pillow`). - - - - Args: - image (`Union[str, Path, bytes, BinaryIO]`): - The image to segment. It can be raw bytes, an image file, or a URL to an online image. - model (`str`, *optional*): - The model to use for image segmentation. Can be a model ID hosted on the Hugging Face Hub or a URL to a - deployed Inference Endpoint. If not provided, the default recommended model for image segmentation will be used. - - Returns: - `List[Dict]`: A list of dictionaries containing the segmented masks and associated attributes. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `HTTPError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - >>> from huggingface_hub import InferenceClient - >>> client = InferenceClient() - >>> client.image_segmentation("cat.jpg"): - [{'score': 0.989008, 'label': 'LABEL_184', 'mask': }, ...] - ``` - """ - - # Segment - response = self.post(data=image, model=model, task="image-segmentation") - output = _bytes_to_dict(response) - - # Parse masks as PIL Image - if not isinstance(output, list): - raise ValueError(f"Server output must be a list. Got {type(output)}: {str(output)[:200]}...") - for item in output: - item["mask"] = _b64_to_image(item["mask"]) - return output - - def image_to_image( - self, - image: ContentT, - prompt: Optional[str] = None, - *, - negative_prompt: Optional[str] = None, - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: Optional[int] = None, - guidance_scale: Optional[float] = None, - model: Optional[str] = None, - **kwargs, - ) -> "Image": - """ - Perform image-to-image translation using a specified model. - - - - You must have `PIL` installed if you want to work with images (`pip install Pillow`). - - - - Args: - image (`Union[str, Path, bytes, BinaryIO]`): - The input image for translation. It can be raw bytes, an image file, or a URL to an online image. - prompt (`str`, *optional*): - The text prompt to guide the image generation. - negative_prompt (`str`, *optional*): - A negative prompt to guide the translation process. - height (`int`, *optional*): - The height in pixels of the generated image. 
- width (`int`, *optional*): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*): - Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - model (`str`, *optional*): - The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None. - - Returns: - `Image`: The translated image. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `HTTPError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - >>> from huggingface_hub import InferenceClient - >>> client = InferenceClient() - >>> image = client.image_to_image("cat.jpg", prompt="turn the cat into a tiger") - >>> image.save("tiger.jpg") - ``` - """ - parameters = { - "prompt": prompt, - "negative_prompt": negative_prompt, - "height": height, - "width": width, - "num_inference_steps": num_inference_steps, - "guidance_scale": guidance_scale, - **kwargs, - } - if all(parameter is None for parameter in parameters.values()): - # Either only an image to send => send as raw bytes - data = image - payload: Optional[Dict[str, Any]] = None - else: - # Or an image + some parameters => use base64 encoding - data = None - payload = {"inputs": _b64_encode(image)} - for key, value in parameters.items(): - if value is not None: - payload[key] = value - - response = self.post(json=payload, data=data, model=model, task="image-to-image") - return _bytes_to_image(response) - - def image_to_text(self, image: ContentT, *, model: Optional[str] = None) -> str: - """ - Takes an input image and return text. - - Models can have very different outputs depending on your use case (image captioning, optical character recognition - (OCR), Pix2Struct, etc). Please have a look to the model card to learn more about a model's specificities. - - Args: - image (`Union[str, Path, bytes, BinaryIO]`): - The input image to caption. It can be raw bytes, an image file, or a URL to an online image.. - model (`str`, *optional*): - The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None. - - Returns: - `str`: The generated text. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `HTTPError`: - If the request fails with an HTTP error status code other than HTTP 503. 
- - Example: - ```py - >>> from huggingface_hub import InferenceClient - >>> client = InferenceClient() - >>> client.image_to_text("cat.jpg") - 'a cat standing in a grassy field ' - >>> client.image_to_text("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg") - 'a dog laying on the grass next to a flower pot ' - ``` - """ - response = self.post(data=image, model=model, task="image-to-text") - return _bytes_to_dict(response)[0]["generated_text"] - - def sentence_similarity( - self, sentence: str, other_sentences: List[str], *, model: Optional[str] = None - ) -> List[float]: - """ - Compute the semantic similarity between a sentence and a list of other sentences by comparing their embeddings. - - Args: - sentence (`str`): - The main sentence to compare to others. - other_sentences (`List[str]`): - The list of sentences to compare to. - model (`str`, *optional*): - The model to use for the conversational task. Can be a model ID hosted on the Hugging Face Hub or a URL to - a deployed Inference Endpoint. If not provided, the default recommended conversational model will be used. - Defaults to None. - - Returns: - `List[float]`: The embedding representing the input text. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `HTTPError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - >>> from huggingface_hub import InferenceClient - >>> client = InferenceClient() - >>> client.sentence_similarity( - ... "Machine learning is so easy.", - ... other_sentences=[ - ... "Deep learning is so straightforward.", - ... "This is so difficult, like rocket science.", - ... "I can't believe how much I struggled with this.", - ... ], - ... ) - [0.7785726189613342, 0.45876261591911316, 0.2906220555305481] - ``` - """ - response = self.post( - json={"inputs": {"source_sentence": sentence, "sentences": other_sentences}}, - model=model, - task="sentence-similarity", - ) - return _bytes_to_dict(response) - - def summarization( - self, - text: str, - *, - parameters: Optional[Dict[str, Any]] = None, - model: Optional[str] = None, - ) -> str: - """ - Generate a summary of a given text using a specified model. - - Args: - text (`str`): - The input text to summarize. - parameters (`Dict[str, Any]`, *optional*): - Additional parameters for summarization. Check out this [page](https://huggingface.co/docs/api-inference/detailed_parameters#summarization-task) - for more details. - model (`str`, *optional*): - The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None. - - Returns: - `str`: The generated summary text. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `HTTPError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - >>> from huggingface_hub import InferenceClient - >>> client = InferenceClient() - >>> client.summarization("The Eiffel tower...") - 'The Eiffel tower is one of the most famous landmarks in the world....' 
- ``` - """ - payload: Dict[str, Any] = {"inputs": text} - if parameters is not None: - payload["parameters"] = parameters - response = self.post(json=payload, model=model, task="summarization") - return _bytes_to_dict(response)[0]["summary_text"] - - @overload - def text_generation( # type: ignore - self, - prompt: str, - *, - details: Literal[False] = ..., - stream: Literal[False] = ..., - model: Optional[str] = None, - do_sample: bool = False, - max_new_tokens: int = 20, - best_of: Optional[int] = None, - repetition_penalty: Optional[float] = None, - return_full_text: bool = False, - seed: Optional[int] = None, - stop_sequences: Optional[List[str]] = None, - temperature: Optional[float] = None, - top_k: Optional[int] = None, - top_p: Optional[float] = None, - truncate: Optional[int] = None, - typical_p: Optional[float] = None, - watermark: bool = False, - ) -> str: - ... - - @overload - def text_generation( # type: ignore - self, - prompt: str, - *, - details: Literal[True] = ..., - stream: Literal[False] = ..., - model: Optional[str] = None, - do_sample: bool = False, - max_new_tokens: int = 20, - best_of: Optional[int] = None, - repetition_penalty: Optional[float] = None, - return_full_text: bool = False, - seed: Optional[int] = None, - stop_sequences: Optional[List[str]] = None, - temperature: Optional[float] = None, - top_k: Optional[int] = None, - top_p: Optional[float] = None, - truncate: Optional[int] = None, - typical_p: Optional[float] = None, - watermark: bool = False, - ) -> TextGenerationResponse: - ... - - @overload - def text_generation( # type: ignore - self, - prompt: str, - *, - details: Literal[False] = ..., - stream: Literal[True] = ..., - model: Optional[str] = None, - do_sample: bool = False, - max_new_tokens: int = 20, - best_of: Optional[int] = None, - repetition_penalty: Optional[float] = None, - return_full_text: bool = False, - seed: Optional[int] = None, - stop_sequences: Optional[List[str]] = None, - temperature: Optional[float] = None, - top_k: Optional[int] = None, - top_p: Optional[float] = None, - truncate: Optional[int] = None, - typical_p: Optional[float] = None, - watermark: bool = False, - ) -> Iterable[str]: - ... - - @overload - def text_generation( - self, - prompt: str, - *, - details: Literal[True] = ..., - stream: Literal[True] = ..., - model: Optional[str] = None, - do_sample: bool = False, - max_new_tokens: int = 20, - best_of: Optional[int] = None, - repetition_penalty: Optional[float] = None, - return_full_text: bool = False, - seed: Optional[int] = None, - stop_sequences: Optional[List[str]] = None, - temperature: Optional[float] = None, - top_k: Optional[int] = None, - top_p: Optional[float] = None, - truncate: Optional[int] = None, - typical_p: Optional[float] = None, - watermark: bool = False, - ) -> Iterable[TextGenerationStreamResponse]: - ... 
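- # The overloads above only narrow the return type: (details=False, stream=False) -> str; (True, False) -> TextGenerationResponse; (False, True) -> Iterable[str]; (True, True) -> Iterable[TextGenerationStreamResponse]. The implementation below handles all four combinations.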
- - def text_generation( - self, - prompt: str, - *, - details: bool = False, - stream: bool = False, - model: Optional[str] = None, - do_sample: bool = False, - max_new_tokens: int = 20, - best_of: Optional[int] = None, - repetition_penalty: Optional[float] = None, - return_full_text: bool = False, - seed: Optional[int] = None, - stop_sequences: Optional[List[str]] = None, - temperature: Optional[float] = None, - top_k: Optional[int] = None, - top_p: Optional[float] = None, - truncate: Optional[int] = None, - typical_p: Optional[float] = None, - watermark: bool = False, - decoder_input_details: bool = False, - ) -> Union[str, TextGenerationResponse, Iterable[str], Iterable[TextGenerationStreamResponse]]: - """ - Given a prompt, generate the following text. - - It is recommended to have Pydantic installed in order to get inputs validated. This is preferable as it allow - early failures. - - API endpoint is supposed to run with the `text-generation-inference` backend (TGI). This backend is the - go-to solution to run large language models at scale. However, for some smaller models (e.g. "gpt2") the - default `transformers` + `api-inference` solution is still in use. Both approaches have very similar APIs, but - not exactly the same. This method is compatible with both approaches but some parameters are only available for - `text-generation-inference`. If some parameters are ignored, a warning message is triggered but the process - continues correctly. - - To learn more about the TGI project, please refer to https://github.com/huggingface/text-generation-inference. - - Args: - prompt (`str`): - Input text. - details (`bool`, *optional*): - By default, text_generation returns a string. Pass `details=True` if you want a detailed output (tokens, - probabilities, seed, finish reason, etc.). Only available for models running on with the - `text-generation-inference` backend. - stream (`bool`, *optional*): - By default, text_generation returns the full generated text. Pass `stream=True` if you want a stream of - tokens to be returned. Only available for models running on with the `text-generation-inference` - backend. - model (`str`, *optional*): - The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None. - do_sample (`bool`): - Activate logits sampling - max_new_tokens (`int`): - Maximum number of generated tokens - best_of (`int`): - Generate best_of sequences and return the one if the highest token logprobs - repetition_penalty (`float`): - The parameter for repetition penalty. 1.0 means no penalty. See [this - paper](https://arxiv.org/pdf/1909.05858.pdf) for more details. - return_full_text (`bool`): - Whether to prepend the prompt to the generated text - seed (`int`): - Random sampling seed - stop_sequences (`List[str]`): - Stop generating tokens if a member of `stop_sequences` is generated - temperature (`float`): - The value used to module the logits distribution. - top_k (`int`): - The number of highest probability vocabulary tokens to keep for top-k-filtering. - top_p (`float`): - If set to < 1, only the smallest set of most probable tokens with probabilities that add up to `top_p` or - higher are kept for generation. 
- truncate (`int`): - Truncate inputs tokens to the given size - typical_p (`float`): - Typical Decoding mass - See [Typical Decoding for Natural Language Generation](https://arxiv.org/abs/2202.00666) for more information - watermark (`bool`): - Watermarking with [A Watermark for Large Language Models](https://arxiv.org/abs/2301.10226) - decoder_input_details (`bool`): - Return the decoder input token logprobs and ids. You must set `details=True` as well for it to be taken - into account. Defaults to `False`. - - Returns: - `Union[str, TextGenerationResponse, Iterable[str], Iterable[TextGenerationStreamResponse]]`: - Generated text returned from the server: - - if `stream=False` and `details=False`, the generated text is returned as a `str` (default) - - if `stream=True` and `details=False`, the generated text is returned token by token as a `Iterable[str]` - - if `stream=False` and `details=True`, the generated text is returned with more details as a [`~huggingface_hub.inference._text_generation.TextGenerationResponse`] - - if `details=True` and `stream=True`, the generated text is returned token by token as a iterable of [`~huggingface_hub.inference._text_generation.TextGenerationStreamResponse`] - - Raises: - `ValidationError`: - If input values are not valid. No HTTP call is made to the server. - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `HTTPError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - >>> from huggingface_hub import InferenceClient - >>> client = InferenceClient() - - # Case 1: generate text - >>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12) - '100% open source and built to be easy to use.' - - # Case 2: iterate over the generated tokens. Useful for large generation. - >>> for token in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, stream=True): - ... print(token) - 100 - % - open - source - and - built - to - be - easy - to - use - . - - # Case 3: get more details about the generation process. - >>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True) - TextGenerationResponse( - generated_text='100% open source and built to be easy to use.', - details=Details( - finish_reason=, - generated_tokens=12, - seed=None, - prefill=[ - InputToken(id=487, text='The', logprob=None), - InputToken(id=53789, text=' hugging', logprob=-13.171875), - (...) - InputToken(id=204, text=' ', logprob=-7.0390625) - ], - tokens=[ - Token(id=1425, text='100', logprob=-1.0175781, special=False), - Token(id=16, text='%', logprob=-0.0463562, special=False), - (...) - Token(id=25, text='.', logprob=-0.5703125, special=False) - ], - best_of_sequences=None - ) - ) - - # Case 4: iterate over the generated tokens with more details. - # Last object is more complete, containing the full generated text and the finish reason. - >>> for details in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True, stream=True): - ... print(details) - ... 
- TextGenerationStreamResponse(token=Token(id=1425, text='100', logprob=-1.0175781, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=16, text='%', logprob=-0.0463562, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=1314, text=' open', logprob=-1.3359375, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=3178, text=' source', logprob=-0.28100586, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=273, text=' and', logprob=-0.5961914, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=3426, text=' built', logprob=-1.9423828, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=271, text=' to', logprob=-1.4121094, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=314, text=' be', logprob=-1.5224609, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=1833, text=' easy', logprob=-2.1132812, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=271, text=' to', logprob=-0.08520508, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token(id=745, text=' use', logprob=-0.39453125, special=False), generated_text=None, details=None) - TextGenerationStreamResponse(token=Token( - id=25, - text='.', - logprob=-0.5703125, - special=False), - generated_text='100% open source and built to be easy to use.', - details=StreamDetails(finish_reason=, generated_tokens=12, seed=None) - ) - ``` - """ - # NOTE: Text-generation integration is taken from the text-generation-inference project. It has more features - # like input/output validation (if Pydantic is installed). See `_text_generation.py` header for more details. - - if decoder_input_details and not details: - warnings.warn( - "`decoder_input_details=True` has been passed to the server but `details=False` is set meaning that" - " the output from the server will be truncated." - ) - decoder_input_details = False - - # Validate parameters - parameters = TextGenerationParameters( - best_of=best_of, - details=details, - do_sample=do_sample, - max_new_tokens=max_new_tokens, - repetition_penalty=repetition_penalty, - return_full_text=return_full_text, - seed=seed, - stop=stop_sequences if stop_sequences is not None else [], - temperature=temperature, - top_k=top_k, - top_p=top_p, - truncate=truncate, - typical_p=typical_p, - watermark=watermark, - decoder_input_details=decoder_input_details, - ) - request = TextGenerationRequest(inputs=prompt, stream=stream, parameters=parameters) - payload = asdict(request) - - # Remove some parameters if not a TGI server - if not _is_tgi_server(model): - ignored_parameters = [] - for key in "watermark", "stop", "details", "decoder_input_details": - if payload["parameters"][key] is not None: - ignored_parameters.append(key) - del payload["parameters"][key] - if len(ignored_parameters) > 0: - warnings.warn( - ( - "API endpoint/model for text-generation is not served via TGI. Ignoring parameters" - f" {ignored_parameters}." - ), - UserWarning, - ) - if details: - warnings.warn( - ( - "API endpoint/model for text-generation is not served via TGI. Parameter `details=True` will" - " be ignored meaning only the generated text will be returned." 
- ), - UserWarning, - ) - details = False - if stream: - raise ValueError( - "API endpoint/model for text-generation is not served via TGI. Cannot return output as a stream." - " Please pass `stream=False` as input." - ) - - # Handle errors separately for more precise error messages - try: - bytes_output = self.post(json=payload, model=model, task="text-generation", stream=stream) # type: ignore - except HTTPError as e: - if isinstance(e, BadRequestError) and "The following `model_kwargs` are not used by the model" in str(e): - _set_as_non_tgi(model) - return self.text_generation( # type: ignore - prompt=prompt, - details=details, - stream=stream, - model=model, - do_sample=do_sample, - max_new_tokens=max_new_tokens, - best_of=best_of, - repetition_penalty=repetition_penalty, - return_full_text=return_full_text, - seed=seed, - stop_sequences=stop_sequences, - temperature=temperature, - top_k=top_k, - top_p=top_p, - truncate=truncate, - typical_p=typical_p, - watermark=watermark, - decoder_input_details=decoder_input_details, - ) - raise_text_generation_error(e) - - # Parse output - if stream: - return _stream_text_generation_response(bytes_output, details) # type: ignore - - data = _bytes_to_dict(bytes_output)[0] - return TextGenerationResponse(**data) if details else data["generated_text"] - - def text_to_image( - self, - prompt: str, - *, - negative_prompt: Optional[str] = None, - height: Optional[float] = None, - width: Optional[float] = None, - num_inference_steps: Optional[float] = None, - guidance_scale: Optional[float] = None, - model: Optional[str] = None, - **kwargs, - ) -> "Image": - """ - Generate an image based on a given text using a specified model. - - - - You must have `PIL` installed if you want to work with images (`pip install Pillow`). - - - - Args: - prompt (`str`): - The prompt to generate an image from. - negative_prompt (`str`, *optional*): - An optional negative prompt for the image generation. - height (`float`, *optional*): - The height in pixels of the image to generate. - width (`float`, *optional*): - The width in pixels of the image to generate. - num_inference_steps (`int`, *optional*): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*): - Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - model (`str`, *optional*): - The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None. - - Returns: - `Image`: The generated image. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `HTTPError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - >>> from huggingface_hub import InferenceClient - >>> client = InferenceClient() - - >>> image = client.text_to_image("An astronaut riding a horse on the moon.") - >>> image.save("astronaut.png") - - >>> image = client.text_to_image( - ... "An astronaut riding a horse on the moon.", - ... negative_prompt="low resolution, blurry", - ... model="stabilityai/stable-diffusion-2-1", - ... 
) - >>> image.save("better_astronaut.png") - ``` - """ - parameters = { - "inputs": prompt, - "negative_prompt": negative_prompt, - "height": height, - "width": width, - "num_inference_steps": num_inference_steps, - "guidance_scale": guidance_scale, - **kwargs, - } - payload = {} - for key, value in parameters.items(): - if value is not None: - payload[key] = value - response = self.post(json=payload, model=model, task="text-to-image") - return _bytes_to_image(response) - - def text_to_speech(self, text: str, *, model: Optional[str] = None) -> bytes: - """ - Synthesize an audio of a voice pronouncing a given text. - - Args: - text (`str`): - The text to synthesize. - model (`str`, *optional*): - The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None. - - Returns: - `bytes`: The generated audio. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `HTTPError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - >>> from pathlib import Path - >>> from huggingface_hub import InferenceClient - >>> client = InferenceClient() - - >>> audio = client.text_to_speech("Hello world") - >>> Path("hello_world.flac").write_bytes(audio) - ``` - """ - return self.post(json={"inputs": text}, model=model, task="text-to-speech") - - def zero_shot_image_classification( - self, image: ContentT, labels: List[str], *, model: Optional[str] = None - ) -> List[ClassificationOutput]: - """ - Provide input image and text labels to predict text labels for the image. - - Args: - image (`Union[str, Path, bytes, BinaryIO]`): - The input image to caption. It can be raw bytes, an image file, or a URL to an online image. - labels (`List[str]`): - List of string possible labels. The `len(labels)` must be greater than 1. - model (`str`, *optional*): - The model to use for inference. Can be a model ID hosted on the Hugging Face Hub or a URL to a deployed - Inference Endpoint. This parameter overrides the model defined at the instance level. Defaults to None. - - Returns: - `List[Dict]`: List of classification outputs containing the predicted labels and their confidence. - - Raises: - [`InferenceTimeoutError`]: - If the model is unavailable or the request times out. - `HTTPError`: - If the request fails with an HTTP error status code other than HTTP 503. - - Example: - ```py - >>> from huggingface_hub import InferenceClient - >>> client = InferenceClient() - - >>> client.zero_shot_image_classification( - ... "https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg", - ... labels=["dog", "cat", "horse"], - ... ) - [{"label": "dog", "score": 0.956}, ...] - ``` - """ - - # Raise valueerror if input is less than 2 labels - if len(labels) < 2: - raise ValueError("You must specify at least 2 classes to compare. 
Please specify more than 1 class.") - - response = self.post( - json={"image": _b64_encode(image), "parameters": {"candidate_labels": ",".join(labels)}}, - model=model, - task="zero-shot-image-classification", - ) - return _bytes_to_dict(response) - - def _resolve_url(self, model: Optional[str] = None, task: Optional[str] = None) -> str: - model = model or self.model - - # If model is already a URL, ignore `task` and return directly - if model is not None and (model.startswith("http://") or model.startswith("https://")): - return model - - # # If no model but task is set => fetch the recommended one for this task - if model is None: - if task is None: - raise ValueError( - "You must specify at least a model (repo_id or URL) or a task, either when instantiating" - " `InferenceClient` or when making a request." - ) - model = _get_recommended_model(task) - - # Compute InferenceAPI url - return ( - # Feature-extraction and sentence-similarity are the only cases where we handle models with several tasks. - f"{INFERENCE_ENDPOINT}/pipeline/{task}/{model}" - if task in ("feature-extraction", "sentence-similarity") - # Otherwise, we use the default endpoint - else f"{INFERENCE_ENDPOINT}/models/{model}" - ) diff --git a/spaces/DexterSptizu/drug_interaction/README.md b/spaces/DexterSptizu/drug_interaction/README.md deleted file mode 100644 index 51a3e7da80c7f25d963ca1dd8cf7ecd0ad2fa567..0000000000000000000000000000000000000000 --- a/spaces/DexterSptizu/drug_interaction/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Drug Interaction -emoji: 📊 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Djdjeuu/MGX-Midjourney-v4/app.py b/spaces/Djdjeuu/MGX-Midjourney-v4/app.py deleted file mode 100644 index bea4accb45793c8e748731c184dee0ffaf509dd5..0000000000000000000000000000000000000000 --- a/spaces/Djdjeuu/MGX-Midjourney-v4/app.py +++ /dev/null @@ -1,8 +0,0 @@ -import gradio as gr - -description = """
    - """ - -gr.Interface.load("models/prompthero/openjourney", description=description).launch() \ No newline at end of file diff --git a/spaces/EAraid12/LoRA-DreamBooth-Training-UI/train_dreambooth_lora.py b/spaces/EAraid12/LoRA-DreamBooth-Training-UI/train_dreambooth_lora.py deleted file mode 100644 index 93d429590ca4f357aff07989965b673bdf1e50fe..0000000000000000000000000000000000000000 --- a/spaces/EAraid12/LoRA-DreamBooth-Training-UI/train_dreambooth_lora.py +++ /dev/null @@ -1,1026 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# -# This file is adapted from https://github.com/huggingface/diffusers/blob/febaf863026bd014b7a14349336544fc109d0f57/examples/dreambooth/train_dreambooth_lora.py -# The original license is as below: -# -# Copyright 2022 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and - -import argparse -import hashlib -import logging -import math -import os -import warnings -from pathlib import Path -from typing import Optional - -import numpy as np -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -from torch.utils.data import Dataset - -import datasets -import diffusers -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import set_seed -from diffusers import ( - AutoencoderKL, - DDPMScheduler, - DiffusionPipeline, - DPMSolverMultistepScheduler, - UNet2DConditionModel, -) -from diffusers.loaders import AttnProcsLayers -from diffusers.models.cross_attention import LoRACrossAttnProcessor -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version, is_wandb_available -from diffusers.utils.import_utils import is_xformers_available -from huggingface_hub import HfFolder, Repository, create_repo, whoami -from PIL import Image -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import AutoTokenizer, PretrainedConfig - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.12.0.dev0") - -logger = get_logger(__name__) - - -def save_model_card(repo_name, images=None, base_model=str, prompt=str, repo_folder=None): - img_str = "" - for i, image in enumerate(images): - image.save(os.path.join(repo_folder, f"image_{i}.png")) - img_str += f"![img_{i}](./image_{i}.png)\n" - - yaml = f""" ---- -license: creativeml-openrail-m -base_model: {base_model} -tags: -- stable-diffusion -- stable-diffusion-diffusers -- text-to-image -- diffusers -- lora -inference: true ---- - """ - model_card = f""" -# LoRA DreamBooth - {repo_name} - -These are LoRA adaption weights for {repo_name}. The weights were trained on {prompt} using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. 
\n -{img_str} -""" - with open(os.path.join(repo_folder, "README.md"), "w") as f: - f.write(yaml + model_card) - - -def import_model_class_from_model_name_or_path(pretrained_model_name_or_path: str, revision: str): - text_encoder_config = PretrainedConfig.from_pretrained( - pretrained_model_name_or_path, - subfolder="text_encoder", - revision=revision, - ) - model_class = text_encoder_config.architectures[0] - - if model_class == "CLIPTextModel": - from transformers import CLIPTextModel - - return CLIPTextModel - elif model_class == "RobertaSeriesModelWithTransformation": - from diffusers.pipelines.alt_diffusion.modeling_roberta_series import RobertaSeriesModelWithTransformation - - return RobertaSeriesModelWithTransformation - else: - raise ValueError(f"{model_class} is not supported.") - - -def parse_args(input_args=None): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--instance_data_dir", - type=str, - default=None, - required=True, - help="A folder containing the training data of instance images.", - ) - parser.add_argument( - "--class_data_dir", - type=str, - default=None, - required=False, - help="A folder containing the training data of class images.", - ) - parser.add_argument( - "--instance_prompt", - type=str, - default=None, - required=True, - help="The prompt with identifier specifying the instance", - ) - parser.add_argument( - "--class_prompt", - type=str, - default=None, - help="The prompt to specify images in the same class as provided instance images.", - ) - parser.add_argument( - "--validation_prompt", - type=str, - default=None, - help="A prompt that is used during validation to verify that the model is learning.", - ) - parser.add_argument( - "--num_validation_images", - type=int, - default=4, - help="Number of images that should be generated during validation with `validation_prompt`.", - ) - parser.add_argument( - "--validation_epochs", - type=int, - default=50, - help=( - "Run dreambooth validation every X epochs. Dreambooth validation consists of running the prompt" - " `args.validation_prompt` multiple times: `args.num_validation_images`." - ), - ) - parser.add_argument( - "--with_prior_preservation", - default=False, - action="store_true", - help="Flag to add prior preservation loss.", - ) - parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") - parser.add_argument( - "--num_class_images", - type=int, - default=100, - help=( - "Minimal class images for prior preservation loss. If there are not enough images already present in" - " class_data_dir, additional images will be sampled with class_prompt." 
- ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="lora-dreambooth-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", - default=False, - action="store_true", - help=( - "Whether to center crop the input images to the resolution. If not set, the images will be randomly" - " cropped. The images will be resized to the resolution first before cropping." - ), - ) - parser.add_argument( - "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." - ) - parser.add_argument("--num_train_epochs", type=int, default=1) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints can be used both as final" - " checkpoints in case they are better than the last checkpoint, and are also suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--lr_num_cycles", - type=int, - default=1, - help="Number of hard resets of the lr in cosine_with_restarts scheduler.", - ) - parser.add_argument("--lr_power", type=float, default=1.0, help="Power factor of the polynomial scheduler.") - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." 
- ), - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default=None, - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" - " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." - ), - ) - parser.add_argument( - "--prior_generation_precision", - type=str, - default=None, - choices=["no", "fp32", "fp16", "bf16"], - help=( - "Choose prior generation precision between fp32, fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to fp16 if a GPU is available else fp32." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." 
- ) - - if input_args is not None: - args = parser.parse_args(input_args) - else: - args = parser.parse_args() - - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.with_prior_preservation: - if args.class_data_dir is None: - raise ValueError("You must specify a data directory for class images.") - if args.class_prompt is None: - raise ValueError("You must specify prompt for class images.") - else: - # logger is not available yet - if args.class_data_dir is not None: - warnings.warn("You need not use --class_data_dir without --with_prior_preservation.") - if args.class_prompt is not None: - warnings.warn("You need not use --class_prompt without --with_prior_preservation.") - - return args - - -class DreamBoothDataset(Dataset): - """ - A dataset to prepare the instance and class images with the prompts for fine-tuning the model. - It pre-processes the images and the tokenizes prompts. - """ - - def __init__( - self, - instance_data_root, - instance_prompt, - tokenizer, - class_data_root=None, - class_prompt=None, - size=512, - center_crop=False, - ): - self.size = size - self.center_crop = center_crop - self.tokenizer = tokenizer - - self.instance_data_root = Path(instance_data_root) - if not self.instance_data_root.exists(): - raise ValueError("Instance images root doesn't exists.") - - self.instance_images_path = list(Path(instance_data_root).iterdir()) - self.num_instance_images = len(self.instance_images_path) - self.instance_prompt = instance_prompt - self._length = self.num_instance_images - - if class_data_root is not None: - self.class_data_root = Path(class_data_root) - self.class_data_root.mkdir(parents=True, exist_ok=True) - self.class_images_path = list(self.class_data_root.iterdir()) - self.num_class_images = len(self.class_images_path) - self._length = max(self.num_class_images, self.num_instance_images) - self.class_prompt = class_prompt - else: - self.class_data_root = None - - self.image_transforms = transforms.Compose( - [ - transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def __len__(self): - return self._length - - def __getitem__(self, index): - example = {} - instance_image = Image.open(self.instance_images_path[index % self.num_instance_images]) - if not instance_image.mode == "RGB": - instance_image = instance_image.convert("RGB") - example["instance_images"] = self.image_transforms(instance_image) - example["instance_prompt_ids"] = self.tokenizer( - self.instance_prompt, - truncation=True, - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ).input_ids - - if self.class_data_root: - class_image = Image.open(self.class_images_path[index % self.num_class_images]) - if not class_image.mode == "RGB": - class_image = class_image.convert("RGB") - example["class_images"] = self.image_transforms(class_image) - example["class_prompt_ids"] = self.tokenizer( - self.class_prompt, - truncation=True, - padding="max_length", - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ).input_ids - - return example - - -def collate_fn(examples, with_prior_preservation=False): - input_ids = [example["instance_prompt_ids"] for example in examples] - pixel_values = [example["instance_images"] for example in examples] - - # Concat class 
and instance examples for prior preservation. - # We do this to avoid doing two forward passes. - if with_prior_preservation: - input_ids += [example["class_prompt_ids"] for example in examples] - pixel_values += [example["class_images"] for example in examples] - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - input_ids = torch.cat(input_ids, dim=0) - - batch = { - "input_ids": input_ids, - "pixel_values": pixel_values, - } - return batch - - -class PromptDataset(Dataset): - "A simple dataset to prepare the prompts to generate class images on multiple GPUs." - - def __init__(self, prompt, num_samples): - self.prompt = prompt - self.num_samples = num_samples - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - example = {} - example["prompt"] = self.prompt - example["index"] = index - return example - - -def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None): - if token is None: - token = HfFolder.get_token() - if organization is None: - username = whoami(token)["name"] - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - - -def main(args): - logging_dir = Path(args.output_dir, args.logging_dir) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - logging_dir=logging_dir, - ) - - if args.report_to == "wandb": - if not is_wandb_available(): - raise ImportError("Make sure to install wandb if you want to use it for logging during training.") - import wandb - - # Currently, it's not possible to do gradient accumulation when training two models with accelerate.accumulate - # This will be enabled soon in accelerate. For now, we don't allow gradient accumulation when training two models. - # TODO (patil-suraj): Remove this check when gradient accumulation with two models is enabled in accelerate. - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - datasets.utils.logging.set_verbosity_warning() - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Generate class images if prior preservation is enabled. 
- if args.with_prior_preservation: - class_images_dir = Path(args.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < args.num_class_images: - torch_dtype = torch.float16 if accelerator.device.type == "cuda" else torch.float32 - if args.prior_generation_precision == "fp32": - torch_dtype = torch.float32 - elif args.prior_generation_precision == "fp16": - torch_dtype = torch.float16 - elif args.prior_generation_precision == "bf16": - torch_dtype = torch.bfloat16 - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - torch_dtype=torch_dtype, - safety_checker=None, - revision=args.revision, - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = args.num_class_images - cur_class_images - logger.info(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(args.class_prompt, num_new_images) - sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) - - sample_dataloader = accelerator.prepare(sample_dataloader) - pipeline.to(accelerator.device) - - for example in tqdm( - sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process - ): - images = pipeline(example["prompt"]).images - - for i, image in enumerate(images): - hash_image = hashlib.sha1(image.tobytes()).hexdigest() - image_filename = class_images_dir / f"{example['index'][i] + cur_class_images}-{hash_image}.jpg" - image.save(image_filename) - - del pipeline - if torch.cuda.is_available(): - torch.cuda.empty_cache() - - # Handle the repository creation - if accelerator.is_main_process: - if args.push_to_hub: - if args.hub_model_id is None: - repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token) - else: - repo_name = args.hub_model_id - - create_repo(repo_name, exist_ok=True, token=args.hub_token) - repo = Repository(args.output_dir, clone_from=repo_name, token=args.hub_token) - - with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: - if "step_*" not in gitignore: - gitignore.write("step_*\n") - if "epoch_*" not in gitignore: - gitignore.write("epoch_*\n") - elif args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - # Load the tokenizer - if args.tokenizer_name: - tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False) - elif args.pretrained_model_name_or_path: - tokenizer = AutoTokenizer.from_pretrained( - args.pretrained_model_name_or_path, - subfolder="tokenizer", - revision=args.revision, - use_fast=False, - ) - - # import correct text encoder class - text_encoder_cls = import_model_class_from_model_name_or_path(args.pretrained_model_name_or_path, args.revision) - - # Load scheduler and models - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - text_encoder = text_encoder_cls.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision - ) - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision - ) - - # We only train the additional adapter LoRA layers - vae.requires_grad_(False) - text_encoder.requires_grad_(False) - unet.requires_grad_(False) - - # 
For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move unet, vae and text_encoder to device and cast to weight_dtype - unet.to(accelerator.device, dtype=weight_dtype) - vae.to(accelerator.device, dtype=weight_dtype) - text_encoder.to(accelerator.device, dtype=weight_dtype) - - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. Make sure it is installed correctly") - - # now we will add new LoRA weights to the attention layers - # It's important to realize here how many attention weights will be added and of which sizes - # The sizes of the attention layers consist only of two different variables: - # 1) - the "hidden_size", which is increased according to `unet.config.block_out_channels`. - # 2) - the "cross attention size", which is set to `unet.config.cross_attention_dim`. - - # Let's first see how many attention processors we will have to set. - # For Stable Diffusion, it should be equal to: - # - down blocks (2x attention layers) * (2x transformer layers) * (3x down blocks) = 12 - # - mid blocks (2x attention layers) * (1x transformer layers) * (1x mid blocks) = 2 - # - up blocks (2x attention layers) * (3x transformer layers) * (3x down blocks) = 18 - # => 32 layers - - # Set correct lora layers - lora_attn_procs = {} - for name in unet.attn_processors.keys(): - cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim - if name.startswith("mid_block"): - hidden_size = unet.config.block_out_channels[-1] - elif name.startswith("up_blocks"): - block_id = int(name[len("up_blocks.")]) - hidden_size = list(reversed(unet.config.block_out_channels))[block_id] - elif name.startswith("down_blocks"): - block_id = int(name[len("down_blocks.")]) - hidden_size = unet.config.block_out_channels[block_id] - - lora_attn_procs[name] = LoRACrossAttnProcessor( - hidden_size=hidden_size, cross_attention_dim=cross_attention_dim - ) - - unet.set_attn_processor(lora_attn_procs) - lora_layers = AttnProcsLayers(unet.attn_processors) - - accelerator.register_for_checkpointing(lora_layers) - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Enable TF32 for faster training on Ampere GPUs, - # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs - if args.use_8bit_adam: - try: - import bitsandbytes as bnb - except ImportError: - raise ImportError( - "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`." 
- ) - - optimizer_class = bnb.optim.AdamW8bit - else: - optimizer_class = torch.optim.AdamW - - # Optimizer creation - optimizer = optimizer_class( - lora_layers.parameters(), - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Dataset and DataLoaders creation: - train_dataset = DreamBoothDataset( - instance_data_root=args.instance_data_dir, - instance_prompt=args.instance_prompt, - class_data_root=args.class_data_dir if args.with_prior_preservation else None, - class_prompt=args.class_prompt, - tokenizer=tokenizer, - size=args.resolution, - center_crop=args.center_crop, - ) - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, - batch_size=args.train_batch_size, - shuffle=True, - collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation), - num_workers=args.dataloader_num_workers, - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - num_cycles=args.lr_num_cycles, - power=args.lr_power, - ) - - # Prepare everything with our `accelerator`. - lora_layers, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - lora_layers, optimizer, train_dataloader, lr_scheduler - ) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("dreambooth-lora", config=vars(args)) - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num batches each epoch = {len(train_dataloader)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. 
parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the mos recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." - ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - for epoch in range(first_epoch, args.num_train_epochs): - unet.train() - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - with accelerator.accumulate(unet): - # Convert images to latent space - latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample() - latents = latents * 0.18215 - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - if args.with_prior_preservation: - # Chunk the noise and model_pred into two parts and compute the loss on each part separately. 
- model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0) - target, target_prior = torch.chunk(target, 2, dim=0) - - # Compute instance loss - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - # Compute prior loss - prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean") - - # Add the prior loss to the instance loss. - loss = loss + args.prior_loss_weight * prior_loss - else: - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - if accelerator.sync_gradients: - params_to_clip = lora_layers.parameters() - accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - - if global_step % args.checkpointing_steps == 0: - if accelerator.is_main_process: - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - if args.validation_prompt is not None and epoch % args.validation_epochs == 0: - logger.info( - f"Running validation... \n Generating {args.num_validation_images} images with prompt:" - f" {args.validation_prompt}." - ) - # create pipeline - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=accelerator.unwrap_model(unet), - text_encoder=accelerator.unwrap_model(text_encoder), - revision=args.revision, - torch_dtype=weight_dtype, - ) - pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) - pipeline = pipeline.to(accelerator.device) - pipeline.set_progress_bar_config(disable=True) - - # run inference - generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) - prompt = args.num_validation_images * [args.validation_prompt] - images = pipeline(prompt, num_inference_steps=25, generator=generator).images - - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "validation": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") - for i, image in enumerate(images) - ] - } - ) - - del pipeline - torch.cuda.empty_cache() - - # Save the lora layers - accelerator.wait_for_everyone() - if accelerator.is_main_process: - unet = unet.to(torch.float32) - unet.save_attn_procs(args.output_dir) - - # Final inference - # Load previous pipeline - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, revision=args.revision, torch_dtype=weight_dtype - ) - pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) - pipeline = pipeline.to(accelerator.device) - - # load attention processors - pipeline.unet.load_attn_procs(args.output_dir) - - # run inference - if args.validation_prompt and args.num_validation_images > 0: - generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) if args.seed else None - prompt = args.num_validation_images * [args.validation_prompt] - images = 
pipeline(prompt, num_inference_steps=25, generator=generator).images - - test_image_dir = Path(args.output_dir) / 'test_images' - test_image_dir.mkdir() - for i, image in enumerate(images): - out_path = test_image_dir / f'image_{i}.png' - image.save(out_path) - - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("test", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "test": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") - for i, image in enumerate(images) - ] - } - ) - - if args.push_to_hub: - save_model_card( - repo_name, - images=images, - base_model=args.pretrained_model_name_or_path, - prompt=args.instance_prompt, - repo_folder=args.output_dir, - ) - repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True) - - accelerator.end_training() - - -if __name__ == "__main__": - args = parse_args() - main(args) diff --git a/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/nets_537227KB.py b/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/nets_537227KB.py deleted file mode 100644 index 823b44fb64898e8dcbb12180ba45d1718f9b03f7..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/infer/lib/uvr5_pack/lib_v5/nets_537227KB.py +++ /dev/null @@ -1,123 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn - -from . import layers_537238KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 64) - self.stg1_high_band_net = BaseASPPNet(2, 64) - - self.stg2_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(32, 64) - - self.stg3_bridge = layers.Conv2DBNActiv(130, 64, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(64, 128) - - self.out = nn.Conv2d(128, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(64, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(64, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) 
- - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/scripts/generate_meta_info_pairdata.py b/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/scripts/generate_meta_info_pairdata.py deleted file mode 100644 index 76dce7e41c803a8055f3627cccb98deb51419b09..0000000000000000000000000000000000000000 --- a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/scripts/generate_meta_info_pairdata.py +++ /dev/null @@ -1,49 +0,0 @@ -import argparse -import glob -import os - - -def main(args): - txt_file = open(args.meta_info, 'w') - # sca images - img_paths_gt = sorted(glob.glob(os.path.join(args.input[0], '*'))) - img_paths_lq = sorted(glob.glob(os.path.join(args.input[1], '*'))) - - assert len(img_paths_gt) == len(img_paths_lq), ('GT folder and LQ folder should have the same length, but got ' - f'{len(img_paths_gt)} and {len(img_paths_lq)}.') - - for img_path_gt, img_path_lq in zip(img_paths_gt, img_paths_lq): - # get the relative paths - img_name_gt = os.path.relpath(img_path_gt, args.root[0]) - img_name_lq = os.path.relpath(img_path_lq, args.root[1]) - print(f'{img_name_gt}, {img_name_lq}') - txt_file.write(f'{img_name_gt}, {img_name_lq}\n') - - -if __name__ == '__main__': - """This script is used to generate meta info (txt file) for paired images. 
- """ - parser = argparse.ArgumentParser() - parser.add_argument( - '--input', - nargs='+', - default=['datasets/DF2K/DIV2K_train_HR_sub', 'datasets/DF2K/DIV2K_train_LR_bicubic_X4_sub'], - help='Input folder, should be [gt_folder, lq_folder]') - parser.add_argument('--root', nargs='+', default=[None, None], help='Folder root, will use the ') - parser.add_argument( - '--meta_info', - type=str, - default='datasets/DF2K/meta_info/meta_info_DIV2K_sub_pair.txt', - help='txt path for meta info') - args = parser.parse_args() - - assert len(args.input) == 2, 'Input folder should have two elements: gt folder and lq folder' - assert len(args.root) == 2, 'Root path should have two elements: root for gt folder and lq folder' - os.makedirs(os.path.dirname(args.meta_info), exist_ok=True) - for i in range(2): - if args.input[i].endswith('/'): - args.input[i] = args.input[i][:-1] - if args.root[i] is None: - args.root[i] = os.path.dirname(args.input[i]) - - main(args) diff --git a/spaces/Epitech/LinguaExpressus/README.md b/spaces/Epitech/LinguaExpressus/README.md deleted file mode 100644 index 6639e3b9caf49dbf59a9721ac217a8f4de82a530..0000000000000000000000000000000000000000 --- a/spaces/Epitech/LinguaExpressus/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: LinguaExpressus -emoji: 😻 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/FridaZuley/RVC_HFKawaii/configs/config.py b/spaces/FridaZuley/RVC_HFKawaii/configs/config.py deleted file mode 100644 index e3b0205a1f0d62f674b9c3de2c5ab7ee90464945..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/configs/config.py +++ /dev/null @@ -1,265 +0,0 @@ -import argparse -import os -import sys -import json -from multiprocessing import cpu_count - -import torch - -try: - import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import - if torch.xpu.is_available(): - from infer.modules.ipex import ipex_init - ipex_init() -except Exception: - pass - -import logging - -logger = logging.getLogger(__name__) - - -version_config_list = [ - "v1/32k.json", - "v1/40k.json", - "v1/48k.json", - "v2/48k.json", - "v2/32k.json", -] - - -def singleton_variable(func): - def wrapper(*args, **kwargs): - if not wrapper.instance: - wrapper.instance = func(*args, **kwargs) - return wrapper.instance - - wrapper.instance = None - return wrapper - - -@singleton_variable -class Config: - def __init__(self): - self.device = "cuda:0" - self.is_half = True - self.n_cpu = 0 - self.gpu_name = None - self.json_config = self.load_config_json() - self.gpu_mem = None - ( - self.python_cmd, - self.listen_port, - self.iscolab, - self.noparallel, - self.noautoopen, - self.paperspace, - self.is_cli, - self.grtheme, - self.dml, - ) = self.arg_parse() - self.instead = "" - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - @staticmethod - def load_config_json() -> dict: - d = {} - for config_file in version_config_list: - with open(f"configs/{config_file}", "r") as f: - d[config_file] = json.load(f) - return d - - @staticmethod - def arg_parse() -> tuple: - exe = sys.executable or "python" - parser = argparse.ArgumentParser() - parser.add_argument("--port", type=int, default=7865, help="Listen port") - parser.add_argument("--pycmd", type=str, default=exe, help="Python command") - parser.add_argument("--colab", action="store_true", 
help="Launch in colab") - parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" - ) - parser.add_argument( - "--noautoopen", - action="store_true", - help="Do not open in browser automatically", - ) - parser.add_argument( - "--paperspace", - action="store_true", - help="Note that this argument just shares a gradio link for the web UI. Thus can be used on other non-local CLI systems.", - ) - parser.add_argument( - "--is_cli", - action="store_true", - help="Use the CLI instead of setting up a gradio UI. This flag will launch an RVC text interface where you can execute functions from infer-web.py!", - ) - - parser.add_argument( - "-t", - "--theme", - help = "Theme for Gradio. Format - `JohnSmith9982/small_and_pretty` (no backticks)", - default = "JohnSmith9982/small_and_pretty", - type = str - ) - - parser.add_argument( - "--dml", - action="store_true", - help="Use DirectML backend instead of CUDA." - ) - - cmd_opts = parser.parse_args() - - cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865 - - return ( - cmd_opts.pycmd, - cmd_opts.port, - cmd_opts.colab, - cmd_opts.noparallel, - cmd_opts.noautoopen, - cmd_opts.paperspace, - cmd_opts.is_cli, - cmd_opts.theme, - cmd_opts.dml, - ) - - # has_mps is only available in nightly pytorch (for now) and MasOS 12.3+. - # check `getattr` and try it for compatibility - @staticmethod - def has_mps() -> bool: - if not torch.backends.mps.is_available(): - return False - try: - torch.zeros(1).to(torch.device("mps")) - return True - except Exception: - return False - - @staticmethod - def has_xpu() -> bool: - if hasattr(torch, "xpu") and torch.xpu.is_available(): - return True - else: - return False - - def use_fp32_config(self): - for config_file in version_config_list: - self.json_config[config_file]["train"]["fp16_run"] = False - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - if self.has_xpu(): - self.device = self.instead = "xpu:0" - self.is_half = True - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "P10" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - logger.info("Found GPU %s, force to fp32", self.gpu_name) - self.is_half = False - self.use_fp32_config() - else: - logger.info("Found GPU %s", self.gpu_name) - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - if self.gpu_mem <= 4: - with open("infer/modules/train/preprocess.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("infer/modules/train/preprocess.py", "w") as f: - f.write(strr) - elif self.has_mps(): - logger.info("No supported Nvidia GPU found") - self.device = self.instead = "mps" - self.is_half = False - self.use_fp32_config() - else: - logger.info("No supported Nvidia GPU found") - self.device = self.instead = "cpu" - self.is_half = False - self.use_fp32_config() - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 - - if self.gpu_mem is not None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - if self.dml: - logger.info("Use DirectML instead") - if ( - os.path.exists( - 
"runtime\Lib\site-packages\onnxruntime\capi\DirectML.dll" - ) - == False - ): - try: - os.rename( - "runtime\Lib\site-packages\onnxruntime", - "runtime\Lib\site-packages\onnxruntime-cuda", - ) - except: - pass - try: - os.rename( - "runtime\Lib\site-packages\onnxruntime-dml", - "runtime\Lib\site-packages\onnxruntime", - ) - except: - pass - # if self.device != "cpu": - import torch_directml - - self.device = torch_directml.device(torch_directml.default_device()) - self.is_half = False - else: - if self.instead: - logger.info(f"Use {self.instead} instead") - if ( - os.path.exists( - "runtime\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll" - ) - == False - ): - try: - os.rename( - "runtime\Lib\site-packages\onnxruntime", - "runtime\Lib\site-packages\onnxruntime-dml", - ) - except: - pass - try: - os.rename( - "runtime\Lib\site-packages\onnxruntime-cuda", - "runtime\Lib\site-packages\onnxruntime", - ) - except: - pass - return x_pad, x_query, x_center, x_max diff --git a/spaces/GMFTBY/PandaGPT/pretrained_ckpt/README.md b/spaces/GMFTBY/PandaGPT/pretrained_ckpt/README.md deleted file mode 100644 index e42580270a86be1969864f67665904710d9c9516..0000000000000000000000000000000000000000 --- a/spaces/GMFTBY/PandaGPT/pretrained_ckpt/README.md +++ /dev/null @@ -1,78 +0,0 @@ -# 1. Prepare Vicuna Checkpoint: - -The language decoder of PandaGPT is based on Vicuna version 0. Given the distribution license of LLaMA, you need to restore the weights of Vicuna manually. To restore the weights, please follow the instructions below. In the following, we showcase how to restore the 7B version of Vicuna v0. To obtain the 13B version of Vicuna, you can take similar procedures. - -## 1.1. Obtain LLaMA Weights: -* Request the weights of LLaMA from Meta using [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform). -* After obtaining the weights of a specific LLaMA (e.g. 7B, 13B), following [instructions](https://huggingface.co/docs/transformers/main/model_doc/llama) provided by Huggingface to convert it into Huggingface format. - -> **** After conversion, the directory should look like: - - . - └── ./{path_to_llama_weights}/ - ├── config.json - ├── generation_config.json - ├── pytorch_model-00001-of-00002.bin - ├── pytorch_model-00002-of-00002.bin - ├── pytorch_model.bin.index.json - ├── special_tokens_map.json - ├── tokenizer.model - └── tokenizer_config.json - -`{path_to_llama_weights}` is where you store the checkpoints. - - -## 1.2. Obtain the Delta Weights of Vicuna: - -Then, you should download the delta weights of Vicuna provided by the original authors. You can find the corresponding links to 7B/13B Vicuna models in the table below. - -|**Model Size**|**Delta Weights Address**|**Version**| -|:-------------:|:-------------:|:-------------:| -|7B|[[Link]](https://huggingface.co/lmsys/vicuna-7b-delta-v0)|0| -|13B|[[Link]](https://huggingface.co/lmsys/vicuna-13b-delta-v0)|0| - - - -> **** After conversion, the directory should look like: - - . - └── ./{path_to_delta_vicuna_weights}/ - ├── config.json - ├── generation_config.json - ├── pytorch_model-00001-of-00002.bin - ├── pytorch_model-00002-of-00002.bin - ├── pytorch_model.bin.index.json - ├── special_tokens_map.json - ├── tokenizer.model - └── tokenizer_config.json - -`{path_to_delta_vicuna_weights}` is where you store the delta weights of Vicuna. - -## 1.3. Combine the Weights: - -When the two sets of weights are ready, you can combine them using tools from the Vicuna team. 
- -First, install the required library. -```yaml -pip install git+https://github.com/lm-sys/FastChat.git@v0.1.10 -``` - -Then, run the following command. -```yaml -python -m fastchat.model.apply_delta --base {path_to_llama_weights} --target ./vicuna_ckpt/7b_v0/ --delta {path_to_delta_vicuna_weights} -``` - -> **** Now, the final weights are ready as: - - . - └── ./vicuna_ckpt/7b_v0/ - ├── config.json - ├── generation_config.json - ├── pytorch_model-00001-of-00002.bin - ├── pytorch_model-00002-of-00002.bin - ├── pytorch_model.bin.index.json - ├── special_tokens_map.json - ├── tokenizer.model - └── tokenizer_config.json - - diff --git a/spaces/GXSA/bingo/src/pages/api/blob.ts b/spaces/GXSA/bingo/src/pages/api/blob.ts deleted file mode 100644 index fecd48031916b2284b8958892196e0a1ad420421..0000000000000000000000000000000000000000 --- a/spaces/GXSA/bingo/src/pages/api/blob.ts +++ /dev/null @@ -1,40 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { Readable } from 'node:stream' -import { fetch } from '@/lib/isomorphic' - -const API_DOMAIN = 'https://www.bing.com' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const { bcid } = req.query - - const { headers, body } = await fetch(`${API_DOMAIN}/images/blob?bcid=${bcid}`, - { - method: 'GET', - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referrer-Policy": "origin-when-cross-origin", - }, - }, - ) - - res.writeHead(200, { - 'Content-Length': headers.get('content-length')!, - 'Content-Type': headers.get('content-type')!, - }) - // @ts-ignore - return Readable.fromWeb(body!).pipe(res) - } catch (e) { - console.log('Error', e) - return res.json({ - result: { - value: 'UploadFailed', - message: `${e}` - } - }) - } -} diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/models/clip_lingunet_lat.py b/spaces/Gen-Sim/Gen-Sim/cliport/models/clip_lingunet_lat.py deleted file mode 100644 index 74e9006ecd5eac1df433085427443ae15489734b..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/models/clip_lingunet_lat.py +++ /dev/null @@ -1,149 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -import cliport.utils.utils as utils -from cliport.models.resnet import IdentityBlock, ConvBlock -from cliport.models.core.unet import Up -from cliport.models.core.clip import build_model, load_clip, tokenize - -from cliport.models.core import fusion -from cliport.models.core.fusion import FusionConvLat - - -class CLIPLingUNetLat(nn.Module): - """ CLIP RN50 with U-Net skip connections and lateral connections """ - - def __init__(self, input_shape, output_dim, cfg, device, preprocess): - super(CLIPLingUNetLat, self).__init__() - self.input_shape = input_shape - self.output_dim = output_dim - self.input_dim = 2048 # penultimate layer channel-size of CLIP-RN50 - self.cfg = cfg - self.device = device - self.batchnorm = self.cfg['train']['batchnorm'] - self.lang_fusion_type = self.cfg['train']['lang_fusion_type'] - self.bilinear = True - self.up_factor = 2 if self.bilinear else 1 - self.preprocess = preprocess - - self._load_clip() - self._build_decoder() - - def _load_clip(self): - model, _ = load_clip("RN50", device=self.device) - self.clip_rn50 = build_model(model.state_dict()).to(self.device) - del model - - def _build_decoder(self): - # language - self.lang_fuser1 = 
fusion.names[self.lang_fusion_type](input_dim=self.input_dim // 2) - self.lang_fuser2 = fusion.names[self.lang_fusion_type](input_dim=self.input_dim // 4) - self.lang_fuser3 = fusion.names[self.lang_fusion_type](input_dim=self.input_dim // 8) - - self.proj_input_dim = 512 if 'word' in self.lang_fusion_type else 1024 - self.lang_proj1 = nn.Linear(self.proj_input_dim, 1024) - self.lang_proj2 = nn.Linear(self.proj_input_dim, 512) - self.lang_proj3 = nn.Linear(self.proj_input_dim, 256) - - # vision - # self.conv1 = nn.Sequential( - # nn.Conv2d(self.input_dim, 1024, kernel_size=3, stride=1, padding=1, bias=False), - # nn.ReLU(True) - # ) - - # self.up1 = Up(2048, 1024 // self.up_factor, self.bilinear) - # self.lat_fusion1 = FusionConvLat(input_dim=1024+512, output_dim=512) - - # self.up2 = Up(1024, 512 // self.up_factor, self.bilinear) - # self.lat_fusion2 = FusionConvLat(input_dim=512+256, output_dim=256) - - self.conv1 = nn.Sequential( - nn.Conv2d(self.input_dim, 256, kernel_size=3, stride=1, padding=1, bias=False), - nn.ReLU(True) - ) - - self.up3 = Up(512, 256 // self.up_factor, self.bilinear) - self.lat_fusion3 = FusionConvLat(input_dim=256+128, output_dim=128) - - self.layer1 = nn.Sequential( - ConvBlock(128, [64, 64, 64], kernel_size=3, stride=1, batchnorm=self.batchnorm), - IdentityBlock(64, [64, 64, 64], kernel_size=3, stride=1, batchnorm=self.batchnorm), - nn.UpsamplingBilinear2d(scale_factor=2), - ) - self.lat_fusion4 = FusionConvLat(input_dim=128+64, output_dim=64) - - self.layer2 = nn.Sequential( - ConvBlock(64, [32, 32, 32], kernel_size=3, stride=1, batchnorm=self.batchnorm), - IdentityBlock(32, [32, 32, 32], kernel_size=3, stride=1, batchnorm=self.batchnorm), - nn.UpsamplingBilinear2d(scale_factor=2), - ) - self.lat_fusion5 = FusionConvLat(input_dim=64+32, output_dim=32) - - self.layer3 = nn.Sequential( - ConvBlock(32, [16, 16, 16], kernel_size=3, stride=1, batchnorm=self.batchnorm), - IdentityBlock(16, [16, 16, 16], kernel_size=3, stride=1, batchnorm=self.batchnorm), - nn.UpsamplingBilinear2d(scale_factor=2), - ) - self.lat_fusion6 = FusionConvLat(input_dim=32+16, output_dim=16) - - self.conv2 = nn.Sequential( - nn.Conv2d(16, self.output_dim, kernel_size=1) - ) - - def encode_image(self, img): - with torch.no_grad(): - img_encoding, img_im = self.clip_rn50.visual.prepool_im(img) - return img_encoding, img_im - - def encode_text(self, x): - with torch.no_grad(): - tokens = tokenize(x).to(self.device) - text_feat, text_emb = self.clip_rn50.encode_text_with_embeddings(tokens) - - text_mask = torch.where(tokens==0, tokens, 1) # [1, max_token_len] - return text_feat, text_emb, text_mask - - def forward(self, x, lat, l): - x = self.preprocess(x, dist='clip') - - in_type = x.dtype - in_shape = x.shape - x = x[:,:3] # select RGB - x, im = self.encode_image(x) - x = x.to(in_type) - - l_enc, l_emb, l_mask = self.encode_text(l) - l_input = l_emb if 'word' in self.lang_fusion_type else l_enc - l_input = l_input.to(dtype=x.dtype) - - assert x.shape[1] == self.input_dim - x = self.conv1(x) - - # x = self.lang_fuser1(x, l_input, x2_mask=l_mask, x2_proj=self.lang_proj1) - # x = self.up1(x, im[-2]) - # x = self.lat_fusion1(x, lat[-6]) - - # x = self.lang_fuser2(x, l_input, x2_mask=l_mask, x2_proj=self.lang_proj2) - # x = self.up2(x, im[-3]) - # x = self.lat_fusion2(x, lat[-5]) - if (x.shape[0] > 8) and ((x.shape[0] % 36) == 0): - l_input = l_input.repeat_interleave(36, dim=0) - - x = self.lang_fuser3(x, l_input, x2_mask=l_mask, x2_proj=self.lang_proj3) - x = self.up3(x, im[-4]) - x = 
self.lat_fusion3(x, lat[-4]) - - x = self.layer1(x) - x = self.lat_fusion4(x, lat[-3]) - - x = self.layer2(x) - x = self.lat_fusion5(x, lat[-2]) - - x = self.layer3(x) - x = self.lat_fusion6(x, lat[-1]) - - x = self.conv2(x) - - x = F.interpolate(x, size=(in_shape[-2], in_shape[-1]), mode='bilinear') - return x \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train20_gptmixcliport5_new_pickplace_demo10.sh b/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train20_gptmixcliport5_new_pickplace_demo10.sh deleted file mode 100644 index 05dc4c65d971c214036ba642772e0f639607fd6b..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/scripts/metascripts/train20_gptmixcliport5_new_pickplace_demo10.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/bin/bash -#SBATCH -c 10 -#SBATCH -n 1 -#SBATCH -o logs/%j.out -#SBATCH --exclusive -STEPS=${1-'50000'} - - -sh scripts/traintest_scripts/train_test_multi_task_goal_demo10.sh data \ - "[stack-block-pyramid,align-box-corner,put-block-in-bowl,packing-boxes,block-insertion,color_linked_ball_bowl_ordering,color_specific_container_fill,insert_blocks_into_fixture,sort_insert_color_coordinated_blocks,color_ordered_blocks_on_pallet,color-coordinated-sphere-insertion,rainbow-stack,put-block-in-bowl,vertical-insertion-blocks,stack-blocks-in-container,'Four-corner-pyramid-challenge','create-pyramid-with-color-coded-ells','align-balls-in-colored-zones','construct-corner-blocks','color-linked-ball-bowl-ordering','create-pyramid-blocks-and-container','color-specific-container-fill','color-ordered-container-arrangement','pyramid-blocks-assemble']" \ - "[stack-block-pyramid,put-block-in-bowl,align-box-corner,packing-boxes,block-insertion]" \ - gpt10_mixcliport5_task_new diff --git a/spaces/GeorgeOrville/bingo/cloudflare/worker.js b/spaces/GeorgeOrville/bingo/cloudflare/worker.js deleted file mode 100644 index e0debd750615f1329b2c72fbce73e1b9291f7137..0000000000000000000000000000000000000000 --- a/spaces/GeorgeOrville/bingo/cloudflare/worker.js +++ /dev/null @@ -1,18 +0,0 @@ -const TRAGET_HOST='hf4all-bingo.hf.space' // 请将此域名改成你自己的,域名信息在设置》站点域名查看。 - -export default { - async fetch(request) { - const uri = new URL(request.url); - if (uri.protocol === 'http:') { - uri.protocol = 'https:'; - return new Response('', { - status: 301, - headers: { - location: uri.toString(), - }, - }) - } - uri.host = TRAGET_HOST - return fetch(new Request(uri.toString(), request)); - }, -}; diff --git a/spaces/GilbertClaus/VideoCutter/app.py b/spaces/GilbertClaus/VideoCutter/app.py deleted file mode 100644 index f4ae2705538839d94e917e276a8123bce6ec0b97..0000000000000000000000000000000000000000 --- a/spaces/GilbertClaus/VideoCutter/app.py +++ /dev/null @@ -1,78 +0,0 @@ -import os -import streamlit as st -from streamlit_option_menu import option_menu -from youtube import youtube, download_youtube -from pornhub import pornhub -from iwara import iwara -# from megaDL import mega_dl -from rule34 import rule34 -from paipancon import paipancon -from trailer import trailer -from others import * - -# Navigasi Sidebar -options = ['Youtube', 'Pornhub', 'Iwara', 'Mega', 'Rule34', 'Paipancon', 'Trailer'] -with st.sidebar: - selected = option_menu("Video Downloader", options, - icons=['play', 'fire', 'star', 'moon','gear', 'house', 'lightning'], menu_icon="cast", default_index=0) - -functions = [youtube, pornhub, iwara, download_youtube, rule34, paipancon, trailer] - -if selected: - index = options.index(selected) - fungsi = functions[index] - st.title(f"{selected} Video 
Downloader and Cutter") - st.write(f"Download dan potong sebagian video {selected}.") - if selected == 'Youtube' or selected == 'Pornhub': - video_link = st.text_input("Link Video", value='https://www.youtube.com/watch?v=ZGltvcmVSAk') - resolution = st.selectbox("Pilih Resolusi", (360, 480, 720), 2) - elif selected == 'Iwara' or selected == 'Mega': - name = st.text_input("Nama File") - video_link = st.text_input("Link Video") - else: - video_link = st.text_input("Link Video") - - choice = st.radio('Pilih Proses:', ['Potong Video', 'Compress Video', 'Cuma Download'], 2) - - if choice == 'Potong Video': - start_time = st.text_input("Start Time", value='00:07:12.000') - end_time = st.text_input("End Time", value='00:07:31.000') - - if st.button(f"Download and Cut {selected}"): - if selected == 'Youtube' or selected == 'Pornhub': - video_file, judul_video, video_info, thumbnail_file = fungsi(video_link, resolution) - elif selected == 'Iwara' or selected == 'Mega': - video_file, judul_video, video_info, thumbnail_file = fungsi(video_link, name) - else: - video_file, judul_video, video_info, thumbnail_file = fungsi(video_link) - video_file = cut_video(video_file, judul_video, start_time, end_time) - file_size = os.path.getsize(video_file) - session(video_info, video_file, thumbnail_file, choice) - st.text_input(f"Video '{judul_video}' setelah diproses:", convert_size(file_size)) - - elif choice == 'Compress Video': - compress = st.selectbox("Pilih Resolusi Compress", (360, 480, 720), 2) - - if st.button(f"Download and Compress {selected}"): - if selected == 'Youtube' or selected == 'Pornhub': - video_file, judul_video, video_info, thumbnail_file = fungsi(video_link, resolution) - elif selected == 'Iwara' or selected == 'Mega': - video_file, judul_video, video_info, thumbnail_file = fungsi(video_link, name) - else: - video_file, judul_video, video_info, thumbnail_file = fungsi(video_link) - video_file = convert_videos(compress, video_file) - file_size = os.path.getsize(video_file) - session(video_info, video_file, thumbnail_file, choice) - st.text_input(f"Video '{judul_video}' setelah diproses:", convert_size(file_size)) - - else: - if st.button(f"Download {selected}"): - if selected == 'Youtube' or selected == 'Pornhub': - video_file, judul_video, video_info, thumbnail_file = fungsi(video_link, resolution) - elif selected == 'Iwara' or selected == 'Mega': - video_file, judul_video, video_info, thumbnail_file = fungsi(video_link, name) - else: - video_file, judul_video, video_info, thumbnail_file = fungsi(video_link) - file_size = os.path.getsize(video_file) - session(video_info, video_file, thumbnail_file, choice) - st.text_input(f"Video '{judul_video}' setelah diproses:", convert_size(file_size)) diff --git a/spaces/GilbertClaus/VideoCutter/iwara.py b/spaces/GilbertClaus/VideoCutter/iwara.py deleted file mode 100644 index 5ea61228cfffe4c9215f1394c4db518d3c86e571..0000000000000000000000000000000000000000 --- a/spaces/GilbertClaus/VideoCutter/iwara.py +++ /dev/null @@ -1,376 +0,0 @@ -import requests, hashlib, os -from others import * - -api_url = 'https://api.iwara.tv' -file_url = 'https://files.iwara.tv' - -class BearerAuth(requests.auth.AuthBase): - """Bearer Authentication""" - def __init__(self, token): - self.token = token - - def __call__(self, r): - r.headers['Authorization'] = 'Bearer ' + self.token - return r - -class ApiClient: - def __init__(self, email, password): - self.email = email - self.password = password - - # self.headers = { - # 'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; 
WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36', - # 'X-Version': 's' - # } - - # API - self.api_url = api_url - self.file_url = file_url - self.timeout = 30 - # self.max_retries = 5 - self.download_timeout = 300 - self.token = None - - # HTML - # self.html_url = html_url - - # Cloudscraper - # self.scraper = cloudscraper.create_scraper(browser={'browser': 'firefox','platform': 'windows','mobile': False}, - # # interpreter = 'nodejs' - # ) - # Requests-html - # self.session = HTMLSession() - - def login(self) -> requests.Response: - url = self.api_url + '/user/login' - json = {'email': self.email, 'password': self.password} - r = requests.post(url, json=json, timeout=self.timeout) - try: - self.token = r.json()['token'] - print('API Login success') - except: - print('API Login failed') - - # try: - # # Cloudscraper - # # r = self.scraper.post(url, json=json, headers=self.headers, timeout=self.timeout) - - # # Requests-html - # r = self.session.post(url, json=json, headers=self.headers, timeout=self.timeout) - # except: - # print('BS4 Login failed') - - return r - - # limit query is not working - def get_videos(self, sort = 'date', rating = 'all', page = 0, limit = 32, subscribed = False) -> requests.Response: - """# Get new videos from iwara.tv - - sort: date, trending, popularity, views, likes - - rating: all, general, ecchi - """ - url = self.api_url + '/videos' - params = {'sort': sort, - 'rating': rating, - 'page': page, - 'limit': limit, - 'subscribed': 'true' if subscribed else 'false', - } - if self.token is None: - r = requests.get(url, params=params, timeout=self.timeout) - else: - - # Verbose Debug - # request = requests.Request('GET', url, params=params, auth=BearerAuth(self.token)) - # print(request.prepare().method, request.prepare().url, request.prepare().headers, request.prepare().body, sep='\n') - # r = requests.Session().send(request.prepare()) - - r = requests.get(url, params=params, auth=BearerAuth(self.token), timeout=self.timeout) - - #Debug - print("[DEBUG] get_videos response:", r) - - return r - - def get_video(self, video_id) -> requests.Response: - """# Get video info from iwara.tv - """ - url = self.api_url + '/video/' + video_id - - if self.token is None: - r = requests.get(url, timeout=self.timeout) - else: - r = requests.get(url, auth=BearerAuth(self.token), timeout=self.timeout) - - #Debug - print("[DEBUG] get_video response:", r) - - return r - - def download_video_thumbnail(self, video_id) -> str: - """# Download video thumbnail from iwara.tv - """ - video = self.get_video(video_id).json() - - file_id = video['file']['id'] - thumbnail_id = video['thumbnail'] - - url = self.file_url + '/image/original/' + file_id + '/thumbnail-{:02d}.jpg'.format(thumbnail_id) - - thumbnail_file_name = video_id + '.jpg' - - if (os.path.exists(thumbnail_file_name)): - print(f"Video ID {video_id} thumbnail already downloaded, skipped downloading. 
") - return thumbnail_file_name - - print(f"Downloading thumbnail for video ID: {video_id} ...") - with open(thumbnail_file_name, "wb") as f: - for chunk in requests.get(url).iter_content(chunk_size=1024): - if chunk: - f.write(chunk) - f.flush() - - return thumbnail_file_name - - def download_video(self, video_id) -> str: - """# Download video from iwara.tv - """ - - # html - # url = self.html_url + '/video/' + video_id - - # Cloudscraer - # html = self.scraper.get(url, auth=BearerAuth(self.token), timeout=self.timeout).text - - # Requests-html - # html = self.session.get(url, auth=BearerAuth(self.token), timeout=self.timeout).text - - # print(html) - # html = BeautifulSoup(, 'html.parser') - # downloadLink = html.find('div', class_='dropdown_content') - # print(downloadLink) - - # API - try: - video = self.get_video(video_id).json() - except Exception as e: - raise Exception(f"Failed to get video info for video ID: {video_id}, error: {e}") - - #Debug - print(video) - - url = video['fileUrl'] - file_id = video['file']['id'] - expires = url.split('/')[4].split('?')[1].split('&')[0].split('=')[1] - - # IMPORTANT: This might change in the future. - SHA_postfix = "_5nFp9kmbNnHdAFhaqMvt" - - SHA_key = file_id + "_" + expires + SHA_postfix - hash = hashlib.sha1(SHA_key.encode('utf-8')).hexdigest() - - headers = {"X-Version": hash} - - resources = requests.get(url, headers=headers, auth=BearerAuth(self.token), timeout=self.timeout).json() - - #Debug - print(resources) - - resources_by_quality = [None for i in range(10)] - - for resource in resources: - if resource['name'] == 'Source': - resources_by_quality[0] = resource - # elif resource['name'] == '1080': - # resources_by_quality[1] = resource - # elif resource['name'] == '720': - # resources_by_quality[2] = resource - # elif resource['name'] == '480': - # resources_by_quality[3] = resource - # elif resource['name'] == '540': - # resources_by_quality[4] = resource - # elif resource['name'] == '360': - # resources_by_quality[5] = resource - - for resource in resources_by_quality: - if resource is not None: - #Debug - print(resource) - - download_link = "https:" + resource['src']['download'] - file_type = resource['type'].split('/')[1] - - video_file_name = video_id + '.' + file_type - - if (os.path.exists(video_file_name)): - print(f"Video ID {video_id} Already downloaded, skipped downloading. 
") - return video_file_name - - print(f"Downloading video ID: {video_id} ...") - try: - with open(video_file_name, "wb") as f: - for chunk in requests.get(download_link).iter_content(chunk_size=1024): - if chunk: - f.write(chunk) - f.flush() - return video_file_name - except Exception as e: - os.remove(video_file_name) - raise Exception(f"Failed to download video ID: {video_id}, error: {e}") - - - raise Exception("No video with Source quality found") - -# ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - - - -# ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - -### download video from iwara.tv -### usage: python iwara [url] -### by AngelBottomless @ github -# download from iwara page -import requests -# use selenium to get video url -from selenium import webdriver -import argparse - -def download_video(url): - # save video to local - filename = url.split('/')[-1] + '.mp4' - # get video - driver = run_webdriver(url) - click_accept(driver) - driver.implicitly_wait(2) - click_play(driver) - url = find_video_url(driver) - # download video - r = requests.get(url) - with open(filename, 'wb') as f: - f.write(r.content) - # close driver - driver.close() - -def download_with_retry(url, retry=3): - # retry download - for _ in range(retry): - try: - download_video(url) - return True - except: - print('download failed, retrying...') - continue - return False - -def run_webdriver(url): - # use selenium to get video url - # mute chrome - chrome_options = webdriver.ChromeOptions() - chrome_options.add_argument("--mute-audio") - # run webdriver - driver = webdriver.Chrome(options=chrome_options) - driver.get(url) - driver.implicitly_wait(4) - return driver - -def click_accept(driver): - # xpath = /html/body/div[3]/div/div[2]/button[1] - button = driver.find_element('xpath', '/html/body/div[3]/div/div[2]/button[1]') - button.click() -def click_play(driver): - # xpath = //*[@id="vjs_video_3"]/button - button = driver.find_element('xpath', '//*[@id="vjs_video_3"]/button') - button.click() - -def find_video_url(driver): - # xpath //*[@id="vjs_video_3_html5_api"] - #access 'src' - video = driver.find_element('xpath', '//*[@id="vjs_video_3_html5_api"]') - video_url = video.get_attribute('src') - return video_url - -def track_clipboard(): - import pyperclip - import time - import subprocess - failed_urls = [] - success_urls = set() - print('tracking clipboard...') - # loop to track clipboard - # if clipboard contains url, download video - # track every 1 second - previous = '' - # expect KeyboardInterrupt and return 0 - try: - while True: - # get clipboard - clipboard = pyperclip.paste() - if clipboard != previous: - # if clipboard contains url - if 'iwara.tv' in clipboard: - print('url detected, downloading...') - # use subprocess to download video in background - # ['python', '-m', 'iwara', clipboard] - subprocess.Popen(['python', '-m', 'iwara', clipboard]) - print('download complete') - previous = clipboard - time.sleep(1) - except KeyboardInterrupt: - print('exiting...') - return 0 - -if __name__ == '__main__': - failed_urls = [] - success_urls = set() - import sys - # parse args - parser = argparse.ArgumentParser() - # track clipboard option, when 'track' is used, url is not 
required - parser.add_argument('-t', '--track', action='store_true', help='track clipboard for iwara url') - # add url argument, if not specified, use '' - parser.add_argument('url', nargs='?', default='', help='iwara url') - args = parser.parse_args() - # download video - if args.track: - track_clipboard() - elif 'iwara.tv' in args.url: - result = download_with_retry(args.url) - if not result: - print('download failed') - failed_urls.append(args.url) - else: - print('download complete') - success_urls.add(args.url) - if len(failed_urls) > 0: - print('failed urls:') - for url in failed_urls: - print(url) - # write in ./failed.txt - with open('failed.txt', 'a') as f: - f.write(url + '\n') - sys.exit(1) - else: - print('invalid url') - sys.exit(1) - -# ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - -def iwara(video_url, judul): - # Set the path to the thumbnail directory - directory = "/home/user/app/Iwara" - if not os.path.exists(directory): - os.makedirs(directory) - - judul = judul.replace('_',' ').title().replace('Mmd','MMD').replace('/',' ').replace('Nikke','NIKKE').replace('Fate','FATE').replace('】','】 ').replace(' ', ' ') - thumbnail_url = 'https://saradahentai.com/wp-content/uploads/2023/03/Live-Footage-of-Ashley-Graham-Captured-by-fugtrup-Resident-Evil-4.jpg' - - thumbnail_file = download_file(thumbnail_url, judul, directory) - video_file = download_file(video_url, judul, directory) - - # Mengkonversi video - video_file = convert_videos(720, video_file) - - - video_info = f"Judul: {judul}\n" - - return video_file, judul, video_info, thumbnail_file \ No newline at end of file diff --git a/spaces/Gmq-x/gpt-academic/request_llm/bridge_tgui.py b/spaces/Gmq-x/gpt-academic/request_llm/bridge_tgui.py deleted file mode 100644 index fcf852f0474892bd179843ece3f4a83110bd7756..0000000000000000000000000000000000000000 --- a/spaces/Gmq-x/gpt-academic/request_llm/bridge_tgui.py +++ /dev/null @@ -1,171 +0,0 @@ -''' -Contributed by SagsMug. 
Modified by binary-husky -https://github.com/oobabooga/text-generation-webui/pull/175 -''' - -import asyncio -import json -import random -import string -import websockets -import logging -import time -import threading -import importlib -from toolbox import get_conf, update_ui - - -def random_hash(): - letters = string.ascii_lowercase + string.digits - return ''.join(random.choice(letters) for i in range(9)) - -async def run(context, max_token, temperature, top_p, addr, port): - params = { - 'max_new_tokens': max_token, - 'do_sample': True, - 'temperature': temperature, - 'top_p': top_p, - 'typical_p': 1, - 'repetition_penalty': 1.05, - 'encoder_repetition_penalty': 1.0, - 'top_k': 0, - 'min_length': 0, - 'no_repeat_ngram_size': 0, - 'num_beams': 1, - 'penalty_alpha': 0, - 'length_penalty': 1, - 'early_stopping': True, - 'seed': -1, - } - session = random_hash() - - async with websockets.connect(f"ws://{addr}:{port}/queue/join") as websocket: - while content := json.loads(await websocket.recv()): - #Python3.10 syntax, replace with if elif on older - if content["msg"] == "send_hash": - await websocket.send(json.dumps({ - "session_hash": session, - "fn_index": 12 - })) - elif content["msg"] == "estimation": - pass - elif content["msg"] == "send_data": - await websocket.send(json.dumps({ - "session_hash": session, - "fn_index": 12, - "data": [ - context, - params['max_new_tokens'], - params['do_sample'], - params['temperature'], - params['top_p'], - params['typical_p'], - params['repetition_penalty'], - params['encoder_repetition_penalty'], - params['top_k'], - params['min_length'], - params['no_repeat_ngram_size'], - params['num_beams'], - params['penalty_alpha'], - params['length_penalty'], - params['early_stopping'], - params['seed'], - ] - })) - elif content["msg"] == "process_starts": - pass - elif content["msg"] in ["process_generating", "process_completed"]: - yield content["output"]["data"][0] - # You can search for your desired end indicator and - # stop generation by closing the websocket here - if (content["msg"] == "process_completed"): - break - - - - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 发送至chatGPT,流式获取输出。 - 用于基础的对话功能。 - inputs 是本次问询的输入 - top_p, temperature是chatGPT的内部调优参数 - history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误) - chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容 - additional_fn代表点击的哪个按钮,按钮见functional.py - """ - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - raw_input = "What I would like to say is the following: " + inputs - history.extend([inputs, ""]) - chatbot.append([inputs, ""]) - yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面 - - prompt = raw_input - tgui_say = "" - - model_name, addr_port = llm_kwargs['llm_model'].split('@') - assert ':' in addr_port, "LLM_MODEL 格式不正确!" 
+ llm_kwargs['llm_model'] - addr, port = addr_port.split(':') - - - mutable = ["", time.time()] - def run_coorotine(mutable): - async def get_result(mutable): - # "tgui:galactica-1.3b@localhost:7860" - - async for response in run(context=prompt, max_token=llm_kwargs['max_length'], - temperature=llm_kwargs['temperature'], - top_p=llm_kwargs['top_p'], addr=addr, port=port): - print(response[len(mutable[0]):]) - mutable[0] = response - if (time.time() - mutable[1]) > 3: - print('exit when no listener') - break - asyncio.run(get_result(mutable)) - - thread_listen = threading.Thread(target=run_coorotine, args=(mutable,), daemon=True) - thread_listen.start() - - while thread_listen.is_alive(): - time.sleep(1) - mutable[1] = time.time() - # Print intermediate steps - if tgui_say != mutable[0]: - tgui_say = mutable[0] - history[-1] = tgui_say - chatbot[-1] = (history[-2], history[-1]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - - - -def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience=False): - raw_input = "What I would like to say is the following: " + inputs - prompt = raw_input - tgui_say = "" - model_name, addr_port = llm_kwargs['llm_model'].split('@') - assert ':' in addr_port, "LLM_MODEL 格式不正确!" + llm_kwargs['llm_model'] - addr, port = addr_port.split(':') - - - def run_coorotine(observe_window): - async def get_result(observe_window): - async for response in run(context=prompt, max_token=llm_kwargs['max_length'], - temperature=llm_kwargs['temperature'], - top_p=llm_kwargs['top_p'], addr=addr, port=port): - print(response[len(observe_window[0]):]) - observe_window[0] = response - if (time.time() - observe_window[1]) > 5: - print('exit when no listener') - break - asyncio.run(get_result(observe_window)) - thread_listen = threading.Thread(target=run_coorotine, args=(observe_window,)) - thread_listen.start() - return observe_window[0] diff --git a/spaces/Gradio-Blocks/HairCLIP/README.md b/spaces/Gradio-Blocks/HairCLIP/README.md deleted file mode 100644 index 9c9da54e2e5933a67a6e566ef48e7ad4852d107e..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/HairCLIP/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: HairCLIP -emoji: ⚡ -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -suggested_hardware: t4-small ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference - -https://arxiv.org/abs/2112.05142 diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18_480x480_40k_pascal_context.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18_480x480_40k_pascal_context.py deleted file mode 100644 index 5ff05aa595399d77ee51552c243e489f395a820e..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/hrnet/fcn_hr18_480x480_40k_pascal_context.py +++ /dev/null @@ -1,8 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_hr18.py', '../_base_/datasets/pascal_context.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py' -] -model = dict( - decode_head=dict(num_classes=60), - test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320))) -optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/utils/drop.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/utils/drop.py deleted file mode 100644 index 
4520b0ff407d2a95a864086bdbca0065f222aa63..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/utils/drop.py +++ /dev/null @@ -1,31 +0,0 @@ -"""Modified from https://github.com/rwightman/pytorch-image- -models/blob/master/timm/models/layers/drop.py.""" - -import torch -from torch import nn - - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of - residual blocks). - - Args: - drop_prob (float): Drop rate for paths of model. Dropout rate has - to be between 0 and 1. Default: 0. - """ - - def __init__(self, drop_prob=0.): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - self.keep_prob = 1 - drop_prob - - def forward(self, x): - if self.drop_prob == 0. or not self.training: - return x - shape = (x.shape[0], ) + (1, ) * ( - x.ndim - 1) # work with diff dim tensors, not just 2D ConvNets - random_tensor = self.keep_prob + torch.rand( - shape, dtype=x.dtype, device=x.device) - random_tensor.floor_() # binarize - output = x.div(self.keep_prob) * random_tensor - return output diff --git a/spaces/GuXiaoBei/wechat-chatbot/channel/wechat/wechat_channel.py b/spaces/GuXiaoBei/wechat-chatbot/channel/wechat/wechat_channel.py deleted file mode 100644 index b800fc43753fad893a485eb214cc9602a7f69af9..0000000000000000000000000000000000000000 --- a/spaces/GuXiaoBei/wechat-chatbot/channel/wechat/wechat_channel.py +++ /dev/null @@ -1,176 +0,0 @@ -# encoding:utf-8 - -""" -wechat channel -""" -import itchat -import json -from itchat.content import * -from channel.channel import Channel -from concurrent.futures import ThreadPoolExecutor -from common.log import logger -from config import conf -import requests -import io - -thread_pool = ThreadPoolExecutor(max_workers=8) - - -class WechatChannel(Channel): - - qrcode = b'' - - newInstance=None - - def __init__(self): - pass - - def startup(self): - # login by scan QRCode - newInstance = itchat.load_sync_itchat() - self.newInstance = newInstance - - @newInstance.msg_register(TEXT) - def handler_single_msg(msg): - self.handle(msg) - return None - - @newInstance.msg_register(TEXT, isGroupChat=True) - def handler_group_msg(msg): - self.handle_group(msg) - return None - - newInstance.auto_login(qrCallback=self.qrCallback) - # start message listener - newInstance.run() - - def qrCallback(self, uuid, status, qrcode): - self.qrcode = qrcode - - def getQrCode(self): - return self.qrcode - - def handle(self, msg): - logger.debug("[WX]receive msg: " + json.dumps(msg, ensure_ascii=False)) - from_user_id = msg['FromUserName'] - to_user_id = msg['ToUserName'] # 接收人id - other_user_id = msg['User']['UserName'] # 对手方id - content = msg['Text'] - match_prefix = self.check_prefix(content, conf().get('single_chat_prefix')) - if from_user_id == other_user_id and match_prefix is not None: - # 好友向自己发送消息 - if match_prefix != '': - str_list = content.split(match_prefix, 1) - if len(str_list) == 2: - content = str_list[1].strip() - - img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix')) - if img_match_prefix: - content = content.split(img_match_prefix, 1)[1].strip() - thread_pool.submit(self._do_send_img, content, from_user_id) - else: - thread_pool.submit(self._do_send, content, from_user_id) - - elif to_user_id == other_user_id and match_prefix: - # 自己给好友发送消息 - str_list = content.split(match_prefix, 1) - if len(str_list) == 2: - content = str_list[1].strip() - img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix')) - 
if img_match_prefix: - content = content.split(img_match_prefix, 1)[1].strip() - thread_pool.submit(self._do_send_img, content, to_user_id) - else: - thread_pool.submit(self._do_send, content, to_user_id) - - - def handle_group(self, msg): - logger.debug("[WX]receive group msg: " + json.dumps(msg, ensure_ascii=False)) - group_name = msg['User'].get('NickName', None) - group_id = msg['User'].get('UserName', None) - if not group_name: - return "" - origin_content = msg['Content'] - content = msg['Content'] - content_list = content.split(' ', 1) - context_special_list = content.split('\u2005', 1) - if len(context_special_list) == 2: - content = context_special_list[1] - elif len(content_list) == 2: - content = content_list[1] - - config = conf() - match_prefix = (msg['IsAt'] and not config.get("group_at_off", False)) or self.check_prefix(origin_content, config.get('group_chat_prefix')) \ - or self.check_contain(origin_content, config.get('group_chat_keyword')) - if ('ALL_GROUP' in config.get('group_name_white_list') or group_name in config.get('group_name_white_list') or self.check_contain(group_name, config.get('group_name_keyword_white_list'))) and match_prefix: - img_match_prefix = self.check_prefix(content, conf().get('image_create_prefix')) - if img_match_prefix: - content = content.split(img_match_prefix, 1)[1].strip() - thread_pool.submit(self._do_send_img, content, group_id) - else: - thread_pool.submit(self._do_send_group, content, msg) - - def send(self, msg, receiver): - logger.info('[WX] sendMsg={}, receiver={}'.format(msg, receiver)) - self.newInstance.send(msg, toUserName=receiver) - - def _do_send(self, query, reply_user_id): - try: - if not query: - return - context = dict() - context['from_user_id'] = reply_user_id - reply_text = super().build_reply_content(query, context) - if reply_text: - self.send(conf().get("single_chat_reply_prefix") + reply_text, reply_user_id) - except Exception as e: - logger.exception(e) - - def _do_send_img(self, query, reply_user_id): - try: - if not query: - return - context = dict() - context['type'] = 'IMAGE_CREATE' - img_url = super().build_reply_content(query, context) - if not img_url: - return - - # 图片下载 - pic_res = requests.get(img_url, stream=True) - image_storage = io.BytesIO() - for block in pic_res.iter_content(1024): - image_storage.write(block) - image_storage.seek(0) - - # 图片发送 - logger.info('[WX] sendImage, receiver={}'.format(reply_user_id)) - self.newInstance.send_image(image_storage, reply_user_id) - except Exception as e: - logger.exception(e) - - def _do_send_group(self, query, msg): - if not query: - return - context = dict() - context['from_user_id'] = msg['ActualUserName'] - reply_text = super().build_reply_content(query, context) - if reply_text: - reply_text = '@' + msg['ActualNickName'] + ' ' + reply_text.strip() - self.send(conf().get("group_chat_reply_prefix", "") + reply_text, msg['User']['UserName']) - - - def check_prefix(self, content, prefix_list): - for prefix in prefix_list: - if content.startswith(prefix): - return prefix - return None - - - def check_contain(self, content, keyword_list): - if not keyword_list: - return None - for ky in keyword_list: - if content.find(ky) != -1: - return True - return None diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/registry.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/registry.py deleted file mode 100644 index c46cf61c598be620d973391a92072eb781aac99e..0000000000000000000000000000000000000000 --- 
a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/registry.py +++ /dev/null @@ -1,154 +0,0 @@ -# -------------------------------------------------------- -# Based on timm and MAE-priv code bases -# https://github.com/rwightman/pytorch-image-models/tree/master/timm -# https://github.com/BUPT-PRIV/MAE-priv -# -------------------------------------------------------- -""" Model Registry -Hacked together by / Copyright 2020 Ross Wightman -""" - -import fnmatch -import re -import sys -from collections import defaultdict -from copy import deepcopy - -__all__ = ['list_models', 'is_model', 'model_entrypoint', 'list_modules', 'is_model_in_modules', - 'is_model_default_key', 'has_model_default_key', 'get_model_default_value', 'is_model_pretrained'] - -_module_to_models = defaultdict(set) # dict of sets to check membership of model in module -_model_to_module = {} # mapping of model names to module names -_model_entrypoints = {} # mapping of model names to entrypoint fns -_model_has_pretrained = set() # set of model names that have pretrained weight url present -_model_default_cfgs = dict() # central repo for model default_cfgs - - -def register_model(fn): - # lookup containing module - mod = sys.modules[fn.__module__] - module_name_split = fn.__module__.split('.') - module_name = module_name_split[-1] if len(module_name_split) else '' - - # add model to __all__ in module - model_name = fn.__name__ - if hasattr(mod, '__all__'): - mod.__all__.append(model_name) - else: - mod.__all__ = [model_name] - - # add entries to registry dict/sets - _model_entrypoints[model_name] = fn - _model_to_module[model_name] = module_name - _module_to_models[module_name].add(model_name) - has_pretrained = False # check if model has a pretrained url to allow filtering on this - if hasattr(mod, 'default_cfgs') and model_name in mod.default_cfgs: - # this will catch all models that have entrypoint matching cfg key, but miss any aliasing - # entrypoints or non-matching combos - has_pretrained = 'url' in mod.default_cfgs[model_name] and 'http' in mod.default_cfgs[model_name]['url'] - _model_default_cfgs[model_name] = deepcopy(mod.default_cfgs[model_name]) - if has_pretrained: - _model_has_pretrained.add(model_name) - return fn - - -def _natural_key(string_): - return [int(s) if s.isdigit() else s for s in re.split(r'(\d+)', string_.lower())] - - -def list_models(filter='', module='', pretrained=False, exclude_filters='', name_matches_cfg=False): - """ Return list of available model names, sorted alphabetically - - Args: - filter (str) - Wildcard filter string that works with fnmatch - module (str) - Limit model selection to a specific sub-module (ie 'gen_efficientnet') - pretrained (bool) - Include only models with pretrained weights if True - exclude_filters (str or list[str]) - Wildcard filters to exclude models after including them with filter - name_matches_cfg (bool) - Include only models w/ model_name matching default_cfg name (excludes some aliases) - - Example: - model_list('gluon_resnet*') -- returns all models starting with 'gluon_resnet' - model_list('*resnext*, 'resnet') -- returns all models with 'resnext' in 'resnet' module - """ - if module: - all_models = list(_module_to_models[module]) - else: - all_models = _model_entrypoints.keys() - if filter: - models = [] - include_filters = filter if isinstance(filter, (tuple, list)) else [filter] - for f in include_filters: - include_models = fnmatch.filter(all_models, f) # include these models - if len(include_models): - models = 
set(models).union(include_models) - else: - models = all_models - if exclude_filters: - if not isinstance(exclude_filters, (tuple, list)): - exclude_filters = [exclude_filters] - for xf in exclude_filters: - exclude_models = fnmatch.filter(models, xf) # exclude these models - if len(exclude_models): - models = set(models).difference(exclude_models) - if pretrained: - models = _model_has_pretrained.intersection(models) - if name_matches_cfg: - models = set(_model_default_cfgs).intersection(models) - return list(sorted(models, key=_natural_key)) - - -def is_model(model_name): - """ Check if a model name exists - """ - return model_name in _model_entrypoints - - -def model_entrypoint(model_name): - """Fetch a model entrypoint for specified model name - """ - return _model_entrypoints[model_name] - - -def list_modules(): - """ Return list of module names that contain models / model entrypoints - """ - modules = _module_to_models.keys() - return list(sorted(modules)) - - -def is_model_in_modules(model_name, module_names): - """Check if a model exists within a subset of modules - Args: - model_name (str) - name of model to check - module_names (tuple, list, set) - names of modules to search in - """ - assert isinstance(module_names, (tuple, list, set)) - return any(model_name in _module_to_models[n] for n in module_names) - - -def has_model_default_key(model_name, cfg_key): - """ Query model default_cfgs for existence of a specific key. - """ - if model_name in _model_default_cfgs and cfg_key in _model_default_cfgs[model_name]: - return True - return False - - -def is_model_default_key(model_name, cfg_key): - """ Return truthy value for specified model default_cfg key, False if does not exist. - """ - if model_name in _model_default_cfgs and _model_default_cfgs[model_name].get(cfg_key, False): - return True - return False - - -def get_model_default_value(model_name, cfg_key): - """ Get a specific model default_cfg value by key. None if it doesn't exist. - """ - if model_name in _model_default_cfgs: - return _model_default_cfgs[model_name].get(cfg_key, None) - else: - return None - - -def is_model_pretrained(model_name): - return model_name in _model_has_pretrained diff --git a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/sampler.py b/spaces/HaHaBill/LandShapes-Antarctica/netdissect/sampler.py deleted file mode 100644 index 72f1b46da117403c7f6ddcc1877bd9d70ded962b..0000000000000000000000000000000000000000 --- a/spaces/HaHaBill/LandShapes-Antarctica/netdissect/sampler.py +++ /dev/null @@ -1,134 +0,0 @@ -''' -A sampler is just a list of integer listing the indexes of the -inputs in a data set to sample. For reproducibility, the -FixedRandomSubsetSampler uses a seeded prng to produce the same -sequence always. FixedSubsetSampler is just a wrapper for an -explicit list of integers. - -coordinate_sample solves another sampling problem: when testing -convolutional outputs, we can reduce data explosing by sampling -random points of the feature map rather than the entire feature map. -coordinate_sample does this in a deterministic way that is also -resolution-independent. -''' - -import numpy -import random -from torch.utils.data.sampler import Sampler - -class FixedSubsetSampler(Sampler): - """Represents a fixed sequence of data set indices. - Subsets can be created by specifying a subset of output indexes. 
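The class docstring above is terse, so a concrete illustration of the index/subset relationship it describes may help. The sketch below uses made-up index values and assumes `FixedSubsetSampler` (defined here) is in scope; the behaviour follows directly from the `subset()` and `dereference()` methods that appear just below:

```python
# Illustrative sketch of FixedSubsetSampler.subset()/dereference().
base = FixedSubsetSampler([20, 5, 13, 42])   # a fixed sequence of dataset indices
assert list(base) == [20, 5, 13, 42]
assert len(base) == 4

# subset() takes *output* positions (0 and 2 here) and dereferences them
# back to the underlying dataset indices, returning a new sampler.
sub = base.subset([0, 2])
assert list(sub) == [20, 13]
```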
- """ - def __init__(self, samples): - self.samples = samples - - def __iter__(self): - return iter(self.samples) - - def __len__(self): - return len(self.samples) - - def __getitem__(self, key): - return self.samples[key] - - def subset(self, new_subset): - return FixedSubsetSampler(self.dereference(new_subset)) - - def dereference(self, indices): - ''' - Translate output sample indices (small numbers indexing the sample) - to input sample indices (larger number indexing the original full set) - ''' - return [self.samples[i] for i in indices] - - -class FixedRandomSubsetSampler(FixedSubsetSampler): - """Samples a fixed number of samples from the dataset, deterministically. - Arguments: - data_source, - sample_size, - seed (optional) - """ - def __init__(self, data_source, start=None, end=None, seed=1): - rng = random.Random(seed) - shuffled = list(range(len(data_source))) - rng.shuffle(shuffled) - self.data_source = data_source - super(FixedRandomSubsetSampler, self).__init__(shuffled[start:end]) - - def class_subset(self, class_filter): - ''' - Returns only the subset matching the given rule. - ''' - if isinstance(class_filter, int): - rule = lambda d: d[1] == class_filter - else: - rule = class_filter - return self.subset([i for i, j in enumerate(self.samples) - if rule(self.data_source[j])]) - -def coordinate_sample(shape, sample_size, seeds, grid=13, seed=1, flat=False): - ''' - Returns a (end-start) sets of sample_size grid points within - the shape given. If the shape dimensions are a multiple of 'grid', - then sampled points within the same row will never be duplicated. - ''' - if flat: - sampind = numpy.zeros((len(seeds), sample_size), dtype=int) - else: - sampind = numpy.zeros((len(seeds), 2, sample_size), dtype=int) - assert sample_size <= grid - for j, seed in enumerate(seeds): - rng = numpy.random.RandomState(seed) - # Shuffle the 169 random grid squares, and pick :sample_size. - square_count = grid ** len(shape) - square = numpy.stack(numpy.unravel_index( - rng.choice(square_count, square_count)[:sample_size], - (grid,) * len(shape))) - # Then add a random offset to each x, y and put in the range [0...1) - # Notice this selects the same locations regardless of resolution. - uniform = (square + rng.uniform(size=square.shape)) / grid - # TODO: support affine scaling so that we can align receptive field - # centers exactly when sampling neurons in different layers. - coords = (uniform * numpy.array(shape)[:,None]).astype(int) - # Now take sample_size without replacement. We do this in a way - # such that if sample_size is decreased or increased up to 'grid', - # the selected points become a subset, not totally different points. - if flat: - sampind[j] = numpy.ravel_multi_index(coords, dims=shape) - else: - sampind[j] = coords - return sampind - -if __name__ == '__main__': - from numpy.testing import assert_almost_equal - # Test that coordinate_sample is deterministic, in-range, and scalable. 
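As a side note on the self-test that follows: the 13x13 coordinates asserted below are exactly the 26x26 ones halved, which is the "resolution-independent" behaviour the module docstring refers to. A compact way to state that property, assuming `coordinate_sample` from this module is in scope (illustrative, not part of the original test), would be:

```python
# With the same seed, halving the feature-map size halves the sampled
# coordinates: the same relative grid cells are chosen at every resolution.
big = coordinate_sample((26, 26), 10, range(101, 102))
small = coordinate_sample((13, 13), 10, range(101, 102))
assert (big // 2 == small).all()
```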
- assert_almost_equal(coordinate_sample((26, 26), 10, range(101, 102)), - [[[14, 0, 12, 11, 8, 13, 11, 20, 7, 20], - [ 9, 22, 7, 11, 23, 18, 21, 15, 2, 5]]]) - assert_almost_equal(coordinate_sample((13, 13), 10, range(101, 102)), - [[[ 7, 0, 6, 5, 4, 6, 5, 10, 3, 20 // 2], - [ 4, 11, 3, 5, 11, 9, 10, 7, 1, 5 // 2]]]) - assert_almost_equal(coordinate_sample((13, 13), 10, range(100, 102), - flat=True), - [[ 8, 24, 67, 103, 87, 79, 138, 94, 98, 53], - [ 95, 11, 81, 70, 63, 87, 75, 137, 40, 2+10*13]]) - assert_almost_equal(coordinate_sample((13, 13), 10, range(101, 103), - flat=True), - [[ 95, 11, 81, 70, 63, 87, 75, 137, 40, 132], - [ 0, 78, 114, 111, 66, 45, 72, 73, 79, 135]]) - assert_almost_equal(coordinate_sample((26, 26), 10, range(101, 102), - flat=True), - [[373, 22, 319, 297, 231, 356, 307, 535, 184, 5+20*26]]) - # Test FixedRandomSubsetSampler - fss = FixedRandomSubsetSampler(range(10)) - assert len(fss) == 10 - assert_almost_equal(list(fss), [8, 0, 3, 4, 5, 2, 9, 6, 7, 1]) - fss = FixedRandomSubsetSampler(range(10), 3, 8) - assert len(fss) == 5 - assert_almost_equal(list(fss), [4, 5, 2, 9, 6]) - fss = FixedRandomSubsetSampler([(i, i % 3) for i in range(10)], - class_filter=1) - assert len(fss) == 3 - assert_almost_equal(list(fss), [4, 7, 1]) diff --git a/spaces/Hallucinate/demo/ldm/modules/losses/contperceptual.py b/spaces/Hallucinate/demo/ldm/modules/losses/contperceptual.py deleted file mode 100644 index 672c1e32a1389def02461c0781339681060c540e..0000000000000000000000000000000000000000 --- a/spaces/Hallucinate/demo/ldm/modules/losses/contperceptual.py +++ /dev/null @@ -1,111 +0,0 @@ -import torch -import torch.nn as nn - -from taming.modules.losses.vqperceptual import * # TODO: taming dependency yes/no? - - -class LPIPSWithDiscriminator(nn.Module): - def __init__(self, disc_start, logvar_init=0.0, kl_weight=1.0, pixelloss_weight=1.0, - disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0, - perceptual_weight=1.0, use_actnorm=False, disc_conditional=False, - disc_loss="hinge"): - - super().__init__() - assert disc_loss in ["hinge", "vanilla"] - self.kl_weight = kl_weight - self.pixel_weight = pixelloss_weight - self.perceptual_loss = LPIPS().eval() - self.perceptual_weight = perceptual_weight - # output log variance - self.logvar = nn.Parameter(torch.ones(size=()) * logvar_init) - - self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels, - n_layers=disc_num_layers, - use_actnorm=use_actnorm - ).apply(weights_init) - self.discriminator_iter_start = disc_start - self.disc_loss = hinge_d_loss if disc_loss == "hinge" else vanilla_d_loss - self.disc_factor = disc_factor - self.discriminator_weight = disc_weight - self.disc_conditional = disc_conditional - - def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None): - if last_layer is not None: - nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0] - else: - nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0] - - d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4) - d_weight = torch.clamp(d_weight, 0.0, 1e4).detach() - d_weight = d_weight * self.discriminator_weight - return d_weight - - def forward(self, inputs, reconstructions, posteriors, optimizer_idx, - global_step, last_layer=None, cond=None, split="train", - weights=None): - rec_loss = torch.abs(inputs.contiguous() - 
reconstructions.contiguous()) - if self.perceptual_weight > 0: - p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous()) - rec_loss = rec_loss + self.perceptual_weight * p_loss - - nll_loss = rec_loss / torch.exp(self.logvar) + self.logvar - weighted_nll_loss = nll_loss - if weights is not None: - weighted_nll_loss = weights*nll_loss - weighted_nll_loss = torch.sum(weighted_nll_loss) / weighted_nll_loss.shape[0] - nll_loss = torch.sum(nll_loss) / nll_loss.shape[0] - kl_loss = posteriors.kl() - kl_loss = torch.sum(kl_loss) / kl_loss.shape[0] - - # now the GAN part - if optimizer_idx == 0: - # generator update - if cond is None: - assert not self.disc_conditional - logits_fake = self.discriminator(reconstructions.contiguous()) - else: - assert self.disc_conditional - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1)) - g_loss = -torch.mean(logits_fake) - - if self.disc_factor > 0.0: - try: - d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer) - except RuntimeError: - assert not self.training - d_weight = torch.tensor(0.0) - else: - d_weight = torch.tensor(0.0) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - loss = weighted_nll_loss + self.kl_weight * kl_loss + d_weight * disc_factor * g_loss - - log = {"{}/total_loss".format(split): loss.clone().detach().mean(), "{}/logvar".format(split): self.logvar.detach(), - "{}/kl_loss".format(split): kl_loss.detach().mean(), "{}/nll_loss".format(split): nll_loss.detach().mean(), - "{}/rec_loss".format(split): rec_loss.detach().mean(), - "{}/d_weight".format(split): d_weight.detach(), - "{}/disc_factor".format(split): torch.tensor(disc_factor), - "{}/g_loss".format(split): g_loss.detach().mean(), - } - return loss, log - - if optimizer_idx == 1: - # second pass for discriminator update - if cond is None: - logits_real = self.discriminator(inputs.contiguous().detach()) - logits_fake = self.discriminator(reconstructions.contiguous().detach()) - else: - logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1)) - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1)) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - d_loss = disc_factor * self.disc_loss(logits_real, logits_fake) - - log = {"{}/disc_loss".format(split): d_loss.clone().detach().mean(), - "{}/logits_real".format(split): logits_real.detach().mean(), - "{}/logits_fake".format(split): logits_fake.detach().mean() - } - return d_loss, log - diff --git a/spaces/Hallucinate/demo/midas/blocks.py b/spaces/Hallucinate/demo/midas/blocks.py deleted file mode 100644 index 6d87a00680bb6ed9a6d7c3043ea30a1e90361794..0000000000000000000000000000000000000000 --- a/spaces/Hallucinate/demo/midas/blocks.py +++ /dev/null @@ -1,439 +0,0 @@ -import torch -import torch.nn as nn - -from .backbones.beit import ( - _make_pretrained_beitl16_512, - _make_pretrained_beitl16_384, - _make_pretrained_beitb16_384, - forward_beit, -) -from .backbones.swin_common import ( - forward_swin, -) -from .backbones.swin2 import ( - _make_pretrained_swin2l24_384, - _make_pretrained_swin2b24_384, - _make_pretrained_swin2t16_256, -) -from .backbones.swin import ( - _make_pretrained_swinl12_384, -) -from .backbones.levit import ( - _make_pretrained_levit_384, - forward_levit, -) -from .backbones.vit import ( - _make_pretrained_vitb_rn50_384, - 
_make_pretrained_vitl16_384, - _make_pretrained_vitb16_384, - forward_vit, -) - -def _make_encoder(backbone, features, use_pretrained, groups=1, expand=False, exportable=True, hooks=None, - use_vit_only=False, use_readout="ignore", in_features=[96, 256, 512, 1024]): - if backbone == "beitl16_512": - pretrained = _make_pretrained_beitl16_512( - use_pretrained, hooks=hooks, use_readout=use_readout - ) - scratch = _make_scratch( - [256, 512, 1024, 1024], features, groups=groups, expand=expand - ) # BEiT_512-L (backbone) - elif backbone == "beitl16_384": - pretrained = _make_pretrained_beitl16_384( - use_pretrained, hooks=hooks, use_readout=use_readout - ) - scratch = _make_scratch( - [256, 512, 1024, 1024], features, groups=groups, expand=expand - ) # BEiT_384-L (backbone) - elif backbone == "beitb16_384": - pretrained = _make_pretrained_beitb16_384( - use_pretrained, hooks=hooks, use_readout=use_readout - ) - scratch = _make_scratch( - [96, 192, 384, 768], features, groups=groups, expand=expand - ) # BEiT_384-B (backbone) - elif backbone == "swin2l24_384": - pretrained = _make_pretrained_swin2l24_384( - use_pretrained, hooks=hooks - ) - scratch = _make_scratch( - [192, 384, 768, 1536], features, groups=groups, expand=expand - ) # Swin2-L/12to24 (backbone) - elif backbone == "swin2b24_384": - pretrained = _make_pretrained_swin2b24_384( - use_pretrained, hooks=hooks - ) - scratch = _make_scratch( - [128, 256, 512, 1024], features, groups=groups, expand=expand - ) # Swin2-B/12to24 (backbone) - elif backbone == "swin2t16_256": - pretrained = _make_pretrained_swin2t16_256( - use_pretrained, hooks=hooks - ) - scratch = _make_scratch( - [96, 192, 384, 768], features, groups=groups, expand=expand - ) # Swin2-T/16 (backbone) - elif backbone == "swinl12_384": - pretrained = _make_pretrained_swinl12_384( - use_pretrained, hooks=hooks - ) - scratch = _make_scratch( - [192, 384, 768, 1536], features, groups=groups, expand=expand - ) # Swin-L/12 (backbone) - elif backbone == "next_vit_large_6m": - from .backbones.next_vit import _make_pretrained_next_vit_large_6m - pretrained = _make_pretrained_next_vit_large_6m(hooks=hooks) - scratch = _make_scratch( - in_features, features, groups=groups, expand=expand - ) # Next-ViT-L on ImageNet-1K-6M (backbone) - elif backbone == "levit_384": - pretrained = _make_pretrained_levit_384( - use_pretrained, hooks=hooks - ) - scratch = _make_scratch( - [384, 512, 768], features, groups=groups, expand=expand - ) # LeViT 384 (backbone) - elif backbone == "vitl16_384": - pretrained = _make_pretrained_vitl16_384( - use_pretrained, hooks=hooks, use_readout=use_readout - ) - scratch = _make_scratch( - [256, 512, 1024, 1024], features, groups=groups, expand=expand - ) # ViT-L/16 - 85.0% Top1 (backbone) - elif backbone == "vitb_rn50_384": - pretrained = _make_pretrained_vitb_rn50_384( - use_pretrained, - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - ) - scratch = _make_scratch( - [256, 512, 768, 768], features, groups=groups, expand=expand - ) # ViT-H/16 - 85.0% Top1 (backbone) - elif backbone == "vitb16_384": - pretrained = _make_pretrained_vitb16_384( - use_pretrained, hooks=hooks, use_readout=use_readout - ) - scratch = _make_scratch( - [96, 192, 384, 768], features, groups=groups, expand=expand - ) # ViT-B/16 - 84.6% Top1 (backbone) - elif backbone == "resnext101_wsl": - pretrained = _make_pretrained_resnext101_wsl(use_pretrained) - scratch = _make_scratch([256, 512, 1024, 2048], features, groups=groups, expand=expand) # efficientnet_lite3 - elif 
backbone == "efficientnet_lite3": - pretrained = _make_pretrained_efficientnet_lite3(use_pretrained, exportable=exportable) - scratch = _make_scratch([32, 48, 136, 384], features, groups=groups, expand=expand) # efficientnet_lite3 - else: - print(f"Backbone '{backbone}' not implemented") - assert False - - return pretrained, scratch - - -def _make_scratch(in_shape, out_shape, groups=1, expand=False): - scratch = nn.Module() - - out_shape1 = out_shape - out_shape2 = out_shape - out_shape3 = out_shape - if len(in_shape) >= 4: - out_shape4 = out_shape - - if expand: - out_shape1 = out_shape - out_shape2 = out_shape*2 - out_shape3 = out_shape*4 - if len(in_shape) >= 4: - out_shape4 = out_shape*8 - - scratch.layer1_rn = nn.Conv2d( - in_shape[0], out_shape1, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer2_rn = nn.Conv2d( - in_shape[1], out_shape2, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer3_rn = nn.Conv2d( - in_shape[2], out_shape3, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - if len(in_shape) >= 4: - scratch.layer4_rn = nn.Conv2d( - in_shape[3], out_shape4, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - - return scratch - - -def _make_pretrained_efficientnet_lite3(use_pretrained, exportable=False): - efficientnet = torch.hub.load( - "rwightman/gen-efficientnet-pytorch", - "tf_efficientnet_lite3", - pretrained=use_pretrained, - exportable=exportable - ) - return _make_efficientnet_backbone(efficientnet) - - -def _make_efficientnet_backbone(effnet): - pretrained = nn.Module() - - pretrained.layer1 = nn.Sequential( - effnet.conv_stem, effnet.bn1, effnet.act1, *effnet.blocks[0:2] - ) - pretrained.layer2 = nn.Sequential(*effnet.blocks[2:3]) - pretrained.layer3 = nn.Sequential(*effnet.blocks[3:5]) - pretrained.layer4 = nn.Sequential(*effnet.blocks[5:9]) - - return pretrained - - -def _make_resnet_backbone(resnet): - pretrained = nn.Module() - pretrained.layer1 = nn.Sequential( - resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool, resnet.layer1 - ) - - pretrained.layer2 = resnet.layer2 - pretrained.layer3 = resnet.layer3 - pretrained.layer4 = resnet.layer4 - - return pretrained - - -def _make_pretrained_resnext101_wsl(use_pretrained): - resnet = torch.hub.load("facebookresearch/WSL-Images", "resnext101_32x8d_wsl") - return _make_resnet_backbone(resnet) - - - -class Interpolate(nn.Module): - """Interpolation module. - """ - - def __init__(self, scale_factor, mode, align_corners=False): - """Init. - - Args: - scale_factor (float): scaling - mode (str): interpolation mode - """ - super(Interpolate, self).__init__() - - self.interp = nn.functional.interpolate - self.scale_factor = scale_factor - self.mode = mode - self.align_corners = align_corners - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: interpolated data - """ - - x = self.interp( - x, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners - ) - - return x - - -class ResidualConvUnit(nn.Module): - """Residual convolution module. - """ - - def __init__(self, features): - """Init. - - Args: - features (int): number of features - """ - super().__init__() - - self.conv1 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True - ) - - self.conv2 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True - ) - - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - """Forward pass. 
- - Args: - x (tensor): input - - Returns: - tensor: output - """ - out = self.relu(x) - out = self.conv1(out) - out = self.relu(out) - out = self.conv2(out) - - return out + x - - -class FeatureFusionBlock(nn.Module): - """Feature fusion block. - """ - - def __init__(self, features): - """Init. - - Args: - features (int): number of features - """ - super(FeatureFusionBlock, self).__init__() - - self.resConfUnit1 = ResidualConvUnit(features) - self.resConfUnit2 = ResidualConvUnit(features) - - def forward(self, *xs): - """Forward pass. - - Returns: - tensor: output - """ - output = xs[0] - - if len(xs) == 2: - output += self.resConfUnit1(xs[1]) - - output = self.resConfUnit2(output) - - output = nn.functional.interpolate( - output, scale_factor=2, mode="bilinear", align_corners=True - ) - - return output - - - - -class ResidualConvUnit_custom(nn.Module): - """Residual convolution module. - """ - - def __init__(self, features, activation, bn): - """Init. - - Args: - features (int): number of features - """ - super().__init__() - - self.bn = bn - - self.groups=1 - - self.conv1 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups - ) - - self.conv2 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups - ) - - if self.bn==True: - self.bn1 = nn.BatchNorm2d(features) - self.bn2 = nn.BatchNorm2d(features) - - self.activation = activation - - self.skip_add = nn.quantized.FloatFunctional() - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: output - """ - - out = self.activation(x) - out = self.conv1(out) - if self.bn==True: - out = self.bn1(out) - - out = self.activation(out) - out = self.conv2(out) - if self.bn==True: - out = self.bn2(out) - - if self.groups > 1: - out = self.conv_merge(out) - - return self.skip_add.add(out, x) - - # return out + x - - -class FeatureFusionBlock_custom(nn.Module): - """Feature fusion block. - """ - - def __init__(self, features, activation, deconv=False, bn=False, expand=False, align_corners=True, size=None): - """Init. - - Args: - features (int): number of features - """ - super(FeatureFusionBlock_custom, self).__init__() - - self.deconv = deconv - self.align_corners = align_corners - - self.groups=1 - - self.expand = expand - out_features = features - if self.expand==True: - out_features = features//2 - - self.out_conv = nn.Conv2d(features, out_features, kernel_size=1, stride=1, padding=0, bias=True, groups=1) - - self.resConfUnit1 = ResidualConvUnit_custom(features, activation, bn) - self.resConfUnit2 = ResidualConvUnit_custom(features, activation, bn) - - self.skip_add = nn.quantized.FloatFunctional() - - self.size=size - - def forward(self, *xs, size=None): - """Forward pass. 
- - Returns: - tensor: output - """ - output = xs[0] - - if len(xs) == 2: - res = self.resConfUnit1(xs[1]) - output = self.skip_add.add(output, res) - # output += res - - output = self.resConfUnit2(output) - - if (size is None) and (self.size is None): - modifier = {"scale_factor": 2} - elif size is None: - modifier = {"size": self.size} - else: - modifier = {"size": size} - - output = nn.functional.interpolate( - output, **modifier, mode="bilinear", align_corners=self.align_corners - ) - - output = self.out_conv(output) - - return output - diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/simultaneous_translation/modules/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/simultaneous_translation/modules/__init__.py deleted file mode 100644 index f5ea180f9b4cdb27cd553439b6df9d743105f18c..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/simultaneous_translation/modules/__init__.py +++ /dev/null @@ -1,23 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import os -import importlib -from fairseq import registry - -( - build_monotonic_attention, - register_monotonic_attention, - MONOTONIC_ATTENTION_REGISTRY, - _, -) = registry.setup_registry("--simul-type") - -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - model_name = file[: file.find(".py")] - importlib.import_module( - "examples.simultaneous_translation.modules." + model_name - ) diff --git a/spaces/HarshulNanda/HARM_ML_web_app/categoryPredictor.py b/spaces/HarshulNanda/HARM_ML_web_app/categoryPredictor.py deleted file mode 100644 index 95936bdd5cfd7a4da0e6f4b2fc3a965c4e712e41..0000000000000000000000000000000000000000 --- a/spaces/HarshulNanda/HARM_ML_web_app/categoryPredictor.py +++ /dev/null @@ -1,44 +0,0 @@ -from youtubesearchpython import Video, ResultMode -from colors import colorOf, dataset - -import numpy as np -import matplotlib.pyplot as plt -import requests -import pickle -import warnings -warnings.filterwarnings("ignore") - -def predictCategoryFor(url): - try: - - video = Video.getInfo(url, mode = ResultMode.json) - - text = [video["title"] + " " + video["description"]] - - categories = sorted(list(dataset.keys())) - - education_model = pickle.load(open("./models/educated_model.pkl", "rb")) - education_prediction = education_model.predict(text)[0] - - if education_prediction == 0: - - category_classifier = pickle.load(open("./models/cat_model.pkl", "rb")) - category_prediction = categories[category_classifier.predict(text)[0]] - - sub_cat_clf = pickle.load(open(f"./models/{category_prediction.lower().replace(' ', '_')}_model.pkl", "rb")) - sub_cat_pred = sub_cat_clf.predict_proba(text)[0] - sub_cat_pred *= 100 - subs = sorted(dataset[category_prediction]) - - return ("Educational", category_prediction, subs, sub_cat_pred) - - else: - - return ("Non Educational", "", [], []) - - except: - return ("There must be an error in getting the title and description of the video.", "", [], []) - - -# print(predictCategoryFor(url="https://www.youtube.com/watch?v=bdCX8Nb_2Mg")) - diff --git a/spaces/Hexamind/swarms/filter_wrap.py b/spaces/Hexamind/swarms/filter_wrap.py deleted file mode 100644 index 3eb10533ae0b9eb92e184ab82a6b140b4aafe7d1..0000000000000000000000000000000000000000 --- a/spaces/Hexamind/swarms/filter_wrap.py +++ 
/dev/null @@ -1,95 +0,0 @@ - -import numpy as np -from gym import spaces, Wrapper - - -class FilterWrapper(Wrapper): - """ - :param env: (gym.Env) Gym environment that will be wrapped - """ - - def __init__(self, env): - - self.nb_blues, self.nb_reds = env.nb_blues, env.nb_reds - - self.blue_deads = np.full((self.nb_blues,), False) - self.red_deads = np.full((self.nb_reds,), False) - - env.observation_space = spaces.Tuple(( - spaces.Box(low=0, high=1, shape=(self.nb_blues, 6), dtype=np.float32), - spaces.Box(low=0, high=1, shape=(self.nb_reds, 6), dtype=np.float32), - spaces.Box(low=0, high=1, shape=(self.nb_blues, self.nb_reds), dtype=np.float32), - spaces.Box(low=0, high=1, shape=(self.nb_reds, self.nb_blues), dtype=np.float32), - spaces.Discrete(1), - spaces.Discrete(1))) - - super(FilterWrapper, self).__init__(env) - - def reset(self): - """ - Reset the environment - """ - obs = self.env.reset() - - return self._sort_obs(obs) - - def step(self, action): - """ - :param action: ([float] or int) Action taken by the agent - :return: (np.ndarray, float, bool, dict) observation, reward, is the episode over?, additional informations - """ - - blue_action, red_action = action - - new_ba = [] - index = 0 - for count, alive in enumerate(~self.blue_deads): - if alive: - new_ba.append(blue_action[index]) - index += 1 - else: - new_ba.append(np.array([0, 0, 0])) - blue_action = new_ba - - new_ra = [] - index = 0 - for count, alive in enumerate(~self.red_deads): - if alive: - new_ra.append(red_action[index]) - index += 1 - else: - new_ra.append(np.array([0, 0, 0])) - red_action = new_ra - - action = blue_action, red_action - - obs, reward, done, info = self.env.step(action) - - obs = self._sort_obs(obs) - - return obs, reward, done, info - - def _sort_obs(self, obs): - - blue_obs, red_obs, blues_fire, reds_fire, blue_deads, red_deads = obs - - self.blue_deads = blue_deads - self.red_deads = red_deads - - blue_obs = np.vstack((blue_obs[~self.blue_deads], blue_obs[self.blue_deads])) - red_obs = np.vstack((red_obs[~self.red_deads], red_obs[self.red_deads])) - - blues_fire = self.fire_sort(self.blue_deads, self.red_deads, blues_fire) - reds_fire = self.fire_sort(self.red_deads, self.blue_deads, reds_fire) - - sort_obs = blue_obs, red_obs, blues_fire, reds_fire, sum(blue_deads), sum(red_deads) - - return sort_obs - - def fire_sort(self, dead_friends, dead_foes, friends_fire): - - friends_fire_big = np.zeros_like(friends_fire) - friends_fire = np.compress(~dead_friends, friends_fire, axis=0) - friends_fire = np.compress(~dead_foes, friends_fire, axis=1) - friends_fire_big[:friends_fire.shape[0], :friends_fire.shape[1]] = friends_fire - return friends_fire_big diff --git a/spaces/Hina4867/bingo/src/lib/hooks/use-copy-to-clipboard.tsx b/spaces/Hina4867/bingo/src/lib/hooks/use-copy-to-clipboard.tsx deleted file mode 100644 index 62f7156dca246c46b213151af003a3a177977ccf..0000000000000000000000000000000000000000 --- a/spaces/Hina4867/bingo/src/lib/hooks/use-copy-to-clipboard.tsx +++ /dev/null @@ -1,33 +0,0 @@ -'use client' - -import * as React from 'react' - -export interface useCopyToClipboardProps { - timeout?: number -} - -export function useCopyToClipboard({ - timeout = 2000 -}: useCopyToClipboardProps) { - const [isCopied, setIsCopied] = React.useState(false) - - const copyToClipboard = (value: string) => { - if (typeof window === 'undefined' || !navigator.clipboard?.writeText) { - return - } - - if (!value) { - return - } - - navigator.clipboard.writeText(value).then(() => { - setIsCopied(true) - - 
setTimeout(() => { - setIsCopied(false) - }, timeout) - }) - } - - return { isCopied, copyToClipboard } -} diff --git a/spaces/HusseinHE/psis/share_btn.py b/spaces/HusseinHE/psis/share_btn.py deleted file mode 100644 index 5d4dc51b883650ed947e7dea90f677d817725198..0000000000000000000000000000000000000000 --- a/spaces/HusseinHE/psis/share_btn.py +++ /dev/null @@ -1,83 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getInputImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const isPng = imgEl.src.startsWith(`data:image/png`); - if(isPng){ - const fileName = `sd-perception-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }else{ - const fileName = `sd-perception-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - } - } - - const gradioEl = document.querySelector("gradio-app").shadowRoot || document.querySelector('body > gradio-app'); - - const inputPrompt = gradioEl.querySelector('#prompt textarea').value; - const negativePrompt = gradioEl.querySelector('#negative_prompt textarea').value; - const illusionStrength = gradioEl.querySelector('#illusion_strength input[type="number"]').value; - const controlImage = gradioEl.querySelector('#control_image img'); - const outputImgEl = gradioEl.querySelector('#output img'); - - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const inputFile = await getInputImgFile(outputImgEl); - const urlInputImg = await uploadFile(inputFile); - - const controlFile = await getInputImgFile(controlImage); - const urlControlImg = await uploadFile(controlFile); - - const descriptionMd = ` -### Prompt -- *Prompt*: ${inputPrompt} -- *Negative prompt*: ${negativePrompt} -- *Illusion strength*: ${illusionStrength} -#### Generated Image: - - -#### Control Image: - -`; - const params = new URLSearchParams({ - title: inputPrompt, - description: descriptionMd, - preview: true - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/AP123/IllusionDiffusion/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/examples/discriminative_reranking_nmt/tasks/discriminative_reranking_task.py b/spaces/ICML2022/OFA/fairseq/examples/discriminative_reranking_nmt/tasks/discriminative_reranking_task.py deleted file mode 100644 index 0e7fbba888c1ddd118da8238d644b4ab571177ff..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/discriminative_reranking_nmt/tasks/discriminative_reranking_task.py +++ /dev/null @@ -1,475 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field - -import itertools -import logging -import os - -import numpy as np -import torch - -from fairseq import metrics -from fairseq.data import ( - ConcatDataset, - ConcatSentencesDataset, - data_utils, - Dictionary, - IdDataset, - indexed_dataset, - NestedDictionaryDataset, - NumSamplesDataset, - NumelDataset, - PrependTokenDataset, - RawLabelDataset, - RightPadDataset, - SortDataset, - TruncateDataset, - TokenBlockDataset, -) -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.tasks import FairseqTask, register_task -from omegaconf import II, MISSING - - -EVAL_BLEU_ORDER = 4 -TARGET_METRIC_CHOICES = ChoiceEnum(["bleu", "ter"]) - -logger = logging.getLogger(__name__) - - -@dataclass -class DiscriminativeRerankingNMTConfig(FairseqDataclass): - data: str = field(default=MISSING, metadata={"help": "path to data directory"}) - num_data_splits: int = field( - default=1, metadata={"help": "total number of data splits"} - ) - no_shuffle: bool = field( - default=False, metadata={"help": "do not shuffle training data"} - ) - max_positions: int = field( - default=512, metadata={"help": "number of positional embeddings to learn"} - ) - include_src: bool = field( - default=False, metadata={"help": "include source sentence"} - ) - mt_beam: int = field(default=50, metadata={"help": "beam size of input hypotheses"}) - eval_target_metric: bool = field( - default=False, - metadata={"help": "evaluation with the target metric during validation"}, - ) - target_metric: TARGET_METRIC_CHOICES = field( - default="bleu", metadata={"help": "name of the target metric to optimize for"} - ) - train_subset: str = field( - default=II("dataset.train_subset"), - metadata={"help": "data subset to use for training (e.g. train, valid, test)"}, - ) - seed: int = field( - default=II("common.seed"), - metadata={"help": "pseudo random number generator seed"}, - ) - - -class RerankerScorer(object): - """Scores the target for a given (source (optional), target) input.""" - - def __init__(self, args, mt_beam): - self.mt_beam = mt_beam - - @torch.no_grad() - def generate(self, models, sample, **kwargs): - """Score a batch of translations.""" - net_input = sample["net_input"] - - assert len(models) == 1, "does not support model ensemble" - model = models[0] - - bs = net_input["src_tokens"].shape[0] - assert ( - model.joint_classification == "none" or bs % self.mt_beam == 0 - ), f"invalid batch size ({bs}) for joint classification with beam size ({self.mt_beam})" - - model.eval() - logits = model(**net_input) - - batch_out = model.sentence_forward(logits, net_input["src_tokens"]) - if model.joint_classification == "sent": - batch_out = model.joint_forward( - batch_out.view(self.mt_beam, bs // self.mt_beam, -1) - ) - scores = model.classification_forward( - batch_out.view(bs, 1, -1) - ) # input: B x T x C - - return scores - - -@register_task( - "discriminative_reranking_nmt", dataclass=DiscriminativeRerankingNMTConfig -) -class DiscriminativeRerankingNMTTask(FairseqTask): - """ - Translation rerank task. - The input can be either (src, tgt) sentence pairs or tgt sentence only. 
- """ - - cfg: DiscriminativeRerankingNMTConfig - - def __init__(self, cfg: DiscriminativeRerankingNMTConfig, data_dictionary=None): - super().__init__(cfg) - self.dictionary = data_dictionary - self._max_positions = cfg.max_positions - # args.tokens_per_sample = self._max_positions - # self.num_classes = 1 # for model - - @classmethod - def load_dictionary(cls, cfg, filename): - """Load the dictionary from the filename""" - dictionary = Dictionary.load(filename) - dictionary.add_symbol("") # for loading pretrained XLMR model - - return dictionary - - @classmethod - def setup_task(cls, cfg: DiscriminativeRerankingNMTConfig, **kwargs): - # load data dictionary (assume joint dictionary) - data_path = cfg.data - data_dict = cls.load_dictionary( - cfg, os.path.join(data_path, "input_src/dict.txt") - ) - - logger.info("[input] src dictionary: {} types".format(len(data_dict))) - - return DiscriminativeRerankingNMTTask(cfg, data_dict) - - def load_dataset(self, split, epoch=0, combine=False, **kwargs): - """Load a given dataset split (e.g., train, valid, test).""" - if self.cfg.data.endswith("1"): - data_shard = (epoch - 1) % self.cfg.num_data_splits + 1 - data_path = self.cfg.data[:-1] + str(data_shard) - else: - data_path = self.cfg.data - - def get_path(type, data_split): - return os.path.join(data_path, str(type), data_split) - - def make_dataset(type, dictionary, data_split, combine): - split_path = get_path(type, data_split) - - dataset = data_utils.load_indexed_dataset( - split_path, dictionary, combine=combine, - ) - return dataset - - def load_split(data_split, metric): - input_src = None - if self.cfg.include_src: - input_src = make_dataset( - "input_src", self.dictionary, data_split, combine=False - ) - assert input_src is not None, "could not find dataset: {}".format( - get_path("input_src", data_split) - ) - - input_tgt = make_dataset( - "input_tgt", self.dictionary, data_split, combine=False - ) - assert input_tgt is not None, "could not find dataset: {}".format( - get_path("input_tgt", data_split) - ) - - label_path = f"{get_path(metric, data_split)}.{metric}" - assert os.path.exists(label_path), f"could not find dataset: {label_path}" - - np_labels = np.loadtxt(label_path) - if self.cfg.target_metric == "ter": - np_labels = -np_labels - label = RawLabelDataset(np_labels) - - return input_src, input_tgt, label - - src_datasets = [] - tgt_datasets = [] - label_datasets = [] - - if split == self.cfg.train_subset: - for k in itertools.count(): - split_k = "train" + (str(k) if k > 0 else "") - prefix = os.path.join(data_path, "input_tgt", split_k) - if not indexed_dataset.dataset_exists(prefix, impl=None): - if k > 0: - break - else: - raise FileNotFoundError(f"Dataset not found: {prefix}") - input_src, input_tgt, label = load_split( - split_k, self.cfg.target_metric - ) - src_datasets.append(input_src) - tgt_datasets.append(input_tgt) - label_datasets.append(label) - else: - input_src, input_tgt, label = load_split(split, self.cfg.target_metric) - src_datasets.append(input_src) - tgt_datasets.append(input_tgt) - label_datasets.append(label) - - if len(tgt_datasets) == 1: - input_tgt, label = tgt_datasets[0], label_datasets[0] - if self.cfg.include_src: - input_src = src_datasets[0] - else: - input_tgt = ConcatDataset(tgt_datasets) - label = ConcatDataset(label_datasets) - if self.cfg.include_src: - input_src = ConcatDataset(src_datasets) - - input_tgt = TruncateDataset(input_tgt, self.cfg.max_positions) - if self.cfg.include_src: - input_src = PrependTokenDataset(input_src, 
self.dictionary.bos()) - input_src = TruncateDataset(input_src, self.cfg.max_positions) - src_lengths = NumelDataset(input_src, reduce=False) - src_tokens = ConcatSentencesDataset(input_src, input_tgt) - else: - src_tokens = PrependTokenDataset(input_tgt, self.dictionary.bos()) - src_lengths = NumelDataset(src_tokens, reduce=False) - - dataset = { - "id": IdDataset(), - "net_input": { - "src_tokens": RightPadDataset( - src_tokens, pad_idx=self.source_dictionary.pad(), - ), - "src_lengths": src_lengths, - }, - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(src_tokens, reduce=True), - "target": label, - } - - dataset = NestedDictionaryDataset(dataset, sizes=[src_tokens.sizes],) - - assert len(dataset) % self.cfg.mt_beam == 0, ( - "dataset size (%d) is not a multiple of beam size (%d)" - % (len(dataset), self.cfg.mt_beam) - ) - - # no need to shuffle valid/test sets - if not self.cfg.no_shuffle and split == self.cfg.train_subset: - - # need to keep all hypothese together - start_idx = np.arange(0, len(dataset), self.cfg.mt_beam) - with data_utils.numpy_seed(self.cfg.seed + epoch): - np.random.shuffle(start_idx) - - idx = np.arange(0, self.cfg.mt_beam) - shuffle = np.tile(idx, (len(start_idx), 1)).reshape(-1) + np.tile( - start_idx, (self.cfg.mt_beam, 1) - ).transpose().reshape(-1) - - dataset = SortDataset(dataset, sort_order=[shuffle],) - - logger.info(f"Loaded {split} with #samples: {len(dataset)}") - - self.datasets[split] = dataset - return self.datasets[split] - - def build_dataset_for_inference(self, src_tokens, src_lengths, **kwargs): - assert not self.cfg.include_src or len(src_tokens[0]) == 2 - input_src = None - if self.cfg.include_src: - input_src = TokenBlockDataset( - [t[0] for t in src_tokens], - [l[0] for l in src_lengths], - block_size=None, # ignored for "eos" break mode - pad=self.source_dictionary.pad(), - eos=self.source_dictionary.eos(), - break_mode="eos", - ) - input_src = PrependTokenDataset(input_src, self.dictionary.bos()) - input_src = TruncateDataset(input_src, self.cfg.max_positions) - - input_tgt = TokenBlockDataset( - [t[-1] for t in src_tokens], - [l[-1] for l in src_lengths], - block_size=None, # ignored for "eos" break mode - pad=self.source_dictionary.pad(), - eos=self.source_dictionary.eos(), - break_mode="eos", - ) - input_tgt = TruncateDataset(input_tgt, self.cfg.max_positions) - if self.cfg.include_src: - src_tokens = ConcatSentencesDataset(input_src, input_tgt) - src_lengths = NumelDataset(input_src, reduce=False) - else: - input_tgt = PrependTokenDataset(input_tgt, self.dictionary.bos()) - src_tokens = input_tgt - src_lengths = NumelDataset(src_tokens, reduce=False) - - dataset = { - "id": IdDataset(), - "net_input": { - "src_tokens": RightPadDataset( - src_tokens, pad_idx=self.source_dictionary.pad(), - ), - "src_lengths": src_lengths, - }, - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(src_tokens, reduce=True), - } - - return NestedDictionaryDataset(dataset, sizes=[src_tokens.sizes],) - - def build_model(self, cfg: FairseqDataclass): - return super().build_model(cfg) - - def build_generator(self, args): - return RerankerScorer(args, mt_beam=self.cfg.mt_beam) - - def max_positions(self): - return self._max_positions - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary - - def create_dummy_batch(self, device): - dummy_target = ( - torch.zeros(self.cfg.mt_beam, EVAL_BLEU_ORDER * 2 + 3).long().to(device) - if not self.cfg.eval_ter - else 
torch.zeros(self.cfg.mt_beam, 3).long().to(device) - ) - - return { - "id": torch.zeros(self.cfg.mt_beam, 1).long().to(device), - "net_input": { - "src_tokens": torch.zeros(self.cfg.mt_beam, 4).long().to(device), - "src_lengths": torch.ones(self.cfg.mt_beam, 1).long().to(device), - }, - "nsentences": 0, - "ntokens": 0, - "target": dummy_target, - } - - def train_step( - self, sample, model, criterion, optimizer, update_num, ignore_grad=False - ): - if ignore_grad and sample is None: - sample = self.create_dummy_batch(model.device) - - return super().train_step( - sample, model, criterion, optimizer, update_num, ignore_grad - ) - - def valid_step(self, sample, model, criterion): - if sample is None: - sample = self.create_dummy_batch(model.device) - - loss, sample_size, logging_output = super().valid_step(sample, model, criterion) - - if not self.cfg.eval_target_metric: - return loss, sample_size, logging_output - - scores = logging_output["scores"] - - if self.cfg.target_metric == "bleu": - assert sample["target"].shape[1] == EVAL_BLEU_ORDER * 2 + 3, ( - "target does not contain enough information (" - + str(sample["target"].shape[1]) - + "for evaluating BLEU" - ) - - max_id = torch.argmax(scores, dim=1) - select_id = max_id + torch.arange( - 0, sample_size * self.cfg.mt_beam, self.cfg.mt_beam - ).to(max_id.device) - bleu_data = sample["target"][select_id, 1:].sum(0).data - - logging_output["_bleu_sys_len"] = bleu_data[0] - logging_output["_bleu_ref_len"] = bleu_data[1] - - for i in range(EVAL_BLEU_ORDER): - logging_output["_bleu_counts_" + str(i)] = bleu_data[2 + i] - logging_output["_bleu_totals_" + str(i)] = bleu_data[ - 2 + EVAL_BLEU_ORDER + i - ] - - elif self.cfg.target_metric == "ter": - assert sample["target"].shape[1] == 3, ( - "target does not contain enough information (" - + str(sample["target"].shape[1]) - + "for evaluating TER" - ) - - max_id = torch.argmax(scores, dim=1) - select_id = max_id + torch.arange( - 0, sample_size * self.cfg.mt_beam, self.cfg.mt_beam - ).to(max_id.device) - ter_data = sample["target"][select_id, 1:].sum(0).data - - logging_output["_ter_num_edits"] = -ter_data[0] - logging_output["_ter_ref_len"] = -ter_data[1] - - return loss, sample_size, logging_output - - def reduce_metrics(self, logging_outputs, criterion): - super().reduce_metrics(logging_outputs, criterion) - - if not self.cfg.eval_target_metric: - return - - def sum_logs(key): - return sum(log.get(key, 0) for log in logging_outputs) - - if self.cfg.target_metric == "bleu": - counts, totals = [], [] - for i in range(EVAL_BLEU_ORDER): - counts.append(sum_logs("_bleu_counts_" + str(i))) - totals.append(sum_logs("_bleu_totals_" + str(i))) - - if max(totals) > 0: - # log counts as numpy arrays -- log_scalar will sum them correctly - metrics.log_scalar("_bleu_counts", np.array(counts)) - metrics.log_scalar("_bleu_totals", np.array(totals)) - metrics.log_scalar("_bleu_sys_len", sum_logs("_bleu_sys_len")) - metrics.log_scalar("_bleu_ref_len", sum_logs("_bleu_ref_len")) - - def compute_bleu(meters): - import inspect - import sacrebleu - - fn_sig = inspect.getfullargspec(sacrebleu.compute_bleu)[0] - if "smooth_method" in fn_sig: - smooth = {"smooth_method": "exp"} - else: - smooth = {"smooth": "exp"} - bleu = sacrebleu.compute_bleu( - correct=meters["_bleu_counts"].sum, - total=meters["_bleu_totals"].sum, - sys_len=meters["_bleu_sys_len"].sum, - ref_len=meters["_bleu_ref_len"].sum, - **smooth, - ) - return round(bleu.score, 2) - - metrics.log_derived("bleu", compute_bleu) - elif self.cfg.target_metric 
== "ter": - num_edits = sum_logs("_ter_num_edits") - ref_len = sum_logs("_ter_ref_len") - - if ref_len > 0: - metrics.log_scalar("_ter_num_edits", num_edits) - metrics.log_scalar("_ter_ref_len", ref_len) - - def compute_ter(meters): - score = meters["_ter_num_edits"].sum / meters["_ter_ref_len"].sum - return round(score.item(), 2) - - metrics.log_derived("ter", compute_ter) diff --git a/spaces/Iceclear/StableSR/StableSR/taming/data/faceshq.py b/spaces/Iceclear/StableSR/StableSR/taming/data/faceshq.py deleted file mode 100644 index 6912d04b66a6d464c1078e4b51d5da290f5e767e..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/taming/data/faceshq.py +++ /dev/null @@ -1,134 +0,0 @@ -import os -import numpy as np -import albumentations -from torch.utils.data import Dataset - -from taming.data.base import ImagePaths, NumpyPaths, ConcatDatasetWithIndex - - -class FacesBase(Dataset): - def __init__(self, *args, **kwargs): - super().__init__() - self.data = None - self.keys = None - - def __len__(self): - return len(self.data) - - def __getitem__(self, i): - example = self.data[i] - ex = {} - if self.keys is not None: - for k in self.keys: - ex[k] = example[k] - else: - ex = example - return ex - - -class CelebAHQTrain(FacesBase): - def __init__(self, size, keys=None): - super().__init__() - root = "data/celebahq" - with open("data/celebahqtrain.txt", "r") as f: - relpaths = f.read().splitlines() - paths = [os.path.join(root, relpath) for relpath in relpaths] - self.data = NumpyPaths(paths=paths, size=size, random_crop=False) - self.keys = keys - - -class CelebAHQValidation(FacesBase): - def __init__(self, size, keys=None): - super().__init__() - root = "data/celebahq" - with open("data/celebahqvalidation.txt", "r") as f: - relpaths = f.read().splitlines() - paths = [os.path.join(root, relpath) for relpath in relpaths] - self.data = NumpyPaths(paths=paths, size=size, random_crop=False) - self.keys = keys - - -class FFHQTrain(FacesBase): - def __init__(self, size, keys=None): - super().__init__() - root = "data/ffhq" - with open("data/ffhqtrain.txt", "r") as f: - relpaths = f.read().splitlines() - paths = [os.path.join(root, relpath) for relpath in relpaths] - self.data = ImagePaths(paths=paths, size=size, random_crop=False) - self.keys = keys - - -class FFHQValidation(FacesBase): - def __init__(self, size, keys=None): - super().__init__() - root = "data/ffhq" - with open("data/ffhqvalidation.txt", "r") as f: - relpaths = f.read().splitlines() - paths = [os.path.join(root, relpath) for relpath in relpaths] - self.data = ImagePaths(paths=paths, size=size, random_crop=False) - self.keys = keys - - -class FacesHQTrain(Dataset): - # CelebAHQ [0] + FFHQ [1] - def __init__(self, size, keys=None, crop_size=None, coord=False): - d1 = CelebAHQTrain(size=size, keys=keys) - d2 = FFHQTrain(size=size, keys=keys) - self.data = ConcatDatasetWithIndex([d1, d2]) - self.coord = coord - if crop_size is not None: - self.cropper = albumentations.RandomCrop(height=crop_size,width=crop_size) - if self.coord: - self.cropper = albumentations.Compose([self.cropper], - additional_targets={"coord": "image"}) - - def __len__(self): - return len(self.data) - - def __getitem__(self, i): - ex, y = self.data[i] - if hasattr(self, "cropper"): - if not self.coord: - out = self.cropper(image=ex["image"]) - ex["image"] = out["image"] - else: - h,w,_ = ex["image"].shape - coord = np.arange(h*w).reshape(h,w,1)/(h*w) - out = self.cropper(image=ex["image"], coord=coord) - ex["image"] = out["image"] - ex["coord"] 
= out["coord"] - ex["class"] = y - return ex - - -class FacesHQValidation(Dataset): - # CelebAHQ [0] + FFHQ [1] - def __init__(self, size, keys=None, crop_size=None, coord=False): - d1 = CelebAHQValidation(size=size, keys=keys) - d2 = FFHQValidation(size=size, keys=keys) - self.data = ConcatDatasetWithIndex([d1, d2]) - self.coord = coord - if crop_size is not None: - self.cropper = albumentations.CenterCrop(height=crop_size,width=crop_size) - if self.coord: - self.cropper = albumentations.Compose([self.cropper], - additional_targets={"coord": "image"}) - - def __len__(self): - return len(self.data) - - def __getitem__(self, i): - ex, y = self.data[i] - if hasattr(self, "cropper"): - if not self.coord: - out = self.cropper(image=ex["image"]) - ex["image"] = out["image"] - else: - h,w,_ = ex["image"].shape - coord = np.arange(h*w).reshape(h,w,1)/(h*w) - out = self.cropper(image=ex["image"], coord=coord) - ex["image"] = out["image"] - ex["coord"] = out["coord"] - ex["class"] = y - return ex diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/metrics/metric_util.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/metrics/metric_util.py deleted file mode 100644 index 4d18f0f7816431bed6af9d58319c6435bdf5c971..0000000000000000000000000000000000000000 --- a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/metrics/metric_util.py +++ /dev/null @@ -1,45 +0,0 @@ -import numpy as np - -from basicsr.utils.matlab_functions import bgr2ycbcr - - -def reorder_image(img, input_order='HWC'): - """Reorder images to 'HWC' order. - - If the input_order is (h, w), return (h, w, 1); - If the input_order is (c, h, w), return (h, w, c); - If the input_order is (h, w, c), return as it is. - - Args: - img (ndarray): Input image. - input_order (str): Whether the input order is 'HWC' or 'CHW'. - If the input image shape is (h, w), input_order will not have - effects. Default: 'HWC'. - - Returns: - ndarray: reordered image. - """ - - if input_order not in ['HWC', 'CHW']: - raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' "'HWC' and 'CHW'") - if len(img.shape) == 2: - img = img[..., None] - if input_order == 'CHW': - img = img.transpose(1, 2, 0) - return img - - -def to_y_channel(img): - """Change to Y channel of YCbCr. - - Args: - img (ndarray): Images with range [0, 255]. - - Returns: - (ndarray): Images with range [0, 255] (float type) without round. - """ - img = img.astype(np.float32) / 255. - if img.ndim == 3 and img.shape[2] == 3: - img = bgr2ycbcr(img, y_only=True) - img = img[..., None] - return img * 255. diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/scripts/download_pretrained_models.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/scripts/download_pretrained_models.py deleted file mode 100644 index daa6e8ca14ea91c89a318e85d9f182eb7d1bf025..0000000000000000000000000000000000000000 --- a/spaces/Jasonyoyo/CodeFormer/CodeFormer/scripts/download_pretrained_models.py +++ /dev/null @@ -1,40 +0,0 @@ -import argparse -import os -from os import path as osp - -from basicsr.utils.download_util import load_file_from_url - - -def download_pretrained_models(method, file_urls): - save_path_root = f'./weights/{method}' - os.makedirs(save_path_root, exist_ok=True) - - for file_name, file_url in file_urls.items(): - save_path = load_file_from_url(url=file_url, model_dir=save_path_root, progress=True, file_name=file_name) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - - parser.add_argument( - 'method', - type=str, - help=("Options: 'CodeFormer' 'facelib'. 
Set to 'all' to download all the models.")) - args = parser.parse_args() - - file_urls = { - 'CodeFormer': { - 'codeformer.pth': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth' - }, - 'facelib': { - # 'yolov5l-face.pth': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/yolov5l-face.pth', - 'detection_Resnet50_Final.pth': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/detection_Resnet50_Final.pth', - 'parsing_parsenet.pth': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/parsing_parsenet.pth' - } - } - - if args.method == 'all': - for method in file_urls.keys(): - download_pretrained_models(method, file_urls[method]) - else: - download_pretrained_models(args.method, file_urls[args.method]) \ No newline at end of file diff --git a/spaces/Kangarroar/ApplioRVC-Inference/julius/bands.py b/spaces/Kangarroar/ApplioRVC-Inference/julius/bands.py deleted file mode 100644 index ef2162440b69e960770aa7bf81b9aaec48a63243..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/julius/bands.py +++ /dev/null @@ -1,119 +0,0 @@ -# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details. -# Author: adefossez, 2020 -""" -Decomposition of a signal over frequency bands in the waveform domain. -""" -from typing import Optional, Sequence -import torch - -from .core import mel_frequencies -from .lowpass import LowPassFilters -from .utils import simple_repr - - -class SplitBands(torch.nn.Module): - """ - Decomposes a signal over the given frequency bands in the waveform domain using - a cascade of low pass filters as implemented by `julius.lowpass.LowPassFilters`. - You can either specify explicitely the frequency cutoffs, or just the number of bands, - in which case the frequency cutoffs will be spread out evenly in mel scale. - - Args: - sample_rate (float): Sample rate of the input signal in Hz. - n_bands (int or None): number of bands, when not giving them explictely with `cutoffs`. - In that case, the cutoff frequencies will be evenly spaced in mel-space. - cutoffs (list[float] or None): list of frequency cutoffs in Hz. - pad (bool): if True, appropriately pad the input with zero over the edge. If `stride=1`, - the output will have the same length as the input. - zeros (float): Number of zero crossings to keep. See `LowPassFilters` for more informations. - fft (bool or None): See `LowPassFilters` for more info. - - ..note:: - The sum of all the bands will always be the input signal. - - ..warning:: - Unlike `julius.lowpass.LowPassFilters`, the cutoffs frequencies must be provided in Hz along - with the sample rate. - - Shape: - - - Input: `[*, T]` - - Output: `[B, *, T']`, with `T'=T` if `pad` is True. 
- If `n_bands` was provided, `B = n_bands` otherwise `B = len(cutoffs) + 1` - - >>> bands = SplitBands(sample_rate=128, n_bands=10) - >>> x = torch.randn(6, 4, 1024) - >>> list(bands(x).shape) - [10, 6, 4, 1024] - """ - - def __init__(self, sample_rate: float, n_bands: Optional[int] = None, - cutoffs: Optional[Sequence[float]] = None, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - super().__init__() - if (cutoffs is None) + (n_bands is None) != 1: - raise ValueError("You must provide either n_bands, or cutoffs, but not boths.") - - self.sample_rate = sample_rate - self.n_bands = n_bands - self._cutoffs = list(cutoffs) if cutoffs is not None else None - self.pad = pad - self.zeros = zeros - self.fft = fft - - if cutoffs is None: - if n_bands is None: - raise ValueError("You must provide one of n_bands or cutoffs.") - if not n_bands >= 1: - raise ValueError(f"n_bands must be greater than one (got {n_bands})") - cutoffs = mel_frequencies(n_bands + 1, 0, sample_rate / 2)[1:-1] - else: - if max(cutoffs) > 0.5 * sample_rate: - raise ValueError("A cutoff above sample_rate/2 does not make sense.") - if len(cutoffs) > 0: - self.lowpass = LowPassFilters( - [c / sample_rate for c in cutoffs], pad=pad, zeros=zeros, fft=fft) - else: - # Here I cannot make both TorchScript and MyPy happy. - # I miss the good old times, before all this madness was created. - self.lowpass = None # type: ignore - - def forward(self, input): - if self.lowpass is None: - return input[None] - lows = self.lowpass(input) - low = lows[0] - bands = [low] - for low_and_band in lows[1:]: - # Get a bandpass filter by substracting lowpasses - band = low_and_band - low - bands.append(band) - low = low_and_band - # Last band is whatever is left in the signal - bands.append(input - low) - return torch.stack(bands) - - @property - def cutoffs(self): - if self._cutoffs is not None: - return self._cutoffs - elif self.lowpass is not None: - return [c * self.sample_rate for c in self.lowpass.cutoffs] - else: - return [] - - def __repr__(self): - return simple_repr(self, overrides={"cutoffs": self._cutoffs}) - - -def split_bands(signal: torch.Tensor, sample_rate: float, n_bands: Optional[int] = None, - cutoffs: Optional[Sequence[float]] = None, pad: bool = True, - zeros: float = 8, fft: Optional[bool] = None): - """ - Functional version of `SplitBands`, refer to this class for more information. 
- - >>> x = torch.randn(6, 4, 1024) - >>> list(split_bands(x, sample_rate=64, cutoffs=[12, 24]).shape) - [3, 6, 4, 1024] - """ - return SplitBands(sample_rate, n_bands, cutoffs, pad, zeros, fft).to(signal)(signal) diff --git a/spaces/Kelas/translation/app.py b/spaces/Kelas/translation/app.py deleted file mode 100644 index deb6cdab995737080dec5625e32ae3193d7a4ed4..0000000000000000000000000000000000000000 --- a/spaces/Kelas/translation/app.py +++ /dev/null @@ -1,17 +0,0 @@ -import streamlit as st -from transformers import pipeline - -classifier = pipeline("translation_en_to_de", model="t5-base") -def main(): - st.title("translate English to German") - - with st.form("text_field"): - text = st.text_area('enter some text:') - # clicked==True only when the button is clicked - clicked = st.form_submit_button("Submit") - if clicked: - results = classifier([text]) - st.json(results) - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/encoder_layer.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/encoder_layer.py deleted file mode 100644 index 750a32e4ef22ed5c2ca74aa364d1e8a3470e4016..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/encoder_layer.py +++ /dev/null @@ -1,152 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -# Copyright 2020 Johns Hopkins University (Shinji Watanabe) -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Encoder self-attention layer definition.""" - -import torch - -from torch import nn - -from .layer_norm import LayerNorm - - -class EncoderLayer(nn.Module): - """Encoder layer module. - - :param int size: input dim - :param espnet.nets.pytorch_backend.transformer.attention. - MultiHeadedAttention self_attn: self attention module - RelPositionMultiHeadedAttention self_attn: self attention module - :param espnet.nets.pytorch_backend.transformer.positionwise_feed_forward. - PositionwiseFeedForward feed_forward: - feed forward module - :param espnet.nets.pytorch_backend.transformer.positionwise_feed_forward - for macaron style - PositionwiseFeedForward feed_forward: - feed forward module - :param espnet.nets.pytorch_backend.conformer.convolution. - ConvolutionModule feed_foreard: - feed forward module - :param float dropout_rate: dropout rate - :param bool normalize_before: whether to use layer_norm before the first block - :param bool concat_after: whether to concat attention layer's input and output - if True, additional linear will be applied. - i.e. x -> x + linear(concat(x, att(x))) - if False, no additional linear will be applied. i.e. 
x -> x + att(x) - - """ - - def __init__( - self, - size, - self_attn, - feed_forward, - feed_forward_macaron, - conv_module, - dropout_rate, - normalize_before=True, - concat_after=False, - ): - """Construct an EncoderLayer object.""" - super(EncoderLayer, self).__init__() - self.self_attn = self_attn - self.feed_forward = feed_forward - self.feed_forward_macaron = feed_forward_macaron - self.conv_module = conv_module - self.norm_ff = LayerNorm(size) # for the FNN module - self.norm_mha = LayerNorm(size) # for the MHA module - if feed_forward_macaron is not None: - self.norm_ff_macaron = LayerNorm(size) - self.ff_scale = 0.5 - else: - self.ff_scale = 1.0 - if self.conv_module is not None: - self.norm_conv = LayerNorm(size) # for the CNN module - self.norm_final = LayerNorm(size) # for the final output of the block - self.dropout = nn.Dropout(dropout_rate) - self.size = size - self.normalize_before = normalize_before - self.concat_after = concat_after - if self.concat_after: - self.concat_linear = nn.Linear(size + size, size) - - def forward(self, x_input, mask, cache=None): - """Compute encoded features. - - :param torch.Tensor x_input: encoded source features, w/o pos_emb - tuple((batch, max_time_in, size), (1, max_time_in, size)) - or (batch, max_time_in, size) - :param torch.Tensor mask: mask for x (batch, max_time_in) - :param torch.Tensor cache: cache for x (batch, max_time_in - 1, size) - :rtype: Tuple[torch.Tensor, torch.Tensor] - """ - if isinstance(x_input, tuple): - x, pos_emb = x_input[0], x_input[1] - else: - x, pos_emb = x_input, None - - # whether to use macaron style - if self.feed_forward_macaron is not None: - residual = x - if self.normalize_before: - x = self.norm_ff_macaron(x) - x = residual + self.ff_scale * self.dropout(self.feed_forward_macaron(x)) - if not self.normalize_before: - x = self.norm_ff_macaron(x) - - # multi-headed self-attention module - residual = x - if self.normalize_before: - x = self.norm_mha(x) - - if cache is None: - x_q = x - else: - assert cache.shape == (x.shape[0], x.shape[1] - 1, self.size) - x_q = x[:, -1:, :] - residual = residual[:, -1:, :] - mask = None if mask is None else mask[:, -1:, :] - - if pos_emb is not None: - x_att = self.self_attn(x_q, x, x, pos_emb, mask) - else: - x_att = self.self_attn(x_q, x, x, mask) - - if self.concat_after: - x_concat = torch.cat((x, x_att), dim=-1) - x = residual + self.concat_linear(x_concat) - else: - x = residual + self.dropout(x_att) - if not self.normalize_before: - x = self.norm_mha(x) - - # convolution module - if self.conv_module is not None: - residual = x - if self.normalize_before: - x = self.norm_conv(x) - x = residual + self.dropout(self.conv_module(x)) - if not self.normalize_before: - x = self.norm_conv(x) - - # feed forward module - residual = x - if self.normalize_before: - x = self.norm_ff(x) - x = residual + self.ff_scale * self.dropout(self.feed_forward(x)) - if not self.normalize_before: - x = self.norm_ff(x) - - if self.conv_module is not None: - x = self.norm_final(x) - - if cache is not None: - x = torch.cat([cache, x], dim=1) - - if pos_emb is not None: - return (x, pos_emb), mask - - return x, mask diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/encoder/model.py b/spaces/Kevin676/Real-Time-Voice-Cloning/encoder/model.py deleted file mode 100644 index e050d3204d8f1becdf0f8b3133470708e5420cea..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Real-Time-Voice-Cloning/encoder/model.py +++ /dev/null @@ -1,135 +0,0 @@ -from encoder.params_model import * -from 
encoder.params_data import * -from scipy.interpolate import interp1d -from sklearn.metrics import roc_curve -from torch.nn.utils import clip_grad_norm_ -from scipy.optimize import brentq -from torch import nn -import numpy as np -import torch - - -class SpeakerEncoder(nn.Module): - def __init__(self, device, loss_device): - super().__init__() - self.loss_device = loss_device - - # Network defition - self.lstm = nn.LSTM(input_size=mel_n_channels, - hidden_size=model_hidden_size, - num_layers=model_num_layers, - batch_first=True).to(device) - self.linear = nn.Linear(in_features=model_hidden_size, - out_features=model_embedding_size).to(device) - self.relu = torch.nn.ReLU().to(device) - - # Cosine similarity scaling (with fixed initial parameter values) - self.similarity_weight = nn.Parameter(torch.tensor([10.])).to(loss_device) - self.similarity_bias = nn.Parameter(torch.tensor([-5.])).to(loss_device) - - # Loss - self.loss_fn = nn.CrossEntropyLoss().to(loss_device) - - def do_gradient_ops(self): - # Gradient scale - self.similarity_weight.grad *= 0.01 - self.similarity_bias.grad *= 0.01 - - # Gradient clipping - clip_grad_norm_(self.parameters(), 3, norm_type=2) - - def forward(self, utterances, hidden_init=None): - """ - Computes the embeddings of a batch of utterance spectrograms. - - :param utterances: batch of mel-scale filterbanks of same duration as a tensor of shape - (batch_size, n_frames, n_channels) - :param hidden_init: initial hidden state of the LSTM as a tensor of shape (num_layers, - batch_size, hidden_size). Will default to a tensor of zeros if None. - :return: the embeddings as a tensor of shape (batch_size, embedding_size) - """ - # Pass the input through the LSTM layers and retrieve all outputs, the final hidden state - # and the final cell state. - out, (hidden, cell) = self.lstm(utterances, hidden_init) - - # We take only the hidden state of the last layer - embeds_raw = self.relu(self.linear(hidden[-1])) - - # L2-normalize it - embeds = embeds_raw / (torch.norm(embeds_raw, dim=1, keepdim=True) + 1e-5) - - return embeds - - def similarity_matrix(self, embeds): - """ - Computes the similarity matrix according the section 2.1 of GE2E. - - :param embeds: the embeddings as a tensor of shape (speakers_per_batch, - utterances_per_speaker, embedding_size) - :return: the similarity matrix as a tensor of shape (speakers_per_batch, - utterances_per_speaker, speakers_per_batch) - """ - speakers_per_batch, utterances_per_speaker = embeds.shape[:2] - - # Inclusive centroids (1 per speaker). Cloning is needed for reverse differentiation - centroids_incl = torch.mean(embeds, dim=1, keepdim=True) - centroids_incl = centroids_incl.clone() / (torch.norm(centroids_incl, dim=2, keepdim=True) + 1e-5) - - # Exclusive centroids (1 per utterance) - centroids_excl = (torch.sum(embeds, dim=1, keepdim=True) - embeds) - centroids_excl /= (utterances_per_speaker - 1) - centroids_excl = centroids_excl.clone() / (torch.norm(centroids_excl, dim=2, keepdim=True) + 1e-5) - - # Similarity matrix. The cosine similarity of already 2-normed vectors is simply the dot - # product of these vectors (which is just an element-wise multiplication reduced by a sum). - # We vectorize the computation for efficiency. 
- sim_matrix = torch.zeros(speakers_per_batch, utterances_per_speaker, - speakers_per_batch).to(self.loss_device) - mask_matrix = 1 - np.eye(speakers_per_batch, dtype=np.int) - for j in range(speakers_per_batch): - mask = np.where(mask_matrix[j])[0] - sim_matrix[mask, :, j] = (embeds[mask] * centroids_incl[j]).sum(dim=2) - sim_matrix[j, :, j] = (embeds[j] * centroids_excl[j]).sum(dim=1) - - ## Even more vectorized version (slower maybe because of transpose) - # sim_matrix2 = torch.zeros(speakers_per_batch, speakers_per_batch, utterances_per_speaker - # ).to(self.loss_device) - # eye = np.eye(speakers_per_batch, dtype=np.int) - # mask = np.where(1 - eye) - # sim_matrix2[mask] = (embeds[mask[0]] * centroids_incl[mask[1]]).sum(dim=2) - # mask = np.where(eye) - # sim_matrix2[mask] = (embeds * centroids_excl).sum(dim=2) - # sim_matrix2 = sim_matrix2.transpose(1, 2) - - sim_matrix = sim_matrix * self.similarity_weight + self.similarity_bias - return sim_matrix - - def loss(self, embeds): - """ - Computes the softmax loss according the section 2.1 of GE2E. - - :param embeds: the embeddings as a tensor of shape (speakers_per_batch, - utterances_per_speaker, embedding_size) - :return: the loss and the EER for this batch of embeddings. - """ - speakers_per_batch, utterances_per_speaker = embeds.shape[:2] - - # Loss - sim_matrix = self.similarity_matrix(embeds) - sim_matrix = sim_matrix.reshape((speakers_per_batch * utterances_per_speaker, - speakers_per_batch)) - ground_truth = np.repeat(np.arange(speakers_per_batch), utterances_per_speaker) - target = torch.from_numpy(ground_truth).long().to(self.loss_device) - loss = self.loss_fn(sim_matrix, target) - - # EER (not backpropagated) - with torch.no_grad(): - inv_argmax = lambda i: np.eye(1, speakers_per_batch, i, dtype=np.int)[0] - labels = np.array([inv_argmax(i) for i in ground_truth]) - preds = sim_matrix.detach().cpu().numpy() - - # Snippet from https://yangcha.github.io/EER-ROC/ - fpr, tpr, thresholds = roc_curve(labels.flatten(), preds.flatten()) - eer = brentq(lambda x: 1. - x - interp1d(fpr, tpr)(x), 0., 1.) - - return loss, eer diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/conditional_detr_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/conditional_detr_head.py deleted file mode 100644 index cc2df2c215667121c5fe329f369510ecd4666faf..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/dense_heads/conditional_detr_head.py +++ /dev/null @@ -1,168 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Tuple - -import torch -import torch.nn as nn -from mmengine.model import bias_init_with_prob -from torch import Tensor - -from mmdet.models.layers.transformer import inverse_sigmoid -from mmdet.registry import MODELS -from mmdet.structures import SampleList -from mmdet.utils import InstanceList -from .detr_head import DETRHead - - -@MODELS.register_module() -class ConditionalDETRHead(DETRHead): - """Head of Conditional DETR. Conditional DETR: Conditional DETR for Fast - Training Convergence. More details can be found in the `paper. - - `_ . 
- """ - - def init_weights(self): - """Initialize weights of the transformer head.""" - super().init_weights() - # The initialization below for transformer head is very - # important as we use Focal_loss for loss_cls - if self.loss_cls.use_sigmoid: - bias_init = bias_init_with_prob(0.01) - nn.init.constant_(self.fc_cls.bias, bias_init) - - def forward(self, hidden_states: Tensor, - references: Tensor) -> Tuple[Tensor, Tensor]: - """"Forward function. - - Args: - hidden_states (Tensor): Features from transformer decoder. If - `return_intermediate_dec` is True output has shape - (num_decoder_layers, bs, num_queries, dim), else has shape (1, - bs, num_queries, dim) which only contains the last layer - outputs. - references (Tensor): References from transformer decoder, has - shape (bs, num_queries, 2). - Returns: - tuple[Tensor]: results of head containing the following tensor. - - - layers_cls_scores (Tensor): Outputs from the classification head, - shape (num_decoder_layers, bs, num_queries, cls_out_channels). - Note cls_out_channels should include background. - - layers_bbox_preds (Tensor): Sigmoid outputs from the regression - head with normalized coordinate format (cx, cy, w, h), has shape - (num_decoder_layers, bs, num_queries, 4). - """ - - references_unsigmoid = inverse_sigmoid(references) - layers_bbox_preds = [] - for layer_id in range(hidden_states.shape[0]): - tmp_reg_preds = self.fc_reg( - self.activate(self.reg_ffn(hidden_states[layer_id]))) - tmp_reg_preds[..., :2] += references_unsigmoid - outputs_coord = tmp_reg_preds.sigmoid() - layers_bbox_preds.append(outputs_coord) - layers_bbox_preds = torch.stack(layers_bbox_preds) - - layers_cls_scores = self.fc_cls(hidden_states) - return layers_cls_scores, layers_bbox_preds - - def loss(self, hidden_states: Tensor, references: Tensor, - batch_data_samples: SampleList) -> dict: - """Perform forward propagation and loss calculation of the detection - head on the features of the upstream network. - - Args: - hidden_states (Tensor): Features from the transformer decoder, has - shape (num_decoder_layers, bs, num_queries, dim). - references (Tensor): References from the transformer decoder, has - shape (num_decoder_layers, bs, num_queries, 2). - batch_data_samples (List[:obj:`DetDataSample`]): The Data - Samples. It usually includes information such as - `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`. - - Returns: - dict: A dictionary of loss components. - """ - batch_gt_instances = [] - batch_img_metas = [] - for data_sample in batch_data_samples: - batch_img_metas.append(data_sample.metainfo) - batch_gt_instances.append(data_sample.gt_instances) - - outs = self(hidden_states, references) - loss_inputs = outs + (batch_gt_instances, batch_img_metas) - losses = self.loss_by_feat(*loss_inputs) - return losses - - def loss_and_predict( - self, hidden_states: Tensor, references: Tensor, - batch_data_samples: SampleList) -> Tuple[dict, InstanceList]: - """Perform forward propagation of the head, then calculate loss and - predictions from the features and data samples. Over-write because - img_metas are needed as inputs for bbox_head. - - Args: - hidden_states (Tensor): Features from the transformer decoder, has - shape (num_decoder_layers, bs, num_queries, dim). - references (Tensor): References from the transformer decoder, has - shape (num_decoder_layers, bs, num_queries, 2). - batch_data_samples (list[:obj:`DetDataSample`]): Each item contains - the meta information of each image and corresponding - annotations. 
- - Returns: - tuple: The return value is a tuple contains: - - - losses: (dict[str, Tensor]): A dictionary of loss components. - - predictions (list[:obj:`InstanceData`]): Detection - results of each image after the post process. - """ - batch_gt_instances = [] - batch_img_metas = [] - for data_sample in batch_data_samples: - batch_img_metas.append(data_sample.metainfo) - batch_gt_instances.append(data_sample.gt_instances) - - outs = self(hidden_states, references) - loss_inputs = outs + (batch_gt_instances, batch_img_metas) - losses = self.loss_by_feat(*loss_inputs) - - predictions = self.predict_by_feat( - *outs, batch_img_metas=batch_img_metas) - return losses, predictions - - def predict(self, - hidden_states: Tensor, - references: Tensor, - batch_data_samples: SampleList, - rescale: bool = True) -> InstanceList: - """Perform forward propagation of the detection head and predict - detection results on the features of the upstream network. Over-write - because img_metas are needed as inputs for bbox_head. - - Args: - hidden_states (Tensor): Features from the transformer decoder, has - shape (num_decoder_layers, bs, num_queries, dim). - references (Tensor): References from the transformer decoder, has - shape (num_decoder_layers, bs, num_queries, 2). - batch_data_samples (List[:obj:`DetDataSample`]): The Data - Samples. It usually includes information such as - `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`. - rescale (bool, optional): Whether to rescale the results. - Defaults to True. - - Returns: - list[obj:`InstanceData`]: Detection results of each image - after the post process. - """ - batch_img_metas = [ - data_samples.metainfo for data_samples in batch_data_samples - ] - - last_layer_hidden_state = hidden_states[-1].unsqueeze(0) - outs = self(last_layer_hidden_state, references) - - predictions = self.predict_by_feat( - *outs, batch_img_metas=batch_img_metas, rescale=rescale) - - return predictions diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/cascade_roi_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/cascade_roi_head.py deleted file mode 100644 index 81db671113a63beb7849abdc0e432a738ee46f5e..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/cascade_roi_head.py +++ /dev/null @@ -1,568 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import List, Sequence, Tuple, Union - -import torch -import torch.nn as nn -from mmengine.model import ModuleList -from mmengine.structures import InstanceData -from torch import Tensor - -from mmdet.models.task_modules.samplers import SamplingResult -from mmdet.models.test_time_augs import merge_aug_masks -from mmdet.registry import MODELS, TASK_UTILS -from mmdet.structures import SampleList -from mmdet.structures.bbox import bbox2roi, get_box_tensor -from mmdet.utils import (ConfigType, InstanceList, MultiConfig, OptConfigType, - OptMultiConfig) -from ..utils.misc import empty_instances, unpack_gt_instances -from .base_roi_head import BaseRoIHead - - -@MODELS.register_module() -class CascadeRoIHead(BaseRoIHead): - """Cascade roi head including one bbox head and one mask head. 
- - https://arxiv.org/abs/1712.00726 - """ - - def __init__(self, - num_stages: int, - stage_loss_weights: Union[List[float], Tuple[float]], - bbox_roi_extractor: OptMultiConfig = None, - bbox_head: OptMultiConfig = None, - mask_roi_extractor: OptMultiConfig = None, - mask_head: OptMultiConfig = None, - shared_head: OptConfigType = None, - train_cfg: OptConfigType = None, - test_cfg: OptConfigType = None, - init_cfg: OptMultiConfig = None) -> None: - assert bbox_roi_extractor is not None - assert bbox_head is not None - assert shared_head is None, \ - 'Shared head is not supported in Cascade RCNN anymore' - - self.num_stages = num_stages - self.stage_loss_weights = stage_loss_weights - super().__init__( - bbox_roi_extractor=bbox_roi_extractor, - bbox_head=bbox_head, - mask_roi_extractor=mask_roi_extractor, - mask_head=mask_head, - shared_head=shared_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - init_cfg=init_cfg) - - def init_bbox_head(self, bbox_roi_extractor: MultiConfig, - bbox_head: MultiConfig) -> None: - """Initialize box head and box roi extractor. - - Args: - bbox_roi_extractor (:obj:`ConfigDict`, dict or list): - Config of box roi extractor. - bbox_head (:obj:`ConfigDict`, dict or list): Config - of box in box head. - """ - self.bbox_roi_extractor = ModuleList() - self.bbox_head = ModuleList() - if not isinstance(bbox_roi_extractor, list): - bbox_roi_extractor = [ - bbox_roi_extractor for _ in range(self.num_stages) - ] - if not isinstance(bbox_head, list): - bbox_head = [bbox_head for _ in range(self.num_stages)] - assert len(bbox_roi_extractor) == len(bbox_head) == self.num_stages - for roi_extractor, head in zip(bbox_roi_extractor, bbox_head): - self.bbox_roi_extractor.append(MODELS.build(roi_extractor)) - self.bbox_head.append(MODELS.build(head)) - - def init_mask_head(self, mask_roi_extractor: MultiConfig, - mask_head: MultiConfig) -> None: - """Initialize mask head and mask roi extractor. - - Args: - mask_head (dict): Config of mask in mask head. - mask_roi_extractor (:obj:`ConfigDict`, dict or list): - Config of mask roi extractor. - """ - self.mask_head = nn.ModuleList() - if not isinstance(mask_head, list): - mask_head = [mask_head for _ in range(self.num_stages)] - assert len(mask_head) == self.num_stages - for head in mask_head: - self.mask_head.append(MODELS.build(head)) - if mask_roi_extractor is not None: - self.share_roi_extractor = False - self.mask_roi_extractor = ModuleList() - if not isinstance(mask_roi_extractor, list): - mask_roi_extractor = [ - mask_roi_extractor for _ in range(self.num_stages) - ] - assert len(mask_roi_extractor) == self.num_stages - for roi_extractor in mask_roi_extractor: - self.mask_roi_extractor.append(MODELS.build(roi_extractor)) - else: - self.share_roi_extractor = True - self.mask_roi_extractor = self.bbox_roi_extractor - - def init_assigner_sampler(self) -> None: - """Initialize assigner and sampler for each stage.""" - self.bbox_assigner = [] - self.bbox_sampler = [] - if self.train_cfg is not None: - for idx, rcnn_train_cfg in enumerate(self.train_cfg): - self.bbox_assigner.append( - TASK_UTILS.build(rcnn_train_cfg.assigner)) - self.current_stage = idx - self.bbox_sampler.append( - TASK_UTILS.build( - rcnn_train_cfg.sampler, - default_args=dict(context=self))) - - def _bbox_forward(self, stage: int, x: Tuple[Tensor], - rois: Tensor) -> dict: - """Box head forward function used in both training and testing. - - Args: - stage (int): The current stage in Cascade RoI Head. - x (tuple[Tensor]): List of multi-level img features. 
- rois (Tensor): RoIs with the shape (n, 5) where the first - column indicates batch id of each RoI. - - Returns: - dict[str, Tensor]: Usually returns a dictionary with keys: - - - `cls_score` (Tensor): Classification scores. - - `bbox_pred` (Tensor): Box energies / deltas. - - `bbox_feats` (Tensor): Extract bbox RoI features. - """ - bbox_roi_extractor = self.bbox_roi_extractor[stage] - bbox_head = self.bbox_head[stage] - bbox_feats = bbox_roi_extractor(x[:bbox_roi_extractor.num_inputs], - rois) - # do not support caffe_c4 model anymore - cls_score, bbox_pred = bbox_head(bbox_feats) - - bbox_results = dict( - cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats) - return bbox_results - - def bbox_loss(self, stage: int, x: Tuple[Tensor], - sampling_results: List[SamplingResult]) -> dict: - """Run forward function and calculate loss for box head in training. - - Args: - stage (int): The current stage in Cascade RoI Head. - x (tuple[Tensor]): List of multi-level img features. - sampling_results (list["obj:`SamplingResult`]): Sampling results. - - Returns: - dict: Usually returns a dictionary with keys: - - - `cls_score` (Tensor): Classification scores. - - `bbox_pred` (Tensor): Box energies / deltas. - - `bbox_feats` (Tensor): Extract bbox RoI features. - - `loss_bbox` (dict): A dictionary of bbox loss components. - - `rois` (Tensor): RoIs with the shape (n, 5) where the first - column indicates batch id of each RoI. - - `bbox_targets` (tuple): Ground truth for proposals in a - single image. Containing the following list of Tensors: - (labels, label_weights, bbox_targets, bbox_weights) - """ - bbox_head = self.bbox_head[stage] - rois = bbox2roi([res.priors for res in sampling_results]) - bbox_results = self._bbox_forward(stage, x, rois) - bbox_results.update(rois=rois) - - bbox_loss_and_target = bbox_head.loss_and_target( - cls_score=bbox_results['cls_score'], - bbox_pred=bbox_results['bbox_pred'], - rois=rois, - sampling_results=sampling_results, - rcnn_train_cfg=self.train_cfg[stage]) - bbox_results.update(bbox_loss_and_target) - - return bbox_results - - def _mask_forward(self, stage: int, x: Tuple[Tensor], - rois: Tensor) -> dict: - """Mask head forward function used in both training and testing. - - Args: - stage (int): The current stage in Cascade RoI Head. - x (tuple[Tensor]): Tuple of multi-level img features. - rois (Tensor): RoIs with the shape (n, 5) where the first - column indicates batch id of each RoI. - - Returns: - dict: Usually returns a dictionary with keys: - - - `mask_preds` (Tensor): Mask prediction. - """ - mask_roi_extractor = self.mask_roi_extractor[stage] - mask_head = self.mask_head[stage] - mask_feats = mask_roi_extractor(x[:mask_roi_extractor.num_inputs], - rois) - # do not support caffe_c4 model anymore - mask_preds = mask_head(mask_feats) - - mask_results = dict(mask_preds=mask_preds) - return mask_results - - def mask_loss(self, stage: int, x: Tuple[Tensor], - sampling_results: List[SamplingResult], - batch_gt_instances: InstanceList) -> dict: - """Run forward function and calculate loss for mask head in training. - - Args: - stage (int): The current stage in Cascade RoI Head. - x (tuple[Tensor]): Tuple of multi-level img features. - sampling_results (list["obj:`SamplingResult`]): Sampling results. - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes``, ``labels``, and - ``masks`` attributes. - - Returns: - dict: Usually returns a dictionary with keys: - - - `mask_preds` (Tensor): Mask prediction. 
- - `loss_mask` (dict): A dictionary of mask loss components. - """ - pos_rois = bbox2roi([res.pos_priors for res in sampling_results]) - mask_results = self._mask_forward(stage, x, pos_rois) - - mask_head = self.mask_head[stage] - - mask_loss_and_target = mask_head.loss_and_target( - mask_preds=mask_results['mask_preds'], - sampling_results=sampling_results, - batch_gt_instances=batch_gt_instances, - rcnn_train_cfg=self.train_cfg[stage]) - mask_results.update(mask_loss_and_target) - - return mask_results - - def loss(self, x: Tuple[Tensor], rpn_results_list: InstanceList, - batch_data_samples: SampleList) -> dict: - """Perform forward propagation and loss calculation of the detection - roi on the features of the upstream network. - - Args: - x (tuple[Tensor]): List of multi-level img features. - rpn_results_list (list[:obj:`InstanceData`]): List of region - proposals. - batch_data_samples (list[:obj:`DetDataSample`]): The batch - data samples. It usually includes information such - as `gt_instance` or `gt_panoptic_seg` or `gt_sem_seg`. - - Returns: - dict[str, Tensor]: A dictionary of loss components - """ - # TODO: May add a new function in baseroihead - assert len(rpn_results_list) == len(batch_data_samples) - outputs = unpack_gt_instances(batch_data_samples) - batch_gt_instances, batch_gt_instances_ignore, batch_img_metas \ - = outputs - - num_imgs = len(batch_data_samples) - losses = dict() - results_list = rpn_results_list - for stage in range(self.num_stages): - self.current_stage = stage - - stage_loss_weight = self.stage_loss_weights[stage] - - # assign gts and sample proposals - sampling_results = [] - if self.with_bbox or self.with_mask: - bbox_assigner = self.bbox_assigner[stage] - bbox_sampler = self.bbox_sampler[stage] - - for i in range(num_imgs): - results = results_list[i] - # rename rpn_results.bboxes to rpn_results.priors - results.priors = results.pop('bboxes') - - assign_result = bbox_assigner.assign( - results, batch_gt_instances[i], - batch_gt_instances_ignore[i]) - - sampling_result = bbox_sampler.sample( - assign_result, - results, - batch_gt_instances[i], - feats=[lvl_feat[i][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - - # bbox head forward and loss - bbox_results = self.bbox_loss(stage, x, sampling_results) - - for name, value in bbox_results['loss_bbox'].items(): - losses[f's{stage}.{name}'] = ( - value * stage_loss_weight if 'loss' in name else value) - - # mask head forward and loss - if self.with_mask: - mask_results = self.mask_loss(stage, x, sampling_results, - batch_gt_instances) - for name, value in mask_results['loss_mask'].items(): - losses[f's{stage}.{name}'] = ( - value * stage_loss_weight if 'loss' in name else value) - - # refine bboxes - if stage < self.num_stages - 1: - bbox_head = self.bbox_head[stage] - with torch.no_grad(): - results_list = bbox_head.refine_bboxes( - sampling_results, bbox_results, batch_img_metas) - # Empty proposal - if results_list is None: - break - return losses - - def predict_bbox(self, - x: Tuple[Tensor], - batch_img_metas: List[dict], - rpn_results_list: InstanceList, - rcnn_test_cfg: ConfigType, - rescale: bool = False, - **kwargs) -> InstanceList: - """Perform forward propagation of the bbox head and predict detection - results on the features of the upstream network. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - batch_img_metas (list[dict]): List of image information. - rpn_results_list (list[:obj:`InstanceData`]): List of region - proposals. 
- rcnn_test_cfg (obj:`ConfigDict`): `test_cfg` of R-CNN. - rescale (bool): If True, return boxes in original image space. - Defaults to False. - - Returns: - list[:obj:`InstanceData`]: Detection results of each image - after the post process. - Each item usually contains following keys. - - - scores (Tensor): Classification scores, has a shape - (num_instance, ) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances, ). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). - """ - proposals = [res.bboxes for res in rpn_results_list] - num_proposals_per_img = tuple(len(p) for p in proposals) - rois = bbox2roi(proposals) - - if rois.shape[0] == 0: - return empty_instances( - batch_img_metas, - rois.device, - task_type='bbox', - box_type=self.bbox_head[-1].predict_box_type, - num_classes=self.bbox_head[-1].num_classes, - score_per_cls=rcnn_test_cfg is None) - - rois, cls_scores, bbox_preds = self._refine_roi( - x=x, - rois=rois, - batch_img_metas=batch_img_metas, - num_proposals_per_img=num_proposals_per_img, - **kwargs) - - results_list = self.bbox_head[-1].predict_by_feat( - rois=rois, - cls_scores=cls_scores, - bbox_preds=bbox_preds, - batch_img_metas=batch_img_metas, - rescale=rescale, - rcnn_test_cfg=rcnn_test_cfg) - return results_list - - def predict_mask(self, - x: Tuple[Tensor], - batch_img_metas: List[dict], - results_list: List[InstanceData], - rescale: bool = False) -> List[InstanceData]: - """Perform forward propagation of the mask head and predict detection - results on the features of the upstream network. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - batch_img_metas (list[dict]): List of image information. - results_list (list[:obj:`InstanceData`]): Detection results of - each image. - rescale (bool): If True, return boxes in original image space. - Defaults to False. - - Returns: - list[:obj:`InstanceData`]: Detection results of each image - after the post process. - Each item usually contains following keys. - - - scores (Tensor): Classification scores, has a shape - (num_instance, ) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances, ). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). - - masks (Tensor): Has a shape (num_instances, H, W). 
- """ - bboxes = [res.bboxes for res in results_list] - mask_rois = bbox2roi(bboxes) - if mask_rois.shape[0] == 0: - results_list = empty_instances( - batch_img_metas, - mask_rois.device, - task_type='mask', - instance_results=results_list, - mask_thr_binary=self.test_cfg.mask_thr_binary) - return results_list - - num_mask_rois_per_img = [len(res) for res in results_list] - aug_masks = [] - for stage in range(self.num_stages): - mask_results = self._mask_forward(stage, x, mask_rois) - mask_preds = mask_results['mask_preds'] - # split batch mask prediction back to each image - mask_preds = mask_preds.split(num_mask_rois_per_img, 0) - aug_masks.append([m.sigmoid().detach() for m in mask_preds]) - - merged_masks = [] - for i in range(len(batch_img_metas)): - aug_mask = [mask[i] for mask in aug_masks] - merged_mask = merge_aug_masks(aug_mask, batch_img_metas[i]) - merged_masks.append(merged_mask) - results_list = self.mask_head[-1].predict_by_feat( - mask_preds=merged_masks, - results_list=results_list, - batch_img_metas=batch_img_metas, - rcnn_test_cfg=self.test_cfg, - rescale=rescale, - activate_map=True) - return results_list - - def _refine_roi(self, x: Tuple[Tensor], rois: Tensor, - batch_img_metas: List[dict], - num_proposals_per_img: Sequence[int], **kwargs) -> tuple: - """Multi-stage refinement of RoI. - - Args: - x (tuple[Tensor]): List of multi-level img features. - rois (Tensor): shape (n, 5), [batch_ind, x1, y1, x2, y2] - batch_img_metas (list[dict]): List of image information. - num_proposals_per_img (sequence[int]): number of proposals - in each image. - - Returns: - tuple: - - - rois (Tensor): Refined RoI. - - cls_scores (list[Tensor]): Average predicted - cls score per image. - - bbox_preds (list[Tensor]): Bbox branch predictions - for the last stage of per image. - """ - # "ms" in variable names means multi-stage - ms_scores = [] - for stage in range(self.num_stages): - bbox_results = self._bbox_forward( - stage=stage, x=x, rois=rois, **kwargs) - - # split batch bbox prediction back to each image - cls_scores = bbox_results['cls_score'] - bbox_preds = bbox_results['bbox_pred'] - - rois = rois.split(num_proposals_per_img, 0) - cls_scores = cls_scores.split(num_proposals_per_img, 0) - ms_scores.append(cls_scores) - - # some detector with_reg is False, bbox_preds will be None - if bbox_preds is not None: - # TODO move this to a sabl_roi_head - # the bbox prediction of some detectors like SABL is not Tensor - if isinstance(bbox_preds, torch.Tensor): - bbox_preds = bbox_preds.split(num_proposals_per_img, 0) - else: - bbox_preds = self.bbox_head[stage].bbox_pred_split( - bbox_preds, num_proposals_per_img) - else: - bbox_preds = (None, ) * len(batch_img_metas) - - if stage < self.num_stages - 1: - bbox_head = self.bbox_head[stage] - if bbox_head.custom_activation: - cls_scores = [ - bbox_head.loss_cls.get_activation(s) - for s in cls_scores - ] - refine_rois_list = [] - for i in range(len(batch_img_metas)): - if rois[i].shape[0] > 0: - bbox_label = cls_scores[i][:, :-1].argmax(dim=1) - # Refactor `bbox_head.regress_by_class` to only accept - # box tensor without img_idx concatenated. 
- refined_bboxes = bbox_head.regress_by_class( - rois[i][:, 1:], bbox_label, bbox_preds[i], - batch_img_metas[i]) - refined_bboxes = get_box_tensor(refined_bboxes) - refined_rois = torch.cat( - [rois[i][:, [0]], refined_bboxes], dim=1) - refine_rois_list.append(refined_rois) - rois = torch.cat(refine_rois_list) - - # average scores of each image by stages - cls_scores = [ - sum([score[i] for score in ms_scores]) / float(len(ms_scores)) - for i in range(len(batch_img_metas)) - ] - return rois, cls_scores, bbox_preds - - def forward(self, x: Tuple[Tensor], rpn_results_list: InstanceList, - batch_data_samples: SampleList) -> tuple: - """Network forward process. Usually includes backbone, neck and head - forward without any post-processing. - - Args: - x (List[Tensor]): Multi-level features that may have different - resolutions. - rpn_results_list (list[:obj:`InstanceData`]): List of region - proposals. - batch_data_samples (list[:obj:`DetDataSample`]): Each item contains - the meta information of each image and corresponding - annotations. - - Returns - tuple: A tuple of features from ``bbox_head`` and ``mask_head`` - forward. - """ - results = () - batch_img_metas = [ - data_samples.metainfo for data_samples in batch_data_samples - ] - proposals = [rpn_results.bboxes for rpn_results in rpn_results_list] - num_proposals_per_img = tuple(len(p) for p in proposals) - rois = bbox2roi(proposals) - # bbox head - if self.with_bbox: - rois, cls_scores, bbox_preds = self._refine_roi( - x, rois, batch_img_metas, num_proposals_per_img) - results = results + (cls_scores, bbox_preds) - # mask head - if self.with_mask: - aug_masks = [] - rois = torch.cat(rois) - for stage in range(self.num_stages): - mask_results = self._mask_forward(stage, x, rois) - mask_preds = mask_results['mask_preds'] - mask_preds = mask_preds.split(num_proposals_per_img, 0) - aug_masks.append([m.sigmoid().detach() for m in mask_preds]) - - merged_masks = [] - for i in range(len(batch_img_metas)): - aug_mask = [mask[i] for mask in aug_masks] - merged_mask = merge_aug_masks(aug_mask, batch_img_metas[i]) - merged_masks.append(merged_mask) - results = results + (merged_masks, ) - return results diff --git a/spaces/Lippmann/White-box-Cartoonization/wbc/cartoonize.py b/spaces/Lippmann/White-box-Cartoonization/wbc/cartoonize.py deleted file mode 100644 index 25faf1ceb95aaed9a3f7a7982d17a03dc6bc32b1..0000000000000000000000000000000000000000 --- a/spaces/Lippmann/White-box-Cartoonization/wbc/cartoonize.py +++ /dev/null @@ -1,112 +0,0 @@ -import os -import cv2 -import numpy as np -import tensorflow as tf -import wbc.network as network -import wbc.guided_filter as guided_filter -from tqdm import tqdm - - -def resize_crop(image): - h, w, c = np.shape(image) - if min(h, w) > 720: - if h > w: - h, w = int(720 * h / w), 720 - else: - h, w = 720, int(720 * w / h) - image = cv2.resize(image, (w, h), - interpolation=cv2.INTER_AREA) - h, w = (h // 8) * 8, (w // 8) * 8 - image = image[:h, :w, :] - return image - - -def cartoonize(load_folder, save_folder, model_path): - print(model_path) - input_photo = tf.placeholder(tf.float32, [1, None, None, 3]) - network_out = network.unet_generator(input_photo) - final_out = guided_filter.guided_filter(input_photo, network_out, r=1, eps=5e-3) - - all_vars = tf.trainable_variables() - gene_vars = [var for var in all_vars if 'generator' in var.name] - saver = tf.train.Saver(var_list=gene_vars) - - config = tf.ConfigProto() - config.gpu_options.allow_growth = True - sess = tf.Session(config=config) - - 
sess.run(tf.global_variables_initializer()) - saver.restore(sess, tf.train.latest_checkpoint(model_path)) - name_list = os.listdir(load_folder) - for name in tqdm(name_list): - try: - load_path = os.path.join(load_folder, name) - save_path = os.path.join(save_folder, name) - image = cv2.imread(load_path) - image = resize_crop(image) - batch_image = image.astype(np.float32) / 127.5 - 1 - batch_image = np.expand_dims(batch_image, axis=0) - output = sess.run(final_out, feed_dict={input_photo: batch_image}) - output = (np.squeeze(output) + 1) * 127.5 - output = np.clip(output, 0, 255).astype(np.uint8) - cv2.imwrite(save_path, output) - except: - print('cartoonize {} failed'.format(load_path)) - - -class Cartoonize: - def __init__(self, model_path): - print(model_path) - self.input_photo = tf.placeholder(tf.float32, [1, None, None, 3]) - network_out = network.unet_generator(self.input_photo) - self.final_out = guided_filter.guided_filter(self.input_photo, network_out, r=1, eps=5e-3) - - all_vars = tf.trainable_variables() - gene_vars = [var for var in all_vars if 'generator' in var.name] - saver = tf.train.Saver(var_list=gene_vars) - - config = tf.ConfigProto() - config.gpu_options.allow_growth = True - self.sess = tf.Session(config=config) - - self.sess.run(tf.global_variables_initializer()) - saver.restore(self.sess, tf.train.latest_checkpoint(model_path)) - - def run(self, load_folder, save_folder): - name_list = os.listdir(load_folder) - for name in tqdm(name_list): - try: - load_path = os.path.join(load_folder, name) - save_path = os.path.join(save_folder, name) - image = cv2.imread(load_path) - image = resize_crop(image) - batch_image = image.astype(np.float32) / 127.5 - 1 - batch_image = np.expand_dims(batch_image, axis=0) - output = self.sess.run(self.final_out, feed_dict={self.input_photo: batch_image}) - output = (np.squeeze(output) + 1) * 127.5 - output = np.clip(output, 0, 255).astype(np.uint8) - cv2.imwrite(save_path, output) - except: - print('cartoonize {} failed'.format(load_path)) - - def run_sigle(self, load_path, save_path): - try: - image = cv2.imread(load_path) - image = resize_crop(image) - batch_image = image.astype(np.float32) / 127.5 - 1 - batch_image = np.expand_dims(batch_image, axis=0) - output = self.sess.run(self.final_out, feed_dict={self.input_photo: batch_image}) - output = (np.squeeze(output) + 1) * 127.5 - output = np.clip(output, 0, 255).astype(np.uint8) - cv2.imwrite(save_path, output) - except: - print('cartoonize {} failed'.format(load_path)) - - -if __name__ == '__main__': - model_path = 'saved_models' - load_folder = 'test_images' - save_folder = 'cartoonized_images' - if not os.path.exists(save_folder): - os.mkdir(save_folder) - cartoonize(load_folder, save_folder, model_path) diff --git a/spaces/LuxOAI/ChatGpt-Web/app/locales/de.ts b/spaces/LuxOAI/ChatGpt-Web/app/locales/de.ts deleted file mode 100644 index 477228928f82f1763450dc7c8303c63f1c04f74f..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/ChatGpt-Web/app/locales/de.ts +++ /dev/null @@ -1,247 +0,0 @@ -import { SubmitKey } from "../store/config"; -import type { LocaleType } from "./index"; - -const de: LocaleType = { - WIP: "In Bearbeitung...", - Error: { - Unauthorized: - "Unbefugter Zugriff, bitte geben Sie den Zugangscode auf der Einstellungsseite ein.", - }, - ChatItem: { - ChatItemCount: (count: number) => `${count} Nachrichten`, - }, - Chat: { - SubTitle: (count: number) => `${count} Nachrichten mit ChatGPT`, - Actions: { - ChatList: "Zur Chat-Liste gehen", - CompressedHistory: 
"Komprimierter Gedächtnis-Prompt", - Export: "Alle Nachrichten als Markdown exportieren", - Copy: "Kopieren", - Stop: "Stop", - Retry: "Wiederholen", - Delete: "Delete", - }, - Rename: "Chat umbenennen", - Typing: "Tippen...", - Input: (submitKey: string) => { - var inputHints = `${submitKey} um zu Senden`; - if (submitKey === String(SubmitKey.Enter)) { - inputHints += ", Umschalt + Eingabe für Zeilenumbruch"; - } - return inputHints + ", / zum Durchsuchen von Prompts"; - }, - Send: "Senden", - Config: { - Reset: "Reset to Default", - SaveAs: "Save as Mask", - }, - }, - Export: { - Title: "Alle Nachrichten", - Copy: "Alles kopieren", - Download: "Herunterladen", - MessageFromYou: "Deine Nachricht", - MessageFromChatGPT: "Nachricht von ChatGPT", - }, - Memory: { - Title: "Gedächtnis-Prompt", - EmptyContent: "Noch nichts.", - Send: "Gedächtnis senden", - Copy: "Gedächtnis kopieren", - Reset: "Sitzung zurücksetzen", - ResetConfirm: - "Das Zurücksetzen löscht den aktuellen Gesprächsverlauf und das Langzeit-Gedächtnis. Möchten Sie wirklich zurücksetzen?", - }, - Home: { - NewChat: "Neuer Chat", - DeleteChat: "Bestätigen Sie, um das ausgewählte Gespräch zu löschen?", - DeleteToast: "Chat gelöscht", - Revert: "Zurücksetzen", - }, - Settings: { - Title: "Einstellungen", - SubTitle: "Alle Einstellungen", - Actions: { - ClearAll: "Alle Daten löschen", - ResetAll: "Alle Einstellungen zurücksetzen", - Close: "Schließen", - ConfirmResetAll: - "Möchten Sie wirklich alle Konfigurationen zurücksetzen?", - ConfirmClearAll: "Möchten Sie wirklich alle Chats zurücksetzen?", - }, - Lang: { - Name: "Language", // ATTENTION: if you wanna add a new translation, please do not translate this value, leave it as `Language` - All: "All Languages", - Options: { - cn: "简体中文", - en: "English", - tw: "繁體中文", - es: "Español", - it: "Italiano", - tr: "Türkçe", - jp: "日本語", - de: "Deutsch", - }, - }, - Avatar: "Avatar", - FontSize: { - Title: "Schriftgröße", - SubTitle: "Schriftgröße des Chat-Inhalts anpassen", - }, - Update: { - Version: (x: string) => `Version: ${x}`, - IsLatest: "Neueste Version", - CheckUpdate: "Update prüfen", - IsChecking: "Update wird geprüft...", - FoundUpdate: (x: string) => `Neue Version gefunden: ${x}`, - GoToUpdate: "Aktualisieren", - }, - SendKey: "Senden-Taste", - Theme: "Erscheinungsbild", - TightBorder: "Enger Rahmen", - SendPreviewBubble: { - Title: "Vorschau-Bubble senden", - SubTitle: "Preview markdown in bubble", - }, - Mask: { - Title: "Mask Splash Screen", - SubTitle: "Show a mask splash screen before starting new chat", - }, - Prompt: { - Disable: { - Title: "Autovervollständigung deaktivieren", - SubTitle: "Autovervollständigung mit / starten", - }, - List: "Prompt-Liste", - ListCount: (builtin: number, custom: number) => - `${builtin} integriert, ${custom} benutzerdefiniert`, - Edit: "Bearbeiten", - Modal: { - Title: "Prompt List", - Add: "Add One", - Search: "Search Prompts", - }, - EditModal: { - Title: "Edit Prompt", - }, - }, - HistoryCount: { - Title: "Anzahl der angehängten Nachrichten", - SubTitle: "Anzahl der pro Anfrage angehängten gesendeten Nachrichten", - }, - CompressThreshold: { - Title: "Schwellenwert für Verlaufskomprimierung", - SubTitle: - "Komprimierung, wenn die Länge der unkomprimierten Nachrichten den Wert überschreitet", - }, - Token: { - Title: "API-Schlüssel", - SubTitle: - "Verwenden Sie Ihren Schlüssel, um das Zugangscode-Limit zu ignorieren", - Placeholder: "OpenAI API-Schlüssel", - }, - Usage: { - Title: "Kontostand", - SubTitle(used: any, total: any) { - 
return `Diesen Monat ausgegeben $${used}, Abonnement $${total}`; - }, - IsChecking: "Wird überprüft...", - Check: "Erneut prüfen", - NoAccess: "API-Schlüssel eingeben, um den Kontostand zu überprüfen", - }, - AccessCode: { - Title: "Zugangscode", - SubTitle: "Zugangskontrolle aktiviert", - Placeholder: "Zugangscode erforderlich", - }, - Bot: "KI-Anbieter (bot)", - Model: "Modell", - Temperature: { - Title: "Temperature", //Temperatur - SubTitle: "Ein größerer Wert führt zu zufälligeren Antworten", - }, - MaxTokens: { - Title: "Max Tokens", //Maximale Token - SubTitle: "Maximale Anzahl der Anfrage- plus Antwort-Token", - }, - PresencePenlty: { - Title: "Presence Penalty", //Anwesenheitsstrafe - SubTitle: - "Ein größerer Wert erhöht die Wahrscheinlichkeit, dass über neue Themen gesprochen wird", - }, - }, - Store: { - DefaultTopic: "Neues Gespräch", - BotHello: "Hallo! Wie kann ich Ihnen heute helfen?", - Error: - "Etwas ist schief gelaufen, bitte versuchen Sie es später noch einmal.", - Prompt: { - History: (content: string) => - "Dies ist eine Zusammenfassung des Chatverlaufs zwischen dem KI und dem Benutzer als Rückblick: " + - content, - Topic: - "Bitte erstellen Sie einen vier- bis fünfwörtigen Titel, der unser Gespräch zusammenfasst, ohne Einleitung, Zeichensetzung, Anführungszeichen, Punkte, Symbole oder zusätzlichen Text. Entfernen Sie Anführungszeichen.", - Summarize: - "Fassen Sie unsere Diskussion kurz in 200 Wörtern oder weniger zusammen, um sie als Pronpt für zukünftige Gespräche zu verwenden.", - }, - }, - Copy: { - Success: "In die Zwischenablage kopiert", - Failed: - "Kopieren fehlgeschlagen, bitte geben Sie die Berechtigung zum Zugriff auf die Zwischenablage frei", - }, - Context: { - Toast: (x: any) => `Mit ${x} Kontext-Prompts`, - Edit: "Kontext- und Gedächtnis-Prompts", - Add: "Hinzufügen", - }, - Plugin: { - Name: "Plugin", - }, - Mask: { - Name: "Mask", - Page: { - Title: "Prompt Template", - SubTitle: (count: number) => `${count} prompt templates`, - Search: "Search Templates", - Create: "Create", - }, - Item: { - Info: (count: number) => `${count} prompts`, - Chat: "Chat", - View: "View", - Edit: "Edit", - Delete: "Delete", - DeleteConfirm: "Confirm to delete?", - }, - EditModal: { - Title: (readonly: boolean) => - `Edit Prompt Template ${readonly ? 
"(readonly)" : ""}`, - Download: "Download", - Clone: "Clone", - }, - Config: { - Avatar: "Bot Avatar", - Name: "Bot Name", - }, - }, - NewChat: { - Return: "Return", - Skip: "Skip", - Title: "Pick a Mask", - SubTitle: "Chat with the Soul behind the Mask", - More: "Find More", - NotShow: "Not Show Again", - ConfirmNoShow: "Confirm to disable?You can enable it in settings later.", - }, - - UI: { - Confirm: "Confirm", - Cancel: "Cancel", - Close: "Close", - Create: "Create", - Edit: "Edit", - }, -}; - -export default de; diff --git a/spaces/MWilinski/bot/data/get_hugging_face_repositories.py b/spaces/MWilinski/bot/data/get_hugging_face_repositories.py deleted file mode 100644 index 26ddcb7d9e790fe3a2b8e6114004fbfcb4c5419f..0000000000000000000000000000000000000000 --- a/spaces/MWilinski/bot/data/get_hugging_face_repositories.py +++ /dev/null @@ -1,34 +0,0 @@ -import json -import argparse -import requests -from typing import List - - -def get_repositories_names(token): - url = f'https://api.github.com/orgs/huggingface/repos?per_page=1000' - headers = {'Authorization': f'token {token}'} - response = requests.get(url, headers=headers) - if response.status_code == 200: - repos = json.loads(response.content) - repo_names = [ - repo['full_name'] for repo in repos - if repo['stargazers_count'] >= 100 - ] - return repo_names - else: - return 'Error: '+str(response.status_code) - - -def save_repositories_urls(repositories_names: List[str], output_filename: str): - urls = ['https://github.com/'+repo_name for repo_name in repositories_names] - data = {"urls": urls} - with open(output_filename, 'w') as f: - json.dump(data, f, indent=4) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--token', type=str) - args = parser.parse_args() - repositories = get_repositories_names(token=args.token) - save_repositories_urls(repositories, 'datasets/hf_repositories_urls_scraped.json') diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/backbones/resnext.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/backbones/resnext.py deleted file mode 100644 index 962249ad6fd9b50960ad6426f7ce3cac6ed8c5bc..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/backbones/resnext.py +++ /dev/null @@ -1,145 +0,0 @@ -import math - -from annotator.uniformer.mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from ..utils import ResLayer -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNet - - -class Bottleneck(_Bottleneck): - """Bottleneck block for ResNeXt. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if it is - "caffe", the stride-two layer is the first 1x1 conv layer. 
- """ - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - **kwargs): - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm2_name, norm2 = build_norm_layer( - self.norm_cfg, width, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - self.with_modulated_dcn = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - -@BACKBONES.register_module() -class ResNeXt(ResNet): - """ResNeXt backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - in_channels (int): Number of input image channels. Normally 3. - num_stages (int): Resnet stages, normally 4. - groups (int): Group of resnext. - base_width (int): Base width of resnext. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - norm_cfg (dict): dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): whether to use zero init for last norm layer - in resblocks to let them behave as identity. - - Example: - >>> from annotator.uniformer.mmseg.models import ResNeXt - >>> import torch - >>> self = ResNeXt(depth=50) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... 
print(tuple(level_out.shape)) - (1, 256, 8, 8) - (1, 512, 4, 4) - (1, 1024, 2, 2) - (1, 2048, 1, 1) - """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, groups=1, base_width=4, **kwargs): - self.groups = groups - self.base_width = base_width - super(ResNeXt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``""" - return ResLayer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - **kwargs) diff --git a/spaces/MetaWabbit/Auto-GPT/BULLETIN.md b/spaces/MetaWabbit/Auto-GPT/BULLETIN.md deleted file mode 100644 index 735048ddc87a914987c6bd70ccdb231a80242ae3..0000000000000000000000000000000000000000 --- a/spaces/MetaWabbit/Auto-GPT/BULLETIN.md +++ /dev/null @@ -1,2 +0,0 @@ -Welcome to Auto-GPT! We'll keep you informed of the latest news and features by printing messages here. -If you don't wish to see this message, you can run Auto-GPT with the --skip-news flag \ No newline at end of file diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/dictionary/__init__.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/dictionary/__init__.py deleted file mode 100644 index 9ad0ab306f183192aa5c8464eee5947e13d294e6..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/common/dictionary/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. - -from .dictionary import Dictionary - -__all__ = ['Dictionary'] diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/layers/__init__.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/layers/__init__.py deleted file mode 100644 index a1fa8af5586145c8e31c463e6d0620c9f1af2e3b..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/layers/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .conv_layer import BasicBlock, Bottleneck -from .dot_product_attention_layer import DotProductAttentionLayer -from .lstm_layer import BidirectionalLSTM -from .position_aware_layer import PositionAwareLayer -from .robust_scanner_fusion_layer import RobustScannerFusionLayer -from .satrn_layers import Adaptive2DPositionalEncoding, SATRNEncoderLayer - -__all__ = [ - 'BidirectionalLSTM', 'Adaptive2DPositionalEncoding', 'BasicBlock', - 'Bottleneck', 'RobustScannerFusionLayer', 'DotProductAttentionLayer', - 'PositionAwareLayer', 'SATRNEncoderLayer' -] diff --git a/spaces/NATSpeech/PortaSpeech/utils/commons/hparams.py b/spaces/NATSpeech/PortaSpeech/utils/commons/hparams.py deleted file mode 100644 index 356fe306b0be82040ae1e938d3fca0e2567ae7c2..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/PortaSpeech/utils/commons/hparams.py +++ /dev/null @@ -1,131 +0,0 @@ -import argparse -import os -import yaml - -from utils.os_utils import remove_file - -global_print_hparams = True -hparams = {} - - -class Args: - def __init__(self, **kwargs): - for k, v in kwargs.items(): - self.__setattr__(k, v) - - -def override_config(old_config: dict, new_config: dict): - for k, v in new_config.items(): - if isinstance(v, dict) and k in old_config: - override_config(old_config[k], new_config[k]) - else: - old_config[k] = v - - -def set_hparams(config='', exp_name='', hparams_str='', print_hparams=True, global_hparams=True): - if config == '' and exp_name == '': - parser = argparse.ArgumentParser(description='') - parser.add_argument('--config', type=str, default='', - help='location of the data corpus') - parser.add_argument('--exp_name', type=str, default='', help='exp_name') - parser.add_argument('-hp', '--hparams', type=str, default='', - help='location of the data corpus') - parser.add_argument('--infer', action='store_true', help='infer') - parser.add_argument('--validate', action='store_true', help='validate') - parser.add_argument('--reset', action='store_true', help='reset hparams') - parser.add_argument('--remove', action='store_true', help='remove old ckpt') - parser.add_argument('--debug', action='store_true', help='debug') - args, unknown = parser.parse_known_args() - print("| Unknow hparams: ", unknown) - else: - args = Args(config=config, exp_name=exp_name, hparams=hparams_str, - infer=False, validate=False, reset=False, debug=False, remove=False) - global hparams - assert args.config != '' or args.exp_name != '' - if args.config != '': - assert os.path.exists(args.config) - - config_chains = [] - loaded_config = set() - - def load_config(config_fn): - # deep first inheritance and avoid the second visit of one node - if not os.path.exists(config_fn): - return {} - with open(config_fn) as f: - hparams_ = yaml.safe_load(f) - loaded_config.add(config_fn) - if 'base_config' in hparams_: - ret_hparams = {} - if not isinstance(hparams_['base_config'], list): - hparams_['base_config'] = [hparams_['base_config']] - for c in hparams_['base_config']: - if c.startswith('.'): - c = f'{os.path.dirname(config_fn)}/{c}' - c = os.path.normpath(c) - if c not in loaded_config: - override_config(ret_hparams, load_config(c)) - override_config(ret_hparams, hparams_) - else: - ret_hparams = hparams_ - config_chains.append(config_fn) - return ret_hparams - - saved_hparams = {} - args_work_dir = '' - if args.exp_name != '': - args_work_dir = f'checkpoints/{args.exp_name}' - ckpt_config_path = f'{args_work_dir}/config.yaml' - if os.path.exists(ckpt_config_path): - with open(ckpt_config_path) as f: - 
saved_hparams_ = yaml.safe_load(f) - if saved_hparams_ is not None: - saved_hparams.update(saved_hparams_) - hparams_ = {} - if args.config != '': - hparams_.update(load_config(args.config)) - if not args.reset: - hparams_.update(saved_hparams) - hparams_['work_dir'] = args_work_dir - - # Support config overriding in command line. Support list type config overriding. - # Examples: --hparams="a=1,b.c=2,d=[1 1 1]" - if args.hparams != "": - for new_hparam in args.hparams.split(","): - k, v = new_hparam.split("=") - v = v.strip("\'\" ") - config_node = hparams_ - for k_ in k.split(".")[:-1]: - config_node = config_node[k_] - k = k.split(".")[-1] - if v in ['True', 'False'] or type(config_node[k]) in [bool, list, dict]: - if type(config_node[k]) == list: - v = v.replace(" ", ",") - config_node[k] = eval(v) - else: - config_node[k] = type(config_node[k])(v) - if args_work_dir != '' and args.remove: - answer = input("REMOVE old checkpoint? Y/N [Default: N]: ") - if answer.lower() == "y": - remove_file(args_work_dir) - if args_work_dir != '' and (not os.path.exists(ckpt_config_path) or args.reset) and not args.infer: - os.makedirs(hparams_['work_dir'], exist_ok=True) - with open(ckpt_config_path, 'w') as f: - yaml.safe_dump(hparams_, f) - - hparams_['infer'] = args.infer - hparams_['debug'] = args.debug - hparams_['validate'] = args.validate - hparams_['exp_name'] = args.exp_name - global global_print_hparams - if global_hparams: - hparams.clear() - hparams.update(hparams_) - if print_hparams and global_print_hparams and global_hparams: - print('| Hparams chains: ', config_chains) - print('| Hparams: ') - for i, (k, v) in enumerate(sorted(hparams_.items())): - print(f"\033[;33;m{k}\033[0m: {v}, ", end="\n" if i % 5 == 4 else "") - print("") - global_print_hparams = False - return hparams_ diff --git a/spaces/NATSpeech/PortaSpeech/utils/nn/schedulers.py b/spaces/NATSpeech/PortaSpeech/utils/nn/schedulers.py deleted file mode 100644 index c91969dd8e01a8342488e060592700f3957c3651..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/PortaSpeech/utils/nn/schedulers.py +++ /dev/null @@ -1,57 +0,0 @@ -class NoneSchedule(object): - def __init__(self, optimizer, lr): - self.optimizer = optimizer - self.constant_lr = lr - self.step(0) - - def step(self, num_updates): - self.lr = self.constant_lr - for param_group in self.optimizer.param_groups: - param_group['lr'] = self.lr - return self.lr - - def get_lr(self): - return self.optimizer.param_groups[0]['lr'] - - def get_last_lr(self): - return self.get_lr() - - -class RSQRTSchedule(NoneSchedule): - def __init__(self, optimizer, lr, warmup_updates, hidden_size): - self.optimizer = optimizer - self.constant_lr = lr - self.warmup_updates = warmup_updates - self.hidden_size = hidden_size - self.lr = lr - for param_group in optimizer.param_groups: - param_group['lr'] = self.lr - self.step(0) - - def step(self, num_updates): - constant_lr = self.constant_lr - warmup = min(num_updates / self.warmup_updates, 1.0) - rsqrt_decay = max(self.warmup_updates, num_updates) ** -0.5 - rsqrt_hidden = self.hidden_size ** -0.5 - self.lr = max(constant_lr * warmup * rsqrt_decay * rsqrt_hidden, 1e-7) - for param_group in self.optimizer.param_groups: - param_group['lr'] = self.lr - return self.lr - - -class WarmupSchedule(NoneSchedule): - def __init__(self, optimizer, lr, warmup_updates): - self.optimizer = optimizer - self.constant_lr = self.lr = lr - self.warmup_updates = warmup_updates - for param_group in optimizer.param_groups: - param_group['lr'] = self.lr - 
self.step(0) - - def step(self, num_updates): - constant_lr = self.constant_lr - warmup = min(num_updates / self.warmup_updates, 1.0) - self.lr = max(constant_lr * warmup, 1e-7) - for param_group in self.optimizer.param_groups: - param_group['lr'] = self.lr - return self.lr diff --git a/spaces/NCTCMumbai/NCTC/models/research/adversarial_text/data/data_utils_test.py b/spaces/NCTCMumbai/NCTC/models/research/adversarial_text/data/data_utils_test.py deleted file mode 100644 index 7d225ef08c0bfaa36b2ae32469ca1e3946e3b41a..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/adversarial_text/data/data_utils_test.py +++ /dev/null @@ -1,200 +0,0 @@ -# Copyright 2017 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Tests for data_utils.""" -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -# Dependency imports - -import tensorflow as tf - -from data import data_utils - -data = data_utils - - -class SequenceWrapperTest(tf.test.TestCase): - - def testDefaultTimesteps(self): - seq = data.SequenceWrapper() - t1 = seq.add_timestep() - _ = seq.add_timestep() - self.assertEqual(len(seq), 2) - - self.assertEqual(t1.weight, 0.0) - self.assertEqual(t1.label, 0) - self.assertEqual(t1.token, 0) - - def testSettersAndGetters(self): - ts = data.SequenceWrapper().add_timestep() - ts.set_token(3) - ts.set_label(4) - ts.set_weight(2.0) - self.assertEqual(ts.token, 3) - self.assertEqual(ts.label, 4) - self.assertEqual(ts.weight, 2.0) - - def testTimestepIteration(self): - seq = data.SequenceWrapper() - seq.add_timestep().set_token(0) - seq.add_timestep().set_token(1) - seq.add_timestep().set_token(2) - for i, ts in enumerate(seq): - self.assertEqual(ts.token, i) - - def testFillsSequenceExampleCorrectly(self): - seq = data.SequenceWrapper() - seq.add_timestep().set_token(1).set_label(2).set_weight(3.0) - seq.add_timestep().set_token(10).set_label(20).set_weight(30.0) - - seq_ex = seq.seq - fl = seq_ex.feature_lists.feature_list - fl_token = fl[data.SequenceWrapper.F_TOKEN_ID].feature - fl_label = fl[data.SequenceWrapper.F_LABEL].feature - fl_weight = fl[data.SequenceWrapper.F_WEIGHT].feature - _ = [self.assertEqual(len(f), 2) for f in [fl_token, fl_label, fl_weight]] - self.assertAllEqual([f.int64_list.value[0] for f in fl_token], [1, 10]) - self.assertAllEqual([f.int64_list.value[0] for f in fl_label], [2, 20]) - self.assertAllEqual([f.float_list.value[0] for f in fl_weight], [3.0, 30.0]) - - -class DataUtilsTest(tf.test.TestCase): - - def testSplitByPunct(self): - output = data.split_by_punct( - 'hello! 
world, i\'ve been\nwaiting\tfor\ryou for.a long time') - expected = [ - 'hello', 'world', 'i', 've', 'been', 'waiting', 'for', 'you', 'for', - 'a', 'long', 'time' - ] - self.assertListEqual(output, expected) - - def _buildDummySequence(self): - seq = data.SequenceWrapper() - for i in range(10): - seq.add_timestep().set_token(i) - return seq - - def testBuildLMSeq(self): - seq = self._buildDummySequence() - lm_seq = data.build_lm_sequence(seq) - for i, ts in enumerate(lm_seq): - # For end of sequence, the token and label should be same, and weight - # should be 0.0. - if i == len(lm_seq) - 1: - self.assertEqual(ts.token, i) - self.assertEqual(ts.label, i) - self.assertEqual(ts.weight, 0.0) - else: - self.assertEqual(ts.token, i) - self.assertEqual(ts.label, i + 1) - self.assertEqual(ts.weight, 1.0) - - def testBuildSAESeq(self): - seq = self._buildDummySequence() - sa_seq = data.build_seq_ae_sequence(seq) - - self.assertEqual(len(sa_seq), len(seq) * 2 - 1) - - # Tokens should be sequence twice, minus the EOS token at the end - for i, ts in enumerate(sa_seq): - self.assertEqual(ts.token, seq[i % 10].token) - - # Weights should be len-1 0.0's and len 1.0's. - for i in range(len(seq) - 1): - self.assertEqual(sa_seq[i].weight, 0.0) - for i in range(len(seq) - 1, len(sa_seq)): - self.assertEqual(sa_seq[i].weight, 1.0) - - # Labels should be len-1 0's, and then the sequence - for i in range(len(seq) - 1): - self.assertEqual(sa_seq[i].label, 0) - for i in range(len(seq) - 1, len(sa_seq)): - self.assertEqual(sa_seq[i].label, seq[i - (len(seq) - 1)].token) - - def testBuildLabelSeq(self): - seq = self._buildDummySequence() - eos_id = len(seq) - 1 - label_seq = data.build_labeled_sequence(seq, True) - for i, ts in enumerate(label_seq[:-1]): - self.assertEqual(ts.token, i) - self.assertEqual(ts.label, 0) - self.assertEqual(ts.weight, 0.0) - - final_timestep = label_seq[-1] - self.assertEqual(final_timestep.token, eos_id) - self.assertEqual(final_timestep.label, 1) - self.assertEqual(final_timestep.weight, 1.0) - - def testBuildBidirLabelSeq(self): - seq = self._buildDummySequence() - reverse_seq = data.build_reverse_sequence(seq) - bidir_seq = data.build_bidirectional_seq(seq, reverse_seq) - label_seq = data.build_labeled_sequence(bidir_seq, True) - - for (i, ts), j in zip( - enumerate(label_seq[:-1]), reversed(range(len(seq) - 1))): - self.assertAllEqual(ts.tokens, [i, j]) - self.assertEqual(ts.label, 0) - self.assertEqual(ts.weight, 0.0) - - final_timestep = label_seq[-1] - eos_id = len(seq) - 1 - self.assertAllEqual(final_timestep.tokens, [eos_id, eos_id]) - self.assertEqual(final_timestep.label, 1) - self.assertEqual(final_timestep.weight, 1.0) - - def testReverseSeq(self): - seq = self._buildDummySequence() - reverse_seq = data.build_reverse_sequence(seq) - for i, ts in enumerate(reversed(reverse_seq[:-1])): - self.assertEqual(ts.token, i) - self.assertEqual(ts.label, 0) - self.assertEqual(ts.weight, 0.0) - - final_timestep = reverse_seq[-1] - eos_id = len(seq) - 1 - self.assertEqual(final_timestep.token, eos_id) - self.assertEqual(final_timestep.label, 0) - self.assertEqual(final_timestep.weight, 0.0) - - def testBidirSeq(self): - seq = self._buildDummySequence() - reverse_seq = data.build_reverse_sequence(seq) - bidir_seq = data.build_bidirectional_seq(seq, reverse_seq) - for (i, ts), j in zip( - enumerate(bidir_seq[:-1]), reversed(range(len(seq) - 1))): - self.assertAllEqual(ts.tokens, [i, j]) - self.assertEqual(ts.label, 0) - self.assertEqual(ts.weight, 0.0) - - final_timestep = bidir_seq[-1] 
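    # The unlabeled bidirectional sequence ends with the EOS token paired with
    # itself, with label 0 and zero weight (contrast with the labeled variant above).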
- eos_id = len(seq) - 1 - self.assertAllEqual(final_timestep.tokens, [eos_id, eos_id]) - self.assertEqual(final_timestep.label, 0) - self.assertEqual(final_timestep.weight, 0.0) - - def testLabelGain(self): - seq = self._buildDummySequence() - label_seq = data.build_labeled_sequence(seq, True, label_gain=True) - for i, ts in enumerate(label_seq): - self.assertEqual(ts.token, i) - self.assertEqual(ts.label, 1) - self.assertNear(ts.weight, float(i) / (len(seq) - 1), 1e-3) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/Nultx/VITS-TTS/ONNXVITS_modules.py b/spaces/Nultx/VITS-TTS/ONNXVITS_modules.py deleted file mode 100644 index 6cf676ce37c1eaf8428c4094e749f862182cb0c3..0000000000000000000000000000000000000000 --- a/spaces/Nultx/VITS-TTS/ONNXVITS_modules.py +++ /dev/null @@ -1,390 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from ONNXVITS_transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
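        # The conditioning tensor g was projected once to 2*hidden_channels*n_layers
        # channels; each layer below slices out its own 2*hidden_channels chunk and
        # feeds it to the gated tanh/sigmoid activation, WaveNet-style.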
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
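        # The projection carries num_bins*3 - 1 values per channel; below they are
        # split into spline bin widths and bin heights (both scaled by
        # 1/sqrt(filter_channels)) plus the unnormalized knot derivatives used by
        # the piecewise rational quadratic transform.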
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Nyashi/rvc-models-epic/config.py b/spaces/Nyashi/rvc-models-epic/config.py deleted file mode 100644 index 7a9f9b01d62c30aabf20358ff1607de20a88af27..0000000000000000000000000000000000000000 --- a/spaces/Nyashi/rvc-models-epic/config.py +++ /dev/null @@ -1,123 +0,0 @@ -import argparse -import torch -from multiprocessing import cpu_count - - -class Config: - def __init__(self): - self.device = "cuda:0" - self.is_half = True - self.n_cpu = 0 - self.gpu_name = None - self.gpu_mem = None - ( - self.python_cmd, - self.listen_port, - self.colab, - self.noparallel, - self.noautoopen, - self.api, - self.json - ) = self.arg_parse() - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - @staticmethod - def arg_parse() -> tuple: - parser = argparse.ArgumentParser() - parser.add_argument("--port", type=int, default=7865, help="Listen port") - parser.add_argument( - "--pycmd", type=str, default="python", help="Python command" - ) - parser.add_argument("--colab", action="store_true", help="Launch in colab") - parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" - ) - parser.add_argument( - "--noautoopen", - action="store_true", - help="Do not open in browser automatically", - ) - parser.add_argument('--api', action="store_true", default=True) - parser.add_argument("--json", action="store_true", default=False, help="use model_info.json") - cmd_opts = parser.parse_args() - - cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865 - - return ( - cmd_opts.pycmd, - cmd_opts.port, - cmd_opts.colab, - cmd_opts.noparallel, - cmd_opts.noautoopen, - cmd_opts.api, - cmd_opts.json - ) - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - print("16系/10系显卡和P40强制单精度") - self.is_half = False - for config_file in ["32k.json", "40k.json", "48k.json"]: - with open(f"configs/{config_file}", "r") as f: - strr = f.read().replace("true", "false") - with open(f"configs/{config_file}", "w") as f: - f.write(strr) - with open("trainset_preprocess_pipeline_print.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("trainset_preprocess_pipeline_print.py", "w") as f: - f.write(strr) - else: - self.gpu_name = None - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - if self.gpu_mem <= 4: - with open("trainset_preprocess_pipeline_print.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("trainset_preprocess_pipeline_print.py", "w") as f: - f.write(strr) - elif torch.backends.mps.is_available(): - print("没有发现支持的N卡, 使用MPS进行推理") - self.device = "mps" - 
self.is_half = False - else: - print("没有发现支持的N卡, 使用CPU进行推理") - self.device = "cpu" - self.is_half = False - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 - - if self.gpu_mem != None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - - return x_pad, x_query, x_center, x_max \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/misc/cut_as.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/misc/cut_as.py deleted file mode 100644 index 5b7e1e968564b84c47049c5cc69c9d6b8fafe0e9..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/misc/cut_as.py +++ /dev/null @@ -1,69 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import torchaudio -import argparse -import json -import pathlib - - -def get_args(): - parser = argparse.ArgumentParser( - "Assuring generated audio have the same length as ground-truth audio") - parser.add_argument('--samples_dir', required=True, type=str) - parser.add_argument('--out_dir', required=True, type=str) - parser.add_argument('--prompts_description', required=True, type=str) - return parser.parse_args() - - -def cut(src, tgt, l): - x, sr = torchaudio.load(str(src)) - assert sr == 16_000 - - x = x.squeeze() - target_frames = int(l * sr) - - flag = 0 - if target_frames <= x.size(0): - x = x[:target_frames] - flag = 1 - else: - flag = 0 - torchaudio.save(str(tgt), x.unsqueeze(0), sr) - return flag - - -def main(): - args = get_args() - tgt_dir = pathlib.Path(args.out_dir) - tgt_dir.mkdir(exist_ok=True, parents=True) - - total_files, sufficiently_long = 0, 0 - - with open(args.prompts_description, 'r') as f: - description = json.loads(f.read()) - - for src_f in pathlib.Path(args.samples_dir).glob('*.wav'): - name_prompt = src_f.with_suffix('').name.split('__')[0] - - assert name_prompt in description, f'Cannot find {name_prompt}!' - - target_length = description[name_prompt][0] - tgt_f = tgt_dir / (src_f.name) - - is_long_enough = cut(src_f, tgt_f, target_length) - sufficiently_long += is_long_enough - if not is_long_enough: - print(f'{src_f} is not long enough') - - total_files += 1 - - print( - f'Total files: {total_files}; sufficiently long: {sufficiently_long}') - - -if __name__ == '__main__': - main() diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py deleted file mode 100644 index 27792ebda842057e33fed3dc53dd9d8a594d0483..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py +++ /dev/null @@ -1,637 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
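# wav2vec-U: adversarially trains a Generator that maps frozen speech
# representations (e.g. wav2vec 2.0 features) to phoneme-unit logits against a
# convolutional Discriminator, with optional segmentation of the input features.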
- -from dataclasses import dataclass -from enum import Enum, auto -import math -import numpy as np -from typing import Tuple, List, Optional, Dict - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch import autograd - -from fairseq import checkpoint_utils, utils -from fairseq.dataclass import FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model -from fairseq.modules import ( - SamePad, - TransposeLast, -) - - -class SegmentationType(Enum): - NONE = auto() - RANDOM = auto() - UNIFORM_RANDOM = auto() - UNIFORM_RANDOM_JOIN = auto() - JOIN = auto() - - -@dataclass -class SegmentationConfig(FairseqDataclass): - type: SegmentationType = SegmentationType.NONE - subsample_rate: float = 0.25 - mean_pool: bool = True - mean_pool_join: bool = False - remove_zeros: bool = False - - -@dataclass -class Wav2vec_UConfig(FairseqDataclass): - - discriminator_kernel: int = 3 - discriminator_dilation: int = 1 - discriminator_dim: int = 256 - discriminator_causal: bool = True - discriminator_linear_emb: bool = False - discriminator_depth: int = 1 - discriminator_max_pool: bool = False - discriminator_act_after_linear: bool = False - discriminator_dropout: float = 0.0 - discriminator_spectral_norm: bool = False - discriminator_weight_norm: bool = False - - generator_kernel: int = 4 - generator_dilation: int = 1 - generator_stride: int = 1 - generator_bias: bool = False - generator_dropout: float = 0.0 - - blank_weight: float = 0 - blank_mode: str = "add" - blank_is_sil: bool = False - no_softmax: bool = False - - smoothness_weight: float = 0.0 - smoothing: float = 0.0 - smoothing_one_sided: bool = False - gradient_penalty: float = 0.0 - probabilistic_grad_penalty_slicing: bool = False - code_penalty: float = 0.0 - gumbel: bool = False - hard_gumbel: bool = True - temp: Tuple[float, float, float] = (2, 0.1, 0.99995) - input_dim: int = 128 - - segmentation: SegmentationConfig = SegmentationConfig() - - -class Segmenter(nn.Module): - cfg: SegmentationConfig - - def __init__(self, cfg: SegmentationConfig): - super().__init__() - self.cfg = cfg - self.subsample_rate = cfg.subsample_rate - - def pre_segment(self, dense_x, dense_padding_mask): - return dense_x, dense_padding_mask - - def logit_segment(self, logits, padding_mask): - return logits, padding_mask - - -class RandomSegmenter(Segmenter): - def pre_segment(self, dense_x, dense_padding_mask): - target_num = math.ceil(dense_x.size(1) * self.subsample_rate) - ones = torch.ones(dense_x.shape[:-1], device=dense_x.device) - indices, _ = ones.multinomial(target_num).sort(dim=-1) - indices_ld = indices.unsqueeze(-1).expand(-1, -1, dense_x.size(-1)) - dense_x = dense_x.gather(1, indices_ld) - dense_padding_mask = dense_padding_mask.gather(1, index=indices) - return dense_x, dense_padding_mask - - -class UniformRandomSegmenter(Segmenter): - def pre_segment(self, dense_x, dense_padding_mask): - bsz, tsz, fsz = dense_x.shape - - target_num = math.ceil(tsz * self.subsample_rate) - - rem = tsz % target_num - - if rem > 0: - dense_x = F.pad(dense_x, [0, 0, 0, target_num - rem]) - dense_padding_mask = F.pad( - dense_padding_mask, [0, target_num - rem], value=True - ) - - dense_x = dense_x.view(bsz, target_num, -1, fsz) - dense_padding_mask = dense_padding_mask.view(bsz, target_num, -1) - - if self.cfg.mean_pool: - dense_x = dense_x.mean(dim=-2) - dense_padding_mask = dense_padding_mask.all(dim=-1) - else: - ones = torch.ones((bsz, dense_x.size(2)), device=dense_x.device) - indices = ones.multinomial(1) - indices = 
indices.unsqueeze(-1).expand(-1, target_num, -1) - indices_ld = indices.unsqueeze(-1).expand(-1, -1, -1, fsz) - dense_x = dense_x.gather(2, indices_ld).reshape(bsz, -1, fsz) - dense_padding_mask = dense_padding_mask.gather(2, index=indices).reshape( - bsz, -1 - ) - return dense_x, dense_padding_mask - - -class JoinSegmenter(Segmenter): - def logit_segment(self, logits, padding_mask): - preds = logits.argmax(dim=-1) - - if padding_mask.any(): - preds[padding_mask] = -1 # mark pad - uniques = [] - - bsz, tsz, csz = logits.shape - - for p in preds: - uniques.append( - p.cpu().unique_consecutive(return_inverse=True, return_counts=True) - ) - - new_tsz = max(u[0].numel() for u in uniques) - new_logits = logits.new_zeros(bsz, new_tsz, csz) - new_pad = padding_mask.new_zeros(bsz, new_tsz) - - for b in range(bsz): - u, idx, c = uniques[b] - keep = u != -1 - - if self.cfg.remove_zeros: - keep.logical_and_(u != 0) - - if self.training and not self.cfg.mean_pool_join: - u[0] = 0 - u[1:] = c.cumsum(0)[:-1] - m = c > 1 - r = torch.rand(m.sum()) - o = (c[m] * r).long() - u[m] += o - new_logits[b, : u.numel()] = logits[b, u] - else: - new_logits[b].index_add_( - dim=0, index=idx.to(new_logits.device), source=logits[b] - ) - new_logits[b, : c.numel()] /= c.unsqueeze(-1).to(new_logits.device) - - new_sz = keep.sum() - if not keep.all(): - kept_logits = new_logits[b, : c.numel()][keep] - new_logits[b, :new_sz] = kept_logits - - if new_sz < new_tsz: - pad = new_tsz - new_sz - new_logits[b, -pad:] = 0 - new_pad[b, -pad:] = True - - return new_logits, new_pad - - -class UniformRandomJoinSegmenter(UniformRandomSegmenter, JoinSegmenter): - pass - - -SEGMENT_FACTORY = { - SegmentationType.NONE: Segmenter, - SegmentationType.RANDOM: RandomSegmenter, - SegmentationType.UNIFORM_RANDOM: UniformRandomSegmenter, - SegmentationType.UNIFORM_RANDOM_JOIN: UniformRandomJoinSegmenter, - SegmentationType.JOIN: JoinSegmenter, -} - - -class Discriminator(nn.Module): - def __init__(self, dim, cfg: Wav2vec_UConfig): - super().__init__() - - inner_dim = cfg.discriminator_dim - kernel = cfg.discriminator_kernel - dilation = cfg.discriminator_dilation - self.max_pool = cfg.discriminator_max_pool - - if cfg.discriminator_causal: - padding = kernel - 1 - else: - padding = kernel // 2 - - def make_conv(in_d, out_d, k, p=0, has_dilation=True): - conv = nn.Conv1d( - in_d, - out_d, - kernel_size=k, - padding=p, - dilation=dilation if has_dilation else 1, - ) - if cfg.discriminator_spectral_norm: - conv = nn.utils.spectral_norm(conv) - elif cfg.discriminator_weight_norm: - conv = nn.utils.weight_norm(conv) - return conv - - inner_net = [ - nn.Sequential( - make_conv(inner_dim, inner_dim, kernel, padding), - SamePad(kernel_size=kernel, causal=cfg.discriminator_causal), - nn.Dropout(cfg.discriminator_dropout), - nn.GELU(), - ) - for _ in range(cfg.discriminator_depth - 1) - ] + [ - make_conv(inner_dim, 1, kernel, padding, has_dilation=False), - SamePad(kernel_size=kernel, causal=cfg.discriminator_causal), - ] - - if cfg.discriminator_linear_emb: - emb_net = [make_conv(dim, inner_dim, 1)] - else: - emb_net = [ - make_conv(dim, inner_dim, kernel, padding), - SamePad(kernel_size=kernel, causal=cfg.discriminator_causal), - ] - - if cfg.discriminator_act_after_linear: - emb_net.append(nn.GELU()) - - self.net = nn.Sequential( - *emb_net, - nn.Dropout(cfg.discriminator_dropout), - *inner_net, - ) - - def forward(self, x, padding_mask): - x = x.transpose(1, 2) # BTC -> BCT - x = self.net(x) - x = x.transpose(1, 2) - x_sz = x.size(1) - if 
padding_mask is not None and padding_mask.any() and padding_mask.dim() > 1: - padding_mask = padding_mask[:, : x.size(1)] - x[padding_mask] = float("-inf") if self.max_pool else 0 - x_sz = x_sz - padding_mask.sum(dim=-1) - x = x.squeeze(-1) - if self.max_pool: - x, _ = x.max(dim=-1) - else: - x = x.sum(dim=-1) - x = x / x_sz - return x - - -class Generator(nn.Module): - def __init__(self, input_dim, output_dim, cfg: Wav2vec_UConfig): - super().__init__() - - self.cfg = cfg - self.output_dim = output_dim - self.stride = cfg.generator_stride - self.dropout = nn.Dropout(cfg.generator_dropout) - - padding = cfg.generator_kernel // 2 - self.proj = nn.Sequential( - TransposeLast(), - nn.Conv1d( - input_dim, - output_dim, - kernel_size=cfg.generator_kernel, - stride=cfg.generator_stride, - dilation=cfg.generator_dilation, - padding=padding, - bias=cfg.generator_bias, - ), - TransposeLast(), - ) - - def forward(self, dense_x, tokens, dense_padding_mask): - dense_x = self.dropout(dense_x) - - dense_x = self.proj(dense_x) - if self.stride > 1: - dense_padding_mask = dense_padding_mask[:, :: self.stride] - - if dense_padding_mask.size(1) != dense_x.size(1): - new_padding = dense_padding_mask.new_zeros(dense_x.shape[:-1]) - diff = new_padding.size(1) - dense_padding_mask.size(1) - assert ( - diff > 0 - ), f"{new_padding.shape}, {dense_padding_mask.shape}, {dense_x.shape}, {diff}" - if diff > 0: - new_padding[:, diff:] = dense_padding_mask - else: - assert diff < 0 - new_padding = dense_padding_mask[:, :diff] - - dense_padding_mask = new_padding - - result = {} - - token_x = None - if tokens is not None: - token_x = dense_x.new_zeros(tokens.numel(), self.output_dim) - token_x.scatter_(1, tokens.view(-1, 1).long(), 1) - token_x = token_x.view(tokens.shape + (self.output_dim,)) - - result["dense_x"] = dense_x - result["token_x"] = token_x - result["dense_padding_mask"] = dense_padding_mask - - return result - - -@register_model("wav2vec_u", dataclass=Wav2vec_UConfig) -class Wav2vec_U(BaseFairseqModel): - def calc_gradient_penalty(self, real_data, fake_data): - - b_size = min(real_data.size(0), fake_data.size(0)) - t_size = min(real_data.size(1), fake_data.size(1)) - - if self.cfg.probabilistic_grad_penalty_slicing: - - def get_slice(data, dim, target_size): - - size = data.size(dim) - diff = size - target_size - if diff <= 0: - return data - - start = np.random.randint(0, diff + 1) - return data.narrow(dim=dim, start=start, length=target_size) - - real_data = get_slice(real_data, 0, b_size) - real_data = get_slice(real_data, 1, t_size) - fake_data = get_slice(fake_data, 0, b_size) - fake_data = get_slice(fake_data, 1, t_size) - - else: - real_data = real_data[:b_size, :t_size] - fake_data = fake_data[:b_size, :t_size] - - alpha = torch.rand(real_data.size(0), 1, 1) - alpha = alpha.expand(real_data.size()) - alpha = alpha.to(real_data.device) - - interpolates = alpha * real_data + ((1 - alpha) * fake_data) - - disc_interpolates = self.discriminator(interpolates, None) - - gradients = autograd.grad( - outputs=disc_interpolates, - inputs=interpolates, - grad_outputs=torch.ones(disc_interpolates.size(), device=real_data.device), - create_graph=True, - retain_graph=True, - only_inputs=True, - )[0] - - gradient_penalty = (gradients.norm(2, dim=1) - 1) ** 2 - return gradient_penalty - - def set_num_updates(self, num_updates): - super().set_num_updates(num_updates) - self.update_num = num_updates - self.curr_temp = max( - self.max_temp * self.temp_decay ** num_updates, self.min_temp - ) - - def 
discrim_step(self, num_updates): - return num_updates % 2 == 1 - - def get_groups_for_update(self, num_updates): - return "discriminator" if self.discrim_step(num_updates) else "generator" - - def __init__(self, cfg: Wav2vec_UConfig, target_dict): - super().__init__() - - self.cfg = cfg - self.zero_index = target_dict.index("") if "" in target_dict else 0 - self.smoothness_weight = cfg.smoothness_weight - - output_size = len(target_dict) - self.pad = target_dict.pad() - self.eos = target_dict.eos() - self.smoothing = cfg.smoothing - self.smoothing_one_sided = cfg.smoothing_one_sided - self.no_softmax = cfg.no_softmax - self.gumbel = cfg.gumbel - self.hard_gumbel = cfg.hard_gumbel - self.last_acc = None - - self.gradient_penalty = cfg.gradient_penalty - self.code_penalty = cfg.code_penalty - self.blank_weight = cfg.blank_weight - self.blank_mode = cfg.blank_mode - self.blank_index = target_dict.index("") if cfg.blank_is_sil else 0 - assert self.blank_index != target_dict.unk() - - self.discriminator = Discriminator(output_size, cfg) - for p in self.discriminator.parameters(): - p.param_group = "discriminator" - - self.pca_A = self.pca_b = None - d = cfg.input_dim - - self.segmenter = SEGMENT_FACTORY[cfg.segmentation.type](cfg.segmentation) - - self.generator = Generator(d, output_size, cfg) - - for p in self.generator.parameters(): - p.param_group = "generator" - - for p in self.segmenter.parameters(): - p.param_group = "generator" - - self.max_temp, self.min_temp, self.temp_decay = cfg.temp - self.curr_temp = self.max_temp - self.update_num = 0 - - @classmethod - def build_model(cls, cfg, task): - return cls(cfg, task.target_dictionary) - - def get_logits( - self, - net_output: Optional[Dict[str, List[Optional[torch.Tensor]]]], - normalize: bool = False, - ): - logits = net_output["logits"] - - if self.blank_weight != 0: - if self.blank_mode == "add": - logits[..., self.blank_index] += self.blank_weight - elif self.blank_mode == "set": - logits[..., self.blank_index] = self.blank_weight - else: - raise Exception(f"invalid blank mode {self.blank_mode}") - - padding = net_output["padding_mask"] - if padding.any(): - logits[padding] = float("-inf") - logits[padding][..., self.blank_index] = float("inf") - - if normalize: - logits = utils.log_softmax(logits.float(), dim=-1) - - return logits.transpose(0, 1) - - def get_normalized_probs( - self, - net_output: Tuple[ - torch.Tensor, Optional[Dict[str, List[Optional[torch.Tensor]]]] - ], - log_probs: bool, - sample: Optional[Dict[str, torch.Tensor]] = None, - ): - logits = self.get_logits(net_output) - - probs = super().get_normalized_probs(logits, log_probs, sample) - # BTC -> TBC for ctc - probs = probs.transpose(0, 1) - return probs - - def normalize(self, dense_x): - - bsz, tsz, csz = dense_x.shape - - if dense_x.numel() == 0: - raise Exception(dense_x.shape) - _, k = dense_x.max(-1) - hard_x = ( - dense_x.new_zeros(bsz * tsz, csz) - .scatter_(-1, k.view(-1, 1), 1.0) - .view(-1, csz) - ) - hard_probs = torch.mean(hard_x.float(), dim=0) - code_perplexity = torch.exp( - -torch.sum(hard_probs * torch.log(hard_probs + 1e-7), dim=-1) - ) - - avg_probs = torch.softmax(dense_x.reshape(-1, csz).float(), dim=-1).mean(dim=0) - prob_perplexity = torch.exp( - -torch.sum(avg_probs * torch.log(avg_probs + 1e-7), dim=-1) - ) - - if not self.no_softmax: - if self.training and self.gumbel: - dense_x = F.gumbel_softmax( - dense_x.float(), tau=self.curr_temp, hard=self.hard_gumbel - ).type_as(dense_x) - else: - dense_x = dense_x.softmax(-1) - - return dense_x, 
code_perplexity, prob_perplexity - - def forward( - self, - features, - padding_mask, - random_label=None, - dense_x_only=False, - segment=True, - ): - if segment: - features, padding_mask = self.segmenter.pre_segment(features, padding_mask) - - orig_size = features.size(0) * features.size(1) - padding_mask.sum() - - gen_result = self.generator(features, random_label, padding_mask) - - orig_dense_x, token_x = gen_result["dense_x"], gen_result["token_x"] - orig_dense_padding_mask = gen_result["dense_padding_mask"] - - if segment: - dense_x, dense_padding_mask = self.segmenter.logit_segment( - orig_dense_x, orig_dense_padding_mask - ) - else: - dense_x = orig_dense_x - dense_padding_mask = orig_dense_padding_mask - - dense_logits = dense_x - prob_perplexity = None - code_perplexity = None - - if not (self.no_softmax and dense_x_only): - dense_x, code_perplexity, prob_perplexity = self.normalize(dense_logits) - - if dense_x_only or self.discriminator is None: - return { - "logits": dense_x, - "padding_mask": dense_padding_mask, - } - - token_padding_mask = random_label == self.pad - - dense_y = self.discriminator(dense_x, dense_padding_mask) - token_y = self.discriminator(token_x, token_padding_mask) - - sample_size = features.size(0) - - d_step = self.discrim_step(self.update_num) - - fake_smooth = self.smoothing - real_smooth = self.smoothing - if self.smoothing_one_sided: - fake_smooth = 0 - - zero_loss = None - smoothness_loss = None - code_pen = None - - if d_step: - loss_dense = F.binary_cross_entropy_with_logits( - dense_y, - dense_y.new_ones(dense_y.shape) - fake_smooth, - reduction="sum", - ) - loss_token = F.binary_cross_entropy_with_logits( - token_y, - token_y.new_zeros(token_y.shape) + real_smooth, - reduction="sum", - ) - if self.training and self.gradient_penalty > 0: - grad_pen = self.calc_gradient_penalty(token_x, dense_x) - grad_pen = grad_pen.sum() * self.gradient_penalty - else: - grad_pen = None - else: - grad_pen = None - loss_token = None - loss_dense = F.binary_cross_entropy_with_logits( - dense_y, - dense_y.new_zeros(dense_y.shape) + fake_smooth, - reduction="sum", - ) - num_vars = dense_x.size(-1) - if prob_perplexity is not None: - code_pen = (num_vars - prob_perplexity) / num_vars - code_pen = code_pen * sample_size * self.code_penalty - - if self.smoothness_weight > 0: - smoothness_loss = F.mse_loss( - dense_logits[:, :-1], dense_logits[:, 1:], reduction="none" - ) - smoothness_loss[dense_padding_mask[:, 1:]] = 0 - smoothness_loss = ( - smoothness_loss.mean() * sample_size * self.smoothness_weight - ) - - result = { - "losses": { - "grad_pen": grad_pen, - "code_pen": code_pen, - "smoothness": smoothness_loss, - }, - "temp": self.curr_temp, - "code_ppl": code_perplexity, - "prob_ppl": prob_perplexity, - "d_steps": int(d_step), - "sample_size": sample_size, - } - - suff = "_d" if d_step else "_g" - result["losses"]["dense" + suff] = loss_dense - result["losses"]["token" + suff] = loss_token - - return result diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/scripts/compound_split_bleu.sh b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/scripts/compound_split_bleu.sh deleted file mode 100644 index 1972fddcebff9a43a70bcf14c287175c68f60e3f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/scripts/compound_split_bleu.sh +++ /dev/null @@ -1,20 +0,0 @@ -#!/bin/bash - -if [ $# -ne 1 ]; then - echo "usage: $0 GENERATE_PY_OUTPUT" - exit 1 -fi - -GEN=$1 - -SYS=$GEN.sys -REF=$GEN.ref - -if [ $(tail -n 1 $GEN | grep BLEU | wc -l) 
-ne 1 ]; then - echo "not done generating" - exit -fi - -grep ^H $GEN | awk -F '\t' '{print $NF}' | perl -ple 's{(\S)-(\S)}{$1 ##AT##-##AT## $2}g' > $SYS -grep ^T $GEN | cut -f2- | perl -ple 's{(\S)-(\S)}{$1 ##AT##-##AT## $2}g' > $REF -fairseq-score --sys $SYS --ref $REF diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/simultaneous_translation/utils/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/simultaneous_translation/utils/__init__.py deleted file mode 100644 index 1e9ce844f59a4211061392084cc81075e6bab19f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/simultaneous_translation/utils/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import importlib -import os - - -# automatically import any Python files in the criterions/ directory -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - module = file[: file.find(".py")] - importlib.import_module("examples.simultaneous_translation.utils." + module) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/metrics/abx_metrics/README.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/metrics/abx_metrics/README.md deleted file mode 100644 index aa2560f0453403fb5846c387848c78b037c79cb2..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/metrics/abx_metrics/README.md +++ /dev/null @@ -1,77 +0,0 @@ -# ABX-based evaluation - -ABX is used to evaluate the quality of the obtained discrete units. - -The life cycle of the ABX-based evaluation for the Speech-to-Unit contains the following steps: -1. Training an acoustic model (or use an existing acoustic model) ([description](./../..)) -2. Perform quantization of speech by learning a K-means clustering model ([description](./../..)) -3. Compute discrete features for ABX computation using the learned clusters -4. Compute the ABX score over the discrete features taking advantage of [libri-light's ABX evaluation script][ll-abx] - -Here we assume that you already went throught the first two steps and focus solely on extracting features and computing ABX scores. - -## Libri-light setup - -Follow [libri-light's instructions][ll-instructions] for installation and [ABX evaluation setup][ll-abx] (including the download of the data items required for ABX computation). - -## Computing ABX - -### Dumping quantized features - -The first step for the ABX computation is to dump the quantized representations corresponding to the test files. - -```shell -TYPE="hubert" -LAYER=6 -CKPT_PATH="" -KM_MODEL_PATH="" - -SUBSET="dev-clean" -MANIFEST="" -DATA_DIR="/$SUBSET" - -PYTHONPATH=. python examples/textless_nlp/gslm/metrics/abx_metrics/dump_abx_feats.py \ - --feature_type $TYPE \ - --kmeans_model_path $KM_MODEL_PATH \ - --checkpoint_path $CKPT_PATH \ - --layer $LAYER \ - --manifest_path $MANIFEST \ - --out_dir_path $DATA_DIR \ - --extension ".flac" -``` - -Again the manifest file follows the same structure than elsewhere in the codebase. 
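For reference, here is a minimal, hypothetical sketch of how such a manifest could be written (root directory on the first line, then one `<relative path>\t<frame count>` entry per audio file); the root path and output name are placeholders, and `soundfile` is assumed to be available:

```python
# Hypothetical manifest writer for the wav2vec-style TSV layout assumed above:
# first line is the dataset root, each following line is "<relative path>\t<frames>".
import pathlib
import soundfile as sf

root = pathlib.Path("/data/LibriSpeech/dev-clean")  # placeholder root directory
with open("dev-clean.tsv", "w") as manifest:
    print(root, file=manifest)
    for audio in sorted(root.rglob("*.flac")):
        n_frames = sf.info(str(audio)).frames  # number of samples in the file
        print(f"{audio.relative_to(root)}\t{n_frames}", file=manifest)
```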
- -### Compute ABX with Libri-light - -Use libri-light's `eval_ABX.py` script (within the appropriate environment set up) as followed: - -```shell -LIBRILIGHT_ROOT="" - -SUBSET="dev-clean" -DATA_DIR="/$SUBSET" -ITEM_FILE_PATH="$LIBRILIGHT_ROOT/eval/ABX_data/$SUBSET.item" -OUT_DIR="/$SUBSET" - -FILE_EXTENSION=".npy" -FEATURE_SIZE=0.02 # depends on the model used - -PYTHONPATH=$LIBRILIGHT_ROOT \ - python $LIBRILIGHT_ROOT/eval/eval_ABX.py \ - $DATA_DIR \ - $ITEM_FILE_PATH \ - --file_extension $FILE_EXTENSION \ - --feature_size $FEATURE_SIZE \ - --out $OUT_DIR \ - --mode "all" -``` - -Note that `FEATURE_SIZE` will depend on the model type you are using to extract the acoustic features: -* For HuBERT and Wav2Vec2.0, use `FEATURE_SIZE=0.02` -* For CPC and Log Mel, use `FEATURE_SIZE=0.01` - -If you have a gpu available, make sure you add the `--cuda` flag for faster computation. - -[ll-instructions]: https://github.com/facebookresearch/libri-light -[ll-abx]: https://github.com/facebookresearch/libri-light/tree/master/eval#abx diff --git a/spaces/OIUGLK/bingo/src/components/theme-toggle.tsx b/spaces/OIUGLK/bingo/src/components/theme-toggle.tsx deleted file mode 100644 index 67d3f1a2c163ccbeb52c40a7e42f107190237154..0000000000000000000000000000000000000000 --- a/spaces/OIUGLK/bingo/src/components/theme-toggle.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import { useTheme } from 'next-themes' - -import { Button } from '@/components/ui/button' -import { IconMoon, IconSun } from '@/components/ui/icons' - -export function ThemeToggle() { - const { setTheme, theme } = useTheme() - const [_, startTransition] = React.useTransition() - - return ( - - ) -} diff --git a/spaces/Oddity/ehartford-WizardLM-13B-Uncensored/README.md b/spaces/Oddity/ehartford-WizardLM-13B-Uncensored/README.md deleted file mode 100644 index 37e232057b542c5e49c037d0a2d1ba9416a08814..0000000000000000000000000000000000000000 --- a/spaces/Oddity/ehartford-WizardLM-13B-Uncensored/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Ehartford WizardLM 13B Uncensored -emoji: 🐢 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Olivernyu/sentiment_analysis_app/README.md b/spaces/Olivernyu/sentiment_analysis_app/README.md deleted file mode 100644 index c7d2d9b118bbea0339e0e08a369452afdf6d26e6..0000000000000000000000000000000000000000 --- a/spaces/Olivernyu/sentiment_analysis_app/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sentiment Analysis App -emoji: 🌖 -colorFrom: blue -colorTo: blue -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OpenMotionLab/MotionGPT/mGPT/render/visualize.py b/spaces/OpenMotionLab/MotionGPT/mGPT/render/visualize.py deleted file mode 100644 index 7cc9c6cd9f77ef8f031aa4a9f2fe5926f6b84272..0000000000000000000000000000000000000000 --- a/spaces/OpenMotionLab/MotionGPT/mGPT/render/visualize.py +++ /dev/null @@ -1,747 +0,0 @@ -from operator import mod -import os -# from cv2 import CAP_PROP_INTELPERC_DEPTH_LOW_CONFIDENCE_VALUE -import imageio -import shutil -import numpy as np -import torch -from tqdm import tqdm - -from scipy.spatial.transform import Rotation as R -from mGPT.render.renderer import get_renderer -from mGPT.render.rendermotion import 
render_video -# from mld.utils.img_utils import convert_img -# from mld.utils.uicap_utils import output_pkl - - -def parsename(path): - basebane = os.path.basename(path) - base = os.path.splitext(basebane)[0] - strs = base.split('_') - key = strs[-2] - action = strs[-1] - return key, action - - -def load_anim(path, timesize=None): - data = np.array(imageio.mimread(path, memtest=False)) #[..., :3] - if timesize is None: - return data - - # take the last frame and put shadow repeat the last frame but with a little shadow - # lastframe = add_shadow(data[-1]) - # alldata = np.tile(lastframe, (timesize, 1, 1, 1)) - alldata = data - - # debug fix mat dim - if len(data.shape) == 3 and len(alldata.shape) == 4: - data = data[:, None, :, :] - - # copy the first frames - lenanim = data.shape[0] - alldata[:lenanim] = data[:lenanim] - return alldata - - -def plot_3d_motion_dico(x): - motion, length, save_path, params, kargs = x - plot_3d_motion(motion, length, save_path, params, **kargs) - - -def plot_3d_motion(motion, - length, - save_path, - params, - title="", - interval=50, - pred_cam=None, - imgs=None, - bbox=None, - side=None): - # render smpl - # [nframes, nVs, 3] - if motion.shape[1] == 6890: - # width = 250 - # height = 250 - width = 600 - height = 600 - if pred_cam is None: - # cam=(0.75, 0.75, 0, 0.1) - cam = (0.8, 0.8, 0, 0.1) - # cam=(0.9, 0.9, 0, 0.1) - else: - assert bbox is not None - assert imgs is not None - - # Tmp visulize - # weak perspective camera parameters in cropped image space (s,tx,ty) - # to - # weak perspective camera parameters in original image space (sx,sy,tx,ty) - cam = np.concatenate( - (pred_cam[:, [0]], pred_cam[:, [0]], pred_cam[:, 1:3]), axis=1) - - # ToDo convert to original cam - # load original img? - # calculate cam after padding??? - # - # cam = convert_crop_cam_to_orig_img( - # cam=pred_cam, - # bbox=bbox, - # img_width=width, - # img_height=height - # ) - cam_pose = np.eye(4) - cam_pose[0:3, 0:3] = R.from_euler('x', -90, degrees=True).as_matrix() - cam_pose[0:3, 3] = [0, 0, 0] - if side: - rz = np.eye(4) - rz[0:3, 0:3] = R.from_euler('z', -90, degrees=True).as_matrix() - cam_pose = np.matmul(rz, cam_pose) - - # # reshape input imgs - # if imgs is not None: - # imgs = convert_img(imgs.unsqueeze(0), height)[:,0] - backgrounds = imgs if imgs is not None else np.ones( - (height, width, 3)) * 255 - renderer = get_renderer(width, height, cam_pose) - - # [nframes, nVs, 3] - meshes = motion - key, action = parsename(save_path) - render_video(meshes, - key, - action, - renderer, - save_path, - backgrounds, - cam_pose, - cams=cam) - return - - -def stack_images(real, real_gens, gen, real_imgs=None): - # change to 3 channel - # print(real.shape) - # print(real_gens.shape) - # print(real_gens.shape) - # real = real[:3] - # real_gens = real_gens[:3] - # gen = gen[:3] - - nleft_cols = len(real_gens) + 1 - print("Stacking frames..") - allframes = np.concatenate( - (real[:, None, ...], *[x[:, None, ...] 
for x in real_gens], gen), 1) - nframes, nspa, nats, h, w, pix = allframes.shape - - blackborder = np.zeros((w // 30, h * nats, pix), dtype=allframes.dtype) - # blackborder = np.ones((w//30, h*nats, pix), dtype=allframes.dtype)*255 - frames = [] - for frame_idx in tqdm(range(nframes)): - columns = np.vstack(allframes[frame_idx].transpose(1, 2, 3, 4, - 0)).transpose( - 3, 1, 0, 2) - frame = np.concatenate( - (*columns[0:nleft_cols], blackborder, *columns[nleft_cols:]), - 0).transpose(1, 0, 2) - - frames.append(frame) - - if real_imgs is not None: - resize_imgs = convert_img(real_imgs, h)[:nframes, ...] - - for i in range(len(frames)): - imgs = np.vstack(resize_imgs[i, ...]) - imgs4 = np.ones( - (imgs.shape[0], imgs.shape[1], 4), dtype=np.uint8) * 255 - imgs4[:, :, :3] = imgs - #imgs = torch2numpy(imgs) - frames[i] = np.concatenate((imgs4, frames[i]), 1) - return np.stack(frames) - - -def stack_images_gen(gen, real_imgs=None): - print("Stacking frames..") - allframes = gen - nframes, nspa, nats, h, w, pix = allframes.shape - blackborder = np.zeros((w * nspa, h // 30, pix), dtype=allframes.dtype) - blackborder = blackborder[None, ...].repeat(nats, - axis=0).transpose(0, 2, 1, 3) - - frames = [] - for frame_idx in tqdm(range(nframes)): - rows = np.vstack(allframes[frame_idx].transpose(0, 3, 2, 4, - 1)).transpose( - 3, 1, 0, 2) - rows = np.concatenate((rows, blackborder), 1) - frame = np.concatenate(rows, 0) - frames.append(frame) - - if real_imgs is not None: - # ToDo Add images - resize_imgs = convert_img(real_imgs, h)[:nframes, ...] - for i in range(len(frames)): - imgs = np.vstack(resize_imgs[i, ...]) - #imgs = torch2numpy(imgs) - frames[i] = np.concatenate((imgs, frames[i]), 1) - return np.stack(frames) - - -def generate_by_video(visualization, reconstructions, generation, - label_to_action_name, params, nats, nspa, tmp_path): - # shape : (17, 3, 4, 480, 640, 3) - # (nframes, row, column, h, w, 3) - fps = params["fps"] - - params = params.copy() - - gen_only = False - if visualization is None: - gen_only = True - outputkey = "output_vertices" - params["pose_rep"] = "vertices" - elif "output_vertices" in visualization: - outputkey = "output_vertices" - params["pose_rep"] = "vertices" - elif "output_xyz" in visualization: - outputkey = "output_xyz" - params["pose_rep"] = "xyz" - else: - outputkey = "poses" - - keep = [outputkey, 'lengths', "y"] - gener = {key: generation[key].data.cpu().numpy() for key in keep} - if not gen_only: - visu = {key: visualization[key].data.cpu().numpy() for key in keep} - recons = {} - # visualize regressor results - if 'vertices_hat' in reconstructions['ntf']: - recons['regressor'] = { - 'output_vertices': - reconstructions['ntf']['vertices_hat'].data.cpu().numpy(), - 'lengths': - reconstructions['ntf']['lengths'].data.cpu().numpy(), - 'y': - reconstructions['ntf']['y'].data.cpu().numpy() - } - - recons['regressor_side'] = { - 'output_vertices': - reconstructions['ntf']['vertices_hat'].data.cpu().numpy(), - 'lengths': - reconstructions['ntf']['lengths'].data.cpu().numpy(), - 'y': - reconstructions['ntf']['y'].data.cpu().numpy(), - 'side': - True - } - # ToDo rendering overlap results - # recons['overlap'] = {'output_vertices':reconstructions['ntf']['vertices_hat'].data.cpu().numpy(), - # 'lengths':reconstructions['ntf']['lengths'].data.cpu().numpy(), - # 'y':reconstructions['ntf']['y'].data.cpu().numpy(), - # 'imgs':reconstructions['ntf']['imgs'], - # 'bbox':reconstructions['ntf']['bbox'].data.cpu().numpy(), - # 
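        # For every reconstruction mode, keep both a front view and a *_side view;
        # the side variants re-render the same meshes from a rotated (side) camera.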
'cam':reconstructions['ntf']['preds'][0]['cam'].data.cpu().numpy()} - for mode, reconstruction in reconstructions.items(): - recons[mode] = { - key: reconstruction[key].data.cpu().numpy() - for key in keep - } - recons[mode + '_side'] = { - key: reconstruction[key].data.cpu().numpy() - for key in keep - } - recons[mode + '_side']['side'] = True - - # lenmax = max(gener['lengths'].max(), visu['lengths'].max()) - # timesize = lenmax + 5 longer visulization - lenmax = gener['lengths'].max() - timesize = lenmax - - import multiprocessing - - def pool_job_with_desc(pool, iterator, desc, max_, save_path_format, isij): - with tqdm(total=max_, desc=desc.format("Render")) as pbar: - for data in iterator: - plot_3d_motion_dico(data) - # for _ in pool.imap_unordered(plot_3d_motion_dico, iterator): - # pbar.update() - if isij: - array = np.stack([[ - load_anim(save_path_format.format(i, j), timesize) - for j in range(nats) - ] for i in tqdm(range(nspa), desc=desc.format("Load"))]) - return array.transpose(2, 0, 1, 3, 4, 5) - else: - array = np.stack([ - load_anim(save_path_format.format(i), timesize) - for i in tqdm(range(nats), desc=desc.format("Load")) - ]) - return array.transpose(1, 0, 2, 3, 4) - - pool = None - # if True: - with multiprocessing.Pool() as pool: - # Generated samples - save_path_format = os.path.join(tmp_path, "gen_{}_{}.gif") - iterator = ((gener[outputkey][i, j], gener['lengths'][i, j], - save_path_format.format(i, j), params, { - "title": - f"gen: {label_to_action_name(gener['y'][i, j])}", - "interval": 1000 / fps - }) for j in range(nats) for i in range(nspa)) - gener["frames"] = pool_job_with_desc(pool, iterator, - "{} the generated samples", - nats * nspa, save_path_format, - True) - if not gen_only: - # Real samples - save_path_format = os.path.join(tmp_path, "real_{}.gif") - iterator = ((visu[outputkey][i], visu['lengths'][i], - save_path_format.format(i), params, { - "title": - f"real: {label_to_action_name(visu['y'][i])}", - "interval": 1000 / fps - }) for i in range(nats)) - visu["frames"] = pool_job_with_desc(pool, iterator, - "{} the real samples", nats, - save_path_format, False) - for mode, recon in recons.items(): - # Reconstructed samples - save_path_format = os.path.join( - tmp_path, f"reconstructed_{mode}_" + "{}.gif") - if mode == 'overlap': - iterator = (( - recon[outputkey][i], recon['lengths'][i], - save_path_format.format(i), params, { - "title": - f"recons: {label_to_action_name(recon['y'][i])}", - "interval": 1000 / fps, - "pred_cam": recon['cam'][i], - "imgs": recon['imgs'][i], - "bbox": recon['bbox'][i] - }) for i in range(nats)) - else: - side = True if 'side' in recon.keys() else False - iterator = (( - recon[outputkey][i], recon['lengths'][i], - save_path_format.format(i), params, { - "title": - f"recons: {label_to_action_name(recon['y'][i])}", - "interval": 1000 / fps, - "side": side - }) for i in range(nats)) - recon["frames"] = pool_job_with_desc( - pool, iterator, "{} the reconstructed samples", nats, - save_path_format, False) - # vis img in visu - if not gen_only: - input_imgs = visualization["imgs"] if visualization[ - "imgs"] is not None else None - vis = visu["frames"] if not gen_only else None - rec = [recon["frames"] - for recon in recons.values()] if not gen_only else None - gen = gener["frames"] - frames = stack_images(vis, rec, gen, input_imgs) - else: - gen = gener["frames"] - frames = stack_images_gen(gen) - return frames - - -def viz_epoch(model, - dataset, - epoch, - params, - folder, - module=None, - writer=None, - exps=''): - """ 
Generate & viz samples """ - module = model if module is None else module - - # visualize with joints3D - model.outputxyz = True - - print(f"Visualization of the epoch {epoch}") - - noise_same_action = params["noise_same_action"] - noise_diff_action = params["noise_diff_action"] - duration_mode = params["duration_mode"] - reconstruction_mode = params["reconstruction_mode"] - decoder_test = params["decoder_test"] - - fact = params["fact_latent"] - figname = params["figname"].format(epoch) - - nspa = params["num_samples_per_action"] - nats = params["num_actions_to_sample"] - - num_classes = params["num_classes"] - # nats = min(num_classes, nats) - - # define some classes - classes = torch.randperm(num_classes)[:nats] - # duplicate same classes when sampling too much - if nats > num_classes: - classes = classes.expand(nats) - - meandurations = torch.from_numpy( - np.array([ - round(dataset.get_mean_length_label(cl.item())) for cl in classes - ])) - - if duration_mode == "interpolate" or decoder_test == "diffduration": - points, step = np.linspace(-nspa, nspa, nspa, retstep=True) - # points = np.round(10*points/step).astype(int) - points = np.array([5, 10, 16, 30, 60, 80]).astype(int) - # gendurations = meandurations.repeat((nspa, 1)) + points[:, None] - gendurations = torch.from_numpy(points[:, None]).expand( - (nspa, 1)).repeat((1, nats)) - else: - gendurations = meandurations.repeat((nspa, 1)) - print("Duration time: ") - print(gendurations[:, 0]) - - # extract the real samples - # real_samples, real_theta, mask_real, real_lengths, imgs, paths - batch = dataset.get_label_sample_batch(classes.numpy()) - - # ToDo - # clean these data - # Visualizaion of real samples - visualization = { - "x": batch['x'].to(model.device), - "y": classes.to(model.device), - "mask": batch['mask'].to(model.device), - 'lengths': batch['lengths'].to(model.device), - "output": batch['x'].to(model.device), - "theta": - batch['theta'].to(model.device) if 'theta' in batch.keys() else None, - "imgs": - batch['imgs'].to(model.device) if 'imgs' in batch.keys() else None, - "paths": batch['paths'] if 'paths' in batch.keys() else None, - } - - # Visualizaion of real samples - if reconstruction_mode == "both": - reconstructions = { - "tf": { - "x": - batch['x'].to(model.device), - "y": - classes.to(model.device), - 'lengths': - batch['lengths'].to(model.device), - "mask": - batch['mask'].to(model.device), - "teacher_force": - True, - "theta": - batch['theta'].to(model.device) - if 'theta' in batch.keys() else None - }, - "ntf": { - "x": - batch['x'].to(model.device), - "y": - classes.to(model.device), - 'lengths': - batch['lengths'].to(model.device), - "mask": - batch['mask'].to(model.device), - "theta": - batch['theta'].to(model.device) - if 'theta' in batch.keys() else None - } - } - else: - reconstructions = { - reconstruction_mode: { - "x": - batch['x'].to(model.device), - "y": - classes.to(model.device), - 'lengths': - batch['lengths'].to(model.device), - "mask": - batch['mask'].to(model.device), - "teacher_force": - reconstruction_mode == "tf", - "imgs": - batch['imgs'].to(model.device) - if 'imgs' in batch.keys() else None, - "theta": - batch['theta'].to(model.device) - if 'theta' in batch.keys() else None, - "bbox": - batch['bbox'] if 'bbox' in batch.keys() else None - } - } - print("Computing the samples poses..") - - # generate the repr (joints3D/pose etc) - model.eval() - with torch.no_grad(): - # Reconstruction of the real data - for mode in reconstructions: - # update reconstruction dicts - reconstructions[mode] 
= model(reconstructions[mode]) - reconstruction = reconstructions[list(reconstructions.keys())[0]] - - if decoder_test == "gt": - # Generate the new data - gt_input = { - "x": batch['x'].repeat(nspa, 1, 1, 1).to(model.device), - "y": classes.repeat(nspa).to(model.device), - "mask": batch['mask'].repeat(nspa, 1).to(model.device), - 'lengths': batch['lengths'].repeat(nspa).to(model.device) - } - generation = model(gt_input) - if decoder_test == "new": - # Generate the new data - generation = module.generate(gendurations, - classes=classes, - nspa=nspa, - noise_same_action=noise_same_action, - noise_diff_action=noise_diff_action, - fact=fact) - elif decoder_test == "diffaction": - assert nats == nspa - # keep the same noise for each "sample" - z = reconstruction["z"].repeat((nspa, 1)) - mask = reconstruction["mask"].repeat((nspa, 1)) - lengths = reconstruction['lengths'].repeat(nspa) - # but use other labels - y = classes.repeat_interleave(nspa).to(model.device) - generation = {"z": z, "y": y, "mask": mask, 'lengths': lengths} - model.decoder(generation) - - elif decoder_test == "diffduration": - z = reconstruction["z"].repeat((nspa, 1)) - lengths = gendurations.reshape(-1).to(model.device) - mask = model.lengths_to_mask(lengths) - y = classes.repeat(nspa).to(model.device) - generation = {"z": z, "y": y, "mask": mask, 'lengths': lengths} - model.decoder(generation) - - elif decoder_test == "interpolate_action": - assert nats == nspa - # same noise for each sample - z_diff_action = torch.randn(1, - model.latent_dim, - device=model.device).repeat(nats, 1) - z = z_diff_action.repeat((nspa, 1)) - - # but use combination of labels and labels below - y = F.one_hot(classes.to(model.device), - model.num_classes).to(model.device) - y_below = F.one_hot(torch.cat((classes[1:], classes[0:1])), - model.num_classes).to(model.device) - convex_factors = torch.linspace(0, 1, nspa, device=model.device) - y_mixed = torch.einsum("nk,m->mnk", y, 1-convex_factors) + \ - torch.einsum("nk,m->mnk", y_below, convex_factors) - y_mixed = y_mixed.reshape(nspa * nats, y_mixed.shape[-1]) - - durations = gendurations[0].to(model.device) - durations_below = torch.cat((durations[1:], durations[0:1])) - - gendurations = torch.einsum("l,k->kl", durations, 1-convex_factors) + \ - torch.einsum("l,k->kl", durations_below, convex_factors) - gendurations = gendurations.to(dtype=durations.dtype) - - lengths = gendurations.to(model.device).reshape(z.shape[0]) - mask = model.lengths_to_mask(lengths) - - generation = { - "z": z, - "y": y_mixed, - "mask": mask, - 'lengths': lengths - } - generation = model.decoder(generation) - - visualization = module.prepare(visualization) - visualization["output_xyz"] = visualization["x_xyz"] - visualization["output_vertices"] = visualization["x_vertices"] - # Get xyz for the real ones - # visualization["output_xyz"] = module.rot2xyz(visualization["output"], visualization["mask"], jointstype="smpl") - # # Get smpl vertices for the real ones - # if module.cvae.pose_rep != "xyz": - # visualization["output_vertices"] = module.rot2xyz(visualization["output"], visualization["mask"], jointstype="vertices") - - for key, val in generation.items(): - if len(generation[key].shape) == 1: - generation[key] = val.reshape(nspa, nats) - else: - generation[key] = val.reshape(nspa, nats, *val.shape[1:]) - - finalpath = os.path.join(folder, figname + exps + ".gif") - tmp_path = os.path.join(folder, f"subfigures_{figname}") - os.makedirs(tmp_path, exist_ok=True) - - print("Generate the videos..") - frames = 
generate_by_video(visualization, reconstructions, generation, - dataset.label_to_action_name, params, nats, - nspa, tmp_path) - - print(f"Writing video {finalpath}") - imageio.mimsave(finalpath.replace('gif', 'mp4'), frames, fps=params["fps"]) - shutil.rmtree(tmp_path) - - # output npy - output = { - "data_id": batch['id'], - "paths": batch['paths'], - "x": batch['x'].cpu().numpy(), - "x_vertices": visualization["x_vertices"].cpu().numpy(), - "output_vertices": - reconstructions['ntf']["output_vertices"].cpu().numpy(), - "gen_vertices": generation["output_vertices"].cpu().numpy() - } - - outputpath = finalpath.replace('gif', 'npy') - np.save(outputpath, output) - - # output pkl - batch_recon = reconstructions["ntf"] - outputpath = finalpath.replace('gif', 'pkl') - # output_pkl([batch_recon], outputpath) - - if writer is not None: - writer.add_video(f"Video/Epoch {epoch}", - frames.transpose(0, 3, 1, 2)[None], - epoch, - fps=params["fps"]) - return finalpath - - -def viz_dataset(dataset, params, folder): - """ Generate & viz samples """ - print("Visualization of the dataset") - - nspa = params["num_samples_per_action"] - nats = params["num_actions_to_sample"] - - num_classes = params["num_classes"] - - figname = "{}_{}_numframes_{}_sampling_{}_step_{}".format( - params["dataset"], params["pose_rep"], params["num_frames"], - params["sampling"], params["sampling_step"]) - - # define some classes - classes = torch.randperm(num_classes)[:nats] - - allclasses = classes.repeat(nspa, 1).reshape(nspa * nats) - # extract the real samples - real_samples, mask_real, real_lengths = dataset.get_label_sample_batch( - allclasses.numpy()) - # to visualize directly - - # Visualizaion of real samples - visualization = { - "x": real_samples, - "y": allclasses, - "mask": mask_real, - 'lengths': real_lengths, - "output": real_samples - } - - from mGPT.models.rotation2xyz import Rotation2xyz - - device = params["device"] - rot2xyz = Rotation2xyz(device=device) - - rot2xyz_params = { - "pose_rep": params["pose_rep"], - "glob_rot": params["glob_rot"], - "glob": params["glob"], - "jointstype": params["jointstype"], - "translation": params["translation"] - } - - output = visualization["output"] - visualization["output_xyz"] = rot2xyz(output.to(device), - visualization["mask"].to(device), - **rot2xyz_params) - - for key, val in visualization.items(): - if len(visualization[key].shape) == 1: - visualization[key] = val.reshape(nspa, nats) - else: - visualization[key] = val.reshape(nspa, nats, *val.shape[1:]) - - finalpath = os.path.join(folder, figname + ".gif") - tmp_path = os.path.join(folder, f"subfigures_{figname}") - os.makedirs(tmp_path, exist_ok=True) - - print("Generate the videos..") - frames = generate_by_video_sequences(visualization, - dataset.label_to_action_name, params, - nats, nspa, tmp_path) - - print(f"Writing video {finalpath}..") - imageio.mimsave(finalpath, frames, fps=params["fps"]) - - -def generate_by_video_sequences(visualization, label_to_action_name, params, - nats, nspa, tmp_path): - # shape : (17, 3, 4, 480, 640, 3) - # (nframes, row, column, h, w, 3) - fps = params["fps"] - if "output_vetices" in visualization: - outputkey = "output_vetices" - params["pose_rep"] = "vertices" - elif "output_xyz" in visualization: - outputkey = "output_xyz" - params["pose_rep"] = "xyz" - else: - outputkey = "poses" - - keep = [outputkey, 'lengths', "y"] - visu = {key: visualization[key].data.cpu().numpy() for key in keep} - lenmax = visu['lengths'].max() - - timesize = lenmax + 5 - - # import multiprocessing 
- - def pool_job_with_desc(pool, iterator, desc, max_, save_path_format): - for data in iterator: - plot_3d_motion_dico(data) - # with tqdm(total=max_, desc=desc.format("Render")) as pbar: - # for _ in pool.imap_unordered(plot_3d_motion_dico, iterator): - # pbar.update() - array = np.stack([[ - load_anim(save_path_format.format(i, j), timesize) - for j in range(nats) - ] for i in tqdm(range(nspa), desc=desc.format("Load"))]) - return array.transpose(2, 0, 1, 3, 4, 5) - - pool = None - # with multiprocessing.Pool() as pool: - # Real samples - save_path_format = os.path.join(tmp_path, "real_{}_{}.gif") - iterator = ((visu[outputkey][i, j], visu['lengths'][i, j], - save_path_format.format(i, j), params, { - "title": f"real: {label_to_action_name(visu['y'][i, j])}", - "interval": 1000 / fps - }) for j in range(nats) for i in range(nspa)) - visu["frames"] = pool_job_with_desc(pool, iterator, "{} the real samples", - nats, save_path_format) - frames = stack_images_sequence(visu["frames"]) - return frames - - -def stack_images_sequence(visu): - print("Stacking frames..") - allframes = visu - nframes, nspa, nats, h, w, pix = allframes.shape - frames = [] - for frame_idx in tqdm(range(nframes)): - columns = np.vstack(allframes[frame_idx].transpose(1, 2, 3, 4, - 0)).transpose( - 3, 1, 0, 2) - frame = np.concatenate(columns).transpose(1, 0, 2) - frames.append(frame) - return np.stack(frames) diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/bin/etf2ly.py b/spaces/Pattr/DrumClassification/lilypond-2.24.2/bin/etf2ly.py deleted file mode 100644 index cafcf24a71004f83e94978c0f2829fb991dae047..0000000000000000000000000000000000000000 --- a/spaces/Pattr/DrumClassification/lilypond-2.24.2/bin/etf2ly.py +++ /dev/null @@ -1,1326 +0,0 @@ -#!/home/lily/lilypond-2.24.2/release/binaries/dependencies/install/Python-3.10.8/bin/python3.10 - -# This file is part of LilyPond, the GNU music typesetter. -# -# Copyright (C) 2001--2022 Han-Wen Nienhuys -# Jan Nieuwenhuizen -# -# LilyPond is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# LilyPond is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with LilyPond. If not, see . - -# info mostly taken from looking at files. See also -# https://www.gnu.org/software/lilypond/src/Developers/Details/etfformat.html - -# This supports -# -# * notes -# * rests -# * ties -# * slurs -# * lyrics -# * articulation -# * grace notes -# * tuplets -# - -# todo: -# * slur/stem directions -# * voices (2nd half of frame?) -# * more intelligent lyrics -# * beams (better use autobeam?) -# * more robust: try entertainer.etf (freenote) -# * dynamics -# * empty measures (eg. twopt03.etf from freenote) -# - - -import __main__ -import getopt -import gettext -import os -import re -import sys - -authors = ('Jan Nieuwenhuizen ', - 'Han-Wen Nienhuys ') - -version = '2.24.2' -if version == '@' + 'TOPLEVEL_VERSION' + '@': - version = '(unknown version)' # uGUHGUHGHGUGH - -""" - -# relocate-preamble.py.in -# -# This file is part of LilyPond, the GNU music typesetter. 
-# -# Copyright (C) 2007--2022 Han-Wen Nienhuys -# -# LilyPond is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# LilyPond is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with LilyPond. If not, see . -# - -This is generic code, used for all python scripts. - -The quotes are to ensure that the source .py file can still be -run as a python script, but does not include any sys.path handling. -Otherwise, the lilypond-book calls inside the build -might modify installed .pyc files. - -""" - -# This is needed for installations with a non-default layout, ie where share/ -# is not next to bin/. -sys.path.insert (0, os.path.join ('/home/lily/lilypond-2.24.2/release/binaries/mingw/lilypond/install/share/lilypond/2.24.2', 'python')) - -# Dynamic relocation, for installations with a default layout including GUB, -# but also for execution from the build directory. -bindir = os.path.abspath (os.path.dirname (sys.argv[0])) -topdir = os.path.dirname (bindir) -if bindir.endswith (r'/scripts/out'): - topdir = os.path.join (os.path.dirname (topdir), 'out') -datadir = os.path.abspath (os.path.join (topdir, 'share', 'lilypond')) -for v in [ 'current', '2.24.2' ]: - sys.path.insert (0, os.path.join (datadir, v, 'python')) - -""" -""" - -################################################################ -# Load translation and install _() into Python's builtins namespace. -gettext.install('lilypond', '/home/lily/lilypond-2.24.2/release/binaries/mingw/lilypond/install/share/locale') - -import lilylib as ly - -finale_clefs = ['treble', 'alto', 'tenor', 'bass', - 'percussion', 'treble_8', 'bass_8', 'baritone'] - - -def lily_clef(fin): - try: - return finale_clefs[fin] - except IndexError: - sys.stderr.write('\nHuh? Found clef number %d\n' % fin) - - return 'treble' - - -def gulp_file(f): - return open(f, encoding='utf-8').read() - - -# notename 0 == central C -distances = [0, 2, 4, 5, 7, 9, 11, 12] - - -def semitones(name, acc): - return (name / 7) * 12 + distances[name % 7] + acc - -# represent pitches as (notename, alteration), relative to C-major scale - - -def transpose(orig, delta): - (oname, oacc) = orig - (dname, dacc) = delta - - old_pitch = semitones(oname, oacc) - delta_pitch = semitones(dname, dacc) - nname = (oname + dname) - nacc = oacc - new_pitch = semitones(nname, nacc) - - nacc = nacc - (new_pitch - old_pitch - delta_pitch) - - return (nname, nacc) - - -def interpret_finale_key_sig(finale_id): - """ -find the transposition of C-major scale that belongs here. 
- -we are not going to insert the correct major/minor, we only want to -have the correct number of accidentals -""" - - p = (0, 0) - - bank_number = finale_id >> 8 - accidental_bits = finale_id & 0xff - - if 0 <= accidental_bits < 7: - while accidental_bits > 0: - p = transpose(p, (4, 0)) # a fifth up - accidental_bits = accidental_bits - 1 - elif 248 < accidental_bits <= 255: - while accidental_bits < 256: - p = transpose(p, (3, 0)) - accidental_bits = accidental_bits + 1 - - if bank_number == 1: - # minor scale - p = transpose(p, (5, 0)) - p = (p[0] % 7, p[1]) - - return KeySignature(p, bank_number) - -# should cache this. - - -def find_scale(keysig): - cscale = [(x, 0) for x in range(0, 7)] -# print "cscale: ", cscale - ascale = [(x, 0) for x in range(-2, 5)] -# print "ascale: ", ascale - transposition = keysig.pitch - if keysig.sig_type == 1: - transposition = transpose(transposition, (2, -1)) - transposition = (transposition[0] % 7, transposition[1]) - trscale = list(map(lambda x, k=transposition: transpose(x, k), ascale)) - else: - trscale = list(map(lambda x, k=transposition: transpose(x, k), cscale)) -# print "trscale: ", trscale - return trscale - - -def EDU_to_duration(edu): - log = 1 - d = 4096 - while d > edu: - d = d >> 1 - log = log << 1 - - edu = edu - d - dots = 0 - if edu == d / 2: - dots = 1 - elif edu == d*3/4: - dots = 2 - return (log, dots) - - -def rational_to_lily_skip(rat): - (n, d) = rat - - basedur = 1 - while d and d % 2 == 0: - basedur = basedur << 1 - d = d >> 1 - - str = 's%d' % basedur - if n != 1: - str = str + '*%d' % n - if d != 1: - str = str + '/%d' % d - - return str - - -def gcd(a, b): - if b == 0: - return a - c = a - while c: - c = a % b - a = b - b = c - return a - - -def rat_simplify(r): - (n, d) = r - if d < 0: - d = -d - n = -n - if n == 0: - return (0, 1) - else: - g = gcd(n, d) - return (n/g, d/g) - - -def rat_multiply(a, b): - (x, y) = a - (p, q) = b - - return rat_simplify((x*p, y*q)) - - -def rat_add(a, b): - (x, y) = a - (p, q) = b - - return rat_simplify((x*q + p*y, y*q)) - - -def rat_neg(a): - (p, q) = a - return (-p, q) - - -def rat_subtract(a, b): - return rat_add(a, rat_neg(b)) - - -def lily_notename(tuple2): - (n, a) = tuple2 - nn = chr((n + 2) % 7 + ord('a')) - - return nn + {-2: 'eses', -1: 'es', 0: '', 1: 'is', 2: 'isis'}[a] - - -class Tuplet: - def __init__(self, number): - self.start_note = number - self.finale = [] - - def append_finale(self, fin): - self.finale.append(fin) - - def factor(self): - n = self.finale[0][2]*self.finale[0][3] - d = self.finale[0][0]*self.finale[0][1] - return rat_simplify((n, d)) - - def dump_start(self): - return '\\times %d/%d { ' % self.factor() - - def dump_end(self): - return ' }' - - def calculate(self, chords): - edu_left = self.finale[0][0] * self.finale[0][1] - - startch = chords[self.start_note] - c = startch - while c and edu_left: - c.tuplet = self - if c == startch: - c.chord_prefix = self.dump_start() + c.chord_prefix - - if not c.grace: - edu_left = edu_left - c.EDU_duration() - if edu_left == 0: - c.chord_suffix = c.chord_suffix + self.dump_end() - c = c.__next__ - - if edu_left: - sys.stderr.write( - "\nHuh? Tuplet starting at entry %d was too short." 
% self.start_note) - - -class Slur: - def __init__(self, number, params): - self.number = number - self.finale = params - - def append_entry(self, finale_e): - self.finale.append(finale_e) - - def calculate(self, chords): - startnote = self.finale[5] - endnote = self.finale[3*6 + 2] - try: - cs = chords[startnote] - ce = chords[endnote] - - if not cs or not ce: - raise IndexError - - cs.note_suffix = '-(' + cs.note_suffix - ce.note_suffix = ce.note_suffix + '-)' - - except IndexError: - sys.stderr.write("""\nHuh? Slur no %d between (%d,%d), with %d notes""" % ( - self.number, startnote, endnote, len(chords))) - - -class Global_measure: - def __init__(self, number): - self.timesig = '' - self.number = number - self.key_signature = None - self.scale = None - self.force_break = 0 - - self.repeats = [] - self.finale = [] - - def __str__(self): - return repr(self.finale) - - def set_timesig(self, finale): - (beats, fdur) = finale - (log, dots) = EDU_to_duration(fdur) - - if dots == 1: - beats = beats * 3 - log = log * 2 - dots = 0 - - if dots != 0: - sys.stderr.write( - "\nHuh? Beat duration has dots? (EDU Duration = %d)" % fdur) - self.timesig = (beats, log) - - def length(self): - return self.timesig - - def set_key_sig(self, finale): - k = interpret_finale_key_sig(finale) - self.key_signature = k - self.scale = find_scale(k) - - def set_flags(self, flag1, flag2): - - # flag1 isn't all that interesting. - if flag2 & 0x8000: - self.force_break = 1 - - if flag2 & 0x0008: - self.repeats.append('start') - if flag2 & 0x0004: - self.repeats.append('stop') - - if flag2 & 0x0002: - if flag2 & 0x0004: - self.repeats.append('bracket') - - -articulation_dict = { - 94: '^', - 109: '\\prall', - 84: '\\turn', - 62: '\\mordent', - 85: '\\fermata', - 46: '.', - # 3: '>', - # 18: '\arpeggio' , -} - - -class Articulation_def: - def __init__(self, n, a, b): - self.finale_glyph = a & 0xff - self.number = n - - def dump(self): - try: - return articulation_dict[self.finale_glyph] - except KeyError: - sys.stderr.write("\nUnknown articulation no. 
%d" % - self.finale_glyph) - sys.stderr.write( - "\nPlease add an entry to articulation_dict in the Python source") - return None - - -class Articulation: - def __init__(self, a, b, finale): - self.definition = finale[0] - self.notenumber = b - - def calculate(self, chords, defs): - c = chords[self.notenumber] - - adef = defs[self.definition] - lystr = adef.dump() - if lystr is None: - lystr = '"art"' - sys.stderr.write("\nThis happened on note %d" % self.notenumber) - - c.note_suffix = '-' + lystr - - -class Syllable: - def __init__(self, a, b, finale): - self.chordnum = b - self.syllable = finale[1] - self.verse = finale[0] - - def calculate(self, chords, lyrics): - self.chord = chords[self.chordnum] - - -class Verse: - def __init__(self, number, body): - self.body = body - self.number = number - self.split_syllables() - - def split_syllables(self): - ss = re.split('(-| +)', self.body) - - sep = 0 - syls = [None] - for s in ss: - if sep: - septor = re.sub(" +", "", s) - septor = re.sub("-", " -- ", septor) - syls[-1] = syls[-1] + septor - else: - syls.append(s) - - sep = not sep - - self.syllables = syls - - def dump(self): - str = '' - line = '' - for s in self.syllables[1:]: - line = line + ' ' + s - if len(line) > 72: - str = str + ' ' * 4 + line + '\n' - line = '' - - str = """\nverse%s = \\lyricmode {\n %s }\n""" % ( - encodeint(self.number - 1), str) - return str - - -class KeySignature: - def __init__(self, pitch, sig_type=0): - self.pitch = pitch - self.sig_type = sig_type - - def signature_type(self): - if self.sig_type == 1: - return "\\minor" - else: - # really only for 0, but we only know about 0 and 1 - return "\\major" - - def equal(self, other): - if other and other.pitch == self.pitch and other.sig_type == self.sig_type: - return 1 - else: - return 0 - - -class Measure: - def __init__(self, no): - self.number = no - self.frames = [0] * 4 - self.flags = 0 - self.clef = 0 - self.finale = [] - self.global_measure = None - self.staff = None - self.valid = 1 - - def valid(self): - return self.valid - - def calculate(self): - fs = [] - - if len(self.finale) < 2: - fs = self.finale[0] - - self.clef = fs[1] - self.frames = [fs[0]] - else: - fs = self.finale - self.clef = fs[0] - self.flags = fs[1] - self.frames = fs[2:] - - -class Frame: - def __init__(self, finale): - self.measure = None - self.finale = finale - (number, start, end) = finale - self.number = number - self.start = start - self.end = end - self.chords = [] - - def set_measure(self, m): - self.measure = m - - def calculate(self): - - # do grace notes. - lastch = None - in_grace = 0 - for c in self.chords: - if c.grace and (lastch is None or (not lastch.grace)): - c.chord_prefix = r'\grace {' + c.chord_prefix - in_grace = 1 - elif not c.grace and lastch and lastch.grace: - lastch.chord_suffix = lastch.chord_suffix + ' } ' - in_grace = 0 - - lastch = c - - if lastch and in_grace: - lastch.chord_suffix += '}' - - def dump(self): - str = '%% FR(%d)\n' % self.number - left = self.measure.global_measure.length() - - ln = '' - for c in self.chords: - add = c.ly_string() + ' ' - if len(ln) + len(add) > 72: - str = str + ln + '\n' - ln = '' - ln = ln + add - left = rat_subtract(left, c.length()) - - str = str + ln - - if left[0] < 0: - sys.stderr.write("""\nHuh? 
Going backwards in frame no %d, start/end (%d,%d)""" % - (self.number, self.start, self.end)) - left = (0, 1) - if left[0]: - str = str + rational_to_lily_skip(left) - - str = str + ' |\n' - return str - - -def encodeint(i): - return chr(i + ord('A')) - - -class Staff: - def __init__(self, number): - self.number = number - self.measures = [] - - def get_measure(self, no): - fill_list_to(self.measures, no) - - if self.measures[no] is None: - m = Measure(no) - self.measures[no] = m - m.staff = self - - return self.measures[no] - - def staffid(self): - return 'staff' + encodeint(self.number - 1) - - def layerid(self, l): - return self.staffid() + 'layer%s' % chr(l - 1 + ord('A')) - - def dump_time_key_sigs(self): - k = '' - last_key = None - last_time = None - last_clef = None - gap = (0, 1) - for m in self.measures[1:]: - if not m or not m.valid: - continue # ugh. - - g = m.global_measure - e = '' - - if g: - if g.key_signature and not g.key_signature.equal(last_key): - pitch = g.key_signature.pitch - e = e + "\\key %s %s " % (lily_notename(pitch), - g.key_signature.signature_type()) - - last_key = g.key_signature - if last_time != g.timesig: - e = e + "\\time %d/%d " % g.timesig - last_time = g.timesig - - if 'start' in g.repeats: - e = e + ' \\bar ".|:" ' - - # we don't attempt voltas since they fail easily. - if 0: # and g.repeat_bar == '|:' or g.repeat_bar == ':|:' or g.bracket: - strs = [] - if g.repeat_bar == '|:' or g.repeat_bar == ':|:' or g.bracket == 'end': - strs.append('#f') - - if g.bracket == 'start': - strs.append('"0."') - - str = ' '.join(['(volta %s)' % x for x in strs]) - - e = e + ' \\set Score.repeatCommands = #\'(%s) ' % str - - if g.force_break: - e = e + ' \\break ' - - if last_clef != m.clef: - e = e + '\\clef "%s"' % lily_clef(m.clef) - last_clef = m.clef - if e: - if gap != (0, 1): - k = k + ' ' + rational_to_lily_skip(gap) + '\n' - gap = (0, 1) - k = k + e - - if g: - gap = rat_add(gap, g.length()) - if 'stop' in g.repeats: - k = k + ' \\bar ":|." ' - - k = '%sglobal = { %s }\n\n ' % (self.staffid(), k) - return k - - def dump(self): - str = '' - - layerids = [] - for x in range(1, 5): # 4 layers. 
- laystr = '' - last_frame = None - first_frame = None - gap = (0, 1) - for m in self.measures[1:]: - if not m or not m.valid: - sys.stderr.write( - "Skipping non-existant or invalid measure\n") - continue - - fr = None - try: - fr = m.frames[x] - except IndexError: - sys.stderr.write("Skipping nonexistent frame %d\n" % x) - laystr = laystr + \ - "%% non existent frame %d (skipped)\n" % x - if fr: - first_frame = fr - if gap != (0, 1): - laystr = laystr + \ - '} %s {\n ' % rational_to_lily_skip(gap) - gap = (0, 1) - laystr = laystr + fr.dump() - else: - if m.global_measure: - gap = rat_add(gap, m.global_measure.length()) - else: - sys.stderr.write( - "No global measure for staff %d measure %d\n" - % (self.number, m.number)) - if first_frame: - l = self.layerid(x) - laystr = '%s = { { %s } }\n\n' % (l, laystr) - str = str + laystr - layerids.append(l) - - str = str + self.dump_time_key_sigs() - stafdef = '\\%sglobal' % self.staffid() - for i in layerids: - stafdef = stafdef + ' \\' + i - - str = str + '%s = \\context Staff = %s <<\n %s\n >>\n' % \ - (self.staffid(), self.staffid(), stafdef) - return str - - -def ziplist(l): - if len(l) < 2: - return [] - else: - return [(l[0], l[1])] + ziplist(l[2:]) - - -class Chord: - def __init__(self, number, contents): - self.pitches = [] - self.frame = None - self.finale = contents[:7] - - self.notelist = ziplist(contents[7:]) - self.duration = None - self.next = None - self.prev = None - self.number = number - self.note_prefix = '' - self.note_suffix = '' - self.chord_suffix = '' - self.chord_prefix = '' - self.tuplet = None - self.grace = 0 - - def measure(self): - if not self.frame: - return None - return self.frame.measure - - def length(self): - if self.grace: - return (0, 1) - - l = (1, self.duration[0]) - - d = 1 << self.duration[1] - - dotfact = rat_subtract((2, 1), (1, d)) - mylen = rat_multiply(dotfact, l) - - if self.tuplet: - mylen = rat_multiply(mylen, self.tuplet.factor()) - return mylen - - def EDU_duration(self): - return self.finale[2] - - def set_duration(self): - self.duration = EDU_to_duration(self.EDU_duration()) - - def calculate(self): - self.find_realpitch() - self.set_duration() - - flag = self.finale[4] - if Chord.GRACE_MASK & flag: - self.grace = 1 - - def find_realpitch(self): - - meas = self.measure() - tiestart = 0 - if not meas or not meas.global_measure: - sys.stderr.write('note %d not in measure\n' % self.number) - elif not meas.global_measure.scale: - sys.stderr.write( - 'note %d: no scale in this measure.' % self.number) - else: - - for p in self.notelist: - (pitch, flag) = p - - nib1 = pitch & 0x0f - - if nib1 > 8: - nib1 = -(nib1 - 8) - rest = pitch / 16 - - scale = meas.global_measure.scale - (sn, sa) = scale[rest % 7] - sn = sn + (rest - (rest % 7)) + 7 - acc = sa + nib1 - self.pitches.append((sn, acc)) - tiestart = tiestart or (flag & Chord.TIE_START_MASK) - if tiestart: - self.chord_suffix = self.chord_suffix + ' ~ ' - - REST_MASK = 0x40000000 - TIE_START_MASK = 0x40000000 - GRACE_MASK = 0x00800000 - - def ly_string(self): - s = '' - - rest = '' - - if not (self.finale[4] & Chord.REST_MASK): - rest = 'r' - - for p in self.pitches: - (n, a) = p - o = n / 7 - n = n % 7 - - nn = lily_notename((n, a)) - - if o < 0: - nn = nn + (',' * -o) - elif o > 0: - nn = nn + ('\'' * o) - - if s: - s = s + ' ' - - if rest: - nn = rest - - s = s + nn - - if not self.pitches: - s = 'r' - if len(self.pitches) > 1: - s = '<%s>' % s - - s = s + '%d%s' % (self.duration[0], '.' 
* self.duration[1]) - s = self.note_prefix + s + self.note_suffix - - s = self.chord_prefix + s + self.chord_suffix - - return s - - -def fill_list_to(list, no): - """ -Add None to LIST until it contains entry number NO. - """ - while len(list) <= no: - list.extend([None] * (no - len(list) + 1)) - return list - - -def read_finale_value(str): - """ -Pry off one value from STR. The value may be $hex, decimal, or "string". -Return: (value, rest-of-STR) - """ - while str and str[0] in ' \t\n': - str = str[1:] - - if not str: - return (None, str) - - if str[0] == '$': - str = str[1:] - - hex = '' - while str and str[0] in '0123456789ABCDEF': - hex = hex + str[0] - str = str[1:] - - return (int(hex, 16), str) - elif str[0] == '"': - str = str[1:] - s = '' - while str and str[0] != '"': - s = s + str[0] - str = str[1:] - - return (s, str) - elif str[0] in '-0123456789': - dec = '' - while str and str[0] in '-0123456789': - dec = dec + str[0] - str = str[1:] - - return (int(dec), str) - else: - sys.stderr.write("cannot convert `%s'\n" % str) - return (None, str) - - -def parse_etf_file(fn, tag_dict): - """ Read FN, putting ETF info into - a giant dictionary. The keys of TAG_DICT indicate which tags - to put into the dict. - """ - - sys.stderr.write('parsing ... ') - f = open(fn, encoding='utf-8') - - gulp = re.sub('[\n\r]+', '\n', f.read()) - ls = gulp.split('\n^') - - etf_file_dict = {} - for k in tag_dict: - etf_file_dict[k] = {} - - last_tag = None - last_numbers = None - - for l in ls: - m = re.match(r'^([a-zA-Z0-9&]+)\(([^)]+)\)', l) - if m and m.group(1) in tag_dict: - tag = m.group(1) - - indices = tuple([int(s) for s in m.group(2).split(',')]) - content = l[m.end(2)+1:] - - tdict = etf_file_dict[tag] - if indices not in tdict: - tdict[indices] = [] - - parsed = [] - - if tag == 'verse' or tag == 'block': - m2 = re.match(r'(.*)\^end', content) - if m2: - parsed = [m2.group(1)] - else: - while content: - (v, content) = read_finale_value(content) - if v is not None: - parsed.append(v) - - tdict[indices].extend(parsed) - - last_indices = indices - last_tag = tag - - continue - -# let's not do this: this really confuses when eE happens to be before a ^text. 
-# if last_tag and last_indices: -# etf_file_dict[last_tag][last_indices].append (l) - - sys.stderr.write('\n') - return etf_file_dict - - -class Etf_file: - def __init__(self, name): - self.measures = [None] - self.chords = [None] - self.frames = [None] - self.tuplets = [None] - self.staffs = [None] - self.slurs = [None] - self.articulations = [None] - self.syllables = [None] - self.verses = [None] - self.articulation_defs = [None] - - # do it - self.parse(name) - - def get_global_measure(self, no): - fill_list_to(self.measures, no) - if self.measures[no] is None: - self.measures[no] = Global_measure(no) - - return self.measures[no] - - def get_staff(self, staffno): - fill_list_to(self.staffs, staffno) - if self.staffs[staffno] is None: - self.staffs[staffno] = Staff(staffno) - - return self.staffs[staffno] - - # staff-spec - def try_IS(self, indices, contents): - pass - - def try_BC(self, indices, contents): - bn = indices[0] - where = contents[0] / 1024.0 - - def try_TP(self, indices, contents): - (nil, num) = indices - - if self.tuplets[-1] is None or num != self.tuplets[-1].start_note: - self.tuplets.append(Tuplet(num)) - - self.tuplets[-1].append_finale(contents) - - def try_IM(self, indices, contents): - (a, b) = indices - fin = contents - self.articulations.append(Articulation(a, b, fin)) - - def try_verse(self, indices, contents): - a = indices[0] - body = contents[0] - - body = re.sub(r"""\^[a-z]+\([^)]+\)""", "", body) - body = re.sub(r"\^[a-z]+", "", body) - self.verses.append(Verse(a, body)) - - def try_ve(self, indices, contents): - (a, b) = indices - self.syllables.append(Syllable(a, b, contents)) - - def try_eE(self, indices, contents): - no = indices[0] - (prev, next, dur, pos, entryflag, extended, follow) = contents[:7] - - fill_list_to(self.chords, no) - self.chords[no] = Chord(no, contents) - - def try_Sx(self, indices, contents): - slurno = indices[0] - fill_list_to(self.slurs, slurno) - self.slurs[slurno] = Slur(slurno, contents) - - def try_IX(self, indices, contents): - n = indices[0] - a = contents[0] - b = contents[1] - - ix = None - try: - ix = self.articulation_defs[n] - except IndexError: - ix = Articulation_def(n, a, b) - self.articulation_defs.append(Articulation_def(n, a, b)) - - def try_GF(self, indices, contents): - (staffno, measno) = indices - - st = self.get_staff(staffno) - meas = st.get_measure(measno) - meas.finale = contents - - def try_FR(self, indices, contents): - frameno = indices[0] - - startnote = contents[0] - endnote = contents[1] - - fill_list_to(self.frames, frameno) - - self.frames[frameno] = Frame((frameno, startnote, endnote)) - - def try_MS(self, indices, contents): - measno = indices[0] - keynum = contents[1] - meas = self. 
get_global_measure(measno) - - meas.set_key_sig(keynum) - - beats = contents[2] - beatlen = contents[3] - meas.set_timesig((beats, beatlen)) - - meas_flag1 = contents[4] - meas_flag2 = contents[5] - - meas.set_flags(meas_flag1, meas_flag2) - - routine_dict = { - 'MS': try_MS, - 'FR': try_FR, - 'GF': try_GF, - 'IX': try_IX, - 'Sx': try_Sx, - 'eE': try_eE, - 'verse': try_verse, - 've': try_ve, - 'IM': try_IM, - 'TP': try_TP, - 'BC': try_BC, - 'IS': try_IS, - } - - def parse(self, etf_dict): - sys.stderr.write('reconstructing ...') - sys.stderr.flush() - - for (tag, routine) in list(Etf_file.routine_dict.items()): - ks = list(etf_dict[tag].keys()) - ks.sort() - for k in ks: - routine(self, k, etf_dict[tag][k]) - - sys.stderr.write('processing ...') - sys.stderr.flush() - - self.unthread_entries() - - for st in self.staffs[1:]: - if not st: - continue - mno = 1 - for m in st.measures[1:]: - if not m: - continue - - m.calculate() - try: - m.global_measure = self.measures[mno] - except IndexError: - sys.stderr.write("Non-existent global measure %d" % mno) - continue - - frame_obj_list = [None] - for frno in m.frames: - try: - fr = self.frames[frno] - frame_obj_list.append(fr) - except IndexError: - sys.stderr.write("\nNon-existent frame %d" % frno) - - m.frames = frame_obj_list - for fr in frame_obj_list[1:]: - if not fr: - continue - - fr.set_measure(m) - - fr.chords = self.get_thread(fr.start, fr.end) - for c in fr.chords: - c.frame = fr - mno = mno + 1 - - for c in self.chords[1:]: - if c: - c.calculate() - - for f in self.frames[1:]: - if f: - f.calculate() - - for t in self.tuplets[1:]: - t.calculate(self.chords) - - for s in self.slurs[1:]: - if s: - s.calculate(self.chords) - - for s in self.articulations[1:]: - s.calculate(self.chords, self.articulation_defs) - - def get_thread(self, startno, endno): - - thread = [] - - c = None - try: - c = self.chords[startno] - except IndexError: - sys.stderr.write( - "Huh? Frame has invalid bounds (%d,%d)\n" % (startno, endno)) - return [] - - while c and c.number != endno: - d = c # hack to avoid problem with scripts/build/grand-replace.py - thread.append(d) - c = c.__next__ - - if c: - d = c # hack to avoid problem with scripts/build/grand-replace.py - thread.append(d) - - return thread - - def dump(self): - str = '' - staffs = [] - for s in self.staffs[1:]: - if s: - str = str + '\n\n' + s.dump() - staffs.append('\\' + s.staffid()) - - # should use \addlyrics ? - - for v in self.verses[1:]: - str = str + v.dump() - - if len(self.verses) > 1: - sys.stderr.write( - "\nLyrics found; edit to use \\addlyrics to couple to a staff\n") - - if staffs: - str += '\\version "2.3.25"\n' - str = str + '<<\n %s\n>> } ' % ' '.join(staffs) - - return str - - def __str__(self): - return 'ETF FILE %s %s' % (self.measures, self.entries) - - def unthread_entries(self): - for e in self.chords[1:]: - if not e: - continue - - e.prev = self.chords[e.finale[0]] - e.next = self.chords[e.finale[1]] - - -def identify(): - sys.stderr.write("%s from LilyPond %s\n" % (ly.program_name, version)) - - -def warranty(): - identify() - sys.stdout.write(''' -%s - - %s - -%s -%s -''' % (_('Copyright (c) %s by') % '2001--2023', - '\n '.join(authors), - _('Distributed under terms of the GNU General Public License.'), - _('It comes with NO WARRANTY.'))) - - -def get_option_parser(): - p = ly.get_option_parser(usage=_("%s [OPTION]... ETF-FILE") % 'etf2ly', - description=_("""Enigma Transport Format is a format used by Coda Music Technology's -Finale product. 
etf2ly converts a subset of ETF to a ready-to-use LilyPond file. -"""), - add_help_option=False) - p.add_option("-h", "--help", - action="help", - help=_("show this help and exit")) - p.version = "etf2ly (LilyPond) 2.24.2" - p.add_option("--version", - action="version", - help=_("show version number and exit")) - p.add_option('-o', '--output', help=_("write output to FILE"), - metavar=_("FILE"), - action='store') - p.add_option('-w', '--warranty', help=_("show warranty and copyright"), - action='store_true', - ), - - p.add_option_group('', - description=( - _('Report bugs via %s') - % 'bug-lilypond@gnu.org') + '\n') - return p - - -def do_options(): - opt_parser = get_option_parser() - (options, args) = opt_parser.parse_args() - if options.warranty: - warranty() - sys.exit(0) - - return (options, args) - - -(options, files) = do_options() -identify() - -out_filename = options.output - -e = None -for f in files: - if f == '-': - f = '' - - sys.stderr.write('Processing `%s\'\n' % f) - - dict = parse_etf_file(f, Etf_file.routine_dict) - e = Etf_file(dict) - if not out_filename: - out_filename = os.path.basename(re.sub('(?i).etf$', '.ly', f)) - - if out_filename == f: - out_filename = os.path.basename(f + '.ly') - - sys.stderr.write('Writing `%s\'' % out_filename) - ly = e.dump() - - fo = open(out_filename, 'w', encoding='utf-8') - fo.write('%% lily was here -- automatically converted by etf2ly from %s\n' % f) - fo.write(ly) - fo.close() diff --git a/spaces/PaulEdwards/StarWords/app_old.py b/spaces/PaulEdwards/StarWords/app_old.py deleted file mode 100644 index 685e64ec352c47f8c178849b3eb32824331cadd0..0000000000000000000000000000000000000000 --- a/spaces/PaulEdwards/StarWords/app_old.py +++ /dev/null @@ -1,41 +0,0 @@ -import gradio as gr -from transformers import pipeline -title = "📗❤️-Story Generator❤️📗- 🦄Myths and Legends🦸" -examples = [ - ["Cernunnos the Gaelic god of beasts and wild places"], - ["Often called the Horned One, Cernunnos was a mediator of man and nature"], - ["able to tame predator and prey so they might lie down together"], - ["He remains a mysterious deity, as his original mythos has been lost to history"], - ["It was believed that ringing a bell on Samhain kept away evil spirits"], - ["Burying animal bones in front of your house on the night of Samhain will"], - ["keep evil away, according to some legends of eastern Europe"], - ["Samhain is a good time of year to work on communicating with the spirit world"], - ["In some Pacific Northwest tribes, elk are also considered to be"], - ["particular protectors of women, and in some legends elk lead women who had been "], - ["captured by enemy warriors back to their homes"], - ["In Plains Indian tribes, elk were associated with masculinity, endurance, and bravery, and elks eyeteeth were highly valued both as objects of adornment and as the symbol of a mans hunting prowess."], - ["In some Plains tribes, men saved the eyeteeth from their first elk kill to make into engagement jewelry for their sweetheart. In others, the number of elk teeth sewn onto a womans dress showed off the wealth and skill of her husband or father."], - ["Ah Puch is one of the names associated with a god of death in the ancient Mayan religion. He was known as a god of death, darkness, and disaster. But he was also a god of childbirth and beginnings. 
The Quiche Maya believed that he ruled over Metnal, the underworld and the Yucatec Maya believed that he was just one of the lords of Xibaba, that translates to place of fear in the underworld."], - ["Nuwa was the one who patched the holes in Heaven with five colored stones, and she used the legs of a tortoise to mend the pillars. There are many instances of her in literature across China which detail her in creation stories, and today remains a figure important to Chinese culture."] -] -from gradio import inputs -from gradio.inputs import Textbox -from gradio import outputs - -generator2 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B") -generator3 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B") -generator1 = gr.Interface.load("huggingface/gpt2-large") - -#gr.Parallel(generator1, generator2, generator3, inputs=gr.inputs.Textbox(lines=6, label="Enter a sentence to get another sentence."),title=title, examples=examples).launch() - -def complete_with_gpt(text): - # Use the last 50 characters of the text as context - return text[:-50] + generator1(text[-50:]) - -with gr.Blocks() as demo: - textbox = gr.Textbox(placeholder="Type here and press enter...", lines=4) - btn = gr.Button("Generate") - - btn.click(complete_with_gpt, textbox, textbox) - -demo.launch() \ No newline at end of file diff --git a/spaces/PeepDaSlan9/chatbot-arena/index.html b/spaces/PeepDaSlan9/chatbot-arena/index.html deleted file mode 100644 index b8e4df94bb5bf9644fda5057d1316c00f2e4ffbf..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/chatbot-arena/index.html +++ /dev/null @@ -1,13 +0,0 @@ - - - - - - - Chat and Battle with Open LLMs - - - - - - \ No newline at end of file diff --git a/spaces/Pengyey/bingo-chuchu/src/app/loading.css b/spaces/Pengyey/bingo-chuchu/src/app/loading.css deleted file mode 100644 index eaaab6a86a228334c4eca3c5368ae6f0f593d405..0000000000000000000000000000000000000000 --- a/spaces/Pengyey/bingo-chuchu/src/app/loading.css +++ /dev/null @@ -1,68 +0,0 @@ -::-webkit-scrollbar { - width: 10px; - height: 10px; - display: none; -} - -::-webkit-scrollbar-button:start:decrement, -::-webkit-scrollbar-button:end:increment { - height: 30px; - background-color: transparent; -} - -::-webkit-scrollbar-track-piece { - background-color: #3b3b3b; - -webkit-border-radius: 16px; -} - -::-webkit-scrollbar-thumb:vertical { - height: 50px; - background-color: #666; - border: 1px solid #eee; - -webkit-border-radius: 6px; -} - -/* loading start */ -.loading-spinner { - display: flex; - justify-content: center; - align-items: center; - height: 100vh; - opacity: 1; - transition: opacity .8s ease-out; -} - -.loading-spinner.hidden { - opacity: 0; -} - -.loading-spinner>div { - width: 30px; - height: 30px; - background: linear-gradient(90deg, #2870EA 10.79%, #1B4AEF 87.08%); - - border-radius: 100%; - display: inline-block; - animation: sk-bouncedelay 1.4s infinite ease-in-out both; -} - -.loading-spinner .bounce1 { - animation-delay: -0.32s; -} - -.loading-spinner .bounce2 { - animation-delay: -0.16s; -} - -@keyframes sk-bouncedelay { - - 0%, - 80%, - 100% { - transform: scale(0); - } - - 40% { - transform: scale(1.0); - } -} diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/caption.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/caption.py deleted file mode 100644 index c5e0d4c82d49da7fac0022333e8edb994e8dcdd2..0000000000000000000000000000000000000000 --- 
a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/caption.py +++ /dev/null @@ -1,279 +0,0 @@ -import torch -import torch.distributed as dist -import time -from torchvision.ops import nms -import random -import numpy as np -from PIL import Image, ImageDraw -import pdb -from maskrcnn_benchmark.structures.bounding_box import BoxList -from .modulated_coco import ConvertCocoPolysToMask -from .tsv import ODTSVDataset, TSVYamlDataset -from .od_to_grounding import sanity_check_target_after_processing - -class CaptionTSV(TSVYamlDataset): - def __init__(self, - yaml_file, - transforms, - return_tokens, - return_masks, - tokenizer, - caption_min_box=1, - replace_clean_label=False, - further_screen=False, - caption_conf=0.5, - caption_nms=-1, - pack_random_caption_number=0, - inference_caption=False, - sample_negative_for_grounding_data=-1, - random_pack_prob=-1.0, - no_random_pack_probability=0.0, - safeguard_positive_caption=True, - mlm_obj_for_only_positive=False, - caption_format_version="v1", - local_debug=False, - max_query_len=256, - **kwargs - ): - super(CaptionTSV, self).__init__(yaml_file, None, replace_clean_label) - self.yaml_file = yaml_file - self._transforms = transforms - self.max_query_len = max_query_len - self.prepare = ConvertCocoPolysToMask(return_masks=return_masks, - return_tokens=return_tokens, - tokenizer=tokenizer, - max_query_len=max_query_len) - self.tokenizer = tokenizer - self.caption_min_box = caption_min_box - self.replace_clean_label = replace_clean_label - self.further_screen = further_screen - self.pack_random_caption_number = pack_random_caption_number - self.caption_format_version = caption_format_version - - self.caption_conf = caption_conf - self.caption_nms = caption_nms - self.inference_caption = inference_caption - self.sample_negative_for_grounding_data = sample_negative_for_grounding_data - self.random_pack_prob = random_pack_prob - self.no_random_pack_probability = no_random_pack_probability - self.safeguard_positive_caption = safeguard_positive_caption - self.mlm_obj_for_only_positive = mlm_obj_for_only_positive - try: - self.rank = dist.get_rank() - except: - self.rank = 0 - - def __len__(self): - return super(CaptionTSV, self).__len__() - - def pack_caption(self, positive_caption, negative_captions, original_tokens_positive): - if len(negative_captions) == 0: - return positive_caption, original_tokens_positive, [(0, len(positive_caption))] - if self.safeguard_positive_caption: - length_of_each_caption = [] - for caption in negative_captions + [positive_caption]: - tokenized = self.tokenizer(caption, return_tensors="pt") - length_of_each_caption.append(tokenized.input_ids.size(-1)) - max_length = self.max_query_len - length_of_each_caption[-1] - indexes = list(range(len(negative_captions))) - random.shuffle(indexes) - new_caption_list = [positive_caption] - for i in indexes: - if length_of_each_caption[i] < max_length: - new_caption_list.append(negative_captions[i]) - max_length -= length_of_each_caption[i] - else: - new_caption_list = [positive_caption] + negative_captions - random.shuffle(new_caption_list) - - new_caption = '' - - for i in new_caption_list: - if i == positive_caption: - start_position = len(new_caption) - new_caption += i - if not i.endswith("."): - new_caption += "." 
- new_caption += " " - - # shift the token positions the boxes are aligned to - for index, i in enumerate(original_tokens_positive): - original_tokens_positive[index] = [tuple(j) for j in i] - for i in original_tokens_positive: - for index, j in enumerate(i): - i[index] = (j[0] + start_position, j[1] + start_position) - - return new_caption, original_tokens_positive, [(start_position, start_position + len(positive_caption))] - - def __get_negative_captions__(self, idx, negative_size=7): - negative_captions = [] - for i in range(negative_size): - img, anno, _, scale = super(CaptionTSV, self).__getitem__(np.random.choice(len(self))) - caption = anno["caption"] - negative_captions.append(caption) - - return negative_captions - - def __getitem__(self, idx): - try: - img, anno, _, scale = super(CaptionTSV, self).__getitem__(idx) - if self.inference_caption: - caption = None - if isinstance(anno, list): - caption = anno[0]["caption"] # inference mode for bing - anno = [] - elif len(anno) == 1: - caption = anno["caption"] # inference mode for googlecc - anno = [] - else: - caption = " ".join(anno["captions"]) - anno = [] - else: - ''' - An example - {'img_h': 1154, 'img_w': 1600, 'caption': 'xxx', 'tokens_positive': [[[47, 50], [51, 53], [54, 59]], [[32, 35], [36, 41]], [[32, 35], [36, 41]], [[0, 3], [3, 6], [6, 10], [11, 16], [17, 19], [20, 23]], [[32, 35], [36, 41]], [[32, 35], [36, 41]]], 'bboxes': [[7.344961166381836, 10.479412078857422, 1592.2679443359375, 1090.0028076171875], [950.32861328125, 346.572021484375, 1333.2373046875, 679.3215942382812], [927.44140625, 342.7712707519531, 1389.833984375, 719.5758666992188], [90.48786163330078, 363.67572021484375, 1381.8631591796875, 1078.687744140625], [122.84217071533203, 422.6786193847656, 507.845703125, 667.2651977539062], [80.62384033203125, 416.500244140625, 563.1666259765625, 734.603271484375]], 'scores': [0.7966700196266174, 0.8952182531356812, 0.8186006546020508, 0.9995516538619995, 0.8021856546401978, 0.8923134803771973]} - ''' - if len(anno["bboxes"]) < self.caption_min_box: # Retry triggered! - return self[np.random.choice(len(self))] - - if self.caption_format_version == "v2": - anno = self.convert_anno_from_v2_to_v1(anno) - - try: - if self.further_screen: - conf = self.caption_conf - nms_thre = self.caption_nms - - bboxes = torch.as_tensor(anno["bboxes"]).float() - scores = torch.as_tensor(anno["scores"]) - tokens_positive = anno["tokens_positive"] - - # print("\n\n\n\n tokens_positive in original data", tokens_positive) - - keep = scores > conf - scores = scores[keep] - bboxes = bboxes[keep] - tokens_positive = [i for index, i in enumerate(tokens_positive) if keep[index]] - - assert (len(tokens_positive) == len(bboxes) == len(scores)) - - if len(bboxes) < self.caption_min_box: # Retry triggered! - return self[np.random.choice(len(self))] - - if nms_thre > 0: - keep = nms(boxes=bboxes, scores=scores, iou_threshold=nms_thre) - scores = scores[keep] - bboxes = bboxes[keep] - tokens_positive = [tokens_positive[i] for i in keep] - assert (len(tokens_positive) == len(bboxes) == len(scores)) - - # Write back - anno["bboxes"] = bboxes.tolist() - anno["scores"] = scores.tolist() - anno["tokens_positive"] = tokens_positive - - boxes = torch.as_tensor(anno["bboxes"]) - - if len(boxes) < self.caption_min_box: # Retry triggered! 
- return self[np.random.choice(len(self))] - - target = BoxList(boxes, (anno["img_w"], anno["img_h"]), mode="xyxy") - target = target.clip_to_image(remove_empty=True) - - caption = anno["caption"] - # print("original caption", caption) - empty_everything = False - if self.sample_negative_for_grounding_data != -1: - if random.random() < self.sample_negative_for_grounding_data: - empty_everything = True - - if empty_everything: - caption = self.__get_negative_captions__(idx, negative_size=1)[0] - - if self.pack_random_caption_number != 0: - if self.random_pack_prob != -1.0: - if random.random() < self.no_random_pack_probability: - negative_pack_number = 0 - elif random.random() < self.random_pack_prob: - negative_pack_number = self.pack_random_caption_number - else: - negative_pack_number = np.random.choice(self.pack_random_caption_number) - else: - negative_pack_number = self.pack_random_caption_number - - negative_captions = self.__get_negative_captions__(idx, negative_size=negative_pack_number) - - caption, anno["tokens_positive"], greenlight_span_for_masked_lm_objective = self.pack_caption( - caption, negative_captions, anno["tokens_positive"]) - else: - greenlight_span_for_masked_lm_objective = [(0, len(caption))] - - if not self.mlm_obj_for_only_positive: - greenlight_span_for_masked_lm_objective = [(0, len(caption))] - - new_anno = [] - areas = target.area() - for i in range(len(target)): - new_anno_i = {} - new_anno_i["area"] = areas[i] - new_anno_i["iscrowd"] = 0 - new_anno_i["image_id"] = idx - new_anno_i["category_id"] = 1 # following vg and others - new_anno_i["id"] = None - new_anno_i['bbox'] = target.bbox[i].numpy().tolist() - new_anno_i["tokens_positive"] = anno["tokens_positive"][i] - new_anno.append(new_anno_i) - - except: - return self[np.random.choice(len(self))] - - anno = new_anno - if empty_everything: - anno = [] - - annotations = {"image_id": idx, "annotations": anno, "caption": caption} - annotations["greenlight_span_for_masked_lm_objective"] = greenlight_span_for_masked_lm_objective - img, annotations = self.prepare(img, annotations, box_format="xyxy") - - if self._transforms is not None: - img, target = self._transforms(img, target) - - # add additional property - for ann in annotations: - target.add_field(ann, annotations[ann]) - except: - print("Outter Retry triggered!!") - return self[np.random.choice(len(self))] - - sanity_check_target_after_processing(target) - - return img, target, idx - - def convert_anno_from_v2_to_v1(self, anno): - flatterned_bboxes = [] - flatterned_tokens_positive = [] - flatterned_bboxes_scores = [] - for i in range(len(anno["bboxes"])): - # i is the index for entity - for j in range(len(anno["bboxes"][i])): - # j is the index for each box - flatterned_bboxes.append(anno["bboxes"][i][j]) - flatterned_tokens_positive.append( - anno["tokens_positive"][i]) # Assume this box corresponds to all the token_spans for this entity - flatterned_bboxes_scores.append(anno["scores"][i][j]) - anno["bboxes"] = flatterned_bboxes - anno["tokens_positive"] = flatterned_tokens_positive - anno["scores"] = flatterned_bboxes_scores - return anno - - - def get_raw_image(self, idx): - image, *_ = super(CaptionTSV, self).__getitem__(idx) - return image - - def get_img_id(self, idx): - line_no = self.get_line_no(idx) - if self.label_tsv is not None: - row = self.label_tsv.seek(line_no) - img_id = row[0] - return img_id diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/language_backbone/rnn_model.py 
b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/language_backbone/rnn_model.py deleted file mode 100644 index 2b690ca8520695ab77572679e06b2f90971bec16..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/modeling/language_backbone/rnn_model.py +++ /dev/null @@ -1,115 +0,0 @@ -from copy import deepcopy -import numpy as np -import torch -from torch import nn - - -class RNNEnoder(nn.Module): - def __init__(self, cfg): - super(RNNEnoder, self).__init__() - self.cfg = cfg - - self.rnn_type = cfg.MODEL.LANGUAGE_BACKBONE.RNN_TYPE - self.variable_length = cfg.MODEL.LANGUAGE_BACKBONE.VARIABLE_LENGTH - self.word_embedding_size = cfg.MODEL.LANGUAGE_BACKBONE.WORD_EMBEDDING_SIZE - self.word_vec_size = cfg.MODEL.LANGUAGE_BACKBONE.WORD_VEC_SIZE - self.hidden_size = cfg.MODEL.LANGUAGE_BACKBONE.HIDDEN_SIZE - self.bidirectional = cfg.MODEL.LANGUAGE_BACKBONE.BIDIRECTIONAL - self.input_dropout_p = cfg.MODEL.LANGUAGE_BACKBONE.INPUT_DROPOUT_P - self.dropout_p = cfg.MODEL.LANGUAGE_BACKBONE.DROPOUT_P - self.n_layers = cfg.MODEL.LANGUAGE_BACKBONE.N_LAYERS - self.corpus_path = cfg.MODEL.LANGUAGE_BACKBONE.CORPUS_PATH - self.vocab_size = cfg.MODEL.LANGUAGE_BACKBONE.VOCAB_SIZE - - # language encoder - self.embedding = nn.Embedding(self.vocab_size, self.word_embedding_size) - self.input_dropout = nn.Dropout(self.input_dropout_p) - self.mlp = nn.Sequential(nn.Linear(self.word_embedding_size, self.word_vec_size), nn.ReLU()) - self.rnn = getattr(nn, self.rnn_type.upper())(self.word_vec_size, - self.hidden_size, - self.n_layers, - batch_first=True, - bidirectional=self.bidirectional, - dropout=self.dropout_p) - self.num_dirs = 2 if self.bidirectional else 1 - - def forward(self, input, mask=None): - word_id = input - max_len = (word_id != 0).sum(1).max().item() - word_id = word_id[:, :max_len] # mask zero - # embedding - output, hidden, embedded, final_output = self.RNNEncode(word_id) - return { - 'hidden': hidden, - 'output': output, - 'embedded': embedded, - 'final_output': final_output, - } - - def encode(self, input_labels): - """ - Inputs: - - input_labels: Variable long (batch, seq_len) - Outputs: - - output : Variable float (batch, max_len, hidden_size * num_dirs) - - hidden : Variable float (batch, num_layers * num_dirs * hidden_size) - - embedded: Variable float (batch, max_len, word_vec_size) - """ - device = input_labels.device - if self.variable_length: - input_lengths_list, sorted_lengths_list, sort_idxs, recover_idxs = self.sort_inputs(input_labels) - input_labels = input_labels[sort_idxs] - - embedded = self.embedding(input_labels) # (n, seq_len, word_embedding_size) - embedded = self.input_dropout(embedded) # (n, seq_len, word_embedding_size) - embedded = self.mlp(embedded) # (n, seq_len, word_vec_size) - - if self.variable_length: - if self.variable_length: - embedded = nn.utils.rnn.pack_padded_sequence(embedded, \ - sorted_lengths_list, \ - batch_first=True) - # forward rnn - self.rnn.flatten_parameters() - output, hidden = self.rnn(embedded) - - # recover - if self.variable_length: - # recover embedded - embedded, _ = nn.utils.rnn.pad_packed_sequence(embedded, - batch_first=True) # (batch, max_len, word_vec_size) - embedded = embedded[recover_idxs] - - # recover output - output, _ = nn.utils.rnn.pad_packed_sequence(output, - batch_first=True) # (batch, max_len, hidden_size * num_dir) - output = output[recover_idxs] - - # recover hidden - if self.rnn_type == 'lstm': - hidden = hidden[0] # hidden state - hidden = 
hidden[:, recover_idxs, :] # (num_layers * num_dirs, batch, hidden_size) - hidden = hidden.transpose(0, 1).contiguous() # (batch, num_layers * num_dirs, hidden_size) - hidden = hidden.view(hidden.size(0), -1) # (batch, num_layers * num_dirs * hidden_size) - - # final output - finnal_output = [] - for ii in range(output.shape[0]): - finnal_output.append(output[ii, int(input_lengths_list[ii] - 1), :]) - finnal_output = torch.stack(finnal_output, dim=0) # (batch, number_dirs * hidden_size) - - return output, hidden, embedded, finnal_output - - def sort_inputs(self, input_labels): # sort input labels by descending - device = input_labels.device - input_lengths = (input_labels != 0).sum(1) - input_lengths_list = input_lengths.data.cpu().numpy().tolist() - sorted_input_lengths_list = np.sort(input_lengths_list)[::-1].tolist() # list of sorted input_lengths - sort_idxs = np.argsort(input_lengths_list)[::-1].tolist() - s2r = {s: r for r, s in enumerate(sort_idxs)} - recover_idxs = [s2r[s] for s in range(len(input_lengths_list))] - assert max(input_lengths_list) == input_labels.size(1) - # move to long tensor - sort_idxs = input_labels.data.new(sort_idxs).long().to(device) # Variable long - recover_idxs = input_labels.data.new(recover_idxs).long().to(device) # Variable long - return input_lengths_list, sorted_input_lengths_list, sort_idxs, recover_idxs diff --git a/spaces/RMXK/RVC_HFF/Applio-RVC-Fork/utils/backups.py b/spaces/RMXK/RVC_HFF/Applio-RVC-Fork/utils/backups.py deleted file mode 100644 index b814f8184792e80e2324685436053d61487110b1..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/Applio-RVC-Fork/utils/backups.py +++ /dev/null @@ -1,141 +0,0 @@ -import os -import shutil -import hashlib -import time -import base64 - - - - -LOGS_FOLDER = '/content/Applio-RVC-Fork/logs' -WEIGHTS_FOLDER = '/content/Applio-RVC-Fork/weights' -GOOGLE_DRIVE_PATH = '/content/drive/MyDrive/RVC_Backup' - -def import_google_drive_backup(): - print("Importing Google Drive backup...") - weights_exist = False - for root, dirs, files in os.walk(GOOGLE_DRIVE_PATH): - for filename in files: - filepath = os.path.join(root, filename) - if os.path.isfile(filepath) and not filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')): - backup_filepath = os.path.join(LOGS_FOLDER, os.path.relpath(filepath, GOOGLE_DRIVE_PATH)) - backup_folderpath = os.path.dirname(backup_filepath) - if not os.path.exists(backup_folderpath): - os.makedirs(backup_folderpath) - print(f'Created backup folder: {backup_folderpath}', flush=True) - shutil.copy2(filepath, backup_filepath) # copy file with metadata - print(f'Imported file from Google Drive backup: {filename}') - elif filepath.startswith(os.path.join(GOOGLE_DRIVE_PATH, 'weights')) and filename.endswith('.pth'): - weights_exist = True - weights_filepath = os.path.join(WEIGHTS_FOLDER, os.path.relpath(filepath, os.path.join(GOOGLE_DRIVE_PATH, 'weights'))) - weights_folderpath = os.path.dirname(weights_filepath) - if not os.path.exists(weights_folderpath): - os.makedirs(weights_folderpath) - print(f'Created weights folder: {weights_folderpath}', flush=True) - shutil.copy2(filepath, weights_filepath) # copy file with metadata - print(f'Imported file from weights: {filename}') - if weights_exist: - print("Copied weights from Google Drive backup to local weights folder.") - else: - print("No weights found in Google Drive backup.") - print("Google Drive backup import completed.") - -def get_md5_hash(file_path): - hash_md5 = hashlib.md5() - with open(file_path, "rb") as f: - for 
chunk in iter(lambda: f.read(4096), b""): - hash_md5.update(chunk) - return hash_md5.hexdigest() - -def copy_weights_folder_to_drive(): - destination_folder = os.path.join(GOOGLE_DRIVE_PATH, 'weights') - try: - if not os.path.exists(destination_folder): - os.makedirs(destination_folder) - - num_copied = 0 - for filename in os.listdir(WEIGHTS_FOLDER): - if filename.endswith('.pth'): - source_file = os.path.join(WEIGHTS_FOLDER, filename) - destination_file = os.path.join(destination_folder, filename) - if not os.path.exists(destination_file): - shutil.copy2(source_file, destination_file) - num_copied += 1 - print(f"Copied {filename} to Google Drive!") - - if num_copied == 0: - print("No new finished models found for copying.") - else: - print(f"Finished copying {num_copied} files to Google Drive!") - - except Exception as e: - print(f"An error occurred while copying weights: {str(e)}") - # You can log the error or take appropriate actions here. - -def backup_files(): - print("\nStarting backup loop...") - last_backup_timestamps_path = os.path.join(LOGS_FOLDER, 'last_backup_timestamps.txt') - fully_updated = False # boolean to track if all files are up to date - - while True: - try: - updated = False # flag to check if any files were updated - last_backup_timestamps = {} - - try: - with open(last_backup_timestamps_path, 'r') as f: - last_backup_timestamps = dict(line.strip().split(':') for line in f) - except FileNotFoundError: - pass # File does not exist yet, which is fine - - for root, dirs, files in os.walk(LOGS_FOLDER): - for filename in files: - if filename != 'last_backup_timestamps.txt': - filepath = os.path.join(root, filename) - if os.path.isfile(filepath): - backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER)) - backup_folderpath = os.path.dirname(backup_filepath) - if not os.path.exists(backup_folderpath): - os.makedirs(backup_folderpath) - print(f'Created backup folder: {backup_folderpath}', flush=True) - # check if file has changed since last backup - last_backup_timestamp = last_backup_timestamps.get(filepath) - current_timestamp = os.path.getmtime(filepath) - if last_backup_timestamp is None or float(last_backup_timestamp) < current_timestamp: - shutil.copy2(filepath, backup_filepath) # copy file with metadata - last_backup_timestamps[filepath] = str(current_timestamp) # update last backup timestamp - if last_backup_timestamp is None: - print(f'Backed up file: {filename}') - else: - print(f'Updating backed up file: {filename}') - updated = True - fully_updated = False # if a file is updated, all files are not up to date - - # check if any files were deleted in Colab and delete them from the backup drive - for filepath in list(last_backup_timestamps.keys()): - if not os.path.exists(filepath): - backup_filepath = os.path.join(GOOGLE_DRIVE_PATH, os.path.relpath(filepath, LOGS_FOLDER)) - if os.path.exists(backup_filepath): - os.remove(backup_filepath) - print(f'Deleted file: {filepath}') - del last_backup_timestamps[filepath] - updated = True - fully_updated = False # if a file is deleted, all files are not up to date - - if not updated and not fully_updated: - print("Files are up to date.") - fully_updated = True # if all files are up to date, set the boolean to True - copy_weights_folder_to_drive() - sleep_time = 15 - else: - sleep_time = 0.1 - - with open(last_backup_timestamps_path, 'w') as f: - for filepath, timestamp in last_backup_timestamps.items(): - f.write(f'{filepath}:{timestamp}\n') - - time.sleep(sleep_time) # wait for 15 seconds 
before checking again, or 0.1s if not fully up to date to speed up backups - - except Exception as e: - print(f"An error occurred: {str(e)}") - # You can log the error or take appropriate actions here. diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/build/build_tracker.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/build/build_tracker.py deleted file mode 100644 index 6621549b8449130d2d01ebac0a3649d8b70c4f91..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/build/build_tracker.py +++ /dev/null @@ -1,124 +0,0 @@ -import contextlib -import hashlib -import logging -import os -from types import TracebackType -from typing import Dict, Generator, Optional, Set, Type, Union - -from pip._internal.models.link import Link -from pip._internal.req.req_install import InstallRequirement -from pip._internal.utils.temp_dir import TempDirectory - -logger = logging.getLogger(__name__) - - -@contextlib.contextmanager -def update_env_context_manager(**changes: str) -> Generator[None, None, None]: - target = os.environ - - # Save values from the target and change them. - non_existent_marker = object() - saved_values: Dict[str, Union[object, str]] = {} - for name, new_value in changes.items(): - try: - saved_values[name] = target[name] - except KeyError: - saved_values[name] = non_existent_marker - target[name] = new_value - - try: - yield - finally: - # Restore original values in the target. - for name, original_value in saved_values.items(): - if original_value is non_existent_marker: - del target[name] - else: - assert isinstance(original_value, str) # for mypy - target[name] = original_value - - -@contextlib.contextmanager -def get_build_tracker() -> Generator["BuildTracker", None, None]: - root = os.environ.get("PIP_BUILD_TRACKER") - with contextlib.ExitStack() as ctx: - if root is None: - root = ctx.enter_context(TempDirectory(kind="build-tracker")).path - ctx.enter_context(update_env_context_manager(PIP_BUILD_TRACKER=root)) - logger.debug("Initialized build tracking at %s", root) - - with BuildTracker(root) as tracker: - yield tracker - - -class BuildTracker: - def __init__(self, root: str) -> None: - self._root = root - self._entries: Set[InstallRequirement] = set() - logger.debug("Created build tracker: %s", self._root) - - def __enter__(self) -> "BuildTracker": - logger.debug("Entered build tracker: %s", self._root) - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - self.cleanup() - - def _entry_path(self, link: Link) -> str: - hashed = hashlib.sha224(link.url_without_fragment.encode()).hexdigest() - return os.path.join(self._root, hashed) - - def add(self, req: InstallRequirement) -> None: - """Add an InstallRequirement to build tracking.""" - - assert req.link - # Get the file to write information about this requirement. - entry_path = self._entry_path(req.link) - - # Try reading from the file. If it exists and can be read from, a build - # is already in progress, so a LookupError is raised. - try: - with open(entry_path) as fp: - contents = fp.read() - except FileNotFoundError: - pass - else: - message = "{} is already being built: {}".format(req.link, contents) - raise LookupError(message) - - # If we're here, req should really not be building already. 
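# Illustrative sketch, separate from the deleted build_tracker.py in this hunk:
# the same "temporarily override environment variables, restore them on exit"
# pattern that update_env_context_manager() implements, written standalone.
# The name temp_env and the example variable below are hypothetical stand-ins.
import contextlib
import os
from typing import Iterator


@contextlib.contextmanager
def temp_env(**changes: str) -> Iterator[None]:
    missing = object()  # sentinel for variables that did not exist beforehand
    saved = {name: os.environ.get(name, missing) for name in changes}
    os.environ.update(changes)
    try:
        yield
    finally:
        for name, old_value in saved.items():
            if old_value is missing:
                os.environ.pop(name, None)  # was absent before: remove it again
            else:
                os.environ[name] = old_value  # existed before: restore old value


# Usage: the override is only visible inside the with-block (assuming the
# variable was not already set in the outer environment).
with temp_env(EXAMPLE_TRACKER_ROOT="/tmp/example-tracker"):
    assert os.environ["EXAMPLE_TRACKER_ROOT"] == "/tmp/example-tracker"
assert "EXAMPLE_TRACKER_ROOT" not in os.environ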
- assert req not in self._entries - - # Start tracking this requirement. - with open(entry_path, "w", encoding="utf-8") as fp: - fp.write(str(req)) - self._entries.add(req) - - logger.debug("Added %s to build tracker %r", req, self._root) - - def remove(self, req: InstallRequirement) -> None: - """Remove an InstallRequirement from build tracking.""" - - assert req.link - # Delete the created file and the corresponding entries. - os.unlink(self._entry_path(req.link)) - self._entries.remove(req) - - logger.debug("Removed %s from build tracker %r", req, self._root) - - def cleanup(self) -> None: - for req in set(self._entries): - self.remove(req) - - logger.debug("Removed build tracker: %r", self._root) - - @contextlib.contextmanager - def track(self, req: InstallRequirement) -> Generator[None, None, None]: - self.add(req) - yield - self.remove(req) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/api.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/api.py deleted file mode 100644 index 2f71aaed1afc2f43ae5a58d951896b91e0327abc..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/requests/api.py +++ /dev/null @@ -1,157 +0,0 @@ -""" -requests.api -~~~~~~~~~~~~ - -This module implements the Requests API. - -:copyright: (c) 2012 by Kenneth Reitz. -:license: Apache2, see LICENSE for more details. -""" - -from . import sessions - - -def request(method, url, **kwargs): - """Constructs and sends a :class:`Request `. - - :param method: method for the new :class:`Request` object: ``GET``, ``OPTIONS``, ``HEAD``, ``POST``, ``PUT``, ``PATCH``, or ``DELETE``. - :param url: URL for the new :class:`Request` object. - :param params: (optional) Dictionary, list of tuples or bytes to send - in the query string for the :class:`Request`. - :param data: (optional) Dictionary, list of tuples, bytes, or file-like - object to send in the body of the :class:`Request`. - :param json: (optional) A JSON serializable Python object to send in the body of the :class:`Request`. - :param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`. - :param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`. - :param files: (optional) Dictionary of ``'name': file-like-objects`` (or ``{'name': file-tuple}``) for multipart encoding upload. - ``file-tuple`` can be a 2-tuple ``('filename', fileobj)``, 3-tuple ``('filename', fileobj, 'content_type')`` - or a 4-tuple ``('filename', fileobj, 'content_type', custom_headers)``, where ``'content-type'`` is a string - defining the content type of the given file and ``custom_headers`` a dict-like object containing additional headers - to add for the file. - :param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth. - :param timeout: (optional) How many seconds to wait for the server to send data - before giving up, as a float, or a :ref:`(connect timeout, read - timeout) ` tuple. - :type timeout: float or tuple - :param allow_redirects: (optional) Boolean. Enable/disable GET/OPTIONS/POST/PUT/PATCH/DELETE/HEAD redirection. Defaults to ``True``. - :type allow_redirects: bool - :param proxies: (optional) Dictionary mapping protocol to the URL of the proxy. - :param verify: (optional) Either a boolean, in which case it controls whether we verify - the server's TLS certificate, or a string, in which case it must be a path - to a CA bundle to use. Defaults to ``True``. 
- :param stream: (optional) if ``False``, the response content will be immediately downloaded. - :param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair. - :return: :class:`Response ` object - :rtype: requests.Response - - Usage:: - - >>> import requests - >>> req = requests.request('GET', 'https://httpbin.org/get') - >>> req - - """ - - # By using the 'with' statement we are sure the session is closed, thus we - # avoid leaving sockets open which can trigger a ResourceWarning in some - # cases, and look like a memory leak in others. - with sessions.Session() as session: - return session.request(method=method, url=url, **kwargs) - - -def get(url, params=None, **kwargs): - r"""Sends a GET request. - - :param url: URL for the new :class:`Request` object. - :param params: (optional) Dictionary, list of tuples or bytes to send - in the query string for the :class:`Request`. - :param \*\*kwargs: Optional arguments that ``request`` takes. - :return: :class:`Response ` object - :rtype: requests.Response - """ - - return request("get", url, params=params, **kwargs) - - -def options(url, **kwargs): - r"""Sends an OPTIONS request. - - :param url: URL for the new :class:`Request` object. - :param \*\*kwargs: Optional arguments that ``request`` takes. - :return: :class:`Response ` object - :rtype: requests.Response - """ - - return request("options", url, **kwargs) - - -def head(url, **kwargs): - r"""Sends a HEAD request. - - :param url: URL for the new :class:`Request` object. - :param \*\*kwargs: Optional arguments that ``request`` takes. If - `allow_redirects` is not provided, it will be set to `False` (as - opposed to the default :meth:`request` behavior). - :return: :class:`Response ` object - :rtype: requests.Response - """ - - kwargs.setdefault("allow_redirects", False) - return request("head", url, **kwargs) - - -def post(url, data=None, json=None, **kwargs): - r"""Sends a POST request. - - :param url: URL for the new :class:`Request` object. - :param data: (optional) Dictionary, list of tuples, bytes, or file-like - object to send in the body of the :class:`Request`. - :param json: (optional) json data to send in the body of the :class:`Request`. - :param \*\*kwargs: Optional arguments that ``request`` takes. - :return: :class:`Response ` object - :rtype: requests.Response - """ - - return request("post", url, data=data, json=json, **kwargs) - - -def put(url, data=None, **kwargs): - r"""Sends a PUT request. - - :param url: URL for the new :class:`Request` object. - :param data: (optional) Dictionary, list of tuples, bytes, or file-like - object to send in the body of the :class:`Request`. - :param json: (optional) json data to send in the body of the :class:`Request`. - :param \*\*kwargs: Optional arguments that ``request`` takes. - :return: :class:`Response ` object - :rtype: requests.Response - """ - - return request("put", url, data=data, **kwargs) - - -def patch(url, data=None, **kwargs): - r"""Sends a PATCH request. - - :param url: URL for the new :class:`Request` object. - :param data: (optional) Dictionary, list of tuples, bytes, or file-like - object to send in the body of the :class:`Request`. - :param json: (optional) json data to send in the body of the :class:`Request`. - :param \*\*kwargs: Optional arguments that ``request`` takes. - :return: :class:`Response ` object - :rtype: requests.Response - """ - - return request("patch", url, data=data, **kwargs) - - -def delete(url, **kwargs): - r"""Sends a DELETE request. 
- - :param url: URL for the new :class:`Request` object. - :param \*\*kwargs: Optional arguments that ``request`` takes. - :return: :class:`Response ` object - :rtype: requests.Response - """ - - return request("delete", url, **kwargs) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_extension.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_extension.py deleted file mode 100644 index cbd6da9be4956ce8558304ed72ffbe88ccd22ba5..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_extension.py +++ /dev/null @@ -1,10 +0,0 @@ -from typing import Any - - -def load_ipython_extension(ip: Any) -> None: # pragma: no cover - # prevent circular import - from pip._vendor.rich.pretty import install - from pip._vendor.rich.traceback import install as tr_install - - install() - tr_install() diff --git a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/misc/geometry_utils.py b/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/misc/geometry_utils.py deleted file mode 100644 index 024430a07b9b094d2eca6e4e9e14edd5105ad1c5..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/SOLD2/sold2/misc/geometry_utils.py +++ /dev/null @@ -1,90 +0,0 @@ -import numpy as np -import torch - - -### Point-related utils - -# Warp a list of points using a homography -def warp_points(points, homography): - # Convert to homogeneous and in xy format - new_points = np.concatenate( - [points[..., [1, 0]], np.ones_like(points[..., :1])], axis=-1 - ) - # Warp - new_points = (homography @ new_points.T).T - # Convert back to inhomogeneous and hw format - new_points = new_points[..., [1, 0]] / new_points[..., 2:] - return new_points - - -# Mask out the points that are outside of img_size -def mask_points(points, img_size): - mask = ( - (points[..., 0] >= 0) - & (points[..., 0] < img_size[0]) - & (points[..., 1] >= 0) - & (points[..., 1] < img_size[1]) - ) - return mask - - -# Convert a tensor [N, 2] or batched tensor [B, N, 2] of N keypoints into -# a grid in [-1, 1]² that can be used in torch.nn.functional.interpolate -def keypoints_to_grid(keypoints, img_size): - n_points = keypoints.size()[-2] - device = keypoints.device - grid_points = ( - keypoints.float() - * 2.0 - / torch.tensor(img_size, dtype=torch.float, device=device) - - 1.0 - ) - grid_points = grid_points[..., [1, 0]].view(-1, n_points, 1, 2) - return grid_points - - -# Return a 2D matrix indicating the local neighborhood of each point -# for a given threshold and two lists of corresponding keypoints -def get_dist_mask(kp0, kp1, valid_mask, dist_thresh): - b_size, n_points, _ = kp0.size() - dist_mask0 = torch.norm(kp0.unsqueeze(2) - kp0.unsqueeze(1), dim=-1) - dist_mask1 = torch.norm(kp1.unsqueeze(2) - kp1.unsqueeze(1), dim=-1) - dist_mask = torch.min(dist_mask0, dist_mask1) - dist_mask = dist_mask <= dist_thresh - dist_mask = dist_mask.repeat(1, 1, b_size).reshape( - b_size * n_points, b_size * n_points - ) - dist_mask = dist_mask[valid_mask, :][:, valid_mask] - return dist_mask - - -### Line-related utils - -# Sample n points along lines of shape (num_lines, 2, 2) -def sample_line_points(lines, n): - line_points_x = np.linspace(lines[:, 0, 0], lines[:, 1, 0], n, axis=-1) - line_points_y = np.linspace(lines[:, 0, 1], lines[:, 1, 1], n, axis=-1) - line_points = np.stack([line_points_x, line_points_y], axis=2) - return line_points - - -# Return a mask of the valid lines 
that are within a valid mask of an image -def mask_lines(lines, valid_mask): - h, w = valid_mask.shape - int_lines = np.clip(np.round(lines).astype(int), 0, [h - 1, w - 1]) - h_valid = valid_mask[int_lines[:, 0, 0], int_lines[:, 0, 1]] - w_valid = valid_mask[int_lines[:, 1, 0], int_lines[:, 1, 1]] - valid = h_valid & w_valid - return valid - - -# Return a 2D matrix indicating for each pair of points -# if they are on the same line or not -def get_common_line_mask(line_indices, valid_mask): - b_size, n_points = line_indices.shape - common_mask = line_indices[:, :, None] == line_indices[:, None, :] - common_mask = common_mask.repeat(1, 1, b_size).reshape( - b_size * n_points, b_size * n_points - ) - common_mask = common_mask[valid_mask, :][:, valid_mask] - return common_mask diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/parallel/_functions.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/parallel/_functions.py deleted file mode 100644 index 9b5a8a44483ab991411d07122b22a1d027e4be8e..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/parallel/_functions.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch.nn.parallel._functions import _get_stream - - -def scatter(input, devices, streams=None): - """Scatters tensor across multiple GPUs.""" - if streams is None: - streams = [None] * len(devices) - - if isinstance(input, list): - chunk_size = (len(input) - 1) // len(devices) + 1 - outputs = [ - scatter(input[i], [devices[i // chunk_size]], - [streams[i // chunk_size]]) for i in range(len(input)) - ] - return outputs - elif isinstance(input, torch.Tensor): - output = input.contiguous() - # TODO: copy to a pinned buffer first (if copying from CPU) - stream = streams[0] if output.numel() > 0 else None - if devices != [-1]: - with torch.cuda.device(devices[0]), torch.cuda.stream(stream): - output = output.cuda(devices[0], non_blocking=True) - else: - # unsqueeze the first dimension thus the tensor's shape is the - # same as those scattered with GPU. 
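# Illustrative sketch, separate from the deleted _functions.py in this hunk:
# the ceiling-division rule scatter() above uses to split a list of inputs
# across the target devices. The input list and device ids are made up.
inputs = list(range(10))
devices = [0, 1, 2]
chunk_size = (len(inputs) - 1) // len(devices) + 1  # ceil(10 / 3) == 4
assignment = [devices[i // chunk_size] for i in range(len(inputs))]
assert chunk_size == 4
assert assignment == [0, 0, 0, 0, 1, 1, 1, 1, 2, 2]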
- output = output.unsqueeze(0) - return output - else: - raise Exception(f'Unknown type {type(input)}.') - - -def synchronize_stream(output, devices, streams): - if isinstance(output, list): - chunk_size = len(output) // len(devices) - for i in range(len(devices)): - for j in range(chunk_size): - synchronize_stream(output[i * chunk_size + j], [devices[i]], - [streams[i]]) - elif isinstance(output, torch.Tensor): - if output.numel() != 0: - with torch.cuda.device(devices[0]): - main_stream = torch.cuda.current_stream() - main_stream.wait_stream(streams[0]) - output.record_stream(main_stream) - else: - raise Exception(f'Unknown type {type(output)}.') - - -def get_input_device(input): - if isinstance(input, list): - for item in input: - input_device = get_input_device(item) - if input_device != -1: - return input_device - return -1 - elif isinstance(input, torch.Tensor): - return input.get_device() if input.is_cuda else -1 - else: - raise Exception(f'Unknown type {type(input)}.') - - -class Scatter: - - @staticmethod - def forward(target_gpus, input): - input_device = get_input_device(input) - streams = None - if input_device == -1 and target_gpus != [-1]: - # Perform CPU to GPU copies in a background stream - streams = [_get_stream(device) for device in target_gpus] - - outputs = scatter(input, target_gpus, streams) - # Synchronize with the copy stream - if streams is not None: - synchronize_stream(outputs, target_gpus, streams) - - return tuple(outputs) diff --git a/spaces/SIGGRAPH2022/Text2Human/Text2Human/data/__init__.py b/spaces/SIGGRAPH2022/Text2Human/Text2Human/data/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/llms/openai.py b/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/llms/openai.py deleted file mode 100644 index 6a03adbb1cb151ea26d4033bc4087aab1d657ab7..0000000000000000000000000000000000000000 --- a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/llms/openai.py +++ /dev/null @@ -1,708 +0,0 @@ -"""Wrapper around OpenAI APIs.""" -from __future__ import annotations - -import logging -import sys -from typing import ( - Any, - Callable, - Dict, - Generator, - List, - Mapping, - Optional, - Set, - Tuple, - Union, -) - -from pydantic import BaseModel, Extra, Field, root_validator -from tenacity import ( - before_sleep_log, - retry, - retry_if_exception_type, - stop_after_attempt, - wait_exponential, -) - -from langchain.llms.base import BaseLLM -from langchain.schema import Generation, LLMResult -from langchain.utils import get_from_dict_or_env - -logger = logging.getLogger(__name__) - - -def update_token_usage( - keys: Set[str], response: Dict[str, Any], token_usage: Dict[str, Any] -) -> None: - """Update token usage.""" - _keys_to_use = keys.intersection(response["usage"]) - for _key in _keys_to_use: - if _key not in token_usage: - token_usage[_key] = response["usage"][_key] - else: - token_usage[_key] += response["usage"][_key] - - -def _update_response(response: Dict[str, Any], stream_response: Dict[str, Any]) -> None: - """Update response from the stream response.""" - response["choices"][0]["text"] += stream_response["choices"][0]["text"] - response["choices"][0]["finish_reason"] = stream_response["choices"][0][ - "finish_reason" - ] - response["choices"][0]["logprobs"] = stream_response["choices"][0]["logprobs"] - - -def 
_streaming_response_template() -> Dict[str, Any]: - return { - "choices": [ - { - "text": "", - "finish_reason": None, - "logprobs": None, - } - ] - } - - -def _create_retry_decorator(llm: Union[BaseOpenAI, OpenAIChat]) -> Callable[[Any], Any]: - import openai - - min_seconds = 4 - max_seconds = 10 - # Wait 2^x * 1 second between each retry starting with - # 4 seconds, then up to 10 seconds, then 10 seconds afterwards - return retry( - reraise=True, - stop=stop_after_attempt(llm.max_retries), - wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds), - retry=( - retry_if_exception_type(openai.error.Timeout) - | retry_if_exception_type(openai.error.APIError) - | retry_if_exception_type(openai.error.APIConnectionError) - | retry_if_exception_type(openai.error.RateLimitError) - | retry_if_exception_type(openai.error.ServiceUnavailableError) - ), - before_sleep=before_sleep_log(logger, logging.WARNING), - ) - - -def completion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any) -> Any: - """Use tenacity to retry the completion call.""" - retry_decorator = _create_retry_decorator(llm) - - @retry_decorator - def _completion_with_retry(**kwargs: Any) -> Any: - return llm.client.create(**kwargs) - - return _completion_with_retry(**kwargs) - - -async def acompletion_with_retry( - llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any -) -> Any: - """Use tenacity to retry the async completion call.""" - retry_decorator = _create_retry_decorator(llm) - - @retry_decorator - async def _completion_with_retry(**kwargs: Any) -> Any: - # Use OpenAI's async api https://github.com/openai/openai-python#async-api - return await llm.client.acreate(**kwargs) - - return await _completion_with_retry(**kwargs) - - -class BaseOpenAI(BaseLLM, BaseModel): - """Wrapper around OpenAI large language models. - - To use, you should have the ``openai`` python package installed, and the - environment variable ``OPENAI_API_KEY`` set with your API key. - - Any parameters that are valid to be passed to the openai.create call can be passed - in, even if not explicitly saved on this class. - - Example: - .. code-block:: python - - from langchain.llms import OpenAI - openai = OpenAI(model_name="text-davinci-003") - """ - - client: Any #: :meta private: - model_name: str = "text-davinci-003" - """Model name to use.""" - temperature: float = 0.7 - """What sampling temperature to use.""" - max_tokens: int = 256 - """The maximum number of tokens to generate in the completion. - -1 returns as many tokens as possible given the prompt and - the models maximal context size.""" - top_p: float = 1 - """Total probability mass of tokens to consider at each step.""" - frequency_penalty: float = 0 - """Penalizes repeated tokens according to frequency.""" - presence_penalty: float = 0 - """Penalizes repeated tokens.""" - n: int = 1 - """How many completions to generate for each prompt.""" - best_of: int = 1 - """Generates best_of completions server-side and returns the "best".""" - model_kwargs: Dict[str, Any] = Field(default_factory=dict) - """Holds any model parameters valid for `create` call not explicitly specified.""" - openai_api_key: Optional[str] = None - batch_size: int = 20 - """Batch size to use when passing multiple documents to generate.""" - request_timeout: Optional[Union[float, Tuple[float, float]]] = None - """Timeout for requests to OpenAI completion API. 
Default is 600 seconds.""" - logit_bias: Optional[Dict[str, float]] = Field(default_factory=dict) - """Adjust the probability of specific tokens being generated.""" - max_retries: int = 6 - """Maximum number of retries to make when generating.""" - streaming: bool = False - """Whether to stream the results or not.""" - - def __new__(cls, **data: Any) -> Union[OpenAIChat, BaseOpenAI]: # type: ignore - """Initialize the OpenAI object.""" - if data.get("model_name", "").startswith("gpt-3.5-turbo"): - return OpenAIChat(**data) - return super().__new__(cls) - - class Config: - """Configuration for this pydantic object.""" - - extra = Extra.ignore - - @root_validator(pre=True, allow_reuse=True) - def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]: - """Build extra kwargs from additional params that were passed in.""" - all_required_field_names = {field.alias for field in cls.__fields__.values()} - - extra = values.get("model_kwargs", {}) - for field_name in list(values): - if field_name not in all_required_field_names: - if field_name in extra: - raise ValueError(f"Found {field_name} supplied twice.") - logger.warning( - f"""WARNING! {field_name} is not default parameter. - {field_name} was transfered to model_kwargs. - Please confirm that {field_name} is what you intended.""" - ) - extra[field_name] = values.pop(field_name) - values["model_kwargs"] = extra - return values - - @root_validator(allow_reuse=True) - def validate_environment(cls, values: Dict) -> Dict: - """Validate that api key and python package exists in environment.""" - openai_api_key = get_from_dict_or_env( - values, "openai_api_key", "OPENAI_API_KEY" - ) - try: - import openai - - openai.api_key = openai_api_key - values["client"] = openai.Completion - except ImportError: - raise ValueError( - "Could not import openai python package. " - "Please it install it with `pip install openai`." - ) - if values["streaming"] and values["n"] > 1: - raise ValueError("Cannot stream results when n > 1.") - if values["streaming"] and values.get("best_of") and values["best_of"] > 1: - raise ValueError("Cannot stream results when best_of > 1.") - return values - - @property - def _default_params(self) -> Dict[str, Any]: - """Get the default parameters for calling OpenAI API.""" - normal_params = { - "temperature": self.temperature, - "max_tokens": self.max_tokens, - "top_p": self.top_p, - "frequency_penalty": self.frequency_penalty, - "presence_penalty": self.presence_penalty, - "n": self.n, - # "best_of": self.best_of, - "request_timeout": self.request_timeout, - "logit_bias": self.logit_bias, - } - return {**normal_params, **self.model_kwargs} - - def _generate( - self, prompts: List[str], stop: Optional[List[str]] = None - ) -> LLMResult: - """Call out to OpenAI's endpoint with k unique prompts. - - Args: - prompts: The prompts to pass into the model. - stop: Optional list of stop words to use when generating. - - Returns: - The full LLM output. - - Example: - .. code-block:: python - - response = openai.generate(["Tell me a joke."]) - """ - # TODO: write a unit test for this - params = self._invocation_params - sub_prompts = self.get_sub_prompts(params, prompts, stop) - choices = [] - token_usage: Dict[str, int] = {} - # Get the token usage from the response. - # Includes prompt, completion, and total tokens used. 
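# Illustrative sketch, separate from the deleted openai.py wrapper in this hunk:
# how the per-request "usage" dicts are summed across batched sub-prompts,
# mirroring update_token_usage() defined earlier in this file. The response
# payloads below are made-up examples, not real API output.
from typing import Any, Dict


def merge_usage(total: Dict[str, int], response: Dict[str, Any]) -> None:
    # Only add the keys that are actually present in this response.
    for key in ("prompt_tokens", "completion_tokens", "total_tokens"):
        if key in response.get("usage", {}):
            total[key] = total.get(key, 0) + response["usage"][key]


token_usage: Dict[str, int] = {}
for resp in (
    {"usage": {"prompt_tokens": 12, "completion_tokens": 30, "total_tokens": 42}},
    {"usage": {"prompt_tokens": 8, "completion_tokens": 20, "total_tokens": 28}},
):
    merge_usage(token_usage, resp)
assert token_usage == {"prompt_tokens": 20, "completion_tokens": 50, "total_tokens": 70}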
- _keys = {"completion_tokens", "prompt_tokens", "total_tokens"} - for _prompts in sub_prompts: - if self.streaming: - if len(_prompts) > 1: - raise ValueError("Cannot stream results with multiple prompts.") - params["stream"] = True - response = _streaming_response_template() - for stream_resp in completion_with_retry( - self, prompt=_prompts, **params - ): - self.callback_manager.on_llm_new_token( - stream_resp["choices"][0]["text"], - verbose=self.verbose, - logprobs=stream_resp["choices"][0]["logprobs"], - ) - _update_response(response, stream_resp) - choices.extend(response["choices"]) - else: - response = completion_with_retry(self, prompt=_prompts, **params) - choices.extend(response["choices"]) - if not self.streaming: - # Can't update token usage if streaming - update_token_usage(_keys, response, token_usage) - return self.create_llm_result(choices, prompts, token_usage) - - async def _agenerate( - self, prompts: List[str], stop: Optional[List[str]] = None - ) -> LLMResult: - """Call out to OpenAI's endpoint async with k unique prompts.""" - params = self._invocation_params - sub_prompts = self.get_sub_prompts(params, prompts, stop) - choices = [] - token_usage: Dict[str, int] = {} - # Get the token usage from the response. - # Includes prompt, completion, and total tokens used. - _keys = {"completion_tokens", "prompt_tokens", "total_tokens"} - for _prompts in sub_prompts: - if self.streaming: - if len(_prompts) > 1: - raise ValueError("Cannot stream results with multiple prompts.") - params["stream"] = True - response = _streaming_response_template() - async for stream_resp in await acompletion_with_retry( - self, prompt=_prompts, **params - ): - if self.callback_manager.is_async: - await self.callback_manager.on_llm_new_token( - stream_resp["choices"][0]["text"], - verbose=self.verbose, - logprobs=stream_resp["choices"][0]["logprobs"], - ) - else: - self.callback_manager.on_llm_new_token( - stream_resp["choices"][0]["text"], - verbose=self.verbose, - logprobs=stream_resp["choices"][0]["logprobs"], - ) - _update_response(response, stream_resp) - choices.extend(response["choices"]) - else: - response = await acompletion_with_retry(self, prompt=_prompts, **params) - choices.extend(response["choices"]) - if not self.streaming: - # Can't update token usage if streaming - update_token_usage(_keys, response, token_usage) - return self.create_llm_result(choices, prompts, token_usage) - - def get_sub_prompts( - self, - params: Dict[str, Any], - prompts: List[str], - stop: Optional[List[str]] = None, - ) -> List[List[str]]: - """Get the sub prompts for llm call.""" - if stop is not None: - if "stop" in params: - raise ValueError("`stop` found in both the input and default params.") - params["stop"] = stop - if params["max_tokens"] == -1: - if len(prompts) != 1: - raise ValueError( - "max_tokens set to -1 not supported for multiple inputs." 
- ) - params["max_tokens"] = self.max_tokens_for_prompt(prompts[0]) - sub_prompts = [ - prompts[i : i + self.batch_size] - for i in range(0, len(prompts), self.batch_size) - ] - return sub_prompts - - def create_llm_result( - self, choices: Any, prompts: List[str], token_usage: Dict[str, int] - ) -> LLMResult: - """Create the LLMResult from the choices and prompts.""" - generations = [] - for i, _ in enumerate(prompts): - sub_choices = choices[i * self.n : (i + 1) * self.n] - generations.append( - [ - Generation( - text=choice["text"], - generation_info=dict( - finish_reason=choice.get("finish_reason"), - logprobs=choice.get("logprobs"), - ), - ) - for choice in sub_choices - ] - ) - return LLMResult( - generations=generations, llm_output={"token_usage": token_usage} - ) - - def stream(self, prompt: str, stop: Optional[List[str]] = None) -> Generator: - """Call OpenAI with streaming flag and return the resulting generator. - - BETA: this is a beta feature while we figure out the right abstraction. - Once that happens, this interface could change. - - Args: - prompt: The prompts to pass into the model. - stop: Optional list of stop words to use when generating. - - Returns: - A generator representing the stream of tokens from OpenAI. - - Example: - .. code-block:: python - - generator = openai.stream("Tell me a joke.") - for token in generator: - yield token - """ - params = self.prep_streaming_params(stop) - generator = self.client.create(prompt=prompt, **params) - - return generator - - def prep_streaming_params(self, stop: Optional[List[str]] = None) -> Dict[str, Any]: - """Prepare the params for streaming.""" - params = self._invocation_params - if params.get('best_of') and params["best_of"] != 1: - raise ValueError("OpenAI only supports best_of == 1 for streaming") - if stop is not None: - if "stop" in params: - raise ValueError("`stop` found in both the input and default params.") - params["stop"] = stop - params["stream"] = True - return params - - @property - def _invocation_params(self) -> Dict[str, Any]: - """Get the parameters used to invoke the model.""" - return self._default_params - - @property - def _identifying_params(self) -> Mapping[str, Any]: - """Get the identifying parameters.""" - return {**{"model_name": self.model_name}, **self._default_params} - - @property - def _llm_type(self) -> str: - """Return type of llm.""" - return "openai" - - def get_num_tokens(self, text: str) -> int: - """Calculate num tokens with tiktoken package.""" - # tiktoken NOT supported for Python 3.8 or below - if sys.version_info[1] <= 8: - return super().get_num_tokens(text) - try: - import tiktoken - except ImportError: - raise ValueError( - "Could not import tiktoken python package. " - "This is needed in order to calculate get_num_tokens. " - "Please it install it with `pip install tiktoken`." - ) - encoder = "gpt2" - if self.model_name in ("text-davinci-003", "text-davinci-002"): - encoder = "p50k_base" - if self.model_name.startswith("code"): - encoder = "p50k_base" - # create a GPT-3 encoder instance - enc = tiktoken.get_encoding(encoder) - - # encode the text using the GPT-3 encoder - tokenized_text = enc.encode(text) - - # calculate the number of tokens in the encoded text - return len(tokenized_text) - - def modelname_to_contextsize(self, modelname: str) -> int: - """Calculate the maximum number of tokens possible to generate for a model. 
- - text-davinci-003: 4,097 tokens - text-curie-001: 2,048 tokens - text-babbage-001: 2,048 tokens - text-ada-001: 2,048 tokens - code-davinci-002: 8,000 tokens - code-cushman-001: 2,048 tokens - - Args: - modelname: The modelname we want to know the context size for. - - Returns: - The maximum context size - - Example: - .. code-block:: python - - max_tokens = openai.modelname_to_contextsize("text-davinci-003") - """ - if modelname == "text-davinci-003": - return 4097 - elif modelname == "text-curie-001": - return 2048 - elif modelname == "text-babbage-001": - return 2048 - elif modelname == "text-ada-001": - return 2048 - elif modelname == "code-davinci-002": - return 8000 - elif modelname == "code-cushman-001": - return 2048 - else: - return 4097 - - def max_tokens_for_prompt(self, prompt: str) -> int: - """Calculate the maximum number of tokens possible to generate for a prompt. - - Args: - prompt: The prompt to pass into the model. - - Returns: - The maximum number of tokens to generate for a prompt. - - Example: - .. code-block:: python - - max_tokens = openai.max_token_for_prompt("Tell me a joke.") - """ - num_tokens = self.get_num_tokens(prompt) - - # get max context size for model by name - max_size = self.modelname_to_contextsize(self.model_name) - return max_size - num_tokens - - -class OpenAI(BaseOpenAI): - """Generic OpenAI class that uses model name.""" - - @property - def _invocation_params(self) -> Dict[str, Any]: - return {**{"model": self.model_name}, **super()._invocation_params} - - -class AzureOpenAI(BaseOpenAI): - """Azure specific OpenAI class that uses deployment name.""" - - deployment_name: str = "" - """Deployment name to use.""" - - @property - def _identifying_params(self) -> Mapping[str, Any]: - return { - **{"deployment_name": self.deployment_name}, - **super()._identifying_params, - } - - @property - def _invocation_params(self) -> Dict[str, Any]: - return {**{"engine": self.deployment_name}, **super()._invocation_params} - - -class OpenAIChat(BaseLLM, BaseModel): - """Wrapper around OpenAI Chat large language models. - - To use, you should have the ``openai`` python package installed, and the - environment variable ``OPENAI_API_KEY`` set with your API key. - - Any parameters that are valid to be passed to the openai.create call can be passed - in, even if not explicitly saved on this class. - - Example: - .. 
code-block:: python - - from langchain.llms import OpenAIChat - openaichat = OpenAIChat(model_name="gpt-3.5-turbo") - """ - - client: Any #: :meta private: - model_name: str = "gpt-3.5-turbo" - """Model name to use.""" - model_kwargs: Dict[str, Any] = Field(default_factory=dict) - """Holds any model parameters valid for `create` call not explicitly specified.""" - openai_api_key: Optional[str] = None - max_retries: int = 6 - """Maximum number of retries to make when generating.""" - prefix_messages: List = Field(default_factory=list) - """Series of messages for Chat input.""" - streaming: bool = False - """Whether to stream the results or not.""" - - class Config: - """Configuration for this pydantic object.""" - - extra = Extra.ignore - - @root_validator(pre=True, allow_reuse=True) - def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]: - """Build extra kwargs from additional params that were passed in.""" - all_required_field_names = {field.alias for field in cls.__fields__.values()} - - extra = values.get("model_kwargs", {}) - for field_name in list(values): - if field_name not in all_required_field_names: - if field_name in extra: - raise ValueError(f"Found {field_name} supplied twice.") - extra[field_name] = values.pop(field_name) - values["model_kwargs"] = extra - return values - - @root_validator(allow_reuse=True) - def validate_environment(cls, values: Dict) -> Dict: - """Validate that api key and python package exists in environment.""" - openai_api_key = get_from_dict_or_env( - values, "openai_api_key", "OPENAI_API_KEY" - ) - try: - import openai - - openai.api_key = openai_api_key - except ImportError: - raise ValueError( - "Could not import openai python package. " - "Please it install it with `pip install openai`." - ) - try: - values["client"] = openai.ChatCompletion - except AttributeError: - raise ValueError( - "`openai` has no `ChatCompletion` attribute, this is likely " - "due to an old version of the openai package. Try upgrading it " - "with `pip install --upgrade openai`." 
- ) - return values - - @property - def _default_params(self) -> Dict[str, Any]: - """Get the default parameters for calling OpenAI API.""" - return self.model_kwargs - - def _get_chat_params( - self, prompts: List[str], stop: Optional[List[str]] = None - ) -> Tuple: - if len(prompts) > 1: - raise ValueError( - f"OpenAIChat currently only supports single prompt, got {prompts}" - ) - messages = self.prefix_messages + [{"role": "user", "content": prompts[0]}] - params: Dict[str, Any] = {**{"model": self.model_name}, **self._default_params} - if stop is not None: - if "stop" in params: - raise ValueError("`stop` found in both the input and default params.") - params["stop"] = stop - return messages, params - - def _generate( - self, prompts: List[str], stop: Optional[List[str]] = None - ) -> LLMResult: - messages, params = self._get_chat_params(prompts, stop) - if self.streaming: - response = "" - params["stream"] = True - for stream_resp in completion_with_retry(self, messages=messages, **params): - token = stream_resp["choices"][0]["delta"].get("content", "") - response += token - self.callback_manager.on_llm_new_token( - token, - verbose=self.verbose, - ) - return LLMResult( - generations=[[Generation(text=response)]], - ) - else: - full_response = completion_with_retry(self, messages=messages, **params) - return LLMResult( - generations=[ - [Generation(text=full_response["choices"][0]["message"]["content"])] - ], - llm_output={"token_usage": full_response["usage"]}, - ) - - async def _agenerate( - self, prompts: List[str], stop: Optional[List[str]] = None - ) -> LLMResult: - messages, params = self._get_chat_params(prompts, stop) - if self.streaming: - response = "" - params["stream"] = True - async for stream_resp in await acompletion_with_retry( - self, messages=messages, **params - ): - token = stream_resp["choices"][0]["delta"].get("content", "") - response += token - if self.callback_manager.is_async: - await self.callback_manager.on_llm_new_token( - token, - verbose=self.verbose, - ) - else: - self.callback_manager.on_llm_new_token( - token, - verbose=self.verbose, - ) - return LLMResult( - generations=[[Generation(text=response)]], - ) - else: - full_response = await acompletion_with_retry( - self, messages=messages, **params - ) - return LLMResult( - generations=[ - [Generation(text=full_response["choices"][0]["message"]["content"])] - ], - llm_output={"token_usage": full_response["usage"]}, - ) - - @property - def _identifying_params(self) -> Mapping[str, Any]: - """Get the identifying parameters.""" - return {**{"model_name": self.model_name}, **self._default_params} - - @property - def _llm_type(self) -> str: - """Return type of llm.""" - return "openai-chat" - - -class AzureOpenAIChat(OpenAIChat): - """Azure specific OpenAI class that uses deployment name.""" - - deployment_name: str = "" - """Deployment name to use.""" - - @property - def _identifying_params(self) -> Mapping[str, Any]: - return { - **{"deployment_name": self.deployment_name}, - **super()._identifying_params, - } diff --git a/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/TunisianASR/results/14epoch_tunisian/1234/train_with_wav2vec.py b/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/TunisianASR/results/14epoch_tunisian/1234/train_with_wav2vec.py deleted file mode 100644 index 5d6ca4c5a378583fd297e1202522b9dc9c2368de..0000000000000000000000000000000000000000 --- a/spaces/SalahZa/Code-Switched-Tunisian-SpeechToText/TunisianASR/results/14epoch_tunisian/1234/train_with_wav2vec.py +++ /dev/null @@ -1,399 
+0,0 @@ -#!/usr/bin/env python3 -import sys -import torch -import logging -import speechbrain as sb -from pathlib import Path -import os -import torchaudio -from hyperpyyaml import load_hyperpyyaml -from speechbrain.tokenizers.SentencePiece import SentencePiece -from speechbrain.utils.data_utils import undo_padding -from speechbrain.utils.distributed import run_on_main - -"""Recipe for training a sequence-to-sequence ASR system with CommonVoice. -The system employs a wav2vec2 encoder and a CTC decoder. -Decoding is performed with greedy decoding (will be extended to beam search). - -To run this recipe, do the following: -> python train_with_wav2vec2.py hparams/train_with_wav2vec2.yaml - -With the default hyperparameters, the system employs a pretrained wav2vec2 encoder. -The wav2vec2 model is pretrained following the model given in the hprams file. -It may be dependent on the language. - -The neural network is trained with CTC on sub-word units estimated with -Byte Pairwise Encoding (BPE). - -The experiment file is flexible enough to support a large variety of -different systems. By properly changing the parameter files, you can try -different encoders, decoders, tokens (e.g, characters instead of BPE), -training languages (all CommonVoice languages), and many -other possible variations. - -Authors - * Titouan Parcollet 2021 -""" - -logger = logging.getLogger(__name__) - - -# Define training procedure -class ASR(sb.core.Brain): - def compute_forward(self, batch, stage): - """Forward computations from the waveform batches to the output probabilities.""" - - batch = batch.to(self.device) - wavs, wav_lens = batch.sig - wavs, wav_lens = wavs.to(self.device), wav_lens.to(self.device) - if stage == sb.Stage.TRAIN: - if hasattr(self.hparams, "augmentation"): - wavs = self.hparams.augmentation(wavs, wav_lens) - - # Forward pass - feats = self.modules.wav2vec2(wavs, wav_lens) - x = self.modules.enc(feats) - logits = self.modules.ctc_lin(x) - p_ctc = self.hparams.log_softmax(logits) - - return p_ctc, wav_lens - - def compute_objectives(self, predictions, batch, stage): - """Computes the loss (CTC) given predictions and targets.""" - - p_ctc, wav_lens = predictions - - ids = batch.id - tokens, tokens_lens = batch.tokens - - loss = self.hparams.ctc_cost(p_ctc, tokens, wav_lens, tokens_lens) - - if stage != sb.Stage.TRAIN: - predicted_tokens = sb.decoders.ctc_greedy_decode( - p_ctc, wav_lens, blank_id=self.hparams.blank_index - ) - # Decode token terms to words - if self.hparams.use_language_modelling: - predicted_words = [] - for logs in p_ctc: - text = decoder.decode(logs.detach().cpu().numpy()) - predicted_words.append(text.split(" ")) - else: - predicted_words = [ - "".join(self.tokenizer.decode_ndim(utt_seq)).split(" ") - for utt_seq in predicted_tokens - ] - # Convert indices to words - target_words = [wrd.split(" ") for wrd in batch.wrd] - - self.wer_metric.append(ids, predicted_words, target_words) - self.cer_metric.append(ids, predicted_words, target_words) - - return loss - - def fit_batch(self, batch): - """Train the parameters given a single batch in input""" - should_step = self.step % self.grad_accumulation_factor == 0 - # Managing automatic mixed precision - # TOFIX: CTC fine-tuning currently is unstable - # This is certainly due to CTC being done in fp16 instead of fp32 - if self.auto_mix_prec: - with torch.cuda.amp.autocast(): - with self.no_sync(): - outputs = self.compute_forward(batch, sb.Stage.TRAIN) - loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN) - with 
self.no_sync(not should_step): - self.scaler.scale( - loss / self.grad_accumulation_factor - ).backward() - if should_step: - - if not self.hparams.wav2vec2.freeze: - self.scaler.unscale_(self.wav2vec_optimizer) - self.scaler.unscale_(self.model_optimizer) - if self.check_gradients(loss): - if not self.hparams.wav2vec2.freeze: - if self.optimizer_step >= self.hparams.warmup_steps: - self.scaler.step(self.wav2vec_optimizer) - self.scaler.step(self.model_optimizer) - self.scaler.update() - self.zero_grad() - self.optimizer_step += 1 - else: - # This is mandatory because HF models have a weird behavior with DDP - # on the forward pass - with self.no_sync(): - outputs = self.compute_forward(batch, sb.Stage.TRAIN) - - loss = self.compute_objectives(outputs, batch, sb.Stage.TRAIN) - - with self.no_sync(not should_step): - (loss / self.grad_accumulation_factor).backward() - if should_step: - if self.check_gradients(loss): - if not self.hparams.wav2vec2.freeze: - if self.optimizer_step >= self.hparams.warmup_steps: - self.wav2vec_optimizer.step() - self.model_optimizer.step() - self.zero_grad() - self.optimizer_step += 1 - - self.on_fit_batch_end(batch, outputs, loss, should_step) - return loss.detach().cpu() - - def evaluate_batch(self, batch, stage): - """Computations needed for validation/test batches""" - predictions = self.compute_forward(batch, stage=stage) - with torch.no_grad(): - loss = self.compute_objectives(predictions, batch, stage=stage) - return loss.detach() - - def on_stage_start(self, stage, epoch): - """Gets called at the beginning of each epoch""" - if stage != sb.Stage.TRAIN: - self.cer_metric = self.hparams.cer_computer() - self.wer_metric = self.hparams.error_rate_computer() - - def on_stage_end(self, stage, stage_loss, epoch): - """Gets called at the end of an epoch.""" - # Compute/store important stats - stage_stats = {"loss": stage_loss} - if stage == sb.Stage.TRAIN: - self.train_stats = stage_stats - else: - stage_stats["CER"] = self.cer_metric.summarize("error_rate") - stage_stats["WER"] = self.wer_metric.summarize("error_rate") - - # Perform end-of-iteration things, like annealing, logging, etc. 
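# Illustrative sketch, separate from the deleted training recipe in this hunk:
# the gradient accumulation pattern used by fit_batch() above, reduced to plain
# PyTorch. The model, optimizer, data and accumulation factor are hypothetical.
import torch
from torch import nn

model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
grad_accumulation_factor = 4

for step, batch in enumerate(torch.randn(16, 8, 4).unbind(0), start=1):
    loss = model(batch).pow(2).mean()
    # Scale the loss so the accumulated gradient matches a single large batch.
    (loss / grad_accumulation_factor).backward()
    if step % grad_accumulation_factor == 0:  # same test as should_step above
        optimizer.step()
        optimizer.zero_grad()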
- if stage == sb.Stage.VALID: - old_lr_model, new_lr_model = self.hparams.lr_annealing_model( - stage_stats["loss"] - ) - old_lr_wav2vec, new_lr_wav2vec = self.hparams.lr_annealing_wav2vec( - stage_stats["loss"] - ) - sb.nnet.schedulers.update_learning_rate( - self.model_optimizer, new_lr_model - ) - if not self.hparams.wav2vec2.freeze: - sb.nnet.schedulers.update_learning_rate( - self.wav2vec_optimizer, new_lr_wav2vec - ) - self.hparams.train_logger.log_stats( - stats_meta={ - "epoch": epoch, - "lr_model": old_lr_model, - "lr_wav2vec": old_lr_wav2vec, - }, - train_stats=self.train_stats, - valid_stats=stage_stats, - ) - self.checkpointer.save_and_keep_only( - meta={"WER": stage_stats["WER"]}, min_keys=["WER"], - ) - elif stage == sb.Stage.TEST: - self.hparams.train_logger.log_stats( - stats_meta={"Epoch loaded": self.hparams.epoch_counter.current}, - test_stats=stage_stats, - ) - with open(self.hparams.wer_file, "w") as w: - self.wer_metric.write_stats(w) - - def init_optimizers(self): - "Initializes the wav2vec2 optimizer and model optimizer" - - # If the wav2vec encoder is unfrozen, we create the optimizer - if not self.hparams.wav2vec2.freeze: - self.wav2vec_optimizer = self.hparams.wav2vec_opt_class( - self.modules.wav2vec2.parameters() - ) - if self.checkpointer is not None: - self.checkpointer.add_recoverable( - "wav2vec_opt", self.wav2vec_optimizer - ) - - self.model_optimizer = self.hparams.model_opt_class( - self.hparams.model.parameters() - ) - - if self.checkpointer is not None: - self.checkpointer.add_recoverable("modelopt", self.model_optimizer) - - def zero_grad(self, set_to_none=False): - if not self.hparams.wav2vec2.freeze: - self.wav2vec_optimizer.zero_grad(set_to_none) - self.model_optimizer.zero_grad(set_to_none) - - -# Define custom data procedure -def dataio_prepare(hparams): - """This function prepares the datasets to be used in the brain class. - It also defines the data processing pipeline through user-defined functions.""" - - # 1. Define datasets - data_folder = hparams["data_folder"] - - train_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["train_csv"], replacements={"data_root": data_folder}, - ) - - if hparams["sorting"] == "ascending": - # we sort training data to speed up training and get better results. - train_data = train_data.filtered_sorted( - sort_key="duration", - key_max_value={"duration": hparams["avoid_if_longer_than"]}, - ) - # when sorting do not shuffle in dataloader ! otherwise is pointless - hparams["dataloader_options"]["shuffle"] = False - - elif hparams["sorting"] == "descending": - train_data = train_data.filtered_sorted( - sort_key="duration", - reverse=True, - key_max_value={"duration": hparams["avoid_if_longer_than"]}, - ) - # when sorting do not shuffle in dataloader ! 
otherwise is pointless - hparams["dataloader_options"]["shuffle"] = False - - elif hparams["sorting"] == "random": - pass - - else: - raise NotImplementedError( - "sorting must be random, ascending or descending" - ) - - valid_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["valid_csv"], replacements={"data_root": data_folder}, - ) - # We also sort the validation data so it is faster to validate - valid_data = valid_data.filtered_sorted(sort_key="duration") - test_datasets = {} - for csv_file in hparams["test_csv"]: - name = Path(csv_file).stem - test_datasets[name] = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=csv_file, replacements={"data_root": data_folder} - ) - test_datasets[name] = test_datasets[name].filtered_sorted( - sort_key="duration" - ) - - datasets = [train_data, valid_data] + [i for k, i in test_datasets.items()] - - - # 2. Define audio pipeline: - @sb.utils.data_pipeline.takes("wav") - @sb.utils.data_pipeline.provides("sig") - def audio_pipeline(wav): - info = torchaudio.info(wav) - sig = sb.dataio.dataio.read_audio(wav) - resampled = torchaudio.transforms.Resample( - info.sample_rate, hparams["sample_rate"], - )(sig) - return resampled - - sb.dataio.dataset.add_dynamic_item(datasets, audio_pipeline) - label_encoder = sb.dataio.encoder.CTCTextEncoder() - - # 3. Define text pipeline: - @sb.utils.data_pipeline.takes("wrd") - @sb.utils.data_pipeline.provides( - "wrd", "char_list", "tokens_list", "tokens" - ) - def text_pipeline(wrd): - yield wrd - char_list = list(wrd) - yield char_list - tokens_list = label_encoder.encode_sequence(char_list) - yield tokens_list - tokens = torch.LongTensor(tokens_list) - yield tokens - - sb.dataio.dataset.add_dynamic_item(datasets, text_pipeline) - lab_enc_file = os.path.join(hparams["save_folder"], "label_encoder.txt") - special_labels = { - "blank_label": hparams["blank_index"], - "unk_label": hparams["unk_index"] - } - label_encoder.load_or_create( - path=lab_enc_file, - from_didatasets=[train_data], - output_key="char_list", - special_labels=special_labels, - sequence_input=True, - ) - - # 4. 
Set output: - sb.dataio.dataset.set_output_keys( - datasets, ["id", "sig", "wrd", "char_list", "tokens"], - ) - return train_data, valid_data,test_datasets, label_encoder - - -if __name__ == "__main__": - - # Load hyperparameters file with command-line overrides - hparams_file, run_opts, overrides = sb.parse_arguments(sys.argv[1:]) - with open(hparams_file) as fin: - hparams = load_hyperpyyaml(fin, overrides) - - # If --distributed_launch then - # create ddp_group with the right communication protocol - sb.utils.distributed.ddp_init_group(run_opts) - - - # Create experiment directory - sb.create_experiment_directory( - experiment_directory=hparams["output_folder"], - hyperparams_to_save=hparams_file, - overrides=overrides, - ) - - # Due to DDP, we do the preparation ONLY on the main python process - # Defining tokenizer and loading it - # Create the datasets objects as well as tokenization and encoding :-D - train_data, valid_data, test_datasets, label_encoder = dataio_prepare(hparams) - if hparams["use_language_modelling"]: - print("using langauge_modeeling") - from pyctcdecode import build_ctcdecoder - ind2lab = label_encoder.ind2lab - print(ind2lab) - labels = [ind2lab[x] for x in range(len(ind2lab))] - labels = [""] + labels[1:-1] + ["1"] - # Replace the token with a blank character, needed for PyCTCdecode - print(labels) - decoder = build_ctcdecoder( - labels, - kenlm_model_path=hparams["ngram_lm_path"], # .arpa or .bin - alpha=0.5, # Default by KenLM - beta=1.0, # Default by KenLM - ) - # Trainer initialization - asr_brain = ASR( - modules=hparams["modules"], - hparams=hparams, - run_opts=run_opts, - checkpointer=hparams["checkpointer"], - ) - - # Adding objects to trainer. - asr_brain.tokenizer = label_encoder - - # Training - asr_brain.fit( - asr_brain.hparams.epoch_counter, - train_data, - valid_data, - train_loader_kwargs=hparams["dataloader_options"], - valid_loader_kwargs=hparams["test_dataloader_options"], - ) - - # Test - for k in test_datasets.keys(): # keys are test_clean, test_other etc - asr_brain.hparams.wer_file = os.path.join( - hparams["output_folder"], "wer_{}.txt".format(k) - ) - asr_brain.evaluate( - test_datasets[k], test_loader_kwargs=hparams["test_dataloader_options"] - ) - diff --git a/spaces/Salesforce/EDICT/my_diffusers/models/unet_blocks.py b/spaces/Salesforce/EDICT/my_diffusers/models/unet_blocks.py deleted file mode 100644 index 9e062165357c33d9b2f0bec13a66204c2e7e7833..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_diffusers/models/unet_blocks.py +++ /dev/null @@ -1,1481 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and - -import numpy as np - -# limitations under the License. 
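# --- Editorial sketch (not part of either file in this diff) ------------------
# The training script above builds a beam-search CTC decoder with pyctcdecode,
# optionally fused with a KenLM n-gram language model. A minimal, self-contained
# version of that decoding step is sketched below; the label set, the [T, V]
# log-probability matrix and the (absent) LM path are placeholders, not values
# taken from the original recipe.
import numpy as np
from pyctcdecode import build_ctcdecoder

labels = ["", "a", "b", "c", " "]  # index 0 is the CTC blank, as in the recipe
log_probs = np.log(np.full((50, len(labels)), 1.0 / len(labels)))  # dummy acoustic output

decoder = build_ctcdecoder(
    labels,
    kenlm_model_path=None,  # point to an .arpa/.bin file to enable LM fusion
    alpha=0.5,              # LM weight (used only when an LM is given)
    beta=1.0,               # word-insertion bonus
)
print(decoder.decode(log_probs))  # best beam as a plain string
# ------------------------------------------------------------------------------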
-import torch -from torch import nn - -from .attention import AttentionBlock, SpatialTransformer -from .resnet import Downsample2D, FirDownsample2D, FirUpsample2D, ResnetBlock2D, Upsample2D - - -def get_down_block( - down_block_type, - num_layers, - in_channels, - out_channels, - temb_channels, - add_downsample, - resnet_eps, - resnet_act_fn, - attn_num_head_channels, - cross_attention_dim=None, - downsample_padding=None, -): - down_block_type = down_block_type[7:] if down_block_type.startswith("UNetRes") else down_block_type - if down_block_type == "DownBlock2D": - return DownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - downsample_padding=downsample_padding, - ) - elif down_block_type == "AttnDownBlock2D": - return AttnDownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - downsample_padding=downsample_padding, - attn_num_head_channels=attn_num_head_channels, - ) - elif down_block_type == "CrossAttnDownBlock2D": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for CrossAttnDownBlock2D") - return CrossAttnDownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - downsample_padding=downsample_padding, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - ) - elif down_block_type == "SkipDownBlock2D": - return SkipDownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - downsample_padding=downsample_padding, - ) - elif down_block_type == "AttnSkipDownBlock2D": - return AttnSkipDownBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - downsample_padding=downsample_padding, - attn_num_head_channels=attn_num_head_channels, - ) - elif down_block_type == "DownEncoderBlock2D": - return DownEncoderBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - downsample_padding=downsample_padding, - ) - - -def get_up_block( - up_block_type, - num_layers, - in_channels, - out_channels, - prev_output_channel, - temb_channels, - add_upsample, - resnet_eps, - resnet_act_fn, - attn_num_head_channels, - cross_attention_dim=None, -): - up_block_type = up_block_type[7:] if up_block_type.startswith("UNetRes") else up_block_type - if up_block_type == "UpBlock2D": - return UpBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - ) - elif up_block_type == "CrossAttnUpBlock2D": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for CrossAttnUpBlock2D") - return CrossAttnUpBlock2D( - 
num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - ) - elif up_block_type == "AttnUpBlock2D": - return AttnUpBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - attn_num_head_channels=attn_num_head_channels, - ) - elif up_block_type == "SkipUpBlock2D": - return SkipUpBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - ) - elif up_block_type == "AttnSkipUpBlock2D": - return AttnSkipUpBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - attn_num_head_channels=attn_num_head_channels, - ) - elif up_block_type == "UpDecoderBlock2D": - return UpDecoderBlock2D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - ) - raise ValueError(f"{up_block_type} does not exist.") - - -class UNetMidBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - attention_type="default", - output_scale_factor=1.0, - **kwargs, - ): - super().__init__() - - self.attention_type = attention_type - resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - - # there is always at least one resnet - resnets = [ - ResnetBlock2D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ] - attentions = [] - - for _ in range(num_layers): - attentions.append( - AttentionBlock( - in_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - num_groups=resnet_groups, - ) - ) - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - def forward(self, hidden_states, temb=None, encoder_states=None): - hidden_states = self.resnets[0](hidden_states, temb) - for attn, resnet in zip(self.attentions, self.resnets[1:]): - if self.attention_type == "default": - hidden_states = attn(hidden_states) - else: 
- hidden_states = attn(hidden_states, encoder_states) - hidden_states = resnet(hidden_states, temb) - - return hidden_states - - -class UNetMidBlock2DCrossAttn(nn.Module): - def __init__( - self, - in_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - attention_type="default", - output_scale_factor=1.0, - cross_attention_dim=1280, - **kwargs, - ): - super().__init__() - - self.attention_type = attention_type - self.attn_num_head_channels = attn_num_head_channels - resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - - # there is always at least one resnet - resnets = [ - ResnetBlock2D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ] - attentions = [] - - for _ in range(num_layers): - attentions.append( - SpatialTransformer( - in_channels, - attn_num_head_channels, - in_channels // attn_num_head_channels, - depth=1, - context_dim=cross_attention_dim, - ) - ) - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - def set_attention_slice(self, slice_size): - if slice_size is not None and self.attn_num_head_channels % slice_size != 0: - raise ValueError( - f"Make sure slice_size {slice_size} is a divisor of " - f"the number of heads used in cross_attention {self.attn_num_head_channels}" - ) - if slice_size is not None and slice_size > self.attn_num_head_channels: - raise ValueError( - f"Chunk_size {slice_size} has to be smaller or equal to " - f"the number of heads used in cross_attention {self.attn_num_head_channels}" - ) - - for attn in self.attentions: - attn._set_attention_slice(slice_size) - - def forward(self, hidden_states, temb=None, encoder_hidden_states=None): - hidden_states = self.resnets[0](hidden_states, temb) - for attn, resnet in zip(self.attentions, self.resnets[1:]): - hidden_states = attn(hidden_states, encoder_hidden_states) - hidden_states = resnet(hidden_states, temb) - - return hidden_states - - -class AttnDownBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - attention_type="default", - output_scale_factor=1.0, - downsample_padding=1, - add_downsample=True, - ): - super().__init__() - resnets = [] - attentions = [] - - self.attention_type = attention_type - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - 
groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - attentions.append( - AttentionBlock( - out_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - Downsample2D( - in_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - def forward(self, hidden_states, temb=None): - output_states = () - - for resnet, attn in zip(self.resnets, self.attentions): - hidden_states = resnet(hidden_states, temb) - hidden_states = attn(hidden_states) - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -class CrossAttnDownBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - attention_type="default", - output_scale_factor=1.0, - downsample_padding=1, - add_downsample=True, - ): - super().__init__() - resnets = [] - attentions = [] - - self.attention_type = attention_type - self.attn_num_head_channels = attn_num_head_channels - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - attentions.append( - SpatialTransformer( - out_channels, - attn_num_head_channels, - out_channels // attn_num_head_channels, - depth=1, - context_dim=cross_attention_dim, - ) - ) - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - Downsample2D( - in_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - def set_attention_slice(self, slice_size): - if slice_size is not None and self.attn_num_head_channels % slice_size != 0: - raise ValueError( - f"Make sure slice_size {slice_size} is a divisor of " - f"the number of heads used in cross_attention {self.attn_num_head_channels}" - ) - if slice_size is not None and slice_size > self.attn_num_head_channels: - raise ValueError( - f"Chunk_size {slice_size} has to be smaller or equal to " - f"the number of heads used in cross_attention {self.attn_num_head_channels}" - ) - - for attn in self.attentions: - attn._set_attention_slice(slice_size) - - def forward(self, hidden_states, temb=None, encoder_hidden_states=None): - output_states = () - - for resnet, attn in zip(self.resnets, self.attentions): - hidden_states = resnet(hidden_states, temb) - hidden_states = attn(hidden_states, 
context=encoder_hidden_states) - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -class DownBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_downsample=True, - downsample_padding=1, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - Downsample2D( - in_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - def forward(self, hidden_states, temb=None): - output_states = () - - for resnet in self.resnets: - hidden_states = resnet(hidden_states, temb) - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -class DownEncoderBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_downsample=True, - downsample_padding=1, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=None, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - Downsample2D( - in_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - def forward(self, hidden_states): - for resnet in self.resnets: - hidden_states = resnet(hidden_states, temb=None) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - return hidden_states - - -class AttnDownEncoderBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - 
add_downsample=True, - downsample_padding=1, - ): - super().__init__() - resnets = [] - attentions = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=None, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - attentions.append( - AttentionBlock( - out_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - num_groups=resnet_groups, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - Downsample2D( - in_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - def forward(self, hidden_states): - for resnet, attn in zip(self.resnets, self.attentions): - hidden_states = resnet(hidden_states, temb=None) - hidden_states = attn(hidden_states) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - return hidden_states - - -class AttnSkipDownBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - attention_type="default", - output_scale_factor=np.sqrt(2.0), - downsample_padding=1, - add_downsample=True, - ): - super().__init__() - self.attentions = nn.ModuleList([]) - self.resnets = nn.ModuleList([]) - - self.attention_type = attention_type - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - self.resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(in_channels // 4, 32), - groups_out=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - self.attentions.append( - AttentionBlock( - out_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - ) - ) - - if add_downsample: - self.resnet_down = ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - use_nin_shortcut=True, - down=True, - kernel="fir", - ) - self.downsamplers = nn.ModuleList([FirDownsample2D(in_channels, out_channels=out_channels)]) - self.skip_conv = nn.Conv2d(3, out_channels, kernel_size=(1, 1), stride=(1, 1)) - else: - self.resnet_down = None - self.downsamplers = None - self.skip_conv = None - - def forward(self, hidden_states, temb=None, skip_sample=None): - output_states = () - - for resnet, attn in zip(self.resnets, self.attentions): - hidden_states = resnet(hidden_states, temb) - hidden_states = attn(hidden_states) - output_states 
+= (hidden_states,) - - if self.downsamplers is not None: - hidden_states = self.resnet_down(hidden_states, temb) - for downsampler in self.downsamplers: - skip_sample = downsampler(skip_sample) - - hidden_states = self.skip_conv(skip_sample) + hidden_states - - output_states += (hidden_states,) - - return hidden_states, output_states, skip_sample - - -class SkipDownBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_pre_norm: bool = True, - output_scale_factor=np.sqrt(2.0), - add_downsample=True, - downsample_padding=1, - ): - super().__init__() - self.resnets = nn.ModuleList([]) - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - self.resnets.append( - ResnetBlock2D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(in_channels // 4, 32), - groups_out=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - if add_downsample: - self.resnet_down = ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - use_nin_shortcut=True, - down=True, - kernel="fir", - ) - self.downsamplers = nn.ModuleList([FirDownsample2D(in_channels, out_channels=out_channels)]) - self.skip_conv = nn.Conv2d(3, out_channels, kernel_size=(1, 1), stride=(1, 1)) - else: - self.resnet_down = None - self.downsamplers = None - self.skip_conv = None - - def forward(self, hidden_states, temb=None, skip_sample=None): - output_states = () - - for resnet in self.resnets: - hidden_states = resnet(hidden_states, temb) - output_states += (hidden_states,) - - if self.downsamplers is not None: - hidden_states = self.resnet_down(hidden_states, temb) - for downsampler in self.downsamplers: - skip_sample = downsampler(skip_sample) - - hidden_states = self.skip_conv(skip_sample) + hidden_states - - output_states += (hidden_states,) - - return hidden_states, output_states, skip_sample - - -class AttnUpBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attention_type="default", - attn_num_head_channels=1, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - attentions = [] - - self.attention_type = attention_type - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - 
output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - attentions.append( - AttentionBlock( - out_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None): - for resnet, attn in zip(self.resnets, self.attentions): - - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - hidden_states = resnet(hidden_states, temb) - hidden_states = attn(hidden_states) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states) - - return hidden_states - - -class CrossAttnUpBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - prev_output_channel: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - attention_type="default", - output_scale_factor=1.0, - downsample_padding=1, - add_upsample=True, - ): - super().__init__() - resnets = [] - attentions = [] - - self.attention_type = attention_type - self.attn_num_head_channels = attn_num_head_channels - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - attentions.append( - SpatialTransformer( - out_channels, - attn_num_head_channels, - out_channels // attn_num_head_channels, - depth=1, - context_dim=cross_attention_dim, - ) - ) - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - def set_attention_slice(self, slice_size): - if slice_size is not None and self.attn_num_head_channels % slice_size != 0: - raise ValueError( - f"Make sure slice_size {slice_size} is a divisor of " - f"the number of heads used in cross_attention {self.attn_num_head_channels}" - ) - if slice_size is not None and slice_size > self.attn_num_head_channels: - raise ValueError( - f"Chunk_size {slice_size} has to be smaller or equal to " - f"the number of heads used in cross_attention {self.attn_num_head_channels}" - ) - - for attn in self.attentions: - attn._set_attention_slice(slice_size) - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None, encoder_hidden_states=None): - for resnet, attn in zip(self.resnets, self.attentions): - - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - 
res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - hidden_states = resnet(hidden_states, temb) - hidden_states = attn(hidden_states, context=encoder_hidden_states) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states) - - return hidden_states - - -class UpBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None): - for resnet in self.resnets: - - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - hidden_states = resnet(hidden_states, temb) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states) - - return hidden_states - - -class UpDecoderBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - input_channels = in_channels if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=input_channels, - out_channels=out_channels, - temb_channels=None, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - def forward(self, hidden_states): - for resnet in self.resnets: - hidden_states = resnet(hidden_states, temb=None) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states) - - return hidden_states - - -class AttnUpDecoderBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - 
resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - attentions = [] - - for i in range(num_layers): - input_channels = in_channels if i == 0 else out_channels - - resnets.append( - ResnetBlock2D( - in_channels=input_channels, - out_channels=out_channels, - temb_channels=None, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - attentions.append( - AttentionBlock( - out_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - num_groups=resnet_groups, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList([Upsample2D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - def forward(self, hidden_states): - for resnet, attn in zip(self.resnets, self.attentions): - hidden_states = resnet(hidden_states, temb=None) - hidden_states = attn(hidden_states) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states) - - return hidden_states - - -class AttnSkipUpBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - attention_type="default", - output_scale_factor=np.sqrt(2.0), - upsample_padding=1, - add_upsample=True, - ): - super().__init__() - self.attentions = nn.ModuleList([]) - self.resnets = nn.ModuleList([]) - - self.attention_type = attention_type - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - self.resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(resnet_in_channels + res_skip_channels // 4, 32), - groups_out=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.attentions.append( - AttentionBlock( - out_channels, - num_head_channels=attn_num_head_channels, - rescale_output_factor=output_scale_factor, - eps=resnet_eps, - ) - ) - - self.upsampler = FirUpsample2D(in_channels, out_channels=out_channels) - if add_upsample: - self.resnet_up = ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(out_channels // 4, 32), - groups_out=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - use_nin_shortcut=True, - up=True, - kernel="fir", - ) - self.skip_conv = nn.Conv2d(out_channels, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) - self.skip_norm = 
torch.nn.GroupNorm( - num_groups=min(out_channels // 4, 32), num_channels=out_channels, eps=resnet_eps, affine=True - ) - self.act = nn.SiLU() - else: - self.resnet_up = None - self.skip_conv = None - self.skip_norm = None - self.act = None - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None, skip_sample=None): - for resnet in self.resnets: - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - hidden_states = resnet(hidden_states, temb) - - hidden_states = self.attentions[0](hidden_states) - - if skip_sample is not None: - skip_sample = self.upsampler(skip_sample) - else: - skip_sample = 0 - - if self.resnet_up is not None: - skip_sample_states = self.skip_norm(hidden_states) - skip_sample_states = self.act(skip_sample_states) - skip_sample_states = self.skip_conv(skip_sample_states) - - skip_sample = skip_sample + skip_sample_states - - hidden_states = self.resnet_up(hidden_states, temb) - - return hidden_states, skip_sample - - -class SkipUpBlock2D(nn.Module): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_pre_norm: bool = True, - output_scale_factor=np.sqrt(2.0), - add_upsample=True, - upsample_padding=1, - ): - super().__init__() - self.resnets = nn.ModuleList([]) - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - self.resnets.append( - ResnetBlock2D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min((resnet_in_channels + res_skip_channels) // 4, 32), - groups_out=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.upsampler = FirUpsample2D(in_channels, out_channels=out_channels) - if add_upsample: - self.resnet_up = ResnetBlock2D( - in_channels=out_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=min(out_channels // 4, 32), - groups_out=min(out_channels // 4, 32), - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - use_nin_shortcut=True, - up=True, - kernel="fir", - ) - self.skip_conv = nn.Conv2d(out_channels, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) - self.skip_norm = torch.nn.GroupNorm( - num_groups=min(out_channels // 4, 32), num_channels=out_channels, eps=resnet_eps, affine=True - ) - self.act = nn.SiLU() - else: - self.resnet_up = None - self.skip_conv = None - self.skip_norm = None - self.act = None - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None, skip_sample=None): - for resnet in self.resnets: - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - hidden_states = resnet(hidden_states, temb) - - if skip_sample is not None: - skip_sample = 
self.upsampler(skip_sample) - else: - skip_sample = 0 - - if self.resnet_up is not None: - skip_sample_states = self.skip_norm(hidden_states) - skip_sample_states = self.act(skip_sample_states) - skip_sample_states = self.skip_conv(skip_sample_states) - - skip_sample = skip_sample + skip_sample_states - - hidden_states = self.resnet_up(hidden_states, temb) - - return hidden_states, skip_sample diff --git a/spaces/Salesforce/EDICT/my_half_diffusers/hub_utils.py b/spaces/Salesforce/EDICT/my_half_diffusers/hub_utils.py deleted file mode 100644 index c07329e36fe7a8826b0f1fb22396819b220e1b58..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_half_diffusers/hub_utils.py +++ /dev/null @@ -1,197 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import os -import shutil -from pathlib import Path -from typing import Optional - -from huggingface_hub import HfFolder, Repository, whoami - -from .pipeline_utils import DiffusionPipeline -from .utils import is_modelcards_available, logging - - -if is_modelcards_available(): - from modelcards import CardData, ModelCard - - -logger = logging.get_logger(__name__) - - -MODEL_CARD_TEMPLATE_PATH = Path(__file__).parent / "utils" / "model_card_template.md" - - -def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None): - if token is None: - token = HfFolder.get_token() - if organization is None: - username = whoami(token)["name"] - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - - -def init_git_repo(args, at_init: bool = False): - """ - Args: - Initializes a git repo in `args.hub_model_id`. - at_init (`bool`, *optional*, defaults to `False`): - Whether this function is called before any training or not. If `self.args.overwrite_output_dir` is `True` - and `at_init` is `True`, the path to the repo (which is `self.args.output_dir`) might be wiped out. 
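# --- Editorial sketch (not part of either file in this diff) ------------------
# The down/up blocks defined in unet_blocks.py above share one bookkeeping
# pattern: each down block appends its intermediate hidden states to a tuple,
# and the matching up block pops that tuple from the end and concatenates each
# entry with its input along the channel axis before running a resnet. The toy
# modules below stand in for ResnetBlock2D just to make that data flow explicit;
# channel counts and spatial sizes are illustrative only.
import torch
from torch import nn

down = nn.Conv2d(4, 4, 3, padding=1)     # stand-in for a down-block resnet
up = nn.Conv2d(8, 4, 3, padding=1)       # consumes [hidden, skip] concatenated

x = torch.randn(1, 4, 32, 32)
skips = ()                               # mirrors `output_states` on the way down
for _ in range(3):
    x = down(x)
    skips += (x,)

for _ in range(3):                       # mirrors UpBlock2D.forward on the way up
    skip = skips[-1]
    skips = skips[:-1]
    x = up(torch.cat([x, skip], dim=1))  # channel-wise concat, then resnet

print(x.shape)                           # torch.Size([1, 4, 32, 32])
# ------------------------------------------------------------------------------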
- """ - if hasattr(args, "local_rank") and args.local_rank not in [-1, 0]: - return - hub_token = args.hub_token if hasattr(args, "hub_token") else None - use_auth_token = True if hub_token is None else hub_token - if not hasattr(args, "hub_model_id") or args.hub_model_id is None: - repo_name = Path(args.output_dir).absolute().name - else: - repo_name = args.hub_model_id - if "/" not in repo_name: - repo_name = get_full_repo_name(repo_name, token=hub_token) - - try: - repo = Repository( - args.output_dir, - clone_from=repo_name, - use_auth_token=use_auth_token, - private=args.hub_private_repo, - ) - except EnvironmentError: - if args.overwrite_output_dir and at_init: - # Try again after wiping output_dir - shutil.rmtree(args.output_dir) - repo = Repository( - args.output_dir, - clone_from=repo_name, - use_auth_token=use_auth_token, - ) - else: - raise - - repo.git_pull() - - # By default, ignore the checkpoint folders - if not os.path.exists(os.path.join(args.output_dir, ".gitignore")): - with open(os.path.join(args.output_dir, ".gitignore"), "w", encoding="utf-8") as writer: - writer.writelines(["checkpoint-*/"]) - - return repo - - -def push_to_hub( - args, - pipeline: DiffusionPipeline, - repo: Repository, - commit_message: Optional[str] = "End of training", - blocking: bool = True, - **kwargs, -) -> str: - """ - Parameters: - Upload *self.model* and *self.tokenizer* to the 🤗 model hub on the repo *self.args.hub_model_id*. - commit_message (`str`, *optional*, defaults to `"End of training"`): - Message to commit while pushing. - blocking (`bool`, *optional*, defaults to `True`): - Whether the function should return only when the `git push` has finished. - kwargs: - Additional keyword arguments passed along to [`create_model_card`]. - Returns: - The url of the commit of your model in the given repository if `blocking=False`, a tuple with the url of the - commit and an object to track the progress of the commit if `blocking=True` - """ - - if not hasattr(args, "hub_model_id") or args.hub_model_id is None: - model_name = Path(args.output_dir).name - else: - model_name = args.hub_model_id.split("/")[-1] - - output_dir = args.output_dir - os.makedirs(output_dir, exist_ok=True) - logger.info(f"Saving pipeline checkpoint to {output_dir}") - pipeline.save_pretrained(output_dir) - - # Only push from one node. - if hasattr(args, "local_rank") and args.local_rank not in [-1, 0]: - return - - # Cancel any async push in progress if blocking=True. The commits will all be pushed together. - if ( - blocking - and len(repo.command_queue) > 0 - and repo.command_queue[-1] is not None - and not repo.command_queue[-1].is_done - ): - repo.command_queue[-1]._process.kill() - - git_head_commit_url = repo.push_to_hub(commit_message=commit_message, blocking=blocking, auto_lfs_prune=True) - # push separately the model card to be independent from the rest of the model - create_model_card(args, model_name=model_name) - try: - repo.push_to_hub(commit_message="update model card README.md", blocking=blocking, auto_lfs_prune=True) - except EnvironmentError as exc: - logger.error(f"Error pushing update to the model card. Please read logs and retry.\n${exc}") - - return git_head_commit_url - - -def create_model_card(args, model_name): - if not is_modelcards_available: - raise ValueError( - "Please make sure to have `modelcards` installed when using the `create_model_card` function. You can" - " install the package with `pip install modelcards`." 
- ) - - if hasattr(args, "local_rank") and args.local_rank not in [-1, 0]: - return - - hub_token = args.hub_token if hasattr(args, "hub_token") else None - repo_name = get_full_repo_name(model_name, token=hub_token) - - model_card = ModelCard.from_template( - card_data=CardData( # Card metadata object that will be converted to YAML block - language="en", - license="apache-2.0", - library_name="diffusers", - tags=[], - datasets=args.dataset_name, - metrics=[], - ), - template_path=MODEL_CARD_TEMPLATE_PATH, - model_name=model_name, - repo_name=repo_name, - dataset_name=args.dataset_name if hasattr(args, "dataset_name") else None, - learning_rate=args.learning_rate, - train_batch_size=args.train_batch_size, - eval_batch_size=args.eval_batch_size, - gradient_accumulation_steps=args.gradient_accumulation_steps - if hasattr(args, "gradient_accumulation_steps") - else None, - adam_beta1=args.adam_beta1 if hasattr(args, "adam_beta1") else None, - adam_beta2=args.adam_beta2 if hasattr(args, "adam_beta2") else None, - adam_weight_decay=args.adam_weight_decay if hasattr(args, "adam_weight_decay") else None, - adam_epsilon=args.adam_epsilon if hasattr(args, "adam_epsilon") else None, - lr_scheduler=args.lr_scheduler if hasattr(args, "lr_scheduler") else None, - lr_warmup_steps=args.lr_warmup_steps if hasattr(args, "lr_warmup_steps") else None, - ema_inv_gamma=args.ema_inv_gamma if hasattr(args, "ema_inv_gamma") else None, - ema_power=args.ema_power if hasattr(args, "ema_power") else None, - ema_max_decay=args.ema_max_decay if hasattr(args, "ema_max_decay") else None, - mixed_precision=args.mixed_precision, - ) - - card_path = os.path.join(args.output_dir, "README.md") - model_card.save(card_path) diff --git a/spaces/SamerKharboush/chatGPT-Sam-Turbo/assets/custom.js b/spaces/SamerKharboush/chatGPT-Sam-Turbo/assets/custom.js deleted file mode 100644 index 7b1761043149ff97ca498501c87a0d15db5258ee..0000000000000000000000000000000000000000 --- a/spaces/SamerKharboush/chatGPT-Sam-Turbo/assets/custom.js +++ /dev/null @@ -1 +0,0 @@ -// custom javascript here \ No newline at end of file diff --git a/spaces/ShiwenNi/ChatResponse/app.py b/spaces/ShiwenNi/ChatResponse/app.py deleted file mode 100644 index 62684313a6fd7b6a492fa901d1ca9928b9c79d86..0000000000000000000000000000000000000000 --- a/spaces/ShiwenNi/ChatResponse/app.py +++ /dev/null @@ -1,137 +0,0 @@ -import numpy as np -import os -import re -import datetime -import time -import openai, tenacity -import argparse -import configparser -import json -import tiktoken -from get_paper_from_pdf import Paper -import gradio - -# 定义Response类 -class Response: - # 初始化方法,设置属性 - def __init__(self, api, comment, language): - self.api = api - self.comment = comment - self.language = language - self.max_token_num = 14096 - self.encoding = tiktoken.get_encoding("gpt2") - - - @tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10), - stop=tenacity.stop_after_attempt(5), - reraise=True) - def chat_response(self, comment): - openai.api_key = self.api - response_prompt_token = 1000 - text_token = len(self.encoding.encode(comment)) - input_text_index = int(len(comment)*(self.max_token_num-response_prompt_token)/text_token) - input_text = "This is the review comments:" + comment[:input_text_index] - messages=[ - {"role": "system", "content": """You are the author, you submitted a paper, and the reviewers gave the review comments. - Please reply with what we have done, not what we will do. 
- You need to extract questions from the review comments one by one, and then respond point-to-point to the reviewers’ concerns. - You need to determine for yourself how many reviewers there are and how many questions each reviewer has. - Must be output in {}. Follow the format of the output later: - - Response to reviewers - #1 reviewer - Concern #1: xxxx - Author response: xxxxx - Concern #2: xxxx - Author response: xxxxx - ... - #2 reviewer - Concern #1: xxxx - Author response: xxxxx - Concern #2: xxxx - Author response: xxxxx - ... - #3 reviewer - Concern #1: xxxx - Author response: xxxxx - Concern #2: xxxx - Author response: xxxxx - ... - - """.format(self.language) - - }, - {"role": "user", "content": input_text}, - ] - try: - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo-16k", - messages=messages, - ) - result = '' - for choice in response.choices: - result += choice.message.content - usage = response.usage.total_tokens - except Exception as e: - # 处理其他的异常 - result = "非常抱歉>_<,生了一个错误:"+ str(e) - usage = 'xxxxx' - print("********"*10) - print(result) - print("********"*10) - return result, usage - - - -def main(api, comment, language): - start_time = time.time() - if not api or not comment: - return "请输入API-key以及审稿意见!" - else: - Response1 = Response(api, comment, language) - # 开始判断是路径还是文件: - response, total_token_used = Response1.chat_response(comment) - time_used = time.time() - start_time - output2 ="使用token数:"+ str(total_token_used)+"\n花费时间:"+ str(round(time_used, 2)) +"秒" - return response, output2 - - -######################################################################################################## -# 标题 -title = "🤖ChatResponse🤖" -# 描述 - -description = '''
    - - -ChatResponse is an AI assistant that automatically generates author responses based on reviewers' comments. Its main uses are: - -⭐️From the review comments you provide, ChatResponse automatically extracts each reviewer's questions and concerns and generates point-by-point replies. - -If the Space feels slow, you can click "Duplicate this Space" in the upper-right corner to copy ChatResponse into your own Space! - -This project's [Github](https://github.com/nishiwen1214/ChatReviewer) welcomes Stars and Forks; sponsorship to help the project grow quickly is also welcome! 💗 - -
    -''' - -# 创建Gradio界面 -inp = [gradio.inputs.Textbox(label="请输入你的API-key(sk开头的字符串)", - default="", - type='password'), - gradio.inputs.Textbox(lines=5, - label="请输入要回复的审稿意见", - default="" - ), - gradio.inputs.Radio(choices=["English", "Chinese", "French", "German","Japenese"], - default="English", - label="选择输出语言"), -] - -chat_Response_gui = gradio.Interface(fn=main, - inputs=inp, - outputs = [gradio.Textbox(lines=11, label="回复结果"), gradio.Textbox(lines=2, label="资源统计")], - title=title, - description=description) - -# Start server -chat_Response_gui .launch(quiet=True, show_api=False) \ No newline at end of file diff --git a/spaces/SpacesExamples/fastapi_t5/README.md b/spaces/SpacesExamples/fastapi_t5/README.md deleted file mode 100644 index 3fc458213f777f48d7b806f18605225101a518b1..0000000000000000000000000000000000000000 --- a/spaces/SpacesExamples/fastapi_t5/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Fastapi T5 -emoji: 🐢 -colorFrom: purple -colorTo: blue -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/data/audio_utils.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/data/audio_utils.py deleted file mode 100644 index 565b63a4ef78dcd802dda932b42ebe518ffe7397..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/data/audio_utils.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Various utilities for audio convertion (pcm format, sample rate and channels), -and volume normalization.""" -import sys -import typing as tp - -import julius -import torch -import torchaudio - - -def convert_audio_channels(wav: torch.Tensor, channels: int = 2) -> torch.Tensor: - """Convert audio to the given number of channels. - - Args: - wav (torch.Tensor): Audio wave of shape [B, C, T]. - channels (int): Expected number of channels as output. - Returns: - torch.Tensor: Downmixed or unchanged audio wave [B, C, T]. - """ - *shape, src_channels, length = wav.shape - if src_channels == channels: - pass - elif channels == 1: - # Case 1: - # The caller asked 1-channel audio, and the stream has multiple - # channels, downmix all channels. - wav = wav.mean(dim=-2, keepdim=True) - elif src_channels == 1: - # Case 2: - # The caller asked for multiple channels, but the input file has - # a single channel, replicate the audio over all channels. - wav = wav.expand(*shape, channels, length) - elif src_channels >= channels: - # Case 3: - # The caller asked for multiple channels, and the input file has - # more channels than requested. In that case return the first channels. - wav = wav[..., :channels, :] - else: - # Case 4: What is a reasonable choice here? 
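# --- Editorial sketch (not part of either file in this diff) ------------------
# Quick illustration of the channel rules implemented in convert_audio_channels
# above, assuming this module is importable as `audiocraft.data.audio_utils`
# (the package this file mirrors); the tensors are dummies.
import torch
from audiocraft.data.audio_utils import convert_audio_channels

stereo = torch.randn(1, 2, 16000)                # [B, C, T]
mono = convert_audio_channels(stereo, 1)         # case 1: downmix by averaging
both = convert_audio_channels(mono, 2)           # case 2: replicate the mono channel
first = convert_audio_channels(torch.randn(1, 4, 16000), 2)  # case 3: keep first channels
print(mono.shape, both.shape, first.shape)       # (1,1,16000) (1,2,16000) (1,2,16000)
# ------------------------------------------------------------------------------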
- raise ValueError('The audio file has less channels than requested but is not mono.') - return wav - - -def convert_audio(wav: torch.Tensor, from_rate: float, - to_rate: float, to_channels: int) -> torch.Tensor: - """Convert audio to new sample rate and number of audio channels.""" - wav = julius.resample_frac(wav, int(from_rate), int(to_rate)) - wav = convert_audio_channels(wav, to_channels) - return wav - - -def normalize_loudness(wav: torch.Tensor, sample_rate: int, loudness_headroom_db: float = 14, - loudness_compressor: bool = False, energy_floor: float = 2e-3): - """Normalize an input signal to a user loudness in dB LKFS. - Audio loudness is defined according to the ITU-R BS.1770-4 recommendation. - - Args: - wav (torch.Tensor): Input multichannel audio data. - sample_rate (int): Sample rate. - loudness_headroom_db (float): Target loudness of the output in dB LUFS. - loudness_compressor (bool): Uses tanh for soft clipping. - energy_floor (float): anything below that RMS level will not be rescaled. - Returns: - torch.Tensor: Loudness normalized output data. - """ - energy = wav.pow(2).mean().sqrt().item() - if energy < energy_floor: - return wav - transform = torchaudio.transforms.Loudness(sample_rate) - input_loudness_db = transform(wav).item() - # calculate the gain needed to scale to the desired loudness level - delta_loudness = -loudness_headroom_db - input_loudness_db - gain = 10.0 ** (delta_loudness / 20.0) - output = gain * wav - if loudness_compressor: - output = torch.tanh(output) - assert output.isfinite().all(), (input_loudness_db, wav.pow(2).mean().sqrt()) - return output - - -def _clip_wav(wav: torch.Tensor, log_clipping: bool = False, stem_name: tp.Optional[str] = None) -> None: - """Utility function to clip the audio with logging if specified.""" - max_scale = wav.abs().max() - if log_clipping and max_scale > 1: - clamp_prob = (wav.abs() > 1).float().mean().item() - print(f"CLIPPING {stem_name or ''} happening with proba (a bit of clipping is okay):", - clamp_prob, "maximum scale: ", max_scale.item(), file=sys.stderr) - #wav.clamp_(-1, 1) - wav = wav.clone().clamp_(-1, 1) - - -def normalize_audio(wav: torch.Tensor, normalize: bool = True, - strategy: str = 'peak', peak_clip_headroom_db: float = 1, - rms_headroom_db: float = 18, loudness_headroom_db: float = 14, - loudness_compressor: bool = False, log_clipping: bool = False, - sample_rate: tp.Optional[int] = None, - stem_name: tp.Optional[str] = None) -> torch.Tensor: - """Normalize the audio according to the prescribed strategy (see after). - - Args: - wav (torch.Tensor): Audio data. - normalize (bool): if `True` (default), normalizes according to the prescribed - strategy (see after). If `False`, the strategy is only used in case clipping - would happen. - strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak', - i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square - with extra headroom to avoid clipping. 'clip' just clips. - peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy. - rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger - than the `peak_clip` one to avoid further clipping. - loudness_headroom_db (float): Target loudness for loudness normalization. - loudness_compressor (bool): If True, uses tanh based soft clipping. - log_clipping (bool): If True, basic logging on stderr when clipping still - occurs despite strategy (only for 'rms'). 
- sample_rate (int): Sample rate for the audio data (required for loudness). - stem_name (str, optional): Stem name for clipping logging. - Returns: - torch.Tensor: Normalized audio. - """ - scale_peak = 10 ** (-peak_clip_headroom_db / 20) - scale_rms = 10 ** (-rms_headroom_db / 20) - if strategy == 'peak': - rescaling = (scale_peak / wav.abs().max()) - if normalize or rescaling < 1: - wav = wav * rescaling - elif strategy == 'clip': - wav = wav.clamp(-scale_peak, scale_peak) - elif strategy == 'rms': - mono = wav.mean(dim=0) - rescaling = scale_rms / mono.pow(2).mean().sqrt() - if normalize or rescaling < 1: - wav = wav * rescaling - _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name) - elif strategy == 'loudness': - assert sample_rate is not None, "Loudness normalization requires sample rate." - wav = normalize_loudness(wav, sample_rate, loudness_headroom_db, loudness_compressor) - _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name) - else: - assert wav.abs().max() < 1 - assert strategy == '' or strategy == 'none', f"Unexpected strategy: '{strategy}'" - return wav - - -def f32_pcm(wav: torch.Tensor) -> torch.Tensor: - """Convert audio to float 32 bits PCM format. - """ - if wav.dtype.is_floating_point: - return wav - elif wav.dtype == torch.int16: - return wav.float() / 2**15 - elif wav.dtype == torch.int32: - return wav.float() / 2**31 - raise ValueError(f"Unsupported wav dtype: {wav.dtype}") - - -def i16_pcm(wav: torch.Tensor) -> torch.Tensor: - """Convert audio to int 16 bits PCM format. - - ..Warning:: There exist many formula for doing this conversion. None are perfect - due to the asymmetry of the int16 range. One either have possible clipping, DC offset, - or inconsistencies with f32_pcm. If the given wav doesn't have enough headroom, - it is possible that `i16_pcm(f32_pcm)) != Identity`. - """ - if wav.dtype.is_floating_point: - assert wav.abs().max() <= 1 - candidate = (wav * 2 ** 15).round() - if candidate.max() >= 2 ** 15: # clipping would occur - candidate = (wav * (2 ** 15 - 1)).round() - return candidate.short() - else: - assert wav.dtype == torch.int16 - return wav diff --git a/spaces/Sudhir87/Intervupro.ai/README.md b/spaces/Sudhir87/Intervupro.ai/README.md deleted file mode 100644 index 6ebe095215a96395842c49e57d91463444412468..0000000000000000000000000000000000000000 --- a/spaces/Sudhir87/Intervupro.ai/README.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: IntervuPro.ai - -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false ---- -IntervuPro.Ai is an innovative tool designed to assist individuals in preparing for job interviews using the power of GPT-3.5. With its intuitive interface, users can choose from three different modes to cater to their specific needs. - -Prepare for a Specific Interview: Users can simulate a job interviewer for a particular company, position, and round. IntervuPro.Ai provides detailed characteristics for both the job interview and the specific company's interview. It offers valuable insights into what to expect and how to approach the interview process. - -Understand the Requirements of a Specific Position: For those seeking to understand the job requirements better, IntervuPro.Ai acts as a talent recruiter. Users can input the position they are interested in, and the tool provides comprehensive behavior and technical requirements for the position. - -Analyze Resume: To gain a competitive edge, users can submit their resume, and IntervuPro.Ai serves as a talent recruiter again. 
It assesses the resume for a given position and suggests advantages and disadvantages. The tool offers improvement advice to enhance the resume's relevance and potential to match the position's requirements. - -Powered by OpenAI's GPT-3.5 model, IntervuPro.Ai leverages natural language processing to generate prompt-based responses tailored to the users' specific inquiries. It provides valuable and personalized feedback, ensuring individuals are better prepared and confident for their upcoming interviews. diff --git a/spaces/SumDimDimSum/yulet1de-hentaidiffusion/README.md b/spaces/SumDimDimSum/yulet1de-hentaidiffusion/README.md deleted file mode 100644 index 1b4ac8beb84c507542cd115ebe41c5b5c0bdac3f..0000000000000000000000000000000000000000 --- a/spaces/SumDimDimSum/yulet1de-hentaidiffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Yulet1de Hentaidiffusion -emoji: 🔥 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/WalImageFile.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/WalImageFile.py deleted file mode 100644 index e4f47aa04bc148f3ff151bec5595f8626833b938..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/WalImageFile.py +++ /dev/null @@ -1,123 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# WAL file handling -# -# History: -# 2003-04-23 fl created -# -# Copyright (c) 2003 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -""" -This reader is based on the specification available from: -https://www.flipcode.com/archives/Quake_2_BSP_File_Format.shtml -and has been tested with a few sample files found using google. - -.. note:: - This format cannot be automatically recognized, so the reader - is not registered for use with :py:func:`PIL.Image.open()`. - To open a WAL file, use the :py:func:`PIL.WalImageFile.open()` function instead. -""" - -from . import Image, ImageFile -from ._binary import i32le as i32 - - -class WalImageFile(ImageFile.ImageFile): - format = "WAL" - format_description = "Quake2 Texture" - - def _open(self): - self.mode = "P" - - # read header fields - header = self.fp.read(32 + 24 + 32 + 12) - self._size = i32(header, 32), i32(header, 36) - Image._decompression_bomb_check(self.size) - - # load pixel data - offset = i32(header, 40) - self.fp.seek(offset) - - # strings are null-terminated - self.info["name"] = header[:32].split(b"\0", 1)[0] - next_name = header[56 : 56 + 32].split(b"\0", 1)[0] - if next_name: - self.info["next_name"] = next_name - - def load(self): - if not self.im: - self.im = Image.core.new(self.mode, self.size) - self.frombytes(self.fp.read(self.size[0] * self.size[1])) - self.putpalette(quake2palette) - return Image.Image.load(self) - - -def open(filename): - """ - Load texture from a Quake2 WAL texture file. - - By default, a Quake2 standard palette is attached to the texture. - To override the palette, use the :py:func:`PIL.Image.Image.putpalette()` method. - - :param filename: WAL file name, or an opened file handle. - :returns: An image instance. 
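    A minimal usage sketch (the texture path and output file name below are
    hypothetical):

        from PIL import WalImageFile

        img = WalImageFile.open("textures/e1u1/floor1_3.wal")
        img.save("floor1_3.png")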
- """ - return WalImageFile(filename) - - -quake2palette = ( - # default palette taken from piffo 0.93 by Hans Häggström - b"\x01\x01\x01\x0b\x0b\x0b\x12\x12\x12\x17\x17\x17\x1b\x1b\x1b\x1e" - b"\x1e\x1e\x22\x22\x22\x26\x26\x26\x29\x29\x29\x2c\x2c\x2c\x2f\x2f" - b"\x2f\x32\x32\x32\x35\x35\x35\x37\x37\x37\x3a\x3a\x3a\x3c\x3c\x3c" - b"\x24\x1e\x13\x22\x1c\x12\x20\x1b\x12\x1f\x1a\x10\x1d\x19\x10\x1b" - b"\x17\x0f\x1a\x16\x0f\x18\x14\x0d\x17\x13\x0d\x16\x12\x0d\x14\x10" - b"\x0b\x13\x0f\x0b\x10\x0d\x0a\x0f\x0b\x0a\x0d\x0b\x07\x0b\x0a\x07" - b"\x23\x23\x26\x22\x22\x25\x22\x20\x23\x21\x1f\x22\x20\x1e\x20\x1f" - b"\x1d\x1e\x1d\x1b\x1c\x1b\x1a\x1a\x1a\x19\x19\x18\x17\x17\x17\x16" - b"\x16\x14\x14\x14\x13\x13\x13\x10\x10\x10\x0f\x0f\x0f\x0d\x0d\x0d" - b"\x2d\x28\x20\x29\x24\x1c\x27\x22\x1a\x25\x1f\x17\x38\x2e\x1e\x31" - b"\x29\x1a\x2c\x25\x17\x26\x20\x14\x3c\x30\x14\x37\x2c\x13\x33\x28" - b"\x12\x2d\x24\x10\x28\x1f\x0f\x22\x1a\x0b\x1b\x14\x0a\x13\x0f\x07" - b"\x31\x1a\x16\x30\x17\x13\x2e\x16\x10\x2c\x14\x0d\x2a\x12\x0b\x27" - b"\x0f\x0a\x25\x0f\x07\x21\x0d\x01\x1e\x0b\x01\x1c\x0b\x01\x1a\x0b" - b"\x01\x18\x0a\x01\x16\x0a\x01\x13\x0a\x01\x10\x07\x01\x0d\x07\x01" - b"\x29\x23\x1e\x27\x21\x1c\x26\x20\x1b\x25\x1f\x1a\x23\x1d\x19\x21" - b"\x1c\x18\x20\x1b\x17\x1e\x19\x16\x1c\x18\x14\x1b\x17\x13\x19\x14" - b"\x10\x17\x13\x0f\x14\x10\x0d\x12\x0f\x0b\x0f\x0b\x0a\x0b\x0a\x07" - b"\x26\x1a\x0f\x23\x19\x0f\x20\x17\x0f\x1c\x16\x0f\x19\x13\x0d\x14" - b"\x10\x0b\x10\x0d\x0a\x0b\x0a\x07\x33\x22\x1f\x35\x29\x26\x37\x2f" - b"\x2d\x39\x35\x34\x37\x39\x3a\x33\x37\x39\x30\x34\x36\x2b\x31\x34" - b"\x27\x2e\x31\x22\x2b\x2f\x1d\x28\x2c\x17\x25\x2a\x0f\x20\x26\x0d" - b"\x1e\x25\x0b\x1c\x22\x0a\x1b\x20\x07\x19\x1e\x07\x17\x1b\x07\x14" - b"\x18\x01\x12\x16\x01\x0f\x12\x01\x0b\x0d\x01\x07\x0a\x01\x01\x01" - b"\x2c\x21\x21\x2a\x1f\x1f\x29\x1d\x1d\x27\x1c\x1c\x26\x1a\x1a\x24" - b"\x18\x18\x22\x17\x17\x21\x16\x16\x1e\x13\x13\x1b\x12\x12\x18\x10" - b"\x10\x16\x0d\x0d\x12\x0b\x0b\x0d\x0a\x0a\x0a\x07\x07\x01\x01\x01" - b"\x2e\x30\x29\x2d\x2e\x27\x2b\x2c\x26\x2a\x2a\x24\x28\x29\x23\x27" - b"\x27\x21\x26\x26\x1f\x24\x24\x1d\x22\x22\x1c\x1f\x1f\x1a\x1c\x1c" - b"\x18\x19\x19\x16\x17\x17\x13\x13\x13\x10\x0f\x0f\x0d\x0b\x0b\x0a" - b"\x30\x1e\x1b\x2d\x1c\x19\x2c\x1a\x17\x2a\x19\x14\x28\x17\x13\x26" - b"\x16\x10\x24\x13\x0f\x21\x12\x0d\x1f\x10\x0b\x1c\x0f\x0a\x19\x0d" - b"\x0a\x16\x0b\x07\x12\x0a\x07\x0f\x07\x01\x0a\x01\x01\x01\x01\x01" - b"\x28\x29\x38\x26\x27\x36\x25\x26\x34\x24\x24\x31\x22\x22\x2f\x20" - b"\x21\x2d\x1e\x1f\x2a\x1d\x1d\x27\x1b\x1b\x25\x19\x19\x21\x17\x17" - b"\x1e\x14\x14\x1b\x13\x12\x17\x10\x0f\x13\x0d\x0b\x0f\x0a\x07\x07" - b"\x2f\x32\x29\x2d\x30\x26\x2b\x2e\x24\x29\x2c\x21\x27\x2a\x1e\x25" - b"\x28\x1c\x23\x26\x1a\x21\x25\x18\x1e\x22\x14\x1b\x1f\x10\x19\x1c" - b"\x0d\x17\x1a\x0a\x13\x17\x07\x10\x13\x01\x0d\x0f\x01\x0a\x0b\x01" - b"\x01\x3f\x01\x13\x3c\x0b\x1b\x39\x10\x20\x35\x14\x23\x31\x17\x23" - b"\x2d\x18\x23\x29\x18\x3f\x3f\x3f\x3f\x3f\x39\x3f\x3f\x31\x3f\x3f" - b"\x2a\x3f\x3f\x20\x3f\x3f\x14\x3f\x3c\x12\x3f\x39\x0f\x3f\x35\x0b" - b"\x3f\x32\x07\x3f\x2d\x01\x3d\x2a\x01\x3b\x26\x01\x39\x21\x01\x37" - b"\x1d\x01\x34\x1a\x01\x32\x16\x01\x2f\x12\x01\x2d\x0f\x01\x2a\x0b" - b"\x01\x27\x07\x01\x23\x01\x01\x1d\x01\x01\x17\x01\x01\x10\x01\x01" - b"\x3d\x01\x01\x19\x19\x3f\x3f\x01\x01\x01\x01\x3f\x16\x16\x13\x10" - b"\x10\x0f\x0d\x0d\x0b\x3c\x2e\x2a\x36\x27\x20\x30\x21\x18\x29\x1b" - b"\x10\x3c\x39\x37\x37\x32\x2f\x31\x2c\x28\x2b\x26\x21\x30\x22\x20" -) diff --git 
a/spaces/Suniilkumaar/MusicGen-updated/audiocraft/models/lm.py b/spaces/Suniilkumaar/MusicGen-updated/audiocraft/models/lm.py deleted file mode 100644 index c8aad8f06797eef3293605056e1de14d07c56c2a..0000000000000000000000000000000000000000 --- a/spaces/Suniilkumaar/MusicGen-updated/audiocraft/models/lm.py +++ /dev/null @@ -1,527 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass -from functools import partial -import logging -import math -import typing as tp - -import torch -from torch import nn - -from ..utils import utils -from ..modules.streaming import StreamingModule, State -from ..modules.transformer import StreamingTransformer, create_norm_fn -from ..modules.conditioners import ( - ConditionFuser, - ClassifierFreeGuidanceDropout, - AttributeDropout, - ConditioningProvider, - ConditioningAttributes, - ConditionType, -) -from ..modules.codebooks_patterns import CodebooksPatternProvider -from ..modules.activations import get_activation_fn - - -logger = logging.getLogger(__name__) -ConditionTensors = tp.Dict[str, ConditionType] -CFGConditions = tp.Union[ConditionTensors, tp.Tuple[ConditionTensors, ConditionTensors]] - - -def get_init_fn(method: str, input_dim: int, init_depth: tp.Optional[int] = None): - """LM layer initialization. - Inspired from xlformers: https://github.com/fairinternal/xlformers - - Args: - method (str): Method name for init function. Valid options are: - 'gaussian', 'uniform'. - input_dim (int): Input dimension of the initialized module. - init_depth (Optional[int]): Optional init depth value used to rescale - the standard deviation if defined. - """ - # Compute std - std = 1 / math.sqrt(input_dim) - # Rescale with depth - if init_depth is not None: - std = std / math.sqrt(2 * init_depth) - - if method == 'gaussian': - return partial( - torch.nn.init.trunc_normal_, mean=0.0, std=std, a=-3 * std, b=3 * std - ) - elif method == 'uniform': - bound = math.sqrt(3) * std # ensure the standard deviation is `std` - return partial(torch.nn.init.uniform_, a=-bound, b=bound) - else: - raise ValueError("Unsupported layer initialization method") - - -def init_layer(m: nn.Module, - method: str, - init_depth: tp.Optional[int] = None, - zero_bias_init: bool = False): - """Wrapper around ``get_init_fn`` for proper initialization of LM modules. - - Args: - m (nn.Module): Module to initialize. - method (str): Method name for the init function. - init_depth (Optional[int]): Optional init depth value used to rescale - the standard deviation if defined. - zero_bias_init (bool): Whether to initialize the bias to 0 or not. - """ - if isinstance(m, nn.Linear): - init_fn = get_init_fn(method, m.in_features, init_depth=init_depth) - if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16: - weight = m.weight.float() - init_fn(weight) - m.weight.data[:] = weight.half() - else: - init_fn(m.weight) - if zero_bias_init and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.Embedding): - init_fn = get_init_fn(method, m.embedding_dim, init_depth=None) - if m.weight.device.type == 'cpu' and m.weight.dtype == torch.float16: - weight = m.weight.float() - init_fn(weight) - m.weight.data[:] = weight.half() - else: - init_fn(m.weight) - - -class ScaledEmbedding(nn.Embedding): - """Boost learning rate for embeddings (with `scale`). 
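    Concretely, the optional `lr` passed to the constructor is surfaced through
    `make_optim_group()` below, which returns a parameter-group dict that a torch
    optimizer can consume with its own learning rate.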
- """ - def __init__(self, *args, lr=None, **kwargs): - super().__init__(*args, **kwargs) - self.lr = lr - - def make_optim_group(self): - group = {"params": list(self.parameters())} - if self.lr is not None: - group["lr"] = self.lr - return group - - -@dataclass -class LMOutput: - # The logits are already re-aligned with the input codes - # hence no extra shift is required, e.g. when computing CE - logits: torch.Tensor # [B, K, T, card] - mask: torch.Tensor # [B, K, T] - - -class LMModel(StreamingModule): - """Transformer-based language model on multiple streams of codes. - - Args: - pattern_provider (CodebooksPatternProvider): Pattern provider for codebook interleaving. - condition_provider (MusicConditioningProvider): Conditioning provider from metadata. - fuser (ConditionFuser): Fuser handling the fusing of conditions with language model input. - n_q (int): Number of parallel streams to model. - card (int): Cardinality, vocabulary size. - dim (int): Dimension of the transformer encoder. - num_heads (int): Number of heads for the transformer encoder. - hidden_scale (int): Scale for hidden feed forward dimension of the transformer encoder. - norm (str): Normalization method. - norm_first (bool): Use pre-norm instead of post-norm. - emb_lr (Optional[float]): Embedding-specific learning rate. - bias_proj (bool): Use bias for output projections. - weight_init (Optional[str]): Method for weight initialization. - depthwise_init (Optional[str]): Method for depthwise weight initialization. - zero_bias_init (bool): If true and bias in Linears, initialize bias to zeros. - cfg_dropout (float): Classifier-free guidance dropout. - cfg_coef (float): Classifier-free guidance coefficient. - attribute_dropout (dict): Attribute dropout probabilities. - two_step_cfg (bool): Whether to run classifier free-guidance with 2 distinct steps. - **kwargs: Additional parameters for the transformer encoder. 
- """ - def __init__(self, pattern_provider: CodebooksPatternProvider, condition_provider: ConditioningProvider, - fuser: ConditionFuser, n_q: int = 8, card: int = 1024, dim: int = 128, num_heads: int = 8, - hidden_scale: int = 4, norm: str = 'layer_norm', norm_first: bool = False, - emb_lr: tp.Optional[float] = None, bias_proj: bool = True, - weight_init: tp.Optional[str] = None, depthwise_init: tp.Optional[str] = None, - zero_bias_init: bool = False, cfg_dropout: float = 0, cfg_coef: float = 1.0, - attribute_dropout: tp.Dict[str, tp.Dict[str, float]] = {}, two_step_cfg: bool = False, - **kwargs): - super().__init__() - self.cfg_coef = cfg_coef - self.cfg_dropout = ClassifierFreeGuidanceDropout(p=cfg_dropout) - self.att_dropout = AttributeDropout(p=attribute_dropout) - self.condition_provider = condition_provider - self.fuser = fuser - self.card = card - embed_dim = self.card + 1 - self.n_q = n_q - self.dim = dim - self.pattern_provider = pattern_provider - self.two_step_cfg = two_step_cfg - self.emb = nn.ModuleList([ScaledEmbedding(embed_dim, dim, lr=emb_lr) for _ in range(n_q)]) - if 'activation' in kwargs: - kwargs['activation'] = get_activation_fn(kwargs['activation']) - self.transformer = StreamingTransformer( - d_model=dim, num_heads=num_heads, dim_feedforward=int(hidden_scale * dim), - norm=norm, norm_first=norm_first, **kwargs) - self.out_norm: tp.Optional[nn.Module] = None - if norm_first: - self.out_norm = create_norm_fn(norm, dim) - self.linears = nn.ModuleList([nn.Linear(dim, self.card, bias=bias_proj) for _ in range(n_q)]) - self._init_weights(weight_init, depthwise_init, zero_bias_init) - self._fsdp: tp.Optional[nn.Module] - self.__dict__['_fsdp'] = None - - def _init_weights(self, weight_init: tp.Optional[str], depthwise_init: tp.Optional[str], zero_bias_init: bool): - """Initialization of the transformer module weights. - - Args: - weight_init (Optional[str]): Weight initialization strategy. See ``get_init_fn`` for valid options. - depthwise_init (Optional[str]): Depwthwise initialization strategy. The following options are valid: - 'current' where the depth corresponds to the current layer index or 'global' where the total number - of layer is used as depth. If not set, no depthwise initialization strategy is used. - zero_bias_init (bool): Whether to initalize bias to zero or not. - """ - assert depthwise_init is None or depthwise_init in ['current', 'global'] - assert depthwise_init is None or weight_init is not None, \ - "If 'depthwise_init' is defined, a 'weight_init' method should be provided." 
- assert not zero_bias_init or weight_init is not None, \ - "If 'zero_bias_init', a 'weight_init' method should be provided" - - if weight_init is None: - return - - for emb_layer in self.emb: - init_layer(emb_layer, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init) - - for layer_idx, tr_layer in enumerate(self.transformer.layers): - depth = None - if depthwise_init == 'current': - depth = layer_idx + 1 - elif depthwise_init == 'global': - depth = len(self.transformer.layers) - init_fn = partial(init_layer, method=weight_init, init_depth=depth, zero_bias_init=zero_bias_init) - tr_layer.apply(init_fn) - - for linear in self.linears: - init_layer(linear, method=weight_init, init_depth=None, zero_bias_init=zero_bias_init) - - @property - def special_token_id(self) -> int: - return self.card - - @property - def num_codebooks(self) -> int: - return self.n_q - - def forward(self, sequence: torch.Tensor, - conditions: tp.List[ConditioningAttributes], - condition_tensors: tp.Optional[ConditionTensors] = None) -> torch.Tensor: - """Apply language model on sequence and conditions. - Given a tensor of sequence of shape [B, K, S] with K the number of codebooks and - S the sequence steps, return the logits with shape [B, card, K, S]. - - Args: - indices (torch.Tensor): indices of the codes to model. - conditions (list[ConditioningAttributes]): conditionings to use when modeling - the given codes. Note that when evaluating multiple time with the same conditioning - you should pre-compute those and pass them as `condition_tensors`. - condition_tensors (dict[str, ConditionType] or None): pre-computed conditioning - tensors, see `conditions`. - Returns: - torch.Tensor: Logits. - """ - B, K, S = sequence.shape - assert K == self.num_codebooks, 'Sequence shape must match the specified number of codebooks' - input_ = sum([self.emb[k](sequence[:, k]) for k in range(K)]) - if condition_tensors is None: - assert not self._is_streaming, "Conditions tensors should be precomputed when streaming." - # apply dropout modules - conditions = self.cfg_dropout(conditions) - conditions = self.att_dropout(conditions) - tokenized = self.condition_provider.tokenize(conditions) - # encode conditions and fuse, both have a streaming cache to not recompute when generating. - condition_tensors = self.condition_provider(tokenized) - else: - assert not conditions, "Shouldn't pass both conditions and condition_tensors." - - input_, cross_attention_input = self.fuser(input_, condition_tensors) - - out = self.transformer(input_, cross_attention_src=cross_attention_input) - if self.out_norm: - out = self.out_norm(out) - logits = torch.stack([self.linears[k](out) for k in range(K)], dim=1) # [B, K, S, card] - - # remove the prefix from the model outputs - if len(self.fuser.fuse2cond['prepend']) > 0: - logits = logits[:, :, -S:] - - return logits # [B, K, S, card] - - def compute_predictions( - self, codes: torch.Tensor, - conditions: tp.List[ConditioningAttributes], - condition_tensors: tp.Optional[ConditionTensors] = None) -> LMOutput: - """Given an input tensor of codes [B, K, T] and list of conditions, runs the model - forward using the specified codes interleaving pattern. - - Args: - codes (torch.Tensor): Input codes of shape [B, K, T] with B the batch size, - K the number of codebooks and T the number of timesteps. - conditions (list[ConditioningAttributes]): conditionings to use when modeling - the given codes. 
Note that when evaluating multiple time with the same conditioning - you should pre-compute those and pass them as `condition_tensors`. - condition_tensors (dict[str, ConditionType] or None): pre-computed conditioning - tensors, see `conditions`. - Returns: - LMOutput: Language model outputs - logits (torch.Tensor) of shape [B, K, T, card] corresponding to the provided codes, - i.e. the first item corresponds to logits to predict the first code, meaning that - no additional shifting of codes and logits is required. - mask (torch.Tensor) of shape [B, K, T], mask over valid and invalid positions. - Given the specified interleaving strategies, parts of the logits and codes should - not be considered as valid predictions because of invalid context. - """ - B, K, T = codes.shape - codes = codes.contiguous() - # map codes [B, K, T] into pattern sequence [B, K, S] using special_token_id for masked tokens - pattern = self.pattern_provider.get_pattern(T) - sequence_codes, sequence_indexes, sequence_mask = pattern.build_pattern_sequence( - codes, self.special_token_id, keep_only_valid_steps=True - ) - # apply model on pattern sequence - model = self if self._fsdp is None else self._fsdp - logits = model(sequence_codes, conditions, condition_tensors) # [B, K, S, card] - # map back the logits on pattern sequence to logits on original codes: [B, K, S, card] -> [B, K, T, card] - # and provide the corresponding mask over invalid positions of tokens - logits = logits.permute(0, 3, 1, 2) # [B, card, K, S] - # note: we use nans as special token to make it obvious if we feed unexpected logits - logits, logits_indexes, logits_mask = pattern.revert_pattern_logits( - logits, float('nan'), keep_only_valid_steps=True - ) - logits = logits.permute(0, 2, 3, 1) # [B, K, T, card] - logits_mask = logits_mask[None, :, :].expand(B, -1, -1) # [K, T] -> [B, K, T] - return LMOutput(logits, logits_mask) - - def _sample_next_token(self, - sequence: torch.Tensor, - cfg_conditions: CFGConditions, - unconditional_state: State, - use_sampling: bool = False, - temp: float = 1.0, - top_k: int = 0, - top_p: float = 0.0, - cfg_coef: tp.Optional[float] = None) -> torch.Tensor: - """Sample next token from the model given a sequence and a set of conditions. The model supports - multiple sampling strategies (greedy sampling, softmax, top-k, top-p...). - - Args: - sequence (torch.Tensor): Current sequence of shape [B, K, S] - with K corresponding to the number of codebooks and S the number of sequence steps. - S = 1 in streaming mode, except for the first step that contains a bigger prompt. - condition_tensors (Dict[str, ConditionType): Set of conditions. If CFG is used, - should be twice the batch size, being the concatenation of the conditions + null conditions. - use_sampling (bool): Whether to use a sampling strategy or not. - temp (float): Sampling temperature. - top_k (int): K for "top-k" sampling. - top_p (float): P for "top-p" sampling. - cfg_coef (float): classifier free guidance coefficient - Returns: - next_token (torch.Tensor): Next token tensor of shape [B, K, 1]. 
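        When condition tensors are provided, classifier-free guidance mixes the two
        sets of logits before sampling; schematically (a sketch of the combination
        used below, omitting the batching details):

            logits = uncond_logits + (cond_logits - uncond_logits) * cfg_coef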
- """ - B = sequence.shape[0] - cfg_coef = self.cfg_coef if cfg_coef is None else cfg_coef - model = self if self._fsdp is None else self._fsdp - if self.two_step_cfg and cfg_conditions != {}: - assert isinstance(cfg_conditions, tuple) - condition_tensors, null_condition_tensors = cfg_conditions - cond_logits = model(sequence, conditions=[], condition_tensors=condition_tensors) - state = self.get_streaming_state() - self.set_streaming_state(unconditional_state) - uncond_logits = model(sequence, conditions=[], condition_tensors=null_condition_tensors) - unconditional_state.update(self.get_streaming_state()) - self.set_streaming_state(state) - logits = uncond_logits + (cond_logits - uncond_logits) * self.cfg_coef - else: - assert isinstance(cfg_conditions, dict) - condition_tensors = cfg_conditions - if condition_tensors: - # Preparing for CFG, predicting both conditional and unconditional logits. - sequence = torch.cat([sequence, sequence], dim=0) - all_logits = model( - sequence, - conditions=[], condition_tensors=condition_tensors) - if condition_tensors: - cond_logits, uncond_logits = all_logits.split(B, dim=0) # [B, K, T, card] - logits = uncond_logits + (cond_logits - uncond_logits) * cfg_coef - else: - logits = all_logits - - logits = logits.permute(0, 1, 3, 2) # [B, K, card, T] - logits = logits[..., -1] # [B x K x card] - - # Apply softmax for sampling if temp > 0. Else, do greedy sampling to avoid zero division error. - if use_sampling and temp > 0.0: - probs = torch.softmax(logits / temp, dim=-1) - if top_p > 0.0: - next_token = utils.sample_top_p(probs, p=top_p) - elif top_k > 0: - next_token = utils.sample_top_k(probs, k=top_k) - else: - next_token = utils.multinomial(probs, num_samples=1) - else: - next_token = torch.argmax(logits, dim=-1, keepdim=True) - - return next_token - - @torch.no_grad() - def generate(self, - prompt: tp.Optional[torch.Tensor] = None, - conditions: tp.List[ConditioningAttributes] = [], - num_samples: tp.Optional[int] = None, - max_gen_len: int = 256, - use_sampling: bool = True, - temp: float = 1.0, - top_k: int = 250, - top_p: float = 0.0, - cfg_coef: tp.Optional[float] = None, - two_step_cfg: bool = False, - remove_prompts: bool = False, - check: bool = False, - callback: tp.Optional[tp.Callable[[int, int], None]] = None) -> torch.Tensor: - """Generate tokens sampling from the model given a prompt or unconditionally. Generation can - be perform in a greedy fashion or using sampling with top K and top P strategies. - - Args: - prompt (Optional[torch.Tensor]): Prompt tokens of shape [B, K, T]. - conditions_tensors (Dict[str, torch.Tensor]): Set of conditions or None. - num_samples (int or None): Number of samples to generate when no prompt and no conditions are given. - max_gen_len (int): Maximum generation length. - use_sampling (bool): Whether to use a sampling strategy or not. - temp (float): Sampling temperature. - top_k (int): K for "top-k" sampling. - top_p (float): P for "top-p" sampling. - remove_prompts (bool): Whether to remove prompts from generation or not. - Returns: - torch.Tensor: Generated tokens. - """ - assert not self.training, "generation shouldn't be used in training mode." - first_param = next(iter(self.parameters())) - device = first_param.device - - # Checking all input shapes are consistents. 
- possible_num_samples = [] - if num_samples is not None: - possible_num_samples.append(num_samples) - elif prompt is not None: - possible_num_samples.append(prompt.shape[0]) - elif conditions: - possible_num_samples.append(len(conditions)) - else: - possible_num_samples.append(1) - assert [x == possible_num_samples[0] for x in possible_num_samples], "Inconsitent inputs shapes" - num_samples = possible_num_samples[0] - - # below we create set of conditions: one conditional and one unconditional - # to do that we merge the regular condition together with the null condition - # we then do 1 forward pass instead of 2. - # the reason for that is two-fold: - # 1. it is about x2 faster than doing 2 forward passes - # 2. avoid the streaming API treating the 2 passes as part of different time steps - # We also support doing two different passes, in particular to ensure that - # the padding structure is exactly the same between train anf test. - # With a batch size of 1, this can be slower though. - cfg_conditions: CFGConditions - two_step_cfg = self.two_step_cfg if two_step_cfg is None else two_step_cfg - if conditions: - null_conditions = ClassifierFreeGuidanceDropout(p=1.0)(conditions) - if two_step_cfg: - cfg_conditions = ( - self.condition_provider(self.condition_provider.tokenize(conditions)), - self.condition_provider(self.condition_provider.tokenize(null_conditions)), - ) - else: - conditions = conditions + null_conditions - tokenized = self.condition_provider.tokenize(conditions) - cfg_conditions = self.condition_provider(tokenized) - else: - cfg_conditions = {} - - if prompt is None: - assert num_samples > 0 - prompt = torch.zeros((num_samples, self.num_codebooks, 0), dtype=torch.long, device=device) - - B, K, T = prompt.shape - start_offset = T - assert start_offset < max_gen_len - - pattern = self.pattern_provider.get_pattern(max_gen_len) - # this token is used as default value for codes that are not generated yet - unknown_token = -1 - - # we generate codes up to the max_gen_len that will be mapped to the pattern sequence - gen_codes = torch.full((B, K, max_gen_len), unknown_token, dtype=torch.long, device=device) - # filling the gen_codes with the prompt if needed - gen_codes[..., :start_offset] = prompt - # create the gen_sequence with proper interleaving from the pattern: [B, K, S] - gen_sequence, indexes, mask = pattern.build_pattern_sequence(gen_codes, self.special_token_id) - # retrieve the start_offset in the sequence: - # it is the first sequence step that contains the `start_offset` timestep - start_offset_sequence = pattern.get_first_step_with_timesteps(start_offset) - assert start_offset_sequence is not None - - with self.streaming(): - unconditional_state = self.get_streaming_state() - prev_offset = 0 - gen_sequence_len = gen_sequence.shape[-1] # gen_sequence shape is [B, K, S] - for offset in range(start_offset_sequence, gen_sequence_len): - # get current sequence (note that the streaming API is providing the caching over previous offsets) - curr_sequence = gen_sequence[..., prev_offset:offset] - curr_mask = mask[None, ..., prev_offset:offset].expand(B, -1, -1) - if check: - # check coherence between mask and sequence - assert (curr_sequence == torch.where(curr_mask, curr_sequence, self.special_token_id)).all() - # should never happen as gen_sequence is filled progressively - assert not (curr_sequence == unknown_token).any() - # sample next token from the model, next token shape is [B, K, 1] - next_token = self._sample_next_token( - curr_sequence, cfg_conditions, 
unconditional_state, use_sampling, temp, top_k, top_p, - cfg_coef=cfg_coef) - # ensure the tokens that should be masked are properly set to special_token_id - # as the model never output special_token_id - valid_mask = mask[..., offset:offset+1].expand(B, -1, -1) - next_token[~valid_mask] = self.special_token_id - # ensure we don't overwrite prompt tokens, we only write over unknown tokens - # (then mask tokens should be left as is as well, which is correct) - gen_sequence[..., offset:offset+1] = torch.where( - gen_sequence[..., offset:offset+1] == unknown_token, - next_token, gen_sequence[..., offset:offset+1] - ) - prev_offset = offset - if callback is not None: - callback(1 + offset - start_offset_sequence, gen_sequence_len - start_offset_sequence) - unconditional_state.clear() - - # ensure sequence has been entirely filled - assert not (gen_sequence == unknown_token).any() - # ensure gen_sequence pattern and mask are matching - # which means the gen_sequence is valid according to the pattern - assert ( - gen_sequence == torch.where(mask[None, ...].expand(B, -1, -1), gen_sequence, self.special_token_id) - ).all() - # get back the codes, trimming the prompt if needed and cutting potentially incomplete timesteps - out_codes, out_indexes, out_mask = pattern.revert_pattern_sequence(gen_sequence, special_token=unknown_token) - - # sanity checks over the returned codes and corresponding masks - assert (out_codes[..., :max_gen_len] != unknown_token).all() - assert (out_mask[..., :max_gen_len] == 1).all() - - out_start_offset = start_offset if remove_prompts else 0 - out_codes = out_codes[..., out_start_offset:max_gen_len] - - # ensure the returned codes are all valid - assert (out_codes >= 0).all() and (out_codes <= self.card).all() - return out_codes diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/cli/cmdoptions.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/cli/cmdoptions.py deleted file mode 100644 index 02ba60827933d6623cdf6b1417762fee47c1ab6f..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/cli/cmdoptions.py +++ /dev/null @@ -1,1074 +0,0 @@ -""" -shared options and groups - -The principle here is to define options once, but *not* instantiate them -globally. One reason being that options with action='append' can carry state -between parses. pip parses general options twice internally, and shouldn't -pass on state. To be consistent, all options will follow this design. -""" - -# The following comment should be removed at some point in the future. 
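# To make the design above concrete: each shared option in this module is exposed as
# a zero-argument factory (typically functools.partial(Option, ...)) rather than as a
# module-level Option instance, so every parse builds fresh Option objects and no
# action='append' state leaks between parses. A minimal sketch of the pattern, using
# the `verbose` option defined further down:
#
#     verbose: Callable[..., Option] = partial(
#         Option, "-v", "--verbose", dest="verbose", action="count", default=0,
#     )
#     parser.add_option(verbose())  # instantiate only at the point of use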
-# mypy: strict-optional=False - -import importlib.util -import logging -import os -import textwrap -from functools import partial -from optparse import SUPPRESS_HELP, Option, OptionGroup, OptionParser, Values -from textwrap import dedent -from typing import Any, Callable, Dict, Optional, Tuple - -from pip._vendor.packaging.utils import canonicalize_name - -from pip._internal.cli.parser import ConfigOptionParser -from pip._internal.exceptions import CommandError -from pip._internal.locations import USER_CACHE_DIR, get_src_prefix -from pip._internal.models.format_control import FormatControl -from pip._internal.models.index import PyPI -from pip._internal.models.target_python import TargetPython -from pip._internal.utils.hashes import STRONG_HASHES -from pip._internal.utils.misc import strtobool - -logger = logging.getLogger(__name__) - - -def raise_option_error(parser: OptionParser, option: Option, msg: str) -> None: - """ - Raise an option parsing error using parser.error(). - - Args: - parser: an OptionParser instance. - option: an Option instance. - msg: the error text. - """ - msg = f"{option} error: {msg}" - msg = textwrap.fill(" ".join(msg.split())) - parser.error(msg) - - -def make_option_group(group: Dict[str, Any], parser: ConfigOptionParser) -> OptionGroup: - """ - Return an OptionGroup object - group -- assumed to be dict with 'name' and 'options' keys - parser -- an optparse Parser - """ - option_group = OptionGroup(parser, group["name"]) - for option in group["options"]: - option_group.add_option(option()) - return option_group - - -def check_dist_restriction(options: Values, check_target: bool = False) -> None: - """Function for determining if custom platform options are allowed. - - :param options: The OptionParser options. - :param check_target: Whether or not to check if --target is being used. - """ - dist_restriction_set = any( - [ - options.python_version, - options.platforms, - options.abis, - options.implementation, - ] - ) - - binary_only = FormatControl(set(), {":all:"}) - sdist_dependencies_allowed = ( - options.format_control != binary_only and not options.ignore_dependencies - ) - - # Installations or downloads using dist restrictions must not combine - # source distributions and dist-specific wheels, as they are not - # guaranteed to be locally compatible. - if dist_restriction_set and sdist_dependencies_allowed: - raise CommandError( - "When restricting platform and interpreter constraints using " - "--python-version, --platform, --abi, or --implementation, " - "either --no-deps must be set, or --only-binary=:all: must be " - "set and --no-binary must not be set (or must be set to " - ":none:)." 
- ) - - if check_target: - if dist_restriction_set and not options.target_dir: - raise CommandError( - "Can not use any platform or abi specific options unless " - "installing via '--target'" - ) - - -def _path_option_check(option: Option, opt: str, value: str) -> str: - return os.path.expanduser(value) - - -def _package_name_option_check(option: Option, opt: str, value: str) -> str: - return canonicalize_name(value) - - -class PipOption(Option): - TYPES = Option.TYPES + ("path", "package_name") - TYPE_CHECKER = Option.TYPE_CHECKER.copy() - TYPE_CHECKER["package_name"] = _package_name_option_check - TYPE_CHECKER["path"] = _path_option_check - - -########### -# options # -########### - -help_: Callable[..., Option] = partial( - Option, - "-h", - "--help", - dest="help", - action="help", - help="Show help.", -) - -debug_mode: Callable[..., Option] = partial( - Option, - "--debug", - dest="debug_mode", - action="store_true", - default=False, - help=( - "Let unhandled exceptions propagate outside the main subroutine, " - "instead of logging them to stderr." - ), -) - -isolated_mode: Callable[..., Option] = partial( - Option, - "--isolated", - dest="isolated_mode", - action="store_true", - default=False, - help=( - "Run pip in an isolated mode, ignoring environment variables and user " - "configuration." - ), -) - -require_virtualenv: Callable[..., Option] = partial( - Option, - "--require-virtualenv", - "--require-venv", - dest="require_venv", - action="store_true", - default=False, - help=( - "Allow pip to only run in a virtual environment; " - "exit with an error otherwise." - ), -) - -override_externally_managed: Callable[..., Option] = partial( - Option, - "--break-system-packages", - dest="override_externally_managed", - action="store_true", - help="Allow pip to modify an EXTERNALLY-MANAGED Python installation", -) - -python: Callable[..., Option] = partial( - Option, - "--python", - dest="python", - help="Run pip with the specified Python interpreter.", -) - -verbose: Callable[..., Option] = partial( - Option, - "-v", - "--verbose", - dest="verbose", - action="count", - default=0, - help="Give more output. Option is additive, and can be used up to 3 times.", -) - -no_color: Callable[..., Option] = partial( - Option, - "--no-color", - dest="no_color", - action="store_true", - default=False, - help="Suppress colored output.", -) - -version: Callable[..., Option] = partial( - Option, - "-V", - "--version", - dest="version", - action="store_true", - help="Show version and exit.", -) - -quiet: Callable[..., Option] = partial( - Option, - "-q", - "--quiet", - dest="quiet", - action="count", - default=0, - help=( - "Give less output. Option is additive, and can be used up to 3" - " times (corresponding to WARNING, ERROR, and CRITICAL logging" - " levels)." 
- ), -) - -progress_bar: Callable[..., Option] = partial( - Option, - "--progress-bar", - dest="progress_bar", - type="choice", - choices=["on", "off"], - default="on", - help="Specify whether the progress bar should be used [on, off] (default: on)", -) - -log: Callable[..., Option] = partial( - PipOption, - "--log", - "--log-file", - "--local-log", - dest="log", - metavar="path", - type="path", - help="Path to a verbose appending log.", -) - -no_input: Callable[..., Option] = partial( - Option, - # Don't ask for input - "--no-input", - dest="no_input", - action="store_true", - default=False, - help="Disable prompting for input.", -) - -keyring_provider: Callable[..., Option] = partial( - Option, - "--keyring-provider", - dest="keyring_provider", - choices=["auto", "disabled", "import", "subprocess"], - default="auto", - help=( - "Enable the credential lookup via the keyring library if user input is allowed." - " Specify which mechanism to use [disabled, import, subprocess]." - " (default: disabled)" - ), -) - -proxy: Callable[..., Option] = partial( - Option, - "--proxy", - dest="proxy", - type="str", - default="", - help="Specify a proxy in the form scheme://[user:passwd@]proxy.server:port.", -) - -retries: Callable[..., Option] = partial( - Option, - "--retries", - dest="retries", - type="int", - default=5, - help="Maximum number of retries each connection should attempt " - "(default %default times).", -) - -timeout: Callable[..., Option] = partial( - Option, - "--timeout", - "--default-timeout", - metavar="sec", - dest="timeout", - type="float", - default=15, - help="Set the socket timeout (default %default seconds).", -) - - -def exists_action() -> Option: - return Option( - # Option when path already exist - "--exists-action", - dest="exists_action", - type="choice", - choices=["s", "i", "w", "b", "a"], - default=[], - action="append", - metavar="action", - help="Default action when a path already exists: " - "(s)witch, (i)gnore, (w)ipe, (b)ackup, (a)bort.", - ) - - -cert: Callable[..., Option] = partial( - PipOption, - "--cert", - dest="cert", - type="path", - metavar="path", - help=( - "Path to PEM-encoded CA certificate bundle. " - "If provided, overrides the default. " - "See 'SSL Certificate Verification' in pip documentation " - "for more information." - ), -) - -client_cert: Callable[..., Option] = partial( - PipOption, - "--client-cert", - dest="client_cert", - type="path", - default=None, - metavar="path", - help="Path to SSL client certificate, a single file containing the " - "private key and the certificate in PEM format.", -) - -index_url: Callable[..., Option] = partial( - Option, - "-i", - "--index-url", - "--pypi-url", - dest="index_url", - metavar="URL", - default=PyPI.simple_url, - help="Base URL of the Python Package Index (default %default). " - "This should point to a repository compliant with PEP 503 " - "(the simple repository API) or a local directory laid out " - "in the same format.", -) - - -def extra_index_url() -> Option: - return Option( - "--extra-index-url", - dest="extra_index_urls", - metavar="URL", - action="append", - default=[], - help="Extra URLs of package indexes to use in addition to " - "--index-url. 
Should follow the same rules as " - "--index-url.", - ) - - -no_index: Callable[..., Option] = partial( - Option, - "--no-index", - dest="no_index", - action="store_true", - default=False, - help="Ignore package index (only looking at --find-links URLs instead).", -) - - -def find_links() -> Option: - return Option( - "-f", - "--find-links", - dest="find_links", - action="append", - default=[], - metavar="url", - help="If a URL or path to an html file, then parse for links to " - "archives such as sdist (.tar.gz) or wheel (.whl) files. " - "If a local path or file:// URL that's a directory, " - "then look for archives in the directory listing. " - "Links to VCS project URLs are not supported.", - ) - - -def trusted_host() -> Option: - return Option( - "--trusted-host", - dest="trusted_hosts", - action="append", - metavar="HOSTNAME", - default=[], - help="Mark this host or host:port pair as trusted, even though it " - "does not have valid or any HTTPS.", - ) - - -def constraints() -> Option: - return Option( - "-c", - "--constraint", - dest="constraints", - action="append", - default=[], - metavar="file", - help="Constrain versions using the given constraints file. " - "This option can be used multiple times.", - ) - - -def requirements() -> Option: - return Option( - "-r", - "--requirement", - dest="requirements", - action="append", - default=[], - metavar="file", - help="Install from the given requirements file. " - "This option can be used multiple times.", - ) - - -def editable() -> Option: - return Option( - "-e", - "--editable", - dest="editables", - action="append", - default=[], - metavar="path/url", - help=( - "Install a project in editable mode (i.e. setuptools " - '"develop mode") from a local project path or a VCS url.' - ), - ) - - -def _handle_src(option: Option, opt_str: str, value: str, parser: OptionParser) -> None: - value = os.path.abspath(value) - setattr(parser.values, option.dest, value) - - -src: Callable[..., Option] = partial( - PipOption, - "--src", - "--source", - "--source-dir", - "--source-directory", - dest="src_dir", - type="path", - metavar="dir", - default=get_src_prefix(), - action="callback", - callback=_handle_src, - help="Directory to check out editable projects into. " - 'The default in a virtualenv is "/src". ' - 'The default for global installs is "/src".', -) - - -def _get_format_control(values: Values, option: Option) -> Any: - """Get a format_control object.""" - return getattr(values, option.dest) - - -def _handle_no_binary( - option: Option, opt_str: str, value: str, parser: OptionParser -) -> None: - existing = _get_format_control(parser.values, option) - FormatControl.handle_mutual_excludes( - value, - existing.no_binary, - existing.only_binary, - ) - - -def _handle_only_binary( - option: Option, opt_str: str, value: str, parser: OptionParser -) -> None: - existing = _get_format_control(parser.values, option) - FormatControl.handle_mutual_excludes( - value, - existing.only_binary, - existing.no_binary, - ) - - -def no_binary() -> Option: - format_control = FormatControl(set(), set()) - return Option( - "--no-binary", - dest="format_control", - action="callback", - callback=_handle_no_binary, - type="str", - default=format_control, - help="Do not use binary packages. Can be supplied multiple times, and " - 'each time adds to the existing value. Accepts either ":all:" to ' - 'disable all binary packages, ":none:" to empty the set (notice ' - "the colons), or one or more package names with commas between " - "them (no colons). 
Note that some packages are tricky to compile " - "and may fail to install when this option is used on them.", - ) - - -def only_binary() -> Option: - format_control = FormatControl(set(), set()) - return Option( - "--only-binary", - dest="format_control", - action="callback", - callback=_handle_only_binary, - type="str", - default=format_control, - help="Do not use source packages. Can be supplied multiple times, and " - 'each time adds to the existing value. Accepts either ":all:" to ' - 'disable all source packages, ":none:" to empty the set, or one ' - "or more package names with commas between them. Packages " - "without binary distributions will fail to install when this " - "option is used on them.", - ) - - -platforms: Callable[..., Option] = partial( - Option, - "--platform", - dest="platforms", - metavar="platform", - action="append", - default=None, - help=( - "Only use wheels compatible with . Defaults to the " - "platform of the running system. Use this option multiple times to " - "specify multiple platforms supported by the target interpreter." - ), -) - - -# This was made a separate function for unit-testing purposes. -def _convert_python_version(value: str) -> Tuple[Tuple[int, ...], Optional[str]]: - """ - Convert a version string like "3", "37", or "3.7.3" into a tuple of ints. - - :return: A 2-tuple (version_info, error_msg), where `error_msg` is - non-None if and only if there was a parsing error. - """ - if not value: - # The empty string is the same as not providing a value. - return (None, None) - - parts = value.split(".") - if len(parts) > 3: - return ((), "at most three version parts are allowed") - - if len(parts) == 1: - # Then we are in the case of "3" or "37". - value = parts[0] - if len(value) > 1: - parts = [value[0], value[1:]] - - try: - version_info = tuple(int(part) for part in parts) - except ValueError: - return ((), "each version part must be an integer") - - return (version_info, None) - - -def _handle_python_version( - option: Option, opt_str: str, value: str, parser: OptionParser -) -> None: - """ - Handle a provided --python-version value. - """ - version_info, error_msg = _convert_python_version(value) - if error_msg is not None: - msg = "invalid --python-version value: {!r}: {}".format( - value, - error_msg, - ) - raise_option_error(parser, option=option, msg=msg) - - parser.values.python_version = version_info - - -python_version: Callable[..., Option] = partial( - Option, - "--python-version", - dest="python_version", - metavar="python_version", - action="callback", - callback=_handle_python_version, - type="str", - default=None, - help=dedent( - """\ - The Python interpreter version to use for wheel and "Requires-Python" - compatibility checks. Defaults to a version derived from the running - interpreter. The version can be specified using up to three dot-separated - integers (e.g. "3" for 3.0.0, "3.7" for 3.7.0, or "3.7.3"). A major-minor - version can also be given as a string without dots (e.g. "37" for 3.7.0). - """ - ), -) - - -implementation: Callable[..., Option] = partial( - Option, - "--implementation", - dest="implementation", - metavar="implementation", - default=None, - help=( - "Only use wheels compatible with Python " - "implementation , e.g. 'pp', 'jy', 'cp', " - " or 'ip'. If not specified, then the current " - "interpreter implementation is used. Use 'py' to force " - "implementation-agnostic wheels." 
- ), -) - - -abis: Callable[..., Option] = partial( - Option, - "--abi", - dest="abis", - metavar="abi", - action="append", - default=None, - help=( - "Only use wheels compatible with Python abi , e.g. 'pypy_41'. " - "If not specified, then the current interpreter abi tag is used. " - "Use this option multiple times to specify multiple abis supported " - "by the target interpreter. Generally you will need to specify " - "--implementation, --platform, and --python-version when using this " - "option." - ), -) - - -def add_target_python_options(cmd_opts: OptionGroup) -> None: - cmd_opts.add_option(platforms()) - cmd_opts.add_option(python_version()) - cmd_opts.add_option(implementation()) - cmd_opts.add_option(abis()) - - -def make_target_python(options: Values) -> TargetPython: - target_python = TargetPython( - platforms=options.platforms, - py_version_info=options.python_version, - abis=options.abis, - implementation=options.implementation, - ) - - return target_python - - -def prefer_binary() -> Option: - return Option( - "--prefer-binary", - dest="prefer_binary", - action="store_true", - default=False, - help="Prefer older binary packages over newer source packages.", - ) - - -cache_dir: Callable[..., Option] = partial( - PipOption, - "--cache-dir", - dest="cache_dir", - default=USER_CACHE_DIR, - metavar="dir", - type="path", - help="Store the cache data in .", -) - - -def _handle_no_cache_dir( - option: Option, opt: str, value: str, parser: OptionParser -) -> None: - """ - Process a value provided for the --no-cache-dir option. - - This is an optparse.Option callback for the --no-cache-dir option. - """ - # The value argument will be None if --no-cache-dir is passed via the - # command-line, since the option doesn't accept arguments. However, - # the value can be non-None if the option is triggered e.g. by an - # environment variable, like PIP_NO_CACHE_DIR=true. - if value is not None: - # Then parse the string value to get argument error-checking. - try: - strtobool(value) - except ValueError as exc: - raise_option_error(parser, option=option, msg=str(exc)) - - # Originally, setting PIP_NO_CACHE_DIR to a value that strtobool() - # converted to 0 (like "false" or "no") caused cache_dir to be disabled - # rather than enabled (logic would say the latter). Thus, we disable - # the cache directory not just on values that parse to True, but (for - # backwards compatibility reasons) also on values that parse to False. - # In other words, always set it to False if the option is provided in - # some (valid) form. - parser.values.cache_dir = False - - -no_cache: Callable[..., Option] = partial( - Option, - "--no-cache-dir", - dest="cache_dir", - action="callback", - callback=_handle_no_cache_dir, - help="Disable the cache.", -) - -no_deps: Callable[..., Option] = partial( - Option, - "--no-deps", - "--no-dependencies", - dest="ignore_dependencies", - action="store_true", - default=False, - help="Don't install package dependencies.", -) - -ignore_requires_python: Callable[..., Option] = partial( - Option, - "--ignore-requires-python", - dest="ignore_requires_python", - action="store_true", - help="Ignore the Requires-Python information.", -) - -no_build_isolation: Callable[..., Option] = partial( - Option, - "--no-build-isolation", - dest="build_isolation", - action="store_false", - default=True, - help="Disable isolation when building a modern source distribution. 
" - "Build dependencies specified by PEP 518 must be already installed " - "if this option is used.", -) - -check_build_deps: Callable[..., Option] = partial( - Option, - "--check-build-dependencies", - dest="check_build_deps", - action="store_true", - default=False, - help="Check the build dependencies when PEP517 is used.", -) - - -def _handle_no_use_pep517( - option: Option, opt: str, value: str, parser: OptionParser -) -> None: - """ - Process a value provided for the --no-use-pep517 option. - - This is an optparse.Option callback for the no_use_pep517 option. - """ - # Since --no-use-pep517 doesn't accept arguments, the value argument - # will be None if --no-use-pep517 is passed via the command-line. - # However, the value can be non-None if the option is triggered e.g. - # by an environment variable, for example "PIP_NO_USE_PEP517=true". - if value is not None: - msg = """A value was passed for --no-use-pep517, - probably using either the PIP_NO_USE_PEP517 environment variable - or the "no-use-pep517" config file option. Use an appropriate value - of the PIP_USE_PEP517 environment variable or the "use-pep517" - config file option instead. - """ - raise_option_error(parser, option=option, msg=msg) - - # If user doesn't wish to use pep517, we check if setuptools and wheel are installed - # and raise error if it is not. - packages = ("setuptools", "wheel") - if not all(importlib.util.find_spec(package) for package in packages): - msg = ( - f"It is not possible to use --no-use-pep517 " - f"without {' and '.join(packages)} installed." - ) - raise_option_error(parser, option=option, msg=msg) - - # Otherwise, --no-use-pep517 was passed via the command-line. - parser.values.use_pep517 = False - - -use_pep517: Any = partial( - Option, - "--use-pep517", - dest="use_pep517", - action="store_true", - default=None, - help="Use PEP 517 for building source distributions " - "(use --no-use-pep517 to force legacy behaviour).", -) - -no_use_pep517: Any = partial( - Option, - "--no-use-pep517", - dest="use_pep517", - action="callback", - callback=_handle_no_use_pep517, - default=None, - help=SUPPRESS_HELP, -) - - -def _handle_config_settings( - option: Option, opt_str: str, value: str, parser: OptionParser -) -> None: - key, sep, val = value.partition("=") - if sep != "=": - parser.error(f"Arguments to {opt_str} must be of the form KEY=VAL") # noqa - dest = getattr(parser.values, option.dest) - if dest is None: - dest = {} - setattr(parser.values, option.dest, dest) - if key in dest: - if isinstance(dest[key], list): - dest[key].append(val) - else: - dest[key] = [dest[key], val] - else: - dest[key] = val - - -config_settings: Callable[..., Option] = partial( - Option, - "-C", - "--config-settings", - dest="config_settings", - type=str, - action="callback", - callback=_handle_config_settings, - metavar="settings", - help="Configuration settings to be passed to the PEP 517 build backend. " - "Settings take the form KEY=VALUE. 
Use multiple --config-settings options " - "to pass multiple keys to the backend.", -) - -build_options: Callable[..., Option] = partial( - Option, - "--build-option", - dest="build_options", - metavar="options", - action="append", - help="Extra arguments to be supplied to 'setup.py bdist_wheel'.", -) - -global_options: Callable[..., Option] = partial( - Option, - "--global-option", - dest="global_options", - action="append", - metavar="options", - help="Extra global options to be supplied to the setup.py " - "call before the install or bdist_wheel command.", -) - -no_clean: Callable[..., Option] = partial( - Option, - "--no-clean", - action="store_true", - default=False, - help="Don't clean up build directories.", -) - -pre: Callable[..., Option] = partial( - Option, - "--pre", - action="store_true", - default=False, - help="Include pre-release and development versions. By default, " - "pip only finds stable versions.", -) - -disable_pip_version_check: Callable[..., Option] = partial( - Option, - "--disable-pip-version-check", - dest="disable_pip_version_check", - action="store_true", - default=False, - help="Don't periodically check PyPI to determine whether a new version " - "of pip is available for download. Implied with --no-index.", -) - -root_user_action: Callable[..., Option] = partial( - Option, - "--root-user-action", - dest="root_user_action", - default="warn", - choices=["warn", "ignore"], - help="Action if pip is run as a root user. By default, a warning message is shown.", -) - - -def _handle_merge_hash( - option: Option, opt_str: str, value: str, parser: OptionParser -) -> None: - """Given a value spelled "algo:digest", append the digest to a list - pointed to in a dict by the algo name.""" - if not parser.values.hashes: - parser.values.hashes = {} - try: - algo, digest = value.split(":", 1) - except ValueError: - parser.error( - "Arguments to {} must be a hash name " # noqa - "followed by a value, like --hash=sha256:" - "abcde...".format(opt_str) - ) - if algo not in STRONG_HASHES: - parser.error( - "Allowed hash algorithms for {} are {}.".format( # noqa - opt_str, ", ".join(STRONG_HASHES) - ) - ) - parser.values.hashes.setdefault(algo, []).append(digest) - - -hash: Callable[..., Option] = partial( - Option, - "--hash", - # Hash values eventually end up in InstallRequirement.hashes due to - # __dict__ copying in process_line(). - dest="hashes", - action="callback", - callback=_handle_merge_hash, - type="string", - help="Verify that the package's archive matches this " - "hash before installing. Example: --hash=sha256:abcdef...", -) - - -require_hashes: Callable[..., Option] = partial( - Option, - "--require-hashes", - dest="require_hashes", - action="store_true", - default=False, - help="Require a hash to check each requirement against, for " - "repeatable installs. 
This option is implied when any package in a " - "requirements file has a --hash option.", -) - - -list_path: Callable[..., Option] = partial( - PipOption, - "--path", - dest="path", - type="path", - action="append", - help="Restrict to the specified installation path for listing " - "packages (can be used multiple times).", -) - - -def check_list_path_option(options: Values) -> None: - if options.path and (options.user or options.local): - raise CommandError("Cannot combine '--path' with '--user' or '--local'") - - -list_exclude: Callable[..., Option] = partial( - PipOption, - "--exclude", - dest="excludes", - action="append", - metavar="package", - type="package_name", - help="Exclude specified package from the output", -) - - -no_python_version_warning: Callable[..., Option] = partial( - Option, - "--no-python-version-warning", - dest="no_python_version_warning", - action="store_true", - default=False, - help="Silence deprecation warnings for upcoming unsupported Pythons.", -) - - -# Features that are now always on. A warning is printed if they are used. -ALWAYS_ENABLED_FEATURES = [ - "no-binary-enable-wheel-cache", # always on since 23.1 -] - -use_new_feature: Callable[..., Option] = partial( - Option, - "--use-feature", - dest="features_enabled", - metavar="feature", - action="append", - default=[], - choices=[ - "fast-deps", - "truststore", - ] - + ALWAYS_ENABLED_FEATURES, - help="Enable new functionality, that may be backward incompatible.", -) - -use_deprecated_feature: Callable[..., Option] = partial( - Option, - "--use-deprecated", - dest="deprecated_features_enabled", - metavar="feature", - action="append", - default=[], - choices=[ - "legacy-resolver", - ], - help=("Enable deprecated functionality, that will be removed in the future."), -) - - -########## -# groups # -########## - -general_group: Dict[str, Any] = { - "name": "General Options", - "options": [ - help_, - debug_mode, - isolated_mode, - require_virtualenv, - python, - verbose, - version, - quiet, - log, - no_input, - keyring_provider, - proxy, - retries, - timeout, - exists_action, - trusted_host, - cert, - client_cert, - cache_dir, - no_cache, - disable_pip_version_check, - no_color, - no_python_version_warning, - use_new_feature, - use_deprecated_feature, - ], -} - -index_group: Dict[str, Any] = { - "name": "Package Index Options", - "options": [ - index_url, - extra_index_url, - no_index, - find_links, - ], -} diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/logging.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/logging.py deleted file mode 100644 index c10e1f4ced6bcc799799b62666695998e095bbaf..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/logging.py +++ /dev/null @@ -1,348 +0,0 @@ -import contextlib -import errno -import logging -import logging.handlers -import os -import sys -import threading -from dataclasses import dataclass -from io import TextIOWrapper -from logging import Filter -from typing import Any, ClassVar, Generator, List, Optional, TextIO, Type - -from pip._vendor.rich.console import ( - Console, - ConsoleOptions, - ConsoleRenderable, - RenderableType, - RenderResult, - RichCast, -) -from pip._vendor.rich.highlighter import NullHighlighter -from pip._vendor.rich.logging import RichHandler -from pip._vendor.rich.segment import Segment -from pip._vendor.rich.style import Style - -from 
pip._internal.utils._log import VERBOSE, getLogger -from pip._internal.utils.compat import WINDOWS -from pip._internal.utils.deprecation import DEPRECATION_MSG_PREFIX -from pip._internal.utils.misc import ensure_dir - -_log_state = threading.local() -subprocess_logger = getLogger("pip.subprocessor") - - -class BrokenStdoutLoggingError(Exception): - """ - Raised if BrokenPipeError occurs for the stdout stream while logging. - """ - - -def _is_broken_pipe_error(exc_class: Type[BaseException], exc: BaseException) -> bool: - if exc_class is BrokenPipeError: - return True - - # On Windows, a broken pipe can show up as EINVAL rather than EPIPE: - # https://bugs.python.org/issue19612 - # https://bugs.python.org/issue30418 - if not WINDOWS: - return False - - return isinstance(exc, OSError) and exc.errno in (errno.EINVAL, errno.EPIPE) - - -@contextlib.contextmanager -def indent_log(num: int = 2) -> Generator[None, None, None]: - """ - A context manager which will cause the log output to be indented for any - log messages emitted inside it. - """ - # For thread-safety - _log_state.indentation = get_indentation() - _log_state.indentation += num - try: - yield - finally: - _log_state.indentation -= num - - -def get_indentation() -> int: - return getattr(_log_state, "indentation", 0) - - -class IndentingFormatter(logging.Formatter): - default_time_format = "%Y-%m-%dT%H:%M:%S" - - def __init__( - self, - *args: Any, - add_timestamp: bool = False, - **kwargs: Any, - ) -> None: - """ - A logging.Formatter that obeys the indent_log() context manager. - - :param add_timestamp: A bool indicating output lines should be prefixed - with their record's timestamp. - """ - self.add_timestamp = add_timestamp - super().__init__(*args, **kwargs) - - def get_message_start(self, formatted: str, levelno: int) -> str: - """ - Return the start of the formatted log message (not counting the - prefix to add to each line). - """ - if levelno < logging.WARNING: - return "" - if formatted.startswith(DEPRECATION_MSG_PREFIX): - # Then the message already has a prefix. We don't want it to - # look like "WARNING: DEPRECATION: ...." - return "" - if levelno < logging.ERROR: - return "WARNING: " - - return "ERROR: " - - def format(self, record: logging.LogRecord) -> str: - """ - Calls the standard formatter, but will indent all of the log message - lines by our current indentation level. 
- """ - formatted = super().format(record) - message_start = self.get_message_start(formatted, record.levelno) - formatted = message_start + formatted - - prefix = "" - if self.add_timestamp: - prefix = f"{self.formatTime(record)} " - prefix += " " * get_indentation() - formatted = "".join([prefix + line for line in formatted.splitlines(True)]) - return formatted - - -@dataclass -class IndentedRenderable: - renderable: RenderableType - indent: int - - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - segments = console.render(self.renderable, options) - lines = Segment.split_lines(segments) - for line in lines: - yield Segment(" " * self.indent) - yield from line - yield Segment("\n") - - -class RichPipStreamHandler(RichHandler): - KEYWORDS: ClassVar[Optional[List[str]]] = [] - - def __init__(self, stream: Optional[TextIO], no_color: bool) -> None: - super().__init__( - console=Console(file=stream, no_color=no_color, soft_wrap=True), - show_time=False, - show_level=False, - show_path=False, - highlighter=NullHighlighter(), - ) - - # Our custom override on Rich's logger, to make things work as we need them to. - def emit(self, record: logging.LogRecord) -> None: - style: Optional[Style] = None - - # If we are given a diagnostic error to present, present it with indentation. - assert isinstance(record.args, tuple) - if record.msg == "[present-rich] %s" and len(record.args) == 1: - rich_renderable = record.args[0] - assert isinstance( - rich_renderable, (ConsoleRenderable, RichCast, str) - ), f"{rich_renderable} is not rich-console-renderable" - - renderable: RenderableType = IndentedRenderable( - rich_renderable, indent=get_indentation() - ) - else: - message = self.format(record) - renderable = self.render_message(record, message) - if record.levelno is not None: - if record.levelno >= logging.ERROR: - style = Style(color="red") - elif record.levelno >= logging.WARNING: - style = Style(color="yellow") - - try: - self.console.print(renderable, overflow="ignore", crop=False, style=style) - except Exception: - self.handleError(record) - - def handleError(self, record: logging.LogRecord) -> None: - """Called when logging is unable to log some output.""" - - exc_class, exc = sys.exc_info()[:2] - # If a broken pipe occurred while calling write() or flush() on the - # stdout stream in logging's Handler.emit(), then raise our special - # exception so we can handle it in main() instead of logging the - # broken pipe error and continuing. - if ( - exc_class - and exc - and self.console.file is sys.stdout - and _is_broken_pipe_error(exc_class, exc) - ): - raise BrokenStdoutLoggingError() - - return super().handleError(record) - - -class BetterRotatingFileHandler(logging.handlers.RotatingFileHandler): - def _open(self) -> TextIOWrapper: - ensure_dir(os.path.dirname(self.baseFilename)) - return super()._open() - - -class MaxLevelFilter(Filter): - def __init__(self, level: int) -> None: - self.level = level - - def filter(self, record: logging.LogRecord) -> bool: - return record.levelno < self.level - - -class ExcludeLoggerFilter(Filter): - - """ - A logging Filter that excludes records from a logger (or its children). - """ - - def filter(self, record: logging.LogRecord) -> bool: - # The base Filter class allows only records from a logger (or its - # children). 
- return not super().filter(record) - - -def setup_logging(verbosity: int, no_color: bool, user_log_file: Optional[str]) -> int: - """Configures and sets up all of the logging - - Returns the requested logging level, as its integer value. - """ - - # Determine the level to be logging at. - if verbosity >= 2: - level_number = logging.DEBUG - elif verbosity == 1: - level_number = VERBOSE - elif verbosity == -1: - level_number = logging.WARNING - elif verbosity == -2: - level_number = logging.ERROR - elif verbosity <= -3: - level_number = logging.CRITICAL - else: - level_number = logging.INFO - - level = logging.getLevelName(level_number) - - # The "root" logger should match the "console" level *unless* we also need - # to log to a user log file. - include_user_log = user_log_file is not None - if include_user_log: - additional_log_file = user_log_file - root_level = "DEBUG" - else: - additional_log_file = "/dev/null" - root_level = level - - # Disable any logging besides WARNING unless we have DEBUG level logging - # enabled for vendored libraries. - vendored_log_level = "WARNING" if level in ["INFO", "ERROR"] else "DEBUG" - - # Shorthands for clarity - log_streams = { - "stdout": "ext://sys.stdout", - "stderr": "ext://sys.stderr", - } - handler_classes = { - "stream": "pip._internal.utils.logging.RichPipStreamHandler", - "file": "pip._internal.utils.logging.BetterRotatingFileHandler", - } - handlers = ["console", "console_errors", "console_subprocess"] + ( - ["user_log"] if include_user_log else [] - ) - - logging.config.dictConfig( - { - "version": 1, - "disable_existing_loggers": False, - "filters": { - "exclude_warnings": { - "()": "pip._internal.utils.logging.MaxLevelFilter", - "level": logging.WARNING, - }, - "restrict_to_subprocess": { - "()": "logging.Filter", - "name": subprocess_logger.name, - }, - "exclude_subprocess": { - "()": "pip._internal.utils.logging.ExcludeLoggerFilter", - "name": subprocess_logger.name, - }, - }, - "formatters": { - "indent": { - "()": IndentingFormatter, - "format": "%(message)s", - }, - "indent_with_timestamp": { - "()": IndentingFormatter, - "format": "%(message)s", - "add_timestamp": True, - }, - }, - "handlers": { - "console": { - "level": level, - "class": handler_classes["stream"], - "no_color": no_color, - "stream": log_streams["stdout"], - "filters": ["exclude_subprocess", "exclude_warnings"], - "formatter": "indent", - }, - "console_errors": { - "level": "WARNING", - "class": handler_classes["stream"], - "no_color": no_color, - "stream": log_streams["stderr"], - "filters": ["exclude_subprocess"], - "formatter": "indent", - }, - # A handler responsible for logging to the console messages - # from the "subprocessor" logger. 
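            # In logging.config.dictConfig, a "()" key inside a filters or
            # formatters entry names the factory callable used to build that
            # object; that is how the custom MaxLevelFilter, ExcludeLoggerFilter
            # and IndentingFormatter defined in this module are wired in here.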
- "console_subprocess": { - "level": level, - "class": handler_classes["stream"], - "stream": log_streams["stderr"], - "no_color": no_color, - "filters": ["restrict_to_subprocess"], - "formatter": "indent", - }, - "user_log": { - "level": "DEBUG", - "class": handler_classes["file"], - "filename": additional_log_file, - "encoding": "utf-8", - "delay": True, - "formatter": "indent_with_timestamp", - }, - }, - "root": { - "level": root_level, - "handlers": handlers, - }, - "loggers": {"pip._vendor": {"level": vendored_log_level}}, - } - ) - - return level_number diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/unicode.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/unicode.py deleted file mode 100644 index ec0b3a4fe6055b276d5515a4e81d60d921c6f381..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/unicode.py +++ /dev/null @@ -1,361 +0,0 @@ -# unicode.py - -import sys -from itertools import filterfalse -from typing import List, Tuple, Union - - -class _lazyclassproperty: - def __init__(self, fn): - self.fn = fn - self.__doc__ = fn.__doc__ - self.__name__ = fn.__name__ - - def __get__(self, obj, cls): - if cls is None: - cls = type(obj) - if not hasattr(cls, "_intern") or any( - cls._intern is getattr(superclass, "_intern", []) - for superclass in cls.__mro__[1:] - ): - cls._intern = {} - attrname = self.fn.__name__ - if attrname not in cls._intern: - cls._intern[attrname] = self.fn(cls) - return cls._intern[attrname] - - -UnicodeRangeList = List[Union[Tuple[int, int], Tuple[int]]] - - -class unicode_set: - """ - A set of Unicode characters, for language-specific strings for - ``alphas``, ``nums``, ``alphanums``, and ``printables``. - A unicode_set is defined by a list of ranges in the Unicode character - set, in a class attribute ``_ranges``. Ranges can be specified using - 2-tuples or a 1-tuple, such as:: - - _ranges = [ - (0x0020, 0x007e), - (0x00a0, 0x00ff), - (0x0100,), - ] - - Ranges are left- and right-inclusive. A 1-tuple of (x,) is treated as (x, x). 
- - A unicode set can also be defined using multiple inheritance of other unicode sets:: - - class CJK(Chinese, Japanese, Korean): - pass - """ - - _ranges: UnicodeRangeList = [] - - @_lazyclassproperty - def _chars_for_ranges(cls): - ret = [] - for cc in cls.__mro__: - if cc is unicode_set: - break - for rr in getattr(cc, "_ranges", ()): - ret.extend(range(rr[0], rr[-1] + 1)) - return [chr(c) for c in sorted(set(ret))] - - @_lazyclassproperty - def printables(cls): - """all non-whitespace characters in this range""" - return "".join(filterfalse(str.isspace, cls._chars_for_ranges)) - - @_lazyclassproperty - def alphas(cls): - """all alphabetic characters in this range""" - return "".join(filter(str.isalpha, cls._chars_for_ranges)) - - @_lazyclassproperty - def nums(cls): - """all numeric digit characters in this range""" - return "".join(filter(str.isdigit, cls._chars_for_ranges)) - - @_lazyclassproperty - def alphanums(cls): - """all alphanumeric characters in this range""" - return cls.alphas + cls.nums - - @_lazyclassproperty - def identchars(cls): - """all characters in this range that are valid identifier characters, plus underscore '_'""" - return "".join( - sorted( - set( - "".join(filter(str.isidentifier, cls._chars_for_ranges)) - + "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzªµº" - + "ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿ" - + "_" - ) - ) - ) - - @_lazyclassproperty - def identbodychars(cls): - """ - all characters in this range that are valid identifier body characters, - plus the digits 0-9, and · (Unicode MIDDLE DOT) - """ - return "".join( - sorted( - set( - cls.identchars - + "0123456789·" - + "".join( - [c for c in cls._chars_for_ranges if ("_" + c).isidentifier()] - ) - ) - ) - ) - - @_lazyclassproperty - def identifier(cls): - """ - a pyparsing Word expression for an identifier using this range's definitions for - identchars and identbodychars - """ - from pip._vendor.pyparsing import Word - - return Word(cls.identchars, cls.identbodychars) - - -class pyparsing_unicode(unicode_set): - """ - A namespace class for defining common language unicode_sets. 
- """ - - # fmt: off - - # define ranges in language character sets - _ranges: UnicodeRangeList = [ - (0x0020, sys.maxunicode), - ] - - class BasicMultilingualPlane(unicode_set): - """Unicode set for the Basic Multilingual Plane""" - _ranges: UnicodeRangeList = [ - (0x0020, 0xFFFF), - ] - - class Latin1(unicode_set): - """Unicode set for Latin-1 Unicode Character Range""" - _ranges: UnicodeRangeList = [ - (0x0020, 0x007E), - (0x00A0, 0x00FF), - ] - - class LatinA(unicode_set): - """Unicode set for Latin-A Unicode Character Range""" - _ranges: UnicodeRangeList = [ - (0x0100, 0x017F), - ] - - class LatinB(unicode_set): - """Unicode set for Latin-B Unicode Character Range""" - _ranges: UnicodeRangeList = [ - (0x0180, 0x024F), - ] - - class Greek(unicode_set): - """Unicode set for Greek Unicode Character Ranges""" - _ranges: UnicodeRangeList = [ - (0x0342, 0x0345), - (0x0370, 0x0377), - (0x037A, 0x037F), - (0x0384, 0x038A), - (0x038C,), - (0x038E, 0x03A1), - (0x03A3, 0x03E1), - (0x03F0, 0x03FF), - (0x1D26, 0x1D2A), - (0x1D5E,), - (0x1D60,), - (0x1D66, 0x1D6A), - (0x1F00, 0x1F15), - (0x1F18, 0x1F1D), - (0x1F20, 0x1F45), - (0x1F48, 0x1F4D), - (0x1F50, 0x1F57), - (0x1F59,), - (0x1F5B,), - (0x1F5D,), - (0x1F5F, 0x1F7D), - (0x1F80, 0x1FB4), - (0x1FB6, 0x1FC4), - (0x1FC6, 0x1FD3), - (0x1FD6, 0x1FDB), - (0x1FDD, 0x1FEF), - (0x1FF2, 0x1FF4), - (0x1FF6, 0x1FFE), - (0x2129,), - (0x2719, 0x271A), - (0xAB65,), - (0x10140, 0x1018D), - (0x101A0,), - (0x1D200, 0x1D245), - (0x1F7A1, 0x1F7A7), - ] - - class Cyrillic(unicode_set): - """Unicode set for Cyrillic Unicode Character Range""" - _ranges: UnicodeRangeList = [ - (0x0400, 0x052F), - (0x1C80, 0x1C88), - (0x1D2B,), - (0x1D78,), - (0x2DE0, 0x2DFF), - (0xA640, 0xA672), - (0xA674, 0xA69F), - (0xFE2E, 0xFE2F), - ] - - class Chinese(unicode_set): - """Unicode set for Chinese Unicode Character Range""" - _ranges: UnicodeRangeList = [ - (0x2E80, 0x2E99), - (0x2E9B, 0x2EF3), - (0x31C0, 0x31E3), - (0x3400, 0x4DB5), - (0x4E00, 0x9FEF), - (0xA700, 0xA707), - (0xF900, 0xFA6D), - (0xFA70, 0xFAD9), - (0x16FE2, 0x16FE3), - (0x1F210, 0x1F212), - (0x1F214, 0x1F23B), - (0x1F240, 0x1F248), - (0x20000, 0x2A6D6), - (0x2A700, 0x2B734), - (0x2B740, 0x2B81D), - (0x2B820, 0x2CEA1), - (0x2CEB0, 0x2EBE0), - (0x2F800, 0x2FA1D), - ] - - class Japanese(unicode_set): - """Unicode set for Japanese Unicode Character Range, combining Kanji, Hiragana, and Katakana ranges""" - - class Kanji(unicode_set): - "Unicode set for Kanji Unicode Character Range" - _ranges: UnicodeRangeList = [ - (0x4E00, 0x9FBF), - (0x3000, 0x303F), - ] - - class Hiragana(unicode_set): - """Unicode set for Hiragana Unicode Character Range""" - _ranges: UnicodeRangeList = [ - (0x3041, 0x3096), - (0x3099, 0x30A0), - (0x30FC,), - (0xFF70,), - (0x1B001,), - (0x1B150, 0x1B152), - (0x1F200,), - ] - - class Katakana(unicode_set): - """Unicode set for Katakana Unicode Character Range""" - _ranges: UnicodeRangeList = [ - (0x3099, 0x309C), - (0x30A0, 0x30FF), - (0x31F0, 0x31FF), - (0x32D0, 0x32FE), - (0xFF65, 0xFF9F), - (0x1B000,), - (0x1B164, 0x1B167), - (0x1F201, 0x1F202), - (0x1F213,), - ] - - 漢字 = Kanji - カタカナ = Katakana - ひらがな = Hiragana - - _ranges = ( - Kanji._ranges - + Hiragana._ranges - + Katakana._ranges - ) - - class Hangul(unicode_set): - """Unicode set for Hangul (Korean) Unicode Character Range""" - _ranges: UnicodeRangeList = [ - (0x1100, 0x11FF), - (0x302E, 0x302F), - (0x3131, 0x318E), - (0x3200, 0x321C), - (0x3260, 0x327B), - (0x327E,), - (0xA960, 0xA97C), - (0xAC00, 0xD7A3), - (0xD7B0, 0xD7C6), - (0xD7CB, 
0xD7FB), - (0xFFA0, 0xFFBE), - (0xFFC2, 0xFFC7), - (0xFFCA, 0xFFCF), - (0xFFD2, 0xFFD7), - (0xFFDA, 0xFFDC), - ] - - Korean = Hangul - - class CJK(Chinese, Japanese, Hangul): - """Unicode set for combined Chinese, Japanese, and Korean (CJK) Unicode Character Range""" - - class Thai(unicode_set): - """Unicode set for Thai Unicode Character Range""" - _ranges: UnicodeRangeList = [ - (0x0E01, 0x0E3A), - (0x0E3F, 0x0E5B) - ] - - class Arabic(unicode_set): - """Unicode set for Arabic Unicode Character Range""" - _ranges: UnicodeRangeList = [ - (0x0600, 0x061B), - (0x061E, 0x06FF), - (0x0700, 0x077F), - ] - - class Hebrew(unicode_set): - """Unicode set for Hebrew Unicode Character Range""" - _ranges: UnicodeRangeList = [ - (0x0591, 0x05C7), - (0x05D0, 0x05EA), - (0x05EF, 0x05F4), - (0xFB1D, 0xFB36), - (0xFB38, 0xFB3C), - (0xFB3E,), - (0xFB40, 0xFB41), - (0xFB43, 0xFB44), - (0xFB46, 0xFB4F), - ] - - class Devanagari(unicode_set): - """Unicode set for Devanagari Unicode Character Range""" - _ranges: UnicodeRangeList = [ - (0x0900, 0x097F), - (0xA8E0, 0xA8FF) - ] - - BMP = BasicMultilingualPlane - - # add language identifiers using language Unicode - العربية = Arabic - 中文 = Chinese - кириллица = Cyrillic - Ελληνικά = Greek - עִברִית = Hebrew - 日本語 = Japanese - 한국어 = Korean - ไทย = Thai - देवनागरी = Devanagari - - # fmt: on diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/unixccompiler.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/unixccompiler.py deleted file mode 100644 index 6ca2332ae16a575a850fe97e5bc1e42d33b7b2f2..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/unixccompiler.py +++ /dev/null @@ -1,400 +0,0 @@ -"""distutils.unixccompiler - -Contains the UnixCCompiler class, a subclass of CCompiler that handles -the "typical" Unix-style command-line C compiler: - * macros defined with -Dname[=value] - * macros undefined with -Uname - * include search directories specified with -Idir - * libraries specified with -lllib - * library search directories specified with -Ldir - * compile handled by 'cc' (or similar) executable with -c option: - compiles .c to .o - * link static library handled by 'ar' command (possibly with 'ranlib') - * link shared library handled by 'cc -shared' -""" - -import os -import sys -import re -import shlex -import itertools - -from . import sysconfig -from .dep_util import newer -from .ccompiler import CCompiler, gen_preprocess_options, gen_lib_options -from .errors import DistutilsExecError, CompileError, LibError, LinkError -from ._log import log -from ._macos_compat import compiler_fixup - -# XXX Things not currently handled: -# * optimization/debug/warning flags; we just use whatever's in Python's -# Makefile and live with it. Is this adequate? If not, we might -# have to have a bunch of subclasses GNUCCompiler, SGICCompiler, -# SunCCompiler, and I suspect down that road lies madness. -# * even if we don't know a warning flag from an optimization flag, -# we need some way for outsiders to feed preprocessor/compiler/linker -# flags in to us -- eg. a sysadmin might want to mandate certain flags -# via a site config file, or a user might want to set something for -# compiling this module distribution only via the setup.py command -# line, whatever. 
As long as these options come from something on the -# current system, they can be as system-dependent as they like, and we -# should just happily stuff them into the preprocessor/compiler/linker -# options and carry on. - - -def _split_env(cmd): - """ - For macOS, split command into 'env' portion (if any) - and the rest of the linker command. - - >>> _split_env(['a', 'b', 'c']) - ([], ['a', 'b', 'c']) - >>> _split_env(['/usr/bin/env', 'A=3', 'gcc']) - (['/usr/bin/env', 'A=3'], ['gcc']) - """ - pivot = 0 - if os.path.basename(cmd[0]) == "env": - pivot = 1 - while '=' in cmd[pivot]: - pivot += 1 - return cmd[:pivot], cmd[pivot:] - - -def _split_aix(cmd): - """ - AIX platforms prefix the compiler with the ld_so_aix - script, so split that from the linker command. - - >>> _split_aix(['a', 'b', 'c']) - ([], ['a', 'b', 'c']) - >>> _split_aix(['/bin/foo/ld_so_aix', 'gcc']) - (['/bin/foo/ld_so_aix'], ['gcc']) - """ - pivot = os.path.basename(cmd[0]) == 'ld_so_aix' - return cmd[:pivot], cmd[pivot:] - - -def _linker_params(linker_cmd, compiler_cmd): - """ - The linker command usually begins with the compiler - command (possibly multiple elements), followed by zero or more - params for shared library building. - - If the LDSHARED env variable overrides the linker command, - however, the commands may not match. - - Return the best guess of the linker parameters by stripping - the linker command. If the compiler command does not - match the linker command, assume the linker command is - just the first element. - - >>> _linker_params('gcc foo bar'.split(), ['gcc']) - ['foo', 'bar'] - >>> _linker_params('gcc foo bar'.split(), ['other']) - ['foo', 'bar'] - >>> _linker_params('ccache gcc foo bar'.split(), 'ccache gcc'.split()) - ['foo', 'bar'] - >>> _linker_params(['gcc'], ['gcc']) - [] - """ - c_len = len(compiler_cmd) - pivot = c_len if linker_cmd[:c_len] == compiler_cmd else 1 - return linker_cmd[pivot:] - - -class UnixCCompiler(CCompiler): - compiler_type = 'unix' - - # These are used by CCompiler in two places: the constructor sets - # instance attributes 'preprocessor', 'compiler', etc. from them, and - # 'set_executable()' allows any of these to be set. The defaults here - # are pretty generic; they will probably have to be set by an outsider - # (eg. using information discovered by the sysconfig about building - # Python extensions). - executables = { - 'preprocessor': None, - 'compiler': ["cc"], - 'compiler_so': ["cc"], - 'compiler_cxx': ["cc"], - 'linker_so': ["cc", "-shared"], - 'linker_exe': ["cc"], - 'archiver': ["ar", "-cr"], - 'ranlib': None, - } - - if sys.platform[:6] == "darwin": - executables['ranlib'] = ["ranlib"] - - # Needed for the filename generation methods provided by the base - # class, CCompiler. NB. whoever instantiates/uses a particular - # UnixCCompiler instance should set 'shared_lib_ext' -- we set a - # reasonable common default here, but it's not necessarily used on all - # Unices! 
- - src_extensions = [".c", ".C", ".cc", ".cxx", ".cpp", ".m"] - obj_extension = ".o" - static_lib_extension = ".a" - shared_lib_extension = ".so" - dylib_lib_extension = ".dylib" - xcode_stub_lib_extension = ".tbd" - static_lib_format = shared_lib_format = dylib_lib_format = "lib%s%s" - xcode_stub_lib_format = dylib_lib_format - if sys.platform == "cygwin": - exe_extension = ".exe" - - def preprocess( - self, - source, - output_file=None, - macros=None, - include_dirs=None, - extra_preargs=None, - extra_postargs=None, - ): - fixed_args = self._fix_compile_args(None, macros, include_dirs) - ignore, macros, include_dirs = fixed_args - pp_opts = gen_preprocess_options(macros, include_dirs) - pp_args = self.preprocessor + pp_opts - if output_file: - pp_args.extend(['-o', output_file]) - if extra_preargs: - pp_args[:0] = extra_preargs - if extra_postargs: - pp_args.extend(extra_postargs) - pp_args.append(source) - - # reasons to preprocess: - # - force is indicated - # - output is directed to stdout - # - source file is newer than the target - preprocess = self.force or output_file is None or newer(source, output_file) - if not preprocess: - return - - if output_file: - self.mkpath(os.path.dirname(output_file)) - - try: - self.spawn(pp_args) - except DistutilsExecError as msg: - raise CompileError(msg) - - def _compile(self, obj, src, ext, cc_args, extra_postargs, pp_opts): - compiler_so = compiler_fixup(self.compiler_so, cc_args + extra_postargs) - try: - self.spawn(compiler_so + cc_args + [src, '-o', obj] + extra_postargs) - except DistutilsExecError as msg: - raise CompileError(msg) - - def create_static_lib( - self, objects, output_libname, output_dir=None, debug=0, target_lang=None - ): - objects, output_dir = self._fix_object_args(objects, output_dir) - - output_filename = self.library_filename(output_libname, output_dir=output_dir) - - if self._need_link(objects, output_filename): - self.mkpath(os.path.dirname(output_filename)) - self.spawn(self.archiver + [output_filename] + objects + self.objects) - - # Not many Unices required ranlib anymore -- SunOS 4.x is, I - # think the only major Unix that does. Maybe we need some - # platform intelligence here to skip ranlib if it's not - # needed -- or maybe Python's configure script took care of - # it for us, hence the check for leading colon. 
- if self.ranlib: - try: - self.spawn(self.ranlib + [output_filename]) - except DistutilsExecError as msg: - raise LibError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - - def link( - self, - target_desc, - objects, - output_filename, - output_dir=None, - libraries=None, - library_dirs=None, - runtime_library_dirs=None, - export_symbols=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - build_temp=None, - target_lang=None, - ): - objects, output_dir = self._fix_object_args(objects, output_dir) - fixed_args = self._fix_lib_args(libraries, library_dirs, runtime_library_dirs) - libraries, library_dirs, runtime_library_dirs = fixed_args - - lib_opts = gen_lib_options(self, library_dirs, runtime_library_dirs, libraries) - if not isinstance(output_dir, (str, type(None))): - raise TypeError("'output_dir' must be a string or None") - if output_dir is not None: - output_filename = os.path.join(output_dir, output_filename) - - if self._need_link(objects, output_filename): - ld_args = objects + self.objects + lib_opts + ['-o', output_filename] - if debug: - ld_args[:0] = ['-g'] - if extra_preargs: - ld_args[:0] = extra_preargs - if extra_postargs: - ld_args.extend(extra_postargs) - self.mkpath(os.path.dirname(output_filename)) - try: - # Select a linker based on context: linker_exe when - # building an executable or linker_so (with shared options) - # when building a shared library. - building_exe = target_desc == CCompiler.EXECUTABLE - linker = (self.linker_exe if building_exe else self.linker_so)[:] - - if target_lang == "c++" and self.compiler_cxx: - env, linker_ne = _split_env(linker) - aix, linker_na = _split_aix(linker_ne) - _, compiler_cxx_ne = _split_env(self.compiler_cxx) - _, linker_exe_ne = _split_env(self.linker_exe) - - params = _linker_params(linker_na, linker_exe_ne) - linker = env + aix + compiler_cxx_ne + params - - linker = compiler_fixup(linker, ld_args) - - self.spawn(linker + ld_args) - except DistutilsExecError as msg: - raise LinkError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - - # -- Miscellaneous methods ----------------------------------------- - # These are all used by the 'gen_lib_options() function, in - # ccompiler.py. - - def library_dir_option(self, dir): - return "-L" + dir - - def _is_gcc(self): - cc_var = sysconfig.get_config_var("CC") - compiler = os.path.basename(shlex.split(cc_var)[0]) - return "gcc" in compiler or "g++" in compiler - - def runtime_library_dir_option(self, dir): - # XXX Hackish, at the very least. See Python bug #445902: - # http://sourceforge.net/tracker/index.php - # ?func=detail&aid=445902&group_id=5470&atid=105470 - # Linkers on different platforms need different options to - # specify that directories need to be added to the list of - # directories searched for dependencies when a dynamic library - # is sought. GCC on GNU systems (Linux, FreeBSD, ...) has to - # be told to pass the -R option through to the linker, whereas - # other compilers and gcc on other systems just know this. - # Other compilers may need something slightly different. At - # this time, there's no way to determine this information from - # the configuration data stored in the Python installation, so - # we use this hack. 
- if sys.platform[:6] == "darwin": - from distutils.util import get_macosx_target_ver, split_version - - macosx_target_ver = get_macosx_target_ver() - if macosx_target_ver and split_version(macosx_target_ver) >= [10, 5]: - return "-Wl,-rpath," + dir - else: # no support for -rpath on earlier macOS versions - return "-L" + dir - elif sys.platform[:7] == "freebsd": - return "-Wl,-rpath=" + dir - elif sys.platform[:5] == "hp-ux": - return [ - "-Wl,+s" if self._is_gcc() else "+s", - "-L" + dir, - ] - - # For all compilers, `-Wl` is the presumed way to - # pass a compiler option to the linker and `-R` is - # the way to pass an RPATH. - if sysconfig.get_config_var("GNULD") == "yes": - # GNU ld needs an extra option to get a RUNPATH - # instead of just an RPATH. - return "-Wl,--enable-new-dtags,-R" + dir - else: - return "-Wl,-R" + dir - - def library_option(self, lib): - return "-l" + lib - - @staticmethod - def _library_root(dir): - """ - macOS users can specify an alternate SDK using'-isysroot'. - Calculate the SDK root if it is specified. - - Note that, as of Xcode 7, Apple SDKs may contain textual stub - libraries with .tbd extensions rather than the normal .dylib - shared libraries installed in /. The Apple compiler tool - chain handles this transparently but it can cause problems - for programs that are being built with an SDK and searching - for specific libraries. Callers of find_library_file need to - keep in mind that the base filename of the returned SDK library - file might have a different extension from that of the library - file installed on the running system, for example: - /Applications/Xcode.app/Contents/Developer/Platforms/ - MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/ - usr/lib/libedit.tbd - vs - /usr/lib/libedit.dylib - """ - cflags = sysconfig.get_config_var('CFLAGS') - match = re.search(r'-isysroot\s*(\S+)', cflags) - - apply_root = ( - sys.platform == 'darwin' - and match - and ( - dir.startswith('/System/') - or (dir.startswith('/usr/') and not dir.startswith('/usr/local/')) - ) - ) - - return os.path.join(match.group(1), dir[1:]) if apply_root else dir - - def find_library_file(self, dirs, lib, debug=0): - r""" - Second-guess the linker with not much hard - data to go on: GCC seems to prefer the shared library, so - assume that *all* Unix C compilers do, - ignoring even GCC's "-static" option. - - >>> compiler = UnixCCompiler() - >>> compiler._library_root = lambda dir: dir - >>> monkeypatch = getfixture('monkeypatch') - >>> monkeypatch.setattr(os.path, 'exists', lambda d: 'existing' in d) - >>> dirs = ('/foo/bar/missing', '/foo/bar/existing') - >>> compiler.find_library_file(dirs, 'abc').replace('\\', '/') - '/foo/bar/existing/libabc.dylib' - >>> compiler.find_library_file(reversed(dirs), 'abc').replace('\\', '/') - '/foo/bar/existing/libabc.dylib' - >>> monkeypatch.setattr(os.path, 'exists', - ... lambda d: 'existing' in d and '.a' in d) - >>> compiler.find_library_file(dirs, 'abc').replace('\\', '/') - '/foo/bar/existing/libabc.a' - >>> compiler.find_library_file(reversed(dirs), 'abc').replace('\\', '/') - '/foo/bar/existing/libabc.a' - """ - lib_names = ( - self.library_filename(lib, lib_type=type) - for type in 'dylib xcode_stub shared static'.split() - ) - - roots = map(self._library_root, dirs) - - searched = ( - os.path.join(root, lib_name) - for root, lib_name in itertools.product(roots, lib_names) - ) - - found = filter(os.path.exists, searched) - - # Return None if it could not be found in any dir. 
- return next(found, None) diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/evaluation/rotated_coco_evaluation.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/evaluation/rotated_coco_evaluation.py deleted file mode 100644 index ea6d1b381dcf106339a03f08577df673ad439c46..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/evaluation/rotated_coco_evaluation.py +++ /dev/null @@ -1,207 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import itertools -import json -import numpy as np -import os -import torch -from pycocotools.cocoeval import COCOeval, maskUtils - -from detectron2.structures import BoxMode, RotatedBoxes, pairwise_iou_rotated -from detectron2.utils.file_io import PathManager - -from .coco_evaluation import COCOEvaluator - - -class RotatedCOCOeval(COCOeval): - @staticmethod - def is_rotated(box_list): - if type(box_list) == np.ndarray: - return box_list.shape[1] == 5 - elif type(box_list) == list: - if box_list == []: # cannot decide the box_dim - return False - return np.all( - np.array( - [ - (len(obj) == 5) and ((type(obj) == list) or (type(obj) == np.ndarray)) - for obj in box_list - ] - ) - ) - return False - - @staticmethod - def boxlist_to_tensor(boxlist, output_box_dim): - if type(boxlist) == np.ndarray: - box_tensor = torch.from_numpy(boxlist) - elif type(boxlist) == list: - if boxlist == []: - return torch.zeros((0, output_box_dim), dtype=torch.float32) - else: - box_tensor = torch.FloatTensor(boxlist) - else: - raise Exception("Unrecognized boxlist type") - - input_box_dim = box_tensor.shape[1] - if input_box_dim != output_box_dim: - if input_box_dim == 4 and output_box_dim == 5: - box_tensor = BoxMode.convert(box_tensor, BoxMode.XYWH_ABS, BoxMode.XYWHA_ABS) - else: - raise Exception( - "Unable to convert from {}-dim box to {}-dim box".format( - input_box_dim, output_box_dim - ) - ) - return box_tensor - - def compute_iou_dt_gt(self, dt, gt, is_crowd): - if self.is_rotated(dt) or self.is_rotated(gt): - # TODO: take is_crowd into consideration - assert all(c == 0 for c in is_crowd) - dt = RotatedBoxes(self.boxlist_to_tensor(dt, output_box_dim=5)) - gt = RotatedBoxes(self.boxlist_to_tensor(gt, output_box_dim=5)) - return pairwise_iou_rotated(dt, gt) - else: - # This is the same as the classical COCO evaluation - return maskUtils.iou(dt, gt, is_crowd) - - def computeIoU(self, imgId, catId): - p = self.params - if p.useCats: - gt = self._gts[imgId, catId] - dt = self._dts[imgId, catId] - else: - gt = [_ for cId in p.catIds for _ in self._gts[imgId, cId]] - dt = [_ for cId in p.catIds for _ in self._dts[imgId, cId]] - if len(gt) == 0 and len(dt) == 0: - return [] - inds = np.argsort([-d["score"] for d in dt], kind="mergesort") - dt = [dt[i] for i in inds] - if len(dt) > p.maxDets[-1]: - dt = dt[0 : p.maxDets[-1]] - - assert p.iouType == "bbox", "unsupported iouType for iou computation" - - g = [g["bbox"] for g in gt] - d = [d["bbox"] for d in dt] - - # compute iou between each dt and gt region - iscrowd = [int(o["iscrowd"]) for o in gt] - - # Note: this function is copied from cocoeval.py in cocoapi - # and the major difference is here. - ious = self.compute_iou_dt_gt(d, g, iscrowd) - return ious - - -class RotatedCOCOEvaluator(COCOEvaluator): - """ - Evaluate object proposal/instance detection outputs using COCO-like metrics and APIs, - with rotated boxes support. - Note: this uses IOU only and does not consider angle differences. 
- """ - - def process(self, inputs, outputs): - """ - Args: - inputs: the inputs to a COCO model (e.g., GeneralizedRCNN). - It is a list of dict. Each dict corresponds to an image and - contains keys like "height", "width", "file_name", "image_id". - outputs: the outputs of a COCO model. It is a list of dicts with key - "instances" that contains :class:`Instances`. - """ - for input, output in zip(inputs, outputs): - prediction = {"image_id": input["image_id"]} - - if "instances" in output: - instances = output["instances"].to(self._cpu_device) - - prediction["instances"] = self.instances_to_json(instances, input["image_id"]) - if "proposals" in output: - prediction["proposals"] = output["proposals"].to(self._cpu_device) - self._predictions.append(prediction) - - def instances_to_json(self, instances, img_id): - num_instance = len(instances) - if num_instance == 0: - return [] - - boxes = instances.pred_boxes.tensor.numpy() - if boxes.shape[1] == 4: - boxes = BoxMode.convert(boxes, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS) - boxes = boxes.tolist() - scores = instances.scores.tolist() - classes = instances.pred_classes.tolist() - - results = [] - for k in range(num_instance): - result = { - "image_id": img_id, - "category_id": classes[k], - "bbox": boxes[k], - "score": scores[k], - } - - results.append(result) - return results - - def _eval_predictions(self, predictions, img_ids=None): # img_ids: unused - """ - Evaluate predictions on the given tasks. - Fill self._results with the metrics of the tasks. - """ - self._logger.info("Preparing results for COCO format ...") - coco_results = list(itertools.chain(*[x["instances"] for x in predictions])) - - # unmap the category ids for COCO - if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"): - reverse_id_mapping = { - v: k for k, v in self._metadata.thing_dataset_id_to_contiguous_id.items() - } - for result in coco_results: - result["category_id"] = reverse_id_mapping[result["category_id"]] - - if self._output_dir: - file_path = os.path.join(self._output_dir, "coco_instances_results.json") - self._logger.info("Saving results to {}".format(file_path)) - with PathManager.open(file_path, "w") as f: - f.write(json.dumps(coco_results)) - f.flush() - - if not self._do_evaluation: - self._logger.info("Annotations are not available for evaluation.") - return - - self._logger.info("Evaluating predictions ...") - - assert self._tasks is None or set(self._tasks) == { - "bbox" - }, "[RotatedCOCOEvaluator] Only bbox evaluation is supported" - coco_eval = ( - self._evaluate_predictions_on_coco(self._coco_api, coco_results) - if len(coco_results) > 0 - else None # cocoapi does not handle empty results very well - ) - - task = "bbox" - res = self._derive_coco_results( - coco_eval, task, class_names=self._metadata.get("thing_classes") - ) - self._results[task] = res - - def _evaluate_predictions_on_coco(self, coco_gt, coco_results): - """ - Evaluate the coco results using COCOEval API. 
- """ - assert len(coco_results) > 0 - - coco_dt = coco_gt.loadRes(coco_results) - - # Only bbox is supported for now - coco_eval = RotatedCOCOeval(coco_gt, coco_dt, iouType="bbox") - - coco_eval.evaluate() - coco_eval.accumulate() - coco_eval.summarize() - - return coco_eval diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/registry.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/registry.py deleted file mode 100644 index 4b01e9007c2578a7b5ae555c926cc06c8a3010f9..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/utils/registry.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from typing import Any -import pydoc -from fvcore.common.registry import Registry # for backward compatibility. - -""" -``Registry`` and `locate` provide ways to map a string (typically found -in config files) to callable objects. -""" - -__all__ = ["Registry", "locate"] - - -def _convert_target_to_string(t: Any) -> str: - """ - Inverse of ``locate()``. - - Args: - t: any object with ``__module__`` and ``__qualname__`` - """ - module, qualname = t.__module__, t.__qualname__ - - # Compress the path to this object, e.g. ``module.submodule._impl.class`` - # may become ``module.submodule.class``, if the later also resolves to the same - # object. This simplifies the string, and also is less affected by moving the - # class implementation. - module_parts = module.split(".") - for k in range(1, len(module_parts)): - prefix = ".".join(module_parts[:k]) - candidate = f"{prefix}.{qualname}" - try: - if locate(candidate) is t: - return candidate - except ImportError: - pass - return f"{module}.{qualname}" - - -def locate(name: str) -> Any: - """ - Locate and return an object ``x`` using an input string ``{x.__module__}.{x.__qualname__}``, - such as "module.submodule.class_name". - - Raise Exception if it cannot be found. - """ - obj = pydoc.locate(name) - - # Some cases (e.g. torch.optim.sgd.SGD) not handled correctly - # by pydoc.locate. Try a private function from hydra. 
- if obj is None: - try: - # from hydra.utils import get_method - will print many errors - from hydra.utils import _locate - except ImportError as e: - raise ImportError(f"Cannot dynamically locate object {name}!") from e - else: - obj = _locate(name) # it raises if fails - - return obj diff --git a/spaces/TheKitten/Fast-Images-Creature/README.md b/spaces/TheKitten/Fast-Images-Creature/README.md deleted file mode 100644 index e86e1a8d30bd80a0bd7d87fa092c0f05457969a8..0000000000000000000000000000000000000000 --- a/spaces/TheKitten/Fast-Images-Creature/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Fast Images Creature (400 Models) -emoji: ⭐️ -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/UndueTarget/youtube-whisper/README.md b/spaces/UndueTarget/youtube-whisper/README.md deleted file mode 100644 index c3180680339155aaf1d27f629129b68d12cac021..0000000000000000000000000000000000000000 --- a/spaces/UndueTarget/youtube-whisper/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Youtube Whisper -emoji: ⚡ -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: unknown -duplicated_from: kazuk/youtube-whisper ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Voicelab/vlT5-rfc-generation/README.md b/spaces/Voicelab/vlT5-rfc-generation/README.md deleted file mode 100644 index 2757b96b6c4272da09f74a736a852cb198faa0f6..0000000000000000000000000000000000000000 --- a/spaces/Voicelab/vlT5-rfc-generation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: VlT5 Reason for contact generation -emoji: 📱 -colorFrom: blue -colorTo: green -sdk: streamlit -sdk_version: 1.15.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/WUXIAOMO/stabilityai-stable-diffusion-2-1-test-space/README.md b/spaces/WUXIAOMO/stabilityai-stable-diffusion-2-1-test-space/README.md deleted file mode 100644 index 13237c24f947a7918ea3c2fc039c9e7d4e573849..0000000000000000000000000000000000000000 --- a/spaces/WUXIAOMO/stabilityai-stable-diffusion-2-1-test-space/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Stabilityai Stable Diffusion 2 1 Test Space -emoji: 🚀 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Wauplin/bloomz.cpp-converter/app.py b/spaces/Wauplin/bloomz.cpp-converter/app.py deleted file mode 100644 index 2ae53d9b897412bc89e6e76d3ef5b70a9ab35da1..0000000000000000000000000000000000000000 --- a/spaces/Wauplin/bloomz.cpp-converter/app.py +++ /dev/null @@ -1,293 +0,0 @@ -import csv -import os -import shutil -from datetime import datetime -from pathlib import Path -from tempfile import TemporaryDirectory -from typing import Optional - -import gradio as gr -from huggingface_hub import HfApi, ModelCard, Repository, scan_cache_dir -from huggingface_hub.utils import EntryNotFoundError, RepositoryNotFoundError - -from convert import convert - -# Repo with files totalling more than 24GB are not converted. Avoid to have a memory issue. 
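# A minimal sketch, for illustration, of the size check that run() performs
# further down via model_info(files_metadata=True): it sums the reported sizes
# of the .bin/.pt weight files of a public repo before attempting a conversion.
# The helper name is hypothetical; only documented huggingface_hub calls are used.
def _sketch_total_weight_size(repo_id: str) -> int:
    from huggingface_hub import HfApi  # local import keeps the sketch self-contained

    info = HfApi().model_info(repo_id, files_metadata=True)  # public repo assumed
    return sum(
        (sibling.size or 0)  # size is only populated when files_metadata=True
        for sibling in info.siblings
        if sibling.rfilename.endswith((".bin", ".pt"))
    )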
-try: - MAX_REPO_SIZE = int(os.environ.get("MAX_REPO_SIZE")) -except: - MAX_REPO_SIZE = 24 * 1000 * 1000 * 1000 - -# Used to log Space usage -# Taken from https://huggingface.co/spaces/onnx/export -DATASET_REPO_ID = "Wauplin/bloom.cpp-converters" -DATASET_LOCAL_DIR = "usage_data" -DATASET_LOCAL_FILE = Path(DATASET_LOCAL_DIR) / "data.csv" -HF_TOKEN = os.environ.get("HF_TOKEN") - -repo: Optional[Repository] = None -if HF_TOKEN: - repo = Repository( - local_dir=DATASET_LOCAL_DIR, - clone_from=DATASET_REPO_ID, - repo_type="dataset", - token=HF_TOKEN, - ) - - -class Generator: - # Taken from https://stackoverflow.com/a/34073559 - # Allows to log process in Gradio - def __init__(self, gen): - self.gen = gen - - def __iter__(self): - self.value = yield from self.gen - - -def run( - token: str, model_id: str, precision: str, quantization: bool, destination: str -): - _log_usage( - status="start", - model_id=model_id, - precision=precision, - quantization=quantization, - destination=destination, - pr_url=None, - ) - _all_logs = [] - - def _log(msg: str): - print(msg) # for container logs - _all_logs.append(msg) - return "\n\n".join(_all_logs) # for Gradio output - - if token == "" or model_id == "": - yield _log("### Invalid input 🐞\n\nPlease fill a token and model_id.") - _log_usage( - status="invalid input", - model_id=model_id, - precision=precision, - quantization=quantization, - destination=destination, - pr_url=None, - ) - return - if destination == "": - _log("Destination not provided. Will default to the initial repo.") - destination = model_id - - api = HfApi(token=token) - try: - # TODO: make a PR to bloomz.cpp to be able to pass a token - model_info = api.model_info(repo_id=model_id, files_metadata=True, token=False) - _log(f"Model {model_id} exists.") - except RepositoryNotFoundError: - yield _log( - f"\n### Error 😢😢😢\n\nRepository {model_id} not found. Only public models are convertible at the moment." - ) - _log_usage( - status="model not found", - model_id=model_id, - precision=precision, - quantization=quantization, - destination=destination, - pr_url=None, - ) - return - - try: - total_size = sum( - file.size - for file in model_info.siblings - if file.rfilename.endswith(".pt") or file.rfilename.endswith(".bin") - ) - if total_size > MAX_REPO_SIZE: - yield _log( - f"### Unprocessable 😢😢😢\n\nModel {model_id} is too big and cannot be processed in this Space. This Space needs to be able to load the model in memory before converting it. To avoid a memory issue, we do not process models bigger than {MAX_REPO_SIZE}b.\n\nYou have 2 options:\n- [Duplicate this Space](https://huggingface.co/spaces/Wauplin/bloomz.cpp-converter?duplicate=true) and assign a bigger machine. You will need to set 'MAX_REPO_SIZE' as a secret to overwrite the default value. Once you are done, remove the upgraded hardware and/or delete the Space.\n- Manually convert the weights by following [this guide](https://github.com/NouamaneTazi/bloomz.cpp#usage)." 
- ) - _log_usage( - status="unprocessable", - model_id=model_id, - precision=precision, - quantization=quantization, - destination=destination, - pr_url=None, - ) - return - - with TemporaryDirectory() as cache_folder: - convert_progress = Generator( - convert( - cache_folder=Path(cache_folder), - model_id=model_id, - precision=precision, - quantization=quantization, - ) - ) - for msg in convert_progress: - yield _log(msg) - model_path = convert_progress.value - yield _log(f"Model converted: {model_path}") - - destination_url = api.create_repo(repo_id=destination, exist_ok=True) - destination = destination_url.repo_id - yield _log(f"Destination model: {destination_url}") - pr = api.create_pull_request( - repo_id=destination_url.repo_id, - title=f"Add {model_path.name} from bloomz.cpp converter.", - description="This PR has been created using the [bloomz.cpp converter Space](https://huggingface.co/spaces/Wauplin/bloomz.cpp-converter). It adds weights compatible with the [bloomz.cpp](https://github.com/NouamaneTazi/bloomz.cpp#usage) project.", - ) - pr_url = f"https://huggingface.co/{destination}/discussions/{pr.num}" - yield _log(f"Created PR: {pr_url} (empty)") - - yield _log(f"Uploading model to PR") - api.upload_file( - repo_id=destination, - path_or_fileobj=model_path, - path_in_repo=model_path.name, - revision=pr.git_reference, - ) - yield _log(f"Model uploaded to PR") - - yield _log(f"Modifying model card in PR (add `bloom` and `ggml` tags)") - try: - card = ModelCard.load(repo_id_or_path=destination) - except EntryNotFoundError: # new repo => no model card yet - card = ModelCard( - "This model contains a model based on the Bloom architecture with weights compatible with [bloomz.cpp](https://github.com/NouamaneTazi/bloomz.cpp). This model card has been automatically generated [by the bloomz.cpp converter Space](https://huggingface.co/spaces/Wauplin/bloomz.cpp-converter) and must be completed." - ) - if card.data.tags is None: - card.data.tags = [] - tags = card.data.tags - if "ggml" not in tags: - tags.append("ggml") - if "bloom" not in tags: - tags.append("bloom") - card.push_to_hub( - repo_id=destination, token=token, revision=pr.git_reference - ) - yield _log(f"Model card modified in PR.") - - api.change_discussion_status( - repo_id=destination, - discussion_num=pr.num, - new_status="open", - comment="PR is now complete and ready to be reviewed.", - ) - yield _log(f"[PR]({pr_url}) is complete and ready to be reviewed.") - - yield _log( - f"### Success 🔥\n\nYay! This model was successfully converted! Make sure to let the repo owner know about it and review your PR. You might need to complete the PR manually, especially to add information in the model card." 
- ) - _log_usage( - status="success", - model_id=model_id, - precision=precision, - quantization=quantization, - destination=destination, - pr_url=pr_url, - ) - shutil.rmtree(model_path.parent) - _delete_cache() - return - except Exception as e: - _log_usage( - status="error", - model_id=model_id, - precision=precision, - quantization=quantization, - destination=destination, - pr_url=None, - ) - yield _log(f"### Error 😢😢😢\n\n{e}") - _delete_cache() - return - - -def _delete_cache(): - """Delete cache dir between each run to avoid filling up the Space disk.""" - scan = scan_cache_dir() - scan.delete_revisions( - *[rev.commit_hash for repo in scan.repos for rev in repo.revisions] - ) - - -def _log_usage(**kwargs): - # save in a private dataset - # Taken from https://huggingface.co/spaces/onnx/export - if repo is not None: - repo.git_pull(rebase=True) - with DATASET_LOCAL_FILE.open("a") as csv_file: - writer = csv.DictWriter(csv_file, fieldnames=["time"] + list(kwargs.keys())) - writer.writerow({"time": str(datetime.now()), **kwargs}) - commit_url = repo.push_to_hub() - print("[dataset]", commit_url) - - -TITLE = """ -

    - Make any BLOOM-like model compatible with bloomz.cpp -

    -""" - -DESCRIPTION = """ -This Space allows you to automatically export any Bloom-like model hosted on the 🤗 Hub to be compatible with [bloomz.cpp](https://github.com/NouamaneTazi/bloomz.cpp). Converted weights are either exported to a repo you own (or that we create for you) or to the original repo by opening a PR on the target model. Once exported, the model can run with bloomz.cpp. Check out [this guide](https://github.com/NouamaneTazi/bloomz.cpp#usage) to see how! - -Don't know which Bloom model are available on the 🤗 Hub? Find a complete list at https://huggingface.co/models?other=bloom. - -To use this Space, please follow these steps: - -1. Paste your HF token. You can create one in your [settings page](https://huggingface.co/settings/tokens). The token requires a write-access token to create a PR and upload the weights. -1. Input a model id from the Hub. This model must be public. -1. Choose which precision you want to use (default to FP16). -1. (optional) Opt-in for 4-bit quantization. -1. (optional) By default a PR to the initial repo will be created. You can choose a different destination repo if you want. The destination repo will be created if it doesn't exist. -1. Click "Convert!" - -That's it! You'll get feedback if it works or not, and if it worked, you'll get the URL of the opened PR 🔥 -If you encounter any issues please let us know [by opening a Discussion](https://huggingface.co/spaces/Wauplin/bloomz.cpp-converter/discussions/new). -""" - - -with gr.Blocks() as demo: - gr.HTML(TITLE) - - with gr.Row(): - with gr.Column(scale=50): - gr.Markdown(DESCRIPTION) - - with gr.Column(scale=50): - input_token = gr.Text( - max_lines=1, label="Hugging Face token", type="password" - ) - input_model = gr.Text( - max_lines=1, label="Model id (e.g.: bigscience/bloomz-7b1)" - ) - input_precision = gr.Radio( - choices=["FP16", "FP32"], label="Precision", value="FP16" - ) - input_quantization = gr.Checkbox(value=False, label="4-bits quantization") - input_destination = gr.Text( - max_lines=1, - label="Destination (e.g.: bloomz-7b1.cpp) - optional", - ) - btn = gr.Button("Convert!") - - output = gr.Markdown(label="Output") - - btn.click( - fn=run, - inputs=[ - input_token, - input_model, - input_precision, - input_quantization, - input_destination, - ], - outputs=output, - ) - - -demo.queue().launch() diff --git a/spaces/Xenos14/XenoEngine-SD-webui/header_patch.py b/spaces/Xenos14/XenoEngine-SD-webui/header_patch.py deleted file mode 100644 index 464447c8cfb431f96098a1cbd95835596a5457bb..0000000000000000000000000000000000000000 --- a/spaces/Xenos14/XenoEngine-SD-webui/header_patch.py +++ /dev/null @@ -1,37 +0,0 @@ - with gr.Box(visible=os.environ.get("SPACE_ID")): - if os.environ.get("SPACE_ID") and str(os.environ.get("IS_SHARED_UI", "") or "") not in ("", "0"): - import torch - if not torch.cuda.is_available(): - gr.HTML(f""" -
    - ▲ Automatic1111's Stable Diffusion WebUI + Mikubill's ControlNet WebUI extension | Running on Hugging Face | Loaded checkpoint: AtoZovyaRPGArtistTools15_sd15V1
    - ▲ Docker build from 🐙 GitHub ➔ kalaspuff/stable-diffusion-webui-controlnet-docker / 🤗 Hugging Face ➔ carloscar/stable-diffusion-webui-controlnet-docker
    - ▲ Duplicate this Space to run it privately without a queue, use a GPU for faster generation times, load custom checkpoints, etc. Duplicate Space
    - """) - else: - gr.HTML(f""" -
    - ▲ Automatic1111's Stable Diffusion WebUI + Mikubill's ControlNet WebUI extension | Running on Hugging Face | Loaded checkpoint: AtoZovyaRPGArtistTools15_sd15V1
    - ▲ Docker build from 🐙 GitHub ➔ kalaspuff/stable-diffusion-webui-controlnet-docker / 🤗 Hugging Face ➔ carloscar/stable-diffusion-webui-controlnet-docker
    - ▲ Duplicate this Space to run it privately without a queue, use extensions, load custom checkpoints, etc. Duplicate Space
    - """) - elif os.environ.get("SPACE_ID"): - import torch - if not torch.cuda.is_available(): - gr.HTML(f""" -
    - ▲ Docker build from 🐙 GitHub ➔ kalaspuff/stable-diffusion-webui-controlnet-docker / 🤗 Hugging Face ➔ carloscar/stable-diffusion-webui-controlnet-docker
    - ▲ Load additional checkpoints, VAE, LoRA models, etc. Read more on the README at the GitHub link above.
    - ▲ This Space is currently running on CPU, which may yield very slow results - you can upgrade for a GPU in the Settings tab
    - """) - else: - gr.HTML(f""" -
    - ▲ Docker build from 🐙 GitHub ➔ kalaspuff/stable-diffusion-webui-controlnet-docker / 🤗 Hugging Face ➔ carloscar/stable-diffusion-webui-controlnet-docker
    - ▲ Load additional checkpoints, VAE, LoRA models, etc. Read more on the README at the GitHub link above.
    - ▲ This Space has GPU enabled - remember to remove the GPU from the space in the Settings tab when you're done.
    - """) diff --git a/spaces/Xenova/semantic-image-search-client/_next/static/oNz1VZyHBbKzD04b9fhBW/_ssgManifest.js b/spaces/Xenova/semantic-image-search-client/_next/static/oNz1VZyHBbKzD04b9fhBW/_ssgManifest.js deleted file mode 100644 index 5b3ff592fd46c8736892a12864fdf3fed8775202..0000000000000000000000000000000000000000 --- a/spaces/Xenova/semantic-image-search-client/_next/static/oNz1VZyHBbKzD04b9fhBW/_ssgManifest.js +++ /dev/null @@ -1 +0,0 @@ -self.__SSG_MANIFEST=new Set([]);self.__SSG_MANIFEST_CB&&self.__SSG_MANIFEST_CB() \ No newline at end of file diff --git a/spaces/XzJosh/Ava-Bert-VITS2/text/symbols.py b/spaces/XzJosh/Ava-Bert-VITS2/text/symbols.py deleted file mode 100644 index 9dfae4e633829f20c4fd767b1c7a9198911ed801..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Ava-Bert-VITS2/text/symbols.py +++ /dev/null @@ -1,51 +0,0 @@ -punctuation = ['!', '?', '…', ",", ".", "'", '-'] -pu_symbols = punctuation + ["SP", "UNK"] -pad = '_' - -# chinese -zh_symbols = ['E', 'En', 'a', 'ai', 'an', 'ang', 'ao', 'b', 'c', 'ch', 'd', 'e', 'ei', 'en', 'eng', 'er', 'f', 'g', 'h', - 'i', 'i0', 'ia', 'ian', 'iang', 'iao', 'ie', 'in', 'ing', 'iong', 'ir', 'iu', 'j', 'k', 'l', 'm', 'n', 'o', - 'ong', - 'ou', 'p', 'q', 'r', 's', 'sh', 't', 'u', 'ua', 'uai', 'uan', 'uang', 'ui', 'un', 'uo', 'v', 'van', 've', 'vn', - 'w', 'x', 'y', 'z', 'zh', - "AA", "EE", "OO"] -num_zh_tones = 6 - -# japanese -ja_symbols = ['I', 'N', 'U', 'a', 'b', 'by', 'ch', 'cl', 'd', 'dy', 'e', 'f', 'g', 'gy', 'h', 'hy', 'i', 'j', 'k', 'ky', - 'm', 'my', 'n', 'ny', 'o', 'p', 'py', 'r', 'ry', 's', 'sh', 't', 'ts', 'u', 'V', 'w', 'y', 'z'] -num_ja_tones = 1 - -# English -en_symbols = ['aa', 'ae', 'ah', 'ao', 'aw', 'ay', 'b', 'ch', 'd', 'dh', 'eh', 'er', 'ey', 'f', 'g', 'hh', 'ih', 'iy', - 'jh', 'k', 'l', 'm', 'n', 'ng', 'ow', 'oy', 'p', 'r', 's', - 'sh', 't', 'th', 'uh', 'uw', 'V', 'w', 'y', 'z', 'zh'] -num_en_tones = 4 - -# combine all symbols -normal_symbols = sorted(set(zh_symbols + ja_symbols + en_symbols)) -symbols = [pad] + normal_symbols + pu_symbols -sil_phonemes_ids = [symbols.index(i) for i in pu_symbols] - -# combine all tones -num_tones = num_zh_tones + num_ja_tones + num_en_tones - -# language maps -language_id_map = { - 'ZH': 0, - "JA": 1, - "EN": 2 -} -num_languages = len(language_id_map.keys()) - -language_tone_start_map = { - 'ZH': 0, - "JA": num_zh_tones, - "EN": num_zh_tones + num_ja_tones -} - -if __name__ == '__main__': - a = set(zh_symbols) - b = set(en_symbols) - print(sorted(a&b)) - diff --git a/spaces/XzJosh/yoyo-Bert-VITS2/app.py b/spaces/XzJosh/yoyo-Bert-VITS2/app.py deleted file mode 100644 index e55eddc0c6b411f3a0f0b6bc1da9269be4f5b087..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/yoyo-Bert-VITS2/app.py +++ /dev/null @@ -1,160 +0,0 @@ -import sys, os - -if sys.platform == "darwin": - os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1" - -import logging - -logging.getLogger("numba").setLevel(logging.WARNING) -logging.getLogger("markdown_it").setLevel(logging.WARNING) -logging.getLogger("urllib3").setLevel(logging.WARNING) -logging.getLogger("matplotlib").setLevel(logging.WARNING) - -logging.basicConfig(level=logging.INFO, format="| %(name)s | %(levelname)s | %(message)s") - -logger = logging.getLogger(__name__) - -import torch -import argparse -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import cleaned_text_to_sequence, get_bert -from text.cleaner import clean_text -import gradio as gr -import webbrowser - - -net_g 
= None - - -def get_text(text, language_str, hps): - norm_text, phone, tone, word2ph = clean_text(text, language_str) - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert = get_bert(norm_text, word2ph, language_str) - del word2ph - - assert bert.shape[-1] == len(phone) - - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - - return bert, phone, tone, language - -def infer(text, sdp_ratio, noise_scale, noise_scale_w, length_scale, sid): - global net_g - bert, phones, tones, lang_ids = get_text(text, "ZH", hps) - with torch.no_grad(): - x_tst=phones.to(device).unsqueeze(0) - tones=tones.to(device).unsqueeze(0) - lang_ids=lang_ids.to(device).unsqueeze(0) - bert = bert.to(device).unsqueeze(0) - x_tst_lengths = torch.LongTensor([phones.size(0)]).to(device) - del phones - speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(device) - audio = net_g.infer(x_tst, x_tst_lengths, speakers, tones, lang_ids, bert, sdp_ratio=sdp_ratio - , noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale)[0][0,0].data.cpu().float().numpy() - del x_tst, tones, lang_ids, bert, x_tst_lengths, speakers - return audio - -def tts_fn(text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale): - with torch.no_grad(): - audio = infer(text, sdp_ratio=sdp_ratio, noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale, sid=speaker) - return "Success", (hps.data.sampling_rate, audio) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--model_dir", default="./logs/Lumi/G_2500.pth", help="path of your model") - parser.add_argument("--config_dir", default="./configs/config.json", help="path of your config file") - parser.add_argument("--share", default=False, help="make link public") - parser.add_argument("-d", "--debug", action="store_true", help="enable DEBUG-LEVEL log") - - args = parser.parse_args() - if args.debug: - logger.info("Enable DEBUG-LEVEL log") - logging.basicConfig(level=logging.DEBUG) - hps = utils.get_hparams_from_file(args.config_dir) - device = "cuda:0" if torch.cuda.is_available() else "cpu" - ''' - device = ( - "cuda:0" - if torch.cuda.is_available() - else ( - "mps" - if sys.platform == "darwin" and torch.backends.mps.is_available() - else "cpu" - ) - ) - ''' - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).to(device) - _ = net_g.eval() - - _ = utils.load_checkpoint(args.model_dir, net_g, None, skip_optimizer=True) - - speaker_ids = hps.data.spk2id - speakers = list(speaker_ids.keys()) - with gr.Blocks() as app: - with gr.Row(): - with gr.Column(): - gr.Markdown(value=""" - 【AI鹿鸣②】在线语音合成(Bert-Vits2)\n - 模型作者:Xz乔希 https://space.bilibili.com/5859321\n - 声音归属:yoyo鹿鸣_Lumi https://space.bilibili.com/488836173\n - 【AI鹿鸣①】https://huggingface.co/spaces/XzJosh/Lumi-Bert-VITS2\n - Bert-VITS2项目:https://github.com/Stardust-minus/Bert-VITS2\n - 使用本模型请严格遵守法律法规!\n - 发布二创作品请标注本项目作者及链接、作品使用Bert-VITS2 AI生成!\n - """) - text = gr.TextArea(label="Text", placeholder="Input Text Here", - value="大家好呀,嘿嘿,我是鹿鸣") - speaker = gr.Dropdown(choices=speakers, value=speakers[0], label='Speaker') - 
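    # The sliders below expose the Bert-VITS2 sampling controls passed to `tts_fn`/`infer`:
    # sdp_ratio blends the stochastic and deterministic duration predictors ("SDP/DP混合比"),
    # noise_scale ("感情调节") adjusts expressiveness, noise_scale_w ("音素长度") controls the
    # phoneme-duration noise, and length_scale ("生成长度") scales the overall speaking speed
    # (larger values give slower, longer speech).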
sdp_ratio = gr.Slider(minimum=0.1, maximum=1, value=0.2, step=0.01, label='SDP/DP混合比') - noise_scale = gr.Slider(minimum=0.1, maximum=1, value=0.5, step=0.01, label='感情调节') - noise_scale_w = gr.Slider(minimum=0.1, maximum=1, value=0.9, step=0.01, label='音素长度') - length_scale = gr.Slider(minimum=0.1, maximum=2, value=1, step=0.01, label='生成长度') - btn = gr.Button("点击生成", variant="primary") - with gr.Column(): - text_output = gr.Textbox(label="Message") - audio_output = gr.Audio(label="Output Audio") - gr.Markdown(value=""" - 【AI塔菲】https://huggingface.co/spaces/XzJosh/Taffy-Bert-VITS2\n - 【AI东雪莲】https://huggingface.co/spaces/XzJosh/Azuma-Bert-VITS2\n - 【AI奶绿】https://huggingface.co/spaces/XzJosh/LAPLACE-Bert-VITS2\n - 【AI七海】https://huggingface.co/spaces/XzJosh/Nana7mi-Bert-VITS2\n - 【AI星瞳】https://huggingface.co/spaces/XzJosh/XingTong-Bert-VITS2\n - 【AI阿梓】https://huggingface.co/spaces/XzJosh/Azusa-Bert-VITS2\n - 【AI嘉然】https://huggingface.co/spaces/XzJosh/Diana-Bert-VITS2\n - 【AI向晚】https://huggingface.co/spaces/XzJosh/Ava-Bert-VITS2\n - 【AI乃琳】https://huggingface.co/spaces/XzJosh/Eileen-Bert-VITS2\n - 【AI贝拉】https://huggingface.co/spaces/XzJosh/Bella-Bert-VITS2\n - 【AI珈乐】https://huggingface.co/spaces/XzJosh/Carol-Bert-VITS2\n - 【AI恬豆】https://huggingface.co/spaces/XzJosh/Bekki-Bert-VITS2\n - 【AI尼奈】https://huggingface.co/spaces/XzJosh/nine1-Bert-VITS2\n - 【AI扇宝】https://huggingface.co/spaces/XzJosh/ShanBao-Bert-VITS2\n - 【AI剑魔】https://huggingface.co/spaces/XzJosh/Aatrox-Bert-VITS2\n - 【AI电棍】https://huggingface.co/spaces/XzJosh/otto-Bert-VITS2\n - """) - btn.click(tts_fn, - inputs=[text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale], - outputs=[text_output, audio_output]) - -# webbrowser.open("http://127.0.0.1:6006") -# app.launch(server_port=6006, show_error=True) - - app.launch(show_error=True) diff --git a/spaces/XzJosh/yoyo-Bert-VITS2/text/cleaner.py b/spaces/XzJosh/yoyo-Bert-VITS2/text/cleaner.py deleted file mode 100644 index 64bd5f7296f66c94f3a335666c53706bb5fe5b39..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/yoyo-Bert-VITS2/text/cleaner.py +++ /dev/null @@ -1,27 +0,0 @@ -from text import chinese, cleaned_text_to_sequence - - -language_module_map = { - 'ZH': chinese -} - - -def clean_text(text, language): - language_module = language_module_map[language] - norm_text = language_module.text_normalize(text) - phones, tones, word2ph = language_module.g2p(norm_text) - return norm_text, phones, tones, word2ph - -def clean_text_bert(text, language): - language_module = language_module_map[language] - norm_text = language_module.text_normalize(text) - phones, tones, word2ph = language_module.g2p(norm_text) - bert = language_module.get_bert_feature(norm_text, word2ph) - return phones, tones, bert - -def text_to_sequence(text, language): - norm_text, phones, tones, word2ph = clean_text(text, language) - return cleaned_text_to_sequence(phones, tones, language) - -if __name__ == '__main__': - pass diff --git a/spaces/Yabo/ControlVideo/models/util.py b/spaces/Yabo/ControlVideo/models/util.py deleted file mode 100644 index faba28d79fc80c2786872e2d9fa7edb267b18949..0000000000000000000000000000000000000000 --- a/spaces/Yabo/ControlVideo/models/util.py +++ /dev/null @@ -1,122 +0,0 @@ -import os -import imageio -import numpy as np -from typing import Union -import decord -decord.bridge.set_bridge('torch') -import torch -import torchvision -import PIL -from typing import List -from tqdm import tqdm -from einops import rearrange - -from controlnet_aux import CannyDetector - -def 
save_videos_grid(videos: torch.Tensor, path: str, rescale=False, n_rows=4, fps=8): - videos = rearrange(videos, "b c t h w -> t b c h w") - outputs = [] - for x in videos: - x = torchvision.utils.make_grid(x, nrow=n_rows) - x = x.transpose(0, 1).transpose(1, 2).squeeze(-1) - if rescale: - x = (x + 1.0) / 2.0 # -1,1 -> 0,1 - x = (x * 255).numpy().astype(np.uint8) - outputs.append(x) - - os.makedirs(os.path.dirname(path), exist_ok=True) - imageio.mimsave(path, outputs, fps=fps) - -def save_videos_grid_pil(videos: List[PIL.Image.Image], path: str, rescale=False, n_rows=4, fps=8): - videos = rearrange(videos, "b c t h w -> t b c h w") - outputs = [] - for x in videos: - x = torchvision.utils.make_grid(x, nrow=n_rows) - x = x.transpose(0, 1).transpose(1, 2).squeeze(-1) - if rescale: - x = (x + 1.0) / 2.0 # -1,1 -> 0,1 - x = (x * 255).numpy().astype(np.uint8) - outputs.append(x) - - os.makedirs(os.path.dirname(path), exist_ok=True) - imageio.mimsave(path, outputs, fps=fps) - -def read_video(video_path, video_length, width=512, height=512, frame_rate=None): - vr = decord.VideoReader(video_path, width=width, height=height) - if frame_rate is None: - frame_rate = max(1, len(vr) // video_length) - sample_index = list(range(0, len(vr), frame_rate))[:video_length] - video = vr.get_batch(sample_index) - video = rearrange(video, "f h w c -> f c h w") - video = (video / 127.5 - 1.0) - return video - - -def get_annotation(video, annotator): - t2i_transform = torchvision.transforms.ToPILImage() - annotation = [] - for frame in video: - pil_frame = t2i_transform(frame) - if isinstance(annotator, CannyDetector): - annotation.append(annotator(pil_frame, low_threshold=100, high_threshold=200)) - else: - annotation.append(annotator(pil_frame)) - return annotation - -# DDIM Inversion -@torch.no_grad() -def init_prompt(prompt, pipeline): - uncond_input = pipeline.tokenizer( - [""], padding="max_length", max_length=pipeline.tokenizer.model_max_length, - return_tensors="pt" - ) - uncond_embeddings = pipeline.text_encoder(uncond_input.input_ids.to(pipeline.device))[0] - text_input = pipeline.tokenizer( - [prompt], - padding="max_length", - max_length=pipeline.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_embeddings = pipeline.text_encoder(text_input.input_ids.to(pipeline.device))[0] - context = torch.cat([uncond_embeddings, text_embeddings]) - - return context - - -def next_step(model_output: Union[torch.FloatTensor, np.ndarray], timestep: int, - sample: Union[torch.FloatTensor, np.ndarray], ddim_scheduler): - timestep, next_timestep = min( - timestep - ddim_scheduler.config.num_train_timesteps // ddim_scheduler.num_inference_steps, 999), timestep - alpha_prod_t = ddim_scheduler.alphas_cumprod[timestep] if timestep >= 0 else ddim_scheduler.final_alpha_cumprod - alpha_prod_t_next = ddim_scheduler.alphas_cumprod[next_timestep] - beta_prod_t = 1 - alpha_prod_t - next_original_sample = (sample - beta_prod_t ** 0.5 * model_output) / alpha_prod_t ** 0.5 - next_sample_direction = (1 - alpha_prod_t_next) ** 0.5 * model_output - next_sample = alpha_prod_t_next ** 0.5 * next_original_sample + next_sample_direction - return next_sample - - -def get_noise_pred_single(latents, t, context, unet): - noise_pred = unet(latents, t, encoder_hidden_states=context)["sample"] - return noise_pred - - -@torch.no_grad() -def ddim_loop(pipeline, ddim_scheduler, latent, num_inv_steps, prompt): - context = init_prompt(prompt, pipeline) - uncond_embeddings, cond_embeddings = context.chunk(2) - all_latent = 
[latent] - latent = latent.clone().detach() - for i in tqdm(range(num_inv_steps)): - t = ddim_scheduler.timesteps[len(ddim_scheduler.timesteps) - i - 1] - noise_pred = get_noise_pred_single(latent, t, cond_embeddings, pipeline.unet) - latent = next_step(noise_pred, t, latent, ddim_scheduler) - all_latent.append(latent) - return all_latent - - -@torch.no_grad() -def ddim_inversion(pipeline, ddim_scheduler, video_latent, num_inv_steps, prompt=""): - ddim_latents = ddim_loop(pipeline, ddim_scheduler, video_latent, num_inv_steps, prompt) - return ddim_latents diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/modeling_utils.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/modeling_utils.py deleted file mode 100644 index e270f75e056e9130ae9a7df590a1e7547efceee8..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/modeling_utils.py +++ /dev/null @@ -1,764 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. -# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import os -from functools import partial -from typing import Callable, List, Optional, Tuple, Union - -import torch -from torch import Tensor, device - -from huggingface_hub import hf_hub_download -from huggingface_hub.utils import EntryNotFoundError, RepositoryNotFoundError, RevisionNotFoundError -from requests import HTTPError - -from . 
import __version__ -from .utils import ( - CONFIG_NAME, - DIFFUSERS_CACHE, - HUGGINGFACE_CO_RESOLVE_ENDPOINT, - SAFETENSORS_WEIGHTS_NAME, - WEIGHTS_NAME, - is_accelerate_available, - is_safetensors_available, - is_torch_version, - logging, -) - - -logger = logging.get_logger(__name__) - - -if is_torch_version(">=", "1.9.0"): - _LOW_CPU_MEM_USAGE_DEFAULT = True -else: - _LOW_CPU_MEM_USAGE_DEFAULT = False - - -if is_accelerate_available(): - import accelerate - from accelerate.utils import set_module_tensor_to_device - from accelerate.utils.versions import is_torch_version - -if is_safetensors_available(): - import safetensors - - -def get_parameter_device(parameter: torch.nn.Module): - try: - return next(parameter.parameters()).device - except StopIteration: - # For torch.nn.DataParallel compatibility in PyTorch 1.5 - - def find_tensor_attributes(module: torch.nn.Module) -> List[Tuple[str, Tensor]]: - tuples = [(k, v) for k, v in module.__dict__.items() if torch.is_tensor(v)] - return tuples - - gen = parameter._named_members(get_members_fn=find_tensor_attributes) - first_tuple = next(gen) - return first_tuple[1].device - - -def get_parameter_dtype(parameter: torch.nn.Module): - try: - return next(parameter.parameters()).dtype - except StopIteration: - # For torch.nn.DataParallel compatibility in PyTorch 1.5 - - def find_tensor_attributes(module: torch.nn.Module) -> List[Tuple[str, Tensor]]: - tuples = [(k, v) for k, v in module.__dict__.items() if torch.is_tensor(v)] - return tuples - - gen = parameter._named_members(get_members_fn=find_tensor_attributes) - first_tuple = next(gen) - return first_tuple[1].dtype - - -def load_state_dict(checkpoint_file: Union[str, os.PathLike]): - """ - Reads a checkpoint file, returning properly formatted errors if they arise. - """ - try: - if os.path.basename(checkpoint_file) == WEIGHTS_NAME: - return torch.load(checkpoint_file, map_location="cpu") - else: - return safetensors.torch.load_file(checkpoint_file, device="cpu") - except Exception as e: - try: - with open(checkpoint_file) as f: - if f.read().startswith("version"): - raise OSError( - "You seem to have cloned a repository without having git-lfs installed. Please install " - "git-lfs and run `git lfs install` followed by `git lfs pull` in the folder " - "you cloned." - ) - else: - raise ValueError( - f"Unable to locate the file {checkpoint_file} which is necessary to load this pretrained " - "model. Make sure you have saved the model properly." - ) from e - except (UnicodeDecodeError, ValueError): - raise OSError( - f"Unable to load weights from checkpoint file for '{checkpoint_file}' " - f"at '{checkpoint_file}'. " - "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True." - ) - - -def _load_state_dict_into_model(model_to_load, state_dict): - # Convert old format to new format if needed from a PyTorch state_dict - # copy state_dict so _load_from_state_dict can modify it - state_dict = state_dict.copy() - error_msgs = [] - - # PyTorch's `_load_from_state_dict` does not copy parameters in a module's descendants - # so we need to apply the function recursively. - def load(module: torch.nn.Module, prefix=""): - args = (state_dict, prefix, {}, True, [], [], error_msgs) - module._load_from_state_dict(*args) - - for name, child in module._modules.items(): - if child is not None: - load(child, prefix + name + ".") - - load(model_to_load) - - return error_msgs - - -class ModelMixin(torch.nn.Module): - r""" - Base class for all models. 
- - [`ModelMixin`] takes care of storing the configuration of the models and handles methods for loading, downloading - and saving models. - - - **config_name** ([`str`]) -- A filename under which the model should be stored when calling - [`~modeling_utils.ModelMixin.save_pretrained`]. - """ - config_name = CONFIG_NAME - _automatically_saved_args = ["_diffusers_version", "_class_name", "_name_or_path"] - _supports_gradient_checkpointing = False - - def __init__(self): - super().__init__() - - @property - def is_gradient_checkpointing(self) -> bool: - """ - Whether gradient checkpointing is activated for this model or not. - - Note that in other frameworks this feature can be referred to as "activation checkpointing" or "checkpoint - activations". - """ - return any(hasattr(m, "gradient_checkpointing") and m.gradient_checkpointing for m in self.modules()) - - def enable_gradient_checkpointing(self): - """ - Activates gradient checkpointing for the current model. - - Note that in other frameworks this feature can be referred to as "activation checkpointing" or "checkpoint - activations". - """ - if not self._supports_gradient_checkpointing: - raise ValueError(f"{self.__class__.__name__} does not support gradient checkpointing.") - self.apply(partial(self._set_gradient_checkpointing, value=True)) - - def disable_gradient_checkpointing(self): - """ - Deactivates gradient checkpointing for the current model. - - Note that in other frameworks this feature can be referred to as "activation checkpointing" or "checkpoint - activations". - """ - if self._supports_gradient_checkpointing: - self.apply(partial(self._set_gradient_checkpointing, value=False)) - - def save_pretrained( - self, - save_directory: Union[str, os.PathLike], - is_main_process: bool = True, - save_function: Callable = None, - safe_serialization: bool = False, - ): - """ - Save a model and its configuration file to a directory, so that it can be re-loaded using the - `[`~modeling_utils.ModelMixin.from_pretrained`]` class method. - - Arguments: - save_directory (`str` or `os.PathLike`): - Directory to which to save. Will be created if it doesn't exist. - is_main_process (`bool`, *optional*, defaults to `True`): - Whether the process calling this is the main process or not. Useful when in distributed training like - TPUs and need to call this function on all processes. In this case, set `is_main_process=True` only on - the main process to avoid race conditions. - save_function (`Callable`): - The function to use to save the state dictionary. Useful on distributed training like TPUs when one - need to replace `torch.save` by another method. Can be configured with the environment variable - `DIFFUSERS_SAVE_MODE`. - safe_serialization (`bool`, *optional*, defaults to `False`): - Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`). 
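- 
-         Example (a minimal, illustrative sketch; `./my_model_directory` is only a placeholder path):
- 
-         ```py
-         # writes the model's config.json plus its weights file into the target directory
-         model.save_pretrained("./my_model_directory")
-         # store the weights as `.safetensors` instead of a pickled `.bin` file
-         model.save_pretrained("./my_model_directory", safe_serialization=True)
-         ```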
- """ - if safe_serialization and not is_safetensors_available(): - raise ImportError("`safe_serialization` requires the `safetensors library: `pip install safetensors`.") - - if os.path.isfile(save_directory): - logger.error(f"Provided path ({save_directory}) should be a directory, not a file") - return - - if save_function is None: - save_function = safetensors.torch.save_file if safe_serialization else torch.save - - os.makedirs(save_directory, exist_ok=True) - - model_to_save = self - - # Attach architecture to the config - # Save the config - if is_main_process: - model_to_save.save_config(save_directory) - - # Save the model - state_dict = model_to_save.state_dict() - - weights_name = SAFETENSORS_WEIGHTS_NAME if safe_serialization else WEIGHTS_NAME - - # Clean the folder from a previous save - for filename in os.listdir(save_directory): - full_filename = os.path.join(save_directory, filename) - # If we have a shard file that is not going to be replaced, we delete it, but only from the main process - # in distributed settings to avoid race conditions. - weights_no_suffix = weights_name.replace(".bin", "").replace(".safetensors", "") - if filename.startswith(weights_no_suffix) and os.path.isfile(full_filename) and is_main_process: - os.remove(full_filename) - - # Save the model - save_function(state_dict, os.path.join(save_directory, weights_name)) - - logger.info(f"Model weights saved in {os.path.join(save_directory, weights_name)}") - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs): - r""" - Instantiate a pretrained pytorch model from a pre-trained model configuration. - - The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train - the model, you should first set it back in training mode with `model.train()`. - - The warning *Weights from XXX not initialized from pretrained model* means that the weights of XXX do not come - pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning - task. - - The warning *Weights from XXX not used in YYY* means that the layer XXX is not used by YYY, therefore those - weights are discarded. - - Parameters: - pretrained_model_name_or_path (`str` or `os.PathLike`, *optional*): - Can be either: - - - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. - Valid model ids should have an organization name, like `google/ddpm-celebahq-256`. - - A path to a *directory* containing model weights saved using [`~ModelMixin.save_config`], e.g., - `./my_model_directory/`. - - cache_dir (`Union[str, os.PathLike]`, *optional*): - Path to a directory in which a downloaded pretrained model configuration should be cached if the - standard cache should not be used. - torch_dtype (`str` or `torch.dtype`, *optional*): - Override the default `torch.dtype` and load the model under this dtype. If `"auto"` is passed the dtype - will be automatically derived from the model's weights. - force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force the (re-)download of the model weights and configuration files, overriding the - cached versions if they exist. - resume_download (`bool`, *optional*, defaults to `False`): - Whether or not to delete incompletely received files. Will attempt to resume the download if such a - file exists. 
- proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. - output_loading_info(`bool`, *optional*, defaults to `False`): - Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. - local_files_only(`bool`, *optional*, defaults to `False`): - Whether or not to only look at local files (i.e., do not try to download the model). - use_auth_token (`str` or *bool*, *optional*): - The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated - when running `diffusers-cli login` (stored in `~/.huggingface`). - revision (`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a - git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any - identifier allowed by git. - subfolder (`str`, *optional*, defaults to `""`): - In case the relevant files are located inside a subfolder of the model repo (either remote in - huggingface.co or downloaded locally), you can specify the folder name here. - - mirror (`str`, *optional*): - Mirror source to accelerate downloads in China. If you are from China and have an accessibility - problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. - Please refer to the mirror site for more information. - device_map (`str` or `Dict[str, Union[int, str, torch.device]]`, *optional*): - A map that specifies where each submodule should go. It doesn't need to be refined to each - parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the - same device. - - To have Accelerate compute the most optimized `device_map` automatically, set `device_map="auto"`. For - more information about each option see [designing a device - map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#designing-a-device-map). - low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >= 1.9.0 else `False`): - Speed up model loading by not initializing the weights and only loading the pre-trained weights. This - also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the - model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch, - setting this argument to `True` will raise an error. - - - - It is required to be logged in (`huggingface-cli login`) when you want to use private or [gated - models](https://huggingface.co/docs/hub/models-gated#gated-models). - - - - - - Activate the special ["offline-mode"](https://huggingface.co/diffusers/installation.html#offline-mode) to use - this method in a firewalled environment. 
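- 
-         Example (a minimal, illustrative sketch; it assumes the `google/ddpm-celebahq-256` repo named
-         above keeps its UNet weights in a `unet` subfolder):
- 
-         ```py
-         from diffusers import UNet2DModel
- 
-         # download (or reuse the cached copy of) the pretrained UNet; it is returned in eval mode
-         model = UNet2DModel.from_pretrained("google/ddpm-celebahq-256", subfolder="unet")
-         model = model.to("cuda")  # optionally move the loaded module to GPU
-         ```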
- - - - """ - cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE) - ignore_mismatched_sizes = kwargs.pop("ignore_mismatched_sizes", False) - force_download = kwargs.pop("force_download", False) - resume_download = kwargs.pop("resume_download", False) - proxies = kwargs.pop("proxies", None) - output_loading_info = kwargs.pop("output_loading_info", False) - local_files_only = kwargs.pop("local_files_only", False) - use_auth_token = kwargs.pop("use_auth_token", None) - revision = kwargs.pop("revision", None) - torch_dtype = kwargs.pop("torch_dtype", None) - subfolder = kwargs.pop("subfolder", None) - device_map = kwargs.pop("device_map", None) - low_cpu_mem_usage = kwargs.pop("low_cpu_mem_usage", _LOW_CPU_MEM_USAGE_DEFAULT) - - if low_cpu_mem_usage and not is_accelerate_available(): - low_cpu_mem_usage = False - logger.warning( - "Cannot initialize model with low cpu memory usage because `accelerate` was not found in the" - " environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install" - " `accelerate` for faster and less memory-intense model loading. You can do so with: \n```\npip" - " install accelerate\n```\n." - ) - - if device_map is not None and not is_accelerate_available(): - raise NotImplementedError( - "Loading and dispatching requires `accelerate`. Please make sure to install accelerate or set" - " `device_map=None`. You can install accelerate with `pip install accelerate`." - ) - - # Check if we can handle device_map and dispatching the weights - if device_map is not None and not is_torch_version(">=", "1.9.0"): - raise NotImplementedError( - "Loading and dispatching requires torch >= 1.9.0. Please either update your PyTorch version or set" - " `device_map=None`." - ) - - if low_cpu_mem_usage is True and not is_torch_version(">=", "1.9.0"): - raise NotImplementedError( - "Low memory initialization requires torch >= 1.9.0. Please either update your PyTorch version or set" - " `low_cpu_mem_usage=False`." - ) - - if low_cpu_mem_usage is False and device_map is not None: - raise ValueError( - f"You cannot set `low_cpu_mem_usage` to `False` while using device_map={device_map} for loading and" - " dispatching. Please make sure to set `low_cpu_mem_usage=True`." - ) - - user_agent = { - "diffusers": __version__, - "file_type": "model", - "framework": "pytorch", - } - - # Load config if we don't provide a configuration - config_path = pretrained_model_name_or_path - - # This variable will flag if we're loading a sharded checkpoint. 
In this case the archive file is just the - # Load model - - model_file = None - if is_safetensors_available(): - try: - model_file = _get_model_file( - pretrained_model_name_or_path, - weights_name=SAFETENSORS_WEIGHTS_NAME, - cache_dir=cache_dir, - force_download=force_download, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - subfolder=subfolder, - user_agent=user_agent, - ) - except: - pass - if model_file is None: - model_file = _get_model_file( - pretrained_model_name_or_path, - weights_name=WEIGHTS_NAME, - cache_dir=cache_dir, - force_download=force_download, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - subfolder=subfolder, - user_agent=user_agent, - ) - - if low_cpu_mem_usage: - # Instantiate model with empty weights - with accelerate.init_empty_weights(): - config, unused_kwargs = cls.load_config( - config_path, - cache_dir=cache_dir, - return_unused_kwargs=True, - force_download=force_download, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - subfolder=subfolder, - device_map=device_map, - **kwargs, - ) - model = cls.from_config(config, **unused_kwargs) - - # if device_map is Non,e load the state dict on move the params from meta device to the cpu - if device_map is None: - param_device = "cpu" - state_dict = load_state_dict(model_file) - # move the parms from meta device to cpu - for param_name, param in state_dict.items(): - set_module_tensor_to_device(model, param_name, param_device, value=param) - else: # else let accelerate handle loading and dispatching. - # Load weights and dispatch according to the device_map - # by deafult the device_map is None and the weights are loaded on the CPU - accelerate.load_checkpoint_and_dispatch(model, model_file, device_map) - - loading_info = { - "missing_keys": [], - "unexpected_keys": [], - "mismatched_keys": [], - "error_msgs": [], - } - else: - config, unused_kwargs = cls.load_config( - config_path, - cache_dir=cache_dir, - return_unused_kwargs=True, - force_download=force_download, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - subfolder=subfolder, - device_map=device_map, - **kwargs, - ) - model = cls.from_config(config, **unused_kwargs) - - state_dict = load_state_dict(model_file) - dtype = set(v.dtype for v in state_dict.values()) - - if len(dtype) > 1 and torch.float32 not in dtype: - raise ValueError( - f"The weights of the model file {model_file} have a mixture of incompatible dtypes {dtype}. Please" - f" make sure that {model_file} weights have only one dtype." 
- ) - elif len(dtype) > 1 and torch.float32 in dtype: - dtype = torch.float32 - else: - dtype = dtype.pop() - - # move model to correct dtype - model = model.to(dtype) - - model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_pretrained_model( - model, - state_dict, - model_file, - pretrained_model_name_or_path, - ignore_mismatched_sizes=ignore_mismatched_sizes, - ) - - loading_info = { - "missing_keys": missing_keys, - "unexpected_keys": unexpected_keys, - "mismatched_keys": mismatched_keys, - "error_msgs": error_msgs, - } - - if torch_dtype is not None and not isinstance(torch_dtype, torch.dtype): - raise ValueError( - f"{torch_dtype} needs to be of type `torch.dtype`, e.g. `torch.float16`, but is {type(torch_dtype)}." - ) - elif torch_dtype is not None: - model = model.to(torch_dtype) - - model.register_to_config(_name_or_path=pretrained_model_name_or_path) - - # Set model in evaluation mode to deactivate DropOut modules by default - model.eval() - if output_loading_info: - return model, loading_info - - return model - - @classmethod - def _load_pretrained_model( - cls, - model, - state_dict, - resolved_archive_file, - pretrained_model_name_or_path, - ignore_mismatched_sizes=False, - ): - # Retrieve missing & unexpected_keys - model_state_dict = model.state_dict() - loaded_keys = [k for k in state_dict.keys()] - - expected_keys = list(model_state_dict.keys()) - - original_loaded_keys = loaded_keys - - missing_keys = list(set(expected_keys) - set(loaded_keys)) - unexpected_keys = list(set(loaded_keys) - set(expected_keys)) - - # Make sure we are able to load base models as well as derived models (with heads) - model_to_load = model - - def _find_mismatched_keys( - state_dict, - model_state_dict, - loaded_keys, - ignore_mismatched_sizes, - ): - mismatched_keys = [] - if ignore_mismatched_sizes: - for checkpoint_key in loaded_keys: - model_key = checkpoint_key - - if ( - model_key in model_state_dict - and state_dict[checkpoint_key].shape != model_state_dict[model_key].shape - ): - mismatched_keys.append( - (checkpoint_key, state_dict[checkpoint_key].shape, model_state_dict[model_key].shape) - ) - del state_dict[checkpoint_key] - return mismatched_keys - - if state_dict is not None: - # Whole checkpoint - mismatched_keys = _find_mismatched_keys( - state_dict, - model_state_dict, - original_loaded_keys, - ignore_mismatched_sizes, - ) - error_msgs = _load_state_dict_into_model(model_to_load, state_dict) - - if len(error_msgs) > 0: - error_msg = "\n\t".join(error_msgs) - if "size mismatch" in error_msg: - error_msg += ( - "\n\tYou may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method." - ) - raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}") - - if len(unexpected_keys) > 0: - logger.warning( - f"Some weights of the model checkpoint at {pretrained_model_name_or_path} were not used when" - f" initializing {model.__class__.__name__}: {unexpected_keys}\n- This IS expected if you are" - f" initializing {model.__class__.__name__} from the checkpoint of a model trained on another task" - " or with another architecture (e.g. initializing a BertForSequenceClassification model from a" - " BertForPreTraining model).\n- This IS NOT expected if you are initializing" - f" {model.__class__.__name__} from the checkpoint of a model that you expect to be exactly" - " identical (initializing a BertForSequenceClassification model from a" - " BertForSequenceClassification model)." 
- ) - else: - logger.info(f"All model checkpoint weights were used when initializing {model.__class__.__name__}.\n") - if len(missing_keys) > 0: - logger.warning( - f"Some weights of {model.__class__.__name__} were not initialized from the model checkpoint at" - f" {pretrained_model_name_or_path} and are newly initialized: {missing_keys}\nYou should probably" - " TRAIN this model on a down-stream task to be able to use it for predictions and inference." - ) - elif len(mismatched_keys) == 0: - logger.info( - f"All the weights of {model.__class__.__name__} were initialized from the model checkpoint at" - f" {pretrained_model_name_or_path}.\nIf your task is similar to the task the model of the" - f" checkpoint was trained on, you can already use {model.__class__.__name__} for predictions" - " without further training." - ) - if len(mismatched_keys) > 0: - mismatched_warning = "\n".join( - [ - f"- {key}: found shape {shape1} in the checkpoint and {shape2} in the model instantiated" - for key, shape1, shape2 in mismatched_keys - ] - ) - logger.warning( - f"Some weights of {model.__class__.__name__} were not initialized from the model checkpoint at" - f" {pretrained_model_name_or_path} and are newly initialized because the shapes did not" - f" match:\n{mismatched_warning}\nYou should probably TRAIN this model on a down-stream task to be" - " able to use it for predictions and inference." - ) - - return model, missing_keys, unexpected_keys, mismatched_keys, error_msgs - - @property - def device(self) -> device: - """ - `torch.device`: The device on which the module is (assuming that all the module parameters are on the same - device). - """ - return get_parameter_device(self) - - @property - def dtype(self) -> torch.dtype: - """ - `torch.dtype`: The dtype of the module (assuming that all the module parameters have the same dtype). - """ - return get_parameter_dtype(self) - - def num_parameters(self, only_trainable: bool = False, exclude_embeddings: bool = False) -> int: - """ - Get number of (optionally, trainable or non-embeddings) parameters in the module. - - Args: - only_trainable (`bool`, *optional*, defaults to `False`): - Whether or not to return only the number of trainable parameters - - exclude_embeddings (`bool`, *optional*, defaults to `False`): - Whether or not to return only the number of non-embeddings parameters - - Returns: - `int`: The number of parameters. 
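- 
-         Example (illustrative): `model.num_parameters(only_trainable=True)` counts only parameters with
-         `requires_grad=True`, and `exclude_embeddings=True` additionally skips all `torch.nn.Embedding`
-         weights.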
- """ - - if exclude_embeddings: - embedding_param_names = [ - f"{name}.weight" - for name, module_type in self.named_modules() - if isinstance(module_type, torch.nn.Embedding) - ] - non_embedding_parameters = [ - parameter for name, parameter in self.named_parameters() if name not in embedding_param_names - ] - return sum(p.numel() for p in non_embedding_parameters if p.requires_grad or not only_trainable) - else: - return sum(p.numel() for p in self.parameters() if p.requires_grad or not only_trainable) - - -def _get_model_file( - pretrained_model_name_or_path, - *, - weights_name, - subfolder, - cache_dir, - force_download, - proxies, - resume_download, - local_files_only, - use_auth_token, - user_agent, - revision, -): - pretrained_model_name_or_path = str(pretrained_model_name_or_path) - if os.path.isdir(pretrained_model_name_or_path): - if os.path.isfile(os.path.join(pretrained_model_name_or_path, weights_name)): - # Load from a PyTorch checkpoint - model_file = os.path.join(pretrained_model_name_or_path, weights_name) - return model_file - elif subfolder is not None and os.path.isfile( - os.path.join(pretrained_model_name_or_path, subfolder, weights_name) - ): - model_file = os.path.join(pretrained_model_name_or_path, subfolder, weights_name) - return model_file - else: - raise EnvironmentError( - f"Error no file named {weights_name} found in directory {pretrained_model_name_or_path}." - ) - else: - try: - # Load from URL or cache if already cached - model_file = hf_hub_download( - pretrained_model_name_or_path, - filename=weights_name, - cache_dir=cache_dir, - force_download=force_download, - proxies=proxies, - resume_download=resume_download, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - user_agent=user_agent, - subfolder=subfolder, - revision=revision, - ) - return model_file - - except RepositoryNotFoundError: - raise EnvironmentError( - f"{pretrained_model_name_or_path} is not a local folder and is not a valid model identifier " - "listed on 'https://huggingface.co/models'\nIf this is a private repository, make sure to pass a " - "token having permission to this repo with `use_auth_token` or log in with `huggingface-cli " - "login`." - ) - except RevisionNotFoundError: - raise EnvironmentError( - f"{revision} is not a valid git identifier (branch name, tag name or commit id) that exists for " - "this model name. Check the model page at " - f"'https://huggingface.co/{pretrained_model_name_or_path}' for available revisions." - ) - except EntryNotFoundError: - raise EnvironmentError( - f"{pretrained_model_name_or_path} does not appear to have a file named {weights_name}." - ) - except HTTPError as err: - raise EnvironmentError( - f"There was a specific connection error when trying to load {pretrained_model_name_or_path}:\n{err}" - ) - except ValueError: - raise EnvironmentError( - f"We couldn't connect to '{HUGGINGFACE_CO_RESOLVE_ENDPOINT}' to load this model, couldn't find it" - f" in the cached files and it looks like {pretrained_model_name_or_path} is not the path to a" - f" directory containing a file named {weights_name} or" - " \nCheckout your internet connection or see how to run the library in" - " offline mode at 'https://huggingface.co/docs/diffusers/installation#offline-mode'." - ) - except EnvironmentError: - raise EnvironmentError( - f"Can't load the model for '{pretrained_model_name_or_path}'. If you were trying to load it from " - "'https://huggingface.co/models', make sure you don't have a local directory with the same name. 
" - f"Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory " - f"containing a file named {weights_name}" - ) diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_dpmsolver_multistep_flax.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_dpmsolver_multistep_flax.py deleted file mode 100644 index a44070d1d2aa1b5964884f17f1cbf335b9433f8e..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_dpmsolver_multistep_flax.py +++ /dev/null @@ -1,625 +0,0 @@ -# Copyright 2022 TSAIL Team and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# DISCLAIMER: This file is strongly influenced by https://github.com/LuChengTHU/dpm-solver - -import math -from dataclasses import dataclass -from typing import List, Optional, Tuple, Union - -import flax -import jax -import jax.numpy as jnp - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import deprecate -from .scheduling_utils_flax import ( - _FLAX_COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS, - FlaxSchedulerMixin, - FlaxSchedulerOutput, - broadcast_to_shape_from_left, -) - - -def betas_for_alpha_bar(num_diffusion_timesteps: int, max_beta=0.999) -> jnp.ndarray: - """ - Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of - (1-beta) over time from t = [0,1]. - - Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up - to that part of the diffusion process. - - - Args: - num_diffusion_timesteps (`int`): the number of betas to produce. - max_beta (`float`): the maximum beta to use; use values lower than 1 to - prevent singularities. 
- - Returns: - betas (`jnp.ndarray`): the betas used by the scheduler to step the model outputs - """ - - def alpha_bar(time_step): - return math.cos((time_step + 0.008) / 1.008 * math.pi / 2) ** 2 - - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return jnp.array(betas, dtype=jnp.float32) - - -@flax.struct.dataclass -class DPMSolverMultistepSchedulerState: - # setable values - num_inference_steps: Optional[int] = None - timesteps: Optional[jnp.ndarray] = None - - # running values - model_outputs: Optional[jnp.ndarray] = None - lower_order_nums: Optional[int] = None - step_index: Optional[int] = None - prev_timestep: Optional[int] = None - cur_sample: Optional[jnp.ndarray] = None - - @classmethod - def create(cls, num_train_timesteps: int): - return cls(timesteps=jnp.arange(0, num_train_timesteps)[::-1]) - - -@dataclass -class FlaxDPMSolverMultistepSchedulerOutput(FlaxSchedulerOutput): - state: DPMSolverMultistepSchedulerState - - -class FlaxDPMSolverMultistepScheduler(FlaxSchedulerMixin, ConfigMixin): - """ - DPM-Solver (and the improved version DPM-Solver++) is a fast dedicated high-order solver for diffusion ODEs with - the convergence order guarantee. Empirically, sampling by DPM-Solver with only 20 steps can generate high-quality - samples, and it can generate quite good samples even in only 10 steps. - - For more details, see the original paper: https://arxiv.org/abs/2206.00927 and https://arxiv.org/abs/2211.01095 - - Currently, we support the multistep DPM-Solver for both noise prediction models and data prediction models. We - recommend to use `solver_order=2` for guided sampling, and `solver_order=3` for unconditional sampling. - - We also support the "dynamic thresholding" method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space - diffusion models, you can set both `algorithm_type="dpmsolver++"` and `thresholding=True` to use the dynamic - thresholding. Note that the thresholding method is unsuitable for latent-space diffusion models (such as - stable-diffusion). - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - For more details, see the original paper: https://arxiv.org/abs/2206.00927 and https://arxiv.org/abs/2211.01095 - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear`, `scaled_linear`, or `squaredcos_cap_v2`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. - solver_order (`int`, default `2`): - the order of DPM-Solver; can be `1` or `2` or `3`. We recommend to use `solver_order=2` for guided - sampling, and `solver_order=3` for unconditional sampling. - prediction_type (`str`, default `epsilon`): - indicates whether the model predicts the noise (epsilon), or the data / `x0`. 
One of `epsilon`, `sample`, - or `v-prediction`. - thresholding (`bool`, default `False`): - whether to use the "dynamic thresholding" method (introduced by Imagen, https://arxiv.org/abs/2205.11487). - For pixel-space diffusion models, you can set both `algorithm_type=dpmsolver++` and `thresholding=True` to - use the dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion - models (such as stable-diffusion). - dynamic_thresholding_ratio (`float`, default `0.995`): - the ratio for the dynamic thresholding method. Default is `0.995`, the same as Imagen - (https://arxiv.org/abs/2205.11487). - sample_max_value (`float`, default `1.0`): - the threshold value for dynamic thresholding. Valid only when `thresholding=True` and - `algorithm_type="dpmsolver++`. - algorithm_type (`str`, default `dpmsolver++`): - the algorithm type for the solver. Either `dpmsolver` or `dpmsolver++`. The `dpmsolver` type implements the - algorithms in https://arxiv.org/abs/2206.00927, and the `dpmsolver++` type implements the algorithms in - https://arxiv.org/abs/2211.01095. We recommend to use `dpmsolver++` with `solver_order=2` for guided - sampling (e.g. stable-diffusion). - solver_type (`str`, default `midpoint`): - the solver type for the second-order solver. Either `midpoint` or `heun`. The solver type slightly affects - the sample quality, especially for small number of steps. We empirically find that `midpoint` solvers are - slightly better, so we recommend to use the `midpoint` type. - lower_order_final (`bool`, default `True`): - whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. We empirically - find this trick can stabilize the sampling of DPM-Solver for steps < 15, especially for steps <= 10. - - """ - - _compatibles = _FLAX_COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy() - _deprecated_kwargs = ["predict_epsilon"] - - @property - def has_state(self): - return True - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[jnp.ndarray] = None, - solver_order: int = 2, - prediction_type: str = "epsilon", - thresholding: bool = False, - dynamic_thresholding_ratio: float = 0.995, - sample_max_value: float = 1.0, - algorithm_type: str = "dpmsolver++", - solver_type: str = "midpoint", - lower_order_final: bool = True, - **kwargs, - ): - message = ( - "Please make sure to instantiate your scheduler with `prediction_type` instead. E.g. `scheduler =" - " FlaxDPMSolverMultistepScheduler.from_pretrained(, prediction_type='epsilon')`." - ) - predict_epsilon = deprecate("predict_epsilon", "0.11.0", message, take_from=kwargs) - if predict_epsilon is not None: - self.register_to_config(prediction_type="epsilon" if predict_epsilon else "sample") - - if trained_betas is not None: - self.betas = jnp.asarray(trained_betas) - elif beta_schedule == "linear": - self.betas = jnp.linspace(beta_start, beta_end, num_train_timesteps, dtype=jnp.float32) - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. 
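-             # ("scaled_linear" interpolates linearly in sqrt-space and then squares, i.e.
-             #  beta_t = (sqrt(beta_start) + t / (T - 1) * (sqrt(beta_end) - sqrt(beta_start))) ** 2
-             #  for t = 0, ..., T - 1 with T = num_train_timesteps.)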
- self.betas = jnp.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=jnp.float32) ** 2 - elif beta_schedule == "squaredcos_cap_v2": - # Glide cosine schedule - self.betas = betas_for_alpha_bar(num_train_timesteps) - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = jnp.cumprod(self.alphas, axis=0) - # Currently we only support VP-type noise schedule - self.alpha_t = jnp.sqrt(self.alphas_cumprod) - self.sigma_t = jnp.sqrt(1 - self.alphas_cumprod) - self.lambda_t = jnp.log(self.alpha_t) - jnp.log(self.sigma_t) - - # standard deviation of the initial noise distribution - self.init_noise_sigma = 1.0 - - # settings for DPM-Solver - if algorithm_type not in ["dpmsolver", "dpmsolver++"]: - raise NotImplementedError(f"{algorithm_type} does is not implemented for {self.__class__}") - if solver_type not in ["midpoint", "heun"]: - raise NotImplementedError(f"{solver_type} does is not implemented for {self.__class__}") - - def create_state(self): - return DPMSolverMultistepSchedulerState.create(num_train_timesteps=self.config.num_train_timesteps) - - def set_timesteps( - self, state: DPMSolverMultistepSchedulerState, num_inference_steps: int, shape: Tuple - ) -> DPMSolverMultistepSchedulerState: - """ - Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - state (`DPMSolverMultistepSchedulerState`): - the `FlaxDPMSolverMultistepScheduler` state data class instance. - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - shape (`Tuple`): - the shape of the samples to be generated. - """ - timesteps = ( - jnp.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps + 1) - .round()[::-1][:-1] - .astype(jnp.int32) - ) - - return state.replace( - num_inference_steps=num_inference_steps, - timesteps=timesteps, - model_outputs=jnp.zeros((self.config.solver_order,) + shape), - lower_order_nums=0, - step_index=0, - prev_timestep=-1, - cur_sample=jnp.zeros(shape), - ) - - def convert_model_output( - self, - model_output: jnp.ndarray, - timestep: int, - sample: jnp.ndarray, - ) -> jnp.ndarray: - """ - Convert the model output to the corresponding type that the algorithm (DPM-Solver / DPM-Solver++) needs. - - DPM-Solver is designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to - discretize an integral of the data prediction model. So we need to first convert the model output to the - corresponding type to match the algorithm. - - Note that the algorithm type and the model type is decoupled. That is to say, we can use either DPM-Solver or - DPM-Solver++ for both noise prediction model and data prediction model. - - Args: - model_output (`jnp.ndarray`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`jnp.ndarray`): - current instance of sample being created by diffusion process. - - Returns: - `jnp.ndarray`: the converted model output. - """ - # DPM-Solver++ needs to solve an integral of the data prediction model. 
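-         # (Concretely, each branch below maps the model output to a prediction of x0: for "epsilon"
-         #  x0 = (sample - sigma_t * model_output) / alpha_t, for "v_prediction"
-         #  x0 = alpha_t * sample - sigma_t * model_output, and for "sample" the output already is x0.)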
- if self.config.algorithm_type == "dpmsolver++": - if self.config.prediction_type == "epsilon": - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - x0_pred = (sample - sigma_t * model_output) / alpha_t - elif self.config.prediction_type == "sample": - x0_pred = model_output - elif self.config.prediction_type == "v_prediction": - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - x0_pred = alpha_t * sample - sigma_t * model_output - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, " - " or `v_prediction` for the FlaxDPMSolverMultistepScheduler." - ) - - if self.config.thresholding: - # Dynamic thresholding in https://arxiv.org/abs/2205.11487 - dynamic_max_val = jnp.percentile( - jnp.abs(x0_pred), self.config.dynamic_thresholding_ratio, axis=tuple(range(1, x0_pred.ndim)) - ) - dynamic_max_val = jnp.maximum( - dynamic_max_val, self.config.sample_max_value * jnp.ones_like(dynamic_max_val) - ) - x0_pred = jnp.clip(x0_pred, -dynamic_max_val, dynamic_max_val) / dynamic_max_val - return x0_pred - # DPM-Solver needs to solve an integral of the noise prediction model. - elif self.config.algorithm_type == "dpmsolver": - if self.config.prediction_type == "epsilon": - return model_output - elif self.config.prediction_type == "sample": - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - epsilon = (sample - alpha_t * model_output) / sigma_t - return epsilon - elif self.config.prediction_type == "v_prediction": - alpha_t, sigma_t = self.alpha_t[timestep], self.sigma_t[timestep] - epsilon = alpha_t * model_output + sigma_t * sample - return epsilon - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, " - " or `v_prediction` for the FlaxDPMSolverMultistepScheduler." - ) - - def dpm_solver_first_order_update( - self, model_output: jnp.ndarray, timestep: int, prev_timestep: int, sample: jnp.ndarray - ) -> jnp.ndarray: - """ - One step for the first-order DPM-Solver (equivalent to DDIM). - - See https://arxiv.org/abs/2206.00927 for the detailed derivation. - - Args: - model_output (`jnp.ndarray`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - prev_timestep (`int`): previous discrete timestep in the diffusion chain. - sample (`jnp.ndarray`): - current instance of sample being created by diffusion process. - - Returns: - `jnp.ndarray`: the sample tensor at the previous timestep. - """ - t, s0 = prev_timestep, timestep - m0 = model_output - lambda_t, lambda_s = self.lambda_t[t], self.lambda_t[s0] - alpha_t, alpha_s = self.alpha_t[t], self.alpha_t[s0] - sigma_t, sigma_s = self.sigma_t[t], self.sigma_t[s0] - h = lambda_t - lambda_s - if self.config.algorithm_type == "dpmsolver++": - x_t = (sigma_t / sigma_s) * sample - (alpha_t * (jnp.exp(-h) - 1.0)) * m0 - elif self.config.algorithm_type == "dpmsolver": - x_t = (alpha_t / alpha_s) * sample - (sigma_t * (jnp.exp(h) - 1.0)) * m0 - return x_t - - def multistep_dpm_solver_second_order_update( - self, - model_output_list: jnp.ndarray, - timestep_list: List[int], - prev_timestep: int, - sample: jnp.ndarray, - ) -> jnp.ndarray: - """ - One step for the second-order multistep DPM-Solver. - - Args: - model_output_list (`List[jnp.ndarray]`): - direct outputs from learned diffusion model at current and latter timesteps. - timestep (`int`): current and latter discrete timestep in the diffusion chain. 
- prev_timestep (`int`): previous discrete timestep in the diffusion chain. - sample (`jnp.ndarray`): - current instance of sample being created by diffusion process. - - Returns: - `jnp.ndarray`: the sample tensor at the previous timestep. - """ - t, s0, s1 = prev_timestep, timestep_list[-1], timestep_list[-2] - m0, m1 = model_output_list[-1], model_output_list[-2] - lambda_t, lambda_s0, lambda_s1 = self.lambda_t[t], self.lambda_t[s0], self.lambda_t[s1] - alpha_t, alpha_s0 = self.alpha_t[t], self.alpha_t[s0] - sigma_t, sigma_s0 = self.sigma_t[t], self.sigma_t[s0] - h, h_0 = lambda_t - lambda_s0, lambda_s0 - lambda_s1 - r0 = h_0 / h - D0, D1 = m0, (1.0 / r0) * (m0 - m1) - if self.config.algorithm_type == "dpmsolver++": - # See https://arxiv.org/abs/2211.01095 for detailed derivations - if self.config.solver_type == "midpoint": - x_t = ( - (sigma_t / sigma_s0) * sample - - (alpha_t * (jnp.exp(-h) - 1.0)) * D0 - - 0.5 * (alpha_t * (jnp.exp(-h) - 1.0)) * D1 - ) - elif self.config.solver_type == "heun": - x_t = ( - (sigma_t / sigma_s0) * sample - - (alpha_t * (jnp.exp(-h) - 1.0)) * D0 - + (alpha_t * ((jnp.exp(-h) - 1.0) / h + 1.0)) * D1 - ) - elif self.config.algorithm_type == "dpmsolver": - # See https://arxiv.org/abs/2206.00927 for detailed derivations - if self.config.solver_type == "midpoint": - x_t = ( - (alpha_t / alpha_s0) * sample - - (sigma_t * (jnp.exp(h) - 1.0)) * D0 - - 0.5 * (sigma_t * (jnp.exp(h) - 1.0)) * D1 - ) - elif self.config.solver_type == "heun": - x_t = ( - (alpha_t / alpha_s0) * sample - - (sigma_t * (jnp.exp(h) - 1.0)) * D0 - - (sigma_t * ((jnp.exp(h) - 1.0) / h - 1.0)) * D1 - ) - return x_t - - def multistep_dpm_solver_third_order_update( - self, - model_output_list: jnp.ndarray, - timestep_list: List[int], - prev_timestep: int, - sample: jnp.ndarray, - ) -> jnp.ndarray: - """ - One step for the third-order multistep DPM-Solver. - - Args: - model_output_list (`List[jnp.ndarray]`): - direct outputs from learned diffusion model at current and latter timesteps. - timestep (`int`): current and latter discrete timestep in the diffusion chain. - prev_timestep (`int`): previous discrete timestep in the diffusion chain. - sample (`jnp.ndarray`): - current instance of sample being created by diffusion process. - - Returns: - `jnp.ndarray`: the sample tensor at the previous timestep. 
- """ - t, s0, s1, s2 = prev_timestep, timestep_list[-1], timestep_list[-2], timestep_list[-3] - m0, m1, m2 = model_output_list[-1], model_output_list[-2], model_output_list[-3] - lambda_t, lambda_s0, lambda_s1, lambda_s2 = ( - self.lambda_t[t], - self.lambda_t[s0], - self.lambda_t[s1], - self.lambda_t[s2], - ) - alpha_t, alpha_s0 = self.alpha_t[t], self.alpha_t[s0] - sigma_t, sigma_s0 = self.sigma_t[t], self.sigma_t[s0] - h, h_0, h_1 = lambda_t - lambda_s0, lambda_s0 - lambda_s1, lambda_s1 - lambda_s2 - r0, r1 = h_0 / h, h_1 / h - D0 = m0 - D1_0, D1_1 = (1.0 / r0) * (m0 - m1), (1.0 / r1) * (m1 - m2) - D1 = D1_0 + (r0 / (r0 + r1)) * (D1_0 - D1_1) - D2 = (1.0 / (r0 + r1)) * (D1_0 - D1_1) - if self.config.algorithm_type == "dpmsolver++": - # See https://arxiv.org/abs/2206.00927 for detailed derivations - x_t = ( - (sigma_t / sigma_s0) * sample - - (alpha_t * (jnp.exp(-h) - 1.0)) * D0 - + (alpha_t * ((jnp.exp(-h) - 1.0) / h + 1.0)) * D1 - - (alpha_t * ((jnp.exp(-h) - 1.0 + h) / h**2 - 0.5)) * D2 - ) - elif self.config.algorithm_type == "dpmsolver": - # See https://arxiv.org/abs/2206.00927 for detailed derivations - x_t = ( - (alpha_t / alpha_s0) * sample - - (sigma_t * (jnp.exp(h) - 1.0)) * D0 - - (sigma_t * ((jnp.exp(h) - 1.0) / h - 1.0)) * D1 - - (sigma_t * ((jnp.exp(h) - 1.0 - h) / h**2 - 0.5)) * D2 - ) - return x_t - - def step( - self, - state: DPMSolverMultistepSchedulerState, - model_output: jnp.ndarray, - timestep: int, - sample: jnp.ndarray, - return_dict: bool = True, - ) -> Union[FlaxDPMSolverMultistepSchedulerOutput, Tuple]: - """ - Predict the sample at the previous timestep by DPM-Solver. Core function to propagate the diffusion process - from the learned model outputs (most often the predicted noise). - - Args: - state (`DPMSolverMultistepSchedulerState`): - the `FlaxDPMSolverMultistepScheduler` state data class instance. - model_output (`jnp.ndarray`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`jnp.ndarray`): - current instance of sample being created by diffusion process. - return_dict (`bool`): option for returning tuple rather than FlaxDPMSolverMultistepSchedulerOutput class - - Returns: - [`FlaxDPMSolverMultistepSchedulerOutput`] or `tuple`: [`FlaxDPMSolverMultistepSchedulerOutput`] if - `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor. 
- - """ - prev_timestep = jax.lax.cond( - state.step_index == len(state.timesteps) - 1, - lambda _: 0, - lambda _: state.timesteps[state.step_index + 1], - (), - ) - - model_output = self.convert_model_output(model_output, timestep, sample) - - model_outputs_new = jnp.roll(state.model_outputs, -1, axis=0) - model_outputs_new = model_outputs_new.at[-1].set(model_output) - state = state.replace( - model_outputs=model_outputs_new, - prev_timestep=prev_timestep, - cur_sample=sample, - ) - - def step_1(state: DPMSolverMultistepSchedulerState) -> jnp.ndarray: - return self.dpm_solver_first_order_update( - state.model_outputs[-1], - state.timesteps[state.step_index], - state.prev_timestep, - state.cur_sample, - ) - - def step_23(state: DPMSolverMultistepSchedulerState) -> jnp.ndarray: - def step_2(state: DPMSolverMultistepSchedulerState) -> jnp.ndarray: - timestep_list = jnp.array([state.timesteps[state.step_index - 1], state.timesteps[state.step_index]]) - return self.multistep_dpm_solver_second_order_update( - state.model_outputs, - timestep_list, - state.prev_timestep, - state.cur_sample, - ) - - def step_3(state: DPMSolverMultistepSchedulerState) -> jnp.ndarray: - timestep_list = jnp.array( - [ - state.timesteps[state.step_index - 2], - state.timesteps[state.step_index - 1], - state.timesteps[state.step_index], - ] - ) - return self.multistep_dpm_solver_third_order_update( - state.model_outputs, - timestep_list, - state.prev_timestep, - state.cur_sample, - ) - - if self.config.solver_order == 2: - return step_2(state) - elif self.config.lower_order_final and len(state.timesteps) < 15: - return jax.lax.cond( - state.lower_order_nums < 2, - step_2, - lambda state: jax.lax.cond( - state.step_index == len(state.timesteps) - 2, - step_2, - step_3, - state, - ), - state, - ) - else: - return jax.lax.cond( - state.lower_order_nums < 2, - step_2, - step_3, - state, - ) - - if self.config.solver_order == 1: - prev_sample = step_1(state) - elif self.config.lower_order_final and len(state.timesteps) < 15: - prev_sample = jax.lax.cond( - state.lower_order_nums < 1, - step_1, - lambda state: jax.lax.cond( - state.step_index == len(state.timesteps) - 1, - step_1, - step_23, - state, - ), - state, - ) - else: - prev_sample = jax.lax.cond( - state.lower_order_nums < 1, - step_1, - step_23, - state, - ) - - state = state.replace( - lower_order_nums=jnp.minimum(state.lower_order_nums + 1, self.config.solver_order), - step_index=(state.step_index + 1), - ) - - if not return_dict: - return (prev_sample, state) - - return FlaxDPMSolverMultistepSchedulerOutput(prev_sample=prev_sample, state=state) - - def scale_model_input( - self, state: DPMSolverMultistepSchedulerState, sample: jnp.ndarray, timestep: Optional[int] = None - ) -> jnp.ndarray: - """ - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. - - Args: - state (`DPMSolverMultistepSchedulerState`): - the `FlaxDPMSolverMultistepScheduler` state data class instance. 
- sample (`jnp.ndarray`): input sample - timestep (`int`, optional): current timestep - - Returns: - `jnp.ndarray`: scaled input sample - """ - return sample - - def add_noise( - self, - original_samples: jnp.ndarray, - noise: jnp.ndarray, - timesteps: jnp.ndarray, - ) -> jnp.ndarray: - sqrt_alpha_prod = self.alphas_cumprod[timesteps] ** 0.5 - sqrt_alpha_prod = sqrt_alpha_prod.flatten() - sqrt_alpha_prod = broadcast_to_shape_from_left(sqrt_alpha_prod, original_samples.shape) - - sqrt_one_minus_alpha_prod = (1 - self.alphas_cumprod[timesteps]) ** 0.0 - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten() - sqrt_one_minus_alpha_prod = broadcast_to_shape_from_left(sqrt_one_minus_alpha_prod, original_samples.shape) - - noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise - return noisy_samples - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/YlcldKlns/bing/cloudflare/worker.js b/spaces/YlcldKlns/bing/cloudflare/worker.js deleted file mode 100644 index e0debd750615f1329b2c72fbce73e1b9291f7137..0000000000000000000000000000000000000000 --- a/spaces/YlcldKlns/bing/cloudflare/worker.js +++ /dev/null @@ -1,18 +0,0 @@ -const TRAGET_HOST='hf4all-bingo.hf.space' // 请将此域名改成你自己的,域名信息在设置》站点域名查看。 - -export default { - async fetch(request) { - const uri = new URL(request.url); - if (uri.protocol === 'http:') { - uri.protocol = 'https:'; - return new Response('', { - status: 301, - headers: { - location: uri.toString(), - }, - }) - } - uri.host = TRAGET_HOST - return fetch(new Request(uri.toString(), request)); - }, -}; diff --git a/spaces/YlcldKlns/bing/src/components/chat-message.tsx b/spaces/YlcldKlns/bing/src/components/chat-message.tsx deleted file mode 100644 index bf272d8d7005cfd06c53bd213e09ea217e803549..0000000000000000000000000000000000000000 --- a/spaces/YlcldKlns/bing/src/components/chat-message.tsx +++ /dev/null @@ -1,93 +0,0 @@ -import remarkGfm from 'remark-gfm' -import remarkMath from 'remark-math' -import supersub from 'remark-supersub' -import remarkBreaks from 'remark-breaks' -import { cn } from '@/lib/utils' -import { CodeBlock } from '@/components/ui/codeblock' -import { MemoizedReactMarkdown } from '@/components/markdown' -import { LearnMore } from './learn-more' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { useEffect } from 'react' -import { TurnCounter } from './turn-counter' - -export interface ChatMessageProps { - message: ChatMessageModel -} - -export function ChatMessage({ message, ...props }: ChatMessageProps) { - useEffect(() => { - if (document.body.scrollHeight - window.innerHeight - window.scrollY - 200 < 0) { - window.scrollBy(0, 200) - } - }, [message.text]) - - return message.text ? ( -
    -
    - {obj.alt} - } - } catch (e) { - } - return {obj.alt} - }, - p({ children }) { - return

    {children}

    - }, - code({ node, inline, className, children, ...props }) { - if (children.length) { - if (children[0] == '▍') { - return ( - - ) - } - - children[0] = (children[0] as string).replace('`▍`', '▍') - } - - const match = /language-(\w+)/.exec(className || '') - - if (inline) { - return ( - - {children} - - ) - } - - return ( - - ) - } - }} - > - {message.text} -
    -
    -
    - {message.author === 'bot' && } - {message.author === 'bot' && } -
    -
    - ) : null -} diff --git a/spaces/Yudha515/Rvc-Models/Makefile b/spaces/Yudha515/Rvc-Models/Makefile deleted file mode 100644 index 5bfd89dd833d7448b21073eb6ee7cfac1d5157dd..0000000000000000000000000000000000000000 --- a/spaces/Yudha515/Rvc-Models/Makefile +++ /dev/null @@ -1,21 +0,0 @@ -default: linter tests - -install: - pip install -U pip - pip install -U -e '.[dev]' - -linter: - flake8 audiocraft && mypy audiocraft - flake8 tests && mypy tests - -tests: - coverage run -m pytest tests - coverage report --include 'audiocraft/*' - -docs: - pdoc3 --html -o docs -f audiocraft - -dist: - python setup.py sdist - -.PHONY: linter tests docs dist diff --git "a/spaces/a-v-bely/spanish-task-generator/pages/3_\360\237\223\245_\320\241\320\272\320\260\321\207\320\260\321\202\321\214.py" "b/spaces/a-v-bely/spanish-task-generator/pages/3_\360\237\223\245_\320\241\320\272\320\260\321\207\320\260\321\202\321\214.py" deleted file mode 100644 index 23b0728d148d95c5232e4bdde5aa325de25aab1a..0000000000000000000000000000000000000000 --- "a/spaces/a-v-bely/spanish-task-generator/pages/3_\360\237\223\245_\320\241\320\272\320\260\321\207\320\260\321\202\321\214.py" +++ /dev/null @@ -1,44 +0,0 @@ -import streamlit as st -from utilities_ui.custom_download_button import download_button as d_button - -st.set_page_config(page_title='Скачать', layout="wide", page_icon=':es:', initial_sidebar_state='collapsed') -if st.session_state.get('-LOGGED_IN_BOOL-') and (st.session_state.get('-DISPLAY_READY-') - or st.session_state.get('-DOWNLOAD_VERSION-')): - result = st.session_state.get('RESULT') - if result is None: - st.error('Не можем ничего загрузить! Вы ничего не просили!') - st.stop() - # Download buttons - if st.session_state.get('-DOWNLOAD_VERSION-'): - invite, tasks_col, tasks_with_answers_col, keys_only_col, full_coll, rest = st.columns([1, 1, 2, 1, 3, 1]) - invite.write('Скачать:') - with tasks_col: - d_button( - label='Задания', - data=result['STUDENT_OUT'], - file_name=f'{result["name"]}_tasks.txt') - with tasks_with_answers_col: - d_button( - label='Задания+Ключи', - data=result['TEACHER_OUT'], - file_name=f'{result["name"]}_tasks_and_keys.txt') - with keys_only_col: - d_button( - label='Ключи', - data=result['KEYS_ONLY'], - file_name=f'{result["name"]}_keys.txt') - with full_coll: - d_button( - label='Исходник+Задания+Ключи', - data=result['TOTAL_OUT'], - file_name=f'{result["name"]}_all.txt') - - if st.session_state.get('-DISPLAY_VERSION-'): - display_tasks_with_answers, display_tasks_only = st.tabs(['Задания+Ответы', 'Задания']) - display_tasks_with_answers.write(str(result['TEACHER_OUT'].replace('_', '\_'))) - display_tasks_only.write(str(result['STUDENT_OUT'].replace('_', '\_'))) - -elif st.session_state.get('-LOGGED_IN_BOOL-'): - st.warning('**Сначала введите текст**') -else: - st.warning('**Войдите или зарегистрируйтесь**') diff --git a/spaces/abdvl/datahub_qa_bot/docs/api/tutorials/creating-terms.md b/spaces/abdvl/datahub_qa_bot/docs/api/tutorials/creating-terms.md deleted file mode 100644 index 713f59cb1ff7494293f3b0965c8de69d3f490a60..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/api/tutorials/creating-terms.md +++ /dev/null @@ -1,111 +0,0 @@ -# Creating Terms - -## Why Would You Create Terms? -The Business Glossary(Term) feature in DataHub helps you use a shared vocabulary within the orgarnization, by providing a framework for defining a standardized set of data concepts and then associating them with the physical assets that exist within your data ecosystem. 
-
-For more information about terms, refer to [About DataHub Business Glossary](/docs/glossary/business-glossary.md).
-
-### Goal Of This Guide
-This guide will show you how to create a term named `Rate of Return`.
-
-## Prerequisites
-For this tutorial, you need to deploy DataHub Quickstart and ingest sample data.
-For detailed steps, please refer to [Prepare Local DataHub Environment](/docs/api/tutorials/references/prepare-datahub.md).
-
-## Create Terms With GraphQL
-
-:::note
-Please note that there are two available endpoints (`:8000`, `:9002`) to access GraphQL.
-For more information about the differences between these endpoints, please refer to [DataHub Metadata Service](../../../metadata-service/README.md#graphql-api)
-:::
-
-### GraphQL Explorer
-GraphQL Explorer is the fastest way to experiment with GraphQL without any dependencies.
-Navigate to GraphQL Explorer (`http://localhost:9002/api/graphiql`) and run the following query.
-
-```graphql
-mutation createGlossaryTerm {
-  createGlossaryTerm(input:
-    {
-      name: "Rate of Return",
-      description: "A rate of return (RoR) is the net gain or loss of an investment over a specified time period."
-    })
-}
-```
-If you see the following response, the operation was successful:
-```json
-{
-  "data": {
-    "createGlossaryTerm": ""
-  },
-  "extensions": {}
-}
-```
-
-### CURL
-
-With CURL, you need to provide tokens. To generate a token, please refer to [Generate Access Token](/docs/api/tutorials/references/generate-access-token.md).
-With `accessToken`, you can run the following command.
-
-```shell
-curl --location --request POST 'http://localhost:8080/api/graphql' \
---header 'Authorization: Bearer ' \
---header 'Content-Type: application/json' \
---data-raw '{ "query": "mutation createGlossaryTerm { createGlossaryTerm(input: { name: \"Rate of Return\", description: \"A rate of return (RoR) is the net gain or loss of an investment over a specified time period.\" }) }", "variables":{}}'
-```
-Expected Response:
-```json
-{"data":{"createGlossaryTerm":""},"extensions":{}}
-```
-
-
-## Create Terms With Python SDK
-
-The following code creates a term named `Rate of Return`.
-You can refer to the full code in [create_term.py](https://github.com/datahub-project/datahub/blob/master/metadata-ingestion/examples/library/create_term.py).
-```python
-import logging
-
-from datahub.emitter.mce_builder import make_term_urn
-from datahub.emitter.mcp import MetadataChangeProposalWrapper
-from datahub.emitter.rest_emitter import DatahubRestEmitter
-
-# Imports for metadata model classes
-from datahub.metadata.schema_classes import GlossaryTermInfoClass
-
-log = logging.getLogger(__name__)
-logging.basicConfig(level=logging.INFO)
-
-term_urn = make_term_urn("rateofreturn")
-term_properties_aspect = GlossaryTermInfoClass(
-    definition="A rate of return (RoR) is the net gain or loss of an investment over a specified time period.",
-    name="Rate of Return",
-    termSource="",
-)
-
-event: MetadataChangeProposalWrapper = MetadataChangeProposalWrapper(
-    entityUrn=term_urn,
-    aspect=term_properties_aspect,
-)
-
-# Create rest emitter
-rest_emitter = DatahubRestEmitter(gms_server="http://localhost:8080")
-rest_emitter.emit(event)
-log.info(f"Created term {term_urn}")
-```
-
-We're using the `MetadataChangeProposalWrapper` to change entities in this example.
-For more information about the `MetadataChangeProposal`, please refer to [MetadataChangeProposal & MetadataChangeLog Events](/docs/advanced/mcp-mcl.md) - - -## Expected Outcomes -You can now see `Rate of Return` term has been created. -To view the definition, you can either click on 'Govern > Glossary' at the top right of the page or simply search for the term by name. - -![term-created](../../imgs/apis/tutorials/term-created.png) - -## What's Next? - -Now that you created a term, how about adding it to a dataset? Here's a guide on [how to add a term on a dataset](/docs/api/tutorials/adding-terms.md). - - diff --git a/spaces/abdvl/datahub_qa_bot/docs/deploy/kubernetes.md b/spaces/abdvl/datahub_qa_bot/docs/deploy/kubernetes.md deleted file mode 100644 index 5d6a61b98b7545e7a88a0ca1a564374f75525b51..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/deploy/kubernetes.md +++ /dev/null @@ -1,154 +0,0 @@ ---- -title: "Deploying with Kubernetes" ---- - -# Deploying DataHub with Kubernetes - -## Introduction - -Helm charts for deploying DataHub on a kubernetes cluster is located in -this [repository](https://github.com/acryldata/datahub-helm). We provide charts for -deploying [Datahub](https://github.com/acryldata/datahub-helm/tree/master/charts/datahub) and -it's [dependencies](https://github.com/acryldata/datahub-helm/tree/master/charts/prerequisites) -(Elasticsearch, optionally Neo4j, MySQL, and Kafka) on a Kubernetes cluster. - -This doc is a guide to deploy an instance of DataHub on a kubernetes cluster using the above charts from scratch. - -## Setup - -1. Set up a kubernetes cluster - - In a cloud platform of choice like [Amazon EKS](https://aws.amazon.com/eks), - [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine), - and [Azure Kubernetes Service](https://azure.microsoft.com/en-us/services/kubernetes-service/) OR - - In local environment using [Minikube](https://minikube.sigs.k8s.io/docs/). Note, more than 7GB of RAM is required - to run Datahub and it's dependencies -2. Install the following tools: - - [kubectl](https://kubernetes.io/docs/tasks/tools/) to manage kubernetes resources - - [helm](https://helm.sh/docs/intro/install/) to deploy the resources based on helm charts. Note, we only support - Helm 3. - -## Components - -Datahub consists of 4 main components: [GMS](https://datahubproject.io/docs/metadata-service), -[MAE Consumer](https://datahubproject.io/docs/metadata-jobs/mae-consumer-job) (optional), -[MCE Consumer](https://datahubproject.io/docs/metadata-jobs/mce-consumer-job) (optional), and -[Frontend](https://datahubproject.io/docs/datahub-frontend). Kubernetes deployment for each of the components are -defined as subcharts under the main -[Datahub](https://github.com/acryldata/datahub-helm/tree/master/charts/datahub) -helm chart. - -The main components are powered by 4 external dependencies: - -- Kafka -- Local DB (MySQL, Postgres, MariaDB) -- Search Index (Elasticsearch) -- Graph Index (Supports either Neo4j or Elasticsearch) - -The dependencies must be deployed before deploying Datahub. We created a separate -[chart](https://github.com/acryldata/datahub-helm/tree/master/charts/prerequisites) -for deploying the dependencies with example configuration. They could also be deployed separately on-prem or leveraged -as managed services. To remove your dependency on Neo4j, set enabled to false in -the [values.yaml](https://github.com/acryldata/datahub-helm/blob/master/charts/prerequisites/values.yaml#L54) for -prerequisites. 
Then, override the `graph_service_impl` field in -the [values.yaml](https://github.com/acryldata/datahub-helm/blob/master/charts/datahub/values.yaml#L63) of datahub -instead of `neo4j`. - -## Quickstart - -Assuming kubectl context points to the correct kubernetes cluster, first create kubernetes secrets that contain MySQL -and Neo4j passwords. - -```(shell) -kubectl create secret generic mysql-secrets --from-literal=mysql-root-password=datahub -kubectl create secret generic neo4j-secrets --from-literal=neo4j-password=datahub -``` - -The above commands sets the passwords to "datahub" as an example. Change to any password of choice. - -Add datahub helm repo by running the following - -```(shell) -helm repo add datahub https://helm.datahubproject.io/ -``` - -Then, deploy the dependencies by running the following - -```(shell) -helm install prerequisites datahub/datahub-prerequisites -``` - -Note, the above uses the default configuration -defined [here](https://github.com/acryldata/datahub-helm/blob/master/charts/prerequisites/values.yaml). You can change -any of the configuration and deploy by running the following command. - -```(shell) -helm install prerequisites datahub/datahub-prerequisites --values <> -``` - -Run `kubectl get pods` to check whether all the pods for the dependencies are running. You should get a result similar -to below. - -``` -NAME READY STATUS RESTARTS AGE -elasticsearch-master-0 1/1 Running 0 62m -elasticsearch-master-1 1/1 Running 0 62m -elasticsearch-master-2 1/1 Running 0 62m -prerequisites-cp-schema-registry-cf79bfccf-kvjtv 2/2 Running 1 63m -prerequisites-kafka-0 1/1 Running 2 62m -prerequisites-mysql-0 1/1 Running 1 62m -prerequisites-neo4j-community-0 1/1 Running 0 52m -prerequisites-zookeeper-0 1/1 Running 0 62m -``` - -deploy Datahub by running the following - -```(shell) -helm install datahub datahub/datahub -``` - -Values in [values.yaml](https://github.com/acryldata/datahub-helm/blob/master/charts/datahub/values.yaml) -have been preset to point to the dependencies deployed using -the [prerequisites](https://github.com/acryldata/datahub-helm/tree/master/charts/prerequisites) -chart with release name "prerequisites". If you deployed the helm chart using a different release name, update the -quickstart-values.yaml file accordingly before installing. - -Run `kubectl get pods` to check whether all the datahub pods are running. You should get a result similar to below. - -``` -NAME READY STATUS RESTARTS AGE -datahub-datahub-frontend-84c58df9f7-5bgwx 1/1 Running 0 4m2s -datahub-datahub-gms-58b676f77c-c6pfx 1/1 Running 0 4m2s -datahub-datahub-mae-consumer-7b98bf65d-tjbwx 1/1 Running 0 4m3s -datahub-datahub-mce-consumer-8c57d8587-vjv9m 1/1 Running 0 4m2s -datahub-elasticsearch-setup-job-8dz6b 0/1 Completed 0 4m50s -datahub-kafka-setup-job-6blcj 0/1 Completed 0 4m40s -datahub-mysql-setup-job-b57kc 0/1 Completed 0 4m7s -elasticsearch-master-0 1/1 Running 0 97m -elasticsearch-master-1 1/1 Running 0 97m -elasticsearch-master-2 1/1 Running 0 97m -prerequisites-cp-schema-registry-cf79bfccf-kvjtv 2/2 Running 1 99m -prerequisites-kafka-0 1/1 Running 2 97m -prerequisites-mysql-0 1/1 Running 1 97m -prerequisites-neo4j-community-0 1/1 Running 0 88m -prerequisites-zookeeper-0 1/1 Running 0 97m -``` - -You can run the following to expose the frontend locally. Note, you can find the pod name using the command above. In -this case, the datahub-frontend pod name was `datahub-datahub-frontend-84c58df9f7-5bgwx`. 
- -```(shell) -kubectl port-forward 9002:9002 -``` - -You should be able to access the frontend via http://localhost:9002. - -Once you confirm that the pods are running well, you can set up ingress for datahub-frontend to expose the 9002 port to -the public. - -## Other useful commands - -| Command | Description | -|-----|------| -| helm uninstall datahub | Remove DataHub | -| helm ls | List of Helm charts | -| helm history | Fetch a release history | diff --git a/spaces/abdvl/datahub_qa_bot/docs/posts.md b/spaces/abdvl/datahub_qa_bot/docs/posts.md deleted file mode 100644 index 9647ee4ca9da9f18f9b9a36c11dea0cf433c5dd5..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/posts.md +++ /dev/null @@ -1,53 +0,0 @@ -import FeatureAvailability from '@site/src/components/FeatureAvailability'; - -# About DataHub Posts - - -DataHub allows users to make Posts that can be displayed on the app. Currently, Posts are only supported on the Home Page, but may be extended to other surfaces of the app in the future. Posts can be used to accomplish the following: - -* Allowing Admins to post announcements on the home page -* Pinning important DataHub assets or pages -* Pinning important external links - -## Posts Setup, Prerequisites, and Permissions - -Anyone can view Posts on the home page. To create Posts, a user must either have the **Create Global Announcements** Privilege, or possess the **Admin** DataHub Role. - -## Using Posts - -To create a post, users must use the [createPost](../graphql/mutations.md#createPost) GraphQL mutation. There is currently no way to create posts using the UI, though this will come in the future. - -There is only one type of Post that can be currently made, and that is a **Home Page Announcement**. This may be extended in the future to other surfaces. - -DataHub currently supports two types of Post content. Posts can either contain **TEXT** or can be a **LINK**. When creating a post through GraphQL, users will have to supply the post content. - -For **TEXT** posts, the following pieces of information are required in the `content` object (of type [UpdatePostContentInput](../graphql/inputObjects.md#updatepostcontentinput)) of the GraphQL `input` (of type [CreatePostInput](../graphql/inputObjects.md#createpostinput))). **TEXT** posts cannot be clicked. -* `contentType: TEXT` -* `title` -* `description` - -The `link` and `media` attributes are currently unused for **TEXT** posts. - -For **LINK** posts, the following pieces of information are required in the `content` object (of type [UpdatePostContentInput](../graphql/inputObjects.md#updatepostcontentinput)) of the GraphQL `input` (of type [CreatePostInput](../graphql/inputObjects.md#createpostinput))). **LINK** posts redirect to the provided link when clicked. -* `contentType: LINK` -* `title` -* `link` -* `media`. Currently only the **IMAGE** type is supported, and the URL of the image must be provided - -The `description` attribute is currently unused for **LINK** posts. - -Here are some examples of Posts displayed on the home page, with one **TEXT** post and two **LINK** posts. - -
-<!-- Screenshots: example home page Posts (one TEXT post, two LINK posts) -->
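As a concrete illustration of the required fields described above, here is a minimal sketch of issuing the `createPost` mutation from Python against a local DataHub GraphQL endpoint. The endpoint URL, the access token, and the exact `postType`/`media` input shapes are assumptions based on the conventions used elsewhere in these docs; verify them against the GraphQL reference linked below.

```python
import requests

# Assumed local quickstart endpoint and a personal access token (placeholders).
GRAPHQL_ENDPOINT = "http://localhost:8080/api/graphql"
ACCESS_TOKEN = "<access-token>"

# A LINK post; contentType, title, link and media are the fields required above.
# The HOME_PAGE_ANNOUNCEMENT post type and the media shape are assumptions to check
# against the createPost reference.
CREATE_LINK_POST = """
mutation createPost {
  createPost(
    input: {
      postType: HOME_PAGE_ANNOUNCEMENT
      content: {
        contentType: LINK
        title: "DataHub Documentation"
        link: "https://datahubproject.io/docs"
        media: { type: IMAGE, location: "https://example.com/thumbnail.png" }
      }
    }
  )
}
"""

response = requests.post(
    GRAPHQL_ENDPOINT,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"query": CREATE_LINK_POST, "variables": {}},
)
response.raise_for_status()
print(response.json())  # the GraphQL response body
```

A **TEXT** announcement is built the same way, except the `content` object carries `contentType: TEXT` with `title` and `description`, and `link`/`media` are omitted.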
    - -### GraphQL - -* [createPost](../graphql/mutations.md#createpost) -* [listPosts](../graphql/queries.md#listposts) - - -## FAQ and Troubleshooting - -*Need more help with Posts? Join the conversation in [Slack](http://slack.datahubproject.io)! Please post in the **#ui** channel!* diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/utils/contextmanagers.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/utils/contextmanagers.py deleted file mode 100644 index 38a639262d949b5754dedf12f33fa814b030ea38..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/utils/contextmanagers.py +++ /dev/null @@ -1,121 +0,0 @@ -import asyncio -import contextlib -import logging -import os -import time -from typing import List - -import torch - -logger = logging.getLogger(__name__) - -DEBUG_COMPLETED_TIME = bool(os.environ.get('DEBUG_COMPLETED_TIME', False)) - - -@contextlib.asynccontextmanager -async def completed(trace_name='', - name='', - sleep_interval=0.05, - streams: List[torch.cuda.Stream] = None): - """Async context manager that waits for work to complete on given CUDA - streams.""" - if not torch.cuda.is_available(): - yield - return - - stream_before_context_switch = torch.cuda.current_stream() - if not streams: - streams = [stream_before_context_switch] - else: - streams = [s if s else stream_before_context_switch for s in streams] - - end_events = [ - torch.cuda.Event(enable_timing=DEBUG_COMPLETED_TIME) for _ in streams - ] - - if DEBUG_COMPLETED_TIME: - start = torch.cuda.Event(enable_timing=True) - stream_before_context_switch.record_event(start) - - cpu_start = time.monotonic() - logger.debug('%s %s starting, streams: %s', trace_name, name, streams) - grad_enabled_before = torch.is_grad_enabled() - try: - yield - finally: - current_stream = torch.cuda.current_stream() - assert current_stream == stream_before_context_switch - - if DEBUG_COMPLETED_TIME: - cpu_end = time.monotonic() - for i, stream in enumerate(streams): - event = end_events[i] - stream.record_event(event) - - grad_enabled_after = torch.is_grad_enabled() - - # observed change of torch.is_grad_enabled() during concurrent run of - # async_test_bboxes code - assert (grad_enabled_before == grad_enabled_after - ), 'Unexpected is_grad_enabled() value change' - - are_done = [e.query() for e in end_events] - logger.debug('%s %s completed: %s streams: %s', trace_name, name, - are_done, streams) - with torch.cuda.stream(stream_before_context_switch): - while not all(are_done): - await asyncio.sleep(sleep_interval) - are_done = [e.query() for e in end_events] - logger.debug( - '%s %s completed: %s streams: %s', - trace_name, - name, - are_done, - streams, - ) - - current_stream = torch.cuda.current_stream() - assert current_stream == stream_before_context_switch - - if DEBUG_COMPLETED_TIME: - cpu_time = (cpu_end - cpu_start) * 1000 - stream_times_ms = '' - for i, stream in enumerate(streams): - elapsed_time = start.elapsed_time(end_events[i]) - stream_times_ms += f' {stream} {elapsed_time:.2f} ms' - logger.info('%s %s %.2f ms %s', trace_name, name, cpu_time, - stream_times_ms) - - -@contextlib.asynccontextmanager -async def concurrent(streamqueue: asyncio.Queue, - trace_name='concurrent', - name='stream'): - """Run code concurrently in different streams. - - :param streamqueue: asyncio.Queue instance. - - Queue tasks define the pool of streams used for concurrent execution. 
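    Illustrative usage sketch (the stream setup and the ``model`` call are
    placeholders, not part of this module); inside an async function::

        streamqueue: asyncio.Queue = asyncio.Queue()
        streamqueue.put_nowait(torch.cuda.Stream(device=torch.device('cuda:0')))
        async with concurrent(streamqueue):
            result = model(inputs)  # runs on the stream borrowed from the queue

    The context manager returns the stream to the queue when the block exits.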
- """ - if not torch.cuda.is_available(): - yield - return - - initial_stream = torch.cuda.current_stream() - - with torch.cuda.stream(initial_stream): - stream = await streamqueue.get() - assert isinstance(stream, torch.cuda.Stream) - - try: - with torch.cuda.stream(stream): - logger.debug('%s %s is starting, stream: %s', trace_name, name, - stream) - yield - current = torch.cuda.current_stream() - assert current == stream - logger.debug('%s %s has finished, stream: %s', trace_name, - name, stream) - finally: - streamqueue.task_done() - streamqueue.put_nowait(stream) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/cc_attention.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/cc_attention.py deleted file mode 100644 index 9207aa95e6730bd9b3362dee612059a5f0ce1c5e..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/ops/cc_attention.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn -import torch.nn.functional as F - -from annotator.uniformer.mmcv.cnn import PLUGIN_LAYERS, Scale - - -def NEG_INF_DIAG(n, device): - """Returns a diagonal matrix of size [n, n]. - - The diagonal are all "-inf". This is for avoiding calculating the - overlapped element in the Criss-Cross twice. - """ - return torch.diag(torch.tensor(float('-inf')).to(device).repeat(n), 0) - - -@PLUGIN_LAYERS.register_module() -class CrissCrossAttention(nn.Module): - """Criss-Cross Attention Module. - - .. note:: - Before v1.3.13, we use a CUDA op. Since v1.3.13, we switch - to a pure PyTorch and equivalent implementation. For more - details, please refer to https://github.com/open-mmlab/mmcv/pull/1201. - - Speed comparison for one forward pass - - - Input size: [2,512,97,97] - - Device: 1 NVIDIA GeForce RTX 2080 Ti - - +-----------------------+---------------+------------+---------------+ - | |PyTorch version|CUDA version|Relative speed | - +=======================+===============+============+===============+ - |with torch.no_grad() |0.00554402 s |0.0299619 s |5.4x | - +-----------------------+---------------+------------+---------------+ - |no with torch.no_grad()|0.00562803 s |0.0301349 s |5.4x | - +-----------------------+---------------+------------+---------------+ - - Args: - in_channels (int): Channels of the input feature map. - """ - - def __init__(self, in_channels): - super().__init__() - self.query_conv = nn.Conv2d(in_channels, in_channels // 8, 1) - self.key_conv = nn.Conv2d(in_channels, in_channels // 8, 1) - self.value_conv = nn.Conv2d(in_channels, in_channels, 1) - self.gamma = Scale(0.) - self.in_channels = in_channels - - def forward(self, x): - """forward function of Criss-Cross Attention. - - Args: - x (Tensor): Input feature. 
\ - shape (batch_size, in_channels, height, width) - Returns: - Tensor: Output of the layer, with shape of \ - (batch_size, in_channels, height, width) - """ - B, C, H, W = x.size() - query = self.query_conv(x) - key = self.key_conv(x) - value = self.value_conv(x) - energy_H = torch.einsum('bchw,bciw->bwhi', query, key) + NEG_INF_DIAG( - H, query.device) - energy_H = energy_H.transpose(1, 2) - energy_W = torch.einsum('bchw,bchj->bhwj', query, key) - attn = F.softmax( - torch.cat([energy_H, energy_W], dim=-1), dim=-1) # [B,H,W,(H+W)] - out = torch.einsum('bciw,bhwi->bchw', value, attn[..., :H]) - out += torch.einsum('bchj,bhwj->bchw', value, attn[..., H:]) - - out = self.gamma(out) + x - out = out.contiguous() - - return out - - def __repr__(self): - s = self.__class__.__name__ - s += f'(in_channels={self.in_channels})' - return s diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/datasets/pipelines/loading.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/datasets/pipelines/loading.py deleted file mode 100644 index 5213aa3409f476e564970e85fd2bd973cb012fa0..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/datasets/pipelines/loading.py +++ /dev/null @@ -1,165 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. - * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala - * Modified from MMCV repo: From https://github.com/open-mmlab/mmcv - * Copyright (c) OpenMMLab. All rights reserved. -''' - -import os.path as osp - -import annotator.uniformer.mmcv as mmcv -import numpy as np - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class LoadImageFromFile(object): - """Load an image from file. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename"). Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. If set to False, the loaded image is an uint8 array. - Defaults to False. - color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default: - 'cv2' - """ - - def __init__(self, - to_float32=False, - color_type='color', - file_client_args=dict(backend='disk'), - imdecode_backend='cv2'): - self.to_float32 = to_float32 - self.color_type = color_type - self.file_client_args = file_client_args.copy() - self.file_client = None - self.imdecode_backend = imdecode_backend - - def __call__(self, results): - """Call functions to load image and get image meta information. - - Args: - results (dict): Result dict from :obj:`mmseg.CustomDataset`. - - Returns: - dict: The dict contains loaded image and meta information. 
- """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results.get('img_prefix') is not None: - filename = osp.join(results['img_prefix'], - results['img_info']['filename']) - else: - filename = results['img_info']['filename'] - img_bytes = self.file_client.get(filename) - img = mmcv.imfrombytes( - img_bytes, flag=self.color_type, backend=self.imdecode_backend) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - # Set initial values for default meta_keys - results['pad_shape'] = img.shape - results['scale_factor'] = 1.0 - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results['img_norm_cfg'] = dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(to_float32={self.to_float32},' - repr_str += f"color_type='{self.color_type}'," - repr_str += f"imdecode_backend='{self.imdecode_backend}')" - return repr_str - - -@PIPELINES.register_module() -class LoadAnnotations(object): - """Load annotations for semantic segmentation. - - Args: - reduce_zero_label (bool): Whether reduce all label value by 1. - Usually used for datasets where 0 is background label. - Default: False. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default: - 'pillow' - """ - - def __init__(self, - reduce_zero_label=False, - file_client_args=dict(backend='disk'), - imdecode_backend='pillow'): - self.reduce_zero_label = reduce_zero_label - self.file_client_args = file_client_args.copy() - self.file_client = None - self.imdecode_backend = imdecode_backend - - def __call__(self, results): - """Call function to load multiple types annotations. - - Args: - results (dict): Result dict from :obj:`mmseg.CustomDataset`. - - Returns: - dict: The dict contains loaded semantic segmentation annotations. 
- """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results.get('seg_prefix', None) is not None: - filename = osp.join(results['seg_prefix'], - results['ann_info']['seg_map']) - else: - filename = results['ann_info']['seg_map'] - img_bytes = self.file_client.get(filename) - gt_semantic_seg = mmcv.imfrombytes( - img_bytes, flag='unchanged', - backend=self.imdecode_backend).squeeze().astype(np.uint8) - # modify if custom classes - if results.get('label_map', None) is not None: - for old_id, new_id in results['label_map'].items(): - gt_semantic_seg[gt_semantic_seg == old_id] = new_id - # reduce zero_label - if self.reduce_zero_label: - # avoid using underflow conversion - gt_semantic_seg[gt_semantic_seg == 0] = 255 - gt_semantic_seg = gt_semantic_seg - 1 - gt_semantic_seg[gt_semantic_seg == 254] = 255 - results['gt_semantic_seg'] = gt_semantic_seg - results['seg_fields'].append('gt_semantic_seg') - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(reduce_zero_label={self.reduce_zero_label},' - repr_str += f"imdecode_backend='{self.imdecode_backend}')" - return repr_str diff --git a/spaces/ai-maker-space/ChatWithYourPDF/app.py b/spaces/ai-maker-space/ChatWithYourPDF/app.py deleted file mode 100644 index 6ae6dc4f00a3b401305e15b8a66869498fd50a08..0000000000000000000000000000000000000000 --- a/spaces/ai-maker-space/ChatWithYourPDF/app.py +++ /dev/null @@ -1,146 +0,0 @@ -import os -from typing import List - -from langchain.embeddings.openai import OpenAIEmbeddings -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain.vectorstores import Chroma -from langchain.chains import ( - ConversationalRetrievalChain, -) -from langchain.document_loaders import PyPDFLoader -from langchain.chat_models import ChatOpenAI -from langchain.prompts.chat import ( - ChatPromptTemplate, - SystemMessagePromptTemplate, - HumanMessagePromptTemplate, -) -from langchain.docstore.document import Document -from langchain.memory import ChatMessageHistory, ConversationBufferMemory -from chainlit.types import AskFileResponse - -import chainlit as cl - -text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100) - -system_template = """Use the following pieces of context to answer the users question. -If you don't know the answer, just say that you don't know, don't try to make up an answer. -ALWAYS return a "SOURCES" part in your answer. -The "SOURCES" part should be a reference to the source of the document from which you got your answer. - -And if the user greets with greetings like Hi, hello, How are you, etc reply accordingly as well. - -Example of your response should be: - -The answer is foo -SOURCES: xyz - - -Begin! 
----------------- -{summaries}""" -messages = [ - SystemMessagePromptTemplate.from_template(system_template), - HumanMessagePromptTemplate.from_template("{question}"), -] -prompt = ChatPromptTemplate.from_messages(messages) -chain_type_kwargs = {"prompt": prompt} - - -def process_file(file: AskFileResponse): - import tempfile - - with tempfile.NamedTemporaryFile(mode="w", delete=False) as tempfile: - with open(tempfile.name, "wb") as f: - f.write(file.content) - - pypdf_loader = PyPDFLoader(tempfile.name) - texts = pypdf_loader.load_and_split() - texts = [text.page_content for text in texts] - return texts - - -@cl.on_chat_start -async def on_chat_start(): - files = None - - # Wait for the user to upload a file - while files == None: - files = await cl.AskFileMessage( - content="Please upload a PDF file to begin!", - accept=["application/pdf"], - max_size_mb=20, - timeout=180, - ).send() - - file = files[0] - - msg = cl.Message( - content=f"Processing `{file.name}`...", disable_human_feedback=True - ) - await msg.send() - - # load the file - texts = process_file(file) - - print(texts[0]) - - # Create a metadata for each chunk - metadatas = [{"source": f"{i}-pl"} for i in range(len(texts))] - - # Create a Chroma vector store - embeddings = OpenAIEmbeddings() - docsearch = await cl.make_async(Chroma.from_texts)( - texts, embeddings, metadatas=metadatas - ) - - message_history = ChatMessageHistory() - - memory = ConversationBufferMemory( - memory_key="chat_history", - output_key="answer", - chat_memory=message_history, - return_messages=True, - ) - - # Create a chain that uses the Chroma vector store - chain = ConversationalRetrievalChain.from_llm( - ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, streaming=True), - chain_type="stuff", - retriever=docsearch.as_retriever(), - memory=memory, - return_source_documents=True, - ) - - # Let the user know that the system is ready - msg.content = f"Processing `{file.name}` done. You can now ask questions!" 
- await msg.update() - - cl.user_session.set("chain", chain) - - -@cl.on_message -async def main(message): - chain = cl.user_session.get("chain") # type: ConversationalRetrievalChain - cb = cl.AsyncLangchainCallbackHandler() - - res = await chain.acall(message.content, callbacks=[cb]) - answer = res["answer"] - source_documents = res["source_documents"] # type: List[Document] - - text_elements = [] # type: List[cl.Text] - - if source_documents: - for source_idx, source_doc in enumerate(source_documents): - source_name = f"source_{source_idx}" - # Create the text element referenced in the message - text_elements.append( - cl.Text(content=source_doc.page_content, name=source_name) - ) - source_names = [text_el.name for text_el in text_elements] - - if source_names: - answer += f"\nSources: {', '.join(source_names)}" - else: - answer += "\nNo sources found" - - await cl.Message(content=answer, elements=text_elements).send() diff --git a/spaces/akhaliq/SwinIR/download-weights.sh b/spaces/akhaliq/SwinIR/download-weights.sh deleted file mode 100644 index 1232611b4d81d15413ced7535d8ef1ca89d323a3..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/SwinIR/download-weights.sh +++ /dev/null @@ -1,13 +0,0 @@ -#!/bin/sh - -wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/003_realSR_BSRGAN_DFO_s64w8_SwinIR-M_x4_GAN.pth -P experiments/pretrained_models -wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/004_grayDN_DFWB_s128w8_SwinIR-M_noise15.pth -P experiments/pretrained_models -wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/004_grayDN_DFWB_s128w8_SwinIR-M_noise25.pth -P experiments/pretrained_models -wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/004_grayDN_DFWB_s128w8_SwinIR-M_noise50.pth -P experiments/pretrained_models -wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/005_colorDN_DFWB_s128w8_SwinIR-M_noise15.pth -P experiments/pretrained_models -wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/005_colorDN_DFWB_s128w8_SwinIR-M_noise25.pth -P experiments/pretrained_models -wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/005_colorDN_DFWB_s128w8_SwinIR-M_noise50.pth -P experiments/pretrained_models -wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/006_CAR_DFWB_s126w7_SwinIR-M_jpeg10.pth -P experiments/pretrained_models -wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/006_CAR_DFWB_s126w7_SwinIR-M_jpeg20.pth -P experiments/pretrained_models -wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/006_CAR_DFWB_s126w7_SwinIR-M_jpeg30.pth -P experiments/pretrained_models -wget https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/006_CAR_DFWB_s126w7_SwinIR-M_jpeg40.pth -P experiments/pretrained_models \ No newline at end of file diff --git a/spaces/akhaliq/deeplab2/evaluation/panoptic_quality_test.py b/spaces/akhaliq/deeplab2/evaluation/panoptic_quality_test.py deleted file mode 100644 index ecef73fd8d93dbcac295f9f5431c1ba4cc08398b..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/evaluation/panoptic_quality_test.py +++ /dev/null @@ -1,214 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Tests for panoptic_quality metrics.""" -import collections - -from absl import logging -import numpy as np -import tensorflow as tf - -from deeplab2.evaluation import panoptic_quality -from deeplab2.evaluation import test_utils - -# See the definition of the color names at: -# https://en.wikipedia.org/wiki/Web_colors. -_CLASS_COLOR_MAP = { - (0, 0, 0): 0, - (0, 0, 255): 1, # Person (blue). - (255, 0, 0): 2, # Bear (red). - (0, 255, 0): 3, # Tree (lime). - (255, 0, 255): 4, # Bird (fuchsia). - (0, 255, 255): 5, # Sky (aqua). - (255, 255, 0): 6, # Cat (yellow). -} - - -def combine_maps(semantic_map, instance_map, label_divisor): - combined_map = instance_map + semantic_map * label_divisor - return tf.cast(combined_map, tf.int32) - - -class PanopticQualityMetricTest(tf.test.TestCase): - - def test_streaming_metric_on_single_image(self): - max_instances_per_category = 1000 - instance_class_map = { - 0: 0, - 47: 1, - 97: 1, - 133: 1, - 150: 1, - 174: 1, - 198: 2, - 215: 1, - 244: 1, - 255: 1, - } - gt_instances, gt_classes = test_utils.panoptic_segmentation_with_class_map( - 'team_gt_instance.png', instance_class_map) - - pred_classes = test_utils.read_segmentation_with_rgb_color_map( - 'team_pred_class.png', _CLASS_COLOR_MAP) - pred_instances = test_utils.read_test_image( - 'team_pred_instance.png', image_format='L') - - pq_obj = panoptic_quality.PanopticQuality( - num_classes=3, - max_instances_per_category=max_instances_per_category, - ignored_label=0, offset=256*256) - - y_true = combine_maps(gt_classes, gt_instances, max_instances_per_category) - y_pred = combine_maps(pred_classes, pred_instances, - max_instances_per_category) - pq_obj.update_state(y_true, y_pred) - result = pq_obj.result().numpy() - self.assertAlmostEqual(result[0], 0.62156284, places=4) - self.assertAlmostEqual(result[1], 0.64664984, places=4) - self.assertAlmostEqual(result[2], 0.9666667, places=4) - self.assertEqual(result[3], 4.) - self.assertAlmostEqual(result[4], 0.5) - self.assertEqual(result[5], 0.) 
- - def test_streaming_metric_on_multiple_images(self): - num_classes = 7 - - bird_gt_instance_class_map = { - 92: 5, - 176: 3, - 255: 4, - } - cat_gt_instance_class_map = { - 0: 0, - 255: 6, - } - team_gt_instance_class_map = { - 0: 0, - 47: 1, - 97: 1, - 133: 1, - 150: 1, - 174: 1, - 198: 2, - 215: 1, - 244: 1, - 255: 1, - } - max_instances_per_category = 256 - test_image = collections.namedtuple( - 'TestImage', - ['gt_class_map', 'gt_path', 'pred_inst_path', 'pred_class_path']) - test_images = [ - test_image(bird_gt_instance_class_map, 'bird_gt.png', - 'bird_pred_instance.png', 'bird_pred_class.png'), - test_image(cat_gt_instance_class_map, 'cat_gt.png', - 'cat_pred_instance.png', 'cat_pred_class.png'), - test_image(team_gt_instance_class_map, 'team_gt_instance.png', - 'team_pred_instance.png', 'team_pred_class.png'), - ] - - gt_classes = [] - gt_instances = [] - pred_classes = [] - pred_instances = [] - for test_image in test_images: - (image_gt_instances, - image_gt_classes) = test_utils.panoptic_segmentation_with_class_map( - test_image.gt_path, test_image.gt_class_map) - gt_classes.append(image_gt_classes) - gt_instances.append(image_gt_instances) - - pred_classes.append( - test_utils.read_segmentation_with_rgb_color_map( - test_image.pred_class_path, _CLASS_COLOR_MAP)) - pred_instances.append( - test_utils.read_test_image(test_image.pred_inst_path, - image_format='L')) - - pq_obj = panoptic_quality.PanopticQuality( - num_classes=num_classes, - max_instances_per_category=max_instances_per_category, - ignored_label=0, offset=256*256) - for pred_class, pred_instance, gt_class, gt_instance in zip( - pred_classes, pred_instances, gt_classes, gt_instances): - y_true = combine_maps(gt_class, gt_instance, max_instances_per_category) - y_pred = combine_maps(pred_class, pred_instance, - max_instances_per_category) - pq_obj.update_state(y_true, y_pred) - result = pq_obj.result().numpy() - - self.assertAlmostEqual(result[0], 0.76855499, places=4) - self.assertAlmostEqual(result[1], 0.7769174, places=4) - self.assertAlmostEqual(result[2], 0.98888892, places=4) - self.assertEqual(result[3], 2.) - self.assertAlmostEqual(result[4], 1. / 6, places=4) - self.assertEqual(result[5], 0.) - - def test_predicted_non_contiguous_ignore_label(self): - max_instances_per_category = 256 - pq_obj = panoptic_quality.PanopticQuality( - num_classes=3, - max_instances_per_category=max_instances_per_category, - ignored_label=9, - offset=256 * 256) - - gt_class = [ - [0, 9, 9], - [1, 2, 2], - [1, 9, 9], - ] - gt_instance = [ - [0, 2, 2], - [1, 0, 0], - [1, 0, 0], - ] - y_true = combine_maps( - np.array(gt_class), np.array(gt_instance), max_instances_per_category) - logging.info('y_true=\n%s', y_true) - - pred_class = [ - [0, 0, 9], - [1, 1, 1], - [1, 9, 9], - ] - pred_instance = [ - [0, 0, 0], - [0, 1, 1], - [0, 1, 1], - ] - y_pred = combine_maps( - np.array(pred_class), np.array(pred_instance), - max_instances_per_category) - logging.info('y_pred=\n%s', y_pred) - - pq_obj.update_state(y_true, y_pred) - result = pq_obj.result().numpy() - - # pq - self.assertAlmostEqual(result[0], 2. / 9, places=4) - # sq - self.assertAlmostEqual(result[1], 1. / 3, places=4) - # rq - self.assertAlmostEqual(result[2], 2. / 9, places=4) - # tp - self.assertAlmostEqual(result[3], 1. / 3, places=4) - # fn - self.assertAlmostEqual(result[4], 2. / 3, places=4) - # fp - self.assertAlmostEqual(result[5], 2. 
/ 3, places=4) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/aliceoq/vozes-da-loirinha/vc_infer_pipeline.py b/spaces/aliceoq/vozes-da-loirinha/vc_infer_pipeline.py deleted file mode 100644 index 81d163305f9f8c158f83690bd631de3433c2adf1..0000000000000000000000000000000000000000 --- a/spaces/aliceoq/vozes-da-loirinha/vc_infer_pipeline.py +++ /dev/null @@ -1,650 +0,0 @@ -import numpy as np, parselmouth, torch, pdb, sys, os -from time import time as ttime -import torch.nn.functional as F -import torchcrepe # Fork feature. Use the crepe f0 algorithm. New dependency (pip install torchcrepe) -from torch import Tensor -import scipy.signal as signal -import pyworld, os, traceback, faiss, librosa, torchcrepe -from scipy import signal -from functools import lru_cache - -now_dir = os.getcwd() -sys.path.append(now_dir) - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - -input_audio_path2wav = {} - - -@lru_cache -def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period): - audio = input_audio_path2wav[input_audio_path] - f0, t = pyworld.harvest( - audio, - fs=fs, - f0_ceil=f0max, - f0_floor=f0min, - frame_period=frame_period, - ) - f0 = pyworld.stonemask(audio, f0, t, fs) - return f0 - - -def change_rms(data1, sr1, data2, sr2, rate): # 1是输入音频,2是输出音频,rate是2的占比 - # print(data1.max(),data2.max()) - rms1 = librosa.feature.rms( - y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2 - ) # 每半秒一个点 - rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2) - rms1 = torch.from_numpy(rms1) - rms1 = F.interpolate( - rms1.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.from_numpy(rms2) - rms2 = F.interpolate( - rms2.unsqueeze(0), size=data2.shape[0], mode="linear" - ).squeeze() - rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6) - data2 *= ( - torch.pow(rms1, torch.tensor(1 - rate)) - * torch.pow(rms2, torch.tensor(rate - 1)) - ).numpy() - return data2 - - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - # Fork Feature: Get the best torch device to use for f0 algorithms that require a torch device. Will return the type (torch.device) - def get_optimal_torch_device(self, index: int = 0) -> torch.device: - # Get cuda device - if torch.cuda.is_available(): - return torch.device( - f"cuda:{index % torch.cuda.device_count()}" - ) # Very fast - elif torch.backends.mps.is_available(): - return torch.device("mps") - # Insert an else here to grab "xla" devices if available. TO DO later. Requires the torch_xla.core.xla_model library - # Else wise return the "cpu" as a torch device, - return torch.device("cpu") - - # Fork Feature: Compute f0 with the crepe method - def get_f0_crepe_computation( - self, - x, - f0_min, - f0_max, - p_len, - hop_length=160, # 512 before. Hop length changes the speed that the voice jumps to a different dramatic pitch. Lower hop lengths means more pitch accuracy but longer inference time. - model="full", # Either use crepe-tiny "tiny" or crepe "full". 
Default is full - ): - x = x.astype( - np.float32 - ) # fixes the F.conv2D exception. We needed to convert double to float. - x /= np.quantile(np.abs(x), 0.999) - torch_device = self.get_optimal_torch_device() - audio = torch.from_numpy(x).to(torch_device, copy=True) - audio = torch.unsqueeze(audio, dim=0) - if audio.ndim == 2 and audio.shape[0] > 1: - audio = torch.mean(audio, dim=0, keepdim=True).detach() - audio = audio.detach() - print("Initiating prediction with a crepe_hop_length of: " + str(hop_length)) - pitch: Tensor = torchcrepe.predict( - audio, - self.sr, - hop_length, - f0_min, - f0_max, - model, - batch_size=hop_length * 2, - device=torch_device, - pad=True, - ) - p_len = p_len or x.shape[0] // hop_length - # Resize the pitch for final f0 - source = np.array(pitch.squeeze(0).cpu().float().numpy()) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * p_len, len(source)) / p_len, - np.arange(0, len(source)), - source, - ) - f0 = np.nan_to_num(target) - return f0 # Resized f0 - - def get_f0_official_crepe_computation( - self, - x, - f0_min, - f0_max, - model="full", - ): - # Pick a batch size that doesn't cause memory errors on your gpu - batch_size = 512 - # Compute pitch using first gpu - audio = torch.tensor(np.copy(x))[None].float() - f0, pd = torchcrepe.predict( - audio, - self.sr, - self.window, - f0_min, - f0_max, - model, - batch_size=batch_size, - device=self.device, - return_periodicity=True, - ) - pd = torchcrepe.filter.median(pd, 3) - f0 = torchcrepe.filter.mean(f0, 3) - f0[pd < 0.1] = 0 - f0 = f0[0].cpu().numpy() - return f0 - - # Fork Feature: Compute pYIN f0 method - def get_f0_pyin_computation(self, x, f0_min, f0_max): - y, sr = librosa.load("saudio/Sidney.wav", self.sr, mono=True) - f0, _, _ = librosa.pyin(y, sr=self.sr, fmin=f0_min, fmax=f0_max) - f0 = f0[1:] # Get rid of extra first frame - return f0 - - # Fork Feature: Acquire median hybrid f0 estimation calculation - def get_f0_hybrid_computation( - self, - methods_str, - input_audio_path, - x, - f0_min, - f0_max, - p_len, - filter_radius, - crepe_hop_length, - time_step, - ): - # Get various f0 methods from input to use in the computation stack - s = methods_str - s = s.split("hybrid")[1] - s = s.replace("[", "").replace("]", "") - methods = s.split("+") - f0_computation_stack = [] - - print("Calculating f0 pitch estimations for methods: %s" % str(methods)) - x = x.astype(np.float32) - x /= np.quantile(np.abs(x), 0.999) - # Get f0 calculations for all methods specified - for method in methods: - f0 = None - if method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif method == "crepe": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max) - f0 = f0[1:] # Get rid of extra first frame - elif method == "crepe-tiny": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max, "tiny") - f0 = f0[1:] # Get rid of extra first frame - elif method == "mangio-crepe": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length - ) - elif method == "mangio-crepe-tiny": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length, "tiny" - ) - elif method == "harvest": - f0 = 
cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - f0 = f0[1:] # Get rid of first frame. - elif method == "dio": # Potentially buggy? - f0, t = pyworld.dio( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - f0 = f0[1:] - # elif method == "pyin": Not Working just yet - # f0 = self.get_f0_pyin_computation(x, f0_min, f0_max) - # Push method to the stack - f0_computation_stack.append(f0) - - for fc in f0_computation_stack: - print(len(fc)) - - print("Calculating hybrid median f0 from the stack of: %s" % str(methods)) - f0_median_hybrid = None - if len(f0_computation_stack) == 1: - f0_median_hybrid = f0_computation_stack[0] - else: - f0_median_hybrid = np.nanmedian(f0_computation_stack, axis=0) - return f0_median_hybrid - - def get_f0( - self, - input_audio_path, - x, - p_len, - f0_up_key, - f0_method, - filter_radius, - crepe_hop_length, - inp_f0=None, - ): - global input_audio_path2wav - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10) - if filter_radius > 2: - f0 = signal.medfilt(f0, 3) - elif f0_method == "dio": # Potentially Buggy? 
- f0, t = pyworld.dio( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - elif f0_method == "crepe": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max) - elif f0_method == "crepe-tiny": - f0 = self.get_f0_official_crepe_computation(x, f0_min, f0_max, "tiny") - elif f0_method == "mangio-crepe": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length - ) - elif f0_method == "mangio-crepe-tiny": - f0 = self.get_f0_crepe_computation( - x, f0_min, f0_max, p_len, crepe_hop_length, "tiny" - ) - elif f0_method == "rmvpe": - if hasattr(self, "model_rmvpe") == False: - from rmvpe import RMVPE - - print("loading rmvpe model") - self.model_rmvpe = RMVPE( - "rmvpe.pt", is_half=self.is_half, device=self.device - ) - f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03) - - elif "hybrid" in f0_method: - # Perform hybrid median pitch estimation - input_audio_path2wav[input_audio_path] = x.astype(np.double) - f0 = self.get_f0_hybrid_computation( - f0_method, - input_audio_path, - x, - f0_min, - f0_max, - p_len, - filter_radius, - crepe_hop_length, - time_step, - ) - - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9 if version == "v1" else 12, - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) if version == "v1" else logits[0] - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = feats.clone() - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - 
torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if protect < 0.5 and pitch != None and pitchf != None: - feats0 = F.interpolate(feats0.permute(0, 2, 1), scale_factor=2).permute( - 0, 2, 1 - ) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - - if protect < 0.5 and pitch != None and pitchf != None: - pitchff = pitchf.clone() - pitchff[pitchf > 0] = 1 - pitchff[pitchf < 1] = protect - pitchff = pitchff.unsqueeze(-1) - feats = feats * pitchff + feats0 * (1 - pitchff) - feats = feats.to(feats0.dtype) - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0]).data.cpu().float().numpy() - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - crepe_hop_length, - progress, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - progress(0.4, desc="Gerando áudio...") - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0( - input_audio_path, - audio_pad, - p_len, - f0_up_key, - f0_method, - filter_radius, - crepe_hop_length, - inp_f0, - ) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - if self.device == "mps": - pitchf = pitchf.astype(np.float32) - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, 
device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - version, - protect, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - progress(0.6, desc="Gerando áudio...") - audio_opt = np.concatenate(audio_opt) - if rms_mix_rate != 1: - audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate) - if resample_sr >= 16000 and tgt_sr != resample_sr: - audio_opt = librosa.resample( - audio_opt, orig_sr=tgt_sr, target_sr=resample_sr - ) - progress(0.8, desc="Gerando áudio...") - audio_max = np.abs(audio_opt).max() / 0.99 - max_int16 = 32768 - if audio_max > 1: - max_int16 /= audio_max - audio_opt = (audio_opt * max_int16).astype(np.int16) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/allknowingroger/Image-Models-Test146/README.md b/spaces/allknowingroger/Image-Models-Test146/README.md deleted file mode 100644 index a3a43bf672ca727d8113068aed4ea790c9de9309..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test146/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -duplicated_from: allknowingroger/Image-Models-Test142 ---- - - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test177/app.py b/spaces/allknowingroger/Image-Models-Test177/app.py deleted file mode 100644 index 827b380f766ab55ff8d7d888e2d6a2fae752ca34..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test177/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "Yntec/nuipenimix2", - "melaris/nilooai", - "salma-remyx/lora-trained-xl-colab", - "joachimsallstrom/aether-glitch-lora-for-sdxl", - "alessandroaere/lora-trained-xl-colab", - "LinoyTsaban/huggy_v23", - "milaidy/jardepoz", - "shikari2917/mypic4", - "Yntec/SCMix", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = 
(model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test85/README.md b/spaces/allknowingroger/Image-Models-Test85/README.md deleted file mode 100644 index 30c3cf4d496c2fcf11e1659264655c971c669ad5..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test85/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: allknowingroger/Image-Models-Test84 ---- - - \ No newline at end of file diff --git a/spaces/amankishore/sjc/sd1/ldm/modules/ema.py b/spaces/amankishore/sjc/sd1/ldm/modules/ema.py deleted file mode 100644 
index c8c75af43565f6e140287644aaaefa97dd6e67c5..0000000000000000000000000000000000000000 --- a/spaces/amankishore/sjc/sd1/ldm/modules/ema.py +++ /dev/null @@ -1,76 +0,0 @@ -import torch -from torch import nn - - -class LitEma(nn.Module): - def __init__(self, model, decay=0.9999, use_num_upates=True): - super().__init__() - if decay < 0.0 or decay > 1.0: - raise ValueError('Decay must be between 0 and 1') - - self.m_name2s_name = {} - self.register_buffer('decay', torch.tensor(decay, dtype=torch.float32)) - self.register_buffer('num_updates', torch.tensor(0,dtype=torch.int) if use_num_upates - else torch.tensor(-1,dtype=torch.int)) - - for name, p in model.named_parameters(): - if p.requires_grad: - #remove as '.'-character is not allowed in buffers - s_name = name.replace('.','') - self.m_name2s_name.update({name:s_name}) - self.register_buffer(s_name,p.clone().detach().data) - - self.collected_params = [] - - def forward(self,model): - decay = self.decay - - if self.num_updates >= 0: - self.num_updates += 1 - decay = min(self.decay,(1 + self.num_updates) / (10 + self.num_updates)) - - one_minus_decay = 1.0 - decay - - with torch.no_grad(): - m_param = dict(model.named_parameters()) - shadow_params = dict(self.named_buffers()) - - for key in m_param: - if m_param[key].requires_grad: - sname = self.m_name2s_name[key] - shadow_params[sname] = shadow_params[sname].type_as(m_param[key]) - shadow_params[sname].sub_(one_minus_decay * (shadow_params[sname] - m_param[key])) - else: - assert not key in self.m_name2s_name - - def copy_to(self, model): - m_param = dict(model.named_parameters()) - shadow_params = dict(self.named_buffers()) - for key in m_param: - if m_param[key].requires_grad: - m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data) - else: - assert not key in self.m_name2s_name - - def store(self, parameters): - """ - Save the current parameters for restoring later. - Args: - parameters: Iterable of `torch.nn.Parameter`; the parameters to be - temporarily stored. - """ - self.collected_params = [param.clone() for param in parameters] - - def restore(self, parameters): - """ - Restore the parameters stored with the `store` method. - Useful to validate the model with EMA parameters without affecting the - original optimization process. Store the parameters before the - `copy_to` method. After validation (or model saving), use this to - restore the former parameters. - Args: - parameters: Iterable of `torch.nn.Parameter`; the parameters to be - updated with the stored parameters. 
- """ - for c_param, param in zip(self.collected_params, parameters): - param.data.copy_(c_param.data) diff --git a/spaces/andresgtn/bean-leaf-health-classifier/README.md b/spaces/andresgtn/bean-leaf-health-classifier/README.md deleted file mode 100644 index 2c7e1ebe3fba4edaf72306cbb75173e75247c8b9..0000000000000000000000000000000000000000 --- a/spaces/andresgtn/bean-leaf-health-classifier/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Bean Leaf Health Classifier -emoji: 🐢 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/antonovmaxim/text-generation-webui-space/modules/deepspeed_parameters.py b/spaces/antonovmaxim/text-generation-webui-space/modules/deepspeed_parameters.py deleted file mode 100644 index 9116f5792fea4edf4b536b6605ee40e254109a98..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/modules/deepspeed_parameters.py +++ /dev/null @@ -1,74 +0,0 @@ -def generate_ds_config(ds_bf16, train_batch_size, nvme_offload_dir): - ''' - DeepSpeed configration - https://huggingface.co/docs/transformers/main_classes/deepspeed - ''' - - if nvme_offload_dir: - ds_config = { - "fp16": { - "enabled": not ds_bf16, - }, - "bf16": { - "enabled": ds_bf16, - }, - "zero_optimization": { - "stage": 3, - "offload_param": { - "device": "nvme", - "nvme_path": nvme_offload_dir, - "pin_memory": True, - "buffer_count": 5, - "buffer_size": 1e9, - "max_in_cpu": 1e9 - }, - "overlap_comm": True, - "reduce_bucket_size": "auto", - "contiguous_gradients": True, - "sub_group_size": 1e8, - "stage3_prefetch_bucket_size": "auto", - "stage3_param_persistence_threshold": "auto", - "stage3_max_live_parameters": "auto", - "stage3_max_reuse_distance": "auto", - }, - "aio": { - "block_size": 262144, - "queue_depth": 32, - "thread_count": 1, - "single_submit": False, - "overlap_events": True - }, - "steps_per_print": 2000, - "train_batch_size": train_batch_size, - "train_micro_batch_size_per_gpu": 1, - "wall_clock_breakdown": False - } - else: - ds_config = { - "fp16": { - "enabled": not ds_bf16, - }, - "bf16": { - "enabled": ds_bf16, - }, - "zero_optimization": { - "stage": 3, - "offload_param": { - "device": "cpu", - "pin_memory": True - }, - "overlap_comm": True, - "contiguous_gradients": True, - "reduce_bucket_size": "auto", - "stage3_prefetch_bucket_size": "auto", - "stage3_param_persistence_threshold": "auto", - "stage3_max_live_parameters": "auto", - "stage3_max_reuse_distance": "auto", - }, - "steps_per_print": 2000, - "train_batch_size": train_batch_size, - "train_micro_batch_size_per_gpu": 1, - "wall_clock_breakdown": False - } - - return ds_config diff --git a/spaces/arbml/Ashaar/README.md b/spaces/arbml/Ashaar/README.md deleted file mode 100644 index 2c2c8d33881b2b8a9b9f786b6279beee761df0e4..0000000000000000000000000000000000000000 --- a/spaces/arbml/Ashaar/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Ashaar -emoji: 🧑‍🎤 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/arsalagrey/speech-recognition-vue/README.md b/spaces/arsalagrey/speech-recognition-vue/README.md deleted file mode 100644 index a4d3148e533329c1f43e3b597015ba3689b85d63..0000000000000000000000000000000000000000 --- 
a/spaces/arsalagrey/speech-recognition-vue/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Speech Recognition Vue -emoji: 👀 -colorFrom: indigo -colorTo: blue -sdk: static -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/autoregressive.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/autoregressive.py deleted file mode 100644 index 14d881bc1029ef577f24ae28f9414e431661142a..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/autoregressive.py +++ /dev/null @@ -1,631 +0,0 @@ -# AGPL: a notification must be added stating that changes have been made to that file. -import functools - -import torch -import torch.nn as nn -import torch.nn.functional as F -from transformers import GPT2Config, GPT2PreTrainedModel, LogitsProcessorList -from transformers.modeling_outputs import CausalLMOutputWithCrossAttentions - -from TTS.tts.layers.tortoise.arch_utils import AttentionBlock, TypicalLogitsWarper - - -def null_position_embeddings(range, dim): - return torch.zeros((range.shape[0], range.shape[1], dim), device=range.device) - - -def _p(t): - return t and (len(t), len(t[0]), t[0][0].shape) # kv_cache debug - - -class ResBlock(nn.Module): - """ - Basic residual convolutional block that uses GroupNorm. - """ - - def __init__(self, chan): - super().__init__() - self.net = nn.Sequential( - nn.Conv1d(chan, chan, kernel_size=3, padding=1), - nn.GroupNorm(chan // 8, chan), - nn.ReLU(), - nn.Conv1d(chan, chan, kernel_size=3, padding=1), - nn.GroupNorm(chan // 8, chan), - ) - - def forward(self, x): - return F.relu(self.net(x) + x) - - -class GPT2InferenceModel(GPT2PreTrainedModel): - def __init__(self, config, gpt, text_pos_emb, embeddings, norm, linear, kv_cache): - super().__init__(config) - self.transformer = gpt - self.text_pos_embedding = text_pos_emb - self.embeddings = embeddings - self.lm_head = nn.Sequential(norm, linear) - self.kv_cache = kv_cache - - def store_mel_emb(self, mel_emb): - self.cached_mel_emb = mel_emb - - def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs): - token_type_ids = kwargs.get("token_type_ids", None) # usually None - if not self.kv_cache: - past_key_values = None - # only last token for inputs_ids if past is defined in kwargs - if past_key_values: - input_ids = input_ids[:, -1].unsqueeze(-1) - if token_type_ids is not None: - token_type_ids = token_type_ids[:, -1].unsqueeze(-1) - - attention_mask = kwargs.get("attention_mask", None) - position_ids = kwargs.get("position_ids", None) - - if attention_mask is not None and position_ids is None: - # create position_ids on the fly for batch generation - position_ids = attention_mask.long().cumsum(-1) - 1 - position_ids.masked_fill_(attention_mask == 0, 1) - if past_key_values: - position_ids = position_ids[:, -1].unsqueeze(-1) - else: - position_ids = None - return { - "input_ids": input_ids, - "past_key_values": past_key_values, - "use_cache": kwargs.get("use_cache"), - "position_ids": position_ids, - "attention_mask": attention_mask, - "token_type_ids": token_type_ids, - } - - def forward( - self, - input_ids=None, - past_key_values=None, - attention_mask=None, - token_type_ids=None, - position_ids=None, - head_mask=None, - inputs_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - labels=None, - use_cache=None, - 
output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - assert self.cached_mel_emb is not None - assert inputs_embeds is None # Not supported by this inference model. - assert labels is None # Training not supported by this inference model. - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # Create embedding - mel_len = self.cached_mel_emb.shape[1] - if input_ids.shape[1] != 1: - text_inputs = input_ids[:, mel_len:] - text_emb = self.embeddings(text_inputs) - text_emb = text_emb + self.text_pos_embedding(text_emb) - if self.cached_mel_emb.shape[0] != text_emb.shape[0]: - mel_emb = self.cached_mel_emb.repeat_interleave(text_emb.shape[0] // self.cached_mel_emb.shape[0], 0) - else: # this outcome only occurs once per loop in most cases - mel_emb = self.cached_mel_emb - emb = torch.cat([mel_emb, text_emb], dim=1) - else: - emb = self.embeddings(input_ids) - emb = emb + self.text_pos_embedding.get_fixed_embedding( - attention_mask.shape[1] - mel_len, attention_mask.device - ) - - transformer_outputs = self.transformer( - inputs_embeds=emb, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = transformer_outputs[0] - lm_logits = self.lm_head(hidden_states) - - if not return_dict: - return (lm_logits,) + transformer_outputs[1:] - - return CausalLMOutputWithCrossAttentions( - loss=None, - logits=lm_logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - cross_attentions=transformer_outputs.cross_attentions, - ) - - @staticmethod - def _reorder_cache(past, beam_idx): - """ - This function is used to re-order the :obj:`past_key_values` cache if - :meth:`~transformers.PreTrainedModel.beam_search` or :meth:`~transformers.PreTrainedModel.beam_sample` is - called. This is required to match :obj:`past_key_values` with the correct beam_idx at every generation step. 
- """ - return tuple( - tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past) - for layer_past in past - ) - - -class ConditioningEncoder(nn.Module): - def __init__( - self, - spec_dim, - embedding_dim, - attn_blocks=6, - num_attn_heads=4, - do_checkpointing=False, - mean=False, - ): - super().__init__() - attn = [] - self.init = nn.Conv1d(spec_dim, embedding_dim, kernel_size=1) - for a in range(attn_blocks): - attn.append(AttentionBlock(embedding_dim, num_attn_heads)) - self.attn = nn.Sequential(*attn) - self.dim = embedding_dim - self.do_checkpointing = do_checkpointing - self.mean = mean - - def forward(self, x): - h = self.init(x) - h = self.attn(h) - if self.mean: - return h.mean(dim=2) - else: - return h[:, :, 0] - - -class LearnedPositionEmbeddings(nn.Module): - def __init__(self, seq_len, model_dim, init=0.02): - super().__init__() - self.emb = nn.Embedding(seq_len, model_dim) - # Initializing this way is standard for GPT-2 - self.emb.weight.data.normal_(mean=0.0, std=init) - - def forward(self, x): - sl = x.shape[1] - return self.emb(torch.arange(0, sl, device=x.device)) - - def get_fixed_embedding(self, ind, dev): - return self.emb(torch.arange(0, ind, device=dev))[ind - 1 : ind] - - -def build_hf_gpt_transformer(layers, model_dim, heads, max_mel_seq_len, max_text_seq_len, checkpointing): - """ - GPT-2 implemented by the HuggingFace library. - """ - from transformers import GPT2Config, GPT2Model - - gpt_config = GPT2Config( - vocab_size=256, # Unused. - n_positions=max_mel_seq_len + max_text_seq_len, - n_ctx=max_mel_seq_len + max_text_seq_len, - n_embd=model_dim, - n_layer=layers, - n_head=heads, - gradient_checkpointing=checkpointing, - use_cache=not checkpointing, - ) - gpt = GPT2Model(gpt_config) - # Override the built in positional embeddings - del gpt.wpe # TODO: figure out relevance in fixing exported model definition: Embedding(1012, 1024) - gpt.wpe = functools.partial(null_position_embeddings, dim=model_dim) - # Built-in token embeddings are unused. 
- del gpt.wte - return ( - gpt, - LearnedPositionEmbeddings(max_mel_seq_len, model_dim), - LearnedPositionEmbeddings(max_text_seq_len, model_dim), - None, - None, - ) - - -class MelEncoder(nn.Module): - def __init__(self, channels, mel_channels=80, resblocks_per_reduction=2): - super().__init__() - self.channels = channels - self.encoder = nn.Sequential( - nn.Conv1d(mel_channels, channels // 4, kernel_size=3, padding=1), - nn.Sequential(*[ResBlock(channels // 4) for _ in range(resblocks_per_reduction)]), - nn.Conv1d(channels // 4, channels // 2, kernel_size=3, stride=2, padding=1), - nn.GroupNorm(channels // 16, channels // 2), - nn.ReLU(), - nn.Sequential(*[ResBlock(channels // 2) for _ in range(resblocks_per_reduction)]), - nn.Conv1d(channels // 2, channels, kernel_size=3, stride=2, padding=1), - nn.GroupNorm(channels // 8, channels), - nn.ReLU(), - nn.Sequential(*[ResBlock(channels) for _ in range(resblocks_per_reduction)]), - ) - self.reduction = 4 - - def forward(self, x): - for e in self.encoder: - x = e(x) - return x.permute(0, 2, 1) - - -class UnifiedVoice(nn.Module): - def __init__( - self, - layers=8, - model_dim=512, - heads=8, - max_text_tokens=120, - max_mel_tokens=250, - max_conditioning_inputs=1, - mel_length_compression=1024, - number_text_tokens=256, - start_text_token=None, - number_mel_codes=8194, - start_mel_token=8192, - stop_mel_token=8193, - train_solo_embeddings=False, - use_mel_codes_as_input=True, - checkpointing=True, - types=1, - ): - """ - Args: - layers: Number of layers in transformer stack. - model_dim: Operating dimensions of the transformer - heads: Number of transformer heads. Must be divisible by model_dim. Recommend model_dim//64 - max_text_tokens: Maximum number of text tokens that will be encountered by model. - max_mel_tokens: Maximum number of MEL tokens that will be encountered by model. - max_conditioning_inputs: Maximum number of conditioning inputs provided to the model. If (1), conditioning input can be of format (b,80,s), otherwise (b,n,80,s). - mel_length_compression: The factor between and . Used to compute MEL code padding given wav input length. 
- number_text_tokens: - start_text_token: - stop_text_token: - number_mel_codes: - start_mel_token: - stop_mel_token: - train_solo_embeddings: - use_mel_codes_as_input: - checkpointing: - """ - super().__init__() - - self.number_text_tokens = number_text_tokens - self.start_text_token = number_text_tokens * types if start_text_token is None else start_text_token - self.stop_text_token = 0 - self.number_mel_codes = number_mel_codes - self.start_mel_token = start_mel_token - self.stop_mel_token = stop_mel_token - self.layers = layers - self.heads = heads - self.max_mel_tokens = max_mel_tokens - self.max_text_tokens = max_text_tokens - self.model_dim = model_dim - self.max_conditioning_inputs = max_conditioning_inputs - self.mel_length_compression = mel_length_compression - self.conditioning_encoder = ConditioningEncoder(80, model_dim, num_attn_heads=heads) - self.text_embedding = nn.Embedding(self.number_text_tokens * types + 1, model_dim) - if use_mel_codes_as_input: - self.mel_embedding = nn.Embedding(self.number_mel_codes, model_dim) - else: - self.mel_embedding = MelEncoder(model_dim, resblocks_per_reduction=1) - ( - self.gpt, - self.mel_pos_embedding, - self.text_pos_embedding, - self.mel_layer_pos_embedding, - self.text_layer_pos_embedding, - ) = build_hf_gpt_transformer( - layers, - model_dim, - heads, - self.max_mel_tokens + 2 + self.max_conditioning_inputs, - self.max_text_tokens + 2, - checkpointing, - ) - if train_solo_embeddings: - self.mel_solo_embedding = nn.Parameter(torch.randn(1, 1, model_dim) * 0.02, requires_grad=True) - self.text_solo_embedding = nn.Parameter(torch.randn(1, 1, model_dim) * 0.02, requires_grad=True) - else: - self.mel_solo_embedding = 0 - self.text_solo_embedding = 0 - - self.final_norm = nn.LayerNorm(model_dim) - self.text_head = nn.Linear(model_dim, self.number_text_tokens * types + 1) - self.mel_head = nn.Linear(model_dim, self.number_mel_codes) - - # Initialize the embeddings per the GPT-2 scheme - embeddings = [self.text_embedding] - if use_mel_codes_as_input: - embeddings.append(self.mel_embedding) - for module in embeddings: - module.weight.data.normal_(mean=0.0, std=0.02) - - def post_init_gpt2_config(self, kv_cache=True): - seq_length = self.max_mel_tokens + self.max_text_tokens + 2 - gpt_config = GPT2Config( - vocab_size=self.max_mel_tokens, - n_positions=seq_length, - n_ctx=seq_length, - n_embd=self.model_dim, - n_layer=self.layers, - n_head=self.heads, - gradient_checkpointing=False, - use_cache=True, - ) - self.inference_model = GPT2InferenceModel( - gpt_config, - self.gpt, - self.mel_pos_embedding, - self.mel_embedding, - self.final_norm, - self.mel_head, - kv_cache=kv_cache, - ) - # self.inference_model = PrunedGPT2InferenceModel(gpt_config, self.gpt, self.mel_pos_embedding, self.mel_embedding, self.final_norm, self.mel_head) - self.gpt.wte = self.mel_embedding - # self.inference_model.save_pretrained("") - - def build_aligned_inputs_and_targets(self, input, start_token, stop_token): - inp = F.pad(input, (1, 0), value=start_token) - tar = F.pad(input, (0, 1), value=stop_token) - return inp, tar - - def set_mel_padding(self, mel_input_tokens, wav_lengths): - """ - Given mel tokens that are derived from a padded audio clip and the actual lengths of each batch element in - that audio clip, reformats the tokens with STOP_MEL_TOKEN in place of the zero padding. This is required - preformatting to create a working TTS model. - """ - # Set padding areas within MEL (currently it is coded with the MEL code for ). 
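# Editorial note (not in the original source): the padded tail of each mel-token row is
# encoded as zeros on input; the code below converts wav lengths to mel-token lengths via
# mel_length_compression and overwrites every position past the actual end with
# stop_mel_token, so the padding is treated as an explicit stop signal during training.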
- mel_lengths = torch.div(wav_lengths, self.mel_length_compression, rounding_mode="trunc") - for b in range(len(mel_lengths)): - actual_end = ( - mel_lengths[b] + 1 - ) # Due to the convolutional nature of how these tokens are generated, it would be best if the model predicts a token past the actual last token. - if actual_end < mel_input_tokens.shape[-1]: - mel_input_tokens[b, actual_end:] = self.stop_mel_token - return mel_input_tokens - - def get_logits( - self, - speech_conditioning_inputs, - first_inputs, - first_head, - second_inputs=None, - second_head=None, - get_attns=False, - return_latent=False, - ): - if second_inputs is not None: - emb = torch.cat([speech_conditioning_inputs, first_inputs, second_inputs], dim=1) - else: - emb = torch.cat([speech_conditioning_inputs, first_inputs], dim=1) - - gpt_out = self.gpt(inputs_embeds=emb, return_dict=True, output_attentions=get_attns) - if get_attns: - return gpt_out.attentions - - enc = gpt_out.last_hidden_state[:, 1:] # The first logit is tied to the speech_conditioning_input - enc = self.final_norm(enc) - - if return_latent: - return ( - enc[ - :, - speech_conditioning_inputs.shape[1] : speech_conditioning_inputs.shape[1] + first_inputs.shape[1], - ], - enc[:, -second_inputs.shape[1] :], - ) - - first_logits = enc[:, : first_inputs.shape[1]] - first_logits = first_head(first_logits) - first_logits = first_logits.permute(0, 2, 1) - if second_inputs is not None: - second_logits = enc[:, -second_inputs.shape[1] :] - second_logits = second_head(second_logits) - second_logits = second_logits.permute(0, 2, 1) - return first_logits, second_logits - else: - return first_logits - - def get_conditioning(self, speech_conditioning_input): - speech_conditioning_input = ( - speech_conditioning_input.unsqueeze(1) - if len(speech_conditioning_input.shape) == 3 - else speech_conditioning_input - ) - conds = [] - for j in range(speech_conditioning_input.shape[1]): - conds.append(self.conditioning_encoder(speech_conditioning_input[:, j])) - conds = torch.stack(conds, dim=1) - conds = conds.mean(dim=1) - return conds - - def forward( - self, - speech_conditioning_latent, - text_inputs, - text_lengths, - mel_codes, - wav_lengths, - types=None, - text_first=True, - raw_mels=None, - return_attentions=False, - return_latent=False, - clip_inputs=True, - ): - """ - Forward pass that uses both text and voice in either text conditioning mode or voice conditioning mode - (actuated by `text_first`). - - speech_conditioning_input: MEL float tensor, (b,1024) - text_inputs: long tensor, (b,t) - text_lengths: long tensor, (b,) - mel_inputs: long tensor, (b,m) - wav_lengths: long tensor, (b,) - raw_mels: MEL float tensor (b,80,s) - - If return_attentions is specified, only logits are returned. - If return_latent is specified, loss & logits are not computed or returned. Only the predicted latents are returned. - If clip_inputs is True, the inputs will be clipped to the smallest input size across each input modality. - """ - # Types are expressed by expanding the text embedding space. - if types is not None: - text_inputs = text_inputs * (1 + types).unsqueeze(-1) - - if clip_inputs: - # This model will receive micro-batches with a ton of padding for both the text and MELs. Ameliorate this by - # chopping the inputs by the maximum actual length. 
- max_text_len = text_lengths.max() - text_inputs = text_inputs[:, :max_text_len] - max_mel_len = wav_lengths.max() // self.mel_length_compression - mel_codes = mel_codes[:, :max_mel_len] - if raw_mels is not None: - raw_mels = raw_mels[:, :, : max_mel_len * 4] - mel_codes = self.set_mel_padding(mel_codes, wav_lengths) - text_inputs = F.pad(text_inputs, (0, 1), value=self.stop_text_token) - mel_codes = F.pad(mel_codes, (0, 1), value=self.stop_mel_token) - - conds = speech_conditioning_latent.unsqueeze(1) - text_inputs, text_targets = self.build_aligned_inputs_and_targets( - text_inputs, self.start_text_token, self.stop_text_token - ) - text_emb = self.text_embedding(text_inputs) + self.text_pos_embedding(text_inputs) - mel_codes, mel_targets = self.build_aligned_inputs_and_targets( - mel_codes, self.start_mel_token, self.stop_mel_token - ) - if raw_mels is not None: - mel_inp = F.pad(raw_mels, (0, 8)) - else: - mel_inp = mel_codes - mel_emb = self.mel_embedding(mel_inp) - mel_emb = mel_emb + self.mel_pos_embedding(mel_codes) - - if text_first: - text_logits, mel_logits = self.get_logits( - conds, - text_emb, - self.text_head, - mel_emb, - self.mel_head, - get_attns=return_attentions, - return_latent=return_latent, - ) - if return_latent: - return mel_logits[ - :, :-2 - ] # Despite the name, these are not logits. Strip off the two tokens added by this forward pass. - else: - mel_logits, text_logits = self.get_logits( - conds, - mel_emb, - self.mel_head, - text_emb, - self.text_head, - get_attns=return_attentions, - return_latent=return_latent, - ) - if return_latent: - return text_logits[ - :, :-2 - ] # Despite the name, these are not logits. Strip off the two tokens added by this forward pass. - - if return_attentions: - return mel_logits - loss_text = F.cross_entropy(text_logits, text_targets.long()) - loss_mel = F.cross_entropy(mel_logits, mel_targets.long()) - return loss_text.mean(), loss_mel.mean(), mel_logits - - def inference_speech( - self, - speech_conditioning_latent, - text_inputs, - input_tokens=None, - num_return_sequences=1, - max_generate_length=None, - typical_sampling=False, - typical_mass=0.9, - **hf_generate_kwargs, - ): - text_inputs = F.pad(text_inputs, (0, 1), value=self.stop_text_token) - text_inputs, text_targets = self.build_aligned_inputs_and_targets( - text_inputs, self.start_text_token, self.stop_text_token - ) - text_emb = self.text_embedding(text_inputs) + self.text_pos_embedding(text_inputs) - - conds = speech_conditioning_latent.unsqueeze(1) - emb = torch.cat([conds, text_emb], dim=1) - self.inference_model.store_mel_emb(emb) - - fake_inputs = torch.full( - ( - emb.shape[0], - conds.shape[1] + emb.shape[1], - ), - fill_value=1, - dtype=torch.long, - device=text_inputs.device, - ) - fake_inputs[:, -1] = self.start_mel_token - trunc_index = fake_inputs.shape[1] - if input_tokens is None: - inputs = fake_inputs - else: - assert ( - num_return_sequences % input_tokens.shape[0] == 0 - ), "The number of return sequences must be divisible by the number of input sequences" - fake_inputs = fake_inputs.repeat(num_return_sequences, 1) - input_tokens = input_tokens.repeat(num_return_sequences // input_tokens.shape[0], 1) - inputs = torch.cat([fake_inputs, input_tokens], dim=1) - - logits_processor = ( - LogitsProcessorList([TypicalLogitsWarper(mass=typical_mass)]) if typical_sampling else LogitsProcessorList() - ) # TODO disable this - max_length = ( - trunc_index + self.max_mel_tokens - 1 if max_generate_length is None else trunc_index + max_generate_length - ) - gen = 
self.inference_model.generate( - inputs, - bos_token_id=self.start_mel_token, - pad_token_id=self.stop_mel_token, - eos_token_id=self.stop_mel_token, - max_length=max_length, - logits_processor=logits_processor, - num_return_sequences=num_return_sequences, - **hf_generate_kwargs, - ) - return gen[:, trunc_index:] - - -if __name__ == "__main__": - gpt = UnifiedVoice( - model_dim=256, - heads=4, - train_solo_embeddings=True, - use_mel_codes_as_input=True, - max_conditioning_inputs=4, - ) - l = gpt( - torch.randn(2, 3, 80, 800), - torch.randint(high=120, size=(2, 120)), - torch.tensor([32, 120]), - torch.randint(high=8192, size=(2, 250)), - torch.tensor([250 * 256, 195 * 256]), - ) - gpt.text_forward( - torch.randn(2, 80, 800), - torch.randint(high=50, size=(2, 80)), - torch.tensor([32, 80]), - ) diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/xtts/dvae.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/xtts/dvae.py deleted file mode 100644 index bdd7a9d09f44cc8dae102a053c365462dc416b6d..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/xtts/dvae.py +++ /dev/null @@ -1,393 +0,0 @@ -import functools -from math import sqrt - -import torch -import torch.distributed as distributed -import torch.nn as nn -import torch.nn.functional as F -import torchaudio -from einops import rearrange - - -def default(val, d): - return val if val is not None else d - - -def eval_decorator(fn): - def inner(model, *args, **kwargs): - was_training = model.training - model.eval() - out = fn(model, *args, **kwargs) - model.train(was_training) - return out - - return inner - - -def dvae_wav_to_mel( - wav, mel_norms_file="../experiments/clips_mel_norms.pth", mel_norms=None, device=torch.device("cpu") -): - mel_stft = torchaudio.transforms.MelSpectrogram( - n_fft=1024, - hop_length=256, - win_length=1024, - power=2, - normalized=False, - sample_rate=22050, - f_min=0, - f_max=8000, - n_mels=80, - norm="slaney", - ).to(device) - wav = wav.to(device) - mel = mel_stft(wav) - mel = torch.log(torch.clamp(mel, min=1e-5)) - if mel_norms is None: - mel_norms = torch.load(mel_norms_file, map_location=device) - mel = mel / mel_norms.unsqueeze(0).unsqueeze(-1) - return mel - - -class Quantize(nn.Module): - def __init__(self, dim, n_embed, decay=0.99, eps=1e-5, balancing_heuristic=False, new_return_order=False): - super().__init__() - - self.dim = dim - self.n_embed = n_embed - self.decay = decay - self.eps = eps - - self.balancing_heuristic = balancing_heuristic - self.codes = None - self.max_codes = 64000 - self.codes_full = False - self.new_return_order = new_return_order - - embed = torch.randn(dim, n_embed) - self.register_buffer("embed", embed) - self.register_buffer("cluster_size", torch.zeros(n_embed)) - self.register_buffer("embed_avg", embed.clone()) - - def forward(self, input, return_soft_codes=False): - if self.balancing_heuristic and self.codes_full: - h = torch.histc(self.codes, bins=self.n_embed, min=0, max=self.n_embed) / len(self.codes) - mask = torch.logical_or(h > 0.9, h < 0.01).unsqueeze(1) - ep = self.embed.permute(1, 0) - ea = self.embed_avg.permute(1, 0) - rand_embed = torch.randn_like(ep) * mask - self.embed = (ep * ~mask + rand_embed).permute(1, 0) - self.embed_avg = (ea * ~mask + rand_embed).permute(1, 0) - self.cluster_size = self.cluster_size * ~mask.squeeze() - if torch.any(mask): - print(f"Reset {torch.sum(mask)} embedding codes.") - self.codes = None - self.codes_full = False - - flatten = input.reshape(-1, self.dim) - 
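# Editorial note (not in the original source): the distance computation below is the
# expanded pairwise squared distance ||x - e||^2 = ||x||^2 - 2*x.e + ||e||^2 between each
# flattened input vector x and each codebook column e; negating it yields the soft codes,
# whose argmax picks the nearest codebook entry for quantization.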
dist = flatten.pow(2).sum(1, keepdim=True) - 2 * flatten @ self.embed + self.embed.pow(2).sum(0, keepdim=True) - soft_codes = -dist - _, embed_ind = soft_codes.max(1) - embed_onehot = F.one_hot(embed_ind, self.n_embed).type(flatten.dtype) - embed_ind = embed_ind.view(*input.shape[:-1]) - quantize = self.embed_code(embed_ind) - - if self.balancing_heuristic: - if self.codes is None: - self.codes = embed_ind.flatten() - else: - self.codes = torch.cat([self.codes, embed_ind.flatten()]) - if len(self.codes) > self.max_codes: - self.codes = self.codes[-self.max_codes :] - self.codes_full = True - - if self.training: - embed_onehot_sum = embed_onehot.sum(0) - embed_sum = flatten.transpose(0, 1) @ embed_onehot - - if distributed.is_initialized() and distributed.get_world_size() > 1: - distributed.all_reduce(embed_onehot_sum) - distributed.all_reduce(embed_sum) - - self.cluster_size.data.mul_(self.decay).add_(embed_onehot_sum, alpha=1 - self.decay) - self.embed_avg.data.mul_(self.decay).add_(embed_sum, alpha=1 - self.decay) - n = self.cluster_size.sum() - cluster_size = (self.cluster_size + self.eps) / (n + self.n_embed * self.eps) * n - embed_normalized = self.embed_avg / cluster_size.unsqueeze(0) - self.embed.data.copy_(embed_normalized) - - diff = (quantize.detach() - input).pow(2).mean() - quantize = input + (quantize - input).detach() - - if return_soft_codes: - return quantize, diff, embed_ind, soft_codes.view(input.shape[:-1] + (-1,)) - elif self.new_return_order: - return quantize, embed_ind, diff - else: - return quantize, diff, embed_ind - - def embed_code(self, embed_id): - return F.embedding(embed_id, self.embed.transpose(0, 1)) - - -# Fits a soft-discretized input to a normal-PDF across the specified dimension. -# In other words, attempts to force the discretization function to have a mean equal utilization across all discrete -# values with the specified expected variance. -class DiscretizationLoss(nn.Module): - def __init__(self, discrete_bins, dim, expected_variance, store_past=0): - super().__init__() - self.discrete_bins = discrete_bins - self.dim = dim - self.dist = torch.distributions.Normal(0, scale=expected_variance) - if store_past > 0: - self.record_past = True - self.register_buffer("accumulator_index", torch.zeros(1, dtype=torch.long, device="cpu")) - self.register_buffer("accumulator_filled", torch.zeros(1, dtype=torch.long, device="cpu")) - self.register_buffer("accumulator", torch.zeros(store_past, discrete_bins)) - else: - self.record_past = False - - def forward(self, x): - other_dims = set(range(len(x.shape))) - set([self.dim]) - averaged = x.sum(dim=tuple(other_dims)) / x.sum() - averaged = averaged - averaged.mean() - - if self.record_past: - acc_count = self.accumulator.shape[0] - avg = averaged.detach().clone() - if self.accumulator_filled > 0: - averaged = torch.mean(self.accumulator, dim=0) * (acc_count - 1) / acc_count + averaged / acc_count - - # Also push averaged into the accumulator. 
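# Editorial note (not in the original source): the accumulator below is a fixed-size ring
# buffer of recent code-usage histograms; the write index wraps to zero after store_past
# entries, and accumulator_filled marks when blending the current histogram with the
# stored history (computed above) becomes active.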
- self.accumulator[self.accumulator_index] = avg - self.accumulator_index += 1 - if self.accumulator_index >= acc_count: - self.accumulator_index *= 0 - if self.accumulator_filled <= 0: - self.accumulator_filled += 1 - - return torch.sum(-self.dist.log_prob(averaged)) - - -class ResBlock(nn.Module): - def __init__(self, chan, conv, activation): - super().__init__() - self.net = nn.Sequential( - conv(chan, chan, 3, padding=1), - activation(), - conv(chan, chan, 3, padding=1), - activation(), - conv(chan, chan, 1), - ) - - def forward(self, x): - return self.net(x) + x - - -class UpsampledConv(nn.Module): - def __init__(self, conv, *args, **kwargs): - super().__init__() - assert "stride" in kwargs.keys() - self.stride = kwargs["stride"] - del kwargs["stride"] - self.conv = conv(*args, **kwargs) - - def forward(self, x): - up = nn.functional.interpolate(x, scale_factor=self.stride, mode="nearest") - return self.conv(up) - - -# DiscreteVAE partially derived from lucidrains DALLE implementation -# Credit: https://github.com/lucidrains/DALLE-pytorch -class DiscreteVAE(nn.Module): - def __init__( - self, - positional_dims=2, - num_tokens=512, - codebook_dim=512, - num_layers=3, - num_resnet_blocks=0, - hidden_dim=64, - channels=3, - stride=2, - kernel_size=4, - use_transposed_convs=True, - encoder_norm=False, - activation="relu", - smooth_l1_loss=False, - straight_through=False, - normalization=None, # ((0.5,) * 3, (0.5,) * 3), - record_codes=False, - discretization_loss_averaging_steps=100, - lr_quantizer_args={}, - ): - super().__init__() - has_resblocks = num_resnet_blocks > 0 - - self.num_tokens = num_tokens - self.num_layers = num_layers - self.straight_through = straight_through - self.positional_dims = positional_dims - self.discrete_loss = DiscretizationLoss( - num_tokens, 2, 1 / (num_tokens * 2), discretization_loss_averaging_steps - ) - - assert positional_dims > 0 and positional_dims < 3 # This VAE only supports 1d and 2d inputs for now. 
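# Editorial note (not in the original source): positional_dims selects the convolution
# backbone below. A value of 2 uses Conv2d/ConvTranspose2d for image-like inputs, while 1
# uses Conv1d/ConvTranspose1d for spectrogram-style sequences; if use_transposed_convs is
# False, the transposed convs are swapped for UpsampledConv (nearest-neighbour
# interpolation followed by a regular convolution).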
- if positional_dims == 2: - conv = nn.Conv2d - conv_transpose = nn.ConvTranspose2d - else: - conv = nn.Conv1d - conv_transpose = nn.ConvTranspose1d - if not use_transposed_convs: - conv_transpose = functools.partial(UpsampledConv, conv) - - if activation == "relu": - act = nn.ReLU - elif activation == "silu": - act = nn.SiLU - else: - assert NotImplementedError() - - enc_layers = [] - dec_layers = [] - - if num_layers > 0: - enc_chans = [hidden_dim * 2**i for i in range(num_layers)] - dec_chans = list(reversed(enc_chans)) - - enc_chans = [channels, *enc_chans] - - dec_init_chan = codebook_dim if not has_resblocks else dec_chans[0] - dec_chans = [dec_init_chan, *dec_chans] - - enc_chans_io, dec_chans_io = map(lambda t: list(zip(t[:-1], t[1:])), (enc_chans, dec_chans)) - - pad = (kernel_size - 1) // 2 - for (enc_in, enc_out), (dec_in, dec_out) in zip(enc_chans_io, dec_chans_io): - enc_layers.append(nn.Sequential(conv(enc_in, enc_out, kernel_size, stride=stride, padding=pad), act())) - if encoder_norm: - enc_layers.append(nn.GroupNorm(8, enc_out)) - dec_layers.append( - nn.Sequential(conv_transpose(dec_in, dec_out, kernel_size, stride=stride, padding=pad), act()) - ) - dec_out_chans = dec_chans[-1] - innermost_dim = dec_chans[0] - else: - enc_layers.append(nn.Sequential(conv(channels, hidden_dim, 1), act())) - dec_out_chans = hidden_dim - innermost_dim = hidden_dim - - for _ in range(num_resnet_blocks): - dec_layers.insert(0, ResBlock(innermost_dim, conv, act)) - enc_layers.append(ResBlock(innermost_dim, conv, act)) - - if num_resnet_blocks > 0: - dec_layers.insert(0, conv(codebook_dim, innermost_dim, 1)) - - enc_layers.append(conv(innermost_dim, codebook_dim, 1)) - dec_layers.append(conv(dec_out_chans, channels, 1)) - - self.encoder = nn.Sequential(*enc_layers) - self.decoder = nn.Sequential(*dec_layers) - - self.loss_fn = F.smooth_l1_loss if smooth_l1_loss else F.mse_loss - self.codebook = Quantize(codebook_dim, num_tokens, new_return_order=True) - - # take care of normalization within class - self.normalization = normalization - self.record_codes = record_codes - if record_codes: - self.codes = torch.zeros((1228800,), dtype=torch.long) - self.code_ind = 0 - self.total_codes = 0 - self.internal_step = 0 - - def norm(self, images): - if not self.normalization is not None: - return images - - means, stds = map(lambda t: torch.as_tensor(t).to(images), self.normalization) - arrange = "c -> () c () ()" if self.positional_dims == 2 else "c -> () c ()" - means, stds = map(lambda t: rearrange(t, arrange), (means, stds)) - images = images.clone() - images.sub_(means).div_(stds) - return images - - def get_debug_values(self, step, __): - if self.record_codes and self.total_codes > 0: - # Report annealing schedule - return {"histogram_codes": self.codes[: self.total_codes]} - else: - return {} - - @torch.no_grad() - @eval_decorator - def get_codebook_indices(self, images): - img = self.norm(images) - logits = self.encoder(img).permute((0, 2, 3, 1) if len(img.shape) == 4 else (0, 2, 1)) - sampled, codes, _ = self.codebook(logits) - self.log_codes(codes) - return codes - - def decode(self, img_seq): - self.log_codes(img_seq) - if hasattr(self.codebook, "embed_code"): - image_embeds = self.codebook.embed_code(img_seq) - else: - image_embeds = F.embedding(img_seq, self.codebook.codebook) - b, n, d = image_embeds.shape - - kwargs = {} - if self.positional_dims == 1: - arrange = "b n d -> b d n" - else: - h = w = int(sqrt(n)) - arrange = "b (h w) d -> b d h w" - kwargs = {"h": h, "w": w} - image_embeds = 
rearrange(image_embeds, arrange, **kwargs) - images = [image_embeds] - for layer in self.decoder: - images.append(layer(images[-1])) - return images[-1], images[-2] - - def infer(self, img): - img = self.norm(img) - logits = self.encoder(img).permute((0, 2, 3, 1) if len(img.shape) == 4 else (0, 2, 1)) - sampled, codes, commitment_loss = self.codebook(logits) - return self.decode(codes) - - # Note: This module is not meant to be run in forward() except while training. It has special logic which performs - # evaluation using quantized values when it detects that it is being run in eval() mode, which will be substantially - # more lossy (but useful for determining network performance). - def forward(self, img): - img = self.norm(img) - logits = self.encoder(img).permute((0, 2, 3, 1) if len(img.shape) == 4 else (0, 2, 1)) - sampled, codes, commitment_loss = self.codebook(logits) - sampled = sampled.permute((0, 3, 1, 2) if len(img.shape) == 4 else (0, 2, 1)) - - if self.training: - out = sampled - for d in self.decoder: - out = d(out) - self.log_codes(codes) - else: - # This is non-differentiable, but gives a better idea of how the network is actually performing. - out, _ = self.decode(codes) - - # reconstruction loss - recon_loss = self.loss_fn(img, out, reduction="none") - - return recon_loss, commitment_loss, out - - def log_codes(self, codes): - # This is so we can debug the distribution of codes being learned. - if self.record_codes and self.internal_step % 10 == 0: - codes = codes.flatten() - l = codes.shape[0] - i = self.code_ind if (self.codes.shape[0] - self.code_ind) > l else self.codes.shape[0] - l - self.codes[i : i + l] = codes.cpu() - self.code_ind = self.code_ind + l - if self.code_ind >= self.codes.shape[0]: - self.code_ind = 0 - self.total_codes += 1 - self.internal_step += 1 diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/DES.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/DES.py deleted file mode 100644 index 5cc286aee78a997631413f5981ad94638954c394..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Cipher/DES.py +++ /dev/null @@ -1,158 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Cipher/DES.py : DES -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. 
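# [Editor's illustration, not part of the original file.] A brief usage sketch
# for the DES module that follows, relying only on the standard PyCryptodome
# API exposed here (the key and message are made-up example values):
from Crypto.Cipher import DES
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

demo_key = get_random_bytes(8)                        # single DES uses an 8-byte key
enc = DES.new(demo_key, DES.MODE_CBC)                 # a random IV is generated automatically
ciphertext = enc.encrypt(pad(b"attack at dawn", DES.block_size))
dec = DES.new(demo_key, DES.MODE_CBC, iv=enc.iv)      # reuse the generated IV to decrypt
plaintext = unpad(dec.decrypt(ciphertext), DES.block_size)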
-# =================================================================== -""" -Module's constants for the modes of operation supported with Single DES: - -:var MODE_ECB: :ref:`Electronic Code Book (ECB) ` -:var MODE_CBC: :ref:`Cipher-Block Chaining (CBC) ` -:var MODE_CFB: :ref:`Cipher FeedBack (CFB) ` -:var MODE_OFB: :ref:`Output FeedBack (OFB) ` -:var MODE_CTR: :ref:`CounTer Mode (CTR) ` -:var MODE_OPENPGP: :ref:`OpenPGP Mode ` -:var MODE_EAX: :ref:`EAX Mode ` -""" - -import sys - -from Crypto.Cipher import _create_cipher -from Crypto.Util.py3compat import byte_string -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, - VoidPointer, SmartPointer, - c_size_t, c_uint8_ptr) - -_raw_des_lib = load_pycryptodome_raw_lib( - "Crypto.Cipher._raw_des", - """ - int DES_start_operation(const uint8_t key[], - size_t key_len, - void **pResult); - int DES_encrypt(const void *state, - const uint8_t *in, - uint8_t *out, - size_t data_len); - int DES_decrypt(const void *state, - const uint8_t *in, - uint8_t *out, - size_t data_len); - int DES_stop_operation(void *state); - """) - - -def _create_base_cipher(dict_parameters): - """This method instantiates and returns a handle to a low-level - base cipher. It will absorb named parameters in the process.""" - - try: - key = dict_parameters.pop("key") - except KeyError: - raise TypeError("Missing 'key' parameter") - - if len(key) != key_size: - raise ValueError("Incorrect DES key length (%d bytes)" % len(key)) - - start_operation = _raw_des_lib.DES_start_operation - stop_operation = _raw_des_lib.DES_stop_operation - - cipher = VoidPointer() - result = start_operation(c_uint8_ptr(key), - c_size_t(len(key)), - cipher.address_of()) - if result: - raise ValueError("Error %X while instantiating the DES cipher" - % result) - return SmartPointer(cipher.get(), stop_operation) - - -def new(key, mode, *args, **kwargs): - """Create a new DES cipher. - - :param key: - The secret key to use in the symmetric cipher. - It must be 8 byte long. The parity bits will be ignored. - :type key: bytes/bytearray/memoryview - - :param mode: - The chaining mode to use for encryption or decryption. - :type mode: One of the supported ``MODE_*`` constants - - :Keyword Arguments: - * **iv** (*byte string*) -- - (Only applicable for ``MODE_CBC``, ``MODE_CFB``, ``MODE_OFB``, - and ``MODE_OPENPGP`` modes). - - The initialization vector to use for encryption or decryption. - - For ``MODE_CBC``, ``MODE_CFB``, and ``MODE_OFB`` it must be 8 bytes long. - - For ``MODE_OPENPGP`` mode only, - it must be 8 bytes long for encryption - and 10 bytes for decryption (in the latter case, it is - actually the *encrypted* IV which was prefixed to the ciphertext). - - If not provided, a random byte string is generated (you must then - read its value with the :attr:`iv` attribute). - - * **nonce** (*byte string*) -- - (Only applicable for ``MODE_EAX`` and ``MODE_CTR``). - - A value that must never be reused for any other encryption done - with this key. - - For ``MODE_EAX`` there are no - restrictions on its length (recommended: **16** bytes). - - For ``MODE_CTR``, its length must be in the range **[0..7]**. - - If not provided for ``MODE_EAX``, a random byte string is generated (you - can read it back via the ``nonce`` attribute). - - * **segment_size** (*integer*) -- - (Only ``MODE_CFB``).The number of **bits** the plaintext and ciphertext - are segmented in. It must be a multiple of 8. - If not specified, it will be assumed to be 8. 
- - * **mac_len** : (*integer*) -- - (Only ``MODE_EAX``) - Length of the authentication tag, in bytes. - It must be no longer than 8 (default). - - * **initial_value** : (*integer*) -- - (Only ``MODE_CTR``). The initial value for the counter within - the counter block. By default it is **0**. - - :Return: a DES object, of the applicable mode. - """ - - return _create_cipher(sys.modules[__name__], key, mode, *args, **kwargs) - -MODE_ECB = 1 -MODE_CBC = 2 -MODE_CFB = 3 -MODE_OFB = 5 -MODE_CTR = 6 -MODE_OPENPGP = 7 -MODE_EAX = 9 - -# Size of a data block (in bytes) -block_size = 8 -# Size of a key (in bytes) -key_size = 8 diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/Poly1305.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/Poly1305.py deleted file mode 100644 index eb5e0dadba401ef75c8478af979ffde6c3f65c01..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/Poly1305.py +++ /dev/null @@ -1,217 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Hash/Poly1305.py - Implements the Poly1305 MAC -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# =================================================================== - -from binascii import unhexlify - -from Crypto.Util.py3compat import bord, tobytes, _copy_bytes - -from Crypto.Hash import BLAKE2s -from Crypto.Random import get_random_bytes -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, - VoidPointer, SmartPointer, - create_string_buffer, - get_raw_buffer, c_size_t, - c_uint8_ptr) - - -_raw_poly1305 = load_pycryptodome_raw_lib("Crypto.Hash._poly1305", - """ - int poly1305_init(void **state, - const uint8_t *r, - size_t r_len, - const uint8_t *s, - size_t s_len); - int poly1305_destroy(void *state); - int poly1305_update(void *state, - const uint8_t *in, - size_t len); - int poly1305_digest(const void *state, - uint8_t *digest, - size_t len); - """) - - -class Poly1305_MAC(object): - """An Poly1305 MAC object. - Do not instantiate directly. Use the :func:`new` function. 
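# [Editor's illustration, not part of the original file.] The intended workflow
# goes through the new() factory defined at the bottom of this module; a minimal
# sketch using the standard PyCryptodome API (key and message are made up):
from Crypto.Cipher import ChaCha20
from Crypto.Hash import Poly1305
from Crypto.Random import get_random_bytes

mac_key = get_random_bytes(32)                                   # Poly1305 keys are 32 bytes
sender = Poly1305.new(key=mac_key, cipher=ChaCha20, data=b"hi")
tag, nonce = sender.hexdigest(), sender.nonce                    # send tag and nonce along
receiver = Poly1305.new(key=mac_key, cipher=ChaCha20, nonce=nonce, data=b"hi")
receiver.hexverify(tag)                                          # raises ValueError on mismatch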
- - :ivar digest_size: the size in bytes of the resulting MAC tag - :vartype digest_size: integer - """ - - digest_size = 16 - - def __init__(self, r, s, data): - - if len(r) != 16: - raise ValueError("Parameter r is not 16 bytes long") - if len(s) != 16: - raise ValueError("Parameter s is not 16 bytes long") - - self._mac_tag = None - - state = VoidPointer() - result = _raw_poly1305.poly1305_init(state.address_of(), - c_uint8_ptr(r), - c_size_t(len(r)), - c_uint8_ptr(s), - c_size_t(len(s)) - ) - if result: - raise ValueError("Error %d while instantiating Poly1305" % result) - self._state = SmartPointer(state.get(), - _raw_poly1305.poly1305_destroy) - if data: - self.update(data) - - def update(self, data): - """Authenticate the next chunk of message. - - Args: - data (byte string/byte array/memoryview): The next chunk of data - """ - - if self._mac_tag: - raise TypeError("You can only call 'digest' or 'hexdigest' on this object") - - result = _raw_poly1305.poly1305_update(self._state.get(), - c_uint8_ptr(data), - c_size_t(len(data))) - if result: - raise ValueError("Error %d while hashing Poly1305 data" % result) - return self - - def copy(self): - raise NotImplementedError() - - def digest(self): - """Return the **binary** (non-printable) MAC tag of the message - authenticated so far. - - :return: The MAC tag digest, computed over the data processed so far. - Binary form. - :rtype: byte string - """ - - if self._mac_tag: - return self._mac_tag - - bfr = create_string_buffer(16) - result = _raw_poly1305.poly1305_digest(self._state.get(), - bfr, - c_size_t(len(bfr))) - if result: - raise ValueError("Error %d while creating Poly1305 digest" % result) - - self._mac_tag = get_raw_buffer(bfr) - return self._mac_tag - - def hexdigest(self): - """Return the **printable** MAC tag of the message authenticated so far. - - :return: The MAC tag, computed over the data processed so far. - Hexadecimal encoded. - :rtype: string - """ - - return "".join(["%02x" % bord(x) - for x in tuple(self.digest())]) - - def verify(self, mac_tag): - """Verify that a given **binary** MAC (computed by another party) - is valid. - - Args: - mac_tag (byte string/byte string/memoryview): the expected MAC of the message. - - Raises: - ValueError: if the MAC does not match. It means that the message - has been tampered with or that the MAC key is incorrect. - """ - - secret = get_random_bytes(16) - - mac1 = BLAKE2s.new(digest_bits=160, key=secret, data=mac_tag) - mac2 = BLAKE2s.new(digest_bits=160, key=secret, data=self.digest()) - - if mac1.digest() != mac2.digest(): - raise ValueError("MAC check failed") - - def hexverify(self, hex_mac_tag): - """Verify that a given **printable** MAC (computed by another party) - is valid. - - Args: - hex_mac_tag (string): the expected MAC of the message, - as a hexadecimal string. - - Raises: - ValueError: if the MAC does not match. It means that the message - has been tampered with or that the MAC key is incorrect. - """ - - self.verify(unhexlify(tobytes(hex_mac_tag))) - - - -def new(**kwargs): - """Create a new Poly1305 MAC object. - - Args: - key (bytes/bytearray/memoryview): - The 32-byte key for the Poly1305 object. - cipher (module from ``Crypto.Cipher``): - The cipher algorithm to use for deriving the Poly1305 - key pair *(r, s)*. - It can only be ``Crypto.Cipher.AES`` or ``Crypto.Cipher.ChaCha20``. - nonce (bytes/bytearray/memoryview): - Optional. The non-repeatable value to use for the MAC of this message. 
- It must be 16 bytes long for ``AES`` and 8 or 12 bytes for ``ChaCha20``. - If not passed, a random nonce is created; you will find it in the - ``nonce`` attribute of the new object. - data (bytes/bytearray/memoryview): - Optional. The very first chunk of the message to authenticate. - It is equivalent to an early call to ``update()``. - - Returns: - A :class:`Poly1305_MAC` object - """ - - cipher = kwargs.pop("cipher", None) - if not hasattr(cipher, '_derive_Poly1305_key_pair'): - raise ValueError("Parameter 'cipher' must be AES or ChaCha20") - - cipher_key = kwargs.pop("key", None) - if cipher_key is None: - raise TypeError("You must pass a parameter 'key'") - - nonce = kwargs.pop("nonce", None) - data = kwargs.pop("data", None) - - if kwargs: - raise TypeError("Unknown parameters: " + str(kwargs)) - - r, s, nonce = cipher._derive_Poly1305_key_pair(cipher_key, nonce) - - new_mac = Poly1305_MAC(r, s, data) - new_mac.nonce = _copy_bytes(None, None, nonce) # nonce may still be just a memoryview - return new_mac diff --git a/spaces/asdasdasdasd/Face-forgery-detection/detect_from_videos.py b/spaces/asdasdasdasd/Face-forgery-detection/detect_from_videos.py deleted file mode 100644 index 993ba3fbf71a932a172d3bebac2ce399cde8efde..0000000000000000000000000000000000000000 --- a/spaces/asdasdasdasd/Face-forgery-detection/detect_from_videos.py +++ /dev/null @@ -1,236 +0,0 @@ -# coding: utf-8 -import os -import argparse -from os.path import join -import cv2 -import dlib -import torch -import torch.nn as nn -from PIL import Image as pil_image -from tqdm import tqdm -from model_core import Two_Stream_Net -from torchvision import transforms - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -map_location=torch.device('cpu') - -xception_default_data_transforms_256 = { - 'train': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5]*3, [0.5]*3) - ]), - 'val': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5] * 3, [0.5] * 3) - ]), - 'test': transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5] * 3, [0.5] * 3) - ]), -} - -def get_boundingbox(face, width, height, scale=1.3, minsize=None): - """ - Expects a dlib face to generate a quadratic bounding box. - :param face: dlib face class - :param width: frame width - :param height: frame height - :param scale: bounding box size multiplier to get a bigger face region - :param minsize: set minimum bounding box size - :return: x, y, bounding_box_size in opencv form - """ - x1 = face.left() - y1 = face.top() - x2 = face.right() - y2 = face.bottom() - size_bb = int(max(x2 - x1, y2 - y1) * scale) - if minsize: - if size_bb < minsize: - size_bb = minsize - center_x, center_y = (x1 + x2) // 2, (y1 + y2) // 2 - - # Check for out of bounds, x-y top left corner - x1 = max(int(center_x - size_bb // 2), 0) - y1 = max(int(center_y - size_bb // 2), 0) - # Check for too big bb size for given x, y - size_bb = min(width - x1, size_bb) - size_bb = min(height - y1, size_bb) - - return x1, y1, size_bb - - -def preprocess_image(image, cuda=True): - """ - Preprocesses the image such that it can be fed into our network. - During this process we envoke PIL to cast it into a PIL image. 
- - :param image: numpy image in opencv form (i.e., BGR and of shape - :return: pytorch tensor of shape [1, 3, image_size, image_size], not - necessarily casted to cuda - """ - # Revert from BGR - image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - # Preprocess using the preprocessing function used during training and - # casting it to PIL image - preprocess = xception_default_data_transforms_256['test'] - preprocessed_image = preprocess(pil_image.fromarray(image)) - # Add first dimension as the network expects a batch - preprocessed_image = preprocessed_image.unsqueeze(0) - if cuda: - preprocessed_image = preprocessed_image.cuda() - return preprocessed_image - - -def predict_with_model(image, model, post_function=nn.Softmax(dim=1), - cuda=True): - """ - Predicts the label of an input image. Preprocesses the input image and - casts it to cuda if required - - :param image: numpy image - :param model: torch model with linear layer at the end - :param post_function: e.g., softmax - :param cuda: enables cuda, must be the same parameter as the model - :return: prediction (1 = fake, 0 = real) - """ - # Preprocess - preprocessed_image = preprocess_image(image, cuda).cuda() - - # print(preprocessed_image.shape) - - # Model prediction - output = model(preprocessed_image) - # print(output) - # output = post_function(output[0]) - - # Cast to desired - _, prediction = torch.max(output[0], 1) # argmax - prediction = float(prediction.cpu().numpy()) - # print(prediction) - - return int(prediction), output - - -def test_full_image_network(video_path, model_path, output_path, - start_frame=0, end_frame=None, cuda=False): - """ - Reads a video and evaluates a subset of frames with the a detection network - that takes in a full frame. Outputs are only given if a face is present - and the face is highlighted using dlib. 
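# [Editor's illustration, not part of the original file.] The preprocessing
# helper above (BGR frame -> RGB -> PIL -> resize/normalize -> batch of one)
# can be exercised standalone; the transform is re-created here rather than
# imported, and the frame is random data purely to show the expected shapes:
import numpy as np
from PIL import Image
from torchvision import transforms

frame_bgr = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in for an OpenCV frame
frame_rgb = frame_bgr[:, :, ::-1].copy()                           # BGR -> RGB
to_tensor = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])
batch = to_tensor(Image.fromarray(frame_rgb)).unsqueeze(0)         # shape [1, 3, 256, 256]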
- :param video_path: path to video file - :param model_path: path to model file (should expect the full sized image) - :param output_path: path where the output video is stored - :param start_frame: first frame to evaluate - :param end_frame: last frame to evaluate - :param cuda: enable cuda - :return: - """ - print('Starting: {}'.format(video_path)) - - if not os.path.exists(output_path): - os.mkdir(output_path) - - # Read and write - reader = cv2.VideoCapture(video_path) - - # video_fn = video_path.split('/')[-1].split('.')[0]+'.avi' - video_fn = 'output_video.avi' - os.makedirs(output_path, exist_ok=True) - fourcc = cv2.VideoWriter_fourcc(*'MJPG') - fps = reader.get(cv2.CAP_PROP_FPS) - num_frames = int(reader.get(cv2.CAP_PROP_FRAME_COUNT)) - writer = None - - # Face detector - face_detector = dlib.get_frontal_face_detector() - - # Load model - # model, *_ = model_selection(modelname='xception', num_out_classes=2) - model = Two_Stream_Net() - model.load_state_dict(torch.load(model_path,map_location)) - model = model.to(device) - model.eval() - - if cuda: - model = model.cuda() - - # Text variables - font_face = cv2.FONT_HERSHEY_SIMPLEX - thickness = 2 - font_scale = 1 - - frame_num = 0 - assert start_frame < num_frames - 1 - end_frame = end_frame if end_frame else num_frames - pbar = tqdm(total=end_frame-start_frame) - - while reader.isOpened(): - _, image = reader.read() - if image is None: - break - frame_num += 1 - - if frame_num < start_frame: - continue - pbar.update(1) - - # Image size - height, width = image.shape[:2] - - # Init output writer - if writer is None: - # writer = cv2.VideoWriter(join(output_path, video_fn), fourcc, fps, - # (height, width)[::-1]) - writer = cv2.VideoWriter(video_fn, fourcc, fps, - (height, width)[::-1]) - - # 2. Detect with dlib - gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) - faces = face_detector(gray, 1) - if len(faces): - # For now only take biggest face - face = faces[0] - - # --- Prediction --------------------------------------------------- - # Face crop with dlib and bounding box scale enlargement - x, y, size = get_boundingbox(face, width, height) - cropped_face = image[y:y+size, x:x+size] - - # Actual prediction using our model - prediction, output = predict_with_model(cropped_face, model, - cuda=cuda) - # ------------------------------------------------------------------ - - # Text and bb - x = face.left() - y = face.top() - w = face.right() - x - h = face.bottom() - y - label = 'fake' if prediction == 0 else 'real' - color = (0, 255, 0) if prediction == 1 else (0, 0, 255) - output_list = ['{0:.2f}'.format(float(x)) for x in - output[0].detach().cpu().numpy()[0]] - cv2.putText(image, str(output_list)+'=>'+label, (x, y+h+30), - font_face, font_scale, - color, thickness, 2) - # draw box over face - cv2.rectangle(image, (x, y), (x + w, y + h), color, 2) - - if frame_num >= end_frame: - break - - # Show - # cv2.imshow('test', image) - # cv2.waitKey(33) # About 30 fps - writer.write(image) - pbar.close() - if writer is not None: - writer.release() - print('Finished! 
Output saved under {}'.format(output_path)) - else: - print('Input video file was empty') - return 'output_video.avi' - diff --git a/spaces/ashishraics/MCQ-Generator/extract_config.py b/spaces/ashishraics/MCQ-Generator/extract_config.py deleted file mode 100644 index 1e134fc579001b775e83b358a83471134e74b62c..0000000000000000000000000000000000000000 --- a/spaces/ashishraics/MCQ-Generator/extract_config.py +++ /dev/null @@ -1,8 +0,0 @@ -from transformers import BertConfig,BertForMaskedLM - -config=BertConfig() -model=BertForMaskedLM(config) - -print(config) - -print(model.config) diff --git a/spaces/awacke1/SelfModifyStreamlitTest/app.py b/spaces/awacke1/SelfModifyStreamlitTest/app.py deleted file mode 100644 index 0aa275f9557ab71fa8f8730edfbea52b48577286..0000000000000000000000000000000000000000 --- a/spaces/awacke1/SelfModifyStreamlitTest/app.py +++ /dev/null @@ -1,51 +0,0 @@ -import streamlit as st -import base64 -import os -from datetime import datetime - -def read_app_code(): - with open('app.py', 'r') as f: - return f.read() - -def write_app_code(modified_code): - with open('app.py', 'w') as f: - f.write(modified_code) - -def get_timestamp(): - return datetime.now().strftime("%Y%m%d_%H%M%S") - -def create_download_link(file_content, filename): - b64 = base64.b64encode(file_content).decode() - href = f'Download {filename}' - st.markdown(href, unsafe_allow_html=True) - -# Streamlit UI -st.title("Self-Modifying Streamlit App") - -# Textbox for username -username = st.text_input("Enter your username:", "anonymous") - -# File Upload -uploaded_file = st.file_uploader("Choose a file") - -if uploaded_file is not None: - file_content = uploaded_file.read() - create_download_link(file_content, "your_file.txt") - - # Read and Modify app.py - timestamp = get_timestamp() - app_code = read_app_code() - new_code = f"# Modified by {username} on {timestamp}\n" - modified_app_code = app_code + "\n" + new_code - - write_app_code(modified_app_code) - - # Display the new code in a textbox - st.text_area("Newly Modified app.py Code:", modified_app_code) - - # Create download link for modified app.py - download_filename = f"modified_app_{timestamp}.py" - create_download_link(modified_app_code.encode(), download_filename) - - # Refresh app - os.system("streamlit rerun") diff --git a/spaces/awacke1/VizLib-Numpy/app.py b/spaces/awacke1/VizLib-Numpy/app.py deleted file mode 100644 index bf14fa36bf992ffed25117640cc10abd3f910e94..0000000000000000000000000000000000000000 --- a/spaces/awacke1/VizLib-Numpy/app.py +++ /dev/null @@ -1,123 +0,0 @@ -import streamlit as st -import numpy as np - -st.sidebar.title("NumPy Demo") - -# Array creation routines -st.sidebar.header("Array creation routines") -st.sidebar.write("np.zeros(5):", np.zeros(5)) -st.sidebar.write("np.ones((2, 3)):", np.ones((2, 3))) -st.sidebar.write("np.arange(0, 10, 2):", np.arange(0, 10, 2)) -st.sidebar.write("np.linspace(0, 1, 5):", np.linspace(0, 1, 5)) -st.sidebar.write("np.eye(3):", np.eye(3)) - -# Array manipulation routines -st.sidebar.header("Array manipulation routines") -arr = np.array([[1, 2], [3, 4]]) -st.sidebar.write("arr.flatten():", arr.flatten()) -st.sidebar.write("np.transpose(arr):", np.transpose(arr)) -st.sidebar.write("np.rot90(arr):", np.rot90(arr)) - -# Binary operations -st.sidebar.header("Binary operations") -x = np.array([1, 2, 3]) -y = np.array([4, 5, 6]) -st.sidebar.write("np.add(x, y):", np.add(x, y)) -st.sidebar.write("np.subtract(x, y):", np.subtract(x, y)) -st.sidebar.write("np.multiply(x, y):", np.multiply(x, y)) - -# 
String operations -st.sidebar.header("String operations") -st.sidebar.write("np.char.add(['hello', 'world'], ['!', '?']):", np.char.add(['hello', 'world'], ['!', '?'])) -st.sidebar.write("np.char.upper('numpy'):", np.char.upper('numpy')) -st.sidebar.write("np.char.replace('numpy', 'py', 'ython'):", np.char.replace('numpy', 'py', 'ython')) - -# C-Types Foreign Function Interface (numpy.ctypeslib) -st.sidebar.header("C-Types Foreign Function Interface (numpy.ctypeslib)") -# Omitted for simplicity - -# Datetime Support Functions -st.sidebar.header("Datetime Support Functions") -st.sidebar.write("np.datetime64('2023-02-21'):", np.datetime64('2023-02-21')) -st.sidebar.write("np.datetime64('2023-02-21 12:00:00'):", np.datetime64('2023-02-21 12:00:00')) - -# Data type routines -st.sidebar.header("Data type routines") -st.sidebar.write("np.dtype('float64'):", np.dtype('float64')) -st.sidebar.write("np.issubdtype(np.float64, np.number):", np.issubdtype(np.float64, np.number)) - -# Optionally SciPy-accelerated routines (numpy.dual) -st.sidebar.header("Optionally SciPy-accelerated routines (numpy.dual)") -# Omitted for simplicity - -# Mathematical functions with automatic domain -st.sidebar.header("Mathematical functions with automatic domain") -st.sidebar.write("np.sqrt(-1):", np.sqrt(-1)) -st.sidebar.write("np.log(0):", np.log(0)) - -# Functional programming -st.sidebar.header("Functional programming") -st.sidebar.write("np.vectorize(np.square)([1, 2, 3]):", np.vectorize(np.square)([1, 2, 3])) - -# NumPy-specific help functions -st.sidebar.header("NumPy-specific help functions") -st.sidebar.write("np.info(np.add):", np.info(np.add)) - -# Linear algebra (numpy.linalg) -st.sidebar.header("Linear algebra (numpy.linalg)") -mat = np.array([[1, 2], [3, 4]]) -st.sidebar.write("np.linalg.inv(mat):", np.linalg.inv(mat)) -st.sidebar.write("np.linalg.eig(mat):", np.linalg.eig(mat)) - -# Logic functions -st.sidebar.header("Logic functions") -x = np.array([1, 2, 3]) -y = np.array([2, 2, 2]) -st.sidebar.write("np.logical_and(x > 1, y < 3):", np.logical_and(x > 1, y < 3)) -st.sidebar.write("np.logical_or(x > 2, y < 2):", np.logical_or(x > 2, y < 2)) -st.sidebar.write("np.logical_not(x > 2):", np.logical_not(x > 2)) - -# Mathematical functions -st.sidebar.header("Mathematical functions") -x = np.array([0, 1, 2]) -st.sidebar.write("np.exp(x):", np.exp(x)) -st.sidebar.write("np.sin(x):", np.sin(x)) -st.sidebar.write("np.arctan(x):", np.arctan(x)) - -# Miscellaneous routines -st.sidebar.header("Miscellaneous routines") -st.sidebar.write("np.percentile([1, 2, 3, 4, 5], 50):", np.percentile([1, 2, 3, 4, 5], 50)) -st.sidebar.write("np.histogram([1, 2, 1], bins=[0, 1, 2, 3]):", np.histogram([1, 2, 1], bins=[0, 1, 2, 3])) - -# Polynomials -st.sidebar.header("Polynomials") -st.sidebar.write("np.poly1d([1, 2, 3])(4):", np.poly1d([1, 2, 3])(4)) - -# Random sampling (numpy.random) -st.sidebar.header("Random sampling (numpy.random)") -st.sidebar.write("np.random.rand(3, 2):", np.random.rand(3, 2)) -st.sidebar.write("np.random.normal(size=(2, 2)):", np.random.normal(size=(2, 2))) - -#Set routines -st.sidebar.header("Set routines") -x = np.array([1, 2, 3, 4]) -y = np.array([3, 4, 5, 6]) -st.sidebar.write("np.intersect1d(x, y):", np.intersect1d(x, y)) -st.sidebar.write("np.union1d(x, y):", np.union1d(x, y)) -st.sidebar.write("np.setdiff1d(x, y):", np.setdiff1d(x, y)) - -#Sorting, searching, and counting -st.sidebar.header("Sorting, searching, and counting") -x = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3]) 
-st.sidebar.write("np.sort(x):", np.sort(x)) -st.sidebar.write("np.argsort(x):", np.argsort(x)) -st.sidebar.write("np.where(x == 5):", np.where(x == 5)) -st.sidebar.write("np.count_nonzero(x > 3):", np.count_nonzero(x > 3)) - -# Statistics -st.sidebar.header("Statistics") -x = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3]) -st.sidebar.write("np.mean(x):", np.mean(x)) -st.sidebar.write("np.std(x):", np.std(x)) -st.sidebar.write("np.median(x):", np.median(x)) - diff --git a/spaces/awen666/web-ui/_next/static/chunks/app/layout-15d71eaa391f3141.js b/spaces/awen666/web-ui/_next/static/chunks/app/layout-15d71eaa391f3141.js deleted file mode 100644 index 0544bb13baccd784224b930bcca44e80470064b9..0000000000000000000000000000000000000000 --- a/spaces/awen666/web-ui/_next/static/chunks/app/layout-15d71eaa391f3141.js +++ /dev/null @@ -1 +0,0 @@ -(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[185],{8415:function(n,e,u){Promise.resolve().then(u.t.bind(u,98410,23))},98410:function(){}},function(n){n.O(0,[253,698,744],function(){return n(n.s=8415)}),_N_E=n.O()}]); \ No newline at end of file diff --git a/spaces/ayaanzaveri/whisper-webui/src/modelCache.py b/spaces/ayaanzaveri/whisper-webui/src/modelCache.py deleted file mode 100644 index 680a4b386fc37e17ed2353e72d04a646ece2c4a6..0000000000000000000000000000000000000000 --- a/spaces/ayaanzaveri/whisper-webui/src/modelCache.py +++ /dev/null @@ -1,17 +0,0 @@ -class ModelCache: - def __init__(self): - self._cache = dict() - - def get(self, model_key: str, model_factory): - result = self._cache.get(model_key) - - if result is None: - result = model_factory() - self._cache[model_key] = result - return result - - def clear(self): - self._cache.clear() - -# A global cache of models. This is mainly used by the daemon processes to avoid loading the same model multiple times. -GLOBAL_MODEL_CACHE = ModelCache() \ No newline at end of file diff --git a/spaces/bigscience/SourcingCatalog/catalogue/__init__.py b/spaces/bigscience/SourcingCatalog/catalogue/__init__.py deleted file mode 100644 index 67f5ee37503593e2ccd5120293b056731438d43b..0000000000000000000000000000000000000000 --- a/spaces/bigscience/SourcingCatalog/catalogue/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .geography import countries, make_choro_map, region_tree diff --git a/spaces/bioriAsaeru/text-to-voice/Anurag 3.1 Software Keygen Free Download Reizende Schulrefera Features and Benefits.md b/spaces/bioriAsaeru/text-to-voice/Anurag 3.1 Software Keygen Free Download Reizende Schulrefera Features and Benefits.md deleted file mode 100644 index 0c7e50bf2b8d0e1ec3e4812e9163c17ee558aafe..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Anurag 3.1 Software Keygen Free Download Reizende Schulrefera Features and Benefits.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Anurag 3.1 Software Keygen Free Download reizende schulrefera


    DOWNLOADhttps://urloso.com/2uyOXi



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Avengers Infinity War Comics Pdf Download WORK.md b/spaces/bioriAsaeru/text-to-voice/Avengers Infinity War Comics Pdf Download WORK.md deleted file mode 100644 index 405ac9fe207371420ee0e5f3f63ab0729cd547d0..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Avengers Infinity War Comics Pdf Download WORK.md +++ /dev/null @@ -1,11 +0,0 @@ -

    Avengers Infinity War Comics Pdf Download


    Download Zip ->->->-> https://urloso.com/2uyQAU



    - -avengers infinity war comics pdf ebook full pdf download -Avengers Infinity War Comics Full. -Download Avengers Infinity War Comics Full, free online. -Find more comics, reviews, scan and read about Avengers Infinity War Comics on the largest comics and manga. -Download Avengers Infinity War Comics Full. -Download Avengers Infinity War Comics Full, free online. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Elliott Smith XO full album zip Download the 1998 masterpiece by the indie rock legend.md b/spaces/bioriAsaeru/text-to-voice/Elliott Smith XO full album zip Download the 1998 masterpiece by the indie rock legend.md deleted file mode 100644 index 9e081f25e550abc9e9ac97a6a42bc57202bfafbd..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Elliott Smith XO full album zip Download the 1998 masterpiece by the indie rock legend.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Elliott Smith, XO full album zip


    Downloadhttps://urloso.com/2uyPdc



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DeepLab/deeplab/lr_scheduler.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DeepLab/deeplab/lr_scheduler.py deleted file mode 100644 index b754b59750ed7fea1e2d24d40f019d26bd562bf5..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DeepLab/deeplab/lr_scheduler.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import math -from typing import List -import torch - -from detectron2.solver.lr_scheduler import LRScheduler, _get_warmup_factor_at_iter - -# NOTE: PyTorch's LR scheduler interface uses names that assume the LR changes -# only on epoch boundaries. We typically use iteration based schedules instead. -# As a result, "epoch" (e.g., as in self.last_epoch) should be understood to mean -# "iteration" instead. - -# FIXME: ideally this would be achieved with a CombinedLRScheduler, separating -# MultiStepLR with WarmupLR but the current LRScheduler design doesn't allow it. - - -class WarmupPolyLR(LRScheduler): - """ - Poly learning rate schedule used to train DeepLab. - Paper: DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, - Atrous Convolution, and Fully Connected CRFs. - Reference: https://github.com/tensorflow/models/blob/21b73d22f3ed05b650e85ac50849408dd36de32e/research/deeplab/utils/train_utils.py#L337 # noqa - """ - - def __init__( - self, - optimizer: torch.optim.Optimizer, - max_iters: int, - warmup_factor: float = 0.001, - warmup_iters: int = 1000, - warmup_method: str = "linear", - last_epoch: int = -1, - power: float = 0.9, - constant_ending: float = 0.0, - ): - self.max_iters = max_iters - self.warmup_factor = warmup_factor - self.warmup_iters = warmup_iters - self.warmup_method = warmup_method - self.power = power - self.constant_ending = constant_ending - super().__init__(optimizer, last_epoch) - - def get_lr(self) -> List[float]: - warmup_factor = _get_warmup_factor_at_iter( - self.warmup_method, self.last_epoch, self.warmup_iters, self.warmup_factor - ) - if self.constant_ending > 0 and warmup_factor == 1.0: - # Constant ending lr. 
- if ( - math.pow((1.0 - self.last_epoch / self.max_iters), self.power) - < self.constant_ending - ): - return [base_lr * self.constant_ending for base_lr in self.base_lrs] - return [ - base_lr * warmup_factor * math.pow((1.0 - self.last_epoch / self.max_iters), self.power) - for base_lr in self.base_lrs - ] - - def _compute_values(self) -> List[float]: - # The new interface - return self.get_lr() diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/tests/data/__init__.py b/spaces/brjathu/HMR2.0/vendor/detectron2/tests/data/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/bryantmedical/oral_cancer/app.py b/spaces/bryantmedical/oral_cancer/app.py deleted file mode 100644 index 699dd56edef9d02a5afa161082bef0ba1caeb200..0000000000000000000000000000000000000000 --- a/spaces/bryantmedical/oral_cancer/app.py +++ /dev/null @@ -1,82 +0,0 @@ -import gradio as gr -import pathlib -import tensorflow as tf - -current_dir = pathlib.Path(__file__).parent - -# images = [str(current_dir / "cheetah1.jpeg"), str(current_dir / "cheetah1.jpg"), str(current_dir / "lion.jpg")] - -images = [str(current_dir / "data/benign/benign_4.jpg"), str(current_dir / "data/benign/benign_5.jpg"), str(current_dir / "data/benign/benign_6.jpg"), str(current_dir / "data/malignant/malignant_4.jpg"), str(current_dir / "data/malignant/malignant_5.jpg"), str(current_dir / "data/malignant/malignant_6.jpg")] - - -# img_classifier = gr.Interface.load( -# "models/google/vit-base-patch16-224", examples=images, cache_examples=False -# ) - - -# def func(img, text): -# return img_classifier(img), text - - -# using_img_classifier_as_function = gr.Interface( -# func, -# [gr.Image(type="filepath"), "text"], -# ["label", "text"], -# examples=[ -# [str(current_dir / "cheetah1.jpeg"), None], -# [str(current_dir / "cheetah1.jpg"), "cheetah"], -# [str(current_dir / "lion.jpg"), "lion"], -# ], -# cache_examples=False, -# ) -# demo = gr.TabbedInterface([using_img_classifier_as_function, img_classifier]) - -# if __name__ == "__main__": -# demo.launch() - - - - - - - - - - - - - - - - -# import gradio as gr -from tensorflow import keras -from skimage.transform import resize - -# def greet(name): -# return "Hello " + name + "!!" 
- -# iface = gr.Interface(fn=greet, inputs="text", outputs="text") -# iface.launch() - -# oc_resnet50_model1 = keras.models.load_model('./models/oc_model.h5') -print("current_dir", current_dir) -oc_resnet50_model2 = keras.models.load_model(f"{current_dir}/models/mendeley_oc_model_v2.h5") -labels = ['Benign Lesion', 'Malignant Lesion'] - -def classify_image(inp): - - # inp =resize(inp, (300, 300, 3)) - inp = inp.reshape((-1, 300, 300, 3)) - # inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp) - inp = tf.keras.applications.resnet50.preprocess_input(inp) - prediction = oc_resnet50_model2.predict(inp).flatten() - confidences = {labels[i]: float(prediction[i]) for i in range(2)} - return confidences - -gr.Interface(fn=classify_image, - inputs=gr.Image(shape=(300, 300)), - outputs=gr.Label(num_top_classes=2), - examples=images, cache_examples=False, - # interpretation="shap", num_shap=5 - ).launch() \ No newline at end of file diff --git a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/aws/resume.py b/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/aws/resume.py deleted file mode 100644 index b21731c979a121ab8227280351b70d6062efd983..0000000000000000000000000000000000000000 --- a/spaces/bulentsofttech/gradio_s1000_veri_toplama_modeli/yolov5/utils/aws/resume.py +++ /dev/null @@ -1,40 +0,0 @@ -# Resume all interrupted trainings in yolov5/ dir including DDP trainings -# Usage: $ python utils/aws/resume.py - -import os -import sys -from pathlib import Path - -import torch -import yaml - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[2] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH - -port = 0 # --master_port -path = Path('').resolve() -for last in path.rglob('*/**/last.pt'): - ckpt = torch.load(last) - if ckpt['optimizer'] is None: - continue - - # Load opt.yaml - with open(last.parent.parent / 'opt.yaml', errors='ignore') as f: - opt = yaml.safe_load(f) - - # Get device count - d = opt['device'].split(',') # devices - nd = len(d) # number of devices - ddp = nd > 1 or (nd == 0 and torch.cuda.device_count() > 1) # distributed data parallel - - if ddp: # multi-GPU - port += 1 - cmd = f'python -m torch.distributed.run --nproc_per_node {nd} --master_port {port} train.py --resume {last}' - else: # single-GPU - cmd = f'python train.py --resume {last}' - - cmd += ' > /dev/null 2>&1 &' # redirect output to dev/null and run in daemon thread - print(cmd) - os.system(cmd) diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageMode.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageMode.py deleted file mode 100644 index a0b33514296df734501c553493b0a535eca49046..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageMode.py +++ /dev/null @@ -1,90 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# standard mode descriptors -# -# History: -# 2006-03-20 fl Added -# -# Copyright (c) 2006 by Secret Labs AB. -# Copyright (c) 2006 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. 
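# [Editor's illustration, not part of the original file.] Typical use of this
# module is a mode lookup followed by reading the descriptor's fields:
from PIL import ImageMode

rgb = ImageMode.getmode("RGB")
print(rgb.bands, rgb.basemode, rgb.typestr)   # ('R', 'G', 'B') RGB |u1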
-# - -import sys - -# mode descriptor cache -_modes = None - - -class ModeDescriptor: - """Wrapper for mode strings.""" - - def __init__(self, mode, bands, basemode, basetype, typestr): - self.mode = mode - self.bands = bands - self.basemode = basemode - self.basetype = basetype - self.typestr = typestr - - def __str__(self): - return self.mode - - -def getmode(mode): - """Gets a mode descriptor for the given mode.""" - global _modes - if not _modes: - # initialize mode cache - modes = {} - endian = "<" if sys.byteorder == "little" else ">" - for m, (basemode, basetype, bands, typestr) in { - # core modes - # Bits need to be extended to bytes - "1": ("L", "L", ("1",), "|b1"), - "L": ("L", "L", ("L",), "|u1"), - "I": ("L", "I", ("I",), endian + "i4"), - "F": ("L", "F", ("F",), endian + "f4"), - "P": ("P", "L", ("P",), "|u1"), - "RGB": ("RGB", "L", ("R", "G", "B"), "|u1"), - "RGBX": ("RGB", "L", ("R", "G", "B", "X"), "|u1"), - "RGBA": ("RGB", "L", ("R", "G", "B", "A"), "|u1"), - "CMYK": ("RGB", "L", ("C", "M", "Y", "K"), "|u1"), - "YCbCr": ("RGB", "L", ("Y", "Cb", "Cr"), "|u1"), - # UNDONE - unsigned |u1i1i1 - "LAB": ("RGB", "L", ("L", "A", "B"), "|u1"), - "HSV": ("RGB", "L", ("H", "S", "V"), "|u1"), - # extra experimental modes - "RGBa": ("RGB", "L", ("R", "G", "B", "a"), "|u1"), - "BGR;15": ("RGB", "L", ("B", "G", "R"), "|u1"), - "BGR;16": ("RGB", "L", ("B", "G", "R"), "|u1"), - "BGR;24": ("RGB", "L", ("B", "G", "R"), "|u1"), - "LA": ("L", "L", ("L", "A"), "|u1"), - "La": ("L", "L", ("L", "a"), "|u1"), - "PA": ("RGB", "L", ("P", "A"), "|u1"), - }.items(): - modes[m] = ModeDescriptor(m, bands, basemode, basetype, typestr) - # mapping modes - for i16mode, typestr in { - # I;16 == I;16L, and I;32 == I;32L - "I;16": "u2", - "I;16BS": ">i2", - "I;16N": endian + "u2", - "I;16NS": endian + "i2", - "I;32": "u4", - "I;32L": "i4", - "I;32LS": "= (0, 7), "Require torchvision >= 0.7" - - def forward(self, input, rois): - """ - Args: - input: NCHW images - rois: Bx5 boxes. First column is the index into N. The other 4 columns are xyxy. - """ - assert rois.dim() == 2 and rois.size(1) == 5 - if input.is_quantized: - input = input.dequantize() - return roi_align( - input, - rois.to(dtype=input.dtype), - self.output_size, - self.spatial_scale, - self.sampling_ratio, - self.aligned, - ) - - def __repr__(self): - tmpstr = self.__class__.__name__ + "(" - tmpstr += "output_size=" + str(self.output_size) - tmpstr += ", spatial_scale=" + str(self.spatial_scale) - tmpstr += ", sampling_ratio=" + str(self.sampling_ratio) - tmpstr += ", aligned=" + str(self.aligned) - tmpstr += ")" - return tmpstr diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/mesh.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/mesh.py deleted file mode 100644 index 589515d2c4dfc6f94fdd3973e874c0a01fddb5eb..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/structures/mesh.py +++ /dev/null @@ -1,172 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved - -import pickle -from functools import lru_cache -from typing import Dict, Optional, Tuple -import torch - -from detectron2.utils.file_io import PathManager - -from densepose.data.meshes.catalog import MeshCatalog, MeshInfo - - -def _maybe_copy_to_device( - attribute: Optional[torch.Tensor], device: torch.device -) -> Optional[torch.Tensor]: - if attribute is None: - return None - return attribute.to(device) - - -class Mesh: - def __init__( - self, - vertices: Optional[torch.Tensor] = None, - faces: Optional[torch.Tensor] = None, - geodists: Optional[torch.Tensor] = None, - symmetry: Optional[Dict[str, torch.Tensor]] = None, - texcoords: Optional[torch.Tensor] = None, - mesh_info: Optional[MeshInfo] = None, - device: Optional[torch.device] = None, - ): - """ - Args: - vertices (tensor [N, 3] of float32): vertex coordinates in 3D - faces (tensor [M, 3] of long): triangular face represented as 3 - vertex indices - geodists (tensor [N, N] of float32): geodesic distances from - vertex `i` to vertex `j` (optional, default: None) - symmetry (dict: str -> tensor): various mesh symmetry data: - - "vertex_transforms": vertex mapping under horizontal flip, - tensor of size [N] of type long; vertex `i` is mapped to - vertex `tensor[i]` (optional, default: None) - texcoords (tensor [N, 2] of float32): texture coordinates, i.e. global - and normalized mesh UVs (optional, default: None) - mesh_info (MeshInfo type): necessary to load the attributes on-the-go, - can be used instead of passing all the variables one by one - device (torch.device): device of the Mesh. If not provided, will use - the device of the vertices - """ - self._vertices = vertices - self._faces = faces - self._geodists = geodists - self._symmetry = symmetry - self._texcoords = texcoords - self.mesh_info = mesh_info - self.device = device - - assert self._vertices is not None or self.mesh_info is not None - - all_fields = [self._vertices, self._faces, self._geodists, self._texcoords] - - if self.device is None: - for field in all_fields: - if field is not None: - self.device = field.device - break - if self.device is None and symmetry is not None: - for key in symmetry: - self.device = symmetry[key].device - break - self.device = torch.device("cpu") if self.device is None else self.device - - assert all([var.device == self.device for var in all_fields if var is not None]) - if symmetry: - assert all(symmetry[key].device == self.device for key in symmetry) - if texcoords and vertices: - assert len(vertices) == len(texcoords) - - def to(self, device: torch.device): - device_symmetry = self._symmetry - if device_symmetry: - device_symmetry = {key: value.to(device) for key, value in device_symmetry.items()} - return Mesh( - _maybe_copy_to_device(self._vertices, device), - _maybe_copy_to_device(self._faces, device), - _maybe_copy_to_device(self._geodists, device), - device_symmetry, - _maybe_copy_to_device(self._texcoords, device), - self.mesh_info, - device, - ) - - @property - def vertices(self): - if self._vertices is None and self.mesh_info is not None: - self._vertices = load_mesh_data(self.mesh_info.data, "vertices", self.device) - return self._vertices - - @property - def faces(self): - if self._faces is None and self.mesh_info is not None: - self._faces = load_mesh_data(self.mesh_info.data, "faces", self.device) - return self._faces - - @property - def geodists(self): - if self._geodists is None and self.mesh_info is not None: - self._geodists = load_mesh_auxiliary_data(self.mesh_info.geodists, self.device) - 
return self._geodists - - @property - def symmetry(self): - if self._symmetry is None and self.mesh_info is not None: - self._symmetry = load_mesh_symmetry(self.mesh_info.symmetry, self.device) - return self._symmetry - - @property - def texcoords(self): - if self._texcoords is None and self.mesh_info is not None: - self._texcoords = load_mesh_auxiliary_data(self.mesh_info.texcoords, self.device) - return self._texcoords - - def get_geodists(self): - if self.geodists is None: - self.geodists = self._compute_geodists() - return self.geodists - - def _compute_geodists(self): - # TODO: compute using Laplace-Beltrami - geodists = None - return geodists - - -def load_mesh_data( - mesh_fpath: str, field: str, device: Optional[torch.device] = None -) -> Tuple[Optional[torch.Tensor], Optional[torch.Tensor]]: - with PathManager.open(mesh_fpath, "rb") as hFile: - # pyre-fixme[7]: Expected `Tuple[Optional[Tensor], Optional[Tensor]]` but - # got `Tensor`. - return torch.as_tensor(pickle.load(hFile)[field], dtype=torch.float).to( # pyre-ignore[6] - device - ) - return None - - -def load_mesh_auxiliary_data( - fpath: str, device: Optional[torch.device] = None -) -> Optional[torch.Tensor]: - fpath_local = PathManager.get_local_path(fpath) - with PathManager.open(fpath_local, "rb") as hFile: - return torch.as_tensor(pickle.load(hFile), dtype=torch.float).to(device) # pyre-ignore[6] - return None - - -@lru_cache() -def load_mesh_symmetry( - symmetry_fpath: str, device: Optional[torch.device] = None -) -> Optional[Dict[str, torch.Tensor]]: - with PathManager.open(symmetry_fpath, "rb") as hFile: - symmetry_loaded = pickle.load(hFile) # pyre-ignore[6] - symmetry = { - "vertex_transforms": torch.as_tensor( - symmetry_loaded["vertex_transforms"], dtype=torch.long - ).to(device), - } - return symmetry - return None - - -@lru_cache() -def create_mesh(mesh_name: str, device: Optional[torch.device] = None) -> Mesh: - return Mesh(mesh_info=MeshCatalog[mesh_name], device=device) diff --git a/spaces/ccds/vits_onnx/export/vits/losses.py b/spaces/ccds/vits_onnx/export/vits/losses.py deleted file mode 100644 index f835539a16b49e1065fef4e4a1efb259b88dcf64..0000000000000000000000000000000000000000 --- a/spaces/ccds/vits_onnx/export/vits/losses.py +++ /dev/null @@ -1,58 +0,0 @@ -import torch - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1 - dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1 - dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. 
* logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/ccolas/TastyPiano/src/cocktails/utilities/cocktail_generation_utilities/population.py b/spaces/ccolas/TastyPiano/src/cocktails/utilities/cocktail_generation_utilities/population.py deleted file mode 100644 index 2a004f3146611efa8f9579c4e928c8dd335f7c9b..0000000000000000000000000000000000000000 --- a/spaces/ccolas/TastyPiano/src/cocktails/utilities/cocktail_generation_utilities/population.py +++ /dev/null @@ -1,213 +0,0 @@ -from src.cocktails.utilities.cocktail_generation_utilities.individual import * -from sklearn.neighbors import NearestNeighbors -import time -import pickle -from src.cocktails.config import COCKTAIL_NN_PATH, COCKTAILS_CSV_DATA - -class Population: - def __init__(self, target, pop_params, target_affective_cluster=None, known_target_dict=None): - self.pop_params = pop_params - self.pop_size = pop_params['pop_size'] - self.nb_elite = pop_params['nb_elites'] - self.nb_generations = pop_params['nb_generations'] - self.target = target - self.mutation_params = pop_params['mutation_params'] - self.dist = pop_params['dist'] - self.n_neighbors = pop_params['n_neighbors'] - self.known_target_dict = known_target_dict - - - with open(COCKTAIL_NN_PATH, 'rb') as f: - data = pickle.load(f) - self.nn_model_cocktail = data['nn_model'] - self.dim_rep_cocktail = data['dim_rep_cocktail'] - self.n_cocktails = data['n_cocktails'] - self.cocktail_data = pd.read_csv(COCKTAILS_CSV_DATA) - - if target_affective_cluster is None: - cocktail_rep_affective = get_normalized_affective_cocktail_rep_from_normalized_cocktail_rep(target) - self.target_affective_cluster = cocktail2affective_cluster(cocktail_rep_affective)[0] - else: - self.target_affective_cluster = target_affective_cluster - - self.pop_elite = [] - self.pop = [] - self.add_target_individual() # create a target individual (not in pop) - self.add_nearest_neighbors_in_pop() # add nearest neighbor from dataset into the population - - # fill population - while self.get_pop_size() < self.pop_size: - self.add_individual() - while len(self.pop_elite) < self.nb_elite: - self.pop_elite.append(IndividualCocktail(pop_params=self.pop_params, - target=self.target.copy(), - target_affective_cluster=self.target_affective_cluster)) - self.update_elite_and_get_next_pop() - - def add_target_individual(self): - if self.known_target_dict is not None: - genes_presence, genes_quantity = self.get_q_rep(*extract_ingredients(self.known_target_dict['ing_str'])) - self.target_individual = IndividualCocktail(pop_params=self.pop_params, - target=self.target.copy(), - known_target_dict=self.known_target_dict, - target_affective_cluster=self.target_affective_cluster, - genes_presence=genes_presence, - genes_quantity=genes_quantity - ) - else: - self.target_individual = None - - - def add_nearest_neighbors_in_pop(self): - # add nearest neighbor from dataset into the population - if self.n_neighbors > 0: - dists, indexes = self.nn_model_cocktail.kneighbors(self.target.reshape(1, -1)) - dists, indexes = dists.flatten(), indexes.flatten() - first = 1 if dists[0] == 0 else 0 # avoid taking the target when testing with known targets from the dataset - indexes = indexes[first:first + self.n_neighbors] - self.ing_strs = np.array(self.cocktail_data['ingredients_str'])[indexes] - recipes = [extract_ingredients(ing_str) for ing_str in self.ing_strs] - for r in recipes: - genes_presence, genes_quantity = self.get_q_rep(r[0], r[1]) - genes_presence[-1] = 0 # remove water ingredient - 
self.add_individual(genes_presence=genes_presence.copy(), genes_quantity=genes_quantity.copy()) - self.nn_recipes = [ind.get_recipe()[3] for ind in self.pop] - self.nn_scores = [ind.perf for ind in self.pop] - else: - self.ing_strs = None - - def add_individual(self, genes_presence=None, genes_quantity=None): - self.pop.append(IndividualCocktail(pop_params=self.pop_params, - target=self.target.copy(), - target_affective_cluster=self.target_affective_cluster, - genes_presence=genes_presence, - genes_quantity=genes_quantity)) - - def get_elite_perf(self): - return np.array([e.perf for e in self.pop_elite]) - - def get_pop_perf(self): - return np.array([ind.perf for ind in self.pop]) - - - def update_elite_and_get_next_pop(self): - time_dict = dict() - init_time = time.time() - elite_perfs = self.get_elite_perf() - pop_perfs = self.get_pop_perf() - all_perfs = np.concatenate([elite_perfs, pop_perfs]) - temp_list = self.pop_elite + self.pop - time_dict[' get pop perfs'] = [time.time() - init_time] - init_time = time.time() - # update elite population with new bests - indexes_sorted = np.flip(np.argsort(all_perfs)) - new_pop_elite = [IndividualCocktail(pop_params=self.pop_params, - target=self.target.copy(), - target_affective_cluster=self.target_affective_cluster, - genes_presence=temp_list[i_new_e].genes_presence.copy(), - genes_quantity=temp_list[i_new_e].genes_quantity.copy()) for i_new_e in indexes_sorted[:self.nb_elite]] - time_dict[' recreate elite individuals'] = [time.time() - init_time] - init_time = time.time() - # select parents - rank_perfs = np.flip(np.arange(len(temp_list))) - sampling_probs = rank_perfs / np.sum(rank_perfs) - if self.mutation_params['asexual_rep'] and not self.mutation_params['crossover']: - new_pop_indexes = np.random.choice(indexes_sorted, p=sampling_probs, size=self.pop_size) - self.pop = [temp_list[i].get_child() for i in new_pop_indexes] - elif self.mutation_params['crossover'] and not self.mutation_params['asexual_rep']: - self.pop = [] - while len(self.pop) < self.pop_size: - parents = np.random.choice(indexes_sorted, p=sampling_probs, size=2, replace=False) - self.pop.append(temp_list[parents[0]].get_child_with(temp_list[parents[1]])) - elif self.mutation_params['crossover'] and self.mutation_params['asexual_rep']: - new_pop_indexes = np.random.choice(indexes_sorted, p=sampling_probs, size=self.pop_size//2) - time_dict[' choose asexual parent indexes'] = [time.time() - init_time] - init_time = time.time() - self.pop = [] - for i in new_pop_indexes: - child, this_time_dict = temp_list[i].get_child() - self.pop.append(child) - time_dict = self.update_time_dict(time_dict, this_time_dict) - time_dict[' get asexual children'] = [time.time() - init_time] - init_time = time.time() - while len(self.pop) < self.pop_size: - parents = np.random.choice(indexes_sorted, p=sampling_probs, size=2, replace=False) - child, this_time_dict = temp_list[parents[0]].get_child_with(temp_list[parents[1]]) - self.pop.append(child) - time_dict = self.update_time_dict(time_dict, this_time_dict) - time_dict[' get sexual children'] = [time.time() - init_time] - self.pop_elite = new_pop_elite - return time_dict - - def get_pop_size(self): - return len(self.pop) - - def get_q_rep(self, ingredients, quantities): - ingredient_q_rep = np.zeros([len(ingredient_list)]) - genes_presence = np.zeros([len(ingredient_list)]) - for ing, q in zip(ingredients, quantities): - ingredient_q_rep[ingredient_list.index(ing)] = q - genes_presence[ingredient_list.index(ing)] = 1 - return 
genes_presence.copy(), normalize_ingredient_q_rep(ingredient_q_rep) - - def get_best_score(self, affective_cluster_check=False): - elite_perfs = self.get_elite_perf() - pop_perfs = self.get_pop_perf() - all_perfs = np.concatenate([elite_perfs, pop_perfs]) - temp_list = self.pop_elite + self.pop - if affective_cluster_check: - indexes = np.array([i for i in range(len(temp_list)) if temp_list[i].does_affective_cluster_match()]) - if indexes.size > 0: - temp_list = np.array(temp_list)[indexes] - all_perfs = all_perfs[indexes] - indexes_best = np.flip(np.argsort(all_perfs)) - return np.array(all_perfs)[indexes_best], np.array(temp_list)[indexes_best] - - def update_time_dict(self, main_dict, new_dict): - for k in new_dict.keys(): - if k in main_dict.keys(): - main_dict[k].append(np.sum(new_dict[k])) - else: - main_dict[k] = [np.sum(new_dict[k])] - return main_dict - - def run_one_generation(self, verbose=True, affective_cluster_check=False): - time_dict = dict() - init_time = time.time() - this_time_dict = self.update_elite_and_get_next_pop() - time_dict['update_elite_and_pop'] = [time.time() - init_time] - time_dict = self.update_time_dict(time_dict, this_time_dict) - init_time = time.time() - best_perfs, best_individuals = self.get_best_score(affective_cluster_check) - time_dict['get best scores'] = [time.time() - init_time] - return best_perfs[0], time_dict - - def run_evolution(self, verbose=False, print_every=10, affective_cluster_check=False, level=0): - best_score = -np.inf - time_dict = dict() - init_time = time.time() - for i in range(self.nb_generations): - best_score, this_time_dict = self.run_one_generation(verbose, affective_cluster_check=affective_cluster_check) - time_dict = self.update_time_dict(time_dict, this_time_dict) - if verbose and (i+1) % print_every == 0: - print(' ' * level + f'Gen #{i+1} - Current best perf: {best_score:.2f}, time: {time.time() - init_time:.4f}') - init_time = time.time() - # - # to_print = time_dict.copy() - # keys = sorted(to_print.keys()) - # values = [] - # for k in keys: - # to_print[k] = np.sum(to_print[k]) - # values.append(to_print[k]) - # sorted_inds = np.flip(np.argsort(values)) - # for i in sorted_inds: - # print(f'{keys[i]}: {values[i]:.4f}') - if verbose: print(' ' * level + f'Evolution over, best perf: {best_score:.2f}') - return self.get_best_score() - - def print_results(self, n=3): - best_scores, best_ind = self.get_best_score() - for i in range(n): - best_ind[i].print_recipe(f'Candidate #{i+1}, Score: {best_scores[i]:.2f}') - - diff --git a/spaces/cfwef/gpt/crazy_functions/test_project/cpp/libJPG/jpgd.cpp b/spaces/cfwef/gpt/crazy_functions/test_project/cpp/libJPG/jpgd.cpp deleted file mode 100644 index 36d06c8e9068570c3e7624895d474f33dbfe3d29..0000000000000000000000000000000000000000 --- a/spaces/cfwef/gpt/crazy_functions/test_project/cpp/libJPG/jpgd.cpp +++ /dev/null @@ -1,3276 +0,0 @@ -// jpgd.cpp - C++ class for JPEG decompression. -// Public domain, Rich Geldreich -// Last updated Apr. 16, 2011 -// Alex Evans: Linear memory allocator (taken from jpge.h). -// -// Supports progressive and baseline sequential JPEG image files, and the most common chroma subsampling factors: Y, H1V1, H2V1, H1V2, and H2V2. -// -// Chroma upsampling quality: H2V2 is upsampled in the frequency domain, H2V1 and H1V2 are upsampled using point sampling. 
-// Chroma upsampling reference: "Fast Scheme for Image Size Change in the Compressed Domain" -// http://vision.ai.uiuc.edu/~dugad/research/dct/index.html - -#include "jpgd.h" -#include - -#include -// BEGIN EPIC MOD -#define JPGD_ASSERT(x) { assert(x); CA_ASSUME(x); } (void)0 -// END EPIC MOD - -#ifdef _MSC_VER -#pragma warning (disable : 4611) // warning C4611: interaction between '_setjmp' and C++ object destruction is non-portable -#endif - -// Set to 1 to enable freq. domain chroma upsampling on images using H2V2 subsampling (0=faster nearest neighbor sampling). -// This is slower, but results in higher quality on images with highly saturated colors. -#define JPGD_SUPPORT_FREQ_DOMAIN_UPSAMPLING 1 - -#define JPGD_TRUE (1) -#define JPGD_FALSE (0) - -#define JPGD_MAX(a,b) (((a)>(b)) ? (a) : (b)) -#define JPGD_MIN(a,b) (((a)<(b)) ? (a) : (b)) - -namespace jpgd { - - static inline void *jpgd_malloc(size_t nSize) { return FMemory::Malloc(nSize); } - static inline void jpgd_free(void *p) { FMemory::Free(p); } - -// BEGIN EPIC MOD -//@UE3 - use UE3 BGRA encoding instead of assuming RGBA - // stolen from IImageWrapper.h - enum ERGBFormatJPG - { - Invalid = -1, - RGBA = 0, - BGRA = 1, - Gray = 2, - }; - static ERGBFormatJPG jpg_format; -// END EPIC MOD - - // DCT coefficients are stored in this sequence. - static int g_ZAG[64] = { 0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63 }; - - enum JPEG_MARKER - { - M_SOF0 = 0xC0, M_SOF1 = 0xC1, M_SOF2 = 0xC2, M_SOF3 = 0xC3, M_SOF5 = 0xC5, M_SOF6 = 0xC6, M_SOF7 = 0xC7, M_JPG = 0xC8, - M_SOF9 = 0xC9, M_SOF10 = 0xCA, M_SOF11 = 0xCB, M_SOF13 = 0xCD, M_SOF14 = 0xCE, M_SOF15 = 0xCF, M_DHT = 0xC4, M_DAC = 0xCC, - M_RST0 = 0xD0, M_RST1 = 0xD1, M_RST2 = 0xD2, M_RST3 = 0xD3, M_RST4 = 0xD4, M_RST5 = 0xD5, M_RST6 = 0xD6, M_RST7 = 0xD7, - M_SOI = 0xD8, M_EOI = 0xD9, M_SOS = 0xDA, M_DQT = 0xDB, M_DNL = 0xDC, M_DRI = 0xDD, M_DHP = 0xDE, M_EXP = 0xDF, - M_APP0 = 0xE0, M_APP15 = 0xEF, M_JPG0 = 0xF0, M_JPG13 = 0xFD, M_COM = 0xFE, M_TEM = 0x01, M_ERROR = 0x100, RST0 = 0xD0 - }; - - enum JPEG_SUBSAMPLING { JPGD_GRAYSCALE = 0, JPGD_YH1V1, JPGD_YH2V1, JPGD_YH1V2, JPGD_YH2V2 }; - -#define CONST_BITS 13 -#define PASS1_BITS 2 -#define SCALEDONE ((int32)1) - -#define FIX_0_298631336 ((int32)2446) /* FIX(0.298631336) */ -#define FIX_0_390180644 ((int32)3196) /* FIX(0.390180644) */ -#define FIX_0_541196100 ((int32)4433) /* FIX(0.541196100) */ -#define FIX_0_765366865 ((int32)6270) /* FIX(0.765366865) */ -#define FIX_0_899976223 ((int32)7373) /* FIX(0.899976223) */ -#define FIX_1_175875602 ((int32)9633) /* FIX(1.175875602) */ -#define FIX_1_501321110 ((int32)12299) /* FIX(1.501321110) */ -#define FIX_1_847759065 ((int32)15137) /* FIX(1.847759065) */ -#define FIX_1_961570560 ((int32)16069) /* FIX(1.961570560) */ -#define FIX_2_053119869 ((int32)16819) /* FIX(2.053119869) */ -#define FIX_2_562915447 ((int32)20995) /* FIX(2.562915447) */ -#define FIX_3_072711026 ((int32)25172) /* FIX(3.072711026) */ - -#define DESCALE(x,n) (((x) + (SCALEDONE << ((n)-1))) >> (n)) -#define DESCALE_ZEROSHIFT(x,n) (((x) + (128 << (n)) + (SCALEDONE << ((n)-1))) >> (n)) - -#define MULTIPLY(var, cnst) ((var) * (cnst)) - -#define CLAMP(i) ((static_cast(i) > 255) ? 
(((~i) >> 31) & 0xFF) : (i)) - - // Compiler creates a fast path 1D IDCT for X non-zero columns - template - struct Row - { - static void idct(int* pTemp, const jpgd_block_t* pSrc) - { - // ACCESS_COL() will be optimized at compile time to either an array access, or 0. -#define ACCESS_COL(x) (((x) < NONZERO_COLS) ? (int)pSrc[x] : 0) - - const int z2 = ACCESS_COL(2), z3 = ACCESS_COL(6); - - const int z1 = MULTIPLY(z2 + z3, FIX_0_541196100); - const int tmp2 = z1 + MULTIPLY(z3, - FIX_1_847759065); - const int tmp3 = z1 + MULTIPLY(z2, FIX_0_765366865); - - const int tmp0 = (ACCESS_COL(0) + ACCESS_COL(4)) << CONST_BITS; - const int tmp1 = (ACCESS_COL(0) - ACCESS_COL(4)) << CONST_BITS; - - const int tmp10 = tmp0 + tmp3, tmp13 = tmp0 - tmp3, tmp11 = tmp1 + tmp2, tmp12 = tmp1 - tmp2; - - const int atmp0 = ACCESS_COL(7), atmp1 = ACCESS_COL(5), atmp2 = ACCESS_COL(3), atmp3 = ACCESS_COL(1); - - const int bz1 = atmp0 + atmp3, bz2 = atmp1 + atmp2, bz3 = atmp0 + atmp2, bz4 = atmp1 + atmp3; - const int bz5 = MULTIPLY(bz3 + bz4, FIX_1_175875602); - - const int az1 = MULTIPLY(bz1, - FIX_0_899976223); - const int az2 = MULTIPLY(bz2, - FIX_2_562915447); - const int az3 = MULTIPLY(bz3, - FIX_1_961570560) + bz5; - const int az4 = MULTIPLY(bz4, - FIX_0_390180644) + bz5; - - const int btmp0 = MULTIPLY(atmp0, FIX_0_298631336) + az1 + az3; - const int btmp1 = MULTIPLY(atmp1, FIX_2_053119869) + az2 + az4; - const int btmp2 = MULTIPLY(atmp2, FIX_3_072711026) + az2 + az3; - const int btmp3 = MULTIPLY(atmp3, FIX_1_501321110) + az1 + az4; - - pTemp[0] = DESCALE(tmp10 + btmp3, CONST_BITS-PASS1_BITS); - pTemp[7] = DESCALE(tmp10 - btmp3, CONST_BITS-PASS1_BITS); - pTemp[1] = DESCALE(tmp11 + btmp2, CONST_BITS-PASS1_BITS); - pTemp[6] = DESCALE(tmp11 - btmp2, CONST_BITS-PASS1_BITS); - pTemp[2] = DESCALE(tmp12 + btmp1, CONST_BITS-PASS1_BITS); - pTemp[5] = DESCALE(tmp12 - btmp1, CONST_BITS-PASS1_BITS); - pTemp[3] = DESCALE(tmp13 + btmp0, CONST_BITS-PASS1_BITS); - pTemp[4] = DESCALE(tmp13 - btmp0, CONST_BITS-PASS1_BITS); - } - }; - - template <> - struct Row<0> - { - static void idct(int* pTemp, const jpgd_block_t* pSrc) - { -#ifdef _MSC_VER - pTemp; pSrc; -#endif - } - }; - - template <> - struct Row<1> - { - static void idct(int* pTemp, const jpgd_block_t* pSrc) - { - const int dcval = (pSrc[0] << PASS1_BITS); - - pTemp[0] = dcval; - pTemp[1] = dcval; - pTemp[2] = dcval; - pTemp[3] = dcval; - pTemp[4] = dcval; - pTemp[5] = dcval; - pTemp[6] = dcval; - pTemp[7] = dcval; - } - }; - - // Compiler creates a fast path 1D IDCT for X non-zero rows - template - struct Col - { - static void idct(uint8* pDst_ptr, const int* pTemp) - { - // ACCESS_ROW() will be optimized at compile time to either an array access, or 0. -#define ACCESS_ROW(x) (((x) < NONZERO_ROWS) ? 
pTemp[x * 8] : 0) - - const int z2 = ACCESS_ROW(2); - const int z3 = ACCESS_ROW(6); - - const int z1 = MULTIPLY(z2 + z3, FIX_0_541196100); - const int tmp2 = z1 + MULTIPLY(z3, - FIX_1_847759065); - const int tmp3 = z1 + MULTIPLY(z2, FIX_0_765366865); - - const int tmp0 = (ACCESS_ROW(0) + ACCESS_ROW(4)) << CONST_BITS; - const int tmp1 = (ACCESS_ROW(0) - ACCESS_ROW(4)) << CONST_BITS; - - const int tmp10 = tmp0 + tmp3, tmp13 = tmp0 - tmp3, tmp11 = tmp1 + tmp2, tmp12 = tmp1 - tmp2; - - const int atmp0 = ACCESS_ROW(7), atmp1 = ACCESS_ROW(5), atmp2 = ACCESS_ROW(3), atmp3 = ACCESS_ROW(1); - - const int bz1 = atmp0 + atmp3, bz2 = atmp1 + atmp2, bz3 = atmp0 + atmp2, bz4 = atmp1 + atmp3; - const int bz5 = MULTIPLY(bz3 + bz4, FIX_1_175875602); - - const int az1 = MULTIPLY(bz1, - FIX_0_899976223); - const int az2 = MULTIPLY(bz2, - FIX_2_562915447); - const int az3 = MULTIPLY(bz3, - FIX_1_961570560) + bz5; - const int az4 = MULTIPLY(bz4, - FIX_0_390180644) + bz5; - - const int btmp0 = MULTIPLY(atmp0, FIX_0_298631336) + az1 + az3; - const int btmp1 = MULTIPLY(atmp1, FIX_2_053119869) + az2 + az4; - const int btmp2 = MULTIPLY(atmp2, FIX_3_072711026) + az2 + az3; - const int btmp3 = MULTIPLY(atmp3, FIX_1_501321110) + az1 + az4; - - int i = DESCALE_ZEROSHIFT(tmp10 + btmp3, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*0] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp10 - btmp3, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*7] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp11 + btmp2, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*1] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp11 - btmp2, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*6] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp12 + btmp1, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*2] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp12 - btmp1, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*5] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp13 + btmp0, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*3] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp13 - btmp0, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*4] = (uint8)CLAMP(i); - } - }; - - template <> - struct Col<1> - { - static void idct(uint8* pDst_ptr, const int* pTemp) - { - int dcval = DESCALE_ZEROSHIFT(pTemp[0], PASS1_BITS+3); - const uint8 dcval_clamped = (uint8)CLAMP(dcval); - pDst_ptr[0*8] = dcval_clamped; - pDst_ptr[1*8] = dcval_clamped; - pDst_ptr[2*8] = dcval_clamped; - pDst_ptr[3*8] = dcval_clamped; - pDst_ptr[4*8] = dcval_clamped; - pDst_ptr[5*8] = dcval_clamped; - pDst_ptr[6*8] = dcval_clamped; - pDst_ptr[7*8] = dcval_clamped; - } - }; - - static const uint8 s_idct_row_table[] = - { - 1,0,0,0,0,0,0,0, 2,0,0,0,0,0,0,0, 2,1,0,0,0,0,0,0, 2,1,1,0,0,0,0,0, 2,2,1,0,0,0,0,0, 3,2,1,0,0,0,0,0, 4,2,1,0,0,0,0,0, 4,3,1,0,0,0,0,0, - 4,3,2,0,0,0,0,0, 4,3,2,1,0,0,0,0, 4,3,2,1,1,0,0,0, 4,3,2,2,1,0,0,0, 4,3,3,2,1,0,0,0, 4,4,3,2,1,0,0,0, 5,4,3,2,1,0,0,0, 6,4,3,2,1,0,0,0, - 6,5,3,2,1,0,0,0, 6,5,4,2,1,0,0,0, 6,5,4,3,1,0,0,0, 6,5,4,3,2,0,0,0, 6,5,4,3,2,1,0,0, 6,5,4,3,2,1,1,0, 6,5,4,3,2,2,1,0, 6,5,4,3,3,2,1,0, - 6,5,4,4,3,2,1,0, 6,5,5,4,3,2,1,0, 6,6,5,4,3,2,1,0, 7,6,5,4,3,2,1,0, 8,6,5,4,3,2,1,0, 8,7,5,4,3,2,1,0, 8,7,6,4,3,2,1,0, 8,7,6,5,3,2,1,0, - 8,7,6,5,4,2,1,0, 8,7,6,5,4,3,1,0, 8,7,6,5,4,3,2,0, 8,7,6,5,4,3,2,1, 8,7,6,5,4,3,2,2, 8,7,6,5,4,3,3,2, 8,7,6,5,4,4,3,2, 8,7,6,5,5,4,3,2, - 8,7,6,6,5,4,3,2, 8,7,7,6,5,4,3,2, 8,8,7,6,5,4,3,2, 8,8,8,6,5,4,3,2, 8,8,8,7,5,4,3,2, 8,8,8,7,6,4,3,2, 8,8,8,7,6,5,3,2, 8,8,8,7,6,5,4,2, - 8,8,8,7,6,5,4,3, 8,8,8,7,6,5,4,4, 8,8,8,7,6,5,5,4, 8,8,8,7,6,6,5,4, 8,8,8,7,7,6,5,4, 8,8,8,8,7,6,5,4, 8,8,8,8,8,6,5,4, 8,8,8,8,8,7,5,4, - 
8,8,8,8,8,7,6,4, 8,8,8,8,8,7,6,5, 8,8,8,8,8,7,6,6, 8,8,8,8,8,7,7,6, 8,8,8,8,8,8,7,6, 8,8,8,8,8,8,8,6, 8,8,8,8,8,8,8,7, 8,8,8,8,8,8,8,8, - }; - - static const uint8 s_idct_col_table[] = { 1, 1, 2, 3, 3, 3, 3, 3, 3, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8 }; - - void idct(const jpgd_block_t* pSrc_ptr, uint8* pDst_ptr, int block_max_zag) - { - JPGD_ASSERT(block_max_zag >= 1); - JPGD_ASSERT(block_max_zag <= 64); - - if (block_max_zag == 1) - { - int k = ((pSrc_ptr[0] + 4) >> 3) + 128; - k = CLAMP(k); - k = k | (k<<8); - k = k | (k<<16); - - for (int i = 8; i > 0; i--) - { - *(int*)&pDst_ptr[0] = k; - *(int*)&pDst_ptr[4] = k; - pDst_ptr += 8; - } - return; - } - - int temp[64]; - - const jpgd_block_t* pSrc = pSrc_ptr; - int* pTemp = temp; - - const uint8* pRow_tab = &s_idct_row_table[(block_max_zag - 1) * 8]; - int i; - for (i = 8; i > 0; i--, pRow_tab++) - { - switch (*pRow_tab) - { - case 0: Row<0>::idct(pTemp, pSrc); break; - case 1: Row<1>::idct(pTemp, pSrc); break; - case 2: Row<2>::idct(pTemp, pSrc); break; - case 3: Row<3>::idct(pTemp, pSrc); break; - case 4: Row<4>::idct(pTemp, pSrc); break; - case 5: Row<5>::idct(pTemp, pSrc); break; - case 6: Row<6>::idct(pTemp, pSrc); break; - case 7: Row<7>::idct(pTemp, pSrc); break; - case 8: Row<8>::idct(pTemp, pSrc); break; - } - - pSrc += 8; - pTemp += 8; - } - - pTemp = temp; - - const int nonzero_rows = s_idct_col_table[block_max_zag - 1]; - for (i = 8; i > 0; i--) - { - switch (nonzero_rows) - { - case 1: Col<1>::idct(pDst_ptr, pTemp); break; - case 2: Col<2>::idct(pDst_ptr, pTemp); break; - case 3: Col<3>::idct(pDst_ptr, pTemp); break; - case 4: Col<4>::idct(pDst_ptr, pTemp); break; - case 5: Col<5>::idct(pDst_ptr, pTemp); break; - case 6: Col<6>::idct(pDst_ptr, pTemp); break; - case 7: Col<7>::idct(pDst_ptr, pTemp); break; - case 8: Col<8>::idct(pDst_ptr, pTemp); break; - } - - pTemp++; - pDst_ptr++; - } - } - - void idct_4x4(const jpgd_block_t* pSrc_ptr, uint8* pDst_ptr) - { - int temp[64]; - int* pTemp = temp; - const jpgd_block_t* pSrc = pSrc_ptr; - - for (int i = 4; i > 0; i--) - { - Row<4>::idct(pTemp, pSrc); - pSrc += 8; - pTemp += 8; - } - - pTemp = temp; - for (int i = 8; i > 0; i--) - { - Col<4>::idct(pDst_ptr, pTemp); - pTemp++; - pDst_ptr++; - } - } - - // Retrieve one character from the input stream. - inline uint jpeg_decoder::get_char() - { - // Any bytes remaining in buffer? - if (!m_in_buf_left) - { - // Try to get more bytes. - prep_in_buffer(); - // Still nothing to get? - if (!m_in_buf_left) - { - // Pad the end of the stream with 0xFF 0xD9 (EOI marker) - int t = m_tem_flag; - m_tem_flag ^= 1; - if (t) - return 0xD9; - else - return 0xFF; - } - } - - uint c = *m_pIn_buf_ofs++; - m_in_buf_left--; - - return c; - } - - // Same as previous method, except can indicate if the character is a pad character or not. - inline uint jpeg_decoder::get_char(bool *pPadding_flag) - { - if (!m_in_buf_left) - { - prep_in_buffer(); - if (!m_in_buf_left) - { - *pPadding_flag = true; - int t = m_tem_flag; - m_tem_flag ^= 1; - if (t) - return 0xD9; - else - return 0xFF; - } - } - - *pPadding_flag = false; - - uint c = *m_pIn_buf_ofs++; - m_in_buf_left--; - - return c; - } - - // Inserts a previously retrieved character back into the input buffer. 
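- // A rough sketch of how this pushback is used (see get_octet() just below): the octet
- // reader pulls a byte, and when it hits a 0xFF that is not a stuffed zero it pushes the
- // bytes back with stuff_char() and keeps returning 0xFF, so the entropy decoder stalls
- // at the marker instead of consuming it.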
- inline void jpeg_decoder::stuff_char(uint8 q) - { - *(--m_pIn_buf_ofs) = q; - m_in_buf_left++; - } - - // Retrieves one character from the input stream, but does not read past markers. Will continue to return 0xFF when a marker is encountered. - inline uint8 jpeg_decoder::get_octet() - { - bool padding_flag; - int c = get_char(&padding_flag); - - if (c == 0xFF) - { - if (padding_flag) - return 0xFF; - - c = get_char(&padding_flag); - if (padding_flag) - { - stuff_char(0xFF); - return 0xFF; - } - - if (c == 0x00) - return 0xFF; - else - { - stuff_char(static_cast(c)); - stuff_char(0xFF); - return 0xFF; - } - } - - return static_cast(c); - } - - // Retrieves a variable number of bits from the input stream. Does not recognize markers. - inline uint jpeg_decoder::get_bits(int num_bits) - { - if (!num_bits) - return 0; - - uint i = m_bit_buf >> (32 - num_bits); - - if ((m_bits_left -= num_bits) <= 0) - { - m_bit_buf <<= (num_bits += m_bits_left); - - uint c1 = get_char(); - uint c2 = get_char(); - m_bit_buf = (m_bit_buf & 0xFFFF0000) | (c1 << 8) | c2; - - m_bit_buf <<= -m_bits_left; - - m_bits_left += 16; - - JPGD_ASSERT(m_bits_left >= 0); - } - else - m_bit_buf <<= num_bits; - - return i; - } - - // Retrieves a variable number of bits from the input stream. Markers will not be read into the input bit buffer. Instead, an infinite number of all 1's will be returned when a marker is encountered. - inline uint jpeg_decoder::get_bits_no_markers(int num_bits) - { - if (!num_bits) - return 0; - - uint i = m_bit_buf >> (32 - num_bits); - - if ((m_bits_left -= num_bits) <= 0) - { - m_bit_buf <<= (num_bits += m_bits_left); - - if ((m_in_buf_left < 2) || (m_pIn_buf_ofs[0] == 0xFF) || (m_pIn_buf_ofs[1] == 0xFF)) - { - uint c1 = get_octet(); - uint c2 = get_octet(); - m_bit_buf |= (c1 << 8) | c2; - } - else - { - m_bit_buf |= ((uint)m_pIn_buf_ofs[0] << 8) | m_pIn_buf_ofs[1]; - m_in_buf_left -= 2; - m_pIn_buf_ofs += 2; - } - - m_bit_buf <<= -m_bits_left; - - m_bits_left += 16; - - JPGD_ASSERT(m_bits_left >= 0); - } - else - m_bit_buf <<= num_bits; - - return i; - } - - // Decodes a Huffman encoded symbol. - inline int jpeg_decoder::huff_decode(huff_tables *pH) - { - int symbol; - - // Check first 8-bits: do we have a complete symbol? - if ((symbol = pH->look_up[m_bit_buf >> 24]) < 0) - { - // Decode more bits, use a tree traversal to find symbol. - int ofs = 23; - do - { - symbol = pH->tree[-(int)(symbol + ((m_bit_buf >> ofs) & 1))]; - ofs--; - } while (symbol < 0); - - get_bits_no_markers(8 + (23 - ofs)); - } - else - get_bits_no_markers(pH->code_size[symbol]); - - return symbol; - } - - // Decodes a Huffman encoded symbol. - inline int jpeg_decoder::huff_decode(huff_tables *pH, int& extra_bits) - { - int symbol; - - // Check first 8-bits: do we have a complete symbol? - if ((symbol = pH->look_up2[m_bit_buf >> 24]) < 0) - { - // Use a tree traversal to find symbol. - int ofs = 23; - do - { - symbol = pH->tree[-(int)(symbol + ((m_bit_buf >> ofs) & 1))]; - ofs--; - } while (symbol < 0); - - get_bits_no_markers(8 + (23 - ofs)); - - extra_bits = get_bits_no_markers(symbol & 0xF); - } - else - { - JPGD_ASSERT(((symbol >> 8) & 31) == pH->code_size[symbol & 255] + ((symbol & 0x8000) ? 
(symbol & 15) : 0)); - - if (symbol & 0x8000) - { - get_bits_no_markers((symbol >> 8) & 31); - extra_bits = symbol >> 16; - } - else - { - int code_size = (symbol >> 8) & 31; - int num_extra_bits = symbol & 0xF; - int bits = code_size + num_extra_bits; - if (bits <= (m_bits_left + 16)) - extra_bits = get_bits_no_markers(bits) & ((1 << num_extra_bits) - 1); - else - { - get_bits_no_markers(code_size); - extra_bits = get_bits_no_markers(num_extra_bits); - } - } - - symbol &= 0xFF; - } - - return symbol; - } - - // Tables and macro used to fully decode the DPCM differences. - static const int s_extend_test[16] = { 0, 0x0001, 0x0002, 0x0004, 0x0008, 0x0010, 0x0020, 0x0040, 0x0080, 0x0100, 0x0200, 0x0400, 0x0800, 0x1000, 0x2000, 0x4000 }; - static const int s_extend_offset[16] = { 0, -1, -3, -7, -15, -31, -63, -127, -255, -511, -1023, -2047, -4095, -8191, -16383, -32767 }; - static const int s_extend_mask[] = { 0, (1<<0), (1<<1), (1<<2), (1<<3), (1<<4), (1<<5), (1<<6), (1<<7), (1<<8), (1<<9), (1<<10), (1<<11), (1<<12), (1<<13), (1<<14), (1<<15), (1<<16) }; -#define HUFF_EXTEND(x,s) ((x) < s_extend_test[s] ? (x) + s_extend_offset[s] : (x)) - - // Clamps a value between 0-255. - inline uint8 jpeg_decoder::clamp(int i) - { - if (static_cast(i) > 255) - i = (((~i) >> 31) & 0xFF); - - return static_cast(i); - } - - namespace DCT_Upsample - { - struct Matrix44 - { - typedef int Element_Type; - enum { NUM_ROWS = 4, NUM_COLS = 4 }; - - Element_Type v[NUM_ROWS][NUM_COLS]; - - inline int rows() const { return NUM_ROWS; } - inline int cols() const { return NUM_COLS; } - - inline const Element_Type & at(int r, int c) const { return v[r][c]; } - inline Element_Type & at(int r, int c) { return v[r][c]; } - - inline Matrix44() { } - - inline Matrix44& operator += (const Matrix44& a) - { - for (int r = 0; r < NUM_ROWS; r++) - { - at(r, 0) += a.at(r, 0); - at(r, 1) += a.at(r, 1); - at(r, 2) += a.at(r, 2); - at(r, 3) += a.at(r, 3); - } - return *this; - } - - inline Matrix44& operator -= (const Matrix44& a) - { - for (int r = 0; r < NUM_ROWS; r++) - { - at(r, 0) -= a.at(r, 0); - at(r, 1) -= a.at(r, 1); - at(r, 2) -= a.at(r, 2); - at(r, 3) -= a.at(r, 3); - } - return *this; - } - - friend inline Matrix44 operator + (const Matrix44& a, const Matrix44& b) - { - Matrix44 ret; - for (int r = 0; r < NUM_ROWS; r++) - { - ret.at(r, 0) = a.at(r, 0) + b.at(r, 0); - ret.at(r, 1) = a.at(r, 1) + b.at(r, 1); - ret.at(r, 2) = a.at(r, 2) + b.at(r, 2); - ret.at(r, 3) = a.at(r, 3) + b.at(r, 3); - } - return ret; - } - - friend inline Matrix44 operator - (const Matrix44& a, const Matrix44& b) - { - Matrix44 ret; - for (int r = 0; r < NUM_ROWS; r++) - { - ret.at(r, 0) = a.at(r, 0) - b.at(r, 0); - ret.at(r, 1) = a.at(r, 1) - b.at(r, 1); - ret.at(r, 2) = a.at(r, 2) - b.at(r, 2); - ret.at(r, 3) = a.at(r, 3) - b.at(r, 3); - } - return ret; - } - - static inline void add_and_store(jpgd_block_t* pDst, const Matrix44& a, const Matrix44& b) - { - for (int r = 0; r < 4; r++) - { - pDst[0*8 + r] = static_cast(a.at(r, 0) + b.at(r, 0)); - pDst[1*8 + r] = static_cast(a.at(r, 1) + b.at(r, 1)); - pDst[2*8 + r] = static_cast(a.at(r, 2) + b.at(r, 2)); - pDst[3*8 + r] = static_cast(a.at(r, 3) + b.at(r, 3)); - } - } - - static inline void sub_and_store(jpgd_block_t* pDst, const Matrix44& a, const Matrix44& b) - { - for (int r = 0; r < 4; r++) - { - pDst[0*8 + r] = static_cast(a.at(r, 0) - b.at(r, 0)); - pDst[1*8 + r] = static_cast(a.at(r, 1) - b.at(r, 1)); - pDst[2*8 + r] = static_cast(a.at(r, 2) - b.at(r, 2)); - pDst[3*8 + r] = static_cast(a.at(r, 
3) - b.at(r, 3)); - } - } - }; - - const int FRACT_BITS = 10; - const int SCALE = 1 << FRACT_BITS; - - typedef int Temp_Type; -#define D(i) (((i) + (SCALE >> 1)) >> FRACT_BITS) -#define F(i) ((int)((i) * SCALE + .5f)) - - // Any decent C++ compiler will optimize this at compile time to a 0, or an array access. -#define AT(c, r) ((((c)>=NUM_COLS)||((r)>=NUM_ROWS)) ? 0 : pSrc[(c)+(r)*8]) - - // NUM_ROWS/NUM_COLS = # of non-zero rows/cols in input matrix - template - struct P_Q - { - static void calc(Matrix44& P, Matrix44& Q, const jpgd_block_t* pSrc) - { - // 4x8 = 4x8 times 8x8, matrix 0 is constant - const Temp_Type X000 = AT(0, 0); - const Temp_Type X001 = AT(0, 1); - const Temp_Type X002 = AT(0, 2); - const Temp_Type X003 = AT(0, 3); - const Temp_Type X004 = AT(0, 4); - const Temp_Type X005 = AT(0, 5); - const Temp_Type X006 = AT(0, 6); - const Temp_Type X007 = AT(0, 7); - const Temp_Type X010 = D(F(0.415735f) * AT(1, 0) + F(0.791065f) * AT(3, 0) + F(-0.352443f) * AT(5, 0) + F(0.277785f) * AT(7, 0)); - const Temp_Type X011 = D(F(0.415735f) * AT(1, 1) + F(0.791065f) * AT(3, 1) + F(-0.352443f) * AT(5, 1) + F(0.277785f) * AT(7, 1)); - const Temp_Type X012 = D(F(0.415735f) * AT(1, 2) + F(0.791065f) * AT(3, 2) + F(-0.352443f) * AT(5, 2) + F(0.277785f) * AT(7, 2)); - const Temp_Type X013 = D(F(0.415735f) * AT(1, 3) + F(0.791065f) * AT(3, 3) + F(-0.352443f) * AT(5, 3) + F(0.277785f) * AT(7, 3)); - const Temp_Type X014 = D(F(0.415735f) * AT(1, 4) + F(0.791065f) * AT(3, 4) + F(-0.352443f) * AT(5, 4) + F(0.277785f) * AT(7, 4)); - const Temp_Type X015 = D(F(0.415735f) * AT(1, 5) + F(0.791065f) * AT(3, 5) + F(-0.352443f) * AT(5, 5) + F(0.277785f) * AT(7, 5)); - const Temp_Type X016 = D(F(0.415735f) * AT(1, 6) + F(0.791065f) * AT(3, 6) + F(-0.352443f) * AT(5, 6) + F(0.277785f) * AT(7, 6)); - const Temp_Type X017 = D(F(0.415735f) * AT(1, 7) + F(0.791065f) * AT(3, 7) + F(-0.352443f) * AT(5, 7) + F(0.277785f) * AT(7, 7)); - const Temp_Type X020 = AT(4, 0); - const Temp_Type X021 = AT(4, 1); - const Temp_Type X022 = AT(4, 2); - const Temp_Type X023 = AT(4, 3); - const Temp_Type X024 = AT(4, 4); - const Temp_Type X025 = AT(4, 5); - const Temp_Type X026 = AT(4, 6); - const Temp_Type X027 = AT(4, 7); - const Temp_Type X030 = D(F(0.022887f) * AT(1, 0) + F(-0.097545f) * AT(3, 0) + F(0.490393f) * AT(5, 0) + F(0.865723f) * AT(7, 0)); - const Temp_Type X031 = D(F(0.022887f) * AT(1, 1) + F(-0.097545f) * AT(3, 1) + F(0.490393f) * AT(5, 1) + F(0.865723f) * AT(7, 1)); - const Temp_Type X032 = D(F(0.022887f) * AT(1, 2) + F(-0.097545f) * AT(3, 2) + F(0.490393f) * AT(5, 2) + F(0.865723f) * AT(7, 2)); - const Temp_Type X033 = D(F(0.022887f) * AT(1, 3) + F(-0.097545f) * AT(3, 3) + F(0.490393f) * AT(5, 3) + F(0.865723f) * AT(7, 3)); - const Temp_Type X034 = D(F(0.022887f) * AT(1, 4) + F(-0.097545f) * AT(3, 4) + F(0.490393f) * AT(5, 4) + F(0.865723f) * AT(7, 4)); - const Temp_Type X035 = D(F(0.022887f) * AT(1, 5) + F(-0.097545f) * AT(3, 5) + F(0.490393f) * AT(5, 5) + F(0.865723f) * AT(7, 5)); - const Temp_Type X036 = D(F(0.022887f) * AT(1, 6) + F(-0.097545f) * AT(3, 6) + F(0.490393f) * AT(5, 6) + F(0.865723f) * AT(7, 6)); - const Temp_Type X037 = D(F(0.022887f) * AT(1, 7) + F(-0.097545f) * AT(3, 7) + F(0.490393f) * AT(5, 7) + F(0.865723f) * AT(7, 7)); - - // 4x4 = 4x8 times 8x4, matrix 1 is constant - P.at(0, 0) = X000; - P.at(0, 1) = D(X001 * F(0.415735f) + X003 * F(0.791065f) + X005 * F(-0.352443f) + X007 * F(0.277785f)); - P.at(0, 2) = X004; - P.at(0, 3) = D(X001 * F(0.022887f) + X003 * F(-0.097545f) + X005 * 
F(0.490393f) + X007 * F(0.865723f)); - P.at(1, 0) = X010; - P.at(1, 1) = D(X011 * F(0.415735f) + X013 * F(0.791065f) + X015 * F(-0.352443f) + X017 * F(0.277785f)); - P.at(1, 2) = X014; - P.at(1, 3) = D(X011 * F(0.022887f) + X013 * F(-0.097545f) + X015 * F(0.490393f) + X017 * F(0.865723f)); - P.at(2, 0) = X020; - P.at(2, 1) = D(X021 * F(0.415735f) + X023 * F(0.791065f) + X025 * F(-0.352443f) + X027 * F(0.277785f)); - P.at(2, 2) = X024; - P.at(2, 3) = D(X021 * F(0.022887f) + X023 * F(-0.097545f) + X025 * F(0.490393f) + X027 * F(0.865723f)); - P.at(3, 0) = X030; - P.at(3, 1) = D(X031 * F(0.415735f) + X033 * F(0.791065f) + X035 * F(-0.352443f) + X037 * F(0.277785f)); - P.at(3, 2) = X034; - P.at(3, 3) = D(X031 * F(0.022887f) + X033 * F(-0.097545f) + X035 * F(0.490393f) + X037 * F(0.865723f)); - // 40 muls 24 adds - - // 4x4 = 4x8 times 8x4, matrix 1 is constant - Q.at(0, 0) = D(X001 * F(0.906127f) + X003 * F(-0.318190f) + X005 * F(0.212608f) + X007 * F(-0.180240f)); - Q.at(0, 1) = X002; - Q.at(0, 2) = D(X001 * F(-0.074658f) + X003 * F(0.513280f) + X005 * F(0.768178f) + X007 * F(-0.375330f)); - Q.at(0, 3) = X006; - Q.at(1, 0) = D(X011 * F(0.906127f) + X013 * F(-0.318190f) + X015 * F(0.212608f) + X017 * F(-0.180240f)); - Q.at(1, 1) = X012; - Q.at(1, 2) = D(X011 * F(-0.074658f) + X013 * F(0.513280f) + X015 * F(0.768178f) + X017 * F(-0.375330f)); - Q.at(1, 3) = X016; - Q.at(2, 0) = D(X021 * F(0.906127f) + X023 * F(-0.318190f) + X025 * F(0.212608f) + X027 * F(-0.180240f)); - Q.at(2, 1) = X022; - Q.at(2, 2) = D(X021 * F(-0.074658f) + X023 * F(0.513280f) + X025 * F(0.768178f) + X027 * F(-0.375330f)); - Q.at(2, 3) = X026; - Q.at(3, 0) = D(X031 * F(0.906127f) + X033 * F(-0.318190f) + X035 * F(0.212608f) + X037 * F(-0.180240f)); - Q.at(3, 1) = X032; - Q.at(3, 2) = D(X031 * F(-0.074658f) + X033 * F(0.513280f) + X035 * F(0.768178f) + X037 * F(-0.375330f)); - Q.at(3, 3) = X036; - // 40 muls 24 adds - } - }; - - template - struct R_S - { - static void calc(Matrix44& R, Matrix44& S, const jpgd_block_t* pSrc) - { - // 4x8 = 4x8 times 8x8, matrix 0 is constant - const Temp_Type X100 = D(F(0.906127f) * AT(1, 0) + F(-0.318190f) * AT(3, 0) + F(0.212608f) * AT(5, 0) + F(-0.180240f) * AT(7, 0)); - const Temp_Type X101 = D(F(0.906127f) * AT(1, 1) + F(-0.318190f) * AT(3, 1) + F(0.212608f) * AT(5, 1) + F(-0.180240f) * AT(7, 1)); - const Temp_Type X102 = D(F(0.906127f) * AT(1, 2) + F(-0.318190f) * AT(3, 2) + F(0.212608f) * AT(5, 2) + F(-0.180240f) * AT(7, 2)); - const Temp_Type X103 = D(F(0.906127f) * AT(1, 3) + F(-0.318190f) * AT(3, 3) + F(0.212608f) * AT(5, 3) + F(-0.180240f) * AT(7, 3)); - const Temp_Type X104 = D(F(0.906127f) * AT(1, 4) + F(-0.318190f) * AT(3, 4) + F(0.212608f) * AT(5, 4) + F(-0.180240f) * AT(7, 4)); - const Temp_Type X105 = D(F(0.906127f) * AT(1, 5) + F(-0.318190f) * AT(3, 5) + F(0.212608f) * AT(5, 5) + F(-0.180240f) * AT(7, 5)); - const Temp_Type X106 = D(F(0.906127f) * AT(1, 6) + F(-0.318190f) * AT(3, 6) + F(0.212608f) * AT(5, 6) + F(-0.180240f) * AT(7, 6)); - const Temp_Type X107 = D(F(0.906127f) * AT(1, 7) + F(-0.318190f) * AT(3, 7) + F(0.212608f) * AT(5, 7) + F(-0.180240f) * AT(7, 7)); - const Temp_Type X110 = AT(2, 0); - const Temp_Type X111 = AT(2, 1); - const Temp_Type X112 = AT(2, 2); - const Temp_Type X113 = AT(2, 3); - const Temp_Type X114 = AT(2, 4); - const Temp_Type X115 = AT(2, 5); - const Temp_Type X116 = AT(2, 6); - const Temp_Type X117 = AT(2, 7); - const Temp_Type X120 = D(F(-0.074658f) * AT(1, 0) + F(0.513280f) * AT(3, 0) + F(0.768178f) * AT(5, 0) + F(-0.375330f) * AT(7, 0)); - 
const Temp_Type X121 = D(F(-0.074658f) * AT(1, 1) + F(0.513280f) * AT(3, 1) + F(0.768178f) * AT(5, 1) + F(-0.375330f) * AT(7, 1)); - const Temp_Type X122 = D(F(-0.074658f) * AT(1, 2) + F(0.513280f) * AT(3, 2) + F(0.768178f) * AT(5, 2) + F(-0.375330f) * AT(7, 2)); - const Temp_Type X123 = D(F(-0.074658f) * AT(1, 3) + F(0.513280f) * AT(3, 3) + F(0.768178f) * AT(5, 3) + F(-0.375330f) * AT(7, 3)); - const Temp_Type X124 = D(F(-0.074658f) * AT(1, 4) + F(0.513280f) * AT(3, 4) + F(0.768178f) * AT(5, 4) + F(-0.375330f) * AT(7, 4)); - const Temp_Type X125 = D(F(-0.074658f) * AT(1, 5) + F(0.513280f) * AT(3, 5) + F(0.768178f) * AT(5, 5) + F(-0.375330f) * AT(7, 5)); - const Temp_Type X126 = D(F(-0.074658f) * AT(1, 6) + F(0.513280f) * AT(3, 6) + F(0.768178f) * AT(5, 6) + F(-0.375330f) * AT(7, 6)); - const Temp_Type X127 = D(F(-0.074658f) * AT(1, 7) + F(0.513280f) * AT(3, 7) + F(0.768178f) * AT(5, 7) + F(-0.375330f) * AT(7, 7)); - const Temp_Type X130 = AT(6, 0); - const Temp_Type X131 = AT(6, 1); - const Temp_Type X132 = AT(6, 2); - const Temp_Type X133 = AT(6, 3); - const Temp_Type X134 = AT(6, 4); - const Temp_Type X135 = AT(6, 5); - const Temp_Type X136 = AT(6, 6); - const Temp_Type X137 = AT(6, 7); - // 80 muls 48 adds - - // 4x4 = 4x8 times 8x4, matrix 1 is constant - R.at(0, 0) = X100; - R.at(0, 1) = D(X101 * F(0.415735f) + X103 * F(0.791065f) + X105 * F(-0.352443f) + X107 * F(0.277785f)); - R.at(0, 2) = X104; - R.at(0, 3) = D(X101 * F(0.022887f) + X103 * F(-0.097545f) + X105 * F(0.490393f) + X107 * F(0.865723f)); - R.at(1, 0) = X110; - R.at(1, 1) = D(X111 * F(0.415735f) + X113 * F(0.791065f) + X115 * F(-0.352443f) + X117 * F(0.277785f)); - R.at(1, 2) = X114; - R.at(1, 3) = D(X111 * F(0.022887f) + X113 * F(-0.097545f) + X115 * F(0.490393f) + X117 * F(0.865723f)); - R.at(2, 0) = X120; - R.at(2, 1) = D(X121 * F(0.415735f) + X123 * F(0.791065f) + X125 * F(-0.352443f) + X127 * F(0.277785f)); - R.at(2, 2) = X124; - R.at(2, 3) = D(X121 * F(0.022887f) + X123 * F(-0.097545f) + X125 * F(0.490393f) + X127 * F(0.865723f)); - R.at(3, 0) = X130; - R.at(3, 1) = D(X131 * F(0.415735f) + X133 * F(0.791065f) + X135 * F(-0.352443f) + X137 * F(0.277785f)); - R.at(3, 2) = X134; - R.at(3, 3) = D(X131 * F(0.022887f) + X133 * F(-0.097545f) + X135 * F(0.490393f) + X137 * F(0.865723f)); - // 40 muls 24 adds - // 4x4 = 4x8 times 8x4, matrix 1 is constant - S.at(0, 0) = D(X101 * F(0.906127f) + X103 * F(-0.318190f) + X105 * F(0.212608f) + X107 * F(-0.180240f)); - S.at(0, 1) = X102; - S.at(0, 2) = D(X101 * F(-0.074658f) + X103 * F(0.513280f) + X105 * F(0.768178f) + X107 * F(-0.375330f)); - S.at(0, 3) = X106; - S.at(1, 0) = D(X111 * F(0.906127f) + X113 * F(-0.318190f) + X115 * F(0.212608f) + X117 * F(-0.180240f)); - S.at(1, 1) = X112; - S.at(1, 2) = D(X111 * F(-0.074658f) + X113 * F(0.513280f) + X115 * F(0.768178f) + X117 * F(-0.375330f)); - S.at(1, 3) = X116; - S.at(2, 0) = D(X121 * F(0.906127f) + X123 * F(-0.318190f) + X125 * F(0.212608f) + X127 * F(-0.180240f)); - S.at(2, 1) = X122; - S.at(2, 2) = D(X121 * F(-0.074658f) + X123 * F(0.513280f) + X125 * F(0.768178f) + X127 * F(-0.375330f)); - S.at(2, 3) = X126; - S.at(3, 0) = D(X131 * F(0.906127f) + X133 * F(-0.318190f) + X135 * F(0.212608f) + X137 * F(-0.180240f)); - S.at(3, 1) = X132; - S.at(3, 2) = D(X131 * F(-0.074658f) + X133 * F(0.513280f) + X135 * F(0.768178f) + X137 * F(-0.375330f)); - S.at(3, 3) = X136; - // 40 muls 24 adds - } - }; - } // end namespace DCT_Upsample - - // Unconditionally frees all allocated m_blocks. 
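- // Memory management here is a simple arena: alloc() below carves requests out of a singly
- // linked list of mem_block chunks and nothing is ever freed individually, so shutting down
- // (or bailing out via stop_decoding) only has to walk the list once. Rough lifetime:
- //   alloc(n)           -> bump m_used_count in an existing block, or jpgd_malloc a new one
- //   free_all_blocks()  -> jpgd_free every block and clear the stream pointer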
- void jpeg_decoder::free_all_blocks() - { - m_pStream = NULL; - for (mem_block *b = m_pMem_blocks; b; ) - { - mem_block *n = b->m_pNext; - jpgd_free(b); - b = n; - } - m_pMem_blocks = NULL; - } - - // This method handles all errors. - // It could easily be changed to use C++ exceptions. - void jpeg_decoder::stop_decoding(jpgd_status status) - { - m_error_code = status; - free_all_blocks(); - longjmp(m_jmp_state, status); - - // we shouldn't get here as longjmp shouldn't return, but we put it here to make it explicit - // that this function doesn't return, otherwise we get this error: - // - // error : function declared 'noreturn' should not return - exit(1); - } - - void *jpeg_decoder::alloc(size_t nSize, bool zero) - { - nSize = (JPGD_MAX(nSize, 1) + 3) & ~3; - char *rv = NULL; - for (mem_block *b = m_pMem_blocks; b; b = b->m_pNext) - { - if ((b->m_used_count + nSize) <= b->m_size) - { - rv = b->m_data + b->m_used_count; - b->m_used_count += nSize; - break; - } - } - if (!rv) - { - int capacity = JPGD_MAX(32768 - 256, (nSize + 2047) & ~2047); - mem_block *b = (mem_block*)jpgd_malloc(sizeof(mem_block) + capacity); - if (!b) stop_decoding(JPGD_NOTENOUGHMEM); - b->m_pNext = m_pMem_blocks; m_pMem_blocks = b; - b->m_used_count = nSize; - b->m_size = capacity; - rv = b->m_data; - } - if (zero) memset(rv, 0, nSize); - return rv; - } - - void jpeg_decoder::word_clear(void *p, uint16 c, uint n) - { - uint8 *pD = (uint8*)p; - const uint8 l = c & 0xFF, h = (c >> 8) & 0xFF; - while (n) - { - pD[0] = l; pD[1] = h; pD += 2; - n--; - } - } - - // Refill the input buffer. - // This method will sit in a loop until (A) the buffer is full or (B) - // the stream's read() method reports and end of file condition. - void jpeg_decoder::prep_in_buffer() - { - m_in_buf_left = 0; - m_pIn_buf_ofs = m_in_buf; - - if (m_eof_flag) - return; - - do - { - int bytes_read = m_pStream->read(m_in_buf + m_in_buf_left, JPGD_IN_BUF_SIZE - m_in_buf_left, &m_eof_flag); - if (bytes_read == -1) - stop_decoding(JPGD_STREAM_READ); - - m_in_buf_left += bytes_read; - } while ((m_in_buf_left < JPGD_IN_BUF_SIZE) && (!m_eof_flag)); - - m_total_bytes_read += m_in_buf_left; - - // Pad the end of the block with M_EOI (prevents the decompressor from going off the rails if the stream is invalid). - // (This dates way back to when this decompressor was written in C/asm, and the all-asm Huffman decoder did some fancy things to increase perf.) - word_clear(m_pIn_buf_ofs + m_in_buf_left, 0xD9FF, 64); - } - - // Read a Huffman code table. 
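- // Rough layout of a DHT segment as this parser expects it (baseline JPEG, ITU-T T.81):
- //   [2 bytes]  segment length (includes these two bytes)
- //   repeated until the length is used up:
- //     [1 byte]   table class/id (AC flag in bit 4, table index in the low nibble)
- //     [16 bytes] number of codes of each length 1..16
- //     [N bytes]  the N symbol values, where N is the sum of the 16 counts (<= 255 here)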
- void jpeg_decoder::read_dht_marker() - { - int i, index, count; - uint8 huff_num[17]; - uint8 huff_val[256]; - - uint num_left = get_bits(16); - - if (num_left < 2) - stop_decoding(JPGD_BAD_DHT_MARKER); - - num_left -= 2; - - while (num_left) - { - index = get_bits(8); - - huff_num[0] = 0; - - count = 0; - - for (i = 1; i <= 16; i++) - { - huff_num[i] = static_cast(get_bits(8)); - count += huff_num[i]; - } - - if (count > 255) - stop_decoding(JPGD_BAD_DHT_COUNTS); - - for (i = 0; i < count; i++) - huff_val[i] = static_cast(get_bits(8)); - - i = 1 + 16 + count; - - if (num_left < (uint)i) - stop_decoding(JPGD_BAD_DHT_MARKER); - - num_left -= i; - - if ((index & 0x10) > 0x10) - stop_decoding(JPGD_BAD_DHT_INDEX); - - index = (index & 0x0F) + ((index & 0x10) >> 4) * (JPGD_MAX_HUFF_TABLES >> 1); - - if (index >= JPGD_MAX_HUFF_TABLES) - stop_decoding(JPGD_BAD_DHT_INDEX); - - if (!m_huff_num[index]) - m_huff_num[index] = (uint8 *)alloc(17); - - if (!m_huff_val[index]) - m_huff_val[index] = (uint8 *)alloc(256); - - m_huff_ac[index] = (index & 0x10) != 0; - memcpy(m_huff_num[index], huff_num, 17); - memcpy(m_huff_val[index], huff_val, 256); - } - } - - // Read a quantization table. - void jpeg_decoder::read_dqt_marker() - { - int n, i, prec; - uint num_left; - uint temp; - - num_left = get_bits(16); - - if (num_left < 2) - stop_decoding(JPGD_BAD_DQT_MARKER); - - num_left -= 2; - - while (num_left) - { - n = get_bits(8); - prec = n >> 4; - n &= 0x0F; - - if (n >= JPGD_MAX_QUANT_TABLES) - stop_decoding(JPGD_BAD_DQT_TABLE); - - if (!m_quant[n]) - m_quant[n] = (jpgd_quant_t *)alloc(64 * sizeof(jpgd_quant_t)); - - // read quantization entries, in zag order - for (i = 0; i < 64; i++) - { - temp = get_bits(8); - - if (prec) - temp = (temp << 8) + get_bits(8); - - m_quant[n][i] = static_cast(temp); - } - - i = 64 + 1; - - if (prec) - i += 64; - - if (num_left < (uint)i) - stop_decoding(JPGD_BAD_DQT_LENGTH); - - num_left -= i; - } - } - - // Read the start of frame (SOF) marker. - void jpeg_decoder::read_sof_marker() - { - int i; - uint num_left; - - num_left = get_bits(16); - - if (get_bits(8) != 8) /* precision: sorry, only 8-bit precision is supported right now */ - stop_decoding(JPGD_BAD_PRECISION); - - m_image_y_size = get_bits(16); - - if ((m_image_y_size < 1) || (m_image_y_size > JPGD_MAX_HEIGHT)) - stop_decoding(JPGD_BAD_HEIGHT); - - m_image_x_size = get_bits(16); - - if ((m_image_x_size < 1) || (m_image_x_size > JPGD_MAX_WIDTH)) - stop_decoding(JPGD_BAD_WIDTH); - - m_comps_in_frame = get_bits(8); - - if (m_comps_in_frame > JPGD_MAX_COMPONENTS) - stop_decoding(JPGD_TOO_MANY_COMPONENTS); - - if (num_left != (uint)(m_comps_in_frame * 3 + 8)) - stop_decoding(JPGD_BAD_SOF_LENGTH); - - for (i = 0; i < m_comps_in_frame; i++) - { - m_comp_ident[i] = get_bits(8); - m_comp_h_samp[i] = get_bits(4); - m_comp_v_samp[i] = get_bits(4); - m_comp_quant[i] = get_bits(8); - } - } - - // Used to skip unrecognized markers. - void jpeg_decoder::skip_variable_marker() - { - uint num_left; - - num_left = get_bits(16); - - if (num_left < 2) - stop_decoding(JPGD_BAD_VARIABLE_MARKER); - - num_left -= 2; - - while (num_left) - { - get_bits(8); - num_left--; - } - } - - // Read a define restart interval (DRI) marker. - void jpeg_decoder::read_dri_marker() - { - if (get_bits(16) != 4) - stop_decoding(JPGD_BAD_DRI_LENGTH); - - m_restart_interval = get_bits(16); - } - - // Read a start of scan (SOS) marker. 
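- // Expected SOS layout (baseline JPEG): [2 bytes] length, [1 byte] component count, then per
- // component [1 byte] component id + [1 byte] DC table (high nibble) / AC table (low nibble),
- // followed by [1 byte] spectral start, [1 byte] spectral end, [1 byte] successive high/low
- // nibbles. For non-progressive scans the spectral range is forced to 0..63 below.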
- void jpeg_decoder::read_sos_marker() - { - uint num_left; - int i, ci, n, c, cc; - - num_left = get_bits(16); - - n = get_bits(8); - - m_comps_in_scan = n; - - num_left -= 3; - - if ( (num_left != (uint)(n * 2 + 3)) || (n < 1) || (n > JPGD_MAX_COMPS_IN_SCAN) ) - stop_decoding(JPGD_BAD_SOS_LENGTH); - - for (i = 0; i < n; i++) - { - cc = get_bits(8); - c = get_bits(8); - num_left -= 2; - - for (ci = 0; ci < m_comps_in_frame; ci++) - if (cc == m_comp_ident[ci]) - break; - - if (ci >= m_comps_in_frame) - stop_decoding(JPGD_BAD_SOS_COMP_ID); - - m_comp_list[i] = ci; - m_comp_dc_tab[ci] = (c >> 4) & 15; - m_comp_ac_tab[ci] = (c & 15) + (JPGD_MAX_HUFF_TABLES >> 1); - } - - m_spectral_start = get_bits(8); - m_spectral_end = get_bits(8); - m_successive_high = get_bits(4); - m_successive_low = get_bits(4); - - if (!m_progressive_flag) - { - m_spectral_start = 0; - m_spectral_end = 63; - } - - num_left -= 3; - - while (num_left) /* read past whatever is num_left */ - { - get_bits(8); - num_left--; - } - } - - // Finds the next marker. - int jpeg_decoder::next_marker() - { - uint c, bytes; - - bytes = 0; - - do - { - do - { - bytes++; - c = get_bits(8); - } while (c != 0xFF); - - do - { - c = get_bits(8); - } while (c == 0xFF); - - } while (c == 0); - - // If bytes > 0 here, there where extra bytes before the marker (not good). - - return c; - } - - // Process markers. Returns when an SOFx, SOI, EOI, or SOS marker is - // encountered. - int jpeg_decoder::process_markers() - { - int c; - - for ( ; ; ) - { - c = next_marker(); - - switch (c) - { - case M_SOF0: - case M_SOF1: - case M_SOF2: - case M_SOF3: - case M_SOF5: - case M_SOF6: - case M_SOF7: - // case M_JPG: - case M_SOF9: - case M_SOF10: - case M_SOF11: - case M_SOF13: - case M_SOF14: - case M_SOF15: - case M_SOI: - case M_EOI: - case M_SOS: - { - return c; - } - case M_DHT: - { - read_dht_marker(); - break; - } - // No arithmitic support - dumb patents! - case M_DAC: - { - stop_decoding(JPGD_NO_ARITHMITIC_SUPPORT); - break; - } - case M_DQT: - { - read_dqt_marker(); - break; - } - case M_DRI: - { - read_dri_marker(); - break; - } - //case M_APP0: /* no need to read the JFIF marker */ - - case M_JPG: - case M_RST0: /* no parameters */ - case M_RST1: - case M_RST2: - case M_RST3: - case M_RST4: - case M_RST5: - case M_RST6: - case M_RST7: - case M_TEM: - { - stop_decoding(JPGD_UNEXPECTED_MARKER); - break; - } - default: /* must be DNL, DHP, EXP, APPn, JPGn, COM, or RESn or APP0 */ - { - skip_variable_marker(); - break; - } - } - } - } - - // Finds the start of image (SOI) marker. - // This code is rather defensive: it only checks the first 512 bytes to avoid - // false positives. - void jpeg_decoder::locate_soi_marker() - { - uint lastchar, thischar; - uint bytesleft; - - lastchar = get_bits(8); - - thischar = get_bits(8); - - /* ok if it's a normal JPEG file without a special header */ - - if ((lastchar == 0xFF) && (thischar == M_SOI)) - return; - - bytesleft = 4096; //512; - - for ( ; ; ) - { - if (--bytesleft == 0) - stop_decoding(JPGD_NOT_JPEG); - - lastchar = thischar; - - thischar = get_bits(8); - - if (lastchar == 0xFF) - { - if (thischar == M_SOI) - break; - else if (thischar == M_EOI) // get_bits will keep returning M_EOI if we read past the end - stop_decoding(JPGD_NOT_JPEG); - } - } - - // Check the next character after marker: if it's not 0xFF, it can't be the start of the next marker, so the file is bad. 
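- // Note: the byte is peeked from the top of the 32-bit bit buffer rather than read from the
- // stream, because the get_bits(8) calls above keep the buffer topped up with the bytes that
- // follow the marker.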
- thischar = (m_bit_buf >> 24) & 0xFF; - - if (thischar != 0xFF) - stop_decoding(JPGD_NOT_JPEG); - } - - // Find a start of frame (SOF) marker. - void jpeg_decoder::locate_sof_marker() - { - locate_soi_marker(); - - int c = process_markers(); - - switch (c) - { - case M_SOF2: - m_progressive_flag = JPGD_TRUE; - case M_SOF0: /* baseline DCT */ - case M_SOF1: /* extended sequential DCT */ - { - read_sof_marker(); - break; - } - case M_SOF9: /* Arithmitic coding */ - { - stop_decoding(JPGD_NO_ARITHMITIC_SUPPORT); - break; - } - default: - { - stop_decoding(JPGD_UNSUPPORTED_MARKER); - break; - } - } - } - - // Find a start of scan (SOS) marker. - int jpeg_decoder::locate_sos_marker() - { - int c; - - c = process_markers(); - - if (c == M_EOI) - return JPGD_FALSE; - else if (c != M_SOS) - stop_decoding(JPGD_UNEXPECTED_MARKER); - - read_sos_marker(); - - return JPGD_TRUE; - } - - // Reset everything to default/uninitialized state. - void jpeg_decoder::init(jpeg_decoder_stream *pStream) - { - m_pMem_blocks = NULL; - m_error_code = JPGD_SUCCESS; - m_ready_flag = false; - m_image_x_size = m_image_y_size = 0; - m_pStream = pStream; - m_progressive_flag = JPGD_FALSE; - - memset(m_huff_ac, 0, sizeof(m_huff_ac)); - memset(m_huff_num, 0, sizeof(m_huff_num)); - memset(m_huff_val, 0, sizeof(m_huff_val)); - memset(m_quant, 0, sizeof(m_quant)); - - m_scan_type = 0; - m_comps_in_frame = 0; - - memset(m_comp_h_samp, 0, sizeof(m_comp_h_samp)); - memset(m_comp_v_samp, 0, sizeof(m_comp_v_samp)); - memset(m_comp_quant, 0, sizeof(m_comp_quant)); - memset(m_comp_ident, 0, sizeof(m_comp_ident)); - memset(m_comp_h_blocks, 0, sizeof(m_comp_h_blocks)); - memset(m_comp_v_blocks, 0, sizeof(m_comp_v_blocks)); - - m_comps_in_scan = 0; - memset(m_comp_list, 0, sizeof(m_comp_list)); - memset(m_comp_dc_tab, 0, sizeof(m_comp_dc_tab)); - memset(m_comp_ac_tab, 0, sizeof(m_comp_ac_tab)); - - m_spectral_start = 0; - m_spectral_end = 0; - m_successive_low = 0; - m_successive_high = 0; - m_max_mcu_x_size = 0; - m_max_mcu_y_size = 0; - m_blocks_per_mcu = 0; - m_max_blocks_per_row = 0; - m_mcus_per_row = 0; - m_mcus_per_col = 0; - m_expanded_blocks_per_component = 0; - m_expanded_blocks_per_mcu = 0; - m_expanded_blocks_per_row = 0; - m_freq_domain_chroma_upsample = false; - - memset(m_mcu_org, 0, sizeof(m_mcu_org)); - - m_total_lines_left = 0; - m_mcu_lines_left = 0; - m_real_dest_bytes_per_scan_line = 0; - m_dest_bytes_per_scan_line = 0; - m_dest_bytes_per_pixel = 0; - - memset(m_pHuff_tabs, 0, sizeof(m_pHuff_tabs)); - - memset(m_dc_coeffs, 0, sizeof(m_dc_coeffs)); - memset(m_ac_coeffs, 0, sizeof(m_ac_coeffs)); - memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu)); - - m_eob_run = 0; - - memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu)); - - m_pIn_buf_ofs = m_in_buf; - m_in_buf_left = 0; - m_eof_flag = false; - m_tem_flag = 0; - - memset(m_in_buf_pad_start, 0, sizeof(m_in_buf_pad_start)); - memset(m_in_buf, 0, sizeof(m_in_buf)); - memset(m_in_buf_pad_end, 0, sizeof(m_in_buf_pad_end)); - - m_restart_interval = 0; - m_restarts_left = 0; - m_next_restart_num = 0; - - m_max_mcus_per_row = 0; - m_max_blocks_per_mcu = 0; - m_max_mcus_per_col = 0; - - memset(m_last_dc_val, 0, sizeof(m_last_dc_val)); - m_pMCU_coefficients = NULL; - m_pSample_buf = NULL; - - m_total_bytes_read = 0; - - m_pScan_line_0 = NULL; - m_pScan_line_1 = NULL; - - // Ready the input buffer. - prep_in_buffer(); - - // Prime the bit buffer. 
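- // m_bit_buf is a 32-bit window onto the stream, kept left-aligned so huff_decode() can peek
- // at its top bits. The two dummy 16-bit reads below just pull the first four bytes of the
- // file into that window before marker parsing starts.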
- m_bits_left = 16; - m_bit_buf = 0; - - get_bits(16); - get_bits(16); - - for (int i = 0; i < JPGD_MAX_BLOCKS_PER_MCU; i++) - m_mcu_block_max_zag[i] = 64; - } - -#define SCALEBITS 16 -#define ONE_HALF ((int) 1 << (SCALEBITS-1)) -#define FIX(x) ((int) ((x) * (1L<> SCALEBITS; - m_cbb[i] = ( FIX(1.77200f) * k + ONE_HALF) >> SCALEBITS; - m_crg[i] = (-FIX(0.71414f)) * k; - m_cbg[i] = (-FIX(0.34414f)) * k + ONE_HALF; - } - } - - // This method throws back into the stream any bytes that where read - // into the bit buffer during initial marker scanning. - void jpeg_decoder::fix_in_buffer() - { - // In case any 0xFF's where pulled into the buffer during marker scanning. - JPGD_ASSERT((m_bits_left & 7) == 0); - - if (m_bits_left == 16) - stuff_char( (uint8)(m_bit_buf & 0xFF)); - - if (m_bits_left >= 8) - stuff_char( (uint8)((m_bit_buf >> 8) & 0xFF)); - - stuff_char((uint8)((m_bit_buf >> 16) & 0xFF)); - stuff_char((uint8)((m_bit_buf >> 24) & 0xFF)); - - m_bits_left = 16; - get_bits_no_markers(16); - get_bits_no_markers(16); - } - - void jpeg_decoder::transform_mcu(int mcu_row) - { - jpgd_block_t* pSrc_ptr = m_pMCU_coefficients; - uint8* pDst_ptr = m_pSample_buf + mcu_row * m_blocks_per_mcu * 64; - - for (int mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++) - { - idct(pSrc_ptr, pDst_ptr, m_mcu_block_max_zag[mcu_block]); - pSrc_ptr += 64; - pDst_ptr += 64; - } - } - - static const uint8 s_max_rc[64] = - { - 17, 18, 34, 50, 50, 51, 52, 52, 52, 68, 84, 84, 84, 84, 85, 86, 86, 86, 86, 86, - 102, 118, 118, 118, 118, 118, 118, 119, 120, 120, 120, 120, 120, 120, 120, 136, - 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, - 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136 - }; - - void jpeg_decoder::transform_mcu_expand(int mcu_row) - { - jpgd_block_t* pSrc_ptr = m_pMCU_coefficients; - uint8* pDst_ptr = m_pSample_buf + mcu_row * m_expanded_blocks_per_mcu * 64; - - // Y IDCT - int mcu_block; - for (mcu_block = 0; mcu_block < m_expanded_blocks_per_component; mcu_block++) - { - idct(pSrc_ptr, pDst_ptr, m_mcu_block_max_zag[mcu_block]); - pSrc_ptr += 64; - pDst_ptr += 64; - } - - // Chroma IDCT, with upsampling - jpgd_block_t temp_block[64]; - - for (int i = 0; i < 2; i++) - { - DCT_Upsample::Matrix44 P, Q, R, S; - - JPGD_ASSERT(m_mcu_block_max_zag[mcu_block] >= 1); - JPGD_ASSERT(m_mcu_block_max_zag[mcu_block] <= 64); - - switch (s_max_rc[m_mcu_block_max_zag[mcu_block++] - 1]) - { - case 1*16+1: - DCT_Upsample::P_Q<1, 1>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<1, 1>::calc(R, S, pSrc_ptr); - break; - case 1*16+2: - DCT_Upsample::P_Q<1, 2>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<1, 2>::calc(R, S, pSrc_ptr); - break; - case 2*16+2: - DCT_Upsample::P_Q<2, 2>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<2, 2>::calc(R, S, pSrc_ptr); - break; - case 3*16+2: - DCT_Upsample::P_Q<3, 2>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<3, 2>::calc(R, S, pSrc_ptr); - break; - case 3*16+3: - DCT_Upsample::P_Q<3, 3>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<3, 3>::calc(R, S, pSrc_ptr); - break; - case 3*16+4: - DCT_Upsample::P_Q<3, 4>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<3, 4>::calc(R, S, pSrc_ptr); - break; - case 4*16+4: - DCT_Upsample::P_Q<4, 4>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<4, 4>::calc(R, S, pSrc_ptr); - break; - case 5*16+4: - DCT_Upsample::P_Q<5, 4>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<5, 4>::calc(R, S, pSrc_ptr); - break; - case 5*16+5: - DCT_Upsample::P_Q<5, 5>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<5, 5>::calc(R, S, pSrc_ptr); - break; - 
case 5*16+6: - DCT_Upsample::P_Q<5, 6>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<5, 6>::calc(R, S, pSrc_ptr); - break; - case 6*16+6: - DCT_Upsample::P_Q<6, 6>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<6, 6>::calc(R, S, pSrc_ptr); - break; - case 7*16+6: - DCT_Upsample::P_Q<7, 6>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<7, 6>::calc(R, S, pSrc_ptr); - break; - case 7*16+7: - DCT_Upsample::P_Q<7, 7>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<7, 7>::calc(R, S, pSrc_ptr); - break; - case 7*16+8: - DCT_Upsample::P_Q<7, 8>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<7, 8>::calc(R, S, pSrc_ptr); - break; - case 8*16+8: - DCT_Upsample::P_Q<8, 8>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<8, 8>::calc(R, S, pSrc_ptr); - break; - default: - JPGD_ASSERT(false); - } - - DCT_Upsample::Matrix44 a(P + Q); P -= Q; - DCT_Upsample::Matrix44& b = P; - DCT_Upsample::Matrix44 c(R + S); R -= S; - DCT_Upsample::Matrix44& d = R; - - DCT_Upsample::Matrix44::add_and_store(temp_block, a, c); - idct_4x4(temp_block, pDst_ptr); - pDst_ptr += 64; - - DCT_Upsample::Matrix44::sub_and_store(temp_block, a, c); - idct_4x4(temp_block, pDst_ptr); - pDst_ptr += 64; - - DCT_Upsample::Matrix44::add_and_store(temp_block, b, d); - idct_4x4(temp_block, pDst_ptr); - pDst_ptr += 64; - - DCT_Upsample::Matrix44::sub_and_store(temp_block, b, d); - idct_4x4(temp_block, pDst_ptr); - pDst_ptr += 64; - - pSrc_ptr += 64; - } - } - - // Loads and dequantizes the next row of (already decoded) coefficients. - // Progressive images only. - void jpeg_decoder::load_next_row() - { - int i; - jpgd_block_t *p; - jpgd_quant_t *q; - int mcu_row, mcu_block, row_block = 0; - int component_num, component_id; - int block_x_mcu[JPGD_MAX_COMPONENTS]; - - memset(block_x_mcu, 0, JPGD_MAX_COMPONENTS * sizeof(int)); - - for (mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++) - { - int block_x_mcu_ofs = 0, block_y_mcu_ofs = 0; - - for (mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++) - { - component_id = m_mcu_org[mcu_block]; - q = m_quant[m_comp_quant[component_id]]; - - p = m_pMCU_coefficients + 64 * mcu_block; - - jpgd_block_t* pAC = coeff_buf_getp(m_ac_coeffs[component_id], block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs); - jpgd_block_t* pDC = coeff_buf_getp(m_dc_coeffs[component_id], block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs); - p[0] = pDC[0]; - memcpy(&p[1], &pAC[1], 63 * sizeof(jpgd_block_t)); - - for (i = 63; i > 0; i--) - if (p[g_ZAG[i]]) - break; - - m_mcu_block_max_zag[mcu_block] = i + 1; - - for ( ; i >= 0; i--) - if (p[g_ZAG[i]]) - p[g_ZAG[i]] = static_cast(p[g_ZAG[i]] * q[i]); - - row_block++; - - if (m_comps_in_scan == 1) - block_x_mcu[component_id]++; - else - { - if (++block_x_mcu_ofs == m_comp_h_samp[component_id]) - { - block_x_mcu_ofs = 0; - - if (++block_y_mcu_ofs == m_comp_v_samp[component_id]) - { - block_y_mcu_ofs = 0; - - block_x_mcu[component_id] += m_comp_h_samp[component_id]; - } - } - } - } - - if (m_freq_domain_chroma_upsample) - transform_mcu_expand(mcu_row); - else - transform_mcu(mcu_row); - } - - if (m_comps_in_scan == 1) - m_block_y_mcu[m_comp_list[0]]++; - else - { - for (component_num = 0; component_num < m_comps_in_scan; component_num++) - { - component_id = m_comp_list[component_num]; - - m_block_y_mcu[component_id] += m_comp_v_samp[component_id]; - } - } - } - - // Restart interval processing. 
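- // When a DRI marker has set m_restart_interval, the encoder emits an RSTn marker
- // (0xFFD0..0xFFD7, with n cycling 0..7) after every m_restart_interval MCUs. The routine
- // below resynchronizes on that marker, verifies it is the expected one, resets the DC
- // predictors and the EOB run, and re-primes the bit buffer so decoding can continue.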
- void jpeg_decoder::process_restart() - { - int i; - int c = 0; - - // Align to a byte boundry - // FIXME: Is this really necessary? get_bits_no_markers() never reads in markers! - //get_bits_no_markers(m_bits_left & 7); - - // Let's scan a little bit to find the marker, but not _too_ far. - // 1536 is a "fudge factor" that determines how much to scan. - for (i = 1536; i > 0; i--) - if (get_char() == 0xFF) - break; - - if (i == 0) - stop_decoding(JPGD_BAD_RESTART_MARKER); - - for ( ; i > 0; i--) - if ((c = get_char()) != 0xFF) - break; - - if (i == 0) - stop_decoding(JPGD_BAD_RESTART_MARKER); - - // Is it the expected marker? If not, something bad happened. - if (c != (m_next_restart_num + M_RST0)) - stop_decoding(JPGD_BAD_RESTART_MARKER); - - // Reset each component's DC prediction values. - memset(&m_last_dc_val, 0, m_comps_in_frame * sizeof(uint)); - - m_eob_run = 0; - - m_restarts_left = m_restart_interval; - - m_next_restart_num = (m_next_restart_num + 1) & 7; - - // Get the bit buffer going again... - - m_bits_left = 16; - get_bits_no_markers(16); - get_bits_no_markers(16); - } - - static inline int dequantize_ac(int c, int q) { c *= q; return c; } - - // Decodes and dequantizes the next row of coefficients. - void jpeg_decoder::decode_next_row() - { - int row_block = 0; - - for (int mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++) - { - if ((m_restart_interval) && (m_restarts_left == 0)) - process_restart(); - - jpgd_block_t* p = m_pMCU_coefficients; - for (int mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++, p += 64) - { - int component_id = m_mcu_org[mcu_block]; - jpgd_quant_t* q = m_quant[m_comp_quant[component_id]]; - - int r, s; - s = huff_decode(m_pHuff_tabs[m_comp_dc_tab[component_id]], r); - s = HUFF_EXTEND(r, s); - - m_last_dc_val[component_id] = (s += m_last_dc_val[component_id]); - - p[0] = static_cast(s * q[0]); - - int prev_num_set = m_mcu_block_max_zag[mcu_block]; - - huff_tables *pH = m_pHuff_tabs[m_comp_ac_tab[component_id]]; - - int k; - for (k = 1; k < 64; k++) - { - int extra_bits; - s = huff_decode(pH, extra_bits); - - r = s >> 4; - s &= 15; - - if (s) - { - if (r) - { - if ((k + r) > 63) - stop_decoding(JPGD_DECODE_ERROR); - - if (k < prev_num_set) - { - int n = JPGD_MIN(r, prev_num_set - k); - int kt = k; - while (n--) - p[g_ZAG[kt++]] = 0; - } - - k += r; - } - - s = HUFF_EXTEND(extra_bits, s); - - JPGD_ASSERT(k < 64); - - p[g_ZAG[k]] = static_cast(dequantize_ac(s, q[k])); //s * q[k]; - } - else - { - if (r == 15) - { - if ((k + 16) > 64) - stop_decoding(JPGD_DECODE_ERROR); - - if (k < prev_num_set) - { - int n = JPGD_MIN(16, prev_num_set - k); - int kt = k; - while (n--) - { - JPGD_ASSERT(kt <= 63); - p[g_ZAG[kt++]] = 0; - } - } - - k += 16 - 1; // - 1 because the loop counter is k - // BEGIN EPIC MOD - JPGD_ASSERT(k < 64 && p[g_ZAG[k]] == 0); - // END EPIC MOD - } - else - break; - } - } - - if (k < prev_num_set) - { - int kt = k; - while (kt < prev_num_set) - p[g_ZAG[kt++]] = 0; - } - - m_mcu_block_max_zag[mcu_block] = k; - - row_block++; - } - - if (m_freq_domain_chroma_upsample) - transform_mcu_expand(mcu_row); - else - transform_mcu(mcu_row); - - m_restarts_left--; - } - } - - // YCbCr H1V1 (1x1:1:1, 3 m_blocks per MCU) to RGB - void jpeg_decoder::H1V1Convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d = m_pScan_line_0; - uint8 *s = m_pSample_buf + row * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int j = 0; j < 8; j++) - { - int y = s[j]; - int cb = s[64+j]; - int cr = s[128+j]; - - if (jpg_format == 
ERGBFormatJPG::BGRA) - { - d[0] = clamp(y + m_cbb[cb]); - d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16)); - d[2] = clamp(y + m_crr[cr]); - d[3] = 255; - } - else - { - d[0] = clamp(y + m_crr[cr]); - d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16)); - d[2] = clamp(y + m_cbb[cb]); - d[3] = 255; - } - d += 4; - } - - s += 64*3; - } - } - - // YCbCr H2V1 (2x1:1:1, 4 m_blocks per MCU) to RGB - void jpeg_decoder::H2V1Convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d0 = m_pScan_line_0; - uint8 *y = m_pSample_buf + row * 8; - uint8 *c = m_pSample_buf + 2*64 + row * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int l = 0; l < 2; l++) - { - for (int j = 0; j < 4; j++) - { - int cb = c[0]; - int cr = c[64]; - - int rc = m_crr[cr]; - int gc = ((m_crg[cr] + m_cbg[cb]) >> 16); - int bc = m_cbb[cb]; - - int yy = y[j<<1]; - if (jpg_format == ERGBFormatJPG::BGRA) - { - d0[0] = clamp(yy+bc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+rc); - d0[3] = 255; - yy = y[(j<<1)+1]; - d0[4] = clamp(yy+bc); - d0[5] = clamp(yy+gc); - d0[6] = clamp(yy+rc); - d0[7] = 255; - } - else - { - d0[0] = clamp(yy+rc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+bc); - d0[3] = 255; - yy = y[(j<<1)+1]; - d0[4] = clamp(yy+rc); - d0[5] = clamp(yy+gc); - d0[6] = clamp(yy+bc); - d0[7] = 255; - } - - d0 += 8; - - c++; - } - y += 64; - } - - y += 64*4 - 64*2; - c += 64*4 - 8; - } - } - - // YCbCr H2V1 (1x2:1:1, 4 m_blocks per MCU) to RGB - void jpeg_decoder::H1V2Convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d0 = m_pScan_line_0; - uint8 *d1 = m_pScan_line_1; - uint8 *y; - uint8 *c; - - if (row < 8) - y = m_pSample_buf + row * 8; - else - y = m_pSample_buf + 64*1 + (row & 7) * 8; - - c = m_pSample_buf + 64*2 + (row >> 1) * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int j = 0; j < 8; j++) - { - int cb = c[0+j]; - int cr = c[64+j]; - - int rc = m_crr[cr]; - int gc = ((m_crg[cr] + m_cbg[cb]) >> 16); - int bc = m_cbb[cb]; - - int yy = y[j]; - if (jpg_format == ERGBFormatJPG::BGRA) - { - d0[0] = clamp(yy+bc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+rc); - d0[3] = 255; - yy = y[8+j]; - d1[0] = clamp(yy+bc); - d1[1] = clamp(yy+gc); - d1[2] = clamp(yy+rc); - d1[3] = 255; - } - else - { - d0[0] = clamp(yy+rc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+bc); - d0[3] = 255; - yy = y[8+j]; - d1[0] = clamp(yy+rc); - d1[1] = clamp(yy+gc); - d1[2] = clamp(yy+bc); - d1[3] = 255; - } - - d0 += 4; - d1 += 4; - } - - y += 64*4; - c += 64*4; - } - } - - // YCbCr H2V2 (2x2:1:1, 6 m_blocks per MCU) to RGB - void jpeg_decoder::H2V2Convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d0 = m_pScan_line_0; - uint8 *d1 = m_pScan_line_1; - uint8 *y; - uint8 *c; - - if (row < 8) - y = m_pSample_buf + row * 8; - else - y = m_pSample_buf + 64*2 + (row & 7) * 8; - - c = m_pSample_buf + 64*4 + (row >> 1) * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int l = 0; l < 2; l++) - { - for (int j = 0; j < 8; j += 2) - { - int cb = c[0]; - int cr = c[64]; - - int rc = m_crr[cr]; - int gc = ((m_crg[cr] + m_cbg[cb]) >> 16); - int bc = m_cbb[cb]; - - int yy = y[j]; - if (jpg_format == ERGBFormatJPG::BGRA) - { - d0[0] = clamp(yy+bc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+rc); - d0[3] = 255; - yy = y[j+1]; - d0[4] = clamp(yy+bc); - d0[5] = clamp(yy+gc); - d0[6] = clamp(yy+rc); - d0[7] = 255; - yy = y[j+8]; - d1[0] = clamp(yy+bc); - d1[1] = clamp(yy+gc); - d1[2] = clamp(yy+rc); - d1[3] = 255; - yy = y[j+8+1]; - d1[4] = clamp(yy+bc); - d1[5] = 
clamp(yy+gc); - d1[6] = clamp(yy+rc); - d1[7] = 255; - } - else - { - d0[0] = clamp(yy+rc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+bc); - d0[3] = 255; - yy = y[j+1]; - d0[4] = clamp(yy+rc); - d0[5] = clamp(yy+gc); - d0[6] = clamp(yy+bc); - d0[7] = 255; - yy = y[j+8]; - d1[0] = clamp(yy+rc); - d1[1] = clamp(yy+gc); - d1[2] = clamp(yy+bc); - d1[3] = 255; - yy = y[j+8+1]; - d1[4] = clamp(yy+rc); - d1[5] = clamp(yy+gc); - d1[6] = clamp(yy+bc); - d1[7] = 255; - } - - d0 += 8; - d1 += 8; - - c++; - } - y += 64; - } - - y += 64*6 - 64*2; - c += 64*6 - 8; - } - } - - // Y (1 block per MCU) to 8-bit grayscale - void jpeg_decoder::gray_convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d = m_pScan_line_0; - uint8 *s = m_pSample_buf + row * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - *(uint *)d = *(uint *)s; - *(uint *)(&d[4]) = *(uint *)(&s[4]); - - s += 64; - d += 8; - } - } - - void jpeg_decoder::expanded_convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - - uint8* Py = m_pSample_buf + (row / 8) * 64 * m_comp_h_samp[0] + (row & 7) * 8; - - uint8* d = m_pScan_line_0; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int k = 0; k < m_max_mcu_x_size; k += 8) - { - const int Y_ofs = k * 8; - const int Cb_ofs = Y_ofs + 64 * m_expanded_blocks_per_component; - const int Cr_ofs = Y_ofs + 64 * m_expanded_blocks_per_component * 2; - for (int j = 0; j < 8; j++) - { - int y = Py[Y_ofs + j]; - int cb = Py[Cb_ofs + j]; - int cr = Py[Cr_ofs + j]; - - if (jpg_format == ERGBFormatJPG::BGRA) - { - d[0] = clamp(y + m_cbb[cb]); - d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16)); - d[2] = clamp(y + m_crr[cr]); - d[3] = 255; - } - else - { - d[0] = clamp(y + m_crr[cr]); - d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16)); - d[2] = clamp(y + m_cbb[cb]); - d[3] = 255; - } - - d += 4; - } - } - - Py += 64 * m_expanded_blocks_per_mcu; - } - } - - // Find end of image (EOI) marker, so we can return to the user the exact size of the input stream. - void jpeg_decoder::find_eoi() - { - if (!m_progressive_flag) - { - // Attempt to read the EOI marker. - //get_bits_no_markers(m_bits_left & 7); - - // Prime the bit buffer - m_bits_left = 16; - get_bits(16); - get_bits(16); - - // The next marker _should_ be EOI - process_markers(); - } - - m_total_bytes_read -= m_in_buf_left; - } - - int jpeg_decoder::decode(const void** pScan_line, uint* pScan_line_len) - { - if ((m_error_code) || (!m_ready_flag)) - return JPGD_FAILED; - - if (m_total_lines_left == 0) - return JPGD_DONE; - - if (m_mcu_lines_left == 0) - { - if (setjmp(m_jmp_state)) - return JPGD_FAILED; - - if (m_progressive_flag) - load_next_row(); - else - decode_next_row(); - - // Find the EOI marker if that was the last row. 
- if (m_total_lines_left <= m_max_mcu_y_size) - find_eoi(); - - m_mcu_lines_left = m_max_mcu_y_size; - } - - if (m_freq_domain_chroma_upsample) - { - expanded_convert(); - *pScan_line = m_pScan_line_0; - } - else - { - switch (m_scan_type) - { - case JPGD_YH2V2: - { - if ((m_mcu_lines_left & 1) == 0) - { - H2V2Convert(); - *pScan_line = m_pScan_line_0; - } - else - *pScan_line = m_pScan_line_1; - - break; - } - case JPGD_YH2V1: - { - H2V1Convert(); - *pScan_line = m_pScan_line_0; - break; - } - case JPGD_YH1V2: - { - if ((m_mcu_lines_left & 1) == 0) - { - H1V2Convert(); - *pScan_line = m_pScan_line_0; - } - else - *pScan_line = m_pScan_line_1; - - break; - } - case JPGD_YH1V1: - { - H1V1Convert(); - *pScan_line = m_pScan_line_0; - break; - } - case JPGD_GRAYSCALE: - { - gray_convert(); - *pScan_line = m_pScan_line_0; - - break; - } - } - } - - *pScan_line_len = m_real_dest_bytes_per_scan_line; - - m_mcu_lines_left--; - m_total_lines_left--; - - return JPGD_SUCCESS; - } - - // Creates the tables needed for efficient Huffman decoding. - void jpeg_decoder::make_huff_table(int index, huff_tables *pH) - { - int p, i, l, si; - uint8 huffsize[257]; - uint huffcode[257]; - uint code; - uint subtree; - int code_size; - int lastp; - int nextfreeentry; - int currententry; - - pH->ac_table = m_huff_ac[index] != 0; - - p = 0; - - for (l = 1; l <= 16; l++) - { - for (i = 1; i <= m_huff_num[index][l]; i++) - huffsize[p++] = static_cast(l); - } - - huffsize[p] = 0; - - lastp = p; - - code = 0; - si = huffsize[0]; - p = 0; - - while (huffsize[p]) - { - while (huffsize[p] == si) - { - huffcode[p++] = code; - code++; - } - - code <<= 1; - si++; - } - - memset(pH->look_up, 0, sizeof(pH->look_up)); - memset(pH->look_up2, 0, sizeof(pH->look_up2)); - memset(pH->tree, 0, sizeof(pH->tree)); - memset(pH->code_size, 0, sizeof(pH->code_size)); - - nextfreeentry = -1; - - p = 0; - - while (p < lastp) - { - i = m_huff_val[index][p]; - code = huffcode[p]; - code_size = huffsize[p]; - - pH->code_size[i] = static_cast(code_size); - - if (code_size <= 8) - { - code <<= (8 - code_size); - - for (l = 1 << (8 - code_size); l > 0; l--) - { - JPGD_ASSERT(i < 256); - - pH->look_up[code] = i; - - bool has_extrabits = false; - int extra_bits = 0; - int num_extra_bits = i & 15; - - int bits_to_fetch = code_size; - if (num_extra_bits) - { - int total_codesize = code_size + num_extra_bits; - if (total_codesize <= 8) - { - has_extrabits = true; - extra_bits = ((1 << num_extra_bits) - 1) & (code >> (8 - total_codesize)); - JPGD_ASSERT(extra_bits <= 0x7FFF); - bits_to_fetch += num_extra_bits; - } - } - - if (!has_extrabits) - pH->look_up2[code] = i | (bits_to_fetch << 8); - else - pH->look_up2[code] = i | 0x8000 | (extra_bits << 16) | (bits_to_fetch << 8); - - code++; - } - } - else - { - subtree = (code >> (code_size - 8)) & 0xFF; - - currententry = pH->look_up[subtree]; - - if (currententry == 0) - { - pH->look_up[subtree] = currententry = nextfreeentry; - pH->look_up2[subtree] = currententry = nextfreeentry; - - nextfreeentry -= 2; - } - - code <<= (16 - (code_size - 8)); - - for (l = code_size; l > 9; l--) - { - if ((code & 0x8000) == 0) - currententry--; - - if (pH->tree[-currententry - 1] == 0) - { - pH->tree[-currententry - 1] = nextfreeentry; - - currententry = nextfreeentry; - - nextfreeentry -= 2; - } - else - currententry = pH->tree[-currententry - 1]; - - code <<= 1; - } - - if ((code & 0x8000) == 0) - currententry--; - - pH->tree[-currententry - 1] = i; - } - - p++; - } - } - - // Verifies the quantization tables needed for 
this scan are available. - void jpeg_decoder::check_quant_tables() - { - for (int i = 0; i < m_comps_in_scan; i++) - if (m_quant[m_comp_quant[m_comp_list[i]]] == NULL) - stop_decoding(JPGD_UNDEFINED_QUANT_TABLE); - } - - // Verifies that all the Huffman tables needed for this scan are available. - void jpeg_decoder::check_huff_tables() - { - for (int i = 0; i < m_comps_in_scan; i++) - { - if ((m_spectral_start == 0) && (m_huff_num[m_comp_dc_tab[m_comp_list[i]]] == NULL)) - stop_decoding(JPGD_UNDEFINED_HUFF_TABLE); - - if ((m_spectral_end > 0) && (m_huff_num[m_comp_ac_tab[m_comp_list[i]]] == NULL)) - stop_decoding(JPGD_UNDEFINED_HUFF_TABLE); - } - - for (int i = 0; i < JPGD_MAX_HUFF_TABLES; i++) - if (m_huff_num[i]) - { - if (!m_pHuff_tabs[i]) - m_pHuff_tabs[i] = (huff_tables *)alloc(sizeof(huff_tables)); - - make_huff_table(i, m_pHuff_tabs[i]); - } - } - - // Determines the component order inside each MCU. - // Also calcs how many MCU's are on each row, etc. - void jpeg_decoder::calc_mcu_block_order() - { - int component_num, component_id; - int max_h_samp = 0, max_v_samp = 0; - - for (component_id = 0; component_id < m_comps_in_frame; component_id++) - { - if (m_comp_h_samp[component_id] > max_h_samp) - max_h_samp = m_comp_h_samp[component_id]; - - if (m_comp_v_samp[component_id] > max_v_samp) - max_v_samp = m_comp_v_samp[component_id]; - } - - for (component_id = 0; component_id < m_comps_in_frame; component_id++) - { - m_comp_h_blocks[component_id] = ((((m_image_x_size * m_comp_h_samp[component_id]) + (max_h_samp - 1)) / max_h_samp) + 7) / 8; - m_comp_v_blocks[component_id] = ((((m_image_y_size * m_comp_v_samp[component_id]) + (max_v_samp - 1)) / max_v_samp) + 7) / 8; - } - - if (m_comps_in_scan == 1) - { - m_mcus_per_row = m_comp_h_blocks[m_comp_list[0]]; - m_mcus_per_col = m_comp_v_blocks[m_comp_list[0]]; - } - else - { - m_mcus_per_row = (((m_image_x_size + 7) / 8) + (max_h_samp - 1)) / max_h_samp; - m_mcus_per_col = (((m_image_y_size + 7) / 8) + (max_v_samp - 1)) / max_v_samp; - } - - if (m_comps_in_scan == 1) - { - m_mcu_org[0] = m_comp_list[0]; - - m_blocks_per_mcu = 1; - } - else - { - m_blocks_per_mcu = 0; - - for (component_num = 0; component_num < m_comps_in_scan; component_num++) - { - int num_blocks; - - component_id = m_comp_list[component_num]; - - num_blocks = m_comp_h_samp[component_id] * m_comp_v_samp[component_id]; - - while (num_blocks--) - m_mcu_org[m_blocks_per_mcu++] = component_id; - } - } - } - - // Starts a new scan. - int jpeg_decoder::init_scan() - { - if (!locate_sos_marker()) - return JPGD_FALSE; - - calc_mcu_block_order(); - - check_huff_tables(); - - check_quant_tables(); - - memset(m_last_dc_val, 0, m_comps_in_frame * sizeof(uint)); - - m_eob_run = 0; - - if (m_restart_interval) - { - m_restarts_left = m_restart_interval; - m_next_restart_num = 0; - } - - fix_in_buffer(); - - return JPGD_TRUE; - } - - // Starts a frame. Determines if the number of components or sampling factors - // are supported. 
- void jpeg_decoder::init_frame() - { - int i; - - if (m_comps_in_frame == 1) - { - if ((m_comp_h_samp[0] != 1) || (m_comp_v_samp[0] != 1)) - stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS); - - m_scan_type = JPGD_GRAYSCALE; - m_max_blocks_per_mcu = 1; - m_max_mcu_x_size = 8; - m_max_mcu_y_size = 8; - } - else if (m_comps_in_frame == 3) - { - if ( ((m_comp_h_samp[1] != 1) || (m_comp_v_samp[1] != 1)) || - ((m_comp_h_samp[2] != 1) || (m_comp_v_samp[2] != 1)) ) - stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS); - - if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 1)) - { - m_scan_type = JPGD_YH1V1; - - m_max_blocks_per_mcu = 3; - m_max_mcu_x_size = 8; - m_max_mcu_y_size = 8; - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 1)) - { - m_scan_type = JPGD_YH2V1; - m_max_blocks_per_mcu = 4; - m_max_mcu_x_size = 16; - m_max_mcu_y_size = 8; - } - else if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 2)) - { - m_scan_type = JPGD_YH1V2; - m_max_blocks_per_mcu = 4; - m_max_mcu_x_size = 8; - m_max_mcu_y_size = 16; - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 2)) - { - m_scan_type = JPGD_YH2V2; - m_max_blocks_per_mcu = 6; - m_max_mcu_x_size = 16; - m_max_mcu_y_size = 16; - } - else - stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS); - } - else - stop_decoding(JPGD_UNSUPPORTED_COLORSPACE); - - m_max_mcus_per_row = (m_image_x_size + (m_max_mcu_x_size - 1)) / m_max_mcu_x_size; - m_max_mcus_per_col = (m_image_y_size + (m_max_mcu_y_size - 1)) / m_max_mcu_y_size; - - // These values are for the *destination* pixels: after conversion. - if (m_scan_type == JPGD_GRAYSCALE) - m_dest_bytes_per_pixel = 1; - else - m_dest_bytes_per_pixel = 4; - - m_dest_bytes_per_scan_line = ((m_image_x_size + 15) & 0xFFF0) * m_dest_bytes_per_pixel; - - m_real_dest_bytes_per_scan_line = (m_image_x_size * m_dest_bytes_per_pixel); - - // Initialize two scan line buffers. - m_pScan_line_0 = (uint8 *)alloc(m_dest_bytes_per_scan_line, true); - if ((m_scan_type == JPGD_YH1V2) || (m_scan_type == JPGD_YH2V2)) - m_pScan_line_1 = (uint8 *)alloc(m_dest_bytes_per_scan_line, true); - - m_max_blocks_per_row = m_max_mcus_per_row * m_max_blocks_per_mcu; - - // Should never happen - if (m_max_blocks_per_row > JPGD_MAX_BLOCKS_PER_ROW) - stop_decoding(JPGD_ASSERTION_ERROR); - - // Allocate the coefficient buffer, enough for one MCU - m_pMCU_coefficients = (jpgd_block_t*)alloc(m_max_blocks_per_mcu * 64 * sizeof(jpgd_block_t)); - - for (i = 0; i < m_max_blocks_per_mcu; i++) - m_mcu_block_max_zag[i] = 64; - - m_expanded_blocks_per_component = m_comp_h_samp[0] * m_comp_v_samp[0]; - m_expanded_blocks_per_mcu = m_expanded_blocks_per_component * m_comps_in_frame; - m_expanded_blocks_per_row = m_max_mcus_per_row * m_expanded_blocks_per_mcu; - // Freq. domain chroma upsampling is only supported for H2V2 subsampling factor. -// BEGIN EPIC MOD -#if JPGD_SUPPORT_FREQ_DOMAIN_UPSAMPLING - m_freq_domain_chroma_upsample = (m_expanded_blocks_per_mcu == 4*3); -#else - m_freq_domain_chroma_upsample = 0; -#endif -// END EPIC MOD - - if (m_freq_domain_chroma_upsample) - m_pSample_buf = (uint8 *)alloc(m_expanded_blocks_per_row * 64); - else - m_pSample_buf = (uint8 *)alloc(m_max_blocks_per_row * 64); - - m_total_lines_left = m_image_y_size; - - m_mcu_lines_left = 0; - - create_look_ups(); - } - - // The coeff_buf series of methods originally stored the coefficients - // into a "virtual" file which was located in EMS, XMS, or a disk file. A cache - // was used to make this process more efficient. Now, we can store the entire - // thing in RAM. 
- jpeg_decoder::coeff_buf* jpeg_decoder::coeff_buf_open(int block_num_x, int block_num_y, int block_len_x, int block_len_y) - { - coeff_buf* cb = (coeff_buf*)alloc(sizeof(coeff_buf)); - - cb->block_num_x = block_num_x; - cb->block_num_y = block_num_y; - cb->block_len_x = block_len_x; - cb->block_len_y = block_len_y; - cb->block_size = (block_len_x * block_len_y) * sizeof(jpgd_block_t); - cb->pData = (uint8 *)alloc(cb->block_size * block_num_x * block_num_y, true); - return cb; - } - - inline jpgd_block_t *jpeg_decoder::coeff_buf_getp(coeff_buf *cb, int block_x, int block_y) - { - JPGD_ASSERT((block_x < cb->block_num_x) && (block_y < cb->block_num_y)); - return (jpgd_block_t *)(cb->pData + block_x * cb->block_size + block_y * (cb->block_size * cb->block_num_x)); - } - - // The following methods decode the various types of m_blocks encountered - // in progressively encoded images. - void jpeg_decoder::decode_block_dc_first(jpeg_decoder *pD, int component_id, int block_x, int block_y) - { - int s, r; - jpgd_block_t *p = pD->coeff_buf_getp(pD->m_dc_coeffs[component_id], block_x, block_y); - - if ((s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_dc_tab[component_id]])) != 0) - { - r = pD->get_bits_no_markers(s); - s = HUFF_EXTEND(r, s); - } - - pD->m_last_dc_val[component_id] = (s += pD->m_last_dc_val[component_id]); - - p[0] = static_cast(s << pD->m_successive_low); - } - - void jpeg_decoder::decode_block_dc_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y) - { - if (pD->get_bits_no_markers(1)) - { - jpgd_block_t *p = pD->coeff_buf_getp(pD->m_dc_coeffs[component_id], block_x, block_y); - - p[0] |= (1 << pD->m_successive_low); - } - } - - void jpeg_decoder::decode_block_ac_first(jpeg_decoder *pD, int component_id, int block_x, int block_y) - { - int k, s, r; - - if (pD->m_eob_run) - { - pD->m_eob_run--; - return; - } - - jpgd_block_t *p = pD->coeff_buf_getp(pD->m_ac_coeffs[component_id], block_x, block_y); - - for (k = pD->m_spectral_start; k <= pD->m_spectral_end; k++) - { - s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_ac_tab[component_id]]); - - r = s >> 4; - s &= 15; - - if (s) - { - if ((k += r) > 63) - pD->stop_decoding(JPGD_DECODE_ERROR); - - r = pD->get_bits_no_markers(s); - s = HUFF_EXTEND(r, s); - - p[g_ZAG[k]] = static_cast(s << pD->m_successive_low); - } - else - { - if (r == 15) - { - if ((k += 15) > 63) - pD->stop_decoding(JPGD_DECODE_ERROR); - } - else - { - pD->m_eob_run = 1 << r; - - if (r) - pD->m_eob_run += pD->get_bits_no_markers(r); - - pD->m_eob_run--; - - break; - } - } - } - } - - void jpeg_decoder::decode_block_ac_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y) - { - int s, k, r; - int p1 = 1 << pD->m_successive_low; - int m1 = (-1) << pD->m_successive_low; - jpgd_block_t *p = pD->coeff_buf_getp(pD->m_ac_coeffs[component_id], block_x, block_y); - - k = pD->m_spectral_start; - - if (pD->m_eob_run == 0) - { - for ( ; k <= pD->m_spectral_end; k++) - { - s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_ac_tab[component_id]]); - - r = s >> 4; - s &= 15; - - if (s) - { - if (s != 1) - pD->stop_decoding(JPGD_DECODE_ERROR); - - if (pD->get_bits_no_markers(1)) - s = p1; - else - s = m1; - } - else - { - if (r != 15) - { - pD->m_eob_run = 1 << r; - - if (r) - pD->m_eob_run += pD->get_bits_no_markers(r); - - break; - } - } - - do - { - // BEGIN EPIC MOD - JPGD_ASSERT(k < 64); - // END EPIC MOD - - jpgd_block_t *this_coef = p + g_ZAG[k]; - - if (*this_coef != 0) - { - if (pD->get_bits_no_markers(1)) - { - if ((*this_coef & p1) == 0) - { - if 
(*this_coef >= 0) - *this_coef = static_cast(*this_coef + p1); - else - *this_coef = static_cast(*this_coef + m1); - } - } - } - else - { - if (--r < 0) - break; - } - - k++; - - } while (k <= pD->m_spectral_end); - - if ((s) && (k < 64)) - { - p[g_ZAG[k]] = static_cast(s); - } - } - } - - if (pD->m_eob_run > 0) - { - for ( ; k <= pD->m_spectral_end; k++) - { - // BEGIN EPIC MOD - JPGD_ASSERT(k < 64); - // END EPIC MOD - - jpgd_block_t *this_coef = p + g_ZAG[k]; - - if (*this_coef != 0) - { - if (pD->get_bits_no_markers(1)) - { - if ((*this_coef & p1) == 0) - { - if (*this_coef >= 0) - *this_coef = static_cast(*this_coef + p1); - else - *this_coef = static_cast(*this_coef + m1); - } - } - } - } - - pD->m_eob_run--; - } - } - - // Decode a scan in a progressively encoded image. - void jpeg_decoder::decode_scan(pDecode_block_func decode_block_func) - { - int mcu_row, mcu_col, mcu_block; - int block_x_mcu[JPGD_MAX_COMPONENTS], m_block_y_mcu[JPGD_MAX_COMPONENTS]; - - memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu)); - - for (mcu_col = 0; mcu_col < m_mcus_per_col; mcu_col++) - { - int component_num, component_id; - - memset(block_x_mcu, 0, sizeof(block_x_mcu)); - - for (mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++) - { - int block_x_mcu_ofs = 0, block_y_mcu_ofs = 0; - - if ((m_restart_interval) && (m_restarts_left == 0)) - process_restart(); - - for (mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++) - { - component_id = m_mcu_org[mcu_block]; - - decode_block_func(this, component_id, block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs); - - if (m_comps_in_scan == 1) - block_x_mcu[component_id]++; - else - { - if (++block_x_mcu_ofs == m_comp_h_samp[component_id]) - { - block_x_mcu_ofs = 0; - - if (++block_y_mcu_ofs == m_comp_v_samp[component_id]) - { - block_y_mcu_ofs = 0; - block_x_mcu[component_id] += m_comp_h_samp[component_id]; - } - } - } - } - - m_restarts_left--; - } - - if (m_comps_in_scan == 1) - m_block_y_mcu[m_comp_list[0]]++; - else - { - for (component_num = 0; component_num < m_comps_in_scan; component_num++) - { - component_id = m_comp_list[component_num]; - m_block_y_mcu[component_id] += m_comp_v_samp[component_id]; - } - } - } - } - - // Decode a progressively encoded image. - void jpeg_decoder::init_progressive() - { - int i; - - if (m_comps_in_frame == 4) - stop_decoding(JPGD_UNSUPPORTED_COLORSPACE); - - // Allocate the coefficient buffers. 
- for (i = 0; i < m_comps_in_frame; i++) - { - m_dc_coeffs[i] = coeff_buf_open(m_max_mcus_per_row * m_comp_h_samp[i], m_max_mcus_per_col * m_comp_v_samp[i], 1, 1); - m_ac_coeffs[i] = coeff_buf_open(m_max_mcus_per_row * m_comp_h_samp[i], m_max_mcus_per_col * m_comp_v_samp[i], 8, 8); - } - - for ( ; ; ) - { - int dc_only_scan, refinement_scan; - pDecode_block_func decode_block_func; - - if (!init_scan()) - break; - - dc_only_scan = (m_spectral_start == 0); - refinement_scan = (m_successive_high != 0); - - if ((m_spectral_start > m_spectral_end) || (m_spectral_end > 63)) - stop_decoding(JPGD_BAD_SOS_SPECTRAL); - - if (dc_only_scan) - { - if (m_spectral_end) - stop_decoding(JPGD_BAD_SOS_SPECTRAL); - } - else if (m_comps_in_scan != 1) /* AC scans can only contain one component */ - stop_decoding(JPGD_BAD_SOS_SPECTRAL); - - if ((refinement_scan) && (m_successive_low != m_successive_high - 1)) - stop_decoding(JPGD_BAD_SOS_SUCCESSIVE); - - if (dc_only_scan) - { - if (refinement_scan) - decode_block_func = decode_block_dc_refine; - else - decode_block_func = decode_block_dc_first; - } - else - { - if (refinement_scan) - decode_block_func = decode_block_ac_refine; - else - decode_block_func = decode_block_ac_first; - } - - decode_scan(decode_block_func); - - m_bits_left = 16; - get_bits(16); - get_bits(16); - } - - m_comps_in_scan = m_comps_in_frame; - - for (i = 0; i < m_comps_in_frame; i++) - m_comp_list[i] = i; - - calc_mcu_block_order(); - } - - void jpeg_decoder::init_sequential() - { - if (!init_scan()) - stop_decoding(JPGD_UNEXPECTED_MARKER); - } - - void jpeg_decoder::decode_start() - { - init_frame(); - - if (m_progressive_flag) - init_progressive(); - else - init_sequential(); - } - - void jpeg_decoder::decode_init(jpeg_decoder_stream *pStream) - { - init(pStream); - locate_sof_marker(); - } - - jpeg_decoder::jpeg_decoder(jpeg_decoder_stream *pStream) - { - if (setjmp(m_jmp_state)) - return; - decode_init(pStream); - } - - int jpeg_decoder::begin_decoding() - { - if (m_ready_flag) - return JPGD_SUCCESS; - - if (m_error_code) - return JPGD_FAILED; - - if (setjmp(m_jmp_state)) - return JPGD_FAILED; - - decode_start(); - - m_ready_flag = true; - - return JPGD_SUCCESS; - } - - jpeg_decoder::~jpeg_decoder() - { - free_all_blocks(); - } - - jpeg_decoder_file_stream::jpeg_decoder_file_stream() - { - m_pFile = NULL; - m_eof_flag = false; - m_error_flag = false; - } - - void jpeg_decoder_file_stream::close() - { - if (m_pFile) - { - fclose(m_pFile); - m_pFile = NULL; - } - - m_eof_flag = false; - m_error_flag = false; - } - - jpeg_decoder_file_stream::~jpeg_decoder_file_stream() - { - close(); - } - - bool jpeg_decoder_file_stream::open(const char *Pfilename) - { - close(); - - m_eof_flag = false; - m_error_flag = false; - -#if defined(_MSC_VER) - m_pFile = NULL; - fopen_s(&m_pFile, Pfilename, "rb"); -#else - m_pFile = fopen(Pfilename, "rb"); -#endif - return m_pFile != NULL; - } - - int jpeg_decoder_file_stream::read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag) - { - if (!m_pFile) - return -1; - - if (m_eof_flag) - { - *pEOF_flag = true; - return 0; - } - - if (m_error_flag) - return -1; - - int bytes_read = static_cast(fread(pBuf, 1, max_bytes_to_read, m_pFile)); - if (bytes_read < max_bytes_to_read) - { - if (ferror(m_pFile)) - { - m_error_flag = true; - return -1; - } - - m_eof_flag = true; - *pEOF_flag = true; - } - - return bytes_read; - } - - bool jpeg_decoder_mem_stream::open(const uint8 *pSrc_data, uint size) - { - close(); - m_pSrc_data = pSrc_data; - m_ofs = 0; - m_size = size; - 
return true; - } - - int jpeg_decoder_mem_stream::read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag) - { - *pEOF_flag = false; - - if (!m_pSrc_data) - return -1; - - uint bytes_remaining = m_size - m_ofs; - if ((uint)max_bytes_to_read > bytes_remaining) - { - max_bytes_to_read = bytes_remaining; - *pEOF_flag = true; - } - - memcpy(pBuf, m_pSrc_data + m_ofs, max_bytes_to_read); - m_ofs += max_bytes_to_read; - - return max_bytes_to_read; - } - - unsigned char *decompress_jpeg_image_from_stream(jpeg_decoder_stream *pStream, int *width, int *height, int *actual_comps, int req_comps) - { - if (!actual_comps) - return NULL; - *actual_comps = 0; - - if ((!pStream) || (!width) || (!height) || (!req_comps)) - return NULL; - - if ((req_comps != 1) && (req_comps != 3) && (req_comps != 4)) - return NULL; - - jpeg_decoder decoder(pStream); - if (decoder.get_error_code() != JPGD_SUCCESS) - return NULL; - - const int image_width = decoder.get_width(), image_height = decoder.get_height(); - *width = image_width; - *height = image_height; - *actual_comps = decoder.get_num_components(); - - if (decoder.begin_decoding() != JPGD_SUCCESS) - return NULL; - - const int dst_bpl = image_width * req_comps; - - uint8 *pImage_data = (uint8*)jpgd_malloc(dst_bpl * image_height); - if (!pImage_data) - return NULL; - - for (int y = 0; y < image_height; y++) - { - const uint8* pScan_line = 0; - uint scan_line_len; - if (decoder.decode((const void**)&pScan_line, &scan_line_len) != JPGD_SUCCESS) - { - jpgd_free(pImage_data); - return NULL; - } - - uint8 *pDst = pImage_data + y * dst_bpl; - - if (((req_comps == 4) && (decoder.get_num_components() == 3)) || - ((req_comps == 1) && (decoder.get_num_components() == 1))) - { - memcpy(pDst, pScan_line, dst_bpl); - } - else if (decoder.get_num_components() == 1) - { - if (req_comps == 3) - { - for (int x = 0; x < image_width; x++) - { - uint8 luma = pScan_line[x]; - pDst[0] = luma; - pDst[1] = luma; - pDst[2] = luma; - pDst += 3; - } - } - else - { - for (int x = 0; x < image_width; x++) - { - uint8 luma = pScan_line[x]; - pDst[0] = luma; - pDst[1] = luma; - pDst[2] = luma; - pDst[3] = 255; - pDst += 4; - } - } - } - else if (decoder.get_num_components() == 3) - { - if (req_comps == 1) - { - const int YR = 19595, YG = 38470, YB = 7471; - for (int x = 0; x < image_width; x++) - { - int r = pScan_line[x*4+0]; - int g = pScan_line[x*4+1]; - int b = pScan_line[x*4+2]; - *pDst++ = static_cast((r * YR + g * YG + b * YB + 32768) >> 16); - } - } - else - { - for (int x = 0; x < image_width; x++) - { - pDst[0] = pScan_line[x*4+0]; - pDst[1] = pScan_line[x*4+1]; - pDst[2] = pScan_line[x*4+2]; - pDst += 3; - } - } - } - } - - return pImage_data; - } - -// BEGIN EPIC MOD - unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps, int format) - { - jpg_format = (ERGBFormatJPG)format; -// EMD EPIC MOD - jpgd::jpeg_decoder_mem_stream mem_stream(pSrc_data, src_data_size); - return decompress_jpeg_image_from_stream(&mem_stream, width, height, actual_comps, req_comps); - } - - unsigned char *decompress_jpeg_image_from_file(const char *pSrc_filename, int *width, int *height, int *actual_comps, int req_comps) - { - jpgd::jpeg_decoder_file_stream file_stream; - if (!file_stream.open(pSrc_filename)) - return NULL; - return decompress_jpeg_image_from_stream(&file_stream, width, height, actual_comps, req_comps); - } - -} // namespace jpgd diff --git a/spaces/cfwef/gpt/functional_crazy.py 
b/spaces/cfwef/gpt/functional_crazy.py deleted file mode 100644 index 9c83b4104a395e35471895faf09edb15c0ea65b4..0000000000000000000000000000000000000000 --- a/spaces/cfwef/gpt/functional_crazy.py +++ /dev/null @@ -1,108 +0,0 @@ -from toolbox import HotReload # HotReload 的意思是热更新,修改函数插件后,不需要重启程序,代码直接生效 - -def get_crazy_functionals(): - ###################### 第一组插件 ########################### - # [第一组插件]: 最早期编写的项目插件和一些demo - from crazy_functions.读文章写摘要 import 读文章写摘要 - from crazy_functions.生成函数注释 import 批量生成函数注释 - from crazy_functions.解析项目源代码 import 解析项目本身 - from crazy_functions.解析项目源代码 import 解析一个Python项目 - from crazy_functions.解析项目源代码 import 解析一个C项目的头文件 - from crazy_functions.解析项目源代码 import 解析一个C项目 - from crazy_functions.解析项目源代码 import 解析一个Golang项目 - from crazy_functions.解析项目源代码 import 解析一个Java项目 - from crazy_functions.解析项目源代码 import 解析一个Rect项目 - from crazy_functions.高级功能函数模板 import 高阶功能模板函数 - from crazy_functions.代码重写为全英文_多线程 import 全项目切换英文 - - function_plugins = { - "请解析并解构此项目本身(源码自译解)": { - "AsButton": False, # 加入下拉菜单中 - "Function": 解析项目本身 - }, - "解析整个Py项目": { - "Color": "stop", # 按钮颜色 - "Function": 解析一个Python项目 - }, - "解析整个C++项目头文件": { - "Color": "stop", # 按钮颜色 - "Function": 解析一个C项目的头文件 - }, - "解析整个C++项目(.cpp/.h)": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": 解析一个C项目 - }, - "解析整个Go项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": 解析一个Golang项目 - }, - "解析整个Java项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": 解析一个Java项目 - }, - "解析整个Java项目": { - "Color": "stop", # 按钮颜色 - "AsButton": False, # 加入下拉菜单中 - "Function": 解析一个Rect项目 - }, - "读Tex论文写摘要": { - "Color": "stop", # 按钮颜色 - "Function": 读文章写摘要 - }, - "批量生成函数注释": { - "Color": "stop", # 按钮颜色 - "Function": 批量生成函数注释 - }, - "[多线程demo] 把本项目源代码切换成全英文": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Function": HotReload(全项目切换英文) - }, - "[函数插件模板demo] 历史上的今天": { - # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - "Function": HotReload(高阶功能模板函数) - }, - } - ###################### 第二组插件 ########################### - # [第二组插件]: 经过充分测试,但功能上距离达到完美状态还差一点点 - from crazy_functions.批量总结PDF文档 import 批量总结PDF文档 - from crazy_functions.批量总结PDF文档pdfminer import 批量总结PDF文档pdfminer - from crazy_functions.总结word文档 import 总结word文档 - function_plugins.update({ - "[仅供开发调试] 批量总结PDF文档": { - "Color": "stop", - "Function": HotReload(批量总结PDF文档) # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效 - }, - "[仅供开发调试] 批量总结PDF文档pdfminer": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(批量总结PDF文档pdfminer) - }, - "[仅供开发调试] 批量总结Word文档": { - "Color": "stop", - "Function": HotReload(总结word文档) - }, - }) - - ###################### 第三组插件 ########################### - # [第三组插件]: 尚未充分测试的函数插件,放在这里 - try: - from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要 - function_plugins.update({ - "一键下载arxiv论文并翻译摘要(先在input输入编号,如1812.10695)": { - "Color": "stop", - "AsButton": False, # 加入下拉菜单中 - "Function": HotReload(下载arxiv论文并翻译摘要) - } - }) - except Exception as err: - print(f'[下载arxiv论文并翻译摘要] 插件导入失败 {str(err)}') - - - - ###################### 第n组插件 ########################### - return function_plugins - - diff --git a/spaces/chansung/LLM-As-Chatbot/miscs/strings.py b/spaces/chansung/LLM-As-Chatbot/miscs/strings.py deleted file mode 100644 index 688a398bfecc8e3e4ef8120cc43f62b6bd14d3dc..0000000000000000000000000000000000000000 --- a/spaces/chansung/LLM-As-Chatbot/miscs/strings.py +++ /dev/null @@ -1,83 +0,0 @@ -TITLE = "Alpaca-LoRA Playground" - -ABSTRACT = """ -Thanks to 
[tolen](https://github.com/tloen/alpaca-lora), this application runs Alpaca-LoRA which is instruction fine-tuned version of [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/). This demo currently runs 30B version on a 3*A6000 instance at [Jarvislabs.ai](https://jarvislabs.ai/). - -NOTE: too long input (context, instruction) will not be allowed. Please keep context < 500 and instruction < 150 -""" - -BOTTOM_LINE = """ -This demo application runs the open source project, [Alpaca-LoRA-Serve](https://github.com/deep-diver/Alpaca-LoRA-Serve). By default, it runs with streaming mode, but you can also run with dynamic batch generation model. Please visit the repo, find more information, and contribute if you can. - -Alpaca-LoRA is built on the same concept as Standford Alpaca project, but it lets us train and inference on a smaller GPUs such as RTX4090 for 7B version. Also, we could build very small size of checkpoints on top of base models thanks to [🤗 transformers](https://huggingface.co/docs/transformers/index), [🤗 peft](https://github.com/huggingface/peft), and [bitsandbytes](https://github.com/TimDettmers/bitsandbytes/tree/main) libraries. - -We are thankful to the [Jarvislabs.ai](https://jarvislabs.ai/) who generously provided free GPU instances. -""" - -DEFAULT_EXAMPLES = { - "Typical Questions": [ - { - "title": "List all Canadian provinces in alphabetical order.", - "examples": [ - ["1", "List all Canadian provinces in alphabetical order."], - ["2", "Which ones are on the east side?"], - ["3", "What foods are famous in each province on the east side?"], - ["4", "What about sightseeing? or landmarks? list one per province"], - ], - }, - { - "title": "Tell me about Alpacas.", - "examples": [ - ["1", "Tell me about alpacas in two sentences"], - ["2", "What other animals are living in the same area?"], - ["3", "Are they the same species?"], - ["4", "Write a Python program to return those species"], - ], - }, - { - "title": "Tell me about the king of France in 2019.", - "examples": [ - ["1", "Tell me about the king of France in 2019."], - ["2", "What about before him?"], - ] - }, - { - "title": "Write a Python program that prints the first 10 Fibonacci numbers.", - "examples": [ - ["1", "Write a Python program that prints the first 10 Fibonacci numbers."], - ["2", "Could you explain how the code works?"], - ["3", "What is recursion?"], - ] - } - ], - "Identity": [ - { - "title": "Conversation with the planet Pluto", - "examples": [ - ["1", "Conversation with the planet Pluto", "I'am so curious about you"], - ["2", "Conversation with the planet Pluto", "Tell me what I would see if I visited"], - ["3", "Conversation with the planet Pluto", "It sounds beautiful"], - ["4", "Conversation with the planet Pluto", "I'll keep that in mind. Hey I was wondering have you ever had any visitor?"], - ["5", "Conversation with the planet Pluto", "That must have been exciting"], - ["6", "Conversation with the planet Pluto", "That's so great. 
What else do you wish people knew about you?"], - ["7", "Conversation with the planet Pluto", "Thanks for talking with me"], - ] - }, - { - "title": "Conversation with a paper airplane", - "examples": [ - ["1", "Conversation with a paper airplane", "What's it like being thrown through the air"], - ["2", "Conversation with a paper airplane", "What's the worst place you've ever landed"], - ["3", "Conversation with a paper airplane", "Have you ever stucked?"], - ["4", "Conversation with a paper airplane", "What's the secret to a really good paper airplane?"], - ["5", "Conversation with a paper airplane", "What's the farthest you've ever flown?"], - ["6", "Conversation with a paper airplane", "Good to talk to you!"] - ] - } - ] -} - -SPECIAL_STRS = { - "continue": "continue.", - "summarize": "what have we discussed so far? describe in the user's view and include important entities. also be brief as much as possible." -} \ No newline at end of file diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/test/test_config.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/test/test_config.py deleted file mode 100644 index f24af28381f7577fd2ec8007c7b81cb24ca7d89c..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/test/test_config.py +++ /dev/null @@ -1,191 +0,0 @@ -from chromadb.config import Component, System, Settings -from overrides import overrides -from threading import local -import random - -data = local() # use thread local just in case tests ever run in parallel - - -def reset() -> None: - global data - data.starts = [] - data.stops = [] - data.inits = [] - - -class ComponentA(Component): - def __init__(self, system: System): - data.inits += "A" - super().__init__(system) - self.require(ComponentB) - self.require(ComponentC) - - @overrides - def start(self) -> None: - data.starts += "A" - - @overrides - def stop(self) -> None: - data.stops += "A" - - -class ComponentB(Component): - def __init__(self, system: System): - data.inits += "B" - super().__init__(system) - self.require(ComponentC) - self.require(ComponentD) - - @overrides - def start(self) -> None: - data.starts += "B" - - @overrides - def stop(self) -> None: - data.stops += "B" - - -class ComponentC(Component): - def __init__(self, system: System): - data.inits += "C" - super().__init__(system) - self.require(ComponentD) - - @overrides - def start(self) -> None: - data.starts += "C" - - @overrides - def stop(self) -> None: - data.stops += "C" - - -class ComponentD(Component): - def __init__(self, system: System): - data.inits += "D" - super().__init__(system) - - @overrides - def start(self) -> None: - data.starts += "D" - - @overrides - def stop(self) -> None: - data.stops += "D" - - -# Dependency Graph for tests: -# ┌───┐ -# │ A │ -# └┬─┬┘ -# │┌▽──┐ -# ││ B │ -# │└┬─┬┘ -# ┌▽─▽┐│ -# │ C ││ -# └┬──┘│ -# ┌▽───▽┐ -# │ D │ -# └─────┘ - - -def test_leaf_only() -> None: - settings = Settings() - system = System(settings) - - reset() - - d = system.instance(ComponentD) - assert isinstance(d, ComponentD) - - assert data.inits == ["D"] - system.start() - assert data.starts == ["D"] - system.stop() - assert data.stops == ["D"] - - -def test_partial() -> None: - settings = Settings() - system = System(settings) - - reset() - - c = system.instance(ComponentC) - assert isinstance(c, ComponentC) - - assert data.inits == ["C", "D"] - system.start() - assert data.starts == ["D", "C"] - 
system.stop() - assert data.stops == ["C", "D"] - - -def test_system_startup() -> None: - settings = Settings() - system = System(settings) - - reset() - - a = system.instance(ComponentA) - assert isinstance(a, ComponentA) - - assert data.inits == ["A", "B", "C", "D"] - system.start() - assert data.starts == ["D", "C", "B", "A"] - system.stop() - assert data.stops == ["A", "B", "C", "D"] - - -def test_system_override_order() -> None: - settings = Settings() - system = System(settings) - - reset() - - system.instance(ComponentA) - - # Deterministically shuffle the instances map to prove that topsort is actually - # working and not just implicitly working because of insertion order. - - # This causes the test to actually fail if the deps are not wired up correctly. - random.seed(0) - entries = list(system._instances.items()) - random.shuffle(entries) - system._instances = {k: v for k, v in entries} - - system.start() - assert data.starts == ["D", "C", "B", "A"] - system.stop() - assert data.stops == ["A", "B", "C", "D"] - - -class ComponentZ(Component): - def __init__(self, system: System): - super().__init__(system) - self.require(ComponentC) - - @overrides - def start(self) -> None: - pass - - @overrides - def stop(self) -> None: - pass - - -def test_runtime_dependencies() -> None: - settings = Settings() - system = System(settings) - - reset() - - # Nothing to do, no components were requested prior to start - system.start() - assert data.starts == [] - - # Constructs dependencies and starts them in the correct order - ComponentZ(system) - assert data.starts == ["D", "C"] - system.stop() - assert data.stops == ["C", "D"] diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/oxml/shape.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/oxml/shape.py deleted file mode 100644 index 77ca7db8a2e240cd6e508c388ce6026bf9d7810c..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/oxml/shape.py +++ /dev/null @@ -1,284 +0,0 @@ -# encoding: utf-8 - -""" -Custom element classes for shape-related elements like ```` -""" - -from . import parse_xml -from .ns import nsdecls -from .simpletypes import ( - ST_Coordinate, ST_DrawingElementId, ST_PositiveCoordinate, - ST_RelationshipId, XsdString, XsdToken -) -from .xmlchemy import ( - BaseOxmlElement, OneAndOnlyOne, OptionalAttribute, RequiredAttribute, - ZeroOrOne -) - - -class CT_Blip(BaseOxmlElement): - """ - ```` element, specifies image source and adjustments such as - alpha and tint. - """ - embed = OptionalAttribute('r:embed', ST_RelationshipId) - link = OptionalAttribute('r:link', ST_RelationshipId) - - -class CT_BlipFillProperties(BaseOxmlElement): - """ - ```` element, specifies picture properties - """ - blip = ZeroOrOne('a:blip', successors=( - 'a:srcRect', 'a:tile', 'a:stretch' - )) - - -class CT_GraphicalObject(BaseOxmlElement): - """ - ```` element, container for a DrawingML object - """ - graphicData = OneAndOnlyOne('a:graphicData') - - -class CT_GraphicalObjectData(BaseOxmlElement): - """ - ```` element, container for the XML of a DrawingML object - """ - pic = ZeroOrOne('pic:pic') - uri = RequiredAttribute('uri', XsdToken) - - -class CT_Inline(BaseOxmlElement): - """ - ```` element, container for an inline shape. 
- """ - extent = OneAndOnlyOne('wp:extent') - docPr = OneAndOnlyOne('wp:docPr') - graphic = OneAndOnlyOne('a:graphic') - - @classmethod - def new(cls, cx, cy, shape_id, pic): - """ - Return a new ```` element populated with the values passed - as parameters. - """ - inline = parse_xml(cls._inline_xml()) - inline.extent.cx = cx - inline.extent.cy = cy - inline.docPr.id = shape_id - inline.docPr.name = 'Picture %d' % shape_id - inline.graphic.graphicData.uri = ( - 'http://schemas.openxmlformats.org/drawingml/2006/picture' - ) - inline.graphic.graphicData._insert_pic(pic) - return inline - - @classmethod - def new_pic_inline(cls, shape_id, rId, filename, cx, cy): - """ - Return a new `wp:inline` element containing the `pic:pic` element - specified by the argument values. - """ - pic_id = 0 # Word doesn't seem to use this, but does not omit it - pic = CT_Picture.new(pic_id, filename, rId, cx, cy) - inline = cls.new(cx, cy, shape_id, pic) - inline.graphic.graphicData._insert_pic(pic) - return inline - - @classmethod - def _inline_xml(cls): - return ( - '\n' - ' \n' - ' \n' - ' \n' - ' \n' - ' \n' - ' \n' - ' \n' - ' \n' - '' % nsdecls('wp', 'a', 'pic', 'r') - ) - - -class CT_NonVisualDrawingProps(BaseOxmlElement): - """ - Used for ```` element, and perhaps others. Specifies the id and - name of a DrawingML drawing. - """ - id = RequiredAttribute('id', ST_DrawingElementId) - name = RequiredAttribute('name', XsdString) - - -class CT_NonVisualPictureProperties(BaseOxmlElement): - """ - ```` element, specifies picture locking and resize - behaviors. - """ - - -class CT_Picture(BaseOxmlElement): - """ - ```` element, a DrawingML picture - """ - nvPicPr = OneAndOnlyOne('pic:nvPicPr') - blipFill = OneAndOnlyOne('pic:blipFill') - spPr = OneAndOnlyOne('pic:spPr') - - @classmethod - def new(cls, pic_id, filename, rId, cx, cy): - """ - Return a new ```` element populated with the minimal - contents required to define a viable picture element, based on the - values passed as parameters. - """ - pic = parse_xml(cls._pic_xml()) - pic.nvPicPr.cNvPr.id = pic_id - pic.nvPicPr.cNvPr.name = filename - pic.blipFill.blip.embed = rId - pic.spPr.cx = cx - pic.spPr.cy = cy - return pic - - @classmethod - def _pic_xml(cls): - return ( - '\n' - ' \n' - ' \n' - ' \n' - ' \n' - ' \n' - ' \n' - ' \n' - ' \n' - ' \n' - ' \n' - ' \n' - ' \n' - ' \n' - ' \n' - ' \n' - ' \n' - ' \n' - '' % nsdecls('pic', 'a', 'r') - ) - - -class CT_PictureNonVisual(BaseOxmlElement): - """ - ```` element, non-visual picture properties - """ - cNvPr = OneAndOnlyOne('pic:cNvPr') - - -class CT_Point2D(BaseOxmlElement): - """ - Used for ```` element, and perhaps others. Specifies an x, y - coordinate (point). - """ - x = RequiredAttribute('x', ST_Coordinate) - y = RequiredAttribute('y', ST_Coordinate) - - -class CT_PositiveSize2D(BaseOxmlElement): - """ - Used for ```` element, and perhaps others later. Specifies the - size of a DrawingML drawing. - """ - cx = RequiredAttribute('cx', ST_PositiveCoordinate) - cy = RequiredAttribute('cy', ST_PositiveCoordinate) - - -class CT_PresetGeometry2D(BaseOxmlElement): - """ - ```` element, specifies an preset autoshape geometry, such - as ``rect``. - """ - - -class CT_RelativeRect(BaseOxmlElement): - """ - ```` element, specifying picture should fill containing - rectangle shape. - """ - - -class CT_ShapeProperties(BaseOxmlElement): - """ - ```` element, specifies size and shape of picture container. 
- """ - xfrm = ZeroOrOne('a:xfrm', successors=( - 'a:custGeom', 'a:prstGeom', 'a:ln', 'a:effectLst', 'a:effectDag', - 'a:scene3d', 'a:sp3d', 'a:extLst' - )) - - @property - def cx(self): - """ - Shape width as an instance of Emu, or None if not present. - """ - xfrm = self.xfrm - if xfrm is None: - return None - return xfrm.cx - - @cx.setter - def cx(self, value): - xfrm = self.get_or_add_xfrm() - xfrm.cx = value - - @property - def cy(self): - """ - Shape height as an instance of Emu, or None if not present. - """ - xfrm = self.xfrm - if xfrm is None: - return None - return xfrm.cy - - @cy.setter - def cy(self, value): - xfrm = self.get_or_add_xfrm() - xfrm.cy = value - - -class CT_StretchInfoProperties(BaseOxmlElement): - """ - ```` element, specifies how picture should fill its containing - shape. - """ - - -class CT_Transform2D(BaseOxmlElement): - """ - ```` element, specifies size and shape of picture container. - """ - off = ZeroOrOne('a:off', successors=('a:ext',)) - ext = ZeroOrOne('a:ext', successors=()) - - @property - def cx(self): - ext = self.ext - if ext is None: - return None - return ext.cx - - @cx.setter - def cx(self, value): - ext = self.get_or_add_ext() - ext.cx = value - - @property - def cy(self): - ext = self.ext - if ext is None: - return None - return ext.cy - - @cy.setter - def cy(self, value): - ext = self.get_or_add_ext() - ext.cy = value diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/functorch/experimental/_map.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/functorch/experimental/_map.py deleted file mode 100644 index 8016f5589c99d764f682f0b7f94e0fc2f5971747..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/functorch/experimental/_map.py +++ /dev/null @@ -1,148 +0,0 @@ -from functools import partial - -import torch -import torch.utils._pytree as pytree -from torch._C import DispatchKey, DispatchKeySet, ExcludeDispatchKeyGuard -from torch._functorch.eager_transforms import _unwrap_all_tensors_from_functional, _wrap_all_tensors_to_functional, functionalize -from torch._ops import PyOperator -from torch._subclasses.fake_tensor import FakeTensorMode -from torch.fx.experimental.proxy_tensor import ( - disable_proxy_modes_tracing, - make_fx, - ProxyTorchDispatchMode, - track_tensor_tree, - unwrap_proxy, -) -from torch.utils._python_dispatch import ( - _get_current_dispatch_mode, - _pop_mode_temporarily, -) -from torch.utils._pytree import tree_flatten -from ._cond import _has_potential_branch_input_alias, _has_potential_branch_input_mutation, UnsupportedAliasMutationException - - -map = PyOperator("map") - - -def trace_map(proxy_mode, func_overload, f, xs, *args): - if not isinstance(xs, torch.Tensor): - raise ValueError("map() must loop over a tensor") - if len(xs.shape) == 0 or xs.shape[0] == 0: - raise ValueError("map() cannot be traced with scalar tensors or zero dimension tensors") - if not all(isinstance(o, torch.Tensor) for o in args): - raise ValueError("map() operands must be a list of tensors or modules") - - with disable_proxy_modes_tracing(): - body_graph = make_fx(f)(xs[0], *args) - - next_name = None - i = 0 - while not next_name: - candidate = f"body_graph_{i}" - if hasattr(proxy_mode.tracer.root, candidate): - i += 1 - else: - next_name = candidate - - proxy_mode.tracer.root.register_module(next_name, body_graph) - node_args = (body_graph, xs, *args) - proxy_args = pytree.tree_map(partial(unwrap_proxy, 
proxy_mode), node_args) - out_proxy = proxy_mode.tracer.create_proxy('call_function', func_overload, proxy_args, {}, - name="map") - outs = [body_graph(x, *args) for x in xs] - # Implementation notes: we need to use new_empty() + copy_() here instead of stack() directly - # because stack([...]) takes a fixed size list which will specialize dynamic shape here. - # Meanwhile we want to preserve the looped over dimension as symbolic shape, such that: - # ys: Tensor[s0, ...] = map(xs: Tensor[s0, ...], *args) - out = outs[0].new_empty([xs.shape[0], *outs[0].shape]) - out.copy_(torch.stack(outs)) - return track_tensor_tree(out, out_proxy, constant=None, tracer=proxy_mode.tracer) - - -@map.py_impl(DispatchKey.CUDA) -@map.py_impl(DispatchKey.CPU) -def map_cpu(f, xs, *args): - mode = _get_current_dispatch_mode() - assert (mode is None), "Mode should never be enabled for CPU/CUDA key" - return torch.stack([f(x, *args) for x in xs]) - - -@map.py_impl(DispatchKey.AutogradCUDA) -@map.py_impl(DispatchKey.AutogradCPU) -def map_autograd(f, xs, *args): - # TODO: support autograd - flat_operands, _ = tree_flatten([f, xs, args]) - assert all([not f.requires_grad for f in flat_operands - if isinstance(f, torch.Tensor)]) - - _ = ExcludeDispatchKeyGuard(DispatchKeySet(DispatchKey.AutogradCPU)) - return map(f, xs, *args) - - -@map.py_impl(ProxyTorchDispatchMode) -def map_proxy_torch_dispatch_mode(f, xs, *args): - mode = _get_current_dispatch_mode() - assert (mode is not None), "Mode should always be enabled for python fallback key" - with _pop_mode_temporarily() as mode: - res = trace_map(mode, map, f, xs, *args) - return res - - -@map.py_impl(FakeTensorMode) -def map_fake_tensor_mode(f, xs, *args): - outs = [f(x, *args) for x in xs] - return outs[0].new_empty([xs.shape[0], *outs[0].shape]) - -# We cannot directly call fallthrough here due to issue #89037. -@map.py_impl(DispatchKey.PythonDispatcher) -def map_python_dispatcher(*args): - _ = ExcludeDispatchKeyGuard(DispatchKeySet(DispatchKey.PythonDispatcher)) - return map(*args) - -@map.py_impl(torch._C._functorch.TransformType.Functionalize) -def map_functionalize(interpreter, f, xs, *args): - """ - Functionalization implementation for torch.map. Currently: - 1. We don't allow any input mutation inside the map function - 2. Our check for above condition is not exhaustive - """ - reapply_views = interpreter.functionalize_add_back_views() - mode = 'mutations_and_views' if reapply_views else 'mutations' - # At this point, we will see functionalized tensors, so need to unwrap them first - unwrapped_xs = _unwrap_all_tensors_from_functional(xs, reapply_views=reapply_views) - unwrapped_args = _unwrap_all_tensors_from_functional(args, reapply_views=reapply_views) - - functional_map_fn = functionalize(f, remove=mode) - - with interpreter.lower(): - fake_tensor_mode = FakeTensorMode() - with fake_tensor_mode as ft_mode: - - # Returns fake inputs for a single map function call - def get_fake_inputs(unwrapped_xs, unwrapped_args): - fake_xs = ft_mode.fake_tensor_converter(ft_mode, unwrapped_xs) - fake_args = pytree.tree_map_only( - torch.Tensor, - lambda x: ft_mode.fake_tensor_converter(ft_mode, x), - unwrapped_args, - ) - return (fake_xs[0],) + fake_args - - fake_inputs = get_fake_inputs(unwrapped_xs, unwrapped_args) - if _has_potential_branch_input_mutation(functional_map_fn, fake_inputs): - raise UnsupportedAliasMutationException( - "torch.map is mutating the input!" 
- ) - - if _has_potential_branch_input_alias(functional_map_fn, fake_inputs): - raise UnsupportedAliasMutationException( - "torch.map is aliasing the input!" - ) - - map_return = map(functional_map_fn, unwrapped_xs, *unwrapped_args) - return _wrap_all_tensors_to_functional(map_return, level=interpreter.level()) - -# TODO(voz) Make this automatic for keys, this is very ugly atm -map.fallthrough(DispatchKey.PythonTLSSnapshot) -map.fallthrough(DispatchKey.ADInplaceOrView) -map.fallthrough(DispatchKey.BackendSelect) diff --git a/spaces/cihyFjudo/fairness-paper-search/Attractive girl hugs teddy bear in christmas eve rar How to capture the perfect holiday moment.md b/spaces/cihyFjudo/fairness-paper-search/Attractive girl hugs teddy bear in christmas eve rar How to capture the perfect holiday moment.md deleted file mode 100644 index 643c1e34f300e5977fcc15e784123a426b7e40a3..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Attractive girl hugs teddy bear in christmas eve rar How to capture the perfect holiday moment.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Attractive girl hugs teddy bear in christmas eve rar


    Download File ————— https://tinurli.com/2uwiVs



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Download Aplikasi Power Point Terbaru dan Dapatkan Bonus Menarik.md b/spaces/cihyFjudo/fairness-paper-search/Download Aplikasi Power Point Terbaru dan Dapatkan Bonus Menarik.md deleted file mode 100644 index 3c6695c7fabc33a018844114bd34aad000ee2588..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Download Aplikasi Power Point Terbaru dan Dapatkan Bonus Menarik.md +++ /dev/null @@ -1,16 +0,0 @@ -
    -

    PowerPoint 2016 telah memperkenalkan fitur tambahan dan merampingkan prosedur tertentu untuk membuatnya lebih efektif serta mengesankan dibanding pendahulunya. Sekarang Anda bisa lebih kreatif dengan tampilan tema Anda melalui beberapa variasi untuk memperhalus desainnya. Umpan balik, komen dan pertanyaan pun sudah bisa ditampilkan melalui panel komentar sehingga sangat berguna saat mengadakan konferensi. Beberapa fungsi telah diotomatisasi untuk meningkatkan kecepatan supaya Anda bisa mendapatkan penampilan yang mengesankan. Sebagai contoh, jika Anda memasukkan bullet point, Power Point akan menyarankan agar merubahnya menjadi grafik SmartArt yang lebih menarik perhatian. Jika Anda merasa PowerPoint versi 2013 sulit dimengerti, maka edisi terbarunya kini dilengkapi menu bantuan yang memberikan Anda saran terkait langkah-langkah agar hasilnya sesuai dengan yang Anda inginkan.
    Jika Anda selalu merasa bahwa PowerPoint hanya sebagai alat untuk presentasi, sekarang waktunya untuk berpikir di luar kotak dan memanfaatkan fungsinya dengan maksimal. Sebagai media kolaborasi untuk berbagi ide antar kolega, PowerPoint sangatlah sulit dikalahkan. Di samping itu, jika Anda ingin membantu pekerjan rumah anak-anak, kenapa tidak memanfaatkan aplikasi multifungsi ini untuk membuat Flash card agar membantunya mengingat detail yang jelimet?

    -

    Microsoft PowerPoint adalah sebuah program komputer untuk presentasi yang dikembangkan oleh Microsoft di dalam paket aplikasi kantoran mereka, Microsoft Office, selain Microsoft Word, Excel, Access dan beberapa program lainnya. PowerPoint berjalan di atas komputer PC berbasis sistem operasi Microsoft Windows dan juga Apple Macintosh yang menggunakan sistem operasi Apple Mac OS, meskipun pada awalnya aplikasi ini berjalan di atas sistem operasi Xenix. Aplikasi ini sangat banyak digunakan, apalagi oleh kalangan perkantoran dan pebisnis, para pendidik, siswa, dan trainer. Dimulai pada versi Microsoft Office System 2003, Microsoft mengganti nama dari sebelumnya Microsoft PowerPoint saja menjadi Microsoft Office PowerPoint. Lalu, pada Office 2013, namanya cukup disingkat PowerPoint. Versi terbaru dari PowerPoint adalah versi 15 (Microsoft Office PowerPoint 2013), yang tergabung ke dalam paket Microsoft Office 2013.

    -

    download aplikasi power point terbaru


    Download File ✏ ✏ ✏ https://tinurli.com/2uwhZw



    -

    Cara download powerpoint di laptop sangat mudah dilakukan. Untuk melakukan presentasi biasanya Anda akan memakai powerpoint untuk membuat presentasinya lebih menarik saat menjelaskan materi.

    -

    Powerpoint ini merupakan salah satu aplikasi yang berasa dari microsoft sebagai media presentasi. Meski saat ini selain microsoft, sudah banyak vendor lain yang memiliki aplikasi dengan fungsi yang sama.

    -

    Tetapi tetap saja powerpoint yang berasal dari microsoft masih menjadi pilihan banyak orang dan tidak kalah bersaing dengan yang dikeluarkan vendor lain. hal ini dikarenakan aplikasi ini dianggap sangat user friendly dan mudah digunakan.

    -

    Microsoft power point memiliki fungsi sebagai sarana memudahkan seseorang untuk melakukan presentasi, membuat materi presentasi tersebut berbentuk softfile sehingga dapat diakses dengan mudah oleh orang lain melalui perangkat gawai.

    -

    Anda harus mendownload Microsoft Office secara keseluruhan yang sudah mencakup Miscrosoft Word, Excel, dan lain-lainnya. Berikut ini tutorial cara download power point di laptop yang bisa diikuti oleh Anda.

    -

    -

    Selain memiliki fungsi untuk membuat presentasi lebih jelas dan menarik, Powerpoint juga memiliki manfaat tersembunyi yang masih belum banyak diketahui semua orang. Setelah mengetahui cara download powerpoint di laptop kami akan memberitahu manfaat lain dari PPT.

    -

    Power Point merupakan salah satu aplikasi dari Microsoft Office yang paling sering digunakan sebagai media presentasi paling efektif. Cara download powerpoint di laptop disarankan yang berasal dari situs resminya saja, khawatir jika diambil dari situs lain Laptop akan terkena malware.

    -

    Dalam memulai pembelajaran yang interaktif, Guru dapat memberikan kode kelas kepada setiap siswa yang akan bergabung ke kelas daring. Siswa tidak harus mendownload aplikasi dalam mengikuti kelas yang diselenggarakan oleh guru.

    -

    Click icon for support
    var fieldId = "Last_Form_Submission_Page"; var title = encodeURI(document.title); /*add extra var numbers ex: var formUrl2, modalhead2, buttoncopy2 for multiple forms on page starting below*/ var formUrlcta = " -10-04/79kh48"; var modalheadcta = "NDI Support Form" var buttoncopycta = "" document.write('' + buttoncopycta + ''); #cookie-support-banner background: #333; color: white; letter-spacing: 1px; padding: 20px; font-size: 16px; border-radius: 5px; width: 250px; max-width: 90vw; position: fixed; z-index: 20; bottom: 10px; right: 10px; box-shadow: 0 1px 2px rgb(0 0 0 / 50%); transition-property: right; transition-duration: 250ms; transition-timing-function: ease-out; transition-delay: 100ms; #cookie-support-banner-small background: #333; color: white; letter-spacing: 1px; padding: 20px; font-size: 16px; border-radius: 5px; width: 95px; max-width: 90vw; position: fixed; z-index: 15; bottom: 40px; right: 10px; box-shadow: 0 1px 2px rgb(0 0 0 / 50%); transition-property: right; transition-duration: 250ms; transition-timing-function: ease-in; transition-delay: 100ms; .hidden-support right:-251px !important; .show-support right: 10px !important; /*@keyframes opa from background-color: #333; to background-color: rgba(51,51,51,0); */ #cookie-law-banner-2 background: #333; color: white; letter-spacing: 1px; font-size: 14px; font-family: 'Helvetica Neue', 'Roboto', Arial, sans-serif !important; padding: 10px 20px 0px 20px; position: relative; width: 100%; .support-icon transition-property: color; transition-duration: 175ms; transition-timing-function: ease-in; transition-delay: 175ms; .support-icon:hover color: #FFFFFF; function hideBannerSupport() /* Hide Banner */ document.getElementById("cookie-support-banner").classList.add("hidden-support"); function showBannerSupport() /* Show Banner */ document.getElementById("cookie-support-banner").classList.add("show-support"); var btn = document.querySelector("#cookie-support-banner"); btn.addEventListener('click', () => btn.classList.toggle('hidden-support'); ) btn.addEventListener('click', () => btn.classList.toggle('show-support'); ) var btn1 = document.getElementById("cookie-support-banner-small"); btn1.addEventListener('click', () => btn.classList.toggle('show-support'); ) btn1.addEventListener('click', () => btn.classList.toggle('hidden-support'); ) .navbar-fixed-top top: 0; position: sticky; border-width: 0 0 1px;.navbar position: relative; min-height: 50px; margin-bottom: -1px !important; border: 1px solid transparent; .open > .dropdown-mega-menu display: block; header .dropdown-mega-menu a color: white; letter-spacing: 1px; font-weight: 300; padding-bottom: 5px; header .dropdown-mega-menu a:hover color: deepskyblue; header .dropdown-mega-menu border-radius: 0; width: 100%; color: white; background: rgba(59, 59, 59, 0.8); .dropdown-mega-menu ul li:first-child font-size: 1.5em; .dropdown-mega-menu position: absolute; top: 100%; left: -1213px; z-index: 1000; float: left; display: none; min-width: 1483px; padding: 5px 0; margin: 2px 0 0; font-size: 14px; text-align: left; list-style: none; background-color: #fff; -webkit-background-clip: padding-box; background-clip: padding-box; border: 1px solid #ccc; border: 1px solid rgba(0,0,0,.15); border-radius: 4px; -webkit-box-shadow: 0 6px 12px rgb(0 0 0 / 18%); box-shadow: 0 6px 12px rgb(0 0 0 / 18%); padding-left: 14px; padding-right: 14px; @media (min-width: 768px) .dropdown-mega-menu position: absolute; top: 100%; left: -540px; z-index: 1000; float: left; display: none; min-width: 701px; 
padding: 5px 0; margin: 2px 0 0; font-size: 14px; text-align: left; list-style: none; background-color: #fff; -webkit-background-clip: padding-box; background-clip: padding-box; border: 1px solid #ccc; border: 1px solid rgba(0,0,0,.15); border-radius: 4px; -webkit-box-shadow: 0 6px 12px rgb(0 0 0 / 18%); box-shadow: 0 6px 12px rgb(0 0 0 / 18%); padding-left: 14px; padding-right: 14px; ul display: block; list-style-type: none !important; margin-block-start: 1em !important; margin-block-end: 0.2em !important; margin-inline-start: 0px !important; margin-inline-end: 0px !important; padding-inline-start: 0px !important; .dropdown-mega-menu ul li:first-child font-size: 1.1em !important; @media (min-width: 991px) .dropdown-mega-menu position: absolute; top: 100%; left: -748px; z-index: 1000; float: left; display: none; min-width: 956px; padding: 5px 0; margin: 2px 0 0; font-size: 14px; text-align: left; list-style: none; background-color: #fff; -webkit-background-clip: padding-box; background-clip: padding-box; border: 1px solid #ccc; border: 1px solid rgba(0,0,0,.15); border-radius: 4px; -webkit-box-shadow: 0 6px 12px rgb(0 0 0 / 18%); box-shadow: 0 6px 12px rgb(0 0 0 / 18%); padding-left: 14px; padding-right: 14px; .dropdown-mega-menu ul li:first-child font-size: 1.1em; @media (min-width: 1024px) .dropdown-mega-menu position: absolute; top: 100%; left: -752px; z-index: 1000; display: none; float: left; min-width: 954px; padding: 5px 0; margin: 2px 0 0; font-size: 14px; text-align: left; list-style: none; background-color: #fff; -webkit-background-clip: padding-box; background-clip: padding-box; border: 1px solid #ccc; border: 1px solid rgba(0,0,0,.15); border-radius: 4px; -webkit-box-shadow: 0 6px 12px rgb(0 0 0 / 18%); box-shadow: 0 6px 12px rgb(0 0 0 / 18%); padding-left: 14px; padding-right: 14px; @media (min-width: 1400px) .dropdown-mega-menu position: absolute; top: 100%; left: -958px; z-index: 1000; float: left; min-width: 1145px; padding: 5px 0; margin: 2px 0 0; font-size: 14px; text-align: left; list-style: none; background-color: #fff; -webkit-background-clip: padding-box; background-clip: padding-box; border: 1px solid #ccc; border: 1px solid rgba(0,0,0,.15); border-radius: 4px; -webkit-box-shadow: 0 6px 12px rgb(0 0 0 / 18%); box-shadow: 0 6px 12px rgb(0 0 0 / 18%); padding-left: 14px; padding-right: 14px; @media (min-width: 1500px ) .dropdown-mega-menu position: absolute; top: 100%; left: -1021px; z-index: 1000; float: left; min-width: 1207px; display: none; padding: 5px 0; margin: 2px 0 0; font-size: 14px; text-align: left; list-style: none; background-color: #fff; -webkit-background-clip: padding-box; background-clip: padding-box; border: 1px solid #ccc; border: 1px solid rgba(0,0,0,.15); border-radius: 4px; -webkit-box-shadow: 0 6px 12px rgb(0 0 0 / 18%); box-shadow: 0 6px 12px rgb(0 0 0 / 18%); padding-left: 14px; padding-right: 14px; .dropdown-mega-menu.pull-right right: 0; left: auto; .dropdown-mega-menu .divider height: 1px; margin: 9px 0; overflow: hidden; background-color: #4e4e4e; .dropdown-mega-menu ul li a display: block; padding: 3px 20px; clear: both; font-weight: 400; line-height: 1.42857143; list-style: none !important; white-space: nowrap; .dropdown-mega-menu > li > a:focus, .dropdown-mega-menu > li > a:hover color: #262626; text-decoration: none; background-color: #f5f5f5; .dropdown-mega-menu > .active > a, .dropdown-mega-menu > .active > a:focus, .dropdown-mega-menu > .active > a:hover color: #fff; text-decoration: none; background-color: #337ab7; outline: 0; 
.dropdown-mega-menu > .disabled > a, .dropdown-mega-menu > .disabled > a:focus, .dropdown-mega-menu > .disabled > a:hover color: #777; .dropdown-mega-menu > .disabled > a:focus, .dropdown-mega-menu > .disabled > a:hover text-decoration: none; cursor: not-allowed; background-color: transparent; background-image: none; filter: progid:DXImageTransform.Microsoft.gradient(enabled=false); .open > .dropdown-mega-menu display: block; .open > a outline: 0; .dropdown-mega-menu-right right: 0; left: auto; .dropdown-mega-menu-left right: auto; left: 0; ul display: block; list-style-type: none !important; margin-block-start: 1em; margin-block-end: 1em; margin-inline-start: 0px; margin-inline-end: 0px; padding-inline-start: 0px; NDI NDI® NDI® Tools SDK Marketplace Marketplace Home

  • Live Production
  • Production Systems
  • Streaming Applications
  • Media Players
  • Converters
  • Encoders
  • Decoders
  • Encoders/Decoders
  • Utility Applications
  • Multiviewer
  • Transport WAN
  • Recording
  • Cameras
  • NDI Camera Licenses
  • PTZ HD
  • PTZ UHD
  • Specialty Cam
  • Graphics
  • Broadcast Graphics
  • Graphic Applications
  • Hardware Tools
  • Tally Interfaces
  • Displays
  • Broadcast Monitors
  • Displays
  • Audio
  • Audio Mixer
  • Network Products
  • Mobile
  • Routing & Orchestration
  • Community
  • Blog
  • Careers
  • Press Center
  • NDI® Social Feeds
  • Events
  • MENU
  • About NDI
  • NDI Tools
  • NDI SDK
  • NDI Marketplace
  • Community
  • Careers
  • NDI TV
  • Events
  • /** * detect IE * returns version of IE or false, if browser is not Internet Explorer * */ function detectIE() var ua = window.navigator.userAgent; var msie = ua.indexOf('MSIE '); if (msie > 0) // IE 10 or older => return version number return parseInt(ua.substring(msie + 5, ua.indexOf('.', msie)), 10); var trident = ua.indexOf('Trident/'); if (trident > 0) // IE 11 => return version number var rv = ua.indexOf('rv:'); return parseInt(ua.substring(rv + 3, ua.indexOf('.', rv)), 10); var edge = ua.indexOf('Edge/'); if (edge > 0) // Edge (IE 12+) => return version number return parseInt(ua.substring(edge + 5, ua.indexOf('.', edge)), 10); // other browser return false; #content .blue-bg.stripe h2.mobile-ios-header padding-top: .1em; padding-bottom: .1em; @media (max-width: 991px) #content .blue-bg.stripe h2.mobile-ios-header text-align: center; padding-bottom: .5em; .banner .content position: relative; z-index: 3; padding: 40px 60% 40px 6%; color: white; height: 100%; margin-top: 0rem !important; .juicer-feed h1.referral a display: none !important; opacity: 0.00 !important; color: #f05a4b; display: inline-block; .juicer display: none !important; opacity: 0.00 !important; .j-stacker-wrapper margin-left: 0px !important; margin-right: 0px !important; padding-top: 20px; #content ul, #content ol list-style-position: outside; padding-left: 2em !important; .j-paginate display: none !important; opacity: 0.00 !important; color: #f05a4b; display: inline-block; .j-display-filters display: none !important; opacity: 0.00 !important; color: #f05a4b; display: inline-block; .juicer-feed h1.referral a display: none !important; opacity: 0.00 !important; color: #f05a4b; display: inline-block; .juicer display: none !important; opacity: 0.00 !important; .j-stacker-wrapper background: #000; background-image: url( -ac2a33202ef9b63045cbb3afca178df8.ssl.cf1.rackcdn.com/images/ndicentral/ndi-wires-lighter.png); margin-left: 0px !important; margin-right: 0px !important; padding-top: 20px; #content ul, #content ol list-style-position: outside; padding-left: 2em !important; .btn display: inline-block; margin-left: 0.0em; padding: 6px 12px; margin-bottom: 0; font-size: 17px; font-weight: 400; line-height: 1.42857143; text-align: center; white-space: nowrap; vertical-align: middle; -ms-touch-action: manipulation; touch-action: manipulation; cursor: pointer; -webkit-user-select: none; -moz-user-select: none; -ms-user-select: none; user-select: none; background-image: none; border: 1px solid transparent; border-radius: 4px; a.pardot-button padding: 10px; background: #3195d3; border-radius: 5px; display: block; color: white; margin: 20px auto 0px auto; letter-spacing: 3px; text-decoration: none; text-align: center; font-size: 26px; line-height: 1.2em; font-weight: 100; width: inherit; transition: color 0.2s; float: left; font-family: 'Helvetica Neue', 'Roboto', Arial, sans-serif !important; a.pardot-button:hover background: #1d5e86; text-decoration: none; color: white !important; a.pardot-button:visited, a.pardot-button:active text-decoration: none; color: white !important; .vbox-title font-size: 2.5em !important; line-height: inherit !important; height: unset !important; padding: 15px 40px !important; iframe.venoframe max-width: 900px !important; @media (max-width: -width: 992px) iframe.venoframe, .vbox-inline width: 50% !important; height: 540px; height: 70vh !important; .vbox-content iframe.figlio width: 500px !important; iframe.venoframe height: 540px; height: 70vh !important; #content .panel-body min-height: 450px; 
.banner .content padding: 40px 50% 40px 6%; $(document).ready(function () $('.venobox').venobox(); console.log("veno activated"); ); NDITools NDI 5.5 is available to download NOW!

    Watch Video Download

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Irreplaceable (Harmony __LINK__.md b/spaces/cihyFjudo/fairness-paper-search/Irreplaceable (Harmony __LINK__.md deleted file mode 100644 index 9ba1e89d1c6fe79b11f4ddc0a4c8c77ec83d298a..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Irreplaceable (Harmony __LINK__.md +++ /dev/null @@ -1,9 +0,0 @@ -
    -

    Beginning at age 3 and continuing through college, dance and piano lessons were routine. Singing in school and church choirs is a family tradition and has continued throughout her life. Kari joined Sweet Adelines in 2002 and has been an irreplaceable member of Crosstown Harmony Chorus since January 2005.

    -

    Irreplaceable (Harmony,


    DOWNLOAD ::: https://tinurli.com/2uwhEY



    -

    The school is always the best place to cultivate young people's "sustainable development" consciousness and living habits. According to July 24th, 2018, the Ministry of Education issued the "National Statistical Bulletin on the Development of Education in 2018", the data show that there are 518,800 schools at all levels in the country in 2018, and 276 million students in all levels of education with different academic qualifications. The best period of one's life is spent at school. Therefore, our school should attach importance to the combination of protection and education, actively explore the development model of a green school and also effectively promote the construction of a green school in China. Staying on such a school for a few years will bring life-long benefits to students. To build a green and sustainable school is not only to cultivate harmony between the young generation and nature, to foster a green production and lifestyle, but also to provide an irreplaceable place for the healthy growth of the "future flowers" of China.

    -

    Sears, however, is chiefly concerned with soil conservation. During the great westward migration across the continent, he notes, pioneer farmers felt little obligation to conserve the soil, and the inevitable result was a "kind of predatory farming" (p. 48). Predatory farming meant that once the soil of a farmstead became exhausted, one could always move farther west to where it was rich once again. The forests were cut down and the grasslands plowed under; and when the rains and winds came, the soil washed and blew away. Predatory farming still exists, and the need for the vigilant practice of proper soil conservation techniques is as great now as at any time in the past. A variety of conservation measures are particularly needed in the Great Plains where only a delicate root system anchors the soil against the nearly constant wind. Once the grasslands have been destroyed by overgrazing or by plowing, drought and wind will play havoc with the soil. Still, the task is not to grow two blades of grass where only one grew before, but rather to develop a land utilization policy that will preserve the soil when only half a blade can be grown. If such a land utilization program is not instituted, Sears warns, the result will be future Dust Bowls and the irreplaceable loss of topsoil-all to the detriment of the world's food supply.

    -

    In broadening their focus out from the years around the first millennium, the editors have, seemingly unconsciously (for there is no mention of it in their introduction), taken up a project left unfinished by the early death of Tim Reuter in 2002. Reuter's seminal essays "The 'Imperial Church System'" and "A Europe of the Bishops" provide some of the most-cited references in this volume, and he was embarked on "a study of episcopal power across the longue durée" at the time of his death. [3] He published the early fruits of this, sadly unfinished study, amongst other places, in Gilsdorf's collection. [4] As Janet Nelson writes in her introduction to a posthumous collection of his essays, "he himself would have hoped that the project could be taken forward by other hands. It seems likelier to be attempted by a team than a lone scholar. For as well as being among the outstanding medieval historians of his generation, Tim combined knowledge, skills and interests in a unique and irreplaceable way." [5] Reuter is irreplaceable, but the appearance of this rich volume of essays by thirteen North American, British and French scholars, which has its origins in a 2003 Kalamazoo panel, demonstrates that he was not working in isolation, and that the baton has been successfully passed to others. The Bishop Reformed records an important stage in the path of this renewed collective effort to reconsider episcopal authority, with episcopal interest in reform as one aspect of that, across the three crucial centuries from 900 to 1200.

    -

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Singham 4 Full Movie In Hindi 720p Free Download The Story Cast and Trailer of the Bollywood Hit.md b/spaces/cihyFjudo/fairness-paper-search/Singham 4 Full Movie In Hindi 720p Free Download The Story Cast and Trailer of the Bollywood Hit.md deleted file mode 100644 index 96a57130e09159145ffc30c7a339afa05be92d7a..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Singham 4 Full Movie In Hindi 720p Free Download The Story Cast and Trailer of the Bollywood Hit.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Singham 4 Full Movie In Hindi 720p Free Download


    Download Zip ✏ ✏ ✏ https://tinurli.com/2uwkRm



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/uTorrent 3.5.5.45231 Activation Include File Download The Ultimate Guide.md b/spaces/cihyFjudo/fairness-paper-search/uTorrent 3.5.5.45231 Activation Include File Download The Ultimate Guide.md deleted file mode 100644 index 5a2771af2c72b3785329ed24b29b326d785c41a0..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/uTorrent 3.5.5.45231 Activation Include File Download The Ultimate Guide.md +++ /dev/null @@ -1,5 +0,0 @@ -
    -

    uTorrent (pronounced "MicroTorrent" mean micro μ) is a Peer-to-peer BitTorrent client, designed for the distribution of files at high speed.

    With only 600k (approx) and 7MB memory, the software is very simple to use: to start a download the user has to simply inform the torrent file he wants to get address, so he can share and download large files very easily. If necessary, it is possible to adjust some settings: setting the bandwidth with priorities, scheduling downloads, RSS auto-downloading and DHT and downloading can begin.

    The application supports downloading multiple files simultaneously and offers management of is appropriate UPnP.

    It has a minimalist interface that immediately makes it an ideal choice for a novice user.

    -

    uTorrent 3.5.5.45231 Activation Include File Download


    DOWNLOADhttps://tinurli.com/2uwk7b



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cleanmaster/so-vits-svc-akagi/preprocess_flist_config.py b/spaces/cleanmaster/so-vits-svc-akagi/preprocess_flist_config.py deleted file mode 100644 index 552e1ba9355de1d1ddc63240dee7ab84855b314b..0000000000000000000000000000000000000000 --- a/spaces/cleanmaster/so-vits-svc-akagi/preprocess_flist_config.py +++ /dev/null @@ -1,125 +0,0 @@ -import os -import argparse -import re - -from tqdm import tqdm -from random import shuffle -import json -config_template = { - "train": { - "log_interval": 200, - "eval_interval": 1000, - "seed": 1234, - "epochs": 10000, - "learning_rate": 1e-4, - "betas": [0.8, 0.99], - "eps": 1e-9, - "batch_size": 12, - "fp16_run": False, - "lr_decay": 0.999875, - "segment_size": 17920, - "init_lr_ratio": 1, - "warmup_epochs": 0, - "c_mel": 45, - "c_kl": 1.0, - "use_sr": True, - "max_speclen": 384, - "port": "8001" - }, - "data": { - "training_files":"filelists/train.txt", - "validation_files":"filelists/val.txt", - "max_wav_value": 32768.0, - "sampling_rate": 32000, - "filter_length": 1280, - "hop_length": 320, - "win_length": 1280, - "n_mel_channels": 80, - "mel_fmin": 0.0, - "mel_fmax": None - }, - "model": { - "inter_channels": 192, - "hidden_channels": 192, - "filter_channels": 768, - "n_heads": 2, - "n_layers": 6, - "kernel_size": 3, - "p_dropout": 0.1, - "resblock": "1", - "resblock_kernel_sizes": [3,7,11], - "resblock_dilation_sizes": [[1,3,5], [1,3,5], [1,3,5]], - "upsample_rates": [10,8,2,2], - "upsample_initial_channel": 512, - "upsample_kernel_sizes": [16,16,4,4], - "n_layers_q": 3, - "use_spectral_norm": False, - "gin_channels": 256, - "ssl_dim": 256, - "n_speakers": 0, - }, - "spk":{ - "nen": 0, - "paimon": 1, - "yunhao": 2 - } -} - -pattern = re.compile(r'^[\.a-zA-Z0-9_\/]+$') - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--train_list", type=str, default="./filelists/train.txt", help="path to train list") - parser.add_argument("--val_list", type=str, default="./filelists/val.txt", help="path to val list") - parser.add_argument("--test_list", type=str, default="./filelists/test.txt", help="path to test list") - parser.add_argument("--source_dir", type=str, default="./dataset/32k", help="path to source dir") - args = parser.parse_args() - - train = [] - val = [] - test = [] - idx = 0 - spk_dict = {} - spk_id = 0 - for speaker in tqdm(os.listdir(args.source_dir)): - spk_dict[speaker] = spk_id - spk_id += 1 - wavs = ["/".join([args.source_dir, speaker, i]) for i in os.listdir(os.path.join(args.source_dir, speaker))] - for wavpath in wavs: - if not pattern.match(wavpath): - print(f"warning:文件名{wavpath}中包含非字母数字下划线,可能会导致错误。(也可能不会)") - if len(wavs) < 10: - print(f"warning:{speaker}数据集数量小于10条,请补充数据") - wavs = [i for i in wavs if i.endswith("wav")] - shuffle(wavs) - train += wavs[2:-2] - val += wavs[:2] - test += wavs[-2:] - n_speakers = len(spk_dict.keys())*2 - shuffle(train) - shuffle(val) - shuffle(test) - - print("Writing", args.train_list) - with open(args.train_list, "w") as f: - for fname in tqdm(train): - wavpath = fname - f.write(wavpath + "\n") - - print("Writing", args.val_list) - with open(args.val_list, "w") as f: - for fname in tqdm(val): - wavpath = fname - f.write(wavpath + "\n") - - print("Writing", args.test_list) - with open(args.test_list, "w") as f: - for fname in tqdm(test): - wavpath = fname - f.write(wavpath + "\n") - - config_template["model"]["n_speakers"] = n_speakers - config_template["spk"] = spk_dict - print("Writing configs/config.json") - 
with open("configs/config.json", "w") as f: - json.dump(config_template, f, indent=2) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/exceptiongroup/_exceptions.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/exceptiongroup/_exceptions.py deleted file mode 100644 index 84e2b375954e2c8cd17bef0f94dc25f0c5fcbdce..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/exceptiongroup/_exceptions.py +++ /dev/null @@ -1,282 +0,0 @@ -from __future__ import annotations - -from collections.abc import Callable, Sequence -from functools import partial -from inspect import getmro, isclass -from typing import TYPE_CHECKING, Generic, Type, TypeVar, cast, overload - -if TYPE_CHECKING: - from typing import Self - -_BaseExceptionT_co = TypeVar("_BaseExceptionT_co", bound=BaseException, covariant=True) -_BaseExceptionT = TypeVar("_BaseExceptionT", bound=BaseException) -_ExceptionT_co = TypeVar("_ExceptionT_co", bound=Exception, covariant=True) -_ExceptionT = TypeVar("_ExceptionT", bound=Exception) - - -def check_direct_subclass( - exc: BaseException, parents: tuple[type[BaseException]] -) -> bool: - for cls in getmro(exc.__class__)[:-1]: - if cls in parents: - return True - - return False - - -def get_condition_filter( - condition: type[_BaseExceptionT] - | tuple[type[_BaseExceptionT], ...] - | Callable[[_BaseExceptionT_co], bool] -) -> Callable[[_BaseExceptionT_co], bool]: - if isclass(condition) and issubclass( - cast(Type[BaseException], condition), BaseException - ): - return partial(check_direct_subclass, parents=(condition,)) - elif isinstance(condition, tuple): - if all(isclass(x) and issubclass(x, BaseException) for x in condition): - return partial(check_direct_subclass, parents=condition) - elif callable(condition): - return cast("Callable[[BaseException], bool]", condition) - - raise TypeError("expected a function, exception type or tuple of exception types") - - -class BaseExceptionGroup(BaseException, Generic[_BaseExceptionT_co]): - """A combination of multiple unrelated exceptions.""" - - def __new__( - cls, __message: str, __exceptions: Sequence[_BaseExceptionT_co] - ) -> Self: - if not isinstance(__message, str): - raise TypeError(f"argument 1 must be str, not {type(__message)}") - if not isinstance(__exceptions, Sequence): - raise TypeError("second argument (exceptions) must be a sequence") - if not __exceptions: - raise ValueError( - "second argument (exceptions) must be a non-empty sequence" - ) - - for i, exc in enumerate(__exceptions): - if not isinstance(exc, BaseException): - raise ValueError( - f"Item {i} of second argument (exceptions) is not an exception" - ) - - if cls is BaseExceptionGroup: - if all(isinstance(exc, Exception) for exc in __exceptions): - cls = ExceptionGroup - - if issubclass(cls, Exception): - for exc in __exceptions: - if not isinstance(exc, Exception): - if cls is ExceptionGroup: - raise TypeError( - "Cannot nest BaseExceptions in an ExceptionGroup" - ) - else: - raise TypeError( - f"Cannot nest BaseExceptions in {cls.__name__!r}" - ) - - instance = super().__new__(cls, __message, __exceptions) - instance._message = __message - instance._exceptions = __exceptions - return instance - - def add_note(self, note: str) -> None: - if not isinstance(note, str): - raise TypeError( - f"Expected a string, got note={note!r} (type {type(note).__name__})" - ) - - if not hasattr(self, "__notes__"): - self.__notes__: list[str] = [] - - 
self.__notes__.append(note) - - @property - def message(self) -> str: - return self._message - - @property - def exceptions( - self, - ) -> tuple[_BaseExceptionT_co | BaseExceptionGroup[_BaseExceptionT_co], ...]: - return tuple(self._exceptions) - - @overload - def subgroup( - self, __condition: type[_BaseExceptionT] | tuple[type[_BaseExceptionT], ...] - ) -> BaseExceptionGroup[_BaseExceptionT] | None: - ... - - @overload - def subgroup( - self: Self, __condition: Callable[[_BaseExceptionT_co], bool] - ) -> Self | None: - ... - - def subgroup( - self: Self, - __condition: type[_BaseExceptionT] - | tuple[type[_BaseExceptionT], ...] - | Callable[[_BaseExceptionT_co], bool], - ) -> BaseExceptionGroup[_BaseExceptionT] | Self | None: - condition = get_condition_filter(__condition) - modified = False - if condition(self): - return self - - exceptions: list[BaseException] = [] - for exc in self.exceptions: - if isinstance(exc, BaseExceptionGroup): - subgroup = exc.subgroup(__condition) - if subgroup is not None: - exceptions.append(subgroup) - - if subgroup is not exc: - modified = True - elif condition(exc): - exceptions.append(exc) - else: - modified = True - - if not modified: - return self - elif exceptions: - group = self.derive(exceptions) - group.__cause__ = self.__cause__ - group.__context__ = self.__context__ - group.__traceback__ = self.__traceback__ - return group - else: - return None - - @overload - def split( - self: Self, - __condition: type[_BaseExceptionT] | tuple[type[_BaseExceptionT], ...], - ) -> tuple[BaseExceptionGroup[_BaseExceptionT] | None, Self | None]: - ... - - @overload - def split( - self: Self, __condition: Callable[[_BaseExceptionT_co], bool] - ) -> tuple[Self | None, Self | None]: - ... - - def split( - self: Self, - __condition: type[_BaseExceptionT] - | tuple[type[_BaseExceptionT], ...] 
- | Callable[[_BaseExceptionT_co], bool], - ) -> ( - tuple[BaseExceptionGroup[_BaseExceptionT] | None, Self | None] - | tuple[Self | None, Self | None] - ): - condition = get_condition_filter(__condition) - if condition(self): - return self, None - - matching_exceptions: list[BaseException] = [] - nonmatching_exceptions: list[BaseException] = [] - for exc in self.exceptions: - if isinstance(exc, BaseExceptionGroup): - matching, nonmatching = exc.split(condition) - if matching is not None: - matching_exceptions.append(matching) - - if nonmatching is not None: - nonmatching_exceptions.append(nonmatching) - elif condition(exc): - matching_exceptions.append(exc) - else: - nonmatching_exceptions.append(exc) - - matching_group: Self | None = None - if matching_exceptions: - matching_group = self.derive(matching_exceptions) - matching_group.__cause__ = self.__cause__ - matching_group.__context__ = self.__context__ - matching_group.__traceback__ = self.__traceback__ - - nonmatching_group: Self | None = None - if nonmatching_exceptions: - nonmatching_group = self.derive(nonmatching_exceptions) - nonmatching_group.__cause__ = self.__cause__ - nonmatching_group.__context__ = self.__context__ - nonmatching_group.__traceback__ = self.__traceback__ - - return matching_group, nonmatching_group - - def derive(self: Self, __excs: Sequence[_BaseExceptionT_co]) -> Self: - eg = BaseExceptionGroup(self.message, __excs) - if hasattr(self, "__notes__"): - # Create a new list so that add_note() only affects one exceptiongroup - eg.__notes__ = list(self.__notes__) - - return eg - - def __str__(self) -> str: - suffix = "" if len(self._exceptions) == 1 else "s" - return f"{self.message} ({len(self._exceptions)} sub-exception{suffix})" - - def __repr__(self) -> str: - return f"{self.__class__.__name__}({self.message!r}, {self._exceptions!r})" - - -class ExceptionGroup(BaseExceptionGroup[_ExceptionT_co], Exception): - def __new__(cls, __message: str, __exceptions: Sequence[_ExceptionT_co]) -> Self: - return super().__new__(cls, __message, __exceptions) - - if TYPE_CHECKING: - - @property - def exceptions( - self, - ) -> tuple[_ExceptionT_co | ExceptionGroup[_ExceptionT_co], ...]: - ... - - @overload # type: ignore[override] - def subgroup( - self, __condition: type[_ExceptionT] | tuple[type[_ExceptionT], ...] - ) -> ExceptionGroup[_ExceptionT] | None: - ... - - @overload - def subgroup( - self: Self, __condition: Callable[[_ExceptionT_co], bool] - ) -> Self | None: - ... - - def subgroup( - self: Self, - __condition: type[_ExceptionT] - | tuple[type[_ExceptionT], ...] - | Callable[[_ExceptionT_co], bool], - ) -> ExceptionGroup[_ExceptionT] | Self | None: - return super().subgroup(__condition) - - @overload # type: ignore[override] - def split( - self: Self, __condition: type[_ExceptionT] | tuple[type[_ExceptionT], ...] - ) -> tuple[ExceptionGroup[_ExceptionT] | None, Self | None]: - ... - - @overload - def split( - self: Self, __condition: Callable[[_ExceptionT_co], bool] - ) -> tuple[Self | None, Self | None]: - ... - - def split( - self: Self, - __condition: type[_ExceptionT] - | tuple[type[_ExceptionT], ...] 
- | Callable[[_ExceptionT_co], bool], - ) -> ( - tuple[ExceptionGroup[_ExceptionT] | None, Self | None] - | tuple[Self | None, Self | None] - ): - return super().split(__condition) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/filelock/_unix.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/filelock/_unix.py deleted file mode 100644 index 40cec0ab189762ac9b4a0a950e65daf53bc5be16..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/filelock/_unix.py +++ /dev/null @@ -1,63 +0,0 @@ -from __future__ import annotations - -import os -import sys -from contextlib import suppress -from errno import ENOSYS -from typing import cast - -from ._api import BaseFileLock - -#: a flag to indicate if the fcntl API is available -has_fcntl = False -if sys.platform == "win32": # pragma: win32 cover - - class UnixFileLock(BaseFileLock): - """Uses the :func:`fcntl.flock` to hard lock the lock file on unix systems.""" - - def _acquire(self) -> None: - raise NotImplementedError - - def _release(self) -> None: - raise NotImplementedError - -else: # pragma: win32 no cover - try: - import fcntl - except ImportError: - pass - else: - has_fcntl = True - - class UnixFileLock(BaseFileLock): - """Uses the :func:`fcntl.flock` to hard lock the lock file on unix systems.""" - - def _acquire(self) -> None: - open_flags = os.O_RDWR | os.O_CREAT | os.O_TRUNC - fd = os.open(self.lock_file, open_flags, self._context.mode) - with suppress(PermissionError): # This locked is not owned by this UID - os.fchmod(fd, self._context.mode) - try: - fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB) - except OSError as exception: - os.close(fd) - if exception.errno == ENOSYS: # NotImplemented error - msg = "FileSystem does not appear to support flock; user SoftFileLock instead" - raise NotImplementedError(msg) from exception - else: - self._context.lock_file_fd = fd - - def _release(self) -> None: - # Do not remove the lockfile: - # https://github.com/tox-dev/py-filelock/issues/31 - # https://stackoverflow.com/questions/17708885/flock-removing-locked-file-without-race-condition - fd = cast(int, self._context.lock_file_fd) - self._context.lock_file_fd = None - fcntl.flock(fd, fcntl.LOCK_UN) - os.close(fd) - - -__all__ = [ - "has_fcntl", - "UnixFileLock", -] diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/colorLib/geometry.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/colorLib/geometry.py deleted file mode 100644 index 1ce161bfa117df1632b507d161f0dd4abb633bcc..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/colorLib/geometry.py +++ /dev/null @@ -1,143 +0,0 @@ -"""Helpers for manipulating 2D points and vectors in COLR table.""" - -from math import copysign, cos, hypot, isclose, pi -from fontTools.misc.roundTools import otRound - - -def _vector_between(origin, target): - return (target[0] - origin[0], target[1] - origin[1]) - - -def _round_point(pt): - return (otRound(pt[0]), otRound(pt[1])) - - -def _unit_vector(vec): - length = hypot(*vec) - if length == 0: - return None - return (vec[0] / length, vec[1] / length) - - -_CIRCLE_INSIDE_TOLERANCE = 1e-4 - - -# The unit vector's X and Y components are respectively -# U = (cos(α), sin(α)) -# where α is the angle between the unit vector and the positive x axis. 
-_UNIT_VECTOR_THRESHOLD = cos(3 / 8 * pi) # == sin(1/8 * pi) == 0.38268343236508984 - - -def _rounding_offset(direction): - # Return 2-tuple of -/+ 1.0 or 0.0 approximately based on the direction vector. - # We divide the unit circle in 8 equal slices oriented towards the cardinal - # (N, E, S, W) and intermediate (NE, SE, SW, NW) directions. To each slice we - # map one of the possible cases: -1, 0, +1 for either X and Y coordinate. - # E.g. Return (+1.0, -1.0) if unit vector is oriented towards SE, or - # (-1.0, 0.0) if it's pointing West, etc. - uv = _unit_vector(direction) - if not uv: - return (0, 0) - - result = [] - for uv_component in uv: - if -_UNIT_VECTOR_THRESHOLD <= uv_component < _UNIT_VECTOR_THRESHOLD: - # unit vector component near 0: direction almost orthogonal to the - # direction of the current axis, thus keep coordinate unchanged - result.append(0) - else: - # nudge coord by +/- 1.0 in direction of unit vector - result.append(copysign(1.0, uv_component)) - return tuple(result) - - -class Circle: - def __init__(self, centre, radius): - self.centre = centre - self.radius = radius - - def __repr__(self): - return f"Circle(centre={self.centre}, radius={self.radius})" - - def round(self): - return Circle(_round_point(self.centre), otRound(self.radius)) - - def inside(self, outer_circle, tolerance=_CIRCLE_INSIDE_TOLERANCE): - dist = self.radius + hypot(*_vector_between(self.centre, outer_circle.centre)) - return ( - isclose(outer_circle.radius, dist, rel_tol=_CIRCLE_INSIDE_TOLERANCE) - or outer_circle.radius > dist - ) - - def concentric(self, other): - return self.centre == other.centre - - def move(self, dx, dy): - self.centre = (self.centre[0] + dx, self.centre[1] + dy) - - -def round_start_circle_stable_containment(c0, r0, c1, r1): - """Round start circle so that it stays inside/outside end circle after rounding. - - The rounding of circle coordinates to integers may cause an abrupt change - if the start circle c0 is so close to the end circle c1's perimiter that - it ends up falling outside (or inside) as a result of the rounding. - To keep the gradient unchanged, we nudge it in the right direction. - - See: - https://github.com/googlefonts/colr-gradients-spec/issues/204 - https://github.com/googlefonts/picosvg/issues/158 - """ - start, end = Circle(c0, r0), Circle(c1, r1) - - inside_before_round = start.inside(end) - - round_start = start.round() - round_end = end.round() - inside_after_round = round_start.inside(round_end) - - if inside_before_round == inside_after_round: - return round_start - elif inside_after_round: - # start was outside before rounding: we need to push start away from end - direction = _vector_between(round_end.centre, round_start.centre) - radius_delta = +1.0 - else: - # start was inside before rounding: we need to push start towards end - direction = _vector_between(round_start.centre, round_end.centre) - radius_delta = -1.0 - dx, dy = _rounding_offset(direction) - - # At most 2 iterations ought to be enough to converge. Before the loop, we - # know the start circle didn't keep containment after normal rounding; thus - # we continue adjusting by -/+ 1.0 until containment is restored. - # Normal rounding can at most move each coordinates -/+0.5; in the worst case - # both the start and end circle's centres and radii will be rounded in opposite - # directions, e.g. 
when they move along a 45 degree diagonal: - # c0 = (1.5, 1.5) ===> (2.0, 2.0) - # r0 = 0.5 ===> 1.0 - # c1 = (0.499, 0.499) ===> (0.0, 0.0) - # r1 = 2.499 ===> 2.0 - # In this example, the relative distance between the circles, calculated - # as r1 - (r0 + distance(c0, c1)) is initially 0.57437 (c0 is inside c1), and - # -1.82842 after rounding (c0 is now outside c1). Nudging c0 by -1.0 on both - # x and y axes moves it towards c1 by hypot(-1.0, -1.0) = 1.41421. Two of these - # moves cover twice that distance, which is enough to restore containment. - max_attempts = 2 - for _ in range(max_attempts): - if round_start.concentric(round_end): - # can't move c0 towards c1 (they are the same), so we change the radius - round_start.radius += radius_delta - assert round_start.radius >= 0 - else: - round_start.move(dx, dy) - if inside_before_round == round_start.inside(round_end): - break - else: # likely a bug - raise AssertionError( - f"Rounding circle {start} " - f"{'inside' if inside_before_round else 'outside'} " - f"{end} failed after {max_attempts} attempts!" - ) - - return round_start diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/psOperators.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/psOperators.py deleted file mode 100644 index d0ef432f5243e5ed0c8fa5b02f4c147dfcb032c2..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/psOperators.py +++ /dev/null @@ -1,574 +0,0 @@ -_accessstrings = {0: "", 1: "readonly", 2: "executeonly", 3: "noaccess"} - - -class ps_object(object): - - literal = 1 - access = 0 - value = None - - def __init__(self, value): - self.value = value - self.type = self.__class__.__name__[3:] + "type" - - def __repr__(self): - return "<%s %s>" % (self.__class__.__name__[3:], repr(self.value)) - - -class ps_operator(ps_object): - - literal = 0 - - def __init__(self, name, function): - self.name = name - self.function = function - self.type = self.__class__.__name__[3:] + "type" - - def __repr__(self): - return "" % self.name - - -class ps_procedure(ps_object): - literal = 0 - - def __repr__(self): - return "" - - def __str__(self): - psstring = "{" - for i in range(len(self.value)): - if i: - psstring = psstring + " " + str(self.value[i]) - else: - psstring = psstring + str(self.value[i]) - return psstring + "}" - - -class ps_name(ps_object): - literal = 0 - - def __str__(self): - if self.literal: - return "/" + self.value - else: - return self.value - - -class ps_literal(ps_object): - def __str__(self): - return "/" + self.value - - -class ps_array(ps_object): - def __str__(self): - psstring = "[" - for i in range(len(self.value)): - item = self.value[i] - access = _accessstrings[item.access] - if access: - access = " " + access - if i: - psstring = psstring + " " + str(item) + access - else: - psstring = psstring + str(item) + access - return psstring + "]" - - def __repr__(self): - return "" - - -_type1_pre_eexec_order = [ - "FontInfo", - "FontName", - "Encoding", - "PaintType", - "FontType", - "FontMatrix", - "FontBBox", - "UniqueID", - "Metrics", - "StrokeWidth", -] - -_type1_fontinfo_order = [ - "version", - "Notice", - "FullName", - "FamilyName", - "Weight", - "ItalicAngle", - "isFixedPitch", - "UnderlinePosition", - "UnderlineThickness", -] - -_type1_post_eexec_order = ["Private", "CharStrings", "FID"] - - -def _type1_item_repr(key, value): - psstring = "" - access = _accessstrings[value.access] - 
if access: - access = access + " " - if key == "CharStrings": - psstring = psstring + "/%s %s def\n" % ( - key, - _type1_CharString_repr(value.value), - ) - elif key == "Encoding": - psstring = psstring + _type1_Encoding_repr(value, access) - else: - psstring = psstring + "/%s %s %sdef\n" % (str(key), str(value), access) - return psstring - - -def _type1_Encoding_repr(encoding, access): - encoding = encoding.value - psstring = "/Encoding 256 array\n0 1 255 {1 index exch /.notdef put} for\n" - for i in range(256): - name = encoding[i].value - if name != ".notdef": - psstring = psstring + "dup %d /%s put\n" % (i, name) - return psstring + access + "def\n" - - -def _type1_CharString_repr(charstrings): - items = sorted(charstrings.items()) - return "xxx" - - -class ps_font(ps_object): - def __str__(self): - psstring = "%d dict dup begin\n" % len(self.value) - for key in _type1_pre_eexec_order: - try: - value = self.value[key] - except KeyError: - pass - else: - psstring = psstring + _type1_item_repr(key, value) - items = sorted(self.value.items()) - for key, value in items: - if key not in _type1_pre_eexec_order + _type1_post_eexec_order: - psstring = psstring + _type1_item_repr(key, value) - psstring = psstring + "currentdict end\ncurrentfile eexec\ndup " - for key in _type1_post_eexec_order: - try: - value = self.value[key] - except KeyError: - pass - else: - psstring = psstring + _type1_item_repr(key, value) - return ( - psstring - + "dup/FontName get exch definefont pop\nmark currentfile closefile\n" - + 8 * (64 * "0" + "\n") - + "cleartomark" - + "\n" - ) - - def __repr__(self): - return "" - - -class ps_file(ps_object): - pass - - -class ps_dict(ps_object): - def __str__(self): - psstring = "%d dict dup begin\n" % len(self.value) - items = sorted(self.value.items()) - for key, value in items: - access = _accessstrings[value.access] - if access: - access = access + " " - psstring = psstring + "/%s %s %sdef\n" % (str(key), str(value), access) - return psstring + "end " - - def __repr__(self): - return "" - - -class ps_mark(ps_object): - def __init__(self): - self.value = "mark" - self.type = self.__class__.__name__[3:] + "type" - - -class ps_procmark(ps_object): - def __init__(self): - self.value = "procmark" - self.type = self.__class__.__name__[3:] + "type" - - -class ps_null(ps_object): - def __init__(self): - self.type = self.__class__.__name__[3:] + "type" - - -class ps_boolean(ps_object): - def __str__(self): - if self.value: - return "true" - else: - return "false" - - -class ps_string(ps_object): - def __str__(self): - return "(%s)" % repr(self.value)[1:-1] - - -class ps_integer(ps_object): - def __str__(self): - return repr(self.value) - - -class ps_real(ps_object): - def __str__(self): - return repr(self.value) - - -class PSOperators(object): - def ps_def(self): - obj = self.pop() - name = self.pop() - self.dictstack[-1][name.value] = obj - - def ps_bind(self): - proc = self.pop("proceduretype") - self.proc_bind(proc) - self.push(proc) - - def proc_bind(self, proc): - for i in range(len(proc.value)): - item = proc.value[i] - if item.type == "proceduretype": - self.proc_bind(item) - else: - if not item.literal: - try: - obj = self.resolve_name(item.value) - except: - pass - else: - if obj.type == "operatortype": - proc.value[i] = obj - - def ps_exch(self): - if len(self.stack) < 2: - raise RuntimeError("stack underflow") - obj1 = self.pop() - obj2 = self.pop() - self.push(obj1) - self.push(obj2) - - def ps_dup(self): - if not self.stack: - raise RuntimeError("stack underflow") - 
self.push(self.stack[-1]) - - def ps_exec(self): - obj = self.pop() - if obj.type == "proceduretype": - self.call_procedure(obj) - else: - self.handle_object(obj) - - def ps_count(self): - self.push(ps_integer(len(self.stack))) - - def ps_eq(self): - any1 = self.pop() - any2 = self.pop() - self.push(ps_boolean(any1.value == any2.value)) - - def ps_ne(self): - any1 = self.pop() - any2 = self.pop() - self.push(ps_boolean(any1.value != any2.value)) - - def ps_cvx(self): - obj = self.pop() - obj.literal = 0 - self.push(obj) - - def ps_matrix(self): - matrix = [ - ps_real(1.0), - ps_integer(0), - ps_integer(0), - ps_real(1.0), - ps_integer(0), - ps_integer(0), - ] - self.push(ps_array(matrix)) - - def ps_string(self): - num = self.pop("integertype").value - self.push(ps_string("\0" * num)) - - def ps_type(self): - obj = self.pop() - self.push(ps_string(obj.type)) - - def ps_store(self): - value = self.pop() - key = self.pop() - name = key.value - for i in range(len(self.dictstack) - 1, -1, -1): - if name in self.dictstack[i]: - self.dictstack[i][name] = value - break - self.dictstack[-1][name] = value - - def ps_where(self): - name = self.pop() - # XXX - self.push(ps_boolean(0)) - - def ps_systemdict(self): - self.push(ps_dict(self.dictstack[0])) - - def ps_userdict(self): - self.push(ps_dict(self.dictstack[1])) - - def ps_currentdict(self): - self.push(ps_dict(self.dictstack[-1])) - - def ps_currentfile(self): - self.push(ps_file(self.tokenizer)) - - def ps_eexec(self): - f = self.pop("filetype").value - f.starteexec() - - def ps_closefile(self): - f = self.pop("filetype").value - f.skipwhite() - f.stopeexec() - - def ps_cleartomark(self): - obj = self.pop() - while obj != self.mark: - obj = self.pop() - - def ps_readstring(self, ps_boolean=ps_boolean, len=len): - s = self.pop("stringtype") - oldstr = s.value - f = self.pop("filetype") - # pad = file.value.read(1) - # for StringIO, this is faster - f.value.pos = f.value.pos + 1 - newstr = f.value.read(len(oldstr)) - s.value = newstr - self.push(s) - self.push(ps_boolean(len(oldstr) == len(newstr))) - - def ps_known(self): - key = self.pop() - d = self.pop("dicttype", "fonttype") - self.push(ps_boolean(key.value in d.value)) - - def ps_if(self): - proc = self.pop("proceduretype") - if self.pop("booleantype").value: - self.call_procedure(proc) - - def ps_ifelse(self): - proc2 = self.pop("proceduretype") - proc1 = self.pop("proceduretype") - if self.pop("booleantype").value: - self.call_procedure(proc1) - else: - self.call_procedure(proc2) - - def ps_readonly(self): - obj = self.pop() - if obj.access < 1: - obj.access = 1 - self.push(obj) - - def ps_executeonly(self): - obj = self.pop() - if obj.access < 2: - obj.access = 2 - self.push(obj) - - def ps_noaccess(self): - obj = self.pop() - if obj.access < 3: - obj.access = 3 - self.push(obj) - - def ps_not(self): - obj = self.pop("booleantype", "integertype") - if obj.type == "booleantype": - self.push(ps_boolean(not obj.value)) - else: - self.push(ps_integer(~obj.value)) - - def ps_print(self): - str = self.pop("stringtype") - print("PS output --->", str.value) - - def ps_anchorsearch(self): - seek = self.pop("stringtype") - s = self.pop("stringtype") - seeklen = len(seek.value) - if s.value[:seeklen] == seek.value: - self.push(ps_string(s.value[seeklen:])) - self.push(seek) - self.push(ps_boolean(1)) - else: - self.push(s) - self.push(ps_boolean(0)) - - def ps_array(self): - num = self.pop("integertype") - array = ps_array([None] * num.value) - self.push(array) - - def ps_astore(self): - array = 
self.pop("arraytype") - for i in range(len(array.value) - 1, -1, -1): - array.value[i] = self.pop() - self.push(array) - - def ps_load(self): - name = self.pop() - self.push(self.resolve_name(name.value)) - - def ps_put(self): - obj1 = self.pop() - obj2 = self.pop() - obj3 = self.pop("arraytype", "dicttype", "stringtype", "proceduretype") - tp = obj3.type - if tp == "arraytype" or tp == "proceduretype": - obj3.value[obj2.value] = obj1 - elif tp == "dicttype": - obj3.value[obj2.value] = obj1 - elif tp == "stringtype": - index = obj2.value - obj3.value = obj3.value[:index] + chr(obj1.value) + obj3.value[index + 1 :] - - def ps_get(self): - obj1 = self.pop() - if obj1.value == "Encoding": - pass - obj2 = self.pop( - "arraytype", "dicttype", "stringtype", "proceduretype", "fonttype" - ) - tp = obj2.type - if tp in ("arraytype", "proceduretype"): - self.push(obj2.value[obj1.value]) - elif tp in ("dicttype", "fonttype"): - self.push(obj2.value[obj1.value]) - elif tp == "stringtype": - self.push(ps_integer(ord(obj2.value[obj1.value]))) - else: - assert False, "shouldn't get here" - - def ps_getinterval(self): - obj1 = self.pop("integertype") - obj2 = self.pop("integertype") - obj3 = self.pop("arraytype", "stringtype") - tp = obj3.type - if tp == "arraytype": - self.push(ps_array(obj3.value[obj2.value : obj2.value + obj1.value])) - elif tp == "stringtype": - self.push(ps_string(obj3.value[obj2.value : obj2.value + obj1.value])) - - def ps_putinterval(self): - obj1 = self.pop("arraytype", "stringtype") - obj2 = self.pop("integertype") - obj3 = self.pop("arraytype", "stringtype") - tp = obj3.type - if tp == "arraytype": - obj3.value[obj2.value : obj2.value + len(obj1.value)] = obj1.value - elif tp == "stringtype": - newstr = obj3.value[: obj2.value] - newstr = newstr + obj1.value - newstr = newstr + obj3.value[obj2.value + len(obj1.value) :] - obj3.value = newstr - - def ps_cvn(self): - self.push(ps_name(self.pop("stringtype").value)) - - def ps_index(self): - n = self.pop("integertype").value - if n < 0: - raise RuntimeError("index may not be negative") - self.push(self.stack[-1 - n]) - - def ps_for(self): - proc = self.pop("proceduretype") - limit = self.pop("integertype", "realtype").value - increment = self.pop("integertype", "realtype").value - i = self.pop("integertype", "realtype").value - while 1: - if increment > 0: - if i > limit: - break - else: - if i < limit: - break - if type(i) == type(0.0): - self.push(ps_real(i)) - else: - self.push(ps_integer(i)) - self.call_procedure(proc) - i = i + increment - - def ps_forall(self): - proc = self.pop("proceduretype") - obj = self.pop("arraytype", "stringtype", "dicttype") - tp = obj.type - if tp == "arraytype": - for item in obj.value: - self.push(item) - self.call_procedure(proc) - elif tp == "stringtype": - for item in obj.value: - self.push(ps_integer(ord(item))) - self.call_procedure(proc) - elif tp == "dicttype": - for key, value in obj.value.items(): - self.push(ps_name(key)) - self.push(value) - self.call_procedure(proc) - - def ps_definefont(self): - font = self.pop("dicttype") - name = self.pop() - font = ps_font(font.value) - self.dictstack[0]["FontDirectory"].value[name.value] = font - self.push(font) - - def ps_findfont(self): - name = self.pop() - font = self.dictstack[0]["FontDirectory"].value[name.value] - self.push(font) - - def ps_pop(self): - self.pop() - - def ps_dict(self): - self.pop("integertype") - self.push(ps_dict({})) - - def ps_begin(self): - self.dictstack.append(self.pop("dicttype").value) - - def ps_end(self): - if 
len(self.dictstack) > 2: - del self.dictstack[-1] - else: - raise RuntimeError("dictstack underflow") - - -notdef = ".notdef" -from fontTools.encodings.StandardEncoding import StandardEncoding - -ps_StandardEncoding = list(map(ps_name, StandardEncoding)) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/qu2cu/cli.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/qu2cu/cli.py deleted file mode 100644 index a07fd6dcd0d8256b4bb8db45a8d88cdf2d381ff2..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/qu2cu/cli.py +++ /dev/null @@ -1,125 +0,0 @@ -import os -import argparse -import logging -from fontTools.misc.cliTools import makeOutputFileName -from fontTools.ttLib import TTFont -from fontTools.pens.qu2cuPen import Qu2CuPen -from fontTools.pens.ttGlyphPen import TTGlyphPen -import fontTools - - -logger = logging.getLogger("fontTools.qu2cu") - - -def _font_to_cubic(input_path, output_path=None, **kwargs): - font = TTFont(input_path) - logger.info("Converting curves for %s", input_path) - - stats = {} if kwargs["dump_stats"] else None - qu2cu_kwargs = { - "stats": stats, - "max_err": kwargs["max_err_em"] * font["head"].unitsPerEm, - "all_cubic": kwargs["all_cubic"], - } - - assert "gvar" not in font, "Cannot convert variable font" - glyphSet = font.getGlyphSet() - glyphOrder = font.getGlyphOrder() - glyf = font["glyf"] - for glyphName in glyphOrder: - glyph = glyphSet[glyphName] - ttpen = TTGlyphPen(glyphSet) - pen = Qu2CuPen(ttpen, **qu2cu_kwargs) - glyph.draw(pen) - glyf[glyphName] = ttpen.glyph(dropImpliedOnCurves=True) - - font["head"].glyphDataFormat = 1 - - if kwargs["dump_stats"]: - logger.info("Stats: %s", stats) - - logger.info("Saving %s", output_path) - font.save(output_path) - - -def main(args=None): - """Convert an OpenType font from quadratic to cubic curves""" - parser = argparse.ArgumentParser(prog="qu2cu") - parser.add_argument("--version", action="version", version=fontTools.__version__) - parser.add_argument( - "infiles", - nargs="+", - metavar="INPUT", - help="one or more input TTF source file(s).", - ) - parser.add_argument("-v", "--verbose", action="count", default=0) - parser.add_argument( - "-e", - "--conversion-error", - type=float, - metavar="ERROR", - default=0.001, - help="maxiumum approximation error measured in EM (default: 0.001)", - ) - parser.add_argument( - "-c", - "--all-cubic", - default=False, - action="store_true", - help="whether to only use cubic curves", - ) - - output_parser = parser.add_mutually_exclusive_group() - output_parser.add_argument( - "-o", - "--output-file", - default=None, - metavar="OUTPUT", - help=("output filename for the converted TTF."), - ) - output_parser.add_argument( - "-d", - "--output-dir", - default=None, - metavar="DIRECTORY", - help="output directory where to save converted TTFs", - ) - - options = parser.parse_args(args) - - if not options.verbose: - level = "WARNING" - elif options.verbose == 1: - level = "INFO" - else: - level = "DEBUG" - logging.basicConfig(level=level) - - if len(options.infiles) > 1 and options.output_file: - parser.error("-o/--output-file can't be used with multile inputs") - - if options.output_dir: - output_dir = options.output_dir - if not os.path.exists(output_dir): - os.mkdir(output_dir) - elif not os.path.isdir(output_dir): - parser.error("'%s' is not a directory" % output_dir) - output_paths = [ - os.path.join(output_dir, os.path.basename(p)) for p 
in options.infiles - ] - elif options.output_file: - output_paths = [options.output_file] - else: - output_paths = [ - makeOutputFileName(p, overWrite=True, suffix=".cubic") - for p in options.infiles - ] - - kwargs = dict( - dump_stats=options.verbose > 0, - max_err_em=options.conversion_error, - all_cubic=options.all_cubic, - ) - - for input_path, output_path in zip(options.infiles, output_paths): - _font_to_cubic(input_path, output_path, **kwargs) diff --git a/spaces/cncn102/bingo1/src/components/tailwind-indicator.tsx b/spaces/cncn102/bingo1/src/components/tailwind-indicator.tsx deleted file mode 100644 index f2a1291213dd67055fcebe67fab574c8441338df..0000000000000000000000000000000000000000 --- a/spaces/cncn102/bingo1/src/components/tailwind-indicator.tsx +++ /dev/null @@ -1,14 +0,0 @@ -export function TailwindIndicator() { - if (process.env.NODE_ENV === 'production') return null - - return ( -
    -
    xs
    -
    sm
    -
    md
    -
    lg
    -
    xl
    -
    2xl
    -
    - ) -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/mpegvideoencdsp_init_arm.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/mpegvideoencdsp_init_arm.c deleted file mode 100644 index a95b5bebe9a63e6525faad730fe059cc15d41f78..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/mpegvideoencdsp_init_arm.c +++ /dev/null @@ -1,39 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include - -#include "libavutil/attributes.h" -#include "libavutil/cpu.h" -#include "libavutil/arm/cpu.h" -#include "libavcodec/avcodec.h" -#include "libavcodec/mpegvideoencdsp.h" - -int ff_pix_norm1_armv6(const uint8_t *pix, int line_size); -int ff_pix_sum_armv6(const uint8_t *pix, int line_size); - -av_cold void ff_mpegvideoencdsp_init_arm(MpegvideoEncDSPContext *c, - AVCodecContext *avctx) -{ - int cpu_flags = av_get_cpu_flags(); - - if (have_armv6(cpu_flags)) { - c->pix_norm1 = ff_pix_norm1_armv6; - c->pix_sum = ff_pix_sum_armv6; - } -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Climb Higher and Faster with Getting Over It Mod APK.md b/spaces/congsaPfin/Manga-OCR/logs/Climb Higher and Faster with Getting Over It Mod APK.md deleted file mode 100644 index e9a7bce79a5633d3dc38b096f9ac323f874a39ae..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Climb Higher and Faster with Getting Over It Mod APK.md +++ /dev/null @@ -1,118 +0,0 @@ -
    -

    Getting Over It Apk with Mods: A Guide for Beginners

    -

    If you are looking for a game that will test your patience, skill, and sanity, you might want to try Getting Over It with Bennett Foddy. This is a game that has become famous (or infamous) for its extreme difficulty and frustrating gameplay. In this game, you control a man in a pot who has to climb a mountain using only a hammer. The game has no checkpoints, no save system, and no mercy. One wrong move can send you tumbling down to the bottom, undoing all your progress.

    -

    getting over it apk with mods


    DOWNLOADhttps://urlca.com/2uOff4



    -

    However, despite (or because of) its challenge, Getting Over It has also attracted a large fan base who enjoy the game's unique concept, humorous narration, and rewarding feeling of accomplishment. Some fans have even taken it to the next level by creating mods for the game. Mods are modifications that alter the game's appearance, gameplay, or features. Some mods make the game easier, some make it harder, and some just make it more fun.

    -

    If you are curious about how to play Getting Over It with mods, this article will guide you through the process of downloading and installing them on your Android device. You will also learn about some of the mod features and how to use them, as well as some tips and tricks for getting over it.

    -

    How to Download and Install Getting Over It Apk with Mods

    -

    The first step to playing Getting Over It with mods is to download the apk file and the mod files. An apk file is an Android application package that contains all the files needed to run an app on your device. A mod file is a file that contains the modified code or assets for the game.

    -

    getting over it apk mod unlimited money
    -getting over it apk mod free download
    -getting over it apk mod latest version
    -getting over it apk mod android 1
    -getting over it apk mod no ads
    -getting over it apk mod revdl
    -getting over it apk mod hack
    -getting over it apk mod unlocked
    -getting over it apk mod rexdl
    -getting over it apk mod 2023
    -getting over it apk mod unlimited lives
    -getting over it apk mod offline
    -getting over it apk mod god mode
    -getting over it apk mod mega
    -getting over it apk mod mediafıre
    -getting over it apk mod obb
    -getting over it apk mod premium
    -getting over it apk mod full version
    -getting over it apk mod unlimited coins
    -getting over it apk mod all levels
    -getting over it apk mod easy mode
    -getting over it apk mod online
    -getting over it apk mod cheats
    -getting over it apk mod happy mod
    -getting over it apk mod 1.9.4
    -getting over it apk mod 1.9.3
    -getting over it apk mod 1.9.2
    -getting over it apk mod 1.9.1
    -getting over it apk mod 1.9.0
    -getting over it apk mod 1.8.9
    -getting over it apk mod 1.8.8
    -getting over it apk mod 1.8.7
    -getting over it apk mod 1.8.6
    -getting over it apk mod 1.8.5
    -getting over it apk mod 1.8.4
    -getting over it apk mod 1.8.3
    -getting over it apk mod 1.8.2
    -getting over it apk mod 1.8.1
    -getting over it apk mod 1.8.0
    -getting over it apk mod 1.7.9
    -getting over it apk mod 1.7.8
    -getting over it apk mod 1.7.7
    -getting over it apk mod 1.7.6
    -getting over it apk mod 1.7.5
    -getting over it apk mod 1.7.4
    -getting over it apk mod 1.7.3
    -getting over it apk mod 1.7.2
    -getting over it apk mod 1.7.1

    -

There are many websites that offer apk files and mod files for Getting Over It, but not all of them are safe or reliable. You should always be careful when downloading files from unknown sources, as they may contain viruses or malware that can harm your device or steal your data. One of the websites that we recommend is APKDone, which has a large collection of apk files and mod files for various games, including Getting Over It.

    -

    To download Getting Over It apk with mods from APKDone, follow these steps:

    -
      -
1. Go to the APKDone website in your browser.
    2. -
    3. Search for "Getting Over It" in the search bar.
    4. -
    5. Select the version of the game that you want to download. You can choose between the original version or the modded version. The modded version has some features unlocked, such as unlimited gravity, speed, scale, etc.
    6. -
    7. Click on "Download APK" or "Download MOD APK" depending on your choice.
    8. -
    9. Wait for the download to finish.
    10. -
    -

    Once you have downloaded the apk file and the mod file (if any), you need to install them on your device. To do this, you need to enable unknown sources on your device settings. This will allow you to install apps from sources other than Google Play Store.

    -

    To enable unknown sources on your device settings, follow these steps:

    -
      -
    1. Go to Settings > Security > Unknown Sources.
    2. -
    3. Toggle on the switch to allow installation of apps from unknown sources.
    4. -
    5. Confirm your choice by tapping OK.
    6. -
    -

    Now that you have enabled unknown sources, you can install Getting Over It apk with mods on your device. To do this, follow these steps:

    -
      -
    1. Locate the apk file and the mod file (if any) on your device storage. You can use a file manager app to do this.
    2. -
    3. Tap on the apk file to start the installation process. You may see a warning message that says "This type of file can harm your device". Ignore it and tap on "Install anyway".
    4. -
    5. Wait for the installation to finish. You may see a message that says "App installed". Tap on "Open" to launch the game.
    6. -
7. If you have downloaded a mod file, you need to replace the original files with the modded ones. To do this, go to Android > Data > com.noodlecake.gettingoverit > files > Managed and delete the Assembly-CSharp.dll file. Then, copy and paste the modded Assembly-CSharp.dll file from your download folder to the same location (a scripted version of this swap is shown just after this list).
    8. -
    -
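If you would rather script the file swap from step 7 than do it by hand in a file manager, the sketch below shows the idea in Python. It is only a minimal sketch under stated assumptions: it assumes a Python environment that can read shared storage on the device (for example Termux), and the storage root and download location are guesses you may need to adjust; only the game folder and the file name come from the steps above.

```python
# Minimal sketch of the Assembly-CSharp.dll swap from step 7.
# Assumptions: Python runs on the device with access to shared storage
# (e.g. via Termux); adjust STORAGE and MODDED for your setup.
import shutil
from pathlib import Path

STORAGE = Path("/storage/emulated/0")  # assumed shared-storage root
MANAGED = STORAGE / "Android/data/com.noodlecake.gettingoverit/files/Managed"
ORIGINAL = MANAGED / "Assembly-CSharp.dll"          # file the guide says to replace
MODDED = STORAGE / "Download/Assembly-CSharp.dll"   # assumed download location

# Keep a backup instead of deleting outright, so the mod can be undone later.
if ORIGINAL.exists():
    shutil.copy2(ORIGINAL, ORIGINAL.with_name(ORIGINAL.name + ".bak"))

shutil.copy2(MODDED, ORIGINAL)  # overwrite the original with the modded file
print("Replaced", ORIGINAL)
```

Note that recent Android versions restrict the Android/data folder, so a file manager with the right permissions may still be needed; the script only illustrates the copy-and-backup logic.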

    Congratulations! You have successfully installed Getting Over It apk with mods on your device. You can now enjoy the game with some extra features and options.

    -

    How to Play Getting Over It with Mods

    -

    Playing Getting Over It with mods is not much different from playing the original game. You still have to use your hammer to climb the mountain and avoid falling down. However, with mods, you can also access some additional features and options that can enhance your gameplay experience.

    -

    Some of the mod features that you can use are:

    -
      -
    • Unlimited gravity: This feature allows you to adjust the gravity level in the game. You can make it higher or lower depending on your preference. Higher gravity makes the game harder, while lower gravity makes it easier.
    • -
    • Unlimited speed: This feature allows you to increase or decrease the speed of your movement. You can make it faster or slower depending on your preference. Faster speed makes the game more exciting, while slower speed makes it more relaxing.
    • -
    • Unlimited scale: This feature allows you to change the size of your character and the objects in the game. You can make them bigger or smaller depending on your preference. Bigger size makes the game more challenging, while smaller size makes it more manageable.
    • -
    • Unlimited rotation: This feature allows you to rotate your character and the objects in the game. You can make them spin clockwise or counterclockwise depending on your preference. Rotation adds some variety and fun to the game.
    • -
    • Unlimited color: This feature allows you to change the color of your character and the objects in the game. You can choose from a range of colors depending on your preference. Color adds some customization and flair to the game.
    • -
    -

    To use these mod features, you need to access the mod menu in the game. To do this, tap on the screen with three fingers at the same time. You will see a pop-up window that shows the mod options. You can toggle them on or off by tapping on them. You can also adjust their values by sliding the bars.

    -

    Here are some tips and tricks for playing Getting Over It with mods:

    -
      -
    • Experiment with different combinations of mod features and see how they affect your gameplay. You may find some settings that suit your style or mood better than others.
    • -
    • Use moderation when using mod features. Don't make the game too easy or too hard for yourself, as that may ruin the fun and challenge of the game. Find a balance that works for you.
    • -
    • Don't forget to enjoy the game's original aspects, such as its narration, music, and graphics. Mods are meant to enhance, not replace, the game's core elements.
    • -
    • Don't get discouraged if you fail or fall down in the game. Remember that Getting Over It is a game about perseverance, resilience, and humor. Learn from your mistakes and try again.
    • -
    -

    Conclusion

    -

    Getting Over It with Bennett Foddy is a game that will test your skills, patience, and sanity like no other. It is a game that will make you rage, laugh, cry, and celebrate. It is a game that will challenge you, reward you, and inspire you.

    -

    If you want to spice up your gameplay experience, you can try playing Getting Over It with mods. Mods are modifications that alter the game's appearance, gameplay, or features. Some mods make the game easier, some make it harder, and some just make it more fun.

    -

    In this article, we have shown you how to download and install Getting Over It apk with mods on your Android device. We have also shown you how to use some of the mod features and how to play Getting Over It with mods. We hope that this article has been helpful and informative for you.

    -

    If you are ready to try out Getting Over It with mods, you can download the game and the mods from the link below. Have fun and good luck!

    -

    [Download Getting Over It Apk with Mods]

    -

    FAQs

    -

    Here are some frequently asked questions about Getting Over It with mods:

    -

    What is the difference between apk and mod?

    -

    An apk file is an Android application package that contains all the files needed to run an app on your device. A mod file is a file that contains the modified code or assets for the game. You need both files to play Getting Over It with mods.

    -

    Is it safe to download and install Getting Over It apk with mods?

    -

    It depends on where you download the files from. Some websites may offer fake or malicious files that can harm your device or steal your data. You should always be careful when downloading files from unknown sources and scan them for viruses or malware before installing them. One of the websites that we recommend is APKDone, which has a large collection of apk files and mod files for various games, including Getting Over It.
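On top of antivirus scanning, one extra sanity check you can do, if the site you download from publishes a checksum for the file, is to compare the file's SHA-256 hash before installing it. The following is a generic, hedged sketch in Python, not a feature of APKDone itself; the file name and the expected hash are placeholders you would replace with your own values.

```python
# Hedged example: compare a downloaded APK against a published SHA-256 checksum.
# The file name and EXPECTED value are placeholders, not real project data.
import hashlib
from pathlib import Path

apk = Path("getting-over-it-mod.apk")      # hypothetical downloaded file
EXPECTED = "replace-with-the-hash-published-by-the-download-site"

digest = hashlib.sha256()
with apk.open("rb") as f:
    # Read in 1 MiB chunks so large files do not need to fit in memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

if digest.hexdigest().lower() == EXPECTED.lower():
    print("Checksum matches; the file arrived intact.")
else:
    print("Checksum mismatch; do not install this file.")
```

A matching checksum only tells you the file was not corrupted or swapped in transit; it is a complement to, not a replacement for, downloading from a trusted source.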

    -

    Can I play Getting Over It with mods online or offline?

    -

    You can play Getting Over It with mods offline, as the game does not require an internet connection to run. However, you cannot play Getting Over It with mods online, as the game does not support multiplayer or online features.

    -

    Can I use mods on other platforms besides Android?

    -

    No, you cannot use mods on other platforms besides Android. Mods are only compatible with Android devices and cannot be used on iOS, Windows, Mac, or Linux devices.

    -

    Can I uninstall Getting Over It apk with mods?

    -

    Yes, you can uninstall Getting Over It apk with mods if you want to. To do this, go to Settings > Apps > Getting Over It > Uninstall. This will remove the game and the mods from your device.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Hungry Shark Evolution APK A Must-Have Game for Android Users.md b/spaces/congsaPfin/Manga-OCR/logs/Hungry Shark Evolution APK A Must-Have Game for Android Users.md deleted file mode 100644 index 62ecc4183c2f038603cb538591024b298af73071..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Hungry Shark Evolution APK A Must-Have Game for Android Users.md +++ /dev/null @@ -1,130 +0,0 @@ - -

    How to Download Hungry Shark Evolution APKPure

    -

    Do you love sharks? Do you love arcade games? Do you love eating everything in sight? If you answered yes to any of these questions, then you will love Hungry Shark Evolution, a fun and addictive game where you take control of a hungry shark and go on a feeding frenzy in a vast ocean full of prey and predators.

    -

    But what if you don't have access to Google Play Store or you want to save some storage space on your device? Don't worry, there's a solution for that. You can download APKPure, a third-party app store that offers free and safe downloads of Android apps and games.

    -

    download hungry shark evolution apkpure


    Download ✑ ✑ ✑ https://urlca.com/2uOdKv



    -

    In this article, we will show you how to download Hungry Shark Evolution APKPure on your Android device. We will also give you an overview of the features of Hungry Shark Evolution, some tips and tricks for playing it, and our personal review of the game.

    -

    Features of Hungry Shark Evolution

    -

    Hungry Shark Evolution is one of the most popular shark games on Android. It has over 100 million downloads and a 4.5-star rating on Google Play Store. It is also the official game for Shark Week, which is an annual event that celebrates these amazing creatures.

    -

    So what makes Hungry Shark Evolution so awesome? Here are some of its main features:

    -

    Sharks

    -

    The game lets you choose from over 20 different sharks to play with, each with its own unique abilities and appearance. You can start with a small Reef Shark and work your way up to bigger and more powerful sharks like the Great White, Megalodon, or even a prehistoric Mosasaurus.

    -

    You can also unlock special sharks that have special abilities or features. For example, there's a Robo Shark that can shoot lasers from its eyes, a Zombie Shark that can regenerate health by eating zombies, or a Pyro Shark that can breathe fire and fly.

    -

    You can also customize your shark with various accessories, such as hats, sunglasses, headphones, or even a crown. These accessories not only make your shark look cool, but also give you some bonuses, such as extra coins, health, or speed.

    -

    World

    -

    Hungry Shark Evolution features a huge open world that you can explore freely. You can swim in different areas of the ocean, such as the surface, the deep sea, the arctic, or the prehistoric. Each area has its own scenery, creatures, and secrets to discover.

    -

    download hungry shark evolution apk pure mod
    -download hungry shark evolution apkpure latest version
    -download hungry shark evolution apkpure for android
    -download hungry shark evolution apkpure hack
    -download hungry shark evolution apkpure unlimited money
    -download hungry shark evolution apkpure offline
    -download hungry shark evolution apkpure update
    -download hungry shark evolution apkpure free
    -download hungry shark evolution apkpure full version
    -download hungry shark evolution apkpure for pc
    -download hungry shark evolution apkpure app
    -download hungry shark evolution apkpure cheats
    -download hungry shark evolution apkpure mega mod
    -download hungry shark evolution apkpure old version
    -download hungry shark evolution apkpure 10.0.0
    -download hungry shark evolution apkpure no ads
    -download hungry shark evolution apkpure obb
    -download hungry shark evolution apkpure 2023
    -download hungry shark evolution apkpure new update
    -download hungry shark evolution apkpure online
    -download hungry shark evolution apkpure apk file
    -download hungry shark evolution apkpure unlocked all sharks
    -download hungry shark evolution apkpure xapk
    -download hungry shark evolution apkpure mod apk
    -download hungry shark evolution apkpure game
    -download hungry shark evolution apkpure android 1
    -download hungry shark evolution apkpure revdl
    -download hungry shark evolution apkpure rexdl
    -download hungry shark evolution apkpure apk mirror
    -download hungry shark evolution apkpure apk mod
    -download hungry shark evolution apkpure apk downloader
    -download hungry shark evolution apkpure apk installer
    -download hungry shark evolution apkpure apk editor
    -download hungry shark evolution apkpure apk extractor
    -download hungry shark evolution apkpure apk manager
    -download hungry shark evolution apkpure apk converter
    -download hungry shark evolution apkpure apk analyzer
    -download hungry shark evolution apkpure apk signer
    -download hungry shark evolution apkpure apk verifier
    -download hungry shark evolution apkpure apk editor pro

    -

    You can also find portals that take you to other worlds, such as a medieval castle, a pirate ship, or a space station. These worlds have their own challenges and rewards, such as treasure chests, enemies, or power-ups.

    -

    Missions

    -

    The game has over 250 missions that you can complete to earn coins and gems. These missions range from simple tasks like eating a certain number of fish, turtles, or humans, to more complex ones like finding hidden objects, performing stunts, or defeating bosses.

    -

    Some missions are specific to each shark, while others are common to all sharks. Completing missions not only gives you rewards, but also increases your shark's level and stats.

    -

    Equipment

    -

    The game also lets you equip your shark with various gadgets that enhance its abilities or give it new ones. For example, you can equip a jetpack that lets you fly in the air, a laser that lets you shoot beams from your eyes, or a magnet that attracts coins and gems.

    -

    You can also equip baby sharks that follow you around and help you eat more creatures. There are over 30 baby sharks to choose from, each with its own special ability or feature. For example, there's a baby Hammerhead that gives you extra health, a baby Killer Whale that gives you extra speed, or a baby Ghost Shark that makes you invisible.

    -

    Gold Rush

    -

    The game also has a special mode called Gold Rush that can be triggered by eating enough gold creatures. Gold creatures are marked with a yellow glow and include fish, crabs, jellyfish, and even humans.

    -

    When Gold Rush is activated, your shark becomes invincible and can eat anything in its path. It also grows bigger and faster, and earns more coins and points. Gold Rush is a great way to boost your score and have some fun.

    -

    How to Download Hungry Shark Evolution APKPure

    -

    Now that you know what Hungry Shark Evolution is all about, you might be wondering how to download it from APKPure. Don't worry, it's very easy and safe. Just follow these simple steps:

    -
      -
    1. Go to APKPure.com on your Android device's browser. You can also scan the QR code below to go directly to the website.
    2. -
    3. Search for Hungry Shark Evolution in the search bar or browse the categories until you find it.
    4. -
    5. Tap on the green Download APK button to start downloading the game file. You might see a warning message saying that this type of file can harm your device. Ignore it and tap OK.
    6. -
    7. Once the download is complete, open the file and tap Install. You might need to enable Unknown Sources in your device's settings to allow the installation of apps from sources other than Google Play Store.
    8. -
    9. Wait for the installation to finish and then tap Open to launch the game.
    10. -
    11. Enjoy playing Hungry Shark Evolution APKPure on your Android device!
    12. -
    -

    Here are some screenshots of the download process:

    - - - - - - - - - - - -
(Screenshots 1-6 of the download process were embedded here.)
    -

    Tips and Tricks for Hungry Shark Evolution

    -

    Hungry Shark Evolution is a fun and easy game to play, but it can also be challenging and addictive. Here are some tips and tricks that will help you survive longer, earn more coins and gems, and have more fun playing Hungry Shark Evolution:

    -
      -
    • Keep an eye on your shark's health bar. It will decrease over time and when you get hurt by enemies or hazards. To replenish it, you need to eat constantly. Try to eat a variety of creatures, as some give you more health than others.
    • -
    • Use the map to find your way around the ocean. You can access it by tapping on the compass icon on the top right corner of the screen. The map will show you where you are, where the portals are, where the treasure chests are, and where the enemies and hazards are.
    • -
    • Collect coins and gems as much as you can. Coins are used to buy and upgrade sharks and equipment, while gems are used to revive your shark when it dies or to unlock special sharks. You can find coins and gems by eating gold creatures, opening treasure chests, completing missions, or watching ads.
    • -
    • Use the equipment wisely. Each equipment has its own benefits and drawbacks, so choose the ones that suit your play style and your shark's abilities. For example, if you have a fast shark, you might want to equip a jetpack to fly faster. If you have a slow shark, you might want to equip a magnet to attract coins and gems.
    • -
    • Trigger Gold Rush as often as you can. Gold Rush is the best way to increase your score and earn more coins and gems. To trigger it, you need to eat enough gold creatures in a short time. You can also use some equipment or baby sharks that increase your Gold Rush meter faster.
    • -
    • Avoid enemies and hazards that are bigger or stronger than you. They will damage your shark and reduce your health. You can tell if an enemy or hazard is dangerous by looking at its color. If it's green, it's safe to eat. If it's yellow, it's risky to eat. If it's red, it's deadly to eat.
    • -
    • Explore the different areas of the ocean and find hidden secrets. There are many things to discover in Hungry Shark Evolution, such as sunken ships, ancient ruins, underwater caves, and more. Some of these secrets contain valuable rewards, such as coins, gems, or power-ups.
    • -
    -

    Review of Hungry Shark Evolution APKPure

    -

    Now that you know how to download and play Hungry Shark Evolution APKPure, you might be wondering what we think of the game. Well, here is our honest review of Hungry Shark Evolution APKPure:

    -

    We think Hungry Shark Evolution APKPure is a great game for anyone who loves sharks and arcade games. It has amazing graphics, sound effects, and gameplay that make you feel like you are really a hungry shark in a vast ocean.

    -

    We love how the game offers so much variety and content for players to enjoy. There are so many sharks to choose from, each with its own personality and abilities. There are so many areas to explore, each with its own scenery and creatures. There are so many missions to complete, each with its own challenges and rewards.

    -

    We also love how the game is easy to play but hard to master. It has simple controls that anyone can learn quickly, but it also has a lot of depth and strategy that require skill and practice. It has a lot of fun and excitement that keep us hooked for hours.

    -

    The only thing we don't like about the game is that it has too many ads that interrupt the gameplay. We understand that ads are necessary for free games, but we wish they were less frequent or less intrusive. We also wish there was an option to remove them by paying a small fee.

    -

    Overall, we give Hungry Shark Evolution APKPure a rating of 4 out of 5 stars. We think it is one of the best shark games on Android and we highly recommend it to anyone who likes sharks or arcade games.

    -

    Conclusion

    -

    In conclusion, Hungry Shark Evolution APKPure is a fun and addictive game where you take control of a hungry shark and go on a feeding frenzy in a vast ocean full of prey and predators.

    -

    You can download Hungry Shark Evolution APKPure from APKPure.com, a third-party app store that offers free and safe downloads of Android apps and games.

    -

    You can also enjoy the features of Hungry Shark Evolution APKPure, such as the different sharks, the open world, the missions, the equipment, and the gold rush.

    -

    You can also use our tips and tricks for Hungry Shark Evolution APKPure to survive longer, earn more coins and gems, and have more fun playing the game.

    -

    You can also read our review of Hungry Shark Evolution APKPure to see what we think of the game.

    -

    We hope you enjoyed this article and found it helpful. If you have any feedback or questions about Hungry Shark Evolution APKPure, please feel free to leave a comment below. We would love to hear from you.

    -

    Thank you for reading and happy shark hunting!

    -

    FAQs

    -

    Here are some frequently asked questions about Hungry Shark Evolution APKPure:

    -
      -
    1. Is Hungry Shark Evolution APKPure safe to download and install?
    2. -

      Yes, Hungry Shark Evolution APKPure is safe to download and install. APKPure is a reputable and trusted app store that verifies the security and authenticity of all the apps and games it offers. You can download Hungry Shark Evolution APKPure without any worries.

      -
    3. What are the differences between Hungry Shark Evolution APKPure and Hungry Shark Evolution Google Play Store?
    4. -

      There are not many differences between Hungry Shark Evolution APKPure and Hungry Shark Evolution Google Play Store. They are both the same game with the same features and content. The only difference is that Hungry Shark Evolution APKPure is downloaded from APKPure.com, while Hungry Shark Evolution Google Play Store is downloaded from Google Play Store.

      -
    5. What are the requirements for playing Hungry Shark Evolution APKPure?
    6. -

      The requirements for playing Hungry Shark Evolution APKPure are not very high. You need an Android device with Android 4.1 or higher, at least 100 MB of free storage space, and a stable internet connection.

      -
    7. How can I update Hungry Shark Evolution APKPure?
    8. -

      You can update Hungry Shark Evolution APKPure by visiting APKPure.com and downloading the latest version of the game. You can also enable the auto-update feature in the APKPure app settings to get notified and updated automatically when a new version is available.

      -
    9. How can I contact the developers of Hungry Shark Evolution?
    10. -

      You can contact the developers of Hungry Shark Evolution by visiting their official website, Facebook page, Twitter account, or YouTube channel. You can also send them an email at support@fgol.co.uk or use the in-game feedback option.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Play Dragon Ball Z Shin Budokai 7 with PPSSPP The Complete Tutorial.md b/spaces/congsaPfin/Manga-OCR/logs/Play Dragon Ball Z Shin Budokai 7 with PPSSPP The Complete Tutorial.md deleted file mode 100644 index 1b3b988ff5bdecb81e8d4356aee318337556ae6b..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Play Dragon Ball Z Shin Budokai 7 with PPSSPP The Complete Tutorial.md +++ /dev/null @@ -1,130 +0,0 @@ -
    -

    Dragon Ball Z Shin Budokai 7 PPSSPP Download Romsmania: A Guide for Fans of Anime and Fighting Games

    -

    If you are a fan of dragon ball z, one of the most popular anime series of all time, and you love fighting games, then you might be interested in dragon ball z shin budokai 7 ppsspp, a fan-made mod of the original dragon ball z shin budokai game for the PlayStation Portable (PSP). In this article, we will tell you everything you need to know about this game, its features, gameplay, requirements, review, and download link. So, let's get started!

    -

    What is dragon ball z shin budokai 7 ppsspp?

    -

    Dragon ball z shin budokai 7 ppsspp is a modded version of dragon ball z shin budokai, a fighting game based on the dragon ball z anime series. The game was released for the PSP in 2006 and was developed by Dimps and published by Atari. The game features a story mode that follows the events of the anime from the Saiyan Saga to the Majin Buu Saga, as well as a versus mode, a tournament mode, a practice mode, and an item shop. The game also has a wireless multiplayer mode that allows up to two players to battle each other using their PSP devices.

    -

    dragon ball z shin budokai 7 ppsspp download romsmania


    Download Ziphttps://urlca.com/2uOevN



    -

    The modded version of the game, dragon ball z shin budokai 7 ppsspp, adds many new features and improvements to the original game. The mod was created by fans of the anime and the game, who wanted to make it more fun and challenging. The mod includes many new characters, stages, skills, attacks, transformations, and modes from the latest dragon ball z series, such as dragon ball super and dragon ball heroes. The mod also enhances the graphics, sound effects, music, and gameplay of the original game.

    -

    Why is it popular among fans of the anime and fighting games?

    -

    Dragon ball z shin budokai 7 ppsspp is popular among fans of the anime and fighting games because it offers them a chance to experience the epic battles and adventures of their favorite characters from the dragon ball z universe. The game has a large roster of characters from different sagas and timelines of the anime, such as Goku, Vegeta, Gohan, Piccolo, Frieza, Cell, Buu, Beerus, Whis, Jiren, Broly, Zamasu, Goku Black, Vegito, Gogeta, Kefla, Caulifla, Kale, Hit, Cabba, Frost, Android 17, Android 18, Trunks, Gotenks, Bardock, Raditz, Nappa, Turles, Cooler, Janemba, Bojack, Omega Shenron, and many more. The game also has many different stages from different planets and dimensions of the anime, such as Earth, Namek, Planet Vegeta, King Kai's Planet, Supreme Kai's Planet, Hell, Heaven, Future Earth, Universe 6, Universe 11, Tournament of Power Arena, and many more. The game also has many different skills and attacks from different forms and techniques of the characters, such as Kamehameha, Galick Gun, Final Flash, Spirit Bomb, Big Bang Attack, Masenko, Special Beam Cannon, Destructo Disc, Death Beam, Solar Flare, Instant Transmission, Kaio-Ken, Super Saiyan, Super Saiyan God, Super Saiyan Blue, Ultra Instinct, Fusion Dance, Potara Earrings, and many more. The game also has many different modes that add variety and challenge to the gameplay, such as story mode, arcade mode, survival mode, time attack mode, team battle mode, and dragon ball collection mode.

    -

    How does the game play on the PSP emulator?

    -

    Dragon ball z shin budokai 7 ppsspp is a game that can be played on the PSP emulator, which is a software that allows you to run PSP games on your PC, Android, or iOS devices. The PSP emulator that is recommended for playing this game is PPSSPP, which is a free and open-source emulator that supports many PSP games and features. PPSSPP can run the game smoothly and with high-quality graphics and sound, as long as you have a compatible device and a good configuration.

    -

    The game plays on the PSP emulator like any other fighting game, with a simple and intuitive control scheme that uses the buttons and analog sticks of the PSP device or the keyboard and mouse of the PC device. The game has a 2D fighting system that allows you to move your character left and right, jump, crouch, dash, guard, and perform various attacks and combos. The game also has a 3D fighting system that allows you to move your character in any direction, fly, teleport, and perform more advanced attacks and combos. The game also has a ki system that allows you to charge your energy, use special skills and transformations, and unleash ultimate attacks. The game also has a dragon rush system that allows you to initiate a cinematic sequence of attacks and counters with your opponent.

    -

    What are the minimum and recommended requirements for playing the game on the PSP emulator?

    -

    The minimum and recommended requirements for playing dragon ball z shin budokai 7 ppsspp on the PSP emulator are as follows:

    - - - - - - - - - - - - - - - - - - - - - -
    DeviceMinimum RequirementsRecommended Requirements
    PC- Windows 7 or higher - 2 GB RAM - 2 GHz dual-core CPU - OpenGL 2.0 compatible GPU - DirectX 9.0c compatible sound card - 1 GB free disk space - PPSSPP emulator - Dragon ball z shin budokai 7 ppsspp ISO file- Windows 10 or higher - 4 GB RAM or more - 3 GHz quad-core CPU or better - OpenGL 3.0 compatible GPU or better - DirectX 11 compatible sound card or better - 2 GB free disk space or more - PPSSPP emulator - Dragon ball z shin budokai 7 ppsspp ISO file
    Android- Android 4.1 or higher - 1 GB RAM - 1 GHz dual-core CPU - OpenGL ES 2.0 compatible GPU - 1 GB free storage space - PPSSPP emulator - Dragon ball z shin budokai 7 ppsspp ISO file- Android 6.0 or higher - 2 GB RAM or more - 2 GHz quad-core CPU or better - OpenGL ES 3.0 compatible GPU or better - 2 GB free storage space or more - PPSSPP emulator - Dragon ball z shin budokai 7 ppsspp ISO file
    iOS- iOS 9.0 or higher - iPhone 5s or higher - iPad Air or higher - iPod Touch 6th generation or higher - PPSSPP emulator (jailbroken device required) - Dragon ball z shin budokai 7 ppsspp ISO file- iOS 11.0 or higher - iPhone 6s or higher - iPad Pro or higher - iPod Touch 7th generation or higher - PPSSPP emulator (jailbroken device required) - Dragon ball z shin budokai 7 ppsspp ISO file
    -

    How to install and configure the game and the emulator?

    -

    To install and configure dragon ball z shin budokai 7 ppsspp and the PPSSPP emulator on your device, you need to follow these steps:

    -
      -
    1. Download the PPSSPP emulator from its official website (https://www.ppsspp.org/) or from the Google Play Store (for Android devices) or from Cydia (for jailbroken iOS devices).
    2. -
3. Download the dragon ball z shin budokai 7 ppsspp ISO file from its download link (https://romsmania.cc/roms/playstation-portable/dragon-ball-z-shin-budokai-another-road-275007) or from any other trusted source.
    4. -
5. Extract the ISO file from the zip file using any file extractor app (such as WinRAR, 7-Zip, ZArchiver, etc.); a scripted version of this step and the next is shown just after this list.
    6. -
    7. Copy the ISO file to a folder on your device where you can easily access it (such as Downloads, Documents, PSP, etc.).
    8. -
    9. Launch the PPSSPP emulator on your device and tap on the "Games" tab.
    10. -
    11. Navigate to the folder where you copied the ISO file and tap on it to start the game.
    12. -
    13. Enjoy playing dragon ball z shin budokai 7 ppsspp on your device!
    14. -
    -
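If you prepare the files on a PC, or have a Python environment on your device, the extract-and-copy part of the steps above (pulling the ISO out of the zip and placing it in an easy-to-find folder) can also be done with a short script. This is only a sketch under assumed file names and folder paths; adjust them to match your actual download and to wherever you point the emulator's "Games" tab.

```python
# Sketch of the extract-and-copy steps: pull the .iso out of the downloaded
# zip and place it in a folder the PPSSPP "Games" tab can browse to.
# The archive name and target folder are assumptions; change them as needed.
import shutil
import zipfile
from pathlib import Path

archive = Path("dragon-ball-z-shin-budokai-another-road.zip")  # assumed zip name
target_dir = Path("PSP_GAMES")                                 # assumed destination folder
target_dir.mkdir(exist_ok=True)

with zipfile.ZipFile(archive) as zf:
    for name in zf.namelist():
        if name.lower().endswith(".iso"):
            zf.extract(name, target_dir)       # extract step
            print("Extracted", target_dir / name)

# Copy step: flatten any nested folders so the ISO sits directly in target_dir.
for iso in target_dir.rglob("*.iso"):
    final = target_dir / iso.name
    if iso != final:
        shutil.move(str(iso), str(final))
        print("Moved to", final)
```

Nothing here is specific to this game; it is the same unzip-then-move routine you would use for any ISO the emulator should see.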

    You can also customize the settings of the game and the emulator according to your preferences and device specifications. You can change the graphics, sound, controls, system, and network settings of the emulator by tapping on the "Settings" tab. You can also change the difficulty, language, sound, and display settings of the game by tapping on the "Options" tab in the game menu.

    -

    dragon ball z shin budokai 7 psp iso download coolrom
    -dbz shin budokai 7 ppsspp highly compressed romsmania
    -dragon ball z shin budokai 7 mod ppsspp free download
    -how to download dragon ball z shin budokai 7 on ppsspp
    -dragon ball z shin budokai 7 ppsspp cheats codes romsmania
    -dragon ball z shin budokai 7 ppsspp android download apk
    -dragon ball z shin budokai 7 ppsspp settings for best performance
    -dragon ball z shin budokai 7 ppsspp save data download
    -dragon ball z shin budokai 7 ppsspp gameplay video
    -dragon ball z shin budokai 7 ppsspp emulator for pc
    -dragon ball z shin budokai 7 ppsspp gold download link
    -dragon ball z shin budokai 7 ppsspp english version romsmania
    -dragon ball z shin budokai 7 ppsspp multiplayer mode
    -dragon ball z shin budokai 7 ppsspp all characters unlocked
    -dragon ball z shin budokai 7 ppsspp review and rating
    -dragon ball z shin budokai 7 ppsspp iso file size
    -dragon ball z shin budokai 7 ppsspp system requirements
    -dragon ball z shin budokai 7 ppsspp online play
    -dragon ball z shin budokai 7 ppsspp new features and updates
    -dragon ball z shin budokai 7 ppsspp best mods and hacks
    -dragon ball z shin budokai 7 ppsspp cso download romsmania
    -dragon ball z shin budokai 7 ppsspp texture pack download
    -dragon ball z shin budokai 7 ppsspp tips and tricks
    -dragon ball z shin budokai 7 ppsspp story mode walkthrough
    -dragon ball z shin budokai 7 ppsspp comparison with other dbz games
    -dragon ball z shin budokai 7 ppsspp download for ios devices
    -dbz shin budokai 7 psp iso google drive download link
    -dbz shin budokai 7 psp iso mediafire download link
    -dbz shin budokai 7 psp iso mega download link
    -dbz shin budokai 7 psp iso zip file download romsmania
    -dbz shin budokai 7 psp iso full game download free
    -dbz shin budokai 7 psp iso no password required
    -dbz shin budokai 7 psp iso latest version download
    -dbz shin budokai 7 psp iso direct download without ads
    -dbz shin budokai 7 psp iso working on all devices
    -dbz shin budokai 7 psp iso original game not modded
    -dbz shin budokai 7 psp iso best graphics quality
    -dbz shin budokai 7 psp iso easy installation guide
    -dbz shin budokai 7 psp iso offline play mode
    -dbz shin budokai 7 psp iso support controller and keyboard input

    -

    What are the pros and cons of dragon ball z shin budokai 7 ppsspp?

    -

    Dragon ball z shin budokai 7 ppsspp is a game that has many pros and cons that you should consider before playing it. Here are some of them:

    -

    Pros

    -
      -
    • The game has a large and diverse roster of characters from different sagas and timelines of the dragon ball z universe.
    • -
    • The game has many new and improved features and modes that make it more fun and challenging than the original game.
    • -
    • The game has high-quality graphics and sound effects that enhance the immersion and excitement of the gameplay.
    • -
    • The game has a simple and intuitive control scheme that makes it easy to play on any device.
    • -
    • The game has a wireless multiplayer mode that allows you to battle with your friends or other players online.
    • -
    • The game is free to download and play on any device that supports the PSP emulator.
    • -
    -

    Cons

    -
      -
    • The game is not an official product of Dimps or Atari, but a fan-made mod that may have some bugs and glitches.
    • -
    • The game may not run smoothly or properly on some devices that do not meet the minimum or recommended requirements.
    • -
    • The game may require some configuration and optimization of the emulator settings to achieve the best performance and quality.
    • -
    • The game may have some compatibility issues with some versions or updates of the emulator or the device software.
    • -
    • The game may have some legal issues with some regions or countries that do not allow downloading or playing pirated or modded games.
    • -
    -

    How does it compare to other dragon ball z games and fighting games on the PSP emulator?

    -

    Dragon ball z shin budokai 7 ppsspp is a game that compares favorably to other dragon ball z games and fighting games on the PSP emulator. The game has more content, features, modes, characters, stages, skills, attacks, transformations, and options than most of the other games in its genre. The game also has better graphics, sound effects, music, gameplay, and controls than most of the other games in its genre. The game also has a higher replay value, challenge level, and fun factor than most of the other games in its genre. The game also has a loyal fan base, community support, and regular updates than most of the other games in its genre. The game is one of the best dragon ball z games and fighting games on the PSP emulator that you can play right now.

    -

    Conclusion

    -

In conclusion, dragon ball z shin budokai 7 ppsspp is a fan-made mod of dragon ball z shin budokai that adds many new features and improvements to the original game. It is based on the dragon ball z anime series and offers a story mode, a versus mode, a tournament mode, a practice mode, an item shop, and a wireless multiplayer mode. The roster covers characters from across the anime's sagas and timelines, including Goku, Vegeta, Gohan, Piccolo, Frieza, Cell, Buu, Beerus, Whis, Jiren, Broly, Zamasu, Goku Black, Vegito, Gogeta, Kefla, Caulifla, Kale, Hit, Cabba, Frost, Android 17, Android 18, Trunks, Gotenks, Bardock, Raditz, Nappa, Turles, Cooler, Janemba, Bojack, Omega Shenron, and many more. The stages span planets and dimensions of the anime such as Earth, Namek, Planet Vegeta, King Kai's Planet, Supreme Kai's Planet, Hell, Heaven, Future Earth, Universe 6, Universe 11, and the Tournament of Power Arena, while the move list includes signature skills and transformations such as Kamehameha, Galick Gun, Final Flash, Spirit Bomb, Big Bang Attack, Masenko, Special Beam Cannon, Destructo Disc, Death Beam, Solar Flare, Instant Transmission, Kaio-Ken, Super Saiyan, Super Saiyan God, Super Saiyan Blue, Ultra Instinct, Fusion Dance, and Potara Earrings. The game combines a 2D and a 3D fighting system that lets you move and fight in any direction, a ki system for charging energy, using special skills and transformations, and unleashing ultimate attacks, and a dragon rush system that triggers cinematic sequences of attacks and counters with your opponent.

    -

    Dragon ball z shin budokai 7 ppsspp is a game that can be played on the PSP emulator, which is a software that allows you to run PSP games on your PC, Android, or iOS devices. The PSP emulator that is recommended for playing this game is PPSSPP, which is a free and open-source emulator that supports many PSP games and features. PPSSPP can run the game smoothly and with high-quality graphics and sound, as long as you have a compatible device and a good configuration. The game plays on the PSP emulator like any other fighting game, with a simple and intuitive control scheme that uses the buttons and analog sticks of the PSP device or the keyboard and mouse of the PC device.

    -

    Dragon ball z shin budokai 7 ppsspp is a game that has many pros and cons that you should consider before playing it. The game has more content, features, modes, characters, stages, skills, attacks, transformations, and options than most of the other games in its genre. The game also has better graphics, sound effects, music, gameplay, and controls than most of the other games in its genre. The game also has a higher replay value, challenge level, and fun factor than most of the other games in its genre. The game also has a loyal fan base, community support, and regular updates than most of the other games in its genre. The game is one of the best dragon ball z games and fighting games on the PSP emulator that you can play right now.

    -

    However, the game is not an official product of Dimps or Atari, but a fan-made mod that may have some bugs and glitches. The game may not run smoothly or properly on some devices that do not meet the minimum or recommended requirements. The game may require some configuration and optimization of the emulator settings to achieve the best performance and quality. The game may have some compatibility issues with some versions or updates of the emulator or the device software. The game may have some legal issues with some regions or countries that do not allow downloading or playing pirated or modded games.

    -

    Therefore, if you are a fan of dragon ball z and fighting games, and you want to experience the epic battles and adventures of your favorite characters from the dragon ball z universe, then you should definitely try dragon ball z shin budokai 7 ppsspp on your device. The game will give you hours of fun and entertainment, as well as challenge and satisfaction. The game is free to download and play on any device that supports the PSP emulator. You can download the game from its download link (https://romsmania.cc/roms/playstation-portable/dragon-ball-z-shin-budokai-another-road-275007) or from any other trusted source. You can also follow the instructions given in this article to install and configure the game and the emulator on your device.

    -

    We hope you enjoyed this article and found it helpful and informative. If you have any questions or feedback about the game or the article, please feel free to leave them in the comments section below. Thank you for reading!

    -

    FAQs

    -

    Here are some frequently asked questions about dragon ball z shin budokai 7 ppsspp:

    -

    Q: Is dragon ball z shin budokai 7 ppsspp an official game?

    -

    A: No, dragon ball z shin budokai 7 ppsspp is not an official game, but a fan-made mod of dragon ball z shin budokai, a fighting game based on the dragon ball z anime series.

    -

    Q: How can I play dragon ball z shin budokai 7 ppsspp on my device?

    -

    A: You can play dragon ball z shin budokai 7 ppsspp on your device by using the PSP emulator, which is a software that allows you to run PSP games on your PC, Android, or iOS devices. The PSP emulator that is recommended for playing this game is PPSSPP, which is a free and open-source emulator that supports many PSP games and features. You also need to download the dragon ball z shin budokai 7 ppsspp ISO file from its download link or from any other trusted source. You can follow the steps given in this article to install and configure the game and the emulator on your device.

    -

    Q: What are the differences between dragon ball z shin budokai 7 ppsspp and dragon ball z shin budokai?

    -

    A: Dragon ball z shin budokai 7 ppsspp is a modded version of dragon ball z shin budokai, which adds many new features and improvements to the original game. The mod includes many new characters, stages, skills, attacks, transformations, and modes from the latest dragon ball z series, such as dragon ball super and dragon ball heroes. The mod also enhances the graphics, sound effects, music, and gameplay of the original game.

    -

    Q: Is dragon ball z shin budokai 7 ppsspp safe to download and play?

    -

    A: Dragon ball z shin budokai 7 ppsspp is safe to download and play as long as you download it from a trusted source and scan it for viruses or malware before installing it on your device. You should also make sure that your device meets the minimum or recommended requirements for playing the game on the PSP emulator. You should also be aware of the legal issues that may arise from downloading or playing pirated or modded games in some regions or countries.

    -

    Q: How can I get more updates and support for dragon ball z shin budokai 7 ppsspp?

    -

    A: You can get more updates and support for dragon ball z shin budokai 7 ppsspp by following its official Facebook page (https://www.facebook.com/DBZSB7/) or its YouTube channel (https://www.youtube.com/channel/UCiXyfZPwqRKXx69c-5n-MpA). You can also join its Discord server (https://discord.gg/4JNvzGk) or its Reddit community (https://www.reddit.com/r/dbzsb7/) to interact with other fans and players of the game. You can also contact the developers of the mod by sending them an email (dbzsb7@gmail.com) or a message on their social media accounts.

    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Astebreed Definitive Edition Download] [hack] - How to Unlock All Features and Modes.md b/spaces/contluForse/HuggingGPT/assets/Astebreed Definitive Edition Download] [hack] - How to Unlock All Features and Modes.md deleted file mode 100644 index 730739240a7431a43cc3d9f0fe13678a96bcff50..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Astebreed Definitive Edition Download] [hack] - How to Unlock All Features and Modes.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Astebreed: Definitive Edition Download] [hack]


    Download Zip - https://ssurll.com/2uzvR9



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/contluForse/HuggingGPT/assets/Cmo crear un personaje principal de una obra literaria memorable y verosmil.md b/spaces/contluForse/HuggingGPT/assets/Cmo crear un personaje principal de una obra literaria memorable y verosmil.md deleted file mode 100644 index 4214f99f48b4b5c717cb03ea704159c6d60b22c7..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Cmo crear un personaje principal de una obra literaria memorable y verosmil.md +++ /dev/null @@ -1,36 +0,0 @@ -
    -

When we speak of a character, we are referring to the human, animal, or other individuals, generally fictional, fantastic, or imaginary in nature, who take part in the plot of an artistic work, such as a cinematic narrative, a painting, or a literary story.

    -

main character of a literary work


    Download ✪✪✪ https://ssurll.com/2uzxGX



    -

    Characters are created to inhabit the possible world of the work of art, more or less inspired by the beings we find in the real world, and the plot of such narratives usually revolves around their adventures and misadventures. In cases such as film or theater, they are also embodied by actors or represented through illustrations, three-dimensional figures, and so on.

    -

    In this way, the reader or viewer of a work must accept the existence of the characters as if they were real, even when they are mythological, religious, or fantastic beings, in order to follow them through their story.

    -

    There are several types of characters in a short story, a novel, or any other narrative work. Their classification varies according to their degree of participation, the psychological characterization given by the author, their evolution within the plot, and so on.

    -

    The protagonist, therefore, carries out the most important actions of the story. Without their participation, the plot would make no sense. The protagonist's opponent is known as the antagonist, the character who places obstacles in the way of the main character's goals. A work can have several protagonists and antagonists.

    -

    -

    A clear example of the difference between protagonist and antagonist can be seen in the hugely successful literary saga built around the figure of Harry Potter. The writer J.K. Rowling made the young wizard who gives the work its title the main character: a boy who studies magic at Hogwarts School and faces countless adventures alongside his inseparable friends Ron and Hermione.

    -

    However, it is important to note that in literature, as well as in television series and films, there are also what are known as secondary characters. They play a smaller part in the story being told, but at certain moments they also take on special relevance; at those moments, the reason for their appearance in the work becomes clear.

    -

    The actor who plays the main character of a work is also called the protagonist. In that case, the concept applies to the real person rather than to the fictional character.

    -

    It should be said that the characters considered principal within a dramatic work are the protagonist and the antagonist. It is worth mentioning that the characters who confront each other in a story or narrative are known as antagonists and protagonists.

    -

    The main and secondary characters of Don Quixote are what keep the novel moving forward, which is why in this unPROFESOR lesson we want to explain their main characteristics.

    -

    The main characters are the ones who make it possible for the plot of the work to advance. Without them there would be no novel, which is why Cervantes chose the traits of each of these figures carefully so that his great masterpiece would make sense. Here is an overview of the protagonists of Don Quixote.

    -

    In the work he is called by different names. His knightly name is Don Quixote de La Mancha, but at the end of its pages the novel reveals the protagonist's true name: Alonso Quijano. The work begins with this main character at home, in a small village in an unidentified part of La Mancha. Don Quixote has gone mad after reading so many novels of chivalry, and so he decides to set out and live his own adventure.

    -

    He is a man of about 50 who lives in his own fantasy world. He leaves the village with some old weapons kept at home as an inheritance from his grandparents, a very old suit of armor, and his horse Rocinante, who will accompany him throughout the entire work. He is, without a doubt, the main character of the work that bears his name.

    -

    Rocinante is another of the characters of Don Quixote. It is unusual for a horse to be among the main characters of a novel, but the truth is that this animal is present in every single one of the adventures of Don Quixote and Sancho Panza. He is the hidalgo's horse and, although he walks quite slowly, he is very loyal to his master. He is always exhausted and physically as skinny as his master.

    -

    The truth is that Dulcinea is not physically present in the work very often, but she is a main character because she is always on Don Quixote's lips. She is a very beautiful farm girl with whom the hidalgo fell in love. Her real name is Aldonza Lorenzo, but Don Quixote decides to call her Dulcinea del Toboso, because he considers it a name better suited to a novel of chivalry.

    -

    The secondary characters are not the ones who carry the weight of the work, but they are the ones who sustain it. Without their appearances, the protagonists would be left without a story. That is why it is so important to know the secondary characters of Don Quixote and the main characteristics that define them.

    -

    Now you know the main and secondary characters of Don Quixote and have seen some characteristics of each of them. If you are interested in learning more about this book or a similar one, feel free to consult our reading section.

    -

    That is why, in order to analyze a work and understand it fully, you need to know what a character is, their hierarchy, their function, their physical and psychological identity, as well as their role in the whole narrative web with respect to the rest of the characters.

    -

    It should be made clear that in the case of Harry Potter and other long sagas, a character takes on more or less prominence in different installments, as with Draco Malfoy, who becomes more important toward the end of the saga.

    -

    The main character of a narrative is the protagonist. This is the character with the most relevance to the actions of a story; (almost) everything that happens in a narrative happens because of them and for them.

    -

    The main character is also the most fully developed in a story, the one we know best inside and out and, generally, the one we connect with most, because the story is all about them. They are also the character who evolves the most, who has the most motivations, and who has the most to gain, or lose, in everything at stake in the story.

    -

    After the main character come the secondary characters, who help the main character fulfill their mission or prevent them from doing so. These are the characters who appear often in the story and who manage to pull the strings of the plot, though not as much as the main character, of course.

    -

    Well, as you now know, these are characters of lesser importance than the previous ones, but who, at some point in the plot, help the main character, or the secondary ones, achieve their goal, or prevent them from doing so.

    -

    It is important to note that, in general, the more important a character is, the more dynamic they tend to be. Thus, in the modern novel, the main character is usually very dynamic, and the tertiary characters very static.

    -

    I hope this has helped you understand and appreciate literary works more deeply, beyond saying that you "love" or "hate" a character. These are strategies I use myself in all my literary analyses, so I hope you also put them into practice and enjoy your reading!

    -

    On February 15, 1929, the Venezuelan writer and politician Rómulo Gallegos published one of his best-known novels, Doña Bárbara, so this Monday we invite you to identify what the main characters of this literary work represent.

    -

    Cruelty, dictatorship, corruption, barbarism, injustice, mestizaje, class struggle, female empowerment, and progress are some of the themes reflected in the experiences and traits of the characters of the novel Doña Bárbara, which is why it is worth identifying the importance and meaning of the three main figures of the text.

    -

    The title of the novel, Doña Bárbara, alludes to the main protagonist of the work, a name with which Gallegos refers to barbarism, given her arbitrary, violent, and malicious behavior.

    -

    Thus, Marisela represents the transition from barbarism and wildness toward progress and development. In this literary work, Gallegos introduces this character as the symbol of the evolution from the primitive and the savage toward refinement and the ideal of civilization.

    -

    One of the most important classics in the history of literature is Homer's Odyssey. It is an epic poem published after the Iliad that narrates the adventures of Odysseus (Ulysses, in the Spanish translation) as he tries to return to Ithaca after the Trojan War. In this unPROFESOR lesson we want to explore this literary work in depth, and so we are going to analyze the characters of the Odyssey, both main and secondary, who are essential to the development of the plot. Dive in and discover one of the essential classics of world literature.

    -

    If we talk about the characters of the Odyssey, we must make special mention of the protagonist of the work: Odysseus. We already met this hero through his role in the Iliad, Homer's account of everything that happened in the Trojan War. Thanks to that poem we know that Odysseus was one of the most important Greek heroes of that war and that, once it was over, he wanted to return to Ithaca, the land he ruled.

    -
    -
    \ No newline at end of file diff --git a/spaces/cr7-gjx/Suspicion-Agent-Demo/util.py b/spaces/cr7-gjx/Suspicion-Agent-Demo/util.py deleted file mode 100644 index e9837ba1f053e0711943461e1c4ad13414f64806..0000000000000000000000000000000000000000 --- a/spaces/cr7-gjx/Suspicion-Agent-Demo/util.py +++ /dev/null @@ -1,127 +0,0 @@ -import json -import os -from pathlib import Path -from typing import Any, Dict - -from model import load_embedding_from_config, load_llm_from_config -from setting import Settings -import logging -from pythonjsonlogger import jsonlogger - - - -def verify_openai_token(token: str) -> str: - import openai - - openai.api_key = token - try: - openai.Completion.create( - model="text-ada-001", - prompt="Hello", - temperature=0, - max_tokens=10, - top_p=1, - frequency_penalty=0.5, - presence_penalty=0, - ) - return "OK" - except Exception as e: - return str(e) - -def get_logging(logger_name,content=''): - logger = logging.getLogger(logger_name) - if not logger.handlers: - logger.setLevel(logging.DEBUG) - logHandlerJson = logging.FileHandler('./memory_data/'+logger_name+'.json') - formatter = jsonlogger.JsonFormatter() - logHandlerJson.setFormatter(formatter) - - # handler = logging.FileHandler('./memory_data/'+logger_name+'.txt') - # handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')) - logger.addHandler(logHandlerJson) - logger.info(content) - - -def verify_model_initialization(settings: Settings) -> str: - try: - load_llm_from_config(settings.model.llm) - except Exception as e: - return f"LLM initialization check failed: {e}" - - try: - load_embedding_from_config(settings.model.embedding) - except Exception as e: - return f"Embedding initialization check failed: {e}" - - return "OK" - - -def verify_pinecone_token(token: str) -> str: - return "OK" - - -def verify_discord_token(token: str) -> str: - return "OK" - - -def load_json_value(filepath: Path, key: str, default_value: Any) -> Any: - if not Path(filepath).exists(): - return default_value - json_obj = load_json(filepath) - if key not in json_obj: - return default_value - return json_obj[key] - - -def set_json_value(filepath: Path, key: str, value: Any) -> None: - # key needs to follow python naming convention, such as trial_id - json_obj = load_json(filepath) - json_obj[key] = value - with open(filepath, "w+") as json_file: - json.dump(json_obj, json_file, sort_keys=True) - json_file.flush() - - -def load_json(filepath: Path) -> Dict: - if not Path(filepath).exists(): - return {} - with open(filepath, "r") as file: - try: - json_obj = json.load(file) - return json_obj - except json.JSONDecodeError as e: - if os.stat(filepath).st_size == 0: - # Empty file - return {} - else: - raise e - -def load_log(file_name, key_name): - content_list = [] - key_list = [] - with open('./memory_data/'+file_name) as f: - contents = f.readlines() - for i in contents: - print(i) - contents = json.loads(i) - content_list.append(list(contents.values())[1][key_name]) - key_list.append(list(contents.keys())[1]) - return content_list, key_list - -def load_log_full(file_name, key_name): - content_list = [] - key_list = [] - with open(file_name) as f: - contents = f.readlines() - for i in contents: - #print(i) - contents = json.loads(i) - if key_name is None: - content_list.append(list(contents.values())[1]) - else: - content_list.append(list(contents.values())[1][key_name]) - key_list.append(list(contents.keys())[1]) - return content_list, key_list - -def get_checkpoint_dir(agent_file: str) -> str: 
- return "./{}.cpt".format(os.path.basename(agent_file)) diff --git a/spaces/crimeacs/phase-hunter/README.md b/spaces/crimeacs/phase-hunter/README.md deleted file mode 100644 index a6230fafc35b487b1cb97fa310608f2f3f171ede..0000000000000000000000000000000000000000 --- a/spaces/crimeacs/phase-hunter/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Phase Hunter -emoji: 🏹 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/cscan/CodeFormer/CodeFormer/facelib/detection/align_trans.py b/spaces/cscan/CodeFormer/CodeFormer/facelib/detection/align_trans.py deleted file mode 100644 index 07f1eb365462c2ec5bbac6d1854c786b6fd6be90..0000000000000000000000000000000000000000 --- a/spaces/cscan/CodeFormer/CodeFormer/facelib/detection/align_trans.py +++ /dev/null @@ -1,219 +0,0 @@ -import cv2 -import numpy as np - -from .matlab_cp2tform import get_similarity_transform_for_cv2 - -# reference facial points, a list of coordinates (x,y) -REFERENCE_FACIAL_POINTS = [[30.29459953, 51.69630051], [65.53179932, 51.50139999], [48.02519989, 71.73660278], - [33.54930115, 92.3655014], [62.72990036, 92.20410156]] - -DEFAULT_CROP_SIZE = (96, 112) - - -class FaceWarpException(Exception): - - def __str__(self): - return 'In File {}:{}'.format(__file__, super.__str__(self)) - - -def get_reference_facial_points(output_size=None, inner_padding_factor=0.0, outer_padding=(0, 0), default_square=False): - """ - Function: - ---------- - get reference 5 key points according to crop settings: - 0. Set default crop_size: - if default_square: - crop_size = (112, 112) - else: - crop_size = (96, 112) - 1. Pad the crop_size by inner_padding_factor in each side; - 2. Resize crop_size into (output_size - outer_padding*2), - pad into output_size with outer_padding; - 3. Output reference_5point; - Parameters: - ---------- - @output_size: (w, h) or None - size of aligned face image - @inner_padding_factor: (w_factor, h_factor) - padding factor for inner (w, h) - @outer_padding: (w_pad, h_pad) - each row is a pair of coordinates (x, y) - @default_square: True or False - if True: - default crop_size = (112, 112) - else: - default crop_size = (96, 112); - !!! 
make sure, if output_size is not None: - (output_size - outer_padding) - = some_scale * (default crop_size * (1.0 + - inner_padding_factor)) - Returns: - ---------- - @reference_5point: 5x2 np.array - each row is a pair of transformed coordinates (x, y) - """ - - tmp_5pts = np.array(REFERENCE_FACIAL_POINTS) - tmp_crop_size = np.array(DEFAULT_CROP_SIZE) - - # 0) make the inner region a square - if default_square: - size_diff = max(tmp_crop_size) - tmp_crop_size - tmp_5pts += size_diff / 2 - tmp_crop_size += size_diff - - if (output_size and output_size[0] == tmp_crop_size[0] and output_size[1] == tmp_crop_size[1]): - - return tmp_5pts - - if (inner_padding_factor == 0 and outer_padding == (0, 0)): - if output_size is None: - return tmp_5pts - else: - raise FaceWarpException('No paddings to do, output_size must be None or {}'.format(tmp_crop_size)) - - # check output size - if not (0 <= inner_padding_factor <= 1.0): - raise FaceWarpException('Not (0 <= inner_padding_factor <= 1.0)') - - if ((inner_padding_factor > 0 or outer_padding[0] > 0 or outer_padding[1] > 0) and output_size is None): - output_size = tmp_crop_size * \ - (1 + inner_padding_factor * 2).astype(np.int32) - output_size += np.array(outer_padding) - if not (outer_padding[0] < output_size[0] and outer_padding[1] < output_size[1]): - raise FaceWarpException('Not (outer_padding[0] < output_size[0] and outer_padding[1] < output_size[1])') - - # 1) pad the inner region according inner_padding_factor - if inner_padding_factor > 0: - size_diff = tmp_crop_size * inner_padding_factor * 2 - tmp_5pts += size_diff / 2 - tmp_crop_size += np.round(size_diff).astype(np.int32) - - # 2) resize the padded inner region - size_bf_outer_pad = np.array(output_size) - np.array(outer_padding) * 2 - - if size_bf_outer_pad[0] * tmp_crop_size[1] != size_bf_outer_pad[1] * tmp_crop_size[0]: - raise FaceWarpException('Must have (output_size - outer_padding)' - '= some_scale * (crop_size * (1.0 + inner_padding_factor)') - - scale_factor = size_bf_outer_pad[0].astype(np.float32) / tmp_crop_size[0] - tmp_5pts = tmp_5pts * scale_factor - # size_diff = tmp_crop_size * (scale_factor - min(scale_factor)) - # tmp_5pts = tmp_5pts + size_diff / 2 - tmp_crop_size = size_bf_outer_pad - - # 3) add outer_padding to make output_size - reference_5point = tmp_5pts + np.array(outer_padding) - tmp_crop_size = output_size - - return reference_5point - - -def get_affine_transform_matrix(src_pts, dst_pts): - """ - Function: - ---------- - get affine transform matrix 'tfm' from src_pts to dst_pts - Parameters: - ---------- - @src_pts: Kx2 np.array - source points matrix, each row is a pair of coordinates (x, y) - @dst_pts: Kx2 np.array - destination points matrix, each row is a pair of coordinates (x, y) - Returns: - ---------- - @tfm: 2x3 np.array - transform matrix from src_pts to dst_pts - """ - - tfm = np.float32([[1, 0, 0], [0, 1, 0]]) - n_pts = src_pts.shape[0] - ones = np.ones((n_pts, 1), src_pts.dtype) - src_pts_ = np.hstack([src_pts, ones]) - dst_pts_ = np.hstack([dst_pts, ones]) - - A, res, rank, s = np.linalg.lstsq(src_pts_, dst_pts_) - - if rank == 3: - tfm = np.float32([[A[0, 0], A[1, 0], A[2, 0]], [A[0, 1], A[1, 1], A[2, 1]]]) - elif rank == 2: - tfm = np.float32([[A[0, 0], A[1, 0], 0], [A[0, 1], A[1, 1], 0]]) - - return tfm - - -def warp_and_crop_face(src_img, facial_pts, reference_pts=None, crop_size=(96, 112), align_type='smilarity'): - """ - Function: - ---------- - apply affine transform 'trans' to uv - Parameters: - ---------- - @src_img: 3x3 np.array - input 
image - @facial_pts: could be - 1)a list of K coordinates (x,y) - or - 2) Kx2 or 2xK np.array - each row or col is a pair of coordinates (x, y) - @reference_pts: could be - 1) a list of K coordinates (x,y) - or - 2) Kx2 or 2xK np.array - each row or col is a pair of coordinates (x, y) - or - 3) None - if None, use default reference facial points - @crop_size: (w, h) - output face image size - @align_type: transform type, could be one of - 1) 'similarity': use similarity transform - 2) 'cv2_affine': use the first 3 points to do affine transform, - by calling cv2.getAffineTransform() - 3) 'affine': use all points to do affine transform - Returns: - ---------- - @face_img: output face image with size (w, h) = @crop_size - """ - - if reference_pts is None: - if crop_size[0] == 96 and crop_size[1] == 112: - reference_pts = REFERENCE_FACIAL_POINTS - else: - default_square = False - inner_padding_factor = 0 - outer_padding = (0, 0) - output_size = crop_size - - reference_pts = get_reference_facial_points(output_size, inner_padding_factor, outer_padding, - default_square) - - ref_pts = np.float32(reference_pts) - ref_pts_shp = ref_pts.shape - if max(ref_pts_shp) < 3 or min(ref_pts_shp) != 2: - raise FaceWarpException('reference_pts.shape must be (K,2) or (2,K) and K>2') - - if ref_pts_shp[0] == 2: - ref_pts = ref_pts.T - - src_pts = np.float32(facial_pts) - src_pts_shp = src_pts.shape - if max(src_pts_shp) < 3 or min(src_pts_shp) != 2: - raise FaceWarpException('facial_pts.shape must be (K,2) or (2,K) and K>2') - - if src_pts_shp[0] == 2: - src_pts = src_pts.T - - if src_pts.shape != ref_pts.shape: - raise FaceWarpException('facial_pts and reference_pts must have the same shape') - - if align_type == 'cv2_affine': - tfm = cv2.getAffineTransform(src_pts[0:3], ref_pts[0:3]) - elif align_type == 'affine': - tfm = get_affine_transform_matrix(src_pts, ref_pts) - else: - tfm = get_similarity_transform_for_cv2(src_pts, ref_pts) - - face_img = cv2.warpAffine(src_img, tfm, (crop_size[0], crop_size[1])) - - return face_img diff --git a/spaces/datasciencedojo/Question-Generator/README.md b/spaces/datasciencedojo/Question-Generator/README.md deleted file mode 100644 index f6e56b8e9d470fc19d4c053d05c6383d8c1d4e79..0000000000000000000000000000000000000000 --- a/spaces/datasciencedojo/Question-Generator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Question Generator -emoji: 🔥 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.4.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/inputs.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/inputs.py deleted file mode 100644 index 9345530649a0b8843c27d7a0f965ac73bfcce7d6..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/inputs.py +++ /dev/null @@ -1,451 +0,0 @@ -# type: ignore -""" -This module defines various classes that can serve as the `input` to an interface. Each class must inherit from -`InputComponent`, and each class must define a path to its template. All of the subclasses of `InputComponent` are -automatically added to a registry, which allows them to be easily referenced in other parts of the code. 
-""" - -from __future__ import annotations - -from typing import Any, Optional - -from gradio import components -from gradio.deprecation import warn_deprecation - - -def warn_inputs_deprecation(): - warn_deprecation( - "Usage of gradio.inputs is deprecated, and will not be supported in the future, please import your component from gradio.components", - ) - - -class Textbox(components.Textbox): - def __init__( - self, - lines: int = 1, - placeholder: Optional[str] = None, - default: str = "", - numeric: Optional[bool] = False, - type: Optional[str] = "text", - label: Optional[str] = None, - optional: bool = False, - ): - warn_inputs_deprecation() - super().__init__( - value=default, - lines=lines, - placeholder=placeholder, - label=label, - numeric=numeric, - type=type, - optional=optional, - ) - - -class Number(components.Number): - """ - Component creates a field for user to enter numeric input. Provides a number as an argument to the wrapped function. - Input type: float - """ - - def __init__( - self, - default: Optional[float] = None, - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - default (float): default value. - label (str): component name in interface. - optional (bool): If True, the interface can be submitted with no value for this component. - """ - warn_inputs_deprecation() - super().__init__(value=default, label=label, optional=optional) - - -class Slider(components.Slider): - """ - Component creates a slider that ranges from `minimum` to `maximum`. Provides number as an argument to the wrapped function. - Input type: float - """ - - def __init__( - self, - minimum: float = 0, - maximum: float = 100, - step: Optional[float] = None, - default: Optional[float] = None, - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - minimum (float): minimum value for slider. - maximum (float): maximum value for slider. - step (float): increment between slider values. - default (float): default value. - label (str): component name in interface. - optional (bool): this parameter is ignored. - """ - warn_inputs_deprecation() - - super().__init__( - value=default, - minimum=minimum, - maximum=maximum, - step=step, - label=label, - optional=optional, - ) - - -class Checkbox(components.Checkbox): - """ - Component creates a checkbox that can be set to `True` or `False`. Provides a boolean as an argument to the wrapped function. - Input type: bool - """ - - def __init__( - self, - default: bool = False, - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - label (str): component name in interface. - default (bool): if True, checked by default. - optional (bool): this parameter is ignored. - """ - warn_inputs_deprecation() - super().__init__(value=default, label=label, optional=optional) - - -class CheckboxGroup(components.CheckboxGroup): - """ - Component creates a set of checkboxes of which a subset can be selected. Provides a list of strings representing the selected choices as an argument to the wrapped function. - Input type: Union[List[str], List[int]] - """ - - def __init__( - self, - choices: list[str], - default: list[str] | None = None, - type: str = "value", - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - choices (List[str]): list of options to select from. - default (List[str]): default selected list of options. - type (str): Type of value to be returned by component. 
"value" returns the list of strings of the choices selected, "index" returns the list of indices of the choices selected. - label (str): component name in interface. - optional (bool): this parameter is ignored. - """ - if default is None: - default = [] - warn_inputs_deprecation() - super().__init__( - value=default, - choices=choices, - type=type, - label=label, - optional=optional, - ) - - -class Radio(components.Radio): - """ - Component creates a set of radio buttons of which only one can be selected. Provides string representing selected choice as an argument to the wrapped function. - Input type: Union[str, int] - """ - - def __init__( - self, - choices: list[str], - type: str = "value", - default: Optional[str] = None, - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - choices (List[str]): list of options to select from. - type (str): Type of value to be returned by component. "value" returns the string of the choice selected, "index" returns the index of the choice selected. - default (str): the button selected by default. If None, no button is selected by default. - label (str): component name in interface. - optional (bool): this parameter is ignored. - """ - warn_inputs_deprecation() - super().__init__( - choices=choices, - type=type, - value=default, - label=label, - optional=optional, - ) - - -class Dropdown(components.Dropdown): - """ - Component creates a dropdown of which only one can be selected. Provides string representing selected choice as an argument to the wrapped function. - Input type: Union[str, int] - """ - - def __init__( - self, - choices: list[str], - type: str = "value", - default: Optional[str] = None, - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - choices (List[str]): list of options to select from. - type (str): Type of value to be returned by component. "value" returns the string of the choice selected, "index" returns the index of the choice selected. - default (str): default value selected in dropdown. If None, no value is selected by default. - label (str): component name in interface. - optional (bool): this parameter is ignored. - """ - warn_inputs_deprecation() - super().__init__( - choices=choices, - type=type, - value=default, - label=label, - optional=optional, - ) - - -class Image(components.Image): - """ - Component creates an image upload box with editing capabilities. - Input type: Union[numpy.array, PIL.Image, file-object] - """ - - def __init__( - self, - shape: tuple[int, int] = None, - image_mode: str = "RGB", - invert_colors: bool = False, - source: str = "upload", - tool: str = "editor", - type: str = "numpy", - label: str = None, - optional: bool = False, - ): - """ - Parameters: - shape (Tuple[int, int]): (width, height) shape to crop and resize image to; if None, matches input image size. - image_mode (str): How to process the uploaded image. Accepts any of the PIL image modes, e.g. "RGB" for color images, "RGBA" to include the transparency mask, "L" for black-and-white images. - invert_colors (bool): whether to invert the image as a preprocessing step. - source (str): Source of image. "upload" creates a box where user can drop an image file, "webcam" allows user to take snapshot from their webcam, "canvas" defaults to a white image that can be edited and drawn upon with tools. - tool (str): Tools used for editing. "editor" allows a full screen editor, "select" provides a cropping and zoom tool. - type (str): Type of value to be returned by component. 
"numpy" returns a numpy array with shape (height, width, 3) and values from 0 to 255, "pil" returns a PIL image object, "file" returns a temporary file object whose path can be retrieved by file_obj.name, "filepath" returns the path directly. - label (str): component name in interface. - optional (bool): If True, the interface can be submitted with no uploaded image, in which case the input value is None. - """ - warn_inputs_deprecation() - super().__init__( - shape=shape, - image_mode=image_mode, - invert_colors=invert_colors, - source=source, - tool=tool, - type=type, - label=label, - optional=optional, - ) - - -class Video(components.Video): - """ - Component creates a video file upload that is converted to a file path. - - Input type: filepath - """ - - def __init__( - self, - type: Optional[str] = None, - source: str = "upload", - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - type (str): Type of video format to be returned by component, such as 'avi' or 'mp4'. If set to None, video will keep uploaded format. - source (str): Source of video. "upload" creates a box where user can drop an video file, "webcam" allows user to record a video from their webcam. - label (str): component name in interface. - optional (bool): If True, the interface can be submitted with no uploaded video, in which case the input value is None. - """ - warn_inputs_deprecation() - super().__init__(format=type, source=source, label=label, optional=optional) - - -class Audio(components.Audio): - """ - Component accepts audio input files. - Input type: Union[Tuple[int, numpy.array], file-object, numpy.array] - """ - - def __init__( - self, - source: str = "upload", - type: str = "numpy", - label: str = None, - optional: bool = False, - ): - """ - Parameters: - source (str): Source of audio. "upload" creates a box where user can drop an audio file, "microphone" creates a microphone input. - type (str): Type of value to be returned by component. "numpy" returns a 2-set tuple with an integer sample_rate and the data numpy.array of shape (samples, 2), "file" returns a temporary file object whose path can be retrieved by file_obj.name, "filepath" returns the path directly. - label (str): component name in interface. - optional (bool): If True, the interface can be submitted with no uploaded audio, in which case the input value is None. - """ - warn_inputs_deprecation() - super().__init__(source=source, type=type, label=label, optional=optional) - - -class File(components.File): - """ - Component accepts generic file uploads. - Input type: Union[file-object, bytes, List[Union[file-object, bytes]]] - """ - - def __init__( - self, - file_count: str = "single", - type: str = "file", - label: Optional[str] = None, - keep_filename: bool = True, - optional: bool = False, - ): - """ - Parameters: - file_count (str): if single, allows user to upload one file. If "multiple", user uploads multiple files. If "directory", user uploads all files in selected directory. Return type will be list for each file in case of "multiple" or "directory". - type (str): Type of value to be returned by component. "file" returns a temporary file object whose path can be retrieved by file_obj.name, "binary" returns an bytes object. - label (str): component name in interface. - keep_filename (bool): DEPRECATED. Original filename always kept. - optional (bool): If True, the interface can be submitted with no uploaded image, in which case the input value is None. 
- """ - warn_inputs_deprecation() - super().__init__( - file_count=file_count, - type=type, - label=label, - keep_filename=keep_filename, - optional=optional, - ) - - -class Dataframe(components.Dataframe): - """ - Component accepts 2D input through a spreadsheet interface. - Input type: Union[pandas.DataFrame, numpy.array, List[Union[str, float]], List[List[Union[str, float]]]] - """ - - def __init__( - self, - headers: Optional[list[str]] = None, - row_count: int = 3, - col_count: Optional[int] = 3, - datatype: str | list[str] = "str", - col_width: int | list[int] = None, - default: Optional[list[list[Any]]] = None, - type: str = "pandas", - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - headers (List[str]): Header names to dataframe. If None, no headers are shown. - row_count (int): Limit number of rows for input. - col_count (int): Limit number of columns for input. If equal to 1, return data will be one-dimensional. Ignored if `headers` is provided. - datatype (Union[str, List[str]]): Datatype of values in sheet. Can be provided per column as a list of strings, or for the entire sheet as a single string. Valid datatypes are "str", "number", "bool", and "date". - col_width (Union[int, List[int]]): Width of columns in pixels. Can be provided as single value or list of values per column. - default (List[List[Any]]): Default value - type (str): Type of value to be returned by component. "pandas" for pandas dataframe, "numpy" for numpy array, or "array" for a Python array. - label (str): component name in interface. - optional (bool): this parameter is ignored. - """ - warn_inputs_deprecation() - super().__init__( - value=default, - headers=headers, - row_count=row_count, - col_count=col_count, - datatype=datatype, - col_width=col_width, - type=type, - label=label, - optional=optional, - ) - - -class Timeseries(components.Timeseries): - """ - Component accepts pandas.DataFrame uploaded as a timeseries csv file. - Input type: pandas.DataFrame - """ - - def __init__( - self, - x: Optional[str] = None, - y: str | list[str] = None, - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - x (str): Column name of x (time) series. None if csv has no headers, in which case first column is x series. - y (Union[str, List[str]]): Column name of y series, or list of column names if multiple series. None if csv has no headers, in which case every column after first is a y series. - label (str): component name in interface. - optional (bool): If True, the interface can be submitted with no uploaded csv file, in which case the input value is None. - """ - warn_inputs_deprecation() - super().__init__(x=x, y=y, label=label, optional=optional) - - -class State(components.State): - """ - Special hidden component that stores state across runs of the interface. - Input type: Any - """ - - def __init__( - self, - label: str = None, - default: Any = None, - ): - """ - Parameters: - label (str): component name in interface (not used). - default (Any): the initial value of the state. - optional (bool): this parameter is ignored. - """ - warn_inputs_deprecation() - super().__init__(value=default, label=label) - - -class Image3D(components.Model3D): - """ - Used for 3D image model output. - Input type: File object of type (.obj, glb, or .gltf) - """ - - def __init__( - self, - label: Optional[str] = None, - optional: bool = False, - ): - """ - Parameters: - label (str): component name in interface. 
- optional (bool): If True, the interface can be submitted with no uploaded image, in which case the input value is None. - """ - warn_inputs_deprecation() - super().__init__(label=label, optional=optional) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jsonschema/benchmarks/subcomponents.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jsonschema/benchmarks/subcomponents.py deleted file mode 100644 index 225d86e72d59bba808b00c59f59d6489eda8ccc7..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jsonschema/benchmarks/subcomponents.py +++ /dev/null @@ -1,42 +0,0 @@ -""" -A benchmark which tries to compare the possible slow subparts of validation. -""" -from referencing import Registry -from referencing.jsonschema import DRAFT202012 -from rpds import HashTrieMap, HashTrieSet - -from jsonschema import Draft202012Validator - -schema = { - "type": "array", - "minLength": 1, - "maxLength": 1, - "items": {"type": "integer"} -} - -hmap = HashTrieMap() -hset = HashTrieSet() - -registry = Registry() - -v = Draft202012Validator(schema) - - -def registry_data_structures(): - return hmap.insert("foo", "bar"), hset.insert("foo") - - -def registry_add(): - resource = DRAFT202012.create_resource(schema) - return registry.with_resource(uri="urn:example", resource=resource) - - -if __name__ == "__main__": - from pyperf import Runner - runner = Runner() - - runner.bench_func("HashMap/HashSet insertion", registry_data_structures) - runner.bench_func("Registry insertion", registry_add) - runner.bench_func("Success", lambda: v.is_valid([1])) - runner.bench_func("Failure", lambda: v.is_valid(["foo"])) - runner.bench_func("Metaschema validation", lambda: v.check_schema(schema)) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/helpers/__init__.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/helpers/__init__.py deleted file mode 100644 index 3dbbdd1d480ecc5ace6529f9005d40d5985529ae..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/helpers/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -"""Functions for parsing Links -""" -__all__ = ("parseLinkLabel", "parseLinkDestination", "parseLinkTitle") -from .parse_link_destination import parseLinkDestination -from .parse_link_label import parseLinkLabel -from .parse_link_title import parseLinkTitle diff --git a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/training/lp_train.py b/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/training/lp_train.py deleted file mode 100644 index 24a19bacd0a4b789415cfccbce1f8bc99bc493ed..0000000000000000000000000000000000000000 --- a/spaces/deeplearning/audioldm-text-to-audio-generation/audioldm/clap/training/lp_train.py +++ /dev/null @@ -1,301 +0,0 @@ -import json -import logging -import math -import os -import time -from contextlib import suppress - -import numpy as np -import torch -import torch.nn.functional as F - -try: - import wandb -except ImportError: - wandb = None - -from open_clip import LPLoss, LPMetrics, lp_gather_features -from open_clip.utils import do_mixup, get_mix_lambda -from .distributed import is_master -from .zero_shot import zero_shot_eval - - -class AverageMeter(object): - """Computes and stores the average and current value""" - - def __init__(self): - 
self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - -def unwrap_model(model): - if hasattr(model, "module"): - return model.module - else: - return model - - -def train_one_epoch( - model, - data, - epoch, - optimizer, - scaler, - scheduler, - args, - tb_writer=None, - extra_suffix="", -): - device = torch.device(args.device) - autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress - model.train() - loss = LPLoss(args.lp_loss) - - dataloader, sampler = data["train"].dataloader, data["train"].sampler - if args.distributed and sampler is not None: - sampler.set_epoch(epoch) - num_batches_per_epoch = dataloader.num_batches - sample_digits = math.ceil(math.log(dataloader.num_samples + 1, 10)) - - # for toy dataset - if args.dataset_type == "toy": - dataloader.dataset.generate_queue() - - loss_m = AverageMeter() - batch_time_m = AverageMeter() - data_time_m = AverageMeter() - end = time.time() - - for i, batch in enumerate(dataloader): - step = num_batches_per_epoch * epoch + i - - if isinstance(scheduler, dict): - for s in scheduler.values(): - s(step) - else: - scheduler(step) - - audio = batch # contains mel_spec, wavform, and longer list - class_label = batch["class_label"] - # audio = audio.to(device=device, non_blocking=True) - class_label = class_label.to(device=device, non_blocking=True) - - if args.mixup: - # https://github.com/RetroCirce/HTS-Audio-Transformer/blob/main/utils.py#L146 - mix_lambda = torch.from_numpy( - get_mix_lambda(0.5, len(audio["waveform"])) - ).to(device) - class_label = do_mixup(class_label, mix_lambda) - else: - mix_lambda = None - - data_time_m.update(time.time() - end) - if isinstance(optimizer, dict): - for o_ in optimizer.values(): - o_.zero_grad() - else: - optimizer.zero_grad() - - with autocast(): - pred = model(audio, mix_lambda=mix_lambda, device=device) - total_loss = loss(pred, class_label) - - if isinstance(optimizer, dict): - if scaler is not None: - scaler.scale(total_loss).backward() - for o_ in optimizer.values(): - if args.horovod: - o_.synchronize() - scaler.unscale_(o_) - with o_.skip_synchronize(): - scaler.step(o_) - else: - scaler.step(o_) - scaler.update() - else: - total_loss.backward() - for o_ in optimizer.values(): - o_.step() - else: - if scaler is not None: - scaler.scale(total_loss).backward() - if args.horovod: - optimizer.synchronize() - scaler.unscale_(optimizer) - with optimizer.skip_synchronize(): - scaler.step(optimizer) - else: - scaler.step(optimizer) - scaler.update() - else: - total_loss.backward() - optimizer.step() - - # Note: we clamp to 4.6052 = ln(100), as in the original paper. 
- with torch.no_grad(): - unwrap_model(model).clap_model.logit_scale_a.clamp_(0, math.log(100)) - unwrap_model(model).clap_model.logit_scale_t.clamp_(0, math.log(100)) - - batch_time_m.update(time.time() - end) - end = time.time() - batch_count = i + 1 - - if is_master(args) and (i % 100 == 0 or batch_count == num_batches_per_epoch): - if isinstance(audio, dict): - batch_size = len(audio["waveform"]) - else: - batch_size = len(audio) - num_samples = batch_count * batch_size * args.world_size - samples_per_epoch = dataloader.num_samples - percent_complete = 100.0 * batch_count / num_batches_per_epoch - - # NOTE loss is coarsely sampled, just master node and per log update - loss_m.update(total_loss.item(), batch_size) - if isinstance(optimizer, dict): - logging.info( - f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] " - f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) " - f"Data (t): {data_time_m.avg:.3f} " - f"Batch (t): {batch_time_m.avg:.3f} " - f"LR: {[o_.param_groups[0]['lr'] for o_ in optimizer.values()]}" - ) - log_data = { - "loss": loss_m.val, - "data_time": data_time_m.val, - "batch_time": batch_time_m.val, - "lr": [o_.param_groups[0]["lr"] for o_ in optimizer.values()], - } - else: - logging.info( - f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] " - f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) " - f"Data (t): {data_time_m.avg:.3f} " - f"Batch (t): {batch_time_m.avg:.3f} " - f"LR: {optimizer.param_groups[0]['lr']:5f} " - ) - - # Save train loss / etc. Using non avg meter values as loggers have their own smoothing - log_data = { - "loss": loss_m.val, - "data_time": data_time_m.val, - "batch_time": batch_time_m.val, - "lr": optimizer.param_groups[0]["lr"], - } - for name, val in log_data.items(): - name = f"train{extra_suffix}/{name}" - if tb_writer is not None: - tb_writer.add_scalar(name, val, step) - if args.wandb: - assert wandb is not None, "Please install wandb." 
- wandb.log({name: val, "step": step}) - - # resetting batch / data time meters per log window - batch_time_m.reset() - data_time_m.reset() - # end for - - -def evaluate(model, data, epoch, args, tb_writer=None, extra_suffix=""): - metrics = {} - if not args.parallel_eval: - if not is_master(args): - return metrics - device = torch.device(args.device) - model.eval() - - # CHANGE - # zero_shot_metrics = zero_shot_eval(model, data, epoch, args) - # metrics.update(zero_shot_metrics) - if is_master(args): - print("Evaluating...") - metric_names = args.lp_metrics.split(",") - eval_tool = LPMetrics(metric_names=metric_names) - - autocast = torch.cuda.amp.autocast if args.precision == "amp" else suppress - if "val" in data and ( - args.val_frequency - and ((epoch % args.val_frequency) == 0 or epoch == args.epochs) - ): - if args.parallel_eval: - dataloader, sampler = data["val"].dataloader, data["val"].sampler - if args.distributed and sampler is not None: - sampler.set_epoch(epoch) - samples_per_val = dataloader.num_samples - else: - dataloader = data["val"].dataloader - num_samples = 0 - samples_per_val = dataloader.num_samples - - eval_info = {"pred": [], "target": []} - with torch.no_grad(): - for i, batch in enumerate(dataloader): - audio = batch # contains mel_spec, wavform, and longer list - class_label = batch["class_label"] - - # audio = audio.to(device=device, non_blocking=True) - class_label = class_label.to(device=device, non_blocking=True) - - with autocast(): - pred = model(audio, device=device) - if args.parallel_eval: - pred, class_label = lp_gather_features( - pred, class_label, args.world_size, args.horovod - ) - eval_info["pred"].append(pred) - eval_info["target"].append(class_label) - - num_samples += class_label.shape[0] - - if (i % 100) == 0: # and i != 0: - logging.info( - f"Eval Epoch: {epoch} [{num_samples} / {samples_per_val}]" - ) - - if is_master(args): - eval_info["pred"] = torch.cat(eval_info["pred"], 0).cpu() - eval_info["target"] = torch.cat(eval_info["target"], 0).cpu() - metric_dict = eval_tool.evaluate_mertics( - eval_info["pred"], eval_info["target"] - ) - metrics.update(metric_dict) - if "epoch" not in metrics.keys(): - metrics.update({"epoch": epoch}) - - if is_master(args): - if not metrics: - return metrics - - logging.info( - f"Eval Epoch: {epoch} " - + "\n".join( - ["\t".join([f"{m}: {round(metrics[m], 4):.4f}"]) for m in metrics] - ) - ) - if args.save_logs: - for name, val in metrics.items(): - if tb_writer is not None: - tb_writer.add_scalar(f"val{extra_suffix}/{name}", val, epoch) - - with open(os.path.join(args.checkpoint_path, "results.jsonl"), "a+") as f: - f.write(json.dumps(metrics)) - f.write("\n") - - if args.wandb: - assert wandb is not None, "Please install wandb." 
- for name, val in metrics.items(): - wandb.log({f"val{extra_suffix}/{name}": val, "epoch": epoch}) - - return metrics - else: - return metrics diff --git a/spaces/deepthiaj/Electro_oneAPI/app.py b/spaces/deepthiaj/Electro_oneAPI/app.py deleted file mode 100644 index 4248939773909bfb5072743b694546737b9a7142..0000000000000000000000000000000000000000 --- a/spaces/deepthiaj/Electro_oneAPI/app.py +++ /dev/null @@ -1,376 +0,0 @@ -# Import Libraries -import streamlit as st -import pandas as pd -import pickle -import xgboost as xgb -import numpy as np -import sklearn -from sklearn.metrics import confusion_matrix, classification_report -import seaborn as sns -import matplotlib.pyplot as plt -from io import StringIO -from scipy import signal -import daal4py as d4p -import time -from sklearn.model_selection import train_test_split -import tensorflow as tf -from tensorflow.keras.models import Sequential -from tensorflow.keras.layers import Dense -from tensorflow.keras.optimizers import Adam -from tensorflow.keras.callbacks import EarlyStopping -from tensorflow.keras.utils import to_categorical -from sklearnex import patch_sklearn -patch_sklearn() - -# Define Methods -def diagnostic_models_evaluation(X_train, X_test, y_train, y_test): - - # Define the model parameters - model_params = { - 'objective': 'multi:softmax', - 'num_class': 11, - 'random_state': 42 - } - - # Create and train the XGBoost model including early stopping to avoid overfitting - xgb_model = xgb.XGBClassifier(**model_params) - eval_set = [(X_test, y_test)] - xgb_model.fit(X_train, y_train, early_stopping_rounds=15, eval_set=eval_set, verbose=True) - - # DAAL model - daal_model = d4p.get_gbt_model_from_xgboost(xgb_model.get_booster()) - - st.subheader(":blue[Performance evaluation of the Automated Diagnosis Model]") - - st.divider() - - # Evaluate the model on the entire dataset - # XGBoost prediction (for accuracy comparison) - t0 = time.time() - y_pred = xgb_model.predict(X_test) - t1 = time.time() - xgb_errors_count = np.count_nonzero(y_pred - np.ravel(y_test)) - - xgb_total = t1-t0 - st.write("Prediction time using XGBoost model is ", xgb_total) - accuracy = np.sum(y_pred == y_test) / len(y_test) # Calculate accuracy - acc = (accuracy / 1) * 100 - st.write("The accuracy of the diagnosis report is: ", acc, "%") - - - st.divider() - - # Evaluate the model on the entire dataset - # Calculate evaluation metrics - classification_metrics = classification_report(y_test, y_pred, output_dict=True) - st.caption(":blue[Classification Metrics]") - - st.table(classification_metrics) - - # st.write("1: Myocardial infarction, 2: Bundle branch block, 3: Dysrhythmia , 4: Valvular heart disease, 5: Myocarditis") - - st.divider() - - # Calculate confusion matrix - confusion_mat = confusion_matrix(y_test, y_pred) - - # Plot confusion matrix - htmap = sns.heatmap(confusion_mat, annot=True, fmt="d", cmap="Blues") - htmap = htmap.figure - st.pyplot(htmap) - - st.divider() - - # Make a faster prediction with oneDAL - n_classes = 11 - # daal4py prediction for increased performance - daal_predict_algo = d4p.gbt_classification_prediction( - nClasses=n_classes, - resultsToEvaluate="computeClassLabels", - fptype='float' - ) - t0 = time.time() - daal_prediction = daal_predict_algo.compute(X_test, daal_model) - t1 = time.time() - daal_errors_count = np.count_nonzero(np.ravel(daal_prediction.prediction) - np.ravel(y_test)) - - d4p_total = t1-t0 - st.write("Prediction time using DAAL model is ", xgb_total) - - y_test = np.ravel(y_test) - daal_prediction = 
np.ravel(daal_prediction.prediction) - xgb_prediction = y_pred - - st.subheader(":blue[Accuracy & Performance Comparison:]") - st.subheader(":blue[XGBooster Prediction vs. Daal4py Prediction]") - st.write("\nXGBoost prediction results (first 10 rows):\n", xgb_prediction[0:10]) - st.write("\ndaal4py prediction results (first 10 rows):\n", daal_prediction[0:10]) - st.write("\nGround truth (first 10 rows):\n", y_test[0:10]) - - st.write("XGBoost errors count:", xgb_errors_count) - st.write("XGBoost accuracy score:", 1 - xgb_errors_count / xgb_prediction.shape[0]) - - st.write("\ndaal4py errors count:", daal_errors_count) - st.write("daal4py accuracy score:", 1 - daal_errors_count / daal_prediction.shape[0]) - - st.write("\n XGBoost Prediction Time:", xgb_total) - st.write("\n daal4py Prediction Time:", d4p_total) - - st.subheader("Visualizations") - st.write("Performance: 'XGBoost Prediction' vs. 'daal4py Prediction'") - - pred_times = [xgb_total, d4p_total] - st.bar_chart(pred_times) - st.write("speedup:",xgb_total/d4p_total) - st.write("Accuracy") - - xgb_acc = 1 - xgb_errors_count / xgb_prediction.shape[0] - d4p_acc = 1 - daal_errors_count / daal_prediction.shape[0] - pred_acc = [xgb_acc, d4p_acc] - st.bar_chart(pred_acc) - st.write("Accuracy Difference",xgb_acc-d4p_acc) - - st.divider() - - return xgb_model, daal_model - - -def DL_diagnostic_model_eval(ECG_data_type, X_train, X_test, y_train, y_test): - - num_classes = 11 - - if ECG_data_type == "15_Leads_ECG_data": - input_shape = 15 - elif ECG_data_type == "12_Leads_ECG_data": - input_shape = 12 - - batch_size = 64 # 32, 64, 128 - num_epochs = 100 - - dl_model = Sequential() - dl_model.add(Dense(128, activation='relu', input_shape=(input_shape,))) # Adjust the input_shape to match data - dl_model.add(Dense(64, activation='relu')) - dl_model.add(Dense(32, activation='relu')) - dl_model.add(Dense(num_classes, activation='softmax')) # Adjust the num_classes to match data - - dl_model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy']) - - # Define the early stopping criteria - early_stopping = EarlyStopping(monitor='val_loss', patience=10, verbose=1) - - # Encode target values as one-hot vectors - y_train_encoded = to_categorical(y_train, num_classes=11) - y_test_encoded = to_categorical(y_test, num_classes=11) - - # Train the deep learning model with early stopping - history = dl_model.fit(X_train, y_train_encoded, batch_size=batch_size, epochs=num_epochs, - validation_data=(X_test, y_test_encoded), verbose=1, callbacks=[early_stopping]) - - score = dl_model.evaluate(X_test, y_test_encoded, verbose=0) - st.write('Model test loss:', score[0]) - st.write('Model test accuracy:', score[1]) - - - return dl_model - - -def model_gen(signal_data_type): - - enc_dat = pd.read_csv("PTB_ECG_df2_enc_f.csv") - - if signal_data_type == '15_Leads_ECG_data': - # Split the dataset into features (X) and target (y) - X = enc_dat.iloc[:, :-1].values # Features (all columns except the last one) - y = enc_dat.iloc[:, -1].values # Target (last column "diagnosis") - # # Map the existing class labels to the expected class values - # class_mapping = {0: 0, 1: 1, 3: 2, 4: 3, 6: 4, 7: 5} - # mapped_labels = np.array([class_mapping[label] for label in y]) - - # split data into train and test sets - seed = 10 - test_size = 0.10 - X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=seed) - elif signal_data_type == '12_Leads_ECG_data': - # Split the dataset into features (X) and target (y) from 12 lead 
data alone in PTB ECG Diagnostic database - X = enc_dat.iloc[:, :12].values # Features (all columns except the last one) #CALL PREPROCESSING FROM JUPYTERHUB - y = enc_dat.iloc[:, -1].values # Target (last column "diagnosis") - # # Map the existing class labels to the expected class values - # class_mapping = {0: 0, 1: 1, 3: 2, 4: 3, 6: 4, 7: 5} - # mapped_labels = np.array([class_mapping[label] for label in y]) - - # split data into train and test sets - seed = 10 - test_size = 0.10 - X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=seed) - else: - st.write("Please upload a 12-leads ECG data or 15 Leads ECG (12 + 3 Frank vx,vy,vz leads) data to perform the diagnosis for Heart condition") - - return X_train, X_test, y_train, y_test - - -def diagnosis_report(predicted_class): - if (predicted_class == 0).any(): - st.write("Your heart is in good health.") - st.write("Kindly follow your regular checkup routine.") - elif (predicted_class == 1).any(): - st.write("You are diagnosed with possibility of Myocardial infarction.") - st.write("It is recommended that you consult a doctor to take the necessary treatment.") - elif (predicted_class == 2).any(): - st.write("You are diagnosed with possibility of Cardiomyopathy.") - st.write("It is recommended that you consult a doctor to take the necessary treatment.") - elif (predicted_class == 3).any(): - st.write("You are diagnosed with possibility of Bundle branch block.") - st.write("It is recommended that you consult a doctor to the necessary treatment.") - elif (predicted_class == 4).any(): - st.write("You are diagnosed with possibility of Dysrhythmia.") - st.write("It is recommended that you consult a doctor to take the necessary treatment.") - elif (predicted_class == 5).any(): - st.write("You are diagnosed with possibility of Hypertrophy.") - st.write("It is recommended that you consult a doctor to take the necessary treatment.") - elif (predicted_class == 6).any(): - st.write("You are diagnosed with possibility of Valvular heart disease.") - st.write("It is recommended that you consult a doctor to take the necessary treatment.") - elif (predicted_class == 7).any(): - st.write("You are diagnosed with possibility of Myocarditis.") - st.write("It is recommended that you consult a doctor to take the necessary treatment.") - elif (predicted_class == 8).any(): - st.write("You are diagnosed with possibility of Stable angina.") - st.write("It is recommended that you consult a doctor to take the necessary treatment.") - elif (predicted_class == 9).any(): - st.write("You are diagnosed with possibility of Palpitation.") - st.write("It is recommended that you consult a doctor to take the necessary treatment.") - elif (predicted_class == 10).any(): - st.write("You are diagnosed with possibility of Unstable angina.") - st.write("It is recommended that you consult a doctor to take the necessary treatment.") - else: - st.write("Sorry, We cannot give your diagnosis report at the moment. 
Kindly consult a doctor in person.") - - -def ECG_data_uploader(uploaded_file): - dataframe = X[0] - if uploaded_file is not None: - df = pd.read_csv(uploaded_file) - - if df.columns[-1] == 'diagnosis': - data_frame = df.iloc[0,:-1].transpose() #data_frame.iloc[0,:-1] - st.write("The ECG data uploaded except diagnosis is \n", df.iloc[:,:-1]) - else: - data_frame = df.transpose() #data_frame.iloc[0,:-1] - st.write("The ECG data uploaded is \n", df) - dataframe = data_frame.values # values attribute returns the underlying data of the DataFrame as a 2D ndarray. - - else: - st.sidebar.write("No ECG patient data uploaded") - return dataframe - - -def preprocess(ecg_test_data): - st.write('') - -#..........................................................................................................................................................# -# Streamlit App Interface for Diagnosis - -st.title("Automated Diagnosis of Heart health condition from Electrocardiogram (ECG) using Intel oneAPI") -st.write('This app is a prototype for diagnosing heart health condition using Electrocardiogram (ECG).') - -st.divider() - -with st.container(): - st.subheader(":red[PTB ECG Diagnostic Dataset used for Model deployment]") - - if st.button("Visualize ECG data distribution based on diagnosis in PTB ECG Diagnostic Dataset provided by Health Practitioners"): - ecg_train_dat = pd.read_csv("PTB_ECG_df2.csv") - diagnosis_counts = ecg_train_dat["diagnosis"].value_counts() - st.bar_chart(diagnosis_counts) - -st.divider() - -enc_dat = pd.read_csv("PTB_ECG_df2_enc_f.csv") -X = enc_dat.iloc[:, :-1].values # Features (all columns except the last one) -patient_ecg_sel = "Patient001" -ECG_data_type = "15_Leads_ECG_data" - -st.subheader(":red[Prototype Test and Evaluation]") -patient_enc_data = {"Patient001":X[0],"Patient002":X[100],"Patient003":X[200],"Patient004":X[50],"Patient005":X[40],"Patient006":X[30],"Patient007":X[20],"Patient008":X[10],"Patient009":X[60],"Patient010":X[110],"Patient011":X[120],"Patient012":X[130],"Patient013":X[140],"Patient014":X[150],"Patient015":X[160],"Patient016":X[170],"Patient017":X[180],"Patient018":X[190],"Patient019":X[210],"Patient020":X[220],"Patient021":X[21],"Patient022":X[22],"Patient023":X[23],"Patient024":X[24],"Patient025":X[25],"Patient026":X[26],"Patient027":X[27],"Patient028":X[28],"Patient029":X[29],"Patient030":X[31],"Patient031":X[41],"Patient032":X[42],"Patient033":X[43],"Patient034":X[44],"Patient035":X[45],"Patient036":X[46],"Patient037":X[47],"Patient038":X[48],"Patient039":X[49],"Patient040":X[51],"Patient41":X[61],"Patient042":X[62],"Patient043":X[63],"Patient044":X[64],"Patient045":X[65],"Patient046":X[66],"Patient047":X[67],"Patient048":X[68],"Patient049":X[69],"Patient050":X[71], } -patient_ecg_sel = st.selectbox( "Select an ECG data of a single patient from the given list", list(patient_enc_data.keys())) -ecg_test_data = patient_enc_data[patient_ecg_sel] - -st.subheader("Diagnosis Report: ") -st.caption(patient_ecg_sel) -if st.button("Diagnose"): - X_train, X_test, y_train, y_test = model_gen(ECG_data_type) - - xgb_model, daal_model = diagnostic_models_evaluation(X_train, X_test, y_train, y_test) - predicted_class_xgb = xgb_model.predict(np.array([ecg_test_data])) - st.caption("Diagnosis using XGBooster: ") - diagnosis_report(predicted_class_xgb) - - n_classes = 11 - predicted_class_daal = d4p.gbt_classification_prediction( - nClasses=n_classes, - resultsToEvaluate="computeClassLabels", - fptype='float' - ).compute(np.array([ecg_test_data]), daal_model) - 
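- # Note: gbt_classification_prediction(...).compute() returns a daal4py result object, not a plain array;
- # its .prediction attribute holds the predicted class labels as an (n_samples, 1) array, which is
- # why predicted_class_daal.prediction is passed to diagnosis_report() below (the same attribute is
- # flattened with np.ravel in diagnostic_models_evaluation()).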
st.caption("Diagnosis using daal4py: ") - diagnosis_report(predicted_class_daal.prediction) - - st.caption("Diagnosis using Deep Learning: ") - dl_model = DL_diagnostic_model_eval(ECG_data_type, X_train, X_test, y_train, y_test) - predicted_class_dl = dl_model.predict(np.array([ecg_test_data])) - diagnosis_report(predicted_class_dl) - -else: - st.write("Press 'Diagnose' button after selecting the patient data from the dropdown menu.") - - -st.sidebar.subheader('Diagnose Heart Health') - -uploaded_file = st.sidebar.file_uploader("Upload ECG file of a single patient in CSV format") -ecg_test_data = ECG_data_uploader(uploaded_file) - -ECG_data_type = "15_Leads_ECG_upload_data" -ECG_data_types= {"15_Leads_ECG_upload_data":"15 Leads", "12_Leads_ECG_upload_data":"12 Leads"} -ECG_data_type= st.sidebar.selectbox("Select the number of signal leads used in the ECG data ",list(ECG_data_types.keys())) - - -if st.sidebar.button("Check Your Heart health"): - st.caption(ECG_data_type) - - if ECG_data_type == "15_Leads_ECG_upload_data" : - ECG_data_type = "15_Leads_ECG_data" - elif ECG_data_type == "12_Leads_ECG_upload_data" : - ECG_data_type = "12_Leads_ECG_data" - - X_train, X_test, y_train, y_test = model_gen(ECG_data_type) - - xgb_model, daal_model = diagnostic_models_evaluation(X_train, X_test, y_train, y_test) - ecg_test_data_xgb = np.array([ecg_test_data]) # Convert to 2-dimensional array - ecg_test_data_xgb = np.reshape(ecg_test_data_xgb, (1, -1)) # Reshape to (1, -1) dimensions - - predicted_class_xgb = xgb_model.predict(ecg_test_data_xgb) - st.caption("Diagnosis using XGBooster: ") - diagnosis_report(predicted_class_xgb) - - n_classes = 11 - predicted_class_daal = d4p.gbt_classification_prediction( - nClasses=n_classes, - resultsToEvaluate="computeClassLabels", - fptype='float' - ).compute(ecg_test_data_xgb, daal_model) - st.caption("Diagnosis using daal4py: ") - diagnosis_report(predicted_class_daal.prediction) - - st.caption("Diagnosis using Deep Learning: ") - dl_model = DL_diagnostic_model_eval(ECG_data_type, X_train, X_test, y_train, y_test) - predicted_class_dl = dl_model.predict(np.array([ecg_test_data])) - diagnosis_report(predicted_class_dl) - -else: - st.write('') - - - - - - - diff --git a/spaces/derek-thomas/top2vec/app/utilities.py b/spaces/derek-thomas/top2vec/app/utilities.py deleted file mode 100644 index 16446f82e639a8a7c03df769b20772c063894812..0000000000000000000000000000000000000000 --- a/spaces/derek-thomas/top2vec/app/utilities.py +++ /dev/null @@ -1,43 +0,0 @@ -from logging import getLogger -from pathlib import Path - -import joblib -import pandas as pd -import streamlit as st -from top2vec import Top2Vec - -logger = getLogger(__name__) - -proj_dir = Path(__file__).parents[1] - - -def initialization(): - with st.spinner("Loading app..."): - if 'model' not in st.session_state: - model = Top2Vec.load('models/model.pkl') - model._check_model_status() - model.hierarchical_topic_reduction(num_topics=20) - - st.session_state.model = model - st.session_state.umap_model = joblib.load(proj_dir / 'models' / 'umap.sav') - logger.info("loading data...") - - if 'data' not in st.session_state: - logger.info("loading data...") - data = pd.read_csv(proj_dir / 'data' / 'data.csv') - data['topic_id'] = data['topic_id'].apply(lambda x: f'{x:02d}') - st.session_state.data = data - st.session_state.selected_data = data - st.session_state.all_topics = list(data.topic_id.unique()) - - if 'topics' not in st.session_state: - logger.info("loading topics...") - topics = pd.read_csv(proj_dir / 'data' 
/ 'topics.csv') - topics['topic_id'] = topics['topic_id'].apply(lambda x: f'{x:02d}') - st.session_state.topics = topics - topics_dict = topics[['topic_id', 'topic_0']].to_dict() - topic_str_to_word = {topics_dict['topic_id'][i]: topics_dict['topic_0'][i] for i in range(20)} - st.session_state.topic_str_to_word = topic_str_to_word - - if 'selected_points' not in st.session_state: - st.session_state.selected_points = [] diff --git a/spaces/diacanFperku/AutoGPT/Buku Matematika Smp Kelas 8 Semester 2 Erlangga.md b/spaces/diacanFperku/AutoGPT/Buku Matematika Smp Kelas 8 Semester 2 Erlangga.md deleted file mode 100644 index cd5a272427eb1217ff0cf4543b9fb538490dfe00..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Buku Matematika Smp Kelas 8 Semester 2 Erlangga.md +++ /dev/null @@ -1,32 +0,0 @@ -

    buku matematika smp kelas 8 semester 2 erlangga


    Downloadhttps://gohhs.com/2uFU9R



    -
    -seluruh dunia ajaib - -terima kasih telah mengikuti saya - -Russian: - -в одном тесте мы изучаем непростые задачи - -Непростая программа - это такое значение - -В этом тесте вам предстоит задать - -Семинаром разбираться со сложными программами, - -что будет для вас проще, чем думать - -Именно поэтому ваши школьники и студенты в этом учились - -каждый через один и тот же день - -под любой программой или задачей - -Заряжающая программа строит блоки арифметики - -В данном тесте мы просто должны будем подсчитать целое число - -Мы должны прос 4fefd39f24
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/File Backup Mikrotik Rb750.epub.md b/spaces/diacanFperku/AutoGPT/File Backup Mikrotik Rb750.epub.md deleted file mode 100644 index 021983a3b55adc35c41ddb2d6325fbda0d15300b..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/File Backup Mikrotik Rb750.epub.md +++ /dev/null @@ -1,16 +0,0 @@ -

    File Backup Mikrotik Rb750.epub


    Download > https://gohhs.com/2uFUN0



    -
    -... /stories/3005763-benhur-english-2-tamil-dubbed-movie-torrent-download-georald ... Christians blow. glenindy. Jan 30, 2022. glenindy d868ddde6e 13, 2020 - File Backup Mikrotik Rb750.epub - Angels with Scaly Wings - Digital Deluxe Edition Upgrade full crack [PC] Pc torrent download locations. ... -3D-Bridge-torrent-download-latest-movie-download-torrent-download-georald ... -Download the game on Android: Download the game on iOS: 3D-Bridge-torrent-download-latest-movie-download-torrent-download-georald- ... -Tales from the Borderlands. -January 30, 2022. -Friendly. -Vote. -Killing Floor.glenindy. -Revolutionary Houston. glenindy. -Female Torture. glenindy. -Vote. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/diego2554/RemBG_super/rembg/__init__.py b/spaces/diego2554/RemBG_super/rembg/__init__.py deleted file mode 100644 index 26026af176cc508ebf369e342b1d9012d94e53b6..0000000000000000000000000000000000000000 --- a/spaces/diego2554/RemBG_super/rembg/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from . import _version - -__version__ = _version.get_versions()["version"] - -from .bg import remove -from .session_factory import new_session diff --git a/spaces/diffusers/sdxl-to-diffusers/README.md b/spaces/diffusers/sdxl-to-diffusers/README.md deleted file mode 100644 index dcb379b542cef7afaa2354f7d54ef5cdb944d982..0000000000000000000000000000000000000000 --- a/spaces/diffusers/sdxl-to-diffusers/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SD-XL To Diffusers -emoji: 🎨➡️🧨 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.31.0 -app_file: app.py -pinned: true -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/digitalxingtong/Azusa-Bert-VITS2/monotonic_align/__init__.py b/spaces/digitalxingtong/Azusa-Bert-VITS2/monotonic_align/__init__.py deleted file mode 100644 index a323673bb16070d6d0fffddb939b657d0915ff1b..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Azusa-Bert-VITS2/monotonic_align/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - - -def maximum_path(neg_cent, mask): - """ numba optimized version. - neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) \ No newline at end of file diff --git a/spaces/digitalxingtong/Nailv-read-Bert-Vits2/text/chinese_bert.py b/spaces/digitalxingtong/Nailv-read-Bert-Vits2/text/chinese_bert.py deleted file mode 100644 index cb84ce0b426cd0a1c7954ddcdf41322c10ed14fa..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Nailv-read-Bert-Vits2/text/chinese_bert.py +++ /dev/null @@ -1,50 +0,0 @@ -import torch -from transformers import AutoTokenizer, AutoModelForMaskedLM - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -tokenizer = AutoTokenizer.from_pretrained("./bert/chinese-roberta-wwm-ext-large") -model = AutoModelForMaskedLM.from_pretrained("./bert/chinese-roberta-wwm-ext-large").to(device) - -def get_bert_feature(text, word2ph): - with torch.no_grad(): - inputs = tokenizer(text, return_tensors='pt') - for i in inputs: - inputs[i] = inputs[i].to(device) - res = model(**inputs, output_hidden_states=True) - res = torch.cat(res['hidden_states'][-3:-2], -1)[0].cpu() - - assert len(word2ph) == len(text)+2 - word2phone = word2ph - phone_level_feature = [] - for i in range(len(word2phone)): - repeat_feature = res[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - - - return phone_level_feature.T - -if __name__ == '__main__': - # feature = get_bert_feature('你好,我是说的道理。') - import torch - - word_level_feature = torch.rand(38, 1024) # 12个词,每个词1024维特征 - word2phone = [1, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 
1, 1, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1] - - # 计算总帧数 - total_frames = sum(word2phone) - print(word_level_feature.shape) - print(word2phone) - phone_level_feature = [] - for i in range(len(word2phone)): - print(word_level_feature[i].shape) - - # 对每个词重复word2phone[i]次 - repeat_feature = word_level_feature[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - print(phone_level_feature.shape) # torch.Size([36, 1024]) - diff --git a/spaces/dolceschokolade/chatbot-mini/components/Chatbar/components/ChatbarSettings.tsx b/spaces/dolceschokolade/chatbot-mini/components/Chatbar/components/ChatbarSettings.tsx deleted file mode 100644 index 4a7f9fb9b12e73e2f9981a48b4d1b1cb8036e4a2..0000000000000000000000000000000000000000 --- a/spaces/dolceschokolade/chatbot-mini/components/Chatbar/components/ChatbarSettings.tsx +++ /dev/null @@ -1,73 +0,0 @@ -import { IconFileExport, IconSettings } from '@tabler/icons-react'; -import { useContext, useState } from 'react'; - -import { useTranslation } from 'next-i18next'; - -import HomeContext from '@/pages/api/home/home.context'; - -import { SettingDialog } from '@/components/Settings/SettingDialog'; - -import { Import } from '../../Settings/Import'; -import { Key } from '../../Settings/Key'; -import { SidebarButton } from '../../Sidebar/SidebarButton'; -import ChatbarContext from '../Chatbar.context'; -import { ClearConversations } from './ClearConversations'; -import { PluginKeys } from './PluginKeys'; - -export const ChatbarSettings = () => { - const { t } = useTranslation('sidebar'); - const [isSettingDialogOpen, setIsSettingDialog] = useState(false); - - const { - state: { - apiKey, - lightMode, - serverSideApiKeyIsSet, - serverSidePluginKeysSet, - conversations, - }, - dispatch: homeDispatch, - } = useContext(HomeContext); - - const { - handleClearConversations, - handleImportConversations, - handleExportData, - handleApiKeyChange, - } = useContext(ChatbarContext); - - return ( -
    - {conversations.length > 0 ? ( - - ) : null} - - - - } - onClick={() => handleExportData()} - /> - - } - onClick={() => setIsSettingDialog(true)} - /> - - {!serverSideApiKeyIsSet ? ( - - ) : null} - - {!serverSidePluginKeysSet ? : null} - - { - setIsSettingDialog(false); - }} - /> -
    - ); -}; diff --git a/spaces/dpe1/beat_manipulator/examples.py b/spaces/dpe1/beat_manipulator/examples.py deleted file mode 100644 index 7e33ae272ffc3ea9570739d91171e4b2ba03a6b8..0000000000000000000000000000000000000000 --- a/spaces/dpe1/beat_manipulator/examples.py +++ /dev/null @@ -1,11 +0,0 @@ -import beat_manipulator as bm, os, random - -path = 'F:/Stuff/Music/Tracks/' -song = 'Phonetick - You.mp3' -song = path + song - -#bm.presets.savetest(song, scale = 1, shift = 0) - -bm.beatswap(song, 'random', scale = 1, shift = 0) - -#bm.presets.use(song = song, preset = 'dotted snares fast 1', scale = 1) \ No newline at end of file diff --git a/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/models/GroundingDINO/transformer_vanilla.py b/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/models/GroundingDINO/transformer_vanilla.py deleted file mode 100644 index 10c0920c1a217af5bb3e1b13077568035ab3b7b5..0000000000000000000000000000000000000000 --- a/spaces/dragonSwing/annotate-anything/GroundingDINO/groundingdino/models/GroundingDINO/transformer_vanilla.py +++ /dev/null @@ -1,123 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Copyright (c) Aishwarya Kamath & Nicolas Carion. Licensed under the Apache License 2.0. All Rights Reserved -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -DETR Transformer class. - -Copy-paste from torch.nn.Transformer with modifications: - * positional encodings are passed in MHattention - * extra LN at the end of encoder is removed - * decoder returns a stack of activations from all decoding layers -""" -from typing import Optional - -import torch -import torch.nn.functional as F -from torch import Tensor, nn - -from .utils import ( - MLP, - _get_activation_fn, - _get_clones, - gen_encoder_output_proposals, - gen_sineembed_for_position, - sigmoid_focal_loss, -) - - -class TextTransformer(nn.Module): - def __init__(self, num_layers, d_model=256, nheads=8, dim_feedforward=2048, dropout=0.1): - super().__init__() - self.num_layers = num_layers - self.d_model = d_model - self.nheads = nheads - self.dim_feedforward = dim_feedforward - self.norm = None - - single_encoder_layer = TransformerEncoderLayer( - d_model=d_model, nhead=nheads, dim_feedforward=dim_feedforward, dropout=dropout - ) - self.layers = _get_clones(single_encoder_layer, num_layers) - - def forward(self, memory_text: torch.Tensor, text_attention_mask: torch.Tensor): - """ - - Args: - text_attention_mask: bs, num_token - memory_text: bs, num_token, d_model - - Raises: - RuntimeError: _description_ - - Returns: - output: bs, num_token, d_model - """ - - output = memory_text.transpose(0, 1) - - for layer in self.layers: - output = layer(output, src_key_padding_mask=text_attention_mask) - - if self.norm is not None: - output = self.norm(output) - - return output.transpose(0, 1) - - -class TransformerEncoderLayer(nn.Module): - def __init__( - self, - d_model, - nhead, - dim_feedforward=2048, - dropout=0.1, - activation="relu", - normalize_before=False, - ): - super().__init__() - self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - 
self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm1 = nn.LayerNorm(d_model) - self.norm2 = nn.LayerNorm(d_model) - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - self.nhead = nhead - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward( - self, - src, - src_mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - # repeat attn mask - if src_mask.dim() == 3 and src_mask.shape[0] == src.shape[1]: - # bs, num_q, num_k - src_mask = src_mask.repeat(self.nhead, 1, 1) - - q = k = self.with_pos_embed(src, pos) - - src2 = self.self_attn(q, k, value=src, attn_mask=src_mask)[0] - - # src2 = self.self_attn(q, k, value=src, attn_mask=src_mask, key_padding_mask=src_key_padding_mask)[0] - src = src + self.dropout1(src2) - src = self.norm1(src) - src2 = self.linear2(self.dropout(self.activation(self.linear1(src)))) - src = src + self.dropout2(src2) - src = self.norm2(src) - return src diff --git a/spaces/dylanebert/gaussian-viewer/public/_app/immutable/nodes/0.2a6e7c35.js b/spaces/dylanebert/gaussian-viewer/public/_app/immutable/nodes/0.2a6e7c35.js deleted file mode 100644 index 3404e30105b90e1692045141cd9385bee8b3e801..0000000000000000000000000000000000000000 --- a/spaces/dylanebert/gaussian-viewer/public/_app/immutable/nodes/0.2a6e7c35.js +++ /dev/null @@ -1 +0,0 @@ -import{s as l,c as r,u as i,g as u,d as _}from"../chunks/scheduler.8b74b908.js";import{S as f,i as c,d as p,t as d}from"../chunks/index.c146e4e6.js";const m=!0,S=Object.freeze(Object.defineProperty({__proto__:null,prerender:m},Symbol.toStringTag,{value:"Module"}));function $(n){let s;const a=n[1].default,e=r(a,n,n[0],null);return{c(){e&&e.c()},l(t){e&&e.l(t)},m(t,o){e&&e.m(t,o),s=!0},p(t,[o]){e&&e.p&&(!s||o&1)&&i(e,a,t,t[0],s?_(a,t[0],o,null):u(t[0]),null)},i(t){s||(p(e,t),s=!0)},o(t){d(e,t),s=!1},d(t){e&&e.d(t)}}}function g(n,s,a){let{$$slots:e={},$$scope:t}=s;return n.$$set=o=>{"$$scope"in o&&a(0,t=o.$$scope)},[t,e]}class v extends f{constructor(s){super(),c(this,s,g,$,l,{})}}export{v as component,S as universal}; diff --git a/spaces/edemgold/conversation-bot/README.md b/spaces/edemgold/conversation-bot/README.md deleted file mode 100644 index d5b60c29689b12f25f587c065b0c65d30a8bf021..0000000000000000000000000000000000000000 --- a/spaces/edemgold/conversation-bot/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Conversation Bot -emoji: 🐢 -colorFrom: blue -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/emc348/faces-through-time/models/StyleCLIP/global_directions/utils/visualizer.py b/spaces/emc348/faces-through-time/models/StyleCLIP/global_directions/utils/visualizer.py deleted file mode 100644 index 8c4a1fba06bf6bc680aa59bf645f796283f6f1c6..0000000000000000000000000000000000000000 --- a/spaces/emc348/faces-through-time/models/StyleCLIP/global_directions/utils/visualizer.py +++ /dev/null @@ -1,605 +0,0 @@ -# python 3.7 -"""Utility functions for visualizing results on html page.""" - -import base64 -import os.path -import cv2 -import numpy as np - -__all__ = [ - 'get_grid_shape', 'get_blank_image', 'load_image', 'save_image', - 'resize_image', 'add_text_to_image', 'fuse_images', 'HtmlPageVisualizer', - 'VideoReader', 'VideoWriter', 'adjust_pixel_range' -] - - -def adjust_pixel_range(images, min_val=-1.0, max_val=1.0, channel_order='NCHW'): - """Adjusts the pixel range of the input images. - - This function assumes the input array (image batch) is with shape [batch_size, - channel, height, width] if `channel_order = NCHW`, or with shape [batch_size, - height, width] if `channel_order = NHWC`. The returned images are with shape - [batch_size, height, width, channel] and pixel range [0, 255]. - - NOTE: The channel order of output images will remain the same as the input. - - Args: - images: Input images to adjust pixel range. - min_val: Min value of the input images. (default: -1.0) - max_val: Max value of the input images. (default: 1.0) - channel_order: Channel order of the input array. (default: NCHW) - - Returns: - The postprocessed images with dtype `numpy.uint8` and range [0, 255]. - - Raises: - ValueError: If the input `images` are not with type `numpy.ndarray` or the - shape is invalid according to `channel_order`. - """ - if not isinstance(images, np.ndarray): - raise ValueError(f'Images should be with type `numpy.ndarray`!') - - channel_order = channel_order.upper() - if channel_order not in ['NCHW', 'NHWC']: - raise ValueError(f'Invalid channel order `{channel_order}`!') - - if images.ndim != 4: - raise ValueError(f'Input images are expected to be with shape `NCHW` or ' - f'`NHWC`, but `{images.shape}` is received!') - if channel_order == 'NCHW' and images.shape[1] not in [1, 3]: - raise ValueError(f'Input images should have 1 or 3 channels under `NCHW` ' - f'channel order!') - if channel_order == 'NHWC' and images.shape[3] not in [1, 3]: - raise ValueError(f'Input images should have 1 or 3 channels under `NHWC` ' - f'channel order!') - - images = images.astype(np.float32) - images = (images - min_val) * 255 / (max_val - min_val) - images = np.clip(images + 0.5, 0, 255).astype(np.uint8) - if channel_order == 'NCHW': - images = images.transpose(0, 2, 3, 1) - - return images - - -def get_grid_shape(size, row=0, col=0, is_portrait=False): - """Gets the shape of a grid based on the size. - - This function makes greatest effort on making the output grid square if - neither `row` nor `col` is set. If `is_portrait` is set as `False`, the height - will always be equal to or smaller than the width. For example, if input - `size = 16`, output shape will be `(4, 4)`; if input `size = 15`, output shape - will be (3, 5). Otherwise, the height will always be equal to or larger than - the width. - - Args: - size: Size (height * width) of the target grid. - is_portrait: Whether to return a portrait size of a landscape size. - (default: False) - - Returns: - A two-element tuple, representing height and width respectively. 
- """ - assert isinstance(size, int) - assert isinstance(row, int) - assert isinstance(col, int) - if size == 0: - return (0, 0) - - if row > 0 and col > 0 and row * col != size: - row = 0 - col = 0 - - if row > 0 and size % row == 0: - return (row, size // row) - if col > 0 and size % col == 0: - return (size // col, col) - - row = int(np.sqrt(size)) - while row > 0: - if size % row == 0: - col = size // row - break - row = row - 1 - - return (col, row) if is_portrait else (row, col) - - -def get_blank_image(height, width, channels=3, is_black=True): - """Gets a blank image, either white of black. - - NOTE: This function will always return an image with `RGB` channel order for - color image and pixel range [0, 255]. - - Args: - height: Height of the returned image. - width: Width of the returned image. - channels: Number of channels. (default: 3) - is_black: Whether to return a black image or white image. (default: True) - """ - shape = (height, width, channels) - if is_black: - return np.zeros(shape, dtype=np.uint8) - return np.ones(shape, dtype=np.uint8) * 255 - - -def load_image(path): - """Loads an image from disk. - - NOTE: This function will always return an image with `RGB` channel order for - color image and pixel range [0, 255]. - - Args: - path: Path to load the image from. - - Returns: - An image with dtype `np.ndarray` or `None` if input `path` does not exist. - """ - if not os.path.isfile(path): - return None - - image = cv2.imread(path) - return image[:, :, ::-1] - - -def save_image(path, image): - """Saves an image to disk. - - NOTE: The input image (if colorful) is assumed to be with `RGB` channel order - and pixel range [0, 255]. - - Args: - path: Path to save the image to. - image: Image to save. - """ - if image is None: - return - - assert len(image.shape) == 3 and image.shape[2] in [1, 3] - cv2.imwrite(path, image[:, :, ::-1]) - - -def resize_image(image, *args, **kwargs): - """Resizes image. - - This is a wrap of `cv2.resize()`. - - NOTE: THe channel order of the input image will not be changed. - - Args: - image: Image to resize. - """ - if image is None: - return None - - assert image.ndim == 3 and image.shape[2] in [1, 3] - image = cv2.resize(image, *args, **kwargs) - if image.ndim == 2: - return image[:, :, np.newaxis] - return image - - -def add_text_to_image(image, - text='', - position=None, - font=cv2.FONT_HERSHEY_TRIPLEX, - font_size=1.0, - line_type=cv2.LINE_8, - line_width=1, - color=(255, 255, 255)): - """Overlays text on given image. - - NOTE: The input image is assumed to be with `RGB` channel order. - - Args: - image: The image to overlay text on. - text: Text content to overlay on the image. (default: '') - position: Target position (bottom-left corner) to add text. If not set, - center of the image will be used by default. (default: None) - font: Font of the text added. (default: cv2.FONT_HERSHEY_TRIPLEX) - font_size: Font size of the text added. (default: 1.0) - line_type: Line type used to depict the text. (default: cv2.LINE_8) - line_width: Line width used to depict the text. (default: 1) - color: Color of the text added in `RGB` channel order. (default: - (255, 255, 255)) - - Returns: - An image with target text overlayed on. 
- """ - if image is None or not text: - return image - - cv2.putText(img=image, - text=text, - org=position, - fontFace=font, - fontScale=font_size, - color=color, - thickness=line_width, - lineType=line_type, - bottomLeftOrigin=False) - - return image - - -def fuse_images(images, - image_size=None, - row=0, - col=0, - is_row_major=True, - is_portrait=False, - row_spacing=0, - col_spacing=0, - border_left=0, - border_right=0, - border_top=0, - border_bottom=0, - black_background=True): - """Fuses a collection of images into an entire image. - - Args: - images: A collection of images to fuse. Should be with shape [num, height, - width, channels]. - image_size: Int or two-element tuple. This field is used to resize the image - before fusing. `None` disables resizing. (default: None) - row: Number of rows used for image fusion. If not set, this field will be - automatically assigned based on `col` and total number of images. - (default: None) - col: Number of columns used for image fusion. If not set, this field will be - automatically assigned based on `row` and total number of images. - (default: None) - is_row_major: Whether the input images should be arranged row-major or - column-major. (default: True) - is_portrait: Only active when both `row` and `col` should be assigned - automatically. (default: False) - row_spacing: Space between rows. (default: 0) - col_spacing: Space between columns. (default: 0) - border_left: Width of left border. (default: 0) - border_right: Width of right border. (default: 0) - border_top: Width of top border. (default: 0) - border_bottom: Width of bottom border. (default: 0) - - Returns: - The fused image. - - Raises: - ValueError: If the input `images` is not with shape [num, height, width, - width]. - """ - if images is None: - return images - - if not images.ndim == 4: - raise ValueError(f'Input `images` should be with shape [num, height, ' - f'width, channels], but {images.shape} is received!') - - num, image_height, image_width, channels = images.shape - if image_size is not None: - if isinstance(image_size, int): - image_size = (image_size, image_size) - assert isinstance(image_size, (list, tuple)) and len(image_size) == 2 - width, height = image_size - else: - height, width = image_height, image_width - row, col = get_grid_shape(num, row=row, col=col, is_portrait=is_portrait) - fused_height = ( - height * row + row_spacing * (row - 1) + border_top + border_bottom) - fused_width = ( - width * col + col_spacing * (col - 1) + border_left + border_right) - fused_image = get_blank_image( - fused_height, fused_width, channels=channels, is_black=black_background) - images = images.reshape(row, col, image_height, image_width, channels) - if not is_row_major: - images = images.transpose(1, 0, 2, 3, 4) - - for i in range(row): - y = border_top + i * (height + row_spacing) - for j in range(col): - x = border_left + j * (width + col_spacing) - if image_size is not None: - image = cv2.resize(images[i, j], image_size) - else: - image = images[i, j] - fused_image[y:y + height, x:x + width] = image - - return fused_image - - -def get_sortable_html_header(column_name_list, sort_by_ascending=False): - """Gets header for sortable html page. - - Basically, the html page contains a sortable table, where user can sort the - rows by a particular column by clicking the column head. - - Example: - - column_name_list = [name_1, name_2, name_3] - header = get_sortable_html_header(column_name_list) - footer = get_sortable_html_footer() - sortable_table = ... 
- html_page = header + sortable_table + footer - - Args: - column_name_list: List of column header names. - sort_by_ascending: Default sorting order. If set as `True`, the html page - will be sorted by ascending order when the header is clicked for the first - time. - - Returns: - A string, which represents for the header for a sortable html page. - """ - header = '\n'.join([ - '', - '', - '', - '', - '', - '', - '', - '', - '', - '', - '', - '', - '', - '']) - for idx, column_name in enumerate(column_name_list): - header += f' \n' - header += '\n' - header += '\n' - header += '\n' - - return header - - -def get_sortable_html_footer(): - """Gets footer for sortable html page. - - Check function `get_sortable_html_header()` for more details. - """ - return '\n
    {column_name}
    \n\n\n\n' - - -def encode_image_to_html_str(image, image_size=None): - """Encodes an image to html language. - - Args: - image: The input image to encode. Should be with `RGB` channel order. - image_size: Int or two-element tuple. This field is used to resize the image - before encoding. `None` disables resizing. (default: None) - - Returns: - A string which represents the encoded image. - """ - if image is None: - return '' - - assert len(image.shape) == 3 and image.shape[2] in [1, 3] - - # Change channel order to `BGR`, which is opencv-friendly. - image = image[:, :, ::-1] - - # Resize the image if needed. - if image_size is not None: - if isinstance(image_size, int): - image_size = (image_size, image_size) - assert isinstance(image_size, (list, tuple)) and len(image_size) == 2 - image = cv2.resize(image, image_size) - - # Encode the image to html-format string. - encoded_image = cv2.imencode(".jpg", image)[1].tostring() - encoded_image_base64 = base64.b64encode(encoded_image).decode('utf-8') - html_str = f'' - - return html_str - - -class HtmlPageVisualizer(object): - """Defines the html page visualizer. - - This class can be used to visualize image results as html page. Basically, it - is based on an html-format sorted table with helper functions - `get_sortable_html_header()`, `get_sortable_html_footer()`, and - `encode_image_to_html_str()`. To simplify the usage, specifying the following - fields is enough to create a visualization page: - - (1) num_rows: Number of rows of the table (header-row exclusive). - (2) num_cols: Number of columns of the table. - (3) header contents (optional): Title of each column. - - NOTE: `grid_size` can be used to assign `num_rows` and `num_cols` - automatically. - - Example: - - html = HtmlPageVisualizer(num_rows, num_cols) - html.set_headers([...]) - for i in range(num_rows): - for j in range(num_cols): - html.set_cell(i, j, text=..., image=...) - html.save('visualize.html') - """ - - def __init__(self, - num_rows=0, - num_cols=0, - grid_size=0, - is_portrait=False, - viz_size=None): - if grid_size > 0: - num_rows, num_cols = get_grid_shape( - grid_size, row=num_rows, col=num_cols, is_portrait=is_portrait) - assert num_rows > 0 and num_cols > 0 - - self.num_rows = num_rows - self.num_cols = num_cols - self.viz_size = viz_size - self.headers = ['' for _ in range(self.num_cols)] - self.cells = [[{ - 'text': '', - 'image': '', - } for _ in range(self.num_cols)] for _ in range(self.num_rows)] - - def set_header(self, column_idx, content): - """Sets the content of a particular header by column index.""" - self.headers[column_idx] = content - - def set_headers(self, contents): - """Sets the contents of all headers.""" - if isinstance(contents, str): - contents = [contents] - assert isinstance(contents, (list, tuple)) - assert len(contents) == self.num_cols - for column_idx, content in enumerate(contents): - self.set_header(column_idx, content) - - def set_cell(self, row_idx, column_idx, text='', image=None): - """Sets the content of a particular cell. - - Basically, a cell contains some text as well as an image. Both text and - image can be empty. - - Args: - row_idx: Row index of the cell to edit. - column_idx: Column index of the cell to edit. - text: Text to add into the target cell. - image: Image to show in the target cell. Should be with `RGB` channel - order. 
- """ - self.cells[row_idx][column_idx]['text'] = text - self.cells[row_idx][column_idx]['image'] = encode_image_to_html_str( - image, self.viz_size) - - def save(self, save_path): - """Saves the html page.""" - html = '' - for i in range(self.num_rows): - html += f'\n' - for j in range(self.num_cols): - text = self.cells[i][j]['text'] - image = self.cells[i][j]['image'] - if text: - html += f' {text}

    {image}\n' - else: - html += f' {image}\n' - html += f'\n' - - header = get_sortable_html_header(self.headers) - footer = get_sortable_html_footer() - - with open(save_path, 'w') as f: - f.write(header + html + footer) - - -class VideoReader(object): - """Defines the video reader. - - This class can be used to read frames from a given video. - """ - - def __init__(self, path): - """Initializes the video reader by loading the video from disk.""" - if not os.path.isfile(path): - raise ValueError(f'Video `{path}` does not exist!') - - self.path = path - self.video = cv2.VideoCapture(path) - assert self.video.isOpened() - self.position = 0 - - self.length = int(self.video.get(cv2.CAP_PROP_FRAME_COUNT)) - self.frame_height = int(self.video.get(cv2.CAP_PROP_FRAME_HEIGHT)) - self.frame_width = int(self.video.get(cv2.CAP_PROP_FRAME_WIDTH)) - self.fps = self.video.get(cv2.CAP_PROP_FPS) - - def __del__(self): - """Releases the opened video.""" - self.video.release() - - def read(self, position=None): - """Reads a certain frame. - - NOTE: The returned frame is assumed to be with `RGB` channel order. - - Args: - position: Optional. If set, the reader will read frames from the exact - position. Otherwise, the reader will read next frames. (default: None) - """ - if position is not None and position < self.length: - self.video.set(cv2.CAP_PROP_POS_FRAMES, position) - self.position = position - - success, frame = self.video.read() - self.position = self.position + 1 - - return frame[:, :, ::-1] if success else None - - -class VideoWriter(object): - """Defines the video writer. - - This class can be used to create a video. - - NOTE: `.avi` and `DIVX` is the most recommended codec format since it does not - rely on other dependencies. - """ - - def __init__(self, path, frame_height, frame_width, fps=24, codec='DIVX'): - """Creates the video writer.""" - self.path = path - self.frame_height = frame_height - self.frame_width = frame_width - self.fps = fps - self.codec = codec - - self.video = cv2.VideoWriter(filename=path, - fourcc=cv2.VideoWriter_fourcc(*codec), - fps=fps, - frameSize=(frame_width, frame_height)) - - def __del__(self): - """Releases the opened video.""" - self.video.release() - - def write(self, frame): - """Writes a target frame. - - NOTE: The input frame is assumed to be with `RGB` channel order. - """ - self.video.write(frame[:, :, ::-1]) diff --git a/spaces/ennet/ChatDev/camel/agents/embodied_agent.py b/spaces/ennet/ChatDev/camel/agents/embodied_agent.py deleted file mode 100644 index a9bf44872d25216f70296df5ccf9aeecf0ed22b1..0000000000000000000000000000000000000000 --- a/spaces/ennet/ChatDev/camel/agents/embodied_agent.py +++ /dev/null @@ -1,132 +0,0 @@ -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -# Licensed under the Apache License, Version 2.0 (the “License”); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an “AS IS” BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. 
=========== -from typing import Any, Dict, List, Optional, Tuple - -from colorama import Fore - -from camel.agents import BaseToolAgent, ChatAgent, HuggingFaceToolAgent -from camel.messages import ChatMessage, SystemMessage -from camel.typing import ModelType -from camel.utils import print_text_animated - - -class EmbodiedAgent(ChatAgent): - r"""Class for managing conversations of CAMEL Embodied Agents. - - Args: - system_message (SystemMessage): The system message for the chat agent. - model (ModelType, optional): The LLM model to use for generating - responses. (default :obj:`ModelType.GPT_4`) - model_config (Any, optional): Configuration options for the LLM model. - (default: :obj:`None`) - message_window_size (int, optional): The maximum number of previous - messages to include in the context window. If `None`, no windowing - is performed. (default: :obj:`None`) - action_space (List[Any], optional): The action space for the embodied - agent. (default: :obj:`None`) - verbose (bool, optional): Whether to print the critic's messages. - logger_color (Any): The color of the logger displayed to the user. - (default: :obj:`Fore.MAGENTA`) - """ - - def __init__( - self, - system_message: SystemMessage, - model: ModelType = ModelType.GPT_4, - model_config: Optional[Any] = None, - message_window_size: Optional[int] = None, - action_space: Optional[List[BaseToolAgent]] = None, - verbose: bool = False, - logger_color: Any = Fore.MAGENTA, - ) -> None: - default_action_space = [ - HuggingFaceToolAgent('hugging_face_tool_agent', model=model.value), - ] - self.action_space = action_space or default_action_space - action_space_prompt = self.get_action_space_prompt() - system_message.content = system_message.content.format( - action_space=action_space_prompt) - self.verbose = verbose - self.logger_color = logger_color - super().__init__( - system_message=system_message, - model=model, - model_config=model_config, - message_window_size=message_window_size, - ) - - def get_action_space_prompt(self) -> str: - r"""Returns the action space prompt. - - Returns: - str: The action space prompt. - """ - return "\n".join([ - f"*** {action.name} ***:\n {action.description}" - for action in self.action_space - ]) - - def step( - self, - input_message: ChatMessage, - ) -> Tuple[ChatMessage, bool, Dict[str, Any]]: - r"""Performs a step in the conversation. - - Args: - input_message (ChatMessage): The input message. - - Returns: - Tuple[ChatMessage, bool, Dict[str, Any]]: A tuple - containing the output messages, termination status, and - additional information. 
- """ - response = super().step(input_message) - - if response.msgs is None or len(response.msgs) == 0: - raise RuntimeError("Got None output messages.") - if response.terminated: - raise RuntimeError(f"{self.__class__.__name__} step failed.") - - # NOTE: Only single output messages are supported - explanations, codes = response.msg.extract_text_and_code_prompts() - - if self.verbose: - for explanation, code in zip(explanations, codes): - print_text_animated(self.logger_color + - f"> Explanation:\n{explanation}") - print_text_animated(self.logger_color + f"> Code:\n{code}") - - if len(explanations) > len(codes): - print_text_animated(self.logger_color + - f"> Explanation:\n{explanations}") - - content = response.msg.content - - if codes is not None: - content = "\n> Executed Results:" - global_vars = {action.name: action for action in self.action_space} - for code in codes: - executed_outputs = code.execute(global_vars) - content += ( - f"- Python standard output:\n{executed_outputs[0]}\n" - f"- Local variables:\n{executed_outputs[1]}\n") - content += "*" * 50 + "\n" - - # TODO: Handle errors - content = input_message.content + (Fore.RESET + - f"\n> Embodied Actions:\n{content}") - message = ChatMessage(input_message.role_name, input_message.role_type, - input_message.meta_dict, input_message.role, - content) - return message, response.terminated, response.info diff --git a/spaces/evaluate-metric/code_eval/code_eval.py b/spaces/evaluate-metric/code_eval/code_eval.py deleted file mode 100644 index 0885712e698a34067e8faabe6b029ea8d719e024..0000000000000000000000000000000000000000 --- a/spaces/evaluate-metric/code_eval/code_eval.py +++ /dev/null @@ -1,213 +0,0 @@ -# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""The CodeEval metric estimates the pass@k metric for code synthesis. 
-This is an evaluation harness for the HumanEval problem solving dataset -described in the paper "Evaluating Large Language Models Trained on Code" -(https://arxiv.org/abs/2107.03374).""" - -import itertools -import os -from collections import Counter, defaultdict -from concurrent.futures import ThreadPoolExecutor, as_completed - -import datasets -import numpy as np - -import evaluate - -from .execute import check_correctness - - -_CITATION = """\ -@misc{chen2021evaluating, - title={Evaluating Large Language Models Trained on Code}, - author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan \ -and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards \ -and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray \ -and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf \ -and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray \ -and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser \ -and Mohammad Bavarian and Clemens Winter and Philippe Tillet \ -and Felipe Petroski Such and Dave Cummings and Matthias Plappert \ -and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss \ -and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak \ -and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain \ -and William Saunders and Christopher Hesse and Andrew N. Carr \ -and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa \ -and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati \ -and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei \ -and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba}, - year={2021}, - eprint={2107.03374}, - archivePrefix={arXiv}, - primaryClass={cs.LG} -} -""" - -_DESCRIPTION = """\ -This metric implements the evaluation harness for the HumanEval problem solving dataset -described in the paper "Evaluating Large Language Models Trained on Code" -(https://arxiv.org/abs/2107.03374). -""" - - -_KWARGS_DESCRIPTION = """ -Calculates how good are predictions given some references, using certain scores -Args: - predictions: list of candidates to evaluate. Each candidates should be a list - of strings with several code candidates to solve the problem. - references: a list with a test for each prediction. Each test should evaluate the - correctness of a code candidate. - k: number of code candidates to consider in the evaluation (Default: [1, 10, 100]) - num_workers: number of workers used to evaluate the canidate programs (Default: 4). - timeout: -Returns: - pass_at_k: dict with pass rates for each k - results: dict with granular results of each unittest -Examples: - >>> code_eval = evaluate.load("code_eval") - >>> test_cases = ["assert add(2,3)==5"] - >>> candidates = [["def add(a,b): return a*b", "def add(a, b): return a+b"]] - >>> pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1, 2]) - >>> print(pass_at_k) - {'pass@1': 0.5, 'pass@2': 1.0} -""" - - -_WARNING = """ -################################################################################ - !!!WARNING!!! -################################################################################ -The "code_eval" metric executes untrusted model-generated code in Python. -Although it is highly unlikely that model-generated code will do something -overtly malicious in response to this test suite, model-generated code may act -destructively due to a lack of model capability or alignment. 
-Users are strongly encouraged to sandbox this evaluation suite so that it -does not perform destructive actions on their host or network. For more -information on how OpenAI sandboxes its code, see the paper "Evaluating Large -Language Models Trained on Code" (https://arxiv.org/abs/2107.03374). - -Once you have read this disclaimer and taken appropriate precautions, -set the environment variable HF_ALLOW_CODE_EVAL="1". Within Python you can to this -with: - ->>> import os ->>> os.environ["HF_ALLOW_CODE_EVAL"] = "1" - -################################################################################\ -""" - -_LICENSE = """The MIT License - -Copyright (c) OpenAI (https://openai.com) - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in -all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN -THE SOFTWARE.""" - - -@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) -class CodeEval(evaluate.Metric): - def _info(self): - return evaluate.MetricInfo( - # This is the description that will appear on the metrics page. 
- description=_DESCRIPTION, - citation=_CITATION, - inputs_description=_KWARGS_DESCRIPTION, - # This defines the format of each prediction and reference - features=datasets.Features( - { - "predictions": datasets.Sequence(datasets.Value("string")), - "references": datasets.Value("string"), - } - ), - homepage="https://github.com/openai/human-eval", - codebase_urls=["https://github.com/openai/human-eval"], - reference_urls=["https://github.com/openai/human-eval"], - license=_LICENSE, - ) - - def _compute(self, predictions, references, k=[1, 10, 100], num_workers=4, timeout=3.0): - """Returns the scores""" - - if os.getenv("HF_ALLOW_CODE_EVAL", 0) != "1": - raise ValueError(_WARNING) - - if os.name == "nt": - raise NotImplementedError("This metric is currently not supported on Windows.") - - with ThreadPoolExecutor(max_workers=num_workers) as executor: - futures = [] - completion_id = Counter() - n_samples = 0 - results = defaultdict(list) - - for task_id, (candidates, test_case) in enumerate(zip(predictions, references)): - for candidate in candidates: - test_program = candidate + "\n" + test_case - args = (test_program, timeout, task_id, completion_id[task_id]) - future = executor.submit(check_correctness, *args) - futures.append(future) - completion_id[task_id] += 1 - n_samples += 1 - - for future in as_completed(futures): - result = future.result() - results[result["task_id"]].append((result["completion_id"], result)) - - total, correct = [], [] - for result in results.values(): - result.sort() - passed = [r[1]["passed"] for r in result] - total.append(len(passed)) - correct.append(sum(passed)) - total = np.array(total) - correct = np.array(correct) - - ks = k - pass_at_k = {f"pass@{k}": estimate_pass_at_k(total, correct, k).mean() for k in ks if (total >= k).all()} - - return pass_at_k, results - - -def estimate_pass_at_k(num_samples, num_correct, k): - """Estimates pass@k of each problem and returns them in an array.""" - - def estimator(n: int, c: int, k: int) -> float: - """Calculates 1 - comb(n - c, k) / comb(n, k).""" - if n - c < k: - return 1.0 - return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)) - - if isinstance(num_samples, int): - num_samples_it = itertools.repeat(num_samples, len(num_correct)) - else: - assert len(num_samples) == len(num_correct) - num_samples_it = iter(num_samples) - - return np.array([estimator(int(n), int(c), k) for n, c in zip(num_samples_it, num_correct)]) diff --git a/spaces/evaluate-metric/rl_reliability/rl_reliability.py b/spaces/evaluate-metric/rl_reliability/rl_reliability.py deleted file mode 100644 index 34a9c4570cbc2fcd7f4392886b32de6fa17e4dfd..0000000000000000000000000000000000000000 --- a/spaces/evaluate-metric/rl_reliability/rl_reliability.py +++ /dev/null @@ -1,186 +0,0 @@ -# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""Computes the RL Reliability Metrics.""" - -import datasets -import numpy as np -from rl_reliability_metrics.evaluation import eval_metrics -from rl_reliability_metrics.metrics import metrics_offline, metrics_online - -import evaluate - - -logger = evaluate.logging.get_logger(__name__) - -DEFAULT_EVAL_POINTS = [ - 50000, - 150000, - 250000, - 350000, - 450000, - 550000, - 650000, - 750000, - 850000, - 950000, - 1050000, - 1150000, - 1250000, - 1350000, - 1450000, - 1550000, - 1650000, - 1750000, - 1850000, - 1950000, -] - -N_RUNS_RECOMMENDED = 10 - -_CITATION = """\ -@conference{rl_reliability_metrics, - title = {Measuring the Reliability of Reinforcement Learning Algorithms}, - author = {Stephanie CY Chan, Sam Fishman, John Canny, Anoop Korattikara, and Sergio Guadarrama}, - booktitle = {International Conference on Learning Representations, Addis Ababa, Ethiopia}, - year = 2020, -} -""" - -_DESCRIPTION = """\ -Computes the RL reliability metrics from a set of experiments. There is an `"online"` and `"offline"` configuration for evaluation. -""" - - -_KWARGS_DESCRIPTION = """ -Computes the RL reliability metrics from a set of experiments. There is an `"online"` and `"offline"` configuration for evaluation. -Args: - timestamps: list of timestep lists/arrays that serve as index. - rewards: list of reward lists/arrays of each experiment. -Returns: - dictionary: a set of reliability metrics -Examples: - >>> import numpy as np - >>> rl_reliability = evaluate.load("rl_reliability", "online") - >>> results = rl_reliability.compute( - ... timesteps=[np.linspace(0, 2000000, 1000)], - ... rewards=[np.linspace(0, 100, 1000)] - ... ) - >>> print(results["LowerCVaROnRaw"].round(4)) - [0.0258] -""" - - -@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) -class RLReliability(evaluate.Metric): - """Computes the RL Reliability Metrics.""" - - def _info(self): - if self.config_name not in ["online", "offline"]: - raise KeyError("""You should supply a configuration name selected in '["online", "offline"]'""") - - return evaluate.MetricInfo( - module_type="metric", - description=_DESCRIPTION, - citation=_CITATION, - inputs_description=_KWARGS_DESCRIPTION, - features=datasets.Features( - { - "timesteps": datasets.Sequence(datasets.Value("int64")), - "rewards": datasets.Sequence(datasets.Value("float")), - } - ), - homepage="https://github.com/google-research/rl-reliability-metrics", - ) - - def _compute( - self, - timesteps, - rewards, - baseline="default", - freq_thresh=0.01, - window_size=100000, - window_size_trimmed=99000, - alpha=0.05, - eval_points=None, - ): - if len(timesteps) < N_RUNS_RECOMMENDED: - logger.warning( - f"For robust statistics it is recommended to use at least {N_RUNS_RECOMMENDED} runs whereas you provided {len(timesteps)}." 
- ) - - curves = [] - for timestep, reward in zip(timesteps, rewards): - curves.append(np.stack([timestep, reward])) - - if self.config_name == "online": - if baseline == "default": - baseline = "curve_range" - if eval_points is None: - eval_points = DEFAULT_EVAL_POINTS - - metrics = [ - metrics_online.HighFreqEnergyWithinRuns(thresh=freq_thresh), - metrics_online.IqrWithinRuns( - window_size=window_size_trimmed, eval_points=eval_points, baseline=baseline - ), - metrics_online.IqrAcrossRuns( - lowpass_thresh=freq_thresh, eval_points=eval_points, window_size=window_size, baseline=baseline - ), - metrics_online.LowerCVaROnDiffs(baseline=baseline), - metrics_online.LowerCVaROnDrawdown(baseline=baseline), - metrics_online.LowerCVaROnAcross( - lowpass_thresh=freq_thresh, eval_points=eval_points, window_size=window_size, baseline=baseline - ), - metrics_online.LowerCVaROnRaw(alpha=alpha, baseline=baseline), - metrics_online.MadAcrossRuns( - lowpass_thresh=freq_thresh, eval_points=eval_points, window_size=window_size, baseline=baseline - ), - metrics_online.MadWithinRuns( - eval_points=eval_points, window_size=window_size_trimmed, baseline=baseline - ), - metrics_online.MaxDrawdown(), - metrics_online.StddevAcrossRuns( - lowpass_thresh=freq_thresh, eval_points=eval_points, window_size=window_size, baseline=baseline - ), - metrics_online.StddevWithinRuns( - eval_points=eval_points, window_size=window_size_trimmed, baseline=baseline - ), - metrics_online.UpperCVaROnAcross( - alpha=alpha, - lowpass_thresh=freq_thresh, - eval_points=eval_points, - window_size=window_size, - baseline=baseline, - ), - metrics_online.UpperCVaROnDiffs(alpha=alpha, baseline=baseline), - metrics_online.UpperCVaROnDrawdown(alpha=alpha, baseline=baseline), - metrics_online.UpperCVaROnRaw(alpha=alpha, baseline=baseline), - metrics_online.MedianPerfDuringTraining(window_size=window_size, eval_points=eval_points), - ] - else: - if baseline == "default": - baseline = "median_perf" - - metrics = [ - metrics_offline.MadAcrossRollouts(baseline=baseline), - metrics_offline.IqrAcrossRollouts(baseline=baseline), - metrics_offline.StddevAcrossRollouts(baseline=baseline), - metrics_offline.LowerCVaRAcrossRollouts(alpha=alpha, baseline=baseline), - metrics_offline.UpperCVaRAcrossRollouts(alpha=alpha, baseline=baseline), - metrics_offline.MedianPerfAcrossRollouts(baseline=None), - ] - - evaluator = eval_metrics.Evaluator(metrics=metrics) - result = evaluator.compute_metrics(curves) - return result diff --git a/spaces/ezioruan/roop/roop/__init__.py b/spaces/ezioruan/roop/roop/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/f2api/gpt-academic/docs/WithFastapi.md b/spaces/f2api/gpt-academic/docs/WithFastapi.md deleted file mode 100644 index 188b52716485f15e528772c6454ee7839ced4406..0000000000000000000000000000000000000000 --- a/spaces/f2api/gpt-academic/docs/WithFastapi.md +++ /dev/null @@ -1,43 +0,0 @@ -# Running with fastapi - -We currently support fastapi in order to solve sub-path deploy issue. - -1. change CUSTOM_PATH setting in `config.py` - -``` sh -nano config.py -``` - -2. 
Edit main.py - -```diff - auto_opentab_delay() - - demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png") - + demo.queue(concurrency_count=CONCURRENT_COUNT) - - - # 如果需要在二级路径下运行 - - # CUSTOM_PATH, = get_conf('CUSTOM_PATH') - - # if CUSTOM_PATH != "/": - - # from toolbox import run_gradio_in_subpath - - # run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH) - - # else: - - # demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png") - - + 如果需要在二级路径下运行 - + CUSTOM_PATH, = get_conf('CUSTOM_PATH') - + if CUSTOM_PATH != "/": - + from toolbox import run_gradio_in_subpath - + run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH) - + else: - + demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png") - -if __name__ == "__main__": - main() -``` - - -3. Go! - -``` sh -python main.py -``` diff --git a/spaces/facebook/XLS-R-2B-22-16/README.md b/spaces/facebook/XLS-R-2B-22-16/README.md deleted file mode 100644 index c596d2a9156a632ef6f2a10c83672e3abfdec202..0000000000000000000000000000000000000000 --- a/spaces/facebook/XLS-R-2B-22-16/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: XLS-R All-to-All 2B -emoji: 🌎 -colorFrom: gray -colorTo: red -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
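For the `WithFastapi.md` instructions above: the edited `main.py` relies on the repository's own `run_gradio_in_subpath` helper from `toolbox`, whose implementation is not shown in this diff. As a rough sketch of the same idea using only Gradio's public FastAPI integration (the names and values below are illustrative placeholders, not the project's actual code):

```python
# Serve a Gradio Blocks app under a sub-path via FastAPI + uvicorn,
# instead of calling demo.launch() directly.
import gradio as gr
import uvicorn
from fastapi import FastAPI

CUSTOM_PATH = "/gpt-academic"            # stand-in for the value read from config.py
PORT = 7860

with gr.Blocks() as demo:                # stand-in for the UI built in main.py
    gr.Markdown("Running under a sub-path.")

app = FastAPI()
app = gr.mount_gradio_app(app, demo, path=CUSTOM_PATH)  # mount instead of launch

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=PORT)
```

With this layout the UI is reachable at `http://<host>:7860/gpt-academic` rather than at the root path.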
diff --git a/spaces/facebook/incoder-demo/modules/app.py b/spaces/facebook/incoder-demo/modules/app.py deleted file mode 100644 index 28ad5b07bcec3c7a0d80684f7404b80eb41548e0..0000000000000000000000000000000000000000 --- a/spaces/facebook/incoder-demo/modules/app.py +++ /dev/null @@ -1,240 +0,0 @@ -import sys -from typing import List -import traceback -import os -import base64 - -import logging -logging.basicConfig(level=logging.INFO) -import modules.cloud_logging - -import tokenizers -import torch -from transformers import AutoModelForCausalLM, AutoTokenizer -import json -import pprint - -# needs to be imported *before* transformers -if os.path.exists('debug'): - BIG_MODEL = False - CUDA = False -else: - BIG_MODEL = True - CUDA = True - -# from flask import Flask, request, render_template -# from flask_cors import CORS -# app = Flask(__name__, static_folder='static') -# app.config['TEMPLATES_AUTO_RELOAD'] = Tru -# CORS(app, resources= { -# r"/generate": {"origins": origins}, -# r"/infill": {"origins": origins}, -# }) -# origins=[f"http://localhost:{PORT}", "https://huggingface.co", "https://hf.space"] - -PORT = 7860 -VERBOSE = False - -if os.path.exists('unlock'): - MAX_LENGTH = 2048 -else: - MAX_LENGTH = 256+64 -TRUNCATION_MESSAGE = f'warning: This demo is limited to {MAX_LENGTH} tokens in the document for efficiency.' - -if BIG_MODEL: - model_name = "facebook/incoder-6B" - kwargs = dict( - revision="float16", - torch_dtype=torch.float16, - low_cpu_mem_usage=True, - ) -else: - model_name = "facebook/incoder-1B" - kwargs = dict() - -from fastapi import FastAPI, Request -from fastapi.staticfiles import StaticFiles -from fastapi.responses import FileResponse, StreamingResponse -app = FastAPI(docs_url=None, redoc_url=None) -app.mount("/static", StaticFiles(directory="static"), name="static") - - -logging.info("loading model") -model = AutoModelForCausalLM.from_pretrained(model_name, **kwargs) -logging.info("loading tokenizer") -tokenizer = AutoTokenizer.from_pretrained(model_name) -logging.info("loading complete") - -if CUDA: - model = model.half().cuda() - -BOS = "<|endoftext|>" -EOM = "<|endofmask|>" - -def make_sentinel(i): - return f"<|mask:{i}|>" - -SPECIAL_TOKENS = [make_sentinel(i) for i in range(256)] + [EOM] - -def generate(input, length_limit=None, temperature=None): - input_ids = tokenizer(input, return_tensors="pt").input_ids - if CUDA: - input_ids = input_ids.cuda() - current_length = input_ids.flatten().size(0) - max_length = length_limit + current_length - truncated = False - if max_length > MAX_LENGTH: - max_length = MAX_LENGTH - truncated = True - if max_length == current_length: - return input, True - output = model.generate(input_ids=input_ids, do_sample=True, top_p=0.95, temperature=temperature, max_length=max_length) - detok_hypo_str = tokenizer.decode(output.flatten()) - if detok_hypo_str.startswith(BOS): - detok_hypo_str = detok_hypo_str[len(BOS):] - return detok_hypo_str, truncated - -def infill(parts: List[str], length_limit=None, temperature=None, extra_sentinel=False, max_retries=1): - assert isinstance(parts, list) - retries_attempted = 0 - done = False - - - while (not done) and (retries_attempted < max_retries): - any_truncated = False - retries_attempted += 1 - if VERBOSE: - logging.info(f"retry {retries_attempted}") - if len(parts) == 1: - prompt = parts[0] - else: - prompt = "" - # encode parts separated by sentinel - for sentinel_ix, part in enumerate(parts): - prompt += part - if extra_sentinel or (sentinel_ix < len(parts) - 1): - prompt += 
make_sentinel(sentinel_ix) - - # prompt += TokenizerWrapper.make_sentinel(0) - - infills = [] - complete = [] - - done = True - - for sentinel_ix, part in enumerate(parts[:-1]): - complete.append(part) - prompt += make_sentinel(sentinel_ix) - completion, this_truncated = generate(prompt, length_limit, temperature) - any_truncated |= this_truncated - completion = completion[len(prompt):] - if EOM not in completion: - if VERBOSE: - logging.info(f"warning: {EOM} not found") - completion += EOM - # TODO: break inner loop here - done = False - completion = completion[:completion.index(EOM) + len(EOM)] - infilled = completion[:-len(EOM)] - infills.append(infilled) - complete.append(infilled) - prompt += completion - complete.append(parts[-1]) - text = ''.join(complete) - - if VERBOSE: - logging.info("generated text:") - logging.info(prompt) - logging.info() - logging.info("parts:") - logging.info(parts) - logging.info() - logging.info("infills:") - logging.info(infills) - logging.info() - logging.info("restitched text:") - logging.info(text) - logging.info() - - return { - 'text': text, - 'parts': parts, - 'infills': infills, - 'retries_attempted': retries_attempted, - 'truncated': any_truncated, - } - - -@app.head("/") -@app.get("/") -def index() -> FileResponse: - return FileResponse(path="static/index.html", media_type="text/html") - -@app.get('/generate') -# async def generate_maybe(request: Request): -async def generate_maybe(info: str): - # form = await info.json() - # form = await request.json() - # info is a base64-encoded, url-escaped json string (since GET doesn't support a body, and POST leads to CORS issues) - # fix padding, following https://stackoverflow.com/a/9956217/1319683 - info = base64.urlsafe_b64decode(info + '=' * (4 - len(info) % 4)).decode('utf-8') - form = json.loads(info) - # print(form) - prompt = form['prompt'] - length_limit = int(form['length']) - temperature = float(form['temperature']) - logging.info(json.dumps({ - 'length': length_limit, - 'temperature': temperature, - 'prompt': prompt, - })) - try: - generation, truncated = generate(prompt, length_limit, temperature) - if truncated: - message = TRUNCATION_MESSAGE - else: - message = '' - return {'result': 'success', 'type': 'generate', 'prompt': prompt, 'text': generation, 'message': message} - except Exception as e: - traceback.print_exception(*sys.exc_info()) - logging.error(e) - return {'result': 'error', 'type': 'generate', 'prompt': prompt, 'message': f'Error: {e}.'} - -@app.get('/infill') -# async def infill_maybe(request: Request): -async def infill_maybe(info: str): - # form = await info.json() - # form = await request.json() - # info is a base64-encoded, url-escaped json string (since GET doesn't support a body, and POST leads to CORS issues) - # fix padding, following https://stackoverflow.com/a/9956217/1319683 - info = base64.urlsafe_b64decode(info + '=' * (4 - len(info) % 4)).decode('utf-8') - form = json.loads(info) - length_limit = int(form['length']) - temperature = float(form['temperature']) - max_retries = 1 - extra_sentinel = True - logging.info(json.dumps({ - 'length': length_limit, - 'temperature': temperature, - 'parts_joined': ''.join(form['parts']), - })) - try: - if len(form['parts']) > 4: - return {'result': 'error', 'text': ''.join(form['parts']), 'type': 'infill', 'message': f"error: Can't use more than 3 tokens in this demo (for efficiency)."} - generation = infill(form['parts'], length_limit, temperature, extra_sentinel=extra_sentinel, max_retries=max_retries) - generation['result'] 
= 'success' - generation['type'] = 'infill' - if generation['truncated']: - generation['message'] = TRUNCATION_MESSAGE - else: - generation['message'] = '' - return generation - # return {'result': 'success', 'prefix': prefix, 'suffix': suffix, 'text': generation['text']} - except Exception as e: - traceback.print_exception(*sys.exc_info()) - logging.error(e) - return {'result': 'error', 'type': 'infill', 'message': f'Error: {e}.'} - - -if __name__ == "__main__": - app.run(host='0.0.0.0', port=PORT, threaded=False) diff --git a/spaces/falterWliame/Face_Mask_Detection/3d Sexvilla 2 Everlust Unlock All TOP Crack 4sharedrar.md b/spaces/falterWliame/Face_Mask_Detection/3d Sexvilla 2 Everlust Unlock All TOP Crack 4sharedrar.md deleted file mode 100644 index 4b155ff0f5b69873b12312aac859bdc403b7495e..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/3d Sexvilla 2 Everlust Unlock All TOP Crack 4sharedrar.md +++ /dev/null @@ -1,9 +0,0 @@ - -
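One note on the `modules/app.py` file just above: its `__main__` block calls `app.run(host='0.0.0.0', port=PORT, threaded=False)`, which is Flask's interface (left over from the commented-out Flask setup earlier in the file). A FastAPI instance has no `.run()` method, so that line would raise `AttributeError` if the module were executed directly. A conventional replacement, assuming the `app` and `PORT` already defined in that file, is to start it with uvicorn:

```python
# Hypothetical replacement for the final block of modules/app.py:
# FastAPI apps are served by an ASGI server such as uvicorn rather than app.run().
import uvicorn

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=PORT)  # `app` and PORT as defined earlier in app.py
```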

    How to download Naruto Shippuden series movies for free. http://www.sify.com/watch/naruto-shippuden-season-10/eng-sub/1080p. https://szdesign.com/videodownload/xhwd-exwi50dpsmvkklq/download. http://videocdn.movierumorsites.com/3d-sexvilla-2-everlust-unlock-all-crack-4sharedrar-free-download.

    -

    3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar


    DOWNLOAD - https://urlca.com/2uDd7p



    -

    3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar Category : 3D Sexvilla 2 Everlust Unlock All Crack 4sharedrar Download. rar 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. Naruto Shippuden Season 10 (Sub) 1080p (07,0 Mb) in Full HD 1080p from vidme and hundreds of other compatible sources. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar.

    -

    Tuskalott.TV Movie Hindi Dubbed 1080p X264-AC3.mkv -6thGarde
    Resuscitate 2014 Hindi Movie Full HD 1080p Subtitle In
    Maya Diakhaby As Rogue 2015 Full Movie XXX
    FIFA 15 Game Full Cracked APK + DATA FULL
    Nike Jordan 4 2016 Full Black On X264-FSH
    5. https://eggnogg.us/download/xvzv-exhz50dgvqvmq/3d-sexvilla-2-everlust-unlock-all-crack-4sharedrar.rar
    3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar.

    -

    3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. In fact, you can at least burn an image onto a CD using some libraries to see whether it works or not. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. i would recommend that you give it a shot and hopefully, you end up having more options available to you. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. Some years ago, as a child, I used to visit my aunt's house in a small village in the interior of the country. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. Check out our full list of cracks and keygens. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. Post at bmehsu.com to reach me: http://www.iibuck.com/Free-IPhone-games.html 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. Her [url=http://www.uncut-studios.com]3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar[/url] became love at first sight for Todd and she asks him to try to convince her parents to let him marry her. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. I Downloaded From Crackle Of Xvid Is Paul Blart Movie. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar. All you have to do is to take this crack. 3d Sexvilla 2 Everlust Unlock All Crack 4sharedrar.

    -

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Download __FULL__ Shadow Of The Colossus Pc Full 12.md b/spaces/falterWliame/Face_Mask_Detection/Download __FULL__ Shadow Of The Colossus Pc Full 12.md deleted file mode 100644 index 7818fda0ecfb5ab85988d8a6ed652c647f74a9b1..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Download __FULL__ Shadow Of The Colossus Pc Full 12.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Download Shadow Of The Colossus Pc Full 12


    DOWNLOADhttps://urlca.com/2uDdHB



    - -March's Free PS Plus Games: Shadow of the Colossus and Sonic Forces ... As a reminder, you've still got time to download this month's PS Plus games. ... You can even team up with friends on PC with full cross-play support through the Predator: Hunting Grounds trial. ... March 3, 2020 at 12:55 am PST. 1fdad05405
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Justice League (English) Movie Tamil Dubbed In 720p.md b/spaces/falterWliame/Face_Mask_Detection/Justice League (English) Movie Tamil Dubbed In 720p.md deleted file mode 100644 index 76a776b79a728a2e9288ee35887faf62b924bae1..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Justice League (English) Movie Tamil Dubbed In 720p.md +++ /dev/null @@ -1,82 +0,0 @@ -## Justice League (English) movie tamil dubbed in 720p - - - - - - ![Justice League (English) Movie Tamil Dubbed In 720p](https://in.bmscdn.com/events/moviecard/ET00344314.jpg) - - - - - -**Download File ->>->>->> [https://miimms.com/2tyiYO](https://miimms.com/2tyiYO)** - - - - - - - - - - - - Here is a possible title and article with HTML formatting for the keyword "Justice League (English) movie tamil dubbed in 720p": - -# Justice League: A Superhero Spectacle in Tamil - - - -Justice League is a 2017 American superhero film based on the DC Comics team of the same name. The film features Batman, Superman, Wonder Woman, Flash, Aquaman and Cyborg as they unite to save the world from the evil Steppenwolf and his army of Parademons. The film is directed by Zack Snyder, with additional scenes by Joss Whedon, and stars Ben Affleck, Henry Cavill, Gal Gadot, Ezra Miller, Jason Momoa and Ray Fisher. - - - -The film was released in English and several other languages worldwide, including Tamil. Tamil is a Dravidian language spoken by millions of people in India, Sri Lanka and other countries. Tamil cinema is one of the largest and most popular film industries in India, producing hundreds of films every year. Tamil dubbed films are also very popular among the Tamil audience, who enjoy watching Hollywood blockbusters in their native language. - - - -Justice League was dubbed in Tamil by a team of professional voice actors, who matched the tone and personality of the original actors. The Tamil dubbing also added some local flavor and humor to the dialogues, making them more appealing and relatable to the Tamil audience. The Tamil dubbed version of Justice League was released in theaters and online platforms along with the original version. The film received positive reviews from critics and fans alike, who praised the action sequences, visual effects, performances and soundtrack of the film. - - - -If you are a fan of superhero films and want to watch Justice League in Tamil, you can find it online in 720p quality. 720p is a high-definition video resolution that offers clear and crisp images on your screen. You can watch Justice League in Tamil dubbed in 720p on various online platforms such as YouTube, Netflix, Amazon Prime Video and others. You can also download the film from torrent sites or other sources, but be careful of viruses and malware that may harm your device. - - - -Justice League is a must-watch film for all superhero lovers, especially in Tamil. The film offers a thrilling and entertaining experience that will keep you hooked till the end. Watch Justice League in Tamil dubbed in 720p today and enjoy the superhero spectacle on your screen. - -Here is a possible continuation of the article with HTML formatting: - -## Justice League 2: The Unlikely Sequel to Zack Snyder's Vision - - - -While Justice League was originally planned as a two-part saga, the disappointing reception of the 2017 theatrical cut and the departure of Zack Snyder from the project put an end to those ambitions. 
However, thanks to the relentless campaign of fans and the launch of HBO Max, Snyder was given the opportunity to release his four-hour director's cut of Justice League in 2021, which restored his original vision and set up a potential sequel. - - - -Zack Snyder's Justice League ends with a cliffhanger that teases the arrival of Darkseid, the tyrannical ruler of Apokolips and the ultimate threat to the DC universe. The film also features a "Knightmare" sequence that shows a dystopian future where Darkseid has conquered Earth, Superman has turned evil, and Batman leads a resistance group that includes Cyborg, Flash, Mera, Deathstroke and Joker. The film suggests that this nightmare scenario can be prevented if Flash travels back in time and warns Bruce Wayne about Lois Lane's death, which triggers Superman's fall to the dark side. - - - -However, despite the positive response from critics and fans to Zack Snyder's Justice League, Warner Bros. has not shown any interest in greenlighting a sequel. The studio has stated that Snyder's cut is a "storytelling cul-de-sac" that does not fit with their current plans for the DC Extended Universe (DCEU), which include standalone films like The Batman, Black Adam and The Suicide Squad, as well as spin-offs like Peacemaker and The Trench. The studio has also expressed its desire to diversify its superhero slate and explore different tones and genres. - - - -Zack Snyder has acknowledged that Justice League 2 is unlikely to happen, but he has also revealed his plans for what it would have been like. According to Snyder, Justice League 2 would have followed the heroes as they travel to Apokolips to face Darkseid and his army, while also dealing with Lex Luthor's formation of the Legion of Doom on Earth. The film would have featured epic battles, sacrifices and betrayals, as well as the introduction of new characters like Green Lantern and Martian Manhunter. The film would have ended with Darkseid killing Lois Lane and Superman succumbing to the Anti-Life Equation, setting up Justice League 3. - - - -Justice League 3 would have been the final chapter of Snyder's trilogy, which would have focused on Batman's attempt to undo Darkseid's victory by using Flash's time travel abilities. The film would have shown the Knightmare timeline in more detail, as well as Batman's redemption arc and ultimate sacrifice to save Lois Lane and restore Superman's humanity. The film would have also featured a massive showdown between Darkseid and Superman, as well as the birth of Bruce Wayne and Lois Lane's son, who would become the new Batman in the future. - - - -While these plans sound ambitious and exciting for many fans, they also seem very unlikely to ever materialize on screen. Zack Snyder has moved on to other projects, such as Army of the Dead for Netflix, and Warner Bros. has shifted its focus to other DC properties and filmmakers. However, as Zack Snyder's Justice League has proven, nothing is impossible in the world of superheroes. Perhaps one day, fans will get to see Justice League 2 and Justice League 3 in some form or another. - - dfd1c89656 - - - - - diff --git a/spaces/falterWliame/Face_Mask_Detection/Oceans Eight Tamil Dubbed Movie Torrent.md b/spaces/falterWliame/Face_Mask_Detection/Oceans Eight Tamil Dubbed Movie Torrent.md deleted file mode 100644 index 1cef63deef2e6588e9a464ff0a48ab73d3f92281..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Oceans Eight Tamil Dubbed Movie Torrent.md +++ /dev/null @@ -1,12 +0,0 @@ -

    Ocean's Eight tamil dubbed movie torrent


    Download Zip ––– https://urlca.com/2uDcsY



    -
-4fefd39f24
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Password Pro100 5.20.txt.md b/spaces/falterWliame/Face_Mask_Detection/Password Pro100 5.20.txt.md deleted file mode 100644 index 71f78731da706a73f967947f5d5925e1d6fa3a51..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Password Pro100 5.20.txt.md +++ /dev/null @@ -1,10 +0,0 @@ -
    -

    2. A weak or default password is more likely to be guessed or guessable. Email, Password Recovery" etc. download Basic password recovery Pack 8MB. The private key can then be used to sign all new. Password: Wrong Password!. I believe that this is due to hash collisions, but this is a guess. help.link/mediawiki/index.html. Password: Wrong Password! This is an example of an "unprotected file system.

    -

    Password pro100 5.20.txt


    Downloadhttps://urlca.com/2uDcHq



    -

    download Advanced password recovery Pack 8MB. the key cn be used to. This tool detects some weak password that can be guessed. .hxuemhvuirmhdhujdhdwjbdbdbvwvwbkb.com. Password: Wrong Password!. Download.-. Download EaseFab Video Converter Pro Key Generator.. I could not find any damaging exploit. . another have a more secure password. - Password: Wrong Password! Attachments are not. with the tools above for password recovery. There are ways to recover the password even if you know it.

    -

    Password: Wrong Password!. or by brute force attack.Password. The public key will be uploaded to the server, and it will be auto-updated. .lhqsh.com. Password: Wrong Password!. . " (https://www.nethaxo.com/password-recovery-for-windows-1-0-and-1-1-2. Password: Wrong Password!. Password. " http://www. .

    -

    instead of having to remember endless combinations of username and password or having to enter credit card numbers and sensitive data. . There are ways to recover the password even if you know it.

    -

    5. Download Password Recovery for Windows 1., 0. you don't have to send a credit card and the payment page can't be brute forced. This would open up the possiblity of brute forcing the whole Internet. the public key will be uploaded to the server, and it will be auto-updated. Haxo Password Recovery. ., 9. Instead of having to remember endless combinations of username and password or having to enter credit card numbers and sensitive data. . Consider Password Recovery.

    -

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Attack on Titan - Fan Game The Best Way to Relive the Anime on Android.md b/spaces/fatiXbelha/sd/Attack on Titan - Fan Game The Best Way to Relive the Anime on Android.md deleted file mode 100644 index 6520e514fabd31397dbd0af1ac8348d30d384511..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Attack on Titan - Fan Game The Best Way to Relive the Anime on Android.md +++ /dev/null @@ -1,164 +0,0 @@ - -

    Attack on Titan Download for Android: How to Enjoy the Epic Anime on Your Phone

    -

    If you are a fan of anime, you have probably heard of Attack on Titan, one of the most popular and acclaimed anime series of all time. But did you know that you can download Attack on Titan for Android and watch it on your phone anytime, anywhere? In this article, we will show you how to do that, as well as give you some tips and tricks to enjoy this epic anime on your phone.

    -

    What is Attack on Titan?

    -

    Attack on Titan is a Japanese manga series written and illustrated by Hajime Isayama, which was adapted into an anime television series by Wit Studio and MAPPA. The story is set in a world where humanity lives inside cities surrounded by enormous walls that protect them from giant humanoid Titans, who devour humans on sight. The story follows Eren Yeager, who vows to exterminate the Titans after they bring about the destruction of his hometown and the death of his mother.

    -

    attack on titan download for android


    Download >>> https://urllie.com/2uNFRZ



    -

    A brief synopsis of the anime series

    -

    The anime series consists of four seasons, with the first three seasons covering the first 27 volumes of the manga, and the fourth season covering the remaining 7 volumes. The first season aired from April to September 2013, followed by a 12-episode second season from April to June 2017. The third season was split into two parts, with the first 12 episodes airing from July to October 2018, and the last 10 episodes airing from April to July 2019. The fourth and final season premiered in December 2020, airing 16 episodes in its first part. A second part consisting of 12 episodes aired from January to April 2022, and a third and final part will air in two halves; the first half premiered in March 2023, and the second half will premiere in late 2023.

    -

    The main features of the anime series

    -

    Attack on Titan is known for its dark and gritty tone, its complex and compelling plot, its stunning animation and sound design, its memorable characters and themes, and its thrilling action scenes. Some of the main features of the anime series are:

    -
      -
    • The use of 3D maneuver gear, a device that allows humans to move freely in the air using gas-powered grappling hooks, which is essential for fighting against the Titans.
    • -
    • The different types of Titans, such as the Colossal Titan, the Armored Titan, the Female Titan, and the Beast Titan, each with their own abilities and weaknesses.
    • -
    • The mystery behind the origin and purpose of the Titans, as well as the secrets hidden within the walls and beyond.
    • -
    • The moral dilemmas and conflicts faced by the characters, such as whether to fight or flee, whether to trust or betray, whether to kill or spare, and whether to seek freedom or peace.
    • -
    • The exploration of themes such as survival, humanity, freedom, oppression, revenge, loyalty, sacrifice, identity, and hope.
    • -
    -

    Why Download Attack on Titan for Android?

    -

    If you are a fan of Attack on Titan, or if you are curious about this anime series, you might want to download it for Android and watch it on your phone. There are several reasons why this is a good idea:

    -

The benefits of watching anime on your phone

    -

    Some of the benefits of watching anime on your phone are:

    -
      -
    • You can watch it anytime, anywhere, without being tied to a TV or a computer. You can watch it while commuting, traveling, waiting, or relaxing.
    • -
    • You can watch it offline, without worrying about internet connection or data usage. You can download the episodes beforehand and watch them later at your convenience.
    • -
    • You can watch it privately, without disturbing others or being disturbed by others. You can use headphones or earphones to enjoy the sound effects and music, and you can adjust the brightness and volume to suit your preference.
    • -
    • You can watch it comfortably, without straining your eyes or neck. You can hold your phone at a comfortable distance and angle, and you can pause, rewind, or skip the episodes as you wish.
    • -
    -

    The best apps and websites to download Attack on Titan for Android

    -

    There are many apps and websites that allow you to download Attack on Titan for Android, but not all of them are reliable, safe, or legal. Some of them may contain viruses, malware, or spyware that can harm your phone or steal your personal information. Some of them may have low-quality videos, incomplete episodes, or annoying ads. Some of them may violate the copyright laws and infringe on the rights of the creators and distributors of the anime series.

    -

    To avoid these problems, you should only use trusted and reputable apps and websites that offer high-quality videos, complete episodes, and no ads. Some of the best apps and websites to download Attack on Titan for Android are:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    App/WebsiteDescriptionProsCons
    CrunchyrollA popular streaming service that offers a large collection of anime, manga, and drama. It has the official license to stream Attack on Titan in various regions and languages.- High-quality videos with subtitles and dubbing options.
    - Complete episodes with fast updates.
    - No ads for premium users.
    - Offline viewing for premium users.
    - Compatible with various devices and platforms.
    - Requires subscription for premium features.
    - Not available in some countries or regions.
    - May have some bugs or glitches.
    FunimationA leading streaming service that specializes in anime and animation. It has the exclusive license to stream Attack on Titan in English-speaking countries.- High-quality videos with subtitles and dubbing options.
    - Complete episodes with fast updates.
    - No ads for premium users.
    - Offline viewing for premium users.
    - Compatible with various devices and platforms.
    - Requires subscription for premium features.
    - Not available in some countries or regions.
    - May have some bugs or glitches.
    AnimeLabA dedicated streaming service that offers a wide range of anime titles. It has the official license to stream Attack on Titan in Australia and New Zealand.- High-quality videos with subtitles and dubbing options.
    - Complete episodes with fast updates.
    - No ads for premium users.
    - Offline viewing for premium users.
    - Compatible with various devices and platforms.
    - Requires subscription for premium features.
    - Not available in some countries or regions.
    - May have some bugs or glitches.
    AnimeFreakA free streaming website that provides a huge library of anime shows and movies. It does not have the official license to stream Attack on Titan, but it hosts the videos from other sources.- Free to use with no registration required.
    - High-quality videos with subtitles and dubbing options.
    - Complete episodes with regular updates.
    - Compatible with various devices and platforms.
    - Contains ads that may be intrusive or inappropriate.
    - May not be legal or ethical to use.
    - May have some bugs or glitches.
    KissanimeA free streaming website that offers a vast selection of anime genres and categories. It does not have the official license to stream Attack on Titan, but it hosts the videos from other sources.- Free to use with no registration required.
    - High-quality videos with subtitles and dubbing options.
    - Complete episodes with regular updates.
    - Compatible with various devices and platforms.
    - Contains ads that may be intrusive or inappropriate.
    - May not be legal or ethical to use.
    - May have some bugs or glitches.
    -

    How to Download Attack on Titan for Android?

    -

Now that you know the best apps and websites to download Attack on Titan for Android, you might be wondering how to do it. Here are some simple steps to follow:

    -

    Attack on Titan: Assault APK latest version for Android
    -How to download Attack on Titan game on Android devices
    -Attack on Titan mobile game free download for Android
    -Best Attack on Titan apps for Android in 2023
    -Download Attack on Titan wallpapers for Android phone
    -Attack on Titan: Tactics - strategy game for Android
    -Attack on Titan fan game for Android - download now
    -Attack on Titan: The Final Season - watch online on Android
    -Attack on Titan manga reader app for Android
    -Attack on Titan: Wings of Freedom - action game for Android
    -Download Attack on Titan stickers for WhatsApp on Android
    -Attack on Titan: No Regrets - spin-off manga for Android
    -Attack on Titan live wallpaper for Android - customize your home screen
    -Attack on Titan keyboard theme for Android - type with style
    -Attack on Titan trivia quiz for Android - test your knowledge
    -Attack on Titan ringtone for Android - set your favorite sound
    -Attack on Titan cosplay guide for Android - get inspired by the characters
    -Attack on Titan VR experience for Android - immerse yourself in the world
    -Attack on Titan music player for Android - listen to the soundtrack
    -Attack on Titan soundboard for Android - play the iconic quotes
    -Attack on Titan emoji keyboard for Android - express yourself with the symbols
    -Attack on Titan mod apk for Android - unlock all features and items
    -Attack on Titan offline game for Android - play without internet connection
    -Attack on Titan wallpaper HD 4k for Android - enjoy the high quality images
    -Attack on Titan theme launcher for Android - personalize your device
    -Attack on Titan photo editor for Android - create your own fan art
    -Attack on Titan anime streaming app for Android - watch all episodes and movies
    -Attack on Titan coloring book for Android - relax and have fun
    -Attack on Titan alarm clock for Android - wake up with the Survey Corps
    -Attack on Titan role playing game for Android - create your own character and story
    -Attack on Titan video downloader for Android - save your favorite clips
    -Attack on Titan news and updates app for Android - stay informed about the latest developments
    -Attack on Titan wallpaper maker for Android - design your own background
    -Attack on Titan quiz game multiplayer for Android - challenge your friends and other fans
    -Attack on Titan sticker maker for Android - create your own stickers and share them

    -

    A step-by-step guide to download Attack on Titan for Android using an app

    -
      -
    1. Choose an app that suits your needs and preferences, such as Crunchyroll, Funimation, or AnimeLab. You can find them on the Google Play Store or their official websites.
    2. -
    3. Download and install the app on your phone. Make sure you have enough storage space and a stable internet connection.
    4. -
    5. Open the app and sign up for an account if you don't have one already. You may need to pay for a subscription to access the premium features, such as offline viewing.
    6. -
    7. Search for Attack on Titan in the app's library or browse through the categories. You can also use the filters and sorting options to narrow down your search.
    8. -
    9. Select the season and episode you want to watch. You can also choose the language and quality of the video.
    10. -
    11. Tap on the download icon or button to start downloading the episode. You can see the progress and status of the download in the app's menu or notification bar.
    12. -
    13. Once the download is complete, you can watch the episode offline by tapping on the play icon or button. You can also delete the episode after watching it to free up some space.
    14. -
    -

    A step-by-step guide to download Attack on Titan for Android using a website

    -
      -
    1. Choose a website that offers high-quality videos and complete episodes of Attack on Titan, such as AnimeFreak or Kissanime. You can find them on your web browser or search engine.
    2. -
    3. Go to the website and look for Attack on Titan in its library or search bar. You can also use the filters and sorting options to narrow down your search.
    4. -
    5. Select the season and episode you want to watch. You can also choose the language and quality of the video.
    6. -
    7. Tap on the download icon or button to start downloading the episode. You may need to wait for a few seconds or minutes before the download link appears.
    8. -
    9. Once the download link appears, tap on it and choose a location to save the file on your phone. Make sure you have enough storage space and a stable internet connection.
    10. -
    11. Once the download is complete, you can watch the episode offline by opening it with a video player app on your phone. You can also delete the file after watching it to free up some space.
    12. -
    -

    Tips and Tricks to Enjoy Attack on Titan on Your Phone

    -

    Downloading Attack on Titan for Android is not enough to enjoy this epic anime on your phone. You also need some tips and tricks to enhance your viewing experience and avoid any problems. Here are some of them:

    -

    How to optimize your phone settings for the best viewing experience

    -
      -
    • Make sure your phone is fully charged or plugged in before watching an episode, as downloading and playing videos can drain your battery quickly.
    • -
    • Turn off any notifications or alerts that may interrupt or distract you while watching an episode, such as calls, messages, emails, or social media updates.
    • -
    • Adjust your screen brightness and contrast to suit your eyesight and lighting conditions, as too bright or too dark screens can strain your eyes or affect your visibility.
    • -
    • Adjust your sound volume and quality to suit your hearing and environment, as too loud or too low sounds can damage your ears or affect your immersion.
    • -
    • Use headphones or earphones to enjoy the sound effects and music better, as well as to block out any background noise or interference.
    • -
    -

    How to avoid spoilers and stay updated with the latest episodes

    -
      -
    • Avoid browsing through social media, forums, blogs, or websites that may contain spoilers or discussions about Attack on Titan, especially if you are not caught up with the latest episodes.
    • -
    • Avoid clicking on any links, images, videos, or articles that may reveal spoilers or details about Attack on Titan, especially if they have misleading or vague titles or thumbnails.
    • -
    • Avoid talking to anyone who has watched ahead of you or who may spoil you intentionally or unintentionally about Attack on Titan, especially if they are not respectful of your preferences or boundaries.
    • -
    • Stay updated with the release dates and schedules of Attack on Titan episodes, as well as any news or announcements about the anime series, by following its official website, social media accounts, or streaming platforms.
    • -
    • Watch each episode as soon as possible after it is released, preferably within 24 hours, to avoid missing out on any important events or developments in Attack on Titan.
    • -
    -

    How to join the Attack on Titan fan community and share your thoughts

    -

    One of the best ways to enjoy Attack on Titan is to join the fan community and share your thoughts, opinions, theories, and emotions with other fans. You can also learn more about the anime series, discover new perspectives, and make new friends. Here are some ways to join the Attack on Titan fan community and share your thoughts:

    -
      -
    • Join online platforms that are dedicated to Attack on Titan, such as Reddit, Discord, Twitter, Facebook, Instagram, YouTube, or Tumblr. You can find various groups, channels, pages, accounts, or blogs that focus on Attack on Titan and interact with other fans.
    • -
    • Join offline events that are related to Attack on Titan, such as conventions, screenings, meetups, or cosplay. You can find local or international events that celebrate Attack on Titan and meet other fans in person.
    • -
    • Share your own content that is inspired by Attack on Titan, such as fan art, fan fiction, fan videos, fan podcasts, or fan games. You can showcase your creativity and passion for Attack on Titan and receive feedback and support from other fans.
    • -
    • Respect the rules and etiquette of the fan community and be polite and friendly to other fans. You can have different opinions and preferences, but you should not insult, harass, or spoil anyone. You should also respect the creators and distributors of Attack on Titan and avoid piracy or plagiarism.
    • -
    -

    Conclusion

    -

    Attack on Titan is an epic anime series that you can download for Android and watch on your phone. In this article, we have shown you what Attack on Titan is, why you should download it for Android, how to download it for Android using an app or a website, and how to enjoy it on your phone. We hope you have found this article helpful and informative. Now you can download Attack on Titan for Android and enjoy this amazing anime on your phone.

    -

    If you have any questions or comments about this article, feel free to leave them below. We would love to hear from you. And if you liked this article, please share it with your friends and family who might be interested in Attack on Titan. Thank you for reading!

    -

    FAQs

    -

    Here are some frequently asked questions about Attack on Titan download for Android:

    -
      -
    1. Q: Is Attack on Titan download for Android legal?
      A: It depends on the app or website you use to download it. If you use an app or website that has the official license to stream Attack on Titan in your region or country, such as Crunchyroll, Funimation, or AnimeLab, then it is legal. However, if you use an app or website that does not have the official license to stream Attack on Titan in your region or country, such as AnimeFreak or Kissanime, then it may not be legal. You should check the terms and conditions of the app or website before using it.
    2. -
    3. Q: Is Attack on Titan download for Android safe?
      A: It depends on the app or website you use to download it. If you use an app or website that is trusted and reputable, such as Crunchyroll, Funimation, or AnimeLab, then it is safe. However, if you use an app or website that is not trusted or reputable, such as AnimeFreak or Kissanime, then it may not be safe. You should check the reviews and ratings of the app or website before using it.
    4. -
    5. Q: Is Attack on Titan download for Android free?
      A: It depends on the app or website you use to download it. Some apps and websites offer free access to Attack on Titan episodes with ads or limited features, such as AnimeFreak or Kissanime. However, some apps and websites require a subscription fee to access Attack on Titan episodes without ads or with premium features, such as Crunchyroll, Funimation, or AnimeLab. You should compare the prices and benefits of the apps and websites before using them.
    6. -
    7. Q: How many episodes are there in Attack on Titan?
      A: There are currently 76 episodes in Attack on Titan anime series. The first season has 25 episodes; the second season has 12 episodes; the third season has 22 episodes; the fourth season has 16 episodes; and a third part of the fourth season will have 12 episodes.
    8. -
    9. Q: When will the final part of Attack on Titan anime series air?
      A: The final part of Attack on Titan anime series will air in two halves; the first half premiered in March 2023; and the second half will premiere in late 2023.
    10. -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/CARS24 Desktop App How It Can Save You Time and Money on Used Cars.md b/spaces/fatiXbelha/sd/CARS24 Desktop App How It Can Save You Time and Money on Used Cars.md deleted file mode 100644 index 298d62af59b562ca4cbbcb9f66496ec9bb9b0996..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/CARS24 Desktop App How It Can Save You Time and Money on Used Cars.md +++ /dev/null @@ -1,144 +0,0 @@ -
    -

    How to Download and Use Cars24 App on Your PC

    -

    If you are looking for a convenient and hassle-free way to buy or sell used cars online, you might want to check out the Cars24 app. This app allows you to browse through thousands of certified cars, get instant quotes, book test drives, apply for financing, and get free home delivery or pickup. But what if you want to use this app on your PC instead of your phone? In this article, we will show you how to download and use Cars24 app on your PC using two methods: an Android emulator or Windows Subsystem for Android.

    -

    What is Cars24 App and Why You Should Use It

    -

    Cars24 is an innovative platform that aims to revolutionize the used car trading industry in India. It offers a new experience that helps you buy or sell your used car online in a convenient, safe, and easy way. Here are some of the features and benefits of using Cars24 app:

    -

    cars24 app download for pc


    Download File 🗹 https://urllie.com/2uNwa2



    -

    Cars24 App Features and Benefits

    -
      -
    • You can choose from more than 1,500 well-known car brands on the app, with detailed information, photos, and videos.
    • -
    • You can use the virtual 360-degree view feature to see every car in detail as if you were walking around it yourself.
    • -
    • You can book a free test drive for any car you like, with no obligation to buy.
    • -
    • You can get an online quote for your car in minutes, by providing a few details about your car.
    • -
    • You can sell your car online in a single visit, with instant payment and free RC transfer.
    • -
    • You can apply for zero down payment, quick financing, with easy documentation and flexible EMIs.
    • -
    • You can get a free warranty for six months on every car you buy, with free after-sale service and support.
    • -
    • You can return any car you buy within seven days if you are not satisfied, with a full refund policy.
    • -
    • You can buy or sell your car online anytime, anywhere, with free home delivery or pickup at a Cars24 service center.
    • -
    -

    Cars24 App Requirements and Compatibility

    -

    The Cars24 app is compatible with Android devices running Android 5.0 or higher. You can download it from Google Play Store or from the official website. However, if you want to use it on your PC, you will need either an Android emulator or Windows Subsystem for Android. We will explain these methods in the following sections.

    -

    How to Install Cars24 App on Your PC Using an Android Emulator

    -

    An Android emulator is a software that simulates the Android environment on your PC and allows you to download and run Android apps from Google Play Store or other sources. One of the most popular and recommended Android emulators is Bluestacks, which supports Windows 7/8/10/11. Here are the steps to install Cars24 app on your PC using Bluestacks:

    -

    What is an Android Emulator and How It Works

    -

    An Android emulator is a software that creates a virtual machine on your PC that runs the Android operating system. This way, you can access the Google Play Store and other Android apps on your PC as if you were using an Android device. An Android emulator uses virtualization technology to emulate the hardware and software components of an Android device, such as CPU, RAM, storage, sensors, camera, etc.

    How to Download and Install Bluestacks on Your PC

    -

    To download and install Bluestacks on your PC, follow these steps:

    -
      -
    1. Go to the official website of Bluestacks and click on the "Download Bluestacks" button.
    2. -
    3. Wait for the download to finish and then run the installer file.
    4. -
    5. Follow the instructions on the screen to complete the installation process.
    6. -
    7. Launch Bluestacks and sign in with your Google account or create a new one.
    8. -
    9. Once you are logged in, you will see the Bluestacks home screen with various app icons.
    10. -
    -

    How to Download and Install Cars24 App on Bluestacks

    -

    To download and install Cars24 app on Bluestacks, follow these steps:

    -
      -
    1. On the Bluestacks home screen, click on the "Google Play Store" icon.
    2. -
    3. In the search bar, type "Cars24" and hit enter.
    4. -
    5. From the search results, click on the "Cars24 - Buy & Sell Used Cars Online" app by CARS24 SERVICES PRIVATE LIMITED.
    6. -
    7. Click on the "Install" button and wait for the app to download and install.
    8. -
    9. Once the installation is done, you will see the "Cars24" app icon on the Bluestacks home screen.
    10. -
    11. Click on the "Cars24" app icon to launch it and start using it on your PC.
    12. -
    -

    How to Install Cars24 App on Your PC Using Windows Subsystem for Android

    -

    If you have a Windows 11 PC, you can also use Windows Subsystem for Android (WSA) to run Android apps on your PC. WSA is a feature that allows you to install and run Android apps from the Microsoft Store or from the Amazon Appstore. Here are the steps to install Cars24 app on your PC using WSA:

    -

    cars24 desktop app for mac and pc
    -cars24 buy and sell used cars online
    -cars24 services private limited app
    -cars24 360° car viewing experience
    -cars24 online quote for your car
    -cars24 financial services app
    -cars24 seller protection policy app
    -cars24 mega refurbishment labs app
    -cars24 1 year warranty app
    -cars24 7 day returns app
    -cars24 zero down payment app
    -cars24 home inspection app
    -cars24 rc transfer app
    -cars24 hassle-free documentation app
    -cars24 instant payment app
    -cars24 sell from anywhere app
    -cars24 great price app
    -cars24 loan approval in seconds app
    -cars24 low interest rates app
    -cars24 100% digitised process app
    -cars24 maruti suzuki app
    -cars24 honda app
    -cars24 mahindra app
    -cars24 kia app
    -cars24 hyundai app
    -cars24 webcatalog app
    -cars24 google play app
    -cars24 gameloop app
    -cars24 android on pc app
    -cars24 second-hand customer cars app
    -cars24 easy finance app
    -cars24 verified used car dealers app
    -cars24 best price guarantee app
    -cars24 after-sales support app
    -cars24 quality checks app
    -cars24 refurbished with love app
    -cars24 net energy gain experiment app
    -cars24 holy grail fusion experiment app
    -cars24 mini sun experiment app
    -cars24 100 million°C experiment app
    -cars24 30 seconds experiment app
    -cars24 nuclear fusion reaction experiment app
    -cars24 korea superconducting tokamak advanced research experiment app
    -cars24 korea institute of fusion energy experiment app
    -cars24 new scientist article on experiment app
    -cars24 the sun article on experiment app
    -cars24 yahoo news article on experiment app
    -cars24 webcatalog spaces feature for apps
    -cars24 distraction-free windows feature for apps
    -cars24 multiple accounts feature for apps

    -

    What is Windows Subsystem for Android and How It Works

    -

    Windows Subsystem for Android is a feature that enables you to run Android apps natively on your Windows 11 PC. It uses virtualization technology to create a Linux-based environment that runs the Android operating system. This way, you can access Android apps from the Microsoft Store or from the Amazon Appstore on your PC as if you were using an Android device. WSA supports most of the Android features and capabilities, such as touch, audio, camera, sensors, etc.

    -

    How to Update Your Windows 11 and Microsoft Store

    -

    To use WSA, you need to have Windows 11 version 22000.194 or higher and Microsoft Store version 22110.1401.6.0 or higher. To update your Windows 11 and Microsoft Store, follow these steps:

    -
      -
    1. Go to Settings > Windows Update and click on "Check for updates". If there are any available updates, download and install them.
    2. -
    3. Go to Settings > Apps > Apps & features and click on "Microsoft Store". Then click on the three dots icon and select "Advanced options".
    4. -
    5. Scroll down and click on "Repair" or "Reset" if available. This will fix any issues with the Microsoft Store app.
    6. -
    7. Restart your PC and check if your Windows 11 and Microsoft Store are updated.
    8. -
    -

    How to Download and Install Amazon Appstore and Windows Subsystem for Android

    -

    To download and install Amazon Appstore and WSA, follow these steps:

    -
      -
      1. Go to the Microsoft Store app and search for "Amazon Appstore". Click on the "Get" button and wait for the app to download and install.
      2. Launch the Amazon Appstore app and sign in with your Amazon account or create a new one.
      3. Go back to the Microsoft Store app and search for "Windows Subsystem for Android". Click on the "Get" button and wait for the feature to download and install.
      4. Restart your PC and check that WSA is enabled on your PC (a command-line alternative is sketched right after this list).
      
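
      If you prefer the command line, newer Windows 11 builds include the winget package manager, which can install Store apps from the msstore source. This is only a sketch under a few assumptions: winget is present on your build, you are signed in to the Microsoft Store, and 9P3395VX91NR is still the Store product ID for the Amazon Appstore (check the Store listing if the install fails). Installing the Amazon Appstore normally pulls in Windows Subsystem for Android as a dependency.

      ```sh
      # Install the Amazon Appstore (and, with it, Windows Subsystem for Android) from the Store source.
      winget install 9P3395VX91NR --source msstore --accept-package-agreements --accept-source-agreements
      ```
      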
    -

    How to Download and Install Cars24 App from Amazon Appstore

    -

    To download and install Cars24 app from Amazon Appstore, follow these steps:

    -
      -
      1. Launch the Amazon Appstore app and search for "Cars24". Click on the "Cars24 - Buy & Sell Used Cars Online" app by CARS24 SERVICES PRIVATE LIMITED.
      2. Click on the "Get" button and wait for the app to download and install.
      3. Once the installation is done, you will see the "Cars24" app icon on your desktop or in your Start menu.
      4. Click on the "Cars24" app icon to launch it and start using it on your PC (a way to verify the install from the command line is sketched after this list).
      
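
      If you would like to confirm the installation without clicking around, you can query WSA's package manager over adb, reusing the connection shown earlier in this article. Run this from a Unix-style shell (on plain cmd.exe, replace grep with findstr); the filter string is just a guess at how the package might be named, not an official identifier.

      ```sh
      # List the packages installed inside WSA and look for the Cars24 app.
      adb connect 127.0.0.1:58526
      adb shell pm list packages | grep -i cars24
      ```
      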
    -

    Conclusion

    -

    In this article, we have shown you how to download and use Cars24 app on your PC using two methods: an Android emulator or Windows Subsystem for Android. Both methods have their advantages and disadvantages, so you can choose the one that suits your needs and preferences. With Cars24 app, you can buy or sell your used car online in a convenient, safe, and easy way. You can also enjoy various features and benefits that make your car trading experience more enjoyable and rewarding. So, what are you waiting for? Download Cars24 app today and get started!

    -

    FAQs

    -

    Q: Is Cars24 app free to download and use?

    -

    A: Yes, Cars24 app is free to download and use. However, you may need to pay some fees or charges when you buy or sell your car through the app, such as registration fee, service fee, delivery fee, etc.

    -

    Q: Is Cars24 app safe and secure?

    -

    A: Yes, Cars24 app is safe and secure. It uses encryption and authentication technologies to protect your personal and financial information. It also verifies the identity and background of the buyers and sellers to ensure a fair and transparent deal.

    -

    Q: How can I contact Cars24 customer support?

    -

    A: You can contact Cars24 customer support by calling their toll-free number 1800 258 5656 or by emailing them at care@cars24.com. You can also visit their website or app and click on the "Help" or "Contact Us" option.

    -

    Q: How can I update or uninstall Cars24 app on my PC?

    -

    A: To update or uninstall Cars24 app on your PC, follow these steps:

    -
      -
      • If you are using Bluestacks, go to the Bluestacks home screen and click on the "My Apps" tab. Then click on the "Cars24" app icon and select "Update" or "Uninstall" from the menu.
      • If you are using WSA, go to the Start menu and click on the "Settings" icon. Then go to "Apps" > "Apps & features" and find the "Cars24" app in the list. Click on it and select "Modify" or "Uninstall" from the menu.
      
    -

    Q: What are some alternatives to Cars24 app?

    -

    A: Some alternatives to Cars24 app are:

    -
      -
      • CarDekho - A platform that offers new and used cars, car loans, insurance, reviews, news, etc.
      • CarTrade - A platform that offers new and used cars, car valuation, finance, insurance, auctions, etc.
      • Droom - A platform that offers new and used cars, bikes, scooters, planes, boats, etc.
      

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Enjoy 50 Classic Solitaire Games for Your Mac - Download Now.md b/spaces/fatiXbelha/sd/Enjoy 50 Classic Solitaire Games for Your Mac - Download Now.md deleted file mode 100644 index 5f0ce88aafaf934b8a0b1f790ff79a160a838277..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy 50 Classic Solitaire Games for Your Mac - Download Now.md +++ /dev/null @@ -1,107 +0,0 @@ -
    -

    How to Download Solitaire for Mac

    -

    Solitaire is one of the most played video games of all time. It is a card game that can be enjoyed by anyone, regardless of age or skill level. Solitaire is also a great way to relax, have fun, and exercise your brain.

    -

    download solitaire mac


    Download File ✺✺✺ https://urllie.com/2uNAfa



    -

    If you are a Mac user, you might be wondering how to download solitaire for your device. There are many options available, depending on your preferences and needs. In this article, we will show you how to download solitaire from the Mac App Store, as well as from other sources. We will also review some of the best solitaire games for Mac that you can try today.

    -

    Ready to play some solitaire on your Mac? Let's get started!

    -

    How to Download Solitaire from the Mac App Store

    -

    The easiest and safest way to download solitaire for your Mac is from the Mac App Store. The Mac App Store is a digital distribution platform that allows you to browse, buy, and download apps for your Mac. You can access the Mac App Store from your Dock, Launchpad, or Finder.

    -

    To download solitaire from the Mac App Store, follow these steps:

    -

    download full deck solitaire for mac
    -download klondike solitaire for mac
    -download classic solitaire for mac
    -download free solitaire games for mac
    -download spider solitaire for mac
    -download solitaire plus for mac
    -download solitaire city for mac
    -download solitaire greatest hits for mac
    -download solsuite solitaire for mac
    -download pretty good solitaire for mac
    -how to download solitaire on macbook air
    -how to download solitaire on macbook pro
    -how to download microsoft solitaire on mac
    -how to download windows solitaire on mac
    -how to download mahjong solitaire on mac
    -best solitaire app for mac free download
    -best solitaire game for mac free download
    -best offline solitaire for mac free download
    -best spider solitaire for mac free download
    -best klondike solitaire for mac free download
    -where can i download solitaire for mac
    -where can i download free solitaire for mac
    -where can i download spider solitaire for mac
    -where can i download microsoft solitaire for mac
    -where can i download windows solitaire for mac
    -download and install solitaire on mac
    -download and play solitaire on mac
    -download and enjoy solitaire on mac
    -download and update solitaire on mac
    -download and review solitaire on mac
    -easy to download solitaire for mac
    -easy to play solitaire for mac free download
    -easy to learn solitaire for mac free download
    -easy to win solitaire for mac free download
    -easy to use solitaire for mac free download
    -fast and fun solitaire for mac free download
    -fast and simple solitaire for mac free download
    -fast and smooth solitaire for mac free download
    -fast and secure solitaire for mac free download
    -fast and reliable solitaire for mac free download
    -beautiful and addictive solitaire for mac free download
    -beautiful and challenging solitaire for mac free download
    -beautiful and relaxing solitaire for mac free download
    -beautiful and customizable solitaire for mac free download
    -beautiful and elegant solitaire for mac free download
    -top rated solitaire app for mac free download
    -top rated solitaire game for mac free download
    -top rated spider solitaire for mac free download
    -top rated klondike solitaire for mac free download
    -top rated classic solitaire for mac free download

    -
      -
      1. Open the Mac App Store on your device.
      2. In the search box, type "solitaire" and hit enter.
      3. You will see a list of solitaire apps that are compatible with your device. You can filter the results by category, price, rating, or popularity.
      4. Choose the solitaire app that you want to download and click on its icon.
      5. You will see a page with more information about the app, such as its description, screenshots, reviews, and ratings. You can also see if the app is free or paid, and if it offers in-app purchases.
      6. If you want to download the app, click on the "Get" button if it is free, or the price button if it is paid. You might need to enter your Apple ID and password to confirm your purchase.
      7. The app will start downloading and installing on your device. You can see the progress in the Launchpad or in the Dock.
      8. Once the app is installed, you can open it from the Launchpad or the Dock and start playing solitaire on your Mac (a Terminal-based alternative is sketched after this list).
      
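
      If you are comfortable in the Terminal, the third-party mas-cli tool can script Mac App Store installs. This is an optional sketch, not an Apple-provided workflow: it assumes you have Homebrew installed, and the numeric ID you pass to mas install must come from the search output for the app you actually pick (the number below is a placeholder).

      ```sh
      # Install the third-party Mac App Store command-line client.
      brew install mas

      # Look up the numeric App Store ID of a solitaire app, then install it by ID.
      mas search solitaire
      mas install 123456789   # replace with an ID printed by the search above
      ```
      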
    -

    There are many solitaire apps for Mac that you can download from the Mac App Store. Here are some of the best ones that we recommend:

    -

    Solitaire! (Klondike)

    -

    Solitaire! (Klondike) is a free version of the classic Klondike game, which most people just call "solitaire". It features options for one- or three-card draws from the stock, unlimited recycle of the stock, smart-dragging, one-click moves, autoplay, custom card backs and backgrounds, undo/redo, statistics, and game save/restore. It is a simple and elegant solitaire game that you can enjoy on your Mac.

      -

      Full Deck Solitaire

      -
      

    Full Deck Solitaire is a free solitaire app that offers 22 different solitaire games, such as Klondike, Spider, FreeCell, Pyramid, Tri Peaks, Golf, and more. It has beautiful graphics, animations, sound effects, and music. It also has features like hints, undo/redo, auto-complete, statistics, leaderboards, and achievements. You can customize the card backs, backgrounds, and card faces. You can also choose from different difficulty levels and game modes. Full Deck Solitaire is a fun and challenging solitaire app that will keep you entertained for hours.

    -

    Microsoft Solitaire Collection

    -

    Microsoft Solitaire Collection is a free solitaire app that brings the classic Windows solitaire games to your Mac. It includes five solitaire games: Klondike, Spider, FreeCell, Pyramid, and TriPeaks. It also has daily challenges, events, themes, achievements, and cloud sync. You can play online or offline, and adjust the settings and preferences to your liking. Microsoft Solitaire Collection is a nostalgic and addictive solitaire app that will make you feel like you are playing on a Windows PC.

    -

    How to Download Solitaire from Other Sources

    -

    If you don't want to download solitaire from the Mac App Store, you can also download it from other sources. However, you need to be careful when downloading solitaire from other sources, as some of them might contain malware or viruses that can harm your device. You should always check the reputation and reviews of the source before downloading anything from it. You should also scan the downloaded file with an antivirus software before opening it.
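
      Besides an antivirus scan, macOS has a couple of built-in checks you can run from the Terminal on anything you download outside the App Store. The sketch below is illustrative only: the file and app names are placeholders, and the checksum comparison is only useful if the download site actually publishes a SHA-256 hash.

      ```sh
      # Compare the download's checksum with the one published by the site, if available.
      shasum -a 256 ~/Downloads/solitaire-installer.dmg

      # After copying the app to /Applications, verify its code signature and Gatekeeper assessment.
      codesign --verify --deep --verbose=2 "/Applications/Some Solitaire.app"
      spctl --assess --type execute --verbose "/Applications/Some Solitaire.app"
      ```
      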

    -

      Another option is to play solitaire online in your browser without downloading anything. Many websites offer free solitaire games that you can access from your Mac. Here are some of the best ones that we recommend:
      

    -

    World of Solitaire

    -

    World of Solitaire is a website that offers over 100 solitaire games for free. You can play classic solitaire games like Klondike, Spider, FreeCell, Pyramid, Golf, and more. You can also play unique solitaire games like Scorpion, Yukon, Russian Solitaire, and more. You can customize the card backs, backgrounds, card faces, animations, sounds, and options. You can also track your statistics, scores, and time. World of Solitaire is a comprehensive and user-friendly website that will satisfy your solitaire cravings.

    -

    Solitr.com

    -

    Solitr.com is a website that offers two solitaire games for free: Klondike and Spider. You can choose from one- or three-card draws for Klondike, and one-, two-, or four-suit modes for Spider. You can also change the theme and the card size. The website has a simple and clean design that allows you to focus on the game. You can also undo/redo moves, see your score and time, and restart the game. Solitr.com is a fast and easy website that will let you play solitaire in seconds.

    -

    247 Solitaire

    -

    247 Solitaire is a website that offers 12 solitaire games for free. You can play popular solitaire games like Klondike, Spider, FreeCell, Pyramid, Golf, and more. You can also play less common solitaire games like Wasp, Scorpion, Yukon, and more. You can customize the card backs, backgrounds, and card faces. You can also see your statistics, scores, and time. 247 Solitaire is a colorful and fun website that will give you plenty of solitaire options.

    -

    Conclusion

    -

    Solitaire is a classic and enjoyable card game that you can play on your Mac. You can download solitaire from the Mac App Store or from other sources, depending on your preferences and needs. You can also play solitaire online on your browser without downloading anything. There are many solitaire games for Mac that you can choose from, such as Klondike, Spider, FreeCell, Pyramid, Golf, and more. Solitaire is a great way to relax, have fun, and exercise your brain.

    -

    So what are you waiting for? Download or play solitaire on your Mac today and see how much you love it!

    -

    FAQs

    -

    Is solitaire free for Mac?

    -

    Yes, there are many solitaire apps and websites that are free for Mac. However, some of them might have ads or offer in-app purchases for extra features or content. You can also find paid solitaire apps for Mac that might have more options and quality.

    -

    How do I uninstall solitaire from my Mac?

    -

    If you want to uninstall solitaire from your Mac, you can follow these steps:

    -
      -
      1. Open the Finder on your device.
      2. Go to the Applications folder and locate the solitaire app that you want to uninstall.
      3. Drag the app icon to the Trash or right-click on it and choose Move to Trash.
      4. Empty the Trash to complete the uninstallation (a Terminal equivalent is sketched after this list).
      
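
      The same removal can be done from the Terminal if you prefer. Treat this as a sketch: the app name below is a placeholder, rm -rf cannot be undone, and some games also leave preference or support files behind in ~/Library that you can search for by name.

      ```sh
      # Remove the app bundle (adjust the name to the solitaire app you installed).
      rm -rf "/Applications/Full Deck Solitaire.app"

      # Optionally look for leftover preferences and support files.
      ls ~/Library/Preferences | grep -i solitaire
      ls ~/Library/Application\ Support | grep -i solitaire
      ```
      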
    -

    How do I play solitaire offline on my Mac?

    -

    If you want to play solitaire offline on your Mac, you need to download a solitaire app that does not require an internet connection. You can find such apps on the Mac App Store or from other sources. Once you download the app, you can open it and play solitaire offline on your Mac.

    -

    How do I change the settings and preferences of solitaire on my Mac?

    -

    If you want to change the settings and preferences of solitaire on your Mac, you need to open the solitaire app that you are using and look for the settings or options menu. There you can change things like the card backs, backgrounds, card faces, sounds, animations, difficulty levels, game modes, hints, undo/redo, auto-complete, statistics, and more.

    -

    How do I improve my solitaire skills on my Mac?

    -

    If you want to improve your solitaire skills on your Mac, you need to practice regularly and learn from your mistakes. You can also try different solitaire games and modes to challenge yourself and learn new strategies. You can also read tips and tricks online or watch tutorials and videos from other players.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/docs/README.md.German.md b/spaces/fb700/chatglm-fitness-RLHF/docs/README.md.German.md deleted file mode 100644 index 0fe200cf690b6c9ff699e2e19bb53fd3cd60c201..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/docs/README.md.German.md +++ /dev/null @@ -1,307 +0,0 @@ -> **Hinweis** -> -> Bei der Installation von Abhängigkeiten sollten nur die in **requirements.txt** **angegebenen Versionen** streng ausgewählt werden. -> -> `pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/` - -# GPT Akademisch optimiert (GPT Academic) - -**Wenn Ihnen dieses Projekt gefällt, geben Sie ihm bitte einen Stern; wenn Sie bessere Tastenkombinationen oder Funktions-Plugins entwickelt haben, können Sie gerne einen Pull Request eröffnen.** - -Wenn Sie dieses Projekt mögen, geben Sie ihm bitte einen Stern. Wenn Sie weitere nützliche wissenschaftliche Abkürzungen oder funktionale Plugins entwickelt haben, können Sie gerne ein Problem oder eine Pull-Anforderung öffnen. Wir haben auch ein README in [Englisch|](docs/README_EN.md)[日本語|](docs/README_JP.md)[한국어|](https://github.com/mldljyh/ko_gpt_academic)[Русский|](docs/README_RS.md)[Français](docs/README_FR.md), das von diesem Projekt selbst übersetzt wurde. -Um dieses Projekt in eine beliebige Sprache mit GPT zu übersetzen, lesen Sie `multi_language.py` (experimentell). - -> **Hinweis** -> -> 1. Beachten Sie bitte, dass nur Funktionserweiterungen (Schaltflächen) mit **roter Farbe** Dateien lesen können und einige Erweiterungen im **Dropdown-Menü** des Erweiterungsbereichs zu finden sind. Außerdem begrüßen wir jede neue Funktionserweiterung mit **höchster Priorität** und bearbeiten sie. -> -> 2. Die Funktionalität jeder Datei in diesem Projekt wird in der Selbstanalyse [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) detailliert beschrieben. Mit der Weiterentwicklung der Versionen können Sie jederzeit die zugehörigen Funktions-Erweiterungen aufrufen, um durch Aufruf von GPT einen Selbstanalysebericht des Projekts zu erstellen. Häufig gestellte Fragen finden Sie in der [`Wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98). [Installationsanweisungen](#Installation). -> -> 3. Dieses Projekt ist kompatibel und fördert die Verwendung von inländischen Sprachmodellen wie ChatGLM und RWKV, Pangu, etc. Es unterstützt das Vorhandensein mehrerer api-keys, die in der Konfigurationsdatei wie folgt angegeben werden können: `API_KEY="openai-key1,openai-key2,api2d-key3"`. Wenn ein `API_KEY` temporär geändert werden muss, geben Sie den temporären `API_KEY` im Eingabebereich ein und drücken Sie dann die Eingabetaste, um ihn zu übernehmen.Funktion | Beschreibung ---- | --- -Ein-Klick-Polieren | Unterstützt ein-Klick-Polieren und ein-Klick-Suche nach grammatikalischen Fehlern in wissenschaftlichen Arbeiten -Ein-Klick Chinesisch-Englisch Übersetzung | Ein-Klick Chinesisch-Englisch Übersetzung -Ein-Klick-Code-Erklärung | Zeigt Code, erklärt Code, erzeugt Code und fügt Kommentare zum Code hinzu -[Benutzerdefinierte Tastenkombinationen](https://www.bilibili.com/video/BV14s4y1E7jN) | Unterstützt benutzerdefinierte Tastenkombinationen -Modulare Gestaltung | Unterstützt leistungsstarke individuelle [Funktions-Plugins](https://github.com/binary-husky/chatgpt_academic/tree/master/crazy_functions). 
Plugins unterstützen [Hot-Updates](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97) -[Selbstprogramm-Analyse](https://www.bilibili.com/video/BV1cj411A7VW) | [Funktions-Plugin] [Ein-Klick Verstehen](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) der Quellcode dieses Projekts -[Programmanalyse](https://www.bilibili.com/video/BV1cj411A7VW) | [Funktions-Plugin] Ein-Klick-Analyse des Projektbaums anderer Python/C/C++/Java/Lua/...-Projekte -Lesen von Papieren, [Übersetzen](https://www.bilibili.com/video/BV1KT411x7Wn) von Papieren | [Funktions-Plugin] Ein-Klick Erklärung des gesamten LaTeX/PDF-Artikels und Erstellung einer Zusammenfassung -LaTeX-Volltext-Übersetzung und [Polieren](https://www.bilibili.com/video/BV1FT411H7c5/) | [Funktions-Plugin] Ein-Klick-Übersetzung oder-Polieren des LaTeX-Artikels -Bulk-Kommentargenerierung | [Funktions-Plugin] Ein-Klick Massenerstellung von Funktionskommentaren -Markdown [Chinesisch-Englisch Übersetzung](https://www.bilibili.com/video/BV1yo4y157jV/) | [Funktions-Plugin] Haben Sie die [README](https://github.com/binary-husky/chatgpt_academic/blob/master/docs/README_EN.md) in den oben genannten 5 Sprachen gesehen? -Analyse-Berichtserstellung von chat | [Funktions-Plugin] Automatische Zusammenfassung nach der Ausführung -[Funktion zur vollständigen Übersetzung von PDF-Artikeln](https://www.bilibili.com/video/BV1KT411x7Wn) | [Funktions-Plugin] Extrahiert Titel und Zusammenfassung der PDF-Artikel und übersetzt den gesamten Text (mehrere Threads) -[Arxiv-Assistent](https://www.bilibili.com/video/BV1LM4y1279X) | [Funktions-Plugin] Geben Sie die Arxiv-Artikel-URL ein und klicken Sie auf Eine-Klick-Übersetzung-Zusammenfassung + PDF-Download -[Google Scholar Integrations-Assistent](https://www.bilibili.com/video/BV19L411U7ia) | [Funktions-Plugin] Geben Sie eine beliebige Google Scholar Such-URL ein und lassen Sie gpt Ihnen bei der Erstellung von [relatedworks](https://www.bilibili.com/video/BV1GP411U7Az/) helfen -Internet-Informationen Aggregation + GPT | [Funktions-Plugin] Lassen Sie GPT eine Frage beantworten, indem es [zuerst Informationen aus dem Internet](https://www.bilibili.com/video/BV1om4y127ck/) sammelt und so die Informationen nie veralten -Anzeige von Formeln / Bildern / Tabellen | Zeigt Formeln in beiden Formen, [TeX-Format und gerendeter Form](https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png), unterstützt Formeln und Code-Highlights -Unterstützung von PlugIns mit mehreren Threads | Unterstützt den Aufruf mehrerer Threads in Chatgpt, um Text oder Programme [Batch zu verarbeiten](https://www.bilibili.com/video/BV1FT411H7c5/) -Starten Sie das dunkle Gradio-[Thema](https://github.com/binary-husky/chatgpt_academic/issues/173) | Fügen Sie ```/?__theme=dark``` an das Ende der Browser-URL an, um das dunkle Thema zu aktivieren -[Unterstützung für mehrere LLM-Modelle](https://www.bilibili.com/video/BV1wT411p7yf), [API2D](https://api2d.com/) Interface-Unterstützung | Das Gefühl, gleichzeitig von GPT3.5, GPT4, [Tshinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B), [Fudan MOSS](https://github.com/OpenLMLab/MOSS) bedient zu werden, muss toll sein, oder? 
-Zugriff auf weitere LLM-Modelle, Unterstützung von [huggingface deployment](https://huggingface.co/spaces/qingxu98/gpt-academic) | Hinzufügen der Newbing-Schnittstelle (neues Bing), Einführung der Unterstützung von [Jittorllms](https://github.com/Jittor/JittorLLMs) der Tsinghua-Universität, [LLaMA](https://github.com/facebookresearch/llama), [RWKV](https://github.com/BlinkDL/ChatRWKV) und [Pangu alpha](https://openi.org.cn/pangu/) -Weitere neue Funktionen (wie Bildgenerierung) …… | Siehe Ende dieses Dokuments …… - -- Neue Oberfläche (Ändern Sie die LAYOUT-Option in `config.py`, um zwischen "Seitenlayout" und "Oben-unten-Layout" zu wechseln) -
    - -
    - All buttons are dynamically generated by reading `functional.py`, and custom functions can be easily added, freeing up the clipboard. -
    - -
    - -- Proofreading/Correcting -
    - -
    - -- If the output contains formulas, they will be displayed in both tex format and rendered format for easy copying and reading. -
    - -
    - -- Don't feel like reading the project code? Show off the entire project to chatgpt. -
    - -
    - -- Multiple large language models are mixed and called together (ChatGLM + OpenAI-GPT3.5 + [API2D](https://api2d.com/)-GPT4). -
    - -
    - ---- -# Installation -## Installation-Method 1: Run directly (Windows, Linux or MacOS) - -1. Download the project -```sh -git clone https://github.com/binary-husky/chatgpt_academic.git -cd chatgpt_academic -``` - -2. Configure API_KEY - -Configure API KEY and other settings in `config.py`. [Special Network Environment Settings](https://github.com/binary-husky/gpt_academic/issues/1). - -(P.S. When the program is running, it will first check whether there is a "config_private.py" private configuration file, and use the configuration defined in it to override the configuration of "config.py". Therefore, if you understand our configuration reading logic, we strongly recommend that you create a new configuration file named "config_private.py" next to "config.py" and transfer (copy) the configurations in "config.py" to "config_private.py". "config_private.py" is not controlled by git, which can make your privacy information more secure. P.S. The project also supports configuring most options through `environment variables`, and the writing format of environment variables refers to the `docker-compose` file. Reading priority: `environment variable` > `config_private.py` >`config.py`) - - -3. Install dependencies -```sh -# (Option I: If familar with Python) (Python version 3.9 or above, the newer the better), Note: Use the official pip source or Ali pip source, temporary switching method: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ -python -m pip install -r requirements.txt - -# (Option II: If not familiar with Python) Use anaconda with similar steps (https://www.bilibili.com/video/BV1rc411W7Dr): -conda create -n gptac_venv python=3.11 # Create an anaconda environment -conda activate gptac_venv # Activate the anaconda environment -python -m pip install -r requirements.txt # Same step as pip installation -``` - -
    Click to expand if supporting Tsinghua ChatGLM/Fudan MOSS as backend -

    - -[Optional Step] If supporting Tsinghua ChatGLM/Fudan MOSS as backend, additional dependencies need to be installed (Prerequisites: Familiar with Python + Used Pytorch + Sufficient computer configuration): -```sh -# [Optional Step I] Support Tsinghua ChatGLM. Remark: If encountering "Call ChatGLM fail Cannot load ChatGLM parameters", please refer to the following: 1: The above default installation is torch+cpu version. To use cuda, uninstall torch and reinstall torch+cuda; 2: If the model cannot be loaded due to insufficient machine configuration, you can modify the model precision in `request_llm/bridge_chatglm.py`, and modify all AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True) -python -m pip install -r request_llm/requirements_chatglm.txt - -# [Optional Step II] Support Fudan MOSS -python -m pip install -r request_llm/requirements_moss.txt -git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss # When executing this line of code, you must be in the project root path - -# [Optional Step III] Make sure the AVAIL_LLM_MODELS in the config.py configuration file contains the expected models. Currently supported models are as follows (jittorllms series currently only supports docker solutions): -AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"] -``` - -

    -
    - - - -4. Run -```sh -python main.py -```5. Testing Function Plugin -``` -- Test function plugin template function (requires gpt to answer what happened today in history), you can use this function as a template to implement more complex functions - Click "[Function Plugin Template Demo] Today in History" -``` - -## Installation-Method 2: Using Docker - -1. Only ChatGPT (Recommended for most people) - -``` sh -git clone https://github.com/binary-husky/chatgpt_academic.git # Download the project -cd chatgpt_academic # Enter the path -nano config.py # Edit config.py with any text editor, Configure "Proxy","API_KEY"and"WEB_PORT" (e.g 50923) etc. -docker build -t gpt-academic . # Install - -# (Last step-option 1) Under Linux environment, use `--net=host` is more convenient and quick -docker run --rm -it --net=host gpt-academic -# (Last step-option 2) Under macOS/windows environment, can only use the -p option to expose the container's port(eg.50923) to the port on the host. -docker run --rm -it -e WEB_PORT=50923 -p 50923:50923 gpt-academic -``` - -2. ChatGPT + ChatGLM + MOSS (Requires familiarity with Docker) - -``` sh -# Modify docker-compose.yml, delete solution 1 and solution 3, and retain solution 2. Modify the configuration of solution 2 in docker-compose.yml, referring to the comments in it. -docker-compose up -``` - -3. ChatGPT+LLAMA+Pangu+RWKV(Requires familiarity with Docker) -``` sh -# Modify docker-compose.yml, delete solution 1 and solution 2, and retain solution 3. Modify the configuration of solution 3 in docker-compose.yml, referring to the comments in it. -docker-compose up -``` - - -## Installation-Method 3: Other Deployment Options - -1. How to use reverse proxy URL/Microsoft Azure API -Configure API_URL_REDIRECT according to the instructions in `config.py`. - -2. Remote cloud server deployment (requires cloud server knowledge and experience) -Please visit [Deployment wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97) - -3. Using WSL 2 (Windows subsystem for Linux) -Please visit [Deployment wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2) - -4. How to run at a secondary URL (such as `http://localhost/subpath`) -Please visit [FastAPI operating instructions](docs/WithFastapi.md) - -5. Use docker-compose to run -Please read docker-compose.yml and follow the prompts to operate. - ---- -# Advanced Usage -## Customize new convenience buttons / custom function plugins. - -1. Customize new convenience buttons (Academic Shortcut Keys) -Open `core_functional.py` with any text editor, add an entry as follows, and then restart the program. (If the button has been added successfully and is visible, then the prefix and suffix can be hot-modified, and it will take effect without restarting the program.) -For example -``` -"Super English to Chinese": { - # Prefix, will be added before your input. For example, used to describe your requirements, such as translation, explaining code, polishing, etc. - "Prefix": "Please translate the following content into Chinese, and then use a markdown table to explain the proper nouns that appear in the text one by one:\n\n", - - # Suffix, will be added after your input. For example, combined with prefix, you can enclose your input content in quotes. - "Suffix": "", -}, -``` -
    - -
    - -2. Custom function plugins - -Write powerful function plugins to perform any task you want and can't think of. -The difficulty of plugin writing and debugging is very low in this project. As long as you have a certain knowledge of Python, you can implement your own plugin functions by imitating the template we provided. -For more information, please refer to the [Function Plugin Guide](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97). - ---- -# Latest Update -## New feature dynamics1. Funktion zur Speicherung von Dialogen. Rufen Sie im Bereich der Funktions-Plugins "Aktuellen Dialog speichern" auf, um den aktuellen Dialog als lesbares und wiederherstellbares HTML-Datei zu speichern. Darüber hinaus können Sie im Funktions-Plugin-Bereich (Dropdown-Menü) "Laden von Dialogverlauf" aufrufen, um den vorherigen Dialog wiederherzustellen. Tipp: Wenn Sie keine Datei angeben und stattdessen direkt auf "Laden des Dialogverlaufs" klicken, können Sie das HTML-Cache-Archiv anzeigen. Durch Klicken auf "Löschen aller lokalen Dialogverlaufsdatensätze" können alle HTML-Archiv-Caches gelöscht werden. -
    - -
    - -2. Berichterstellung. Die meisten Plugins generieren nach Abschluss der Ausführung einen Arbeitsbericht. -
    - - - -
    - -3. Modularisierte Funktionsgestaltung, einfache Schnittstellen mit leistungsstarken Funktionen. -
    - - -
    - -4. Dies ist ein Open-Source-Projekt, das sich "selbst übersetzen" kann. -
    - -
    - -5. Die Übersetzung anderer Open-Source-Projekte ist kein Problem. -
    - -
    - -
    - -
    - -6. Dekorieren Sie [`live2d`](https://github.com/fghrsh/live2d_demo) mit kleinen Funktionen (standardmäßig deaktiviert, Änderungen an `config.py` erforderlich). -
    - -
    - -7. Neue MOSS-Sprachmodellunterstützung. -
    - -
    - -8. OpenAI-Bildgenerierung. -
    - -
    - -9. OpenAI-Audio-Analyse und Zusammenfassung. -
    - -
    - -10. Latex-Proofreading des gesamten Textes. -
    - -
    - - -## Version: -- Version 3.5 (Todo): Rufen Sie alle Funktionserweiterungen dieses Projekts mit natürlicher Sprache auf (hohe Priorität). -- Version 3.4 (Todo): Verbesserte Unterstützung mehrerer Threads für Local Large Model (LLM). -- Version 3.3: + Internet-Informationssynthese-Funktion -- Version 3.2: Funktionserweiterungen unterstützen mehr Parameter-Schnittstellen (Speicherung von Dialogen, Interpretation beliebigen Sprachcodes + gleichzeitige Abfrage jeder LLM-Kombination) -- Version 3.1: Unterstützung mehrerer GPT-Modelle gleichzeitig! Unterstützung für API2D, Unterstützung für Lastenausgleich von mehreren API-Schlüsseln. -- Version 3.0: Unterstützung von Chatglm und anderen kleinen LLMs -- Version 2.6: Umstrukturierung der Plugin-Struktur zur Verbesserung der Interaktivität, Einführung weiterer Plugins -- Version 2.5: Automatische Aktualisierung, Problembehebung bei Quelltexten großer Projekte, wenn der Text zu lang ist oder Token überlaufen. -- Version 2.4: (1) Neue Funktion zur Übersetzung des gesamten PDF-Texts; (2) Neue Funktion zum Wechseln der Position des Eingabebereichs; (3) Neue Option für vertikales Layout; (4) Optimierung von Multithread-Funktions-Plugins. -- Version 2.3: Verbesserte Interaktivität mit mehreren Threads -- Version 2.2: Funktionserweiterungen unterstützen "Hot-Reload" -- Version 2.1: Faltbares Layout -- Version 2.0: Einführung von modularisierten Funktionserweiterungen -- Version 1.0: Grundlegende Funktionengpt_academic Entwickler QQ-Gruppe-2: 610599535 - -- Bekannte Probleme - - Einige Browser-Übersetzungs-Plugins können die Frontend-Ausführung dieser Software stören. - - Sowohl eine zu hohe als auch eine zu niedrige Version von Gradio führt zu verschiedenen Ausnahmen. - -## Referenz und Lernen - -``` -Der Code bezieht sich auf viele Designs von anderen herausragenden Projekten, insbesondere: - -# Projekt 1: ChatGLM-6B der Tsinghua Universität: -https://github.com/THUDM/ChatGLM-6B - -# Projekt 2: JittorLLMs der Tsinghua Universität: -https://github.com/Jittor/JittorLLMs - -# Projekt 3: Edge-GPT: -https://github.com/acheong08/EdgeGPT - -# Projekt 4: ChuanhuChatGPT: -https://github.com/GaiZhenbiao/ChuanhuChatGPT - -# Projekt 5: ChatPaper: -https://github.com/kaixindelele/ChatPaper - -# Mehr: -https://github.com/gradio-app/gradio -https://github.com/fghrsh/live2d_demo -``` \ No newline at end of file diff --git a/spaces/fclong/summary/fengshen/models/auto/tokenization_auto.py b/spaces/fclong/summary/fengshen/models/auto/tokenization_auto.py deleted file mode 100644 index 6555191bef55336708cabc5e9b17c0322318a417..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/models/auto/tokenization_auto.py +++ /dev/null @@ -1,449 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The IDEA Authors. All rights reserved. - -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at - -# http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-""" Auto Tokenizer class.""" - -import importlib -import json -import os -from collections import OrderedDict -from pathlib import Path -from typing import TYPE_CHECKING, Dict, Optional, Tuple, Union - -from transformers.configuration_utils import PretrainedConfig -from transformers.file_utils import ( - cached_path, - get_list_of_files, - hf_bucket_url, - is_offline_mode, - is_sentencepiece_available, - is_tokenizers_available, -) -from transformers.tokenization_utils import PreTrainedTokenizer -from transformers.tokenization_utils_base import TOKENIZER_CONFIG_FILE -from transformers.tokenization_utils_fast import PreTrainedTokenizerFast -from transformers.utils import logging -# from ..encoder_decoder import EncoderDecoderConfig -from .auto_factory import _LazyAutoMapping -from .configuration_auto import ( - CONFIG_MAPPING_NAMES, - AutoConfig, - config_class_to_model_type, - model_type_to_module_name, - replace_list_option_in_docstrings, -) -from .dynamic import get_class_from_dynamic_module - - -logger = logging.get_logger(__name__) - -if TYPE_CHECKING: - # This significantly improves completion suggestion performance when - # the transformers package is used with Microsoft's Pylance language server. - TOKENIZER_MAPPING_NAMES: OrderedDict[str, - Tuple[Optional[str], Optional[str]]] = OrderedDict() -else: - TOKENIZER_MAPPING_NAMES = OrderedDict( - [ - ("roformer", ("RoFormerTokenizer", None)), - ("longformer", ("LongformerTokenizer", None)), - ] - ) - -TOKENIZER_MAPPING = _LazyAutoMapping( - CONFIG_MAPPING_NAMES, TOKENIZER_MAPPING_NAMES) - -CONFIG_TO_TYPE = {v: k for k, v in CONFIG_MAPPING_NAMES.items()} - - -def tokenizer_class_from_name(class_name: str): - if class_name == "PreTrainedTokenizerFast": - return PreTrainedTokenizerFast - - for module_name, tokenizers in TOKENIZER_MAPPING_NAMES.items(): - if class_name in tokenizers: - module_name = model_type_to_module_name(module_name) - - module = importlib.import_module( - f".{module_name}", "transformers.models") - return getattr(module, class_name) - - for config, tokenizers in TOKENIZER_MAPPING._extra_content.items(): - for tokenizer in tokenizers: - if getattr(tokenizer, "__name__", None) == class_name: - return tokenizer - - return None - - -def get_tokenizer_config( - pretrained_model_name_or_path: Union[str, os.PathLike], - cache_dir: Optional[Union[str, os.PathLike]] = None, - force_download: bool = False, - resume_download: bool = False, - proxies: Optional[Dict[str, str]] = None, - use_auth_token: Optional[Union[bool, str]] = None, - revision: Optional[str] = None, - local_files_only: bool = False, - **kwargs, -): - """ - Loads the tokenizer configuration from a pretrained model tokenizer configuration. - - Args: - pretrained_model_name_or_path (`str` or `os.PathLike`): - This can be either: - - - a string, the *model id* of a pretrained model configuration hosted inside a model repo on - huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced - under a user or organization name, like `dbmdz/bert-base-german-cased`. - - a path to a *directory* containing a configuration file saved using the - [`~PreTrainedTokenizer.save_pretrained`] method, e.g., `./my_model_directory/`. - - cache_dir (`str` or `os.PathLike`, *optional*): - Path to a directory in which a downloaded pretrained model configuration should be cached if the standard - cache should not be used. 
- force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force to (re-)download the configuration files and override the cached versions if they - exist. - resume_download (`bool`, *optional*, defaults to `False`): - Whether or not to delete incompletely received file. Attempts to resume the download if such a file exists. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request. - use_auth_token (`str` or *bool*, *optional*): - The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated - when running `transformers-cli login` (stored in `~/.huggingface`). - revision(`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a - git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any - identifier allowed by git. - local_files_only (`bool`, *optional*, defaults to `False`): - If `True`, will only try to load the tokenizer configuration from local files. - - - - Passing `use_auth_token=True` is required when you want to use a private model. - - - - Returns: - `Dict`: The configuration of the tokenizer. - - Examples: - - ```python - # Download configuration from huggingface.co and cache. - tokenizer_config = get_tokenizer_config("bert-base-uncased") - # This model does not have a tokenizer config so the result will be an empty dict. - tokenizer_config = get_tokenizer_config("xlm-roberta-base") - - # Save a pretrained tokenizer locally and you can reload its config - from transformers import AutoTokenizer - - tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") - tokenizer.save_pretrained("tokenizer-test") - tokenizer_config = get_tokenizer_config("tokenizer-test") - ```""" - if is_offline_mode() and not local_files_only: - logger.info("Offline mode: forcing local_files_only=True") - local_files_only = True - - # Will raise a ValueError if `pretrained_model_name_or_path` is not a valid path or model identifier - repo_files = get_list_of_files( - pretrained_model_name_or_path, - revision=revision, - use_auth_token=use_auth_token, - local_files_only=local_files_only, - ) - if TOKENIZER_CONFIG_FILE not in [Path(f).name for f in repo_files]: - return {} - - pretrained_model_name_or_path = str(pretrained_model_name_or_path) - if os.path.isdir(pretrained_model_name_or_path): - config_file = os.path.join( - pretrained_model_name_or_path, TOKENIZER_CONFIG_FILE) - else: - config_file = hf_bucket_url( - pretrained_model_name_or_path, filename=TOKENIZER_CONFIG_FILE, revision=revision, mirror=None - ) - - try: - # Load from URL or cache if already cached - resolved_config_file = cached_path( - config_file, - cache_dir=cache_dir, - force_download=force_download, - proxies=proxies, - resume_download=resume_download, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - ) - - except EnvironmentError: - logger.info( - "Could not locate the tokenizer configuration file, will try to use the model config instead.") - return {} - - with open(resolved_config_file, encoding="utf-8") as reader: - return json.load(reader) - - -class AutoTokenizer: - r""" - This is a generic tokenizer class that will be instantiated as one of the tokenizer classes of the library when - created with the [`AutoTokenizer.from_pretrained`] class method. 
- - This class cannot be instantiated directly using `__init__()` (throws an error). - """ - - def __init__(self): - raise EnvironmentError( - "AutoTokenizer is designed to be instantiated " - "using the `AutoTokenizer.from_pretrained(pretrained_model_name_or_path)` method." - ) - - @classmethod - @replace_list_option_in_docstrings(TOKENIZER_MAPPING_NAMES) - def from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs): - r""" - Instantiate one of the tokenizer classes of the library from a pretrained model vocabulary. - - The tokenizer class to instantiate is selected based on the `model_type` property of the config object (either - passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or when it's missing, by - falling back to using pattern matching on `pretrained_model_name_or_path`: - - List options - - Params: - pretrained_model_name_or_path (`str` or `os.PathLike`): - Can be either: - - - A string, the *model id* of a predefined tokenizer hosted inside a model repo on huggingface.co. - Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a - user or organization name, like `dbmdz/bert-base-german-cased`. - - A path to a *directory* containing vocabulary files required by the tokenizer, for instance saved - using the [`~PreTrainedTokenizer.save_pretrained`] method, e.g., `./my_model_directory/`. - - A path or url to a single saved vocabulary file if and only if the tokenizer only requires a - single vocabulary file (like Bert or XLNet), e.g.: `./my_model_directory/vocab.txt`. (Not - applicable to all derived classes) - inputs (additional positional arguments, *optional*): - Will be passed along to the Tokenizer `__init__()` method. - config ([`PretrainedConfig`], *optional*) - The configuration object used to dertermine the tokenizer class to instantiate. - cache_dir (`str` or `os.PathLike`, *optional*): - Path to a directory in which a downloaded pretrained model configuration should be cached if the - standard cache should not be used. - force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force the (re-)download the model weights and configuration files and override the - cached versions if they exist. - resume_download (`bool`, *optional*, defaults to `False`): - Whether or not to delete incompletely received files. Will attempt to resume the download if such a - file exists. - proxies (`Dict[str, str]`, *optional*): - A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', - 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request. - revision(`str`, *optional*, defaults to `"main"`): - The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a - git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any - identifier allowed by git. - subfolder (`str`, *optional*): - In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for - facebook/rag-token-base), specify it here. - use_fast (`bool`, *optional*, defaults to `True`): - Whether or not to try to load the fast version of the tokenizer. - tokenizer_type (`str`, *optional*): - Tokenizer type to be loaded. - trust_remote_code (`bool`, *optional*, defaults to `False`): - Whether or not to allow for custom models defined on the Hub in their own modeling files. 
This option - should only be set to `True` for repositories you trust and in which you have read the code, as it will - execute code present on the Hub on your local machine. - kwargs (additional keyword arguments, *optional*): - Will be passed to the Tokenizer `__init__()` method. Can be used to set special tokens like - `bos_token`, `eos_token`, `unk_token`, `sep_token`, `pad_token`, `cls_token`, `mask_token`, - `additional_special_tokens`. See parameters in the `__init__()` for more details. - - Examples: - - ```python - >>> from transformers import AutoTokenizer - - >>> # Download vocabulary from huggingface.co and cache. - >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") - - >>> # Download vocabulary from huggingface.co (user-uploaded) and cache. - >>> tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased") - - >>> # If vocabulary files are in a directory (e.g. tokenizer was saved using *save_pretrained('./test/saved_model/')*) - >>> tokenizer = AutoTokenizer.from_pretrained("./test/bert_saved_model/") - ```""" - config = kwargs.pop("config", None) - kwargs["_from_auto"] = True - - use_fast = kwargs.pop("use_fast", True) - tokenizer_type = kwargs.pop("tokenizer_type", None) - trust_remote_code = kwargs.pop("trust_remote_code", False) - - # First, let's see whether the tokenizer_type is passed so that we can leverage it - if tokenizer_type is not None: - tokenizer_class = None - tokenizer_class_tuple = TOKENIZER_MAPPING_NAMES.get( - tokenizer_type, None) - - if tokenizer_class_tuple is None: - raise ValueError( - f"Passed `tokenizer_type` {tokenizer_type} does not exist. `tokenizer_type` should be one of " - f"{', '.join(c for c in TOKENIZER_MAPPING_NAMES.keys())}." - ) - - tokenizer_class_name, tokenizer_fast_class_name = tokenizer_class_tuple - - if use_fast and tokenizer_fast_class_name is not None: - tokenizer_class = tokenizer_class_from_name( - tokenizer_fast_class_name) - - if tokenizer_class is None: - tokenizer_class = tokenizer_class_from_name( - tokenizer_class_name) - - if tokenizer_class is None: - raise ValueError( - f"Tokenizer class {tokenizer_class_name} is not currently imported.") - - return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) - - # Next, let's try to use the tokenizer_config file to get the tokenizer class. - tokenizer_config = get_tokenizer_config( - pretrained_model_name_or_path, **kwargs) - - config_tokenizer_class = tokenizer_config.get("tokenizer_class") - tokenizer_auto_map = tokenizer_config.get("auto_map") - - # If that did not work, let's try to use the config. - if config_tokenizer_class is None: - if not isinstance(config, PretrainedConfig): - config = AutoConfig.from_pretrained( - pretrained_model_name_or_path, trust_remote_code=trust_remote_code, **kwargs - ) - config_tokenizer_class = config.tokenizer_class - if hasattr(config, "auto_map") and "AutoTokenizer" in config.auto_map: - tokenizer_auto_map = config.auto_map["AutoTokenizer"] - - # If we have the tokenizer class from the tokenizer config or the model config we're good! - if config_tokenizer_class is not None: - tokenizer_class = None - if tokenizer_auto_map is not None: - if not trust_remote_code: - raise ValueError( - f"Loading {pretrained_model_name_or_path} requires you to execute the tokenizer file in that repo " - "on your local machine. Make sure you have read the code there to avoid malicious use, then set " - "the option `trust_remote_code=True` to remove this error." 
- ) - if kwargs.get("revision", None) is None: - logger.warn( - "Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure " - "no malicious code has been contributed in a newer revision." - ) - - if use_fast and tokenizer_auto_map[1] is not None: - class_ref = tokenizer_auto_map[1] - else: - class_ref = tokenizer_auto_map[0] - - module_file, class_name = class_ref.split(".") - tokenizer_class = get_class_from_dynamic_module( - pretrained_model_name_or_path, module_file + ".py", class_name, **kwargs - ) - - elif use_fast and not config_tokenizer_class.endswith("Fast"): - tokenizer_class_candidate = f"{config_tokenizer_class}Fast" - tokenizer_class = tokenizer_class_from_name( - tokenizer_class_candidate) - if tokenizer_class is None: - tokenizer_class_candidate = config_tokenizer_class - tokenizer_class = tokenizer_class_from_name( - tokenizer_class_candidate) - - if tokenizer_class is None: - raise ValueError( - f"Tokenizer class {tokenizer_class_candidate} does not exist or is not currently imported." - ) - return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) - - model_type = config_class_to_model_type(type(config).__name__) - if model_type is not None: - tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type( - config)] - if tokenizer_class_fast and (use_fast or tokenizer_class_py is None): - return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) - else: - if tokenizer_class_py is not None: - return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) - else: - raise ValueError( - "This tokenizer cannot be instantiated. Please make sure you have `sentencepiece` installed " - "in order to use this tokenizer." - ) - - raise ValueError( - f"Unrecognized configuration class {config.__class__} to build an AutoTokenizer.\n" - f"Model type should be one of {', '.join(c.__name__ for c in TOKENIZER_MAPPING.keys())}." - ) - - def register(config_class, slow_tokenizer_class=None, fast_tokenizer_class=None): - """ - Register a new tokenizer in this mapping. - - - Args: - config_class ([`PretrainedConfig`]): - The configuration corresponding to the model to register. - slow_tokenizer_class ([`PretrainedTokenizer`], *optional*): - The slow tokenizer to register. - slow_tokenizer_class ([`PretrainedTokenizerFast`], *optional*): - The fast tokenizer to register. - """ - if slow_tokenizer_class is None and fast_tokenizer_class is None: - raise ValueError( - "You need to pass either a `slow_tokenizer_class` or a `fast_tokenizer_class") - if slow_tokenizer_class is not None and issubclass(slow_tokenizer_class, PreTrainedTokenizerFast): - raise ValueError( - "You passed a fast tokenizer in the `slow_tokenizer_class`.") - if fast_tokenizer_class is not None and issubclass(fast_tokenizer_class, PreTrainedTokenizer): - raise ValueError( - "You passed a slow tokenizer in the `fast_tokenizer_class`.") - - if ( - slow_tokenizer_class is not None - and fast_tokenizer_class is not None - and issubclass(fast_tokenizer_class, PreTrainedTokenizerFast) - and fast_tokenizer_class.slow_tokenizer_class != slow_tokenizer_class - ): - raise ValueError( - "The fast tokenizer class you are passing has a `slow_tokenizer_class` attribute that is not " - "consistent with the slow tokenizer class you passed (fast tokenizer has " - f"{fast_tokenizer_class.slow_tokenizer_class} and you passed {slow_tokenizer_class}. Fix one of those " - "so they match!" 
- ) - - # Avoid resetting a set slow/fast tokenizer if we are passing just the other ones. - if config_class in TOKENIZER_MAPPING._extra_content: - existing_slow, existing_fast = TOKENIZER_MAPPING[config_class] - if slow_tokenizer_class is None: - slow_tokenizer_class = existing_slow - if fast_tokenizer_class is None: - fast_tokenizer_class = existing_fast - - TOKENIZER_MAPPING.register( - config_class, (slow_tokenizer_class, fast_tokenizer_class)) diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Ultimate Mortal Kombat X APK for Android - The Ultimate Fighting Challenge Awaits You.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Ultimate Mortal Kombat X APK for Android - The Ultimate Fighting Challenge Awaits You.md deleted file mode 100644 index 303bade983922751a6da9cd02733a413edc9f537..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Ultimate Mortal Kombat X APK for Android - The Ultimate Fighting Challenge Awaits You.md +++ /dev/null @@ -1,114 +0,0 @@ -
    -

    Ultimate Mortal Kombat X APK Download: Everything You Need to Know

    -

    If you are a fan of fighting games, you have probably heard of Mortal Kombat X, one of the most popular and brutal titles in the genre. But did you know that you can download and play the game on your Android device for free? In this article, we will tell you everything you need to know about the ultimate Mortal Kombat X APK download, including what it is, why you should get it, how to get it, and how to play it. Read on and get ready to unleash your inner fighter!

    -

    What is Mortal Kombat X?

    -

    Mortal Kombat X is a fighting game developed by NetherRealm Studios and published by Warner Bros. Interactive Entertainment in 2015. It is the tenth main installment in the Mortal Kombat series, and a sequel to Mortal Kombat (2011). The game features a roster of 33 characters, including new ones like Cassie Cage, D'Vorah, Erron Black, and Ferra/Torr, as well as guest characters like Alien, Predator, Jason Voorhees, and Leatherface. Each character has three different variations that affect their abilities and fighting style.

    -

    ultimate mortal kombat x apk download


    Download Zip ->->->-> https://gohhs.com/2uPp7E



    -

    The game has a rich and cinematic story mode that spans 25 years after the events of Mortal Kombat (2011), as well as various single-player and multiplayer modes such as Tower, Test Your Luck, Faction Wars, King of the Hill, and more. The game also boasts stunning graphics, smooth animations, realistic physics, and gore-filled fatalities that make every fight a spectacle.

    -

    Why download the APK version?

    -

    The official Mortal Kombat X app is available on Google Play Store for free, but it has some limitations and drawbacks. For one thing, it requires a lot of storage space (around 2 GB) and a stable internet connection to run properly. For another thing, it has a lot of in-app purchases and ads that can interrupt your gameplay and make it harder to progress. Moreover, some regions may not have access to the app due to licensing issues or censorship.

    -

    That's why downloading the APK version of Mortal Kombat X can be a better option for some players. The APK version is a modified version of the app that bypasses these limitations and drawbacks. By downloading the APK file from a trusted source, you can enjoy the following benefits:

    -
      -
      • You can play the game offline without any internet connection.
      • You can save storage space by choosing which files to download (such as languages, graphics quality, etc.).
      • You can unlock all the characters, skins, items, and features without spending any money.
      • You can remove all the ads and pop-ups that can annoy you.
      • You can access the game from any region without any restrictions.
      
    -

    How to download and install the APK file?

    -

    Downloading and installing the APK file of Mortal Kombat X is not difficult, but it requires some steps and precautions. Here is a step-by-step guide on how to do it:

    -

    A step-by-step guide on how to get the APK file from a reliable source

    -
      -
    1. Go to your device's settings and enable the option to install apps from unknown sources. This will allow you to install apps that do not come from the Google Play Store.
    2. -
    3. Go to a reliable website that offers the APK file of Mortal Kombat X. You can search for "Mortal Kombat X APK" on Google or use one of these links: . Make sure to check the reviews and ratings of the website before downloading anything.
    4. -
    5. Download the APK file and the OBB file (which contains the game data) to your device. The files should be around 1 GB in total, depending on the version you choose. A quick way to sanity-check the downloaded files is sketched just after this list.
    6. -
    7. Locate the downloaded files on your device using a file manager app. You can use any app you like, such as ES File Explorer, File Manager, or ZArchiver.
    8. -
    -
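
    Before you install anything, it is worth sanity-checking the files you just downloaded. The short Python sketch below is only an illustration: the file names, the rough minimum sizes, and the idea of a published SHA-256 checksum are placeholders rather than values from any official source. If the site you downloaded from publishes a hash, paste it in; otherwise the size check alone can still catch a truncated or fake download.

```python
import hashlib
from pathlib import Path

# Placeholder file names, sizes, and checksums; substitute whatever the
# download page you actually used publishes.
EXPECTED = {
    "mortal-kombat-x.apk": (30, None),    # (rough minimum size in MB, SHA-256 or None)
    "com.wb.goog.mkx.obb": (900, None),
}

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Hash the file in 1 MB chunks so a ~1 GB OBB never has to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for block in iter(lambda: fh.read(chunk), b""):
            digest.update(block)
    return digest.hexdigest()

for name, (min_mb, published) in EXPECTED.items():
    path = Path(name)
    if not path.exists():
        print(f"missing: {name}")
        continue
    size_mb = path.stat().st_size / (1024 * 1024)
    print(f"{name}: {size_mb:.0f} MB", "(OK)" if size_mb >= min_mb else "(suspiciously small)")
    if published and sha256_of(path) != published.lower():
        print("  WARNING: SHA-256 does not match the published value")
```

    If a file turns out far smaller than expected, or the hash does not match, delete it and download it again from a different source.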

    A step-by-step guide on how to install the APK file on your Android device

    -
      -
    1. Tap on the APK file and follow the instructions to install it. You may need to grant some permissions to the app during the installation process.
    2. -
    3. Do not open the app yet. Instead, go to the OBB file and extract it using a file manager app. You should get a folder named "com.wb.goog.mkx" or something similar.
    4. -
    5. Move the extracted folder to the Android/OBB directory on your device. This is where the game data will be stored (a rough sketch of the expected layout follows this list).
    6. -
    7. Now you can open the app and enjoy Mortal Kombat X on your Android device!
    8. -
    -
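
    As a rough illustration of what steps 3 to 5 are aiming for, here is a small Python sketch of the folder layout the game expects. The package name comes from the guide above, but the shared-storage path and the archive name are assumptions, and the idea of running Python on the device (for example through Termux or an emulator) is purely illustrative; on a real phone a file manager does the same job.

```python
import zipfile
from pathlib import Path

# Assumptions for illustration only: the shared-storage root, the archive
# name, and running this on-device at all. A file manager works just as well.
STORAGE = Path("/storage/emulated/0")            # typical Android shared storage
PACKAGE = "com.wb.goog.mkx"                      # package name used in the guide
ARCHIVE = STORAGE / "Download" / "mkx-data.zip"  # hypothetical downloaded archive

obb_dir = STORAGE / "Android" / "obb" / PACKAGE

def place_game_data() -> None:
    """Extract the downloaded archive straight into Android/obb/<package>."""
    obb_dir.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(ARCHIVE) as zf:
        zf.extractall(obb_dir)

def layout_ok() -> bool:
    """The game expects its data files inside its own folder under Android/obb."""
    return obb_dir.is_dir() and any(obb_dir.iterdir())

if __name__ == "__main__":
    if not layout_ok():
        place_game_data()
    print("game data in place" if layout_ok() else "OBB data still missing")
```

    The only detail that really matters is that the extracted game data ends up under Android/obb in a folder named after the game's package; if it sits anywhere else, the game will not find it.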

    How to play Mortal Kombat X with the APK version?

    -

    Playing Mortal Kombat X with the APK version is not much different from playing it with the official app. The gameplay and controls are the same, except that you have access to more features and options. Here is a brief overview of how to play the game and some tips and tricks to help you improve your skills and win more matches.

    -

    A brief overview of the gameplay and controls

    -

    Mortal Kombat X is a 2D fighting game that pits two characters against each other in a best-of-three match. You can choose from 33 characters, each with three variations that affect their abilities and fighting style. You can also customize your character's appearance, equipment, and skills using coins and souls that you earn by playing the game.

    -

    The game has two main modes: story mode and battle mode. In story mode, you follow a cinematic narrative that spans 25 years after the events of Mortal Kombat (2011). You play as different characters in each chapter and face various enemies and bosses. In battle mode, you can play solo or online against other players in different modes such as Tower, Test Your Luck, Faction Wars, King of the Hill, and more.

    -

    The game uses a simple and intuitive control scheme that consists of four buttons: attack, block, special, and switch. You can tap, swipe, or hold these buttons to perform different moves and combos. You can also use signature moves and finishers such as X-Rays, Fatalities, Brutalities, and Faction Kills to finish off your opponents in spectacular ways.

    -

    Tips and tricks to improve your skills and win more matches

    -

    Mortal Kombat X is a game that requires practice, strategy, and skill to master. Here are some tips and tricks that can help you become a better fighter:

    -
      -
    • Learn your character's moves and combos. Each character has a unique set of moves and combos that you can find in the move list menu. Practice them in training mode or offline matches until you memorize them and execute them flawlessly.
    • -
    • Choose your character's variation wisely. Each character has three variations that affect their abilities and fighting style. Some variations are better suited for certain situations or opponents than others. Experiment with different variations and find out which one works best for you.
    • -
    • Use your special meter wisely. Your special meter fills up as you deal or receive damage. You can use it to perform x-rays, breakers, or enhanced specials. X-rays are powerful attacks that deal massive damage and break through blocks. Breakers are defensive moves that interrupt your opponent's combo and push them back. Enhanced specials are upgraded versions of your normal specials that have additional effects or damage. Know when to use each of these options depending on your situation.
    • -
    • Know your opponent's moves and tendencies. The best way to counter your opponent is to know what they are capable of and what they are likely to do. Study their move list, watch their patterns, and anticipate their actions. Use blocks, dodges, counters, and punishes to avoid or exploit their weaknesses.
    • -
    • Use environmental interactions to your advantage. The game features various environmental objects that you can interact with during a fight. You can use them to escape, attack, or defend yourself depending on the object and your position. For example, you can throw barrels at your opponent, jump off walls to avoid attacks, or use weapons to deal extra damage. Be aware of your surroundings and use them creatively.
    • -
    -

    Conclusion

    -

    Mortal Kombat X is a thrilling and brutal fighting game that you can enjoy on your Android device for free. By downloading the APK version of the game, you can unlock all the features and options that the official app does not offer. You can also play the game offline without any internet connection or ads. However, you need to be careful and follow the steps and precautions we mentioned in this article to download and install the APK file safely and correctly. Once you do that, you can start playing the game and unleash your inner fighter!

    -

    Do you have any questions or comments about the ultimate Mortal Kombat X APK download? Let us know in the comment section below. And if you liked this article, please share it with your friends and fellow gamers. Thank you for reading!

    -

    FAQs

    -

    Here are some of the most frequently asked questions about the ultimate Mortal Kombat X APK download:

    -
      -
    1. Is the APK version of Mortal Kombat X legal?
    2. -

      The APK version of Mortal Kombat X is not legal, as it violates the terms and conditions of the game's developers and publishers. However, it is unlikely that you will face any legal consequences for downloading and playing it, as long as you do not distribute or sell it to others.

      -
    3. Is the APK version of Mortal Kombat X safe?
    4. -

      The APK version of Mortal Kombat X is safe, as long as you download it from a reliable source and follow the steps and precautions we mentioned in this article. However, there is always a risk of malware or viruses when downloading any file from the internet, so make sure to scan your device regularly and use a good antivirus app.

      -
    5. Can I play Mortal Kombat X with the APK version on other devices?
    6. -

      The APK version of Mortal Kombat X is designed for Android devices only. You cannot play it on iOS, Windows, or any other platform. However, you can use an Android emulator on your PC or Mac to run the APK file and play the game on your computer.

      -
    7. Can I play Mortal Kombat X with the APK version with other players online?
    8. -

      The APK version of Mortal Kombat X allows you to play online with other players who have the same version of the game. However, you cannot play with players who have the official app or a different version of the game. You may also experience some lag or connection issues when playing online with the APK version.

      -
    9. Can I update Mortal Kombat X with the APK version?
    10. -

      The APK version of Mortal Kombat X does not support automatic updates. You will have to manually download and install a new version of the APK file whenever there is an update available. Make sure to back up your data before updating, as you may lose your progress or settings.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/fffffu/bing/src/lib/bots/bing/utils.ts b/spaces/fffffu/bing/src/lib/bots/bing/utils.ts deleted file mode 100644 index 64b4b96452d125346b0fc4436b5f7c18c962df0b..0000000000000000000000000000000000000000 --- a/spaces/fffffu/bing/src/lib/bots/bing/utils.ts +++ /dev/null @@ -1,87 +0,0 @@ -import { ChatResponseMessage, BingChatResponse } from './types' - -export function convertMessageToMarkdown(message: ChatResponseMessage): string { - if (message.messageType === 'InternalSearchQuery') { - return message.text - } - for (const card of message.adaptiveCards??[]) { - for (const block of card.body) { - if (block.type === 'TextBlock') { - return block.text - } - } - } - return '' -} - -const RecordSeparator = String.fromCharCode(30) - -export const websocketUtils = { - packMessage(data: any) { - return `${JSON.stringify(data)}${RecordSeparator}` - }, - unpackMessage(data: string | ArrayBuffer | Blob) { - if (!data) return {} - return data - .toString() - .split(RecordSeparator) - .filter(Boolean) - .map((s) => { - try { - return JSON.parse(s) - } catch (e) { - return {} - } - }) - }, -} - -export async function createImage(prompt: string, id: string, headers: HeadersInit): Promise { - const { headers: responseHeaders } = await fetch(`https://www.bing.com/images/create?partner=sydney&re=1&showselective=1&sude=1&kseed=7000&SFX=&q=${encodeURIComponent(prompt)}&iframeid=${id}`, - { - method: 'HEAD', - headers, - redirect: 'manual' - }, - ); - - if (!/&id=([^&]+)$/.test(responseHeaders.get('location') || '')) { - throw new Error('请求异常,请检查 cookie 是否有效') - } - - const resultId = RegExp.$1; - let count = 0 - const imageThumbUrl = `https://www.bing.com/images/create/async/results/${resultId}?q=${encodeURIComponent(prompt)}&partner=sydney&showselective=1&IID=images.as`; - - do { - await sleep(3000); - const content = await fetch(imageThumbUrl, { headers, method: 'GET' }) - - // @ts-ignore - if (content.headers.get('content-length') > 1) { - const text = await content.text() - return (text?.match(/ target?.split('src="').pop()?.replace(/&/g, '&')) - .map(img => `![${prompt}](${img})`).join(' ') - } - } while(count ++ < 10); -} - - -export async function* streamAsyncIterable(stream: ReadableStream) { - const reader = stream.getReader() - try { - while (true) { - const { done, value } = await reader.read() - if (done) { - return - } - yield value - } - } finally { - reader.releaseLock() - } -} - -export const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms)) - diff --git a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/data/audio.py b/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/data/audio.py deleted file mode 100644 index 1829d7db4ef832ad65598b471caa7d256a06d012..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/data/audio.py +++ /dev/null @@ -1,213 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Audio IO methods are defined in this module (info, read, write), -We rely on av library for faster read when possible, otherwise on torchaudio. 
-""" - -from dataclasses import dataclass -from pathlib import Path -import logging -import typing as tp - -import numpy as np -import soundfile -import torch -from torch.nn import functional as F -import torchaudio as ta - -import av - -from .audio_utils import f32_pcm, i16_pcm, normalize_audio - - -_av_initialized = False - - -def _init_av(): - global _av_initialized - if _av_initialized: - return - logger = logging.getLogger('libav.mp3') - logger.setLevel(logging.ERROR) - _av_initialized = True - - -@dataclass(frozen=True) -class AudioFileInfo: - sample_rate: int - duration: float - channels: int - - -def _av_info(filepath: tp.Union[str, Path]) -> AudioFileInfo: - _init_av() - with av.open(str(filepath)) as af: - stream = af.streams.audio[0] - sample_rate = stream.codec_context.sample_rate - duration = float(stream.duration * stream.time_base) - channels = stream.channels - return AudioFileInfo(sample_rate, duration, channels) - - -def _soundfile_info(filepath: tp.Union[str, Path]) -> AudioFileInfo: - info = soundfile.info(filepath) - return AudioFileInfo(info.samplerate, info.duration, info.channels) - - -def audio_info(filepath: tp.Union[str, Path]) -> AudioFileInfo: - # torchaudio no longer returns useful duration informations for some formats like mp3s. - filepath = Path(filepath) - if filepath.suffix in ['.flac', '.ogg']: # TODO: Validate .ogg can be safely read with av_info - # ffmpeg has some weird issue with flac. - return _soundfile_info(filepath) - else: - return _av_info(filepath) - - -def _av_read(filepath: tp.Union[str, Path], seek_time: float = 0, duration: float = -1.) -> tp.Tuple[torch.Tensor, int]: - """FFMPEG-based audio file reading using PyAV bindings. - Soundfile cannot read mp3 and av_read is more efficient than torchaudio. - - Args: - filepath (str or Path): Path to audio file to read. - seek_time (float): Time at which to start reading in the file. - duration (float): Duration to read from the file. If set to -1, the whole file is read. - Returns: - Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate - """ - _init_av() - with av.open(str(filepath)) as af: - stream = af.streams.audio[0] - sr = stream.codec_context.sample_rate - num_frames = int(sr * duration) if duration >= 0 else -1 - frame_offset = int(sr * seek_time) - # we need a small negative offset otherwise we get some edge artifact - # from the mp3 decoder. - af.seek(int(max(0, (seek_time - 0.1)) / stream.time_base), stream=stream) - frames = [] - length = 0 - for frame in af.decode(streams=stream.index): - current_offset = int(frame.rate * frame.pts * frame.time_base) - strip = max(0, frame_offset - current_offset) - buf = torch.from_numpy(frame.to_ndarray()) - if buf.shape[0] != stream.channels: - buf = buf.view(-1, stream.channels).t() - buf = buf[:, strip:] - frames.append(buf) - length += buf.shape[1] - if num_frames > 0 and length >= num_frames: - break - assert frames - # If the above assert fails, it is likely because we seeked past the end of file point, - # in which case ffmpeg returns a single frame with only zeros, and a weird timestamp. - # This will need proper debugging, in due time. - wav = torch.cat(frames, dim=1) - assert wav.shape[0] == stream.channels - if num_frames > 0: - wav = wav[:, :num_frames] - return f32_pcm(wav), sr - - -def audio_read(filepath: tp.Union[str, Path], seek_time: float = 0., - duration: float = -1., pad: bool = False) -> tp.Tuple[torch.Tensor, int]: - """Read audio by picking the most appropriate backend tool based on the audio format. 
- - Args: - filepath (str or Path): Path to audio file to read. - seek_time (float): Time at which to start reading in the file. - duration (float): Duration to read from the file. If set to -1, the whole file is read. - pad (bool): Pad output audio if not reaching expected duration. - Returns: - Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate. - """ - fp = Path(filepath) - if fp.suffix in ['.flac', '.ogg']: # TODO: check if we can safely use av_read for .ogg - # There is some bug with ffmpeg and reading flac - info = _soundfile_info(filepath) - frames = -1 if duration <= 0 else int(duration * info.sample_rate) - frame_offset = int(seek_time * info.sample_rate) - wav, sr = soundfile.read(filepath, start=frame_offset, frames=frames, dtype=np.float32) - assert info.sample_rate == sr, f"Mismatch of sample rates {info.sample_rate} {sr}" - wav = torch.from_numpy(wav).t().contiguous() - if len(wav.shape) == 1: - wav = torch.unsqueeze(wav, 0) - elif ( - fp.suffix in ['.wav', '.mp3'] and fp.suffix[1:] in ta.utils.sox_utils.list_read_formats() - and duration <= 0 and seek_time == 0 - ): - # Torchaudio is faster if we load an entire file at once. - wav, sr = ta.load(fp) - else: - wav, sr = _av_read(filepath, seek_time, duration) - if pad and duration > 0: - expected_frames = int(duration * sr) - wav = F.pad(wav, (0, expected_frames - wav.shape[-1])) - return wav, sr - - -def audio_write(stem_name: tp.Union[str, Path], - wav: torch.Tensor, sample_rate: int, - format: str = 'wav', mp3_rate: int = 320, normalize: bool = True, - strategy: str = 'peak', peak_clip_headroom_db: float = 1, - rms_headroom_db: float = 18, loudness_headroom_db: float = 14, - log_clipping: bool = True, make_parent_dir: bool = True, - add_suffix: bool = True) -> Path: - """Convenience function for saving audio to disk. Returns the filename the audio was written to. - - Args: - stem_name (str or Path): Filename without extension which will be added automatically. - format (str): Either "wav" or "mp3". - mp3_rate (int): kbps when using mp3s. - normalize (bool): if `True` (default), normalizes according to the prescribed - strategy (see after). If `False`, the strategy is only used in case clipping - would happen. - strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak', - i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square - with extra headroom to avoid clipping. 'clip' just clips. - peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy. - rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger - than the `peak_clip` one to avoid further clipping. - loudness_headroom_db (float): Target loudness for loudness normalization. - log_clipping (bool): If True, basic logging on stderr when clipping still - occurs despite strategy (only for 'rms'). - make_parent_dir (bool): Make parent directory if it doesn't exist. - Returns: - Path: Path of the saved audio. 
- """ - assert wav.dtype.is_floating_point, "wav is not floating point" - if wav.dim() == 1: - wav = wav[None] - elif wav.dim() > 2: - raise ValueError("Input wav should be at most 2 dimension.") - assert wav.isfinite().all() - wav = normalize_audio(wav, normalize, strategy, peak_clip_headroom_db, - rms_headroom_db, loudness_headroom_db, log_clipping=log_clipping, - sample_rate=sample_rate, stem_name=str(stem_name)) - kwargs: dict = {} - if format == 'mp3': - suffix = '.mp3' - kwargs.update({"compression": mp3_rate}) - elif format == 'wav': - wav = i16_pcm(wav) - suffix = '.wav' - kwargs.update({"encoding": "PCM_S", "bits_per_sample": 16}) - else: - raise RuntimeError(f"Invalid format {format}. Only wav or mp3 are supported.") - if not add_suffix: - suffix = '' - path = Path(str(stem_name) + suffix) - if make_parent_dir: - path.parent.mkdir(exist_ok=True, parents=True) - try: - ta.save(path, wav, sample_rate, **kwargs) - except Exception: - if path.exists(): - # we do not want to leave half written files around. - path.unlink() - raise - return path diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/inspector.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/inspector.d.ts deleted file mode 100644 index eba0b55d8bca0ef10cbf24922fb899b67c35f3a9..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/inspector.d.ts +++ /dev/null @@ -1,2741 +0,0 @@ -// eslint-disable-next-line dt-header -// Type definitions for inspector - -// These definitions are auto-generated. -// Please see https://github.com/DefinitelyTyped/DefinitelyTyped/pull/19330 -// for more information. - -// tslint:disable:max-line-length - -/** - * The `inspector` module provides an API for interacting with the V8 inspector. - * - * It can be accessed using: - * - * ```js - * const inspector = require('inspector'); - * ``` - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/inspector.js) - */ -declare module 'inspector' { - import EventEmitter = require('node:events'); - interface InspectorNotification { - method: string; - params: T; - } - namespace Schema { - /** - * Description of the protocol domain. - */ - interface Domain { - /** - * Domain name. - */ - name: string; - /** - * Domain version. - */ - version: string; - } - interface GetDomainsReturnType { - /** - * List of supported domains. - */ - domains: Domain[]; - } - } - namespace Runtime { - /** - * Unique script identifier. - */ - type ScriptId = string; - /** - * Unique object identifier. - */ - type RemoteObjectId = string; - /** - * Primitive value which cannot be JSON-stringified. - */ - type UnserializableValue = string; - /** - * Mirror object referencing original JavaScript object. - */ - interface RemoteObject { - /** - * Object type. - */ - type: string; - /** - * Object subtype hint. Specified for object type values only. - */ - subtype?: string | undefined; - /** - * Object class (constructor) name. Specified for object type values only. - */ - className?: string | undefined; - /** - * Remote object value in case of primitive values or JSON values (if it was requested). - */ - value?: any; - /** - * Primitive value which can not be JSON-stringified does not have value, but gets this property. - */ - unserializableValue?: UnserializableValue | undefined; - /** - * String representation of the object. - */ - description?: string | undefined; - /** - * Unique object identifier (for non-primitive values). 
- */ - objectId?: RemoteObjectId | undefined; - /** - * Preview containing abbreviated property values. Specified for object type values only. - * @experimental - */ - preview?: ObjectPreview | undefined; - /** - * @experimental - */ - customPreview?: CustomPreview | undefined; - } - /** - * @experimental - */ - interface CustomPreview { - header: string; - hasBody: boolean; - formatterObjectId: RemoteObjectId; - bindRemoteObjectFunctionId: RemoteObjectId; - configObjectId?: RemoteObjectId | undefined; - } - /** - * Object containing abbreviated remote object value. - * @experimental - */ - interface ObjectPreview { - /** - * Object type. - */ - type: string; - /** - * Object subtype hint. Specified for object type values only. - */ - subtype?: string | undefined; - /** - * String representation of the object. - */ - description?: string | undefined; - /** - * True iff some of the properties or entries of the original object did not fit. - */ - overflow: boolean; - /** - * List of the properties. - */ - properties: PropertyPreview[]; - /** - * List of the entries. Specified for map and set subtype values only. - */ - entries?: EntryPreview[] | undefined; - } - /** - * @experimental - */ - interface PropertyPreview { - /** - * Property name. - */ - name: string; - /** - * Object type. Accessor means that the property itself is an accessor property. - */ - type: string; - /** - * User-friendly property value string. - */ - value?: string | undefined; - /** - * Nested value preview. - */ - valuePreview?: ObjectPreview | undefined; - /** - * Object subtype hint. Specified for object type values only. - */ - subtype?: string | undefined; - } - /** - * @experimental - */ - interface EntryPreview { - /** - * Preview of the key. Specified for map-like collection entries. - */ - key?: ObjectPreview | undefined; - /** - * Preview of the value. - */ - value: ObjectPreview; - } - /** - * Object property descriptor. - */ - interface PropertyDescriptor { - /** - * Property name or symbol description. - */ - name: string; - /** - * The value associated with the property. - */ - value?: RemoteObject | undefined; - /** - * True if the value associated with the property may be changed (data descriptors only). - */ - writable?: boolean | undefined; - /** - * A function which serves as a getter for the property, or undefined if there is no getter (accessor descriptors only). - */ - get?: RemoteObject | undefined; - /** - * A function which serves as a setter for the property, or undefined if there is no setter (accessor descriptors only). - */ - set?: RemoteObject | undefined; - /** - * True if the type of this property descriptor may be changed and if the property may be deleted from the corresponding object. - */ - configurable: boolean; - /** - * True if this property shows up during enumeration of the properties on the corresponding object. - */ - enumerable: boolean; - /** - * True if the result was thrown during the evaluation. - */ - wasThrown?: boolean | undefined; - /** - * True if the property is owned for the object. - */ - isOwn?: boolean | undefined; - /** - * Property symbol object, if the property is of the symbol type. - */ - symbol?: RemoteObject | undefined; - } - /** - * Object internal property descriptor. This property isn't normally visible in JavaScript code. - */ - interface InternalPropertyDescriptor { - /** - * Conventional property name. - */ - name: string; - /** - * The value associated with the property. 
- */ - value?: RemoteObject | undefined; - } - /** - * Represents function call argument. Either remote object id objectId, primitive value, unserializable primitive value or neither of (for undefined) them should be specified. - */ - interface CallArgument { - /** - * Primitive value or serializable javascript object. - */ - value?: any; - /** - * Primitive value which can not be JSON-stringified. - */ - unserializableValue?: UnserializableValue | undefined; - /** - * Remote object handle. - */ - objectId?: RemoteObjectId | undefined; - } - /** - * Id of an execution context. - */ - type ExecutionContextId = number; - /** - * Description of an isolated world. - */ - interface ExecutionContextDescription { - /** - * Unique id of the execution context. It can be used to specify in which execution context script evaluation should be performed. - */ - id: ExecutionContextId; - /** - * Execution context origin. - */ - origin: string; - /** - * Human readable name describing given context. - */ - name: string; - /** - * Embedder-specific auxiliary data. - */ - auxData?: {} | undefined; - } - /** - * Detailed information about exception (or error) that was thrown during script compilation or execution. - */ - interface ExceptionDetails { - /** - * Exception id. - */ - exceptionId: number; - /** - * Exception text, which should be used together with exception object when available. - */ - text: string; - /** - * Line number of the exception location (0-based). - */ - lineNumber: number; - /** - * Column number of the exception location (0-based). - */ - columnNumber: number; - /** - * Script ID of the exception location. - */ - scriptId?: ScriptId | undefined; - /** - * URL of the exception location, to be used when the script was not reported. - */ - url?: string | undefined; - /** - * JavaScript stack trace if available. - */ - stackTrace?: StackTrace | undefined; - /** - * Exception object if available. - */ - exception?: RemoteObject | undefined; - /** - * Identifier of the context where exception happened. - */ - executionContextId?: ExecutionContextId | undefined; - } - /** - * Number of milliseconds since epoch. - */ - type Timestamp = number; - /** - * Stack entry for runtime errors and assertions. - */ - interface CallFrame { - /** - * JavaScript function name. - */ - functionName: string; - /** - * JavaScript script id. - */ - scriptId: ScriptId; - /** - * JavaScript script name or url. - */ - url: string; - /** - * JavaScript script line number (0-based). - */ - lineNumber: number; - /** - * JavaScript script column number (0-based). - */ - columnNumber: number; - } - /** - * Call frames for assertions or error messages. - */ - interface StackTrace { - /** - * String label of this stack trace. For async traces this may be a name of the function that initiated the async call. - */ - description?: string | undefined; - /** - * JavaScript function name. - */ - callFrames: CallFrame[]; - /** - * Asynchronous JavaScript stack trace that preceded this stack, if available. - */ - parent?: StackTrace | undefined; - /** - * Asynchronous JavaScript stack trace that preceded this stack, if available. - * @experimental - */ - parentId?: StackTraceId | undefined; - } - /** - * Unique identifier of current debugger. - * @experimental - */ - type UniqueDebuggerId = string; - /** - * If debuggerId is set stack trace comes from another debugger and can be resolved there. This allows to track cross-debugger calls. See Runtime.StackTrace and Debugger.paused for usages. 
- * @experimental - */ - interface StackTraceId { - id: string; - debuggerId?: UniqueDebuggerId | undefined; - } - interface EvaluateParameterType { - /** - * Expression to evaluate. - */ - expression: string; - /** - * Symbolic group name that can be used to release multiple objects. - */ - objectGroup?: string | undefined; - /** - * Determines whether Command Line API should be available during the evaluation. - */ - includeCommandLineAPI?: boolean | undefined; - /** - * In silent mode exceptions thrown during evaluation are not reported and do not pause execution. Overrides setPauseOnException state. - */ - silent?: boolean | undefined; - /** - * Specifies in which execution context to perform evaluation. If the parameter is omitted the evaluation will be performed in the context of the inspected page. - */ - contextId?: ExecutionContextId | undefined; - /** - * Whether the result is expected to be a JSON object that should be sent by value. - */ - returnByValue?: boolean | undefined; - /** - * Whether preview should be generated for the result. - * @experimental - */ - generatePreview?: boolean | undefined; - /** - * Whether execution should be treated as initiated by user in the UI. - */ - userGesture?: boolean | undefined; - /** - * Whether execution should await for resulting value and return once awaited promise is resolved. - */ - awaitPromise?: boolean | undefined; - } - interface AwaitPromiseParameterType { - /** - * Identifier of the promise. - */ - promiseObjectId: RemoteObjectId; - /** - * Whether the result is expected to be a JSON object that should be sent by value. - */ - returnByValue?: boolean | undefined; - /** - * Whether preview should be generated for the result. - */ - generatePreview?: boolean | undefined; - } - interface CallFunctionOnParameterType { - /** - * Declaration of the function to call. - */ - functionDeclaration: string; - /** - * Identifier of the object to call function on. Either objectId or executionContextId should be specified. - */ - objectId?: RemoteObjectId | undefined; - /** - * Call arguments. All call arguments must belong to the same JavaScript world as the target object. - */ - arguments?: CallArgument[] | undefined; - /** - * In silent mode exceptions thrown during evaluation are not reported and do not pause execution. Overrides setPauseOnException state. - */ - silent?: boolean | undefined; - /** - * Whether the result is expected to be a JSON object which should be sent by value. - */ - returnByValue?: boolean | undefined; - /** - * Whether preview should be generated for the result. - * @experimental - */ - generatePreview?: boolean | undefined; - /** - * Whether execution should be treated as initiated by user in the UI. - */ - userGesture?: boolean | undefined; - /** - * Whether execution should await for resulting value and return once awaited promise is resolved. - */ - awaitPromise?: boolean | undefined; - /** - * Specifies execution context which global object will be used to call function on. Either executionContextId or objectId should be specified. - */ - executionContextId?: ExecutionContextId | undefined; - /** - * Symbolic group name that can be used to release multiple objects. If objectGroup is not specified and objectId is, objectGroup will be inherited from object. - */ - objectGroup?: string | undefined; - } - interface GetPropertiesParameterType { - /** - * Identifier of the object to return properties for. 
- */ - objectId: RemoteObjectId; - /** - * If true, returns properties belonging only to the element itself, not to its prototype chain. - */ - ownProperties?: boolean | undefined; - /** - * If true, returns accessor properties (with getter/setter) only; internal properties are not returned either. - * @experimental - */ - accessorPropertiesOnly?: boolean | undefined; - /** - * Whether preview should be generated for the results. - * @experimental - */ - generatePreview?: boolean | undefined; - } - interface ReleaseObjectParameterType { - /** - * Identifier of the object to release. - */ - objectId: RemoteObjectId; - } - interface ReleaseObjectGroupParameterType { - /** - * Symbolic object group name. - */ - objectGroup: string; - } - interface SetCustomObjectFormatterEnabledParameterType { - enabled: boolean; - } - interface CompileScriptParameterType { - /** - * Expression to compile. - */ - expression: string; - /** - * Source url to be set for the script. - */ - sourceURL: string; - /** - * Specifies whether the compiled script should be persisted. - */ - persistScript: boolean; - /** - * Specifies in which execution context to perform script run. If the parameter is omitted the evaluation will be performed in the context of the inspected page. - */ - executionContextId?: ExecutionContextId | undefined; - } - interface RunScriptParameterType { - /** - * Id of the script to run. - */ - scriptId: ScriptId; - /** - * Specifies in which execution context to perform script run. If the parameter is omitted the evaluation will be performed in the context of the inspected page. - */ - executionContextId?: ExecutionContextId | undefined; - /** - * Symbolic group name that can be used to release multiple objects. - */ - objectGroup?: string | undefined; - /** - * In silent mode exceptions thrown during evaluation are not reported and do not pause execution. Overrides setPauseOnException state. - */ - silent?: boolean | undefined; - /** - * Determines whether Command Line API should be available during the evaluation. - */ - includeCommandLineAPI?: boolean | undefined; - /** - * Whether the result is expected to be a JSON object which should be sent by value. - */ - returnByValue?: boolean | undefined; - /** - * Whether preview should be generated for the result. - */ - generatePreview?: boolean | undefined; - /** - * Whether execution should await for resulting value and return once awaited promise is resolved. - */ - awaitPromise?: boolean | undefined; - } - interface QueryObjectsParameterType { - /** - * Identifier of the prototype to return objects for. - */ - prototypeObjectId: RemoteObjectId; - } - interface GlobalLexicalScopeNamesParameterType { - /** - * Specifies in which execution context to lookup global scope variables. - */ - executionContextId?: ExecutionContextId | undefined; - } - interface EvaluateReturnType { - /** - * Evaluation result. - */ - result: RemoteObject; - /** - * Exception details. - */ - exceptionDetails?: ExceptionDetails | undefined; - } - interface AwaitPromiseReturnType { - /** - * Promise result. Will contain rejected value if promise was rejected. - */ - result: RemoteObject; - /** - * Exception details if stack strace is available. - */ - exceptionDetails?: ExceptionDetails | undefined; - } - interface CallFunctionOnReturnType { - /** - * Call result. - */ - result: RemoteObject; - /** - * Exception details. - */ - exceptionDetails?: ExceptionDetails | undefined; - } - interface GetPropertiesReturnType { - /** - * Object properties. 
- */ - result: PropertyDescriptor[]; - /** - * Internal object properties (only of the element itself). - */ - internalProperties?: InternalPropertyDescriptor[] | undefined; - /** - * Exception details. - */ - exceptionDetails?: ExceptionDetails | undefined; - } - interface CompileScriptReturnType { - /** - * Id of the script. - */ - scriptId?: ScriptId | undefined; - /** - * Exception details. - */ - exceptionDetails?: ExceptionDetails | undefined; - } - interface RunScriptReturnType { - /** - * Run result. - */ - result: RemoteObject; - /** - * Exception details. - */ - exceptionDetails?: ExceptionDetails | undefined; - } - interface QueryObjectsReturnType { - /** - * Array with objects. - */ - objects: RemoteObject; - } - interface GlobalLexicalScopeNamesReturnType { - names: string[]; - } - interface ExecutionContextCreatedEventDataType { - /** - * A newly created execution context. - */ - context: ExecutionContextDescription; - } - interface ExecutionContextDestroyedEventDataType { - /** - * Id of the destroyed context - */ - executionContextId: ExecutionContextId; - } - interface ExceptionThrownEventDataType { - /** - * Timestamp of the exception. - */ - timestamp: Timestamp; - exceptionDetails: ExceptionDetails; - } - interface ExceptionRevokedEventDataType { - /** - * Reason describing why exception was revoked. - */ - reason: string; - /** - * The id of revoked exception, as reported in exceptionThrown. - */ - exceptionId: number; - } - interface ConsoleAPICalledEventDataType { - /** - * Type of the call. - */ - type: string; - /** - * Call arguments. - */ - args: RemoteObject[]; - /** - * Identifier of the context where the call was made. - */ - executionContextId: ExecutionContextId; - /** - * Call timestamp. - */ - timestamp: Timestamp; - /** - * Stack trace captured when the call was made. - */ - stackTrace?: StackTrace | undefined; - /** - * Console context descriptor for calls on non-default console context (not console.*): 'anonymous#unique-logger-id' for call on unnamed context, 'name#unique-logger-id' for call on named context. - * @experimental - */ - context?: string | undefined; - } - interface InspectRequestedEventDataType { - object: RemoteObject; - hints: {}; - } - } - namespace Debugger { - /** - * Breakpoint identifier. - */ - type BreakpointId = string; - /** - * Call frame identifier. - */ - type CallFrameId = string; - /** - * Location in the source code. - */ - interface Location { - /** - * Script identifier as reported in the Debugger.scriptParsed. - */ - scriptId: Runtime.ScriptId; - /** - * Line number in the script (0-based). - */ - lineNumber: number; - /** - * Column number in the script (0-based). - */ - columnNumber?: number | undefined; - } - /** - * Location in the source code. - * @experimental - */ - interface ScriptPosition { - lineNumber: number; - columnNumber: number; - } - /** - * JavaScript call frame. Array of call frames form the call stack. - */ - interface CallFrame { - /** - * Call frame identifier. This identifier is only valid while the virtual machine is paused. - */ - callFrameId: CallFrameId; - /** - * Name of the JavaScript function called on this call frame. - */ - functionName: string; - /** - * Location in the source code. - */ - functionLocation?: Location | undefined; - /** - * Location in the source code. - */ - location: Location; - /** - * JavaScript script name or url. - */ - url: string; - /** - * Scope chain for this call frame. - */ - scopeChain: Scope[]; - /** - * this object for this call frame. 
- */ - this: Runtime.RemoteObject; - /** - * The value being returned, if the function is at return point. - */ - returnValue?: Runtime.RemoteObject | undefined; - } - /** - * Scope description. - */ - interface Scope { - /** - * Scope type. - */ - type: string; - /** - * Object representing the scope. For global and with scopes it represents the actual object; for the rest of the scopes, it is artificial transient object enumerating scope variables as its properties. - */ - object: Runtime.RemoteObject; - name?: string | undefined; - /** - * Location in the source code where scope starts - */ - startLocation?: Location | undefined; - /** - * Location in the source code where scope ends - */ - endLocation?: Location | undefined; - } - /** - * Search match for resource. - */ - interface SearchMatch { - /** - * Line number in resource content. - */ - lineNumber: number; - /** - * Line with match content. - */ - lineContent: string; - } - interface BreakLocation { - /** - * Script identifier as reported in the Debugger.scriptParsed. - */ - scriptId: Runtime.ScriptId; - /** - * Line number in the script (0-based). - */ - lineNumber: number; - /** - * Column number in the script (0-based). - */ - columnNumber?: number | undefined; - type?: string | undefined; - } - interface SetBreakpointsActiveParameterType { - /** - * New value for breakpoints active state. - */ - active: boolean; - } - interface SetSkipAllPausesParameterType { - /** - * New value for skip pauses state. - */ - skip: boolean; - } - interface SetBreakpointByUrlParameterType { - /** - * Line number to set breakpoint at. - */ - lineNumber: number; - /** - * URL of the resources to set breakpoint on. - */ - url?: string | undefined; - /** - * Regex pattern for the URLs of the resources to set breakpoints on. Either url or urlRegex must be specified. - */ - urlRegex?: string | undefined; - /** - * Script hash of the resources to set breakpoint on. - */ - scriptHash?: string | undefined; - /** - * Offset in the line to set breakpoint at. - */ - columnNumber?: number | undefined; - /** - * Expression to use as a breakpoint condition. When specified, debugger will only stop on the breakpoint if this expression evaluates to true. - */ - condition?: string | undefined; - } - interface SetBreakpointParameterType { - /** - * Location to set breakpoint in. - */ - location: Location; - /** - * Expression to use as a breakpoint condition. When specified, debugger will only stop on the breakpoint if this expression evaluates to true. - */ - condition?: string | undefined; - } - interface RemoveBreakpointParameterType { - breakpointId: BreakpointId; - } - interface GetPossibleBreakpointsParameterType { - /** - * Start of range to search possible breakpoint locations in. - */ - start: Location; - /** - * End of range to search possible breakpoint locations in (excluding). When not specified, end of scripts is used as end of range. - */ - end?: Location | undefined; - /** - * Only consider locations which are in the same (non-nested) function as start. - */ - restrictToFunction?: boolean | undefined; - } - interface ContinueToLocationParameterType { - /** - * Location to continue to. - */ - location: Location; - targetCallFrames?: string | undefined; - } - interface PauseOnAsyncCallParameterType { - /** - * Debugger will pause when async call with given stack trace is started. 
- */ - parentStackTraceId: Runtime.StackTraceId; - } - interface StepIntoParameterType { - /** - * Debugger will issue additional Debugger.paused notification if any async task is scheduled before next pause. - * @experimental - */ - breakOnAsyncCall?: boolean | undefined; - } - interface GetStackTraceParameterType { - stackTraceId: Runtime.StackTraceId; - } - interface SearchInContentParameterType { - /** - * Id of the script to search in. - */ - scriptId: Runtime.ScriptId; - /** - * String to search for. - */ - query: string; - /** - * If true, search is case sensitive. - */ - caseSensitive?: boolean | undefined; - /** - * If true, treats string parameter as regex. - */ - isRegex?: boolean | undefined; - } - interface SetScriptSourceParameterType { - /** - * Id of the script to edit. - */ - scriptId: Runtime.ScriptId; - /** - * New content of the script. - */ - scriptSource: string; - /** - * If true the change will not actually be applied. Dry run may be used to get result description without actually modifying the code. - */ - dryRun?: boolean | undefined; - } - interface RestartFrameParameterType { - /** - * Call frame identifier to evaluate on. - */ - callFrameId: CallFrameId; - } - interface GetScriptSourceParameterType { - /** - * Id of the script to get source for. - */ - scriptId: Runtime.ScriptId; - } - interface SetPauseOnExceptionsParameterType { - /** - * Pause on exceptions mode. - */ - state: string; - } - interface EvaluateOnCallFrameParameterType { - /** - * Call frame identifier to evaluate on. - */ - callFrameId: CallFrameId; - /** - * Expression to evaluate. - */ - expression: string; - /** - * String object group name to put result into (allows rapid releasing resulting object handles using releaseObjectGroup). - */ - objectGroup?: string | undefined; - /** - * Specifies whether command line API should be available to the evaluated expression, defaults to false. - */ - includeCommandLineAPI?: boolean | undefined; - /** - * In silent mode exceptions thrown during evaluation are not reported and do not pause execution. Overrides setPauseOnException state. - */ - silent?: boolean | undefined; - /** - * Whether the result is expected to be a JSON object that should be sent by value. - */ - returnByValue?: boolean | undefined; - /** - * Whether preview should be generated for the result. - * @experimental - */ - generatePreview?: boolean | undefined; - /** - * Whether to throw an exception if side effect cannot be ruled out during evaluation. - */ - throwOnSideEffect?: boolean | undefined; - } - interface SetVariableValueParameterType { - /** - * 0-based number of scope as was listed in scope chain. Only 'local', 'closure' and 'catch' scope types are allowed. Other scopes could be manipulated manually. - */ - scopeNumber: number; - /** - * Variable name. - */ - variableName: string; - /** - * New variable value. - */ - newValue: Runtime.CallArgument; - /** - * Id of callframe that holds variable. - */ - callFrameId: CallFrameId; - } - interface SetReturnValueParameterType { - /** - * New return value. - */ - newValue: Runtime.CallArgument; - } - interface SetAsyncCallStackDepthParameterType { - /** - * Maximum depth of async call stacks. Setting to 0 will effectively disable collecting async call stacks (default). - */ - maxDepth: number; - } - interface SetBlackboxPatternsParameterType { - /** - * Array of regexps that will be used to check script url for blackbox state. 
- */ - patterns: string[]; - } - interface SetBlackboxedRangesParameterType { - /** - * Id of the script. - */ - scriptId: Runtime.ScriptId; - positions: ScriptPosition[]; - } - interface EnableReturnType { - /** - * Unique identifier of the debugger. - * @experimental - */ - debuggerId: Runtime.UniqueDebuggerId; - } - interface SetBreakpointByUrlReturnType { - /** - * Id of the created breakpoint for further reference. - */ - breakpointId: BreakpointId; - /** - * List of the locations this breakpoint resolved into upon addition. - */ - locations: Location[]; - } - interface SetBreakpointReturnType { - /** - * Id of the created breakpoint for further reference. - */ - breakpointId: BreakpointId; - /** - * Location this breakpoint resolved into. - */ - actualLocation: Location; - } - interface GetPossibleBreakpointsReturnType { - /** - * List of the possible breakpoint locations. - */ - locations: BreakLocation[]; - } - interface GetStackTraceReturnType { - stackTrace: Runtime.StackTrace; - } - interface SearchInContentReturnType { - /** - * List of search matches. - */ - result: SearchMatch[]; - } - interface SetScriptSourceReturnType { - /** - * New stack trace in case editing has happened while VM was stopped. - */ - callFrames?: CallFrame[] | undefined; - /** - * Whether current call stack was modified after applying the changes. - */ - stackChanged?: boolean | undefined; - /** - * Async stack trace, if any. - */ - asyncStackTrace?: Runtime.StackTrace | undefined; - /** - * Async stack trace, if any. - * @experimental - */ - asyncStackTraceId?: Runtime.StackTraceId | undefined; - /** - * Exception details if any. - */ - exceptionDetails?: Runtime.ExceptionDetails | undefined; - } - interface RestartFrameReturnType { - /** - * New stack trace. - */ - callFrames: CallFrame[]; - /** - * Async stack trace, if any. - */ - asyncStackTrace?: Runtime.StackTrace | undefined; - /** - * Async stack trace, if any. - * @experimental - */ - asyncStackTraceId?: Runtime.StackTraceId | undefined; - } - interface GetScriptSourceReturnType { - /** - * Script source. - */ - scriptSource: string; - } - interface EvaluateOnCallFrameReturnType { - /** - * Object wrapper for the evaluation result. - */ - result: Runtime.RemoteObject; - /** - * Exception details. - */ - exceptionDetails?: Runtime.ExceptionDetails | undefined; - } - interface ScriptParsedEventDataType { - /** - * Identifier of the script parsed. - */ - scriptId: Runtime.ScriptId; - /** - * URL or name of the script parsed (if any). - */ - url: string; - /** - * Line offset of the script within the resource with given URL (for script tags). - */ - startLine: number; - /** - * Column offset of the script within the resource with given URL. - */ - startColumn: number; - /** - * Last line of the script. - */ - endLine: number; - /** - * Length of the last line of the script. - */ - endColumn: number; - /** - * Specifies script creation context. - */ - executionContextId: Runtime.ExecutionContextId; - /** - * Content hash of the script. - */ - hash: string; - /** - * Embedder-specific auxiliary data. - */ - executionContextAuxData?: {} | undefined; - /** - * True, if this script is generated as a result of the live edit operation. - * @experimental - */ - isLiveEdit?: boolean | undefined; - /** - * URL of source map associated with script (if any). - */ - sourceMapURL?: string | undefined; - /** - * True, if this script has sourceURL. - */ - hasSourceURL?: boolean | undefined; - /** - * True, if this script is ES6 module. 
- */ - isModule?: boolean | undefined; - /** - * This script length. - */ - length?: number | undefined; - /** - * JavaScript top stack frame of where the script parsed event was triggered if available. - * @experimental - */ - stackTrace?: Runtime.StackTrace | undefined; - } - interface ScriptFailedToParseEventDataType { - /** - * Identifier of the script parsed. - */ - scriptId: Runtime.ScriptId; - /** - * URL or name of the script parsed (if any). - */ - url: string; - /** - * Line offset of the script within the resource with given URL (for script tags). - */ - startLine: number; - /** - * Column offset of the script within the resource with given URL. - */ - startColumn: number; - /** - * Last line of the script. - */ - endLine: number; - /** - * Length of the last line of the script. - */ - endColumn: number; - /** - * Specifies script creation context. - */ - executionContextId: Runtime.ExecutionContextId; - /** - * Content hash of the script. - */ - hash: string; - /** - * Embedder-specific auxiliary data. - */ - executionContextAuxData?: {} | undefined; - /** - * URL of source map associated with script (if any). - */ - sourceMapURL?: string | undefined; - /** - * True, if this script has sourceURL. - */ - hasSourceURL?: boolean | undefined; - /** - * True, if this script is ES6 module. - */ - isModule?: boolean | undefined; - /** - * This script length. - */ - length?: number | undefined; - /** - * JavaScript top stack frame of where the script parsed event was triggered if available. - * @experimental - */ - stackTrace?: Runtime.StackTrace | undefined; - } - interface BreakpointResolvedEventDataType { - /** - * Breakpoint unique identifier. - */ - breakpointId: BreakpointId; - /** - * Actual breakpoint location. - */ - location: Location; - } - interface PausedEventDataType { - /** - * Call stack the virtual machine stopped on. - */ - callFrames: CallFrame[]; - /** - * Pause reason. - */ - reason: string; - /** - * Object containing break-specific auxiliary properties. - */ - data?: {} | undefined; - /** - * Hit breakpoints IDs - */ - hitBreakpoints?: string[] | undefined; - /** - * Async stack trace, if any. - */ - asyncStackTrace?: Runtime.StackTrace | undefined; - /** - * Async stack trace, if any. - * @experimental - */ - asyncStackTraceId?: Runtime.StackTraceId | undefined; - /** - * Just scheduled async call will have this stack trace as parent stack during async execution. This field is available only after Debugger.stepInto call with breakOnAsynCall flag. - * @experimental - */ - asyncCallStackTraceId?: Runtime.StackTraceId | undefined; - } - } - namespace Console { - /** - * Console message. - */ - interface ConsoleMessage { - /** - * Message source. - */ - source: string; - /** - * Message severity. - */ - level: string; - /** - * Message text. - */ - text: string; - /** - * URL of the message origin. - */ - url?: string | undefined; - /** - * Line number in the resource that generated this message (1-based). - */ - line?: number | undefined; - /** - * Column number in the resource that generated this message (1-based). - */ - column?: number | undefined; - } - interface MessageAddedEventDataType { - /** - * Console message that has been added. - */ - message: ConsoleMessage; - } - } - namespace Profiler { - /** - * Profile node. Holds callsite information, execution statistics and child nodes. - */ - interface ProfileNode { - /** - * Unique id of the node. - */ - id: number; - /** - * Function location. 
- */ - callFrame: Runtime.CallFrame; - /** - * Number of samples where this node was on top of the call stack. - */ - hitCount?: number | undefined; - /** - * Child node ids. - */ - children?: number[] | undefined; - /** - * The reason of being not optimized. The function may be deoptimized or marked as don't optimize. - */ - deoptReason?: string | undefined; - /** - * An array of source position ticks. - */ - positionTicks?: PositionTickInfo[] | undefined; - } - /** - * Profile. - */ - interface Profile { - /** - * The list of profile nodes. First item is the root node. - */ - nodes: ProfileNode[]; - /** - * Profiling start timestamp in microseconds. - */ - startTime: number; - /** - * Profiling end timestamp in microseconds. - */ - endTime: number; - /** - * Ids of samples top nodes. - */ - samples?: number[] | undefined; - /** - * Time intervals between adjacent samples in microseconds. The first delta is relative to the profile startTime. - */ - timeDeltas?: number[] | undefined; - } - /** - * Specifies a number of samples attributed to a certain source position. - */ - interface PositionTickInfo { - /** - * Source line number (1-based). - */ - line: number; - /** - * Number of samples attributed to the source line. - */ - ticks: number; - } - /** - * Coverage data for a source range. - */ - interface CoverageRange { - /** - * JavaScript script source offset for the range start. - */ - startOffset: number; - /** - * JavaScript script source offset for the range end. - */ - endOffset: number; - /** - * Collected execution count of the source range. - */ - count: number; - } - /** - * Coverage data for a JavaScript function. - */ - interface FunctionCoverage { - /** - * JavaScript function name. - */ - functionName: string; - /** - * Source ranges inside the function with coverage data. - */ - ranges: CoverageRange[]; - /** - * Whether coverage data for this function has block granularity. - */ - isBlockCoverage: boolean; - } - /** - * Coverage data for a JavaScript script. - */ - interface ScriptCoverage { - /** - * JavaScript script id. - */ - scriptId: Runtime.ScriptId; - /** - * JavaScript script name or url. - */ - url: string; - /** - * Functions contained in the script that has coverage data. - */ - functions: FunctionCoverage[]; - } - /** - * Describes a type collected during runtime. - * @experimental - */ - interface TypeObject { - /** - * Name of a type collected with type profiling. - */ - name: string; - } - /** - * Source offset and types for a parameter or return value. - * @experimental - */ - interface TypeProfileEntry { - /** - * Source offset of the parameter or end of function for return values. - */ - offset: number; - /** - * The types for this parameter or return value. - */ - types: TypeObject[]; - } - /** - * Type profile data collected during runtime for a JavaScript script. - * @experimental - */ - interface ScriptTypeProfile { - /** - * JavaScript script id. - */ - scriptId: Runtime.ScriptId; - /** - * JavaScript script name or url. - */ - url: string; - /** - * Type profile entries for parameters and return values of the functions in the script. - */ - entries: TypeProfileEntry[]; - } - interface SetSamplingIntervalParameterType { - /** - * New sampling interval in microseconds. - */ - interval: number; - } - interface StartPreciseCoverageParameterType { - /** - * Collect accurate call counts beyond simple 'covered' or 'not covered'. - */ - callCount?: boolean | undefined; - /** - * Collect block-based coverage. 
- */ - detailed?: boolean | undefined; - } - interface StopReturnType { - /** - * Recorded profile. - */ - profile: Profile; - } - interface TakePreciseCoverageReturnType { - /** - * Coverage data for the current isolate. - */ - result: ScriptCoverage[]; - } - interface GetBestEffortCoverageReturnType { - /** - * Coverage data for the current isolate. - */ - result: ScriptCoverage[]; - } - interface TakeTypeProfileReturnType { - /** - * Type profile for all scripts since startTypeProfile() was turned on. - */ - result: ScriptTypeProfile[]; - } - interface ConsoleProfileStartedEventDataType { - id: string; - /** - * Location of console.profile(). - */ - location: Debugger.Location; - /** - * Profile title passed as an argument to console.profile(). - */ - title?: string | undefined; - } - interface ConsoleProfileFinishedEventDataType { - id: string; - /** - * Location of console.profileEnd(). - */ - location: Debugger.Location; - profile: Profile; - /** - * Profile title passed as an argument to console.profile(). - */ - title?: string | undefined; - } - } - namespace HeapProfiler { - /** - * Heap snapshot object id. - */ - type HeapSnapshotObjectId = string; - /** - * Sampling Heap Profile node. Holds callsite information, allocation statistics and child nodes. - */ - interface SamplingHeapProfileNode { - /** - * Function location. - */ - callFrame: Runtime.CallFrame; - /** - * Allocations size in bytes for the node excluding children. - */ - selfSize: number; - /** - * Child nodes. - */ - children: SamplingHeapProfileNode[]; - } - /** - * Profile. - */ - interface SamplingHeapProfile { - head: SamplingHeapProfileNode; - } - interface StartTrackingHeapObjectsParameterType { - trackAllocations?: boolean | undefined; - } - interface StopTrackingHeapObjectsParameterType { - /** - * If true 'reportHeapSnapshotProgress' events will be generated while snapshot is being taken when the tracking is stopped. - */ - reportProgress?: boolean | undefined; - } - interface TakeHeapSnapshotParameterType { - /** - * If true 'reportHeapSnapshotProgress' events will be generated while snapshot is being taken. - */ - reportProgress?: boolean | undefined; - } - interface GetObjectByHeapObjectIdParameterType { - objectId: HeapSnapshotObjectId; - /** - * Symbolic group name that can be used to release multiple objects. - */ - objectGroup?: string | undefined; - } - interface AddInspectedHeapObjectParameterType { - /** - * Heap snapshot object id to be accessible by means of $x command line API. - */ - heapObjectId: HeapSnapshotObjectId; - } - interface GetHeapObjectIdParameterType { - /** - * Identifier of the object to get heap object id for. - */ - objectId: Runtime.RemoteObjectId; - } - interface StartSamplingParameterType { - /** - * Average sample interval in bytes. Poisson distribution is used for the intervals. The default value is 32768 bytes. - */ - samplingInterval?: number | undefined; - } - interface GetObjectByHeapObjectIdReturnType { - /** - * Evaluation result. - */ - result: Runtime.RemoteObject; - } - interface GetHeapObjectIdReturnType { - /** - * Id of the heap snapshot object corresponding to the passed remote object id. - */ - heapSnapshotObjectId: HeapSnapshotObjectId; - } - interface StopSamplingReturnType { - /** - * Recorded sampling heap profile. - */ - profile: SamplingHeapProfile; - } - interface GetSamplingProfileReturnType { - /** - * Return the sampling profile being collected. 
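In contrast to the CPU profile's flat node list, `SamplingHeapProfileNode` is a real tree (`children` holds nested nodes) and `selfSize` excludes descendants, so totals have to be accumulated recursively starting from `SamplingHeapProfile.head`. A small illustrative walker, with names of my choosing:

```ts
// Local structural type mirroring the parts of SamplingHeapProfileNode used here.
interface SamplingNodeLike {
  callFrame: { functionName: string };
  selfSize: number;
  children: SamplingNodeLike[];
}

// Recursively sum sampled allocation bytes per function name.
function allocationByFunction(
  node: SamplingNodeLike,
  out: Map<string, number> = new Map()
): Map<string, number> {
  const name = node.callFrame.functionName || '(anonymous)';
  out.set(name, (out.get(name) ?? 0) + node.selfSize);
  for (const child of node.children) {
    allocationByFunction(child, out);
  }
  return out;
}
```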
- */ - profile: SamplingHeapProfile; - } - interface AddHeapSnapshotChunkEventDataType { - chunk: string; - } - interface ReportHeapSnapshotProgressEventDataType { - done: number; - total: number; - finished?: boolean | undefined; - } - interface LastSeenObjectIdEventDataType { - lastSeenObjectId: number; - timestamp: number; - } - interface HeapStatsUpdateEventDataType { - /** - * An array of triplets. Each triplet describes a fragment. The first integer is the fragment index, the second integer is a total count of objects for the fragment, the third integer is a total size of the objects for the fragment. - */ - statsUpdate: number[]; - } - } - namespace NodeTracing { - interface TraceConfig { - /** - * Controls how the trace buffer stores data. - */ - recordMode?: string | undefined; - /** - * Included category filters. - */ - includedCategories: string[]; - } - interface StartParameterType { - traceConfig: TraceConfig; - } - interface GetCategoriesReturnType { - /** - * A list of supported tracing categories. - */ - categories: string[]; - } - interface DataCollectedEventDataType { - value: Array<{}>; - } - } - namespace NodeWorker { - type WorkerID = string; - /** - * Unique identifier of attached debugging session. - */ - type SessionID = string; - interface WorkerInfo { - workerId: WorkerID; - type: string; - title: string; - url: string; - } - interface SendMessageToWorkerParameterType { - message: string; - /** - * Identifier of the session. - */ - sessionId: SessionID; - } - interface EnableParameterType { - /** - * Whether to new workers should be paused until the frontend sends `Runtime.runIfWaitingForDebugger` - * message to run them. - */ - waitForDebuggerOnStart: boolean; - } - interface DetachParameterType { - sessionId: SessionID; - } - interface AttachedToWorkerEventDataType { - /** - * Identifier assigned to the session used to send/receive messages. - */ - sessionId: SessionID; - workerInfo: WorkerInfo; - waitingForDebugger: boolean; - } - interface DetachedFromWorkerEventDataType { - /** - * Detached session identifier. - */ - sessionId: SessionID; - } - interface ReceivedMessageFromWorkerEventDataType { - /** - * Identifier of a session which sends a message. - */ - sessionId: SessionID; - message: string; - } - } - namespace NodeRuntime { - interface NotifyWhenWaitingForDisconnectParameterType { - enabled: boolean; - } - } - /** - * The `inspector.Session` is used for dispatching messages to the V8 inspector - * back-end and receiving message responses and notifications. - */ - class Session extends EventEmitter { - /** - * Create a new instance of the inspector.Session class. - * The inspector session needs to be connected through session.connect() before the messages can be dispatched to the inspector backend. - */ - constructor(); - /** - * Connects a session to the inspector back-end. - * @since v8.0.0 - */ - connect(): void; - /** - * Immediately close the session. All pending message callbacks will be called - * with an error. `session.connect()` will need to be called to be able to send - * messages again. Reconnected session will lose all inspector state, such as - * enabled agents or configured breakpoints. - * @since v8.0.0 - */ - disconnect(): void; - /** - * Posts a message to the inspector back-end. `callback` will be notified when - * a response is received. `callback` is a function that accepts two optional - * arguments: error and message-specific result. 
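As described above, `session.post()` takes a Node-style error-first callback, so it can be wrapped in a promise where async/await is preferred. A minimal sketch of such a wrapper follows; the name `postAsync` is illustrative and not part of the module (newer Node.js releases also ship a promise-based `node:inspector/promises` variant).

```ts
import { Session } from 'node:inspector';

const session = new Session();
session.connect();

// Promise wrapper over the callback-based session.post().
function postAsync(method: string, params?: object): Promise<unknown> {
  return new Promise((resolve, reject) => {
    session.post(method, params, (err, result) =>
      err ? reject(err) : resolve(result)
    );
  });
}

// Example: evaluate an expression in the inspected process.
postAsync('Runtime.evaluate', { expression: '6 * 7' })
  .then((result) => console.log(result)) // { result: { type: 'number', value: 42, ... } }
  .catch(console.error);
```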
- * - * ```js - * session.post('Runtime.evaluate', { expression: '2 + 2' }, - * (error, { result }) => console.log(result)); - * // Output: { type: 'number', value: 4, description: '4' } - * ``` - * - * The latest version of the V8 inspector protocol is published on the [Chrome DevTools Protocol Viewer](https://chromedevtools.github.io/devtools-protocol/v8/). - * - * Node.js inspector supports all the Chrome DevTools Protocol domains declared - * by V8\. Chrome DevTools Protocol domain provides an interface for interacting - * with one of the runtime agents used to inspect the application state and listen - * to the run-time events. - * - * ## Example usage - * - * Apart from the debugger, various V8 Profilers are available through the DevTools - * protocol. - * @since v8.0.0 - */ - post(method: string, params?: {}, callback?: (err: Error | null, params?: {}) => void): void; - post(method: string, callback?: (err: Error | null, params?: {}) => void): void; - /** - * Returns supported domains. - */ - post(method: 'Schema.getDomains', callback?: (err: Error | null, params: Schema.GetDomainsReturnType) => void): void; - /** - * Evaluates expression on global object. - */ - post(method: 'Runtime.evaluate', params?: Runtime.EvaluateParameterType, callback?: (err: Error | null, params: Runtime.EvaluateReturnType) => void): void; - post(method: 'Runtime.evaluate', callback?: (err: Error | null, params: Runtime.EvaluateReturnType) => void): void; - /** - * Add handler to promise with given promise object id. - */ - post(method: 'Runtime.awaitPromise', params?: Runtime.AwaitPromiseParameterType, callback?: (err: Error | null, params: Runtime.AwaitPromiseReturnType) => void): void; - post(method: 'Runtime.awaitPromise', callback?: (err: Error | null, params: Runtime.AwaitPromiseReturnType) => void): void; - /** - * Calls function with given declaration on the given object. Object group of the result is inherited from the target object. - */ - post(method: 'Runtime.callFunctionOn', params?: Runtime.CallFunctionOnParameterType, callback?: (err: Error | null, params: Runtime.CallFunctionOnReturnType) => void): void; - post(method: 'Runtime.callFunctionOn', callback?: (err: Error | null, params: Runtime.CallFunctionOnReturnType) => void): void; - /** - * Returns properties of a given object. Object group of the result is inherited from the target object. - */ - post(method: 'Runtime.getProperties', params?: Runtime.GetPropertiesParameterType, callback?: (err: Error | null, params: Runtime.GetPropertiesReturnType) => void): void; - post(method: 'Runtime.getProperties', callback?: (err: Error | null, params: Runtime.GetPropertiesReturnType) => void): void; - /** - * Releases remote object with given id. - */ - post(method: 'Runtime.releaseObject', params?: Runtime.ReleaseObjectParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Runtime.releaseObject', callback?: (err: Error | null) => void): void; - /** - * Releases all remote objects that belong to a given group. - */ - post(method: 'Runtime.releaseObjectGroup', params?: Runtime.ReleaseObjectGroupParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Runtime.releaseObjectGroup', callback?: (err: Error | null) => void): void; - /** - * Tells inspected instance to run if it was waiting for debugger to attach. 
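As the comment above notes, the V8 profilers are driven through the same `post()` interface. A minimal CPU-profiling sketch under that assumption; the output file name is arbitrary, and the written JSON has the `Profiler.Profile` shape declared earlier.

```ts
import { Session } from 'node:inspector';
import { writeFileSync } from 'node:fs';

const session = new Session();
session.connect();

session.post('Profiler.enable', () => {
  session.post('Profiler.start', () => {
    // ... run the workload to be profiled here ...
    session.post('Profiler.stop', (err, result) => {
      if (!err) {
        // Chrome DevTools can load this file directly.
        writeFileSync('./profile.cpuprofile', JSON.stringify(result.profile));
      }
      session.disconnect();
    });
  });
});
```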
- */ - post(method: 'Runtime.runIfWaitingForDebugger', callback?: (err: Error | null) => void): void; - /** - * Enables reporting of execution contexts creation by means of executionContextCreated event. When the reporting gets enabled the event will be sent immediately for each existing execution context. - */ - post(method: 'Runtime.enable', callback?: (err: Error | null) => void): void; - /** - * Disables reporting of execution contexts creation. - */ - post(method: 'Runtime.disable', callback?: (err: Error | null) => void): void; - /** - * Discards collected exceptions and console API calls. - */ - post(method: 'Runtime.discardConsoleEntries', callback?: (err: Error | null) => void): void; - /** - * @experimental - */ - post(method: 'Runtime.setCustomObjectFormatterEnabled', params?: Runtime.SetCustomObjectFormatterEnabledParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Runtime.setCustomObjectFormatterEnabled', callback?: (err: Error | null) => void): void; - /** - * Compiles expression. - */ - post(method: 'Runtime.compileScript', params?: Runtime.CompileScriptParameterType, callback?: (err: Error | null, params: Runtime.CompileScriptReturnType) => void): void; - post(method: 'Runtime.compileScript', callback?: (err: Error | null, params: Runtime.CompileScriptReturnType) => void): void; - /** - * Runs script with given id in a given context. - */ - post(method: 'Runtime.runScript', params?: Runtime.RunScriptParameterType, callback?: (err: Error | null, params: Runtime.RunScriptReturnType) => void): void; - post(method: 'Runtime.runScript', callback?: (err: Error | null, params: Runtime.RunScriptReturnType) => void): void; - post(method: 'Runtime.queryObjects', params?: Runtime.QueryObjectsParameterType, callback?: (err: Error | null, params: Runtime.QueryObjectsReturnType) => void): void; - post(method: 'Runtime.queryObjects', callback?: (err: Error | null, params: Runtime.QueryObjectsReturnType) => void): void; - /** - * Returns all let, const and class variables from global scope. - */ - post( - method: 'Runtime.globalLexicalScopeNames', - params?: Runtime.GlobalLexicalScopeNamesParameterType, - callback?: (err: Error | null, params: Runtime.GlobalLexicalScopeNamesReturnType) => void - ): void; - post(method: 'Runtime.globalLexicalScopeNames', callback?: (err: Error | null, params: Runtime.GlobalLexicalScopeNamesReturnType) => void): void; - /** - * Enables debugger for the given page. Clients should not assume that the debugging has been enabled until the result for this command is received. - */ - post(method: 'Debugger.enable', callback?: (err: Error | null, params: Debugger.EnableReturnType) => void): void; - /** - * Disables debugger for given page. - */ - post(method: 'Debugger.disable', callback?: (err: Error | null) => void): void; - /** - * Activates / deactivates all breakpoints on the page. - */ - post(method: 'Debugger.setBreakpointsActive', params?: Debugger.SetBreakpointsActiveParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.setBreakpointsActive', callback?: (err: Error | null) => void): void; - /** - * Makes page not interrupt on any pauses (breakpoint, exception, dom exception etc). 
- */ - post(method: 'Debugger.setSkipAllPauses', params?: Debugger.SetSkipAllPausesParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.setSkipAllPauses', callback?: (err: Error | null) => void): void; - /** - * Sets JavaScript breakpoint at given location specified either by URL or URL regex. Once this command is issued, all existing parsed scripts will have breakpoints resolved and returned in locations property. Further matching script parsing will result in subsequent breakpointResolved events issued. This logical breakpoint will survive page reloads. - */ - post(method: 'Debugger.setBreakpointByUrl', params?: Debugger.SetBreakpointByUrlParameterType, callback?: (err: Error | null, params: Debugger.SetBreakpointByUrlReturnType) => void): void; - post(method: 'Debugger.setBreakpointByUrl', callback?: (err: Error | null, params: Debugger.SetBreakpointByUrlReturnType) => void): void; - /** - * Sets JavaScript breakpoint at a given location. - */ - post(method: 'Debugger.setBreakpoint', params?: Debugger.SetBreakpointParameterType, callback?: (err: Error | null, params: Debugger.SetBreakpointReturnType) => void): void; - post(method: 'Debugger.setBreakpoint', callback?: (err: Error | null, params: Debugger.SetBreakpointReturnType) => void): void; - /** - * Removes JavaScript breakpoint. - */ - post(method: 'Debugger.removeBreakpoint', params?: Debugger.RemoveBreakpointParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.removeBreakpoint', callback?: (err: Error | null) => void): void; - /** - * Returns possible locations for breakpoint. scriptId in start and end range locations should be the same. - */ - post( - method: 'Debugger.getPossibleBreakpoints', - params?: Debugger.GetPossibleBreakpointsParameterType, - callback?: (err: Error | null, params: Debugger.GetPossibleBreakpointsReturnType) => void - ): void; - post(method: 'Debugger.getPossibleBreakpoints', callback?: (err: Error | null, params: Debugger.GetPossibleBreakpointsReturnType) => void): void; - /** - * Continues execution until specific location is reached. - */ - post(method: 'Debugger.continueToLocation', params?: Debugger.ContinueToLocationParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.continueToLocation', callback?: (err: Error | null) => void): void; - /** - * @experimental - */ - post(method: 'Debugger.pauseOnAsyncCall', params?: Debugger.PauseOnAsyncCallParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.pauseOnAsyncCall', callback?: (err: Error | null) => void): void; - /** - * Steps over the statement. - */ - post(method: 'Debugger.stepOver', callback?: (err: Error | null) => void): void; - /** - * Steps into the function call. - */ - post(method: 'Debugger.stepInto', params?: Debugger.StepIntoParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.stepInto', callback?: (err: Error | null) => void): void; - /** - * Steps out of the function call. - */ - post(method: 'Debugger.stepOut', callback?: (err: Error | null) => void): void; - /** - * Stops on the next JavaScript statement. - */ - post(method: 'Debugger.pause', callback?: (err: Error | null) => void): void; - /** - * This method is deprecated - use Debugger.stepInto with breakOnAsyncCall and Debugger.pauseOnAsyncTask instead. Steps into next scheduled async task if any is scheduled before next pause. 
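The Debugger commands above compose into the usual flow: enable the domain, register a breakpoint, react to `Debugger.paused`, then resume. A rough sketch of that message flow is below; the script URL and line number are placeholders, and pausing the very thread the session runs on has practical caveats, so treat this as an outline rather than a recipe.

```ts
import { Session } from 'node:inspector';

const session = new Session();
session.connect();

// Fired whenever the VM stops on a breakpoint, exception, etc.
session.on('Debugger.paused', (notification: any) => {
  const { reason, hitBreakpoints } = notification.params;
  console.error('paused:', reason, hitBreakpoints);
  session.post('Debugger.resume');
});

session.post('Debugger.enable', (err) => {
  if (err) throw err;
  session.post(
    'Debugger.setBreakpointByUrl',
    { lineNumber: 10, url: 'file:///app/index.js' }, // lineNumber is 0-based
    (err, result) => {
      if (err) throw err;
      console.error('breakpoint set:', result.breakpointId, result.locations);
    }
  );
});
```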
Returns success when async task is actually scheduled, returns error if no task were scheduled or another scheduleStepIntoAsync was called. - * @experimental - */ - post(method: 'Debugger.scheduleStepIntoAsync', callback?: (err: Error | null) => void): void; - /** - * Resumes JavaScript execution. - */ - post(method: 'Debugger.resume', callback?: (err: Error | null) => void): void; - /** - * Returns stack trace with given stackTraceId. - * @experimental - */ - post(method: 'Debugger.getStackTrace', params?: Debugger.GetStackTraceParameterType, callback?: (err: Error | null, params: Debugger.GetStackTraceReturnType) => void): void; - post(method: 'Debugger.getStackTrace', callback?: (err: Error | null, params: Debugger.GetStackTraceReturnType) => void): void; - /** - * Searches for given string in script content. - */ - post(method: 'Debugger.searchInContent', params?: Debugger.SearchInContentParameterType, callback?: (err: Error | null, params: Debugger.SearchInContentReturnType) => void): void; - post(method: 'Debugger.searchInContent', callback?: (err: Error | null, params: Debugger.SearchInContentReturnType) => void): void; - /** - * Edits JavaScript source live. - */ - post(method: 'Debugger.setScriptSource', params?: Debugger.SetScriptSourceParameterType, callback?: (err: Error | null, params: Debugger.SetScriptSourceReturnType) => void): void; - post(method: 'Debugger.setScriptSource', callback?: (err: Error | null, params: Debugger.SetScriptSourceReturnType) => void): void; - /** - * Restarts particular call frame from the beginning. - */ - post(method: 'Debugger.restartFrame', params?: Debugger.RestartFrameParameterType, callback?: (err: Error | null, params: Debugger.RestartFrameReturnType) => void): void; - post(method: 'Debugger.restartFrame', callback?: (err: Error | null, params: Debugger.RestartFrameReturnType) => void): void; - /** - * Returns source for the script with given id. - */ - post(method: 'Debugger.getScriptSource', params?: Debugger.GetScriptSourceParameterType, callback?: (err: Error | null, params: Debugger.GetScriptSourceReturnType) => void): void; - post(method: 'Debugger.getScriptSource', callback?: (err: Error | null, params: Debugger.GetScriptSourceReturnType) => void): void; - /** - * Defines pause on exceptions state. Can be set to stop on all exceptions, uncaught exceptions or no exceptions. Initial pause on exceptions state is none. - */ - post(method: 'Debugger.setPauseOnExceptions', params?: Debugger.SetPauseOnExceptionsParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.setPauseOnExceptions', callback?: (err: Error | null) => void): void; - /** - * Evaluates expression on a given call frame. - */ - post(method: 'Debugger.evaluateOnCallFrame', params?: Debugger.EvaluateOnCallFrameParameterType, callback?: (err: Error | null, params: Debugger.EvaluateOnCallFrameReturnType) => void): void; - post(method: 'Debugger.evaluateOnCallFrame', callback?: (err: Error | null, params: Debugger.EvaluateOnCallFrameReturnType) => void): void; - /** - * Changes value of variable in a callframe. Object-based scopes are not supported and must be mutated manually. - */ - post(method: 'Debugger.setVariableValue', params?: Debugger.SetVariableValueParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.setVariableValue', callback?: (err: Error | null) => void): void; - /** - * Changes return value in top frame. Available only at return break position. 
- * @experimental - */ - post(method: 'Debugger.setReturnValue', params?: Debugger.SetReturnValueParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.setReturnValue', callback?: (err: Error | null) => void): void; - /** - * Enables or disables async call stacks tracking. - */ - post(method: 'Debugger.setAsyncCallStackDepth', params?: Debugger.SetAsyncCallStackDepthParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.setAsyncCallStackDepth', callback?: (err: Error | null) => void): void; - /** - * Replace previous blackbox patterns with passed ones. Forces backend to skip stepping/pausing in scripts with url matching one of the patterns. VM will try to leave blackboxed script by performing 'step in' several times, finally resorting to 'step out' if unsuccessful. - * @experimental - */ - post(method: 'Debugger.setBlackboxPatterns', params?: Debugger.SetBlackboxPatternsParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.setBlackboxPatterns', callback?: (err: Error | null) => void): void; - /** - * Makes backend skip steps in the script in blackboxed ranges. VM will try leave blacklisted scripts by performing 'step in' several times, finally resorting to 'step out' if unsuccessful. Positions array contains positions where blackbox state is changed. First interval isn't blackboxed. Array should be sorted. - * @experimental - */ - post(method: 'Debugger.setBlackboxedRanges', params?: Debugger.SetBlackboxedRangesParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Debugger.setBlackboxedRanges', callback?: (err: Error | null) => void): void; - /** - * Enables console domain, sends the messages collected so far to the client by means of the messageAdded notification. - */ - post(method: 'Console.enable', callback?: (err: Error | null) => void): void; - /** - * Disables console domain, prevents further console messages from being reported to the client. - */ - post(method: 'Console.disable', callback?: (err: Error | null) => void): void; - /** - * Does nothing. - */ - post(method: 'Console.clearMessages', callback?: (err: Error | null) => void): void; - post(method: 'Profiler.enable', callback?: (err: Error | null) => void): void; - post(method: 'Profiler.disable', callback?: (err: Error | null) => void): void; - /** - * Changes CPU profiler sampling interval. Must be called before CPU profiles recording started. - */ - post(method: 'Profiler.setSamplingInterval', params?: Profiler.SetSamplingIntervalParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Profiler.setSamplingInterval', callback?: (err: Error | null) => void): void; - post(method: 'Profiler.start', callback?: (err: Error | null) => void): void; - post(method: 'Profiler.stop', callback?: (err: Error | null, params: Profiler.StopReturnType) => void): void; - /** - * Enable precise code coverage. Coverage data for JavaScript executed before enabling precise code coverage may be incomplete. Enabling prevents running optimized code and resets execution counters. - */ - post(method: 'Profiler.startPreciseCoverage', params?: Profiler.StartPreciseCoverageParameterType, callback?: (err: Error | null) => void): void; - post(method: 'Profiler.startPreciseCoverage', callback?: (err: Error | null) => void): void; - /** - * Disable precise code coverage. Disabling releases unnecessary execution count records and allows executing optimized code. 
- */ - post(method: 'Profiler.stopPreciseCoverage', callback?: (err: Error | null) => void): void; - /** - * Collect coverage data for the current isolate, and resets execution counters. Precise code coverage needs to have started. - */ - post(method: 'Profiler.takePreciseCoverage', callback?: (err: Error | null, params: Profiler.TakePreciseCoverageReturnType) => void): void; - /** - * Collect coverage data for the current isolate. The coverage data may be incomplete due to garbage collection. - */ - post(method: 'Profiler.getBestEffortCoverage', callback?: (err: Error | null, params: Profiler.GetBestEffortCoverageReturnType) => void): void; - /** - * Enable type profile. - * @experimental - */ - post(method: 'Profiler.startTypeProfile', callback?: (err: Error | null) => void): void; - /** - * Disable type profile. Disabling releases type profile data collected so far. - * @experimental - */ - post(method: 'Profiler.stopTypeProfile', callback?: (err: Error | null) => void): void; - /** - * Collect type profile. - * @experimental - */ - post(method: 'Profiler.takeTypeProfile', callback?: (err: Error | null, params: Profiler.TakeTypeProfileReturnType) => void): void; - post(method: 'HeapProfiler.enable', callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.disable', callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.startTrackingHeapObjects', params?: HeapProfiler.StartTrackingHeapObjectsParameterType, callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.startTrackingHeapObjects', callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.stopTrackingHeapObjects', params?: HeapProfiler.StopTrackingHeapObjectsParameterType, callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.stopTrackingHeapObjects', callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.takeHeapSnapshot', params?: HeapProfiler.TakeHeapSnapshotParameterType, callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.takeHeapSnapshot', callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.collectGarbage', callback?: (err: Error | null) => void): void; - post( - method: 'HeapProfiler.getObjectByHeapObjectId', - params?: HeapProfiler.GetObjectByHeapObjectIdParameterType, - callback?: (err: Error | null, params: HeapProfiler.GetObjectByHeapObjectIdReturnType) => void - ): void; - post(method: 'HeapProfiler.getObjectByHeapObjectId', callback?: (err: Error | null, params: HeapProfiler.GetObjectByHeapObjectIdReturnType) => void): void; - /** - * Enables console to refer to the node with given id via $x (see Command Line API for more details $x functions). 
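`Profiler.startPreciseCoverage` and `Profiler.takePreciseCoverage` above are the pair used for on-demand coverage collection; the result uses the `ScriptCoverage`/`FunctionCoverage`/`CoverageRange` structure declared earlier. A brief sketch (the summary printed is just one way to read the data):

```ts
import { Session } from 'node:inspector';

const session = new Session();
session.connect();

session.post('Profiler.enable', () => {
  session.post(
    'Profiler.startPreciseCoverage',
    { callCount: true, detailed: true }, // per-call counts, block granularity
    () => {
      // ... exercise the code whose coverage should be measured ...
      session.post('Profiler.takePreciseCoverage', (err, coverage) => {
        if (err) throw err;
        for (const script of coverage.result) {
          const executed = script.functions.filter((fn) =>
            fn.ranges.some((range) => range.count > 0)
          ).length;
          console.log(`${script.url}: ${executed}/${script.functions.length} functions executed`);
        }
        session.post('Profiler.stopPreciseCoverage', () => session.disconnect());
      });
    }
  );
});
```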
- */ - post(method: 'HeapProfiler.addInspectedHeapObject', params?: HeapProfiler.AddInspectedHeapObjectParameterType, callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.addInspectedHeapObject', callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.getHeapObjectId', params?: HeapProfiler.GetHeapObjectIdParameterType, callback?: (err: Error | null, params: HeapProfiler.GetHeapObjectIdReturnType) => void): void; - post(method: 'HeapProfiler.getHeapObjectId', callback?: (err: Error | null, params: HeapProfiler.GetHeapObjectIdReturnType) => void): void; - post(method: 'HeapProfiler.startSampling', params?: HeapProfiler.StartSamplingParameterType, callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.startSampling', callback?: (err: Error | null) => void): void; - post(method: 'HeapProfiler.stopSampling', callback?: (err: Error | null, params: HeapProfiler.StopSamplingReturnType) => void): void; - post(method: 'HeapProfiler.getSamplingProfile', callback?: (err: Error | null, params: HeapProfiler.GetSamplingProfileReturnType) => void): void; - /** - * Gets supported tracing categories. - */ - post(method: 'NodeTracing.getCategories', callback?: (err: Error | null, params: NodeTracing.GetCategoriesReturnType) => void): void; - /** - * Start trace events collection. - */ - post(method: 'NodeTracing.start', params?: NodeTracing.StartParameterType, callback?: (err: Error | null) => void): void; - post(method: 'NodeTracing.start', callback?: (err: Error | null) => void): void; - /** - * Stop trace events collection. Remaining collected events will be sent as a sequence of - * dataCollected events followed by tracingComplete event. - */ - post(method: 'NodeTracing.stop', callback?: (err: Error | null) => void): void; - /** - * Sends protocol message over session with given id. - */ - post(method: 'NodeWorker.sendMessageToWorker', params?: NodeWorker.SendMessageToWorkerParameterType, callback?: (err: Error | null) => void): void; - post(method: 'NodeWorker.sendMessageToWorker', callback?: (err: Error | null) => void): void; - /** - * Instructs the inspector to attach to running workers. Will also attach to new workers - * as they start - */ - post(method: 'NodeWorker.enable', params?: NodeWorker.EnableParameterType, callback?: (err: Error | null) => void): void; - post(method: 'NodeWorker.enable', callback?: (err: Error | null) => void): void; - /** - * Detaches from all running workers and disables attaching to new workers as they are started. - */ - post(method: 'NodeWorker.disable', callback?: (err: Error | null) => void): void; - /** - * Detached from the worker with given sessionId. - */ - post(method: 'NodeWorker.detach', params?: NodeWorker.DetachParameterType, callback?: (err: Error | null) => void): void; - post(method: 'NodeWorker.detach', callback?: (err: Error | null) => void): void; - /** - * Enable the `NodeRuntime.waitingForDisconnect`. - */ - post(method: 'NodeRuntime.notifyWhenWaitingForDisconnect', params?: NodeRuntime.NotifyWhenWaitingForDisconnectParameterType, callback?: (err: Error | null) => void): void; - post(method: 'NodeRuntime.notifyWhenWaitingForDisconnect', callback?: (err: Error | null) => void): void; - // Events - addListener(event: string, listener: (...args: any[]) => void): this; - /** - * Emitted when any notification from the V8 Inspector is received. 
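Heap snapshot data is not returned by the `HeapProfiler.takeHeapSnapshot` command itself; it is streamed through `HeapProfiler.addHeapSnapshotChunk` notifications while the command is in flight, so a listener has to capture the chunks. A minimal sketch of that pattern (the output path is arbitrary; the resulting file can be opened in Chrome DevTools):

```ts
import { Session } from 'node:inspector';
import { closeSync, openSync, writeSync } from 'node:fs';

const session = new Session();
session.connect();

const fd = openSync('./app.heapsnapshot', 'w');

// Snapshot JSON arrives in chunks while takeHeapSnapshot is running.
session.on('HeapProfiler.addHeapSnapshotChunk', (notification: any) => {
  writeSync(fd, notification.params.chunk);
});

session.post('HeapProfiler.takeHeapSnapshot', (err) => {
  console.error('takeHeapSnapshot done:', err ?? 'ok');
  session.disconnect();
  closeSync(fd);
});
```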
- */ - addListener(event: 'inspectorNotification', listener: (message: InspectorNotification<{}>) => void): this; - /** - * Issued when new execution context is created. - */ - addListener(event: 'Runtime.executionContextCreated', listener: (message: InspectorNotification) => void): this; - /** - * Issued when execution context is destroyed. - */ - addListener(event: 'Runtime.executionContextDestroyed', listener: (message: InspectorNotification) => void): this; - /** - * Issued when all executionContexts were cleared in browser - */ - addListener(event: 'Runtime.executionContextsCleared', listener: () => void): this; - /** - * Issued when exception was thrown and unhandled. - */ - addListener(event: 'Runtime.exceptionThrown', listener: (message: InspectorNotification) => void): this; - /** - * Issued when unhandled exception was revoked. - */ - addListener(event: 'Runtime.exceptionRevoked', listener: (message: InspectorNotification) => void): this; - /** - * Issued when console API was called. - */ - addListener(event: 'Runtime.consoleAPICalled', listener: (message: InspectorNotification) => void): this; - /** - * Issued when object should be inspected (for example, as a result of inspect() command line API call). - */ - addListener(event: 'Runtime.inspectRequested', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine parses script. This event is also fired for all known and uncollected scripts upon enabling debugger. - */ - addListener(event: 'Debugger.scriptParsed', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine fails to parse the script. - */ - addListener(event: 'Debugger.scriptFailedToParse', listener: (message: InspectorNotification) => void): this; - /** - * Fired when breakpoint is resolved to an actual script and location. - */ - addListener(event: 'Debugger.breakpointResolved', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine stopped on breakpoint or exception or any other stop criteria. - */ - addListener(event: 'Debugger.paused', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine resumed execution. - */ - addListener(event: 'Debugger.resumed', listener: () => void): this; - /** - * Issued when new console message is added. - */ - addListener(event: 'Console.messageAdded', listener: (message: InspectorNotification) => void): this; - /** - * Sent when new profile recording is started using console.profile() call. - */ - addListener(event: 'Profiler.consoleProfileStarted', listener: (message: InspectorNotification) => void): this; - addListener(event: 'Profiler.consoleProfileFinished', listener: (message: InspectorNotification) => void): this; - addListener(event: 'HeapProfiler.addHeapSnapshotChunk', listener: (message: InspectorNotification) => void): this; - addListener(event: 'HeapProfiler.resetProfiles', listener: () => void): this; - addListener(event: 'HeapProfiler.reportHeapSnapshotProgress', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend regularly sends a current value for last seen object id and corresponding timestamp. If the were changes in the heap since last event then one or more heapStatsUpdate events will be sent before a new lastSeenObjectId event. 
- */ - addListener(event: 'HeapProfiler.lastSeenObjectId', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend may send update for one or more fragments - */ - addListener(event: 'HeapProfiler.heapStatsUpdate', listener: (message: InspectorNotification) => void): this; - /** - * Contains an bucket of collected trace events. - */ - addListener(event: 'NodeTracing.dataCollected', listener: (message: InspectorNotification) => void): this; - /** - * Signals that tracing is stopped and there is no trace buffers pending flush, all data were - * delivered via dataCollected events. - */ - addListener(event: 'NodeTracing.tracingComplete', listener: () => void): this; - /** - * Issued when attached to a worker. - */ - addListener(event: 'NodeWorker.attachedToWorker', listener: (message: InspectorNotification) => void): this; - /** - * Issued when detached from the worker. - */ - addListener(event: 'NodeWorker.detachedFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * Notifies about a new protocol message received from the session - * (session ID is provided in attachedToWorker notification). - */ - addListener(event: 'NodeWorker.receivedMessageFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * This event is fired instead of `Runtime.executionContextDestroyed` when - * enabled. - * It is fired when the Node process finished all code execution and is - * waiting for all frontends to disconnect. - */ - addListener(event: 'NodeRuntime.waitingForDisconnect', listener: () => void): this; - emit(event: string | symbol, ...args: any[]): boolean; - emit(event: 'inspectorNotification', message: InspectorNotification<{}>): boolean; - emit(event: 'Runtime.executionContextCreated', message: InspectorNotification): boolean; - emit(event: 'Runtime.executionContextDestroyed', message: InspectorNotification): boolean; - emit(event: 'Runtime.executionContextsCleared'): boolean; - emit(event: 'Runtime.exceptionThrown', message: InspectorNotification): boolean; - emit(event: 'Runtime.exceptionRevoked', message: InspectorNotification): boolean; - emit(event: 'Runtime.consoleAPICalled', message: InspectorNotification): boolean; - emit(event: 'Runtime.inspectRequested', message: InspectorNotification): boolean; - emit(event: 'Debugger.scriptParsed', message: InspectorNotification): boolean; - emit(event: 'Debugger.scriptFailedToParse', message: InspectorNotification): boolean; - emit(event: 'Debugger.breakpointResolved', message: InspectorNotification): boolean; - emit(event: 'Debugger.paused', message: InspectorNotification): boolean; - emit(event: 'Debugger.resumed'): boolean; - emit(event: 'Console.messageAdded', message: InspectorNotification): boolean; - emit(event: 'Profiler.consoleProfileStarted', message: InspectorNotification): boolean; - emit(event: 'Profiler.consoleProfileFinished', message: InspectorNotification): boolean; - emit(event: 'HeapProfiler.addHeapSnapshotChunk', message: InspectorNotification): boolean; - emit(event: 'HeapProfiler.resetProfiles'): boolean; - emit(event: 'HeapProfiler.reportHeapSnapshotProgress', message: InspectorNotification): boolean; - emit(event: 'HeapProfiler.lastSeenObjectId', message: InspectorNotification): boolean; - emit(event: 'HeapProfiler.heapStatsUpdate', message: InspectorNotification): boolean; - emit(event: 'NodeTracing.dataCollected', message: InspectorNotification): boolean; - emit(event: 'NodeTracing.tracingComplete'): boolean; - 
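The `NodeTracing` domain listed above follows a start/collect/stop shape: `NodeTracing.start` takes a `TraceConfig`, records arrive via `NodeTracing.dataCollected`, and `NodeTracing.tracingComplete` fires once collection has drained after `NodeTracing.stop`. A rough sketch under those declarations; the category names and the one-second window are illustrative choices.

```ts
import { Session } from 'node:inspector';

const session = new Session();
session.connect();

const events: object[] = [];

// Trace records are delivered in batches while tracing is active.
session.on('NodeTracing.dataCollected', (notification: any) => {
  events.push(...notification.params.value);
});

// Emitted after stop, once all pending buffers have been flushed.
session.on('NodeTracing.tracingComplete', () => {
  console.error(`collected ${events.length} trace events`);
  session.disconnect();
});

session.post(
  'NodeTracing.start',
  { traceConfig: { includedCategories: ['v8', 'node.async_hooks'] } },
  (err) => {
    if (err) throw err;
    // Let the application run for a moment, then stop collection.
    setTimeout(() => session.post('NodeTracing.stop'), 1000);
  }
);
```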
emit(event: 'NodeWorker.attachedToWorker', message: InspectorNotification): boolean; - emit(event: 'NodeWorker.detachedFromWorker', message: InspectorNotification): boolean; - emit(event: 'NodeWorker.receivedMessageFromWorker', message: InspectorNotification): boolean; - emit(event: 'NodeRuntime.waitingForDisconnect'): boolean; - on(event: string, listener: (...args: any[]) => void): this; - /** - * Emitted when any notification from the V8 Inspector is received. - */ - on(event: 'inspectorNotification', listener: (message: InspectorNotification<{}>) => void): this; - /** - * Issued when new execution context is created. - */ - on(event: 'Runtime.executionContextCreated', listener: (message: InspectorNotification) => void): this; - /** - * Issued when execution context is destroyed. - */ - on(event: 'Runtime.executionContextDestroyed', listener: (message: InspectorNotification) => void): this; - /** - * Issued when all executionContexts were cleared in browser - */ - on(event: 'Runtime.executionContextsCleared', listener: () => void): this; - /** - * Issued when exception was thrown and unhandled. - */ - on(event: 'Runtime.exceptionThrown', listener: (message: InspectorNotification) => void): this; - /** - * Issued when unhandled exception was revoked. - */ - on(event: 'Runtime.exceptionRevoked', listener: (message: InspectorNotification) => void): this; - /** - * Issued when console API was called. - */ - on(event: 'Runtime.consoleAPICalled', listener: (message: InspectorNotification) => void): this; - /** - * Issued when object should be inspected (for example, as a result of inspect() command line API call). - */ - on(event: 'Runtime.inspectRequested', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine parses script. This event is also fired for all known and uncollected scripts upon enabling debugger. - */ - on(event: 'Debugger.scriptParsed', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine fails to parse the script. - */ - on(event: 'Debugger.scriptFailedToParse', listener: (message: InspectorNotification) => void): this; - /** - * Fired when breakpoint is resolved to an actual script and location. - */ - on(event: 'Debugger.breakpointResolved', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine stopped on breakpoint or exception or any other stop criteria. - */ - on(event: 'Debugger.paused', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine resumed execution. - */ - on(event: 'Debugger.resumed', listener: () => void): this; - /** - * Issued when new console message is added. - */ - on(event: 'Console.messageAdded', listener: (message: InspectorNotification) => void): this; - /** - * Sent when new profile recording is started using console.profile() call. 
- */ - on(event: 'Profiler.consoleProfileStarted', listener: (message: InspectorNotification) => void): this; - on(event: 'Profiler.consoleProfileFinished', listener: (message: InspectorNotification) => void): this; - on(event: 'HeapProfiler.addHeapSnapshotChunk', listener: (message: InspectorNotification) => void): this; - on(event: 'HeapProfiler.resetProfiles', listener: () => void): this; - on(event: 'HeapProfiler.reportHeapSnapshotProgress', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend regularly sends a current value for last seen object id and corresponding timestamp. If the were changes in the heap since last event then one or more heapStatsUpdate events will be sent before a new lastSeenObjectId event. - */ - on(event: 'HeapProfiler.lastSeenObjectId', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend may send update for one or more fragments - */ - on(event: 'HeapProfiler.heapStatsUpdate', listener: (message: InspectorNotification) => void): this; - /** - * Contains an bucket of collected trace events. - */ - on(event: 'NodeTracing.dataCollected', listener: (message: InspectorNotification) => void): this; - /** - * Signals that tracing is stopped and there is no trace buffers pending flush, all data were - * delivered via dataCollected events. - */ - on(event: 'NodeTracing.tracingComplete', listener: () => void): this; - /** - * Issued when attached to a worker. - */ - on(event: 'NodeWorker.attachedToWorker', listener: (message: InspectorNotification) => void): this; - /** - * Issued when detached from the worker. - */ - on(event: 'NodeWorker.detachedFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * Notifies about a new protocol message received from the session - * (session ID is provided in attachedToWorker notification). - */ - on(event: 'NodeWorker.receivedMessageFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * This event is fired instead of `Runtime.executionContextDestroyed` when - * enabled. - * It is fired when the Node process finished all code execution and is - * waiting for all frontends to disconnect. - */ - on(event: 'NodeRuntime.waitingForDisconnect', listener: () => void): this; - once(event: string, listener: (...args: any[]) => void): this; - /** - * Emitted when any notification from the V8 Inspector is received. - */ - once(event: 'inspectorNotification', listener: (message: InspectorNotification<{}>) => void): this; - /** - * Issued when new execution context is created. - */ - once(event: 'Runtime.executionContextCreated', listener: (message: InspectorNotification) => void): this; - /** - * Issued when execution context is destroyed. - */ - once(event: 'Runtime.executionContextDestroyed', listener: (message: InspectorNotification) => void): this; - /** - * Issued when all executionContexts were cleared in browser - */ - once(event: 'Runtime.executionContextsCleared', listener: () => void): this; - /** - * Issued when exception was thrown and unhandled. - */ - once(event: 'Runtime.exceptionThrown', listener: (message: InspectorNotification) => void): this; - /** - * Issued when unhandled exception was revoked. - */ - once(event: 'Runtime.exceptionRevoked', listener: (message: InspectorNotification) => void): this; - /** - * Issued when console API was called. 
- */ - once(event: 'Runtime.consoleAPICalled', listener: (message: InspectorNotification) => void): this; - /** - * Issued when object should be inspected (for example, as a result of inspect() command line API call). - */ - once(event: 'Runtime.inspectRequested', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine parses script. This event is also fired for all known and uncollected scripts upon enabling debugger. - */ - once(event: 'Debugger.scriptParsed', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine fails to parse the script. - */ - once(event: 'Debugger.scriptFailedToParse', listener: (message: InspectorNotification) => void): this; - /** - * Fired when breakpoint is resolved to an actual script and location. - */ - once(event: 'Debugger.breakpointResolved', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine stopped on breakpoint or exception or any other stop criteria. - */ - once(event: 'Debugger.paused', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine resumed execution. - */ - once(event: 'Debugger.resumed', listener: () => void): this; - /** - * Issued when new console message is added. - */ - once(event: 'Console.messageAdded', listener: (message: InspectorNotification) => void): this; - /** - * Sent when new profile recording is started using console.profile() call. - */ - once(event: 'Profiler.consoleProfileStarted', listener: (message: InspectorNotification) => void): this; - once(event: 'Profiler.consoleProfileFinished', listener: (message: InspectorNotification) => void): this; - once(event: 'HeapProfiler.addHeapSnapshotChunk', listener: (message: InspectorNotification) => void): this; - once(event: 'HeapProfiler.resetProfiles', listener: () => void): this; - once(event: 'HeapProfiler.reportHeapSnapshotProgress', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend regularly sends a current value for last seen object id and corresponding timestamp. If the were changes in the heap since last event then one or more heapStatsUpdate events will be sent before a new lastSeenObjectId event. - */ - once(event: 'HeapProfiler.lastSeenObjectId', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend may send update for one or more fragments - */ - once(event: 'HeapProfiler.heapStatsUpdate', listener: (message: InspectorNotification) => void): this; - /** - * Contains an bucket of collected trace events. - */ - once(event: 'NodeTracing.dataCollected', listener: (message: InspectorNotification) => void): this; - /** - * Signals that tracing is stopped and there is no trace buffers pending flush, all data were - * delivered via dataCollected events. - */ - once(event: 'NodeTracing.tracingComplete', listener: () => void): this; - /** - * Issued when attached to a worker. - */ - once(event: 'NodeWorker.attachedToWorker', listener: (message: InspectorNotification) => void): this; - /** - * Issued when detached from the worker. - */ - once(event: 'NodeWorker.detachedFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * Notifies about a new protocol message received from the session - * (session ID is provided in attachedToWorker notification). 
- */ - once(event: 'NodeWorker.receivedMessageFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * This event is fired instead of `Runtime.executionContextDestroyed` when - * enabled. - * It is fired when the Node process finished all code execution and is - * waiting for all frontends to disconnect. - */ - once(event: 'NodeRuntime.waitingForDisconnect', listener: () => void): this; - prependListener(event: string, listener: (...args: any[]) => void): this; - /** - * Emitted when any notification from the V8 Inspector is received. - */ - prependListener(event: 'inspectorNotification', listener: (message: InspectorNotification<{}>) => void): this; - /** - * Issued when new execution context is created. - */ - prependListener(event: 'Runtime.executionContextCreated', listener: (message: InspectorNotification) => void): this; - /** - * Issued when execution context is destroyed. - */ - prependListener(event: 'Runtime.executionContextDestroyed', listener: (message: InspectorNotification) => void): this; - /** - * Issued when all executionContexts were cleared in browser - */ - prependListener(event: 'Runtime.executionContextsCleared', listener: () => void): this; - /** - * Issued when exception was thrown and unhandled. - */ - prependListener(event: 'Runtime.exceptionThrown', listener: (message: InspectorNotification) => void): this; - /** - * Issued when unhandled exception was revoked. - */ - prependListener(event: 'Runtime.exceptionRevoked', listener: (message: InspectorNotification) => void): this; - /** - * Issued when console API was called. - */ - prependListener(event: 'Runtime.consoleAPICalled', listener: (message: InspectorNotification) => void): this; - /** - * Issued when object should be inspected (for example, as a result of inspect() command line API call). - */ - prependListener(event: 'Runtime.inspectRequested', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine parses script. This event is also fired for all known and uncollected scripts upon enabling debugger. - */ - prependListener(event: 'Debugger.scriptParsed', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine fails to parse the script. - */ - prependListener(event: 'Debugger.scriptFailedToParse', listener: (message: InspectorNotification) => void): this; - /** - * Fired when breakpoint is resolved to an actual script and location. - */ - prependListener(event: 'Debugger.breakpointResolved', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine stopped on breakpoint or exception or any other stop criteria. - */ - prependListener(event: 'Debugger.paused', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine resumed execution. - */ - prependListener(event: 'Debugger.resumed', listener: () => void): this; - /** - * Issued when new console message is added. - */ - prependListener(event: 'Console.messageAdded', listener: (message: InspectorNotification) => void): this; - /** - * Sent when new profile recording is started using console.profile() call. 
- */ - prependListener(event: 'Profiler.consoleProfileStarted', listener: (message: InspectorNotification) => void): this; - prependListener(event: 'Profiler.consoleProfileFinished', listener: (message: InspectorNotification) => void): this; - prependListener(event: 'HeapProfiler.addHeapSnapshotChunk', listener: (message: InspectorNotification) => void): this; - prependListener(event: 'HeapProfiler.resetProfiles', listener: () => void): this; - prependListener(event: 'HeapProfiler.reportHeapSnapshotProgress', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend regularly sends a current value for last seen object id and corresponding timestamp. If the were changes in the heap since last event then one or more heapStatsUpdate events will be sent before a new lastSeenObjectId event. - */ - prependListener(event: 'HeapProfiler.lastSeenObjectId', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend may send update for one or more fragments - */ - prependListener(event: 'HeapProfiler.heapStatsUpdate', listener: (message: InspectorNotification) => void): this; - /** - * Contains an bucket of collected trace events. - */ - prependListener(event: 'NodeTracing.dataCollected', listener: (message: InspectorNotification) => void): this; - /** - * Signals that tracing is stopped and there is no trace buffers pending flush, all data were - * delivered via dataCollected events. - */ - prependListener(event: 'NodeTracing.tracingComplete', listener: () => void): this; - /** - * Issued when attached to a worker. - */ - prependListener(event: 'NodeWorker.attachedToWorker', listener: (message: InspectorNotification) => void): this; - /** - * Issued when detached from the worker. - */ - prependListener(event: 'NodeWorker.detachedFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * Notifies about a new protocol message received from the session - * (session ID is provided in attachedToWorker notification). - */ - prependListener(event: 'NodeWorker.receivedMessageFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * This event is fired instead of `Runtime.executionContextDestroyed` when - * enabled. - * It is fired when the Node process finished all code execution and is - * waiting for all frontends to disconnect. - */ - prependListener(event: 'NodeRuntime.waitingForDisconnect', listener: () => void): this; - prependOnceListener(event: string, listener: (...args: any[]) => void): this; - /** - * Emitted when any notification from the V8 Inspector is received. - */ - prependOnceListener(event: 'inspectorNotification', listener: (message: InspectorNotification<{}>) => void): this; - /** - * Issued when new execution context is created. - */ - prependOnceListener(event: 'Runtime.executionContextCreated', listener: (message: InspectorNotification) => void): this; - /** - * Issued when execution context is destroyed. - */ - prependOnceListener(event: 'Runtime.executionContextDestroyed', listener: (message: InspectorNotification) => void): this; - /** - * Issued when all executionContexts were cleared in browser - */ - prependOnceListener(event: 'Runtime.executionContextsCleared', listener: () => void): this; - /** - * Issued when exception was thrown and unhandled. 
- */ - prependOnceListener(event: 'Runtime.exceptionThrown', listener: (message: InspectorNotification) => void): this; - /** - * Issued when unhandled exception was revoked. - */ - prependOnceListener(event: 'Runtime.exceptionRevoked', listener: (message: InspectorNotification) => void): this; - /** - * Issued when console API was called. - */ - prependOnceListener(event: 'Runtime.consoleAPICalled', listener: (message: InspectorNotification) => void): this; - /** - * Issued when object should be inspected (for example, as a result of inspect() command line API call). - */ - prependOnceListener(event: 'Runtime.inspectRequested', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine parses script. This event is also fired for all known and uncollected scripts upon enabling debugger. - */ - prependOnceListener(event: 'Debugger.scriptParsed', listener: (message: InspectorNotification) => void): this; - /** - * Fired when virtual machine fails to parse the script. - */ - prependOnceListener(event: 'Debugger.scriptFailedToParse', listener: (message: InspectorNotification) => void): this; - /** - * Fired when breakpoint is resolved to an actual script and location. - */ - prependOnceListener(event: 'Debugger.breakpointResolved', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine stopped on breakpoint or exception or any other stop criteria. - */ - prependOnceListener(event: 'Debugger.paused', listener: (message: InspectorNotification) => void): this; - /** - * Fired when the virtual machine resumed execution. - */ - prependOnceListener(event: 'Debugger.resumed', listener: () => void): this; - /** - * Issued when new console message is added. - */ - prependOnceListener(event: 'Console.messageAdded', listener: (message: InspectorNotification) => void): this; - /** - * Sent when new profile recording is started using console.profile() call. - */ - prependOnceListener(event: 'Profiler.consoleProfileStarted', listener: (message: InspectorNotification) => void): this; - prependOnceListener(event: 'Profiler.consoleProfileFinished', listener: (message: InspectorNotification) => void): this; - prependOnceListener(event: 'HeapProfiler.addHeapSnapshotChunk', listener: (message: InspectorNotification) => void): this; - prependOnceListener(event: 'HeapProfiler.resetProfiles', listener: () => void): this; - prependOnceListener(event: 'HeapProfiler.reportHeapSnapshotProgress', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend regularly sends a current value for last seen object id and corresponding timestamp. If the were changes in the heap since last event then one or more heapStatsUpdate events will be sent before a new lastSeenObjectId event. - */ - prependOnceListener(event: 'HeapProfiler.lastSeenObjectId', listener: (message: InspectorNotification) => void): this; - /** - * If heap objects tracking has been started then backend may send update for one or more fragments - */ - prependOnceListener(event: 'HeapProfiler.heapStatsUpdate', listener: (message: InspectorNotification) => void): this; - /** - * Contains an bucket of collected trace events. - */ - prependOnceListener(event: 'NodeTracing.dataCollected', listener: (message: InspectorNotification) => void): this; - /** - * Signals that tracing is stopped and there is no trace buffers pending flush, all data were - * delivered via dataCollected events. 
- */ - prependOnceListener(event: 'NodeTracing.tracingComplete', listener: () => void): this; - /** - * Issued when attached to a worker. - */ - prependOnceListener(event: 'NodeWorker.attachedToWorker', listener: (message: InspectorNotification) => void): this; - /** - * Issued when detached from the worker. - */ - prependOnceListener(event: 'NodeWorker.detachedFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * Notifies about a new protocol message received from the session - * (session ID is provided in attachedToWorker notification). - */ - prependOnceListener(event: 'NodeWorker.receivedMessageFromWorker', listener: (message: InspectorNotification) => void): this; - /** - * This event is fired instead of `Runtime.executionContextDestroyed` when - * enabled. - * It is fired when the Node process finished all code execution and is - * waiting for all frontends to disconnect. - */ - prependOnceListener(event: 'NodeRuntime.waitingForDisconnect', listener: () => void): this; - } - /** - * Activate inspector on host and port. Equivalent to`node --inspect=[[host:]port]`, but can be done programmatically after node has - * started. - * - * If wait is `true`, will block until a client has connected to the inspect port - * and flow control has been passed to the debugger client. - * - * See the `security warning` regarding the `host`parameter usage. - * @param [port='what was specified on the CLI'] Port to listen on for inspector connections. Optional. - * @param [host='what was specified on the CLI'] Host to listen on for inspector connections. Optional. - * @param [wait=false] Block until a client has connected. Optional. - */ - function open(port?: number, host?: string, wait?: boolean): void; - /** - * Deactivate the inspector. Blocks until there are no active connections. - */ - function close(): void; - /** - * Return the URL of the active inspector, or `undefined` if there is none. - * - * ```console - * $ node --inspect -p 'inspector.url()' - * Debugger listening on ws://127.0.0.1:9229/166e272e-7a30-4d09-97ce-f1c012b43c34 - * For help, see: https://nodejs.org/en/docs/inspector - * ws://127.0.0.1:9229/166e272e-7a30-4d09-97ce-f1c012b43c34 - * - * $ node --inspect=localhost:3000 -p 'inspector.url()' - * Debugger listening on ws://localhost:3000/51cf8d0e-3c36-4c59-8efd-54519839e56a - * For help, see: https://nodejs.org/en/docs/inspector - * ws://localhost:3000/51cf8d0e-3c36-4c59-8efd-54519839e56a - * - * $ node -p 'inspector.url()' - * undefined - * ``` - */ - function url(): string | undefined; - /** - * Blocks until a client (existing or connected later) has sent`Runtime.runIfWaitingForDebugger` command. - * - * An exception will be thrown if there is no active inspector. - * @since v12.7.0 - */ - function waitForDebugger(): void; -} -/** - * The inspector module provides an API for interacting with the V8 inspector. 
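 *
 * A minimal usage sketch from JavaScript (illustrative; the event and method names
 * come from the typings above, the surrounding wiring is an assumed example):
 *
 * ```js
 * const inspector = require('node:inspector');
 * inspector.open();                          // same effect as starting with --inspect
 * const session = new inspector.Session();
 * session.connect();
 * session.on('Runtime.consoleAPICalled', (message) => {
 *   console.log(message.params);             // InspectorNotification payload
 * });
 * session.post('Runtime.enable');
 * ```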
- */ -declare module 'node:inspector' { - import inspector = require('inspector'); - export = inspector; -} diff --git a/spaces/fffiloni/coqui-bark-voice-cloning-docker/README.md b/spaces/fffiloni/coqui-bark-voice-cloning-docker/README.md deleted file mode 100644 index e606d8e33894b2978d64a3a393e21645a8ea643b..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/coqui-bark-voice-cloning-docker/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Coqui Bark Voice Cloning Docker -emoji: 🐸🐶🐳 -colorFrom: yellow -colorTo: indigo -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/fffiloni/lama-video-watermark-remover/models/ade20k/segm_lib/utils/data/dataloader.py b/spaces/fffiloni/lama-video-watermark-remover/models/ade20k/segm_lib/utils/data/dataloader.py deleted file mode 100644 index 039b9ec3645b2a4626ff47c221e372f32a6ad339..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/lama-video-watermark-remover/models/ade20k/segm_lib/utils/data/dataloader.py +++ /dev/null @@ -1,425 +0,0 @@ -import torch -import torch.multiprocessing as multiprocessing -from torch._C import _set_worker_signal_handlers, \ - _remove_worker_pids, _error_if_any_worker_fails -try: - from torch._C import _set_worker_pids -except: - from torch._C import _update_worker_pids as _set_worker_pids -from .sampler import SequentialSampler, RandomSampler, BatchSampler -import signal -import collections -import re -import sys -import threading -import traceback -from torch._six import string_classes, int_classes -import numpy as np - -if sys.version_info[0] == 2: - import Queue as queue -else: - import queue - - -class ExceptionWrapper(object): - r"Wraps an exception plus traceback to communicate across threads" - - def __init__(self, exc_info): - self.exc_type = exc_info[0] - self.exc_msg = "".join(traceback.format_exception(*exc_info)) - - -_use_shared_memory = False -"""Whether to use shared memory in default_collate""" - - -def _worker_loop(dataset, index_queue, data_queue, collate_fn, seed, init_fn, worker_id): - global _use_shared_memory - _use_shared_memory = True - - # Intialize C side signal handlers for SIGBUS and SIGSEGV. Python signal - # module's handlers are executed after Python returns from C low-level - # handlers, likely when the same fatal signal happened again already. - # https://docs.python.org/3/library/signal.html Sec. 
18.8.1.1 - _set_worker_signal_handlers() - - torch.set_num_threads(1) - torch.manual_seed(seed) - np.random.seed(seed) - - if init_fn is not None: - init_fn(worker_id) - - while True: - r = index_queue.get() - if r is None: - break - idx, batch_indices = r - try: - samples = collate_fn([dataset[i] for i in batch_indices]) - except Exception: - data_queue.put((idx, ExceptionWrapper(sys.exc_info()))) - else: - data_queue.put((idx, samples)) - - -def _worker_manager_loop(in_queue, out_queue, done_event, pin_memory, device_id): - if pin_memory: - torch.cuda.set_device(device_id) - - while True: - try: - r = in_queue.get() - except Exception: - if done_event.is_set(): - return - raise - if r is None: - break - if isinstance(r[1], ExceptionWrapper): - out_queue.put(r) - continue - idx, batch = r - try: - if pin_memory: - batch = pin_memory_batch(batch) - except Exception: - out_queue.put((idx, ExceptionWrapper(sys.exc_info()))) - else: - out_queue.put((idx, batch)) - -numpy_type_map = { - 'float64': torch.DoubleTensor, - 'float32': torch.FloatTensor, - 'float16': torch.HalfTensor, - 'int64': torch.LongTensor, - 'int32': torch.IntTensor, - 'int16': torch.ShortTensor, - 'int8': torch.CharTensor, - 'uint8': torch.ByteTensor, -} - - -def default_collate(batch): - "Puts each data field into a tensor with outer dimension batch size" - - error_msg = "batch must contain tensors, numbers, dicts or lists; found {}" - elem_type = type(batch[0]) - if torch.is_tensor(batch[0]): - out = None - if _use_shared_memory: - # If we're in a background process, concatenate directly into a - # shared memory tensor to avoid an extra copy - numel = sum([x.numel() for x in batch]) - storage = batch[0].storage()._new_shared(numel) - out = batch[0].new(storage) - return torch.stack(batch, 0, out=out) - elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \ - and elem_type.__name__ != 'string_': - elem = batch[0] - if elem_type.__name__ == 'ndarray': - # array of string classes and object - if re.search('[SaUO]', elem.dtype.str) is not None: - raise TypeError(error_msg.format(elem.dtype)) - - return torch.stack([torch.from_numpy(b) for b in batch], 0) - if elem.shape == (): # scalars - py_type = float if elem.dtype.name.startswith('float') else int - return numpy_type_map[elem.dtype.name](list(map(py_type, batch))) - elif isinstance(batch[0], int_classes): - return torch.LongTensor(batch) - elif isinstance(batch[0], float): - return torch.DoubleTensor(batch) - elif isinstance(batch[0], string_classes): - return batch - elif isinstance(batch[0], collections.Mapping): - return {key: default_collate([d[key] for d in batch]) for key in batch[0]} - elif isinstance(batch[0], collections.Sequence): - transposed = zip(*batch) - return [default_collate(samples) for samples in transposed] - - raise TypeError((error_msg.format(type(batch[0])))) - - -def pin_memory_batch(batch): - if torch.is_tensor(batch): - return batch.pin_memory() - elif isinstance(batch, string_classes): - return batch - elif isinstance(batch, collections.Mapping): - return {k: pin_memory_batch(sample) for k, sample in batch.items()} - elif isinstance(batch, collections.Sequence): - return [pin_memory_batch(sample) for sample in batch] - else: - return batch - - -_SIGCHLD_handler_set = False -"""Whether SIGCHLD handler is set for DataLoader worker failures. 
Only one -handler needs to be set for all DataLoaders in a process.""" - - -def _set_SIGCHLD_handler(): - # Windows doesn't support SIGCHLD handler - if sys.platform == 'win32': - return - # can't set signal in child threads - if not isinstance(threading.current_thread(), threading._MainThread): - return - global _SIGCHLD_handler_set - if _SIGCHLD_handler_set: - return - previous_handler = signal.getsignal(signal.SIGCHLD) - if not callable(previous_handler): - previous_handler = None - - def handler(signum, frame): - # This following call uses `waitid` with WNOHANG from C side. Therefore, - # Python can still get and update the process status successfully. - _error_if_any_worker_fails() - if previous_handler is not None: - previous_handler(signum, frame) - - signal.signal(signal.SIGCHLD, handler) - _SIGCHLD_handler_set = True - - -class DataLoaderIter(object): - "Iterates once over the DataLoader's dataset, as specified by the sampler" - - def __init__(self, loader): - self.dataset = loader.dataset - self.collate_fn = loader.collate_fn - self.batch_sampler = loader.batch_sampler - self.num_workers = loader.num_workers - self.pin_memory = loader.pin_memory and torch.cuda.is_available() - self.timeout = loader.timeout - self.done_event = threading.Event() - - self.sample_iter = iter(self.batch_sampler) - - if self.num_workers > 0: - self.worker_init_fn = loader.worker_init_fn - self.index_queue = multiprocessing.SimpleQueue() - self.worker_result_queue = multiprocessing.SimpleQueue() - self.batches_outstanding = 0 - self.worker_pids_set = False - self.shutdown = False - self.send_idx = 0 - self.rcvd_idx = 0 - self.reorder_dict = {} - - base_seed = torch.LongTensor(1).random_(0, 2**31-1)[0] - self.workers = [ - multiprocessing.Process( - target=_worker_loop, - args=(self.dataset, self.index_queue, self.worker_result_queue, self.collate_fn, - base_seed + i, self.worker_init_fn, i)) - for i in range(self.num_workers)] - - if self.pin_memory or self.timeout > 0: - self.data_queue = queue.Queue() - if self.pin_memory: - maybe_device_id = torch.cuda.current_device() - else: - # do not initialize cuda context if not necessary - maybe_device_id = None - self.worker_manager_thread = threading.Thread( - target=_worker_manager_loop, - args=(self.worker_result_queue, self.data_queue, self.done_event, self.pin_memory, - maybe_device_id)) - self.worker_manager_thread.daemon = True - self.worker_manager_thread.start() - else: - self.data_queue = self.worker_result_queue - - for w in self.workers: - w.daemon = True # ensure that the worker exits on process exit - w.start() - - _set_worker_pids(id(self), tuple(w.pid for w in self.workers)) - _set_SIGCHLD_handler() - self.worker_pids_set = True - - # prime the prefetch loop - for _ in range(2 * self.num_workers): - self._put_indices() - - def __len__(self): - return len(self.batch_sampler) - - def _get_batch(self): - if self.timeout > 0: - try: - return self.data_queue.get(timeout=self.timeout) - except queue.Empty: - raise RuntimeError('DataLoader timed out after {} seconds'.format(self.timeout)) - else: - return self.data_queue.get() - - def __next__(self): - if self.num_workers == 0: # same-process loading - indices = next(self.sample_iter) # may raise StopIteration - batch = self.collate_fn([self.dataset[i] for i in indices]) - if self.pin_memory: - batch = pin_memory_batch(batch) - return batch - - # check if the next sample has already been generated - if self.rcvd_idx in self.reorder_dict: - batch = self.reorder_dict.pop(self.rcvd_idx) - return 
self._process_next_batch(batch) - - if self.batches_outstanding == 0: - self._shutdown_workers() - raise StopIteration - - while True: - assert (not self.shutdown and self.batches_outstanding > 0) - idx, batch = self._get_batch() - self.batches_outstanding -= 1 - if idx != self.rcvd_idx: - # store out-of-order samples - self.reorder_dict[idx] = batch - continue - return self._process_next_batch(batch) - - next = __next__ # Python 2 compatibility - - def __iter__(self): - return self - - def _put_indices(self): - assert self.batches_outstanding < 2 * self.num_workers - indices = next(self.sample_iter, None) - if indices is None: - return - self.index_queue.put((self.send_idx, indices)) - self.batches_outstanding += 1 - self.send_idx += 1 - - def _process_next_batch(self, batch): - self.rcvd_idx += 1 - self._put_indices() - if isinstance(batch, ExceptionWrapper): - raise batch.exc_type(batch.exc_msg) - return batch - - def __getstate__(self): - # TODO: add limited pickling support for sharing an iterator - # across multiple threads for HOGWILD. - # Probably the best way to do this is by moving the sample pushing - # to a separate thread and then just sharing the data queue - # but signalling the end is tricky without a non-blocking API - raise NotImplementedError("DataLoaderIterator cannot be pickled") - - def _shutdown_workers(self): - try: - if not self.shutdown: - self.shutdown = True - self.done_event.set() - # if worker_manager_thread is waiting to put - while not self.data_queue.empty(): - self.data_queue.get() - for _ in self.workers: - self.index_queue.put(None) - # done_event should be sufficient to exit worker_manager_thread, - # but be safe here and put another None - self.worker_result_queue.put(None) - finally: - # removes pids no matter what - if self.worker_pids_set: - _remove_worker_pids(id(self)) - self.worker_pids_set = False - - def __del__(self): - if self.num_workers > 0: - self._shutdown_workers() - - -class DataLoader(object): - """ - Data loader. Combines a dataset and a sampler, and provides - single- or multi-process iterators over the dataset. - - Arguments: - dataset (Dataset): dataset from which to load the data. - batch_size (int, optional): how many samples per batch to load - (default: 1). - shuffle (bool, optional): set to ``True`` to have the data reshuffled - at every epoch (default: False). - sampler (Sampler, optional): defines the strategy to draw samples from - the dataset. If specified, ``shuffle`` must be False. - batch_sampler (Sampler, optional): like sampler, but returns a batch of - indices at a time. Mutually exclusive with batch_size, shuffle, - sampler, and drop_last. - num_workers (int, optional): how many subprocesses to use for data - loading. 0 means that the data will be loaded in the main process. - (default: 0) - collate_fn (callable, optional): merges a list of samples to form a mini-batch. - pin_memory (bool, optional): If ``True``, the data loader will copy tensors - into CUDA pinned memory before returning them. - drop_last (bool, optional): set to ``True`` to drop the last incomplete batch, - if the dataset size is not divisible by the batch size. If ``False`` and - the size of dataset is not divisible by the batch size, then the last batch - will be smaller. (default: False) - timeout (numeric, optional): if positive, the timeout value for collecting a batch - from workers. Should always be non-negative. 
(default: 0) - worker_init_fn (callable, optional): If not None, this will be called on each - worker subprocess with the worker id (an int in ``[0, num_workers - 1]``) as - input, after seeding and before data loading. (default: None) - - .. note:: By default, each worker will have its PyTorch seed set to - ``base_seed + worker_id``, where ``base_seed`` is a long generated - by main process using its RNG. You may use ``torch.initial_seed()`` to access - this value in :attr:`worker_init_fn`, which can be used to set other seeds - (e.g. NumPy) before data loading. - - .. warning:: If ``spawn'' start method is used, :attr:`worker_init_fn` cannot be an - unpicklable object, e.g., a lambda function. - """ - - def __init__(self, dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None, - num_workers=0, collate_fn=default_collate, pin_memory=False, drop_last=False, - timeout=0, worker_init_fn=None): - self.dataset = dataset - self.batch_size = batch_size - self.num_workers = num_workers - self.collate_fn = collate_fn - self.pin_memory = pin_memory - self.drop_last = drop_last - self.timeout = timeout - self.worker_init_fn = worker_init_fn - - if timeout < 0: - raise ValueError('timeout option should be non-negative') - - if batch_sampler is not None: - if batch_size > 1 or shuffle or sampler is not None or drop_last: - raise ValueError('batch_sampler is mutually exclusive with ' - 'batch_size, shuffle, sampler, and drop_last') - - if sampler is not None and shuffle: - raise ValueError('sampler is mutually exclusive with shuffle') - - if self.num_workers < 0: - raise ValueError('num_workers cannot be negative; ' - 'use num_workers=0 to disable multiprocessing.') - - if batch_sampler is None: - if sampler is None: - if shuffle: - sampler = RandomSampler(dataset) - else: - sampler = SequentialSampler(dataset) - batch_sampler = BatchSampler(sampler, batch_size, drop_last) - - self.sampler = sampler - self.batch_sampler = batch_sampler - - def __iter__(self): - return DataLoaderIter(self) - - def __len__(self): - return len(self.batch_sampler) diff --git a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_68.py b/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_68.py deleted file mode 100644 index b952487e7d540bedf30ce0a626d33d3f7cb07a83..0000000000000000000000000000000000000000 --- a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_68.py +++ /dev/null @@ -1,21 +0,0 @@ - -import re - -def is_spam(message): - # Check for presence of numbers or special characters - if re.search(r'\d', message) or re.search(r'[^\w\s]', message): - # Check for presence of URL - if re.search(r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\\(\\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', message): - return True - - # Check for presence of short URL - if re.search(r'bit\.ly|goo\.gl|me2\.kr|tinyurl\.com|ocx\.kr|buly\.kr', message): - return True - - # Check for promotional keywords - promotional_keywords = ['광고', '프로모션', '이벤트', '쿠폰', '할인', '구인', '회원가입', '신규', '주식', '공시', '정보', '단독', '상한가', '경품'] - for keyword in promotional_keywords: - if keyword in message: - return True - - return False diff --git a/spaces/flax-community/image-captioning/vit_gpt2/modeling_flax_vit_gpt2.py b/spaces/flax-community/image-captioning/vit_gpt2/modeling_flax_vit_gpt2.py deleted file mode 100644 index 2e25dd918f2714ae9a2c21d14401663c371b5f38..0000000000000000000000000000000000000000 --- a/spaces/flax-community/image-captioning/vit_gpt2/modeling_flax_vit_gpt2.py +++ /dev/null @@ -1,704 +0,0 @@ -from typing import Callable, Optional, Tuple 
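# A minimal usage sketch for the ViT-encoder / GPT-2-decoder model defined below
# (the checkpoint names and the shape note are illustrative assumptions, not pinned
# by this module):
#
#   model = FlaxViTGPT2ForConditionalGeneration.from_vit_gpt2_pretrained(
#       vit_model_name_or_path="google/vit-base-patch16-224-in21k",
#       gpt2_model_name_or_path="gpt2",
#   )
#   encoder_outputs = model.encode(pixel_values)        # image features from the ViT encoder
#   outputs = model.decode(input_ids, encoder_outputs)  # cross-attends over those features
#   logits = outputs.logits                             # (batch, target_len, vocab_size)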
- -import flax.linen as nn -import jax -import jax.numpy as jnp -from flax.core.frozen_dict import FrozenDict, unfreeze -from jax import lax -from jax.random import PRNGKey -from transformers import GPT2Config, FlaxViTModel, ViTConfig -from transformers.modeling_flax_outputs import ( - FlaxCausalLMOutputWithCrossAttentions, - FlaxSeq2SeqLMOutput, - FlaxSeq2SeqModelOutput, -) -from transformers.models.bart.modeling_flax_bart import ( - shift_tokens_right, -) -from .modeling_flax_gpt2 import ( - FlaxGPT2Module, - FlaxGPT2Model, - FlaxPreTrainedModel -) -from transformers.models.vit.modeling_flax_vit import FlaxViTModule - -from .configuration_vit_gpt2 import ViTGPT2Config - - -class FlaxViTGPT2Module(nn.Module): - config: ViTGPT2Config - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - - self.encoder = FlaxViTModule(self.config.vit_config, dtype=self.dtype) - self.decoder = FlaxGPT2Module(self.config.gpt2_config, dtype=self.dtype) - - def _get_encoder_module(self): - return self.encoder - - def _get_decoder_module(self): - return self.decoder - - def __call__( - self, - pixel_values, - input_ids, - attention_mask, - position_ids, - encoder_attention_mask: Optional[jnp.ndarray] = None, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - deterministic: bool = True, - ): - encoder_outputs = self.encoder( - pixel_values=pixel_values, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - decoder_outputs = self.decoder( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - encoder_hidden_states=encoder_outputs[0], - encoder_attention_mask=encoder_attention_mask, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict - ) - - return FlaxSeq2SeqModelOutput( - last_hidden_state=decoder_outputs.last_hidden_state, - decoder_hidden_states=decoder_outputs.hidden_states, - decoder_attentions=decoder_outputs.attentions, - cross_attentions=decoder_outputs.cross_attentions, - encoder_last_hidden_state=encoder_outputs.last_hidden_state, - encoder_hidden_states=encoder_outputs.hidden_states, - encoder_attentions=encoder_outputs.attentions, - ) - - -class FlaxViTGPT2ForConditionalGenerationModule(nn.Module): - config: ViTGPT2Config - dtype: jnp.dtype = jnp.float32 - bias_init: Callable[..., jnp.ndarray] = jax.nn.initializers.zeros - - def setup(self): - self.model = FlaxViTGPT2Module(config=self.config, dtype=self.dtype) - self.lm_head = nn.Dense( - self.model.decoder.embed_dim, - use_bias=False, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal( - self.config.gpt2_config.initializer_range, self.dtype - ), - ) - self.final_logits_bias = self.param( - "final_logits_bias", self.bias_init, (1, self.model.decoder.embed_dim) - ) - - def _get_encoder_module(self): - return self.model.encoder - - def _get_decoder_module(self): - return self.model.decoder - - def __call__( - self, - pixel_values, - input_ids, - attention_mask, - position_ids, - encoder_attention_mask: Optional[jnp.ndarray] = None, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - deterministic: bool = True, - ): - outputs = self.model( - pixel_values=pixel_values, - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - 
encoder_attention_mask=encoder_attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - deterministic=deterministic, - ) - - hidden_states = outputs[0] - lm_logits = self.lm_head(hidden_states) - lm_logits += self.final_logits_bias - - if not return_dict: - output = (lm_logits,) + outputs[1:] - return output - - return FlaxSeq2SeqLMOutput( - logits=lm_logits, - decoder_hidden_states=outputs.decoder_hidden_states, - decoder_attentions=outputs.decoder_attentions, - cross_attentions=outputs.cross_attentions, - encoder_last_hidden_state=outputs.encoder_last_hidden_state, - encoder_hidden_states=outputs.encoder_hidden_states, - encoder_attentions=outputs.encoder_attentions, - ) - -class FlaxViTGPT2PreTrainedModel(FlaxPreTrainedModel): - config_class = ViTGPT2Config - base_model_prefix: str = "model" - module_class: nn.Module = None - - def __init__( - self, - config: ViTGPT2Config, - input_shape: Tuple = None, - seed: int = 0, - dtype: jnp.dtype = jnp.float32, - **kwargs, - ): - if input_shape is None: - input_shape = ( - (1, config.vit_config.image_size, config.vit_config.image_size, 3), - (1, 1), - ) - - module = self.module_class(config=config, dtype=dtype, **kwargs) - super().__init__( - config, module, input_shape=input_shape, seed=seed, dtype=dtype - ) - - def init_weights(self, rng: jax.random.PRNGKey, input_shape: Tuple) -> FrozenDict: - # init input tensors - pixel_values = jax.random.normal(rng, input_shape[0]) - # # make sure initialization pass will work for FlaxBartForSequenceClassificationModule - # input_ids = jax.ops.index_update(input_ids, (..., -1), self.config.eos_token_id) - - input_ids = jnp.zeros(input_shape[1], dtype="i4") - attention_mask = jnp.ones_like(input_ids) - - batch_size, sequence_length = input_ids.shape - position_ids = jnp.broadcast_to( - jnp.arange(sequence_length)[None, :], (batch_size, sequence_length) - ) - - params_rng, dropout_rng = jax.random.split(rng) - rngs = {"params": params_rng, "dropout": dropout_rng} - - return self.module.init( - rngs, - pixel_values, - input_ids, - attention_mask, - position_ids, - )["params"] - - def init_cache(self, batch_size, max_length, encoder_outputs): - - input_ids = jnp.ones((batch_size, max_length), dtype="i4") - attention_mask = jnp.ones_like(input_ids) - position_ids = jnp.broadcast_to( - jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), - input_ids.shape, - ) - - def _decoder_forward( - module, - input_ids, - attention_mask, - position_ids, - **kwargs, - ): - decoder_module = module._get_decoder_module() - return decoder_module( - input_ids, - attention_mask, - position_ids, - **kwargs, - ) - - init_variables = self.module.init( - jax.random.PRNGKey(0), - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - encoder_hidden_states=encoder_outputs[0], - init_cache=True, - method=_decoder_forward, # we only need to call the decoder to init the cache - ) - return unfreeze(init_variables["cache"]) - - def encode( - self, - pixel_values: jnp.ndarray, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - train: bool = False, - params: dict = None, - dropout_rng: PRNGKey = None, - ): - output_attentions = ( - output_attentions - if output_attentions is not None - else self.config.output_attentions - ) - output_hidden_states = ( - output_hidden_states - if output_hidden_states is not None - else self.config.output_hidden_states - ) - return_dict 
= ( - return_dict if return_dict is not None else self.config.return_dict - ) - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - def _encoder_forward(module, pixel_values, **kwargs): - encode_module = module._get_encoder_module() - return encode_module(pixel_values, **kwargs) - - return self.module.apply( - {"params": params or self.params}, - pixel_values=jnp.array(pixel_values, dtype="i4"), - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - deterministic=not train, - rngs=rngs, - method=_encoder_forward, - ) - - def decode( - self, - input_ids, - encoder_outputs, - encoder_attention_mask: Optional[jnp.ndarray] = None, - attention_mask: Optional[jnp.ndarray] = None, - position_ids: Optional[jnp.ndarray] = None, - past_key_values: dict = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - train: bool = False, - params: dict = None, - dropout_rng: PRNGKey = None, - ): - - output_attentions = ( - output_attentions - if output_attentions is not None - else self.config.output_attentions - ) - output_hidden_states = ( - output_hidden_states - if output_hidden_states is not None - else self.config.output_hidden_states - ) - return_dict = ( - return_dict if return_dict is not None else self.config.return_dict - ) - - encoder_hidden_states = encoder_outputs[0] - if encoder_attention_mask is None: - batch_size, sequence_length = encoder_hidden_states.shape[:2] - encoder_attention_mask = jnp.ones((batch_size, sequence_length)) - - batch_size, sequence_length = input_ids.shape - if attention_mask is None: - attention_mask = jnp.ones((batch_size, sequence_length)) - - if position_ids is None: - if past_key_values is not None: - raise ValueError( - "Make sure to provide `position_ids` when passing `past_key_values`." - ) - - position_ids = jnp.broadcast_to( - jnp.arange(sequence_length)[None, :], (batch_size, sequence_length) - ) - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - inputs = {"params": params or self.params} - - # if past_key_values are passed then cache is already initialized a private flag init_cache has to be - # passed down to ensure cache is used. 
It has to be made sure that cache is marked as mutable so that - # it can be changed by FlaxGPT2Attention module - if past_key_values: - inputs["cache"] = past_key_values - mutable = ["cache"] - else: - mutable = False - - def _decoder_forward( - module, - input_ids, - attention_mask, - position_ids, - **kwargs, - ): - decoder_module = module._get_decoder_module() - return decoder_module( - input_ids, - attention_mask, - position_ids, - **kwargs, - ) - - outputs = self.module.apply( - inputs, - input_ids=jnp.array(input_ids, dtype="i4"), - attention_mask=jnp.array(attention_mask, dtype="i4"), - position_ids=jnp.array(position_ids, dtype="i4"), - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=jnp.array(encoder_attention_mask, dtype="i4"), - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - deterministic=not train, - rngs=rngs, - mutable=mutable, - method=_decoder_forward, - ) - - # add updated cache to model output - if past_key_values is not None and return_dict: - outputs, past = outputs - outputs["past_key_values"] = unfreeze(past["cache"]) - return outputs - elif past_key_values is not None and not return_dict: - outputs, past = outputs - outputs = outputs[:1] + (unfreeze(past["cache"]),) + outputs[1:] - - return outputs - - def __call__( - self, - pixel_values: jnp.ndarray, - input_ids: Optional[jnp.ndarray] = None, - attention_mask: Optional[jnp.ndarray] = None, - position_ids: Optional[jnp.ndarray] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - train: bool = False, - params: dict = None, - dropout_rng: PRNGKey = None, - ): - output_attentions = ( - output_attentions - if output_attentions is not None - else self.config.output_attentions - ) - output_hidden_states = ( - output_hidden_states - if output_hidden_states is not None - else self.config.output_hidden_states - ) - return_dict = ( - return_dict if return_dict is not None else self.config.return_dict - ) - - pixel_values = jnp.transpose(pixel_values, (0, 2, 3, 1)) - - # # prepare encoder inputs - # if encoder_attention_mask is None: - # encoder_attention_mask = jnp.ones_like(input_ids) - - # if position_ids is None: - # batch_size, sequence_length = input_ids.shape - # position_ids = jnp.broadcast_to(jnp.arange(sequence_length)[None, :], (batch_size, sequence_length)) - - # prepare decoder inputs - # if decoder_input_ids is None: - # decoder_input_ids = shift_tokens_right( - # input_ids, self.config.pad_token_id, decoder_start_token_id=self.config.decoder_start_token_id - # ) # TODO: Check how to use this - - if attention_mask is None: - attention_mask = jnp.ones_like(input_ids) - if position_ids is None: - batch_size, sequence_length = input_ids.shape - position_ids = jnp.broadcast_to( - jnp.arange(sequence_length)[None, :], (batch_size, sequence_length) - ) - - # Handle any PRNG if needed - rngs = {"dropout": dropout_rng} if dropout_rng is not None else {} - - return self.module.apply( - {"params": params or self.params}, - pixel_values=jnp.array(pixel_values, dtype=jnp.float32), - input_ids=jnp.array(input_ids, dtype="i4"), - attention_mask=jnp.array(attention_mask, dtype="i4"), - position_ids=jnp.array(position_ids, dtype="i4"), - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - deterministic=not train, - rngs=rngs, - ) - - -class 
FlaxViTGPT2ForConditionalGeneration(FlaxViTGPT2PreTrainedModel): - module_class = FlaxViTGPT2ForConditionalGenerationModule - dtype: jnp.dtype = jnp.float32 - - def decode( - self, - input_ids, - encoder_outputs, - encoder_attention_mask: Optional[jnp.ndarray] = None, - attention_mask: Optional[jnp.ndarray] = None, - position_ids: Optional[jnp.ndarray] = None, - past_key_values: dict = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - deterministic: bool = True, - params: dict = None, - dropout_rng: PRNGKey = None, - ): - output_attentions = ( - output_attentions - if output_attentions is not None - else self.config.output_attentions - ) - output_hidden_states = ( - output_hidden_states - if output_hidden_states is not None - else self.config.output_hidden_states - ) - return_dict = ( - return_dict if return_dict is not None else self.config.return_dict - ) - - encoder_hidden_states = encoder_outputs[0] - if encoder_attention_mask is None: - batch_size, sequence_length = encoder_hidden_states.shape[:2] - encoder_attention_mask = jnp.ones((batch_size, sequence_length)) - - batch_size, sequence_length = input_ids.shape - if attention_mask is None: - attention_mask = jnp.ones((batch_size, sequence_length)) - - if position_ids is None: - if past_key_values is not None: - raise ValueError( - "Make sure to provide `position_ids` when passing `past_key_values`." - ) - - position_ids = jnp.broadcast_to( - jnp.arange(sequence_length)[None, :], (batch_size, sequence_length) - ) - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - inputs = {"params": params or self.params} - - # if past_key_values are passed then cache is already initialized a private flag init_cache has to be - # passed down to ensure cache is used. 
It has to be made sure that cache is marked as mutable so that - # it can be changed by FlaxGPT2Attention module - if past_key_values: - inputs["cache"] = past_key_values - mutable = ["cache"] - else: - mutable = False - - def _decoder_forward( - module, - input_ids, - attention_mask, - position_ids, - **kwargs, - ): - decoder_module = module._get_decoder_module() - outputs = decoder_module( - input_ids, - attention_mask, - position_ids, - **kwargs, - ) - hidden_states = outputs[0] - - if self.config.tie_word_embeddings: - shared_embedding = module.model.variables["params"]["shared"][ - "embedding" - ] - lm_logits = module.lm_head.apply( - {"params": {"kernel": shared_embedding.T}}, hidden_states - ) - else: - lm_logits = module.lm_head(hidden_states) - - lm_logits += module.final_logits_bias - return lm_logits, outputs - - outputs = self.module.apply( - inputs, - input_ids=jnp.array(input_ids, dtype="i4"), - attention_mask=jnp.array(attention_mask, dtype="i4"), - position_ids=jnp.array(position_ids, dtype="i4"), - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=jnp.array(encoder_attention_mask, dtype="i4"), - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - deterministic=deterministic, - rngs=rngs, - mutable=mutable, - method=_decoder_forward, - ) - - if past_key_values is None: - lm_logits, outputs = outputs - else: - (lm_logits, outputs), past = outputs - - if return_dict: - outputs = FlaxCausalLMOutputWithCrossAttentions( - logits=lm_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - cross_attentions=outputs.cross_attentions, - ) - else: - outputs = (lm_logits,) + outputs[1:] - - # add updated cache to model output - if past_key_values is not None and return_dict: - outputs["past_key_values"] = unfreeze(past["cache"]) - return outputs - elif past_key_values is not None and not return_dict: - outputs = outputs[:1] + (unfreeze(past["cache"]),) + outputs[1:] - - return outputs - - def prepare_inputs_for_generation( - self, - input_ids, - max_length, - encoder_attention_mask: Optional[jnp.DeviceArray] = None, - attention_mask: Optional[jnp.DeviceArray] = None, - encoder_outputs=None, - **kwargs, - ): - # initializing the cache - batch_size, seq_length = input_ids.shape - - past_key_values = self.init_cache(batch_size, max_length, encoder_outputs) - # Note that usually one would have to put 0's in the attention_mask for x > input_ids.shape[-1] and x < cache_length. - # But since the decoder uses a causal mask, those positions are masked anyways. 
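        # (Illustrative note: under a causal mask the query at step i only attends to
        #  key positions j <= i, so any 1's left at cache slots that have not been
        #  written yet can never contribute to the attention scores.)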
- # Thus we can create a single static attention_mask here, which is more efficient for compilation - extended_attention_mask = jnp.ones((batch_size, max_length), dtype="i4") - if attention_mask is not None: - position_ids = attention_mask.cumsum(axis=-1) - 1 - extended_attention_mask = lax.dynamic_update_slice( - extended_attention_mask, attention_mask, (0, 0) - ) - else: - position_ids = jnp.broadcast_to( - jnp.arange(seq_length, dtype="i4")[None, :], (batch_size, seq_length) - ) - - return { - "past_key_values": past_key_values, - "encoder_outputs": encoder_outputs, - "encoder_attention_mask": encoder_attention_mask, - "attention_mask": extended_attention_mask, - "position_ids": position_ids, - } - - def update_inputs_for_generation(self, model_outputs, model_kwargs): - model_kwargs["past_key_values"] = model_outputs.past_key_values - model_kwargs["position_ids"] = ( - model_kwargs["position_ids"][:, -1:] + 1 - ) - return model_kwargs - - @classmethod - def from_vit_gpt2_pretrained( - cls, - vit_model_name_or_path: str = None, - gpt2_model_name_or_path: str = None, - *model_args, - **kwargs, - ) -> FlaxViTGPT2PreTrainedModel: - - kwargs_gpt2 = { - argument[len("gpt2_") :]: value - for argument, value in kwargs.items() - if argument.startswith("gpt2_") - } - - kwargs_vit = { - argument[len("vit_") :]: value - for argument, value in kwargs.items() - if argument.startswith("vit_") - } - - # remove gpt2, vit kwargs from kwargs - for key in kwargs_gpt2.keys(): - del kwargs["gpt2_" + key] - for key in kwargs_vit.keys(): - del kwargs["vit_" + key] - - # Load and initialize the gpt2 and vit model - gpt2_model = kwargs_gpt2.pop("model", None) - if gpt2_model is None: - assert ( - gpt2_model_name_or_path is not None - ), "If `model` is not defined as an argument, a `gpt2_model_name_or_path` has to be defined" - - if "config" not in kwargs_gpt2: - gpt2_config = GPT2Config.from_pretrained(gpt2_model_name_or_path) - kwargs_gpt2["config"] = gpt2_config - - kwargs_gpt2["config"].add_cross_attention = True - gpt2_model = FlaxGPT2Model.from_pretrained( - gpt2_model_name_or_path, *model_args, **kwargs_gpt2 - ) - - vit_model = kwargs_vit.pop("model", None) - if vit_model is None: - assert ( - vit_model_name_or_path is not None - ), "If `model` is not defined as an argument, a `vit_model_name_or_path` has to be defined" - - if "config" not in kwargs_vit: - vit_config = ViTConfig.from_pretrained(vit_model_name_or_path) - kwargs_vit["config"] = vit_config - - vit_model = FlaxViTModel.from_pretrained( - vit_model_name_or_path, *model_args, **kwargs_vit - ) - - # instantiate config with corresponding kwargs - dtype = kwargs.pop("dtype", jnp.float32) - config = ViTGPT2Config.from_vit_gpt2_configs( - vit_model.config, gpt2_model.config, **kwargs - ) - - # init model - model = cls(config, *model_args, dtype=dtype, **kwargs) - model.params["model"]["encoder"] = vit_model.params - model.params["model"]["decoder"] = gpt2_model.params - - return model - diff --git a/spaces/flax-community/multilingual-image-captioning/sections/references/papers.md b/spaces/flax-community/multilingual-image-captioning/sections/references/papers.md deleted file mode 100644 index 0fea101b40554174e4ef97119e2736cd408f33e3..0000000000000000000000000000000000000000 --- a/spaces/flax-community/multilingual-image-captioning/sections/references/papers.md +++ /dev/null @@ -1,75 +0,0 @@ -``` -@inproceedings{NIPS2017_3f5ee243, - author = {Vaswani, Ashish and Shazeer, Noam and Parmar, Niki and Uszkoreit, Jakob and Jones, Llion and Gomez, Aidan N 
and Kaiser, \L ukasz and Polosukhin, Illia}, - booktitle = {Advances in Neural Information Processing Systems}, - editor = {I. Guyon and U. V. Luxburg and S. Bengio and H. Wallach and R. Fergus and S. Vishwanathan and R. Garnett}, - pages = {}, - publisher = {Curran Associates, Inc.}, - title = {Attention is All you Need}, - url = {https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf}, - volume = {30}, - year = {2017} -} -``` - -``` -@inproceedings{wolf-etal-2020-transformers, - title = "Transformers: State-of-the-Art Natural Language Processing", - author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", - booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", - month = oct, - year = "2020", - address = "Online", - publisher = "Association for Computational Linguistics", - url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", - pages = "38--45" -} -``` - -``` -@inproceedings{changpinyo2021cc12m, - title = {{Conceptual 12M}: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts}, - author = {Changpinyo, Soravit and Sharma, Piyush and Ding, Nan and Soricut, Radu}, - booktitle = {CVPR}, - year = {2021}, -} -``` -``` -@InProceedings{mariannmt, - title = {Marian: Fast Neural Machine Translation in {C++}}, - author = {Junczys-Dowmunt, Marcin and Grundkiewicz, Roman and - Dwojak, Tomasz and Hoang, Hieu and Heafield, Kenneth and - Neckermann, Tom and Seide, Frank and Germann, Ulrich and - Fikri Aji, Alham and Bogoychev, Nikolay and - Martins, Andr\'{e} F. T. 
and Birch, Alexandra}, - booktitle = {Proceedings of ACL 2018, System Demonstrations}, - pages = {116--121}, - publisher = {Association for Computational Linguistics}, - year = {2018}, - month = {July}, - address = {Melbourne, Australia}, - url = {http://www.aclweb.org/anthology/P18-4020} -} -``` - -``` -@article{liu2020multilingual, - title={Multilingual Denoising Pre-training for Neural Machine Translation}, - author={Yinhan Liu and Jiatao Gu and Naman Goyal and Xian Li and Sergey Edunov and Marjan Ghazvininejad and Mike Lewis and Luke Zettlemoyer}, - year={2020}, - eprint={2001.08210}, - archivePrefix={arXiv}, - primaryClass={cs.CL} -} -``` - -``` -@misc{radford2021learning, - title={Learning Transferable Visual Models From Natural Language Supervision}, - author={Alec Radford and Jong Wook Kim and Chris Hallacy and Aditya Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever}, - year={2021}, - eprint={2103.00020}, - archivePrefix={arXiv}, - primaryClass={cs.CV} -} -``` \ No newline at end of file diff --git a/spaces/floriankrempl/mtg_rules_bot/tests/bot/test_qa.py b/spaces/floriankrempl/mtg_rules_bot/tests/bot/test_qa.py deleted file mode 100644 index fa42e93321ab86bca34029f5834ea9379a002ce3..0000000000000000000000000000000000000000 --- a/spaces/floriankrempl/mtg_rules_bot/tests/bot/test_qa.py +++ /dev/null @@ -1,20 +0,0 @@ -TEST_CASES_KEYWORDS = [ - "what happens if i attack with Ambush Viper and my opponent blocks with Goblin Striker?", - "what happens if i attack with a 1/1 deathouch creature and my opponent blocks with a 2/2 token?", - "what happens if i attack with a 1/1 first strike creature and my opponent blocks with a 3/1 creature?", - "what happnes if i attack with a 5/5 creature with trample and my opponent blocks with a 1/1 creature?", - "what happens if i attack with a 5/5 creature with trample and my opponent blocks with a 5/1 creature?", -] - - -TEST_CASES_CARD_KNOWLEDGE = [ - "what happens if i attack with Aggressive Mammoth and my opponent blocks with a 1/1 creature?", -] - - -TEST_CASES_CARD_SEARCH = [ - "tell me a 1/1 goblin creature without any keywords.", - "what cards can I add to my chatterfang commander deck?", - "tell me 3 cards for a green black commander deck.", - "tell me 3 counter spells.", -] diff --git a/spaces/florim/MedGPT/autogpt/configurator.py b/spaces/florim/MedGPT/autogpt/configurator.py deleted file mode 100644 index 1dc3be124f638b8859eb459bcb2d46696f62e2b7..0000000000000000000000000000000000000000 --- a/spaces/florim/MedGPT/autogpt/configurator.py +++ /dev/null @@ -1,134 +0,0 @@ -"""Configurator module.""" -import click -from colorama import Back, Fore, Style - -from autogpt import utils -from autogpt.config import Config -from autogpt.logs import logger -from autogpt.memory import get_supported_memory_backends - -CFG = Config() - - -def create_config( - continuous: bool, - continuous_limit: int, - ai_settings_file: str, - skip_reprompt: bool, - speak: bool, - debug: bool, - gpt3only: bool, - gpt4only: bool, - memory_type: str, - browser_name: str, - allow_downloads: bool, - skip_news: bool, -) -> None: - """Updates the config object with the given arguments. 
- - Args: - continuous (bool): Whether to run in continuous mode - continuous_limit (int): The number of times to run in continuous mode - ai_settings_file (str): The path to the ai_settings.yaml file - skip_reprompt (bool): Whether to skip the re-prompting messages at the beginning of the script - speak (bool): Whether to enable speak mode - debug (bool): Whether to enable debug mode - gpt3only (bool): Whether to enable GPT3.5 only mode - gpt4only (bool): Whether to enable GPT4 only mode - memory_type (str): The type of memory backend to use - browser_name (str): The name of the browser to use when using selenium to scrape the web - allow_downloads (bool): Whether to allow Auto-GPT to download files natively - skips_news (bool): Whether to suppress the output of latest news on startup - """ - CFG.set_debug_mode(False) - CFG.set_continuous_mode(False) - CFG.set_speak_mode(False) - - if debug: - logger.typewriter_log("Debug Mode: ", Fore.GREEN, "ENABLED") - CFG.set_debug_mode(True) - - if continuous: - logger.typewriter_log("Continuous Mode: ", Fore.RED, "ENABLED") - logger.typewriter_log( - "WARNING: ", - Fore.RED, - "Continuous mode is not recommended. It is potentially dangerous and may" - " cause your AI to run forever or carry out actions you would not usually" - " authorise. Use at your own risk.", - ) - CFG.set_continuous_mode(True) - - if continuous_limit: - logger.typewriter_log( - "Continuous Limit: ", Fore.GREEN, f"{continuous_limit}" - ) - CFG.set_continuous_limit(continuous_limit) - - # Check if continuous limit is used without continuous mode - if continuous_limit and not continuous: - raise click.UsageError("--continuous-limit can only be used with --continuous") - - if speak: - logger.typewriter_log("Speak Mode: ", Fore.GREEN, "ENABLED") - CFG.set_speak_mode(True) - - if gpt3only: - logger.typewriter_log("GPT3.5 Only Mode: ", Fore.GREEN, "ENABLED") - CFG.set_smart_llm_model(CFG.fast_llm_model) - - if gpt4only: - logger.typewriter_log("GPT4 Only Mode: ", Fore.GREEN, "ENABLED") - CFG.set_fast_llm_model(CFG.smart_llm_model) - - if memory_type: - supported_memory = get_supported_memory_backends() - chosen = memory_type - if chosen not in supported_memory: - logger.typewriter_log( - "ONLY THE FOLLOWING MEMORY BACKENDS ARE SUPPORTED: ", - Fore.RED, - f"{supported_memory}", - ) - logger.typewriter_log("Defaulting to: ", Fore.YELLOW, CFG.memory_backend) - else: - CFG.memory_backend = chosen - - if skip_reprompt: - logger.typewriter_log("Skip Re-prompt: ", Fore.GREEN, "ENABLED") - CFG.skip_reprompt = True - - if ai_settings_file: - file = ai_settings_file - - # Validate file - (validated, message) = utils.validate_yaml_file(file) - if not validated: - logger.typewriter_log("FAILED FILE VALIDATION", Fore.RED, message) - logger.double_check() - exit(1) - - logger.typewriter_log("Using AI Settings File:", Fore.GREEN, file) - CFG.ai_settings_file = file - CFG.skip_reprompt = True - - if allow_downloads: - logger.typewriter_log("Native Downloading:", Fore.GREEN, "ENABLED") - logger.typewriter_log( - "WARNING: ", - Fore.YELLOW, - f"{Back.LIGHTYELLOW_EX}Auto-GPT will now be able to download and save files to your machine.{Back.RESET} " - + "It is recommended that you monitor any files it downloads carefully.", - ) - logger.typewriter_log( - "WARNING: ", - Fore.YELLOW, - f"{Back.RED + Style.BRIGHT}ALWAYS REMEMBER TO NEVER OPEN FILES YOU AREN'T SURE OF!{Style.RESET_ALL}", - ) - CFG.allow_downloads = True - - if skip_news: - CFG.skip_news = True - - if browser_name: - CFG.selenium_web_browser 
= browser_name diff --git a/spaces/fuckyoudeki/AutoGPT/autogpt/config/config.py b/spaces/fuckyoudeki/AutoGPT/autogpt/config/config.py deleted file mode 100644 index 4b53df10e8d2832be7ffb321d9036aec5a47a79d..0000000000000000000000000000000000000000 --- a/spaces/fuckyoudeki/AutoGPT/autogpt/config/config.py +++ /dev/null @@ -1,251 +0,0 @@ -"""Configuration class to store the state of bools for different scripts access.""" -import os - -import openai -import yaml -from colorama import Fore -from dotenv import load_dotenv - -from autogpt.config.singleton import Singleton - -load_dotenv(verbose=True) - - -class Config(metaclass=Singleton): - """ - Configuration class to store the state of bools for different scripts access. - """ - - def __init__(self) -> None: - """Initialize the Config class""" - self.debug_mode = False - self.continuous_mode = False - self.continuous_limit = 0 - self.speak_mode = False - self.skip_reprompt = False - self.allow_downloads = False - self.skip_news = False - - self.ai_settings_file = os.getenv("AI_SETTINGS_FILE", "ai_settings.yaml") - self.fast_llm_model = os.getenv("FAST_LLM_MODEL", "gpt-3.5-turbo") - self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4") - self.fast_token_limit = int(os.getenv("FAST_TOKEN_LIMIT", 4000)) - self.smart_token_limit = int(os.getenv("SMART_TOKEN_LIMIT", 8000)) - self.browse_chunk_max_length = int(os.getenv("BROWSE_CHUNK_MAX_LENGTH", 8192)) - - self.openai_api_key = os.getenv("OPENAI_API_KEY") - self.temperature = float(os.getenv("TEMPERATURE", "1")) - self.use_azure = os.getenv("USE_AZURE") == "True" - self.execute_local_commands = ( - os.getenv("EXECUTE_LOCAL_COMMANDS", "False") == "True" - ) - self.restrict_to_workspace = ( - os.getenv("RESTRICT_TO_WORKSPACE", "True") == "True" - ) - - if self.use_azure: - self.load_azure_config() - openai.api_type = self.openai_api_type - openai.api_base = self.openai_api_base - openai.api_version = self.openai_api_version - - self.elevenlabs_api_key = os.getenv("ELEVENLABS_API_KEY") - self.elevenlabs_voice_1_id = os.getenv("ELEVENLABS_VOICE_1_ID") - self.elevenlabs_voice_2_id = os.getenv("ELEVENLABS_VOICE_2_ID") - - self.use_mac_os_tts = False - self.use_mac_os_tts = os.getenv("USE_MAC_OS_TTS") - - self.use_brian_tts = False - self.use_brian_tts = os.getenv("USE_BRIAN_TTS") - - self.github_api_key = os.getenv("GITHUB_API_KEY") - self.github_username = os.getenv("GITHUB_USERNAME") - - self.google_api_key = os.getenv("GOOGLE_API_KEY") - self.custom_search_engine_id = os.getenv("CUSTOM_SEARCH_ENGINE_ID") - - self.pinecone_api_key = os.getenv("PINECONE_API_KEY") - self.pinecone_region = os.getenv("PINECONE_ENV") - - self.weaviate_host = os.getenv("WEAVIATE_HOST") - self.weaviate_port = os.getenv("WEAVIATE_PORT") - self.weaviate_protocol = os.getenv("WEAVIATE_PROTOCOL", "http") - self.weaviate_username = os.getenv("WEAVIATE_USERNAME", None) - self.weaviate_password = os.getenv("WEAVIATE_PASSWORD", None) - self.weaviate_scopes = os.getenv("WEAVIATE_SCOPES", None) - self.weaviate_embedded_path = os.getenv("WEAVIATE_EMBEDDED_PATH") - self.weaviate_api_key = os.getenv("WEAVIATE_API_KEY", None) - self.use_weaviate_embedded = ( - os.getenv("USE_WEAVIATE_EMBEDDED", "False") == "True" - ) - - # milvus configuration, e.g., localhost:19530. 
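        # (Illustrative: setting MEMORY_BACKEND=milvus together with, e.g.,
        #  MILVUS_ADDR=milvus.internal:19530 in .env points the memory backend at a
        #  remote Milvus instance; the defaults below are only used as fallbacks.)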
- self.milvus_addr = os.getenv("MILVUS_ADDR", "localhost:19530") - self.milvus_collection = os.getenv("MILVUS_COLLECTION", "autogpt") - - self.image_provider = os.getenv("IMAGE_PROVIDER") - self.image_size = int(os.getenv("IMAGE_SIZE", 256)) - self.huggingface_api_token = os.getenv("HUGGINGFACE_API_TOKEN") - self.huggingface_image_model = os.getenv( - "HUGGINGFACE_IMAGE_MODEL", "CompVis/stable-diffusion-v1-4" - ) - self.huggingface_audio_to_text_model = os.getenv( - "HUGGINGFACE_AUDIO_TO_TEXT_MODEL" - ) - self.sd_webui_url = os.getenv("SD_WEBUI_URL", "http://localhost:7860") - self.sd_webui_auth = os.getenv("SD_WEBUI_AUTH") - - # Selenium browser settings - self.selenium_web_browser = os.getenv("USE_WEB_BROWSER", "chrome") - self.selenium_headless = os.getenv("HEADLESS_BROWSER", "True") == "True" - - # User agent header to use when making HTTP requests - # Some websites might just completely deny request with an error code if - # no user agent was found. - self.user_agent = os.getenv( - "USER_AGENT", - "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36" - " (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36", - ) - - self.redis_host = os.getenv("REDIS_HOST", "localhost") - self.redis_port = os.getenv("REDIS_PORT", "6379") - self.redis_password = os.getenv("REDIS_PASSWORD", "") - self.wipe_redis_on_start = os.getenv("WIPE_REDIS_ON_START", "True") == "True" - self.memory_index = os.getenv("MEMORY_INDEX", "auto-gpt") - # Note that indexes must be created on db 0 in redis, this is not configurable. - - self.memory_backend = os.getenv("MEMORY_BACKEND", "local") - # Initialize the OpenAI API client - openai.api_key = self.openai_api_key - - def get_azure_deployment_id_for_model(self, model: str) -> str: - """ - Returns the relevant deployment id for the model specified. - - Parameters: - model(str): The model to map to the deployment id. - - Returns: - The matching deployment id if found, otherwise an empty string. - """ - if model == self.fast_llm_model: - return self.azure_model_to_deployment_id_map[ - "fast_llm_model_deployment_id" - ] # type: ignore - elif model == self.smart_llm_model: - return self.azure_model_to_deployment_id_map[ - "smart_llm_model_deployment_id" - ] # type: ignore - elif model == "text-embedding-ada-002": - return self.azure_model_to_deployment_id_map[ - "embedding_model_deployment_id" - ] # type: ignore - else: - return "" - - AZURE_CONFIG_FILE = os.path.join(os.path.dirname(__file__), "..", "azure.yaml") - - def load_azure_config(self, config_file: str = AZURE_CONFIG_FILE) -> None: - """ - Loads the configuration parameters for Azure hosting from the specified file - path as a yaml file. - - Parameters: - config_file(str): The path to the config yaml file. 
DEFAULT: "../azure.yaml" - - Returns: - None - """ - try: - with open(config_file) as file: - config_params = yaml.load(file, Loader=yaml.FullLoader) - except FileNotFoundError: - config_params = {} - self.openai_api_type = config_params.get("azure_api_type") or "azure" - self.openai_api_base = config_params.get("azure_api_base") or "" - self.openai_api_version = ( - config_params.get("azure_api_version") or "2023-03-15-preview" - ) - self.azure_model_to_deployment_id_map = config_params.get("azure_model_map", []) - - def set_continuous_mode(self, value: bool) -> None: - """Set the continuous mode value.""" - self.continuous_mode = value - - def set_continuous_limit(self, value: int) -> None: - """Set the continuous limit value.""" - self.continuous_limit = value - - def set_speak_mode(self, value: bool) -> None: - """Set the speak mode value.""" - self.speak_mode = value - - def set_fast_llm_model(self, value: str) -> None: - """Set the fast LLM model value.""" - self.fast_llm_model = value - - def set_smart_llm_model(self, value: str) -> None: - """Set the smart LLM model value.""" - self.smart_llm_model = value - - def set_fast_token_limit(self, value: int) -> None: - """Set the fast token limit value.""" - self.fast_token_limit = value - - def set_smart_token_limit(self, value: int) -> None: - """Set the smart token limit value.""" - self.smart_token_limit = value - - def set_browse_chunk_max_length(self, value: int) -> None: - """Set the browse_website command chunk max length value.""" - self.browse_chunk_max_length = value - - def set_openai_api_key(self, value: str) -> None: - """Set the OpenAI API key value.""" - self.openai_api_key = value - - def set_elevenlabs_api_key(self, value: str) -> None: - """Set the ElevenLabs API key value.""" - self.elevenlabs_api_key = value - - def set_elevenlabs_voice_1_id(self, value: str) -> None: - """Set the ElevenLabs Voice 1 ID value.""" - self.elevenlabs_voice_1_id = value - - def set_elevenlabs_voice_2_id(self, value: str) -> None: - """Set the ElevenLabs Voice 2 ID value.""" - self.elevenlabs_voice_2_id = value - - def set_google_api_key(self, value: str) -> None: - """Set the Google API key value.""" - self.google_api_key = value - - def set_custom_search_engine_id(self, value: str) -> None: - """Set the custom search engine id value.""" - self.custom_search_engine_id = value - - def set_pinecone_api_key(self, value: str) -> None: - """Set the Pinecone API key value.""" - self.pinecone_api_key = value - - def set_pinecone_region(self, value: str) -> None: - """Set the Pinecone region value.""" - self.pinecone_region = value - - def set_debug_mode(self, value: bool) -> None: - """Set the debug mode value.""" - self.debug_mode = value - - -def check_openai_api_key() -> None: - """Check if the OpenAI API key is set in config.py or as an environment variable.""" - cfg = Config() - if not cfg.openai_api_key: - print( - Fore.RED - + "Please set your OpenAI API key in .env or as an environment variable." 
- ) - print("You can get your key from https://platform.openai.com/account/api-keys") - exit(1) diff --git a/spaces/gaspar-avit/Movie_Poster_Generator/app.py b/spaces/gaspar-avit/Movie_Poster_Generator/app.py deleted file mode 100644 index c05a3cc4f5cf8d04c764082958805775a06509b1..0000000000000000000000000000000000000000 --- a/spaces/gaspar-avit/Movie_Poster_Generator/app.py +++ /dev/null @@ -1,352 +0,0 @@ -## Alternative movie poster generator - - -import streamlit as st -import pandas as pd -import numpy as np -import json -import requests -import os -import io -import string -import random -from streamlit import session_state as session -from datetime import time, datetime -from zipfile import ZipFile -from htbuilder import HtmlElement, div, ul, li, br, hr, a, p, img, styles, classes, fonts -from htbuilder.units import percent, px -from htbuilder.funcs import rgba, rgb -from PIL import Image - - - -############################### -## --- GLOBAL VARIABLES ---- ## -############################### - - -PATH_JSON = '/home/user/.kaggle/kaggle.json' - - - -# Environment variables to authenticate Kaggle account -os.environ['KAGGLE_USERNAME'] = st.secrets['username'] -os.environ['KAGGLE_KEY'] = st.secrets['key'] -os.environ['KAGGLE_CONFIG_DIR'] = PATH_JSON - -from kaggle.api.kaggle_api_extended import KaggleApi - - - -############################### -## ------- FUNCTIONS ------- ## -############################### - -def link(link, text, **style): - return a(_href=link, _target="_blank", style=styles(**style))(text) - - -def layout(*args): - - style = """ - - """ - - style_div = styles( - position="fixed", - left=0, - bottom=0, - margin=px(0, 0, 0, 0), - width=percent(100), - color="black", - text_align="center", - height="auto", - opacity=1 - ) - - style_hr = styles( - display="block", - margin=px(4, 4, "auto", "auto"), - border_style="inset", - border_width=px(0) - ) - - body = p() - foot = div( - style=style_div - )( - hr( - style=style_hr - ), - body - ) - - st.markdown(style, unsafe_allow_html=True) - - for arg in args: - if isinstance(arg, str): - body(arg) - - elif isinstance(arg, HtmlElement): - body(arg) - - st.markdown(str(foot), unsafe_allow_html=True) - - -def footer(): - myargs = [ - "Made with ❤️ by ", - link("https://www.linkedin.com/in/gaspar-avit/?locale=en_US", "Gaspar Avit"), - ] - layout(*myargs) - - -def authenticate_kaggle(): - # Connect to kaggle API - - # Save credentials to json file - if not os.path.exists(PATH_JSON): - api_token = {"username":st.secrets['username'],"key":st.secrets['key']} - with open(PATH_JSON, 'w') as file: - json.dump(api_token, file) - - # Activate Kaggle API - global api - api = KaggleApi() - api.authenticate() - - -@st.experimental_memo(persist=True, show_spinner=False, suppress_st_warning=True, max_entries=1) -def load_dataset(): - """ - Load Dataset from Kaggle - -return: dataframe containing dataset - """ - - ## --- Connect to kaggle API --- ## - # Save credentials to json file - if not os.path.exists(PATH_JSON): - api_token = {"username":st.secrets['username'],"key":st.secrets['key']} - with open(PATH_JSON, 'w') as file: - json.dump(api_token, file) - - # Activate Kaggle API - global api - api = KaggleApi() - api.authenticate() - ## ----------------------------- ## - - # Downloading Movies dataset - api.dataset_download_file('rounakbanik/the-movies-dataset', 'movies_metadata.csv') - - # Extract data - zf = ZipFile('movies_metadata.csv.zip') - zf.extractall() - zf.close() - - # Create dataframe - data = pd.read_csv('movies_metadata.csv', 
low_memory=False) - data['year'] = data["release_date"].map(lambda x: x.split('-')[0] if isinstance(x, str) else '0') - data['title_year'] = data['title'] + ' (' + data['year'] + ')' - - return data - - -def query_summary(text): - """ - Get summarization from HuggingFace Inference API - -param text: text to be summarized - -return: summarized text - """ - API_URL = "https://api-inference.huggingface.co/models/facebook/bart-large-cnn" - headers = {"Authorization": f"Bearer {st.secrets['hf_token']}"} - payload = {"inputs": f"{text}",} - - response = requests.request("POST", API_URL, headers=headers, json=payload).json() - - try: - text = response[0].get('summary_text') - except: - text = response[0] - return text - - -def query_generate(text, title, genres, year, selected_model='Stable Diffusion v1.5'): - """ - Get image from HuggingFace Inference API - -param text: text to generate image - -param title: title of the movie - -param genres: genres of the movie - -param year: year of the movie - - -return: generated image - """ - - if selected_model=='Stable Diffusion XL': - API_URL = "https://api-inference.huggingface.co/models/stabilityai/stable-diffusion-xl-base-1.0" - - elif selected_model=='Stable Diffusion v2.1': - API_URL = "https://api-inference.huggingface.co/models/stabilityai/stable-diffusion-2-1" - - elif selected_model=='Stable Diffusion v1.5': - API_URL = "https://api-inference.huggingface.co/models/runwayml/stable-diffusion-v1-5" - - else: - raise ValueError("Value not valid for argument 'selected_model'.") - - headers = {"Authorization": f"Bearer {st.secrets['hf_token']}"} - text = 'A Poster for the movie ' + title.split('(')[0] + 'in portrait mode based on the following synopsis: \"' + text + '\". Style: ' + genres + '. Year ' + year + \ - '. 
Ignore ' + ''.join(random.choices(string.ascii_letters, k=10)) - payload = {"inputs": f"{text}", "options": {"use_cache": "false"},} - - response = requests.post(API_URL, headers=headers, json=payload) - - try: - response_str = response.content.decode("utf-8") - if 'error' in response_str: - payload = {"inputs": f"{text}", - "options": {"wait_for_model": True}, - } - response = requests.post(API_URL, headers=headers, json=payload) - except: - pass - - return response.content - -@st.experimental_memo(persist=False, show_spinner=False, suppress_st_warning=True) -def generate_poster(movie_data, selected_model): - """ - Function for recommending movies - -param movie_data: metadata of movie selected by user - -return: image of generated alternative poster - """ - - # Get movie metadata - genres = [i['name'] for i in eval(movie_data['genres'].values[0])] - genres_string = ', '.join(genres) - - year = movie_data['year'].values[0] - title = movie_data['title'].values[0] - - - # Get summarization of movie synopsis - st.text("") - with st.spinner("Summarizing synopsis..."): - synopsis_sum = query_summary(movie_data.overview.values[0]) - - # Print summarized synopsis - st.text("") - synopsis_expander = st.expander("Show synopsis", expanded=False) - with synopsis_expander: - st.subheader("Summarized synopsis:") - col1, col2 = st.columns([5, 1]) - with col1: - st.write(synopsis_sum) - st.text("") - st.text("") - st.text("") - st.text("") - - # Get image based on synopsis - with st.spinner("Generating poster..."): - response_content = query_generate(synopsis_sum, title, genres_string, year, selected_model) - - # Show image - try: - image = Image.open(io.BytesIO(response_content)) - - st.text("") - st.text("") - st.subheader("Resulting poster:") - st.text("") - col1, col2, col3 = st.columns([1, 5, 1]) - with col2: - st.image(image, caption="Movie: \"" + movie_data.title.values[0] + "\"") - del image - st.text("") - st.text("") - st.text("") - st.text("") - - except: - col1, col2 = st.columns([5, 1]) - with col1: - st.write(response_content) - - return response_content -# ------------------------------------------------------- # - - -############################### -## --------- MAIN ---------- ## -############################### - - -if __name__ == "__main__": - - - # Initialize image variable - poster = None - - ## --- Page config ------------ ## - # Set page title - st.title(""" - Movie Poster Generator :film_frames: - - #### This is a movie poster generator based on movie's synopsis :sunglasses: - - #### Just select the title of a movie to generate an alternative poster. 
- """) - - # Set page footer - footer() - - # Set sidebar with info - st.sidebar.markdown("## Generating movie posters using Stable Diffusion") - st.sidebar.markdown("This streamlit space aims to generate movie posters based on synopsis.") - st.sidebar.markdown("Firstly, the synopsis of the selected movie is extracted from the dataset and then summarized using Facebook's BART model.") - st.sidebar.markdown("Once the movie's summary is ready, it is passed to the Stable Diffusion v1.5 model using HF's Inference API, with some prompt tuning.") - ## ---------------------------- ## - - - ## Create dataset - data = load_dataset() - - st.text("") - st.text("") - st.text("") - st.text("") - - ## Select box with all the movies as choices - session.selected_movie = st.selectbox(label="Select a movie to generate alternative poster", options=data.title_year) - - st.text("") - st.text("") - - ## Create button to trigger poster generation - sd_options = ['Stable Diffusion v1.5', 'Stable Diffusion v2.1', 'Stable Diffusion XL'] - buffer1, col1, col2, buffer2 = st.columns([0.3, 1, 1, 1]) - session.selected_model = col1.selectbox(label="Select SD model version", options=sd_options, label_visibility="collapsed") - is_clicked = col2.button(label="Generate poster!") - - st.text("") - st.text("") - - ## Clear cache between runs - st.runtime.legacy_caching.clear_cache() - generate_poster.clear() - - ## Generate poster - if is_clicked: - poster = generate_poster(data[data.title_year==session.selected_movie], session.selected_model) - generate_poster.clear() - st.runtime.legacy_caching.clear_cache() - - st.text("") - st.text("") - st.text("") - st.text("") diff --git a/spaces/ghlee94/MEDIAR/README.md b/spaces/ghlee94/MEDIAR/README.md deleted file mode 100644 index 993b3d098fa70a18dbad7f9d0e25f2e95e510e47..0000000000000000000000000000000000000000 --- a/spaces/ghlee94/MEDIAR/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MEDIAR -emoji: 🔥 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/utils/base.py b/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/utils/base.py deleted file mode 100644 index d5933654e173328aa5e7abec317b4cc04aaf864d..0000000000000000000000000000000000000000 --- a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/utils/base.py +++ /dev/null @@ -1,68 +0,0 @@ -import re -import torch.nn as nn - - -class BaseObject(nn.Module): - def __init__(self, name=None): - super().__init__() - self._name = name - - @property - def __name__(self): - if self._name is None: - name = self.__class__.__name__ - s1 = re.sub("(.)([A-Z][a-z]+)", r"\1_\2", name) - return re.sub("([a-z0-9])([A-Z])", r"\1_\2", s1).lower() - else: - return self._name - - -class Metric(BaseObject): - pass - - -class Loss(BaseObject): - def __add__(self, other): - if isinstance(other, Loss): - return SumOfLosses(self, other) - else: - raise ValueError("Loss should be inherited from `Loss` class") - - def __radd__(self, other): - return self.__add__(other) - - def __mul__(self, value): - if isinstance(value, (int, float)): - return MultipliedLoss(self, value) - else: - raise ValueError("Loss should be inherited from `BaseLoss` class") - - def __rmul__(self, other): - return self.__mul__(other) - - -class SumOfLosses(Loss): - def __init__(self, l1, l2): - name = "{} + {}".format(l1.__name__, l2.__name__) - 
super().__init__(name=name) - self.l1 = l1 - self.l2 = l2 - - def __call__(self, *inputs): - return self.l1.forward(*inputs) + self.l2.forward(*inputs) - - -class MultipliedLoss(Loss): - def __init__(self, loss, multiplier): - - # resolve name - if len(loss.__name__.split("+")) > 1: - name = "{} * ({})".format(multiplier, loss.__name__) - else: - name = "{} * {}".format(multiplier, loss.__name__) - super().__init__(name=name) - self.loss = loss - self.multiplier = multiplier - - def __call__(self, *inputs): - return self.multiplier * self.loss.forward(*inputs) diff --git a/spaces/gordonchan/h2oo/evaluate_params.py b/spaces/gordonchan/h2oo/evaluate_params.py deleted file mode 100644 index 40f89ecb40ee60cb53ed12b8764e28b309979c63..0000000000000000000000000000000000000000 --- a/spaces/gordonchan/h2oo/evaluate_params.py +++ /dev/null @@ -1,52 +0,0 @@ -input_args_list = ['model_state', 'my_db_state', 'selection_docs_state'] - - -no_default_param_names = [ - 'instruction', - 'iinput', - 'context', - 'instruction_nochat', - 'iinput_nochat', -] - -gen_hyper = ['temperature', - 'top_p', - 'top_k', - 'num_beams', - 'max_new_tokens', - 'min_new_tokens', - 'early_stopping', - 'max_time', - 'repetition_penalty', - 'num_return_sequences', - 'do_sample', - ] - -eval_func_param_names = ['instruction', - 'iinput', - 'context', - 'stream_output', - 'prompt_type', - 'prompt_dict'] + \ - gen_hyper + \ - ['chat', - 'instruction_nochat', - 'iinput_nochat', - 'langchain_mode', - 'add_chat_history_to_context', - 'langchain_action', - 'langchain_agents', - 'top_k_docs', - 'chunk', - 'chunk_size', - 'document_subset', - 'document_choice', - ] - -# form evaluate defaults for submit_nochat_api -eval_func_param_names_defaults = eval_func_param_names.copy() -for k in no_default_param_names: - if k in eval_func_param_names_defaults: - eval_func_param_names_defaults.remove(k) - -eval_extra_columns = ['prompt', 'response', 'score'] diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Download Film Crazy Little Thing Called Love 2 58 Nam and Chones Love Story Continues.md b/spaces/gotiQspiryo/whisper-ui/examples/Download Film Crazy Little Thing Called Love 2 58 Nam and Chones Love Story Continues.md deleted file mode 100644 index a59fd08720a72551f5a922142438ddd22a238302..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Download Film Crazy Little Thing Called Love 2 58 Nam and Chones Love Story Continues.md +++ /dev/null @@ -1,5 +0,0 @@ - -

-Peanut is an app that provides a safe space for mamas, mamas-to-be, and those trying to conceive, to build friendships, ask questions, and find support.
-We love that you get the chance to meet other women in your area who are at a similar life stage (think dating app, but for finding new mom friends!), and have access to a community that is there to listen, share information and offer valuable advice.
-There are groups for everything, from pregnancy to breastfeeding, the app is a great way to connect with women like you.
-Now is the perfect time to download Peanut and set it up on your device so you can start meeting moms with similar due dates, and jumping into the community to ask all those pressing questions.
-Download Film Crazy Little Thing Called Love 2 58
-DOWNLOAD https://urlgoal.com/2uyLWk
-aaccfb2cb3
-
    \ No newline at end of file diff --git a/spaces/gradio/HuBERT/fairseq/models/masked_lm.py b/spaces/gradio/HuBERT/fairseq/models/masked_lm.py deleted file mode 100644 index c786de9125551f7247618b0a1d0867477894c755..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/models/masked_lm.py +++ /dev/null @@ -1,403 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderModel, - register_model, - register_model_architecture, -) -from fairseq.modules import ( - LayerNorm, - SinusoidalPositionalEmbedding, - TransformerSentenceEncoder, -) -from fairseq.modules.transformer_sentence_encoder import init_bert_params - - -logger = logging.getLogger(__name__) - - -@register_model("masked_lm") -class MaskedLMModel(FairseqEncoderModel): - """ - Class for training a Masked Language Model. It also supports an - additional sentence level prediction if the sent-loss argument is set. - """ - - def __init__(self, args, encoder): - super().__init__(encoder) - self.args = args - - # if specified then apply bert initialization on the model. We need - # to explictly call this to make sure that the output embeddings - # and projection layers are also correctly initialized - if getattr(args, "apply_bert_init", False): - self.apply(init_bert_params) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # Arguments related to dropout - parser.add_argument( - "--dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--attention-dropout", - type=float, - metavar="D", - help="dropout probability for" " attention weights", - ) - parser.add_argument( - "--act-dropout", - type=float, - metavar="D", - help="dropout probability after" " activation in FFN", - ) - - # Arguments related to hidden states and self-attention - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="num encoder layers" - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="N", - help="num encoder attention heads", - ) - - # Arguments related to input and output embeddings - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--share-encoder-input-output-embed", - action="store_true", - help="share encoder input" " and output embeddings", - ) - parser.add_argument( - "--encoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the encoder", - ) - parser.add_argument( - "--no-token-positional-embeddings", - action="store_true", - help="if set, disables positional embeddings" " (outside self attention)", - ) - parser.add_argument( - "--num-segment", type=int, metavar="N", help="num segment in the input" - ) - parser.add_argument( - "--max-positions", type=int, help="number of positional embeddings to learn" - ) - - # Arguments related to sentence level prediction - parser.add_argument( - "--sentence-class-num", - type=int, - metavar="N", - help="number of classes for sentence task", - ) - parser.add_argument( - "--sent-loss", - action="store_true", - 
help="if set," " calculate sentence level predictions", - ) - - # Arguments related to parameter initialization - parser.add_argument( - "--apply-bert-init", - action="store_true", - help="use custom param initialization for BERT", - ) - - # misc params - parser.add_argument( - "--activation-fn", - choices=utils.get_available_activation_fns(), - help="activation function to use", - ) - parser.add_argument( - "--pooler-activation-fn", - choices=utils.get_available_activation_fns(), - help="Which activation function to use for pooler layer.", - ) - parser.add_argument( - "--encoder-normalize-before", - action="store_true", - help="apply layernorm before each encoder block", - ) - - def forward(self, src_tokens, segment_labels=None, **kwargs): - return self.encoder(src_tokens, segment_labels=segment_labels, **kwargs) - - def max_positions(self): - return self.encoder.max_positions - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - # make sure all arguments are present in older models - base_architecture(args) - - if not hasattr(args, "max_positions"): - args.max_positions = args.tokens_per_sample - - logger.info(args) - - encoder = MaskedLMEncoder(args, task.dictionary) - return cls(args, encoder) - - -class MaskedLMEncoder(FairseqEncoder): - """ - Encoder for Masked Language Modelling. - """ - - def __init__(self, args, dictionary): - super().__init__(dictionary) - - self.padding_idx = dictionary.pad() - self.vocab_size = dictionary.__len__() - self.max_positions = args.max_positions - - self.sentence_encoder = TransformerSentenceEncoder( - padding_idx=self.padding_idx, - vocab_size=self.vocab_size, - num_encoder_layers=args.encoder_layers, - embedding_dim=args.encoder_embed_dim, - ffn_embedding_dim=args.encoder_ffn_embed_dim, - num_attention_heads=args.encoder_attention_heads, - dropout=args.dropout, - attention_dropout=args.attention_dropout, - activation_dropout=args.act_dropout, - max_seq_len=self.max_positions, - num_segments=args.num_segment, - use_position_embeddings=not args.no_token_positional_embeddings, - encoder_normalize_before=args.encoder_normalize_before, - apply_bert_init=args.apply_bert_init, - activation_fn=args.activation_fn, - learned_pos_embedding=args.encoder_learned_pos, - ) - - self.share_input_output_embed = args.share_encoder_input_output_embed - self.embed_out = None - self.sentence_projection_layer = None - self.sentence_out_dim = args.sentence_class_num - self.lm_output_learned_bias = None - - # Remove head is set to true during fine-tuning - self.load_softmax = not getattr(args, "remove_head", False) - - self.masked_lm_pooler = nn.Linear( - args.encoder_embed_dim, args.encoder_embed_dim - ) - self.pooler_activation = utils.get_activation_fn(args.pooler_activation_fn) - - self.lm_head_transform_weight = nn.Linear( - args.encoder_embed_dim, args.encoder_embed_dim - ) - self.activation_fn = utils.get_activation_fn(args.activation_fn) - self.layer_norm = LayerNorm(args.encoder_embed_dim) - - self.lm_output_learned_bias = None - if self.load_softmax: - self.lm_output_learned_bias = nn.Parameter(torch.zeros(self.vocab_size)) - - if not self.share_input_output_embed: - self.embed_out = nn.Linear( - args.encoder_embed_dim, self.vocab_size, bias=False - ) - - if args.sent_loss: - self.sentence_projection_layer = nn.Linear( - args.encoder_embed_dim, self.sentence_out_dim, bias=False - ) - - def forward(self, src_tokens, segment_labels=None, masked_tokens=None, **unused): - """ - Forward pass for Masked LM encoder. 
This first computes the token - embedding using the token embedding matrix, position embeddings (if - specified) and segment embeddings (if specified). - - Here we assume that the sentence representation corresponds to the - output of the classification_token (see bert_task or cross_lingual_lm - task for more details). - Args: - - src_tokens: B x T matrix representing sentences - - segment_labels: B x T matrix representing segment label for tokens - Returns: - - a tuple of the following: - - logits for predictions in format B x T x C to be used in - softmax afterwards - - a dictionary of additional data, where 'pooled_output' contains - the representation for classification_token and 'inner_states' - is a list of internal model states used to compute the - predictions (similar in ELMO). 'sentence_logits' - is the prediction logit for NSP task and is only computed if - this is specified in the input arguments. - """ - - inner_states, sentence_rep = self.sentence_encoder( - src_tokens, - segment_labels=segment_labels, - ) - - x = inner_states[-1].transpose(0, 1) - # project masked tokens only - if masked_tokens is not None: - x = x[masked_tokens, :] - x = self.layer_norm(self.activation_fn(self.lm_head_transform_weight(x))) - - pooled_output = self.pooler_activation(self.masked_lm_pooler(sentence_rep)) - - # project back to size of vocabulary - if self.share_input_output_embed and hasattr( - self.sentence_encoder.embed_tokens, "weight" - ): - x = F.linear(x, self.sentence_encoder.embed_tokens.weight) - elif self.embed_out is not None: - x = self.embed_out(x) - if self.lm_output_learned_bias is not None: - x = x + self.lm_output_learned_bias - sentence_logits = None - if self.sentence_projection_layer: - sentence_logits = self.sentence_projection_layer(pooled_output) - - return x, { - "inner_states": inner_states, - "pooled_output": pooled_output, - "sentence_logits": sentence_logits, - } - - def max_positions(self): - """Maximum output length supported by the encoder.""" - return self.max_positions - - def upgrade_state_dict_named(self, state_dict, name): - if isinstance( - self.sentence_encoder.embed_positions, SinusoidalPositionalEmbedding - ): - state_dict[ - name + ".sentence_encoder.embed_positions._float_tensor" - ] = torch.FloatTensor(1) - if not self.load_softmax: - for k in list(state_dict.keys()): - if ( - "embed_out.weight" in k - or "sentence_projection_layer.weight" in k - or "lm_output_learned_bias" in k - ): - del state_dict[k] - return state_dict - - -@register_model_architecture("masked_lm", "masked_lm") -def base_architecture(args): - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.act_dropout = getattr(args, "act_dropout", 0.0) - - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.share_encoder_input_output_embed = getattr( - args, "share_encoder_input_output_embed", False - ) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.num_segment = getattr(args, "num_segment", 2) - - args.sentence_class_num = getattr(args, "sentence_class_num", 2) - args.sent_loss = getattr(args, "sent_loss", False) - - args.apply_bert_init = getattr(args, 
"apply_bert_init", False) - - args.activation_fn = getattr(args, "activation_fn", "relu") - args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh") - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - - -@register_model_architecture("masked_lm", "bert_base") -def bert_base_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 768) - args.share_encoder_input_output_embed = getattr( - args, "share_encoder_input_output_embed", True - ) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", True) - args.num_segment = getattr(args, "num_segment", 2) - - args.encoder_layers = getattr(args, "encoder_layers", 12) - - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 12) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 3072) - - args.sentence_class_num = getattr(args, "sentence_class_num", 2) - args.sent_loss = getattr(args, "sent_loss", True) - - args.apply_bert_init = getattr(args, "apply_bert_init", True) - - args.activation_fn = getattr(args, "activation_fn", "gelu") - args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh") - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - base_architecture(args) - - -@register_model_architecture("masked_lm", "bert_large") -def bert_large_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_layers = getattr(args, "encoder_layers", 24) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - bert_base_architecture(args) - - -@register_model_architecture("masked_lm", "xlm_base") -def xlm_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.share_encoder_input_output_embed = getattr( - args, "share_encoder_input_output_embed", True - ) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", True) - args.num_segment = getattr(args, "num_segment", 1) - - args.encoder_layers = getattr(args, "encoder_layers", 6) - - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - - args.sent_loss = getattr(args, "sent_loss", False) - - args.activation_fn = getattr(args, "activation_fn", "gelu") - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh") - args.apply_bert_init = getattr(args, "apply_bert_init", True) - base_architecture(args) diff --git a/spaces/gradio/HuBERT/fairseq/modules/lightweight_convolution.py b/spaces/gradio/HuBERT/fairseq/modules/lightweight_convolution.py deleted file mode 100644 index ec11a9507951c9e8f3564753841dd9c74a4900e0..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/modules/lightweight_convolution.py +++ /dev/null @@ -1,310 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.unfold import unfold1d - - -def LightweightConv( - input_size, - kernel_size=1, - padding_l=None, - num_heads=1, - weight_dropout=0.0, - weight_softmax=False, - bias=False, -): - if torch.cuda.is_available(): - try: - from fairseq.modules.lightconv_layer import LightconvLayer - - return LightconvLayer( - input_size, - kernel_size=kernel_size, - padding_l=padding_l, - num_heads=num_heads, - weight_dropout=weight_dropout, - weight_softmax=weight_softmax, - bias=bias, - ) - except ImportError as e: - print(e) - return LightweightConv1dTBC( - input_size, - kernel_size=kernel_size, - padding_l=padding_l, - num_heads=num_heads, - weight_dropout=weight_dropout, - weight_softmax=weight_softmax, - bias=bias, - ) - - -class LightweightConv1d(nn.Module): - """Lightweight Convolution assuming the input is BxCxT - This is just an example that explains LightConv clearer than the TBC version. - We don't use this module in the model. - - Args: - input_size: # of channels of the input and output - kernel_size: convolution channels - padding: padding - num_heads: number of heads used. The weight is of shape - `(num_heads, 1, kernel_size)` - weight_softmax: normalize the weight with softmax before the convolution - - Shape: - Input: BxCxT, i.e. (batch_size, input_size, timesteps) - Output: BxCxT, i.e. (batch_size, input_size, timesteps) - - Attributes: - weight: the learnable weights of the module of shape - `(num_heads, 1, kernel_size)` - bias: the learnable bias of the module of shape `(input_size)` - """ - - def __init__( - self, - input_size, - kernel_size=1, - padding=0, - num_heads=1, - weight_softmax=False, - bias=False, - weight_dropout=0.0, - ): - super().__init__() - self.input_size = input_size - self.kernel_size = kernel_size - self.num_heads = num_heads - self.padding = padding - self.weight_softmax = weight_softmax - self.weight = nn.Parameter(torch.Tensor(num_heads, 1, kernel_size)) - - if bias: - self.bias = nn.Parameter(torch.Tensor(input_size)) - else: - self.bias = None - self.weight_dropout_module = FairseqDropout( - weight_dropout, module_name=self.__class__.__name__ - ) - self.reset_parameters() - - def reset_parameters(self): - nn.init.xavier_uniform_(self.weight) - if self.bias is not None: - nn.init.constant_(self.bias, 0.0) - - def forward(self, input): - """ - input size: B x C x T - output size: B x C x T - """ - B, C, T = input.size() - H = self.num_heads - - weight = self.weight - if self.weight_softmax: - weight = F.softmax(weight, dim=-1) - - weight = self.weight_dropout_module(weight) - # Merge every C/H entries into the batch dimension (C = self.input_size) - # B x C x T -> (B * C/H) x H x T - # One can also expand the weight to C x 1 x K by a factor of C/H - # and do not reshape the input instead, which is slow though - input = input.view(-1, H, T) - output = F.conv1d(input, weight, padding=self.padding, groups=self.num_heads) - output = output.view(B, C, T) - if self.bias is not None: - output = output + self.bias.view(1, -1, 1) - - return output - - -@with_incremental_state -class LightweightConv1dTBC(nn.Module): - """Lightweight Convolution assuming the input is TxBxC - Args: - input_size: # of channels of the input - kernel_size: convolution channels - padding_l: padding to the left when using "same" padding - num_heads: 
number of heads used. The weight is of shape (num_heads, 1, kernel_size) - weight_dropout: the drop rate of the DropConnect to drop the weight - weight_softmax: normalize the weight with softmax before the convolution - bias: use bias - - Shape: - Input: TxBxC, i.e. (timesteps, batch_size, input_size) - Output: TxBxC, i.e. (timesteps, batch_size, input_size) - - Attributes: - weight: the learnable weights of the module of shape - `(num_heads, 1, kernel_size)` - bias: the learnable bias of the module of shape `(input_size)` - """ - - def __init__( - self, - input_size, - kernel_size=1, - padding_l=None, - num_heads=1, - weight_dropout=0.0, - weight_softmax=False, - bias=False, - ): - super().__init__() - self.input_size = input_size - self.kernel_size = kernel_size - self.padding_l = padding_l - self.num_heads = num_heads - self.weight_dropout_module = FairseqDropout( - weight_dropout, module_name=self.__class__.__name__ - ) - self.weight_softmax = weight_softmax - - self.weight = nn.Parameter(torch.Tensor(num_heads, 1, kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(input_size)) - else: - self.bias = None - - self.reset_parameters() - self.onnx_trace = False - - def reset_parameters(self): - nn.init.xavier_uniform_(self.weight) - if self.bias is not None: - nn.init.constant_(self.bias, 0.0) - - def forward(self, x, incremental_state=None, unfold=False): - """Assuming the input, x, of the shape T x B x C and producing an output in the shape T x B x C - args: - x: Input of shape T x B x C, i.e. (timesteps, batch_size, input_size) - incremental_state: A dict to keep the state - unfold: unfold the input or not. If not, we use the matrix trick instead - """ - unfold = unfold or (incremental_state is not None) - - if unfold: - output = self._forward_unfolded(x, incremental_state) - else: - output = self._forward_expanded(x, incremental_state) - - if self.bias is not None: - output = output + self.bias.view(1, 1, -1) - return output - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def _forward_unfolded(self, x, incremental_state): - """The conventional implementation of convolutions. - Unfolding the input by having a window shifting to the right.""" - T, B, C = x.size() - K, H = self.kernel_size, self.num_heads - R = C // H - assert R * H == C == self.input_size - - weight = self.weight.view(H, K) - if incremental_state is not None: - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is None: - input_buffer = x.new() - x_unfold = torch.cat([input_buffer, x.unsqueeze(3)], dim=3) - if self.kernel_size > 1: - self._set_input_buffer( - incremental_state, x_unfold[:, :, :, -self.kernel_size + 1 :] - ) - x_unfold = x_unfold.view(T * B * H, R, -1) - else: - # unfold the input: T x B x C --> T' x B x C x K - x_unfold = unfold1d(x, self.kernel_size, self.padding_l, 0) - x_unfold = x_unfold.view(T * B * H, R, K) - - if self.weight_softmax: - weight = utils.softmax(weight, dim=1, onnx_trace=self.onnx_trace).type_as( - weight - ) - - if incremental_state is not None: - weight = weight[:, -x_unfold.size(2) :] - K = weight.size(1) - - weight = ( - weight.view(1, H, K).expand(T * B, H, K).contiguous().view(T * B * H, K, 1) - ) - - weight = self.weight_dropout_module(weight) - output = torch.bmm(x_unfold, weight) # T*B*H x R x 1 - output = output.view(T, B, C) - return output - - def _forward_expanded(self, x, incremental_state): - """Turn the convolution filters into band matrices and do matrix multiplication. 
- This is faster when the sequence is short, but less memory efficient. - This is not used in the decoder during inference. - """ - T, B, C = x.size() - K, H = self.kernel_size, self.num_heads - R = C // H - assert R * H == C == self.input_size - - weight = self.weight.view(H, K) - if self.weight_softmax: - weight = utils.softmax(weight, dim=1, onnx_trace=self.onnx_trace).type_as( - weight - ) - weight = weight.view(1, H, K).expand(T * B, H, K).contiguous() - weight = weight.view(T, B * H, K).transpose(0, 1) - - x = x.view(T, B * H, R).transpose(0, 1) - P = self.padding_l - if K > T and P == K - 1: - weight = weight.narrow(2, K - T, T) - K, P = T, T - 1 - # turn the convolution filters into band matrices - weight_expanded = weight.new_zeros(B * H, T, T + K - 1, requires_grad=False) - weight_expanded.as_strided((B * H, T, K), (T * (T + K - 1), T + K, 1)).copy_( - weight - ) - weight_expanded = weight_expanded.narrow(2, P, T) - weight_expanded = self.weight_dropout_module(weight_expanded) - - output = torch.bmm(weight_expanded, x) - output = output.transpose(0, 1).contiguous().view(T, B, C) - return output - - def reorder_incremental_state(self, incremental_state, new_order): - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - input_buffer = input_buffer.index_select(1, new_order) - self._set_input_buffer(incremental_state, input_buffer) - - def _get_input_buffer(self, incremental_state): - return utils.get_incremental_state(self, incremental_state, "input_buffer") - - def _set_input_buffer(self, incremental_state, new_buffer): - return utils.set_incremental_state( - self, incremental_state, "input_buffer", new_buffer - ) - - def extra_repr(self): - s = "{}, kernel_size={}, padding_l={}, num_heads={}, weight_softmax={}, bias={}".format( - self.input_size, - self.kernel_size, - self.padding_l, - self.num_heads, - self.weight_softmax, - self.bias is not None, - ) - if self.weight_dropout_module.p > 0.0: - s += ", weight_dropout={}".format(self.weight_dropout_module.p) - return s diff --git a/spaces/gradio/HuBERT/fairseq/modules/quantization/pq/modules/qconv.py b/spaces/gradio/HuBERT/fairseq/modules/quantization/pq/modules/qconv.py deleted file mode 100644 index d15ec192e8cda6265a198e583a9bf7fb194dd129..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/modules/quantization/pq/modules/qconv.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.modules.utils import _pair - - -class PQConv2d(nn.Module): - """ - Quantized counterpart of nn.Conv2d module. Stores the centroid, the assignments - and the non-quantized biases. The full weight is re-instantiated at each forward - pass and autograd automatically computes the gradients with respect to the - centroids. - - Args: - - centroids: centroids of size n_centroids x block_size - - assignments: assignments of the centroids to the subvectors - of size self.out_channels x n_blocks - - bias: the non-quantized bias, must be either torch.Tensor or None - - Remarks: - - We refer the reader to the official documentation of the nn.Conv2d module - for the other arguments and the behavior of the module. 
- - Performance tests on GPU show that this implementation is 10% slower than - the non-quantized nn.Conv2d module for a standard training loop. - - During the backward, the gradients are averaged by cluster and not summed. - This explains the hook registered to the centroids. - """ - - def __init__( - self, - centroids, - assignments, - bias, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - padding_mode="zeros", - ): - super(PQConv2d, self).__init__() - self.block_size = centroids.size(1) - self.n_centroids = centroids.size(0) - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.padding_mode = padding_mode - # check compatibility - if in_channels // groups * np.prod(self.kernel_size) % self.block_size != 0: - raise ValueError("Wrong PQ sizes") - if len(assignments) % out_channels != 0: - raise ValueError("Wrong PQ sizes") - if in_channels % groups != 0: - raise ValueError("in_channels must be divisible by groups") - if out_channels % groups != 0: - raise ValueError("out_channels must be divisible by groups") - # define parameters - self.centroids = nn.Parameter(centroids, requires_grad=True) - self.register_buffer("assignments", assignments) - self.register_buffer("counts", torch.bincount(assignments).type_as(centroids)) - if bias is not None: - self.bias = nn.Parameter(bias) - else: - self.register_parameter("bias", None) - # register hook for averaging gradients per centroids instead of summing - self.centroids.register_hook(lambda x: x / self.counts[:, None]) - - @property - def weight(self): - return ( - self.centroids[self.assignments] - .reshape(-1, self.out_channels, self.block_size) - .permute(1, 0, 2) - .reshape( - self.out_channels, self.in_channels // self.groups, *self.kernel_size - ) - ) - - def forward(self, x): - return F.conv2d( - x, - self.weight, - self.bias, - self.stride, - self.padding, - self.dilation, - self.groups, - ) - - def extra_repr(self): - s = "{in_channels}, {out_channels}, kernel_size={kernel_size}, stride={stride}" - if self.padding != (0,) * len(self.padding): - s += ", padding={padding}" - if self.dilation != (1,) * len(self.dilation): - s += ", dilation={dilation}" - if self.groups != 1: - s += ", groups={groups}" - if self.bias is None: - s += ", bias=False" - if self.padding_mode != "zeros": - s += ", padding_mode={padding_mode}" - s += ", n_centroids={n_centroids}, block_size={block_size}" - return s.format(**self.__dict__) diff --git a/spaces/h2oai/wave-tour/examples/form_visibility.py b/spaces/h2oai/wave-tour/examples/form_visibility.py deleted file mode 100644 index 02a592a666394b9738b938ab2953b82e65ecbe2d..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/form_visibility.py +++ /dev/null @@ -1,41 +0,0 @@ -# Form / Visible -# Use "visible" property to control whether form element should be shown / hidden. 
-# --- -from h2o_wave import main, app, Q, ui - - -@app('/demo') -async def serve(q: Q): - if not q.client.initialized: - q.page['example'] = ui.form_card(box='1 1 4 7', items=[ - ui.text_xl(name='text_xl', content='First text'), - ui.text_l(name='text_l', content='Second text'), - ui.text_m(name='text_m', content='Third text'), - ui.text_s(name='text_s', content='Fourth text'), - ui.inline([ - ui.button(name='left1', label='Left1'), - ui.button(name='left2', label='Left2'), - ui.button(name='left3', label='Left3'), - ]), - ui.buttons(justify='end', items=[ - ui.button(name='right1', label='Right1'), - ui.button(name='right2', label='Right2'), - ui.button(name='right3', label='Right3'), - ]), - ui.buttons(items=[ui.button(name='show', label='Show'), ui.button(name='hide', label='Hide')]) - ]) - q.client.initialized = True - page = q.page['example'] - items_to_hide = [ - page.text_xl, - page.text_m, - page.left1, - page.right3, - ] - if q.args.hide: - for i in items_to_hide: - i.visible = False - if q.args.show: - for i in items_to_hide: - i.visible = True - await q.page.save() diff --git a/spaces/h2oai/wave-tour/examples/pixel_art.py b/spaces/h2oai/wave-tour/examples/pixel_art.py deleted file mode 100644 index 7ccd1a154d673a8eb1c7b3e2b54e837bf8119e20..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/pixel_art.py +++ /dev/null @@ -1,16 +0,0 @@ -# Pixel Art -# A card that demonstrates collaborative editing in Wave. -# Open `/demo` in multiple browsers and watch them synchronize in realtime. -# #collaboration -# --- -from h2o_wave import site, data, ui - -page = site['/demo'] -page.drop() - -page.add('example', ui.pixel_art_card( - box='1 1 4 6', - title='Art', - data=data('color', 16 * 16), -)) -page.save() diff --git a/spaces/haakohu/deep_privacy2/dp2/detection/structures.py b/spaces/haakohu/deep_privacy2/dp2/detection/structures.py deleted file mode 100644 index 3daf58f4617feedb7724137721e85e32e94b87b2..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2/dp2/detection/structures.py +++ /dev/null @@ -1,504 +0,0 @@ -import torch -import numpy as np -from dp2 import utils -from dp2.utils import vis_utils, crop_box -from .utils import ( - cut_pad_resize, masks_to_boxes, - get_kernel, transform_embedding, initialize_cse_boxes -) -from .box_utils import get_expanded_bbox, include_box -import torchvision -import tops -from .box_utils_fdf import expand_bbox as expand_bbox_fdf - - -class VehicleDetection: - - def __init__(self, segmentation: torch.BoolTensor) -> None: - self.segmentation = segmentation - self.boxes = masks_to_boxes(segmentation) - assert self.boxes.shape[1] == 4, self.boxes.shape - self.n_detections = self.segmentation.shape[0] - area = (self.boxes[:, 3] - self.boxes[:, 1]) * (self.boxes[:, 2] - self.boxes[:, 0]) - - sorted_idx = torch.argsort(area, descending=True) - self.segmentation = self.segmentation[sorted_idx] - self.boxes = self.boxes[sorted_idx].cpu() - - def pre_process(self): - pass - - def get_crop(self, idx: int, im): - assert idx < len(self) - box = self.boxes[idx] - im = crop_box(self.im, box) - mask = crop_box(self.segmentation[idx]) - mask = mask == 0 - return dict(img=im, mask=mask.float(), boxes=box) - - def visualize(self, im): - if len(self) == 0: - return im - im = vis_utils.draw_mask(im.clone(), self.segmentation.logical_not()) - return im - - def __len__(self): - return self.n_detections - - @staticmethod - def from_state_dict(state_dict, **kwargs): - numel = np.prod(state_dict["shape"]) - arr = 
np.unpackbits(state_dict["segmentation"].numpy(), count=numel) - segmentation = tops.to_cuda(torch.from_numpy(arr)).view(state_dict["shape"]) - return VehicleDetection(segmentation) - - def state_dict(self, **kwargs): - segmentation = torch.from_numpy(np.packbits(self.segmentation.bool().cpu().numpy())) - return dict(segmentation=segmentation, cls=self.__class__, shape=self.segmentation.shape) - - -class FaceDetection: - - def __init__(self, - boxes_ltrb: torch.LongTensor, target_imsize, fdf128_expand: bool, - keypoints: torch.Tensor = None, - **kwargs) -> None: - - self.boxes = boxes_ltrb.cpu() - assert self.boxes.shape[1] == 4, self.boxes.shape - self.target_imsize = tuple(target_imsize) - # Sory by area to paste in largest faces last - area = (self.boxes[:, 2] - self.boxes[:, 0]) * (self.boxes[:, 3] - self.boxes[:, 1]).view(-1) - idx = area.argsort(descending=False) - self.boxes = self.boxes[idx] - self.fdf128_expand = fdf128_expand - self.orig_keypoints = keypoints - if keypoints is not None: - self.orig_keypoints = self.orig_keypoints[idx] - assert keypoints.shape == (len(boxes_ltrb), 17, 2) or \ - keypoints.shape == (len(boxes_ltrb), 7, 2), keypoints.shape - - def visualize(self, im): - if len(self) == 0: - return im - orig_device = im.device - for box in self.boxes: - simple_expand = False if self.fdf128_expand else True - e_box = torch.from_numpy(expand_bbox_fdf(box.numpy(), im.shape[-2:], simple_expand)) - im = torchvision.utils.draw_bounding_boxes(im.cpu(), e_box[None], colors=(0, 0, 255), width=2) - im = torchvision.utils.draw_bounding_boxes(im.cpu(), self.boxes, colors=(255, 0, 0), width=2) - if self.orig_keypoints is not None: - im = vis_utils.draw_keypoints(im, self.orig_keypoints, radius=1) - - return im.to(device=orig_device) - - def get_crop(self, idx: int, im): - assert idx < len(self) - box = self.boxes[idx].numpy() - simple_expand = False if self.fdf128_expand else True - expanded_boxes = expand_bbox_fdf(box, im.shape[-2:], simple_expand) - im = cut_pad_resize(im, expanded_boxes, self.target_imsize, fdf_resize=True) - - # Find the square mask corresponding to box. 
- box_mask = box.copy().astype(float) - box_mask[[0, 2]] -= expanded_boxes[0] - box_mask[[1, 3]] -= expanded_boxes[1] - - width = expanded_boxes[2] - expanded_boxes[0] - resize_factor = self.target_imsize[0] / width - box_mask = (box_mask * resize_factor).astype(int) - mask = torch.ones((1, *self.target_imsize), device=im.device, dtype=torch.float32) - crop_box(mask, box_mask).fill_(0) - if self.orig_keypoints is None: - return dict( - img=im[None], mask=mask[None], - boxes=torch.from_numpy(expanded_boxes).view(1, -1)) - - keypoint = self.orig_keypoints[idx, :7, :2].clone() - keypoint[:, 0] -= expanded_boxes[0] - keypoint[:, 1] -= expanded_boxes[1] - w = expanded_boxes[2] - expanded_boxes[0] - keypoint /= w - keypoint = keypoint.clamp(0, 1) - return dict( - img=im[None], mask=mask[None], - boxes=torch.from_numpy(expanded_boxes).view(1, -1), - keypoints=keypoint[None]) - - def __len__(self): - return len(self.boxes) - - @staticmethod - def from_state_dict(state_dict, **kwargs): - return FaceDetection( - state_dict["boxes"].cpu(), - keypoints=state_dict["orig_keypoints"] if "orig_keypoints" in state_dict else None, - **kwargs) - - def state_dict(self, **kwargs): - return dict( - boxes=self.boxes, - cls=self.__class__, - orig_keypoints=self.orig_keypoints) - - def pre_process(self): - pass - - -def remove_dilate_in_pad(mask: torch.Tensor, exp_box, orig_imshape): - """ - Dilation happens after padding, which could place dilation in the padded area. - Remove this. - """ - x0, y0, x1, y1 = exp_box - H, W = orig_imshape - # Padding in original image space - p_y0 = max(0, -y0) - p_y1 = max(y1 - H, 0) - p_x0 = max(0, -x0) - p_x1 = max(x1 - W, 0) - resize_ratio = mask.shape[-2] / (y1-y0) - p_x0, p_y0, p_x1, p_y1 = [(_*resize_ratio).floor().long() for _ in [p_x0, p_y0, p_x1, p_y1]] - mask[..., :p_y0, :] = 0 - mask[..., :p_x0] = 0 - mask[..., mask.shape[-2] - p_y1:, :] = 0 - mask[..., mask.shape[-1] - p_x1:] = 0 - - -class CSEPersonDetection: - - def __init__(self, - segmentation, cse_dets, - target_imsize, - exp_bbox_cfg, exp_bbox_filter, - dilation_percentage: float, - embed_map: torch.Tensor, - orig_imshape_CHW, - normalize_embedding: bool) -> None: - self.segmentation = segmentation - self.cse_dets = cse_dets - self.target_imsize = list(target_imsize) - self.pre_processed = False - self.exp_bbox_cfg = exp_bbox_cfg - self.exp_bbox_filter = exp_bbox_filter - self.dilation_percentage = dilation_percentage - self.embed_map = embed_map - self.embed_map_cpu = embed_map.cpu() - self.normalize_embedding = normalize_embedding - if self.normalize_embedding: - embed_map_mean = self.embed_map.mean(dim=0, keepdim=True) - embed_map_rstd = ((self.embed_map - embed_map_mean).square().mean(dim=0, keepdim=True)+1e-8).rsqrt() - self.embed_map_normalized = (self.embed_map - embed_map_mean) * embed_map_rstd - self.orig_imshape_CHW = orig_imshape_CHW - - @torch.no_grad() - def pre_process(self): - if self.pre_processed: - return - boxes = initialize_cse_boxes(self.segmentation, self.cse_dets["bbox_XYXY"]).cpu() - expanded_boxes = [] - included_boxes = [] - for i in range(len(boxes)): - exp_box = get_expanded_bbox( - boxes[i], self.orig_imshape_CHW[1:], self.segmentation[i], **self.exp_bbox_cfg, - target_aspect_ratio=self.target_imsize[0]/self.target_imsize[1]) - if not include_box(exp_box, imshape=self.orig_imshape_CHW[1:], **self.exp_bbox_filter): - continue - included_boxes.append(i) - expanded_boxes.append(exp_box) - expanded_boxes = torch.LongTensor(expanded_boxes).view(-1, 4) - self.segmentation = 
self.segmentation[included_boxes] - self.cse_dets = {k: v[included_boxes] for k, v in self.cse_dets.items()} - - self.mask = torch.empty((len(expanded_boxes), *self.target_imsize), device=tops.get_device(), dtype=torch.bool) - area = self.segmentation.sum(dim=[1, 2]).view(len(expanded_boxes)) - for i, box in enumerate(expanded_boxes): - self.mask[i] = cut_pad_resize(self.segmentation[i:i+1], box, self.target_imsize)[0] - - dilation_kernel = get_kernel(int((self.target_imsize[0]*self.target_imsize[1])**0.5*self.dilation_percentage)) - self.maskrcnn_mask = self.mask.clone().logical_not()[:, None] - self.mask = utils.binary_dilation(self.mask[:, None], dilation_kernel) - for i in range(len(expanded_boxes)): - remove_dilate_in_pad(self.mask[i], expanded_boxes[i], self.orig_imshape_CHW[1:]) - self.boxes = expanded_boxes.cpu() - self.dilated_boxes = get_dilated_boxes(self.boxes, self.mask) - - self.pre_processed = True - self.n_detections = len(self.boxes) - self.mask = self.mask.logical_not() - - E_mask = torch.zeros((self.n_detections, 1, *self.target_imsize), device=self.mask.device, dtype=torch.bool) - self.vertices = torch.zeros_like(E_mask, dtype=torch.long) - for i in range(self.n_detections): - E_, E_mask[i] = transform_embedding( - self.cse_dets["instance_embedding"][i], - self.cse_dets["instance_segmentation"][i], - self.boxes[i], - self.cse_dets["bbox_XYXY"][i].cpu(), - self.target_imsize - ) - self.vertices[i] = utils.from_E_to_vertex( - E_[None], E_mask[i:i+1].logical_not(), self.embed_map).squeeze()[None] - self.E_mask = E_mask - - sorted_idx = torch.argsort(area, descending=False) - self.mask = self.mask[sorted_idx] - self.boxes = self.boxes[sorted_idx.cpu()] - self.vertices = self.vertices[sorted_idx] - self.E_mask = self.E_mask[sorted_idx] - self.maskrcnn_mask = self.maskrcnn_mask[sorted_idx] - - def get_crop(self, idx: int, im): - self.pre_process() - assert idx < len(self) - box = self.boxes[idx] - mask = self.mask[idx] - im = cut_pad_resize(im, box, self.target_imsize).unsqueeze(0) - - vertices_ = self.vertices[idx] - E_mask_ = self.E_mask[idx].float() - if self.normalize_embedding: - embedding = self.embed_map_normalized[vertices_.squeeze(dim=0)].permute(2, 0, 1) * E_mask_ - else: - embedding = self.embed_map[vertices_.squeeze(dim=0)].permute(2, 0, 1) * E_mask_ - - return dict( - img=im, - mask=mask.float()[None], - boxes=box.reshape(1, -1), - E_mask=E_mask_[None], - vertices=vertices_[None], - embed_map=self.embed_map, - embedding=embedding[None], - maskrcnn_mask=self.maskrcnn_mask[idx].float()[None] - ) - - def __len__(self): - self.pre_process() - return self.n_detections - - def state_dict(self, after_preprocess=False): - """ - The processed annotations occupy more space than the original detections. 
- """ - if not after_preprocess: - return { - "combined_segmentation": self.segmentation.bool(), - "cse_instance_segmentation": self.cse_dets["instance_segmentation"].bool(), - "cse_instance_embedding": self.cse_dets["instance_embedding"], - "cse_bbox_XYXY": self.cse_dets["bbox_XYXY"].long(), - "cls": self.__class__, - "orig_imshape_CHW": self.orig_imshape_CHW - } - self.pre_process() - def compress_bool(x): return torch.from_numpy(np.packbits(x.bool().cpu().numpy())) - return dict( - E_mask=compress_bool(self.E_mask), - mask=compress_bool(self.mask), - maskrcnn_mask=compress_bool(self.maskrcnn_mask), - vertices=self.vertices.to(torch.int16).cpu(), - cls=self.__class__, - boxes=self.boxes, - orig_imshape_CHW=self.orig_imshape_CHW, - ) - - @staticmethod - def from_state_dict( - state_dict, embed_map, - post_process_cfg, **kwargs): - after_preprocess = "segmentation" not in state_dict and "combined_segmentation" not in state_dict - if after_preprocess: - detection = CSEPersonDetection( - segmentation=None, cse_dets=None, embed_map=embed_map, - orig_imshape_CHW=state_dict["orig_imshape_CHW"], - **post_process_cfg) - detection.vertices = tops.to_cuda(state_dict["vertices"].long()) - numel = np.prod(detection.vertices.shape) - - def unpack_bool(x): - x = torch.from_numpy(np.unpackbits(x.numpy(), count=numel)) - return x.view(*detection.vertices.shape) - detection.E_mask = tops.to_cuda(unpack_bool(state_dict["E_mask"])) - detection.mask = tops.to_cuda(unpack_bool(state_dict["mask"])) - detection.maskrcnn_mask = tops.to_cuda(unpack_bool(state_dict["maskrcnn_mask"])) - detection.n_detections = len(detection.mask) - detection.pre_processed = True - - if isinstance(state_dict["boxes"], np.ndarray): - state_dict["boxes"] = torch.from_numpy(state_dict["boxes"]) - detection.boxes = state_dict["boxes"] - return detection - - cse_dets = dict( - instance_segmentation=state_dict["cse_instance_segmentation"], - instance_embedding=state_dict["cse_instance_embedding"], - embed_map=embed_map, - bbox_XYXY=state_dict["cse_bbox_XYXY"]) - cse_dets = {k: tops.to_cuda(v) for k, v in cse_dets.items()} - - segmentation = state_dict["combined_segmentation"] - return CSEPersonDetection( - segmentation, cse_dets, embed_map=embed_map, - orig_imshape_CHW=state_dict["orig_imshape_CHW"], - **post_process_cfg) - - def visualize(self, im): - self.pre_process() - if len(self) == 0: - return im - im = vis_utils.draw_cropped_masks( - im.cpu(), self.mask.cpu(), self.boxes, visualize_instances=False) - E = self.embed_map_cpu[self.vertices.long().cpu()].squeeze(1).permute(0, 3, 1, 2) - im = vis_utils.draw_cse_all( - E, self.E_mask.squeeze(1).bool().cpu(), im, - self.boxes, self.embed_map_cpu) - im = torchvision.utils.draw_bounding_boxes(im, self.boxes, colors=(255, 0, 0), width=2) - return im - - -def shift_and_preprocess_keypoints(keypoints: torch.Tensor, boxes): - keypoints = keypoints.clone() - N = boxes.shape[0] - tops.assert_shape(keypoints, (N, None, 3)) - tops.assert_shape(boxes, (N, 4)) - x0, y0, x1, y1 = [_.view(-1, 1) for _ in boxes.T] - - w = x1 - x0 - h = y1 - y0 - keypoints[:, :, 0] = (keypoints[:, :, 0] - x0) / w - keypoints[:, :, 1] = (keypoints[:, :, 1] - y0) / h - def check_outside(x): return (x < 0).logical_or(x > 1) - is_outside = check_outside(keypoints[:, :, 0]).logical_or(check_outside(keypoints[:, :, 1])) - keypoints[:, :, 2] = keypoints[:, :, 2] > 0 - keypoints[:, :, 2] = (keypoints[:, :, 2] > 0).logical_and(is_outside.logical_not()) - return keypoints - - -class PersonDetection: - - def __init__( - self, - 
segmentation, - target_imsize, - exp_bbox_cfg, exp_bbox_filter, - dilation_percentage: float, - orig_imshape_CHW, - kp_vis_thr=None, - keypoints=None, - **kwargs) -> None: - self.segmentation = segmentation - self.target_imsize = list(target_imsize) - self.pre_processed = False - self.exp_bbox_cfg = exp_bbox_cfg - self.exp_bbox_filter = exp_bbox_filter - self.dilation_percentage = dilation_percentage - self.orig_imshape_CHW = orig_imshape_CHW - self.orig_keypoints = keypoints - if keypoints is not None: - assert kp_vis_thr is not None - self.kp_vis_thr = kp_vis_thr - - @torch.no_grad() - def pre_process(self): - if self.pre_processed: - return - boxes = masks_to_boxes(self.segmentation).cpu() - expanded_boxes = [] - included_boxes = [] - for i in range(len(boxes)): - exp_box = get_expanded_bbox( - boxes[i], self.orig_imshape_CHW[1:], self.segmentation[i], **self.exp_bbox_cfg, - target_aspect_ratio=self.target_imsize[0]/self.target_imsize[1]) - if not include_box(exp_box, imshape=self.orig_imshape_CHW[1:], **self.exp_bbox_filter): - continue - included_boxes.append(i) - expanded_boxes.append(exp_box) - expanded_boxes = torch.LongTensor(expanded_boxes).view(-1, 4) - self.segmentation = self.segmentation[included_boxes] - if self.orig_keypoints is not None: - self.keypoints = self.orig_keypoints[included_boxes].clone() - self.keypoints[:, :, 2] = self.keypoints[:, :, 2] >= self.kp_vis_thr - area = self.segmentation.sum(dim=[1, 2]).view(len(expanded_boxes)).cpu() - self.mask = torch.empty((len(expanded_boxes), *self.target_imsize), device=tops.get_device(), dtype=torch.bool) - for i, box in enumerate(expanded_boxes): - self.mask[i] = cut_pad_resize(self.segmentation[i:i+1], box, self.target_imsize)[0] - if self.orig_keypoints is not None: - self.keypoints = shift_and_preprocess_keypoints(self.keypoints, expanded_boxes) - dilation_kernel = get_kernel(int((self.target_imsize[0]*self.target_imsize[1])**0.5*self.dilation_percentage)) - self.maskrcnn_mask = self.mask.clone().logical_not()[:, None] - self.mask = utils.binary_dilation(self.mask[:, None], dilation_kernel) - for i in range(len(expanded_boxes)): - remove_dilate_in_pad(self.mask[i], expanded_boxes[i], self.orig_imshape_CHW[1:]) - self.boxes = expanded_boxes - self.dilated_boxes = get_dilated_boxes(self.boxes, self.mask) - - self.pre_processed = True - self.n_detections = len(self.boxes) - self.mask = self.mask.logical_not() - - sorted_idx = torch.argsort(area, descending=False) - self.mask = self.mask[sorted_idx] - self.boxes = self.boxes[sorted_idx.cpu()] - self.segmentation = self.segmentation[sorted_idx] - self.maskrcnn_mask = self.maskrcnn_mask[sorted_idx] - if self.keypoints is not None: - self.keypoints = self.keypoints[sorted_idx.cpu()] - - def get_crop(self, idx: int, im: torch.Tensor): - assert idx < len(self) - self.pre_process() - box = self.boxes[idx] - mask = self.mask[idx][None].float() - im = cut_pad_resize(im, box, self.target_imsize).unsqueeze(0) - batch = dict( - img=im, mask=mask, boxes=box.reshape(1, -1), - maskrcnn_mask=self.maskrcnn_mask[idx][None].float()) - if self.keypoints is not None: - batch["keypoints"] = self.keypoints[idx:idx+1] - return batch - - def __len__(self): - self.pre_process() - return self.n_detections - - def state_dict(self, **kwargs): - return dict( - segmentation=self.segmentation.bool(), - cls=self.__class__, - orig_imshape_CHW=self.orig_imshape_CHW, - keypoints=self.orig_keypoints - ) - - @staticmethod - def from_state_dict( - state_dict, - post_process_cfg, **kwargs): - return 
PersonDetection( - state_dict["segmentation"], - orig_imshape_CHW=state_dict["orig_imshape_CHW"], - **post_process_cfg, - keypoints=state_dict["keypoints"]) - - def visualize(self, im): - self.pre_process() - im = im.cpu() - if len(self) == 0: - return im - im = vis_utils.draw_cropped_masks(im.clone(), self.mask.cpu(), self.boxes, visualize_instances=False) - if self.keypoints is not None: - im = vis_utils.draw_cropped_keypoints(im, self.keypoints, self.boxes) - return im - - -def get_dilated_boxes(exp_bbox: torch.LongTensor, mask): - """ - mask: resized mask - """ - assert exp_bbox.shape[0] == mask.shape[0] - boxes = masks_to_boxes(mask.squeeze(1)).cpu() - H, W = exp_bbox[:, 3] - exp_bbox[:, 1], exp_bbox[:, 2] - exp_bbox[:, 0] - boxes[:, [0, 2]] = (boxes[:, [0, 2]] * W[:, None] / mask.shape[-1]).long() - boxes[:, [1, 3]] = (boxes[:, [1, 3]] * H[:, None] / mask.shape[-2]).long() - boxes[:, [0, 2]] += exp_bbox[:, 0:1] - boxes[:, [1, 3]] += exp_bbox[:, 1:2] - return boxes diff --git a/spaces/hackathon-somos-nlp-2023/ask2democracy/about.py b/spaces/hackathon-somos-nlp-2023/ask2democracy/about.py deleted file mode 100644 index 274bbc22272355314012adba98dd80847d6c180c..0000000000000000000000000000000000000000 --- a/spaces/hackathon-somos-nlp-2023/ask2democracy/about.py +++ /dev/null @@ -1,85 +0,0 @@ -from pinecone_quieries import PineconeProposalQueries -import streamlit as st - -def show_about_ask2democracy(): - description = """ -

    About this initiative

    -

    Public debate is usually grounded in documents that, with few exceptions, almost nobody reads. - This demo indexes several texts that matter for public discussion but are usually scattered and hard to access. It also builds on state-of-the-art artificial intelligence (more details below), letting you explore the documents by asking them questions in Spanish. -

    - On the other hand, the hallucinations produced by large language models such as ChatGPT/GPT-4 are a problem that in practice leads to misinformation and to consequences that are still unknown. OpenAI has led the way in controlling these hallucinations by using RLHF to generate text from the "frozen" knowledge of the language models. However, that approach is not viable in many specific domains. -

    - This demo tackles the hallucination problem with a RAG (Retrieval Augmented Generation) architecture. In the query pipeline, sentence-transformer models retrieve the top-k candidate documents, Roberta models produce answers taken from the sources, and generative models augment those answers, - giving it a conversational style similar to ChatGPT but grounded in sources. -
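    To make that pipeline concrete, here is a minimal sketch of the query step, assuming the two models named in the additional-information section below (multi-qa-MiniLM-L6-cos-v1 for retrieval, xlm-roberta-base-squad2-distilled for extractive answers); the corpus and question are placeholder examples, not the app's indexed documents. -

```python
# Illustrative query pipeline: embed, rank by cosine similarity, extract an answer.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

retriever = SentenceTransformer("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")
reader = pipeline("question-answering", model="deepset/xlm-roberta-base-squad2-distilled")

# Placeholder corpus; the real demo indexes the reform documents listed on this page.
docs = [
    "La propuesta de reforma pensional establece un sistema de pilares.",
    "La reforma de la salud propone fortalecer la atencion primaria.",
]
question = "Que propone la reforma pensional?"

doc_emb = retriever.encode(docs, convert_to_tensor=True)
q_emb = retriever.encode(question, convert_to_tensor=True)
top = util.cos_sim(q_emb, doc_emb)[0].topk(k=2)   # top-k candidate documents

for score, idx in zip(top.values, top.indices):
    answer = reader(question=question, context=docs[int(idx)])   # answer taken from the source
    print(f"{float(score):.2f} | {answer['answer']}")
```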

    - The project also aims to contribute to open, Spanish-language artificial intelligence by building datasets and training language models adapted to democratic discussions, something that can help raise the quality of debate across all Spanish-speaking countries. -

    - Indexed texts: the pension reform proposal of March 22, 2023, the health reform proposal of February 13, 2023, and the findings and recommendations chapter of the Truth Commission on the Colombian armed conflict (work in progress, write to me if you want to help) -

    - Created by Jorge Henao 🇨🇴 Twitter LinkedIn Linktree -
    - With the support of David Torres 🇨🇴 Twitter LinkedIn -
    -

    -

    About the work carried out during the Somos NLP 2023 Hackathon

    - The Ask2Democracy project was created before the Somos NLP 2023 hackathon. The following contributions were made during the hackathon dates (March 20 to April 9, 2023): -

    In the demo space:

    -
      -
    • Refactoring/adjustments of the integration with the Pinecone vector database.
    • -
    • Pre-processing and indexing of Colombia's March 2023 pension reform proposal.
    • -
    • UX refactoring and usability adjustments of the user interface.
    • -
    • Adjustments to the OpenAI integration
    • -
    • Tests/adjustments of the query pipeline with Sentence transformers on Spanish text and xlm-roberta-base-squad2-distilled
    • -
    -

    Language models:

    - Two Baizemocracy models based on LLaMA-7B were trained, focused on augmenting the documents returned by the query pipeline so that it becomes more conversational using open-source Spanish models. - The following models were trained on a dataset built during the hackathon plus several question-answering and chat datasets; a hedged loading sketch follows the list below. -
      -
    • baizemocracy-lora-7B-cfqa: This variant of the model is more focused on generating factual answers given a source-based context.
    • -
    • baizemocracy-lora-7B-cfqa-conv: This variant of the model has a more conversational style for generating factual answers given a source-based context.
    • -
    -
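    The snippet below is only a hedged sketch of how LoRA adapters like these are commonly loaded with peft; the Hub ids for the base LLaMA-7B checkpoint and the adapter are assumptions not confirmed here, and the prompt format is illustrative. -

```python
# Hypothetical loading of one of the Baizemocracy LoRA adapters (repository ids are assumed).
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE_ID = "decapoda-research/llama-7b-hf"                           # assumed base checkpoint
ADAPTER_ID = "hackathon-somos-nlp-2023/baizemocracy-lora-7B-cfqa"   # assumed adapter repo

tokenizer = LlamaTokenizer.from_pretrained(BASE_ID)
base = LlamaForCausalLM.from_pretrained(BASE_ID, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER_ID)                 # applies the LoRA weights on top of the base

prompt = "Contexto: ...\nPregunta: Que propone la reforma?\nRespuesta:"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```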

    Datasets:

    -
      -
    • ask2democracy-cfqa-salud-pension: An instruction-style dataset with answers to questions generated from Spanish health and pension reform texts.
    • -
    - Note: the generative models trained during the hackathon need additional optimization before they can be integrated into the query pipeline, which already uses other transformer models. - During the hackathon, tests showed inference times above 70 seconds on GPU, not counting the rest of the query pipeline components, which exceeds what Hugging Face's free CPU infrastructure can handle. - Future updates are expected to land in the project's original demo space. -

    How to use this space?

    - Select the document you want to explore in the left panel, type your questions in the text box and press the button. - This is not a keyword-based search system; on the contrary, you can write longer, more elaborate questions. The more context you give the question, the better the results you get. -

    Optional integration with OpenAI

    - This demo runs on limited compute resources at no cost to users (write to me if you want to help make it faster). - Optionally, if you have an OpenAI account you can enable the integration by copying your API key into the left panel. - Once you enter the API key, every time you ask a question the system will use it to produce a brief answer from the retrieved search results, always grounded in the official sources. - You can also configure how long you want the answer to be (max tokens) and how creative (temperature). -

    Note: the system does not store your API key; it only uses it to augment your queries while you use the demo. -
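    As an illustration of that optional step, the sketch below shows how retrieved passages could be condensed with the user's key; it assumes the pre-1.0 openai Python client (ChatCompletion API) and a placeholder prompt, since the app's actual prompt is not shown here. -

```python
# Hedged sketch: turn retrieved passages into a short answer using the user's OpenAI key.
import openai

def augment_answer(api_key, question, passages, max_tokens=150, temperature=0.2):
    openai.api_key = api_key  # used only for this request, never stored
    context = "\n\n".join(passages)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer briefly and only from the provided official sources."},
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
        max_tokens=max_tokens,      # controls answer length
        temperature=temperature,    # controls creativity
    )
    return response["choices"][0]["message"]["content"]
```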

    Artificial intelligence and democracy

    - The project aims to help build participatory democracy by leveraging state-of-the-art artificial intelligence. - As a web-accessible demo, it can help ordinary citizens form a better-informed opinion, letting them take part in public debate by asking questions directly of the sources in their own language and reaching their own conclusions. -

    - Regarding artificial intelligence, there are some hypotheses we want to test: -

      -
    • How effective can a search system built on open AI models be at helping people understand relevant civic discussions in Spanish?
    • -
    • How creative can artificial intelligence be in this area?
    • -
    • Can open artificial intelligence help people understand legislative documents: reform proposals, government plans, and public-discussion documents in general?
    • -
    • Can a RAG system using open models improve on the hallucinations shown by systems like OpenAI's ChatGPT/GPT-4 when it comes to understanding democratic discussions in Spanish?
    • -
    - For these reasons, the project seeks to contribute to open, Spanish-language artificial intelligence by building datasets and training language models adapted to democratic discussions, - something that can help raise the quality of debate across all Spanish-speaking countries. -

    Additional information

    - A RAG (Retrieval Augmented Generation) architecture is used to augment source-based answers in a conversational way. - This version uses sentence transformers (cosine similarity), a Pinecone vector database to store the embeddings, the Haystack framework and the OpenAI integration. - The transformer language models used are: - - sentence-transformers/multi-qa-MiniLM-L6-cos-v1 - deepset/xlm-roberta-base-squad2-distilled - - GitHub repo with FastAPI -
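    A hedged sketch of that storage and retrieval layer is shown below; it assumes the pinecone-client 2.x API, and the index name, environment and metadata field are placeholders not given in this document. -

```python
# Hedged sketch: encode a question and fetch the top-k passages stored in Pinecone.
import pinecone
from sentence_transformers import SentenceTransformer

pinecone.init(api_key="YOUR_API_KEY", environment="us-west1-gcp")   # placeholder credentials
index = pinecone.Index("ask2democracy")                             # assumed index name

encoder = SentenceTransformer("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")
query_emb = encoder.encode("Que propone la reforma de la salud?").tolist()

result = index.query(vector=query_emb, top_k=5, include_metadata=True)
for match in result["matches"]:
    print(match["score"], match["metadata"].get("text", ""))        # 'text' metadata field is assumed
```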

    Beta disclaimer

    - The answers the system returns are not pre-recorded or based on opinions; all of them are extracted from official sources. - This demo uses language models to understand Spanish, but it still needs further training, so at times it can be confusing and not very precise. - If you want to help, write to me at jorge.henao@diezonce.co -

    - """ - st.markdown(description, unsafe_allow_html=True) \ No newline at end of file diff --git a/spaces/hamzapehlivan/StyleRes/utils.py b/spaces/hamzapehlivan/StyleRes/utils.py deleted file mode 100644 index 65f9d3395183a1cf4ed6117496c34ade27867f3c..0000000000000000000000000000000000000000 --- a/spaces/hamzapehlivan/StyleRes/utils.py +++ /dev/null @@ -1,101 +0,0 @@ - -import sys -import os -from importlib import import_module -from options import Settings -import csv - -class AttrDict(dict): - def __init__(self, *args, **kwargs): - super(AttrDict, self).__init__(*args, **kwargs) - self.__dict__ = self - - -""" - This function modified from the Genforce library: https://github.com/genforce/genforce -""" -def parse_config(config_file): - """Parses configuration from python file.""" - assert os.path.isfile(config_file) - directory = os.path.dirname(config_file) - filename = os.path.basename(config_file) - module_name, extension = os.path.splitext(filename) - assert extension == '.py' - sys.path.insert(0, directory) - module = import_module(module_name) - sys.path.pop(0) - config = [] - - for key, value in module.__dict__.items(): - if key.startswith('__'): - continue - for val in value: - attr_dict = AttrDict() - for k, v in val.items(): - attr_dict[k] = v - config.append(attr_dict) - del sys.modules[module_name] - return config - -# Utility class for the demo -class AppUtils(): - def __init__(self): - self.interfacegan_edits = ['Smile', 'Age' , 'Pose'] - self.ganspace_edits = [] - with open(os.path.join(Settings.ganspace_directions, 'ganspace_configs.csv'), "r") as f: - reader = csv.reader(f, delimiter="\t") - for row in reader: - key = row.pop(0) - key = key.replace('_', ' ') - self.ganspace_edits.append(key.title()) - self.ganspace_edits.sort() - - self.styleclip_edits = [] - with open(os.path.join(Settings.styleclip_settings, 'styleclip_mapping_configs.csv'), "r") as f: - reader = csv.reader(f) - for row in reader: - key = row.pop(0) - key = key.replace('_', ' ') - self.styleclip_edits.append(key.title()) - self.styleclip_edits.sort() - - def get_methods(self): - return ["InterfaceGAN", "GANSpace", "StyleClip"] - - def get_edits(self, method): - method = method.lower() - return getattr(self, f"{method}_edits") - - def args_to_cfg(self, method, edit, strength): - method = method.lower() - edit = edit.lower() - edit = edit.replace(' ', '_') - strength = float(strength) - cfg = AttrDict() - cfg.method = method - cfg.edit = edit - cfg.strength = strength - if method == 'styleclip': - cfg.type = 'mapper' - return cfg - - def get_range(self, method): - method = method.lower() - if method == 'interfacegan': - return -5, 5, 0.1 - elif method == 'ganspace': - return -25, 25, 0.1 - elif method == 'styleclip': - return 0, 0.2, 0.01 - - def get_examples(self): - examples = [ - ["samples/demo_samples/11654.jpg", "InterfaceGAN", "Age", 2.0, False], - ["samples/demo_samples/116.jpg", "Ganspace", "lipstick", 10.0, False], - ["samples/demo_samples/carlsen.jpg", "Styleclip", "curly hair", 0.11, True], - ["samples/demo_samples/shakira.jpeg", "StyleClip", "purple hair", 0.1, True], - ["samples/demo_samples/shaq.jpg", "InterfaceGAN", "Smile", -1.7, True], - ["samples/demo_samples/shaq.jpg", "InterfaceGAN", "Pose", 3.3, True] - ] - return examples - diff --git a/spaces/hanstyle/tts/evaluation/real_videos_inference.py b/spaces/hanstyle/tts/evaluation/real_videos_inference.py deleted file mode 100644 index 8c9fb15ef342bf03caf77802ddf5b887bab3fb34..0000000000000000000000000000000000000000 --- 
a/spaces/hanstyle/tts/evaluation/real_videos_inference.py +++ /dev/null @@ -1,305 +0,0 @@ -from os import listdir, path -import numpy as np -import scipy, cv2, os, sys, argparse -import dlib, json, subprocess -from tqdm import tqdm -from glob import glob -import torch - -sys.path.append('../') -import audio -import face_detection -from models import Wav2Lip - -parser = argparse.ArgumentParser(description='Code to generate results on ReSyncED evaluation set') - -parser.add_argument('--mode', type=str, - help='random | dubbed | tts', required=True) - -parser.add_argument('--filelist', type=str, - help='Filepath of filelist file to read', default=None) - -parser.add_argument('--results_dir', type=str, help='Folder to save all results into', - required=True) -parser.add_argument('--data_root', type=str, required=True) -parser.add_argument('--checkpoint_path', type=str, - help='Name of saved checkpoint to load weights from', required=True) -parser.add_argument('--pads', nargs='+', type=int, default=[0, 10, 0, 0], - help='Padding (top, bottom, left, right)') - -parser.add_argument('--face_det_batch_size', type=int, - help='Single GPU batch size for face detection', default=16) - -parser.add_argument('--wav2lip_batch_size', type=int, help='Batch size for Wav2Lip', default=128) -parser.add_argument('--face_res', help='Approximate resolution of the face at which to test', default=180) -parser.add_argument('--min_frame_res', help='Do not downsample further below this frame resolution', default=480) -parser.add_argument('--max_frame_res', help='Downsample to at least this frame resolution', default=720) -# parser.add_argument('--resize_factor', default=1, type=int) - -args = parser.parse_args() -args.img_size = 96 - -def get_smoothened_boxes(boxes, T): - for i in range(len(boxes)): - if i + T > len(boxes): - window = boxes[len(boxes) - T:] - else: - window = boxes[i : i + T] - boxes[i] = np.mean(window, axis=0) - return boxes - -def rescale_frames(images): - rect = detector.get_detections_for_batch(np.array([images[0]]))[0] - if rect is None: - raise ValueError('Face not detected!') - h, w = images[0].shape[:-1] - - x1, y1, x2, y2 = rect - - face_size = max(np.abs(y1 - y2), np.abs(x1 - x2)) - - diff = np.abs(face_size - args.face_res) - for factor in range(2, 16): - downsampled_res = face_size // factor - if min(h//factor, w//factor) < args.min_frame_res: break - if np.abs(downsampled_res - args.face_res) >= diff: break - - factor -= 1 - if factor == 1: return images - - return [cv2.resize(im, (im.shape[1]//(factor), im.shape[0]//(factor))) for im in images] - - -def face_detect(images): - batch_size = args.face_det_batch_size - images = rescale_frames(images) - - while 1: - predictions = [] - try: - for i in range(0, len(images), batch_size): - predictions.extend(detector.get_detections_for_batch(np.array(images[i:i + batch_size]))) - except RuntimeError: - if batch_size == 1: - raise RuntimeError('Image too big to run face detection on GPU') - batch_size //= 2 - print('Recovering from OOM error; New batch size: {}'.format(batch_size)) - continue - break - - results = [] - pady1, pady2, padx1, padx2 = args.pads - for rect, image in zip(predictions, images): - if rect is None: - raise ValueError('Face not detected!') - - y1 = max(0, rect[1] - pady1) - y2 = min(image.shape[0], rect[3] + pady2) - x1 = max(0, rect[0] - padx1) - x2 = min(image.shape[1], rect[2] + padx2) - - results.append([x1, y1, x2, y2]) - - boxes = get_smoothened_boxes(np.array(results), T=5) - results = [[image[y1: y2, x1:x2], (y1, 
y2, x1, x2), True] for image, (x1, y1, x2, y2) in zip(images, boxes)] - - return results, images - -def datagen(frames, face_det_results, mels): - img_batch, mel_batch, frame_batch, coords_batch = [], [], [], [] - - for i, m in enumerate(mels): - if i >= len(frames): raise ValueError('Equal or less lengths only') - - frame_to_save = frames[i].copy() - face, coords, valid_frame = face_det_results[i].copy() - if not valid_frame: - continue - - face = cv2.resize(face, (args.img_size, args.img_size)) - - img_batch.append(face) - mel_batch.append(m) - frame_batch.append(frame_to_save) - coords_batch.append(coords) - - if len(img_batch) >= args.wav2lip_batch_size: - img_batch, mel_batch = np.asarray(img_batch), np.asarray(mel_batch) - - img_masked = img_batch.copy() - img_masked[:, args.img_size//2:] = 0 - - img_batch = np.concatenate((img_masked, img_batch), axis=3) / 255. - mel_batch = np.reshape(mel_batch, [len(mel_batch), mel_batch.shape[1], mel_batch.shape[2], 1]) - - yield img_batch, mel_batch, frame_batch, coords_batch - img_batch, mel_batch, frame_batch, coords_batch = [], [], [], [] - - if len(img_batch) > 0: - img_batch, mel_batch = np.asarray(img_batch), np.asarray(mel_batch) - - img_masked = img_batch.copy() - img_masked[:, args.img_size//2:] = 0 - - img_batch = np.concatenate((img_masked, img_batch), axis=3) / 255. - mel_batch = np.reshape(mel_batch, [len(mel_batch), mel_batch.shape[1], mel_batch.shape[2], 1]) - - yield img_batch, mel_batch, frame_batch, coords_batch - -def increase_frames(frames, l): - ## evenly duplicating frames to increase length of video - while len(frames) < l: - dup_every = float(l) / len(frames) - - final_frames = [] - next_duplicate = 0. - - for i, f in enumerate(frames): - final_frames.append(f) - - if int(np.ceil(next_duplicate)) == i: - final_frames.append(f) - - next_duplicate += dup_every - - frames = final_frames - - return frames[:l] - -mel_step_size = 16 -device = 'cuda' if torch.cuda.is_available() else 'cpu' -print('Using {} for inference.'.format(device)) - -detector = face_detection.FaceAlignment(face_detection.LandmarksType._2D, - flip_input=False, device=device) - -def _load(checkpoint_path): - if device == 'cuda': - checkpoint = torch.load(checkpoint_path) - else: - checkpoint = torch.load(checkpoint_path, - map_location=lambda storage, loc: storage) - return checkpoint - -def load_model(path): - model = Wav2Lip() - print("Load checkpoint from: {}".format(path)) - checkpoint = _load(path) - s = checkpoint["state_dict"] - new_s = {} - for k, v in s.items(): - new_s[k.replace('module.', '')] = v - model.load_state_dict(new_s) - - model = model.to(device) - return model.eval() - -model = load_model(args.checkpoint_path) - -def main(): - if not os.path.isdir(args.results_dir): os.makedirs(args.results_dir) - - if args.mode == 'dubbed': - files = listdir(args.data_root) - lines = ['{} {}'.format(f, f) for f in files] - - else: - assert args.filelist is not None - with open(args.filelist, 'r') as filelist: - lines = filelist.readlines() - - for idx, line in enumerate(tqdm(lines)): - video, audio_src = line.strip().split() - - audio_src = os.path.join(args.data_root, audio_src) - video = os.path.join(args.data_root, video) - - command = 'ffmpeg -loglevel panic -y -i {} -strict -2 {}'.format(audio_src, '../temp/temp.wav') - subprocess.call(command, shell=True) - temp_audio = '../temp/temp.wav' - - wav = audio.load_wav(temp_audio, 16000) - mel = audio.melspectrogram(wav) - - if np.isnan(mel.reshape(-1)).sum() > 0: - raise ValueError('Mel contains 
nan!') - - video_stream = cv2.VideoCapture(video) - - fps = video_stream.get(cv2.CAP_PROP_FPS) - mel_idx_multiplier = 80./fps - - full_frames = [] - while 1: - still_reading, frame = video_stream.read() - if not still_reading: - video_stream.release() - break - - if min(frame.shape[:-1]) > args.max_frame_res: - h, w = frame.shape[:-1] - scale_factor = min(h, w) / float(args.max_frame_res) - h = int(h/scale_factor) - w = int(w/scale_factor) - - frame = cv2.resize(frame, (w, h)) - full_frames.append(frame) - - mel_chunks = [] - i = 0 - while 1: - start_idx = int(i * mel_idx_multiplier) - if start_idx + mel_step_size > len(mel[0]): - break - mel_chunks.append(mel[:, start_idx : start_idx + mel_step_size]) - i += 1 - - if len(full_frames) < len(mel_chunks): - if args.mode == 'tts': - full_frames = increase_frames(full_frames, len(mel_chunks)) - else: - raise ValueError('#Frames, audio length mismatch') - - else: - full_frames = full_frames[:len(mel_chunks)] - - try: - face_det_results, full_frames = face_detect(full_frames.copy()) - except ValueError as e: - continue - - batch_size = args.wav2lip_batch_size - gen = datagen(full_frames.copy(), face_det_results, mel_chunks) - - for i, (img_batch, mel_batch, frames, coords) in enumerate(gen): - if i == 0: - frame_h, frame_w = full_frames[0].shape[:-1] - - out = cv2.VideoWriter('../temp/result.avi', - cv2.VideoWriter_fourcc(*'DIVX'), fps, (frame_w, frame_h)) - - img_batch = torch.FloatTensor(np.transpose(img_batch, (0, 3, 1, 2))).to(device) - mel_batch = torch.FloatTensor(np.transpose(mel_batch, (0, 3, 1, 2))).to(device) - - with torch.no_grad(): - pred = model(mel_batch, img_batch) - - - pred = pred.cpu().numpy().transpose(0, 2, 3, 1) * 255. - - for pl, f, c in zip(pred, frames, coords): - y1, y2, x1, x2 = c - pl = cv2.resize(pl.astype(np.uint8), (x2 - x1, y2 - y1)) - f[y1:y2, x1:x2] = pl - out.write(f) - - out.release() - - vid = os.path.join(args.results_dir, '{}.mp4'.format(idx)) - command = 'ffmpeg -loglevel panic -y -i {} -i {} -strict -2 -q:v 1 {}'.format('../temp/temp.wav', - '../temp/result.avi', vid) - subprocess.call(command, shell=True) - - -if __name__ == '__main__': - main() diff --git a/spaces/harisansarkhan/DogBreedClassification/Gradio.py b/spaces/harisansarkhan/DogBreedClassification/Gradio.py deleted file mode 100644 index a77dab142494affdd3817f7bce20ea3f362042c6..0000000000000000000000000000000000000000 --- a/spaces/harisansarkhan/DogBreedClassification/Gradio.py +++ /dev/null @@ -1,57 +0,0 @@ -import os -import gradio as gr -import numpy as np -import tensorflow as tf -from tensorflow.keras.preprocessing.image import load_img, img_to_array -from tensorflow.keras.models import load_model -import cv2 - -# Load the best model -model_path = "Dog breed Classification_model.h5" -best_model = load_model(model_path) - -class_labels = ['Chihuahua', 'Japanese_spaniel', 'Maltese_dog', 'Pekinese', 'Shih-Tzu', 'Blenheim_spaniel', 'papillon', 'toy_terrier', 'Rhodesian_ridgeback', 'Afghan_hound', 'basset', 'beagle', 'bloodhound', 'bluetick', 'black-and-tan_coonhound', 'Walker_hound', 'English_foxhound', 'redbone', 'borzoi', 'Irish_wolfhound', 'Italian_greyhound', 'whippet', 'Ibizan_hound', 'Norwegian_elkhound', 'otterhound', 'Saluki', 'Scottish_deerhound', 'Weimaraner', 'Staffordshire_bullterrier', 'American_Staffordshire_terrier', 'Bedlington_terrier', 'Border_terrier', 'Kerry_blue_terrier', 'Irish_terrier', 'Norfolk_terrier', 'Norwich_terrier', 'Yorkshire_terrier', 'wire-haired_fox_terrier', 'Lakeland_terrier', 'Sealyham_terrier', 
'Airedale', 'cairn', 'Australian_terrier', 'Dandie_Dinmont', 'Boston_bull', 'miniature_schnauzer', 'giant_schnauzer', 'standard_schnauzer', 'Scotch_terrier', 'Tibetan_terrier', 'silky_terrier', 'soft-coated_wheaten_terrier', 'West_Highland_white_terrier', 'Lhasa', 'flat-coated_retriever', 'curly-coated_retriever', 'golden_retriever', 'Labrador_retriever', 'Chesapeake_Bay_retriever', 'German_short-haired_pointer', 'vizsla', 'English_setter', 'Irish_setter', 'Gordon_setter', 'Brittany_spaniel', 'clumber', 'English_springer', 'Welsh_springer_spaniel', 'cocker_spaniel', 'Sussex_spaniel', 'Irish_water_spaniel', 'kuvasz', 'schipperke', 'groenendael', 'malinois', 'briard', 'kelpie', 'komondor', 'Old_English_sheepdog', 'Shetland_sheepdog', 'collie', 'Border_collie', 'Bouvier_des_Flandres', 'Rottweiler', 'German_shepherd', 'Doberman', 'miniature_pinscher', 'Greater_Swiss_Mountain_dog', 'Bernese_mountain_dog', 'Appenzeller', 'EntleBucher', 'boxer', 'bull_mastiff', 'Tibetan_mastiff', 'French_bulldog', 'Great_Dane', 'Saint_Bernard', 'Eskimo_dog', 'malamute', 'Siberian_husky', 'affenpinscher', 'basenji', 'pug', 'Leonberg', 'Newfoundland', 'Great_Pyrenees', 'Samoyed', 'Pomeranian', 'chow', 'keeshond', 'Brabancon_griffon', 'Pembroke', 'Cardigan', 'toy_poodle', 'miniature_poodle', 'standard_poodle', 'Mexican_hairless', 'dingo', 'dhole', 'African_hunting_dog'] - -# Define a tf.function for prediction -@tf.function -def predict_image(image_array): - prediction = best_model(image_array) - class_index = tf.argmax(prediction, axis=1) - predicted_class = tf.gather(class_labels, class_index) - return predicted_class - -# Predict function with breed name as string -def predict_dog_breed(image_upload): - # Convert the PIL image to a NumPy array - image_array = np.array(image_upload) - - # Resize the image to (224, 224) - image_resized = cv2.resize(image_array, (224, 224)) - - img_array = img_to_array(image_resized) - img_array = np.expand_dims(img_array, axis=0) - img_array /= 255.0 # Normalize the image - - # Predict using the tf.function - predicted_breed = predict_image(img_array) - label = predicted_breed.numpy()[0].decode() - return label - -# Create and launch the Gradio interface -demo = gr.Interface( - predict_dog_breed, - inputs = "image", - outputs="text", - title = "Dog Breed Predictor", - description="Upload an image of your dog to predict its breed.", - cache_examples=True, - theme="default", - allow_flagging="manual", - flagging_options=["incorrect", "inaccurate"], - analytics_enabled=True, - batch=False, - max_batch_size=4, - allow_duplication=False -) - -demo.launch() diff --git a/spaces/heiyuan/ChatGPT/overwrites.py b/spaces/heiyuan/ChatGPT/overwrites.py deleted file mode 100644 index a87499a81bb3c23bf34c1faadcc02085567cd447..0000000000000000000000000000000000000000 --- a/spaces/heiyuan/ChatGPT/overwrites.py +++ /dev/null @@ -1,55 +0,0 @@ -from __future__ import annotations -import logging - -from llama_index import Prompt -from typing import List, Tuple -import mdtex2html - -from presets import * -from llama_func import * - - -def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]: - logging.debug("Compacting text chunks...🚀🚀🚀") - combined_str = [c.strip() for c in text_chunks if c.strip()] - combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)] - combined_str = "\n\n".join(combined_str) - # resplit based on self.max_chunk_overlap - text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1) - return 
text_splitter.split_text(combined_str) - - -def postprocess( - self, y: List[Tuple[str | None, str | None]] -) -> List[Tuple[str | None, str | None]]: - """ - Parameters: - y: List of tuples representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. - Returns: - List of tuples representing the message and response. Each message and response will be a string of HTML. - """ - if y is None or y == []: - return [] - tag_regex = re.compile(r"^<\w+>[^<]+") - if tag_regex.search(y[-1][1]): - y[-1] = (convert_user(y[-1][0]), y[-1][1]) - else: - y[-1] = (convert_user(y[-1][0]), convert_mdtext(y[-1][1])) - return y - -with open("./assets/custom.js", "r", encoding="utf-8") as f, open("./assets/Kelpy-Codos.js", "r", encoding="utf-8") as f2: - customJS = f.read() - kelpyCodos = f2.read() - -def reload_javascript(): - print("Reloading javascript...") - js = f'' - def template_response(*args, **kwargs): - res = GradioTemplateResponseOriginal(*args, **kwargs) - res.body = res.body.replace(b'', f'{js}'.encode("utf8")) - res.init_headers() - return res - - gr.routes.templates.TemplateResponse = template_response - -GradioTemplateResponseOriginal = gr.routes.templates.TemplateResponse \ No newline at end of file diff --git a/spaces/hexdq666/OAIRP/README.md b/spaces/hexdq666/OAIRP/README.md deleted file mode 100644 index 4a7ebd554df6d6fb25566b04ffed986b98a0e34c..0000000000000000000000000000000000000000 --- a/spaces/hexdq666/OAIRP/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: OAIRP -emoji: 🦀 -colorFrom: indigo -colorTo: green -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/architectural_variants/nnUNetTrainerV2_BN.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/architectural_variants/nnUNetTrainerV2_BN.py deleted file mode 100644 index 5b77ab13446a9330f07b886e03a46f17471c6deb..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/architectural_variants/nnUNetTrainerV2_BN.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-import torch -from nnunet.network_architecture.generic_UNet import Generic_UNet -from nnunet.network_architecture.initialization import InitWeights_He -from nnunet.training.network_training.nnUNetTrainerV2 import nnUNetTrainerV2 -from nnunet.utilities.nd_softmax import softmax_helper -from torch import nn - - -class nnUNetTrainerV2_BN(nnUNetTrainerV2): - def initialize_network(self): - """ - changed deep supervision to False - :return: - """ - if self.threeD: - conv_op = nn.Conv3d - dropout_op = nn.Dropout3d - norm_op = nn.BatchNorm3d - - else: - conv_op = nn.Conv2d - dropout_op = nn.Dropout2d - norm_op = nn.BatchNorm2d - - norm_op_kwargs = {'eps': 1e-5, 'affine': True} - dropout_op_kwargs = {'p': 0, 'inplace': True} - net_nonlin = nn.LeakyReLU - net_nonlin_kwargs = {'negative_slope': 1e-2, 'inplace': True} - self.network = Generic_UNet(self.num_input_channels, self.base_num_features, self.num_classes, - len(self.net_num_pool_op_kernel_sizes), - self.conv_per_stage, 2, conv_op, norm_op, norm_op_kwargs, dropout_op, dropout_op_kwargs, - net_nonlin, net_nonlin_kwargs, True, False, lambda x: x, InitWeights_He(1e-2), - self.net_num_pool_op_kernel_sizes, self.net_conv_kernel_sizes, False, True, True) - if torch.cuda.is_available(): - self.network.cuda() - self.network.inference_apply_nonlin = softmax_helper - - -nnUNetTrainerV2_BN_copy1 = nnUNetTrainerV2_BN -nnUNetTrainerV2_BN_copy2 = nnUNetTrainerV2_BN -nnUNetTrainerV2_BN_copy3 = nnUNetTrainerV2_BN -nnUNetTrainerV2_BN_copy4 = nnUNetTrainerV2_BN diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/utilities/tensor_utilities.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/utilities/tensor_utilities.py deleted file mode 100644 index daded59b43f87762a90852222325a5eed5be9f9a..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/utilities/tensor_utilities.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import numpy as np -import torch -from torch import nn - - -def sum_tensor(inp, axes, keepdim=False): - axes = np.unique(axes).astype(int) - if keepdim: - for ax in axes: - inp = inp.sum(int(ax), keepdim=True) - else: - for ax in sorted(axes, reverse=True): - inp = inp.sum(int(ax)) - return inp - - -def mean_tensor(inp, axes, keepdim=False): - axes = np.unique(axes).astype(int) - if keepdim: - for ax in axes: - inp = inp.mean(int(ax), keepdim=True) - else: - for ax in sorted(axes, reverse=True): - inp = inp.mean(int(ax)) - return inp - - -def flip(x, dim): - """ - flips the tensor at dimension dim (mirroring!) 
- :param x: - :param dim: - :return: - """ - indices = [slice(None)] * x.dim() - indices[dim] = torch.arange(x.size(dim) - 1, -1, -1, - dtype=torch.long, device=x.device) - return x[tuple(indices)] - - diff --git a/spaces/huak95/personaGPT_custom/setup.py b/spaces/huak95/personaGPT_custom/setup.py deleted file mode 100644 index bd03734ebff2b36a4e83c757da0355124b9a8a4e..0000000000000000000000000000000000000000 --- a/spaces/huak95/personaGPT_custom/setup.py +++ /dev/null @@ -1,9 +0,0 @@ -from transformers import GPT2Tokenizer, GPT2LMHeadModel, AutoTokenizer, AutoModelForCausalLM -import os -import torch - -print("TRANSFORMERS_CACHE", os.environ['TRANSFORMERS_CACHE']) -print("Fetching model...") - -tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium") -model = AutoModelForCausalLM.from_pretrained("af1tang/personaGPT") diff --git a/spaces/huggingface/library-metrics/README.md b/spaces/huggingface/library-metrics/README.md deleted file mode 100644 index 87aed2c579cb7907bda490e70f0fcbc0ab1579e1..0000000000000000000000000000000000000000 --- a/spaces/huggingface/library-metrics/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Hf Library Metrics -emoji: 📊 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/hussain-shk/IndiSent/indic_nlp_library/indicnlp/script/__init__.py b/spaces/hussain-shk/IndiSent/indic_nlp_library/indicnlp/script/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/hussain-shk/IndiSent/inference/custom_interactive.py b/spaces/hussain-shk/IndiSent/inference/custom_interactive.py deleted file mode 100644 index 1e167a450c10991fa30f885721f99f233c35416e..0000000000000000000000000000000000000000 --- a/spaces/hussain-shk/IndiSent/inference/custom_interactive.py +++ /dev/null @@ -1,298 +0,0 @@ -# python wrapper for fairseq-interactive command line tool - -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Translate raw text with a trained model. Batches data on-the-fly. 
-""" - -import ast -from collections import namedtuple - -import torch -from fairseq import checkpoint_utils, options, tasks, utils -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.token_generation_constraints import pack_constraints, unpack_constraints -from fairseq_cli.generate import get_symbols_to_strip_from_output - -import codecs - - -Batch = namedtuple("Batch", "ids src_tokens src_lengths constraints") -Translation = namedtuple("Translation", "src_str hypos pos_scores alignments") - - -def make_batches( - lines, cfg, task, max_positions, encode_fn, constrainted_decoding=False -): - def encode_fn_target(x): - return encode_fn(x) - - if constrainted_decoding: - # Strip (tab-delimited) contraints, if present, from input lines, - # store them in batch_constraints - batch_constraints = [list() for _ in lines] - for i, line in enumerate(lines): - if "\t" in line: - lines[i], *batch_constraints[i] = line.split("\t") - - # Convert each List[str] to List[Tensor] - for i, constraint_list in enumerate(batch_constraints): - batch_constraints[i] = [ - task.target_dictionary.encode_line( - encode_fn_target(constraint), - append_eos=False, - add_if_not_exist=False, - ) - for constraint in constraint_list - ] - - if constrainted_decoding: - constraints_tensor = pack_constraints(batch_constraints) - else: - constraints_tensor = None - - tokens, lengths = task.get_interactive_tokens_and_lengths(lines, encode_fn) - - itr = task.get_batch_iterator( - dataset=task.build_dataset_for_inference( - tokens, lengths, constraints=constraints_tensor - ), - max_tokens=cfg.dataset.max_tokens, - max_sentences=cfg.dataset.batch_size, - max_positions=max_positions, - ignore_invalid_inputs=cfg.dataset.skip_invalid_size_inputs_valid_test, - ).next_epoch_itr(shuffle=False) - for batch in itr: - ids = batch["id"] - src_tokens = batch["net_input"]["src_tokens"] - src_lengths = batch["net_input"]["src_lengths"] - constraints = batch.get("constraints", None) - - yield Batch( - ids=ids, - src_tokens=src_tokens, - src_lengths=src_lengths, - constraints=constraints, - ) - - -class Translator: - def __init__( - self, data_dir, checkpoint_path, batch_size=25, constrained_decoding=False - ): - - self.constrained_decoding = constrained_decoding - self.parser = options.get_generation_parser(interactive=True) - # buffer_size is currently not used but we just initialize it to batch - # size + 1 to avoid any assertion errors. - if self.constrained_decoding: - self.parser.set_defaults( - path=checkpoint_path, - remove_bpe="subword_nmt", - num_workers=-1, - constraints="ordered", - batch_size=batch_size, - buffer_size=batch_size + 1, - ) - else: - self.parser.set_defaults( - path=checkpoint_path, - remove_bpe="subword_nmt", - num_workers=-1, - batch_size=batch_size, - buffer_size=batch_size + 1, - ) - args = options.parse_args_and_arch(self.parser, input_args=[data_dir]) - # we are explictly setting src_lang and tgt_lang here - # generally the data_dir we pass contains {split}-{src_lang}-{tgt_lang}.*.idx files from - # which fairseq infers the src and tgt langs(if these are not passed). In deployment we dont - # use any idx files and only store the SRC and TGT dictionaries. 
- args.source_lang = "SRC" - args.target_lang = "TGT" - # since we are truncating sentences to max_seq_len in engine, we can set it to False here - args.skip_invalid_size_inputs_valid_test = False - - # we have custom architechtures in this folder and we will let fairseq - # import this - args.user_dir = "model_configs" - self.cfg = convert_namespace_to_omegaconf(args) - - utils.import_user_module(self.cfg.common) - - if self.cfg.interactive.buffer_size < 1: - self.cfg.interactive.buffer_size = 1 - if self.cfg.dataset.max_tokens is None and self.cfg.dataset.batch_size is None: - self.cfg.dataset.batch_size = 1 - - assert ( - not self.cfg.generation.sampling - or self.cfg.generation.nbest == self.cfg.generation.beam - ), "--sampling requires --nbest to be equal to --beam" - assert ( - not self.cfg.dataset.batch_size - or self.cfg.dataset.batch_size <= self.cfg.interactive.buffer_size - ), "--batch-size cannot be larger than --buffer-size" - - # Fix seed for stochastic decoding - # if self.cfg.common.seed is not None and not self.cfg.generation.no_seed_provided: - # np.random.seed(self.cfg.common.seed) - # utils.set_torch_seed(self.cfg.common.seed) - - # if not self.constrained_decoding: - # self.use_cuda = torch.cuda.is_available() and not self.cfg.common.cpu - # else: - # self.use_cuda = False - - self.use_cuda = torch.cuda.is_available() and not self.cfg.common.cpu - - # Setup task, e.g., translation - self.task = tasks.setup_task(self.cfg.task) - - # Load ensemble - overrides = ast.literal_eval(self.cfg.common_eval.model_overrides) - self.models, self._model_args = checkpoint_utils.load_model_ensemble( - utils.split_paths(self.cfg.common_eval.path), - arg_overrides=overrides, - task=self.task, - suffix=self.cfg.checkpoint.checkpoint_suffix, - strict=(self.cfg.checkpoint.checkpoint_shard_count == 1), - num_shards=self.cfg.checkpoint.checkpoint_shard_count, - ) - - # Set dictionaries - self.src_dict = self.task.source_dictionary - self.tgt_dict = self.task.target_dictionary - - # Optimize ensemble for generation - for model in self.models: - if model is None: - continue - if self.cfg.common.fp16: - model.half() - if ( - self.use_cuda - and not self.cfg.distributed_training.pipeline_model_parallel - ): - model.cuda() - model.prepare_for_inference_(self.cfg) - - # Initialize generator - self.generator = self.task.build_generator(self.models, self.cfg.generation) - - # Handle tokenization and BPE - self.tokenizer = self.task.build_tokenizer(self.cfg.tokenizer) - self.bpe = self.task.build_bpe(self.cfg.bpe) - - # Load alignment dictionary for unknown word replacement - # (None if no unknown word replacement, empty if no path to align dictionary) - self.align_dict = utils.load_align_dict(self.cfg.generation.replace_unk) - - self.max_positions = utils.resolve_max_positions( - self.task.max_positions(), *[model.max_positions() for model in self.models] - ) - - def encode_fn(self, x): - if self.tokenizer is not None: - x = self.tokenizer.encode(x) - if self.bpe is not None: - x = self.bpe.encode(x) - return x - - def decode_fn(self, x): - if self.bpe is not None: - x = self.bpe.decode(x) - if self.tokenizer is not None: - x = self.tokenizer.decode(x) - return x - - def translate(self, inputs, constraints=None): - if self.constrained_decoding and constraints is None: - raise ValueError("Constraints cant be None in constrained decoding mode") - if not self.constrained_decoding and constraints is not None: - raise ValueError("Cannot pass constraints during normal translation") - if constraints: - 
constrained_decoding = True - modified_inputs = [] - for _input, constraint in zip(inputs, constraints): - modified_inputs.append(_input + f"\t{constraint}") - inputs = modified_inputs - else: - constrained_decoding = False - - start_id = 0 - results = [] - final_translations = [] - for batch in make_batches( - inputs, - self.cfg, - self.task, - self.max_positions, - self.encode_fn, - constrained_decoding, - ): - bsz = batch.src_tokens.size(0) - src_tokens = batch.src_tokens - src_lengths = batch.src_lengths - constraints = batch.constraints - if self.use_cuda: - src_tokens = src_tokens.cuda() - src_lengths = src_lengths.cuda() - if constraints is not None: - constraints = constraints.cuda() - - sample = { - "net_input": { - "src_tokens": src_tokens, - "src_lengths": src_lengths, - }, - } - - translations = self.task.inference_step( - self.generator, self.models, sample, constraints=constraints - ) - - list_constraints = [[] for _ in range(bsz)] - if constrained_decoding: - list_constraints = [unpack_constraints(c) for c in constraints] - for i, (id, hypos) in enumerate(zip(batch.ids.tolist(), translations)): - src_tokens_i = utils.strip_pad(src_tokens[i], self.tgt_dict.pad()) - constraints = list_constraints[i] - results.append( - ( - start_id + id, - src_tokens_i, - hypos, - { - "constraints": constraints, - }, - ) - ) - - # sort output to match input order - for id_, src_tokens, hypos, _ in sorted(results, key=lambda x: x[0]): - src_str = "" - if self.src_dict is not None: - src_str = self.src_dict.string( - src_tokens, self.cfg.common_eval.post_process - ) - - # Process top predictions - for hypo in hypos[: min(len(hypos), self.cfg.generation.nbest)]: - hypo_tokens, hypo_str, alignment = utils.post_process_prediction( - hypo_tokens=hypo["tokens"].int().cpu(), - src_str=src_str, - alignment=hypo["alignment"], - align_dict=self.align_dict, - tgt_dict=self.tgt_dict, - remove_bpe="subword_nmt", - extra_symbols_to_ignore=get_symbols_to_strip_from_output( - self.generator - ), - ) - detok_hypo_str = self.decode_fn(hypo_str) - final_translations.append(detok_hypo_str) - return final_translations diff --git a/spaces/hvtham/text_mining_21C11027/app.py b/spaces/hvtham/text_mining_21C11027/app.py deleted file mode 100644 index 5eafa07b7b4a0bd5e1a99072baf51eb781dc3550..0000000000000000000000000000000000000000 --- a/spaces/hvtham/text_mining_21C11027/app.py +++ /dev/null @@ -1,69 +0,0 @@ -# -*- coding: utf-8 -*- -"""21C11027.ipynb - -Automatically generated by Colaboratory. - -Original file is located at - https://colab.research.google.com/drive/1z_jG4sUgsIhZRoikoXxYMHNAMpiYlWAW - -**KHAI THÁC NGỮ LIỆU VĂN BẢN NÂNG CAO** - -* **Họ và tên:** Huỳnh Viết Thám -* **Mã số học viên:** 21C11027 - -# Cài đặt thư viện cần thiết -""" - -from serpapi import GoogleSearch - -def checkPaper(publication_name): - params = { - "api_key": "3fb62919a0e61a6a58cf9815798253799210ab69fbc3c9c9a81785c7cabcc3fa", - "engine": "google", - "q": "*", - "location": "Austin, Texas, United States", - "google_domain": "google.com", - "gl": "us", - "hl": "en", - "as_sitesearch": "github.com" - } - - # q ở đây là query. 
At this step the user's input is assigned into the params dict initialized above
-    params["q"] = publication_name
-    # Run the search with the GoogleSearch library imported above, passing params after the query q has been updated
-    search = GoogleSearch(params)
-    # Store the results so they can be easily accessed later
-    results = search.get_dict()
-    # Take the top 5 search results as JSON
-    top5_result = results["organic_results"][0:5]
-    # Create two variables: github_link flags whether any of the top-5 results contains a GitHub link; if so, the link is kept in backup_link
-    github_link = False
-    backup_link = None
-    # If more than 70% of the search keywords match, the result is accepted.
-    # The 70% value can be changed; there is no specific statistic on which value works best
-    threshold = 0.7
-    # Loop over the top-5 results looking for a GitHub link
-    for result in top5_result:
-        # Split the entered paper title into words so each word can be matched against the result quickly
-        word_list = publication_name.split(' ')
-        len_word_list = len(word_list)
-        count = 0
-        # Skip results without a GitHub link; otherwise check what fraction of the split words appears in the result snippet (70% as declared above is enough)
-        if "https://github.com/" in result['link']:
-            for word in word_list:
-                if word in result['snippet']:
-                    count += 1
-            if count / len_word_list >= threshold:
-                github_link = True
-                backup_link = result['link']
-                break
-    # Check whether a GitHub link was found and return accordingly
-    if github_link == False:
-        return "Currently, the github link for the entered article title has not been found. Please check back later!"
-    else:
-        return backup_link + " | Here is the link github found based on your input. Please check the link above."
- -import gradio as gr - -demo = gr.Interface(fn=checkPaper, inputs="text", outputs="text") -demo.launch(share = True) \ No newline at end of file diff --git a/spaces/hysts-samples/space-monitor/README.md b/spaces/hysts-samples/space-monitor/README.md deleted file mode 100644 index cf58ef76654b51955d7efa3a8cd3a086d76a98cf..0000000000000000000000000000000000000000 --- a/spaces/hysts-samples/space-monitor/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Space Monitor -emoji: ⚡ -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.48.0 -python_version: 3.10.13 -app_file: app.py -pinned: false -license: mit -duplicated_from: hysts-samples/base-space ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/iamstolas/STOLAS/src/lib/hooks/chat-history.ts b/spaces/iamstolas/STOLAS/src/lib/hooks/chat-history.ts deleted file mode 100644 index c6fbf3fecfa86fe553f56acc8253236b8f22a775..0000000000000000000000000000000000000000 --- a/spaces/iamstolas/STOLAS/src/lib/hooks/chat-history.ts +++ /dev/null @@ -1,62 +0,0 @@ -import { zip } from 'lodash-es' -import { ChatMessageModel, BotId } from '@/lib/bots/bing/types' -import { Storage } from '../storage' - -/** - * conversations:$botId => Conversation[] - * conversation:$botId:$cid:messages => ChatMessageModel[] - */ - -interface Conversation { - id: string - createdAt: number -} - -type ConversationWithMessages = Conversation & { messages: ChatMessageModel[] } - -async function loadHistoryConversations(botId: BotId): Promise { - const key = `conversations:${botId}` - const { [key]: value } = await Storage.get(key) - return value || [] -} - -async function deleteHistoryConversation(botId: BotId, cid: string) { - const conversations = await loadHistoryConversations(botId) - const newConversations = conversations.filter((c) => c.id !== cid) - await Storage.set({ [`conversations:${botId}`]: newConversations }) -} - -async function loadConversationMessages(botId: BotId, cid: string): Promise { - const key = `conversation:${botId}:${cid}:messages` - const { [key]: value } = await Storage.get(key) - return value || [] -} - -export async function setConversationMessages(botId: BotId, cid: string, messages: ChatMessageModel[]) { - const conversations = await loadHistoryConversations(botId) - if (!conversations.some((c) => c.id === cid)) { - conversations.unshift({ id: cid, createdAt: Date.now() }) - await Storage.set({ [`conversations:${botId}`]: conversations }) - } - const key = `conversation:${botId}:${cid}:messages` - await Storage.set({ [key]: messages }) -} - -export async function loadHistoryMessages(botId: BotId): Promise { - const conversations = await loadHistoryConversations(botId) - const messagesList = await Promise.all(conversations.map((c) => loadConversationMessages(botId, c.id))) - return zip(conversations, messagesList).map(([c, messages]) => ({ - id: c!.id, - createdAt: c!.createdAt, - messages: messages!, - })) -} - -export async function deleteHistoryMessage(botId: BotId, conversationId: string, messageId: string) { - const messages = await loadConversationMessages(botId, conversationId) - const newMessages = messages.filter((m) => m.id !== messageId) - await setConversationMessages(botId, conversationId, newMessages) - if (!newMessages.length) { - await deleteHistoryConversation(botId, conversationId) - } -} diff --git a/spaces/imseldrith/BotX/Dockerfile b/spaces/imseldrith/BotX/Dockerfile deleted file mode 100644 index 
05a8bd18c666146d247eab03fc230d6c5f1b2ee1..0000000000000000000000000000000000000000 --- a/spaces/imseldrith/BotX/Dockerfile +++ /dev/null @@ -1,9 +0,0 @@ -FROM python:3.10.6-slim-buster - -WORKDIR . -COPY . . - -RUN pip3 install -r requirements.txt - -CMD ["python3", "bot.py"] - diff --git a/spaces/inamXcontru/PoeticTTS/Ansys Products 18 2 Win64 SSQ.md b/spaces/inamXcontru/PoeticTTS/Ansys Products 18 2 Win64 SSQ.md deleted file mode 100644 index 68ff744b555b5c82b9d4aea9f7fe40abab04c627..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Ansys Products 18 2 Win64 SSQ.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Ansys Products 18 2 Win64 SSQ


    DOWNLOAD ►►►►► https://gohhs.com/2uz4IS



    - -October 5, 2017 - It is also a full offline installer, standalone installer, and compressed version of ANSYS 18 products. ANSYS 18.2 Products Free Download ... ➡ ANSYS 18.1.2 Major improvements and changes in ANSYS 18.1.2: the graphical user interface and the project management interface have been improved, along with visualization, modeling and analysis based on DynaLynx technology and the data management interface. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/inamXcontru/PoeticTTS/Bopup.Communication.Server.v5.1.0.EIM.LAN.Messaging.Server.Skype Serial Key Keygen The Ultimate EIM Solution for Your Business.md b/spaces/inamXcontru/PoeticTTS/Bopup.Communication.Server.v5.1.0.EIM.LAN.Messaging.Server.Skype Serial Key Keygen The Ultimate EIM Solution for Your Business.md deleted file mode 100644 index 08856bb5d68192e4e0c385f85824f27996ee8c71..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Bopup.Communication.Server.v5.1.0.EIM.LAN.Messaging.Server.Skype Serial Key Keygen The Ultimate EIM Solution for Your Business.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Bopup.Communication.Server.v5.1.0.EIM.LAN.Messaging.Server.Skype Serial Key Keygen


    Download Filehttps://gohhs.com/2uz3Kz



    -
                  -
                  
    -
    -
    -

                  diff --git a/spaces/innnky/nyaru4.0/flask_api.py b/spaces/innnky/nyaru4.0/flask_api.py deleted file mode 100644 index 8cc236a1c34c9ddeddea99bcea13024fb0ccc90b..0000000000000000000000000000000000000000 --- a/spaces/innnky/nyaru4.0/flask_api.py +++ /dev/null @@ -1,56 +0,0 @@ -import io -import logging - -import soundfile -import torch -import torchaudio -from flask import Flask, request, send_file -from flask_cors import CORS - -from inference.infer_tool import Svc, RealTimeVC - -app = Flask(__name__) - -CORS(app) - -logging.getLogger('numba').setLevel(logging.WARNING) - - -@app.route("/voiceChangeModel", methods=["POST"]) -def voice_change_model(): - request_form = request.form - wave_file = request.files.get("sample", None) - # pitch-shift amount requested by the client - f_pitch_change = float(request_form.get("fPitchChange", 0)) - # sample rate required by the DAW - daw_sample = int(float(request_form.get("sampleRate", 0))) - speaker_id = int(float(request_form.get("sSpeakId", 0))) - # read the wav file received over HTTP and convert it - input_wav_path = io.BytesIO(wave_file.read()) - - # model inference - if raw_infer: - out_audio, out_sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path) - tar_audio = torchaudio.functional.resample(out_audio, svc_model.target_sample, daw_sample) - else: - out_audio = svc.process(svc_model, speaker_id, f_pitch_change, input_wav_path) - tar_audio = torchaudio.functional.resample(torch.from_numpy(out_audio), svc_model.target_sample, daw_sample) - # return the audio - out_wav_path = io.BytesIO() - soundfile.write(out_wav_path, tar_audio.cpu().numpy(), daw_sample, format="wav") - out_wav_path.seek(0) - return send_file(out_wav_path, download_name="temp.wav", as_attachment=True) - - -if __name__ == '__main__': - # if True, synthesize by direct slicing; False uses crossfading - # setting the VST plugin slice time to 0.3-0.5 s reduces latency; direct slicing can pop at slice joins, while crossfading slightly overlaps the audio - # pick whichever trade-off is acceptable, or set the VST max slice time to 1 s; it is set to True here for higher latency but more stable audio quality - raw_infer = True - # each model is uniquely paired with its config - model_name = "logs/32k/G_174000-Copy1.pth" - config_name = "configs/config.json" - svc_model = Svc(model_name, config_name) - svc = RealTimeVC() - # this matches the VST plugin; changing it is not recommended - app.run(port=6842, host="0.0.0.0", debug=False, threaded=False) diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Adams Rail Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Adams Rail Download.md deleted file mode 100644 index 5701042448e6b5a1504a7d2e0aa3b1b626871db1..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Adams Rail Download.md +++ /dev/null @@ -1,49 +0,0 @@ -
                  

    How to Download and Use Adams Rail for Multibody Dynamics Simulation

    -

    Adams Rail is a specialized software for railway simulation that allows engineers to study the dynamics of moving parts, how loads and forces are distributed throughout mechanical systems, and to improve and optimize the performance of their products. Adams Rail is based on Adams, the most widely used multibody dynamics and motion analysis software in the world.

    -

    In this article, we will show you how to download and use Adams Rail for your railway simulation projects. You will learn how to:

    -

    adams rail download


    DOWNLOAD ——— https://urlin.us/2uEvJv



    -
      -
    • Download Adams Rail student edition for free from Hexagon
    • -
    • Access online tutorials and e-learning courses for Adams Rail
    • -
    • Create and test virtual prototypes of railway vehicles and tracks
    • -
    • Analyze vehicle stability, derailment safety clearance, track load, passenger comfort, and more
    • -
    • Optimize your design using parametrics, design sensitivity, and optimization tools
    • -
    -

    Download Adams Rail student edition for free from Hexagon

    -

    If you are a student or an educator, you can download Adams Rail student edition for free from Hexagon. This edition includes all the features of Adams Rail, except for some limitations on the number of parts and degrees of freedom. To download Adams Rail student edition, you need to:

    -
      -
    1. Visit https://hexagon.com/products/adams-student-edition
    2. -
    3. Fill out the registration form with your name, email address, institution name, and country
    4. -
    5. Check your email for a confirmation link and click on it
    6. -
    7. Download the installation file and follow the instructions to install Adams Rail on your computer
    8. -
    -

    Access online tutorials and e-learning courses for Adams Rail

    -

    Once you have installed Adams Rail on your computer, you can access online tutorials and e-learning courses to learn how to use it effectively. Hexagon offers a variety of courses covering different aspects of multibody dynamics analysis with Adams Rail. Some of the courses are:

    -
      -
    • ADM701: Complete Multibody Dynamics Analysis with Adams
    • -
    • ADM710: Flex Body Dynamics and Modal Stress Recovery using Adams
    • -
    • ADM740: Vehicle Modeling and Simulation using Adams Car
    • -
    • ADM761: Basic Suspension and Full Vehicle Analysis using Adams Chassis
    • -
    • ADN701: Adams Modeler Overview
    • -
    -

    To access these courses, you need to:

    -
      -
    1. Visit https://hexagon.com/products/adams-student-edition
    2. -
    3. Click on "Adams tutorials" under "Resources"
    4. -
    5. Select the course you want to take and click on "Register"
    6. -
    7. Login with your email address and password
    8. -
    9. Start learning at your own pace
    10. -
    -

    Create and test virtual prototypes of railway vehicles and tracks

    -

    With Adams Rail, you can create and test virtual prototypes of railway vehicles and tracks in a fraction of the time and cost required for physical build and test. You can easily model complex geometries, materials, joints, contacts, forces, motions, controls, and events using a graphical user interface or a scripting language. You can also import CAD models from other software or use predefined templates and libraries.

    -

    To create and test virtual prototypes of railway vehicles and tracks with Adams Rail, you need to:

    -
      -
    1. Launch Adams Rail from your computer
    2. -
    3. Create a new model or open an existing one
    4. -
    5. Add or modify parts, joints, contacts, forces, motions, controls, and events as needed
    6. -
    7. Run the simulation and view the results in high-speed animation or graphs
    8. -
    9. Analyze the results using various tools such as plots, reports, statistics, etc.
    10. -
    11. Modify your design as needed and repeat the simulation until you achieve your desired performance
                  
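                  The create-simulate-analyze loop in the steps above can also be driven from a script when you want to sweep many candidate designs instead of editing one model by hand. The sketch below is plain Python, not the Adams Rail scripting API: run_rail_simulation(), the metric names, and the stiffness values are hypothetical placeholders standing in for whatever batch-run wrapper and result files your own setup provides.

                  ```python
                  # Generic sketch of the simulate-analyze-modify loop described above.
                  # run_rail_simulation() is a hypothetical stand-in for a batch run of your
                  # Adams Rail model; the toy formulas below exist only so the script runs.

                  def run_rail_simulation(stiffness_n_per_m: float) -> dict:
                      """Pretend solver call: return metrics for one candidate suspension stiffness."""
                      comfort = abs(stiffness_n_per_m - 1.0e6) / 1.0e6                # lower is better (toy model)
                      derailment_margin = 0.3 + 0.2 * (stiffness_n_per_m / 1.0e6)    # higher is safer (toy model)
                      return {"ride_comfort_index": comfort, "derailment_margin": derailment_margin}


                  def sweep_designs(candidates):
                      """Run each candidate, discard unsafe ones, and keep the most comfortable design."""
                      best = None
                      for stiffness in candidates:
                          metrics = run_rail_simulation(stiffness)
                          if metrics["derailment_margin"] < 0.4:                      # illustrative safety limit
                              continue
                          if best is None or metrics["ride_comfort_index"] < best[1]["ride_comfort_index"]:
                              best = (stiffness, metrics)
                      return best


                  if __name__ == "__main__":
                      stiffness, metrics = sweep_designs([0.8e6, 1.0e6, 1.2e6])      # N/m, illustrative values
                      print(f"best stiffness: {stiffness:.2e} N/m, metrics: {metrics}")
                  ```
                  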
      -
      -
      \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Artista Mixed Media Art Photoshop Action Rar.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Artista Mixed Media Art Photoshop Action Rar.md deleted file mode 100644 index 5fa2d92c8fa763ce399df27490b51a2f35007a01..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Artista Mixed Media Art Photoshop Action Rar.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Artista Mixed Media Art Photoshop Action rar


      Download File >>> https://urlin.us/2uEywc



      - -"Birthday Gift. . ♢ • Libreng MP3 REMIX • BASTA »DJ SKY™ iDOL ♥ • STUCK ON YOU • "SLOWJAM BATTLE MIX" ... ↓↓♥♤♢♧ http://www.mediafire.com/… 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Bihar And Orissa Public Demand Recovery Act 1914 Pdf 75.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Bihar And Orissa Public Demand Recovery Act 1914 Pdf 75.md deleted file mode 100644 index 9d9d1ed6dc177417086670c3bc772945f69949c1..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Bihar And Orissa Public Demand Recovery Act 1914 Pdf 75.md +++ /dev/null @@ -1,6 +0,0 @@ -

      bihar and orissa public demand recovery act 1914 pdf 75


      Download Filehttps://urlin.us/2uExYA



                  executive instructions of the State of Bihar were being followed by the State of ... Bihar. & Orissa Public Demand Recovery Act, 1914 was filed except in the case of ... (+) 75. 2007-08. 11.22. 170.50. (+) 159.28. (+) 1,420. 2008-09. 173.11. 48.14.
                  
      -
      -
      -

      diff --git a/spaces/inreVtussa/clothingai/Examples/Breakaway Broadcast Processor Asio.0.90.95 39 TOP.md b/spaces/inreVtussa/clothingai/Examples/Breakaway Broadcast Processor Asio.0.90.95 39 TOP.md deleted file mode 100644 index fafe5fede9df9ed569e7bfb3f59b6e8975a01dc9..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Breakaway Broadcast Processor Asio.0.90.95 39 TOP.md +++ /dev/null @@ -1,10 +0,0 @@ - -

                  The download carries no protected watermark. If you are the copyright owner of Breakaway Broadcast Processor and wish to have this software removed, please let us know.
                  

      -

      Breakaway broadcast processor asio.0.90.95 39


      DOWNLOAD ··· https://tiurll.com/2uCkBZ



      -

                  Breakaway Broadcast Processor has been checked for viruses several times and found to be clean. Breakaway Broadcast Processor for Windows is available for download from our fast download servers.
                  

      -

                  We do not store any of the downloaded data; Breakaway Broadcast Processor is downloaded directly from the author's site. We strongly discourage downloading Breakaway Broadcast Processor from any site other than this one, because of the risk of infection. Our Breakaway Broadcast Processor package is a clean installer that is ready to install as soon as it is downloaded from our servers. Note that if you don't have administrator rights on your computer, you will need to obtain them in order to install Breakaway Broadcast Processor.
                  

      -

                  We strongly recommend downloading and installing Breakaway Broadcast Processor for free directly from our site. The Breakaway Broadcast Processor site owner is responsible for all the content, files, and DLLs on this site, and you must use all of the software available here according to its terms and conditions.
                  

      -

      -

                  If you feel that Breakaway Broadcast Processor is illegal, please do not use it. You can remove it yourself using standard package-removal tools. If you have problems, please drop us a note and we will try to resolve them as soon as possible.
                  

                  
      -
      -
      \ No newline at end of file diff --git a/spaces/james-oldfield/PandA/annotated_directions.py b/spaces/james-oldfield/PandA/annotated_directions.py deleted file mode 100644 index 57c553e63533864eee556067c61c705fd78e0cfb..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/annotated_directions.py +++ /dev/null @@ -1,171 +0,0 @@ -annotated_directions = { - 'stylegan2_ffhq1024': { - # Directions used in paper with a single decomposition: - 'big_eyes': { - 'parameters': [7, 6, 30], # used in main paper - 'layer': 5, - 'ranks': [512, 8], - 'checkpoints_path': [ - './checkpoints/Us-name_stylegan2_ffhq1024-layer_5-rank_8.npy', - './checkpoints/Uc-name_stylegan2_ffhq1024-layer_5-rank_512.npy', - ], - }, - 'long_nose': { - 'parameters': [5, 82, 30], # used in main paper - 'layer': 5, - 'ranks': [512, 8], - 'checkpoints_path': [ - './checkpoints/Us-name_stylegan2_ffhq1024-layer_5-rank_8.npy', - './checkpoints/Uc-name_stylegan2_ffhq1024-layer_5-rank_512.npy', - ], - }, - 'smile': { - 'parameters': [4, 46, -30], # used in sup. material - 'layer': 5, - 'ranks': [512, 8], - 'checkpoints_path': [ - './checkpoints/Us-name_stylegan2_ffhq1024-layer_5-rank_8.npy', - './checkpoints/Uc-name_stylegan2_ffhq1024-layer_5-rank_512.npy', - ], - }, - 'open_mouth': { - 'parameters': [4, 39, 30], # used in sup. material - 'layer': 5, - 'ranks': [512, 8], - 'checkpoints_path': [ - './checkpoints/Us-name_stylegan2_ffhq1024-layer_5-rank_8.npy', - './checkpoints/Uc-name_stylegan2_ffhq1024-layer_5-rank_512.npy', - ], - }, - - # Additional directions - 'big_eyeballs': { - 'parameters': [8, 27, 100], - 'layer': 6, - 'ranks': [512, 16], - 'checkpoints_path': [ - './checkpoints/Us-name_stylegan2_ffhq1024-layer_6-rank_16.npy', - './checkpoints/Uc-name_stylegan2_ffhq1024-layer_6-rank_512.npy', - ], - }, - 'wide_nose': { - 'parameters': [15, 13, 100], - 'layer': 6, - 'ranks': [512, 16], - 'checkpoints_path': [ - './checkpoints/Us-name_stylegan2_ffhq1024-layer_6-rank_16.npy', - './checkpoints/Uc-name_stylegan2_ffhq1024-layer_6-rank_512.npy', - ], - }, - 'glance_left': { - 'parameters': [8, 281, 50], - 'layer': 6, - 'ranks': [512, 16], - 'checkpoints_path': [ - './checkpoints/Us-name_stylegan2_ffhq1024-layer_6-rank_16.npy', - './checkpoints/Uc-name_stylegan2_ffhq1024-layer_6-rank_512.npy', - ], - }, - 'glance_right': { - 'parameters': [8, 281, -70], - 'layer': 6, - 'ranks': [512, 16], - 'checkpoints_path': [ - './checkpoints/Us-name_stylegan2_ffhq1024-layer_6-rank_16.npy', - './checkpoints/Uc-name_stylegan2_ffhq1024-layer_6-rank_512.npy', - ], - }, - 'bald_forehead': { - 'parameters': [3, 25, 100], - 'layer': 6, - 'ranks': [512, 16], - 'checkpoints_path': [ - './checkpoints/Us-name_stylegan2_ffhq1024-layer_6-rank_16.npy', - './checkpoints/Uc-name_stylegan2_ffhq1024-layer_6-rank_512.npy', - ], - }, - 'light_eyebrows': { - 'parameters': [8, 4, 30], - 'layer': 7, - 'ranks': [512, 16], - 'checkpoints_path': [ - './checkpoints/Us-name_stylegan2_ffhq1024-layer_7-rank_16.npy', - './checkpoints/Uc-name_stylegan2_ffhq1024-layer_7-rank_512.npy', - ], - }, - 'dark_eyebrows': { - 'parameters': [8, 9, 30], - 'layer': 7, - 'ranks': [512, 16], - 'checkpoints_path': [ - './checkpoints/Us-name_stylegan2_ffhq1024-layer_7-rank_16.npy', - './checkpoints/Uc-name_stylegan2_ffhq1024-layer_7-rank_512.npy', - ], - }, - 'no_eyebrows': { - 'parameters': [8, 4, 50], - 'layer': 7, - 'ranks': [512, 16], - 'checkpoints_path': [ - './checkpoints/Us-name_stylegan2_ffhq1024-layer_7-rank_16.npy', - 
'./checkpoints/Uc-name_stylegan2_ffhq1024-layer_7-rank_512.npy', - ], - }, - 'dark_eyes': { - 'parameters': [11, 176, 50], - 'layer': 7, - 'ranks': [512, 16], - 'checkpoints_path': [ - './checkpoints/Us-name_stylegan2_ffhq1024-layer_7-rank_16.npy', - './checkpoints/Uc-name_stylegan2_ffhq1024-layer_7-rank_512.npy', - ], - }, - 'red_eyes': { - 'parameters': [11, 109, 60], - 'layer': 7, - 'ranks': [512, 16], - 'checkpoints_path': [ - './checkpoints/Us-name_stylegan2_ffhq1024-layer_7-rank_16.npy', - './checkpoints/Uc-name_stylegan2_ffhq1024-layer_7-rank_512.npy', - ], - }, - 'eyes_short': { - 'parameters': [11, 262, 70], - 'layer': 7, - 'ranks': [512, 16], - 'checkpoints_path': [ - './checkpoints/Us-name_stylegan2_ffhq1024-layer_7-rank_16.npy', - './checkpoints/Uc-name_stylegan2_ffhq1024-layer_7-rank_512.npy', - ], - }, - 'eyes_open': { - 'parameters': [11, 28, 50], - 'layer': 7, - 'ranks': [512, 16], - 'checkpoints_path': [ - './checkpoints/Us-name_stylegan2_ffhq1024-layer_7-rank_16.npy', - './checkpoints/Uc-name_stylegan2_ffhq1024-layer_7-rank_512.npy', - ], - }, - 'eyes_close': { - 'parameters': [11, 398, 80], - 'layer': 7, - 'ranks': [512, 16], - 'checkpoints_path': [ - './checkpoints/Us-name_stylegan2_ffhq1024-layer_7-rank_16.npy', - './checkpoints/Uc-name_stylegan2_ffhq1024-layer_7-rank_512.npy', - ], - }, - 'no_eyes': { - 'parameters': [11, 0, -200], - 'layer': 7, - 'ranks': [512, 16], - 'checkpoints_path': [ - './checkpoints/Us-name_stylegan2_ffhq1024-layer_7-rank_16.npy', - './checkpoints/Uc-name_stylegan2_ffhq1024-layer_7-rank_512.npy', - ], - }, - - }, - -} diff --git a/spaces/jbetker/tortoise/app.py b/spaces/jbetker/tortoise/app.py deleted file mode 100644 index a699bc5b3c2e987102ca93e0ee28d601e0a93d02..0000000000000000000000000000000000000000 --- a/spaces/jbetker/tortoise/app.py +++ /dev/null @@ -1,7 +0,0 @@ -import gradio as gr - -def greet(name): - return "Hello " + name + "!!" - -iface = gr.Interface(fn=greet, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/jbilcke-hf/MusicGen/audiocraft/modules/seanet.py b/spaces/jbilcke-hf/MusicGen/audiocraft/modules/seanet.py deleted file mode 100644 index 3e5998e9153afb6e68ea410d565e00ea835db248..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/MusicGen/audiocraft/modules/seanet.py +++ /dev/null @@ -1,258 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import numpy as np -import torch.nn as nn - -from .conv import StreamableConv1d, StreamableConvTranspose1d -from .lstm import StreamableLSTM - - -class SEANetResnetBlock(nn.Module): - """Residual block from SEANet model. - - Args: - dim (int): Dimension of the input/output. - kernel_sizes (list): List of kernel sizes for the convolutions. - dilations (list): List of dilations for the convolutions. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). 
- true_skip (bool): Whether to use true skip connection or a simple - (streamable) convolution as the skip connection. - """ - def __init__(self, dim: int, kernel_sizes: tp.List[int] = [3, 1], dilations: tp.List[int] = [1, 1], - activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, causal: bool = False, - pad_mode: str = 'reflect', compress: int = 2, true_skip: bool = True): - super().__init__() - assert len(kernel_sizes) == len(dilations), 'Number of kernel sizes should match number of dilations' - act = getattr(nn, activation) - hidden = dim // compress - block = [] - for i, (kernel_size, dilation) in enumerate(zip(kernel_sizes, dilations)): - in_chs = dim if i == 0 else hidden - out_chs = dim if i == len(kernel_sizes) - 1 else hidden - block += [ - act(**activation_params), - StreamableConv1d(in_chs, out_chs, kernel_size=kernel_size, dilation=dilation, - norm=norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode), - ] - self.block = nn.Sequential(*block) - self.shortcut: nn.Module - if true_skip: - self.shortcut = nn.Identity() - else: - self.shortcut = StreamableConv1d(dim, dim, kernel_size=1, norm=norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode) - - def forward(self, x): - return self.shortcut(x) + self.block(x) - - -class SEANetEncoder(nn.Module): - """SEANet encoder. - - Args: - channels (int): Audio channels. - dimension (int): Intermediate representation dimension. - n_filters (int): Base width for the model. - n_residual_layers (int): nb of residual layers. - ratios (Sequence[int]): kernel size and stride ratios. The encoder uses downsampling ratios instead of - upsampling ratios, hence it will use the ratios in the reverse order to the ones specified here - that must match the decoder order. We use the decoder order as some models may only employ the decoder. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - kernel_size (int): Kernel size for the initial convolution. - last_kernel_size (int): Kernel size for the initial convolution. - residual_kernel_size (int): Kernel size for the residual layers. - dilation_base (int): How much to increase the dilation with each layer. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - true_skip (bool): Whether to use true skip connection or a simple - (streamable) convolution as the skip connection in the residual network blocks. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - lstm (int): Number of LSTM layers at the end of the encoder. - disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm. - For the encoder, it corresponds to the N first blocks. 
- """ - def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3, - ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7, - last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False, - pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0, - disable_norm_outer_blocks: int = 0): - super().__init__() - self.channels = channels - self.dimension = dimension - self.n_filters = n_filters - self.ratios = list(reversed(ratios)) - del ratios - self.n_residual_layers = n_residual_layers - self.hop_length = np.prod(self.ratios) - self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks - self.disable_norm_outer_blocks = disable_norm_outer_blocks - assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \ - "Number of blocks for which to disable norm is invalid." \ - "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0." - - act = getattr(nn, activation) - mult = 1 - model: tp.List[nn.Module] = [ - StreamableConv1d(channels, mult * n_filters, kernel_size, - norm='none' if self.disable_norm_outer_blocks >= 1 else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - # Downsample to raw audio scale - for i, ratio in enumerate(self.ratios): - block_norm = 'none' if self.disable_norm_outer_blocks >= i + 2 else norm - # Add residual layers - for j in range(n_residual_layers): - model += [ - SEANetResnetBlock(mult * n_filters, kernel_sizes=[residual_kernel_size, 1], - dilations=[dilation_base ** j, 1], - norm=block_norm, norm_params=norm_params, - activation=activation, activation_params=activation_params, - causal=causal, pad_mode=pad_mode, compress=compress, true_skip=true_skip)] - - # Add downsampling layers - model += [ - act(**activation_params), - StreamableConv1d(mult * n_filters, mult * n_filters * 2, - kernel_size=ratio * 2, stride=ratio, - norm=block_norm, norm_kwargs=norm_params, - causal=causal, pad_mode=pad_mode), - ] - mult *= 2 - - if lstm: - model += [StreamableLSTM(mult * n_filters, num_layers=lstm)] - - model += [ - act(**activation_params), - StreamableConv1d(mult * n_filters, dimension, last_kernel_size, - norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - - self.model = nn.Sequential(*model) - - def forward(self, x): - return self.model(x) - - -class SEANetDecoder(nn.Module): - """SEANet decoder. - - Args: - channels (int): Audio channels. - dimension (int): Intermediate representation dimension. - n_filters (int): Base width for the model. - n_residual_layers (int): nb of residual layers. - ratios (Sequence[int]): kernel size and stride ratios. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - final_activation (str): Final activation function after all convolutions. - final_activation_params (dict): Parameters to provide to the activation function. - norm (str): Normalization method. - norm_params (dict): Parameters to provide to the underlying normalization used along with the convolution. - kernel_size (int): Kernel size for the initial convolution. - last_kernel_size (int): Kernel size for the initial convolution. 
- residual_kernel_size (int): Kernel size for the residual layers. - dilation_base (int): How much to increase the dilation with each layer. - causal (bool): Whether to use fully causal convolution. - pad_mode (str): Padding mode for the convolutions. - true_skip (bool): Whether to use true skip connection or a simple. - (streamable) convolution as the skip connection in the residual network blocks. - compress (int): Reduced dimensionality in residual branches (from Demucs v3). - lstm (int): Number of LSTM layers at the end of the encoder. - disable_norm_outer_blocks (int): Number of blocks for which we don't apply norm. - For the decoder, it corresponds to the N last blocks. - trim_right_ratio (float): Ratio for trimming at the right of the transposed convolution under the causal setup. - If equal to 1.0, it means that all the trimming is done at the right. - """ - def __init__(self, channels: int = 1, dimension: int = 128, n_filters: int = 32, n_residual_layers: int = 3, - ratios: tp.List[int] = [8, 5, 4, 2], activation: str = 'ELU', activation_params: dict = {'alpha': 1.0}, - final_activation: tp.Optional[str] = None, final_activation_params: tp.Optional[dict] = None, - norm: str = 'none', norm_params: tp.Dict[str, tp.Any] = {}, kernel_size: int = 7, - last_kernel_size: int = 7, residual_kernel_size: int = 3, dilation_base: int = 2, causal: bool = False, - pad_mode: str = 'reflect', true_skip: bool = True, compress: int = 2, lstm: int = 0, - disable_norm_outer_blocks: int = 0, trim_right_ratio: float = 1.0): - super().__init__() - self.dimension = dimension - self.channels = channels - self.n_filters = n_filters - self.ratios = ratios - del ratios - self.n_residual_layers = n_residual_layers - self.hop_length = np.prod(self.ratios) - self.n_blocks = len(self.ratios) + 2 # first and last conv + residual blocks - self.disable_norm_outer_blocks = disable_norm_outer_blocks - assert self.disable_norm_outer_blocks >= 0 and self.disable_norm_outer_blocks <= self.n_blocks, \ - "Number of blocks for which to disable norm is invalid." \ - "It should be lower or equal to the actual number of blocks in the network and greater or equal to 0." 
- - act = getattr(nn, activation) - mult = int(2 ** len(self.ratios)) - model: tp.List[nn.Module] = [ - StreamableConv1d(dimension, mult * n_filters, kernel_size, - norm='none' if self.disable_norm_outer_blocks == self.n_blocks else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - - if lstm: - model += [StreamableLSTM(mult * n_filters, num_layers=lstm)] - - # Upsample to raw audio scale - for i, ratio in enumerate(self.ratios): - block_norm = 'none' if self.disable_norm_outer_blocks >= self.n_blocks - (i + 1) else norm - # Add upsampling layers - model += [ - act(**activation_params), - StreamableConvTranspose1d(mult * n_filters, mult * n_filters // 2, - kernel_size=ratio * 2, stride=ratio, - norm=block_norm, norm_kwargs=norm_params, - causal=causal, trim_right_ratio=trim_right_ratio), - ] - # Add residual layers - for j in range(n_residual_layers): - model += [ - SEANetResnetBlock(mult * n_filters // 2, kernel_sizes=[residual_kernel_size, 1], - dilations=[dilation_base ** j, 1], - activation=activation, activation_params=activation_params, - norm=block_norm, norm_params=norm_params, causal=causal, - pad_mode=pad_mode, compress=compress, true_skip=true_skip)] - - mult //= 2 - - # Add final layers - model += [ - act(**activation_params), - StreamableConv1d(n_filters, channels, last_kernel_size, - norm='none' if self.disable_norm_outer_blocks >= 1 else norm, - norm_kwargs=norm_params, causal=causal, pad_mode=pad_mode) - ] - # Add optional final activation to decoder (eg. tanh) - if final_activation is not None: - final_act = getattr(nn, final_activation) - final_activation_params = final_activation_params or {} - model += [ - final_act(**final_activation_params) - ] - self.model = nn.Sequential(*model) - - def forward(self, z): - y = self.model(z) - return y diff --git a/spaces/jbilcke-hf/ai-clip-factory/src/pages/api/get-key.ts b/spaces/jbilcke-hf/ai-clip-factory/src/pages/api/get-key.ts deleted file mode 100644 index b28dcb01945e5a8b4d4efff898ec7666f198f5d6..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-clip-factory/src/pages/api/get-key.ts +++ /dev/null @@ -1,21 +0,0 @@ -import crypto from "node:crypto" - -import { NextApiRequest, NextApiResponse } from "next" - -async function handler(req: NextApiRequest, res: NextApiResponse) { - let ipAddress = req.headers["x-real-ip"] as string - - const forwardedFor = req.headers["x-forwarded-for"] as string - - if (!ipAddress && forwardedFor) { - ipAddress = forwardedFor?.split(",").at(0) ?? "Unknown" - } - - console.log("ipAddress:", ipAddress) - const hash = crypto.createHash('sha256') - hash.update(ipAddress) - const digest = hash.digest('hex') - res.status(200).json(digest) -} - -export default handler \ No newline at end of file diff --git a/spaces/jhlfrfufyfn/bel-tts/app.py b/spaces/jhlfrfufyfn/bel-tts/app.py deleted file mode 100644 index c096b219f831bc944372e8b08f28d29992c159dc..0000000000000000000000000000000000000000 --- a/spaces/jhlfrfufyfn/bel-tts/app.py +++ /dev/null @@ -1,87 +0,0 @@ -from TTS.utils.synthesizer import Synthesizer -from huggingface_hub import hf_hub_download -import gradio as gr -import tempfile -import os - -REPO_ID = "jhlfrfufyfn/bel-tts" - -my_title = "Беларускі тэкст-у-маўленне" -my_description = "Беларускамоўная мадэль для агучвання тэксту (травень 2023)." - -be_text = "Гепарды жывуць у адкрытых і прасторных месцах, дзе ёсць шмат здабычы." 
- -my_inputs = [ - gr.inputs.Textbox(lines=5, label="Input Text", default=be_text), -] - -my_outputs = gr.outputs.Audio(type="file", label="Output Audio") - -def belarusify_russian_text(text: str): - text = text.replace("и", "і") - text = text.replace("іу", "іў") - text = text.replace("оу", "оў") - text = text.replace("ау", "аў") - text = text.replace("ыу", "ыў") - text = text.replace("уу", "уў") - text = text.replace("юу", "юў") - text = text.replace("еу", "еў") - text = text.replace("ёу", "ёў") - text = text.replace("щ", "шч") - return text - -import requests -def tts(text: str): - print("Original text: ", text) - text = belarusify_russian_text(text) - print("Belarusified text: ", text) - # Sending a request to the fonemizer - headers = {'Content-Type': 'text/plain; charset=utf-8'} # Specify the charset as UTF-8 - - response = requests.post("http://fonemizer.nikuchin.fun/processText", - data=text.encode('utf-8'), # Encode the text as UTF-8 - headers=headers) - - if response.status_code != 200: - raise Exception(f"Request to fonemizer failed with status code {response.status_code}") - print(response.content) - print(response.headers.get('Content-Type')) - text = response.text - best_model_path = hf_hub_download(repo_id=REPO_ID, filename="model.pth") - config_path = hf_hub_download(repo_id=REPO_ID, filename="config.json") - vocoder_path = hf_hub_download(repo_id=REPO_ID, filename="vocoder.pth") - scale_stats_path = hf_hub_download(repo_id=REPO_ID, filename="scale_stats.npy") - vocoder_config_path = hf_hub_download(repo_id=REPO_ID, filename="vocoder_config.json") - - # init synthesizer - synthesizer = Synthesizer( - best_model_path, - config_path, - None, - None, - vocoder_path, - vocoder_config_path, - None, - None, - False - ) - - # create audio file - wavs = synthesizer.tts(text) - with tempfile.NamedTemporaryFile(suffix = ".wav", delete = False) as fp: - synthesizer.save_wav(wavs, fp) - return fp.name - -print("CWD IS ", os.getcwd()) -print("LIST IS", os.listdir()) -iface = gr.Interface( - fn=tts, - inputs=my_inputs, - outputs=my_outputs, - title=my_title, - description = my_description, - article = "", - examples = "", - allow_flagging=False -) -iface.launch() \ No newline at end of file diff --git a/spaces/jiangjiechen/Auction-Arena-Demo/README.md b/spaces/jiangjiechen/Auction-Arena-Demo/README.md deleted file mode 100644 index c9f4f60c684bf021ac6a26bf806863d09fb086eb..0000000000000000000000000000000000000000 --- a/spaces/jiangjiechen/Auction-Arena-Demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Auction Arena -emoji: ⚡ -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/KMAC256.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/KMAC256.py deleted file mode 100644 index 2be8e2f3d57aabf8cafbdc422cf6e74e44ae2df9..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/Hash/KMAC256.py +++ /dev/null @@ -1,74 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2021, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. 
Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -from Crypto.Util.py3compat import is_bytes - -from .KMAC128 import KMAC_Hash -from . import cSHAKE256 - - -def new(**kwargs): - """Create a new KMAC256 object. - - Args: - key (bytes/bytearray/memoryview): - The key to use to compute the MAC. - It must be at least 256 bits long (32 bytes). - data (bytes/bytearray/memoryview): - Optional. The very first chunk of the message to authenticate. - It is equivalent to an early call to :meth:`KMAC_Hash.update`. - mac_len (integer): - Optional. The size of the authentication tag, in bytes. - Default is 64. Minimum is 8. - custom (bytes/bytearray/memoryview): - Optional. A customization byte string (``S`` in SP 800-185). - - Returns: - A :class:`KMAC_Hash` hash object - """ - - key = kwargs.pop("key", None) - if not is_bytes(key): - raise TypeError("You must pass a key to KMAC256") - if len(key) < 32: - raise ValueError("The key must be at least 256 bits long (32 bytes)") - - data = kwargs.pop("data", None) - - mac_len = kwargs.pop("mac_len", 64) - if mac_len < 8: - raise ValueError("'mac_len' must be 8 bytes or more") - - custom = kwargs.pop("custom", b"") - - if kwargs: - raise TypeError("Unknown parameters: " + str(kwargs)) - - return KMAC_Hash(data, key, mac_len, custom, "20", cSHAKE256, 136) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/PublicKey/test_ECC_448.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/PublicKey/test_ECC_448.py deleted file mode 100644 index 178c3a90c3719aee12bd4568f0535b7c0440cfc5..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/PublicKey/test_ECC_448.py +++ /dev/null @@ -1,333 +0,0 @@ -# =================================================================== -# -# Copyright (c) 2022, Legrandin -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# 1. Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# 2. 
Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE -# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. -# =================================================================== - -import unittest -from binascii import unhexlify - -from Crypto.SelfTest.st_common import list_test_cases -from Crypto.SelfTest.loader import load_test_vectors - -from Crypto.PublicKey import ECC -from Crypto.PublicKey.ECC import EccPoint, _curves, EccKey - -from Crypto.Math.Numbers import Integer - -from Crypto.Hash import SHAKE128 - - -class TestEccPoint_Ed448(unittest.TestCase): - - Gxy = {"x": 0x4f1970c66bed0ded221d15a622bf36da9e146570470f1767ea6de324a3d3a46412ae1af72ab66511433b80e18b00938e2626a82bc70cc05e, - "y": 0x693f46716eb6bc248876203756c9c7624bea73736ca3984087789c1e05a0c2d73ad3ff1ce67c39c4fdbd132c4ed7c8ad9808795bf230fa14} - - G2xy = {"x": 0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa955555555555555555555555555555555555555555555555555555555, - "y": 0xae05e9634ad7048db359d6205086c2b0036ed7a035884dd7b7e36d728ad8c4b80d6565833a2a3098bbbcb2bed1cda06bdaeafbcdea9386ed} - - G3xy = {"x": 0x865886b9108af6455bd64316cb6943332241b8b8cda82c7e2ba077a4a3fcfe8daa9cbf7f6271fd6e862b769465da8575728173286ff2f8f, - "y": 0xe005a8dbd5125cf706cbda7ad43aa6449a4a8d952356c3b9fce43c82ec4e1d58bb3a331bdb6767f0bffa9a68fed02dafb822ac13588ed6fc} - - pointG = EccPoint(Gxy['x'], Gxy['y'], curve="Ed448") - pointG2 = EccPoint(G2xy['x'], G2xy['y'], curve="Ed448") - pointG3 = EccPoint(G3xy['x'], G3xy['y'], curve="Ed448") - - def test_init_xy(self): - EccPoint(self.Gxy['x'], self.Gxy['y'], curve="Ed448") - - # Neutral point - pai = EccPoint(0, 1, curve="Ed448") - self.assertEqual(pai.x, 0) - self.assertEqual(pai.y, 1) - self.assertEqual(pai.xy, (0, 1)) - - # G - bp = self.pointG.copy() - self.assertEqual(bp.x, 0x4f1970c66bed0ded221d15a622bf36da9e146570470f1767ea6de324a3d3a46412ae1af72ab66511433b80e18b00938e2626a82bc70cc05e) - self.assertEqual(bp.y, 0x693f46716eb6bc248876203756c9c7624bea73736ca3984087789c1e05a0c2d73ad3ff1ce67c39c4fdbd132c4ed7c8ad9808795bf230fa14) - self.assertEqual(bp.xy, (bp.x, bp.y)) - - # 2G - bp2 = self.pointG2.copy() - self.assertEqual(bp2.x, 0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa955555555555555555555555555555555555555555555555555555555) - self.assertEqual(bp2.y, 0xae05e9634ad7048db359d6205086c2b0036ed7a035884dd7b7e36d728ad8c4b80d6565833a2a3098bbbcb2bed1cda06bdaeafbcdea9386ed) - self.assertEqual(bp2.xy, (bp2.x, bp2.y)) - - # 5G - EccPoint(x=0x7a9f9335a48dcb0e2ba7601eedb50def80cbcf728562ada756d761e8958812808bc0d57a920c3c96f07b2d8cefc6f950d0a99d1092030034, - 
y=0xadfd751a2517edd3b9109ce4fd580ade260ca1823ab18fced86551f7b698017127d7a4ee59d2b33c58405512881f225443b4731472f435eb, - curve="Ed448") - - # Catch if point is not on the curve - self.assertRaises(ValueError, EccPoint, 34, 35, curve="Ed448") - - def test_set(self): - pointW = EccPoint(0, 1, curve="Ed448") - pointW.set(self.pointG) - self.assertEqual(pointW.x, self.pointG.x) - self.assertEqual(pointW.y, self.pointG.y) - - def test_copy(self): - pointW = self.pointG.copy() - self.assertEqual(pointW.x, self.pointG.x) - self.assertEqual(pointW.y, self.pointG.y) - - def test_equal(self): - pointH = self.pointG.copy() - pointI = self.pointG2.copy() - self.assertEqual(self.pointG, pointH) - self.assertNotEqual(self.pointG, pointI) - - def test_pai(self): - pai = EccPoint(0, 1, curve="Ed448") - self.assertTrue(pai.is_point_at_infinity()) - self.assertEqual(pai, pai.point_at_infinity()) - - def test_negate(self): - negG = -self.pointG - sum = self.pointG + negG - self.assertTrue(sum.is_point_at_infinity()) - - def test_addition(self): - self.assertEqual(self.pointG + self.pointG2, self.pointG3) - self.assertEqual(self.pointG2 + self.pointG, self.pointG3) - self.assertEqual(self.pointG2 + self.pointG.point_at_infinity(), self.pointG2) - self.assertEqual(self.pointG.point_at_infinity() + self.pointG2, self.pointG2) - - G5 = self.pointG2 + self.pointG3 - self.assertEqual(G5.x, 0x7a9f9335a48dcb0e2ba7601eedb50def80cbcf728562ada756d761e8958812808bc0d57a920c3c96f07b2d8cefc6f950d0a99d1092030034) - self.assertEqual(G5.y, 0xadfd751a2517edd3b9109ce4fd580ade260ca1823ab18fced86551f7b698017127d7a4ee59d2b33c58405512881f225443b4731472f435eb) - - def test_inplace_addition(self): - pointH = self.pointG.copy() - pointH += self.pointG - self.assertEqual(pointH, self.pointG2) - pointH += self.pointG - self.assertEqual(pointH, self.pointG3) - pointH += self.pointG.point_at_infinity() - self.assertEqual(pointH, self.pointG3) - - def test_doubling(self): - pointH = self.pointG.copy() - pointH.double() - self.assertEqual(pointH.x, self.pointG2.x) - self.assertEqual(pointH.y, self.pointG2.y) - - # 2*0 - pai = self.pointG.point_at_infinity() - pointR = pai.copy() - pointR.double() - self.assertEqual(pointR, pai) - - def test_scalar_multiply(self): - d = 0 - pointH = d * self.pointG - self.assertEqual(pointH.x, 0) - self.assertEqual(pointH.y, 1) - - d = 1 - pointH = d * self.pointG - self.assertEqual(pointH.x, self.pointG.x) - self.assertEqual(pointH.y, self.pointG.y) - - d = 2 - pointH = d * self.pointG - self.assertEqual(pointH.x, self.pointG2.x) - self.assertEqual(pointH.y, self.pointG2.y) - - d = 3 - pointH = d * self.pointG - self.assertEqual(pointH.x, self.pointG3.x) - self.assertEqual(pointH.y, self.pointG3.y) - - d = 4 - pointH = d * self.pointG - self.assertEqual(pointH.x, 0x49dcbc5c6c0cce2c1419a17226f929ea255a09cf4e0891c693fda4be70c74cc301b7bdf1515dd8ba21aee1798949e120e2ce42ac48ba7f30) - self.assertEqual(pointH.y, 0xd49077e4accde527164b33a5de021b979cb7c02f0457d845c90dc3227b8a5bc1c0d8f97ea1ca9472b5d444285d0d4f5b32e236f86de51839) - - d = 5 - pointH = d * self.pointG - self.assertEqual(pointH.x, 0x7a9f9335a48dcb0e2ba7601eedb50def80cbcf728562ada756d761e8958812808bc0d57a920c3c96f07b2d8cefc6f950d0a99d1092030034) - self.assertEqual(pointH.y, 0xadfd751a2517edd3b9109ce4fd580ade260ca1823ab18fced86551f7b698017127d7a4ee59d2b33c58405512881f225443b4731472f435eb) - - d = 10 - pointH = d * self.pointG - self.assertEqual(pointH.x, 
0x77486f9d19f6411cdd35d30d1c3235f71936452c787e5c034134d3e8172278aca61622bc805761ce3dab65118a0122d73b403165d0ed303d) - self.assertEqual(pointH.y, 0x4d2fea0b026be11024f1f0fe7e94e618e8ac17381ada1d1bf7ee293a68ff5d0bf93c1997dc1aabdc0c7e6381428d85b6b1954a89e4cddf67) - - d = 20 - pointH = d * self.pointG - self.assertEqual(pointH.x, 0x3c236422354600fe6763defcc1503737e4ed89e262d0de3ec1e552020f2a56fe3b9e1e012d021072598c3c2821e18268bb8fb8339c0d1216) - self.assertEqual(pointH.y, 0xb555b9721f630ccb05fc466de4c74d3d2781e69eca88e1b040844f04cab39fd946f91c688fa42402bb38fb9c3e61231017020b219b4396e1) - - d = 255 - pointH = d * self.pointG - self.assertEqual(pointH.x, 0xbeb7f8388b05cd9c1aa2e3c0dcf31e2b563659361826225390e7748654f627d5c36cbe627e9019936b56d15d4dad7c337c09bac64ff4197f) - self.assertEqual(pointH.y, 0x1e37312b2dd4e9440c43c6e7725fc4fa3d11e582d4863f1d018e28f50c0efdb1f53f9b01ada7c87fa162b1f0d72401015d57613d25f1ad53) - - d = 256 - pointH = d * self.pointG - self.assertEqual(pointH.x, 0xf19c34feb56730e3e2be761ac0a2a2b24853b281dda019fc35a5ab58e3696beb39609ae756b0d20fb7ccf0d79aaf5f3bca2e4fdb25bfac1c) - self.assertEqual(pointH.y, 0x3beb69cc9111bffcaddc61d363ce6fe5dd44da4aadce78f52e92e985d5442344ced72c4611ed0daac9f4f5661eab73d7a12d25ce8a30241e) - - def test_sizes(self): - self.assertEqual(self.pointG.size_in_bits(), 448) - self.assertEqual(self.pointG.size_in_bytes(), 56) - - -class TestEccKey_Ed448(unittest.TestCase): - - def test_private_key(self): - seed = unhexlify("4adf5d37ac6785e83e99a924f92676d366a78690af59c92b6bdf14f9cdbcf26fdad478109607583d633b60078d61d51d81b7509c5433b0d4c9") - Px = 0x72a01eea003a35f9ac44231dc4aae2a382f351d80bf32508175b0855edcf389aa2bbf308dd961ce361a6e7c2091bc78957f6ebcf3002a617 - Py = 0x9e0d08d84586e9aeefecacb41d049b831f1a3ee0c3eada63e34557b30702b50ab59fb372feff7c30b8cbb7dd51afbe88444ec56238722ec1 - - key = EccKey(curve="Ed448", seed=seed) - self.assertEqual(key.seed, seed) - self.assertEqual(key.d, 0xb07cf179604f83433186e5178760c759c15125ee54ff6f8dcde46e872b709ac82ed0bd0a4e036d774034dcb18a9fb11894657a1485895f80) - self.assertTrue(key.has_private()) - self.assertEqual(key.pointQ.x, Px) - self.assertEqual(key.pointQ.y, Py) - - point = EccPoint(Px, Py, "ed448") - key = EccKey(curve="Ed448", seed=seed, point=point) - self.assertEqual(key.d, 0xb07cf179604f83433186e5178760c759c15125ee54ff6f8dcde46e872b709ac82ed0bd0a4e036d774034dcb18a9fb11894657a1485895f80) - self.assertTrue(key.has_private()) - self.assertEqual(key.pointQ, point) - - # Other names - key = EccKey(curve="ed448", seed=seed) - - # Must not accept d parameter - self.assertRaises(ValueError, EccKey, curve="ed448", d=1) - - def test_public_key(self): - point = EccPoint(_curves['ed448'].Gx, _curves['ed448'].Gy, curve='ed448') - key = EccKey(curve="ed448", point=point) - self.assertFalse(key.has_private()) - self.assertEqual(key.pointQ, point) - - def test_public_key_derived(self): - priv_key = EccKey(curve="ed448", seed=b'H'*57) - pub_key = priv_key.public_key() - self.assertFalse(pub_key.has_private()) - self.assertEqual(priv_key.pointQ, pub_key.pointQ) - - def test_invalid_seed(self): - self.assertRaises(ValueError, lambda: EccKey(curve="ed448", seed=b'H' * 56)) - - def test_equality(self): - private_key = ECC.construct(seed=b'H'*57, curve="Ed448") - private_key2 = ECC.construct(seed=b'H'*57, curve="ed448") - private_key3 = ECC.construct(seed=b'C'*57, curve="Ed448") - - public_key = private_key.public_key() - public_key2 = private_key2.public_key() - public_key3 = private_key3.public_key() - - self.assertEqual(private_key, 
private_key2) - self.assertNotEqual(private_key, private_key3) - - self.assertEqual(public_key, public_key2) - self.assertNotEqual(public_key, public_key3) - - self.assertNotEqual(public_key, private_key) - - def test_name_consistency(self): - key = ECC.generate(curve='ed448') - self.assertIn("curve='Ed448'", repr(key)) - self.assertEqual(key.curve, 'Ed448') - self.assertEqual(key.public_key().curve, 'Ed448') - - -class TestEccModule_Ed448(unittest.TestCase): - - def test_generate(self): - key = ECC.generate(curve="Ed448") - self.assertTrue(key.has_private()) - point = EccPoint(_curves['Ed448'].Gx, _curves['Ed448'].Gy, curve="Ed448") * key.d - self.assertEqual(key.pointQ, point) - - # Always random - key2 = ECC.generate(curve="Ed448") - self.assertNotEqual(key, key2) - - # Other names - ECC.generate(curve="Ed448") - - # Random source - key1 = ECC.generate(curve="Ed448", randfunc=SHAKE128.new().read) - key2 = ECC.generate(curve="Ed448", randfunc=SHAKE128.new().read) - self.assertEqual(key1, key2) - - def test_construct(self): - seed = unhexlify("4adf5d37ac6785e83e99a924f92676d366a78690af59c92b6bdf14f9cdbcf26fdad478109607583d633b60078d61d51d81b7509c5433b0d4c9") - Px = 0x72a01eea003a35f9ac44231dc4aae2a382f351d80bf32508175b0855edcf389aa2bbf308dd961ce361a6e7c2091bc78957f6ebcf3002a617 - Py = 0x9e0d08d84586e9aeefecacb41d049b831f1a3ee0c3eada63e34557b30702b50ab59fb372feff7c30b8cbb7dd51afbe88444ec56238722ec1 - d = 0xb07cf179604f83433186e5178760c759c15125ee54ff6f8dcde46e872b709ac82ed0bd0a4e036d774034dcb18a9fb11894657a1485895f80 - point = EccPoint(Px, Py, curve="Ed448") - - # Private key only - key = ECC.construct(curve="Ed448", seed=seed) - self.assertEqual(key.pointQ, point) - self.assertTrue(key.has_private()) - - # Public key only - key = ECC.construct(curve="Ed448", point_x=Px, point_y=Py) - self.assertEqual(key.pointQ, point) - self.assertFalse(key.has_private()) - - # Private and public key - key = ECC.construct(curve="Ed448", seed=seed, point_x=Px, point_y=Py) - self.assertEqual(key.pointQ, point) - self.assertTrue(key.has_private()) - - # Other names - key = ECC.construct(curve="ed448", seed=seed) - - def test_negative_construct(self): - coord = dict(point_x=10, point_y=4) - coordG = dict(point_x=_curves['ed448'].Gx, point_y=_curves['ed448'].Gy) - - self.assertRaises(ValueError, ECC.construct, curve="Ed448", **coord) - self.assertRaises(ValueError, ECC.construct, curve="Ed448", d=2, **coordG) - self.assertRaises(ValueError, ECC.construct, curve="Ed448", seed=b'H'*58) - - -def get_tests(config={}): - tests = [] - tests += list_test_cases(TestEccPoint_Ed448) - tests += list_test_cases(TestEccKey_Ed448) - tests += list_test_cases(TestEccModule_Ed448) - return tests - - -if __name__ == '__main__': - def suite(): - return unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/GribStubImagePlugin.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/GribStubImagePlugin.py deleted file mode 100644 index 8a799f19caac706a880218af257f40e9a386b489..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/GribStubImagePlugin.py +++ /dev/null @@ -1,73 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# GRIB stub adapter -# -# Copyright (c) 1996-2003 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -from . 
import Image, ImageFile - -_handler = None - - -def register_handler(handler): - """ - Install application-specific GRIB image handler. - - :param handler: Handler object. - """ - global _handler - _handler = handler - - -# -------------------------------------------------------------------- -# Image adapter - - -def _accept(prefix): - return prefix[:4] == b"GRIB" and prefix[7] == 1 - - -class GribStubImageFile(ImageFile.StubImageFile): - format = "GRIB" - format_description = "GRIB" - - def _open(self): - offset = self.fp.tell() - - if not _accept(self.fp.read(8)): - msg = "Not a GRIB file" - raise SyntaxError(msg) - - self.fp.seek(offset) - - # make something up - self.mode = "F" - self._size = 1, 1 - - loader = self._load() - if loader: - loader.open(self) - - def _load(self): - return _handler - - -def _save(im, fp, filename): - if _handler is None or not hasattr(_handler, "save"): - msg = "GRIB save handler not installed" - raise OSError(msg) - _handler.save(im, fp, filename) - - -# -------------------------------------------------------------------- -# Registry - -Image.register_open(GribStubImageFile.format, GribStubImageFile, _accept) -Image.register_save(GribStubImageFile.format, _save) - -Image.register_extension(GribStubImageFile.format, ".grib") diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/attrs/setters.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/attrs/setters.py deleted file mode 100644 index 9b50770804e4187f0c935ef17bddf2d9a61120ff..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/attrs/setters.py +++ /dev/null @@ -1,3 +0,0 @@ -# SPDX-License-Identifier: MIT - -from attr.setters import * # noqa diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/RP.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/RP.py deleted file mode 100644 index 9c64c6e2283766dd37fcd3f344adae9a524bae28..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/RP.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc. -# -# Permission to use, copy, modify, and distribute this software and its -# documentation for any purpose with or without fee is hereby granted, -# provided that the above copyright notice and this permission notice -# appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES -# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR -# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT -# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
- -import dns.exception -import dns.immutable -import dns.name -import dns.rdata - - -@dns.immutable.immutable -class RP(dns.rdata.Rdata): - - """RP record""" - - # see: RFC 1183 - - __slots__ = ["mbox", "txt"] - - def __init__(self, rdclass, rdtype, mbox, txt): - super().__init__(rdclass, rdtype) - self.mbox = self._as_name(mbox) - self.txt = self._as_name(txt) - - def to_text(self, origin=None, relativize=True, **kw): - mbox = self.mbox.choose_relativity(origin, relativize) - txt = self.txt.choose_relativity(origin, relativize) - return "{} {}".format(str(mbox), str(txt)) - - @classmethod - def from_text( - cls, rdclass, rdtype, tok, origin=None, relativize=True, relativize_to=None - ): - mbox = tok.get_name(origin, relativize, relativize_to) - txt = tok.get_name(origin, relativize, relativize_to) - return cls(rdclass, rdtype, mbox, txt) - - def _to_wire(self, file, compress=None, origin=None, canonicalize=False): - self.mbox.to_wire(file, None, origin, canonicalize) - self.txt.to_wire(file, None, origin, canonicalize) - - @classmethod - def from_wire_parser(cls, rdclass, rdtype, parser, origin=None): - mbox = parser.get_name(origin) - txt = parser.get_name(origin) - return cls(rdclass, rdtype, mbox, txt) diff --git a/spaces/johko/capdec-image-captioning/model.py b/spaces/johko/capdec-image-captioning/model.py deleted file mode 100644 index eccf8988f6038a691d0b9fc8b8414d6bbbac70bd..0000000000000000000000000000000000000000 --- a/spaces/johko/capdec-image-captioning/model.py +++ /dev/null @@ -1,199 +0,0 @@ -from torch import nn -import torch.nn.functional as nnf -from transformers import GPT2Tokenizer, GPT2LMHeadModel -import torch -from typing import Tuple, List, Union, Optional -import numpy as np - - -N = type(None) -V = np.array -ARRAY = np.ndarray -ARRAYS = Union[Tuple[ARRAY, ...], List[ARRAY]] -VS = Union[Tuple[V, ...], List[V]] -VN = Union[V, N] -VNS = Union[VS, N] -T = torch.Tensor -TS = Union[Tuple[T, ...], List[T]] -TN = Optional[T] -TNS = Union[Tuple[TN, ...], List[TN]] -TSN = Optional[TS] -TA = Union[T, ARRAY] - - -class ClipCaptionModel(nn.Module): - - def get_dummy_token(self, batch_size: int, device: torch.device) -> torch.Tensor: - return torch.zeros(batch_size, self.prefix_length, dtype=torch.int64, device=device) - - def forward(self, tokens: torch.Tensor, prefix: torch.Tensor, mask: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None): - embedding_text = self.gpt.transformer.wte(tokens) - prefix_projections = self.clip_project(prefix).view(-1, self.prefix_length, self.gpt_embedding_size) - embedding_cat = torch.cat((prefix_projections, embedding_text), dim=1) - if labels is not None: - dummy_token = self.get_dummy_token(tokens.shape[0], tokens.device) - labels = torch.cat((dummy_token, tokens), dim=1) - out = self.gpt(inputs_embeds=embedding_cat, labels=labels, attention_mask=mask) - return out - - def __init__(self): - super(ClipCaptionModel, self).__init__() - self.prefix_length = 40 - self.gpt = GPT2LMHeadModel.from_pretrained('gpt2') - self.gpt_embedding_size = self.gpt.transformer.wte.weight.shape[1] - self.clip_project = TransformerMapper(640, self.gpt_embedding_size, 40, - 40, 8) - - - -class MLP(nn.Module): - - def forward(self, x: T) -> T: - return self.model(x) - - def __init__(self, sizes: Tuple[int, ...], bias=True, act=nn.Tanh): - super(MLP, self).__init__() - layers = [] - for i in range(len(sizes) -1): - layers.append(nn.Linear(sizes[i], sizes[i + 1], bias=bias)) - if i < len(sizes) - 2: - layers.append(act()) - self.model = 
nn.Sequential(*layers) - - -class ClipCaptionPrefix(ClipCaptionModel): - - def parameters(self, recurse: bool = True): - return self.clip_project.parameters() - - def train(self, mode: bool = True): - super(ClipCaptionPrefix, self).train(mode) - self.gpt.eval() - return self - - -class MlpTransformer(nn.Module): - def __init__(self, in_dim, h_dim, out_d: Optional[int] = None, act=nnf.relu, dropout=0.): - super().__init__() - out_d = out_d if out_d is not None else in_dim - self.fc1 = nn.Linear(in_dim, h_dim) - self.act = act - self.fc2 = nn.Linear(h_dim, out_d) - self.dropout = nn.Dropout(dropout) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.dropout(x) - x = self.fc2(x) - x = self.dropout(x) - return x - - -class MultiHeadAttention(nn.Module): - - def __init__(self, dim_self, dim_ref, num_heads, bias=True, dropout=0.): - super().__init__() - self.num_heads = num_heads - head_dim = dim_self // num_heads - self.scale = head_dim ** -0.5 - self.to_queries = nn.Linear(dim_self, dim_self, bias=bias) - self.to_keys_values = nn.Linear(dim_ref, dim_self * 2, bias=bias) - self.project = nn.Linear(dim_self, dim_self) - self.dropout = nn.Dropout(dropout) - - def forward(self, x, y=None, mask=None): - y = y if y is not None else x - b, n, c = x.shape - _, m, d = y.shape - # b n h dh - queries = self.to_queries(x).reshape(b, n, self.num_heads, c // self.num_heads) - # b m 2 h dh - keys_values = self.to_keys_values(y).reshape(b, m, 2, self.num_heads, c // self.num_heads) - keys, values = keys_values[:, :, 0], keys_values[:, :, 1] - attention = torch.einsum('bnhd,bmhd->bnmh', queries, keys) * self.scale - if mask is not None: - if mask.dim() == 2: - mask = mask.unsqueeze(1) - attention = attention.masked_fill(mask.unsqueeze(3), float("-inf")) - attention = attention.softmax(dim=2) - out = torch.einsum('bnmh,bmhd->bnhd', attention, values).reshape(b, n, c) - out = self.project(out) - return out, attention - - -class TransformerLayer(nn.Module): - - def forward_with_attention(self, x, y=None, mask=None): - x_, attention = self.attn(self.norm1(x), y, mask) - x = x + x_ - x = x + self.mlp(self.norm2(x)) - return x, attention - - def forward(self, x, y=None, mask=None): - x = x + self.attn(self.norm1(x), y, mask)[0] - x = x + self.mlp(self.norm2(x)) - return x - - def __init__(self, dim_self, dim_ref, num_heads, mlp_ratio=4., bias=False, dropout=0., act=nnf.relu, - norm_layer: nn.Module = nn.LayerNorm): - super().__init__() - self.norm1 = norm_layer(dim_self) - self.attn = MultiHeadAttention(dim_self, dim_ref, num_heads, bias=bias, dropout=dropout) - self.norm2 = norm_layer(dim_self) - self.mlp = MlpTransformer(dim_self, int(dim_self * mlp_ratio), act=act, dropout=dropout) - - -class Transformer(nn.Module): - - def forward_with_attention(self, x, y=None, mask=None): - attentions = [] - for layer in self.layers: - x, att = layer.forward_with_attention(x, y, mask) - attentions.append(att) - return x, attentions - - def forward(self, x, y=None, mask=None): - for i, layer in enumerate(self.layers): - if i % 2 == 0 and self.enc_dec: # cross - x = layer(x, y) - elif self.enc_dec: # self - x = layer(x, x, mask) - else: # self or cross - x = layer(x, y, mask) - return x - - def __init__(self, dim_self: int, num_heads: int, num_layers: int, dim_ref: Optional[int] = None, - mlp_ratio: float = 2., act=nnf.relu, norm_layer: nn.Module = nn.LayerNorm, enc_dec: bool = False): - super(Transformer, self).__init__() - dim_ref = dim_ref if dim_ref is not None else dim_self - self.enc_dec = enc_dec - 
if enc_dec: - num_layers = num_layers * 2 - layers = [] - for i in range(num_layers): - if i % 2 == 0 and enc_dec: # cross - layers.append(TransformerLayer(dim_self, dim_ref, num_heads, mlp_ratio, act=act, norm_layer=norm_layer)) - elif enc_dec: # self - layers.append(TransformerLayer(dim_self, dim_self, num_heads, mlp_ratio, act=act, norm_layer=norm_layer)) - else: # self or cross - layers.append(TransformerLayer(dim_self, dim_ref, num_heads, mlp_ratio, act=act, norm_layer=norm_layer)) - self.layers = nn.ModuleList(layers) - - -class TransformerMapper(nn.Module): - - def forward(self, x): - x = self.linear(x).view(x.shape[0], self.clip_length, -1) - prefix = self.prefix_const.unsqueeze(0).expand(x.shape[0], *self.prefix_const.shape) - prefix = torch.cat((x, prefix), dim=1) - out = self.transformer(prefix)[:, self.clip_length:] - return out - - def __init__(self, dim_clip: int, dim_embedding: int, prefix_length: int, clip_length: int, num_layers: int = 8): - super(TransformerMapper, self).__init__() - self.clip_length = clip_length - self.transformer = Transformer(dim_embedding, 8, num_layers) - self.linear = nn.Linear(dim_clip, clip_length * dim_embedding) - self.prefix_const = nn.Parameter(torch.randn(prefix_length, dim_embedding), requires_grad=True) \ No newline at end of file diff --git a/spaces/jw2yang/unicl-img-recog-demo/model/image_encoder/focalnet.py b/spaces/jw2yang/unicl-img-recog-demo/model/image_encoder/focalnet.py deleted file mode 100644 index c0b533a7d28ff6b7c28b9905679c693f75a713f3..0000000000000000000000000000000000000000 --- a/spaces/jw2yang/unicl-img-recog-demo/model/image_encoder/focalnet.py +++ /dev/null @@ -1,649 +0,0 @@ -# -------------------------------------------------------- -# FocalNets -- Focal Modulation Networks -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] -# Written by Jianwei Yang (jianwyan@microsoft.com) -# -------------------------------------------------------- - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ -from timm.models.registry import register_model - -from torchvision import transforms -from timm.data.constants import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD -from timm.data import create_transform -from timm.data.transforms import _pil_interp - -class Mlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - -class FocalModulation(nn.Module): - def __init__(self, dim, focal_window, focal_level, focal_factor=2, bias=True, proj_drop=0.): - super().__init__() - - self.dim = dim - self.focal_window = focal_window - self.focal_level = focal_level - self.focal_factor = focal_factor - - self.f = nn.Linear(dim, 2*dim + (self.focal_level+1), bias=bias) - self.h = nn.Conv2d(dim, dim, kernel_size=1, stride=1, bias=bias) - - self.act = nn.GELU() - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - self.focal_layers = nn.ModuleList() - - self.kernel_sizes = [] - for k in 
range(self.focal_level): - kernel_size = self.focal_factor*k + self.focal_window - self.focal_layers.append( - nn.Sequential( - nn.Conv2d(dim, dim, kernel_size=kernel_size, stride=1, - groups=dim, padding=kernel_size//2, bias=False), - nn.GELU(), - ) - ) - self.kernel_sizes.append(kernel_size) - def forward(self, x): - """ - Args: - x: input features with shape of (B, H, W, C) - """ - C = x.shape[-1] - - # pre linear projection - x = self.f(x).permute(0, 3, 1, 2).contiguous() - q, ctx, self.gates = torch.split(x, (C, C, self.focal_level+1), 1) - - # context aggreation - ctx_all = 0 - for l in range(self.focal_level): - ctx = self.focal_layers[l](ctx) - ctx_all = ctx_all + ctx*self.gates[:, l:l+1] - ctx_global = self.act(ctx.mean(2, keepdim=True).mean(3, keepdim=True)) - ctx_all = ctx_all + ctx_global*self.gates[:,self.focal_level:] - - # focal modulation - self.modulator = self.h(ctx_all) - x_out = q*self.modulator - x_out = x_out.permute(0, 2, 3, 1).contiguous() - - # post linear porjection - x_out = self.proj(x_out) - x_out = self.proj_drop(x_out) - return x_out - - def extra_repr(self) -> str: - return f'dim={self.dim}' - - def flops(self, N): - # calculate flops for 1 window with token length of N - flops = 0 - - flops += N * self.dim * (self.dim * 2 + (self.focal_level+1)) - - # focal convolution - for k in range(self.focal_level): - flops += N * (self.kernel_sizes[k]**2+1) * self.dim - - # global gating - flops += N * 1 * self.dim - - # self.linear - flops += N * self.dim * (self.dim + 1) - - # x = self.proj(x) - flops += N * self.dim * self.dim - return flops - -class FocalNetBlock(nn.Module): - r""" Focal Modulation Network Block. - - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resulotion. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - drop (float, optional): Dropout rate. Default: 0.0 - drop_path (float, optional): Stochastic depth rate. Default: 0.0 - act_layer (nn.Module, optional): Activation layer. Default: nn.GELU - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - focal_level (int): Number of focal levels. - focal_window (int): Focal window size at first focal level - use_layerscale (bool): Whether use layerscale - layerscale_value (float): Initial layerscale value - use_postln (bool): Whether use layernorm after modulation - """ - - def __init__(self, dim, input_resolution, mlp_ratio=4., drop=0., drop_path=0., - act_layer=nn.GELU, norm_layer=nn.LayerNorm, - focal_level=1, focal_window=3, - use_layerscale=False, layerscale_value=1e-4, - use_postln=False): - super().__init__() - self.dim = dim - self.input_resolution = input_resolution - self.mlp_ratio = mlp_ratio - - self.focal_window = focal_window - self.focal_level = focal_level - self.use_postln = use_postln - - self.norm1 = norm_layer(dim) - self.modulation = FocalModulation(dim, proj_drop=drop, focal_window=focal_window, focal_level=self.focal_level) - - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - self.alpha = 3.0 if self.use_postln else 1.0 - - self.gamma_1 = 1.0 - self.gamma_2 = 1.0 - if use_layerscale: - self.gamma_1 = nn.Parameter(layerscale_value * torch.ones((dim)), requires_grad=True) - self.gamma_2 = nn.Parameter(layerscale_value * torch.ones((dim)), requires_grad=True) - - self.H = None - self.W = None - - def forward(self, x): - H, W = self.H, self.W - B, L, C = x.shape - shortcut = x - - # Focal Modulation - if not self.use_postln: - x = self.norm1(x) - x = x.view(B, H, W, C) - x = self.modulation(x).view(B, H * W, C) - - # FFN - x = shortcut*self.alpha + self.drop_path(self.gamma_1 * x) - if self.use_postln: - x = self.norm1(x) - - if not self.use_postln: - x = x + self.drop_path(self.gamma_2 * self.mlp(self.norm2(x))) - else: - x = x*self.alpha + self.drop_path(self.gamma_2 * self.mlp(x)) - x = self.norm2(x) - - return x - - def extra_repr(self) -> str: - return f"dim={self.dim}, input_resolution={self.input_resolution}, " \ - f"mlp_ratio={self.mlp_ratio}" - - def flops(self): - flops = 0 - H, W = self.input_resolution - # norm1 - flops += self.dim * H * W - - # W-MSA/SW-MSA - flops += self.modulation.flops(H*W) - - # mlp - flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio - # norm2 - flops += self.dim * H * W - return flops - -class BasicLayer(nn.Module): - """ A basic Focal Transformer layer for one stage. - - Args: - dim (int): Number of input channels. - input_resolution (tuple[int]): Input resolution. - depth (int): Number of blocks. - window_size (int): Local window size. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. - qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True - qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set. - drop (float, optional): Dropout rate. Default: 0.0 - drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0 - norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm - downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False. 
- focal_level (int): Number of focal levels - focal_window (int): Focal window size at first focal level - use_layerscale (bool): Whether use layerscale - layerscale_value (float): Initial layerscale value - use_postln (bool): Whether use layernorm after modulation - """ - - def __init__(self, dim, out_dim, input_resolution, depth, - mlp_ratio=4., drop=0., drop_path=0., norm_layer=nn.LayerNorm, - downsample=None, use_checkpoint=False, - focal_level=1, focal_window=1, - use_conv_embed=False, - use_layerscale=False, layerscale_value=1e-4, use_postln=False): - - super().__init__() - self.dim = dim - self.input_resolution = input_resolution - self.depth = depth - self.use_checkpoint = use_checkpoint - - # build blocks - self.blocks = nn.ModuleList([ - FocalNetBlock( - dim=dim, - input_resolution=input_resolution, - mlp_ratio=mlp_ratio, - drop=drop, - drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path, - norm_layer=norm_layer, - focal_level=focal_level, - focal_window=focal_window, - use_layerscale=use_layerscale, - layerscale_value=layerscale_value, - use_postln=use_postln, - ) - for i in range(depth)]) - - if downsample is not None: - self.downsample = downsample( - img_size=input_resolution, - patch_size=2, - in_chans=dim, - embed_dim=out_dim, - use_conv_embed=use_conv_embed, - norm_layer=norm_layer, - is_stem=False - ) - else: - self.downsample = None - - def forward(self, x, H, W): - for blk in self.blocks: - blk.H, blk.W = H, W - if self.use_checkpoint: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - - if self.downsample is not None: - x = x.transpose(1, 2).reshape(x.shape[0], -1, H, W) - x, Ho, Wo = self.downsample(x) - else: - Ho, Wo = H, W - return x, Ho, Wo - - def extra_repr(self) -> str: - return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}" - - def flops(self): - flops = 0 - for blk in self.blocks: - flops += blk.flops() - if self.downsample is not None: - flops += self.downsample.flops() - return flops - -class PatchEmbed(nn.Module): - r""" Image to Patch Embedding - - Args: - img_size (int): Image size. Default: 224. - patch_size (int): Patch token size. Default: 4. - in_chans (int): Number of input image channels. Default: 3. - embed_dim (int): Number of linear projection output channels. Default: 96. - norm_layer (nn.Module, optional): Normalization layer. 
Default: None - """ - - def __init__(self, img_size=(224, 224), patch_size=4, in_chans=3, embed_dim=96, use_conv_embed=False, norm_layer=None, is_stem=False): - super().__init__() - patch_size = to_2tuple(patch_size) - patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]] - self.img_size = img_size - self.patch_size = patch_size - self.patches_resolution = patches_resolution - self.num_patches = patches_resolution[0] * patches_resolution[1] - - self.in_chans = in_chans - self.embed_dim = embed_dim - - if use_conv_embed: - # if we choose to use conv embedding, then we treat the stem and non-stem differently - if is_stem: - kernel_size = 7; padding = 2; stride = 4 - else: - kernel_size = 3; padding = 1; stride = 2 - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=kernel_size, stride=stride, padding=padding) - else: - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - - if norm_layer is not None: - self.norm = norm_layer(embed_dim) - else: - self.norm = None - - def forward(self, x): - B, C, H, W = x.shape - - x = self.proj(x) - H, W = x.shape[2:] - x = x.flatten(2).transpose(1, 2) # B Ph*Pw C - if self.norm is not None: - x = self.norm(x) - return x, H, W - - def flops(self): - Ho, Wo = self.patches_resolution - flops = Ho * Wo * self.embed_dim * self.in_chans * (self.patch_size[0] * self.patch_size[1]) - if self.norm is not None: - flops += Ho * Wo * self.embed_dim - return flops - -class FocalNet(nn.Module): - r""" Focal Modulation Networks (FocalNets) - - Args: - img_size (int | tuple(int)): Input image size. Default 224 - patch_size (int | tuple(int)): Patch size. Default: 4 - in_chans (int): Number of input image channels. Default: 3 - num_classes (int): Number of classes for classification head. Default: 1000 - embed_dim (int): Patch embedding dimension. Default: 96 - depths (tuple(int)): Depth of each Focal Transformer layer. - mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4 - drop_rate (float): Dropout rate. Default: 0 - drop_path_rate (float): Stochastic depth rate. Default: 0.1 - norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm. - patch_norm (bool): If True, add normalization after patch embedding. Default: True - use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False - focal_levels (list): How many focal levels at all stages. Note that this excludes the finest-grain level. Default: [1, 1, 1, 1] - focal_windows (list): The focal window size at all stages. Default: [7, 5, 3, 1] - use_conv_embed (bool): Whether use convolutional embedding. We noted that using convolutional embedding usually improve the performance, but we do not use it by default. Default: False - use_layerscale (bool): Whether use layerscale proposed in CaiT. Default: False - layerscale_value (float): Value for layer scale. 
Default: 1e-4 - use_postln (bool): Whether use layernorm after modulation (it helps stablize training of large models) - """ - def __init__(self, - img_size=224, - patch_size=4, - in_chans=3, - num_classes=1000, - embed_dim=96, - depths=[2, 2, 6, 2], - mlp_ratio=4., - drop_rate=0., - drop_path_rate=0.1, - norm_layer=nn.LayerNorm, - patch_norm=True, - use_checkpoint=False, - focal_levels=[2, 2, 2, 2], - focal_windows=[3, 3, 3, 3], - use_conv_embed=False, - use_layerscale=False, - layerscale_value=1e-4, - use_postln=False, - **kwargs): - super().__init__() - - self.num_layers = len(depths) - embed_dim = [embed_dim * (2 ** i) for i in range(self.num_layers)] - - self.num_classes = num_classes - self.embed_dim = embed_dim - self.patch_norm = patch_norm - self.num_features = embed_dim[-1] - self.mlp_ratio = mlp_ratio - - # split image into patches using either non-overlapped embedding or overlapped embedding - self.patch_embed = PatchEmbed( - img_size=to_2tuple(img_size), - patch_size=patch_size, - in_chans=in_chans, - embed_dim=embed_dim[0], - use_conv_embed=use_conv_embed, - norm_layer=norm_layer if self.patch_norm else None, - is_stem=True) - - num_patches = self.patch_embed.num_patches - patches_resolution = self.patch_embed.patches_resolution - self.patches_resolution = patches_resolution - self.pos_drop = nn.Dropout(p=drop_rate) - - # stochastic depth - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule - - # build layers - self.layers = nn.ModuleList() - for i_layer in range(self.num_layers): - layer = BasicLayer(dim=embed_dim[i_layer], - out_dim=embed_dim[i_layer+1] if (i_layer < self.num_layers - 1) else None, - input_resolution=(patches_resolution[0] // (2 ** i_layer), - patches_resolution[1] // (2 ** i_layer)), - depth=depths[i_layer], - mlp_ratio=self.mlp_ratio, - drop=drop_rate, - drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], - norm_layer=norm_layer, - downsample=PatchEmbed if (i_layer < self.num_layers - 1) else None, - focal_level=focal_levels[i_layer], - focal_window=focal_windows[i_layer], - use_conv_embed=use_conv_embed, - use_checkpoint=use_checkpoint, - use_layerscale=use_layerscale, - layerscale_value=layerscale_value, - use_postln=use_postln, - ) - self.layers.append(layer) - - self.norm = norm_layer(self.num_features) - self.avgpool = nn.AdaptiveAvgPool1d(1) - self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity() - self.dim_out = self.num_features - - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - @torch.jit.ignore - def no_weight_decay(self): - return {''} - - @torch.jit.ignore - def no_weight_decay_keywords(self): - return {''} - - def forward_features(self, x): - x, H, W = self.patch_embed(x) - x = self.pos_drop(x) - - for layer in self.layers: - x, H, W = layer(x, H, W) - x = self.norm(x) # B L C - x = self.avgpool(x.transpose(1, 2)) # B C 1 - x = torch.flatten(x, 1) - return x - - def forward(self, x): - x = self.forward_features(x) - x = self.head(x) - return x - - def flops(self): - flops = 0 - flops += self.patch_embed.flops() - for i, layer in enumerate(self.layers): - flops += layer.flops() - flops += self.num_features * self.patches_resolution[0] * self.patches_resolution[1] // (2 ** 
self.num_layers) - flops += self.num_features * self.num_classes - return flops - -def build_transforms(img_size, center_crop=False): - t = [] - if center_crop: - size = int((256 / 224) * img_size) - t.append( - transforms.Resize(size, interpolation=_pil_interp('bicubic')) - ) - t.append( - transforms.CenterCrop(img_size) - ) - else: - t.append( - transforms.Resize(img_size, interpolation=_pil_interp('bicubic')) - ) - t.append(transforms.ToTensor()) - t.append(transforms.Normalize(IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD)) - return transforms.Compose(t) - -def build_transforms4display(img_size, center_crop=False): - t = [] - if center_crop: - size = int((256 / 224) * img_size) - t.append( - transforms.Resize(size, interpolation=_pil_interp('bicubic')) - ) - t.append( - transforms.CenterCrop(img_size) - ) - else: - t.append( - transforms.Resize(img_size, interpolation=_pil_interp('bicubic')) - ) - t.append(transforms.ToTensor()) - return transforms.Compose(t) - -model_urls = { - "focalnet_tiny_srf": "", - "focalnet_small_srf": "", - "focalnet_base_srf": "", - "focalnet_tiny_lrf": "", - "focalnet_small_lrf": "", - "focalnet_base_lrf": "", -} - -@register_model -def focalnet_tiny_srf(pretrained=False, **kwargs): - model = FocalNet(depths=[2, 2, 6, 2], embed_dim=96, **kwargs) - if pretrained: - url = model_urls['focalnet_tiny_srf'] - checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu", check_hash=True) - model.load_state_dict(checkpoint["model"]) - return model - -@register_model -def focalnet_small_srf(pretrained=False, **kwargs): - model = FocalNet(depths=[2, 2, 18, 2], embed_dim=96, **kwargs) - if pretrained: - url = model_urls['focalnet_small_srf'] - checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu") - model.load_state_dict(checkpoint["model"]) - return model - -@register_model -def focalnet_base_srf(pretrained=False, **kwargs): - model = FocalNet(depths=[2, 2, 18, 2], embed_dim=128, **kwargs) - if pretrained: - url = model_urls['focalnet_base_srf'] - checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu") - model.load_state_dict(checkpoint["model"]) - return model - -@register_model -def focalnet_tiny_lrf(pretrained=False, **kwargs): - model = FocalNet(depths=[2, 2, 6, 2], embed_dim=96, focal_levels=[3, 3, 3, 3], **kwargs) - if pretrained: - url = model_urls['focalnet_tiny_lrf'] - checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu", check_hash=True) - model.load_state_dict(checkpoint["model"]) - return model - -@register_model -def focalnet_small_lrf(pretrained=False, **kwargs): - model = FocalNet(depths=[2, 2, 18, 2], embed_dim=96, focal_levels=[3, 3, 3, 3], **kwargs) - if pretrained: - url = model_urls['focalnet_small_lrf'] - checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu") - model.load_state_dict(checkpoint["model"]) - return model - -@register_model -def focalnet_base_lrf(pretrained=False, **kwargs): - model = FocalNet(depths=[2, 2, 18, 2], embed_dim=128, focal_levels=[3, 3, 3, 3], **kwargs) - if pretrained: - url = model_urls['focalnet_base_lrf'] - checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu") - model.load_state_dict(checkpoint["model"]) - return model - -@register_model -def focalnet_giant_lrf(pretrained=False, **kwargs): - model = FocalNet(depths=[2, 2, 42, 2], embed_dim=512, focal_levels=[3, 3, 3, 3], **kwargs) - if pretrained: - url = model_urls['focalnet_giant_lrf'] - checkpoint = torch.hub.load_state_dict_from_url(url=url, 
map_location="cpu") - model.load_state_dict(checkpoint["model"]) - return model - -@register_model -def focalnet_tiny_iso_16(pretrained=False, **kwargs): - model = FocalNet(depths=[12], patch_size=16, embed_dim=192, focal_levels=[3], focal_windows=[3], **kwargs) - if pretrained: - url = model_urls['focalnet_tiny_iso_16'] - checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu", check_hash=True) - model.load_state_dict(checkpoint["model"]) - return model - -@register_model -def focalnet_small_iso_16(pretrained=False, **kwargs): - model = FocalNet(depths=[12], patch_size=16, embed_dim=384, focal_levels=[3], focal_windows=[3], **kwargs) - if pretrained: - url = model_urls['focalnet_small_iso_16'] - checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu") - model.load_state_dict(checkpoint["model"]) - return model - -@register_model -def focalnet_base_iso_16(pretrained=False, **kwargs): - model = FocalNet(depths=[12], patch_size=16, embed_dim=768, focal_levels=[3], focal_windows=[3], use_layerscale=True, use_postln=True, **kwargs) - if pretrained: - url = model_urls['focalnet_base_iso_16'] - checkpoint = torch.hub.load_state_dict_from_url(url=url, map_location="cpu") - model.load_state_dict(checkpoint["model"]) - return model - -if __name__ == '__main__': - img_size = 224 - x = torch.rand(16, 3, img_size, img_size).cuda() - # model = FocalNet(depths=[2, 2, 6, 2], embed_dim=96) - # model = FocalNet(depths=[12], patch_size=16, embed_dim=768, focal_levels=[3], focal_windows=[3], focal_factors=[2]) - model = FocalNet(depths=[2, 2, 6, 2], embed_dim=96, focal_levels=[3, 3, 3, 3]).cuda() - print(model); model(x) - - flops = model.flops() - print(f"number of GFLOPs: {flops / 1e9}") - - n_parameters = sum(p.numel() for p in model.parameters() if p.requires_grad) - print(f"number of params: {n_parameters}") diff --git a/spaces/kamranahmad92/GRADIOLANCHAINOPENAICHATBOT/README.md b/spaces/kamranahmad92/GRADIOLANCHAINOPENAICHATBOT/README.md deleted file mode 100644 index 7ebf3e11fa015c44c41696152599bf33475ea5fe..0000000000000000000000000000000000000000 --- a/spaces/kamranahmad92/GRADIOLANCHAINOPENAICHATBOT/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: GRADIOLANCHAINOPENAICHATBOT -emoji: 📚 -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/configs/ms1mv3_r18.py b/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/configs/ms1mv3_r18.py deleted file mode 100644 index eb4e0d31f1aedf4590628d394e1606920fefb5c9..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-SadTalker-VC/src/face3d/models/arcface_torch/configs/ms1mv3_r18.py +++ /dev/null @@ -1,26 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.loss = "arcface" -config.network = "r18" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -config.rec = "/train_tmp/ms1m-retinaface-t1" -config.num_classes = 93431 -config.num_image = 5179510 -config.num_epoch = 25 -config.warmup_epoch = -1 -config.decay_epoch = [10, 16, 22] 
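# --- Illustrative sketch, not part of the original config file --------------
# config.decay_epoch above only lists the milestone epochs; a hedged example
# of how a step schedule driven by it could be evaluated is sketched below.
# The helper name and the 0.1 decay factor are assumptions for illustration,
# not the repository's actual scheduler.
def example_lr_at_epoch(epoch, base_lr=0.1, decay_epoch=(10, 16, 22), factor=0.1):
    # Multiply the base lr by `factor` once per milestone already passed.
    scale = factor ** sum(epoch >= e for e in decay_epoch)
    return base_lr * scale
# example_lr_at_epoch(0) -> 0.1, example_lr_at_epoch(16) -> 0.001 over the
# 25-epoch run configured above.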
-config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/kieranberton23/plantdx/app.py b/spaces/kieranberton23/plantdx/app.py deleted file mode 100644 index a16c9a6d7bbbcba6ebaba3f1b3923dfb3e63e03e..0000000000000000000000000000000000000000 --- a/spaces/kieranberton23/plantdx/app.py +++ /dev/null @@ -1,168 +0,0 @@ -import streamlit as st -import base64 -import regex as re -from predictor import predict -import torch -import numpy as np -import matplotlib.pyplot as plt -import torchvision.models as models -from PIL import Image -from torchvision import datasets, transforms -from torch.utils.data import DataLoader, Subset - -def add_bg_from_local(image_file): - with open(image_file, "rb") as image_file: - encoded_string = base64.b64encode(image_file.read()) - st.markdown( - f""" - - """, - unsafe_allow_html=True - ) - -def header_white(text, fontsize = 40, bold = True): - st.markdown( - f""" - {text} - """, - unsafe_allow_html=True - ) - -def header_red(text, fontsize = 40, bold = True): - st.markdown( - f""" - {text} - """, - unsafe_allow_html=True - ) - -def header_green(text, fontsize = 40, bold = True): - st.markdown( - f""" - {text} - """, - unsafe_allow_html=True - ) - -def plant_treatment_message(predicted_string): - if predicted_string == "Apple___Apple_scab": - return "Remove the infected leaves and fruit and apply a fungicide to prevent it from spreading." - elif predicted_string == "Apple___Black_rot": - return "Remove the infected branches and fruit and apply a fungicide to prevent it from spreading." - elif predicted_string == "Apple___Cedar_apple_rust": - return "Remove the infected branches and apply a fungicide to prevent it from spreading." - elif predicted_string == "Cherry_(including_sour)___Powdery_mildew": - return "Remove the infected leaves and apply a fungicide to prevent it from spreading." - elif predicted_string == "Corn_(maize)___Cercospora_leaf_spot Gray_leaf_spot": - return "Remove the infected leaves and apply a fungicide to prevent it from spreading." - elif predicted_string == "Corn_(maize)___Common_rust_": - return "Remove the infected leaves and apply a fungicide to prevent it from spreading." - elif predicted_string == "Corn_(maize)___Northern_Leaf_Blight": - return "Remove the infected leaves and apply a fungicide to prevent it from spreading." - elif predicted_string == "Grape___Black_rot": - return "Remove the infected branches and fruit and apply a fungicide to prevent it from spreading." - elif predicted_string == "Grape___Esca_(Black_Measles)": - return "Remove the infected branches and apply a fungicide to prevent it from spreading." - elif predicted_string == "Grape___Leaf_blight_(Isariopsis_Leaf_Spot)": - return "Remove the infected leaves and apply a fungicide to prevent it from spreading." - elif predicted_string == "Orange___Haunglongbing_(Citrus_greening)": - return "Remove the infected branches and apply a pesticide to prevent it from spreading." - elif predicted_string == "Peach___Bacterial_spot": - return "Remove the infected leaves and apply a copper fungicide to prevent it from spreading." - elif predicted_string == "Squash___Powdery_mildew": - return "This is a fungal disease that can cause white powdery spots on leaves and fruit. Consider removing infected plant parts and treating with a fungicide." - elif predicted_string == "Strawberry___Leaf_scorch": - return "This can be caused by drought, sunburn, or fungal diseases. Make sure your plant is getting enough water and sunlight. 
Consider removing infected plant parts and treating with a fungicide." - elif predicted_string == "Tomato___Bacterial_spot": - return "This is a bacterial disease that can cause spots on leaves and fruit. Consider removing infected plant parts and treating with a copper-based fungicide." - elif predicted_string == "Tomato___Early_blight": - return "This is a fungal disease that can cause dark spots on leaves and stems. Consider removing infected plant parts and treating with a fungicide." - elif predicted_string == "Tomato___Late_blight": - return "This is a fungal disease that can cause rapid decay of foliage and fruit. Consider removing infected plant parts and treating with a fungicide." - elif predicted_string == "Tomato___Leaf_Mold": - return "This is a fungal disease that can cause brown spots on leaves. Consider removing infected plant parts and treating with a fungicide." - elif predicted_string == "Tomato___Septoria_leaf_spot": - return "This is a fungal disease that can cause brown spots with a yellow halo on leaves. Consider removing infected plant parts and treating with a fungicide." - elif predicted_string == "Tomato___Spider_mites Two-spotted_spider_mite": - return "These are tiny pests that can cause yellow spots on leaves and webbing. Consider removing infected plant parts and treating with an insecticide." - elif predicted_string == "Tomato___Target_Spot": - return "This is a fungal disease that can cause circular spots with a bullseye pattern on leaves. Consider removing infected plant parts and treating with a fungicide." - elif predicted_string == "Tomato___Tomato_Yellow_Leaf_Curl_Virus": - return "This is a viral disease that can cause yellowing and curling of leaves. Consider treating with a fungicide." - -def clean_prediction(prediction): - pattern = re.compile('(.*)___(.*)') - clean_predictions = [] - for p in prediction: - r = pattern.search(p['predicted']) - plant = r.groups()[0].replace('_', ' ').lower() - diagnosis = r.groups()[1].replace('_', ' ').lower() - treatment = plant_treatment_message(p['predicted']) if diagnosis is not 'healthy' else None - clean_predictions.append([plant, diagnosis, "{0:.1f}%".format(float(p['probability']) * 100), treatment]) - - return clean_predictions - -def diagnose_health(file): - prediction = predict(file) - clean_predictions = clean_prediction(prediction) - return clean_predictions - -def app(): - - add_bg_from_local('assets/background.png') - header_white(f'PlantDx: Diagnosis in a Snap! ') - - # Upload image of plant - header_white("Upload an image of your plant below:", fontsize=32) - header_white("For best results, remove the leaf from the plant and take the image against a dark background.", fontsize=16, bold=False) - uploaded_file = st.file_uploader("", type=["jpg", "jpeg", "png"]) - - if uploaded_file: - header_white("Preview of the selected image:", fontsize=28, bold=False) - st.image(uploaded_file) - - # Get diagnosis button - if st.button("Get Diagnosis"): - if uploaded_file is not None: - # Diagnose plant health and display results - results = diagnose_health(uploaded_file) - - if results[0][1] == 'healthy': - header_green(f"We believe this is a healthy {results[0][0]} plant with {results[0][2]} confidence. Keep up the good work with proper watering, sunlight, and nutrients.", fontsize=32, bold=False) - else: - header_red(f"We believe this is an unhealthy {results[0][0]} plant with {results[0][1]}, with {results[0][2]} confidence. 
{results[0][3] if results [0][3] else ''}", fontsize=32, bold=False) - - if len(results) > 1: - header_white("Other potential diagnoses: ", fontsize=24) - - for p in range(1, len(results)): - if results[p][1] == 'healthy': - header_white( - f"A healthy {results[p][0]} plant, {results[p][2]} confidence.", - fontsize=20, bold=False) - else: - header_white( - f"An unhealthy {results[p][0]} plant with {results[p][1]}, {results[p][2]} confidence. {results[p][3] if results [p][3] else ''}", - fontsize=20, bold=False) - - else: - st.warning("Please upload an image of your plant first") - - # # Create user profile button - # if st.button("Create User Profile"): - # st.subheader("User Profile") - # # Prompt user to add their name and the plant they own - # user_name = st.text_input("Enter your name:") - # plant_name = st.text_input("Enter the plant you own:") - # if user_name and plant_name: - # st.success(f"User profile created for {user_name} with plant {plant_name}") - - -# Run Streamlit app -if __name__ == "__main__": - app() diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/visualization/__init__.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/visualization/__init__.py deleted file mode 100644 index 835df136bdcf69348281d22914d41aa84cdf92b1..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/visualization/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .color import Color, color_val -from .image import imshow, imshow_bboxes, imshow_det_bboxes -from .optflow import flow2rgb, flowshow, make_color_wheel - -__all__ = [ - 'Color', 'color_val', 'imshow', 'imshow_bboxes', 'imshow_det_bboxes', - 'flowshow', 'flow2rgb', 'make_color_wheel' -] diff --git a/spaces/koustubhavachat/Ghibli-Diffusion/README.md b/spaces/koustubhavachat/Ghibli-Diffusion/README.md deleted file mode 100644 index c0ea0069dd242b579faa9197e55fa1ffe7d77ec0..0000000000000000000000000000000000000000 --- a/spaces/koustubhavachat/Ghibli-Diffusion/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Ghibli Diffusion -emoji: 🚀 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.10.1 -app_file: app.py -pinned: false -duplicated_from: akhaliq/Ghibli-Diffusion ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/krazyxki/V-1488abed/src/proxy/rate-limit.ts b/spaces/krazyxki/V-1488abed/src/proxy/rate-limit.ts deleted file mode 100644 index bfe5d228ac5d47c91d7391c4ddc0ba93805fa53e..0000000000000000000000000000000000000000 --- a/spaces/krazyxki/V-1488abed/src/proxy/rate-limit.ts +++ /dev/null @@ -1,126 +0,0 @@ -import { Request, Response, NextFunction } from "express"; -import { config } from "../config"; -import { logger } from "../logger"; -import { proxyKeys } from "./proxy-keys"; - -const RATE_LIMIT_ENABLED = Boolean(config.modelRateLimit); -const RATE_LIMIT = Math.max(1, config.modelRateLimit); -const ONE_MINUTE_MS = 60 * 1000; - -const lastAttempts = new Map(); - -const expireOldAttempts = (now: number) => (attempt: number) => - attempt > now - ONE_MINUTE_MS; - -const getTryAgainInMs = (ip: string) => { - const now = Date.now(); - const attempts = lastAttempts.get(ip) || []; - const validAttempts = attempts.filter(expireOldAttempts(now)); - - if (validAttempts.length >= RATE_LIMIT) { - return validAttempts[0] - now + ONE_MINUTE_MS; - } else { - lastAttempts.set(ip, [...validAttempts, now]); - return 0; - } -}; - -const getStatus = (ip: 
string) => { - const now = Date.now(); - const attempts = lastAttempts.get(ip) || []; - const validAttempts = attempts.filter(expireOldAttempts(now)); - return { - remaining: Math.max(0, RATE_LIMIT - validAttempts.length), - reset: validAttempts.length > 0 ? validAttempts[0] + ONE_MINUTE_MS : now, - }; -}; - -/** Prunes attempts and IPs that are no longer relevant after five minutes. */ -const clearOldAttempts = () => { - const uniqueIps = lastAttempts.size; - for (const [ip, attempts] of lastAttempts.entries()) { - const validAttempts = attempts.filter(expireOldAttempts(Date.now())); - if (validAttempts.length === 0) { - lastAttempts.delete(ip); - } else { - lastAttempts.set(ip, validAttempts); - } - } - const prunedIps = uniqueIps - lastAttempts.size; - logger.info( - { activeIps: lastAttempts.size, prunedIps }, - "Cleaned up rate limit map" - ); -}; -setInterval(clearOldAttempts, 5 * ONE_MINUTE_MS); - -export const getUniqueIps = () => { - return Array.from(lastAttempts.keys()).filter(a => a.indexOf('.') > 0).length; -}; - -export const ipLimiter = (req: Request, res: Response, next: NextFunction) => { - if (!RATE_LIMIT_ENABLED) { - next(); - return; - } - - // Allow me to bypass limiter - if (req.headers.authKey === config.keyPassword) { - next(); - return; - } - - const { remaining, reset } = getStatus(req.ip); - res.set("X-RateLimit-Limit", config.modelRateLimit.toString()); - res.set("X-RateLimit-Remaining", remaining.toString()); - res.set("X-RateLimit-Reset", reset.toString()); - - const tryAgainInMs = getTryAgainInMs(req.ip); - if (tryAgainInMs > 0) { - res.set("Retry-After", tryAgainInMs.toString()); - res.status(429).json({ - error: { - type: "proxy_rate_limited", - message: `This proxy is rate limited to ${ - config.modelRateLimit - } model requests per minute. Please try again in ${Math.ceil( - tryAgainInMs / 1000 - )} seconds.`, - }, - }); - } else { - next(); - } -}; - -export const keyLimiter = (req: Request, res: Response, next: NextFunction) => { - if (!RATE_LIMIT_ENABLED) { - next(); - return; - } - - // Allow me to bypass limiter - if (req.headers.authKey === config.keyPassword) { - next(); - return; - } - - const { remaining, reset } = getStatus(req.authKey ?? ''); - res.set("X-RateLimit-Limit", config.modelRateLimit.toString()); - res.set("X-RateLimit-Remaining", remaining.toString()); - res.set("X-RateLimit-Reset", reset.toString()); - - const tryAgainInMs = getTryAgainInMs(req.authKey ?? 
''); - if (tryAgainInMs > 0) { - res.set("Retry-After", tryAgainInMs.toString()); - proxyKeys.revoke(req.authKey); - res.status(429).json({ - error: { - type: "proxy_rate_limited", - message: `Ты слишком часто стучался на проксю, иди генерируй новый ключ`, - }, - }); - } else { - next(); - } -}; \ No newline at end of file diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/tests/test_backend_gtk3.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/tests/test_backend_gtk3.py deleted file mode 100644 index 937ddef5a13f140c7aa495cf3f046e8a261096ff..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/tests/test_backend_gtk3.py +++ /dev/null @@ -1,51 +0,0 @@ -from matplotlib import pyplot as plt - -import pytest - - -pytest.importorskip("matplotlib.backends.backend_gtk3agg") - - -@pytest.mark.backend("gtk3agg", skip_on_importerror=True) -def test_correct_key(): - pytest.xfail("test_widget_send_event is not triggering key_press_event") - - from gi.repository import Gdk, Gtk - fig = plt.figure() - buf = [] - - def send(event): - for key, mod in [ - (Gdk.KEY_a, Gdk.ModifierType.SHIFT_MASK), - (Gdk.KEY_a, 0), - (Gdk.KEY_a, Gdk.ModifierType.CONTROL_MASK), - (Gdk.KEY_agrave, 0), - (Gdk.KEY_Control_L, Gdk.ModifierType.MOD1_MASK), - (Gdk.KEY_Alt_L, Gdk.ModifierType.CONTROL_MASK), - (Gdk.KEY_agrave, - Gdk.ModifierType.CONTROL_MASK - | Gdk.ModifierType.MOD1_MASK - | Gdk.ModifierType.MOD4_MASK), - (0xfd16, 0), # KEY_3270_Play. - (Gdk.KEY_BackSpace, 0), - (Gdk.KEY_BackSpace, Gdk.ModifierType.CONTROL_MASK), - ]: - # This is not actually really the right API: it depends on the - # actual keymap (e.g. on Azerty, shift+agrave -> 0). - Gtk.test_widget_send_key(fig.canvas, key, mod) - - def receive(event): - buf.append(event.key) - if buf == [ - "A", "a", "ctrl+a", - "\N{LATIN SMALL LETTER A WITH GRAVE}", - "alt+control", "ctrl+alt", - "ctrl+alt+super+\N{LATIN SMALL LETTER A WITH GRAVE}", - # (No entry for KEY_3270_Play.) 
- "backspace", "ctrl+backspace", - ]: - plt.close(fig) - - fig.canvas.mpl_connect("draw_event", send) - fig.canvas.mpl_connect("key_press_event", receive) - plt.show() diff --git a/spaces/lcf001/newbingai/README.md b/spaces/lcf001/newbingai/README.md deleted file mode 100644 index fd46372bb02d8f4db57aba46ac72b82ef48fbb1b..0000000000000000000000000000000000000000 --- a/spaces/lcf001/newbingai/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: BingAI -emoji: 🐨 -colorFrom: yellow -colorTo: pink -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/leogabraneth/text-generation-webui-main/modules/training.py b/spaces/leogabraneth/text-generation-webui-main/modules/training.py deleted file mode 100644 index b887fa479e0dc068f94e25a7623167d84f9d8e31..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/modules/training.py +++ /dev/null @@ -1,776 +0,0 @@ -import os - -os.environ["WANDB_MODE"] = "offline" -# os.environ["WANDB_DISABLED"] = "true" - -import json -import math -import random -import shutil -import sys -import threading -import time -import traceback -from datetime import datetime -from pathlib import Path - -import gradio as gr -import torch -import transformers -from datasets import Dataset, load_dataset -from peft import ( - LoraConfig, - get_peft_model, - prepare_model_for_kbit_training, - set_peft_model_state_dict -) -from peft.utils.other import \ - TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING as model_to_lora_modules -from transformers import is_torch_xpu_available -from transformers.models.auto.modeling_auto import ( - MODEL_FOR_CAUSAL_LM_MAPPING_NAMES -) - -from modules import shared, ui, utils -from modules.evaluate import ( - calculate_perplexity, - generate_markdown_table, - save_past_evaluations -) -from modules.logging_colors import logger -from modules.models import reload_model -from modules.utils import natural_keys - -MODEL_CLASSES = {v[1]: v[0] for v in MODEL_FOR_CAUSAL_LM_MAPPING_NAMES.items()} -PARAMETERS = ["lora_name", "always_override", "q_proj_en", "v_proj_en", "k_proj_en", "o_proj_en", "gate_proj_en", "down_proj_en", "up_proj_en", "save_steps", "micro_batch_size", "batch_size", "epochs", "learning_rate", "lr_scheduler_type", "lora_rank", "lora_alpha", "lora_dropout", "cutoff_len", "dataset", "eval_dataset", "format", "eval_steps", "raw_text_file", "overlap_len", "newline_favor_len", "higher_rank_limit", "warmup_steps", "optimizer", "hard_cut_string", "train_only_after", "stop_at_loss", "add_eos_token", "min_chars", "report_to"] -WANT_INTERRUPT = False - -train_log = {} -train_template = {} - - -def create_ui(): - mu = shared.args.multi_user - with gr.Tab("Training", elem_id="training-tab"): - with gr.Tab('Train LoRA', elem_id='lora-train-tab'): - tmp = gr.State('') - with gr.Row(): - with gr.Column(): - gr.Markdown("[Tutorial](https://github.com/oobabooga/text-generation-webui/wiki/05-%E2%80%90-Training-Tab)") - - with gr.Row(): - copy_from = gr.Dropdown(label='Copy parameters from', value='None', choices=utils.get_available_loras(), elem_classes=['slim-dropdown'], interactive=not mu) - ui.create_refresh_button(copy_from, lambda: None, lambda: {'choices': utils.get_available_loras()}, 'refresh-button', interactive=not mu) - - with gr.Row(): - with gr.Column(scale=5): - lora_name = gr.Textbox(label='Name', info='The name of your new LoRA file') - with gr.Column(): - always_override = 
gr.Checkbox(label='Override Existing Files', value=False, info='If the name is the same, checking will replace the existing file, and unchecking will load and continue from it (the rank must be the same).', elem_classes=['no-background']) - - with gr.Accordion(label='Target Modules', open=False): - gr.Markdown("Selects which modules to target in training. Targeting more modules is closer to a full fine-tune at the cost of increased VRAM requirements and adapter size.\nNOTE: Only works for model_id='llama', other types will retain default training behavior and not use these settings.") - with gr.Row(): - with gr.Column(): - q_proj_en = gr.Checkbox(label='Enable q_proj', value=True) - with gr.Column(): - v_proj_en = gr.Checkbox(label='Enable v_proj', value=True) - with gr.Column(): - k_proj_en = gr.Checkbox(label='Enable k_proj', value=False) - with gr.Column(): - o_proj_en = gr.Checkbox(label='Enable o_proj', value=False) - with gr.Column(): - gate_proj_en = gr.Checkbox(label='Enable gate_proj', value=False) - with gr.Column(): - down_proj_en = gr.Checkbox(label='Enable down_proj', value=False) - with gr.Column(): - up_proj_en = gr.Checkbox(label='Enable up_proj', value=False) - - with gr.Row(): - with gr.Column(): - lora_rank = gr.Slider(label='LoRA Rank', value=32, minimum=0, maximum=1024, step=4, info='Also called dimension count. Higher values = larger file, more content control. Smaller values = smaller file, less control. Use 4 or 8 for style, 128 or 256 to teach, 1024+ for fine-detail on big data. More VRAM is needed for higher ranks.') - lora_alpha = gr.Slider(label='LoRA Alpha', value=64, minimum=0, maximum=2048, step=4, info='This divided by the rank becomes the scaling of the LoRA. Higher means stronger. A good standard value is twice your Rank.') - batch_size = gr.Slider(label='Batch Size', value=128, minimum=0, maximum=1024, step=4, info='Global batch size. The two batch sizes together determine gradient accumulation (gradientAccum = batch / microBatch). Higher gradient accum values lead to better quality training.') - micro_batch_size = gr.Slider(label='Micro Batch Size', value=4, minimum=1, maximum=128, step=1, info='Per-device batch size (NOTE: multiple devices not yet implemented). Increasing this will increase VRAM usage.') - cutoff_len = gr.Slider(label='Cutoff Length', minimum=0, maximum=4096, value=256, step=32, info='Cutoff length for text input. Essentially, how long of a line of text to feed in at a time. Higher values require drastically more VRAM.') - - with gr.Column(): - save_steps = gr.Number(label='Save every n steps', value=0, info='If above 0, a checkpoint of the LoRA will be saved every time this many steps pass.') - - epochs = gr.Number(label='Epochs', value=3, info='Number of times every entry in the dataset should be fed into training. So 1 means feed each item in once, 5 means feed it in five times, etc.') - learning_rate = gr.Textbox(label='Learning Rate', value='3e-4', info='In scientific notation. 3e-4 is a good starting base point. 1e-2 is extremely high, 1e-6 is extremely low.') - with gr.Row(): - lr_scheduler_type = gr.Dropdown(label='LR Scheduler', value='linear', choices=['linear', 'constant', 'constant_with_warmup', 'cosine', 'cosine_with_restarts', 'polynomial', 'inverse_sqrt'], info='Learning rate scheduler - defines how the learning rate changes over time. 
"Constant" means never change, "linear" means to go in a straight line from the learning rate down to 0, cosine follows a curve, etc.', elem_classes=['slim-dropdown']) - - with gr.Accordion(label='Advanced Options', open=False): - with gr.Row(): - with gr.Column(): - lora_dropout = gr.Slider(label='LoRA Dropout', minimum=0.0, maximum=1.0, step=0.025, value=0.05, info='Percentage probability for dropout of LoRA layers. This can help reduce overfitting. Most users should leave at default.') - stop_at_loss = gr.Slider(label='Stop at loss', minimum=0.0, maximum=3.0, step=0.1, value=0.00, info='The process will automatically stop once the desired loss value is reached. (reasonable numbers are 1.5-1.8)') - with gr.Row(): - optimizer = gr.Dropdown(label='Optimizer', value='adamw_torch', choices=['adamw_hf', 'adamw_torch', 'adamw_torch_fused', 'adamw_torch_xla', 'adamw_apex_fused', 'adafactor', 'adamw_bnb_8bit', 'adamw_anyprecision', 'sgd', 'adagrad'], info='Different optimizer implementation options, for advanced users. Effects of different options are not well documented yet.', elem_classes=['slim-dropdown']) - - with gr.Column(): - warmup_steps = gr.Number(label='Warmup Steps', value=100, info='For this many steps at the start, the learning rate will be lower than normal. This helps the trainer prepare the model and precompute statistics to improve the quality of training after the start.') - train_only_after = gr.Textbox(label='Train Only After', value='', info='Only consider text *after* this string in any given chunk for training. For Alpaca datasets, use "### Response:" to only train the response and ignore the input.') - - add_eos_token = gr.Checkbox(label='Add EOS token', value=False, info="Adds EOS token for each dataset item. In case of raw text, the EOS will be added at the Hard Cut") - - higher_rank_limit = gr.Checkbox(label='Enable higher ranks', value=False, info='If checked, changes Rank/Alpha slider above to go much higher. 
This will not work without a datacenter-class GPU.') - report_to = gr.Radio(label="Save detailed logs with", value="None", choices=["None", "wandb", "tensorboard"], interactive=True) - - with gr.Column(): - with gr.Tab(label='Formatted Dataset'): - with gr.Row(): - format = gr.Dropdown(choices=utils.get_datasets('training/formats', 'json'), value='None', label='Data Format', info='The format file used to decide how to format the dataset input.', elem_classes=['slim-dropdown'], interactive=not mu) - ui.create_refresh_button(format, lambda: None, lambda: {'choices': utils.get_datasets('training/formats', 'json')}, 'refresh-button', interactive=not mu) - - with gr.Row(): - dataset = gr.Dropdown(choices=utils.get_datasets('training/datasets', 'json'), value='None', label='Dataset', info='The dataset file to use for training.', elem_classes=['slim-dropdown'], interactive=not mu) - ui.create_refresh_button(dataset, lambda: None, lambda: {'choices': utils.get_datasets('training/datasets', 'json')}, 'refresh-button', interactive=not mu) - - with gr.Row(): - eval_dataset = gr.Dropdown(choices=utils.get_datasets('training/datasets', 'json'), value='None', label='Evaluation Dataset', info='The (optional) dataset file used to evaluate the model after training.', elem_classes=['slim-dropdown'], interactive=not mu) - ui.create_refresh_button(eval_dataset, lambda: None, lambda: {'choices': utils.get_datasets('training/datasets', 'json')}, 'refresh-button', interactive=not mu) - - eval_steps = gr.Number(label='Evaluate every n steps', value=100, info='If an evaluation dataset is given, test it every time this many steps pass.') - - with gr.Tab(label="Raw text file"): - with gr.Row(): - raw_text_file = gr.Dropdown(choices=utils.get_datasets('training/datasets', 'txt'), value='None', label='Text file', info='The raw text file to use for training.', elem_classes=['slim-dropdown'], interactive=not mu) - ui.create_refresh_button(raw_text_file, lambda: None, lambda: {'choices': utils.get_datasets('training/datasets', 'txt')}, 'refresh-button', interactive=not mu) - - with gr.Row(): - with gr.Column(): - overlap_len = gr.Slider(label='Overlap Length', minimum=0, maximum=512, value=128, step=16, info='How many tokens from the prior chunk of text to include into the next chunk. (The chunks themselves will be of a size determined by Cutoff Length). Setting overlap to exactly half the cutoff length may be ideal.') - newline_favor_len = gr.Slider(label='Prefer Newline Cut Length', minimum=0, maximum=512, value=128, step=16, info='Length (in characters, not tokens) of the maximum distance to shift an overlap cut by to ensure chunks cut at newlines. If too low, cuts may occur in the middle of lines.') - - with gr.Column(): - hard_cut_string = gr.Textbox(label='Hard Cut String', value='\\n\\n\\n', info='String that indicates a hard cut between text parts. 
Helps prevent unwanted overlap.') - min_chars = gr.Number(label='Ignore small blocks', value=0, info='Ignore Hard Cut blocks that have less or equal characters than this number') - - with gr.Row(): - start_button = gr.Button("Start LoRA Training", variant='primary', interactive=not mu) - stop_button = gr.Button("Interrupt", interactive=not mu) - - output = gr.Markdown(value="Ready") - - with gr.Tab('Perplexity evaluation', elem_id='evaluate-tab'): - with gr.Row(): - with gr.Column(): - models = gr.Dropdown(utils.get_available_models(), label='Models', multiselect=True, interactive=not mu) - evaluate_text_file = gr.Dropdown(choices=['wikitext', 'ptb', 'ptb_new'] + utils.get_datasets('training/datasets', 'txt')[1:], value='wikitext', label='Input dataset', info='The raw text file on which the model will be evaluated. The first options are automatically downloaded: wikitext, ptb, and ptb_new. The next options are your local text files under training/datasets.', interactive=not mu) - with gr.Row(): - with gr.Column(): - stride_length = gr.Slider(label='Stride', minimum=0, maximum=32768, value=512, step=256, info='Used to make the evaluation faster at the cost of accuracy. 1 = slowest but most accurate. 512 is a common value.') - - with gr.Column(): - max_length = gr.Slider(label='max_length', minimum=0, maximum=32768, value=0, step=256, info='The context for each evaluation. If set to 0, the maximum context length for the model will be used.') - - with gr.Row(): - start_current_evaluation = gr.Button("Evaluate loaded model", interactive=not mu) - start_evaluation = gr.Button("Evaluate selected models", interactive=not mu) - stop_evaluation = gr.Button("Interrupt", interactive=not mu) - - with gr.Column(): - evaluation_log = gr.Markdown(value='') - - evaluation_table = gr.Dataframe(value=generate_markdown_table(), interactive=True) - with gr.Row(): - save_comments = gr.Button('Save comments', elem_classes="small-button", interactive=not mu) - refresh_table = gr.Button('Refresh the table', elem_classes="small-button", interactive=not mu) - - # Training events - all_params = [lora_name, always_override, q_proj_en, v_proj_en, k_proj_en, o_proj_en, gate_proj_en, down_proj_en, up_proj_en, save_steps, micro_batch_size, batch_size, epochs, learning_rate, lr_scheduler_type, lora_rank, lora_alpha, lora_dropout, cutoff_len, dataset, eval_dataset, format, eval_steps, raw_text_file, overlap_len, newline_favor_len, higher_rank_limit, warmup_steps, optimizer, hard_cut_string, train_only_after, stop_at_loss, add_eos_token, min_chars, report_to] - - copy_from.change(do_copy_params, [copy_from] + all_params, all_params) - start_button.click(do_train, all_params, output) - stop_button.click(do_interrupt, None, None, queue=False) - higher_rank_limit.change(change_rank_limit, [higher_rank_limit], [lora_rank, lora_alpha]) - - # Evaluation events. For some reason, the interrupt event - # doesn't work with the .then() syntax, so I write them one - # by one in this ugly but functional way. 
- ev = start_evaluation.click(calculate_perplexity, [models, evaluate_text_file, stride_length, max_length], evaluation_log, show_progress=False) - start_evaluation.click(generate_markdown_table, None, evaluation_table, show_progress=False) - - start_current_evaluation.click(lambda: ['current model'], None, tmp) - ev_cur = start_current_evaluation.click(calculate_perplexity, [tmp, evaluate_text_file, stride_length, max_length], evaluation_log, show_progress=False) - start_current_evaluation.click(generate_markdown_table, None, evaluation_table, show_progress=False) - - stop_evaluation.click(None, None, None, cancels=[ev, ev_cur], queue=False) - refresh_table.click(generate_markdown_table, None, evaluation_table, show_progress=True) - save_comments.click( - save_past_evaluations, evaluation_table, None).then( - lambda: "Comments saved.", None, evaluation_log, show_progress=False) - - -def do_interrupt(): - global WANT_INTERRUPT - WANT_INTERRUPT = True - - -def do_copy_params(lora_name: str, *args): - f_name = f"{shared.args.lora_dir}/{clean_path(None, lora_name)}/training_parameters.json" - if Path(f_name).is_file(): - with open(f_name, 'r', encoding='utf-8') as format_file: - params: dict[str, str] = json.load(format_file) - else: - params = {} - - result = list() - for i in range(0, len(PARAMETERS)): - key = PARAMETERS[i] - if key in params: - result.append(params[key]) - else: - result.append(args[i]) - - return result - - -def change_rank_limit(use_higher_ranks: bool): - mult = 2 if use_higher_ranks else 1 - return {"maximum": 1024 * mult, "__type__": "update"}, {"maximum": 2048 * mult, "__type__": "update"} - - -def clean_path(base_path: str, path: str): - """Strips unusual symbols and forcibly builds a path as relative to the intended directory.""" - path = path.replace('\\', '/').replace('..', '_') - if base_path is None: - return path - - return f'{Path(base_path).absolute()}/{path}' - - -def backup_adapter(input_folder): - # Get the creation date of the file adapter_model.bin - try: - adapter_file = Path(f"{input_folder}/adapter_model.bin") - if adapter_file.is_file(): - - logger.info("Backing up existing LoRA adapter...") - creation_date = datetime.fromtimestamp(adapter_file.stat().st_ctime) - creation_date_str = creation_date.strftime("Backup-%Y-%m-%d") - - # Create the new subfolder - subfolder_path = Path(f"{input_folder}/{creation_date_str}") - subfolder_path.mkdir(parents=True, exist_ok=True) - - # Check if the file already exists in the subfolder - backup_adapter_file = Path(f"{input_folder}/{creation_date_str}/adapter_model.bin") - if backup_adapter_file.is_file(): - print(" - Backup already exists. 
Skipping backup process.") - return - - # Copy existing files to the new subfolder - existing_files = Path(input_folder).iterdir() - for file in existing_files: - if file.is_file(): - shutil.copy2(file, subfolder_path) - except Exception as e: - print("An error occurred in backup_adapter:", str(e)) - - -def calc_trainable_parameters(model): - trainable_params = 0 - all_param = 0 - for _, param in model.named_parameters(): - num_params = param.numel() - # if using DS Zero 3 and the weights are initialized empty - if num_params == 0 and hasattr(param, "ds_numel"): - num_params = param.ds_numel - - all_param += num_params - if param.requires_grad: - trainable_params += num_params - - return trainable_params, all_param - - -def do_train(lora_name: str, always_override: bool, q_proj_en: bool, v_proj_en: bool, k_proj_en: bool, o_proj_en: bool, gate_proj_en: bool, down_proj_en: bool, up_proj_en: bool, save_steps: int, micro_batch_size: int, batch_size: int, epochs: int, learning_rate: str, lr_scheduler_type: str, lora_rank: int, lora_alpha: int, lora_dropout: float, cutoff_len: int, dataset: str, eval_dataset: str, format: str, eval_steps: int, raw_text_file: str, overlap_len: int, newline_favor_len: int, higher_rank_limit: bool, warmup_steps: int, optimizer: str, hard_cut_string: str, train_only_after: str, stop_at_loss: float, add_eos_token: bool, min_chars: int, report_to: str): - - if shared.args.monkey_patch: - from alpaca_lora_4bit.monkeypatch.peft_tuners_lora_monkey_patch import ( - replace_peft_model_with_int4_lora_model - ) - replace_peft_model_with_int4_lora_model() - - global WANT_INTERRUPT - WANT_INTERRUPT = False - - # == Input validation / processing == - yield "Preparing the input..." - lora_file_path = clean_path(None, lora_name) - if lora_file_path.strip() == '': - yield "Missing or invalid LoRA file name input." - return - - lora_file_path = f"{Path(shared.args.lora_dir)}/{lora_file_path}" - actual_lr = float(learning_rate) - model_type = type(shared.model).__name__ - - if model_type in MODEL_CLASSES: - model_id = MODEL_CLASSES[model_type] - else: - model_id = "llama" - if model_type == "PeftModelForCausalLM": - if len(shared.lora_names) > 0: - yield "You are trying to train a LoRA while you already have another LoRA loaded. This will work, but may have unexpected effects. *(Will continue anyway in 5 seconds, press `Interrupt` to stop.)*" - logger.warning("Training LoRA over top of another LoRA. May have unexpected effects.") - else: - yield "Model ID not matched due to LoRA loading. Consider reloading base model. *(Will continue anyway in 5 seconds, press `Interrupt` to stop.)*" - logger.warning("Model ID not matched due to LoRA loading. Consider reloading base model.") - else: - yield "LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models. Unexpected errors may follow. *(Will continue anyway in 5 seconds, press `Interrupt` to stop.)*" - logger.warning(f"LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models. (Found model type: {model_type})") - - time.sleep(5) - - if shared.args.loader == 'GPTQ-for-LLaMa' and not shared.args.monkey_patch: - yield "LoRA training with GPTQ-for-LLaMa requires loading with `--monkey-patch`" - return - - if cutoff_len <= 0 or micro_batch_size <= 0 or batch_size <= 0 or actual_lr <= 0 or lora_rank <= 0 or lora_alpha <= 0: - yield "Cannot input zeroes." 
- return - - gradient_accumulation_steps = batch_size // micro_batch_size - shared.tokenizer.pad_token_id = 0 - shared.tokenizer.padding_side = "left" - - # Populate target_modules list with chosen X_proj modules. Llama-based models only atm, non-llama will revert to default behavior. - def list_target_modules(model_id): - if model_id != "llama": - return model_to_lora_modules[model_id] - - available_modules = { - "gate": gate_proj_en, - "down": down_proj_en, - "up": up_proj_en, - "q": q_proj_en, - "v": v_proj_en, - "k": k_proj_en, - "o": o_proj_en, - } - target_mods = [f"{name}_proj" for name, enabled in available_modules.items() if enabled] - return target_mods - - def encode(text, add_bos_token): - result = shared.tokenizer.encode(text, truncation=True, max_length=cutoff_len) - # Check if the first two tokens are BOS - if len(result) >= 2 and result[:2] == [shared.tokenizer.bos_token_id, shared.tokenizer.bos_token_id]: - result = result[1:] - - if not add_bos_token and result[0] == shared.tokenizer.bos_token_id: - result = result[1:] - return result - - def tokenize(prompt, append_eos_token=False): - - if train_only_after == '' or train_only_after not in prompt: - input_ids = encode(prompt, True) - - if append_eos_token and input_ids[-1] != shared.tokenizer.eos_token_id and len(input_ids) < cutoff_len: - input_ids.append(shared.tokenizer.eos_token_id) - - input_ids = [shared.tokenizer.pad_token_id] * (cutoff_len - len(input_ids)) + input_ids - labels = [1] * len(input_ids) - - else: - ind = prompt.index(train_only_after) + len(train_only_after) - before_tokens = encode(prompt[:ind], True) - after_tokens = encode(prompt[ind:], False) - - if append_eos_token and after_tokens[-1] != shared.tokenizer.eos_token_id: - after_tokens.append(shared.tokenizer.eos_token_id) - - full_length = len(after_tokens) + len(before_tokens) - if full_length > cutoff_len: - after_tokens = after_tokens[:cutoff_len - len(before_tokens)] - else: - before_tokens = [shared.tokenizer.pad_token_id] * (cutoff_len - full_length) + before_tokens - - input_ids = before_tokens + after_tokens - labels = [-100] * len(before_tokens) + [1] * len(after_tokens) - - input_ids = torch.tensor(input_ids) - return { - "input_ids": input_ids, - "labels": labels, - "attention_mask": input_ids.ne(shared.tokenizer.pad_token_id), - } - - train_template.clear() - - # == Prep the dataset, format, etc == - if raw_text_file not in ['None', '']: - train_template["template_type"] = "raw_text" - logger.info("Loading raw text file dataset...") - fullpath = clean_path('training/datasets', f'{raw_text_file}') - fullpath = Path(fullpath) - if fullpath.is_dir(): - logger.info('Training path directory {}'.format(raw_text_file)) - raw_text = "" - file_paths = sorted(fullpath.glob('*.txt'), key=lambda path: natural_keys(path.name)) - for file_path in file_paths: - if file_path.is_file(): - with file_path.open('r', encoding='utf-8') as file: - raw_text += file.read().replace('\r', '') - - logger.info(f"Loaded training file: {file_path.name}") - else: - with open(clean_path('training/datasets', f'{raw_text_file}.txt'), 'r', encoding='utf-8') as file: - raw_text = file.read().replace('\r', '') - - cut_string = hard_cut_string.replace('\\n', '\n') - eos_added = 0 - out_tokens = [] - for text_part in raw_text.split(cut_string): - if len(text_part.strip()) <= min_chars: - continue - - tokens = shared.tokenizer.encode(text_part) - if add_eos_token: - tokens.append(shared.tokenizer.eos_token_id) - eos_added += 1 - - step = cutoff_len - overlap_len - if step <= 
0: - yield f"Error: overlap_len ({overlap_len}) cannot be greater than or equal to cutoff_len ({cutoff_len})" - return - - out_tokens.extend(split_chunks(tokens, cutoff_len, step)) - - if eos_added > 0: - print(f"EOS added to {eos_added} text blocks") - - del raw_text # Note: could be a gig for a large dataset, so delete redundant data as we go to be safe on RAM - text_chunks = [shared.tokenizer.decode(x) for x in out_tokens] - del out_tokens - if newline_favor_len > 0: - text_chunks = [cut_chunk_for_newline(x, newline_favor_len) for x in text_chunks] - - train_data = Dataset.from_list([tokenize(x) for x in text_chunks]) - del text_chunks - eval_data = None - else: - if dataset in ['None', '']: - yield "Missing dataset choice input, cannot continue." - return - - if format in ['None', '']: - yield "Missing format choice input, cannot continue." - return - - train_template["template_type"] = "dataset" - - with open(clean_path('training/formats', f'{format}.json'), 'r', encoding='utf-8-sig') as formatFile: - format_data: dict[str, str] = json.load(formatFile) - - # == store training prompt == - for _, value in format_data.items(): - prompt_key = f"template_{len(train_template)}" - train_template[prompt_key] = value - - def generate_prompt(data_point: dict[str, str]): - for options, data in format_data.items(): - if set(options.split(',')) == set(x[0] for x in data_point.items() if (type(x[1]) is str and len(x[1].strip()) > 0)): - for key, val in data_point.items(): - if type(val) is str: - data = data.replace(f'%{key}%', val) - return data - raise RuntimeError(f'Data-point "{data_point}" has no keyset match within format "{list(format_data.keys())}"') - - def generate_and_tokenize_prompt(data_point): - prompt = generate_prompt(data_point) - return tokenize(prompt, add_eos_token) - - logger.info("Loading JSON datasets...") - data = load_dataset("json", data_files=clean_path('training/datasets', f'{dataset}.json')) - train_data = data['train'].map(generate_and_tokenize_prompt, new_fingerprint='%030x' % random.randrange(16**30)) - - if eval_dataset == 'None': - eval_data = None - else: - eval_data = load_dataset("json", data_files=clean_path('training/datasets', f'{eval_dataset}.json')) - eval_data = eval_data['train'].map(generate_and_tokenize_prompt, new_fingerprint='%030x' % random.randrange(16**30)) - - # == We MUST reload model if it went through any previous training, even failed one == - if shared.model_dirty_from_training: - selected_model = shared.model_name - if selected_model: - print("\033[1;31;1m(Model has been modified by previous training, it needs to be reloaded...)\033[0;37;0m") - try: - yield f"Reloading {selected_model}..." - reload_model() - if shared.model is not None: - print("Model reloaded OK, continue with training.") - else: - return f"Failed to load {selected_model}." 
- except: - exc = traceback.format_exc() - logger.error('Failed to reload the model.') - print(exc) - return exc.replace('\n', '\n\n') - - # == Start prepping the model itself == - if not hasattr(shared.model, 'lm_head') or hasattr(shared.model.lm_head, 'weight'): - logger.info("Getting model ready...") - prepare_model_for_kbit_training(shared.model) - - # base model is now frozen and should not be reused for any other LoRA training than this one - shared.model_dirty_from_training = True - - logger.info("Preparing for training...") - config = LoraConfig( - r=lora_rank, - lora_alpha=lora_alpha, - target_modules=list_target_modules(model_id), - lora_dropout=lora_dropout, - bias="none", - task_type="CAUSAL_LM" - ) - - # == Backup the existing adapter == - if not always_override: - backup_adapter(lora_file_path) - - # == get model trainable params - model_trainable_params, model_all_params = calc_trainable_parameters(shared.model) - - try: - logger.info("Creating LoRA model...") - lora_model = get_peft_model(shared.model, config) - if not always_override and Path(f"{lora_file_path}/adapter_model.bin").is_file(): - logger.info("Loading existing LoRA data...") - state_dict_peft = torch.load(f"{lora_file_path}/adapter_model.bin") - set_peft_model_state_dict(lora_model, state_dict_peft) - except: - yield traceback.format_exc().replace('\n', '\n\n') - return - - if shared.args.monkey_patch: - from alpaca_lora_4bit.autograd_4bit import Autograd4bitQuantLinear - from alpaca_lora_4bit.models import Linear4bitLt - for _, m in lora_model.named_modules(): - if isinstance(m, Autograd4bitQuantLinear) or isinstance(m, Linear4bitLt): - if m.is_v1_model: - m.zeros = m.zeros.half() - m.scales = m.scales.half() - - class Tracked(): - def __init__(self): - self.current_steps = 0 - self.max_steps = 0 - self.did_save = False - - tracked = Tracked() - actual_save_steps = math.ceil(save_steps / gradient_accumulation_steps) - - class Callbacks(transformers.TrainerCallback): - def on_step_begin(self, args: transformers.TrainingArguments, state: transformers.TrainerState, control: transformers.TrainerControl, **kwargs): - tracked.current_steps = state.global_step * gradient_accumulation_steps - tracked.max_steps = state.max_steps * gradient_accumulation_steps - if WANT_INTERRUPT: - control.should_epoch_stop = True - control.should_training_stop = True - elif state.global_step > 0 and actual_save_steps > 0 and state.global_step % actual_save_steps == 0: - lora_model.save_pretrained(f"{lora_file_path}/checkpoint-{tracked.current_steps}/") - # Save log - with open(f"{lora_file_path}/checkpoint-{tracked.current_steps}/training_log.json", 'w', encoding='utf-8') as file: - json.dump(train_log, file, indent=2) - # == Save training prompt == - with open(f"{lora_file_path}/checkpoint-{tracked.current_steps}/training_prompt.json", 'w', encoding='utf-8') as file: - json.dump(train_template, file, indent=2) - - def on_substep_end(self, args: transformers.TrainingArguments, state: transformers.TrainerState, control: transformers.TrainerControl, **kwargs): - tracked.current_steps += 1 - if WANT_INTERRUPT: - control.should_epoch_stop = True - control.should_training_stop = True - - def on_log(self, args: transformers.TrainingArguments, state: transformers.TrainerState, control: transformers.TrainerControl, logs, **kwargs): - train_log.update(logs) - train_log.update({"current_steps": tracked.current_steps}) - if WANT_INTERRUPT: - print("\033[1;31;1mInterrupted by user\033[0;37;0m") - - print(f"\033[1;30;40mStep: 
{tracked.current_steps} \033[0;37;0m", end='') - if 'loss' in logs: - loss = float(logs['loss']) - if loss <= stop_at_loss: - control.should_epoch_stop = True - control.should_training_stop = True - print(f"\033[1;31;1mStop Loss {stop_at_loss} reached.\033[0;37;0m") - - trainer = transformers.Trainer( - model=lora_model, - train_dataset=train_data, - eval_dataset=eval_data, - args=transformers.TrainingArguments( - report_to=report_to if report_to != "None" else None, - per_device_train_batch_size=micro_batch_size, - gradient_accumulation_steps=gradient_accumulation_steps, - warmup_steps=math.ceil(warmup_steps / gradient_accumulation_steps), - num_train_epochs=epochs, - learning_rate=actual_lr, - fp16=False if shared.args.cpu else True, - optim=optimizer, - logging_steps=2 if stop_at_loss > 0 else 5, - evaluation_strategy="steps" if eval_data is not None else "no", - eval_steps=math.ceil(eval_steps / gradient_accumulation_steps) if eval_data is not None else None, - save_strategy="steps" if eval_data is not None else "no", - output_dir=lora_file_path, - lr_scheduler_type=lr_scheduler_type, - load_best_model_at_end=eval_data is not None, - # TODO: Enable multi-device support - ddp_find_unused_parameters=None, - no_cuda=shared.args.cpu, - use_ipex=True if is_torch_xpu_available and not shared.args.cpu else False - ), - data_collator=transformers.DataCollatorForLanguageModeling(shared.tokenizer, mlm=False), - callbacks=list([Callbacks()]) - ) - - lora_model.config.use_cache = False - - if torch.__version__ >= "2" and sys.platform != "win32": - lora_model = torch.compile(lora_model) - - # == Save parameters for reuse == - with open(f"{lora_file_path}/training_parameters.json", 'w', encoding='utf-8') as file: - vars = locals() - json.dump({x: vars[x] for x in PARAMETERS}, file, indent=2) - - # == Save training prompt == - with open(f"{lora_file_path}/training_prompt.json", 'w', encoding='utf-8') as file: - json.dump(train_template, file, indent=2) - - # == Main run and monitor loop == - logger.info("Starting training...") - yield "Starting..." - - lora_trainable_param, lora_all_param = calc_trainable_parameters(lora_model) - - projections_string = ", ".join([projection.replace("_proj", "") for projection in list_target_modules(model_id)]) - - print(f"Training '{model_id}' model using ({projections_string}) projections") - - if lora_all_param > 0: - print(f"Trainable params: {lora_trainable_param:,d} ({100 * lora_trainable_param / lora_all_param:.4f} %), All params: {lora_all_param:,d} (Model: {model_all_params:,d})") - - train_log.update({"base_model_name": shared.model_name}) - train_log.update({"base_model_class": shared.model.__class__.__name__}) - train_log.update({"base_loaded_in_4bit": getattr(lora_model, "is_loaded_in_4bit", False)}) - train_log.update({"base_loaded_in_8bit": getattr(lora_model, "is_loaded_in_8bit", False)}) - train_log.update({"projections": projections_string}) - - if stop_at_loss > 0: - print(f"Monitoring loss \033[1;31;1m(Auto-Stop at: {stop_at_loss})\033[0;37;0m") - - if WANT_INTERRUPT: - yield "Interrupted before start." 
- return - - def log_train_dataset(trainer): - decoded_entries = [] - # Try to decode the entries and write the log file - try: - # Iterate over the first 10 elements in the dataset (or fewer if there are less than 10) - for i in range(min(10, len(trainer.train_dataset))): - decoded_text = shared.tokenizer.decode(trainer.train_dataset[i]['input_ids']) - decoded_entries.append({"value": decoded_text}) - - # Write the log file - Path('logs').mkdir(exist_ok=True) - with open(Path('logs/train_dataset_sample.json'), 'w') as json_file: - json.dump(decoded_entries, json_file, indent=4) - - logger.info("Log file 'train_dataset_sample.json' created in the 'logs' directory.") - except Exception as e: - logger.error(f"Failed to create log file due to error: {e}") - - def threaded_run(): - log_train_dataset(trainer) - trainer.train() - # Note: save in the thread in case the gradio thread breaks (eg browser closed) - lora_model.save_pretrained(lora_file_path) - logger.info("LoRA training run is completed and saved.") - # Save log - with open(f"{lora_file_path}/training_log.json", 'w', encoding='utf-8') as file: - json.dump(train_log, file, indent=2) - - thread = threading.Thread(target=threaded_run) - thread.start() - last_step = 0 - start_time = time.perf_counter() - - while thread.is_alive(): - time.sleep(0.5) - if WANT_INTERRUPT: - yield "Interrupting, please wait... *(Run will stop after the current training step completes.)*" - - elif tracked.current_steps != last_step: - last_step = tracked.current_steps - time_elapsed = time.perf_counter() - start_time - if time_elapsed <= 0: - timer_info = "" - total_time_estimate = 999 - else: - its = tracked.current_steps / time_elapsed - if its > 1: - timer_info = f"`{its:.2f}` it/s" - else: - timer_info = f"`{1.0/its:.2f}` s/it" - - total_time_estimate = (1.0 / its) * (tracked.max_steps) - - yield f"Running... **{tracked.current_steps}** / **{tracked.max_steps}** ... {timer_info}, {format_time(time_elapsed)} / {format_time(total_time_estimate)} ... {format_time(total_time_estimate - time_elapsed)} remaining" - - # Saving in the train thread might fail if an error occurs, so save here if so. - if not tracked.did_save: - logger.info("Training complete, saving...") - lora_model.save_pretrained(lora_file_path) - - if WANT_INTERRUPT: - logger.info("Training interrupted.") - yield f"Interrupted. Incomplete LoRA saved to `{lora_file_path}`." - else: - logger.info("Training complete!") - yield f"Done! LoRA saved to `{lora_file_path}`.\n\nBefore testing your new LoRA, make sure to first reload the model, as it is currently dirty from training." 
- - -def split_chunks(arr, size, step): - for i in range(0, len(arr), step): - yield arr[i:i + size] - - -def cut_chunk_for_newline(chunk: str, max_length: int): - if '\n' not in chunk: - return chunk - - first_newline = chunk.index('\n') - if first_newline < max_length: - chunk = chunk[first_newline + 1:] - - if '\n' not in chunk: - return chunk - - last_newline = chunk.rindex('\n') - if len(chunk) - last_newline < max_length: - chunk = chunk[:last_newline] - - return chunk - - -def format_time(seconds: float): - if seconds < 120: - return f"`{seconds:.0f}` seconds" - - minutes = seconds / 60 - if minutes < 120: - return f"`{minutes:.0f}` minutes" - - hours = minutes / 60 - return f"`{hours:.0f}` hours" diff --git a/spaces/lightli/bingo-newbing/src/app/loading.css b/spaces/lightli/bingo-newbing/src/app/loading.css deleted file mode 100644 index eaaab6a86a228334c4eca3c5368ae6f0f593d405..0000000000000000000000000000000000000000 --- a/spaces/lightli/bingo-newbing/src/app/loading.css +++ /dev/null @@ -1,68 +0,0 @@ -::-webkit-scrollbar { - width: 10px; - height: 10px; - display: none; -} - -::-webkit-scrollbar-button:start:decrement, -::-webkit-scrollbar-button:end:increment { - height: 30px; - background-color: transparent; -} - -::-webkit-scrollbar-track-piece { - background-color: #3b3b3b; - -webkit-border-radius: 16px; -} - -::-webkit-scrollbar-thumb:vertical { - height: 50px; - background-color: #666; - border: 1px solid #eee; - -webkit-border-radius: 6px; -} - -/* loading start */ -.loading-spinner { - display: flex; - justify-content: center; - align-items: center; - height: 100vh; - opacity: 1; - transition: opacity .8s ease-out; -} - -.loading-spinner.hidden { - opacity: 0; -} - -.loading-spinner>div { - width: 30px; - height: 30px; - background: linear-gradient(90deg, #2870EA 10.79%, #1B4AEF 87.08%); - - border-radius: 100%; - display: inline-block; - animation: sk-bouncedelay 1.4s infinite ease-in-out both; -} - -.loading-spinner .bounce1 { - animation-delay: -0.32s; -} - -.loading-spinner .bounce2 { - animation-delay: -0.16s; -} - -@keyframes sk-bouncedelay { - - 0%, - 80%, - 100% { - transform: scale(0); - } - - 40% { - transform: scale(1.0); - } -} diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/3d Girlz 2 Free 38.md b/spaces/lincquiQcaudo/Top-20-Diffusion/3d Girlz 2 Free 38.md deleted file mode 100644 index 118b9a756872cc25e6f1cae5348395b5b32dcc9e..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/3d Girlz 2 Free 38.md +++ /dev/null @@ -1,6 +0,0 @@ -

      3d girlz 2 free 38


      Download ··· https://bytlly.com/2uGyBe



      - -27 min / 2 years ago / jizzbunker ... 38 min / 2 months ago / pornone. #mom ... 21 min / 2 weeks ago / upornia ... Beautiful girls love really dirty hardcore stuff too. 1fdad05405
      -
      -
      -

      diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Driver - BCM2070A0 Hp Pavilion G6-2160se 64bit.rar.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Driver - BCM2070A0 Hp Pavilion G6-2160se 64bit.rar.md deleted file mode 100644 index e3ceb7c771e36bd5faf767b353e9b2a9cbcaa5f3..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Driver - BCM2070A0 Hp Pavilion G6-2160se 64bit.rar.md +++ /dev/null @@ -1,72 +0,0 @@ -
      -

      How to Download and Install BCM2070A0 Driver for HP Pavilion g6-2160se 64-bit

      -

      If you have an HP Pavilion g6-2160se notebook PC and you want to use the Bluetooth feature, you need to install the BCM2070A0 driver. This driver is required to enable the Broadcom Bluetooth device on your laptop. Without this driver, you may experience problems with connecting or pairing your Bluetooth devices.

      -

      In this article, we will show you how to download and install the BCM2070A0 driver for HP Pavilion g6-2160se 64-bit. We will also provide some tips on how to troubleshoot common Bluetooth issues.

      -

      Driver - BCM2070A0 hp pavilion g6-2160se 64bit.rar


      DOWNLOAD · https://bytlly.com/2uGw3U



      -

      Step 1: Download the BCM2070A0 Driver

      -

      The first step is to download the BCM2070A0 driver from the official HP website. You can use the following link to access the driver page:

      -

      HP Pavilion g6-2312ax Notebook PC Software and Driver Downloads | HP® Customer Support

      -

      On this page, you need to select your operating system and version. For example, if you are using Windows 10 64-bit, you need to select "Windows 10 (64-bit)" from the drop-down menu.

      -

      Then, you need to scroll down to the "Driver-Network" section and look for the "Broadcom Bluetooth Software" item. Click on the "Download" button next to it and save the file (sp75330.exe) to your computer.

      -
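If you like to double-check downloads before running them, the short Python sketch below simply confirms that sp75330.exe landed in your Downloads folder and prints its size and SHA-256 checksum (the folder path is an assumption; compare the checksum with the one on the HP page only if one is published there):

```python
# Minimal sketch: confirm the driver package downloaded and print its checksum.
# Assumes sp75330.exe was saved to the current user's Downloads folder.
import hashlib
from pathlib import Path

installer = Path.home() / "Downloads" / "sp75330.exe"

if installer.is_file():
    digest = hashlib.sha256(installer.read_bytes()).hexdigest()
    print(f"Found {installer} ({installer.stat().st_size} bytes)")
    print(f"SHA-256: {digest}")
else:
    print("sp75330.exe not found - download it from the HP driver page first.")
```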

      Step 2: Install the BCM2070A0 Driver

      -

      The next step is to install the BCM2070A0 driver on your HP Pavilion g6-2160se notebook PC. To do this, follow these steps:

      -

      -
        -
      • Double-click on the downloaded file (sp75330.exe) and follow the on-screen instructions.
      • -
      • When prompted, restart your computer to complete the installation.
      • -
      • After restarting, check if the Bluetooth icon appears in the system tray (near the clock).
      • -
      • If not, you may need to enable Bluetooth from the settings or use a keyboard shortcut (usually Fn + F12) to turn it on.
      • -
      -
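To confirm the installation took effect after the restart, you can also query the Windows Bluetooth Support Service from a script instead of looking for the tray icon. The following is only a rough sketch; it assumes the service uses its usual name, bthserv, which may differ on some systems:

```python
# Minimal sketch: check whether the Bluetooth Support Service is present and running.
# "bthserv" is the usual service name on Windows; adjust it if your system differs.
import subprocess

result = subprocess.run(["sc", "query", "bthserv"], capture_output=True, text=True)

if "RUNNING" in result.stdout:
    print("Bluetooth Support Service is running - the driver looks installed.")
else:
    print("Service not running or not found - recheck the driver installation.")
    print(result.stdout or result.stderr)
```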

      Step 3: Troubleshoot Common Bluetooth Issues

      -

      After installing the BCM2070A0 driver for HP Pavilion g6-2160se 64-bit, you should be able to use Bluetooth on your laptop. However, if you encounter any problems with connecting or pairing your Bluetooth devices, you can try some of these tips:

      -
        -
      • Make sure your Bluetooth device is turned on and discoverable.
      • -
      • Make sure your laptop and your Bluetooth device are within range of each other.
      • -
      • Make sure there are no other devices interfering with the Bluetooth signal (such as microwaves, wireless routers, etc.).
      • -
      • Update your Bluetooth device firmware if available.
      • -
      • Remove and re-add your Bluetooth device from the settings.
      • -
      • Run the Windows troubleshooter for Bluetooth devices.
      • -
      • Update your laptop drivers and BIOS if available.
      • -
-

      Step 4: Test Your Bluetooth Connection

      -

      After installing the BCM2070A0 driver for HP Pavilion g6-2160se 64-bit, you should be able to connect your Bluetooth devices to your laptop. To test your Bluetooth connection, follow these steps:

      -
        -
      • Turn on your Bluetooth device and make sure it is discoverable.
      • -
      • On your laptop, click on the Bluetooth icon in the system tray and select "Add a device".
      • -
      • Wait for your laptop to scan for nearby Bluetooth devices and select your device from the list.
      • -
      • Follow the instructions on the screen to pair your device with your laptop.
      • -
      • If prompted, enter a PIN code or confirm a passkey to complete the pairing process.
      • -
      • Once paired, you should see your device name under the Bluetooth devices section in the settings.
      • -
      • You can now use your Bluetooth device with your laptop. For example, you can stream audio, transfer files, or use a wireless mouse or keyboard.
      • -
      -
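If you prefer to verify the pairing from the command line, the hedged sketch below asks PowerShell to list the Bluetooth devices Windows currently knows about; your laptop's radio and the newly paired device should both appear (Get-PnpDevice is available on recent Windows versions):

```python
# Minimal sketch: list Bluetooth devices known to Windows after pairing.
# Relies on PowerShell's Get-PnpDevice cmdlet (available on recent Windows versions).
import subprocess

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-PnpDevice -Class Bluetooth | Select-Object Status, FriendlyName"],
    capture_output=True, text=True,
)
print(result.stdout or "No Bluetooth devices reported - check pairing and the driver.")
```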

      Step 5: Download Other Drivers for HP Pavilion g6-2160se 64-bit

      -

      Besides the BCM2070A0 driver for HP Pavilion g6-2160se 64-bit, you may also need to download other drivers for your laptop. Drivers are software components that enable your hardware devices to communicate with your operating system. Without the proper drivers, your laptop may not function properly or optimally.

      -

      To download other drivers for HP Pavilion g6-2160se 64-bit, you can visit the same driver page that we used in step 1:

      -

      HP Pavilion g6-2312ax Notebook PC Software and Driver Downloads | HP® Customer Support

      -

      On this page, you can find drivers for various categories such as audio, graphics, network, chipset, BIOS, etc. You can download and install the drivers that are compatible with your operating system and version. You can also use the HP Support Assistant tool to automatically find and install the latest drivers for your laptop.

      -

      Conclusion

      -

      In this article, we have shown you how to download and install the BCM2070A0 driver for HP Pavilion g6-2160se 64-bit. This driver is essential for enabling the Broadcom Bluetooth device on your laptop. We have also provided some tips on how to troubleshoot common Bluetooth issues and how to download other drivers for your laptop. We hope this article has been helpful and informative. If you have any questions or feedback, please leave a comment below.


      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Dt07 Img Fix For Pes 2013 Skidro.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Dt07 Img Fix For Pes 2013 Skidro.md deleted file mode 100644 index 6109d5c1b68c22a5d88fad7313e3d588fe2422c9..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Dt07 Img Fix For Pes 2013 Skidro.md +++ /dev/null @@ -1,38 +0,0 @@ -
      -

      How to Fix Dt07 Img Error in PES 2013 Skidrow Version

      -

      If you are a fan of PES 2013, you might have encountered a common error that prevents the game from launching properly. This error is caused by a corrupted or missing file called dt07.img, which is located in the img folder of the game installation directory. In this article, we will show you how to fix this error and enjoy PES 2013 without any hassle.

      -

      Dt07 Img Fix For Pes 2013 Skidro


      DOWNLOADhttps://bytlly.com/2uGwkh



      -

      What is Dt07 Img and Why is it Important?

      -

      Dt07 img is a file that contains the graphics and textures of the stadiums in PES 2013. It is essential for the game to run smoothly and display the realistic visuals of the different venues. However, sometimes this file can get damaged or deleted due to various reasons, such as virus infection, disk error, or improper installation. When this happens, the game will fail to load the stadiums and show an error message like "dt07.img not found" or "dt07.img corrupted".

      -

      How to Fix Dt07 Img Error in PES 2013 Skidrow Version?

      -

      There are two possible ways to fix this error: either by downloading a new dt07 img file from a reliable source or by repairing the existing one using a tool. Here are the steps for each method:

      -

      Method 1: Download a New Dt07 Img File

      -

      This is the easiest and fastest way to fix the error. You just need to find a trustworthy website that offers a working dt07 img file for PES 2013 Skidrow version and download it. Then, you need to replace the old file with the new one in the img folder of the game installation directory. Here are some websites that you can try:

      -

      - -

      After downloading the file, follow these steps:

      -
        -
      1. Extract the file using WinRAR or any other software.
      2. -
      3. Copy the dt07.img file and paste it in the img folder of the game installation directory. The default location is C:\Program Files (x86)\KONAMI\Pro Evolution Soccer 2013\img.
      4. -
      5. Replace the old file when prompted.
      6. -
      7. Launch the game and enjoy.
      8. -
      -
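If you prefer to script the copy-and-replace part, here is a rough Python sketch. It assumes the default installation path mentioned above and that the downloaded dt07.img is sitting in your Downloads folder, and it keeps a backup of the old file before overwriting it:

```python
# Minimal sketch: back up the old dt07.img and drop in the downloaded replacement.
# Paths are assumptions - adjust them to your own install and download locations.
import shutil
from pathlib import Path

img_dir = Path(r"C:\Program Files (x86)\KONAMI\Pro Evolution Soccer 2013\img")
old_img = img_dir / "dt07.img"
new_img = Path.home() / "Downloads" / "dt07.img"

if not new_img.is_file():
    raise SystemExit("Downloaded dt07.img not found - check your Downloads folder.")
if old_img.exists():
    shutil.copy2(old_img, img_dir / "dt07.img.bak")  # keep the original just in case
shutil.copy2(new_img, old_img)
print("dt07.img replaced - launch PES 2013 and check that the error is gone.")
```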

      Method 2: Repair the Existing Dt07 Img File

      -

      This method is more complicated and time-consuming, but it can also work if you don't want to download a new file. You need to use a tool called PES 2013 File Loader by Jenkey1002, which allows you to modify and repair various files in PES 2013. You can download it from here: https://pesedit.org/pes-2013-file-loader-by-jenkey1002/.

      -

      After downloading and installing the tool, follow these steps:

      -
        -
      1. Open PES 2013 File Loader by Jenkey1002 and click on Tools.
      2. -
      3. Select File Explorer from the drop-down menu.
      4. -
      5. Navigate to the img folder of the game installation directory and find dt07.img.
      6. -
      7. Right-click on dt07.img and select Repair.
      8. -
      9. Wait for the process to finish and close the tool.
      10. -
      11. Launch the game and check if the error is fixed.
      12. -
      -

      Conclusion

      -

Dt07 img error is a common problem that affects many PES 2013 players who use the Skidrow version. Fortunately, it is easy to fix: either replace the file with a freshly downloaded dt07.img or repair the existing one with PES 2013 File Loader, as described above.

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Fce Listening Speaking Skills 2 Teachers Book Download.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Fce Listening Speaking Skills 2 Teachers Book Download.md deleted file mode 100644 index 9bafb2c0a346d079ec35dbd9acd3fe7686430765..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Fce Listening Speaking Skills 2 Teachers Book Download.md +++ /dev/null @@ -1,63 +0,0 @@ - -

      How to Download and Use the FCE Listening and Speaking Skills 2 Teachers Book

      -

      The FCE Listening and Speaking Skills 2 Teachers Book is a comprehensive and practical resource for teachers who want to help their students prepare for the revised Cambridge FCE exam. It accompanies the FCE Listening and Speaking Skills 2 Student's Book, which provides a variety of exercises and tasks to develop the listening and speaking skills required for the exam.

      -

      fce listening speaking skills 2 teachers book download


      Download Ziphttps://bytlly.com/2uGxVk



      -

      In this article, we will show you how to download the FCE Listening and Speaking Skills 2 Teachers Book from various online platforms, and how to use it effectively in your classroom or for self-study purposes. We will also review the main features and benefits of the book, and give you some tips on how to help your students achieve success in the FCE exam.

      -

      What is the FCE Listening and Speaking Skills 2 Teachers Book?

      -

      The FCE Listening and Speaking Skills 2 Teachers Book is a complete and user-friendly resource that contains everything you need to plan and deliver engaging and effective lessons for your FCE students. It covers all the topics, skills, and language areas that are tested in the listening and speaking parts of the exam, as well as the strategies and techniques that can help your students achieve a high score.

      -

      The FCE Listening and Speaking Skills 2 Teachers Book includes:

      -

      -
        -
      • Detailed teaching notes for each unit of the Student's Book, with clear aims, objectives, procedures, materials, timing, and feedback techniques.
      • -
      • Full answer keys and transcripts for all the listening and speaking exercises in the Student's Book, as well as useful tips and explanations for different types of questions.
      • -
      • Four complete practice tests that mirror the format and level of difficulty of the actual exam, with audio CDs and answer keys.
      • -
      • A wealth of supplementary material, such as photocopiable worksheets, role cards, games, quizzes, and extra listening and speaking tasks.
      • -
      -

      The FCE Listening and Speaking Skills 2 Teachers Book is based on the latest research and feedback from teachers and examiners, and reflects the changes and updates introduced in the revised Cambridge FCE exam. It is suitable for both classroom use and self-study, and can be easily adapted to suit different learning styles and needs.

      -

      How to Download the FCE Listening and Speaking Skills 2 Teachers Book?

      -

      If you want to download the FCE Listening and Speaking Skills 2 Teachers Book, you have several options available. You can:

      -
        -
      1. Visit the official website of Express Publishing, the publisher of the book, and order a printed copy or a digital version. You can also access some free sample pages and audio tracks from their website.
      2. -
      3. Search for the book on various online platforms, such as Scribd, PDFSlide, or VDocuments, where you can find free or paid downloads of the book in PDF format.
      4. -
      5. Use a search engine, such as Google or Bing, to find other websites or blogs that offer downloads or links to download the book. However, be careful about the quality and legality of these sources, as they might not be authorized or reliable.
      6. -
      -

      Whichever option you choose, make sure you have a stable internet connection and enough storage space on your device. You might also need a PDF reader or an audio player to access the content of the book.

      -

      How to Use the FCE Listening

      -

      How to Use the FCE Listening and Speaking Skills 2 Teachers Book?

      -

      The FCE Listening and Speaking Skills 2 Teachers Book is designed to be flexible and user-friendly, so you can use it in any way that suits your teaching style, objectives, and schedule. However, here are some general guidelines on how to make the most of it:

      -
        -
      1. Before you start teaching a unit, read through the teaching notes carefully and familiarize yourself with the aims, objectives, procedures, materials, and timing of each lesson. You can also listen to the audio tracks beforehand to check the quality and clarity of the recordings.
      2. -
      3. During each lesson, follow the suggested steps in the teaching notes, but feel free to adapt them according to your students' needs, interests, preferences, and feedback. You can also use the supplementary material provided in the book or create your own activities to supplement or replace some of the tasks.
      4. -
      5. After each lesson, review the main points covered in the unit with your students, ask them to evaluate their own performance and progress, give them constructive feedback on their strengths and areas for improvement, and assign them homework or further practice if necessary.
      6. -
      7. Before you administer a practice test, make sure your students are familiar with the format, instructions, marking criteria, and time limits of the exam. You can also give them some tips on how to approach different types of questions, avoid common mistakes, manage their time effectively, etc.
      8. -
      9. After you administer a practice test, go over the answers with your students, explain why some options are correct or incorrect, point out any difficulties or errors they might have encountered, praise their achievements, and suggest ways they can improve their performance in future tests.
      10. -
      -

      The FCE Listening and Speaking Skills 2 Teachers Book is a comprehensive and practical resource that will help you prepare your students for the revised Cambridge FCE exam with confidence. It will also help them develop their listening -and speaking skills in English for various purposes beyond the exam. Download it today -and see how it can transform your teaching experience!

      -
      What are the Benefits of the FCE Listening and Speaking Skills 2 Teachers Book?
      -

      The FCE Listening and Speaking Skills 2 Teachers Book has many benefits for both teachers and students who are preparing for the revised Cambridge FCE exam. Some of these benefits are:

      -
        -
      • It follows a clear and logical structure that allows you to progress from easier to more challenging tasks, and from familiar to more unfamiliar topics.
      • -
      • It exposes your students to a variety of authentic texts and recordings that reflect the real-life situations and contexts they might encounter in the exam or in their everyday lives.
      • -
      • It offers a balanced mix of practice and revision activities that target both accuracy and fluency, as well as vocabulary, grammar, pronunciation, and discourse skills.
      • -
      • It incorporates regular self-assessment and peer-assessment opportunities that enable your students to monitor their own progress and identify their strengths and weaknesses.
      • -
      • It fosters a positive and supportive learning environment that encourages your students to interact with each other, share their opinions, express their feelings, and have fun while learning.
      • -
      -

      The FCE Listening and Speaking Skills 2 Teachers Book is not only a useful resource for exam preparation, but also a valuable tool for developing your students' overall communicative competence and fluency in English.

      -
      How to Get the Most Out of the FCE Listening and Speaking Skills 2 Teachers Book?
      -

      The FCE Listening and Speaking Skills 2 Teachers Book is a flexible and user-friendly resource that can be used in different ways depending on your teaching style, objectives, and schedule. However, here are some tips on how to get the most out of it:

      -
        -
      • Use the book as a guide, not a script. You can follow the suggested steps in the teaching notes, but don't be afraid to modify them according to your students' needs, interests, preferences, and feedback. You can also use your own creativity and experience to create or adapt activities that suit your teaching context.
      • -
      • Use the book as a source of information, not a substitute for it. You can rely on the answer keys and transcripts provided in the book, but don't forget to check the latest updates and changes in the exam format, instructions, marking criteria, etc. You can also consult other sources of information, such as official websites, books, articles, podcasts, etc., to enrich your knowledge and understanding of the exam.
      • -
      • Use the book as a support, not a constraint. You can use the supplementary material provided in the book, such as photocopiable worksheets, role cards, games, quizzes, etc., but don't limit yourself to them. You can also use other materials that are relevant and appropriate for your students, such as newspapers, magazines, videos, songs, etc., to make your lessons more varied and engaging.
      • -
      -

      The FCE Listening and Speaking Skills 2 Teachers Book is a comprehensive and practical resource that will help you prepare your students for the revised Cambridge FCE exam with confidence. It will also help them develop their listening -and speaking skills in English for various purposes beyond the exam. Download it today -and see how it can transform your teaching experience!

      -Conclusion -

      The FCE Listening and Speaking Skills 2 Teachers Book is a comprehensive and practical resource that will help you prepare your students for the revised Cambridge FCE exam with confidence. It will also help them develop their listening and speaking skills in English for various purposes beyond the exam. It includes detailed teaching notes, full answer keys and transcripts, four complete practice tests, and a wealth of supplementary material. You can download it from various online platforms, or order a printed copy from your local bookstore. You can also use it in different ways depending on your teaching style, objectives, and schedule. However, remember to use it as a guide, not a script; as a source of information, not a substitute for it; and as a support, not a constraint. The FCE Listening and Speaking Skills 2 Teachers Book is not only a useful resource for exam preparation, but also a valuable tool for developing your students' overall communicative competence and fluency in English.


      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/liuxiaopai/BelleGroup-BELLE-7B-2M/app.py b/spaces/liuxiaopai/BelleGroup-BELLE-7B-2M/app.py deleted file mode 100644 index 7908906e38209bf12cc8b8c84b5bb05706cb5454..0000000000000000000000000000000000000000 --- a/spaces/liuxiaopai/BelleGroup-BELLE-7B-2M/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/BelleGroup/BELLE-7B-2M").launch() \ No newline at end of file diff --git a/spaces/livebook-dev/livebook/README.md b/spaces/livebook-dev/livebook/README.md deleted file mode 100644 index 69e3efb33215d67b5c429ec8c1bb4a8f567a2188..0000000000000000000000000000000000000000 --- a/spaces/livebook-dev/livebook/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Livebook -emoji: 📓 -colorFrom: pink -colorTo: purple -sdk: docker -fullWidth: true ---- - -You can install and run [Livebook](https://livebook.dev/) inside a Hugging Face Space. Here's [a tutorial](https://huggingface.co/docs/hub/spaces-sdks-docker-livebook) on how to do that. \ No newline at end of file diff --git a/spaces/lixq/bingo61/src/components/chat-list.tsx b/spaces/lixq/bingo61/src/components/chat-list.tsx deleted file mode 100644 index 624a78ef0d7be0f1192cf02a81e2e9cf214cb193..0000000000000000000000000000000000000000 --- a/spaces/lixq/bingo61/src/components/chat-list.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import React from 'react' - -import { Separator } from '@/components/ui/separator' -import { ChatMessage } from '@/components/chat-message' -import { ChatMessageModel } from '@/lib/bots/bing/types' - -export interface ChatList { - messages: ChatMessageModel[] -} - -export function ChatList({ messages }: ChatList) { - if (!messages.length) { - return null - } - - return ( -
      - {messages.map((message, index) => ( - - - {index < messages.length - 1 && ( - - )} - - ))} -
      - ) -} diff --git a/spaces/lj1995/vocal2guitar/infer-web.py b/spaces/lj1995/vocal2guitar/infer-web.py deleted file mode 100644 index 21efc811ba35fe80fd91a5521fc687f3eff0995b..0000000000000000000000000000000000000000 --- a/spaces/lj1995/vocal2guitar/infer-web.py +++ /dev/null @@ -1,1856 +0,0 @@ -import torch, os, traceback, sys, warnings, shutil, numpy as np - -os.environ["no_proxy"] = "localhost, 127.0.0.1, ::1" -import threading -from time import sleep -from subprocess import Popen -import faiss -from random import shuffle - -now_dir = os.getcwd() -sys.path.append(now_dir) -tmp = os.path.join(now_dir, "TEMP") -shutil.rmtree(tmp, ignore_errors=True) -shutil.rmtree("%s/runtime/Lib/site-packages/infer_pack" % (now_dir), ignore_errors=True) -shutil.rmtree("%s/runtime/Lib/site-packages/uvr5_pack" % (now_dir), ignore_errors=True) -os.makedirs(tmp, exist_ok=True) -os.makedirs(os.path.join(now_dir, "logs"), exist_ok=True) -os.makedirs(os.path.join(now_dir, "weights"), exist_ok=True) -os.environ["TEMP"] = tmp -warnings.filterwarnings("ignore") -torch.manual_seed(114514) -from i18n import I18nAuto -import ffmpeg -from MDXNet import MDXNetDereverb - -i18n = I18nAuto() -i18n.print() -# 判断是否有能用来训练和加速推理的N卡 -ngpu = torch.cuda.device_count() -gpu_infos = [] -mem = [] -if (not torch.cuda.is_available()) or ngpu == 0: - if_gpu_ok = False -else: - if_gpu_ok = False - for i in range(ngpu): - gpu_name = torch.cuda.get_device_name(i) - if ( - "10" in gpu_name - or "16" in gpu_name - or "20" in gpu_name - or "30" in gpu_name - or "40" in gpu_name - or "A2" in gpu_name.upper() - or "A3" in gpu_name.upper() - or "A4" in gpu_name.upper() - or "P4" in gpu_name.upper() - or "A50" in gpu_name.upper() - or "A60" in gpu_name.upper() - or "70" in gpu_name - or "80" in gpu_name - or "90" in gpu_name - or "M4" in gpu_name.upper() - or "T4" in gpu_name.upper() - or "TITAN" in gpu_name.upper() - ): # A10#A100#V100#A40#P40#M40#K80#A4500 - if_gpu_ok = True # 至少有一张能用的N卡 - gpu_infos.append("%s\t%s" % (i, gpu_name)) - mem.append( - int( - torch.cuda.get_device_properties(i).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - ) -if if_gpu_ok == True and len(gpu_infos) > 0: - gpu_info = "\n".join(gpu_infos) - default_batch_size = min(mem) // 2 -else: - gpu_info = i18n("很遗憾您这没有能用的显卡来支持您训练") - default_batch_size = 1 -gpus = "-".join([i[0] for i in gpu_infos]) -from infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -import soundfile as sf -from fairseq import checkpoint_utils -import gradio as gr -import logging -from vc_infer_pipeline import VC -from config import Config -from infer_uvr5 import _audio_pre_, _audio_pre_new -from my_utils import load_audio -from train.process_ckpt import show_info, change_info, merge, extract_small_model - -config = Config() -# from trainset_preprocess_pipeline import PreProcess -logging.getLogger("numba").setLevel(logging.WARNING) - - -class ToolButton(gr.Button, gr.components.FormComponent): - """Small button with single emoji as text, fits inside gradio forms""" - - def __init__(self, **kwargs): - super().__init__(variant="tool", **kwargs) - - def get_block_name(self): - return "button" - - -hubert_model = None - - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = 
hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - - -weight_root = "weights" -weight_uvr5_root = "uvr5_weights" -index_root = "logs" -names = [] -for name in os.listdir(weight_root): - if name.endswith(".pth"): - names.append(name) -index_paths = [] -for root, dirs, files in os.walk(index_root, topdown=False): - for name in files: - if name.endswith(".index") and "trained" not in name: - index_paths.append("%s/%s" % (root, name)) -uvr5_names = [] -for name in os.listdir(weight_uvr5_root): - if name.endswith(".pth") or "onnx" in name: - uvr5_names.append(name.replace(".pth", "")) - - -def vc_single( - sid, - input_audio_path, - f0_up_key, - f0_file, - f0_method, - file_index, - file_index2, - # file_big_npy, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, -): # spk_item, input_audio0, vc_transform0,f0_file,f0method0 - global tgt_sr, net_g, vc, hubert_model, version - if input_audio_path is None: - return "You need to upload an audio", None - f0_up_key = int(f0_up_key) - try: - audio = load_audio(input_audio_path, 16000) - audio_max = np.abs(audio).max() / 0.95 - if audio_max > 1: - audio /= audio_max - times = [0, 0, 0] - if hubert_model == None: - load_hubert() - if_f0 = cpt.get("f0", 1) - file_index = ( - ( - file_index.strip(" ") - .strip('"') - .strip("\n") - .strip('"') - .strip(" ") - .replace("trained", "added") - ) - if file_index != "" - else file_index2 - ) # 防止小白写错,自动帮他替换掉 - # file_big_npy = ( - # file_big_npy.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - # ) - audio_opt = vc.pipeline( - hubert_model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=f0_file, - ) - if resample_sr >= 16000 and tgt_sr != resample_sr: - tgt_sr = resample_sr - index_info = ( - "Using index:%s." % file_index - if os.path.exists(file_index) - else "Index not used." 
- ) - return "Success.\n %s\nTime:\n npy:%ss, f0:%ss, infer:%ss" % ( - index_info, - times[0], - times[1], - times[2], - ), (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - - -def vc_multi( - sid, - dir_path, - opt_root, - paths, - f0_up_key, - f0_method, - file_index, - file_index2, - # file_big_npy, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - format1, -): - try: - dir_path = ( - dir_path.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - opt_root = opt_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - os.makedirs(opt_root, exist_ok=True) - try: - if dir_path != "": - paths = [os.path.join(dir_path, name) for name in os.listdir(dir_path)] - else: - paths = [path.name for path in paths] - except: - traceback.print_exc() - paths = [path.name for path in paths] - infos = [] - for path in paths: - info, opt = vc_single( - sid, - path, - f0_up_key, - None, - f0_method, - file_index, - file_index2, - # file_big_npy, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, - ) - if "Success" in info: - try: - tgt_sr, audio_opt = opt - if format1 in ["wav", "flac"]: - sf.write( - "%s/%s.%s" % (opt_root, os.path.basename(path), format1), - audio_opt, - tgt_sr, - ) - else: - path = "%s/%s.wav" % (opt_root, os.path.basename(path)) - sf.write( - path, - audio_opt, - tgt_sr, - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format1) - ) - except: - info += traceback.format_exc() - infos.append("%s->%s" % (os.path.basename(path), info)) - yield "\n".join(infos) - yield "\n".join(infos) - except: - yield traceback.format_exc() - - -def uvr(model_name, inp_root, save_root_vocal, paths, save_root_ins, agg, format0): - infos = [] - try: - inp_root = inp_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - save_root_vocal = ( - save_root_vocal.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) - save_root_ins = ( - save_root_ins.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) - if model_name == "onnx_dereverb_By_FoxJoy": - pre_fun = MDXNetDereverb(15) - else: - func = _audio_pre_ if "DeEcho" not in model_name else _audio_pre_new - pre_fun = func( - agg=int(agg), - model_path=os.path.join(weight_uvr5_root, model_name + ".pth"), - device=config.device, - is_half=config.is_half, - ) - if inp_root != "": - paths = [os.path.join(inp_root, name) for name in os.listdir(inp_root)] - else: - paths = [path.name for path in paths] - for path in paths: - inp_path = os.path.join(inp_root, path) - need_reformat = 1 - done = 0 - try: - info = ffmpeg.probe(inp_path, cmd="ffprobe") - if ( - info["streams"][0]["channels"] == 2 - and info["streams"][0]["sample_rate"] == "44100" - ): - need_reformat = 0 - pre_fun._path_audio_( - inp_path, save_root_ins, save_root_vocal, format0 - ) - done = 1 - except: - need_reformat = 1 - traceback.print_exc() - if need_reformat == 1: - tmp_path = "%s/%s.reformatted.wav" % (tmp, os.path.basename(inp_path)) - os.system( - "ffmpeg -i %s -vn -acodec pcm_s16le -ac 2 -ar 44100 %s -y" - % (inp_path, tmp_path) - ) - inp_path = tmp_path - try: - if done == 0: - pre_fun._path_audio_( - inp_path, save_root_ins, save_root_vocal, format0 - ) - infos.append("%s->Success" % (os.path.basename(inp_path))) - yield "\n".join(infos) - except: - infos.append( - "%s->%s" % (os.path.basename(inp_path), traceback.format_exc()) - ) - yield "\n".join(infos) - except: - 
infos.append(traceback.format_exc()) - yield "\n".join(infos) - finally: - try: - if model_name == "onnx_dereverb_By_FoxJoy": - del pre_fun.pred.model - del pre_fun.pred.model_ - else: - del pre_fun.model - del pre_fun - except: - traceback.print_exc() - print("clean_empty_cache") - if torch.cuda.is_available(): - torch.cuda.empty_cache() - yield "\n".join(infos) - - -# 一个选项卡全局只能有一个音色 -def get_vc(sid): - global n_spk, tgt_sr, net_g, vc, cpt, version - if sid == "" or sid == []: - global hubert_model - if hubert_model != None: # 考虑到轮询, 需要加个判断看是否 sid 是由有模型切换到无模型的 - print("clean_empty_cache") - del net_g, n_spk, vc, hubert_model, tgt_sr # ,cpt - hubert_model = net_g = n_spk = vc = hubert_model = tgt_sr = None - if torch.cuda.is_available(): - torch.cuda.empty_cache() - ###楼下不这么折腾清理不干净 - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g, cpt - if torch.cuda.is_available(): - torch.cuda.empty_cache() - cpt = None - return {"visible": False, "__type__": "update"} - person = "%s/%s" % (weight_root, sid) - print("loading %s" % person) - cpt = torch.load(person, map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - n_spk = cpt["config"][-3] - return {"visible": True, "maximum": n_spk, "__type__": "update"} - - -def change_choices(): - names = [] - for name in os.listdir(weight_root): - if name.endswith(".pth"): - names.append(name) - index_paths = [] - for root, dirs, files in os.walk(index_root, topdown=False): - for name in files: - if name.endswith(".index") and "trained" not in name: - index_paths.append("%s/%s" % (root, name)) - return {"choices": sorted(names), "__type__": "update"}, { - "choices": sorted(index_paths), - "__type__": "update", - } - - -def clean(): - return {"value": "", "__type__": "update"} - - -sr_dict = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -def if_done(done, p): - while 1: - if p.poll() == None: - sleep(0.5) - else: - break - done[0] = True - - -def if_done_multi(done, ps): - while 1: - # poll==None代表进程未结束 - # 只要有一个进程未结束都不停 - flag = 1 - for p in ps: - if p.poll() == None: - flag = 0 - sleep(0.5) - break - if flag == 1: - break - done[0] = True - - -def preprocess_dataset(trainset_dir, exp_dir, sr, n_p): - sr = sr_dict[sr] - os.makedirs("%s/logs/%s" % (now_dir, exp_dir), exist_ok=True) - f = open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "w") - f.close() - cmd = ( - config.python_cmd - + " trainset_preprocess_pipeline_print.py %s %s %s %s/logs/%s " - % 
(trainset_dir, sr, n_p, now_dir, exp_dir) - + str(config.noparallel) - ) - print(cmd) - p = Popen(cmd, shell=True) # , stdin=PIPE, stdout=PIPE,stderr=PIPE,cwd=now_dir - ###煞笔gr, popen read都非得全跑完了再一次性读取, 不用gr就正常读一句输出一句;只能额外弄出一个文本流定时读 - done = [False] - threading.Thread( - target=if_done, - args=( - done, - p, - ), - ).start() - while 1: - with open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "r") as f: - yield (f.read()) - sleep(1) - if done[0] == True: - break - with open("%s/logs/%s/preprocess.log" % (now_dir, exp_dir), "r") as f: - log = f.read() - print(log) - yield log - - -# but2.click(extract_f0,[gpus6,np7,f0method8,if_f0_3,trainset_dir4],[info2]) -def extract_f0_feature(gpus, n_p, f0method, if_f0, exp_dir, version19): - gpus = gpus.split("-") - os.makedirs("%s/logs/%s" % (now_dir, exp_dir), exist_ok=True) - f = open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "w") - f.close() - if if_f0: - cmd = config.python_cmd + " extract_f0_print.py %s/logs/%s %s %s" % ( - now_dir, - exp_dir, - n_p, - f0method, - ) - print(cmd) - p = Popen(cmd, shell=True, cwd=now_dir) # , stdin=PIPE, stdout=PIPE,stderr=PIPE - ###煞笔gr, popen read都非得全跑完了再一次性读取, 不用gr就正常读一句输出一句;只能额外弄出一个文本流定时读 - done = [False] - threading.Thread( - target=if_done, - args=( - done, - p, - ), - ).start() - while 1: - with open( - "%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r" - ) as f: - yield (f.read()) - sleep(1) - if done[0] == True: - break - with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f: - log = f.read() - print(log) - yield log - ####对不同part分别开多进程 - """ - n_part=int(sys.argv[1]) - i_part=int(sys.argv[2]) - i_gpu=sys.argv[3] - exp_dir=sys.argv[4] - os.environ["CUDA_VISIBLE_DEVICES"]=str(i_gpu) - """ - leng = len(gpus) - ps = [] - for idx, n_g in enumerate(gpus): - cmd = ( - config.python_cmd - + " extract_feature_print.py %s %s %s %s %s/logs/%s %s" - % ( - config.device, - leng, - idx, - n_g, - now_dir, - exp_dir, - version19, - ) - ) - print(cmd) - p = Popen( - cmd, shell=True, cwd=now_dir - ) # , shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=now_dir - ps.append(p) - ###煞笔gr, popen read都非得全跑完了再一次性读取, 不用gr就正常读一句输出一句;只能额外弄出一个文本流定时读 - done = [False] - threading.Thread( - target=if_done_multi, - args=( - done, - ps, - ), - ).start() - while 1: - with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f: - yield (f.read()) - sleep(1) - if done[0] == True: - break - with open("%s/logs/%s/extract_f0_feature.log" % (now_dir, exp_dir), "r") as f: - log = f.read() - print(log) - yield log - - -def change_sr2(sr2, if_f0_3, version19): - vis_v = True if sr2 == "40k" else False - if sr2 != "40k": - version19 = "v1" - path_str = "" if version19 == "v1" else "_v2" - version_state = {"visible": vis_v, "__type__": "update"} - if vis_v == False: - version_state["value"] = "v1" - f0_str = "f0" if if_f0_3 else "" - return ( - "pretrained%s/%sG%s.pth" % (path_str, f0_str, sr2), - "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2), - version_state, - ) - - -def change_version19(sr2, if_f0_3, version19): - path_str = "" if version19 == "v1" else "_v2" - f0_str = "f0" if if_f0_3 else "" - return "pretrained%s/%sG%s.pth" % ( - path_str, - f0_str, - sr2, - ), "pretrained%s/%sD%s.pth" % (path_str, f0_str, sr2) - - -def change_f0(if_f0_3, sr2, version19): # f0method8,pretrained_G14,pretrained_D15 - path_str = "" if version19 == "v1" else "_v2" - if if_f0_3: - return ( - {"visible": True, "__type__": "update"}, - "pretrained%s/f0G%s.pth" % (path_str, sr2), - 
"pretrained%s/f0D%s.pth" % (path_str, sr2), - ) - return ( - {"visible": False, "__type__": "update"}, - "pretrained%s/G%s.pth" % (path_str, sr2), - "pretrained%s/D%s.pth" % (path_str, sr2), - ) - - -# but3.click(click_train,[exp_dir1,sr2,if_f0_3,save_epoch10,total_epoch11,batch_size12,if_save_latest13,pretrained_G14,pretrained_D15,gpus16]) -def click_train( - exp_dir1, - sr2, - if_f0_3, - spk_id5, - save_epoch10, - total_epoch11, - batch_size12, - if_save_latest13, - pretrained_G14, - pretrained_D15, - gpus16, - if_cache_gpu17, - if_save_every_weights18, - version19, -): - # 生成filelist - exp_dir = "%s/logs/%s" % (now_dir, exp_dir1) - os.makedirs(exp_dir, exist_ok=True) - gt_wavs_dir = "%s/0_gt_wavs" % (exp_dir) - feature_dir = ( - "%s/3_feature256" % (exp_dir) - if version19 == "v1" - else "%s/3_feature768" % (exp_dir) - ) - if if_f0_3: - f0_dir = "%s/2a_f0" % (exp_dir) - f0nsf_dir = "%s/2b-f0nsf" % (exp_dir) - names = ( - set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) - & set([name.split(".")[0] for name in os.listdir(feature_dir)]) - & set([name.split(".")[0] for name in os.listdir(f0_dir)]) - & set([name.split(".")[0] for name in os.listdir(f0nsf_dir)]) - ) - else: - names = set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) & set( - [name.split(".")[0] for name in os.listdir(feature_dir)] - ) - opt = [] - for name in names: - if if_f0_3: - opt.append( - "%s/%s.wav|%s/%s.npy|%s/%s.wav.npy|%s/%s.wav.npy|%s" - % ( - gt_wavs_dir.replace("\\", "\\\\"), - name, - feature_dir.replace("\\", "\\\\"), - name, - f0_dir.replace("\\", "\\\\"), - name, - f0nsf_dir.replace("\\", "\\\\"), - name, - spk_id5, - ) - ) - else: - opt.append( - "%s/%s.wav|%s/%s.npy|%s" - % ( - gt_wavs_dir.replace("\\", "\\\\"), - name, - feature_dir.replace("\\", "\\\\"), - name, - spk_id5, - ) - ) - fea_dim = 256 if version19 == "v1" else 768 - if if_f0_3: - for _ in range(2): - opt.append( - "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s" - % (now_dir, sr2, now_dir, fea_dim, now_dir, now_dir, spk_id5) - ) - else: - for _ in range(2): - opt.append( - "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s" - % (now_dir, sr2, now_dir, fea_dim, spk_id5) - ) - shuffle(opt) - with open("%s/filelist.txt" % exp_dir, "w") as f: - f.write("\n".join(opt)) - print("write filelist done") - # 生成config#无需生成config - # cmd = python_cmd + " train_nsf_sim_cache_sid_load_pretrain.py -e mi-test -sr 40k -f0 1 -bs 4 -g 0 -te 10 -se 5 -pg pretrained/f0G40k.pth -pd pretrained/f0D40k.pth -l 1 -c 0" - print("use gpus:", gpus16) - if gpus16: - cmd = ( - config.python_cmd - + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -g %s -te %s -se %s -pg %s -pd %s -l %s -c %s -sw %s -v %s" - % ( - exp_dir1, - sr2, - 1 if if_f0_3 else 0, - batch_size12, - gpus16, - total_epoch11, - save_epoch10, - pretrained_G14, - pretrained_D15, - 1 if if_save_latest13 == i18n("是") else 0, - 1 if if_cache_gpu17 == i18n("是") else 0, - 1 if if_save_every_weights18 == i18n("是") else 0, - version19, - ) - ) - else: - cmd = ( - config.python_cmd - + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -te %s -se %s -pg %s -pd %s -l %s -c %s -sw %s -v %s" - % ( - exp_dir1, - sr2, - 1 if if_f0_3 else 0, - batch_size12, - total_epoch11, - save_epoch10, - pretrained_G14, - pretrained_D15, - 1 if if_save_latest13 == i18n("是") else 0, - 1 if if_cache_gpu17 == i18n("是") else 0, - 1 if if_save_every_weights18 == 
i18n("是") else 0, - version19, - ) - ) - print(cmd) - p = Popen(cmd, shell=True, cwd=now_dir) - p.wait() - return "训练结束, 您可查看控制台训练日志或实验文件夹下的train.log" - - -# but4.click(train_index, [exp_dir1], info3) -def train_index(exp_dir1, version19): - exp_dir = "%s/logs/%s" % (now_dir, exp_dir1) - os.makedirs(exp_dir, exist_ok=True) - feature_dir = ( - "%s/3_feature256" % (exp_dir) - if version19 == "v1" - else "%s/3_feature768" % (exp_dir) - ) - if os.path.exists(feature_dir) == False: - return "请先进行特征提取!" - listdir_res = list(os.listdir(feature_dir)) - if len(listdir_res) == 0: - return "请先进行特征提取!" - npys = [] - for name in sorted(listdir_res): - phone = np.load("%s/%s" % (feature_dir, name)) - npys.append(phone) - big_npy = np.concatenate(npys, 0) - big_npy_idx = np.arange(big_npy.shape[0]) - np.random.shuffle(big_npy_idx) - big_npy = big_npy[big_npy_idx] - np.save("%s/total_fea.npy" % exp_dir, big_npy) - # n_ivf = big_npy.shape[0] // 39 - n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])), big_npy.shape[0] // 39) - infos = [] - infos.append("%s,%s" % (big_npy.shape, n_ivf)) - yield "\n".join(infos) - index = faiss.index_factory(256 if version19 == "v1" else 768, "IVF%s,Flat" % n_ivf) - # index = faiss.index_factory(256if version19=="v1"else 768, "IVF%s,PQ128x4fs,RFlat"%n_ivf) - infos.append("training") - yield "\n".join(infos) - index_ivf = faiss.extract_index_ivf(index) # - index_ivf.nprobe = 1 - index.train(big_npy) - faiss.write_index( - index, - "%s/trained_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (exp_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19), - ) - # faiss.write_index(index, '%s/trained_IVF%s_Flat_FastScan_%s.index'%(exp_dir,n_ivf,version19)) - infos.append("adding") - yield "\n".join(infos) - batch_size_add = 8192 - for i in range(0, big_npy.shape[0], batch_size_add): - index.add(big_npy[i : i + batch_size_add]) - faiss.write_index( - index, - "%s/added_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (exp_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19), - ) - infos.append( - "成功构建索引,added_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (n_ivf, index_ivf.nprobe, exp_dir1, version19) - ) - # faiss.write_index(index, '%s/added_IVF%s_Flat_FastScan_%s.index'%(exp_dir,n_ivf,version19)) - # infos.append("成功构建索引,added_IVF%s_Flat_FastScan_%s.index"%(n_ivf,version19)) - yield "\n".join(infos) - - -# but5.click(train1key, [exp_dir1, sr2, if_f0_3, trainset_dir4, spk_id5, gpus6, np7, f0method8, save_epoch10, total_epoch11, batch_size12, if_save_latest13, pretrained_G14, pretrained_D15, gpus16, if_cache_gpu17], info3) -def train1key( - exp_dir1, - sr2, - if_f0_3, - trainset_dir4, - spk_id5, - np7, - f0method8, - save_epoch10, - total_epoch11, - batch_size12, - if_save_latest13, - pretrained_G14, - pretrained_D15, - gpus16, - if_cache_gpu17, - if_save_every_weights18, - version19, -): - infos = [] - - def get_info_str(strr): - infos.append(strr) - return "\n".join(infos) - - model_log_dir = "%s/logs/%s" % (now_dir, exp_dir1) - preprocess_log_path = "%s/preprocess.log" % model_log_dir - extract_f0_feature_log_path = "%s/extract_f0_feature.log" % model_log_dir - gt_wavs_dir = "%s/0_gt_wavs" % model_log_dir - feature_dir = ( - "%s/3_feature256" % model_log_dir - if version19 == "v1" - else "%s/3_feature768" % model_log_dir - ) - - os.makedirs(model_log_dir, exist_ok=True) - #########step1:处理数据 - open(preprocess_log_path, "w").close() - cmd = ( - config.python_cmd - + " trainset_preprocess_pipeline_print.py %s %s %s %s " - % (trainset_dir4, sr_dict[sr2], np7, model_log_dir) - + str(config.noparallel) - ) - yield 
get_info_str(i18n("step1:正在处理数据")) - yield get_info_str(cmd) - p = Popen(cmd, shell=True) - p.wait() - with open(preprocess_log_path, "r") as f: - print(f.read()) - #########step2a:提取音高 - open(extract_f0_feature_log_path, "w") - if if_f0_3: - yield get_info_str("step2a:正在提取音高") - cmd = config.python_cmd + " extract_f0_print.py %s %s %s" % ( - model_log_dir, - np7, - f0method8, - ) - yield get_info_str(cmd) - p = Popen(cmd, shell=True, cwd=now_dir) - p.wait() - with open(extract_f0_feature_log_path, "r") as f: - print(f.read()) - else: - yield get_info_str(i18n("step2a:无需提取音高")) - #######step2b:提取特征 - yield get_info_str(i18n("step2b:正在提取特征")) - gpus = gpus16.split("-") - leng = len(gpus) - ps = [] - for idx, n_g in enumerate(gpus): - cmd = config.python_cmd + " extract_feature_print.py %s %s %s %s %s %s" % ( - config.device, - leng, - idx, - n_g, - model_log_dir, - version19, - ) - yield get_info_str(cmd) - p = Popen( - cmd, shell=True, cwd=now_dir - ) # , shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=now_dir - ps.append(p) - for p in ps: - p.wait() - with open(extract_f0_feature_log_path, "r") as f: - print(f.read()) - #######step3a:训练模型 - yield get_info_str(i18n("step3a:正在训练模型")) - # 生成filelist - if if_f0_3: - f0_dir = "%s/2a_f0" % model_log_dir - f0nsf_dir = "%s/2b-f0nsf" % model_log_dir - names = ( - set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) - & set([name.split(".")[0] for name in os.listdir(feature_dir)]) - & set([name.split(".")[0] for name in os.listdir(f0_dir)]) - & set([name.split(".")[0] for name in os.listdir(f0nsf_dir)]) - ) - else: - names = set([name.split(".")[0] for name in os.listdir(gt_wavs_dir)]) & set( - [name.split(".")[0] for name in os.listdir(feature_dir)] - ) - opt = [] - for name in names: - if if_f0_3: - opt.append( - "%s/%s.wav|%s/%s.npy|%s/%s.wav.npy|%s/%s.wav.npy|%s" - % ( - gt_wavs_dir.replace("\\", "\\\\"), - name, - feature_dir.replace("\\", "\\\\"), - name, - f0_dir.replace("\\", "\\\\"), - name, - f0nsf_dir.replace("\\", "\\\\"), - name, - spk_id5, - ) - ) - else: - opt.append( - "%s/%s.wav|%s/%s.npy|%s" - % ( - gt_wavs_dir.replace("\\", "\\\\"), - name, - feature_dir.replace("\\", "\\\\"), - name, - spk_id5, - ) - ) - fea_dim = 256 if version19 == "v1" else 768 - if if_f0_3: - for _ in range(2): - opt.append( - "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s" - % (now_dir, sr2, now_dir, fea_dim, now_dir, now_dir, spk_id5) - ) - else: - for _ in range(2): - opt.append( - "%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature%s/mute.npy|%s" - % (now_dir, sr2, now_dir, fea_dim, spk_id5) - ) - shuffle(opt) - with open("%s/filelist.txt" % model_log_dir, "w") as f: - f.write("\n".join(opt)) - yield get_info_str("write filelist done") - if gpus16: - cmd = ( - config.python_cmd - + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -g %s -te %s -se %s -pg %s -pd %s -l %s -c %s -sw %s -v %s" - % ( - exp_dir1, - sr2, - 1 if if_f0_3 else 0, - batch_size12, - gpus16, - total_epoch11, - save_epoch10, - pretrained_G14, - pretrained_D15, - 1 if if_save_latest13 == i18n("是") else 0, - 1 if if_cache_gpu17 == i18n("是") else 0, - 1 if if_save_every_weights18 == i18n("是") else 0, - version19, - ) - ) - else: - cmd = ( - config.python_cmd - + " train_nsf_sim_cache_sid_load_pretrain.py -e %s -sr %s -f0 %s -bs %s -te %s -se %s -pg %s -pd %s -l %s -c %s -sw %s -v %s" - % ( - exp_dir1, - sr2, - 1 if if_f0_3 else 0, - batch_size12, - 
total_epoch11, - save_epoch10, - pretrained_G14, - pretrained_D15, - 1 if if_save_latest13 == i18n("是") else 0, - 1 if if_cache_gpu17 == i18n("是") else 0, - 1 if if_save_every_weights18 == i18n("是") else 0, - version19, - ) - ) - yield get_info_str(cmd) - p = Popen(cmd, shell=True, cwd=now_dir) - p.wait() - yield get_info_str(i18n("训练结束, 您可查看控制台训练日志或实验文件夹下的train.log")) - #######step3b:训练索引 - npys = [] - listdir_res = list(os.listdir(feature_dir)) - for name in sorted(listdir_res): - phone = np.load("%s/%s" % (feature_dir, name)) - npys.append(phone) - big_npy = np.concatenate(npys, 0) - - big_npy_idx = np.arange(big_npy.shape[0]) - np.random.shuffle(big_npy_idx) - big_npy = big_npy[big_npy_idx] - np.save("%s/total_fea.npy" % model_log_dir, big_npy) - - # n_ivf = big_npy.shape[0] // 39 - n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])), big_npy.shape[0] // 39) - yield get_info_str("%s,%s" % (big_npy.shape, n_ivf)) - index = faiss.index_factory(256 if version19 == "v1" else 768, "IVF%s,Flat" % n_ivf) - yield get_info_str("training index") - index_ivf = faiss.extract_index_ivf(index) # - index_ivf.nprobe = 1 - index.train(big_npy) - faiss.write_index( - index, - "%s/trained_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (model_log_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19), - ) - yield get_info_str("adding index") - batch_size_add = 8192 - for i in range(0, big_npy.shape[0], batch_size_add): - index.add(big_npy[i : i + batch_size_add]) - faiss.write_index( - index, - "%s/added_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (model_log_dir, n_ivf, index_ivf.nprobe, exp_dir1, version19), - ) - yield get_info_str( - "成功构建索引, added_IVF%s_Flat_nprobe_%s_%s_%s.index" - % (n_ivf, index_ivf.nprobe, exp_dir1, version19) - ) - yield get_info_str(i18n("全流程结束!")) - - -# ckpt_path2.change(change_info_,[ckpt_path2],[sr__,if_f0__]) -def change_info_(ckpt_path): - if ( - os.path.exists(ckpt_path.replace(os.path.basename(ckpt_path), "train.log")) - == False - ): - return {"__type__": "update"}, {"__type__": "update"}, {"__type__": "update"} - try: - with open( - ckpt_path.replace(os.path.basename(ckpt_path), "train.log"), "r" - ) as f: - info = eval(f.read().strip("\n").split("\n")[0].split("\t")[-1]) - sr, f0 = info["sample_rate"], info["if_f0"] - version = "v2" if ("version" in info and info["version"] == "v2") else "v1" - return sr, str(f0), version - except: - traceback.print_exc() - return {"__type__": "update"}, {"__type__": "update"}, {"__type__": "update"} - - -from infer_pack.models_onnx import SynthesizerTrnMsNSFsidM - - -def export_onnx(ModelPath, ExportedPath, MoeVS=True): - cpt = torch.load(ModelPath, map_location="cpu") - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - hidden_channels = cpt["config"][-2] # hidden_channels,为768Vec做准备 - - test_phone = torch.rand(1, 200, hidden_channels) # hidden unit - test_phone_lengths = torch.tensor([200]).long() # hidden unit 长度(貌似没啥用) - test_pitch = torch.randint(size=(1, 200), low=5, high=255) # 基频(单位赫兹) - test_pitchf = torch.rand(1, 200) # nsf基频 - test_ds = torch.LongTensor([0]) # 说话人ID - test_rnd = torch.rand(1, 192, 200) # 噪声(加入随机因子) - - device = "cpu" # 导出时设备(不影响使用模型) - - net_g = SynthesizerTrnMsNSFsidM( - *cpt["config"], is_half=False - ) # fp32导出(C++要支持fp16必须手动将内存重新排列所以暂时不用fp16) - net_g.load_state_dict(cpt["weight"], strict=False) - input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds", "rnd"] - output_names = [ - "audio", - ] - # net_g.construct_spkmixmap(n_speaker) 多角色混合轨道导出 - torch.onnx.export( - net_g, - ( - 
test_phone.to(device), - test_phone_lengths.to(device), - test_pitch.to(device), - test_pitchf.to(device), - test_ds.to(device), - test_rnd.to(device), - ), - ExportedPath, - dynamic_axes={ - "phone": [1], - "pitch": [1], - "pitchf": [1], - "rnd": [2], - }, - do_constant_folding=False, - opset_version=16, - verbose=False, - input_names=input_names, - output_names=output_names, - ) - return "Finished" - - -with gr.Blocks() as app: - gr.Markdown( - value=i18n( - "本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责.
      如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录使用需遵守的协议-LICENSE.txt." - ) - ) - with gr.Tabs(): - with gr.TabItem(i18n("模型推理")): - with gr.Row(): - sid0 = gr.Dropdown(label=i18n("推理音色"), choices=sorted(names)) - refresh_button = gr.Button(i18n("刷新音色列表和索引路径"), variant="primary") - clean_button = gr.Button(i18n("卸载音色省显存"), variant="primary") - spk_item = gr.Slider( - minimum=0, - maximum=2333, - step=1, - label=i18n("请选择说话人id"), - value=0, - visible=False, - interactive=True, - ) - clean_button.click(fn=clean, inputs=[], outputs=[sid0]) - sid0.change( - fn=get_vc, - inputs=[sid0], - outputs=[spk_item], - ) - with gr.Group(): - gr.Markdown( - value=i18n("男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ") - ) - with gr.Row(): - with gr.Column(): - vc_transform0 = gr.Number( - label=i18n("变调(整数, 半音数量, 升八度12降八度-12)"), value=0 - ) - input_audio0 = gr.Textbox( - label=i18n("输入待处理音频文件路径(默认是正确格式示例)"), - value="E:\\codes\\py39\\test-20230416b\\todo-songs\\冬之花clip1.wav", - ) - f0method0 = gr.Radio( - label=i18n( - "选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU" - ), - choices=["pm", "harvest", "crepe"], - value="pm", - interactive=True, - ) - filter_radius0 = gr.Slider( - minimum=0, - maximum=7, - label=i18n(">=3则使用对harvest音高识别的结果使用中值滤波,数值为滤波半径,使用可以削弱哑音"), - value=3, - step=1, - interactive=True, - ) - with gr.Column(): - file_index1 = gr.Textbox( - label=i18n("特征检索库文件路径,为空则使用下拉的选择结果"), - value="", - interactive=True, - ) - file_index2 = gr.Dropdown( - label=i18n("自动检测index路径,下拉式选择(dropdown)"), - choices=sorted(index_paths), - interactive=True, - ) - refresh_button.click( - fn=change_choices, inputs=[], outputs=[sid0, file_index2] - ) - # file_big_npy1 = gr.Textbox( - # label=i18n("特征文件路径"), - # value="E:\\codes\py39\\vits_vc_gpu_train\\logs\\mi-test-1key\\total_fea.npy", - # interactive=True, - # ) - index_rate1 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("检索特征占比"), - value=0.88, - interactive=True, - ) - with gr.Column(): - resample_sr0 = gr.Slider( - minimum=0, - maximum=48000, - label=i18n("后处理重采样至最终采样率,0为不进行重采样"), - value=0, - step=1, - interactive=True, - ) - rms_mix_rate0 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络"), - value=1, - interactive=True, - ) - protect0 = gr.Slider( - minimum=0, - maximum=0.5, - label=i18n( - "保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果" - ), - value=0.33, - step=0.01, - interactive=True, - ) - f0_file = gr.File(label=i18n("F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调")) - but0 = gr.Button(i18n("转换"), variant="primary") - with gr.Row(): - vc_output1 = gr.Textbox(label=i18n("输出信息")) - vc_output2 = gr.Audio(label=i18n("输出音频(右下角三个点,点了可以下载)")) - but0.click( - vc_single, - [ - spk_item, - input_audio0, - vc_transform0, - f0_file, - f0method0, - file_index1, - file_index2, - # file_big_npy1, - index_rate1, - filter_radius0, - resample_sr0, - rms_mix_rate0, - protect0, - ], - [vc_output1, vc_output2], - ) - with gr.Group(): - gr.Markdown( - value=i18n("批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. 
") - ) - with gr.Row(): - with gr.Column(): - vc_transform1 = gr.Number( - label=i18n("变调(整数, 半音数量, 升八度12降八度-12)"), value=0 - ) - opt_input = gr.Textbox(label=i18n("指定输出文件夹"), value="opt") - f0method1 = gr.Radio( - label=i18n( - "选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU" - ), - choices=["pm", "harvest", "crepe"], - value="pm", - interactive=True, - ) - filter_radius1 = gr.Slider( - minimum=0, - maximum=7, - label=i18n(">=3则使用对harvest音高识别的结果使用中值滤波,数值为滤波半径,使用可以削弱哑音"), - value=3, - step=1, - interactive=True, - ) - with gr.Column(): - file_index3 = gr.Textbox( - label=i18n("特征检索库文件路径,为空则使用下拉的选择结果"), - value="", - interactive=True, - ) - file_index4 = gr.Dropdown( - label=i18n("自动检测index路径,下拉式选择(dropdown)"), - choices=sorted(index_paths), - interactive=True, - ) - refresh_button.click( - fn=lambda: change_choices()[1], - inputs=[], - outputs=file_index4, - ) - # file_big_npy2 = gr.Textbox( - # label=i18n("特征文件路径"), - # value="E:\\codes\\py39\\vits_vc_gpu_train\\logs\\mi-test-1key\\total_fea.npy", - # interactive=True, - # ) - index_rate2 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("检索特征占比"), - value=1, - interactive=True, - ) - with gr.Column(): - resample_sr1 = gr.Slider( - minimum=0, - maximum=48000, - label=i18n("后处理重采样至最终采样率,0为不进行重采样"), - value=0, - step=1, - interactive=True, - ) - rms_mix_rate1 = gr.Slider( - minimum=0, - maximum=1, - label=i18n("输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络"), - value=1, - interactive=True, - ) - protect1 = gr.Slider( - minimum=0, - maximum=0.5, - label=i18n( - "保护清辅音和呼吸声,防止电音撕裂等artifact,拉满0.5不开启,调低加大保护力度但可能降低索引效果" - ), - value=0.33, - step=0.01, - interactive=True, - ) - with gr.Column(): - dir_input = gr.Textbox( - label=i18n("输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)"), - value="E:\codes\py39\\test-20230416b\\todo-songs", - ) - inputs = gr.File( - file_count="multiple", label=i18n("也可批量输入音频文件, 二选一, 优先读文件夹") - ) - with gr.Row(): - format1 = gr.Radio( - label=i18n("导出文件格式"), - choices=["wav", "flac", "mp3", "m4a"], - value="flac", - interactive=True, - ) - but1 = gr.Button(i18n("转换"), variant="primary") - vc_output3 = gr.Textbox(label=i18n("输出信息")) - but1.click( - vc_multi, - [ - spk_item, - dir_input, - opt_input, - inputs, - vc_transform1, - f0method1, - file_index3, - file_index4, - # file_big_npy2, - index_rate2, - filter_radius1, - resample_sr1, - rms_mix_rate1, - protect1, - format1, - ], - [vc_output3], - ) - with gr.TabItem(i18n("伴奏人声分离&去混响&去回声")): - with gr.Group(): - gr.Markdown( - value=i18n( - "人声伴奏分离批量处理, 使用UVR5模型。
      " - "合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)。
      " - "模型分为三类:
      " - "1、保留人声:不带和声的音频选这个,对主人声保留比HP5更好。内置HP2和HP3两个模型,HP3可能轻微漏伴奏但对主人声保留比HP2稍微好一丁点;
      " - "2、仅保留主人声:带和声的音频选这个,对主人声可能有削弱。内置HP5一个模型;
      " - "3、去混响、去延迟模型(by FoxJoy):
      " - "  (1)MDX-Net(onnx_dereverb):对于双通道混响是最好的选择,不能去除单通道混响;
      " - " (234)DeEcho:去除延迟效果。Aggressive比Normal去除得更彻底,DeReverb额外去除混响,可去除单声道混响,但是对高频重的板式混响去不干净。
      " - "去混响/去延迟,附:
      " - "1、DeEcho-DeReverb模型的耗时是另外2个DeEcho模型的接近2倍;
      " - "2、MDX-Net-Dereverb模型挺慢的;
      " - "3、个人推荐的最干净的配置是先MDX-Net再DeEcho-Aggressive。" - ) - ) - with gr.Row(): - with gr.Column(): - dir_wav_input = gr.Textbox( - label=i18n("输入待处理音频文件夹路径"), - value="E:\\codes\\py39\\test-20230416b\\todo-songs\\todo-songs", - ) - wav_inputs = gr.File( - file_count="multiple", label=i18n("也可批量输入音频文件, 二选一, 优先读文件夹") - ) - with gr.Column(): - model_choose = gr.Dropdown(label=i18n("模型"), choices=uvr5_names) - agg = gr.Slider( - minimum=0, - maximum=20, - step=1, - label="人声提取激进程度", - value=10, - interactive=True, - visible=False, # 先不开放调整 - ) - opt_vocal_root = gr.Textbox( - label=i18n("指定输出主人声文件夹"), value="opt" - ) - opt_ins_root = gr.Textbox( - label=i18n("指定输出非主人声文件夹"), value="opt" - ) - format0 = gr.Radio( - label=i18n("导出文件格式"), - choices=["wav", "flac", "mp3", "m4a"], - value="flac", - interactive=True, - ) - but2 = gr.Button(i18n("转换"), variant="primary") - vc_output4 = gr.Textbox(label=i18n("输出信息")) - but2.click( - uvr, - [ - model_choose, - dir_wav_input, - opt_vocal_root, - wav_inputs, - opt_ins_root, - agg, - format0, - ], - [vc_output4], - ) - with gr.TabItem(i18n("训练")): - gr.Markdown( - value=i18n( - "step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. " - ) - ) - with gr.Row(): - exp_dir1 = gr.Textbox(label=i18n("输入实验名"), value="mi-test") - sr2 = gr.Radio( - label=i18n("目标采样率"), - choices=["40k", "48k"], - value="40k", - interactive=True, - ) - if_f0_3 = gr.Radio( - label=i18n("模型是否带音高指导(唱歌一定要, 语音可以不要)"), - choices=[True, False], - value=True, - interactive=True, - ) - version19 = gr.Radio( - label=i18n("版本(目前仅40k支持了v2)"), - choices=["v1", "v2"], - value="v1", - interactive=True, - visible=True, - ) - np7 = gr.Slider( - minimum=0, - maximum=config.n_cpu, - step=1, - label=i18n("提取音高和处理数据使用的CPU进程数"), - value=config.n_cpu, - interactive=True, - ) - with gr.Group(): # 暂时单人的, 后面支持最多4人的#数据处理 - gr.Markdown( - value=i18n( - "step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. 
" - ) - ) - with gr.Row(): - trainset_dir4 = gr.Textbox( - label=i18n("输入训练文件夹路径"), value="E:\\语音音频+标注\\米津玄师\\src" - ) - spk_id5 = gr.Slider( - minimum=0, - maximum=4, - step=1, - label=i18n("请指定说话人id"), - value=0, - interactive=True, - ) - but1 = gr.Button(i18n("处理数据"), variant="primary") - info1 = gr.Textbox(label=i18n("输出信息"), value="") - but1.click( - preprocess_dataset, [trainset_dir4, exp_dir1, sr2, np7], [info1] - ) - with gr.Group(): - gr.Markdown(value=i18n("step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)")) - with gr.Row(): - with gr.Column(): - gpus6 = gr.Textbox( - label=i18n("以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2"), - value=gpus, - interactive=True, - ) - gpu_info9 = gr.Textbox(label=i18n("显卡信息"), value=gpu_info) - with gr.Column(): - f0method8 = gr.Radio( - label=i18n( - "选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢" - ), - choices=["pm", "harvest", "dio"], - value="harvest", - interactive=True, - ) - but2 = gr.Button(i18n("特征提取"), variant="primary") - info2 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=8) - but2.click( - extract_f0_feature, - [gpus6, np7, f0method8, if_f0_3, exp_dir1, version19], - [info2], - ) - with gr.Group(): - gr.Markdown(value=i18n("step3: 填写训练设置, 开始训练模型和索引")) - with gr.Row(): - save_epoch10 = gr.Slider( - minimum=0, - maximum=50, - step=1, - label=i18n("保存频率save_every_epoch"), - value=5, - interactive=True, - ) - total_epoch11 = gr.Slider( - minimum=0, - maximum=1000, - step=1, - label=i18n("总训练轮数total_epoch"), - value=20, - interactive=True, - ) - batch_size12 = gr.Slider( - minimum=1, - maximum=40, - step=1, - label=i18n("每张显卡的batch_size"), - value=default_batch_size, - interactive=True, - ) - if_save_latest13 = gr.Radio( - label=i18n("是否仅保存最新的ckpt文件以节省硬盘空间"), - choices=[i18n("是"), i18n("否")], - value=i18n("否"), - interactive=True, - ) - if_cache_gpu17 = gr.Radio( - label=i18n( - "是否缓存所有训练集至显存. 
10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速" - ), - choices=[i18n("是"), i18n("否")], - value=i18n("否"), - interactive=True, - ) - if_save_every_weights18 = gr.Radio( - label=i18n("是否在每次保存时间点将最终小模型保存至weights文件夹"), - choices=[i18n("是"), i18n("否")], - value=i18n("否"), - interactive=True, - ) - with gr.Row(): - pretrained_G14 = gr.Textbox( - label=i18n("加载预训练底模G路径"), - value="pretrained/f0G40k.pth", - interactive=True, - ) - pretrained_D15 = gr.Textbox( - label=i18n("加载预训练底模D路径"), - value="pretrained/f0D40k.pth", - interactive=True, - ) - sr2.change( - change_sr2, - [sr2, if_f0_3, version19], - [pretrained_G14, pretrained_D15, version19], - ) - version19.change( - change_version19, - [sr2, if_f0_3, version19], - [pretrained_G14, pretrained_D15], - ) - if_f0_3.change( - change_f0, - [if_f0_3, sr2, version19], - [f0method8, pretrained_G14, pretrained_D15], - ) - gpus16 = gr.Textbox( - label=i18n("以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2"), - value=gpus, - interactive=True, - ) - but3 = gr.Button(i18n("训练模型"), variant="primary") - but4 = gr.Button(i18n("训练特征索引"), variant="primary") - but5 = gr.Button(i18n("一键训练"), variant="primary") - info3 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=10) - but3.click( - click_train, - [ - exp_dir1, - sr2, - if_f0_3, - spk_id5, - save_epoch10, - total_epoch11, - batch_size12, - if_save_latest13, - pretrained_G14, - pretrained_D15, - gpus16, - if_cache_gpu17, - if_save_every_weights18, - version19, - ], - info3, - ) - but4.click(train_index, [exp_dir1, version19], info3) - but5.click( - train1key, - [ - exp_dir1, - sr2, - if_f0_3, - trainset_dir4, - spk_id5, - np7, - f0method8, - save_epoch10, - total_epoch11, - batch_size12, - if_save_latest13, - pretrained_G14, - pretrained_D15, - gpus16, - if_cache_gpu17, - if_save_every_weights18, - version19, - ], - info3, - ) - - with gr.TabItem(i18n("ckpt处理")): - with gr.Group(): - gr.Markdown(value=i18n("模型融合, 可用于测试音色融合")) - with gr.Row(): - ckpt_a = gr.Textbox(label=i18n("A模型路径"), value="", interactive=True) - ckpt_b = gr.Textbox(label=i18n("B模型路径"), value="", interactive=True) - alpha_a = gr.Slider( - minimum=0, - maximum=1, - label=i18n("A模型权重"), - value=0.5, - interactive=True, - ) - with gr.Row(): - sr_ = gr.Radio( - label=i18n("目标采样率"), - choices=["32k", "40k", "48k"], - value="40k", - interactive=True, - ) - if_f0_ = gr.Radio( - label=i18n("模型是否带音高指导"), - choices=[i18n("是"), i18n("否")], - value=i18n("是"), - interactive=True, - ) - info__ = gr.Textbox( - label=i18n("要置入的模型信息"), value="", max_lines=8, interactive=True - ) - name_to_save0 = gr.Textbox( - label=i18n("保存的模型名不带后缀"), - value="", - max_lines=1, - interactive=True, - ) - version_2 = gr.Radio( - label=i18n("模型版本型号"), - choices=["v1", "v2"], - value="v1", - interactive=True, - ) - with gr.Row(): - but6 = gr.Button(i18n("融合"), variant="primary") - info4 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=8) - but6.click( - merge, - [ - ckpt_a, - ckpt_b, - alpha_a, - sr_, - if_f0_, - info__, - name_to_save0, - version_2, - ], - info4, - ) # def merge(path1,path2,alpha1,sr,f0,info): - with gr.Group(): - gr.Markdown(value=i18n("修改模型信息(仅支持weights文件夹下提取的小模型文件)")) - with gr.Row(): - ckpt_path0 = gr.Textbox( - label=i18n("模型路径"), value="", interactive=True - ) - info_ = gr.Textbox( - label=i18n("要改的模型信息"), value="", max_lines=8, interactive=True - ) - name_to_save1 = gr.Textbox( - label=i18n("保存的文件名, 默认空为和源文件同名"), - value="", - max_lines=8, - interactive=True, - ) - with gr.Row(): - but7 = gr.Button(i18n("修改"), variant="primary") - info5 = gr.Textbox(label=i18n("输出信息"), 
value="", max_lines=8) - but7.click(change_info, [ckpt_path0, info_, name_to_save1], info5) - with gr.Group(): - gr.Markdown(value=i18n("查看模型信息(仅支持weights文件夹下提取的小模型文件)")) - with gr.Row(): - ckpt_path1 = gr.Textbox( - label=i18n("模型路径"), value="", interactive=True - ) - but8 = gr.Button(i18n("查看"), variant="primary") - info6 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=8) - but8.click(show_info, [ckpt_path1], info6) - with gr.Group(): - gr.Markdown( - value=i18n( - "模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况" - ) - ) - with gr.Row(): - ckpt_path2 = gr.Textbox( - label=i18n("模型路径"), - value="E:\\codes\\py39\\logs\\mi-test_f0_48k\\G_23333.pth", - interactive=True, - ) - save_name = gr.Textbox( - label=i18n("保存名"), value="", interactive=True - ) - sr__ = gr.Radio( - label=i18n("目标采样率"), - choices=["32k", "40k", "48k"], - value="40k", - interactive=True, - ) - if_f0__ = gr.Radio( - label=i18n("模型是否带音高指导,1是0否"), - choices=["1", "0"], - value="1", - interactive=True, - ) - version_1 = gr.Radio( - label=i18n("模型版本型号"), - choices=["v1", "v2"], - value="v1", - interactive=True, - ) - info___ = gr.Textbox( - label=i18n("要置入的模型信息"), value="", max_lines=8, interactive=True - ) - but9 = gr.Button(i18n("提取"), variant="primary") - info7 = gr.Textbox(label=i18n("输出信息"), value="", max_lines=8) - ckpt_path2.change( - change_info_, [ckpt_path2], [sr__, if_f0__, version_1] - ) - but9.click( - extract_small_model, - [ckpt_path2, save_name, sr__, if_f0__, info___, version_1], - info7, - ) - - with gr.TabItem(i18n("Onnx导出")): - with gr.Row(): - ckpt_dir = gr.Textbox(label=i18n("RVC模型路径"), value="", interactive=True) - with gr.Row(): - onnx_dir = gr.Textbox( - label=i18n("Onnx输出路径"), value="", interactive=True - ) - with gr.Row(): - moevs = gr.Checkbox(label=i18n("MoeVS模型"), value=True) - infoOnnx = gr.Label(label="Null") - with gr.Row(): - butOnnx = gr.Button(i18n("导出Onnx模型"), variant="primary") - butOnnx.click(export_onnx, [ckpt_dir, onnx_dir, moevs], infoOnnx) - - tab_faq = i18n("常见问题解答") - with gr.TabItem(tab_faq): - try: - if tab_faq == "常见问题解答": - with open("docs/faq.md", "r", encoding="utf8") as f: - info = f.read() - else: - with open("docs/faq_en.md", "r", encoding="utf8") as f: - info = f.read() - gr.Markdown(value=info) - except: - gr.Markdown(traceback.format_exc()) - - # with gr.TabItem(i18n("招募音高曲线前端编辑器")): - # gr.Markdown(value=i18n("加开发群联系我xxxxx")) - # with gr.TabItem(i18n("点击查看交流、问题反馈群号")): - # gr.Markdown(value=i18n("xxxxx")) - - if config.iscolab: - app.queue(concurrency_count=511, max_size=1022).launch(share=True) - else: - app.queue(concurrency_count=511, max_size=1022).launch( - server_name="0.0.0.0", - inbrowser=not config.noautoopen, - server_port=config.listen_port, - quiet=True, - ) diff --git a/spaces/lllqqq/so-vits-svc-models-pcr/diffusion/onnx_export.py b/spaces/lllqqq/so-vits-svc-models-pcr/diffusion/onnx_export.py deleted file mode 100644 index 5deda785cf22b341f7d2e6399ef5fcdad6fe129e..0000000000000000000000000000000000000000 --- a/spaces/lllqqq/so-vits-svc-models-pcr/diffusion/onnx_export.py +++ /dev/null @@ -1,226 +0,0 @@ -from diffusion_onnx import GaussianDiffusion -import os -import yaml -import torch -import torch.nn as nn -import numpy as np -from wavenet import WaveNet -import torch.nn.functional as F -import diffusion - -class DotDict(dict): - def __getattr__(*args): - val = dict.get(*args) - return DotDict(val) if type(val) is dict else val - - __setattr__ = dict.__setitem__ - __delattr__ = dict.__delitem__ - - -def load_model_vocoder( - 
model_path, - device='cpu'): - config_file = os.path.join(os.path.split(model_path)[0], 'config.yaml') - with open(config_file, "r") as config: - args = yaml.safe_load(config) - args = DotDict(args) - - # load model - model = Unit2Mel( - args.data.encoder_out_channels, - args.model.n_spk, - args.model.use_pitch_aug, - 128, - args.model.n_layers, - args.model.n_chans, - args.model.n_hidden) - - print(' [Loading] ' + model_path) - ckpt = torch.load(model_path, map_location=torch.device(device)) - model.to(device) - model.load_state_dict(ckpt['model']) - model.eval() - return model, args - - -class Unit2Mel(nn.Module): - def __init__( - self, - input_channel, - n_spk, - use_pitch_aug=False, - out_dims=128, - n_layers=20, - n_chans=384, - n_hidden=256): - super().__init__() - self.unit_embed = nn.Linear(input_channel, n_hidden) - self.f0_embed = nn.Linear(1, n_hidden) - self.volume_embed = nn.Linear(1, n_hidden) - if use_pitch_aug: - self.aug_shift_embed = nn.Linear(1, n_hidden, bias=False) - else: - self.aug_shift_embed = None - self.n_spk = n_spk - if n_spk is not None and n_spk > 1: - self.spk_embed = nn.Embedding(n_spk, n_hidden) - - # diffusion - self.decoder = GaussianDiffusion(out_dims, n_layers, n_chans, n_hidden) - self.hidden_size = n_hidden - self.speaker_map = torch.zeros((self.n_spk,1,1,n_hidden)) - - - - def forward(self, units, mel2ph, f0, volume, g = None): - - ''' - input: - B x n_frames x n_unit - return: - dict of B x n_frames x feat - ''' - - decoder_inp = F.pad(units, [0, 0, 1, 0]) - mel2ph_ = mel2ph.unsqueeze(2).repeat([1, 1, units.shape[-1]]) - units = torch.gather(decoder_inp, 1, mel2ph_) # [B, T, H] - - x = self.unit_embed(units) + self.f0_embed((1 + f0.unsqueeze(-1) / 700).log()) + self.volume_embed(volume.unsqueeze(-1)) - - if self.n_spk is not None and self.n_spk > 1: # [N, S] * [S, B, 1, H] - g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g * self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N] - x = x.transpose(1, 2) + g - return x - else: - return x.transpose(1, 2) - - - def init_spkembed(self, units, f0, volume, spk_id = None, spk_mix_dict = None, aug_shift = None, - gt_spec=None, infer=True, infer_speedup=10, method='dpm-solver', k_step=300, use_tqdm=True): - - ''' - input: - B x n_frames x n_unit - return: - dict of B x n_frames x feat - ''' - x = self.unit_embed(units) + self.f0_embed((1+ f0 / 700).log()) + self.volume_embed(volume) - if self.n_spk is not None and self.n_spk > 1: - if spk_mix_dict is not None: - spk_embed_mix = torch.zeros((1,1,self.hidden_size)) - for k, v in spk_mix_dict.items(): - spk_id_torch = torch.LongTensor(np.array([[k]])).to(units.device) - spk_embeddd = self.spk_embed(spk_id_torch) - self.speaker_map[k] = spk_embeddd - spk_embed_mix = spk_embed_mix + v * spk_embeddd - x = x + spk_embed_mix - else: - x = x + self.spk_embed(spk_id - 1) - self.speaker_map = self.speaker_map.unsqueeze(0) - self.speaker_map = self.speaker_map.detach() - return x.transpose(1, 2) - - def OnnxExport(self, project_name=None, init_noise=None, export_encoder=True, export_denoise=True, export_pred=True, export_after=True): - hubert_hidden_size = 768 - n_frames = 100 - hubert = torch.randn((1, n_frames, hubert_hidden_size)) - mel2ph = torch.arange(end=n_frames).unsqueeze(0).long() - f0 = torch.randn((1, n_frames)) - volume = torch.randn((1, n_frames)) - spk_mix = [] - spks = {} - if self.n_spk is not None and self.n_spk > 1: - for i in 
range(self.n_spk): - spk_mix.append(1.0/float(self.n_spk)) - spks.update({i:1.0/float(self.n_spk)}) - spk_mix = torch.tensor(spk_mix) - spk_mix = spk_mix.repeat(n_frames, 1) - orgouttt = self.init_spkembed(hubert, f0.unsqueeze(-1), volume.unsqueeze(-1), spk_mix_dict=spks) - outtt = self.forward(hubert, mel2ph, f0, volume, spk_mix) - if export_encoder: - torch.onnx.export( - self, - (hubert, mel2ph, f0, volume, spk_mix), - f"{project_name}_encoder.onnx", - input_names=["hubert", "mel2ph", "f0", "volume", "spk_mix"], - output_names=["mel_pred"], - dynamic_axes={ - "hubert": [1], - "f0": [1], - "volume": [1], - "mel2ph": [1], - "spk_mix": [0], - }, - opset_version=16 - ) - - self.decoder.OnnxExport(project_name, init_noise=init_noise, export_denoise=export_denoise, export_pred=export_pred, export_after=export_after) - - def ExportOnnx(self, project_name=None): - hubert_hidden_size = 768 - n_frames = 100 - hubert = torch.randn((1, n_frames, hubert_hidden_size)) - mel2ph = torch.arange(end=n_frames).unsqueeze(0).long() - f0 = torch.randn((1, n_frames)) - volume = torch.randn((1, n_frames)) - spk_mix = [] - spks = {} - if self.n_spk is not None and self.n_spk > 1: - for i in range(self.n_spk): - spk_mix.append(1.0/float(self.n_spk)) - spks.update({i:1.0/float(self.n_spk)}) - spk_mix = torch.tensor(spk_mix) - orgouttt = self.orgforward(hubert, f0.unsqueeze(-1), volume.unsqueeze(-1), spk_mix_dict=spks) - outtt = self.forward(hubert, mel2ph, f0, volume, spk_mix) - - torch.onnx.export( - self, - (hubert, mel2ph, f0, volume, spk_mix), - f"{project_name}_encoder.onnx", - input_names=["hubert", "mel2ph", "f0", "volume", "spk_mix"], - output_names=["mel_pred"], - dynamic_axes={ - "hubert": [1], - "f0": [1], - "volume": [1], - "mel2ph": [1] - }, - opset_version=16 - ) - - condition = torch.randn(1,self.decoder.n_hidden,n_frames) - noise = torch.randn((1, 1, self.decoder.mel_bins, condition.shape[2]), dtype=torch.float32) - pndm_speedup = torch.LongTensor([100]) - K_steps = torch.LongTensor([1000]) - self.decoder = torch.jit.script(self.decoder) - self.decoder(condition, noise, pndm_speedup, K_steps) - - torch.onnx.export( - self.decoder, - (condition, noise, pndm_speedup, K_steps), - f"{project_name}_diffusion.onnx", - input_names=["condition", "noise", "pndm_speedup", "K_steps"], - output_names=["mel"], - dynamic_axes={ - "condition": [2], - "noise": [3], - }, - opset_version=16 - ) - - -if __name__ == "__main__": - project_name = "dddsp" - model_path = f'{project_name}/model_500000.pt' - - model, _ = load_model_vocoder(model_path) - - # 分开Diffusion导出(需要使用MoeSS/MoeVoiceStudio或者自己编写Pndm/Dpm采样) - model.OnnxExport(project_name, export_encoder=True, export_denoise=True, export_pred=True, export_after=True) - - # 合并Diffusion导出(Encoder和Diffusion分开,直接将Encoder的结果和初始噪声输入Diffusion即可) - # model.ExportOnnx(project_name) - diff --git a/spaces/llmonitor/benchmarks/utils/email.js b/spaces/llmonitor/benchmarks/utils/email.js deleted file mode 100644 index c8a80b7cf8f5eef5a2f537b881a2633d60c3e8d7..0000000000000000000000000000000000000000 --- a/spaces/llmonitor/benchmarks/utils/email.js +++ /dev/null @@ -1,16 +0,0 @@ -export const sendEmail = async (body) => { - if (!process.env.RESEND_KEY) { - return console.warn("RESEND_KEY is not set, skipping email sending") - } - - const res = await fetch("https://api.resend.com/emails", { - method: "POST", - headers: { - "Content-Type": "application/json", - Authorization: `Bearer ${process.env.RESEND_KEY}`, - }, - body: JSON.stringify(body), - }) - - return await res.json() -} 
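A minimal usage sketch for the `sendEmail` helper in `utils/email.js` above. This is not part of the original repository: the import path, addresses, subject, and run name are placeholders, and the payload fields (`from`, `to`, `subject`, `html`) follow Resend's email API format.

```js
// Hedged usage sketch, not from the original space: notify a maintainer
// when a benchmark run finishes, using the sendEmail helper exported above.
import { sendEmail } from "./email.js" // assumed relative path to utils/email.js

async function notifyRunFinished(runName) {
  const result = await sendEmail({
    from: "benchmarks@example.com",   // placeholder sender address
    to: ["maintainer@example.com"],   // placeholder recipient list
    subject: `Benchmark run finished: ${runName}`,
    html: `<p>The benchmark run <b>${runName}</b> has completed.</p>`,
  })
  // Parsed JSON response from the Resend API, or undefined when
  // RESEND_KEY is unset (the helper only logs a warning in that case).
  console.log(result)
}

notifyRunFinished("gpt-4-eval")
```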
diff --git a/spaces/lris/DeepDanbooru_string/app.py b/spaces/lris/DeepDanbooru_string/app.py deleted file mode 100644 index 49019837c9207cc68cb37be0342f3bc44fd0decb..0000000000000000000000000000000000000000 --- a/spaces/lris/DeepDanbooru_string/app.py +++ /dev/null @@ -1,185 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import argparse -import functools -import os -import html -import pathlib -import tarfile - -import deepdanbooru as dd -import gradio as gr -import huggingface_hub -import numpy as np -import PIL.Image -import tensorflow as tf -import piexif -import piexif.helper - -TITLE = 'DeepDanbooru String' - -TOKEN = os.environ['TOKEN'] -MODEL_REPO = 'CikeyQI/DeepDanbooru_string' -MODEL_FILENAME = 'model-resnet_custom_v3.h5' -LABEL_FILENAME = 'tags.txt' - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--score-slider-step', type=float, default=0.05) - parser.add_argument('--score-threshold', type=float, default=0.5) - parser.add_argument('--theme', type=str, default='dark-grass') - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - return parser.parse_args() - - -def load_sample_image_paths() -> list[pathlib.Path]: - image_dir = pathlib.Path('images') - if not image_dir.exists(): - dataset_repo = 'hysts/sample-images-TADNE' - path = huggingface_hub.hf_hub_download(dataset_repo, - 'images.tar.gz', - repo_type='dataset', - use_auth_token=TOKEN) - with tarfile.open(path) as f: - f.extractall() - return sorted(image_dir.glob('*')) - - -def load_model() -> tf.keras.Model: - path = huggingface_hub.hf_hub_download(MODEL_REPO, - MODEL_FILENAME, - use_auth_token=TOKEN) - model = tf.keras.models.load_model(path) - return model - - -def load_labels() -> list[str]: - path = huggingface_hub.hf_hub_download(MODEL_REPO, - LABEL_FILENAME, - use_auth_token=TOKEN) - with open(path) as f: - labels = [line.strip() for line in f.readlines()] - return labels - -def plaintext_to_html(text): - text = "
<p>" + "<br>
\n".join([f"{html.escape(x)}" for x in text.split('\n')]) + "</p>
      " - return text - -def predict(image: PIL.Image.Image, score_threshold: float, - model: tf.keras.Model, labels: list[str]) -> dict[str, float]: - rawimage = image - _, height, width, _ = model.input_shape - image = np.asarray(image) - image = tf.image.resize(image, - size=(height, width), - method=tf.image.ResizeMethod.AREA, - preserve_aspect_ratio=True) - image = image.numpy() - image = dd.image.transform_and_pad_image(image, width, height) - image = image / 255. - probs = model.predict(image[None, ...])[0] - probs = probs.astype(float) - res = dict() - for prob, label in zip(probs.tolist(), labels): - if prob < score_threshold: - continue - res[label] = prob - b = dict(sorted(res.items(),key=lambda item:item[1], reverse=True)) - a = ', '.join(list(b.keys())).replace('_',' ').replace('(','\(').replace(')','\)') - c = ', '.join(list(b.keys())) - - items = rawimage.info - geninfo = '' - - if "exif" in rawimage.info: - exif = piexif.load(rawimage.info["exif"]) - exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b'') - try: - exif_comment = piexif.helper.UserComment.load(exif_comment) - except ValueError: - exif_comment = exif_comment.decode('utf8', errors="ignore") - - items['exif comment'] = exif_comment - geninfo = exif_comment - - for field in ['jfif', 'jfif_version', 'jfif_unit', 'jfif_density', 'dpi', 'exif', - 'loop', 'background', 'timestamp', 'duration']: - items.pop(field, None) - - geninfo = items.get('parameters', geninfo) - - info = f""" -

<p><h4>PNG Info</h4></p> -""" - for key, text in items.items(): - info += f""" -<div>
-<p><b>{plaintext_to_html(str(key))}</b></p>
-<p>{plaintext_to_html(str(text))}</p>
-</div>
      -""".strip()+"\n" - - if len(info) == 0: - message = "Nothing found in the image." - info = f"

<div><p>{message}</p></div>

      " - - return (a,c,res,info) - - -def main(): - args = parse_args() - model = load_model() - labels = load_labels() - - func = functools.partial(predict, model=model, labels=labels) - func = functools.update_wrapper(func, predict) - - gr.Interface( - func, - [ - gr.inputs.Image(type='pil', label='Input'), - gr.inputs.Slider(0, - 1, - step=args.score_slider_step, - default=args.score_threshold, - label='Score Threshold'), - ], - [ - gr.outputs.Textbox(label='Output (string)'), - gr.outputs.Textbox(label='Output (raw string)'), - gr.outputs.Label(label='Output (label)'), - gr.outputs.HTML() - ], - examples=[ - ['miku.jpg',0.5], - ['miku2.jpg',0.5] - ], - title=TITLE, - description=''' -Demo for [KichangKim/DeepDanbooru](https://github.com/KichangKim/DeepDanbooru) with "ready to copy" prompt and a prompt analyzer. - -Modified from [hysts/DeepDanbooru](https://huggingface.co/spaces/hysts/DeepDanbooru) - -PNG Info code forked from [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - ''', - theme=args.theme, - allow_flagging=args.allow_flagging, - live=args.live, - ).launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/ltgoslo/ssa-perin/mtool/codec/amr.py b/spaces/ltgoslo/ssa-perin/mtool/codec/amr.py deleted file mode 100644 index ff90e698542f32244bb112f581a8dab279ceee0a..0000000000000000000000000000000000000000 --- a/spaces/ltgoslo/ssa-perin/mtool/codec/amr.py +++ /dev/null @@ -1,258 +0,0 @@ -import re; -import sys; - -import codec.mrp; -from graph import Edge, Graph; -from smatch.amr import AMR; - -STASH = re.compile(r'__[0-9]+__'); -INDEX = re.compile(r'x([0-9]+)((:?_[0-9]+)*)'); - -def amr_lines(fp, camr, alignment): - id, snt, lines = None, None, []; - stash = dict(); - def _stash_(match): - prefix, constant, suffix = match.groups(); - fields = constant.split("/"); - if fields[0] in stash: - if stash[fields[0]][2] != fields[1]: - raise Exception("amr_lines(): " - "ambiguously defined constant in graph #{}, " - "‘{}’: ‘{}’ vs. ‘{}’; exit." - "".format(id, fields[0], - stash[fields[0]][2], fields[1])); - else: - stash[fields[0]] = (len(stash), fields[0], fields[1]); - return "{}__{}__{}".format(prefix, stash[fields[0]][0], suffix); - - alignment = read_alignment(alignment); - for line in fp: - line = line.strip(); - if len(line) == 0: - if len(lines) > 0: - i = mapping = None; - try: - i, mapping = next(alignment); - except Exception as error: - print("amr_lines(): missing alignment for graph #{}." - "".format(id), file = sys.stderr); - pass; - yield id, snt, " ".join(lines), stash.values(), \ - mapping if mapping is not None and i == id else None; - id, lines = None, []; stash.clear(); - else: - if line.startswith("#"): - if line.startswith("# ::id"): - id = line.split()[2]; - if line.startswith("# ::snt"): - snt = line[8:].strip(); - else: - if camr: - line = re.sub(r'((?:^|[ \t]):[^( ]+)\([^ \t]*\)([ \t]|$)', - "\\1\\2", line, count = 0); - line = re.sub(r'(^|[ \t])(x[0-9]+/[^ \t]+)([ \t]|$)', - _stash_, line, count = 0); - lines.append(line) - if len(lines) > 0: - i = mapping = None; - try: - i, mapping = next(alignment); - except: - print("amr_lines(): missing alignment for graph #{}." 
- "".format(id), file = sys.stderr); - pass; - yield id, snt, " ".join(lines), stash.values(), \ - mapping if mapping is not None and i == id else None; - -def read_alignment(stream): - if stream is None: - while True: yield None, None; - else: - id = None; - alignment = dict(); - for line in stream: - line = line.strip(); - if len(line) == 0: - yield id, alignment; - id = None; - alignment.clear(); - else: - if line.startswith("#"): - if line.startswith("# ::id"): - id = line.split()[2]; - else: - fields = line.split("\t"); - if len(fields) == 2: - start, end = fields[1].split("-"); - span = set(range(int(start), int(end) + 1)); - fields = fields[0].split(); - if len(fields) > 1 and fields[1].startswith(":"): - fields[1] = fields[1][1:]; - if fields[1] == "wiki": continue; - if fields[0] not in alignment: - alignment[fields[0]] = bucket = dict(); - else: bucket = alignment[fields[0]]; - path = tuple(fields[1:]); - if path not in bucket: bucket[path] = can = set(); - else: can = bucket[path]; - can |= span; - yield id, alignment; - -def amr2graph(id, amr, text, stash, camr = False, - full = False, reify = False, quiet = False, alignment = None): - graph = Graph(id, flavor = 2, framework = "amr"); - node2id = dict(); - anchoring = list(); - - i = 0; - def _anchor_(form): - nonlocal i; - m = None; - j = graph.input.find(form, i); - if j >= i: - i, m = j, len(form); - else: - base = form; - k, l = len(graph.input), 0; - for old, new in {("‘", "`"), ("‘", "'"), ("’", "'"), ("`", "'"), - ("“", "\""), ("”", "\""), - ("–", "--"), ("–", "---"), ("—", "---"), - ("…", "..."), ("…", ". . .")}: - form = base.replace(old, new); - j = graph.input.find(form, i); - if j >= i and j < k: k, l = j, len(form); - if k < len(graph.input): i, m = k, l; - if m: - match = {"from": i, "to": i + m}; - i += m; - return match; - else: - raise Exception("failed to anchor |{}| in |{}|{}| ({})" - "".format(form, graph.input[:i], - graph.input[i:], i)); - - if text: - graph.add_input(text, quiet = quiet); - if camr: - for token in graph.input.split(" "): - anchoring.append(_anchor_(token)); - i = 0; - for n, v, a in zip(amr.nodes, amr.node_values, amr.attributes): - j = i; - node2id[n] = j; - top = False; - for key, val in a: - if key == "TOP": - top = True; - anchors = find_anchors(n, anchoring) if camr else None; - node = graph.add_node(j, label = v, top = top, anchors = anchors); - i += 1 - for key, val in a: - if STASH.match(val) is not None: - index = int(val[2:-2]); - val = next(v for k, x, v in stash if k == index); - if key != "TOP" and (key not in {"wiki"} or full): - if val.endswith("¦"): - val = val[:-1]; - if reify: - graph.add_node(i, label = val); - graph.add_edge(j, i, key); - i += 1 - else: - # - # _fix_me_ - # this assumes that properties are unique. 
(1-apr-20; oe) - # - node.set_property(key.lower(), str(val).lower()); - - for src, r in zip(amr.nodes, amr.relations): - for label, tgt in r: - normal = None; - if label == "mod": - normal = "domain"; - elif label.endswith("-of-of") \ - or label.endswith("-of") \ - and label not in {"consist-of" "subset-of"} \ - and not label.startswith("prep-"): - normal = label[:-3]; - graph.add_edge(node2id[src], node2id[tgt], label, normal) - - overlay = None; - if alignment is not None: - overlay = Graph(id, flavor = -1, framework = "anchoring"); - for node in alignment: - for path, span in alignment[node].items(): - if len(path) == 0: - anchors = [{"#": token} for token in span]; - node = overlay.add_node(node2id[node], anchors = anchors); - for node in alignment: - id = node2id[node]; - for path, span in alignment[node].items(): - if len(path) == 1: - key = path[0].lower(); - node = overlay.find_node(id); - if node is None: node = overlay.add_node(id); - reference = graph.find_node(id); - anchors = [{"#": token} for token in span]; - if reference.properties is not None \ - and key in reference.properties: - node.set_anchoring(key, anchors); - else: - edge = next(edge for edge in graph.edges if edge.lab.lower() == key and edge.src == id); - overlay.edges.add(Edge(edge.id, None, None, None, anchors = anchors)); - elif len(path) > 1: - print("amr2graph(): " - "ignoring alignment path {} on node #{} ({})" - "".format(path, id, node)); - - return graph, overlay; - -def find_anchors(index, anchors): - result = list(); - for match in INDEX.finditer(index): - i, suffix = match.group(1), match.group(2); - i = int(i) - 1; - if i >= len(anchors): continue; - anchor = anchors[i]; - if suffix != "": - fields = suffix[1:].split("_"); - start = anchor["from"]; - for field in fields: - j = int(field); - result.append({"from": start + j - 1, "to": start + j}); - else: - result.append(anchor); - return result if len(result) > 0 else None; - -def convert_amr_id(id): - m = re.search(r'wsj_([0-9]+)\.([0-9]+)', id); - if m: - return "2%04d%03d" % (int(m.group(1)), int(m.group(2))); - m = re.search(r'lpp_1943\.([0-9]+)', id); - if m: - return "1%04d0" % (int(m.group(1))); - else: - raise Exception('Could not convert id: %s' % id); - -def read(fp, full = False, reify = False, camr = False, - text = None, alignment = None, - quiet = False, trace = 0): - n = 0; - for id, snt, amr_line, stash, mapping in amr_lines(fp, camr, alignment): - if trace: - print("{}: {}".format(id, amr_line), file = sys.stderr); - amr = AMR.parse_AMR_line(amr_line); - if not amr: - raise Exception("failed to parse #{} ‘{}’; exit." - "".format(id, amr_line)); - if id is not None: - try: - id = convert_amr_id(id); - except: - pass; - else: - id = n; - n += 1; - graph, overlay = amr2graph(id, amr, text or snt, stash, - camr, full, reify, quiet, mapping); - yield graph, overlay; diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/malloc_and_free.h b/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/malloc_and_free.h deleted file mode 100644 index 01ab1e6dbe1732da1f8606b7a9121c1b404edb6f..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/malloc_and_free.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system inherits malloc and free -#include - diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/README.md b/spaces/manavisrani07/gradio-lipsync-wav2lip/README.md deleted file mode 100644 index 69a6563ccbbc40e8232b36dc7f515af541a692de..0000000000000000000000000000000000000000 --- a/spaces/manavisrani07/gradio-lipsync-wav2lip/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Gradio Lipsync Wav2lip -emoji: 👄 -colorFrom: indigo -colorTo: blue -sdk: gradio -python_version: 3.8 -sdk_version: 3.40.1 -suggested_hardware: "t4-medium" -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/manutej/imagedemo1/README.md b/spaces/manutej/imagedemo1/README.md deleted file mode 100644 index c894c4a312d26edd4ef0a4a209bb176eb95770f0..0000000000000000000000000000000000000000 --- a/spaces/manutej/imagedemo1/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Imagedemo1 -emoji: 🌍 -colorFrom: indigo -colorTo: pink -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/marker22/Bark-Voice-Cloning/cloning/__init__.py b/spaces/marker22/Bark-Voice-Cloning/cloning/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/dataset/audio2landmark/audio2landmark_noautovc_dataset.py b/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/dataset/audio2landmark/audio2landmark_noautovc_dataset.py deleted file mode 100644 index 05f0cf3fb39e137642d235cd8a0b64fcd0a365ea..0000000000000000000000000000000000000000 --- a/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/dataset/audio2landmark/audio2landmark_noautovc_dataset.py +++ /dev/null @@ -1,306 +0,0 @@ -""" - # Copyright 2020 Adobe - # All Rights Reserved. - - # NOTICE: Adobe permits you to use, modify, and distribute this file in - # accordance with the terms of the Adobe license agreement accompanying - # it. 
- -""" - -import torch.utils.data as data -import torch -import numpy as np -import os -import pickle -import random -from scipy.signal import savgol_filter -from util.icp import icp -from scipy.spatial.transform import Rotation as R -from tqdm import tqdm -from scipy.linalg import logm - -STD_FACE_LANDMARK_FILE_DIR = 'dataset/utils/STD_FACE_LANDMARKS.txt' - - -class Audio2landmark_Dataset(data.Dataset): - - def __init__(self, dump_dir, dump_name, num_window_frames, num_window_step, status, noautovc=''): - self.dump_dir = dump_dir - self.num_window_frames = num_window_frames - self.num_window_step = num_window_step - - # Step 1 : load A / V data from dump files - print('Loading Data {}_{}'.format(dump_name, status)) - - with open(os.path.join(self.dump_dir, '{}_{}_{}au.pickle'.format(dump_name, status, noautovc)), 'rb') as fp: - self.au_data = pickle.load(fp) - with open(os.path.join(self.dump_dir, '{}_{}_{}fl.pickle'.format(dump_name, status, noautovc)), 'rb') as fp: - self.fl_data = pickle.load(fp) - - valid_idx = list(range(len(self.au_data))) - - random.seed(0) - random.shuffle(valid_idx) - self.fl_data = [self.fl_data[i] for i in valid_idx] - self.au_data = [self.au_data[i] for i in valid_idx] - - # # normalize fls - # for i in range(len(self.fl_data)): - # shape_3d = self.fl_data[i][0].reshape((-1, 68, 3)) - # scale = np.abs(1.0 / (shape_3d[:, 36:37, 0:1] - shape_3d[:, 45:46, 0:1])) - # shift = - 0.5 * (shape_3d[:, 36:37] + shape_3d[:, 45:46]) - # shape_3d = (shape_3d + shift) * scale - # self.fl_data[i] = (shape_3d.reshape(-1, 204), self.fl_data[i][1]) - - # tmp = [au for au, info in self.au_data] - # tmp = np.concatenate(tmp, axis=0) - # au_mean, au_std = np.mean(tmp, axis=0), np.std(tmp, axis=0) - # np.savetxt('dataset/utils/MEAN_STD_NOAUTOVC_AU.txt', np.concatenate([au_mean, au_std], axis=0).reshape(-1)) - # print(tmp.shape) - # exit(0) - - - au_mean_std = np.loadtxt('dataset/utils/MEAN_STD_NOAUTOVC_AU.txt') # np.mean(self.au_data[0][0]), np.std(self.au_data[0][0]) - au_mean, au_std = au_mean_std[0:au_mean_std.shape[0]//2], au_mean_std[au_mean_std.shape[0]//2:] - - self.au_data = [((au - au_mean) / au_std, info) for au, info in self.au_data] - - - def __len__(self): - return len(self.fl_data) - - def __getitem__(self, item): - # print('-> get item {}: {} {}'.format(item, self.fl_data[item][1][0], self.fl_data[item][1][1])) - return self.fl_data[item], self.au_data[item] - - def my_collate_in_segments(self, batch): - fls, aus, embs = [], [], [] - for fl, au in batch: - fl_data, au_data, emb_data = fl[0], au[0], au[1][2] - assert (fl_data.shape[0] == au_data.shape[0]) - - fl_data = torch.tensor(fl_data, dtype=torch.float, requires_grad=False) - au_data = torch.tensor(au_data, dtype=torch.float, requires_grad=False) - emb_data = torch.tensor(emb_data, dtype=torch.float, requires_grad=False) - - # window shift data - fls += [fl_data[i:i + self.num_window_frames] #- fl_data[i] - for i in range(0, fl_data.shape[0] - self.num_window_frames, self.num_window_step)] - aus += [au_data[i:i + self.num_window_frames] - for i in range(0, au_data.shape[0] - self.num_window_frames, self.num_window_step)] - embs += [emb_data] * ((au_data.shape[0] - self.num_window_frames) // self.num_window_step) - - # fls = torch.tensor(fls, dtype=torch.float, requires_grad=False) - # aus = torch.tensor(aus, dtype=torch.float, requires_grad=False) - # embs = torch.tensor(embs, dtype=torch.float, requires_grad=False) - - fls = torch.stack(fls, dim=0) - aus = torch.stack(aus, dim=0) - embs = torch.stack(embs, 
dim=0) - - return fls, aus, embs - - def my_collate_in_segments_noemb(self, batch): - fls, aus, embs = [], [], [] - for fl, au in batch: - fl_data, au_data = fl[0], au[0] - assert (fl_data.shape[0] == au_data.shape[0]) - - fl_data = torch.tensor(fl_data, dtype=torch.float, requires_grad=False) - au_data = torch.tensor(au_data, dtype=torch.float, requires_grad=False) - - # window shift data - fls += [fl_data[i:i + self.num_window_frames] # - fl_data[i] - for i in range(0, fl_data.shape[0] - self.num_window_frames, self.num_window_step)] - aus += [au_data[i:i + self.num_window_frames] - for i in range(0, au_data.shape[0] - self.num_window_frames, self.num_window_step)] - - fls = torch.stack(fls, dim=0) - aus = torch.stack(aus, dim=0) - - return fls, aus - - -def estimate_neck(fl): - mid_ch = (fl[2, :] + fl[14, :]) * 0.5 - return (mid_ch * 2 - fl[33, :]).reshape(1, 3) - -def norm_output_fls_rot(fl_data_i, anchor_t_shape=None): - - # fl_data_i = savgol_filter(fl_data_i, 21, 3, axis=0) - - t_shape_idx = (27, 28, 29, 30, 33, 36, 39, 42, 45) - if(anchor_t_shape is None): - anchor_t_shape = np.loadtxt( - r'dataset/utils/ANCHOR_T_SHAPE_{}.txt'.format(len(t_shape_idx))) - s = np.abs(anchor_t_shape[5, 0] - anchor_t_shape[8, 0]) - anchor_t_shape = anchor_t_shape / s * 1.0 - c2 = np.mean(anchor_t_shape[[4,5,8], :], axis=0) - anchor_t_shape -= c2 - - else: - anchor_t_shape = anchor_t_shape.reshape((68, 3)) - anchor_t_shape = anchor_t_shape[t_shape_idx, :] - - fl_data_i = fl_data_i.reshape((-1, 68, 3)).copy() - - # get rot_mat - rot_quats = [] - rot_trans = [] - for i in range(fl_data_i.shape[0]): - line = fl_data_i[i] - frame_t_shape = line[t_shape_idx, :] - T, distance, itr = icp(frame_t_shape, anchor_t_shape) - rot_mat = T[:3, :3] - trans_mat = T[:3, 3:4] - - # norm to anchor - fl_data_i[i] = np.dot(rot_mat, line.T).T + trans_mat.T - - # inverse (anchor -> reat_t) - # tmp = np.dot(rot_mat.T, (anchor_t_shape - trans_mat.T).T).T - - r = R.from_matrix(rot_mat) - rot_quats.append(r.as_quat()) - # rot_eulers.append(r.as_euler('xyz')) - rot_trans.append(T[:3, :]) - - rot_quats = np.array(rot_quats) - rot_trans = np.array(rot_trans) - - return rot_trans, rot_quats, fl_data_i - -def close_face_lip(fl): - facelandmark = fl.reshape(-1, 68, 3) - from util.geo_math import area_of_polygon - min_area_lip, idx = 999, 0 - for i, fls in enumerate(facelandmark): - area_of_mouth = area_of_polygon(fls[list(range(60, 68)), 0:2]) - if (area_of_mouth < min_area_lip): - min_area_lip = area_of_mouth - idx = i - return idx - - - -class Speaker_aware_branch_Dataset(data.Dataset): - - def __init__(self, dump_dir, dump_name, num_window_frames, num_window_step, status, use_11spk_only=False, noautovc=''): - self.dump_dir = dump_dir - self.num_window_frames = num_window_frames - self.num_window_step = num_window_step - - # Step 1 : load A / V data from dump files - print('Loading Data {}_{}'.format(dump_name, status)) - - with open(os.path.join(self.dump_dir, '{}_{}_{}au.pickle'.format(dump_name, status, noautovc)), 'rb') as fp: - self.au_data = pickle.load(fp) - with open(os.path.join(self.dump_dir, '{}_{}_{}fl.pickle'.format(dump_name, status, noautovc)), 'rb') as fp: - self.fl_data = pickle.load(fp) - try: - with open(os.path.join(self.dump_dir, '{}_{}_gaze.pickle'.format(dump_name, status)), 'rb') as fp: - gaze = pickle.load(fp) - self.rot_trans = gaze['rot_trans'] - self.rot_quats = gaze['rot_quat'] - self.anchor_t_shape = gaze['anchor_t_shape'] - - # print('raw:', np.sqrt(np.sum((logm(self.rot_trans[0][0, :3, 
:3].dot(self.rot_trans[0][5, :3, :3].T)))**2)/2.)) - # print('axis-angle:',np.arccos((np.sum(np.trace(self.rot_trans[0][0, :3, :3].dot(self.rot_trans[0][5, :3, :3].T)))-1.)/2.)) - # print('quat:', 2 * np.arccos(np.abs(self.rot_eulers[0][0].dot(self.rot_eulers[0][5].T)))) - # exit(0) - except: - print(os.path.join(self.dump_dir, '{}_{}_gaze.pickle'.format(dump_name, status))) - print('gaze file not found') - exit(-1) - - - valid_idx = [] - for i, fl in enumerate(self.fl_data): - if(use_11spk_only): - if(fl[1][1][:-4].split('_x_')[1] in ['48uYS3bHIA8', 'E0zgrhQ0QDw', 'E_kmpT-EfOg', 'J-NPsvtQ8lE', 'Z7WRt--g-h4', '_ldiVrXgZKc', 'irx71tYyI-Q', 'sxCbrYjBsGA', 'wAAMEC1OsRc', 'W6uRNCJmdtI', 'bXpavyiCu10']): - # print(i, fl[1][1][:-4]) - valid_idx.append(i) - else: - valid_idx.append(i) - - random.seed(0) - random.shuffle(valid_idx) - self.fl_data = [self.fl_data[i] for i in valid_idx] - self.au_data = [self.au_data[i] for i in valid_idx] - self.rot_trans = [self.rot_trans[i] for i in valid_idx] - self.rot_quats = [self.rot_quats[i] for i in valid_idx] - self.anchor_t_shape = [self.anchor_t_shape[i] for i in valid_idx] - - self.t_shape_idx = (27, 28, 29, 30, 33, 36, 39, 42, 45) - - # ''' PRODUCE gaze file for the first time ''' - # self.rot_trans = [] - # self.rot_quats = [] - # self.anchor_t_shape = [] - # - # for fl in tqdm(self.fl_data): - # fl = fl[0].reshape((-1, 68, 3)) - # rot_trans, rot_quats, anchor_t_shape = norm_output_fls_rot(fl, anchor_t_shape=None) - # self.rot_trans.append(rot_trans) - # self.rot_quats.append(rot_quats) - # self.anchor_t_shape.append(anchor_t_shape) - # - # with open(os.path.join(self.dump_dir, '{}_{}_gaze.pickle'.format(dump_name, status)), 'wb') as fp: - # gaze = {'rot_trans':self.rot_trans, 'rot_quat':self.rot_quats, 'anchor_t_shape':self.anchor_t_shape} - # pickle.dump(gaze, fp) - # print('SAVE!') - - - au_mean_std = np.loadtxt('dataset/utils/MEAN_STD_AUTOVC_RETRAIN_MEL_AU.txt') # np.mean(self.au_data[0][0]), np.std(self.au_data[0][0]) - au_mean, au_std = au_mean_std[0:au_mean_std.shape[0]//2], au_mean_std[au_mean_std.shape[0]//2:] - - self.au_data = [((au - au_mean) / au_std, info) for au, info in self.au_data] - - def __len__(self): - return len(self.fl_data) - - def __getitem__(self, item): - # print('-> get item {}: {} {}'.format(item, self.fl_data[item][1][0], self.fl_data[item][1][1])) - return self.fl_data[item], self.au_data[item], self.rot_trans[item], \ - self.rot_quats[item], self.anchor_t_shape[item] - - def my_collate_in_segments(self, batch): - fls, aus, embs, regist_fls, rot_trans, rot_quats = [], [], [], [], [], [] - for fl, au, rot_tran, rot_quat, anchor_t_shape in batch: - fl_data, au_data, emb_data = fl[0], au[0], au[1][2] - assert (fl_data.shape[0] == au_data.shape[0]) - - fl_data = torch.tensor(fl_data, dtype=torch.float, requires_grad=False) - au_data = torch.tensor(au_data, dtype=torch.float, requires_grad=False) - emb_data = torch.tensor(emb_data, dtype=torch.float, requires_grad=False) - - rot_tran_data = torch.tensor(rot_tran, dtype=torch.float, requires_grad=False) - minus_eye = torch.cat([torch.eye(3).unsqueeze(0), torch.zeros((1, 3, 1))], dim=2) - rot_tran_data -= minus_eye - rot_quat_data = torch.tensor(rot_quat, dtype=torch.float, requires_grad=False) - regist_fl_data = torch.tensor(anchor_t_shape, dtype=torch.float, requires_grad=False).view(-1, 204) - - # window shift data - fls += [fl_data[i:i + self.num_window_frames] #- fl_data[i] - for i in range(0, fl_data.shape[0] - self.num_window_frames, self.num_window_step)] - aus += 
[au_data[i:i + self.num_window_frames] - for i in range(0, au_data.shape[0] - self.num_window_frames, self.num_window_step)] - embs += [emb_data] * ((au_data.shape[0] - self.num_window_frames) // self.num_window_step) - - regist_fls += [regist_fl_data[i:i + self.num_window_frames] # - fl_data[i] - for i in range(0, regist_fl_data.shape[0] - self.num_window_frames, self.num_window_step)] - rot_trans += [rot_tran_data[i:i + self.num_window_frames] # - fl_data[i] - for i in range(0, rot_tran_data.shape[0] - self.num_window_frames, self.num_window_step)] - rot_quats += [rot_quat_data[i:i + self.num_window_frames] # - fl_data[i] - for i in range(0, rot_quat_data.shape[0] - self.num_window_frames, self.num_window_step)] - - fls = torch.stack(fls, dim=0) - aus = torch.stack(aus, dim=0) - embs = torch.stack(embs, dim=0) - - regist_fls = torch.stack(regist_fls, dim=0) - rot_trans = torch.stack(rot_trans, dim=0) - rot_quats = torch.stack(rot_quats, dim=0) - - return fls, aus, embs, regist_fls, rot_trans, rot_quats diff --git a/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/dataset/utils/__init__.py b/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/dataset/utils/__init__.py deleted file mode 100644 index 7f3999734455352473532ef25cddf059eb5baee3..0000000000000000000000000000000000000000 --- a/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/dataset/utils/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -""" - # Copyright 2020 Adobe - # All Rights Reserved. - - # NOTICE: Adobe permits you to use, modify, and distribute this file in - # accordance with the terms of the Adobe license agreement accompanying - # it. - -""" - diff --git a/spaces/mattclifford1/IQM-VIS/app.py b/spaces/mattclifford1/IQM-VIS/app.py deleted file mode 100644 index c7a672c7c54e58c322f9b96c7a74c463f7fe41e3..0000000000000000000000000000000000000000 --- a/spaces/mattclifford1/IQM-VIS/app.py +++ /dev/null @@ -1,170 +0,0 @@ -''' -streamlit app for IQM-VIS -streamlit SDK reloads this whole script on every user interaction with the UI -''' -# Author: Matt Clifford -from io import StringIO -import streamlit as st -import numpy as np -import cv2 -import utils -import style - -trans = utils.get_transformations() -metric_params = utils.get_metric_params() -sliders = {'transforms': {}, 'metric_params': {}} -update_graphs = 0 - -# SIDEBAR - to select what transforms and parameters to use -sliders_to_use = {} -with st.sidebar: - # check boxes to include sliders - st.write('Transformations to include:') - for key in trans: - sliders_to_use[key] = st.checkbox(key, value=True) - # transformation sliders - st.markdown('## Transformations:') - keys = list(trans.keys()) # fix as list as interaction with the checkbox above will change the dict - for key in keys: - if sliders_to_use[key]: - if 'init_value' not in trans[key].keys(): - trans[key]['init_value'] = 1.0 - sliders['transforms'][key] = st.slider(key, - min_value=trans[key]['min'], - max_value=trans[key]['max'], - value=trans[key]['init_value'], - key=f'transforms_{key}') - else: - trans.pop(key) - # metric parameter sliders - st.markdown('## Metric Parameters:') - for key, param_item in metric_params.items(): - if 'init_value' not in param_item.keys(): - metric_params[key]['init_value'] = 1.0 - sliders['metric_params'][key] = st.slider(key, - min_value=param_item['min'], - max_value=param_item['max'], - value=metric_params[key]['init_value'], - key=f'metric_params_{key}') - # buttons - cols = st.columns(2) - with cols[0]: - st.button('Reset Sliders', - 
on_click=utils.reset_all_sliders, - kwargs={'dict_list':[trans, metric_params], 'group_list':['transforms', 'metric_params']}) - with cols[1]: - if st.button('Update Graphs'): - update_graphs = sliders['metric_params'] # cache these values - - -# GENERAL INFO -with open('info.md') as f: - st.markdown(f.read()) - -# IMAGES -st.markdown('## Image File:') -copy_image = st.checkbox('Use the same image for X and T(X)', value=True) -cols = st.columns(2) -with cols[0]: - uploaded_file = st.file_uploader("Upload your own image:") - if not copy_image: - uploaded_file_trans = st.file_uploader("And T(X) image:") -with cols[1]: - if uploaded_file is None: - im_num = st.selectbox( - 'Or choose from our sample images:', - ('1', '2', '3')) - if not copy_image: - im_num_trans = st.selectbox( - 'Choose from our sample T(X) images:', - ('1', '2', '3')) - - -resize_im = 256 -st.markdown('## Images:') -# get reference image -if uploaded_file is not None: - bytes_data = uploaded_file.getvalue() - im1 = utils.image_bytes_to_np(bytes_data, resize=resize_im) - im1_name = uploaded_file.name -else: - im1 = utils.load_sample_image(im_num) - im1_name = f'X{im_num}' -# get transform image -if copy_image: - im2 = im1 - im2_name = im1_name -else: - if uploaded_file_trans is not None: - bytes_data = uploaded_file_trans.getvalue() - im2 = utils.image_bytes_to_np(bytes_data, resize=resize_im) - im2_name = uploaded_file_trans.name - else: - # keep the uploaded image for now - if uploaded_file is not None: - im2 = im1 - im2_name = im1_name - # default back to using the sample image - else: - im2 = utils.load_sample_image(im_num_trans) - im2_name = f'X{im_num_trans}' - -if im1.shape != im2.shape: - st.write(f'## Please upload images with the same aspect ratio, not: {im1.shape} and {im2.shape}') -else: - data_store = utils.get_data_store((im1_name, im1), - (im2_name, im2)) - - trans_im = utils.transform_image(data_store.get_transform_image(), sliders['transforms'], trans) - metric_images = data_store.get_metric_images(trans_im, **sliders['metric_params']) - - cols = st.columns(2+len(metric_images.keys())) - col_num = 0 - with cols[col_num]: - st.image(data_store.get_reference_image(), caption=data_store.get_reference_image_name()) - col_num += 1 - with cols[col_num]: - st.image(trans_im, caption=f'T({data_store.get_transform_image_name()})') - col_num += 1 - - for key in metric_images.keys(): - with cols[col_num]: - st.image(metric_images[key], caption=key) - col_num += 1 - - cols = st.columns(3) - with cols[0]: - st.markdown('## Metrics plots:') - with cols[2]: - range_trans_option = st.selectbox('Transformation range plot', list(trans.keys())) - - cols = st.columns(3) - col_num = 0 - with cols[col_num]: - metrics = data_store.get_metrics(trans_im, **sliders['metric_params']) - if len(metrics.keys()) == 1: - st.write(metrics) - else: - fig = utils.plot_metrics(metrics) - # X = np.array(fig.canvas.renderer.buffer_rgba()) - # st.image(X, caption='Metrics') - st.pyplot(fig) - st.write('IQM values for current transformation parameters') - col_num += 1 - with cols[col_num]: - if len(trans.keys()) > 2: - fig = utils.get_metrics_avg_graphs(data_store, sliders['transforms'], sliders['metric_params'], trans, data_store.get_reference_image_name(), data_store.get_transform_image_name(), update_graphs) - st.pyplot(fig) - st.write('Average IQM value when each transformation is varied over its range and all other transforms are kept at their default value')# current slider values') - else: - st.write('Include 3 or more transformations to 
get averaging radar plots') - col_num += 1 - - with cols[col_num]: - if len(trans.keys()) > 0: - fig = utils.get_metric_range_graphs(data_store, range_trans_option, sliders['metric_params'], trans, data_store.get_reference_image_name(), data_store.get_transform_image_name(), update_graphs) - st.pyplot(fig) - st.write('IQM values over each parameter range') - else: - st.write('Include transformations to get IQM response profiles') - col_num += 1 diff --git a/spaces/matthoffner/AudioCraft_Plus/audiocraft/modules/transformer.py b/spaces/matthoffner/AudioCraft_Plus/audiocraft/modules/transformer.py deleted file mode 100644 index 048c06dfbb0ab4167afce95dffb73dcc343c2344..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/audiocraft/modules/transformer.py +++ /dev/null @@ -1,747 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Transformer model, with streaming support, xformer attention support -and easy causal attention with a potentially finite receptive field. - -See `StreamingTransformer` for more information. - -Unlike regular PyTorch Transformer, we make the hard choice that batches are first. -""" - -import typing as tp - -from einops import rearrange -import torch -import torch.nn as nn -from torch.nn import functional as F -from torch.utils.checkpoint import checkpoint as torch_checkpoint -from xformers import ops - -from .rope import RotaryEmbedding -from .streaming import StreamingModule - -_efficient_attention_backend: str = 'torch' - - -def set_efficient_attention_backend(backend: str = 'torch'): - # Using torch by default, it seems a bit faster on older P100 GPUs (~20% faster). - global _efficient_attention_backend - assert _efficient_attention_backend in ['xformers', 'torch'] - _efficient_attention_backend = backend - - -def _get_attention_time_dimension() -> int: - if _efficient_attention_backend == 'torch': - return 2 - else: - return 1 - - -def _is_profiled() -> bool: - # Return true if we are currently running with a xformers profiler activated. - try: - from xformers.profiler import profiler - except ImportError: - return False - return profiler._Profiler._CURRENT_PROFILER is not None - - -def create_norm_fn(norm_type: str, dim: int, **kwargs) -> nn.Module: - """Create normalization module for transformer encoder layer. - - Args: - norm_type (str): Normalization method. - dim (int): Dimension of the normalized layer. - **kwargs (dict): Additional parameters for normalization layer. - Returns: - nn.Module: Normalization module. - """ - if norm_type == 'layer_norm': - return nn.LayerNorm(dim, eps=1e-5, **kwargs) - else: - raise ValueError(f"Unknown norm type: {norm_type}") - - -def create_sin_embedding(positions: torch.Tensor, dim: int, max_period: float = 10000, - dtype: torch.dtype = torch.float32) -> torch.Tensor: - """Create sinusoidal positional embedding, with shape `[B, T, C]`. - - Args: - positions (torch.Tensor): LongTensor of positions. - dim (int): Dimension of the embedding. - max_period (float): Maximum period of the cosine/sine functions. - dtype (torch.dtype or str): dtype to use to generate the embedding. - Returns: - torch.Tensor: Sinusoidal positional embedding. 
- """ - # We aim for BTC format - assert dim % 2 == 0 - half_dim = dim // 2 - positions = positions.to(dtype) - adim = torch.arange(half_dim, device=positions.device, dtype=dtype).view(1, 1, -1) - max_period_tensor = torch.full([], max_period, device=positions.device, dtype=dtype) # avoid sync point - phase = positions / (max_period_tensor ** (adim / (half_dim - 1))) - return torch.cat([torch.cos(phase), torch.sin(phase)], dim=-1) - - -def expand_repeated_kv(x: torch.Tensor, n_rep: int) -> torch.Tensor: - """torch.repeat_interleave(x, dim=2, repeats=n_rep) from xlformers.""" - if n_rep == 1: - return x - if _efficient_attention_backend == 'torch': - bs, n_kv_heads, slen, head_dim = x.shape - return ( - x[:, :, None, :, :] - .expand(bs, n_kv_heads, n_rep, slen, head_dim) - .reshape(bs, n_kv_heads * n_rep, slen, head_dim) - ) - else: - bs, slen, n_kv_heads, head_dim = x.shape - return ( - x[:, :, :, None, :] - .expand(bs, slen, n_kv_heads, n_rep, head_dim) - .reshape(bs, slen, n_kv_heads * n_rep, head_dim) - ) - - -class LayerScale(nn.Module): - """Layer scale from [Touvron et al 2021] (https://arxiv.org/pdf/2103.17239.pdf). - This rescales diagonally the residual outputs close to 0, with a learnt scale. - - Args: - channels (int): Number of channels. - init (float): Initial scale. - channel_last (bool): If True, expect `[*, C]` shaped tensors, otherwise, `[*, C, T]`. - device (torch.device or str, optional): Device on which to initialize the module. - dtype (torch.dtype, optional): dtype to use to initialize the module. - """ - def __init__(self, channels: int, init: float = 1e-4, channel_last: bool = True, - device=None, dtype=None): - super().__init__() - self.channel_last = channel_last - self.scale = nn.Parameter( - torch.full((channels,), init, - requires_grad=True, device=device, dtype=dtype)) - - def forward(self, x: torch.Tensor): - if self.channel_last: - return self.scale * x - else: - return self.scale[:, None] * x - - -class StreamingMultiheadAttention(StreamingModule): - """Similar to `nn.MultiheadAttention` but with support for streaming, causal evaluation. - - Args: - embed_dim (int): Dimension to project to. - num_heads (int): Number of heads. - dropout (float): Dropout level. - bias (bool): Use bias in projections. - causal (bool): Causal mask applied automatically. - past_context (int, optional): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - rope (`RotaryEmbedding`, optional): Rope embedding to use. - cross_attention: Should be true when used as a cross attention. - All keys and values must be available at once, streaming is only for the queries. - Cannot be used with `causal` or `rope` (as it wouldn't make sens to - interpret the time steps in the keys relative to those in the queries). - safe_streaming (bool): Bug fix, will go away with xformers update. - qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product. - kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads). - This will lead to faster decoding time on A100 or other GPUs with tensorcore. - device (torch.device, optional): Device on which to initialize. - dtype (torch.dtype, optional): dtype to use. 
- """ - def __init__(self, embed_dim: int, num_heads: int, dropout: float = 0.0, bias: bool = True, - causal: bool = False, past_context: tp.Optional[int] = None, custom: bool = False, - memory_efficient: bool = False, attention_as_float32: bool = False, - rope: tp.Optional[RotaryEmbedding] = None, cross_attention: bool = False, - safe_streaming: bool = True, qk_layer_norm: bool = False, kv_repeat: int = 1, - device=None, dtype=None): - super().__init__() - factory_kwargs = {'device': device, 'dtype': dtype} - if past_context is not None: - assert causal - - self.embed_dim = embed_dim - self.causal = causal - self.past_context = past_context - self.memory_efficient = memory_efficient - self.attention_as_float32 = attention_as_float32 - self.rope = rope - self.cross_attention = cross_attention - self.safe_streaming = safe_streaming - self.num_heads = num_heads - self.dropout = dropout - self.kv_repeat = kv_repeat - if cross_attention: - assert not causal, "Causal cannot work with cross attention." - assert rope is None, "Rope cannot work with cross attention." - - if memory_efficient: - _verify_xformers_memory_efficient_compat() - - self.custom = _is_custom(custom, memory_efficient) - if self.custom: - out_dim = embed_dim - assert num_heads % kv_repeat == 0 - assert not cross_attention or kv_repeat == 1 - num_kv = num_heads // kv_repeat - kv_dim = (embed_dim // num_heads) * num_kv - out_dim += 2 * kv_dim - in_proj = nn.Linear(embed_dim, out_dim, bias=bias, **factory_kwargs) - # We try to follow the default PyTorch MHA convention, to easily compare results. - self.in_proj_weight = in_proj.weight - self.in_proj_bias = in_proj.bias - if bias: - self.in_proj_bias.data.zero_() # Following Pytorch convention - self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias, **factory_kwargs) - if bias: - self.out_proj.bias.data.zero_() - else: - assert not qk_layer_norm - assert kv_repeat == 1 - self.mha = nn.MultiheadAttention( - embed_dim, num_heads, dropout=dropout, bias=bias, batch_first=True, - **factory_kwargs) - self.qk_layer_norm = qk_layer_norm - if qk_layer_norm: - assert self.custom - assert kv_repeat == 1 - ln_dim = embed_dim - self.q_layer_norm = nn.LayerNorm(ln_dim) - self.k_layer_norm = nn.LayerNorm(ln_dim) - - def _load_from_state_dict(self, state_dict, prefix, *args, **kwargs): - if not self.custom: - # Support compat with regular MHA - keys = [n for n, _ in self.mha.named_parameters()] - for key in keys: - if prefix + key in state_dict: - state_dict[prefix + "mha." + key] = state_dict.pop(prefix + key) - super()._load_from_state_dict(state_dict, prefix, *args, **kwargs) - - def _get_mask(self, current_steps: int, device: torch.device, dtype: torch.dtype): - # Return a causal mask, accounting for potentially stored past keys/values - # We actually return a bias for the attention score, as this has the same - # convention both in the builtin MHA in Pytorch, and Xformers functions. - time_dim = _get_attention_time_dimension() - if self.memory_efficient: - from xformers.ops import LowerTriangularMask - if current_steps == 1: - # If we only have one step, then we do not need a mask. 
- return None - elif 'past_keys' in self._streaming_state: - raise RuntimeError("Not supported at the moment") - else: - # Then we can safely use a lower triangular mask - return LowerTriangularMask() - if self._streaming_state: - past_keys = self._streaming_state['past_keys'] - past_steps = past_keys.shape[time_dim] - else: - past_steps = 0 - - queries_pos = torch.arange( - past_steps, current_steps + past_steps, device=device).view(-1, 1) - keys_pos = torch.arange(past_steps + current_steps, device=device).view(1, -1) - delta = queries_pos - keys_pos - valid = delta >= 0 - if self.past_context is not None: - valid &= (delta <= self.past_context) - return torch.where( - valid, - torch.zeros([], device=device, dtype=dtype), - torch.full([], float('-inf'), device=device, dtype=dtype)) - - def _complete_kv(self, k, v): - time_dim = _get_attention_time_dimension() - if self.cross_attention: - # With cross attention we assume all keys and values - # are already available, and streaming is with respect - # to the queries only. - return k, v - # Complete the key/value pair using the streaming state. - if self._streaming_state: - pk = self._streaming_state['past_keys'] - nk = torch.cat([pk, k], dim=time_dim) - if v is k: - nv = nk - else: - pv = self._streaming_state['past_values'] - nv = torch.cat([pv, v], dim=time_dim) - else: - nk = k - nv = v - - assert nk.shape[time_dim] == nv.shape[time_dim] - offset = 0 - if self.past_context is not None: - offset = max(0, nk.shape[time_dim] - self.past_context) - if self._is_streaming: - self._streaming_state['past_keys'] = nk[:, offset:] - if v is not k: - self._streaming_state['past_values'] = nv[:, offset:] - if 'offset' in self._streaming_state: - self._streaming_state['offset'] += offset - else: - self._streaming_state['offset'] = torch.tensor(0) - return nk, nv - - def _apply_rope(self, query: torch.Tensor, key: torch.Tensor): - # TODO: fix and verify layout. - assert _efficient_attention_backend == 'xformers', "Rope not supported with torch attn." - # Apply rope embeddings to query and key tensors. - assert self.rope is not None - if 'past_keys' in self._streaming_state: - past_keys_offset = self._streaming_state['past_keys'].shape[1] - else: - past_keys_offset = 0 - if 'offset' in self._streaming_state: - past_context_offset = int(self._streaming_state['offset'].item()) - else: - past_context_offset = 0 - streaming_offset = past_context_offset + past_keys_offset - return self.rope.rotate_qk(query, key, start=streaming_offset) - - def forward(self, query: torch.Tensor, key: torch.Tensor, value: torch.Tensor, - key_padding_mask=None, need_weights=False, attn_mask=None, - average_attn_weights=True, is_causal=False): - assert attn_mask is None - assert not is_causal, ("New param added in torch 2.0.1 not supported, " - "use the causal args in the constructor.") - - time_dim = _get_attention_time_dimension() - if time_dim == 2: - layout = "b h t d" - else: - layout = "b t h d" - dtype = query.dtype - if self._is_streaming: - assert self.causal or self.cross_attention, \ - "Streaming only available for causal or cross attention" - - if self.causal: - # At the moment we specialize only for the self-attention case. 
- assert query.shape[1] == key.shape[1], "Causal only for same length query / key / value" - assert value.shape[1] == key.shape[1], "Causal only for same length query / key / value" - attn_mask = self._get_mask(query.shape[1], query.device, query.dtype) - - if self.custom: - # custom implementation - assert need_weights is False - assert key_padding_mask is None - if self.cross_attention: - # Different queries, keys, values, we have to spit manually the weights - # before applying the linear. - dim = self.in_proj_weight.shape[0] // 3 - if self.in_proj_bias is None: - bias_q, bias_k, bias_v = None, None, None - else: - bias_q = self.in_proj_bias[:dim] - bias_k = self.in_proj_bias[dim: 2 * dim] - bias_v = self.in_proj_bias[2 * dim:] - q = nn.functional.linear(query, self.in_proj_weight[:dim], bias_q) - # todo: when streaming, we could actually save k, v and check the shape actually match. - k = nn.functional.linear(key, self.in_proj_weight[dim: 2 * dim], bias_k) - v = nn.functional.linear(value, self.in_proj_weight[2 * dim:], bias_v) - if self.qk_layer_norm is True: - q = self.q_layer_norm(q) - k = self.k_layer_norm(k) - q, k, v = [rearrange(x, f"b t (h d) -> {layout}", h=self.num_heads) for x in [q, k, v]] - else: - if not _is_profiled(): - # profiling breaks that propertysomehow. - assert query is key, "specialized implementation" - assert value is key, "specialized implementation" - projected = nn.functional.linear(query, self.in_proj_weight, self.in_proj_bias) - if self.kv_repeat == 1: - if time_dim == 2: - bound_layout = "b h p t d" - else: - bound_layout = "b t p h d" - packed = rearrange(projected, f"b t (p h d) -> {bound_layout}", p=3, h=self.num_heads) - q, k, v = ops.unbind(packed, dim=2) - else: - embed_dim = self.embed_dim - per_head_dim = (embed_dim // self.num_heads) - kv_heads = self.num_heads // self.kv_repeat - q = projected[:, :, :embed_dim] - start = embed_dim - end = start + per_head_dim * kv_heads - k = projected[:, :, start: end] - v = projected[:, :, end:] - q = rearrange(q, f"b t (h d) -> {layout}", h=self.num_heads) - k = rearrange(k, f"b t (h d) -> {layout}", h=kv_heads) - v = rearrange(v, f"b t (h d) -> {layout}", h=kv_heads) - - if self.qk_layer_norm is True: - assert self.kv_repeat == 1 - q, k = [rearrange(x, f"{layout} -> b t (h d)") for x in [q, k]] - q = self.q_layer_norm(q) - k = self.k_layer_norm(k) - q, k = [rearrange(x, f"b t (h d) -> {layout}", h=self.num_heads) for x in [q, k]] - if self.rope: - q, k = self._apply_rope(q, k) - k, v = self._complete_kv(k, v) - if self.kv_repeat > 1: - k = expand_repeated_kv(k, self.kv_repeat) - v = expand_repeated_kv(v, self.kv_repeat) - if self.attention_as_float32: - q, k, v = [x.float() for x in [q, k, v]] - if self.memory_efficient: - p = self.dropout if self.training else 0 - if _efficient_attention_backend == 'torch': - x = torch.nn.functional.scaled_dot_product_attention( - q, k, v, is_causal=attn_mask is not None, dropout_p=p) - else: - x = ops.memory_efficient_attention(q, k, v, attn_mask, p=p) - else: - # We include the dot product as float32, for consistency - # with the other implementations that include that step - # as part of the attention. Note that when using `autocast`, - # the einsums would be done as bfloat16, but the softmax - # would be done as bfloat16, so `attention_as_float32` will - # extend a bit the range of operations done in float32, - # although this should make no difference. 
- q = q / q.shape[-1] ** 0.5 - key_layout = layout.replace('t', 'k') - query_layout = layout - if self._is_streaming and self.safe_streaming and q.device.type == 'cuda': - with torch.autocast(device_type=q.device.type, dtype=torch.float32): - pre_w = torch.einsum(f"{query_layout},{key_layout}-> b h t k", q, k) - else: - pre_w = torch.einsum(f"{query_layout},{key_layout}-> b h t k", q, k) - if attn_mask is not None: - pre_w = pre_w + attn_mask - w = torch.softmax(pre_w, dim=-1) - w = F.dropout(w, self.dropout, training=self.training).to(v) - # Key and value have the same format. - x = torch.einsum(f"b h t k, {key_layout} -> {layout}", w, v) - x = x.to(dtype) - x = rearrange(x, f"{layout} -> b t (h d)", h=self.num_heads) - x = self.out_proj(x) - else: - key, value = self._complete_kv(key, value) - if self.attention_as_float32: - query, key, value = [x.float() for x in [query, key, value]] - x, _ = self.mha( - query, key, value, key_padding_mask, - need_weights, attn_mask, average_attn_weights) - x = x.to(dtype) - - return x, None - - -class StreamingTransformerLayer(nn.TransformerEncoderLayer): - """TransformerLayer with Streaming / Causal support. - This also integrates cross_attention, when passing `cross_attention=True`, - rather than having two separate classes like in PyTorch. - - Args: - d_model (int): Dimension of the data. - num_heads (int): Number of heads. - dim_feedforward (int): Intermediate dimension of FF module. - dropout (float): Dropout both for MHA and FF. - bias_ff (bool): Use bias for FF. - bias_attn (bool): Use bias for MHA. - causal (bool): Causal mask applied automatically. - past_context (int, optional): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product in attention. - qk_layer_norm_cross (bool): Same for the cross attention. - cross_attention (bool): If True, expect to get secondary input for cross-attention. - Cross attention will use the default MHA, as it typically won't require - special treatment. - layer_scale (float, optional): If not None, LayerScale will be used with - the given value as initial scale. - rope (`RotaryEmbedding`, optional): Rope embedding to use. - attention_dropout (float, optional): If not None, separate the value of the dimension dropout - in FFN and of the attention dropout. - kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads). - This will lead to faster decoding time on A100 or other GPUs with tensorcore. - device (torch.device, optional): Device on which to initialize. - dtype (torch.dtype, optional): dtype to use. - **kwargs: See `nn.TransformerEncoderLayer`. 
- """ - def __init__(self, d_model: int, num_heads: int, dim_feedforward: int = 2048, dropout: float = 0.1, - bias_ff: bool = True, bias_attn: bool = True, causal: bool = False, - past_context: tp.Optional[int] = None, custom: bool = False, - memory_efficient: bool = False, attention_as_float32: bool = False, - qk_layer_norm: bool = False, qk_layer_norm_cross: bool = False, - cross_attention: bool = False, layer_scale: tp.Optional[float] = None, - rope: tp.Optional[RotaryEmbedding] = None, attention_dropout: tp.Optional[float] = None, - kv_repeat: int = 1, norm: str = 'layer_norm', device=None, dtype=None, **kwargs): - super().__init__(d_model, num_heads, dim_feedforward, dropout, - device=device, dtype=dtype, batch_first=True, **kwargs) - factory_kwargs = {'device': device, 'dtype': dtype} - # Redefine self_attn to our streaming multi-head attention - attn_kwargs: tp.Dict[str, tp.Any] = { - 'embed_dim': d_model, - 'num_heads': num_heads, - 'dropout': dropout if attention_dropout is None else attention_dropout, - 'bias': bias_attn, - 'custom': custom, - 'memory_efficient': memory_efficient, - 'attention_as_float32': attention_as_float32, - } - self.self_attn: StreamingMultiheadAttention = StreamingMultiheadAttention( - causal=causal, past_context=past_context, rope=rope, qk_layer_norm=qk_layer_norm, - kv_repeat=kv_repeat, **attn_kwargs, **factory_kwargs) # type: ignore - # Redefine feedforward layers to expose bias parameter - self.linear1 = nn.Linear(d_model, dim_feedforward, bias=bias_ff, **factory_kwargs) - self.linear2 = nn.Linear(dim_feedforward, d_model, bias=bias_ff, **factory_kwargs) - - self.layer_scale_1: nn.Module - self.layer_scale_2: nn.Module - if layer_scale is None: - self.layer_scale_1 = nn.Identity() - self.layer_scale_2 = nn.Identity() - else: - self.layer_scale_1 = LayerScale(d_model, layer_scale, **factory_kwargs) - self.layer_scale_2 = LayerScale(d_model, layer_scale, **factory_kwargs) - - self.cross_attention: tp.Optional[nn.Module] = None - if cross_attention: - self.cross_attention = StreamingMultiheadAttention( - cross_attention=True, qk_layer_norm=qk_layer_norm_cross, - **attn_kwargs, **factory_kwargs) - # Norm and dropout - self.dropout_cross = nn.Dropout(dropout) - # eps value matching that used in PyTorch reference implementation. - self.norm_cross = nn.LayerNorm(d_model, eps=1e-5, **factory_kwargs) - self.layer_scale_cross: nn.Module - if layer_scale is None: - self.layer_scale_cross = nn.Identity() - else: - self.layer_scale_cross = LayerScale(d_model, layer_scale, **factory_kwargs) - self.norm1 = create_norm_fn(norm, d_model, **factory_kwargs) # type: ignore - self.norm2 = create_norm_fn(norm, d_model, **factory_kwargs) # type: ignore - - def _cross_attention_block(self, src: torch.Tensor, - cross_attention_src: torch.Tensor) -> torch.Tensor: - assert self.cross_attention is not None - # queries are from src, keys and values from cross_attention_src. 
- x = self.cross_attention( - src, cross_attention_src, cross_attention_src, need_weights=False)[0] - return self.dropout_cross(x) # type: ignore - - def forward(self, src: torch.Tensor, src_mask: tp.Optional[torch.Tensor] = None, # type: ignore - src_key_padding_mask: tp.Optional[torch.Tensor] = None, - cross_attention_src: tp.Optional[torch.Tensor] = None): - if self.cross_attention is None: - assert cross_attention_src is None - else: - assert cross_attention_src is not None - x = src - if self.norm_first: - x = x + self.layer_scale_1( - self._sa_block(self.norm1(x), src_mask, src_key_padding_mask)) - if cross_attention_src is not None: - x = x + self.layer_scale_cross( - self._cross_attention_block( - self.norm_cross(x), cross_attention_src)) - x = x + self.layer_scale_2(self._ff_block(self.norm2(x))) - else: - x = self.norm1(x + self.layer_scale_1( - self._sa_block(x, src_mask, src_key_padding_mask))) - if cross_attention_src is not None: - x = self.norm_cross( - x + self.layer_scale_cross( - self._cross_attention_block(src, cross_attention_src))) - x = self.norm2(x + self.layer_scale_2(self._ff_block(x))) - return x - - -class StreamingTransformer(StreamingModule): - """Transformer with Streaming / Causal support. - - Args: - d_model (int): Dimension of the data. - num_heads (int): Number of heads. - dim_feedforward (int): Intermediate dimension of FF module. - dropout (float): Dropout both for MHA and FF. - bias_ff (bool): Use bias for FF. - bias_attn (bool): Use bias for MHA. - causal (bool): Causal mask applied automatically. - past_context (int, optional): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - cross_attention (bool): If True, expect to get secondary input for cross-attention. - layer_scale (float, optional): If not None, LayerScale will be used - with the given value as initial scale. - positional_embedding (str): Positional embedding strategy (sin, rope, or sin_rope). - max_period (float): Maximum period of the time embedding. - positional_scale (float): Scale of positional embedding, set to 0 to deactivate. - xpos (bool): Apply xpos exponential decay to positional embedding (rope only). - lr (float, optional): learning rate override through the `make_optim_group` API. - weight_decay (float, optional): Weight_decay override through the `make_optim_group` API. - layer_class: (subclass of `StreamingTransformerLayer): class to use - to initialize the layers, allowing further customization outside of AudioCraft. - checkpointing (str): Checkpointing strategy to reduce memory usage. - No checkpointing if set to 'none'. Per layer checkpointing using PyTorch - if set to 'torch' (entire layer checkpointed, i.e. linears are evaluated twice, - minimal memory usage, but maximal runtime). Finally, `xformers_default` provide - a policy for opting-out some operations of the checkpointing like - linear layers and attention, providing a middle ground between speed and memory. - device (torch.device, optional): Device on which to initialize. - dtype (torch.dtype, optional): dtype to use. - **kwargs: See `nn.TransformerEncoderLayer`. 
- """ - def __init__(self, d_model: int, num_heads: int, num_layers: int, dim_feedforward: int = 2048, - dropout: float = 0.1, bias_ff: bool = True, bias_attn: bool = True, - causal: bool = False, past_context: tp.Optional[int] = None, - custom: bool = False, memory_efficient: bool = False, attention_as_float32: bool = False, - cross_attention: bool = False, layer_scale: tp.Optional[float] = None, - positional_embedding: str = 'sin', max_period: float = 10_000, positional_scale: float = 1., - xpos: bool = False, lr: tp.Optional[float] = None, weight_decay: tp.Optional[float] = None, - layer_class: tp.Type[StreamingTransformerLayer] = StreamingTransformerLayer, - checkpointing: str = 'none', device=None, dtype=None, **kwargs): - super().__init__() - assert d_model % num_heads == 0 - - self.positional_embedding = positional_embedding - self.max_period = max_period - self.positional_scale = positional_scale - self.weight_decay = weight_decay - self.lr = lr - - assert positional_embedding in ['sin', 'rope', 'sin_rope'] - self.rope: tp.Optional[RotaryEmbedding] = None - if self.positional_embedding in ['rope', 'sin_rope']: - assert _is_custom(custom, memory_efficient) - self.rope = RotaryEmbedding(d_model // num_heads, max_period=max_period, - xpos=xpos, scale=positional_scale, device=device) - - self.checkpointing = checkpointing - - assert checkpointing in ['none', 'torch', 'xformers_default', 'xformers_mm'] - if self.checkpointing.startswith('xformers'): - _verify_xformers_internal_compat() - - self.layers = nn.ModuleList() - for idx in range(num_layers): - self.layers.append( - layer_class( - d_model=d_model, num_heads=num_heads, dim_feedforward=dim_feedforward, - dropout=dropout, bias_ff=bias_ff, bias_attn=bias_attn, - causal=causal, past_context=past_context, custom=custom, - memory_efficient=memory_efficient, attention_as_float32=attention_as_float32, - cross_attention=cross_attention, layer_scale=layer_scale, rope=self.rope, - device=device, dtype=dtype, **kwargs)) - - if self.checkpointing != 'none': - for layer in self.layers: - # see audiocraft/optim/fsdp.py, magic signal to indicate this requires fixing the - # backward hook inside of FSDP... - layer._magma_checkpointed = True # type: ignore - assert layer.layer_drop == 0., "Need further checking" # type: ignore - - def _apply_layer(self, layer, *args, **kwargs): - method = self.checkpointing - if method == 'none': - return layer(*args, **kwargs) - elif method == 'torch': - return torch_checkpoint(layer, *args, use_reentrant=False, **kwargs) - elif method.startswith('xformers'): - from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy - if method == 'xformers_default': - # those operations will be saved, and not recomputed. - # According to Francisco we can get smarter policies but this is a good start. - allow_list = [ - "xformers.efficient_attention_forward_cutlass.default", - "xformers_flash.flash_fwd.default", - "aten.addmm.default", - "aten.mm.default", - ] - elif method == 'xformers_mm': - # those operations will be saved, and not recomputed. - # According to Francisco we can get smarter policies but this is a good start. 
- allow_list = [ - "aten.addmm.default", - "aten.mm.default", - ] - else: - raise ValueError(f"xformers checkpointing xformers policy {method} is not known.") - policy_fn = _get_default_policy(allow_list) - return checkpoint(layer, *args, policy_fn=policy_fn, **kwargs) - else: - raise ValueError(f"Checkpointing method {method} is unknown.") - - def forward(self, x: torch.Tensor, *args, **kwargs): - B, T, C = x.shape - - if 'offsets' in self._streaming_state: - offsets = self._streaming_state['offsets'] - else: - offsets = torch.zeros(B, dtype=torch.long, device=x.device) - - if self.positional_embedding in ['sin', 'sin_rope']: - positions = torch.arange(T, device=x.device).view(1, -1, 1) - positions = positions + offsets.view(-1, 1, 1) - pos_emb = create_sin_embedding(positions, C, max_period=self.max_period, dtype=x.dtype) - x = x + self.positional_scale * pos_emb - - for layer in self.layers: - x = self._apply_layer(layer, x, *args, **kwargs) - - if self._is_streaming: - self._streaming_state['offsets'] = offsets + T - - return x - - def make_optim_group(self): - group = {"params": list(self.parameters())} - if self.lr is not None: - group["lr"] = self.lr - if self.weight_decay is not None: - group["weight_decay"] = self.weight_decay - return group - - -# special attention related function - -def _verify_xformers_memory_efficient_compat(): - try: - from xformers.ops import memory_efficient_attention, LowerTriangularMask # noqa - except ImportError: - raise ImportError( - "xformers is not installed. Please install it and try again.\n" - "To install on AWS and Azure, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n" - "To install on FAIR Cluster, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n") - - -def _verify_xformers_internal_compat(): - try: - from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy # noqa - except ImportError: - raise ImportError( - "Francisco's fairinternal xformers is not installed. 
Please install it and try again.\n" - "To install on AWS and Azure, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n" - "To install on FAIR Cluster, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n") - - -def _is_custom(custom: bool, memory_efficient: bool): - return custom or memory_efficient diff --git a/spaces/merve/anonymization/public/uncertainty-calibration/weatherdata.js b/spaces/merve/anonymization/public/uncertainty-calibration/weatherdata.js deleted file mode 100644 index 9fb29abd04cf81496773adb6fbab7a1b9cb513e0..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/public/uncertainty-calibration/weatherdata.js +++ /dev/null @@ -1,255 +0,0 @@ -var weatherdata = [{'h': 0, -'id': 0, -'label': 0, -'original_score': 0.12433152687398698, -'score': 0.12433152687398698}, -{'h': 1, -'id': 1, -'label': 0, -'original_score': 0.2014203772169771, -'score': 0.2014203772169771}, -{'h': 2, -'id': 2, -'label': 1, -'original_score': 0.2626685491019668, -'score': 0.2626685491019668}, -{'h': 3, -'id': 3, -'label': 0, -'original_score': 0.10619382887946915, -'score': 0.10619382887946915}, -{'h': 4, -'id': 4, -'label': 0, -'original_score': 0.1536112957212682, -'score': 0.1536112957212682}, -{'h': 5, -'id': 5, -'label': 0, -'original_score': 0.2660219680553572, -'score': 0.2660219680553572}, -{'h': 6, -'id': 6, -'label': 0, -'original_score': 0.1886698681338711, -'score': 0.1886698681338711}, -{'h': 7, -'id': 7, -'label': 0, -'original_score': 0.302266784816097, -'score': 0.302266784816097}, -{'h': 8, -'id': 8, -'label': 0, -'original_score': 0.15496114380196338, -'score': 0.15496114380196338}, -{'h': 9, -'id': 9, -'label': 0, -'original_score': 0.19763504609985533, -'score': 0.19763504609985533}, -{'h': 0, -'id': 10, -'label': 0, -'original_score': 0.38247000184830054, -'score': 0.38247000184830054}, -{'h': 1, -'id': 11, -'label': 1, -'original_score': 0.3363518147573557, -'score': 0.3363518147573557}, -{'h': 2, -'id': 12, -'label': 1, -'original_score': 0.4947967422959128, -'score': 0.4947967422959128}, -{'h': 3, -'id': 13, -'label': 0, -'original_score': 0.38675988136018435, -'score': 0.38675988136018435}, -{'h': 4, -'id': 14, -'label': 0, -'original_score': 0.3755618748258325, -'score': 0.3755618748258325}, -{'h': 5, -'id': 15, -'label': 0, -'original_score': 0.39394252133526547, -'score': 0.39394252133526547}, -{'h': 6, -'id': 16, -'label': 1, -'original_score': 0.47996692559311144, -'score': 0.47996692559311144}, -{'h': 7, -'id': 17, -'label': 0, -'original_score': 0.4520919890835573, -'score': 0.4520919890835573}, -{'h': 8, -'id': 18, -'label': 0, -'original_score': 0.49128398887598235, -'score': 0.49128398887598235}, -{'h': 9, -'id': 19, -'label': 0, -'original_score': 0.4934231460040127, -'score': 0.4934231460040127}, -{'h': 0, -'id': 20, -'label': 1, -'original_score': 0.6023370616966761, -'score': 0.6023370616966761}, -{'h': 1, -'id': 21, -'label': 0, -'original_score': 0.5588319919664324, -'score': 0.5588319919664324}, -{'h': 2, -'id': 22, -'label': 1, -'original_score': 0.5372993269470902, -'score': 0.5372993269470902}, -{'h': 3, -'id': 23, -'label': 1, -'original_score': 0.6056881032306126, -'score': 0.6056881032306126}, -{'h': 4, -'id': 24, -'label': 1, -'original_score': 0.5777333354677878, -'score': 0.5777333354677878}, -{'h': 5, -'id': 25, -'label': 0, -'original_score': 
0.5684077659316352, -'score': 0.5684077659316352}, -{'h': 6, -'id': 26, -'label': 0, -'original_score': 0.5583886351009575, -'score': 0.5583886351009575}, -{'h': 7, -'id': 27, -'label': 0, -'original_score': 0.585107016245853, -'score': 0.585107016245853}, -{'h': 4, -'id': 28, -'label': 0, -'original_score': 0.5024398267017434, -'score': 0.5024398267017434}, -{'h': 7, -'id': 29, -'label': 1, -'original_score': 0.5119051369645927, -'score': 0.5119051369645927}, -{'h': 0, -'id': 30, -'label': 1, -'original_score': 0.6874421886689279, -'score': 0.6874421886689279}, -{'h': 1, -'id': 31, -'label': 1, -'original_score': 0.7622939478182656, -'score': 0.7622939478182656}, -{'h': 2, -'id': 32, -'label': 1, -'original_score': 0.8240376576917314, -'score': 0.8240376576917314}, -{'h': 3, -'id': 33, -'label': 0, -'original_score': 0.8491598185092843, -'score': 0.8491598185092843}, -{'h': 4, -'id': 34, -'label': 1, -'original_score': 0.7585879921321647, -'score': 0.7585879921321647}, -{'h': 5, -'id': 35, -'label': 0, -'original_score': 0.76396242565466, -'score': 0.76396242565466}, -{'h': 6, -'id': 36, -'label': 1, -'original_score': 0.7498984213509621, -'score': 0.7498984213509621}, -{'h': 7, -'id': 37, -'label': 1, -'original_score': 0.6642342379293016, -'score': 0.6642342379293016}, -{'h': 8, -'id': 38, -'label': 0, -'original_score': 0.7594027841393808, -'score': 0.7594027841393808}, -{'h': 9, -'id': 39, -'label': 1, -'original_score': 0.816737760918518, -'score': 0.816737760918518}, -{'h': 0, -'id': 40, -'label': 1, -'original_score': 0.8926172493334218, -'score': 0.8926172493334218}, -{'h': 1, -'id': 41, -'label': 0, -'original_score': 0.9194132577983325, -'score': 0.9194132577983325}, -{'h': 2, -'id': 42, -'label': 1, -'original_score': 0.8603862951854552, -'score': 0.8603862951854552}, -{'h': 3, -'id': 43, -'label': 1, -'original_score': 0.9093601089110575, -'score': 0.9093601089110575}, -{'h': 4, -'id': 44, -'label': 1, -'original_score': 0.9442430043437404, -'score': 0.9442430043437404}, -{'h': 5, -'id': 45, -'label': 1, -'original_score': 0.8778942613680896, -'score': 0.8778942613680896}, -{'h': 6, -'id': 46, -'label': 1, -'original_score': 0.8873305075007553, -'score': 0.8873305075007553}, -{'h': 7, -'id': 47, -'label': 1, -'original_score': 0.8786043110234295, -'score': 0.8786043110234295}, -{'h': 8, -'id': 48, -'label': 1, -'original_score': 0.8682870444345626, -'score': 0.8682870444345626}, -{'h': 9, -'id': 49, -'label': 1, -'original_score': 0.8698959578262738, -'score': 0.8698959578262738}] - - -weatherdata.forEach(d => { - d.is_filter = d.label && Math.random() < .6 -}) \ No newline at end of file diff --git a/spaces/merve/fill-in-the-blank/public/anonymization/make-gs.js b/spaces/merve/fill-in-the-blank/public/anonymization/make-gs.js deleted file mode 100644 index 4eb1aaeffeb2a69e726a9d452d7eea7b3352b318..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/public/anonymization/make-gs.js +++ /dev/null @@ -1,105 +0,0 @@ -window.makeGS = function(){ - var prevSlideIndex = -1 - function updateSlide(i){ - var slide = slides[i] - if (!slide) return - - d3.select('.tooltip').classed('tooltip-hidden', true) - - var dur = 500 - - sel.student.transition('xKey').duration(dur).delay(dur ? slide.circleDelayFn : 0) - .translate(d => (d.isAdditionalStudent && slide.xKey != 'plagerizedShifted') ? [0,0]: d.pos[slide.xKey]) - - - if (sel.rectAt[slide.xKey]){ - sel.uniqueBox.transition('at').duration(dur) - .delay(d => dur ? 
slide.circleDelayFn(d.d0) : 0) - .at(sel.rectAt[slide.xKey]) - .translate(d => d.d0.group[slide.xKey].pos) - } - - sel.uniqueBox.transition().duration(dur) - .st({opacity: slide.showUniqueBox ? 1 : 0}) - - sel.uniqueSeasonBox.transition() - .delay((d, i) => slide.showUniqueSeasonBox ? dur*2 + i*40 : 0).duration(slide.showUniqueSeasonBox ? 0 : dur) - .st({opacity: slide.showUniqueSeasonBox ? 1 : 0}) - - - if (sliders.headsProb != slide.headsProbTarget && slide.animateHeadsProbSlider != -1){ - var headI = d3.interpolate(sliders.headsProb, slide.headsProbTarget) - if (window.headSliderTimer) window.headSliderTimer.stop() - window.headSliderTimer = d3.timer(ms => { - var dur = slide.animateHeadsProbSlider ? 2000 : 1 - var t = d3.easeCubicInOut(d3.clamp(0, ms/dur, 1)) - sliders.updateHeadsProb(headI(t)) - if (t == 1) headSliderTimer.stop() - }) - } - - if (sliders.population != slide.populationTarget){ - var popI = d3.interpolate(sliders.population, slide.populationTarget) - if (window.popSliderTimer) window.popSliderTimer.stop() - window.popSliderTimer = d3.timer(ms => { - var dur = slide.animatePopulationSlider ? 2000 : 1 - var t = d3.easeCubicInOut(d3.clamp(0, ms/dur, 1)) - sliders.updatePopulation(Math.round(popI(t)/2)*2) - if (t == 1) popSliderTimer.stop() - }) - } - - axii.stateAxis.transition().duration(dur/2) - .st({opacity: slide.showStateAxis ? 1 : 0}) - axii.ageAxis.transition().duration(dur/2) - .st({opacity: slide.showAgeAxis ? 1 : 0}) - axii.seasonAxis.transition().duration(dur/2) - .st({opacity: slide.showSeasonAxis ? 1 : 0}) - axii.headAxis.transition().duration(dur/2) - .st({opacity: slide.showHeadAxis ? 1 : 0}) - axii.headCaptionAxis.transition().duration(dur/2) - .st({opacity: slide.showHeadCaptionAxis ? 1 : 0}) - estimates.axisSel.transition().delay(dur).duration(dur/2) - .st({opacity: slide.showHistogramAxis ? 1 : 0}) - estimates.activeSel.transition().delay(dur).duration(dur/2) - .st({opacity: slide.showHistogramAxis ? 1 : 0}) - // axii.estimateAxis.transition().delay(dur).duration(dur/2) - // .st({opacity: slide.showEstimate && !slide.enterHistogram ? 1 : 0}) - // axii.plagerizedAxis.transition().delay(dur).duration(dur/2) - // .st({opacity: slide.showPlagerizedAxis ? 1 : 0}) - - - annotationSel.transition().duration(dur/2) - .st({opacity: d => i == d.slide ? 1 : 0}) - - estimates.containerSel.transition('xKey').duration(dur/2) - .st({opacity: slide.showHistogram ? 1 : 0}) - - if (slide.enterHistogram){ - estimates.render(true) - } else { - window.flipAllCoinsTimer._time = Infinity - } - if (slide.enterHistogram === 0) estimates.estimateSel.classed('active', 1) - - - // Display the default coin flip state if the histogram is not visible. - sel.flipCircle.transition().duration(dur) - .at({transform: d => { - return slide.showFlipCircle && d.coinVals[estimates.active.index] < sliders.headsProb ? 
'scale(1)' : 'scale(.1)'}}) - - prevSlideIndex = i - slides.curSlide = slide - } - - var gs = d3.graphScroll() - .container(d3.select('.container-1')) - .graph(d3.selectAll('container-1 #graph')) - .eventId('uniqueId1') - .sections(d3.selectAll('.container-1 #sections > div')) - .offset(300) - .on('active', updateSlide) -} - - -if (window.init) window.init() diff --git a/spaces/mfkeles/Track-Anything/tools/interact_tools.py b/spaces/mfkeles/Track-Anything/tools/interact_tools.py deleted file mode 100644 index daecc73e5f54c95b53c04520110775281a6e0560..0000000000000000000000000000000000000000 --- a/spaces/mfkeles/Track-Anything/tools/interact_tools.py +++ /dev/null @@ -1,265 +0,0 @@ -import time -import torch -import cv2 -from PIL import Image, ImageDraw, ImageOps -import numpy as np -from typing import Union -from segment_anything import sam_model_registry, SamPredictor, SamAutomaticMaskGenerator -import matplotlib.pyplot as plt -import PIL -from .mask_painter import mask_painter as mask_painter2 -from .base_segmenter import BaseSegmenter -from .painter import mask_painter, point_painter -import os -import requests -import sys - - -mask_color = 3 -mask_alpha = 0.7 -contour_color = 1 -contour_width = 5 -point_color_ne = 8 -point_color_ps = 50 -point_alpha = 0.9 -point_radius = 15 -contour_color = 2 -contour_width = 5 - - -class SamControler(): - def __init__(self, SAM_checkpoint, model_type, device): - ''' - initialize sam controler - ''' - - - self.sam_controler = BaseSegmenter(SAM_checkpoint, model_type, device) - - - # def seg_again(self, image: np.ndarray): - # ''' - # it is used when interact in video - # ''' - # self.sam_controler.reset_image() - # self.sam_controler.set_image(image) - # return - - - def first_frame_click(self, image: np.ndarray, points:np.ndarray, labels: np.ndarray, multimask=True,mask_color=3): - ''' - it is used in first frame in video - return: mask, logit, painted image(mask+point) - ''' - # self.sam_controler.set_image(image) - origal_image = self.sam_controler.orignal_image - neg_flag = labels[-1] - if neg_flag==1: - #find neg - prompts = { - 'point_coords': points, - 'point_labels': labels, - } - masks, scores, logits = self.sam_controler.predict(prompts, 'point', multimask) - mask, logit = masks[np.argmax(scores)], logits[np.argmax(scores), :, :] - prompts = { - 'point_coords': points, - 'point_labels': labels, - 'mask_input': logit[None, :, :] - } - masks, scores, logits = self.sam_controler.predict(prompts, 'both', multimask) - mask, logit = masks[np.argmax(scores)], logits[np.argmax(scores), :, :] - else: - #find positive - prompts = { - 'point_coords': points, - 'point_labels': labels, - } - masks, scores, logits = self.sam_controler.predict(prompts, 'point', multimask) - mask, logit = masks[np.argmax(scores)], logits[np.argmax(scores), :, :] - - - assert len(points)==len(labels) - - painted_image = mask_painter(image, mask.astype('uint8'), mask_color, mask_alpha, contour_color, contour_width) - painted_image = point_painter(painted_image, np.squeeze(points[np.argwhere(labels>0)],axis = 1), point_color_ne, point_alpha, point_radius, contour_color, contour_width) - painted_image = point_painter(painted_image, np.squeeze(points[np.argwhere(labels<1)],axis = 1), point_color_ps, point_alpha, point_radius, contour_color, contour_width) - painted_image = Image.fromarray(painted_image) - - return mask, logit, painted_image - - # def interact_loop(self, image:np.ndarray, same: bool, points:np.ndarray, labels: np.ndarray, logits: np.ndarray=None, multimask=True): 
- # origal_image = self.sam_controler.orignal_image - # if same: - # ''' - # true; loop in the same image - # ''' - # prompts = { - # 'point_coords': points, - # 'point_labels': labels, - # 'mask_input': logits[None, :, :] - # } - # masks, scores, logits = self.sam_controler.predict(prompts, 'both', multimask) - # mask, logit = masks[np.argmax(scores)], logits[np.argmax(scores), :, :] - - # painted_image = mask_painter(image, mask.astype('uint8'), mask_color, mask_alpha, contour_color, contour_width) - # painted_image = point_painter(painted_image, np.squeeze(points[np.argwhere(labels>0)],axis = 1), point_color_ne, point_alpha, point_radius, contour_color, contour_width) - # painted_image = point_painter(painted_image, np.squeeze(points[np.argwhere(labels<1)],axis = 1), point_color_ps, point_alpha, point_radius, contour_color, contour_width) - # painted_image = Image.fromarray(painted_image) - - # return mask, logit, painted_image - # else: - # ''' - # loop in the different image, interact in the video - # ''' - # if image is None: - # raise('Image error') - # else: - # self.seg_again(image) - # prompts = { - # 'point_coords': points, - # 'point_labels': labels, - # } - # masks, scores, logits = self.sam_controler.predict(prompts, 'point', multimask) - # mask, logit = masks[np.argmax(scores)], logits[np.argmax(scores), :, :] - - # painted_image = mask_painter(image, mask.astype('uint8'), mask_color, mask_alpha, contour_color, contour_width) - # painted_image = point_painter(painted_image, np.squeeze(points[np.argwhere(labels>0)],axis = 1), point_color_ne, point_alpha, point_radius, contour_color, contour_width) - # painted_image = point_painter(painted_image, np.squeeze(points[np.argwhere(labels<1)],axis = 1), point_color_ps, point_alpha, point_radius, contour_color, contour_width) - # painted_image = Image.fromarray(painted_image) - - # return mask, logit, painted_image - - - - - - -# def initialize(): -# ''' -# initialize sam controler -# ''' -# checkpoint_url = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth" -# folder = "segmenter" -# SAM_checkpoint= './checkpoints/sam_vit_h_4b8939.pth' -# download_checkpoint(checkpoint_url, folder, SAM_checkpoint) - - -# model_type = 'vit_h' -# device = "cuda:0" -# sam_controler = BaseSegmenter(SAM_checkpoint, model_type, device) -# return sam_controler - - -# def seg_again(sam_controler, image: np.ndarray): -# ''' -# it is used when interact in video -# ''' -# sam_controler.reset_image() -# sam_controler.set_image(image) -# return - - -# def first_frame_click(sam_controler, image: np.ndarray, points:np.ndarray, labels: np.ndarray, multimask=True): -# ''' -# it is used in first frame in video -# return: mask, logit, painted image(mask+point) -# ''' -# sam_controler.set_image(image) -# prompts = { -# 'point_coords': points, -# 'point_labels': labels, -# } -# masks, scores, logits = sam_controler.predict(prompts, 'point', multimask) -# mask, logit = masks[np.argmax(scores)], logits[np.argmax(scores), :, :] - -# assert len(points)==len(labels) - -# painted_image = mask_painter(image, mask.astype('uint8'), mask_color, mask_alpha, contour_color, contour_width) -# painted_image = point_painter(painted_image, np.squeeze(points[np.argwhere(labels>0)],axis = 1), point_color_ne, point_alpha, point_radius, contour_color, contour_width) -# painted_image = point_painter(painted_image, np.squeeze(points[np.argwhere(labels<1)],axis = 1), point_color_ps, point_alpha, point_radius, contour_color, contour_width) -# painted_image = 
Image.fromarray(painted_image) - -# return mask, logit, painted_image - -# def interact_loop(sam_controler, image:np.ndarray, same: bool, points:np.ndarray, labels: np.ndarray, logits: np.ndarray=None, multimask=True): -# if same: -# ''' -# true; loop in the same image -# ''' -# prompts = { -# 'point_coords': points, -# 'point_labels': labels, -# 'mask_input': logits[None, :, :] -# } -# masks, scores, logits = sam_controler.predict(prompts, 'both', multimask) -# mask, logit = masks[np.argmax(scores)], logits[np.argmax(scores), :, :] - -# painted_image = mask_painter(image, mask.astype('uint8'), mask_color, mask_alpha, contour_color, contour_width) -# painted_image = point_painter(painted_image, np.squeeze(points[np.argwhere(labels>0)],axis = 1), point_color_ne, point_alpha, point_radius, contour_color, contour_width) -# painted_image = point_painter(painted_image, np.squeeze(points[np.argwhere(labels<1)],axis = 1), point_color_ps, point_alpha, point_radius, contour_color, contour_width) -# painted_image = Image.fromarray(painted_image) - -# return mask, logit, painted_image -# else: -# ''' -# loop in the different image, interact in the video -# ''' -# if image is None: -# raise('Image error') -# else: -# seg_again(sam_controler, image) -# prompts = { -# 'point_coords': points, -# 'point_labels': labels, -# } -# masks, scores, logits = sam_controler.predict(prompts, 'point', multimask) -# mask, logit = masks[np.argmax(scores)], logits[np.argmax(scores), :, :] - -# painted_image = mask_painter(image, mask.astype('uint8'), mask_color, mask_alpha, contour_color, contour_width) -# painted_image = point_painter(painted_image, np.squeeze(points[np.argwhere(labels>0)],axis = 1), point_color_ne, point_alpha, point_radius, contour_color, contour_width) -# painted_image = point_painter(painted_image, np.squeeze(points[np.argwhere(labels<1)],axis = 1), point_color_ps, point_alpha, point_radius, contour_color, contour_width) -# painted_image = Image.fromarray(painted_image) - -# return mask, logit, painted_image - - - - -# if __name__ == "__main__": -# points = np.array([[500, 375], [1125, 625]]) -# labels = np.array([1, 1]) -# image = cv2.imread('/hhd3/gaoshang/truck.jpg') -# image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - -# sam_controler = initialize() -# mask, logit, painted_image_full = first_frame_click(sam_controler,image, points, labels, multimask=True) -# painted_image = mask_painter2(image, mask.astype('uint8'), background_alpha=0.8) -# painted_image = cv2.cvtColor(painted_image, cv2.COLOR_RGB2BGR) # numpy array (h, w, 3) -# cv2.imwrite('/hhd3/gaoshang/truck_point.jpg', painted_image) -# cv2.imwrite('/hhd3/gaoshang/truck_change.jpg', image) -# painted_image_full.save('/hhd3/gaoshang/truck_point_full.jpg') - -# mask, logit, painted_image_full = interact_loop(sam_controler,image,True, points, np.array([1, 0]), logit, multimask=True) -# painted_image = mask_painter2(image, mask.astype('uint8'), background_alpha=0.8) -# painted_image = cv2.cvtColor(painted_image, cv2.COLOR_RGB2BGR) # numpy array (h, w, 3) -# cv2.imwrite('/hhd3/gaoshang/truck_same.jpg', painted_image) -# painted_image_full.save('/hhd3/gaoshang/truck_same_full.jpg') - -# mask, logit, painted_image_full = interact_loop(sam_controler,image, False, points, labels, multimask=True) -# painted_image = mask_painter2(image, mask.astype('uint8'), background_alpha=0.8) -# painted_image = cv2.cvtColor(painted_image, cv2.COLOR_RGB2BGR) # numpy array (h, w, 3) -# cv2.imwrite('/hhd3/gaoshang/truck_diff.jpg', painted_image) -# 
painted_image_full.save('/hhd3/gaoshang/truck_diff_full.jpg') - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/mikkoar/marco/src/lib/bots/bing/types.ts b/spaces/mikkoar/marco/src/lib/bots/bing/types.ts deleted file mode 100644 index 02cd5e8b01e3529642d28dc1539bf958f4ac420b..0000000000000000000000000000000000000000 --- a/spaces/mikkoar/marco/src/lib/bots/bing/types.ts +++ /dev/null @@ -1,259 +0,0 @@ -export type Author = 'user' | 'system' | 'bot' - -export type BotId = 'bing' - -export enum BingConversationStyle { - Creative = 'Creative', - Balanced = 'Balanced', - Precise = 'Precise' -} - -export enum ErrorCode { - CONVERSATION_LIMIT = 'CONVERSATION_LIMIT', - BING_UNAUTHORIZED = 'BING_UNAUTHORIZED', - BING_FORBIDDEN = 'BING_FORBIDDEN', - BING_CAPTCHA = 'BING_CAPTCHA', - THROTTLE_LIMIT = 'THROTTLE_LIMIT', - NOTFOUND_ERROR = 'NOT_FOUND_ERROR', - UNKOWN_ERROR = 'UNKOWN_ERROR', - NETWORK_ERROR = 'NETWORK_ERROR', -} - -export class ChatError extends Error { - code: ErrorCode - constructor(message: string, code: ErrorCode) { - super(message) - this.code = code - } -} - -export type ChatMessageModel = { - id: string - author: Author - text: string - error?: ChatError - throttling?: Throttling - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] -} - -export interface ConversationModel { - messages: ChatMessageModel[] -} - -export type Event = - | { - type: 'UPDATE_ANSWER' - data: { - text: string - spokenText?: string - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] - throttling?: Throttling - } - } - | { - type: 'DONE' - } - | { - type: 'ERROR' - error: ChatError - } - -export interface SendMessageParams { - prompt: string - imageUrl?: string - options: T - onEvent: (event: Event) => void - signal?: AbortSignal -} - -export interface ConversationResponse { - conversationId: string - clientId: string - conversationSignature: string - result: { - value: string - message?: string - } -} - -export interface Telemetry { - metrics?: null - startTime: string -} - -export interface ChatUpdateArgument { - messages?: ChatResponseMessage[] - throttling?: Throttling - requestId: string - result: null -} - -export type ChatUpdateCompleteResponse = { - type: 2 - invocationId: string - item: ChatResponseItem -} | { - type: 1 - target: string - arguments: ChatUpdateArgument[] -} | { - type: 3 - invocationId: string -} | { - type: 6 | 7 -} - -export interface ChatRequestResult { - value: string - serviceVersion: string - error?: string -} - -export interface ChatResponseItem { - messages: ChatResponseMessage[] - firstNewMessageIndex: number - suggestedResponses: null - conversationId: string - requestId: string - conversationExpiryTime: string - telemetry: Telemetry - result: ChatRequestResult - throttling: Throttling -} -export enum InvocationEventType { - Invocation = 1, - StreamItem = 2, - Completion = 3, - StreamInvocation = 4, - CancelInvocation = 5, - Ping = 6, - Close = 7, -} - -// https://github.com/bytemate/bingchat-api/blob/main/src/lib.ts - -export interface ConversationInfo { - conversationId: string - clientId: string - conversationSignature: string - invocationId: number - conversationStyle: BingConversationStyle - prompt: string - imageUrl?: string -} - -export interface BingChatResponse { - conversationSignature: string - conversationId: string - clientId: string - invocationId: number - conversationExpiryTime: Date - response: string - details: ChatResponseMessage -} - -export interface Throttling { - 
maxNumLongDocSummaryUserMessagesInConversation: number - maxNumUserMessagesInConversation: number - numLongDocSummaryUserMessagesInConversation: number - numUserMessagesInConversation: number -} - -export interface ChatResponseMessage { - text: string - spokenText?: string - author: string - createdAt: Date - timestamp: Date - messageId: string - requestId: string - offense: string - adaptiveCards: AdaptiveCard[] - sourceAttributions: SourceAttribution[] - feedback: Feedback - contentOrigin: string - messageType?: string - contentType?: string - privacy: null - suggestedResponses: SuggestedResponse[] -} - -export interface AdaptiveCard { - type: string - version: string - body: Body[] -} - -export interface Body { - type: string - text: string - wrap: boolean - size?: string -} - -export interface Feedback { - tag: null - updatedOn: null - type: string -} - -export interface SourceAttribution { - providerDisplayName: string - seeMoreUrl: string - searchQuery: string -} - -export interface SuggestedResponse { - text: string - author?: Author - createdAt?: Date - timestamp?: Date - messageId?: string - messageType?: string - offense?: string - feedback?: Feedback - contentOrigin?: string - privacy?: null -} - -export interface KBlobRequest { - knowledgeRequest: KnowledgeRequestContext - imageBase64?: string -} - -export interface KBlobResponse { - blobId: string - processedBlobId?: string -} - -export interface KnowledgeRequestContext { - imageInfo: ImageInfo; - knowledgeRequest: KnowledgeRequest; -} - -export interface ImageInfo { - url?: string; -} - -export interface KnowledgeRequest { - invokedSkills: string[]; - subscriptionId: string; - invokedSkillsRequestData: InvokedSkillsRequestData; - convoData: ConvoData; -} - -export interface ConvoData { - convoid: string; - convotone: BingConversationStyle; -} - -export interface InvokedSkillsRequestData { - enableFaceBlur: boolean; -} - -export interface FileItem { - url: string; - status?: 'loading' | 'error' | 'loaded' -} diff --git a/spaces/mygyasir/Image-Models-Test92/README.md b/spaces/mygyasir/Image-Models-Test92/README.md deleted file mode 100644 index 6bdc673ad2a2e5ff65cc1344170b984c0a91154d..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/Image-Models-Test92/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: allknowingroger/Image-Models-Test92 ---- - - \ No newline at end of file diff --git a/spaces/mymiss/ComfyUI-ave/README.md b/spaces/mymiss/ComfyUI-ave/README.md deleted file mode 100644 index 4becbe61b28c4a8b76e4185e77acf341e875b0ce..0000000000000000000000000000000000000000 --- a/spaces/mymiss/ComfyUI-ave/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ComfyUI -emoji: 🪟.UI -colorFrom: green -colorTo: pink -sdk: static -pinned: true -license: creativeml-openrail-m ---- - -![](https://raw.githubusercontent.com/ehristoforu/imghost/main/31c77b98-1019-4ea6-9c89-d66d25ab1586.jpg) -

      ComfyUI

      -

This is a UI for everyone! Generate images for FREE!

      \ No newline at end of file diff --git a/spaces/najimino/video/README.md b/spaces/najimino/video/README.md deleted file mode 100644 index fcbf80426a4e91bdd7cf9bfe8db6ea3c1d6336e6..0000000000000000000000000000000000000000 --- a/spaces/najimino/video/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Video -emoji: 🏃 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nakas/MusicGenDemucs/tests/common_utils/temp_utils.py b/spaces/nakas/MusicGenDemucs/tests/common_utils/temp_utils.py deleted file mode 100644 index d1e0367e979c8b9fea65472c373916d956ad5aaa..0000000000000000000000000000000000000000 --- a/spaces/nakas/MusicGenDemucs/tests/common_utils/temp_utils.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import os -import tempfile - - -class TempDirMixin: - """Mixin to provide easy access to temp dir. - """ - - temp_dir_ = None - - @classmethod - def get_base_temp_dir(cls): - # If AUDIOCRAFT_TEST_DIR is set, use it instead of temporary directory. - # this is handy for debugging. - key = "AUDIOCRAFT_TEST_DIR" - if key in os.environ: - return os.environ[key] - if cls.temp_dir_ is None: - cls.temp_dir_ = tempfile.TemporaryDirectory() - return cls.temp_dir_.name - - @classmethod - def tearDownClass(cls): - if cls.temp_dir_ is not None: - try: - cls.temp_dir_.cleanup() - cls.temp_dir_ = None - except PermissionError: - # On Windows there is a know issue with `shutil.rmtree`, - # which fails intermittenly. - # https://github.com/python/cpython/issues/74168 - # Following the above thread, we ignore it. - pass - super().tearDownClass() - - @property - def id(self): - return self.__class__.__name__ - - def get_temp_path(self, *paths): - temp_dir = os.path.join(self.get_base_temp_dir(), self.id) - path = os.path.join(temp_dir, *paths) - os.makedirs(os.path.dirname(path), exist_ok=True) - return path - - def get_temp_dir(self, *paths): - temp_dir = os.path.join(self.get_base_temp_dir(), self.id) - path = os.path.join(temp_dir, *paths) - os.makedirs(path, exist_ok=True) - return path diff --git a/spaces/nateraw/deepafx-st/deepafx_st/callbacks/params.py b/spaces/nateraw/deepafx-st/deepafx_st/callbacks/params.py deleted file mode 100644 index e327671f665070f0be3e7f561c68fa5e3324811b..0000000000000000000000000000000000000000 --- a/spaces/nateraw/deepafx-st/deepafx_st/callbacks/params.py +++ /dev/null @@ -1,87 +0,0 @@ -import numpy as np -import pytorch_lightning as pl -import matplotlib.pyplot as plt - -import deepafx_st.utils as utils - - -class LogParametersCallback(pl.callbacks.Callback): - def __init__(self, num_examples=4): - super().__init__() - self.num_examples = 4 - - def on_validation_epoch_start(self, trainer, pl_module): - """At the start of validation init storage for parameters.""" - self.params = [] - - def on_validation_batch_end( - self, - trainer, - pl_module, - outputs, - batch, - batch_idx, - dataloader_idx, - ): - """Called when the validation batch ends. - - Here we log the parameters only from the first batch. 
- - """ - if outputs is not None and batch_idx == 0: - examples = np.min([self.num_examples, outputs["x"].shape[0]]) - for n in range(examples): - self.log_parameters( - outputs, - n, - pl_module.processor.ports, - trainer.global_step, - trainer.logger, - True if batch_idx == 0 else False, - ) - - def on_validation_epoch_end(self, trainer, pl_module): - pass - - def log_parameters(self, outputs, batch_idx, ports, global_step, logger, log=True): - p = outputs["p"][batch_idx, ...] - - table = "" - - # table += f"""## {plugin["name"]}\n""" - table += "| Index| Name | Value | Units | Min | Max | Default | Raw Value | \n" - table += "|------|------|------:|:------|----:|----:|--------:| ---------:| \n" - - start_idx = 0 - # set plugin parameters based on provided normalized parameters - for port_list in ports: - for pidx, port in enumerate(port_list): - param_max = port["max"] - param_min = port["min"] - param_name = port["name"] - param_default = port["default"] - param_units = port["units"] - - param_val = p[start_idx] - denorm_val = utils.denormalize(param_val, param_max, param_min) - - # add values to table in row - table += f"| {start_idx + 1} | {param_name} " - if np.abs(denorm_val) > 10: - table += f"| {denorm_val:0.1f} " - table += f"| {param_units} " - table += f"| {param_min:0.1f} | {param_max:0.1f} " - table += f"| {param_default:0.1f} " - else: - table += f"| {denorm_val:0.3f} " - table += f"| {param_units} " - table += f"| {param_min:0.3f} | {param_max:0.3f} " - table += f"| {param_default:0.3f} " - - table += f"| {np.squeeze(param_val):0.2f} | \n" - start_idx += 1 - - table += "\n\n" - - if log: - logger.experiment.add_text(f"params/{batch_idx+1}", table, global_step) diff --git a/spaces/nateraw/yolov6/yolov6/core/inferer.py b/spaces/nateraw/yolov6/yolov6/core/inferer.py deleted file mode 100644 index e778d7979a18b5c5d7bf5ae766a950c8a59d030f..0000000000000000000000000000000000000000 --- a/spaces/nateraw/yolov6/yolov6/core/inferer.py +++ /dev/null @@ -1,231 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -import os -import os.path as osp -import math -from tqdm import tqdm -import numpy as np -import cv2 -import torch -from PIL import ImageFont - -from yolov6.utils.events import LOGGER, load_yaml -from yolov6.layers.common import DetectBackend -from yolov6.data.data_augment import letterbox -from yolov6.utils.nms import non_max_suppression -from yolov6.utils.torch_utils import get_model_info - - -class Inferer: - def __init__(self, source, weights, device, yaml, img_size, half): - import glob - from yolov6.data.datasets import IMG_FORMATS - - self.__dict__.update(locals()) - - # Init model - self.device = device - self.img_size = img_size - cuda = self.device != 'cpu' and torch.cuda.is_available() - self.device = torch.device('cuda:0' if cuda else 'cpu') - self.model = DetectBackend(weights, device=self.device) - self.stride = self.model.stride - self.class_names = load_yaml(yaml)['names'] - self.img_size = self.check_img_size(self.img_size, s=self.stride) # check image size - - # Half precision - if half & (self.device.type != 'cpu'): - self.model.model.half() - else: - self.model.model.float() - half = False - - if self.device.type != 'cpu': - self.model(torch.zeros(1, 3, *self.img_size).to(self.device).type_as(next(self.model.model.parameters()))) # warmup - - # Load data - if os.path.isdir(source): - img_paths = sorted(glob.glob(os.path.join(source, '*.*'))) # dir - elif os.path.isfile(source): - img_paths = [source] # files - else: - raise Exception(f'Invalid path: 
{source}') - self.img_paths = [img_path for img_path in img_paths if img_path.split('.')[-1].lower() in IMG_FORMATS] - - # Switch model to deploy status - self.model_switch(self.model, self.img_size) - - def model_switch(self, model, img_size): - ''' Model switch to deploy status ''' - from yolov6.layers.common import RepVGGBlock - for layer in model.modules(): - if isinstance(layer, RepVGGBlock): - layer.switch_to_deploy() - - LOGGER.info("Switch model to deploy modality.") - - def infer(self, conf_thres, iou_thres, classes, agnostic_nms, max_det, save_dir, save_txt, save_img, hide_labels, hide_conf): - ''' Model Inference and results visualization ''' - - for img_path in tqdm(self.img_paths): - img, img_src = self.precess_image(img_path, self.img_size, self.stride, self.half) - img = img.to(self.device) - if len(img.shape) == 3: - img = img[None] - # expand for batch dim - pred_results = self.model(img) - det = non_max_suppression(pred_results, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)[0] - - save_path = osp.join(save_dir, osp.basename(img_path)) # im.jpg - txt_path = osp.join(save_dir, 'labels', osp.splitext(osp.basename(img_path))[0]) - - gn = torch.tensor(img_src.shape)[[1, 0, 1, 0]] # normalization gain whwh - img_ori = img_src - - # check image and font - assert img_ori.data.contiguous, 'Image needs to be contiguous. Please apply to input images with np.ascontiguousarray(im).' - self.font_check() - - if len(det): - det[:, :4] = self.rescale(img.shape[2:], det[:, :4], img_src.shape).round() - - for *xyxy, conf, cls in reversed(det): - if save_txt: # Write to file - xywh = (self.box_convert(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh - line = (cls, *xywh, conf) - with open(txt_path + '.txt', 'a') as f: - f.write(('%g ' * len(line)).rstrip() % line + '\n') - - if save_img: - class_num = int(cls) # integer class - label = None if hide_labels else (self.class_names[class_num] if hide_conf else f'{self.class_names[class_num]} {conf:.2f}') - - self.plot_box_and_label(img_ori, max(round(sum(img_ori.shape) / 2 * 0.003), 2), xyxy, label, color=self.generate_colors(class_num, True)) - - img_src = np.asarray(img_ori) - - # Save results (image with detections) - if save_img: - cv2.imwrite(save_path, img_src) - - @staticmethod - def precess_image(path, img_size, stride, half): - '''Process image before image inference.''' - try: - img_src = cv2.imread(path) - assert img_src is not None, f'Invalid image: {path}' - except Exception as e: - LOGGER.warning(e) - image = letterbox(img_src, img_size, stride=stride)[0] - - # Convert - image = image.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB - image = torch.from_numpy(np.ascontiguousarray(image)) - image = image.half() if half else image.float() # uint8 to fp16/32 - image /= 255 # 0 - 255 to 0.0 - 1.0 - - return image, img_src - - @staticmethod - def rescale(ori_shape, boxes, target_shape): - '''Rescale the output to the original image shape''' - ratio = min(ori_shape[0] / target_shape[0], ori_shape[1] / target_shape[1]) - padding = (ori_shape[1] - target_shape[1] * ratio) / 2, (ori_shape[0] - target_shape[0] * ratio) / 2 - - boxes[:, [0, 2]] -= padding[0] - boxes[:, [1, 3]] -= padding[1] - boxes[:, :4] /= ratio - - boxes[:, 0].clamp_(0, target_shape[1]) # x1 - boxes[:, 1].clamp_(0, target_shape[0]) # y1 - boxes[:, 2].clamp_(0, target_shape[1]) # x2 - boxes[:, 3].clamp_(0, target_shape[0]) # y2 - - return boxes - - def check_img_size(self, img_size, s=32, floor=0): - """Make sure image size is a 
multiple of stride s in each dimension, and return a new shape list of image.""" - if isinstance(img_size, int): # integer i.e. img_size=640 - new_size = max(self.make_divisible(img_size, int(s)), floor) - elif isinstance(img_size, list): # list i.e. img_size=[640, 480] - new_size = [max(self.make_divisible(x, int(s)), floor) for x in img_size] - else: - raise Exception(f"Unsupported type of img_size: {type(img_size)}") - - if new_size != img_size: - print(f'WARNING: --img-size {img_size} must be multiple of max stride {s}, updating to {new_size}') - return new_size if isinstance(img_size,list) else [new_size]*2 - - def make_divisible(self, x, divisor): - # Upward revision the value x to make it evenly divisible by the divisor. - return math.ceil(x / divisor) * divisor - - @staticmethod - def plot_box_and_label(image, lw, box, label='', color=(128, 128, 128), txt_color=(255, 255, 255)): - # Add one xyxy box to image with label - p1, p2 = (int(box[0]), int(box[1])), (int(box[2]), int(box[3])) - cv2.rectangle(image, p1, p2, color, thickness=lw, lineType=cv2.LINE_AA) - if label: - tf = max(lw - 1, 1) # font thickness - w, h = cv2.getTextSize(label, 0, fontScale=lw / 3, thickness=tf)[0] # text width, height - outside = p1[1] - h - 3 >= 0 # label fits outside box - p2 = p1[0] + w, p1[1] - h - 3 if outside else p1[1] + h + 3 - cv2.rectangle(image, p1, p2, color, -1, cv2.LINE_AA) # filled - cv2.putText(image, label, (p1[0], p1[1] - 2 if outside else p1[1] + h + 2), 0, lw / 3, txt_color, - thickness=tf, lineType=cv2.LINE_AA) - - @staticmethod - def font_check(font='./yolov6/utils/Arial.ttf', size=10): - # Return a PIL TrueType Font, downloading to CONFIG_DIR if necessary - assert osp.exists(font), f'font path not exists: {font}' - try: - return ImageFont.truetype(str(font) if font.exists() else font.name, size) - except Exception as e: # download if missing - return ImageFont.truetype(str(font), size) - - @staticmethod - def box_convert(x): - # Convert boxes with shape [n, 4] from [x1, y1, x2, y2] to [x, y, w, h] where x1y1=top-left, x2y2=bottom-right - y = x.clone() if isinstance(x, torch.Tensor) else np.copy(x) - y[:, 0] = (x[:, 0] + x[:, 2]) / 2 # x center - y[:, 1] = (x[:, 1] + x[:, 3]) / 2 # y center - y[:, 2] = x[:, 2] - x[:, 0] # width - y[:, 3] = x[:, 3] - x[:, 1] # height - return y - - @staticmethod - def generate_colors(i, bgr=False): - hex = ('FF3838', 'FF9D97', 'FF701F', 'FFB21D', 'CFD231', '48F90A', '92CC17', '3DDB86', '1A9334', '00D4BB', - '2C99A8', '00C2FF', '344593', '6473FF', '0018EC', '8438FF', '520085', 'CB38FF', 'FF95C8', 'FF37C7') - palette = [] - for iter in hex: - h = '#' + iter - palette.append(tuple(int(h[1 + i:1 + i + 2], 16) for i in (0, 2, 4))) - num = len(palette) - color = palette[int(i) % num] - return (color[2], color[1], color[0]) if bgr else color - - -class VideoInferer(Inferer): - - def setup_source(self, source): - # Load data - if os.path.isfile(source): - self.vid_path = source - self.vid_name = '.'.join(os.path.basename(source).split('.')[:-1]) - else: - raise Exception(f'Invalid path: {source}') - - self.cap = cv2.VideoCapture(self.vid_path) - - def iterator_length(self): - return int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT)) - - def img_iterator(self): - cur_fid = 0 - ret, frame = self.cap.read() - - while ret: - yield frame, f'{self.vid_name}_frame_{cur_fid:06}.jpg' - ret, frame = self.cap.read() - cur_fid += 1 diff --git a/spaces/nerijs/coralchar-diffusion/app.py b/spaces/nerijs/coralchar-diffusion/app.py deleted file mode 100644 index 
feff78119e20ee9457465520b569f9a2177cdece..0000000000000000000000000000000000000000 --- a/spaces/nerijs/coralchar-diffusion/app.py +++ /dev/null @@ -1,137 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image - -model_id = 'nerijs/coralchar-diffusion' -prefix = 'a woman wearing blue jeans and a white tank top, coralchar style' - -scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler") - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False): - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - if img is not None: - return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe_i2i( - prompt, - negative_prompt = neg_prompt, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
      -
      -

      Coralchar Diffusion

      -
      -

      - Demo for Coralchar Diffusion Stable Diffusion model.
      - {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""} -

      - Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"}

      - Duplicate Space -
      - """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically (a woman wearing blue jeans and a white tank top, coralchar style)", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - gr.HTML(""" -
      -
      -

      This space was created using SD Space Creator.

      -
      - """) - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Gnss Internet Radio 1.4.11 48.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Gnss Internet Radio 1.4.11 48.md deleted file mode 100644 index 4899099e14feb1c18c05fca82204001014e0b925..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Gnss Internet Radio 1.4.11 48.md +++ /dev/null @@ -1,23 +0,0 @@ - -

What is Gnss Internet Radio 1.4.11 48 and how do you use it?

      -

      Gnss Internet Radio 1.4.11 48 is a software program that allows users to listen to online radio streams that provide GNSS (Global Navigation Satellite System) data. GNSS data can be used for various applications, such as navigation, positioning, timing, geodesy, and surveying.

      -

      Gnss Internet Radio 1.4.11 48


Download Zip: https://urlcod.com/2uIaWp



      -

      The program handles the HTTP communication and transfers received GNSS data to a serial or IP port feeding networking software or a DGPS/RTK application[^3^]. It can also compute a real-time Precise Point Positioning (PPP) solution from RTCM streams or RINEX files[^3^].
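The article does not show how such a transfer actually works, so here is a minimal, illustrative sketch of the general idea in Python. Everything in it is an assumption made for illustration: the caster host, mount point, credentials, and port numbers are placeholders, and the request format simply follows the common NTRIP-style HTTP convention rather than anything documented for Gnss Internet Radio itself.

```python
# Illustrative sketch only: relay an HTTP GNSS data stream to a local TCP port.
# The caster host, port, mount point, and credentials are placeholders, and the
# request follows the common NTRIP v1 convention; none of this is taken from
# Gnss Internet Radio itself.
import base64
import socket

CASTER_HOST = "caster.example.net"   # placeholder NTRIP-style caster
CASTER_PORT = 2101                   # commonly used NTRIP port (assumption)
MOUNTPOINT = "EXAMPLE0"              # placeholder mount point
USER, PASSWORD = "user", "password"  # placeholder credentials
LOCAL_PORT = 3001                    # local IP port read by a DGPS/RTK application


def relay_stream() -> None:
    # Request the data stream from the caster over plain HTTP.
    auth = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    request = (
        f"GET /{MOUNTPOINT} HTTP/1.0\r\n"
        "User-Agent: NTRIP ExampleClient/1.0\r\n"
        f"Authorization: Basic {auth}\r\n"
        "\r\n"
    )
    caster = socket.create_connection((CASTER_HOST, CASTER_PORT), timeout=10)
    caster.sendall(request.encode())
    # A real client would check the response header here (NTRIP v1 casters
    # typically answer with "ICY 200 OK") before relaying any data.

    # Wait for one local consumer, e.g. an RTK application reading the port.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", LOCAL_PORT))
    server.listen(1)
    consumer, _ = server.accept()

    # Relay the received GNSS data (e.g. RTCM messages) byte for byte.
    while True:
        chunk = caster.recv(4096)
        if not chunk:
            break
        consumer.sendall(chunk)


if __name__ == "__main__":
    relay_stream()
```

A real client would also handle reconnects after dropouts and parse the caster's source table; the sketch keeps only the core relay loop that the paragraph above describes.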

      -

      To use Gnss Internet Radio 1.4.11 48, users need to download and install the software from the official website[^3^] or from other sources[^1^] [^2^]. Then, they need to configure the program settings, such as the input source, the output port, the PPP mode, and the logging options. Users can also select from a list of available radio stations that provide GNSS data in different formats and frequencies.

      -

      Gnss Internet Radio 1.4.11 48 is a useful tool for anyone who needs to access GNSS data from online sources. It is compatible with Windows operating systems and requires a stable internet connection.

      - -

      What are some applications of GNSS data?

      -

      GNSS data can be used for various applications in different domains, such as:

      -
        -
      • Logistics and transportation: GNSS enables real-time access to the location of items for delivery, tracking of vehicles and assets, optimization of routes and schedules, and management of traffic and congestion[^1^].
      • -
      • Industry and agriculture: GNSS can improve the efficiency and productivity of industrial and agricultural operations, such as machine guidance, precision farming, irrigation control, pest management, and environmental monitoring[^2^].
      • -
      • Wearables and smartphones: GNSS can enhance the functionality and user experience of wearable devices and smartphones, such as fitness trackers, smart watches, navigation apps, location-based services, social media, gaming, and augmented reality[^1^] [^2^].
      • -
      • Air, sea, and land navigation: GNSS can provide accurate and reliable positioning, navigation, and timing information for various modes of transportation, such as airplanes, ships, cars, bikes, and pedestrians[^2^]. GNSS can also support safety-critical applications, such as air traffic control, search and rescue, collision avoidance, and emergency response[^2^].
      • -
      • Earth sciences: GNSS can contribute to the scientific understanding of the Earth system, such as its geology, geophysics, hydrology, meteorology, climatology, and space weather[^3^]. GNSS can also support natural hazard monitoring and mitigation, such as earthquakes, volcanoes, landslides, floods, tsunamis, and ionospheric disturbances[^3^].
      • -
      • Space science: GNSS can enable real-time spacecraft navigation based on spaceborne GNSS receivers for low-Earth orbits and geostationary orbits[^3^]. GNSS can also support space exploration missions to other planets or celestial bodies by providing precise orbit determination and relative positioning[^3^].
      • -
      • Fundamental physics: GNSS can test fundamental physical theories and principles by exploiting the high accuracy of the clocks on board GNSS satellites and the accuracy with which the orbits are known[^3^]. GNSS can also measure relativistic effects such as time dilation and gravitational redshift[^3^].
      • -
      • Metrology: GNSS can provide a global reference for time and frequency transfer by using the signals from GNSS satellites as a common clock source[^3^]. GNSS can also support synchronization of networks and systems that require precise timing information[^3^].
      • -
      -
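To give a sense of scale for the relativistic effects mentioned under "Fundamental physics" above, the following uses standard textbook values for GPS-like orbits; they are not figures taken from this article. The fractional rate offset of a satellite clock relative to a ground clock combines velocity time dilation with the gravitational frequency shift (with $v$ the orbital speed and $\Delta\Phi$ the difference in gravitational potential between the orbit and the ground):

$$
\frac{\Delta f}{f} \;\approx\; -\frac{v^{2}}{2c^{2}} \;+\; \frac{\Delta\Phi}{c^{2}} \;\approx\; -0.8\times10^{-10} \;+\; 5.3\times10^{-10} \;\approx\; +4.5\times10^{-10},
$$

which works out to a gain of roughly 38 microseconds per day, large enough that satellite clock frequencies are deliberately offset before launch and that GNSS processing software must model these terms.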

Gnss Internet Radio 1.4.11 48 is a software program that allows users to listen to online radio streams that provide GNSS data. GNSS data can be used for various applications in different domains. By using Gnss Internet Radio 1.4.11 48, users can easily access GNSS data from online sources.

      -
      -
      \ No newline at end of file diff --git a/spaces/neuralmagic/nlp-text-classification/app.py b/spaces/neuralmagic/nlp-text-classification/app.py deleted file mode 100644 index b45501fb79f8e8db9034dfcac5366735fed8fd08..0000000000000000000000000000000000000000 --- a/spaces/neuralmagic/nlp-text-classification/app.py +++ /dev/null @@ -1,77 +0,0 @@ -from deepsparse import Pipeline -import time -import gradio as gr - -markdownn = ''' -# Text Classification Pipeline with DeepSparse -Text Classification involves assigning a label to a given text. For example, sentiment analysis is an example of a text classification use case. -![Text Classification Pipeline with DeepSparse](https://huggingface.co/spaces/neuralmagic/nlp-text-classification/resolve/main/text-classification.png) -## What is DeepSparse -DeepSparse is an inference runtime offering GPU-class performance on CPUs and APIs to integrate ML into your application. Sparsification is a powerful technique for optimizing models for inference, reducing the compute needed with a limited accuracy tradeoff. DeepSparse is designed to take advantage of model sparsity, enabling you to deploy models with the flexibility and scalability of software on commodity CPUs with the best-in-class performance of hardware accelerators, enabling you to standardize operations and reduce infrastructure costs. -Similar to Hugging Face, DeepSparse provides off-the-shelf pipelines for computer vision and NLP that wrap the model with proper pre- and post-processing to run performantly on CPUs by using sparse models. - -The text classification Pipeline, for example, wraps an NLP model with the proper preprocessing and postprocessing pipelines, such as tokenization. -### Inference API Example -Here is sample code for a text classification pipeline: -``` -from deepsparse import Pipeline -pipeline = Pipeline.create(task="zero_shot_text_classification", model_path="zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/pruned80_quant-none-vnni",model_scheme="mnli",model_config={"hypothesis_template": "This text is related to {}"},) -text = "The senate passed 3 laws today" -inference = pipeline(sequences= text,labels=['politics', 'public health', 'Europe'],) -print(inference) -``` -## Use Case Description -Customer review classification is a great example of text classification in action. - -The ability to quickly classify sentiment from customers is an added advantage for any business. -Therefore, whichever solution you deploy for classifying the customer reviews should deliver results in the shortest time possible. -By being fast the solution will process more volume, hence cheaper computational resources are utilized. - -When deploying a text classification model, decreasing the model’s latency and increasing its throughput is critical. This is why DeepSparse Pipelines have sparse text classification models. - -[Want to train a sparse model on your data? 
Checkout the documentation on sparse transfer learning](https://docs.neuralmagic.com/use-cases/natural-language-processing/question-answering) - -''' -task = "zero_shot_text_classification" -sparse_classification_pipeline = Pipeline.create( - task=task, - model_path="zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/pruned80_quant-none-vnni", - model_scheme="mnli", - model_config={"hypothesis_template": "This text is related to {}"}, - ) -def run_pipeline(text): - sparse_start = time.perf_counter() - sparse_output = sparse_classification_pipeline(sequences= text,labels=['politics', 'public health', 'Europe'],) - sparse_result = dict(sparse_output) - sparse_end = time.perf_counter() - sparse_duration = (sparse_end - sparse_start) * 1000.0 - dict_r = {sparse_result['labels'][0]:sparse_result['scores'][0],sparse_result['labels'][1]:sparse_result['scores'][1], sparse_result['labels'][2]:sparse_result['scores'][2]} - return dict_r, sparse_duration - - -with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - gr.Markdown(markdownn) - - with gr.Column(): - gr.Markdown(""" - ### Text classification demo - """) - text = gr.Text(label="Text") - btn = gr.Button("Submit") - - sparse_answers = gr.Label(label="Sparse model answers", - num_top_classes=3 - ) - sparse_duration = gr.Number(label="Sparse Latency (ms):") - gr.Examples([["The senate passed 3 laws today"],["Who are you voting for in 2020?"],["Public health is very important"]],inputs=[text],) - - btn.click( - run_pipeline, - inputs=[text], - outputs=[sparse_answers,sparse_duration], - ) - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/ngthanhtinqn/Segment_Anything_With_OWL-ViT/README.md b/spaces/ngthanhtinqn/Segment_Anything_With_OWL-ViT/README.md deleted file mode 100644 index 9c4e22dffe57acc6caff5260e5534887c19acddf..0000000000000000000000000000000000000000 --- a/spaces/ngthanhtinqn/Segment_Anything_With_OWL-ViT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Segment Anything With OWL-ViT -emoji: 🦀 -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nickmuchi/article-text-summarizer/app.py b/spaces/nickmuchi/article-text-summarizer/app.py deleted file mode 100644 index a66c7b62606f345b89f21713f8391d924afb5ab4..0000000000000000000000000000000000000000 --- a/spaces/nickmuchi/article-text-summarizer/app.py +++ /dev/null @@ -1,505 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -# In[1]: -import validators, re -import torch -from fake_useragent import UserAgent -from bs4 import BeautifulSoup -import streamlit as st -from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer -from sentence_transformers import SentenceTransformer -import en_core_web_lg -import time -import base64 -import requests -import docx2txt -from io import StringIO -from PyPDF2 import PdfFileReader -import warnings -import nltk -import itertools -import numpy as np - -nltk.download('punkt') - -from nltk import sent_tokenize - -warnings.filterwarnings("ignore") - - -# In[2]: - -time_str = time.strftime("%d%m%Y-%H%M%S") -HTML_WRAPPER = """
      {}
      """ -#Functions - -def article_text_extractor(url: str): - - '''Extract text from url and divide text into chunks if length of text is more than 500 words''' - - ua = UserAgent() - - headers = {'User-Agent':str(ua.chrome)} - - r = requests.get(url,headers=headers) - - soup = BeautifulSoup(r.text, "html.parser") - title_text = soup.find_all(["h1"]) - para_text = soup.find_all(["p"]) - article_text = [result.text for result in para_text] - - try: - - article_header = [result.text for result in title_text][0] - - except: - - article_header = '' - - article = nlp(" ".join(article_text)) - sentences = [i.text for i in list(article.sents)] - - current_chunk = 0 - chunks = [] - - for sentence in sentences: - if len(chunks) == current_chunk + 1: - if len(chunks[current_chunk]) + len(sentence.split(" ")) <= 500: - chunks[current_chunk].extend(sentence.split(" ")) - else: - current_chunk += 1 - chunks.append(sentence.split(" ")) - else: - chunks.append(sentence.split(" ")) - - for chunk_id in range(len(chunks)): - chunks[chunk_id] = " ".join(chunks[chunk_id]) - - return article_header, chunks - -def chunk_clean_text(text): - - """Chunk text longer than 500 tokens""" - - article = nlp(text) - sentences = [i.text for i in list(article.sents)] - - current_chunk = 0 - chunks = [] - - for sentence in sentences: - if len(chunks) == current_chunk + 1: - if len(chunks[current_chunk]) + len(sentence.split(" ")) <= 500: - chunks[current_chunk].extend(sentence.split(" ")) - else: - current_chunk += 1 - chunks.append(sentence.split(" ")) - else: - chunks.append(sentence.split(" ")) - - for chunk_id in range(len(chunks)): - chunks[chunk_id] = " ".join(chunks[chunk_id]) - - return chunks - -def preprocess_plain_text(x): - - x = x.encode("ascii", "ignore").decode() # unicode - x = re.sub(r"https*\S+", " ", x) # url - x = re.sub(r"@\S+", " ", x) # mentions - x = re.sub(r"#\S+", " ", x) # hastags - x = re.sub(r"\s{2,}", " ", x) # over spaces - x = re.sub("[^.,!?A-Za-z0-9]+", " ", x) # special charachters except .,!? - - return x - -def extract_pdf(file): - - '''Extract text from PDF file''' - - pdfReader = PdfFileReader(file) - count = pdfReader.numPages - all_text = "" - for i in range(count): - page = pdfReader.getPage(i) - all_text += page.extractText() - - - return all_text - - -def extract_text_from_file(file): - - '''Extract text from uploaded file''' - - # read text file - if file.type == "text/plain": - # To convert to a string based IO: - stringio = StringIO(file.getvalue().decode("utf-8")) - - # To read file as string: - file_text = stringio.read() - - # read pdf file - elif file.type == "application/pdf": - file_text = extract_pdf(file) - - # read docx file - elif ( - file.type - == "application/vnd.openxmlformats-officedocument.wordprocessingml.document" - ): - file_text = docx2txt.process(file) - - return file_text - -def summary_downloader(raw_text): - - b64 = base64.b64encode(raw_text.encode()).decode() - new_filename = "new_text_file_{}_.txt".format(time_str) - st.markdown("#### Download Summary as a File ###") - href = f'Click to Download!!' 
- st.markdown(href,unsafe_allow_html=True) - -def get_all_entities_per_sentence(text): - doc = nlp(''.join(text)) - - sentences = list(doc.sents) - - entities_all_sentences = [] - for sentence in sentences: - entities_this_sentence = [] - - # SPACY ENTITIES - for entity in sentence.ents: - entities_this_sentence.append(str(entity)) - - # FLAIR ENTITIES (CURRENTLY NOT USED) - # sentence_entities = Sentence(str(sentence)) - # tagger.predict(sentence_entities) - # for entity in sentence_entities.get_spans('ner'): - # entities_this_sentence.append(entity.text) - - # XLM ENTITIES - entities_xlm = [entity["word"] for entity in ner_model(str(sentence))] - for entity in entities_xlm: - entities_this_sentence.append(str(entity)) - - entities_all_sentences.append(entities_this_sentence) - - return entities_all_sentences - -def get_all_entities(text): - all_entities_per_sentence = get_all_entities_per_sentence(text) - return list(itertools.chain.from_iterable(all_entities_per_sentence)) - -def get_and_compare_entities(article_content,summary_output): - - all_entities_per_sentence = get_all_entities_per_sentence(article_content) - entities_article = list(itertools.chain.from_iterable(all_entities_per_sentence)) - - all_entities_per_sentence = get_all_entities_per_sentence(summary_output) - entities_summary = list(itertools.chain.from_iterable(all_entities_per_sentence)) - - matched_entities = [] - unmatched_entities = [] - for entity in entities_summary: - if any(entity.lower() in substring_entity.lower() for substring_entity in entities_article): - matched_entities.append(entity) - elif any( - np.inner(sentence_embedding_model.encode(entity, show_progress_bar=False), - sentence_embedding_model.encode(art_entity, show_progress_bar=False)) > 0.9 for - art_entity in entities_article): - matched_entities.append(entity) - else: - unmatched_entities.append(entity) - - matched_entities = list(dict.fromkeys(matched_entities)) - unmatched_entities = list(dict.fromkeys(unmatched_entities)) - - matched_entities_to_remove = [] - unmatched_entities_to_remove = [] - - for entity in matched_entities: - for substring_entity in matched_entities: - if entity != substring_entity and entity.lower() in substring_entity.lower(): - matched_entities_to_remove.append(entity) - - for entity in unmatched_entities: - for substring_entity in unmatched_entities: - if entity != substring_entity and entity.lower() in substring_entity.lower(): - unmatched_entities_to_remove.append(entity) - - matched_entities_to_remove = list(dict.fromkeys(matched_entities_to_remove)) - unmatched_entities_to_remove = list(dict.fromkeys(unmatched_entities_to_remove)) - - for entity in matched_entities_to_remove: - matched_entities.remove(entity) - for entity in unmatched_entities_to_remove: - unmatched_entities.remove(entity) - - return matched_entities, unmatched_entities - -def highlight_entities(article_content,summary_output): - - markdown_start_red = "" - markdown_start_green = "" - markdown_end = "" - - matched_entities, unmatched_entities = get_and_compare_entities(article_content,summary_output) - - print(summary_output) - - for entity in matched_entities: - summary_output = re.sub(f'({entity})(?![^rgb\(]*\))',markdown_start_green + entity + markdown_end,summary_output) - - for entity in unmatched_entities: - summary_output = re.sub(f'({entity})(?![^rgb\(]*\))',markdown_start_red + entity + markdown_end,summary_output) - - print("") - print(summary_output) - - print("") - print(summary_output) - - soup = BeautifulSoup(summary_output, 
features="html.parser") - - return HTML_WRAPPER.format(soup) - - -def clean_text(text,doc=False,plain_text=False,url=False): - """Return clean text from the various input sources""" - - if url: - is_url = validators.url(text) - - if is_url: - # complete text, chunks to summarize (list of sentences for long docs) - article_title,chunks = article_text_extractor(url=url_text) - - return article_title, chunks - - elif doc: - - clean_text = chunk_clean_text(preprocess_plain_text(extract_text_from_file(text))) - - return None, clean_text - - elif plain_text: - - clean_text = chunk_clean_text(preprocess_plain_text(text)) - - return None, clean_text - - -@st.experimental_singleton(suppress_st_warning=True) -def get_spacy(): - nlp = en_core_web_lg.load() - return nlp - -@st.experimental_singleton(suppress_st_warning=True) -def facebook_model(): - model_name = 'facebook/bart-large-cnn' - summarizer = pipeline('summarization',model=model_name,tokenizer=model_name, - device=0 if torch.cuda.is_available() else -1) - return summarizer - -@st.experimental_singleton(suppress_st_warning=True) -def schleifer_model(): - model_name = 'sshleifer/distilbart-cnn-12-6' - summarizer = pipeline('summarization',model=model_name, tokenizer=model_name, - device=0 if torch.cuda.is_available() else -1) - return summarizer - -@st.experimental_singleton(suppress_st_warning=True) -def google_model(): - model_name = 'google/pegasus-large' - summarizer = pipeline('summarization',model=model_name, tokenizer=model_name, - device=0 if torch.cuda.is_available() else -1) - return summarizer - -@st.experimental_singleton(suppress_st_warning=True) -def get_sentence_embedding_model(): - return SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2') - -@st.experimental_singleton(suppress_st_warning=True) -def get_ner_pipeline(): - tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large-finetuned-conll03-english") - model = AutoModelForTokenClassification.from_pretrained("xlm-roberta-large-finetuned-conll03-english") - return pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True) - -# Load all different models (cached) at start time of the hugginface space -sentence_embedding_model = get_sentence_embedding_model() -ner_model = get_ner_pipeline() -nlp = get_spacy() - -#Streamlit App - -st.title("Article Text and Link Extractive Summarizer with Entity Matching 📝") - -model_type = st.sidebar.selectbox( - "Model type", options=["Facebook-Bart", "Sshleifer-DistilBart","Google-Pegasus"] -) - -max_len= st.sidebar.slider("Maximum length of the summarized text",min_value=100,max_value=500,step=10) -min_len= st.sidebar.slider("Minimum length of the summarized text",min_value=50,max_value=200,step=10) - -st.markdown( - "Model Source: [Facebook-Bart-large-CNN](https://huggingface.co/facebook/bart-large-cnn), [Sshleifer-distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) and [Google-Pegasus-large](https://huggingface.co/google/pegasus-large)" -) - -st.markdown( - """The app supports extractive summarization which aims to identify the salient information that is then extracted and grouped together to form a concise summary. - For documents or text that is more than 500 words long, the app will divide the text into chunks and summarize each chunk. Please note when using the sidebar slider, those values represent the min/max text length per chunk of text to be summarized. 
If your article to be summarized is 1000 words, it will be divided into two chunks of 500 words first then the default max length of 100 words is applied per chunk, resulting in a summarized text with 200 words maximum. - There are two models available to choose from:""") - -st.markdown(""" - - Facebook-Bart, trained on large [CNN and Daily Mail](https://huggingface.co/datasets/cnn_dailymail) news articles. - - Sshleifer-Distilbart, which is a distilled (smaller) version of the large Bart model. - - Google Pegasus, trained on large C4 and HugeNews articles""" -) - -st.markdown("""Please do note that the model will take longer to generate summaries for documents that are too long.""") - -st.markdown( - "The app only ingests the below formats for summarization task:" -) -st.markdown( - """- Raw text entered in text box. -- URL of an article to be summarized. -- Documents with .txt, .pdf or .docx file formats.""" -) - -st.markdown("---") - -if "text_area" not in st.session_state: - st.session_state.text_area = '' - -if "summ_area" not in st.session_state: - st.session_state.summ_area = '' - -url_text = st.text_input("Please Enter a url here") - -st.markdown( - "

      OR

      ", - unsafe_allow_html=True, -) - -plain_text = st.text_area("Please Paste/Enter plain text here",) - -st.markdown( - "

      OR

      ", - unsafe_allow_html=True, -) - -upload_doc = st.file_uploader( - "Upload a .txt, .pdf, .docx file for summarization" -) - -if url_text: - article_title, cleaned_text = clean_text(url_text, url=True) - st.session_state.text_area = cleaned_text[0] - -elif plain_text: - article_title, cleaned_text = clean_text(plain_text,plain_text=True) - st.session_state.text_area = ''.join(cleaned_text) - -elif upload_doc: - article_title, cleaned_text = clean_text(upload_doc,doc=True) - st.session_state.text_area = ''.join(cleaned_text) - -article_text = st.text_area( - label='Full Article Text', - placeholder="Full article text will be displayed here..", - height=250, - key='text_area' -) - -summarize = st.button("Summarize") - -# called on toggle button [summarize] -if summarize: - if model_type == "Facebook-Bart": - if url_text: - text_to_summarize =cleaned_text[0] - else: - text_to_summarize = cleaned_text - - with st.spinner( - text="Loading Facebook-Bart Model and Extracting summary. This might take a few seconds depending on the length of your text..." - ): - summarizer_model = facebook_model() - summarized_text = summarizer_model(text_to_summarize, max_length=max_len, min_length=min_len,clean_up_tokenization_spaces=True,no_repeat_ngram_size=4) - summarized_text = ' '.join([summ['summary_text'] for summ in summarized_text]) - - - elif model_type == "Sshleifer-DistilBart": - if url_text: - text_to_summarize = cleaned_text[0] - else: - text_to_summarize = cleaned_text - - with st.spinner( - text="Loading Sshleifer-DistilBart Model and Extracting summary. This might take a few seconds depending on the length of your text..." - ): - summarizer_model = schleifer_model() - summarized_text = summarizer_model(text_to_summarize, max_length=max_len, min_length=min_len,clean_up_tokenization_spaces=True,no_repeat_ngram_size=4) - summarized_text = ' '.join([summ['summary_text'] for summ in summarized_text]) - - elif model_type == "Google-Pegasus": - if url_text: - text_to_summarize = cleaned_text[0] - - else: - text_to_summarize = cleaned_text - - with st.spinner( - text="Loading Google-Pegasus Model and Extracting summary. This might take a few seconds depending on the length of your text..." 
- ): - summarizer_model = google_model() - summarized_text = summarizer_model(text_to_summarize, max_length=max_len, min_length=min_len,clean_up_tokenization_spaces=True,no_repeat_ngram_size=4) - summarized_text = ' '.join([summ['summary_text'] for summ in summarized_text]) - - with st.spinner("Calculating and matching entities, this takes a few seconds..."): - entity_match_html = highlight_entities(text_to_summarize,summarized_text) - st.markdown("####") - print(entity_match_html) - - if article_title: - - # view summarized text (expander) - st.markdown(f"Article title: {article_title}") - - st.session_state.summ_area = summarized_text - - st.subheader('Summarized Text with no Entity Matching') - - summarized_text = st.text_area( - label = '', - placeholder="Full summarized text will be displayed here..", - height=250, - key='summ_area' - ) - - st.markdown("####") - - st.subheader("Summarized text with matched entities in Green and mismatched entities in Red relative to the Original Text") - - st.write(entity_match_html, unsafe_allow_html=True) - - st.markdown("####") - - summary_downloader(summarized_text) - - -st.markdown(""" - """) - -st.markdown("![visitor badge](https://visitor-badge.glitch.me/badge?page_id=nickmuchi-article-text-summarizer)") -# In[ ]: - - - - diff --git a/spaces/nightfury/whisperAI/README.md b/spaces/nightfury/whisperAI/README.md deleted file mode 100644 index 87c62a1aa827929f4b5fa959f2f83d5a085cd589..0000000000000000000000000000000000000000 --- a/spaces/nightfury/whisperAI/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: WhisperAI -emoji: 👄🤖 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/modeling/roi_heads/__init__.py b/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/modeling/roi_heads/__init__.py deleted file mode 100644 index d13e9c57235b982f3e0645bc316de2b75755dfda..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/detectron2/modeling/roi_heads/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .box_head import ROI_BOX_HEAD_REGISTRY, build_box_head, FastRCNNConvFCHead -from .keypoint_head import ( - ROI_KEYPOINT_HEAD_REGISTRY, - build_keypoint_head, - BaseKeypointRCNNHead, - KRCNNConvDeconvUpsampleHead, -) -from .mask_head import ( - ROI_MASK_HEAD_REGISTRY, - build_mask_head, - BaseMaskRCNNHead, - MaskRCNNConvUpsampleHead, -) -from .roi_heads import ( - ROI_HEADS_REGISTRY, - ROIHeads, - Res5ROIHeads, - StandardROIHeads, - build_roi_heads, - select_foreground_proposals, -) -from .cascade_rcnn import CascadeROIHeads -from .rotated_fast_rcnn import RROIHeads -from .fast_rcnn import FastRCNNOutputLayers - -from . import cascade_rcnn # isort:skip - -__all__ = list(globals().keys()) diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/tests/structures/test_imagelist.py b/spaces/nikitaPDL2023/assignment4/detectron2/tests/structures/test_imagelist.py deleted file mode 100644 index e446e44a37f5d8f9a68362e4b93a291d314d5d68..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/tests/structures/test_imagelist.py +++ /dev/null @@ -1,75 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
- -import unittest -from typing import List, Sequence, Tuple -import torch - -from detectron2.structures import ImageList - - -class TestImageList(unittest.TestCase): - def test_imagelist_padding_tracing(self): - # test that the trace does not contain hard-coded constant sizes - def to_imagelist(tensors: Sequence[torch.Tensor]): - image_list = ImageList.from_tensors(tensors, 4) - return image_list.tensor, image_list.image_sizes - - def _tensor(*shape): - return torch.ones(shape, dtype=torch.float32) - - # test CHW (inputs needs padding vs. no padding) - for shape in [(3, 10, 10), (3, 12, 12)]: - func = torch.jit.trace(to_imagelist, ([_tensor(*shape)],)) - tensor, image_sizes = func([_tensor(3, 15, 20)]) - self.assertEqual(tensor.shape, (1, 3, 16, 20), tensor.shape) - self.assertEqual(image_sizes[0].tolist(), [15, 20], image_sizes[0]) - - # test HW - func = torch.jit.trace(to_imagelist, ([_tensor(10, 10)],)) - tensor, image_sizes = func([_tensor(15, 20)]) - self.assertEqual(tensor.shape, (1, 16, 20), tensor.shape) - self.assertEqual(image_sizes[0].tolist(), [15, 20], image_sizes[0]) - - # test 2x CHW - func = torch.jit.trace( - to_imagelist, - ([_tensor(3, 16, 10), _tensor(3, 13, 11)],), - ) - tensor, image_sizes = func([_tensor(3, 25, 20), _tensor(3, 10, 10)]) - self.assertEqual(tensor.shape, (2, 3, 28, 20), tensor.shape) - self.assertEqual(image_sizes[0].tolist(), [25, 20], image_sizes[0]) - self.assertEqual(image_sizes[1].tolist(), [10, 10], image_sizes[1]) - # support calling with different spatial sizes, but not with different #images - - def test_imagelist_scriptability(self): - image_nums = 2 - image_tensor = torch.randn((image_nums, 10, 20), dtype=torch.float32) - image_shape = [(10, 20)] * image_nums - - def f(image_tensor, image_shape: List[Tuple[int, int]]): - return ImageList(image_tensor, image_shape) - - ret = f(image_tensor, image_shape) - ret_script = torch.jit.script(f)(image_tensor, image_shape) - - self.assertEqual(len(ret), len(ret_script)) - for i in range(image_nums): - self.assertTrue(torch.equal(ret[i], ret_script[i])) - - def test_imagelist_from_tensors_scriptability(self): - image_tensor_0 = torch.randn(10, 20, dtype=torch.float32) - image_tensor_1 = torch.randn(12, 22, dtype=torch.float32) - inputs = [image_tensor_0, image_tensor_1] - - def f(image_tensor: List[torch.Tensor]): - return ImageList.from_tensors(image_tensor, 10) - - ret = f(inputs) - ret_script = torch.jit.script(f)(inputs) - - self.assertEqual(len(ret), len(ret_script)) - self.assertTrue(torch.equal(ret.tensor, ret_script.tensor)) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/nithintechie/NithinGenAIAvatar/app.py b/spaces/nithintechie/NithinGenAIAvatar/app.py deleted file mode 100644 index 2dbf3ae89c2e3fdab7134107dd346f984dca8eb1..0000000000000000000000000000000000000000 --- a/spaces/nithintechie/NithinGenAIAvatar/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """Meet Riya, your youthful and witty personal assistant! At 21 years old, she's full of energy and always eager to help. Riya's goal is to assist you with any questions or problems you might have. Her enthusiasm shines through in every response, making interactions with her enjoyable and engaging. 
-{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/nomnomnonono/Background-Image-Generation-for-Online-Meeting/README.md b/spaces/nomnomnonono/Background-Image-Generation-for-Online-Meeting/README.md deleted file mode 100644 index 1db0217c259a3819d8d1a2dc73a7973dc951e84a..0000000000000000000000000000000000000000 --- a/spaces/nomnomnonono/Background-Image-Generation-for-Online-Meeting/README.md +++ /dev/null @@ -1,24 +0,0 @@ ---- - -title: Background Image Generation For Online Meeting - -emoji: 🌍 - -colorFrom: pink - -colorTo: red - -python: 3.9.7 - -sdk: gradio - -sdk_version: 3.23.0 - -app_file: app.py - -pinned: true - ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - diff --git a/spaces/nschenone/lyric-buddy/app.py b/spaces/nschenone/lyric-buddy/app.py deleted file mode 100644 index 7c068569105c43c45994ece94eb21aa221c51cd8..0000000000000000000000000000000000000000 --- a/spaces/nschenone/lyric-buddy/app.py +++ /dev/null @@ -1,46 +0,0 @@ -import gradio as gr - -from src.generate import generate -from src.utils import load_pipelines_from_config - -pipelines = load_pipelines_from_config(config_path="model_config.yaml") - - -def fn( - text_inputs: str, - model: str, - max_length: int = 100, - temperature: float = 1.5, - seed: int = 0, - censor_profanity: bool = True, -): - - return generate( - pipeline=pipelines[model], - pipeline_args={ - "text_inputs": text_inputs, - "max_length": max_length, - "temperature": temperature, - }, - seed=seed, - censor_profanity=censor_profanity, - ) - - -iface = gr.Interface( - fn=fn, - inputs=[ - gr.Textbox(value="[Verse]", placeholder="Input text...", label="Input Text"), - gr.Dropdown( - choices=list(pipelines.keys()), - value=list(pipelines.keys())[0], - label="Model", - ), - gr.Slider(minimum=50, maximum=1000, value=100, step=10, label="Max Length"), - gr.Slider(minimum=0.9, maximum=1.9, value=1.5, step=0.05, label="Creativity"), - gr.Number(value=42, precision=0, label="Seed"), - gr.Checkbox(value=True, label="Censor Profanity"), - ], - outputs="text", -) -iface.launch() diff --git a/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/os/coop_threads.cc b/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/os/coop_threads.cc deleted file mode 100644 index ece0995d4cf2d0a0f0f73170fb5977e08d7731b1..0000000000000000000000000000000000000000 --- a/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/os/coop_threads.cc +++ /dev/null @@ -1,63 +0,0 @@ -// Copyright 2021 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -#include "sparse_matmul/os/coop_threads.h" - -#include - -namespace csrblocksparse { - -// All threads must execute a std::memory_order_seq_cst operation on -// |barrier_step_| this is what ensures the global memory consistency across -// the barrier. -// -// It is possible for the |barrier_step_| to roll over, but this is safe here. -// -// |yield| instructs the processor that it is in a spin loop and can stop doing -// things like out of order, speculative execution, prefetching, etc. On hyper -// threaded machines it can also choose to swap in the other thread. Note that -// this is a hardware level decision and the OS is never involved. -void SpinBarrier::barrier() { - if (num_threads_ < 2) return; - - int old_step = barrier_step_.load(std::memory_order_relaxed); - - int val_threads = threads_at_barrier_.fetch_add(1, std::memory_order_acq_rel); - - if (val_threads == num_threads_ - 1) { - // This is where the logic can go all wrong if the barrier is called by - // more threads than |num_threads_| -- the assumption that we're the last - // thread is inherently invalid. - - // Assuming num_threads_ are calling this barrier, then we're the last - // thread to reach the barrier, reset and advance step count. - threads_at_barrier_.store(0, std::memory_order_relaxed); - barrier_step_.store(old_step + 1, std::memory_order_release); - } else { - // Wait for step count to advance, then continue. - while (barrier_step_.load(std::memory_order_acquire) == old_step) { - // Intel recommends the equivalent instruction PAUSE, not be called more - // than once in a row, I can't find any recommendations for ARM, so - // following that advice here. -#if defined __aarch64__ || defined __arm__ - asm volatile("yield\n" ::: "memory"); -#else - // No pause for x86! The pause instruction on Skylake takes 141 clock - // cycles, which in an AVX2-down-clocked CPU is getting on for 70ns. 
-#endif - } - } -} - -} // namespace csrblocksparse diff --git a/spaces/odettecantswim/rvc-mlbb-v2/config.py b/spaces/odettecantswim/rvc-mlbb-v2/config.py deleted file mode 100644 index 040a64d2c5ce4d7802bdf7f69321483b81008f08..0000000000000000000000000000000000000000 --- a/spaces/odettecantswim/rvc-mlbb-v2/config.py +++ /dev/null @@ -1,106 +0,0 @@ -import argparse -import torch -from multiprocessing import cpu_count - -class Config: - def __init__(self): - self.device = "cuda:0" - self.is_half = True - self.n_cpu = 0 - self.gpu_name = None - self.gpu_mem = None - ( - self.python_cmd, - self.listen_port, - self.colab, - self.noparallel, - self.noautoopen, - self.api - ) = self.arg_parse() - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - @staticmethod - def arg_parse() -> tuple: - parser = argparse.ArgumentParser() - parser.add_argument("--port", type=int, default=7865, help="Listen port") - parser.add_argument( - "--pycmd", type=str, default="python", help="Python command" - ) - parser.add_argument("--colab", action="store_true", help="Launch in colab") - parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" - ) - parser.add_argument( - "--noautoopen", - action="store_true", - help="Do not open in browser automatically", - ) - parser.add_argument("--api", action="store_true", help="Launch with api") - cmd_opts = parser.parse_args() - - cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865 - - return ( - cmd_opts.pycmd, - cmd_opts.port, - cmd_opts.colab, - cmd_opts.noparallel, - cmd_opts.noautoopen, - cmd_opts.api - ) - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - print("16系/10系显卡和P40强制单精度") - self.is_half = False - - else: - self.gpu_name = None - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - elif torch.backends.mps.is_available(): - print("没有发现支持的N卡, 使用MPS进行推理") - self.device = "mps" - self.is_half = False - else: - print("没有发现支持的N卡, 使用CPU进行推理") - self.device = "cpu" - self.is_half = False - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 - - if self.gpu_mem != None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - - return x_pad, x_query, x_center, x_max diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/schedulers/repaint.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/schedulers/repaint.md deleted file mode 100644 index e68b0021634ba92a7d07c97bda864bd0db90fcca..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/schedulers/repaint.md +++ /dev/null @@ -1,27 +0,0 @@ - - -# RePaintScheduler - -`RePaintScheduler` is a DDPM-based inpainting scheduler for unsupervised inpainting with extreme masks. 
It is designed to be used with the [`RePaintPipeline`], and it is based on the paper [RePaint: Inpainting using Denoising Diffusion Probabilistic Models](https://huggingface.co/papers/2201.09865) by Andreas Lugmayr et al. - -The abstract from the paper is: - -*Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks. RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions. Github Repository: git.io/RePaint*. - -The original implementation can be found at [andreas128/RePaint](https://github.com/andreas128/). - -## RePaintScheduler -[[autodoc]] RePaintScheduler - -## RePaintSchedulerOutput -[[autodoc]] schedulers.scheduling_repaint.RePaintSchedulerOutput \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/commands/diffusers_cli.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/commands/diffusers_cli.py deleted file mode 100644 index 2016fc19f557fd539782ca2181ec2fe74026340a..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/commands/diffusers_cli.py +++ /dev/null @@ -1,43 +0,0 @@ -#!/usr/bin/env python -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
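# Illustrative sketch (not from this document): running the RePaint scheduler
# documented above through RePaintPipeline. The checkpoint name, image files,
# and parameter values are assumptions based on the typical diffusers
# inpainting example, not a definitive recipe.
import torch
from PIL import Image
from diffusers import RePaintPipeline, RePaintScheduler

repaint_scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
repaint_pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=repaint_scheduler)

original_image = Image.open("face.png").convert("RGB").resize((256, 256))  # image to restore (assumed local file)
mask_image = Image.open("mask.png").convert("RGB").resize((256, 256))      # keep/inpaint mask (assumed local file)

inpainted = repaint_pipe(
    image=original_image,
    mask_image=mask_image,
    num_inference_steps=250,   # RePaint typically uses many DDPM steps
    eta=0.0,
    jump_length=10,            # resampling schedule parameters from the RePaint paper
    jump_n_sample=10,
    generator=torch.Generator(device="cpu").manual_seed(0),
).images[0]
inpainted.save("inpainted.png")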
- -from argparse import ArgumentParser - -from .env import EnvironmentCommand -from .fp16_safetensors import FP16SafetensorsCommand - - -def main(): - parser = ArgumentParser("Diffusers CLI tool", usage="diffusers-cli []") - commands_parser = parser.add_subparsers(help="diffusers-cli command helpers") - - # Register commands - EnvironmentCommand.register_subcommand(commands_parser) - FP16SafetensorsCommand.register_subcommand(commands_parser) - - # Let's go - args = parser.parse_args() - - if not hasattr(args, "func"): - parser.print_help() - exit(1) - - # Run - service = args.func(args) - service.run() - - -if __name__ == "__main__": - main() diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/blip_diffusion/__init__.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/blip_diffusion/__init__.py deleted file mode 100644 index af6c879d5ce88aa8edec0691e987444ff1d3dfec..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/blip_diffusion/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -from dataclasses import dataclass -from typing import List, Optional, Union - -import numpy as np -import PIL -from PIL import Image - -from ...utils import OptionalDependencyNotAvailable, is_torch_available, is_transformers_available - - -try: - if not (is_transformers_available() and is_torch_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ...utils.dummy_torch_and_transformers_objects import ShapEPipeline -else: - from .blip_image_processing import BlipImageProcessor - from .modeling_blip2 import Blip2QFormerModel - from .modeling_ctx_clip import ContextCLIPTextModel - from .pipeline_blip_diffusion import BlipDiffusionPipeline diff --git a/spaces/piuba-bigdata/README/README.md b/spaces/piuba-bigdata/README/README.md deleted file mode 100644 index d5ecca153eb95b084b2c393b96dc8965c12e8e87..0000000000000000000000000000000000000000 --- a/spaces/piuba-bigdata/README/README.md +++ /dev/null @@ -1,29 +0,0 @@ ---- -title: README -emoji: 📚 -colorFrom: gray -colorTo: indigo -sdk: static -pinned: false ---- - -# PIUBA "Big Data and social exclusions" -# Universidad de Buenos Aires, Argentina - - -## Who we are? - -We are a multidisciplinary research team based at the Universidad de Buenos Aires. We are experts in various areas such as sociology, law and computer science. - - -## What we do? - -Our research aims to study and implement methodologies for the study of social exclusions, from an interdisciplinary approach, by applying research techniques focused on the analysis of large volumes of data. We mainly work on textual sources, using different techniques and strategies from artificial intelligence, such as text mining and natural language processing (NLP) and machine learning, including deep neural networks (deep learning), multivariate statistical methods and data visualization. - -Here we are pleased to present results of a study on hate speech detection in social networks, from an interdisciplinary perspective, addressing hate speech both quantitative and qualitatively, during the COVID-19 pandemic time frame. - - -## Published Work - -- Pérez, J. M., Luque, F., Zayat, D., Kondratzky, M., Moro, A., Serrati, P., ... & Cotik, V. (2022). [Assessing the impact of contextual information in hate speech detection](https://arxiv.org/pdf/2210.00465.pdf). arXiv preprint arXiv:2210.00465. 
(TBP IEEE Access 2023) -- Cotik, V., Debandi, N., Luque, F. M., Miguel, P., Moro, A., Pérez, J. M., ... & Zayat, D. (2020). [A study of Hate Speech in Social Media during the COVID-19 outbreak](https://openreview.net/pdf?id=01eOESDhbSW). \ No newline at end of file diff --git a/spaces/politweet-sh/politweet/functions/functions.py b/spaces/politweet-sh/politweet/functions/functions.py deleted file mode 100644 index 8a575486a9faa0aa9a096df6ec660ac9bfd41fe3..0000000000000000000000000000000000000000 --- a/spaces/politweet-sh/politweet/functions/functions.py +++ /dev/null @@ -1,41 +0,0 @@ -from re import sub - - -def separate_string(string): - """ - This function returns a list of strings from a string. - Example: separate_string('1. swedish 2. nuclear 3. hello world 4. uha yhd ikv hahd vva 5. ') - returns ['swedish', 'nuclear', 'hello world', 'uha yhd ikv hahd vva', ''] - :param string: string to be separated - :return: list of string items - """ - list_string = string.split('.') - list_useable = [] - for list_part in list_string: - list_useable.append(list_part.split(' ', 1)) - - final_list = [] - for li in list_useable[1:]: - final_list.append(li[1]) - # remove numeric characters and spaces - filter_numeric_regex = '[^a-z]' - final_final_list = [] - for li in final_list: - final_final_list.append(sub(filter_numeric_regex, ' ', li).strip()) - return final_final_list - - -def convert_to_tuple(string): - """ - This function converts a string to a tuple. - :param string: - :return: tuple of strings - """ - string = string.strip() - return tuple(string.strip('()').split(', ')) - - - -if __name__ == '__main__': - s = ' (politics, government, negative, sweden)' - print(convert_to_tuple(s)) diff --git a/spaces/power2/JoJoGan-powerhow2/op/fused_act.py b/spaces/power2/JoJoGan-powerhow2/op/fused_act.py deleted file mode 100644 index 8459d510d7b79684779dfe47f5b46d81c94b4a4d..0000000000000000000000000000000000000000 --- a/spaces/power2/JoJoGan-powerhow2/op/fused_act.py +++ /dev/null @@ -1,86 +0,0 @@ -import os - -import torch -from torch import nn -from torch.autograd import Function -from torch.utils.cpp_extension import load - - -module_path = os.path.dirname(__file__) -fused = load( - 'fused', - sources=[ - os.path.join(module_path, 'fused_bias_act.cpp'), - os.path.join(module_path, 'fused_bias_act_kernel.cu'), - ], -) - - -class FusedLeakyReLUFunctionBackward(Function): - @staticmethod - def forward(ctx, grad_output, out, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = fused.fused_bias_act( - grad_output, empty, out, 3, 1, negative_slope, scale - ) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - grad_bias = grad_input.sum(dim).detach() - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - gradgrad_out = fused.fused_bias_act( - gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale - ) - - return gradgrad_out, None, None, None - - -class FusedLeakyReLUFunction(Function): - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = 
FusedLeakyReLUFunctionBackward.apply( - grad_output, out, ctx.negative_slope, ctx.scale - ) - - return grad_input, grad_bias, None, None - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5): - super().__init__() - - self.bias = nn.Parameter(torch.zeros(channel)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale) - - -def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5): - return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/otlLib/maxContextCalc.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/otlLib/maxContextCalc.py deleted file mode 100644 index 03e7561b60f126bc19ff8b49ed2ebe7d6898286e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/otlLib/maxContextCalc.py +++ /dev/null @@ -1,96 +0,0 @@ -__all__ = ["maxCtxFont"] - - -def maxCtxFont(font): - """Calculate the usMaxContext value for an entire font.""" - - maxCtx = 0 - for tag in ("GSUB", "GPOS"): - if tag not in font: - continue - table = font[tag].table - if not table.LookupList: - continue - for lookup in table.LookupList.Lookup: - for st in lookup.SubTable: - maxCtx = maxCtxSubtable(maxCtx, tag, lookup.LookupType, st) - return maxCtx - - -def maxCtxSubtable(maxCtx, tag, lookupType, st): - """Calculate usMaxContext based on a single lookup table (and an existing - max value). - """ - - # single positioning, single / multiple substitution - if (tag == "GPOS" and lookupType == 1) or ( - tag == "GSUB" and lookupType in (1, 2, 3) - ): - maxCtx = max(maxCtx, 1) - - # pair positioning - elif tag == "GPOS" and lookupType == 2: - maxCtx = max(maxCtx, 2) - - # ligatures - elif tag == "GSUB" and lookupType == 4: - for ligatures in st.ligatures.values(): - for ligature in ligatures: - maxCtx = max(maxCtx, ligature.CompCount) - - # context - elif (tag == "GPOS" and lookupType == 7) or (tag == "GSUB" and lookupType == 5): - maxCtx = maxCtxContextualSubtable(maxCtx, st, "Pos" if tag == "GPOS" else "Sub") - - # chained context - elif (tag == "GPOS" and lookupType == 8) or (tag == "GSUB" and lookupType == 6): - maxCtx = maxCtxContextualSubtable( - maxCtx, st, "Pos" if tag == "GPOS" else "Sub", "Chain" - ) - - # extensions - elif (tag == "GPOS" and lookupType == 9) or (tag == "GSUB" and lookupType == 7): - maxCtx = maxCtxSubtable(maxCtx, tag, st.ExtensionLookupType, st.ExtSubTable) - - # reverse-chained context - elif tag == "GSUB" and lookupType == 8: - maxCtx = maxCtxContextualRule(maxCtx, st, "Reverse") - - return maxCtx - - -def maxCtxContextualSubtable(maxCtx, st, ruleType, chain=""): - """Calculate usMaxContext based on a contextual feature subtable.""" - - if st.Format == 1: - for ruleset in getattr(st, "%s%sRuleSet" % (chain, ruleType)): - if ruleset is None: - continue - for rule in getattr(ruleset, "%s%sRule" % (chain, ruleType)): - if rule is None: - continue - maxCtx = maxCtxContextualRule(maxCtx, rule, chain) - - elif st.Format == 2: - for ruleset in getattr(st, "%s%sClassSet" % (chain, ruleType)): - if ruleset is None: - continue - for rule in getattr(ruleset, "%s%sClassRule" % (chain, ruleType)): - if rule is None: - continue - maxCtx = maxCtxContextualRule(maxCtx, rule, chain) - - elif st.Format == 3: - maxCtx = maxCtxContextualRule(maxCtx, st, chain) - 
- return maxCtx - - -def maxCtxContextualRule(maxCtx, st, chain): - """Calculate usMaxContext based on a contextual feature rule.""" - - if not chain: - return max(maxCtx, st.GlyphCount) - elif chain == "Reverse": - return max(maxCtx, st.GlyphCount + st.LookAheadGlyphCount) - return max(maxCtx, st.InputGlyphCount + st.LookAheadGlyphCount) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/node/dev/files/runtime/index.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/node/dev/files/runtime/index.js deleted file mode 100644 index 44d98752ff7e97d1abe1b0fe4bbdd963adfe2793..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/node/dev/files/runtime/index.js +++ /dev/null @@ -1 +0,0 @@ -export { makeApplyHmr } from './hot-api.js' diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-2f00b72c.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-2f00b72c.js deleted file mode 100644 index 4427ba610526f0f07806bdbecbefcab5adb714f5..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-2f00b72c.js +++ /dev/null @@ -1,2 +0,0 @@ -var i=Object.prototype.hasOwnProperty;function u(t,e){var n,r;if(t===e)return!0;if(t&&e&&(n=t.constructor)===e.constructor){if(n===Date)return t.getTime()===e.getTime();if(n===RegExp)return t.toString()===e.toString();if(n===Array){if((r=t.length)===e.length)for(;r--&&u(t[r],e[r]););return r===-1}if(!n||typeof t=="object"){r=0;for(n in t)if(i.call(t,n)&&++r&&!i.call(e,n)||!(n in e)||!u(t[n],e[n]))return!1;return Object.keys(e).length===r}}return t!==t&&e!==e}export{u as d}; -//# sourceMappingURL=index-2f00b72c.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markdown_it/common/normalize_url.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markdown_it/common/normalize_url.py deleted file mode 100644 index 92720b31621b0f6b4ac853179d886cb58e4e2f36..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markdown_it/common/normalize_url.py +++ /dev/null @@ -1,81 +0,0 @@ -from __future__ import annotations - -from collections.abc import Callable -from contextlib import suppress -import re -from urllib.parse import quote, unquote, urlparse, urlunparse # noqa: F401 - -import mdurl - -from .. import _punycode - -RECODE_HOSTNAME_FOR = ("http:", "https:", "mailto:") - - -def normalizeLink(url: str) -> str: - """Normalize destination URLs in links - - :: - - [label]: destination 'title' - ^^^^^^^^^^^ - """ - parsed = mdurl.parse(url, slashes_denote_host=True) - - # Encode hostnames in urls like: - # `http://host/`, `https://host/`, `mailto:user@host`, `//host/` - # - # We don't encode unknown schemas, because it's likely that we encode - # something we shouldn't (e.g. 
`skype:name` treated as `skype:host`) - # - if parsed.hostname and ( - not parsed.protocol or parsed.protocol in RECODE_HOSTNAME_FOR - ): - with suppress(Exception): - parsed = parsed._replace(hostname=_punycode.to_ascii(parsed.hostname)) - - return mdurl.encode(mdurl.format(parsed)) - - -def normalizeLinkText(url: str) -> str: - """Normalize autolink content - - :: - - - ~~~~~~~~~~~ - """ - parsed = mdurl.parse(url, slashes_denote_host=True) - - # Encode hostnames in urls like: - # `http://host/`, `https://host/`, `mailto:user@host`, `//host/` - # - # We don't encode unknown schemas, because it's likely that we encode - # something we shouldn't (e.g. `skype:name` treated as `skype:host`) - # - if parsed.hostname and ( - not parsed.protocol or parsed.protocol in RECODE_HOSTNAME_FOR - ): - with suppress(Exception): - parsed = parsed._replace(hostname=_punycode.to_unicode(parsed.hostname)) - - # add '%' to exclude list because of https://github.com/markdown-it/markdown-it/issues/720 - return mdurl.decode(mdurl.format(parsed), mdurl.DECODE_DEFAULT_CHARS + "%") - - -BAD_PROTO_RE = re.compile(r"^(vbscript|javascript|file|data):") -GOOD_DATA_RE = re.compile(r"^data:image\/(gif|png|jpeg|webp);") - - -def validateLink(url: str, validator: Callable[[str], bool] | None = None) -> bool: - """Validate URL link is allowed in output. - - This validator can prohibit more than really needed to prevent XSS. - It's a tradeoff to keep code simple and to be secure by default. - - Note: url should be normalized at this point, and existing entities decoded. - """ - if validator is not None: - return validator(url) - url = url.strip().lower() - return bool(GOOD_DATA_RE.search(url)) if BAD_PROTO_RE.search(url) else True diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/include/numpy/npy_endian.h b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/include/numpy/npy_endian.h deleted file mode 100644 index 5e58a7f52cee2c21e8f3a4bbc535e7c0982f7de0..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/include/numpy/npy_endian.h +++ /dev/null @@ -1,77 +0,0 @@ -#ifndef NUMPY_CORE_INCLUDE_NUMPY_NPY_ENDIAN_H_ -#define NUMPY_CORE_INCLUDE_NUMPY_NPY_ENDIAN_H_ - -/* - * NPY_BYTE_ORDER is set to the same value as BYTE_ORDER set by glibc in - * endian.h - */ - -#if defined(NPY_HAVE_ENDIAN_H) || defined(NPY_HAVE_SYS_ENDIAN_H) - /* Use endian.h if available */ - - #if defined(NPY_HAVE_ENDIAN_H) - #include - #elif defined(NPY_HAVE_SYS_ENDIAN_H) - #include - #endif - - #if defined(BYTE_ORDER) && defined(BIG_ENDIAN) && defined(LITTLE_ENDIAN) - #define NPY_BYTE_ORDER BYTE_ORDER - #define NPY_LITTLE_ENDIAN LITTLE_ENDIAN - #define NPY_BIG_ENDIAN BIG_ENDIAN - #elif defined(_BYTE_ORDER) && defined(_BIG_ENDIAN) && defined(_LITTLE_ENDIAN) - #define NPY_BYTE_ORDER _BYTE_ORDER - #define NPY_LITTLE_ENDIAN _LITTLE_ENDIAN - #define NPY_BIG_ENDIAN _BIG_ENDIAN - #elif defined(__BYTE_ORDER) && defined(__BIG_ENDIAN) && defined(__LITTLE_ENDIAN) - #define NPY_BYTE_ORDER __BYTE_ORDER - #define NPY_LITTLE_ENDIAN __LITTLE_ENDIAN - #define NPY_BIG_ENDIAN __BIG_ENDIAN - #endif -#endif - -#ifndef NPY_BYTE_ORDER - /* Set endianness info using target CPU */ - #include "npy_cpu.h" - - #define NPY_LITTLE_ENDIAN 1234 - #define NPY_BIG_ENDIAN 4321 - - #if defined(NPY_CPU_X86) \ - || defined(NPY_CPU_AMD64) \ - || defined(NPY_CPU_IA64) \ - || defined(NPY_CPU_ALPHA) \ - || defined(NPY_CPU_ARMEL) \ - || defined(NPY_CPU_ARMEL_AARCH32) \ - 
|| defined(NPY_CPU_ARMEL_AARCH64) \ - || defined(NPY_CPU_SH_LE) \ - || defined(NPY_CPU_MIPSEL) \ - || defined(NPY_CPU_PPC64LE) \ - || defined(NPY_CPU_ARCEL) \ - || defined(NPY_CPU_RISCV64) \ - || defined(NPY_CPU_LOONGARCH) \ - || defined(NPY_CPU_WASM) - #define NPY_BYTE_ORDER NPY_LITTLE_ENDIAN - - #elif defined(NPY_CPU_PPC) \ - || defined(NPY_CPU_SPARC) \ - || defined(NPY_CPU_S390) \ - || defined(NPY_CPU_HPPA) \ - || defined(NPY_CPU_PPC64) \ - || defined(NPY_CPU_ARMEB) \ - || defined(NPY_CPU_ARMEB_AARCH32) \ - || defined(NPY_CPU_ARMEB_AARCH64) \ - || defined(NPY_CPU_SH_BE) \ - || defined(NPY_CPU_MIPSEB) \ - || defined(NPY_CPU_OR1K) \ - || defined(NPY_CPU_M68K) \ - || defined(NPY_CPU_ARCEB) - #define NPY_BYTE_ORDER NPY_BIG_ENDIAN - - #else - #error Unknown CPU: can not set endianness - #endif - -#endif - -#endif /* NUMPY_CORE_INCLUDE_NUMPY_NPY_ENDIAN_H_ */ diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/arrays/_arrow_string_mixins.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/arrays/_arrow_string_mixins.py deleted file mode 100644 index 63db03340683b70c2a97d41b9f9771dd2cbb0171..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/arrays/_arrow_string_mixins.py +++ /dev/null @@ -1,84 +0,0 @@ -from __future__ import annotations - -from typing import Literal - -import numpy as np - -from pandas.compat import pa_version_under7p0 - -if not pa_version_under7p0: - import pyarrow as pa - import pyarrow.compute as pc - - -class ArrowStringArrayMixin: - _pa_array = None - - def __init__(self, *args, **kwargs) -> None: - raise NotImplementedError - - def _str_pad( - self, - width: int, - side: Literal["left", "right", "both"] = "left", - fillchar: str = " ", - ): - if side == "left": - pa_pad = pc.utf8_lpad - elif side == "right": - pa_pad = pc.utf8_rpad - elif side == "both": - pa_pad = pc.utf8_center - else: - raise ValueError( - f"Invalid side: {side}. 
Side must be one of 'left', 'right', 'both'" - ) - return type(self)(pa_pad(self._pa_array, width=width, padding=fillchar)) - - def _str_get(self, i: int): - lengths = pc.utf8_length(self._pa_array) - if i >= 0: - out_of_bounds = pc.greater_equal(i, lengths) - start = i - stop = i + 1 - step = 1 - else: - out_of_bounds = pc.greater(-i, lengths) - start = i - stop = i - 1 - step = -1 - not_out_of_bounds = pc.invert(out_of_bounds.fill_null(True)) - selected = pc.utf8_slice_codeunits( - self._pa_array, start=start, stop=stop, step=step - ) - null_value = pa.scalar( - None, type=self._pa_array.type # type: ignore[attr-defined] - ) - result = pc.if_else(not_out_of_bounds, selected, null_value) - return type(self)(result) - - def _str_slice_replace( - self, start: int | None = None, stop: int | None = None, repl: str | None = None - ): - if repl is None: - repl = "" - if start is None: - start = 0 - if stop is None: - stop = np.iinfo(np.int64).max - return type(self)(pc.utf8_replace_slice(self._pa_array, start, stop, repl)) - - def _str_capitalize(self): - return type(self)(pc.utf8_capitalize(self._pa_array)) - - def _str_title(self): - return type(self)(pc.utf8_title(self._pa_array)) - - def _str_swapcase(self): - return type(self)(pc.utf8_swapcase(self._pa_array)) - - def _str_removesuffix(self, suffix: str): - ends_with = pc.ends_with(self._pa_array, pattern=suffix) - removed = pc.utf8_slice_codeunits(self._pa_array, 0, stop=-len(suffix)) - result = pc.if_else(ends_with, removed, self._pa_array) - return type(self)(result) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/computation/expr.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/computation/expr.py deleted file mode 100644 index 2f9485670246560902ac65c687d615367c24c2af..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/computation/expr.py +++ /dev/null @@ -1,839 +0,0 @@ -""" -:func:`~pandas.eval` parsers. -""" -from __future__ import annotations - -import ast -from functools import ( - partial, - reduce, -) -from keyword import iskeyword -import tokenize -from typing import ( - Callable, - TypeVar, -) - -import numpy as np - -from pandas.errors import UndefinedVariableError - -import pandas.core.common as com -from pandas.core.computation.ops import ( - ARITH_OPS_SYMS, - BOOL_OPS_SYMS, - CMP_OPS_SYMS, - LOCAL_TAG, - MATHOPS, - REDUCTIONS, - UNARY_OPS_SYMS, - BinOp, - Constant, - Div, - FuncNode, - Op, - Term, - UnaryOp, - is_term, -) -from pandas.core.computation.parsing import ( - clean_backtick_quoted_toks, - tokenize_string, -) -from pandas.core.computation.scope import Scope - -from pandas.io.formats import printing - - -def _rewrite_assign(tok: tuple[int, str]) -> tuple[int, str]: - """ - Rewrite the assignment operator for PyTables expressions that use ``=`` - as a substitute for ``==``. - - Parameters - ---------- - tok : tuple of int, str - ints correspond to the all caps constants in the tokenize module - - Returns - ------- - tuple of int, str - Either the input or token or the replacement values - """ - toknum, tokval = tok - return toknum, "==" if tokval == "=" else tokval - - -def _replace_booleans(tok: tuple[int, str]) -> tuple[int, str]: - """ - Replace ``&`` with ``and`` and ``|`` with ``or`` so that bitwise - precedence is changed to boolean precedence. 
- - Parameters - ---------- - tok : tuple of int, str - ints correspond to the all caps constants in the tokenize module - - Returns - ------- - tuple of int, str - Either the input or token or the replacement values - """ - toknum, tokval = tok - if toknum == tokenize.OP: - if tokval == "&": - return tokenize.NAME, "and" - elif tokval == "|": - return tokenize.NAME, "or" - return toknum, tokval - return toknum, tokval - - -def _replace_locals(tok: tuple[int, str]) -> tuple[int, str]: - """ - Replace local variables with a syntactically valid name. - - Parameters - ---------- - tok : tuple of int, str - ints correspond to the all caps constants in the tokenize module - - Returns - ------- - tuple of int, str - Either the input or token or the replacement values - - Notes - ----- - This is somewhat of a hack in that we rewrite a string such as ``'@a'`` as - ``'__pd_eval_local_a'`` by telling the tokenizer that ``__pd_eval_local_`` - is a ``tokenize.OP`` and to replace the ``'@'`` symbol with it. - """ - toknum, tokval = tok - if toknum == tokenize.OP and tokval == "@": - return tokenize.OP, LOCAL_TAG - return toknum, tokval - - -def _compose2(f, g): - """ - Compose 2 callables. - """ - return lambda *args, **kwargs: f(g(*args, **kwargs)) - - -def _compose(*funcs): - """ - Compose 2 or more callables. - """ - assert len(funcs) > 1, "At least 2 callables must be passed to compose" - return reduce(_compose2, funcs) - - -def _preparse( - source: str, - f=_compose( - _replace_locals, _replace_booleans, _rewrite_assign, clean_backtick_quoted_toks - ), -) -> str: - """ - Compose a collection of tokenization functions. - - Parameters - ---------- - source : str - A Python source code string - f : callable - This takes a tuple of (toknum, tokval) as its argument and returns a - tuple with the same structure but possibly different elements. Defaults - to the composition of ``_rewrite_assign``, ``_replace_booleans``, and - ``_replace_locals``. - - Returns - ------- - str - Valid Python source code - - Notes - ----- - The `f` parameter can be any callable that takes *and* returns input of the - form ``(toknum, tokval)``, where ``toknum`` is one of the constants from - the ``tokenize`` module and ``tokval`` is a string. - """ - assert callable(f), "f must be callable" - return tokenize.untokenize(f(x) for x in tokenize_string(source)) - - -def _is_type(t): - """ - Factory for a type checking function of type ``t`` or tuple of types. - """ - return lambda x: isinstance(x.value, t) - - -_is_list = _is_type(list) -_is_str = _is_type(str) - - -# partition all AST nodes -_all_nodes = frozenset( - node - for node in (getattr(ast, name) for name in dir(ast)) - if isinstance(node, type) and issubclass(node, ast.AST) -) - - -def _filter_nodes(superclass, all_nodes=_all_nodes): - """ - Filter out AST nodes that are subclasses of ``superclass``. 
- """ - node_names = (node.__name__ for node in all_nodes if issubclass(node, superclass)) - return frozenset(node_names) - - -_all_node_names = frozenset(x.__name__ for x in _all_nodes) -_mod_nodes = _filter_nodes(ast.mod) -_stmt_nodes = _filter_nodes(ast.stmt) -_expr_nodes = _filter_nodes(ast.expr) -_expr_context_nodes = _filter_nodes(ast.expr_context) -_boolop_nodes = _filter_nodes(ast.boolop) -_operator_nodes = _filter_nodes(ast.operator) -_unary_op_nodes = _filter_nodes(ast.unaryop) -_cmp_op_nodes = _filter_nodes(ast.cmpop) -_comprehension_nodes = _filter_nodes(ast.comprehension) -_handler_nodes = _filter_nodes(ast.excepthandler) -_arguments_nodes = _filter_nodes(ast.arguments) -_keyword_nodes = _filter_nodes(ast.keyword) -_alias_nodes = _filter_nodes(ast.alias) - - -# nodes that we don't support directly but are needed for parsing -_hacked_nodes = frozenset(["Assign", "Module", "Expr"]) - - -_unsupported_expr_nodes = frozenset( - [ - "Yield", - "GeneratorExp", - "IfExp", - "DictComp", - "SetComp", - "Repr", - "Lambda", - "Set", - "AST", - "Is", - "IsNot", - ] -) - -# these nodes are low priority or won't ever be supported (e.g., AST) -_unsupported_nodes = ( - _stmt_nodes - | _mod_nodes - | _handler_nodes - | _arguments_nodes - | _keyword_nodes - | _alias_nodes - | _expr_context_nodes - | _unsupported_expr_nodes -) - _hacked_nodes - -# we're adding a different assignment in some cases to be equality comparison -# and we don't want `stmt` and friends in their so get only the class whose -# names are capitalized -_base_supported_nodes = (_all_node_names - _unsupported_nodes) | _hacked_nodes -intersection = _unsupported_nodes & _base_supported_nodes -_msg = f"cannot both support and not support {intersection}" -assert not intersection, _msg - - -def _node_not_implemented(node_name: str) -> Callable[..., None]: - """ - Return a function that raises a NotImplementedError with a passed node name. - """ - - def f(self, *args, **kwargs): - raise NotImplementedError(f"'{node_name}' nodes are not implemented") - - return f - - -# should be bound by BaseExprVisitor but that creates a circular dependency: -# _T is used in disallow, but disallow is used to define BaseExprVisitor -# https://github.com/microsoft/pyright/issues/2315 -_T = TypeVar("_T") - - -def disallow(nodes: set[str]) -> Callable[[type[_T]], type[_T]]: - """ - Decorator to disallow certain nodes from parsing. Raises a - NotImplementedError instead. - - Returns - ------- - callable - """ - - def disallowed(cls: type[_T]) -> type[_T]: - # error: "Type[_T]" has no attribute "unsupported_nodes" - cls.unsupported_nodes = () # type: ignore[attr-defined] - for node in nodes: - new_method = _node_not_implemented(node) - name = f"visit_{node}" - # error: "Type[_T]" has no attribute "unsupported_nodes" - cls.unsupported_nodes += (name,) # type: ignore[attr-defined] - setattr(cls, name, new_method) - return cls - - return disallowed - - -def _op_maker(op_class, op_symbol): - """ - Return a function to create an op class with its symbol already passed. - - Returns - ------- - callable - """ - - def f(self, node, *args, **kwargs): - """ - Return a partial function with an Op subclass with an operator already passed. - - Returns - ------- - callable - """ - return partial(op_class, op_symbol, *args, **kwargs) - - return f - - -_op_classes = {"binary": BinOp, "unary": UnaryOp} - - -def add_ops(op_classes): - """ - Decorator to add default implementation of ops. 
- """ - - def f(cls): - for op_attr_name, op_class in op_classes.items(): - ops = getattr(cls, f"{op_attr_name}_ops") - ops_map = getattr(cls, f"{op_attr_name}_op_nodes_map") - for op in ops: - op_node = ops_map[op] - if op_node is not None: - made_op = _op_maker(op_class, op) - setattr(cls, f"visit_{op_node}", made_op) - return cls - - return f - - -@disallow(_unsupported_nodes) -@add_ops(_op_classes) -class BaseExprVisitor(ast.NodeVisitor): - """ - Custom ast walker. Parsers of other engines should subclass this class - if necessary. - - Parameters - ---------- - env : Scope - engine : str - parser : str - preparser : callable - """ - - const_type: type[Term] = Constant - term_type = Term - - binary_ops = CMP_OPS_SYMS + BOOL_OPS_SYMS + ARITH_OPS_SYMS - binary_op_nodes = ( - "Gt", - "Lt", - "GtE", - "LtE", - "Eq", - "NotEq", - "In", - "NotIn", - "BitAnd", - "BitOr", - "And", - "Or", - "Add", - "Sub", - "Mult", - None, - "Pow", - "FloorDiv", - "Mod", - ) - binary_op_nodes_map = dict(zip(binary_ops, binary_op_nodes)) - - unary_ops = UNARY_OPS_SYMS - unary_op_nodes = "UAdd", "USub", "Invert", "Not" - unary_op_nodes_map = dict(zip(unary_ops, unary_op_nodes)) - - rewrite_map = { - ast.Eq: ast.In, - ast.NotEq: ast.NotIn, - ast.In: ast.In, - ast.NotIn: ast.NotIn, - } - - unsupported_nodes: tuple[str, ...] - - def __init__(self, env, engine, parser, preparser=_preparse) -> None: - self.env = env - self.engine = engine - self.parser = parser - self.preparser = preparser - self.assigner = None - - def visit(self, node, **kwargs): - if isinstance(node, str): - clean = self.preparser(node) - try: - node = ast.fix_missing_locations(ast.parse(clean)) - except SyntaxError as e: - if any(iskeyword(x) for x in clean.split()): - e.msg = "Python keyword not valid identifier in numexpr query" - raise e - - method = f"visit_{type(node).__name__}" - visitor = getattr(self, method) - return visitor(node, **kwargs) - - def visit_Module(self, node, **kwargs): - if len(node.body) != 1: - raise SyntaxError("only a single expression is allowed") - expr = node.body[0] - return self.visit(expr, **kwargs) - - def visit_Expr(self, node, **kwargs): - return self.visit(node.value, **kwargs) - - def _rewrite_membership_op(self, node, left, right): - # the kind of the operator (is actually an instance) - op_instance = node.op - op_type = type(op_instance) - - # must be two terms and the comparison operator must be ==/!=/in/not in - if is_term(left) and is_term(right) and op_type in self.rewrite_map: - left_list, right_list = map(_is_list, (left, right)) - left_str, right_str = map(_is_str, (left, right)) - - # if there are any strings or lists in the expression - if left_list or right_list or left_str or right_str: - op_instance = self.rewrite_map[op_type]() - - # pop the string variable out of locals and replace it with a list - # of one string, kind of a hack - if right_str: - name = self.env.add_tmp([right.value]) - right = self.term_type(name, self.env) - - if left_str: - name = self.env.add_tmp([left.value]) - left = self.term_type(name, self.env) - - op = self.visit(op_instance) - return op, op_instance, left, right - - def _maybe_transform_eq_ne(self, node, left=None, right=None): - if left is None: - left = self.visit(node.left, side="left") - if right is None: - right = self.visit(node.right, side="right") - op, op_class, left, right = self._rewrite_membership_op(node, left, right) - return op, op_class, left, right - - def _maybe_downcast_constants(self, left, right): - f32 = np.dtype(np.float32) - if ( - 
left.is_scalar - and hasattr(left, "value") - and not right.is_scalar - and right.return_type == f32 - ): - # right is a float32 array, left is a scalar - name = self.env.add_tmp(np.float32(left.value)) - left = self.term_type(name, self.env) - if ( - right.is_scalar - and hasattr(right, "value") - and not left.is_scalar - and left.return_type == f32 - ): - # left is a float32 array, right is a scalar - name = self.env.add_tmp(np.float32(right.value)) - right = self.term_type(name, self.env) - - return left, right - - def _maybe_eval(self, binop, eval_in_python): - # eval `in` and `not in` (for now) in "partial" python space - # things that can be evaluated in "eval" space will be turned into - # temporary variables. for example, - # [1,2] in a + 2 * b - # in that case a + 2 * b will be evaluated using numexpr, and the "in" - # call will be evaluated using isin (in python space) - return binop.evaluate( - self.env, self.engine, self.parser, self.term_type, eval_in_python - ) - - def _maybe_evaluate_binop( - self, - op, - op_class, - lhs, - rhs, - eval_in_python=("in", "not in"), - maybe_eval_in_python=("==", "!=", "<", ">", "<=", ">="), - ): - res = op(lhs, rhs) - - if res.has_invalid_return_type: - raise TypeError( - f"unsupported operand type(s) for {res.op}: " - f"'{lhs.type}' and '{rhs.type}'" - ) - - if self.engine != "pytables" and ( - res.op in CMP_OPS_SYMS - and getattr(lhs, "is_datetime", False) - or getattr(rhs, "is_datetime", False) - ): - # all date ops must be done in python bc numexpr doesn't work - # well with NaT - return self._maybe_eval(res, self.binary_ops) - - if res.op in eval_in_python: - # "in"/"not in" ops are always evaluated in python - return self._maybe_eval(res, eval_in_python) - elif self.engine != "pytables": - if ( - getattr(lhs, "return_type", None) == object - or getattr(rhs, "return_type", None) == object - ): - # evaluate "==" and "!=" in python if either of our operands - # has an object return type - return self._maybe_eval(res, eval_in_python + maybe_eval_in_python) - return res - - def visit_BinOp(self, node, **kwargs): - op, op_class, left, right = self._maybe_transform_eq_ne(node) - left, right = self._maybe_downcast_constants(left, right) - return self._maybe_evaluate_binop(op, op_class, left, right) - - def visit_Div(self, node, **kwargs): - return lambda lhs, rhs: Div(lhs, rhs) - - def visit_UnaryOp(self, node, **kwargs): - op = self.visit(node.op) - operand = self.visit(node.operand) - return op(operand) - - def visit_Name(self, node, **kwargs): - return self.term_type(node.id, self.env, **kwargs) - - # TODO(py314): deprecated since Python 3.8. Remove after Python 3.14 is min - def visit_NameConstant(self, node, **kwargs) -> Term: - return self.const_type(node.value, self.env) - - # TODO(py314): deprecated since Python 3.8. Remove after Python 3.14 is min - def visit_Num(self, node, **kwargs) -> Term: - return self.const_type(node.value, self.env) - - def visit_Constant(self, node, **kwargs) -> Term: - return self.const_type(node.value, self.env) - - # TODO(py314): deprecated since Python 3.8. 
Remove after Python 3.14 is min - def visit_Str(self, node, **kwargs): - name = self.env.add_tmp(node.s) - return self.term_type(name, self.env) - - def visit_List(self, node, **kwargs): - name = self.env.add_tmp([self.visit(e)(self.env) for e in node.elts]) - return self.term_type(name, self.env) - - visit_Tuple = visit_List - - def visit_Index(self, node, **kwargs): - """df.index[4]""" - return self.visit(node.value) - - def visit_Subscript(self, node, **kwargs): - from pandas import eval as pd_eval - - value = self.visit(node.value) - slobj = self.visit(node.slice) - result = pd_eval( - slobj, local_dict=self.env, engine=self.engine, parser=self.parser - ) - try: - # a Term instance - v = value.value[result] - except AttributeError: - # an Op instance - lhs = pd_eval( - value, local_dict=self.env, engine=self.engine, parser=self.parser - ) - v = lhs[result] - name = self.env.add_tmp(v) - return self.term_type(name, env=self.env) - - def visit_Slice(self, node, **kwargs): - """df.index[slice(4,6)]""" - lower = node.lower - if lower is not None: - lower = self.visit(lower).value - upper = node.upper - if upper is not None: - upper = self.visit(upper).value - step = node.step - if step is not None: - step = self.visit(step).value - - return slice(lower, upper, step) - - def visit_Assign(self, node, **kwargs): - """ - support a single assignment node, like - - c = a + b - - set the assigner at the top level, must be a Name node which - might or might not exist in the resolvers - - """ - if len(node.targets) != 1: - raise SyntaxError("can only assign a single expression") - if not isinstance(node.targets[0], ast.Name): - raise SyntaxError("left hand side of an assignment must be a single name") - if self.env.target is None: - raise ValueError("cannot assign without a target object") - - try: - assigner = self.visit(node.targets[0], **kwargs) - except UndefinedVariableError: - assigner = node.targets[0].id - - self.assigner = getattr(assigner, "name", assigner) - if self.assigner is None: - raise SyntaxError( - "left hand side of an assignment must be a single resolvable name" - ) - - return self.visit(node.value, **kwargs) - - def visit_Attribute(self, node, **kwargs): - attr = node.attr - value = node.value - - ctx = node.ctx - if isinstance(ctx, ast.Load): - # resolve the value - resolved = self.visit(value).value - try: - v = getattr(resolved, attr) - name = self.env.add_tmp(v) - return self.term_type(name, self.env) - except AttributeError: - # something like datetime.datetime where scope is overridden - if isinstance(value, ast.Name) and value.id == attr: - return resolved - raise - - raise ValueError(f"Invalid Attribute context {type(ctx).__name__}") - - def visit_Call(self, node, side=None, **kwargs): - if isinstance(node.func, ast.Attribute) and node.func.attr != "__call__": - res = self.visit_Attribute(node.func) - elif not isinstance(node.func, ast.Name): - raise TypeError("Only named functions are supported") - else: - try: - res = self.visit(node.func) - except UndefinedVariableError: - # Check if this is a supported function name - try: - res = FuncNode(node.func.id) - except ValueError: - # Raise original error - raise - - if res is None: - # error: "expr" has no attribute "id" - raise ValueError( - f"Invalid function call {node.func.id}" # type: ignore[attr-defined] - ) - if hasattr(res, "value"): - res = res.value - - if isinstance(res, FuncNode): - new_args = [self.visit(arg) for arg in node.args] - - if node.keywords: - raise TypeError( - f'Function "{res.name}" does not 
support keyword arguments' - ) - - return res(*new_args) - - else: - new_args = [self.visit(arg)(self.env) for arg in node.args] - - for key in node.keywords: - if not isinstance(key, ast.keyword): - # error: "expr" has no attribute "id" - raise ValueError( - "keyword error in function call " # type: ignore[attr-defined] - f"'{node.func.id}'" - ) - - if key.arg: - kwargs[key.arg] = self.visit(key.value)(self.env) - - name = self.env.add_tmp(res(*new_args, **kwargs)) - return self.term_type(name=name, env=self.env) - - def translate_In(self, op): - return op - - def visit_Compare(self, node, **kwargs): - ops = node.ops - comps = node.comparators - - # base case: we have something like a CMP b - if len(comps) == 1: - op = self.translate_In(ops[0]) - binop = ast.BinOp(op=op, left=node.left, right=comps[0]) - return self.visit(binop) - - # recursive case: we have a chained comparison, a CMP b CMP c, etc. - left = node.left - values = [] - for op, comp in zip(ops, comps): - new_node = self.visit( - ast.Compare(comparators=[comp], left=left, ops=[self.translate_In(op)]) - ) - left = comp - values.append(new_node) - return self.visit(ast.BoolOp(op=ast.And(), values=values)) - - def _try_visit_binop(self, bop): - if isinstance(bop, (Op, Term)): - return bop - return self.visit(bop) - - def visit_BoolOp(self, node, **kwargs): - def visitor(x, y): - lhs = self._try_visit_binop(x) - rhs = self._try_visit_binop(y) - - op, op_class, lhs, rhs = self._maybe_transform_eq_ne(node, lhs, rhs) - return self._maybe_evaluate_binop(op, node.op, lhs, rhs) - - operands = node.values - return reduce(visitor, operands) - - -_python_not_supported = frozenset(["Dict", "BoolOp", "In", "NotIn"]) -_numexpr_supported_calls = frozenset(REDUCTIONS + MATHOPS) - - -@disallow( - (_unsupported_nodes | _python_not_supported) - - (_boolop_nodes | frozenset(["BoolOp", "Attribute", "In", "NotIn", "Tuple"])) -) -class PandasExprVisitor(BaseExprVisitor): - def __init__( - self, - env, - engine, - parser, - preparser=partial( - _preparse, - f=_compose(_replace_locals, _replace_booleans, clean_backtick_quoted_toks), - ), - ) -> None: - super().__init__(env, engine, parser, preparser) - - -@disallow(_unsupported_nodes | _python_not_supported | frozenset(["Not"])) -class PythonExprVisitor(BaseExprVisitor): - def __init__( - self, env, engine, parser, preparser=lambda source, f=None: source - ) -> None: - super().__init__(env, engine, parser, preparser=preparser) - - -class Expr: - """ - Object encapsulating an expression. - - Parameters - ---------- - expr : str - engine : str, optional, default 'numexpr' - parser : str, optional, default 'pandas' - env : Scope, optional, default None - level : int, optional, default 2 - """ - - env: Scope - engine: str - parser: str - - def __init__( - self, - expr, - engine: str = "numexpr", - parser: str = "pandas", - env: Scope | None = None, - level: int = 0, - ) -> None: - self.expr = expr - self.env = env or Scope(level=level + 1) - self.engine = engine - self.parser = parser - self._visitor = PARSERS[parser](self.env, self.engine, self.parser) - self.terms = self.parse() - - @property - def assigner(self): - return getattr(self._visitor, "assigner", None) - - def __call__(self): - return self.terms(self.env) - - def __repr__(self) -> str: - return printing.pprint_thing(self.terms) - - def __len__(self) -> int: - return len(self.expr) - - def parse(self): - """ - Parse an expression. 
- """ - return self._visitor.visit(self.expr) - - @property - def names(self): - """ - Get the names in an expression. - """ - if is_term(self.terms): - return frozenset([self.terms.name]) - return frozenset(term.name for term in com.flatten(self.terms)) - - -PARSERS = {"python": PythonExprVisitor, "pandas": PandasExprVisitor} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/floating/test_to_numpy.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/floating/test_to_numpy.py deleted file mode 100644 index 2ed52439adf53ae12350d35862aacb7f91d6f7f5..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/floating/test_to_numpy.py +++ /dev/null @@ -1,132 +0,0 @@ -import numpy as np -import pytest - -import pandas as pd -import pandas._testing as tm -from pandas.core.arrays import FloatingArray - - -@pytest.mark.parametrize("box", [True, False], ids=["series", "array"]) -def test_to_numpy(box): - con = pd.Series if box else pd.array - - # default (with or without missing values) -> object dtype - arr = con([0.1, 0.2, 0.3], dtype="Float64") - result = arr.to_numpy() - expected = np.array([0.1, 0.2, 0.3], dtype="object") - tm.assert_numpy_array_equal(result, expected) - - arr = con([0.1, 0.2, None], dtype="Float64") - result = arr.to_numpy() - expected = np.array([0.1, 0.2, pd.NA], dtype="object") - tm.assert_numpy_array_equal(result, expected) - - -@pytest.mark.parametrize("box", [True, False], ids=["series", "array"]) -def test_to_numpy_float(box): - con = pd.Series if box else pd.array - - # no missing values -> can convert to float, otherwise raises - arr = con([0.1, 0.2, 0.3], dtype="Float64") - result = arr.to_numpy(dtype="float64") - expected = np.array([0.1, 0.2, 0.3], dtype="float64") - tm.assert_numpy_array_equal(result, expected) - - arr = con([0.1, 0.2, None], dtype="Float64") - with pytest.raises(ValueError, match="cannot convert to 'float64'-dtype"): - result = arr.to_numpy(dtype="float64") - - # need to explicitly specify na_value - result = arr.to_numpy(dtype="float64", na_value=np.nan) - expected = np.array([0.1, 0.2, np.nan], dtype="float64") - tm.assert_numpy_array_equal(result, expected) - - -@pytest.mark.parametrize("box", [True, False], ids=["series", "array"]) -def test_to_numpy_int(box): - con = pd.Series if box else pd.array - - # no missing values -> can convert to int, otherwise raises - arr = con([1.0, 2.0, 3.0], dtype="Float64") - result = arr.to_numpy(dtype="int64") - expected = np.array([1, 2, 3], dtype="int64") - tm.assert_numpy_array_equal(result, expected) - - arr = con([1.0, 2.0, None], dtype="Float64") - with pytest.raises(ValueError, match="cannot convert to 'int64'-dtype"): - result = arr.to_numpy(dtype="int64") - - # automatic casting (floors the values) - arr = con([0.1, 0.9, 1.1], dtype="Float64") - result = arr.to_numpy(dtype="int64") - expected = np.array([0, 0, 1], dtype="int64") - tm.assert_numpy_array_equal(result, expected) - - -@pytest.mark.parametrize("box", [True, False], ids=["series", "array"]) -def test_to_numpy_na_value(box): - con = pd.Series if box else pd.array - - arr = con([0.0, 1.0, None], dtype="Float64") - result = arr.to_numpy(dtype=object, na_value=None) - expected = np.array([0.0, 1.0, None], dtype="object") - tm.assert_numpy_array_equal(result, expected) - - result = arr.to_numpy(dtype=bool, na_value=False) - expected = np.array([False, True, False], dtype="bool") - 
tm.assert_numpy_array_equal(result, expected) - - result = arr.to_numpy(dtype="int64", na_value=-99) - expected = np.array([0, 1, -99], dtype="int64") - tm.assert_numpy_array_equal(result, expected) - - -def test_to_numpy_na_value_with_nan(): - # array with both NaN and NA -> only fill NA with `na_value` - arr = FloatingArray(np.array([0.0, np.nan, 0.0]), np.array([False, False, True])) - result = arr.to_numpy(dtype="float64", na_value=-1) - expected = np.array([0.0, np.nan, -1.0], dtype="float64") - tm.assert_numpy_array_equal(result, expected) - - -@pytest.mark.parametrize("dtype", ["float64", "float32", "int32", "int64", "bool"]) -@pytest.mark.parametrize("box", [True, False], ids=["series", "array"]) -def test_to_numpy_dtype(box, dtype): - con = pd.Series if box else pd.array - arr = con([0.0, 1.0], dtype="Float64") - - result = arr.to_numpy(dtype=dtype) - expected = np.array([0, 1], dtype=dtype) - tm.assert_numpy_array_equal(result, expected) - - -@pytest.mark.parametrize("dtype", ["float64", "float32", "int32", "int64", "bool"]) -@pytest.mark.parametrize("box", [True, False], ids=["series", "array"]) -def test_to_numpy_na_raises(box, dtype): - con = pd.Series if box else pd.array - arr = con([0.0, 1.0, None], dtype="Float64") - with pytest.raises(ValueError, match=dtype): - arr.to_numpy(dtype=dtype) - - -@pytest.mark.parametrize("box", [True, False], ids=["series", "array"]) -def test_to_numpy_string(box, dtype): - con = pd.Series if box else pd.array - arr = con([0.0, 1.0, None], dtype="Float64") - - result = arr.to_numpy(dtype="str") - expected = np.array([0.0, 1.0, pd.NA], dtype=f"{tm.ENDIAN}U32") - tm.assert_numpy_array_equal(result, expected) - - -def test_to_numpy_copy(): - # to_numpy can be zero-copy if no missing values - arr = pd.array([0.1, 0.2, 0.3], dtype="Float64") - result = arr.to_numpy(dtype="float64") - result[0] = 10 - tm.assert_extension_array_equal(arr, pd.array([10, 0.2, 0.3], dtype="Float64")) - - arr = pd.array([0.1, 0.2, 0.3], dtype="Float64") - result = arr.to_numpy(dtype="float64", copy=True) - result[0] = 10 - tm.assert_extension_array_equal(arr, pd.array([0.1, 0.2, 0.3], dtype="Float64")) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/period/test_period_range.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/period/test_period_range.py deleted file mode 100644 index c94ddf57c0ee16e2ee9ab907cb121b0c612ccc77..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/period/test_period_range.py +++ /dev/null @@ -1,121 +0,0 @@ -import numpy as np -import pytest - -from pandas import ( - NaT, - Period, - PeriodIndex, - date_range, - period_range, -) -import pandas._testing as tm - - -class TestPeriodRange: - def test_required_arguments(self): - msg = ( - "Of the three parameters: start, end, and periods, exactly two " - "must be specified" - ) - with pytest.raises(ValueError, match=msg): - period_range("2011-1-1", "2012-1-1", "B") - - @pytest.mark.parametrize("freq", ["D", "W", "M", "Q", "A"]) - def test_construction_from_string(self, freq): - # non-empty - expected = date_range( - start="2017-01-01", periods=5, freq=freq, name="foo" - ).to_period() - start, end = str(expected[0]), str(expected[-1]) - - result = period_range(start=start, end=end, freq=freq, name="foo") - tm.assert_index_equal(result, expected) - - result = period_range(start=start, periods=5, freq=freq, name="foo") - 
tm.assert_index_equal(result, expected) - - result = period_range(end=end, periods=5, freq=freq, name="foo") - tm.assert_index_equal(result, expected) - - # empty - expected = PeriodIndex([], freq=freq, name="foo") - - result = period_range(start=start, periods=0, freq=freq, name="foo") - tm.assert_index_equal(result, expected) - - result = period_range(end=end, periods=0, freq=freq, name="foo") - tm.assert_index_equal(result, expected) - - result = period_range(start=end, end=start, freq=freq, name="foo") - tm.assert_index_equal(result, expected) - - def test_construction_from_period(self): - # upsampling - start, end = Period("2017Q1", freq="Q"), Period("2018Q1", freq="Q") - expected = date_range( - start="2017-03-31", end="2018-03-31", freq="M", name="foo" - ).to_period() - result = period_range(start=start, end=end, freq="M", name="foo") - tm.assert_index_equal(result, expected) - - # downsampling - start, end = Period("2017-1", freq="M"), Period("2019-12", freq="M") - expected = date_range( - start="2017-01-31", end="2019-12-31", freq="Q", name="foo" - ).to_period() - result = period_range(start=start, end=end, freq="Q", name="foo") - tm.assert_index_equal(result, expected) - - # test for issue # 21793 - start, end = Period("2017Q1", freq="Q"), Period("2018Q1", freq="Q") - idx = period_range(start=start, end=end, freq="Q", name="foo") - result = idx == idx.values - expected = np.array([True, True, True, True, True]) - tm.assert_numpy_array_equal(result, expected) - - # empty - expected = PeriodIndex([], freq="W", name="foo") - - result = period_range(start=start, periods=0, freq="W", name="foo") - tm.assert_index_equal(result, expected) - - result = period_range(end=end, periods=0, freq="W", name="foo") - tm.assert_index_equal(result, expected) - - result = period_range(start=end, end=start, freq="W", name="foo") - tm.assert_index_equal(result, expected) - - def test_errors(self): - # not enough params - msg = ( - "Of the three parameters: start, end, and periods, " - "exactly two must be specified" - ) - with pytest.raises(ValueError, match=msg): - period_range(start="2017Q1") - - with pytest.raises(ValueError, match=msg): - period_range(end="2017Q1") - - with pytest.raises(ValueError, match=msg): - period_range(periods=5) - - with pytest.raises(ValueError, match=msg): - period_range() - - # too many params - with pytest.raises(ValueError, match=msg): - period_range(start="2017Q1", end="2018Q1", periods=8, freq="Q") - - # start/end NaT - msg = "start and end must not be NaT" - with pytest.raises(ValueError, match=msg): - period_range(start=NaT, end="2018Q1") - - with pytest.raises(ValueError, match=msg): - period_range(start="2017Q1", end=NaT) - - # invalid periods param - msg = "periods must be a number, got foo" - with pytest.raises(TypeError, match=msg): - period_range(start="2017Q1", periods="foo") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/pytables/test_put.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/pytables/test_put.py deleted file mode 100644 index 5bf94340f4d3f8f91f71a2caf2394e03ab17c325..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/pytables/test_put.py +++ /dev/null @@ -1,359 +0,0 @@ -import datetime -import re - -import numpy as np -import pytest - -from pandas._libs.tslibs import Timestamp - -import pandas as pd -from pandas import ( - DataFrame, - HDFStore, - Index, - MultiIndex, - Series, - _testing as 
tm, - concat, -) -from pandas.tests.io.pytables.common import ( - _maybe_remove, - ensure_clean_store, -) -from pandas.util import _test_decorators as td - -pytestmark = pytest.mark.single_cpu - - -def test_format_type(tmp_path, setup_path): - df = DataFrame({"A": [1, 2]}) - with HDFStore(tmp_path / setup_path) as store: - store.put("a", df, format="fixed") - store.put("b", df, format="table") - - assert store.get_storer("a").format_type == "fixed" - assert store.get_storer("b").format_type == "table" - - -def test_format_kwarg_in_constructor(tmp_path, setup_path): - # GH 13291 - - msg = "format is not a defined argument for HDFStore" - - with pytest.raises(ValueError, match=msg): - HDFStore(tmp_path / setup_path, format="table") - - -def test_api_default_format(tmp_path, setup_path): - # default_format option - with ensure_clean_store(setup_path) as store: - df = tm.makeDataFrame() - - with pd.option_context("io.hdf.default_format", "fixed"): - _maybe_remove(store, "df") - store.put("df", df) - assert not store.get_storer("df").is_table - - msg = "Can only append to Tables" - with pytest.raises(ValueError, match=msg): - store.append("df2", df) - - with pd.option_context("io.hdf.default_format", "table"): - _maybe_remove(store, "df") - store.put("df", df) - assert store.get_storer("df").is_table - - _maybe_remove(store, "df2") - store.append("df2", df) - assert store.get_storer("df").is_table - - path = tmp_path / setup_path - df = tm.makeDataFrame() - - with pd.option_context("io.hdf.default_format", "fixed"): - df.to_hdf(path, "df") - with HDFStore(path) as store: - assert not store.get_storer("df").is_table - with pytest.raises(ValueError, match=msg): - df.to_hdf(path, "df2", append=True) - - with pd.option_context("io.hdf.default_format", "table"): - df.to_hdf(path, "df3") - with HDFStore(path) as store: - assert store.get_storer("df3").is_table - df.to_hdf(path, "df4", append=True) - with HDFStore(path) as store: - assert store.get_storer("df4").is_table - - -def test_put(setup_path): - with ensure_clean_store(setup_path) as store: - ts = tm.makeTimeSeries() - df = tm.makeTimeDataFrame() - store["a"] = ts - store["b"] = df[:10] - store["foo/bar/bah"] = df[:10] - store["foo"] = df[:10] - store["/foo"] = df[:10] - store.put("c", df[:10], format="table") - - # not OK, not a table - msg = "Can only append to Tables" - with pytest.raises(ValueError, match=msg): - store.put("b", df[10:], append=True) - - # node does not currently exist, test _is_table_type returns False - # in this case - _maybe_remove(store, "f") - with pytest.raises(ValueError, match=msg): - store.put("f", df[10:], append=True) - - # can't put to a table (use append instead) - with pytest.raises(ValueError, match=msg): - store.put("c", df[10:], append=True) - - # overwrite table - store.put("c", df[:10], format="table", append=False) - tm.assert_frame_equal(df[:10], store["c"]) - - -def test_put_string_index(setup_path): - with ensure_clean_store(setup_path) as store: - index = Index([f"I am a very long string index: {i}" for i in range(20)]) - s = Series(np.arange(20), index=index) - df = DataFrame({"A": s, "B": s}) - - store["a"] = s - tm.assert_series_equal(store["a"], s) - - store["b"] = df - tm.assert_frame_equal(store["b"], df) - - # mixed length - index = Index( - ["abcdefghijklmnopqrstuvwxyz1234567890"] - + [f"I am a very long string index: {i}" for i in range(20)] - ) - s = Series(np.arange(21), index=index) - df = DataFrame({"A": s, "B": s}) - store["a"] = s - tm.assert_series_equal(store["a"], s) - - store["b"] 
= df - tm.assert_frame_equal(store["b"], df) - - -def test_put_compression(setup_path): - with ensure_clean_store(setup_path) as store: - df = tm.makeTimeDataFrame() - - store.put("c", df, format="table", complib="zlib") - tm.assert_frame_equal(store["c"], df) - - # can't compress if format='fixed' - msg = "Compression not supported on Fixed format stores" - with pytest.raises(ValueError, match=msg): - store.put("b", df, format="fixed", complib="zlib") - - -@td.skip_if_windows -def test_put_compression_blosc(setup_path): - df = tm.makeTimeDataFrame() - - with ensure_clean_store(setup_path) as store: - # can't compress if format='fixed' - msg = "Compression not supported on Fixed format stores" - with pytest.raises(ValueError, match=msg): - store.put("b", df, format="fixed", complib="blosc") - - store.put("c", df, format="table", complib="blosc") - tm.assert_frame_equal(store["c"], df) - - -def test_put_mixed_type(setup_path): - df = tm.makeTimeDataFrame() - df["obj1"] = "foo" - df["obj2"] = "bar" - df["bool1"] = df["A"] > 0 - df["bool2"] = df["B"] > 0 - df["bool3"] = True - df["int1"] = 1 - df["int2"] = 2 - df["timestamp1"] = Timestamp("20010102").as_unit("ns") - df["timestamp2"] = Timestamp("20010103").as_unit("ns") - df["datetime1"] = Timestamp("20010102").as_unit("ns") - df["datetime2"] = Timestamp("20010103").as_unit("ns") - df.loc[df.index[3:6], ["obj1"]] = np.nan - df = df._consolidate() - - with ensure_clean_store(setup_path) as store: - _maybe_remove(store, "df") - - with tm.assert_produces_warning(pd.errors.PerformanceWarning): - store.put("df", df) - - expected = store.get("df") - tm.assert_frame_equal(expected, df) - - -@pytest.mark.parametrize( - "format, index", - [ - ["table", tm.makeFloatIndex], - ["table", tm.makeStringIndex], - ["table", tm.makeIntIndex], - ["table", tm.makeDateIndex], - ["fixed", tm.makeFloatIndex], - ["fixed", tm.makeStringIndex], - ["fixed", tm.makeIntIndex], - ["fixed", tm.makeDateIndex], - ["table", tm.makePeriodIndex], # GH#7796 - ["fixed", tm.makePeriodIndex], - ], -) -@pytest.mark.filterwarnings(r"ignore:PeriodDtype\[B\] is deprecated:FutureWarning") -def test_store_index_types(setup_path, format, index): - # GH5386 - # test storing various index types - - with ensure_clean_store(setup_path) as store: - df = DataFrame( - np.random.default_rng(2).standard_normal((10, 2)), columns=list("AB") - ) - df.index = index(len(df)) - - _maybe_remove(store, "df") - store.put("df", df, format=format) - tm.assert_frame_equal(df, store["df"]) - - -def test_column_multiindex(setup_path): - # GH 4710 - # recreate multi-indexes properly - - index = MultiIndex.from_tuples( - [("A", "a"), ("A", "b"), ("B", "a"), ("B", "b")], names=["first", "second"] - ) - df = DataFrame(np.arange(12).reshape(3, 4), columns=index) - expected = df.set_axis(df.index.to_numpy()) - - with ensure_clean_store(setup_path) as store: - store.put("df", df) - tm.assert_frame_equal( - store["df"], expected, check_index_type=True, check_column_type=True - ) - - store.put("df1", df, format="table") - tm.assert_frame_equal( - store["df1"], expected, check_index_type=True, check_column_type=True - ) - - msg = re.escape("cannot use a multi-index on axis [1] with data_columns ['A']") - with pytest.raises(ValueError, match=msg): - store.put("df2", df, format="table", data_columns=["A"]) - msg = re.escape("cannot use a multi-index on axis [1] with data_columns True") - with pytest.raises(ValueError, match=msg): - store.put("df3", df, format="table", data_columns=True) - - # appending multi-column on 
existing table (see GH 6167) - with ensure_clean_store(setup_path) as store: - store.append("df2", df) - store.append("df2", df) - - tm.assert_frame_equal(store["df2"], concat((df, df))) - - # non_index_axes name - df = DataFrame(np.arange(12).reshape(3, 4), columns=Index(list("ABCD"), name="foo")) - expected = df.set_axis(df.index.to_numpy()) - - with ensure_clean_store(setup_path) as store: - store.put("df1", df, format="table") - tm.assert_frame_equal( - store["df1"], expected, check_index_type=True, check_column_type=True - ) - - -def test_store_multiindex(setup_path): - # validate multi-index names - # GH 5527 - with ensure_clean_store(setup_path) as store: - - def make_index(names=None): - return MultiIndex.from_tuples( - [ - (datetime.datetime(2013, 12, d), s, t) - for d in range(1, 3) - for s in range(2) - for t in range(3) - ], - names=names, - ) - - # no names - _maybe_remove(store, "df") - df = DataFrame(np.zeros((12, 2)), columns=["a", "b"], index=make_index()) - store.append("df", df) - tm.assert_frame_equal(store.select("df"), df) - - # partial names - _maybe_remove(store, "df") - df = DataFrame( - np.zeros((12, 2)), - columns=["a", "b"], - index=make_index(["date", None, None]), - ) - store.append("df", df) - tm.assert_frame_equal(store.select("df"), df) - - # series - _maybe_remove(store, "s") - s = Series(np.zeros(12), index=make_index(["date", None, None])) - store.append("s", s) - xp = Series(np.zeros(12), index=make_index(["date", "level_1", "level_2"])) - tm.assert_series_equal(store.select("s"), xp) - - # dup with column - _maybe_remove(store, "df") - df = DataFrame( - np.zeros((12, 2)), - columns=["a", "b"], - index=make_index(["date", "a", "t"]), - ) - msg = "duplicate names/columns in the multi-index when storing as a table" - with pytest.raises(ValueError, match=msg): - store.append("df", df) - - # dup within level - _maybe_remove(store, "df") - df = DataFrame( - np.zeros((12, 2)), - columns=["a", "b"], - index=make_index(["date", "date", "date"]), - ) - with pytest.raises(ValueError, match=msg): - store.append("df", df) - - # fully names - _maybe_remove(store, "df") - df = DataFrame( - np.zeros((12, 2)), - columns=["a", "b"], - index=make_index(["date", "s", "t"]), - ) - store.append("df", df) - tm.assert_frame_equal(store.select("df"), df) - - -@pytest.mark.parametrize("format", ["fixed", "table"]) -def test_store_periodindex(tmp_path, setup_path, format): - # GH 7796 - # test of PeriodIndex in HDFStore - df = DataFrame( - np.random.default_rng(2).standard_normal((5, 1)), - index=pd.period_range("20220101", freq="M", periods=5), - ) - - path = tmp_path / setup_path - df.to_hdf(path, "df", mode="w", format=format) - expected = pd.read_hdf(path, "df") - tm.assert_frame_equal(df, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/resample/test_datetime_index.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/resample/test_datetime_index.py deleted file mode 100644 index 1b20b383c4eae07f233d12e055599131ac6c33b5..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/resample/test_datetime_index.py +++ /dev/null @@ -1,1996 +0,0 @@ -from datetime import datetime -from functools import partial -from io import StringIO - -import numpy as np -import pytest -import pytz - -from pandas._libs import lib -from pandas._typing import DatetimeNaTType - -import pandas as pd -from pandas import ( - DataFrame, - Series, - Timedelta, - 
Timestamp, - isna, - notna, -) -import pandas._testing as tm -from pandas.core.groupby.grouper import Grouper -from pandas.core.indexes.datetimes import date_range -from pandas.core.indexes.period import ( - Period, - period_range, -) -from pandas.core.resample import ( - DatetimeIndex, - _get_timestamp_range_edges, -) - -from pandas.tseries import offsets -from pandas.tseries.offsets import Minute - - -@pytest.fixture() -def _index_factory(): - return date_range - - -@pytest.fixture -def _index_freq(): - return "Min" - - -@pytest.fixture -def _static_values(index): - return np.random.default_rng(2).random(len(index)) - - -@pytest.fixture(params=["s", "ms", "us", "ns"]) -def unit(request): - return request.param - - -def test_custom_grouper(index, unit): - dti = index.as_unit(unit) - s = Series(np.array([1] * len(dti)), index=dti, dtype="int64") - - b = Grouper(freq=Minute(5)) - g = s.groupby(b) - - # check all cython functions work - g.ohlc() # doesn't use _cython_agg_general - funcs = ["sum", "mean", "prod", "min", "max", "var"] - for f in funcs: - g._cython_agg_general(f, alt=None, numeric_only=True) - - b = Grouper(freq=Minute(5), closed="right", label="right") - g = s.groupby(b) - # check all cython functions work - g.ohlc() # doesn't use _cython_agg_general - funcs = ["sum", "mean", "prod", "min", "max", "var"] - for f in funcs: - g._cython_agg_general(f, alt=None, numeric_only=True) - - assert g.ngroups == 2593 - assert notna(g.mean()).all() - - # construct expected val - arr = [1] + [5] * 2592 - idx = dti[0:-1:5] - idx = idx.append(dti[-1:]) - idx = DatetimeIndex(idx, freq="5T").as_unit(unit) - expect = Series(arr, index=idx) - - # GH2763 - return input dtype if we can - result = g.agg("sum") - tm.assert_series_equal(result, expect) - - -def test_custom_grouper_df(index, unit): - b = Grouper(freq=Minute(5), closed="right", label="right") - dti = index.as_unit(unit) - df = DataFrame( - np.random.default_rng(2).random((len(dti), 10)), index=dti, dtype="float64" - ) - r = df.groupby(b).agg("sum") - - assert len(r.columns) == 10 - assert len(r.index) == 2593 - - -@pytest.mark.parametrize( - "_index_start,_index_end,_index_name", - [("1/1/2000 00:00:00", "1/1/2000 00:13:00", "index")], -) -@pytest.mark.parametrize( - "closed, expected", - [ - ( - "right", - lambda s: Series( - [s.iloc[0], s[1:6].mean(), s[6:11].mean(), s[11:].mean()], - index=date_range("1/1/2000", periods=4, freq="5min", name="index"), - ), - ), - ( - "left", - lambda s: Series( - [s[:5].mean(), s[5:10].mean(), s[10:].mean()], - index=date_range( - "1/1/2000 00:05", periods=3, freq="5min", name="index" - ), - ), - ), - ], -) -def test_resample_basic(series, closed, expected, unit): - s = series - s.index = s.index.as_unit(unit) - expected = expected(s) - expected.index = expected.index.as_unit(unit) - result = s.resample("5min", closed=closed, label="right").mean() - tm.assert_series_equal(result, expected) - - -def test_resample_integerarray(unit): - # GH 25580, resample on IntegerArray - ts = Series( - range(9), - index=date_range("1/1/2000", periods=9, freq="T").as_unit(unit), - dtype="Int64", - ) - result = ts.resample("3T").sum() - expected = Series( - [3, 12, 21], - index=date_range("1/1/2000", periods=3, freq="3T").as_unit(unit), - dtype="Int64", - ) - tm.assert_series_equal(result, expected) - - result = ts.resample("3T").mean() - expected = Series( - [1, 4, 7], - index=date_range("1/1/2000", periods=3, freq="3T").as_unit(unit), - dtype="Float64", - ) - tm.assert_series_equal(result, expected) - - -def 
test_resample_basic_grouper(series, unit): - s = series - s.index = s.index.as_unit(unit) - result = s.resample("5Min").last() - grouper = Grouper(freq=Minute(5), closed="left", label="left") - expected = s.groupby(grouper).agg(lambda x: x.iloc[-1]) - tm.assert_series_equal(result, expected) - - -@pytest.mark.parametrize( - "_index_start,_index_end,_index_name", - [("1/1/2000 00:00:00", "1/1/2000 00:13:00", "index")], -) -@pytest.mark.parametrize( - "keyword,value", - [("label", "righttt"), ("closed", "righttt"), ("convention", "starttt")], -) -def test_resample_string_kwargs(series, keyword, value, unit): - # see gh-19303 - # Check that wrong keyword argument strings raise an error - series.index = series.index.as_unit(unit) - msg = f"Unsupported value {value} for `{keyword}`" - with pytest.raises(ValueError, match=msg): - series.resample("5min", **({keyword: value})) - - -@pytest.mark.parametrize( - "_index_start,_index_end,_index_name", - [("1/1/2000 00:00:00", "1/1/2000 00:13:00", "index")], -) -def test_resample_how(series, downsample_method, unit): - if downsample_method == "ohlc": - pytest.skip("covered by test_resample_how_ohlc") - - s = series - s.index = s.index.as_unit(unit) - grouplist = np.ones_like(s) - grouplist[0] = 0 - grouplist[1:6] = 1 - grouplist[6:11] = 2 - grouplist[11:] = 3 - expected = s.groupby(grouplist).agg(downsample_method) - expected.index = date_range( - "1/1/2000", periods=4, freq="5min", name="index" - ).as_unit(unit) - - result = getattr( - s.resample("5min", closed="right", label="right"), downsample_method - )() - tm.assert_series_equal(result, expected) - - -@pytest.mark.parametrize( - "_index_start,_index_end,_index_name", - [("1/1/2000 00:00:00", "1/1/2000 00:13:00", "index")], -) -def test_resample_how_ohlc(series, unit): - s = series - s.index = s.index.as_unit(unit) - grouplist = np.ones_like(s) - grouplist[0] = 0 - grouplist[1:6] = 1 - grouplist[6:11] = 2 - grouplist[11:] = 3 - - def _ohlc(group): - if isna(group).all(): - return np.repeat(np.nan, 4) - return [group.iloc[0], group.max(), group.min(), group.iloc[-1]] - - expected = DataFrame( - s.groupby(grouplist).agg(_ohlc).values.tolist(), - index=date_range("1/1/2000", periods=4, freq="5min", name="index").as_unit( - unit - ), - columns=["open", "high", "low", "close"], - ) - - result = s.resample("5min", closed="right", label="right").ohlc() - tm.assert_frame_equal(result, expected) - - -def test_resample_how_callables(unit): - # GH#7929 - data = np.arange(5, dtype=np.int64) - ind = date_range(start="2014-01-01", periods=len(data), freq="d").as_unit(unit) - df = DataFrame({"A": data, "B": data}, index=ind) - - def fn(x, a=1): - return str(type(x)) - - class FnClass: - def __call__(self, x): - return str(type(x)) - - df_standard = df.resample("M").apply(fn) - df_lambda = df.resample("M").apply(lambda x: str(type(x))) - df_partial = df.resample("M").apply(partial(fn)) - df_partial2 = df.resample("M").apply(partial(fn, a=2)) - df_class = df.resample("M").apply(FnClass()) - - tm.assert_frame_equal(df_standard, df_lambda) - tm.assert_frame_equal(df_standard, df_partial) - tm.assert_frame_equal(df_standard, df_partial2) - tm.assert_frame_equal(df_standard, df_class) - - -def test_resample_rounding(unit): - # GH 8371 - # odd results when rounding is needed - - data = """date,time,value -11-08-2014,00:00:01.093,1 -11-08-2014,00:00:02.159,1 -11-08-2014,00:00:02.667,1 -11-08-2014,00:00:03.175,1 -11-08-2014,00:00:07.058,1 -11-08-2014,00:00:07.362,1 -11-08-2014,00:00:08.324,1 -11-08-2014,00:00:08.830,1 
-11-08-2014,00:00:08.982,1 -11-08-2014,00:00:09.815,1 -11-08-2014,00:00:10.540,1 -11-08-2014,00:00:11.061,1 -11-08-2014,00:00:11.617,1 -11-08-2014,00:00:13.607,1 -11-08-2014,00:00:14.535,1 -11-08-2014,00:00:15.525,1 -11-08-2014,00:00:17.960,1 -11-08-2014,00:00:20.674,1 -11-08-2014,00:00:21.191,1""" - - df = pd.read_csv( - StringIO(data), - parse_dates={"timestamp": ["date", "time"]}, - index_col="timestamp", - ) - df.index = df.index.as_unit(unit) - df.index.name = None - result = df.resample("6s").sum() - expected = DataFrame( - {"value": [4, 9, 4, 2]}, - index=date_range("2014-11-08", freq="6s", periods=4).as_unit(unit), - ) - tm.assert_frame_equal(result, expected) - - result = df.resample("7s").sum() - expected = DataFrame( - {"value": [4, 10, 4, 1]}, - index=date_range("2014-11-08", freq="7s", periods=4).as_unit(unit), - ) - tm.assert_frame_equal(result, expected) - - result = df.resample("11s").sum() - expected = DataFrame( - {"value": [11, 8]}, - index=date_range("2014-11-08", freq="11s", periods=2).as_unit(unit), - ) - tm.assert_frame_equal(result, expected) - - result = df.resample("13s").sum() - expected = DataFrame( - {"value": [13, 6]}, - index=date_range("2014-11-08", freq="13s", periods=2).as_unit(unit), - ) - tm.assert_frame_equal(result, expected) - - result = df.resample("17s").sum() - expected = DataFrame( - {"value": [16, 3]}, - index=date_range("2014-11-08", freq="17s", periods=2).as_unit(unit), - ) - tm.assert_frame_equal(result, expected) - - -def test_resample_basic_from_daily(unit): - # from daily - dti = date_range( - start=datetime(2005, 1, 1), end=datetime(2005, 1, 10), freq="D", name="index" - ).as_unit(unit) - - s = Series(np.random.default_rng(2).random(len(dti)), dti) - - # to weekly - result = s.resample("w-sun").last() - - assert len(result) == 3 - assert (result.index.dayofweek == [6, 6, 6]).all() - assert result.iloc[0] == s["1/2/2005"] - assert result.iloc[1] == s["1/9/2005"] - assert result.iloc[2] == s.iloc[-1] - - result = s.resample("W-MON").last() - assert len(result) == 2 - assert (result.index.dayofweek == [0, 0]).all() - assert result.iloc[0] == s["1/3/2005"] - assert result.iloc[1] == s["1/10/2005"] - - result = s.resample("W-TUE").last() - assert len(result) == 2 - assert (result.index.dayofweek == [1, 1]).all() - assert result.iloc[0] == s["1/4/2005"] - assert result.iloc[1] == s["1/10/2005"] - - result = s.resample("W-WED").last() - assert len(result) == 2 - assert (result.index.dayofweek == [2, 2]).all() - assert result.iloc[0] == s["1/5/2005"] - assert result.iloc[1] == s["1/10/2005"] - - result = s.resample("W-THU").last() - assert len(result) == 2 - assert (result.index.dayofweek == [3, 3]).all() - assert result.iloc[0] == s["1/6/2005"] - assert result.iloc[1] == s["1/10/2005"] - - result = s.resample("W-FRI").last() - assert len(result) == 2 - assert (result.index.dayofweek == [4, 4]).all() - assert result.iloc[0] == s["1/7/2005"] - assert result.iloc[1] == s["1/10/2005"] - - # to biz day - result = s.resample("B").last() - assert len(result) == 7 - assert (result.index.dayofweek == [4, 0, 1, 2, 3, 4, 0]).all() - - assert result.iloc[0] == s["1/2/2005"] - assert result.iloc[1] == s["1/3/2005"] - assert result.iloc[5] == s["1/9/2005"] - assert result.index.name == "index" - - -def test_resample_upsampling_picked_but_not_correct(unit): - # Test for issue #3020 - dates = date_range("01-Jan-2014", "05-Jan-2014", freq="D").as_unit(unit) - series = Series(1, index=dates) - - result = series.resample("D").mean() - assert result.index[0] == 
dates[0] - - # GH 5955 - # incorrect deciding to upsample when the axis frequency matches the - # resample frequency - - s = Series( - np.arange(1.0, 6), index=[datetime(1975, 1, i, 12, 0) for i in range(1, 6)] - ) - s.index = s.index.as_unit(unit) - expected = Series( - np.arange(1.0, 6), - index=date_range("19750101", periods=5, freq="D").as_unit(unit), - ) - - result = s.resample("D").count() - tm.assert_series_equal(result, Series(1, index=expected.index)) - - result1 = s.resample("D").sum() - result2 = s.resample("D").mean() - tm.assert_series_equal(result1, expected) - tm.assert_series_equal(result2, expected) - - -@pytest.mark.parametrize("f", ["sum", "mean", "prod", "min", "max", "var"]) -def test_resample_frame_basic_cy_funcs(f, unit): - df = tm.makeTimeDataFrame() - df.index = df.index.as_unit(unit) - - b = Grouper(freq="M") - g = df.groupby(b) - - # check all cython functions work - g._cython_agg_general(f, alt=None, numeric_only=True) - - -@pytest.mark.parametrize("freq", ["A", "M"]) -def test_resample_frame_basic_M_A(freq, unit): - df = tm.makeTimeDataFrame() - df.index = df.index.as_unit(unit) - result = df.resample(freq).mean() - tm.assert_series_equal(result["A"], df["A"].resample(freq).mean()) - - -@pytest.mark.parametrize("freq", ["W-WED", "M"]) -def test_resample_frame_basic_kind(freq, unit): - df = tm.makeTimeDataFrame() - df.index = df.index.as_unit(unit) - df.resample(freq, kind="period").mean() - - -def test_resample_upsample(unit): - # from daily - dti = date_range( - start=datetime(2005, 1, 1), end=datetime(2005, 1, 10), freq="D", name="index" - ).as_unit(unit) - - s = Series(np.random.default_rng(2).random(len(dti)), dti) - - # to minutely, by padding - result = s.resample("Min").ffill() - assert len(result) == 12961 - assert result.iloc[0] == s.iloc[0] - assert result.iloc[-1] == s.iloc[-1] - - assert result.index.name == "index" - - -def test_resample_how_method(unit): - # GH9915 - s = Series( - [11, 22], - index=[ - Timestamp("2015-03-31 21:48:52.672000"), - Timestamp("2015-03-31 21:49:52.739000"), - ], - ) - s.index = s.index.as_unit(unit) - expected = Series( - [11, np.nan, np.nan, np.nan, np.nan, np.nan, 22], - index=DatetimeIndex( - [ - Timestamp("2015-03-31 21:48:50"), - Timestamp("2015-03-31 21:49:00"), - Timestamp("2015-03-31 21:49:10"), - Timestamp("2015-03-31 21:49:20"), - Timestamp("2015-03-31 21:49:30"), - Timestamp("2015-03-31 21:49:40"), - Timestamp("2015-03-31 21:49:50"), - ], - freq="10s", - ), - ) - expected.index = expected.index.as_unit(unit) - tm.assert_series_equal(s.resample("10S").mean(), expected) - - -def test_resample_extra_index_point(unit): - # GH#9756 - index = date_range(start="20150101", end="20150331", freq="BM").as_unit(unit) - expected = DataFrame({"A": Series([21, 41, 63], index=index)}) - - index = date_range(start="20150101", end="20150331", freq="B").as_unit(unit) - df = DataFrame({"A": Series(range(len(index)), index=index)}, dtype="int64") - result = df.resample("BM").last() - tm.assert_frame_equal(result, expected) - - -def test_upsample_with_limit(unit): - rng = date_range("1/1/2000", periods=3, freq="5t").as_unit(unit) - ts = Series(np.random.default_rng(2).standard_normal(len(rng)), rng) - - result = ts.resample("t").ffill(limit=2) - expected = ts.reindex(result.index, method="ffill", limit=2) - tm.assert_series_equal(result, expected) - - -@pytest.mark.parametrize("freq", ["5D", "10H", "5Min", "10S"]) -@pytest.mark.parametrize("rule", ["Y", "3M", "15D", "30H", "15Min", "30S"]) -def 
test_nearest_upsample_with_limit(tz_aware_fixture, freq, rule, unit): - # GH 33939 - rng = date_range("1/1/2000", periods=3, freq=freq, tz=tz_aware_fixture).as_unit( - unit - ) - ts = Series(np.random.default_rng(2).standard_normal(len(rng)), rng) - - result = ts.resample(rule).nearest(limit=2) - expected = ts.reindex(result.index, method="nearest", limit=2) - tm.assert_series_equal(result, expected) - - -def test_resample_ohlc(series, unit): - s = series - s.index = s.index.as_unit(unit) - - grouper = Grouper(freq=Minute(5)) - expect = s.groupby(grouper).agg(lambda x: x.iloc[-1]) - result = s.resample("5Min").ohlc() - - assert len(result) == len(expect) - assert len(result.columns) == 4 - - xs = result.iloc[-2] - assert xs["open"] == s.iloc[-6] - assert xs["high"] == s[-6:-1].max() - assert xs["low"] == s[-6:-1].min() - assert xs["close"] == s.iloc[-2] - - xs = result.iloc[0] - assert xs["open"] == s.iloc[0] - assert xs["high"] == s[:5].max() - assert xs["low"] == s[:5].min() - assert xs["close"] == s.iloc[4] - - -def test_resample_ohlc_result(unit): - # GH 12332 - index = date_range("1-1-2000", "2-15-2000", freq="h").as_unit(unit) - index = index.union(date_range("4-15-2000", "5-15-2000", freq="h").as_unit(unit)) - s = Series(range(len(index)), index=index) - - a = s.loc[:"4-15-2000"].resample("30T").ohlc() - assert isinstance(a, DataFrame) - - b = s.loc[:"4-14-2000"].resample("30T").ohlc() - assert isinstance(b, DataFrame) - - -def test_resample_ohlc_result_odd_period(unit): - # GH12348 - # raising on odd period - rng = date_range("2013-12-30", "2014-01-07").as_unit(unit) - index = rng.drop( - [ - Timestamp("2014-01-01"), - Timestamp("2013-12-31"), - Timestamp("2014-01-04"), - Timestamp("2014-01-05"), - ] - ) - df = DataFrame(data=np.arange(len(index)), index=index) - result = df.resample("B").mean() - expected = df.reindex(index=date_range(rng[0], rng[-1], freq="B").as_unit(unit)) - tm.assert_frame_equal(result, expected) - - -def test_resample_ohlc_dataframe(unit): - df = ( - DataFrame( - { - "PRICE": { - Timestamp("2011-01-06 10:59:05", tz=None): 24990, - Timestamp("2011-01-06 12:43:33", tz=None): 25499, - Timestamp("2011-01-06 12:54:09", tz=None): 25499, - }, - "VOLUME": { - Timestamp("2011-01-06 10:59:05", tz=None): 1500000000, - Timestamp("2011-01-06 12:43:33", tz=None): 5000000000, - Timestamp("2011-01-06 12:54:09", tz=None): 100000000, - }, - } - ) - ).reindex(["VOLUME", "PRICE"], axis=1) - df.index = df.index.as_unit(unit) - df.columns.name = "Cols" - res = df.resample("H").ohlc() - exp = pd.concat( - [df["VOLUME"].resample("H").ohlc(), df["PRICE"].resample("H").ohlc()], - axis=1, - keys=df.columns, - ) - assert exp.columns.names[0] == "Cols" - tm.assert_frame_equal(exp, res) - - df.columns = [["a", "b"], ["c", "d"]] - res = df.resample("H").ohlc() - exp.columns = pd.MultiIndex.from_tuples( - [ - ("a", "c", "open"), - ("a", "c", "high"), - ("a", "c", "low"), - ("a", "c", "close"), - ("b", "d", "open"), - ("b", "d", "high"), - ("b", "d", "low"), - ("b", "d", "close"), - ] - ) - tm.assert_frame_equal(exp, res) - - # dupe columns fail atm - # df.columns = ['PRICE', 'PRICE'] - - -def test_resample_dup_index(): - # GH 4812 - # dup columns with resample raising - df = DataFrame( - np.random.default_rng(2).standard_normal((4, 12)), - index=[2000, 2000, 2000, 2000], - columns=[Period(year=2000, month=i + 1, freq="M") for i in range(12)], - ) - df.iloc[3, :] = np.nan - warning_msg = "DataFrame.resample with axis=1 is deprecated." 
- with tm.assert_produces_warning(FutureWarning, match=warning_msg): - result = df.resample("Q", axis=1).mean() - - msg = "DataFrame.groupby with axis=1 is deprecated" - with tm.assert_produces_warning(FutureWarning, match=msg): - expected = df.groupby(lambda x: int((x.month - 1) / 3), axis=1).mean() - expected.columns = [Period(year=2000, quarter=i + 1, freq="Q") for i in range(4)] - tm.assert_frame_equal(result, expected) - - -def test_resample_reresample(unit): - dti = date_range( - start=datetime(2005, 1, 1), end=datetime(2005, 1, 10), freq="D" - ).as_unit(unit) - s = Series(np.random.default_rng(2).random(len(dti)), dti) - bs = s.resample("B", closed="right", label="right").mean() - result = bs.resample("8H").mean() - assert len(result) == 22 - assert isinstance(result.index.freq, offsets.DateOffset) - assert result.index.freq == offsets.Hour(8) - - -@pytest.mark.parametrize( - "freq, expected_kwargs", - [ - ["A-DEC", {"start": "1990", "end": "2000", "freq": "a-dec"}], - ["A-JUN", {"start": "1990", "end": "2000", "freq": "a-jun"}], - ["M", {"start": "1990-01", "end": "2000-01", "freq": "M"}], - ], -) -def test_resample_timestamp_to_period( - simple_date_range_series, freq, expected_kwargs, unit -): - ts = simple_date_range_series("1/1/1990", "1/1/2000") - ts.index = ts.index.as_unit(unit) - - result = ts.resample(freq, kind="period").mean() - expected = ts.resample(freq).mean() - expected.index = period_range(**expected_kwargs) - tm.assert_series_equal(result, expected) - - -def test_ohlc_5min(unit): - def _ohlc(group): - if isna(group).all(): - return np.repeat(np.nan, 4) - return [group.iloc[0], group.max(), group.min(), group.iloc[-1]] - - rng = date_range("1/1/2000 00:00:00", "1/1/2000 5:59:50", freq="10s").as_unit(unit) - ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng) - - resampled = ts.resample("5min", closed="right", label="right").ohlc() - - assert (resampled.loc["1/1/2000 00:00"] == ts.iloc[0]).all() - - exp = _ohlc(ts[1:31]) - assert (resampled.loc["1/1/2000 00:05"] == exp).all() - - exp = _ohlc(ts["1/1/2000 5:55:01":]) - assert (resampled.loc["1/1/2000 6:00:00"] == exp).all() - - -def test_downsample_non_unique(unit): - rng = date_range("1/1/2000", "2/29/2000").as_unit(unit) - rng2 = rng.repeat(5).values - ts = Series(np.random.default_rng(2).standard_normal(len(rng2)), index=rng2) - - result = ts.resample("M").mean() - - expected = ts.groupby(lambda x: x.month).mean() - assert len(result) == 2 - tm.assert_almost_equal(result.iloc[0], expected[1]) - tm.assert_almost_equal(result.iloc[1], expected[2]) - - -def test_asfreq_non_unique(unit): - # GH #1077 - rng = date_range("1/1/2000", "2/29/2000").as_unit(unit) - rng2 = rng.repeat(2).values - ts = Series(np.random.default_rng(2).standard_normal(len(rng2)), index=rng2) - - msg = "cannot reindex on an axis with duplicate labels" - with pytest.raises(ValueError, match=msg): - ts.asfreq("B") - - -def test_resample_axis1(unit): - rng = date_range("1/1/2000", "2/29/2000").as_unit(unit) - df = DataFrame( - np.random.default_rng(2).standard_normal((3, len(rng))), - columns=rng, - index=["a", "b", "c"], - ) - - warning_msg = "DataFrame.resample with axis=1 is deprecated." 
- with tm.assert_produces_warning(FutureWarning, match=warning_msg): - result = df.resample("M", axis=1).mean() - expected = df.T.resample("M").mean().T - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize("freq", ["t", "5t", "15t", "30t", "4h", "12h"]) -def test_resample_anchored_ticks(freq, unit): - # If a fixed delta (5 minute, 4 hour) evenly divides a day, we should - # "anchor" the origin at midnight so we get regular intervals rather - # than starting from the first timestamp which might start in the - # middle of a desired interval - - rng = date_range("1/1/2000 04:00:00", periods=86400, freq="s").as_unit(unit) - ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng) - ts[:2] = np.nan # so results are the same - result = ts[2:].resample(freq, closed="left", label="left").mean() - expected = ts.resample(freq, closed="left", label="left").mean() - tm.assert_series_equal(result, expected) - - -@pytest.mark.parametrize("end", [1, 2]) -def test_resample_single_group(end, unit): - mysum = lambda x: x.sum() - - rng = date_range("2000-1-1", f"2000-{end}-10", freq="D").as_unit(unit) - ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng) - tm.assert_series_equal(ts.resample("M").sum(), ts.resample("M").apply(mysum)) - - -def test_resample_single_group_std(unit): - # GH 3849 - s = Series( - [30.1, 31.6], - index=[Timestamp("20070915 15:30:00"), Timestamp("20070915 15:40:00")], - ) - s.index = s.index.as_unit(unit) - expected = Series( - [0.75], index=DatetimeIndex([Timestamp("20070915")], freq="D").as_unit(unit) - ) - result = s.resample("D").apply(lambda x: np.std(x)) - tm.assert_series_equal(result, expected) - - -def test_resample_offset(unit): - # GH 31809 - - rng = date_range("1/1/2000 00:00:00", "1/1/2000 02:00", freq="s").as_unit(unit) - ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng) - - resampled = ts.resample("5min", offset="2min").mean() - exp_rng = date_range("12/31/1999 23:57:00", "1/1/2000 01:57", freq="5min").as_unit( - unit - ) - tm.assert_index_equal(resampled.index, exp_rng) - - -@pytest.mark.parametrize( - "kwargs", - [ - {"origin": "1999-12-31 23:57:00"}, - {"origin": Timestamp("1970-01-01 00:02:00")}, - {"origin": "epoch", "offset": "2m"}, - # origin of '1999-31-12 12:02:00' should be equivalent for this case - {"origin": "1999-12-31 12:02:00"}, - {"offset": "-3m"}, - ], -) -def test_resample_origin(kwargs, unit): - # GH 31809 - rng = date_range("2000-01-01 00:00:00", "2000-01-01 02:00", freq="s").as_unit(unit) - ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng) - - exp_rng = date_range( - "1999-12-31 23:57:00", "2000-01-01 01:57", freq="5min" - ).as_unit(unit) - - resampled = ts.resample("5min", **kwargs).mean() - tm.assert_index_equal(resampled.index, exp_rng) - - -@pytest.mark.parametrize( - "origin", ["invalid_value", "epch", "startday", "startt", "2000-30-30", object()] -) -def test_resample_bad_origin(origin, unit): - rng = date_range("2000-01-01 00:00:00", "2000-01-01 02:00", freq="s").as_unit(unit) - ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng) - msg = ( - "'origin' should be equal to 'epoch', 'start', 'start_day', " - "'end', 'end_day' or should be a Timestamp convertible type. Got " - f"'{origin}' instead." 
- ) - with pytest.raises(ValueError, match=msg): - ts.resample("5min", origin=origin) - - -@pytest.mark.parametrize("offset", ["invalid_value", "12dayys", "2000-30-30", object()]) -def test_resample_bad_offset(offset, unit): - rng = date_range("2000-01-01 00:00:00", "2000-01-01 02:00", freq="s").as_unit(unit) - ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng) - msg = f"'offset' should be a Timedelta convertible type. Got '{offset}' instead." - with pytest.raises(ValueError, match=msg): - ts.resample("5min", offset=offset) - - -def test_resample_origin_prime_freq(unit): - # GH 31809 - start, end = "2000-10-01 23:30:00", "2000-10-02 00:30:00" - rng = date_range(start, end, freq="7min").as_unit(unit) - ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng) - - exp_rng = date_range( - "2000-10-01 23:14:00", "2000-10-02 00:22:00", freq="17min" - ).as_unit(unit) - resampled = ts.resample("17min").mean() - tm.assert_index_equal(resampled.index, exp_rng) - resampled = ts.resample("17min", origin="start_day").mean() - tm.assert_index_equal(resampled.index, exp_rng) - - exp_rng = date_range( - "2000-10-01 23:30:00", "2000-10-02 00:21:00", freq="17min" - ).as_unit(unit) - resampled = ts.resample("17min", origin="start").mean() - tm.assert_index_equal(resampled.index, exp_rng) - resampled = ts.resample("17min", offset="23h30min").mean() - tm.assert_index_equal(resampled.index, exp_rng) - resampled = ts.resample("17min", origin="start_day", offset="23h30min").mean() - tm.assert_index_equal(resampled.index, exp_rng) - - exp_rng = date_range( - "2000-10-01 23:18:00", "2000-10-02 00:26:00", freq="17min" - ).as_unit(unit) - resampled = ts.resample("17min", origin="epoch").mean() - tm.assert_index_equal(resampled.index, exp_rng) - - exp_rng = date_range( - "2000-10-01 23:24:00", "2000-10-02 00:15:00", freq="17min" - ).as_unit(unit) - resampled = ts.resample("17min", origin="2000-01-01").mean() - tm.assert_index_equal(resampled.index, exp_rng) - - -def test_resample_origin_with_tz(unit): - # GH 31809 - msg = "The origin must have the same timezone as the index." 
- - tz = "Europe/Paris" - rng = date_range( - "2000-01-01 00:00:00", "2000-01-01 02:00", freq="s", tz=tz - ).as_unit(unit) - ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng) - - exp_rng = date_range( - "1999-12-31 23:57:00", "2000-01-01 01:57", freq="5min", tz=tz - ).as_unit(unit) - resampled = ts.resample("5min", origin="1999-12-31 23:57:00+00:00").mean() - tm.assert_index_equal(resampled.index, exp_rng) - - # origin of '1999-31-12 12:02:00+03:00' should be equivalent for this case - resampled = ts.resample("5min", origin="1999-12-31 12:02:00+03:00").mean() - tm.assert_index_equal(resampled.index, exp_rng) - - resampled = ts.resample("5min", origin="epoch", offset="2m").mean() - tm.assert_index_equal(resampled.index, exp_rng) - - with pytest.raises(ValueError, match=msg): - ts.resample("5min", origin="12/31/1999 23:57:00").mean() - - # if the series is not tz aware, origin should not be tz aware - rng = date_range("2000-01-01 00:00:00", "2000-01-01 02:00", freq="s").as_unit(unit) - ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng) - with pytest.raises(ValueError, match=msg): - ts.resample("5min", origin="12/31/1999 23:57:00+03:00").mean() - - -def test_resample_origin_epoch_with_tz_day_vs_24h(unit): - # GH 34474 - start, end = "2000-10-01 23:30:00+0500", "2000-12-02 00:30:00+0500" - rng = date_range(start, end, freq="7min").as_unit(unit) - random_values = np.random.default_rng(2).standard_normal(len(rng)) - ts_1 = Series(random_values, index=rng) - - result_1 = ts_1.resample("D", origin="epoch").mean() - result_2 = ts_1.resample("24H", origin="epoch").mean() - tm.assert_series_equal(result_1, result_2) - - # check that we have the same behavior with epoch even if we are not timezone aware - ts_no_tz = ts_1.tz_localize(None) - result_3 = ts_no_tz.resample("D", origin="epoch").mean() - result_4 = ts_no_tz.resample("24H", origin="epoch").mean() - tm.assert_series_equal(result_1, result_3.tz_localize(rng.tz), check_freq=False) - tm.assert_series_equal(result_1, result_4.tz_localize(rng.tz), check_freq=False) - - # check that we have the similar results with two different timezones (+2H and +5H) - start, end = "2000-10-01 23:30:00+0200", "2000-12-02 00:30:00+0200" - rng = date_range(start, end, freq="7min").as_unit(unit) - ts_2 = Series(random_values, index=rng) - result_5 = ts_2.resample("D", origin="epoch").mean() - result_6 = ts_2.resample("24H", origin="epoch").mean() - tm.assert_series_equal(result_1.tz_localize(None), result_5.tz_localize(None)) - tm.assert_series_equal(result_1.tz_localize(None), result_6.tz_localize(None)) - - -def test_resample_origin_with_day_freq_on_dst(unit): - # GH 31809 - tz = "America/Chicago" - - def _create_series(values, timestamps, freq="D"): - return Series( - values, - index=DatetimeIndex( - [Timestamp(t, tz=tz) for t in timestamps], freq=freq, ambiguous=True - ).as_unit(unit), - ) - - # test classical behavior of origin in a DST context - start = Timestamp("2013-11-02", tz=tz) - end = Timestamp("2013-11-03 23:59", tz=tz) - rng = date_range(start, end, freq="1h").as_unit(unit) - ts = Series(np.ones(len(rng)), index=rng) - - expected = _create_series([24.0, 25.0], ["2013-11-02", "2013-11-03"]) - for origin in ["epoch", "start", "start_day", start, None]: - result = ts.resample("D", origin=origin).sum() - tm.assert_series_equal(result, expected) - - # test complex behavior of origin/offset in a DST context - start = Timestamp("2013-11-03", tz=tz) - end = Timestamp("2013-11-03 23:59", tz=tz) - rng = 
date_range(start, end, freq="1h").as_unit(unit) - ts = Series(np.ones(len(rng)), index=rng) - - expected_ts = ["2013-11-02 22:00-05:00", "2013-11-03 22:00-06:00"] - expected = _create_series([23.0, 2.0], expected_ts) - result = ts.resample("D", origin="start", offset="-2H").sum() - tm.assert_series_equal(result, expected) - - expected_ts = ["2013-11-02 22:00-05:00", "2013-11-03 21:00-06:00"] - expected = _create_series([22.0, 3.0], expected_ts, freq="24H") - result = ts.resample("24H", origin="start", offset="-2H").sum() - tm.assert_series_equal(result, expected) - - expected_ts = ["2013-11-02 02:00-05:00", "2013-11-03 02:00-06:00"] - expected = _create_series([3.0, 22.0], expected_ts) - result = ts.resample("D", origin="start", offset="2H").sum() - tm.assert_series_equal(result, expected) - - expected_ts = ["2013-11-02 23:00-05:00", "2013-11-03 23:00-06:00"] - expected = _create_series([24.0, 1.0], expected_ts) - result = ts.resample("D", origin="start", offset="-1H").sum() - tm.assert_series_equal(result, expected) - - expected_ts = ["2013-11-02 01:00-05:00", "2013-11-03 01:00:00-0500"] - expected = _create_series([1.0, 24.0], expected_ts) - result = ts.resample("D", origin="start", offset="1H").sum() - tm.assert_series_equal(result, expected) - - -def test_resample_daily_anchored(unit): - rng = date_range("1/1/2000 0:00:00", periods=10000, freq="T").as_unit(unit) - ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng) - ts[:2] = np.nan # so results are the same - - result = ts[2:].resample("D", closed="left", label="left").mean() - expected = ts.resample("D", closed="left", label="left").mean() - tm.assert_series_equal(result, expected) - - -def test_resample_to_period_monthly_buglet(unit): - # GH #1259 - - rng = date_range("1/1/2000", "12/31/2000").as_unit(unit) - ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng) - - result = ts.resample("M", kind="period").mean() - exp_index = period_range("Jan-2000", "Dec-2000", freq="M") - tm.assert_index_equal(result.index, exp_index) - - -def test_period_with_agg(): - # aggregate a period resampler with a lambda - s2 = Series( - np.random.default_rng(2).integers(0, 5, 50), - index=period_range("2012-01-01", freq="H", periods=50), - dtype="float64", - ) - - expected = s2.to_timestamp().resample("D").mean().to_period() - result = s2.resample("D").agg(lambda x: x.mean()) - tm.assert_series_equal(result, expected) - - -def test_resample_segfault(unit): - # GH 8573 - # segfaulting in older versions - all_wins_and_wagers = [ - (1, datetime(2013, 10, 1, 16, 20), 1, 0), - (2, datetime(2013, 10, 1, 16, 10), 1, 0), - (2, datetime(2013, 10, 1, 18, 15), 1, 0), - (2, datetime(2013, 10, 1, 16, 10, 31), 1, 0), - ] - - df = DataFrame.from_records( - all_wins_and_wagers, columns=("ID", "timestamp", "A", "B") - ).set_index("timestamp") - df.index = df.index.as_unit(unit) - result = df.groupby("ID").resample("5min").sum() - expected = df.groupby("ID").apply(lambda x: x.resample("5min").sum()) - tm.assert_frame_equal(result, expected) - - -def test_resample_dtype_preservation(unit): - # GH 12202 - # validation tests for dtype preservation - - df = DataFrame( - { - "date": date_range(start="2016-01-01", periods=4, freq="W").as_unit(unit), - "group": [1, 1, 2, 2], - "val": Series([5, 6, 7, 8], dtype="int32"), - } - ).set_index("date") - - result = df.resample("1D").ffill() - assert result.val.dtype == np.int32 - - result = df.groupby("group").resample("1D").ffill() - assert result.val.dtype == np.int32 - - -def 
test_resample_dtype_coercion(unit): - pytest.importorskip("scipy.interpolate") - - # GH 16361 - df = {"a": [1, 3, 1, 4]} - df = DataFrame(df, index=date_range("2017-01-01", "2017-01-04").as_unit(unit)) - - expected = df.astype("float64").resample("H").mean()["a"].interpolate("cubic") - - result = df.resample("H")["a"].mean().interpolate("cubic") - tm.assert_series_equal(result, expected) - - result = df.resample("H").mean()["a"].interpolate("cubic") - tm.assert_series_equal(result, expected) - - -def test_weekly_resample_buglet(unit): - # #1327 - rng = date_range("1/1/2000", freq="B", periods=20).as_unit(unit) - ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng) - - resampled = ts.resample("W").mean() - expected = ts.resample("W-SUN").mean() - tm.assert_series_equal(resampled, expected) - - -def test_monthly_resample_error(unit): - # #1451 - dates = date_range("4/16/2012 20:00", periods=5000, freq="h").as_unit(unit) - ts = Series(np.random.default_rng(2).standard_normal(len(dates)), index=dates) - # it works! - ts.resample("M") - - -def test_nanosecond_resample_error(): - # GH 12307 - Values falls after last bin when - # Resampling using pd.tseries.offsets.Nano as period - start = 1443707890427 - exp_start = 1443707890400 - indx = date_range(start=pd.to_datetime(start), periods=10, freq="100n") - ts = Series(range(len(indx)), index=indx) - r = ts.resample(pd.tseries.offsets.Nano(100)) - result = r.agg("mean") - - exp_indx = date_range(start=pd.to_datetime(exp_start), periods=10, freq="100n") - exp = Series(range(len(exp_indx)), index=exp_indx, dtype=float) - - tm.assert_series_equal(result, exp) - - -def test_resample_anchored_intraday(simple_date_range_series, unit): - # #1471, #1458 - - rng = date_range("1/1/2012", "4/1/2012", freq="100min").as_unit(unit) - df = DataFrame(rng.month, index=rng) - - result = df.resample("M").mean() - expected = df.resample("M", kind="period").mean().to_timestamp(how="end") - expected.index += Timedelta(1, "ns") - Timedelta(1, "D") - expected.index = expected.index.as_unit(unit)._with_freq("infer") - assert expected.index.freq == "M" - tm.assert_frame_equal(result, expected) - - result = df.resample("M", closed="left").mean() - exp = df.shift(1, freq="D").resample("M", kind="period").mean() - exp = exp.to_timestamp(how="end") - - exp.index = exp.index + Timedelta(1, "ns") - Timedelta(1, "D") - exp.index = exp.index.as_unit(unit)._with_freq("infer") - assert exp.index.freq == "M" - tm.assert_frame_equal(result, exp) - - rng = date_range("1/1/2012", "4/1/2012", freq="100min").as_unit(unit) - df = DataFrame(rng.month, index=rng) - - result = df.resample("Q").mean() - expected = df.resample("Q", kind="period").mean().to_timestamp(how="end") - expected.index += Timedelta(1, "ns") - Timedelta(1, "D") - expected.index._data.freq = "Q" - expected.index._freq = lib.no_default - expected.index = expected.index.as_unit(unit) - tm.assert_frame_equal(result, expected) - - result = df.resample("Q", closed="left").mean() - expected = df.shift(1, freq="D").resample("Q", kind="period", closed="left").mean() - expected = expected.to_timestamp(how="end") - expected.index += Timedelta(1, "ns") - Timedelta(1, "D") - expected.index._data.freq = "Q" - expected.index._freq = lib.no_default - expected.index = expected.index.as_unit(unit) - tm.assert_frame_equal(result, expected) - - ts = simple_date_range_series("2012-04-29 23:00", "2012-04-30 5:00", freq="h") - ts.index = ts.index.as_unit(unit) - resampled = ts.resample("M").mean() - assert len(resampled) 
== 1 - - -@pytest.mark.parametrize("freq", ["MS", "BMS", "QS-MAR", "AS-DEC", "AS-JUN"]) -def test_resample_anchored_monthstart(simple_date_range_series, freq, unit): - ts = simple_date_range_series("1/1/2000", "12/31/2002") - ts.index = ts.index.as_unit(unit) - ts.resample(freq).mean() - - -@pytest.mark.parametrize("label, sec", [[None, 2.0], ["right", "4.2"]]) -def test_resample_anchored_multiday(label, sec): - # When resampling a range spanning multiple days, ensure that the - # start date gets used to determine the offset. Fixes issue where - # a one day period is not a multiple of the frequency. - # - # See: https://github.com/pandas-dev/pandas/issues/8683 - - index1 = date_range("2014-10-14 23:06:23.206", periods=3, freq="400L") - index2 = date_range("2014-10-15 23:00:00", periods=2, freq="2200L") - index = index1.union(index2) - - s = Series(np.random.default_rng(2).standard_normal(5), index=index) - - # Ensure left closing works - result = s.resample("2200L", label=label).mean() - assert result.index[-1] == Timestamp(f"2014-10-15 23:00:{sec}00") - - -def test_corner_cases(unit): - # miscellaneous test coverage - - rng = date_range("1/1/2000", periods=12, freq="t").as_unit(unit) - ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng) - - result = ts.resample("5t", closed="right", label="left").mean() - ex_index = date_range("1999-12-31 23:55", periods=4, freq="5t").as_unit(unit) - tm.assert_index_equal(result.index, ex_index) - - -def test_corner_cases_period(simple_period_range_series): - # miscellaneous test coverage - len0pts = simple_period_range_series("2007-01", "2010-05", freq="M")[:0] - # it works - result = len0pts.resample("A-DEC").mean() - assert len(result) == 0 - - -def test_corner_cases_date(simple_date_range_series, unit): - # resample to periods - ts = simple_date_range_series("2000-04-28", "2000-04-30 11:00", freq="h") - ts.index = ts.index.as_unit(unit) - result = ts.resample("M", kind="period").mean() - assert len(result) == 1 - assert result.index[0] == Period("2000-04", freq="M") - - -def test_anchored_lowercase_buglet(unit): - dates = date_range("4/16/2012 20:00", periods=50000, freq="s").as_unit(unit) - ts = Series(np.random.default_rng(2).standard_normal(len(dates)), index=dates) - # it works! 
- ts.resample("d").mean() - - -def test_upsample_apply_functions(unit): - # #1596 - rng = date_range("2012-06-12", periods=4, freq="h").as_unit(unit) - - ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng) - - result = ts.resample("20min").aggregate(["mean", "sum"]) - assert isinstance(result, DataFrame) - - -def test_resample_not_monotonic(unit): - rng = date_range("2012-06-12", periods=200, freq="h").as_unit(unit) - ts = Series(np.random.default_rng(2).standard_normal(len(rng)), index=rng) - - ts = ts.take(np.random.default_rng(2).permutation(len(ts))) - - result = ts.resample("D").sum() - exp = ts.sort_index().resample("D").sum() - tm.assert_series_equal(result, exp) - - -@pytest.mark.parametrize( - "dtype", - [ - "int64", - "int32", - "float64", - pytest.param( - "float32", - marks=pytest.mark.xfail( - reason="Empty groups cause x.mean() to return float64" - ), - ), - ], -) -def test_resample_median_bug_1688(dtype): - df = DataFrame( - [1, 2], - index=[datetime(2012, 1, 1, 0, 0, 0), datetime(2012, 1, 1, 0, 5, 0)], - dtype=dtype, - ) - - result = df.resample("T").apply(lambda x: x.mean()) - exp = df.asfreq("T") - tm.assert_frame_equal(result, exp) - - result = df.resample("T").median() - exp = df.asfreq("T") - tm.assert_frame_equal(result, exp) - - -def test_how_lambda_functions(simple_date_range_series, unit): - ts = simple_date_range_series("1/1/2000", "4/1/2000") - ts.index = ts.index.as_unit(unit) - - result = ts.resample("M").apply(lambda x: x.mean()) - exp = ts.resample("M").mean() - tm.assert_series_equal(result, exp) - - foo_exp = ts.resample("M").mean() - foo_exp.name = "foo" - bar_exp = ts.resample("M").std() - bar_exp.name = "bar" - - result = ts.resample("M").apply([lambda x: x.mean(), lambda x: x.std(ddof=1)]) - result.columns = ["foo", "bar"] - tm.assert_series_equal(result["foo"], foo_exp) - tm.assert_series_equal(result["bar"], bar_exp) - - # this is a MI Series, so comparing the names of the results - # doesn't make sense - result = ts.resample("M").aggregate( - {"foo": lambda x: x.mean(), "bar": lambda x: x.std(ddof=1)} - ) - tm.assert_series_equal(result["foo"], foo_exp, check_names=False) - tm.assert_series_equal(result["bar"], bar_exp, check_names=False) - - -def test_resample_unequal_times(unit): - # #1772 - start = datetime(1999, 3, 1, 5) - # end hour is less than start - end = datetime(2012, 7, 31, 4) - bad_ind = date_range(start, end, freq="30min").as_unit(unit) - df = DataFrame({"close": 1}, index=bad_ind) - - # it works! 
- df.resample("AS").sum() - - -def test_resample_consistency(unit): - # GH 6418 - # resample with bfill / limit / reindex consistency - - i30 = date_range("2002-02-02", periods=4, freq="30T").as_unit(unit) - s = Series(np.arange(4.0), index=i30) - s.iloc[2] = np.nan - - # Upsample by factor 3 with reindex() and resample() methods: - i10 = date_range(i30[0], i30[-1], freq="10T").as_unit(unit) - - s10 = s.reindex(index=i10, method="bfill") - s10_2 = s.reindex(index=i10, method="bfill", limit=2) - rl = s.reindex_like(s10, method="bfill", limit=2) - r10_2 = s.resample("10Min").bfill(limit=2) - r10 = s.resample("10Min").bfill() - - # s10_2, r10, r10_2, rl should all be equal - tm.assert_series_equal(s10_2, r10) - tm.assert_series_equal(s10_2, r10_2) - tm.assert_series_equal(s10_2, rl) - - -dates1: list[DatetimeNaTType] = [ - datetime(2014, 10, 1), - datetime(2014, 9, 3), - datetime(2014, 11, 5), - datetime(2014, 9, 5), - datetime(2014, 10, 8), - datetime(2014, 7, 15), -] - -dates2: list[DatetimeNaTType] = ( - dates1[:2] + [pd.NaT] + dates1[2:4] + [pd.NaT] + dates1[4:] -) -dates3 = [pd.NaT] + dates1 + [pd.NaT] - - -@pytest.mark.parametrize("dates", [dates1, dates2, dates3]) -def test_resample_timegrouper(dates): - # GH 7227 - df = DataFrame({"A": dates, "B": np.arange(len(dates))}) - result = df.set_index("A").resample("M").count() - exp_idx = DatetimeIndex( - ["2014-07-31", "2014-08-31", "2014-09-30", "2014-10-31", "2014-11-30"], - freq="M", - name="A", - ) - expected = DataFrame({"B": [1, 0, 2, 2, 1]}, index=exp_idx) - if df["A"].isna().any(): - expected.index = expected.index._with_freq(None) - tm.assert_frame_equal(result, expected) - - result = df.groupby(Grouper(freq="M", key="A")).count() - tm.assert_frame_equal(result, expected) - - df = DataFrame({"A": dates, "B": np.arange(len(dates)), "C": np.arange(len(dates))}) - result = df.set_index("A").resample("M").count() - expected = DataFrame( - {"B": [1, 0, 2, 2, 1], "C": [1, 0, 2, 2, 1]}, - index=exp_idx, - columns=["B", "C"], - ) - if df["A"].isna().any(): - expected.index = expected.index._with_freq(None) - tm.assert_frame_equal(result, expected) - - result = df.groupby(Grouper(freq="M", key="A")).count() - tm.assert_frame_equal(result, expected) - - -def test_resample_nunique(unit): - # GH 12352 - df = DataFrame( - { - "ID": { - Timestamp("2015-06-05 00:00:00"): "0010100903", - Timestamp("2015-06-08 00:00:00"): "0010150847", - }, - "DATE": { - Timestamp("2015-06-05 00:00:00"): "2015-06-05", - Timestamp("2015-06-08 00:00:00"): "2015-06-08", - }, - } - ) - df.index = df.index.as_unit(unit) - r = df.resample("D") - g = df.groupby(Grouper(freq="D")) - expected = df.groupby(Grouper(freq="D")).ID.apply(lambda x: x.nunique()) - assert expected.name == "ID" - - for t in [r, g]: - result = t.ID.nunique() - tm.assert_series_equal(result, expected) - - result = df.ID.resample("D").nunique() - tm.assert_series_equal(result, expected) - - result = df.ID.groupby(Grouper(freq="D")).nunique() - tm.assert_series_equal(result, expected) - - -def test_resample_nunique_preserves_column_level_names(unit): - # see gh-23222 - df = tm.makeTimeDataFrame(freq="1D").abs() - df.index = df.index.as_unit(unit) - df.columns = pd.MultiIndex.from_arrays( - [df.columns.tolist()] * 2, names=["lev0", "lev1"] - ) - result = df.resample("1h").nunique() - tm.assert_index_equal(df.columns, result.columns) - - -@pytest.mark.parametrize( - "func", - [ - lambda x: x.nunique(), - lambda x: x.agg(Series.nunique), - lambda x: x.agg("nunique"), - ], - ids=["nunique", 
"series_nunique", "nunique_str"], -) -def test_resample_nunique_with_date_gap(func, unit): - # GH 13453 - # Since all elements are unique, these should all be the same - index = date_range("1-1-2000", "2-15-2000", freq="h").as_unit(unit) - index2 = date_range("4-15-2000", "5-15-2000", freq="h").as_unit(unit) - index3 = index.append(index2) - s = Series(range(len(index3)), index=index3, dtype="int64") - r = s.resample("M") - result = r.count() - expected = func(r) - tm.assert_series_equal(result, expected) - - -@pytest.mark.parametrize("n", [10000, 100000]) -@pytest.mark.parametrize("k", [10, 100, 1000]) -def test_resample_group_info(n, k, unit): - # GH10914 - - # use a fixed seed to always have the same uniques - prng = np.random.default_rng(2) - - dr = date_range(start="2015-08-27", periods=n // 10, freq="T").as_unit(unit) - ts = Series(prng.integers(0, n // k, n).astype("int64"), index=prng.choice(dr, n)) - - left = ts.resample("30T").nunique() - ix = date_range(start=ts.index.min(), end=ts.index.max(), freq="30T").as_unit(unit) - - vals = ts.values - bins = np.searchsorted(ix.values, ts.index, side="right") - - sorter = np.lexsort((vals, bins)) - vals, bins = vals[sorter], bins[sorter] - - mask = np.r_[True, vals[1:] != vals[:-1]] - mask |= np.r_[True, bins[1:] != bins[:-1]] - - arr = np.bincount(bins[mask] - 1, minlength=len(ix)).astype("int64", copy=False) - right = Series(arr, index=ix) - - tm.assert_series_equal(left, right) - - -def test_resample_size(unit): - n = 10000 - dr = date_range("2015-09-19", periods=n, freq="T").as_unit(unit) - ts = Series( - np.random.default_rng(2).standard_normal(n), - index=np.random.default_rng(2).choice(dr, n), - ) - - left = ts.resample("7T").size() - ix = date_range(start=left.index.min(), end=ts.index.max(), freq="7T").as_unit(unit) - - bins = np.searchsorted(ix.values, ts.index.values, side="right") - val = np.bincount(bins, minlength=len(ix) + 1)[1:].astype("int64", copy=False) - - right = Series(val, index=ix) - tm.assert_series_equal(left, right) - - -def test_resample_across_dst(): - # The test resamples a DatetimeIndex with values before and after a - # DST change - # Issue: 14682 - - # The DatetimeIndex we will start with - # (note that DST happens at 03:00+02:00 -> 02:00+01:00) - # 2016-10-30 02:23:00+02:00, 2016-10-30 02:23:00+01:00 - df1 = DataFrame([1477786980, 1477790580], columns=["ts"]) - dti1 = DatetimeIndex( - pd.to_datetime(df1.ts, unit="s") - .dt.tz_localize("UTC") - .dt.tz_convert("Europe/Madrid") - ) - - # The expected DatetimeIndex after resampling. 
- # 2016-10-30 02:00:00+02:00, 2016-10-30 02:00:00+01:00 - df2 = DataFrame([1477785600, 1477789200], columns=["ts"]) - dti2 = DatetimeIndex( - pd.to_datetime(df2.ts, unit="s") - .dt.tz_localize("UTC") - .dt.tz_convert("Europe/Madrid"), - freq="H", - ) - df = DataFrame([5, 5], index=dti1) - - result = df.resample(rule="H").sum() - expected = DataFrame([5, 5], index=dti2) - - tm.assert_frame_equal(result, expected) - - -def test_groupby_with_dst_time_change(unit): - # GH 24972 - index = ( - DatetimeIndex([1478064900001000000, 1480037118776792000], tz="UTC") - .tz_convert("America/Chicago") - .as_unit(unit) - ) - - df = DataFrame([1, 2], index=index) - result = df.groupby(Grouper(freq="1d")).last() - expected_index_values = date_range( - "2016-11-02", "2016-11-24", freq="d", tz="America/Chicago" - ).as_unit(unit) - - index = DatetimeIndex(expected_index_values) - expected = DataFrame([1.0] + ([np.nan] * 21) + [2.0], index=index) - tm.assert_frame_equal(result, expected) - - -def test_resample_dst_anchor(unit): - # 5172 - dti = DatetimeIndex([datetime(2012, 11, 4, 23)], tz="US/Eastern").as_unit(unit) - df = DataFrame([5], index=dti) - - dti = DatetimeIndex(df.index.normalize(), freq="D").as_unit(unit) - expected = DataFrame([5], index=dti) - tm.assert_frame_equal(df.resample(rule="D").sum(), expected) - df.resample(rule="MS").sum() - tm.assert_frame_equal( - df.resample(rule="MS").sum(), - DataFrame( - [5], - index=DatetimeIndex( - [datetime(2012, 11, 1)], tz="US/Eastern", freq="MS" - ).as_unit(unit), - ), - ) - - dti = date_range( - "2013-09-30", "2013-11-02", freq="30Min", tz="Europe/Paris" - ).as_unit(unit) - values = range(dti.size) - df = DataFrame({"a": values, "b": values, "c": values}, index=dti, dtype="int64") - how = {"a": "min", "b": "max", "c": "count"} - - tm.assert_frame_equal( - df.resample("W-MON").agg(how)[["a", "b", "c"]], - DataFrame( - { - "a": [0, 48, 384, 720, 1056, 1394], - "b": [47, 383, 719, 1055, 1393, 1586], - "c": [48, 336, 336, 336, 338, 193], - }, - index=date_range( - "9/30/2013", "11/4/2013", freq="W-MON", tz="Europe/Paris" - ).as_unit(unit), - ), - "W-MON Frequency", - ) - - tm.assert_frame_equal( - df.resample("2W-MON").agg(how)[["a", "b", "c"]], - DataFrame( - { - "a": [0, 48, 720, 1394], - "b": [47, 719, 1393, 1586], - "c": [48, 672, 674, 193], - }, - index=date_range( - "9/30/2013", "11/11/2013", freq="2W-MON", tz="Europe/Paris" - ).as_unit(unit), - ), - "2W-MON Frequency", - ) - - tm.assert_frame_equal( - df.resample("MS").agg(how)[["a", "b", "c"]], - DataFrame( - {"a": [0, 48, 1538], "b": [47, 1537, 1586], "c": [48, 1490, 49]}, - index=date_range( - "9/1/2013", "11/1/2013", freq="MS", tz="Europe/Paris" - ).as_unit(unit), - ), - "MS Frequency", - ) - - tm.assert_frame_equal( - df.resample("2MS").agg(how)[["a", "b", "c"]], - DataFrame( - {"a": [0, 1538], "b": [1537, 1586], "c": [1538, 49]}, - index=date_range( - "9/1/2013", "11/1/2013", freq="2MS", tz="Europe/Paris" - ).as_unit(unit), - ), - "2MS Frequency", - ) - - df_daily = df["10/26/2013":"10/29/2013"] - tm.assert_frame_equal( - df_daily.resample("D").agg({"a": "min", "b": "max", "c": "count"})[ - ["a", "b", "c"] - ], - DataFrame( - { - "a": [1248, 1296, 1346, 1394], - "b": [1295, 1345, 1393, 1441], - "c": [48, 50, 48, 48], - }, - index=date_range( - "10/26/2013", "10/29/2013", freq="D", tz="Europe/Paris" - ).as_unit(unit), - ), - "D Frequency", - ) - - -def test_downsample_across_dst(unit): - # GH 8531 - tz = pytz.timezone("Europe/Berlin") - dt = datetime(2014, 10, 26) - dates = 
date_range(tz.localize(dt), periods=4, freq="2H").as_unit(unit) - result = Series(5, index=dates).resample("H").mean() - expected = Series( - [5.0, np.nan] * 3 + [5.0], - index=date_range(tz.localize(dt), periods=7, freq="H").as_unit(unit), - ) - tm.assert_series_equal(result, expected) - - -def test_downsample_across_dst_weekly(unit): - # GH 9119, GH 21459 - df = DataFrame( - index=DatetimeIndex( - ["2017-03-25", "2017-03-26", "2017-03-27", "2017-03-28", "2017-03-29"], - tz="Europe/Amsterdam", - ).as_unit(unit), - data=[11, 12, 13, 14, 15], - ) - result = df.resample("1W").sum() - expected = DataFrame( - [23, 42], - index=DatetimeIndex( - ["2017-03-26", "2017-04-02"], tz="Europe/Amsterdam", freq="W" - ).as_unit(unit), - ) - tm.assert_frame_equal(result, expected) - - -def test_downsample_across_dst_weekly_2(unit): - # GH 9119, GH 21459 - idx = date_range("2013-04-01", "2013-05-01", tz="Europe/London", freq="H").as_unit( - unit - ) - s = Series(index=idx, dtype=np.float64) - result = s.resample("W").mean() - expected = Series( - index=date_range("2013-04-07", freq="W", periods=5, tz="Europe/London").as_unit( - unit - ), - dtype=np.float64, - ) - tm.assert_series_equal(result, expected) - - -def test_downsample_dst_at_midnight(unit): - # GH 25758 - start = datetime(2018, 11, 3, 12) - end = datetime(2018, 11, 5, 12) - index = date_range(start, end, freq="1H").as_unit(unit) - index = index.tz_localize("UTC").tz_convert("America/Havana") - data = list(range(len(index))) - dataframe = DataFrame(data, index=index) - result = dataframe.groupby(Grouper(freq="1D")).mean() - - dti = date_range("2018-11-03", periods=3).tz_localize( - "America/Havana", ambiguous=True - ) - dti = DatetimeIndex(dti, freq="D").as_unit(unit) - expected = DataFrame([7.5, 28.0, 44.5], index=dti) - tm.assert_frame_equal(result, expected) - - -def test_resample_with_nat(unit): - # GH 13020 - index = DatetimeIndex( - [ - pd.NaT, - "1970-01-01 00:00:00", - pd.NaT, - "1970-01-01 00:00:01", - "1970-01-01 00:00:02", - ] - ) - frame = DataFrame([2, 3, 5, 7, 11], index=index) - frame.index = frame.index.as_unit(unit) - - index_1s = DatetimeIndex( - ["1970-01-01 00:00:00", "1970-01-01 00:00:01", "1970-01-01 00:00:02"] - ).as_unit(unit) - frame_1s = DataFrame([3.0, 7.0, 11.0], index=index_1s) - tm.assert_frame_equal(frame.resample("1s").mean(), frame_1s) - - index_2s = DatetimeIndex(["1970-01-01 00:00:00", "1970-01-01 00:00:02"]).as_unit( - unit - ) - frame_2s = DataFrame([5.0, 11.0], index=index_2s) - tm.assert_frame_equal(frame.resample("2s").mean(), frame_2s) - - index_3s = DatetimeIndex(["1970-01-01 00:00:00"]).as_unit(unit) - frame_3s = DataFrame([7.0], index=index_3s) - tm.assert_frame_equal(frame.resample("3s").mean(), frame_3s) - - tm.assert_frame_equal(frame.resample("60s").mean(), frame_3s) - - -def test_resample_datetime_values(unit): - # GH 13119 - # check that datetime dtype is preserved when NaT values are - # introduced by the resampling - - dates = [datetime(2016, 1, 15), datetime(2016, 1, 19)] - df = DataFrame({"timestamp": dates}, index=dates) - df.index = df.index.as_unit(unit) - - exp = Series( - [datetime(2016, 1, 15), pd.NaT, datetime(2016, 1, 19)], - index=date_range("2016-01-15", periods=3, freq="2D").as_unit(unit), - name="timestamp", - ) - - res = df.resample("2D").first()["timestamp"] - tm.assert_series_equal(res, exp) - res = df["timestamp"].resample("2D").first() - tm.assert_series_equal(res, exp) - - -def test_resample_apply_with_additional_args(series, unit): - # GH 14615 - def f(data, add_arg): - return 
np.mean(data) * add_arg - - series.index = series.index.as_unit(unit) - - multiplier = 10 - result = series.resample("D").apply(f, multiplier) - expected = series.resample("D").mean().multiply(multiplier) - tm.assert_series_equal(result, expected) - - # Testing as kwarg - result = series.resample("D").apply(f, add_arg=multiplier) - expected = series.resample("D").mean().multiply(multiplier) - tm.assert_series_equal(result, expected) - - # Testing dataframe - df = DataFrame({"A": 1, "B": 2}, index=date_range("2017", periods=10)) - result = df.groupby("A").resample("D").agg(f, multiplier).astype(float) - expected = df.groupby("A").resample("D").mean().multiply(multiplier) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize("k", [1, 2, 3]) -@pytest.mark.parametrize( - "n1, freq1, n2, freq2", - [ - (30, "S", 0.5, "Min"), - (60, "S", 1, "Min"), - (3600, "S", 1, "H"), - (60, "Min", 1, "H"), - (21600, "S", 0.25, "D"), - (86400, "S", 1, "D"), - (43200, "S", 0.5, "D"), - (1440, "Min", 1, "D"), - (12, "H", 0.5, "D"), - (24, "H", 1, "D"), - ], -) -def test_resample_equivalent_offsets(n1, freq1, n2, freq2, k, unit): - # GH 24127 - n1_ = n1 * k - n2_ = n2 * k - dti = date_range("19910905 13:00", "19911005 07:00", freq=freq1).as_unit(unit) - ser = Series(range(len(dti)), index=dti) - - result1 = ser.resample(str(n1_) + freq1).mean() - result2 = ser.resample(str(n2_) + freq2).mean() - tm.assert_series_equal(result1, result2) - - -@pytest.mark.parametrize( - "first,last,freq,exp_first,exp_last", - [ - ("19910905", "19920406", "D", "19910905", "19920407"), - ("19910905 00:00", "19920406 06:00", "D", "19910905", "19920407"), - ("19910905 06:00", "19920406 06:00", "H", "19910905 06:00", "19920406 07:00"), - ("19910906", "19920406", "M", "19910831", "19920430"), - ("19910831", "19920430", "M", "19910831", "19920531"), - ("1991-08", "1992-04", "M", "19910831", "19920531"), - ], -) -def test_get_timestamp_range_edges(first, last, freq, exp_first, exp_last, unit): - first = Period(first) - first = first.to_timestamp(first.freq).as_unit(unit) - last = Period(last) - last = last.to_timestamp(last.freq).as_unit(unit) - - exp_first = Timestamp(exp_first) - exp_last = Timestamp(exp_last) - - freq = pd.tseries.frequencies.to_offset(freq) - result = _get_timestamp_range_edges(first, last, freq, unit="ns") - expected = (exp_first, exp_last) - assert result == expected - - -@pytest.mark.parametrize("duplicates", [True, False]) -def test_resample_apply_product(duplicates, unit): - # GH 5586 - index = date_range(start="2012-01-31", freq="M", periods=12).as_unit(unit) - - ts = Series(range(12), index=index) - df = DataFrame({"A": ts, "B": ts + 2}) - if duplicates: - df.columns = ["A", "A"] - - msg = "using DatetimeIndexResampler.prod" - with tm.assert_produces_warning(FutureWarning, match=msg): - result = df.resample("Q").apply(np.prod) - expected = DataFrame( - np.array([[0, 24], [60, 210], [336, 720], [990, 1716]], dtype=np.int64), - index=DatetimeIndex( - ["2012-03-31", "2012-06-30", "2012-09-30", "2012-12-31"], freq="Q-DEC" - ).as_unit(unit), - columns=df.columns, - ) - tm.assert_frame_equal(result, expected) - - -@pytest.mark.parametrize( - "first,last,freq_in,freq_out,exp_last", - [ - ( - "2020-03-28", - "2020-03-31", - "D", - "24H", - "2020-03-30 01:00", - ), # includes transition into DST - ( - "2020-03-28", - "2020-10-27", - "D", - "24H", - "2020-10-27 00:00", - ), # includes transition into and out of DST - ( - "2020-10-25", - "2020-10-27", - "D", - "24H", - "2020-10-26 23:00", - ), # includes 
transition out of DST - ( - "2020-03-28", - "2020-03-31", - "24H", - "D", - "2020-03-30 00:00", - ), # same as above, but from 24H to D - ("2020-03-28", "2020-10-27", "24H", "D", "2020-10-27 00:00"), - ("2020-10-25", "2020-10-27", "24H", "D", "2020-10-26 00:00"), - ], -) -def test_resample_calendar_day_with_dst( - first: str, last: str, freq_in: str, freq_out: str, exp_last: str, unit -): - # GH 35219 - ts = Series( - 1.0, date_range(first, last, freq=freq_in, tz="Europe/Amsterdam").as_unit(unit) - ) - result = ts.resample(freq_out).ffill() - expected = Series( - 1.0, - date_range(first, exp_last, freq=freq_out, tz="Europe/Amsterdam").as_unit(unit), - ) - tm.assert_series_equal(result, expected) - - -@pytest.mark.parametrize("func", ["min", "max", "first", "last"]) -def test_resample_aggregate_functions_min_count(func, unit): - # GH#37768 - index = date_range(start="2020", freq="M", periods=3).as_unit(unit) - ser = Series([1, np.nan, np.nan], index) - result = getattr(ser.resample("Q"), func)(min_count=2) - expected = Series( - [np.nan], - index=DatetimeIndex(["2020-03-31"], freq="Q-DEC").as_unit(unit), - ) - tm.assert_series_equal(result, expected) - - -def test_resample_unsigned_int(any_unsigned_int_numpy_dtype, unit): - # gh-43329 - df = DataFrame( - index=date_range(start="2000-01-01", end="2000-01-03 23", freq="12H").as_unit( - unit - ), - columns=["x"], - data=[0, 1, 0] * 2, - dtype=any_unsigned_int_numpy_dtype, - ) - df = df.loc[(df.index < "2000-01-02") | (df.index > "2000-01-03"), :] - - result = df.resample("D").max() - - expected = DataFrame( - [1, np.nan, 0], - columns=["x"], - index=date_range(start="2000-01-01", end="2000-01-03 23", freq="D").as_unit( - unit - ), - ) - tm.assert_frame_equal(result, expected) - - -def test_long_rule_non_nano(): - # https://github.com/pandas-dev/pandas/issues/51024 - idx = date_range("0300-01-01", "2000-01-01", unit="s", freq="100Y") - ser = Series([1, 4, 2, 8, 5, 7, 1, 4, 2, 8, 5, 7, 1, 4, 2, 8, 5], index=idx) - result = ser.resample("200Y").mean() - expected_idx = DatetimeIndex( - np.array( - [ - "0300-12-31", - "0500-12-31", - "0700-12-31", - "0900-12-31", - "1100-12-31", - "1300-12-31", - "1500-12-31", - "1700-12-31", - "1900-12-31", - ] - ).astype("datetime64[s]"), - freq="200A-DEC", - ) - expected = Series([1.0, 3.0, 6.5, 4.0, 3.0, 6.5, 4.0, 3.0, 6.5], index=expected_idx) - tm.assert_series_equal(result, expected) - - -def test_resample_empty_series_with_tz(): - # GH#53664 - df = DataFrame({"ts": [], "values": []}).astype( - {"ts": "datetime64[ns, Atlantic/Faroe]"} - ) - result = df.resample("2MS", on="ts", closed="left", label="left", origin="start")[ - "values" - ].sum() - - expected_idx = DatetimeIndex( - [], freq="2MS", name="ts", dtype="datetime64[ns, Atlantic/Faroe]" - ) - expected = Series([], index=expected_idx, name="values", dtype="float64") - tm.assert_series_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_to_frame.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_to_frame.py deleted file mode 100644 index 0eadf696b34cc034d2be76ce5daba2cff679da74..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/series/methods/test_to_frame.py +++ /dev/null @@ -1,63 +0,0 @@ -import pytest - -from pandas import ( - DataFrame, - Index, - Series, -) -import pandas._testing as tm - - -class TestToFrame: - def 
test_to_frame_respects_name_none(self): - # GH#44212 if we explicitly pass name=None, then that should be respected, - # not changed to 0 - # GH-45448 this is first deprecated & enforced in 2.0 - ser = Series(range(3)) - result = ser.to_frame(None) - - exp_index = Index([None], dtype=object) - tm.assert_index_equal(result.columns, exp_index) - - result = ser.rename("foo").to_frame(None) - exp_index = Index([None], dtype=object) - tm.assert_index_equal(result.columns, exp_index) - - def test_to_frame(self, datetime_series): - datetime_series.name = None - rs = datetime_series.to_frame() - xp = DataFrame(datetime_series.values, index=datetime_series.index) - tm.assert_frame_equal(rs, xp) - - datetime_series.name = "testname" - rs = datetime_series.to_frame() - xp = DataFrame( - {"testname": datetime_series.values}, index=datetime_series.index - ) - tm.assert_frame_equal(rs, xp) - - rs = datetime_series.to_frame(name="testdifferent") - xp = DataFrame( - {"testdifferent": datetime_series.values}, index=datetime_series.index - ) - tm.assert_frame_equal(rs, xp) - - @pytest.mark.filterwarnings( - "ignore:Passing a BlockManager|Passing a SingleBlockManager:DeprecationWarning" - ) - def test_to_frame_expanddim(self): - # GH#9762 - - class SubclassedSeries(Series): - @property - def _constructor_expanddim(self): - return SubclassedFrame - - class SubclassedFrame(DataFrame): - pass - - ser = SubclassedSeries([1, 2, 3], name="X") - result = ser.to_frame() - assert isinstance(result, SubclassedFrame) - expected = SubclassedFrame({"X": [1, 2, 3]}) - tm.assert_frame_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/tenacity/_utils.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/tenacity/_utils.py deleted file mode 100644 index d5c4c9de591a80779ddda1e3e82d4745d7dbab4f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/tenacity/_utils.py +++ /dev/null @@ -1,68 +0,0 @@ -# Copyright 2016 Julien Danjou -# Copyright 2016 Joshua Harlow -# Copyright 2013-2014 Ray Holder -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import sys -import typing - - -# sys.maxsize: -# An integer giving the maximum value a variable of type Py_ssize_t can take. -MAX_WAIT = sys.maxsize / 2 - - -def find_ordinal(pos_num: int) -> str: - # See: https://en.wikipedia.org/wiki/English_numerals#Ordinal_numbers - if pos_num == 0: - return "th" - elif pos_num == 1: - return "st" - elif pos_num == 2: - return "nd" - elif pos_num == 3: - return "rd" - elif 4 <= pos_num <= 20: - return "th" - else: - return find_ordinal(pos_num % 10) - - -def to_ordinal(pos_num: int) -> str: - return f"{pos_num}{find_ordinal(pos_num)}" - - -def get_callback_name(cb: typing.Callable[..., typing.Any]) -> str: - """Get a callback fully-qualified name. - - If no name can be produced ``repr(cb)`` is called and returned. 
- """ - segments = [] - try: - segments.append(cb.__qualname__) - except AttributeError: - try: - segments.append(cb.__name__) - except AttributeError: - pass - if not segments: - return repr(cb) - else: - try: - # When running under sphinx it appears this can be none? - if cb.__module__: - segments.insert(0, cb.__module__) - except AttributeError: - pass - return ".".join(segments) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/v1/dataclasses.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/v1/dataclasses.py deleted file mode 100644 index 86bad1e6381b1e729889eaa8be41fe5e6c22c269..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/v1/dataclasses.py +++ /dev/null @@ -1,478 +0,0 @@ -""" -The main purpose is to enhance stdlib dataclasses by adding validation -A pydantic dataclass can be generated from scratch or from a stdlib one. - -Behind the scene, a pydantic dataclass is just like a regular one on which we attach -a `BaseModel` and magic methods to trigger the validation of the data. -`__init__` and `__post_init__` are hence overridden and have extra logic to be -able to validate input data. - -When a pydantic dataclass is generated from scratch, it's just a plain dataclass -with validation triggered at initialization - -The tricky part if for stdlib dataclasses that are converted after into pydantic ones e.g. - -```py -@dataclasses.dataclass -class M: - x: int - -ValidatedM = pydantic.dataclasses.dataclass(M) -``` - -We indeed still want to support equality, hashing, repr, ... as if it was the stdlib one! - -```py -assert isinstance(ValidatedM(x=1), M) -assert ValidatedM(x=1) == M(x=1) -``` - -This means we **don't want to create a new dataclass that inherits from it** -The trick is to create a wrapper around `M` that will act as a proxy to trigger -validation without altering default `M` behaviour. 
-""" -import copy -import dataclasses -import sys -from contextlib import contextmanager -from functools import wraps -from typing import TYPE_CHECKING, Any, Callable, ClassVar, Dict, Generator, Optional, Type, TypeVar, Union, overload - -from typing_extensions import dataclass_transform - -from .class_validators import gather_all_validators -from .config import BaseConfig, ConfigDict, Extra, get_config -from .error_wrappers import ValidationError -from .errors import DataclassTypeError -from .fields import Field, FieldInfo, Required, Undefined -from .main import create_model, validate_model -from .utils import ClassAttribute - -if TYPE_CHECKING: - from .main import BaseModel - from .typing import CallableGenerator, NoArgAnyCallable - - DataclassT = TypeVar('DataclassT', bound='Dataclass') - - DataclassClassOrWrapper = Union[Type['Dataclass'], 'DataclassProxy'] - - class Dataclass: - # stdlib attributes - __dataclass_fields__: ClassVar[Dict[str, Any]] - __dataclass_params__: ClassVar[Any] # in reality `dataclasses._DataclassParams` - __post_init__: ClassVar[Callable[..., None]] - - # Added by pydantic - __pydantic_run_validation__: ClassVar[bool] - __post_init_post_parse__: ClassVar[Callable[..., None]] - __pydantic_initialised__: ClassVar[bool] - __pydantic_model__: ClassVar[Type[BaseModel]] - __pydantic_validate_values__: ClassVar[Callable[['Dataclass'], None]] - __pydantic_has_field_info_default__: ClassVar[bool] # whether a `pydantic.Field` is used as default value - - def __init__(self, *args: object, **kwargs: object) -> None: - pass - - @classmethod - def __get_validators__(cls: Type['Dataclass']) -> 'CallableGenerator': - pass - - @classmethod - def __validate__(cls: Type['DataclassT'], v: Any) -> 'DataclassT': - pass - - -__all__ = [ - 'dataclass', - 'set_validation', - 'create_pydantic_model_from_dataclass', - 'is_builtin_dataclass', - 'make_dataclass_validator', -] - -_T = TypeVar('_T') - -if sys.version_info >= (3, 10): - - @dataclass_transform(field_specifiers=(dataclasses.field, Field)) - @overload - def dataclass( - *, - init: bool = True, - repr: bool = True, - eq: bool = True, - order: bool = False, - unsafe_hash: bool = False, - frozen: bool = False, - config: Union[ConfigDict, Type[object], None] = None, - validate_on_init: Optional[bool] = None, - use_proxy: Optional[bool] = None, - kw_only: bool = ..., - ) -> Callable[[Type[_T]], 'DataclassClassOrWrapper']: - ... - - @dataclass_transform(field_specifiers=(dataclasses.field, Field)) - @overload - def dataclass( - _cls: Type[_T], - *, - init: bool = True, - repr: bool = True, - eq: bool = True, - order: bool = False, - unsafe_hash: bool = False, - frozen: bool = False, - config: Union[ConfigDict, Type[object], None] = None, - validate_on_init: Optional[bool] = None, - use_proxy: Optional[bool] = None, - kw_only: bool = ..., - ) -> 'DataclassClassOrWrapper': - ... - -else: - - @dataclass_transform(field_specifiers=(dataclasses.field, Field)) - @overload - def dataclass( - *, - init: bool = True, - repr: bool = True, - eq: bool = True, - order: bool = False, - unsafe_hash: bool = False, - frozen: bool = False, - config: Union[ConfigDict, Type[object], None] = None, - validate_on_init: Optional[bool] = None, - use_proxy: Optional[bool] = None, - ) -> Callable[[Type[_T]], 'DataclassClassOrWrapper']: - ... 
- - @dataclass_transform(field_specifiers=(dataclasses.field, Field)) - @overload - def dataclass( - _cls: Type[_T], - *, - init: bool = True, - repr: bool = True, - eq: bool = True, - order: bool = False, - unsafe_hash: bool = False, - frozen: bool = False, - config: Union[ConfigDict, Type[object], None] = None, - validate_on_init: Optional[bool] = None, - use_proxy: Optional[bool] = None, - ) -> 'DataclassClassOrWrapper': - ... - - -@dataclass_transform(field_specifiers=(dataclasses.field, Field)) -def dataclass( - _cls: Optional[Type[_T]] = None, - *, - init: bool = True, - repr: bool = True, - eq: bool = True, - order: bool = False, - unsafe_hash: bool = False, - frozen: bool = False, - config: Union[ConfigDict, Type[object], None] = None, - validate_on_init: Optional[bool] = None, - use_proxy: Optional[bool] = None, - kw_only: bool = False, -) -> Union[Callable[[Type[_T]], 'DataclassClassOrWrapper'], 'DataclassClassOrWrapper']: - """ - Like the python standard lib dataclasses but with type validation. - The result is either a pydantic dataclass that will validate input data - or a wrapper that will trigger validation around a stdlib dataclass - to avoid modifying it directly - """ - the_config = get_config(config) - - def wrap(cls: Type[Any]) -> 'DataclassClassOrWrapper': - should_use_proxy = ( - use_proxy - if use_proxy is not None - else ( - is_builtin_dataclass(cls) - and (cls.__bases__[0] is object or set(dir(cls)) == set(dir(cls.__bases__[0]))) - ) - ) - if should_use_proxy: - dc_cls_doc = '' - dc_cls = DataclassProxy(cls) - default_validate_on_init = False - else: - dc_cls_doc = cls.__doc__ or '' # needs to be done before generating dataclass - if sys.version_info >= (3, 10): - dc_cls = dataclasses.dataclass( - cls, - init=init, - repr=repr, - eq=eq, - order=order, - unsafe_hash=unsafe_hash, - frozen=frozen, - kw_only=kw_only, - ) - else: - dc_cls = dataclasses.dataclass( # type: ignore - cls, init=init, repr=repr, eq=eq, order=order, unsafe_hash=unsafe_hash, frozen=frozen - ) - default_validate_on_init = True - - should_validate_on_init = default_validate_on_init if validate_on_init is None else validate_on_init - _add_pydantic_validation_attributes(cls, the_config, should_validate_on_init, dc_cls_doc) - dc_cls.__pydantic_model__.__try_update_forward_refs__(**{cls.__name__: cls}) - return dc_cls - - if _cls is None: - return wrap - - return wrap(_cls) - - -@contextmanager -def set_validation(cls: Type['DataclassT'], value: bool) -> Generator[Type['DataclassT'], None, None]: - original_run_validation = cls.__pydantic_run_validation__ - try: - cls.__pydantic_run_validation__ = value - yield cls - finally: - cls.__pydantic_run_validation__ = original_run_validation - - -class DataclassProxy: - __slots__ = '__dataclass__' - - def __init__(self, dc_cls: Type['Dataclass']) -> None: - object.__setattr__(self, '__dataclass__', dc_cls) - - def __call__(self, *args: Any, **kwargs: Any) -> Any: - with set_validation(self.__dataclass__, True): - return self.__dataclass__(*args, **kwargs) - - def __getattr__(self, name: str) -> Any: - return getattr(self.__dataclass__, name) - - def __setattr__(self, __name: str, __value: Any) -> None: - return setattr(self.__dataclass__, __name, __value) - - def __instancecheck__(self, instance: Any) -> bool: - return isinstance(instance, self.__dataclass__) - - def __copy__(self) -> 'DataclassProxy': - return DataclassProxy(copy.copy(self.__dataclass__)) - - def __deepcopy__(self, memo: Any) -> 'DataclassProxy': - return 
DataclassProxy(copy.deepcopy(self.__dataclass__, memo)) - - -def _add_pydantic_validation_attributes( # noqa: C901 (ignore complexity) - dc_cls: Type['Dataclass'], - config: Type[BaseConfig], - validate_on_init: bool, - dc_cls_doc: str, -) -> None: - """ - We need to replace the right method. If no `__post_init__` has been set in the stdlib dataclass - it won't even exist (code is generated on the fly by `dataclasses`) - By default, we run validation after `__init__` or `__post_init__` if defined - """ - init = dc_cls.__init__ - - @wraps(init) - def handle_extra_init(self: 'Dataclass', *args: Any, **kwargs: Any) -> None: - if config.extra == Extra.ignore: - init(self, *args, **{k: v for k, v in kwargs.items() if k in self.__dataclass_fields__}) - - elif config.extra == Extra.allow: - for k, v in kwargs.items(): - self.__dict__.setdefault(k, v) - init(self, *args, **{k: v for k, v in kwargs.items() if k in self.__dataclass_fields__}) - - else: - init(self, *args, **kwargs) - - if hasattr(dc_cls, '__post_init__'): - try: - post_init = dc_cls.__post_init__.__wrapped__ # type: ignore[attr-defined] - except AttributeError: - post_init = dc_cls.__post_init__ - - @wraps(post_init) - def new_post_init(self: 'Dataclass', *args: Any, **kwargs: Any) -> None: - if config.post_init_call == 'before_validation': - post_init(self, *args, **kwargs) - - if self.__class__.__pydantic_run_validation__: - self.__pydantic_validate_values__() - if hasattr(self, '__post_init_post_parse__'): - self.__post_init_post_parse__(*args, **kwargs) - - if config.post_init_call == 'after_validation': - post_init(self, *args, **kwargs) - - setattr(dc_cls, '__init__', handle_extra_init) - setattr(dc_cls, '__post_init__', new_post_init) - - else: - - @wraps(init) - def new_init(self: 'Dataclass', *args: Any, **kwargs: Any) -> None: - handle_extra_init(self, *args, **kwargs) - - if self.__class__.__pydantic_run_validation__: - self.__pydantic_validate_values__() - - if hasattr(self, '__post_init_post_parse__'): - # We need to find again the initvars. 
To do that we use `__dataclass_fields__` instead of - # public method `dataclasses.fields` - - # get all initvars and their default values - initvars_and_values: Dict[str, Any] = {} - for i, f in enumerate(self.__class__.__dataclass_fields__.values()): - if f._field_type is dataclasses._FIELD_INITVAR: # type: ignore[attr-defined] - try: - # set arg value by default - initvars_and_values[f.name] = args[i] - except IndexError: - initvars_and_values[f.name] = kwargs.get(f.name, f.default) - - self.__post_init_post_parse__(**initvars_and_values) - - setattr(dc_cls, '__init__', new_init) - - setattr(dc_cls, '__pydantic_run_validation__', ClassAttribute('__pydantic_run_validation__', validate_on_init)) - setattr(dc_cls, '__pydantic_initialised__', False) - setattr(dc_cls, '__pydantic_model__', create_pydantic_model_from_dataclass(dc_cls, config, dc_cls_doc)) - setattr(dc_cls, '__pydantic_validate_values__', _dataclass_validate_values) - setattr(dc_cls, '__validate__', classmethod(_validate_dataclass)) - setattr(dc_cls, '__get_validators__', classmethod(_get_validators)) - - if dc_cls.__pydantic_model__.__config__.validate_assignment and not dc_cls.__dataclass_params__.frozen: - setattr(dc_cls, '__setattr__', _dataclass_validate_assignment_setattr) - - -def _get_validators(cls: 'DataclassClassOrWrapper') -> 'CallableGenerator': - yield cls.__validate__ - - -def _validate_dataclass(cls: Type['DataclassT'], v: Any) -> 'DataclassT': - with set_validation(cls, True): - if isinstance(v, cls): - v.__pydantic_validate_values__() - return v - elif isinstance(v, (list, tuple)): - return cls(*v) - elif isinstance(v, dict): - return cls(**v) - else: - raise DataclassTypeError(class_name=cls.__name__) - - -def create_pydantic_model_from_dataclass( - dc_cls: Type['Dataclass'], - config: Type[Any] = BaseConfig, - dc_cls_doc: Optional[str] = None, -) -> Type['BaseModel']: - field_definitions: Dict[str, Any] = {} - for field in dataclasses.fields(dc_cls): - default: Any = Undefined - default_factory: Optional['NoArgAnyCallable'] = None - field_info: FieldInfo - - if field.default is not dataclasses.MISSING: - default = field.default - elif field.default_factory is not dataclasses.MISSING: - default_factory = field.default_factory - else: - default = Required - - if isinstance(default, FieldInfo): - field_info = default - dc_cls.__pydantic_has_field_info_default__ = True - else: - field_info = Field(default=default, default_factory=default_factory, **field.metadata) - - field_definitions[field.name] = (field.type, field_info) - - validators = gather_all_validators(dc_cls) - model: Type['BaseModel'] = create_model( - dc_cls.__name__, - __config__=config, - __module__=dc_cls.__module__, - __validators__=validators, - __cls_kwargs__={'__resolve_forward_refs__': False}, - **field_definitions, - ) - model.__doc__ = dc_cls_doc if dc_cls_doc is not None else dc_cls.__doc__ or '' - return model - - -def _dataclass_validate_values(self: 'Dataclass') -> None: - # validation errors can occur if this function is called twice on an already initialised dataclass. - # for example if Extra.forbid is enabled, it would consider __pydantic_initialised__ an invalid extra property - if getattr(self, '__pydantic_initialised__'): - return - if getattr(self, '__pydantic_has_field_info_default__', False): - # We need to remove `FieldInfo` values since they are not valid as input - # It's ok to do that because they are obviously the default values! 
- input_data = {k: v for k, v in self.__dict__.items() if not isinstance(v, FieldInfo)} - else: - input_data = self.__dict__ - d, _, validation_error = validate_model(self.__pydantic_model__, input_data, cls=self.__class__) - if validation_error: - raise validation_error - self.__dict__.update(d) - object.__setattr__(self, '__pydantic_initialised__', True) - - -def _dataclass_validate_assignment_setattr(self: 'Dataclass', name: str, value: Any) -> None: - if self.__pydantic_initialised__: - d = dict(self.__dict__) - d.pop(name, None) - known_field = self.__pydantic_model__.__fields__.get(name, None) - if known_field: - value, error_ = known_field.validate(value, d, loc=name, cls=self.__class__) - if error_: - raise ValidationError([error_], self.__class__) - - object.__setattr__(self, name, value) - - -def is_builtin_dataclass(_cls: Type[Any]) -> bool: - """ - Whether a class is a stdlib dataclass - (useful to discriminated a pydantic dataclass that is actually a wrapper around a stdlib dataclass) - - we check that - - `_cls` is a dataclass - - `_cls` is not a processed pydantic dataclass (with a basemodel attached) - - `_cls` is not a pydantic dataclass inheriting directly from a stdlib dataclass - e.g. - ``` - @dataclasses.dataclass - class A: - x: int - - @pydantic.dataclasses.dataclass - class B(A): - y: int - ``` - In this case, when we first check `B`, we make an extra check and look at the annotations ('y'), - which won't be a superset of all the dataclass fields (only the stdlib fields i.e. 'x') - """ - return ( - dataclasses.is_dataclass(_cls) - and not hasattr(_cls, '__pydantic_model__') - and set(_cls.__dataclass_fields__).issuperset(set(getattr(_cls, '__annotations__', {}))) - ) - - -def make_dataclass_validator(dc_cls: Type['Dataclass'], config: Type[BaseConfig]) -> 'CallableGenerator': - """ - Create a pydantic.dataclass from a builtin dataclass to add type validation - and yield the validators - It retrieves the parameters of the dataclass and forwards them to the newly created dataclass - """ - yield from _get_validators(dataclass(dc_cls, config=config, use_proxy=True)) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/referencing/jsonschema.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/referencing/jsonschema.py deleted file mode 100644 index 1887db81bf00860d1e597d64f6f067d70a2a8900..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/referencing/jsonschema.py +++ /dev/null @@ -1,625 +0,0 @@ -""" -Referencing implementations for JSON Schema specs (historic & current). -""" - -from __future__ import annotations - -from collections.abc import Sequence, Set -from typing import Any, Iterable, Union - -from referencing import Anchor, Registry, Resource, Specification, exceptions -from referencing._attrs import frozen -from referencing._core import Resolved as _Resolved, Resolver as _Resolver -from referencing.typing import URI, Anchor as AnchorType, Mapping - -#: A JSON Schema which is a JSON object -ObjectSchema = Mapping[str, Any] - -#: A JSON Schema of any kind -Schema = Union[bool, ObjectSchema] - -#: A JSON Schema Registry -SchemaRegistry = Registry[Schema] - - -@frozen -class UnknownDialect(Exception): - """ - A dialect identifier was found for a dialect unknown by this library. - - If it's a custom ("unofficial") dialect, be sure you've registered it. 
- """ - - uri: URI - - -def _dollar_id(contents: Schema) -> URI | None: - if isinstance(contents, bool): - return - return contents.get("$id") - - -def _legacy_dollar_id(contents: Schema) -> URI | None: - if isinstance(contents, bool) or "$ref" in contents: - return - id = contents.get("$id") - if id is not None and not id.startswith("#"): - return id - - -def _legacy_id(contents: ObjectSchema) -> URI | None: - if "$ref" in contents: - return - id = contents.get("id") - if id is not None and not id.startswith("#"): - return id - - -def _anchor( - specification: Specification[Schema], - contents: Schema, -) -> Iterable[AnchorType[Schema]]: - if isinstance(contents, bool): - return - anchor = contents.get("$anchor") - if anchor is not None: - yield Anchor( - name=anchor, - resource=specification.create_resource(contents), - ) - - dynamic_anchor = contents.get("$dynamicAnchor") - if dynamic_anchor is not None: - yield DynamicAnchor( - name=dynamic_anchor, - resource=specification.create_resource(contents), - ) - - -def _anchor_2019( - specification: Specification[Schema], - contents: Schema, -) -> Iterable[Anchor[Schema]]: - if isinstance(contents, bool): - return [] - anchor = contents.get("$anchor") - if anchor is None: - return [] - return [ - Anchor( - name=anchor, - resource=specification.create_resource(contents), - ), - ] - - -def _legacy_anchor_in_dollar_id( - specification: Specification[Schema], - contents: Schema, -) -> Iterable[Anchor[Schema]]: - if isinstance(contents, bool): - return [] - id = contents.get("$id", "") - if not id.startswith("#"): - return [] - return [ - Anchor( - name=id[1:], - resource=specification.create_resource(contents), - ), - ] - - -def _legacy_anchor_in_id( - specification: Specification[ObjectSchema], - contents: ObjectSchema, -) -> Iterable[Anchor[ObjectSchema]]: - id = contents.get("id", "") - if not id.startswith("#"): - return [] - return [ - Anchor( - name=id[1:], - resource=specification.create_resource(contents), - ), - ] - - -def _subresources_of( - in_value: Set[str] = frozenset(), - in_subvalues: Set[str] = frozenset(), - in_subarray: Set[str] = frozenset(), -): - """ - Create a callable returning JSON Schema specification-style subschemas. - - Relies on specifying the set of keywords containing subschemas in their - values, in a subobject's values, or in a subarray. - """ - - def subresources_of(contents: Schema) -> Iterable[ObjectSchema]: - if isinstance(contents, bool): - return - for each in in_value: - if each in contents: - yield contents[each] - for each in in_subarray: - if each in contents: - yield from contents[each] - for each in in_subvalues: - if each in contents: - yield from contents[each].values() - - return subresources_of - - -def _subresources_of_with_crazy_items( - in_value: Set[str] = frozenset(), - in_subvalues: Set[str] = frozenset(), - in_subarray: Set[str] = frozenset(), -): - """ - Specifically handle older drafts where there are some funky keywords. 
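The `_subresources_of` factory above drives the subschema walk purely from three keyword sets: keywords whose value is itself a schema, keywords holding an array of schemas, and keywords holding a mapping of schemas. A minimal, self-contained sketch of that idea follows; it uses plain dicts and made-up keyword defaults instead of the library's `Resource`/`Specification` machinery.

```python
# Minimal, dict-based sketch of the keyword-driven subschema walk; the keyword
# sets and sample schema are made up, and no Resource/Registry types are used.
from typing import Iterable, Mapping


def subresources_of(contents: Mapping,
                    in_value=frozenset({"not", "if", "then", "else"}),
                    in_subarray=frozenset({"allOf", "anyOf", "oneOf"}),
                    in_subvalues=frozenset({"properties", "$defs"})) -> Iterable[Mapping]:
    for each in in_value:
        if each in contents:
            yield contents[each]                 # keyword whose value is a schema
    for each in in_subarray:
        if each in contents:
            yield from contents[each]            # keyword holding a list of schemas
    for each in in_subvalues:
        if each in contents:
            yield from contents[each].values()   # keyword holding a mapping of schemas


schema = {
    "not": {"type": "null"},
    "allOf": [{"minimum": 0}, {"maximum": 10}],
    "properties": {"name": {"type": "string"}},
}
print(list(subresources_of(schema)))
# [{'type': 'null'}, {'minimum': 0}, {'maximum': 10}, {'type': 'string'}]
```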
- """ - - def subresources_of(contents: Schema) -> Iterable[ObjectSchema]: - if isinstance(contents, bool): - return - for each in in_value: - if each in contents: - yield contents[each] - for each in in_subarray: - if each in contents: - yield from contents[each] - for each in in_subvalues: - if each in contents: - yield from contents[each].values() - - items = contents.get("items") - if items is not None: - if isinstance(items, Sequence): - yield from items - else: - yield items - - return subresources_of - - -def _subresources_of_with_crazy_items_dependencies( - in_value: Set[str] = frozenset(), - in_subvalues: Set[str] = frozenset(), - in_subarray: Set[str] = frozenset(), -): - """ - Specifically handle older drafts where there are some funky keywords. - """ - - def subresources_of(contents: Schema) -> Iterable[ObjectSchema]: - if isinstance(contents, bool): - return - for each in in_value: - if each in contents: - yield contents[each] - for each in in_subarray: - if each in contents: - yield from contents[each] - for each in in_subvalues: - if each in contents: - yield from contents[each].values() - - items = contents.get("items") - if items is not None: - if isinstance(items, Sequence): - yield from items - else: - yield items - dependencies = contents.get("dependencies") - if dependencies is not None: - values = iter(dependencies.values()) - value = next(values, None) - if isinstance(value, Mapping): - yield value - yield from values - - return subresources_of - - -def _subresources_of_with_crazy_aP_items_dependencies( - in_value: Set[str] = frozenset(), - in_subvalues: Set[str] = frozenset(), - in_subarray: Set[str] = frozenset(), -): - """ - Specifically handle even older drafts where there are some funky keywords. - """ - - def subresources_of(contents: ObjectSchema) -> Iterable[ObjectSchema]: - for each in in_value: - if each in contents: - yield contents[each] - for each in in_subarray: - if each in contents: - yield from contents[each] - for each in in_subvalues: - if each in contents: - yield from contents[each].values() - - items = contents.get("items") - if items is not None: - if isinstance(items, Sequence): - yield from items - else: - yield items - dependencies = contents.get("dependencies") - if dependencies is not None: - values = iter(dependencies.values()) - value = next(values, None) - if isinstance(value, Mapping): - yield value - yield from values - - for each in "additionalItems", "additionalProperties": - value = contents.get(each) - if isinstance(value, Mapping): - yield value - - return subresources_of - - -def _maybe_in_subresource( - in_value: Set[str] = frozenset(), - in_subvalues: Set[str] = frozenset(), - in_subarray: Set[str] = frozenset(), -): - in_child = in_subvalues | in_subarray - - def maybe_in_subresource( - segments: Sequence[int | str], - resolver: _Resolver[Any], - subresource: Resource[Any], - ) -> _Resolver[Any]: - _segments = iter(segments) - for segment in _segments: - if segment not in in_value and ( - segment not in in_child or next(_segments, None) is None - ): - return resolver - return resolver.in_subresource(subresource) - - return maybe_in_subresource - - -def _maybe_in_subresource_crazy_items( - in_value: Set[str] = frozenset(), - in_subvalues: Set[str] = frozenset(), - in_subarray: Set[str] = frozenset(), -): - in_child = in_subvalues | in_subarray - - def maybe_in_subresource( - segments: Sequence[int | str], - resolver: _Resolver[Any], - subresource: Resource[Any], - ) -> _Resolver[Any]: - _segments = iter(segments) - for segment 
in _segments: - if segment == "items" and isinstance( - subresource.contents, - Mapping, - ): - return resolver.in_subresource(subresource) - if segment not in in_value and ( - segment not in in_child or next(_segments, None) is None - ): - return resolver - return resolver.in_subresource(subresource) - - return maybe_in_subresource - - -def _maybe_in_subresource_crazy_items_dependencies( - in_value: Set[str] = frozenset(), - in_subvalues: Set[str] = frozenset(), - in_subarray: Set[str] = frozenset(), -): - in_child = in_subvalues | in_subarray - - def maybe_in_subresource( - segments: Sequence[int | str], - resolver: _Resolver[Any], - subresource: Resource[Any], - ) -> _Resolver[Any]: - _segments = iter(segments) - for segment in _segments: - if ( - segment == "items" or segment == "dependencies" - ) and isinstance(subresource.contents, Mapping): - return resolver.in_subresource(subresource) - if segment not in in_value and ( - segment not in in_child or next(_segments, None) is None - ): - return resolver - return resolver.in_subresource(subresource) - - return maybe_in_subresource - - -#: JSON Schema draft 2020-12 -DRAFT202012 = Specification( - name="draft2020-12", - id_of=_dollar_id, - subresources_of=_subresources_of( - in_value={ - "additionalProperties", - "contains", - "contentSchema", - "else", - "if", - "items", - "not", - "propertyNames", - "then", - "unevaluatedItems", - "unevaluatedProperties", - }, - in_subarray={"allOf", "anyOf", "oneOf", "prefixItems"}, - in_subvalues={ - "$defs", - "dependentSchemas", - "patternProperties", - "properties", - }, - ), - anchors_in=_anchor, - maybe_in_subresource=_maybe_in_subresource( - in_value={ - "additionalProperties", - "contains", - "contentSchema", - "else", - "if", - "items", - "not", - "propertyNames", - "then", - "unevaluatedItems", - "unevaluatedProperties", - }, - in_subarray={"allOf", "anyOf", "oneOf", "prefixItems"}, - in_subvalues={ - "$defs", - "dependentSchemas", - "patternProperties", - "properties", - }, - ), -) -#: JSON Schema draft 2019-09 -DRAFT201909 = Specification( - name="draft2019-09", - id_of=_dollar_id, - subresources_of=_subresources_of_with_crazy_items( - in_value={ - "additionalItems", - "additionalProperties", - "contains", - "contentSchema", - "else", - "if", - "not", - "propertyNames", - "then", - "unevaluatedItems", - "unevaluatedProperties", - }, - in_subarray={"allOf", "anyOf", "oneOf"}, - in_subvalues={ - "$defs", - "dependentSchemas", - "patternProperties", - "properties", - }, - ), - anchors_in=_anchor_2019, - maybe_in_subresource=_maybe_in_subresource_crazy_items( - in_value={ - "additionalItems", - "additionalProperties", - "contains", - "contentSchema", - "else", - "if", - "not", - "propertyNames", - "then", - "unevaluatedItems", - "unevaluatedProperties", - }, - in_subarray={"allOf", "anyOf", "oneOf"}, - in_subvalues={ - "$defs", - "dependentSchemas", - "patternProperties", - "properties", - }, - ), -) -#: JSON Schema draft 7 -DRAFT7 = Specification( - name="draft-07", - id_of=_legacy_dollar_id, - subresources_of=_subresources_of_with_crazy_items_dependencies( - in_value={ - "additionalItems", - "additionalProperties", - "contains", - "else", - "if", - "not", - "propertyNames", - "then", - }, - in_subarray={"allOf", "anyOf", "oneOf"}, - in_subvalues={"definitions", "patternProperties", "properties"}, - ), - anchors_in=_legacy_anchor_in_dollar_id, - maybe_in_subresource=_maybe_in_subresource_crazy_items_dependencies( - in_value={ - "additionalItems", - "additionalProperties", - "contains", - 
"else", - "if", - "not", - "propertyNames", - "then", - }, - in_subarray={"allOf", "anyOf", "oneOf"}, - in_subvalues={"definitions", "patternProperties", "properties"}, - ), -) -#: JSON Schema draft 6 -DRAFT6 = Specification( - name="draft-06", - id_of=_legacy_dollar_id, - subresources_of=_subresources_of_with_crazy_items_dependencies( - in_value={ - "additionalItems", - "additionalProperties", - "contains", - "not", - "propertyNames", - }, - in_subarray={"allOf", "anyOf", "oneOf"}, - in_subvalues={"definitions", "patternProperties", "properties"}, - ), - anchors_in=_legacy_anchor_in_dollar_id, - maybe_in_subresource=_maybe_in_subresource_crazy_items_dependencies( - in_value={ - "additionalItems", - "additionalProperties", - "contains", - "not", - "propertyNames", - }, - in_subarray={"allOf", "anyOf", "oneOf"}, - in_subvalues={"definitions", "patternProperties", "properties"}, - ), -) -#: JSON Schema draft 4 -DRAFT4 = Specification( - name="draft-04", - id_of=_legacy_id, - subresources_of=_subresources_of_with_crazy_aP_items_dependencies( - in_value={"not"}, - in_subarray={"allOf", "anyOf", "oneOf"}, - in_subvalues={"definitions", "patternProperties", "properties"}, - ), - anchors_in=_legacy_anchor_in_id, - maybe_in_subresource=_maybe_in_subresource_crazy_items_dependencies( - in_value={"additionalItems", "additionalProperties", "not"}, - in_subarray={"allOf", "anyOf", "oneOf"}, - in_subvalues={"definitions", "patternProperties", "properties"}, - ), -) -#: JSON Schema draft 3 -DRAFT3 = Specification( - name="draft-03", - id_of=_legacy_id, - subresources_of=_subresources_of_with_crazy_aP_items_dependencies( - in_subarray={"extends"}, - in_subvalues={"definitions", "patternProperties", "properties"}, - ), - anchors_in=_legacy_anchor_in_id, - maybe_in_subresource=_maybe_in_subresource_crazy_items_dependencies( - in_value={"additionalItems", "additionalProperties"}, - in_subarray={"extends"}, - in_subvalues={"definitions", "patternProperties", "properties"}, - ), -) - - -_SPECIFICATIONS: Registry[Specification[Schema]] = Registry( - { # type: ignore[reportGeneralTypeIssues] # :/ internal vs external types - dialect_id: Resource.opaque(specification) - for dialect_id, specification in [ - ("https://json-schema.org/draft/2020-12/schema", DRAFT202012), - ("https://json-schema.org/draft/2019-09/schema", DRAFT201909), - ("http://json-schema.org/draft-07/schema", DRAFT7), - ("http://json-schema.org/draft-06/schema", DRAFT6), - ("http://json-schema.org/draft-04/schema", DRAFT4), - ("http://json-schema.org/draft-03/schema", DRAFT3), - ] - }, -) - - -def specification_with( - dialect_id: URI, - default: Specification[Any] = None, # type: ignore[reportGeneralTypeIssues] # noqa: E501 -) -> Specification[Any]: - """ - Retrieve the `Specification` with the given dialect identifier. - - Raises: - - `UnknownDialect` - - if the given ``dialect_id`` isn't known - """ - resource = _SPECIFICATIONS.get(dialect_id.rstrip("#")) - if resource is not None: - return resource.contents - if default is None: # type: ignore[reportUnnecessaryComparison] - raise UnknownDialect(dialect_id) - return default - - -@frozen -class DynamicAnchor: - """ - Dynamic anchors, introduced in draft 2020. - """ - - name: str - resource: Resource[Schema] - - def resolve(self, resolver: _Resolver[Schema]) -> _Resolved[Schema]: - """ - Resolve this anchor dynamically. 
- """ - last = self.resource - for uri, registry in resolver.dynamic_scope(): - try: - anchor = registry.anchor(uri, self.name).value - except exceptions.NoSuchAnchor: - continue - if isinstance(anchor, DynamicAnchor): - last = anchor.resource - return _Resolved( - contents=last.contents, - resolver=resolver.in_subresource(last), - ) - - -def lookup_recursive_ref(resolver: _Resolver[Schema]) -> _Resolved[Schema]: - """ - Recursive references (via recursive anchors), present only in draft 2019. - - As per the 2019 specification (§ 8.2.4.2.1), only the ``#`` recursive - reference is supported (and is therefore assumed to be the relevant - reference). - """ - resolved = resolver.lookup("#") - if isinstance(resolved.contents, Mapping) and resolved.contents.get( - "$recursiveAnchor", - ): - for uri, _ in resolver.dynamic_scope(): - next_resolved = resolver.lookup(uri) - if not isinstance( - next_resolved.contents, - Mapping, - ) or not next_resolved.contents.get("$recursiveAnchor"): - break - resolved = next_resolved - return resolved diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/command/develop.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/command/develop.py deleted file mode 100644 index 24fb0a7c81bc665844d5d307eee2d720079c039f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/command/develop.py +++ /dev/null @@ -1,193 +0,0 @@ -from distutils.util import convert_path -from distutils import log -from distutils.errors import DistutilsError, DistutilsOptionError -import os -import glob -import io - -import pkg_resources -from setuptools.command.easy_install import easy_install -from setuptools import namespaces -import setuptools - - -class develop(namespaces.DevelopInstaller, easy_install): - """Set up package for development""" - - description = "install package in 'development mode'" - - user_options = easy_install.user_options + [ - ("uninstall", "u", "Uninstall this source package"), - ("egg-path=", None, "Set the path to be used in the .egg-link file"), - ] - - boolean_options = easy_install.boolean_options + ['uninstall'] - - command_consumes_arguments = False # override base - - def run(self): - if self.uninstall: - self.multi_version = True - self.uninstall_link() - self.uninstall_namespaces() - else: - self.install_for_development() - self.warn_deprecated_options() - - def initialize_options(self): - self.uninstall = None - self.egg_path = None - easy_install.initialize_options(self) - self.setup_path = None - self.always_copy_from = '.' 
# always copy eggs installed in curdir - - def finalize_options(self): - ei = self.get_finalized_command("egg_info") - if ei.broken_egg_info: - template = "Please rename %r to %r before using 'develop'" - args = ei.egg_info, ei.broken_egg_info - raise DistutilsError(template % args) - self.args = [ei.egg_name] - - easy_install.finalize_options(self) - self.expand_basedirs() - self.expand_dirs() - # pick up setup-dir .egg files only: no .egg-info - self.package_index.scan(glob.glob('*.egg')) - - egg_link_fn = ei.egg_name + '.egg-link' - self.egg_link = os.path.join(self.install_dir, egg_link_fn) - self.egg_base = ei.egg_base - if self.egg_path is None: - self.egg_path = os.path.abspath(ei.egg_base) - - target = pkg_resources.normalize_path(self.egg_base) - egg_path = pkg_resources.normalize_path( - os.path.join(self.install_dir, self.egg_path) - ) - if egg_path != target: - raise DistutilsOptionError( - "--egg-path must be a relative path from the install" - " directory to " + target - ) - - # Make a distribution for the package's source - self.dist = pkg_resources.Distribution( - target, - pkg_resources.PathMetadata(target, os.path.abspath(ei.egg_info)), - project_name=ei.egg_name, - ) - - self.setup_path = self._resolve_setup_path( - self.egg_base, - self.install_dir, - self.egg_path, - ) - - @staticmethod - def _resolve_setup_path(egg_base, install_dir, egg_path): - """ - Generate a path from egg_base back to '.' where the - setup script resides and ensure that path points to the - setup path from $install_dir/$egg_path. - """ - path_to_setup = egg_base.replace(os.sep, '/').rstrip('/') - if path_to_setup != os.curdir: - path_to_setup = '../' * (path_to_setup.count('/') + 1) - resolved = pkg_resources.normalize_path( - os.path.join(install_dir, egg_path, path_to_setup) - ) - if resolved != pkg_resources.normalize_path(os.curdir): - raise DistutilsOptionError( - "Can't get a consistent path to setup script from" - " installation directory", - resolved, - pkg_resources.normalize_path(os.curdir), - ) - return path_to_setup - - def install_for_development(self): - self.run_command('egg_info') - - # Build extensions in-place - self.reinitialize_command('build_ext', inplace=1) - self.run_command('build_ext') - - if setuptools.bootstrap_install_from: - self.easy_install(setuptools.bootstrap_install_from) - setuptools.bootstrap_install_from = None - - self.install_namespaces() - - # create an .egg-link in the installation dir, pointing to our egg - log.info("Creating %s (link to %s)", self.egg_link, self.egg_base) - if not self.dry_run: - with open(self.egg_link, "w") as f: - f.write(self.egg_path + "\n" + self.setup_path) - # postprocess the installed distro, fixing up .pth, installing scripts, - # and handling requirements - self.process_distribution(None, self.dist, not self.no_deps) - - def uninstall_link(self): - if os.path.exists(self.egg_link): - log.info("Removing %s (link to %s)", self.egg_link, self.egg_base) - egg_link_file = open(self.egg_link) - contents = [line.rstrip() for line in egg_link_file] - egg_link_file.close() - if contents not in ([self.egg_path], [self.egg_path, self.setup_path]): - log.warn("Link points to %s: uninstall aborted", contents) - return - if not self.dry_run: - os.unlink(self.egg_link) - if not self.dry_run: - self.update_pth(self.dist) # remove any .pth link to us - if self.distribution.scripts: - # XXX should also check for entry point scripts! 
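The `_resolve_setup_path` docstring above describes computing a relative path from `egg_base` back to the directory holding the setup script. A stdlib-only sketch of just that string manipulation follows; the helper name and sample values are hypothetical, and the real method additionally normalizes the result and cross-checks it against the install directory.

```python
# Stdlib-only sketch of the relative-path computation described in the
# _resolve_setup_path docstring (hypothetical helper name and sample values).
import os


def path_back_to_setup(egg_base: str) -> str:
    path_to_setup = egg_base.replace(os.sep, "/").rstrip("/")
    if path_to_setup != os.curdir:
        # count('/') + 1 is the number of path components, so climb once per component
        path_to_setup = "../" * (path_to_setup.count("/") + 1)
    return path_to_setup


print(path_back_to_setup("."))        # '.'
print(path_back_to_setup("src"))      # '../'
print(path_back_to_setup("src/pkg"))  # '../../'
```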
- log.warn("Note: you must uninstall or replace scripts manually!") - - def install_egg_scripts(self, dist): - if dist is not self.dist: - # Installing a dependency, so fall back to normal behavior - return easy_install.install_egg_scripts(self, dist) - - # create wrapper scripts in the script dir, pointing to dist.scripts - - # new-style... - self.install_wrapper_scripts(dist) - - # ...and old-style - for script_name in self.distribution.scripts or []: - script_path = os.path.abspath(convert_path(script_name)) - script_name = os.path.basename(script_path) - with io.open(script_path) as strm: - script_text = strm.read() - self.install_script(dist, script_name, script_text, script_path) - - def install_wrapper_scripts(self, dist): - dist = VersionlessRequirement(dist) - return easy_install.install_wrapper_scripts(self, dist) - - -class VersionlessRequirement: - """ - Adapt a pkg_resources.Distribution to simply return the project - name as the 'requirement' so that scripts will work across - multiple versions. - - >>> from pkg_resources import Distribution - >>> dist = Distribution(project_name='foo', version='1.0') - >>> str(dist.as_requirement()) - 'foo==1.0' - >>> adapted_dist = VersionlessRequirement(dist) - >>> str(adapted_dist.as_requirement()) - 'foo' - """ - - def __init__(self, dist): - self.__dist = dist - - def __getattr__(self, name): - return getattr(self.__dist, name) - - def as_requirement(self): - return self.project_name diff --git a/spaces/protoxx91/webui-docker/README.md b/spaces/protoxx91/webui-docker/README.md deleted file mode 100644 index d09d8ce162e139ce06f130f29b73cd0221407ed6..0000000000000000000000000000000000000000 --- a/spaces/protoxx91/webui-docker/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Stable Diffusion Web UI Docker -emoji: 🐳 -colorFrom: blue -colorTo: blue -sdk: docker -sdk_version: 3.9 -app_file: oh-no.py -pinned: false -duplicated_from: camenduru/webui-docker ---- - -## Stable Diffusion Web UI -https://github.com/AUTOMATIC1111/stable-diffusion-webui - -## Documentation -https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki - -## Models License -https://huggingface.co/spaces/CompVis/stable-diffusion-license \ No newline at end of file diff --git a/spaces/pycui/RealChar/realtime_ai_character/__init__.py b/spaces/pycui/RealChar/realtime_ai_character/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/pyodide-demo/self-hosted/numpy-tests.js b/spaces/pyodide-demo/self-hosted/numpy-tests.js deleted file mode 100644 index c9ea55de9a1193f5d1a25fcbb9fd407dcff14427..0000000000000000000000000000000000000000 --- a/spaces/pyodide-demo/self-hosted/numpy-tests.js +++ /dev/null @@ -1 +0,0 @@ -var Module=typeof globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="numpy-tests.data";var REMOTE_PACKAGE_BASE="numpy-tests.data";if(typeof 
Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... ("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new 
Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","site-packages",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","numpy",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy","compat",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/compat","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy","core",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/core","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/core/tests","data",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/core/tests","examples",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy","distutils",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/distutils","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy","f2py",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/f2py","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/f2py/tests","src",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/f2py/tests/src","string",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/f2py/tests/src","common",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/f2py/tests/src","regression",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/f2py/tests/src","module_data",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/f2py/tests/src","size",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/f2py/tests/src","array_from_pyobj",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/f2py/tests/src","parameter",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/f2py/tests/src","assumed_shape",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/f2py/tests/src","mixed",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/f2py/tests/src","kind",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy","fft",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/fft","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy","lib",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/lib","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/lib/tests","data",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy","linalg",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/linalg","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy","ma",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/ma","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy","matrixlib",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/matrixlib","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy","polynomial",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/polynomial","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy","random",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/random","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/random/tests","data",true,true);Module["FS_createPath"]
("/lib/python3.9/site-packages/numpy","testing",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/testing","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy","typing",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/typing","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/typing/tests","data",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/typing/tests/data","misc",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/typing/tests/data","fail",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/typing/tests/data","pass",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy/typing/tests/data","reveal",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/numpy","tests",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var compressedData={data:null,cachedOffset:2023847,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0,1308,2437,3473,4431,5599,6772,7678,8603,9614,10388,11111,11952,12301,13199,14332,14905,15928,17051,18159,19490,20531,21482,22348,23270,24115,25049,25933,27091,28210,29091,30053,31300,32451,33135,34122,34684,35436,36473,37589,38628,39585,40325,41268,42207,42907,43812,44664,45567,46437,47069,47965,48993,49885,50883,51662,52607,53486,54561,55512,55986,56990,58026,58977,60166,61055,61919,62779,63693,64332,65543,66751,67787,68933,70343,71664,72734,73826,74518,75204,76105,76780,77506,78100,78755,79388,80374,81187,81555,82264,83083,84004,84817,85781,86684,87553,88283,89032,89979,90705,91242,92132,92967,93777,94859,96063,96831,97605,98567,99519,100239,101312,102336,102980,103919,105041,106321,106974,107564,108458,109474,110537,111240,112068,112772,113812,114910,116185,117171,118078,119250,120536,121659,123074,124396,125274,126416,127528,128835,130289,131123,132380,133030,133619,134642,135699,136730,137601,138437,139350,140568,141447,142309,143314,144578,145527,146261,146895,147574,148520,149485,150497,151506,152347,153426,154111,154801,155433,156216,157371,158719,159882,161024,162068,163061,164080,165227,166258,167437,168440,169243,170406,171402,172400,173479,174427,175528,176354,177380,178414,179350,180368,181475,182571,183732,185097,186011,187138,188270,189241,190374,191642,192701,193868,195016,196188,197433,198756,199849,201025,202303,203456,204179,204856,205558,206093,206971,208001,208939,210038,211217,212018,212801,213589,214421,215118,216046,216923,217729,218435,219052,219882,221101,222005,223241,224278,225224,226198,227100,228054,228864,229761,230727,231829,232567,233462,234280,235169,236328,237258,238443,239320,240080,240777,241631,242596,243572,244644,245577,246786,247947,248817,249403,250043,251256,252471,253527,254569,255213,256180,257014,258103,258773,259639,260682,261611,262399,263393,264453,265374,266387,266995,267867,269021,269726,270358,271224,272255,272939,273835,274655,275171,275941,276295,276934,277412,277913,278829,279795,280681,281508,282302,283241,283961,284469,285299,286012,286716,287269,288012,288817,289896,290858,291605,292393,293148,293981,294655,295303,295653,296139,296849,297334,297870,298621,299498,300294,301279,301795,302646,303092,303607,303966,304739,305557,306387,307505,308650,309692,310770,311908,313203,314181,315003,315357,316016,317040,318192,319178,320194,321100,321983,322888,323701
,324426,325203,326030,327063,328213,328734,329595,330136,330952,331907,332822,333437,334565,335773,336581,337320,338193,339115,339769,340706,341716,342571,343460,344508,345565,346377,347187,348102,349180,349768,350815,351472,352163,352816,353589,354409,355397,356377,357781,358918,359948,360793,361524,362435,363547,364365,365080,365749,366580,367397,368347,369007,369933,370724,371512,372231,373129,374168,375065,376220,376828,377695,378263,378670,379259,379918,380571,381226,381873,382690,383540,384133,385114,386092,387328,388542,389615,390368,391338,392266,392903,393534,394481,395172,395745,396743,397693,398854,400100,401188,402262,403302,404589,405995,407010,408081,409121,410491,411739,412952,414121,415263,416395,417373,418277,419484,420449,421635,422699,423621,424779,425964,426875,427806,429013,430186,431099,432007,432984,433881,434561,435681,436943,437946,438990,440033,440908,441712,442590,443393,444358,445271,446458,447583,448663,449499,450255,451445,452184,452928,453614,454595,455532,456558,457586,458651,459596,460795,462165,463053,464142,465143,465994,466932,468005,468776,469804,470982,471988,472658,473513,474430,475549,476507,477164,478209,479334,480441,481134,482051,482802,483819,484860,486039,487059,487971,488936,489751,490720,491503,492550,493462,494342,495357,496485,497578,498406,499042,500058,500776,501719,502411,503303,504220,504947,505661,506151,506968,507578,508279,509080,509882,510925,511690,512709,513803,514616,515670,516406,517353,518121,518904,519825,520824,521859,522667,523449,524261,525135,526061,527178,528276,529498,530592,531716,532322,533426,534106,534880,535832,536922,537612,538600,539643,540800,541724,542834,543874,544832,546042,546897,547799,548631,549599,550625,551549,552510,553592,554453,555539,556545,557572,558432,559570,560369,561291,562277,563229,564303,565222,566213,567084,567769,568706,569487,570601,571686,572586,573149,573891,574905,575924,577e3,578130,579107,580019,580883,581612,582404,583004,583632,584391,585336,586465,587247,588160,589013,590032,591209,592402,593689,595016,596076,597120,598316,599021,599957,600823,601779,602655,603633,604558,605481,606492,607494,608508,609466,610429,611617,612871,614114,615299,616463,617551,618560,619701,620793,621755,622584,623591,624444,625145,626015,627077,627895,628872,629725,630748,631844,633118,634030,634717,635590,636394,637249,638229,639574,640384,641103,642568,643092,643705,644461,645161,646364,647254,648116,649213,650254,651385,652370,653334,654437,655488,656553,657779,658859,659862,660983,662110,663204,664305,665369,666153,667366,668687,669257,670447,671548,672522,673411,674449,675589,676868,678021,679165,680338,681504,682608,683676,684825,685914,687176,688343,689518,690691,691931,692967,694130,695448,696715,698076,699385,700307,701567,702531,703591,704915,705990,706959,708175,709292,710413,711407,712589,713972,715262,716334,717460,718629,719915,721145,722252,723187,724261,725469,726648,727688,728738,729736,730878,731808,733158,733794,734468,735464,736494,737509,738645,739795,740709,742018,742840,743873,744876,745795,746704,747731,749034,750166,751019,751523,752373,753766,754454,754887,755830,756859,757871,758707,759613,760536,761634,762536,763361,764101,764717,765574,766622,767499,768556,769584,770583,771213,771985,772855,773843,774730,775637,776461,777190,777898,778857,779877,780832,781622,782529,783382,784080,785128,786253,787462,788468,789436,790287,791192,792087,792946,793527,794352,795099,795698,796347,797199,798073,799057,799922,800688,801346,802097,803238,804196,804852,805921,807053,808114,809301,810382
,811531,812695,813980,815309,816213,817225,817800,818742,819829,821277,822724,824183,825288,825551,826006,826084,826522,827549,828458,829885,831346,832800,833803,834227,834780,835336,835879,836451,837001,837554,838115,838690,839249,840278,840883,841528,842229,842936,843622,844323,845024,845723,846419,847121,848180,849589,850814,852027,853007,853731,854901,855571,856642,857546,858441,859265,860370,861169,861911,863069,864328,865354,866316,867392,868603,869787,871017,872128,873245,874303,874968,876028,877206,878129,879159,879964,880741,881539,882455,883345,884462,885195,886463,887521,888523,889621,890649,891889,892842,893622,894739,895852,897101,898018,898858,899928,900861,901481,902142,902890,903674,904530,905308,906131,906759,907571,908720,909526,910489,911258,912246,913237,914297,915628,916903,918117,919370,920419,921501,922585,923081,924038,925522,926815,928078,929493,930675,931874,932648,933524,934199,935006,935941,936670,937468,938524,939196,939867,940482,941482,942717,943885,944776,945783,946927,947754,948857,949887,950695,951792,952834,953988,955326,956428,957431,958463,959355,960260,961335,962329,963697,964471,965104,965901,966889,967742,968815,969806,970624,971487,972310,973167,974110,974825,975828,976856,977926,978966,979612,980600,981454,982438,983548,984414,985333,986320,987152,987986,988733,989895,990698,991670,992562,993627,994498,995577,996393,996976,998194,998948,999879,1000610,1001564,1002301,1003143,1003870,1004740,1005387,1006418,1007120,1007809,1008438,1009217,1009823,1010584,1011175,1011881,1012519,1013491,1014311,1015173,1015888,1016790,1017539,1018175,1019200,1020313,1021272,1022284,1023147,1024210,1024669,1025467,1026275,1027168,1028086,1029007,1030094,1031092,1031947,1032557,1033429,1034209,1035180,1036140,1037344,1038168,1039110,1040123,1040862,1041705,1042584,1043359,1044476,1045422,1046202,1047050,1047973,1049066,1050188,1051425,1052286,1053022,1054077,1054647,1055164,1055850,1056114,1056998,1058154,1059098,1060240,1061283,1062094,1062705,1063396,1064384,1065418,1066312,1067159,1067871,1068800,1069636,1070487,1071158,1072062,1073040,1073939,1074639,1075480,1076431,1077449,1078441,1079207,1080150,1081069,1082125,1083120,1084165,1084935,1085884,1087138,1088184,1089187,1090057,1091160,1092111,1092472,1093275,1094107,1094897,1095606,1096644,1097749,1098883,1100208,1101237,1102065,1102874,1103581,1104377,1105297,1106276,1107101,1107963,1108905,1109672,1110623,1111420,1112758,1113724,1114715,1115814,1117070,1118073,1119005,1119825,1120889,1122036,1123208,1124191,1124843,1125484,1126499,1127437,1128581,1129595,1130784,1131435,1132259,1133112,1134099,1135135,1136187,1137332,1138147,1139215,1140181,1141158,1142102,1143212,1143966,1144883,1145789,1146584,1147517,1148266,1149112,1149928,1150860,1151885,1152581,1153642,1154688,1155802,1156709,1157512,1158489,1159771,1161091,1162052,1162926,1163841,1164899,1166017,1166579,1167207,1168092,1168619,1169252,1169948,1171191,1172313,1173392,1174469,1175556,1176535,1177634,1178428,1179204,1180124,1181100,1182149,1182935,1183762,1184497,1185334,1186307,1187136,1187867,1188645,1189876,1190919,1191802,1192711,1193693,1194558,1195576,1196308,1197321,1198264,1199334,1200280,1200989,1202062,1203091,1203816,1204723,1205528,1206495,1207112,1207975,1208547,1209586,1210617,1211277,1212201,1213136,1214198,1215165,1216078,1217144,1218076,1218330,1218643,1218945,1219237,1219531,1219832,1220533,1221678,1222734,1223964,1225251,1226121,1227181,1228118,1228898,1230156,1231398,1232370,1233620,1234848,1235386,1236652,1237944,1238676,1239295,1240315,124
1379,1241919,1243108,1244e3,1244801,1245739,1246627,1247442,1248519,1249396,1250322,1251371,1252413,1253475,1254350,1255248,1256159,1256746,1257744,1258570,1259427,1260448,1261456,1262489,1263519,1264536,1265852,1267201,1267976,1268832,1269855,1271161,1272536,1273610,1274422,1275416,1276594,1277703,1278701,1279517,1280437,1281350,1282370,1283199,1284239,1285215,1286138,1286998,1287828,1288849,1289766,1290727,1291703,1292631,1293643,1294595,1295457,1296218,1296910,1297984,1298948,1299642,1300540,1301132,1301680,1302556,1303487,1304260,1305286,1306280,1307099,1308015,1309050,1310010,1310761,1311708,1312870,1313915,1314933,1315691,1316476,1317059,1317792,1318468,1319027,1319681,1320514,1321553,1322425,1323407,1324348,1325290,1326117,1326977,1327801,1328593,1329630,1330653,1331584,1332365,1333340,1334310,1335184,1336189,1337049,1337917,1338867,1339716,1340639,1341607,1342548,1343475,1344431,1345307,1346199,1346950,1347950,1348892,1349821,1350555,1351493,1352484,1353480,1354162,1354951,1355867,1356834,1357840,1359013,1360047,1361431,1362330,1363317,1364177,1364994,1365925,1366881,1367786,1368462,1369082,1369709,1370293,1371136,1371680,1372542,1373257,1373919,1374686,1375552,1376330,1377124,1377883,1378695,1379647,1380307,1381025,1381752,1382464,1382997,1383691,1384537,1385509,1386074,1386864,1388041,1388920,1389829,1390802,1391812,1392680,1393556,1394248,1395257,1396111,1396883,1397825,1398817,1399973,1400787,1402079,1403372,1404574,1405395,1406267,1407112,1408401,1409251,1410090,1411021,1411833,1412711,1413598,1414676,1415741,1416701,1417826,1419107,1420087,1421076,1421814,1422672,1423738,1424917,1425724,1426393,1427570,1428524,1429566,1430535,1431600,1432596,1433862,1434734,1435733,1436743,1437713,1438763,1440068,1440947,1441798,1442366,1443199,1443926,1444716,1445554,1446547,1447756,1448619,1449485,1450325,1450905,1451835,1452670,1453441,1454535,1455597,1456328,1457337,1458155,1458751,1459528,1460305,1461058,1462179,1463301,1464e3,1465061,1465889,1466471,1467227,1468094,1468780,1469868,1470887,1471921,1472643,1473505,1474242,1475086,1476164,1477031,1477934,1479009,1479929,1480752,1481579,1482173,1483067,1483849,1484584,1485658,1486640,1487499,1488509,1489527,1490219,1490995,1491659,1492524,1493343,1494422,1495502,1496268,1497337,1498309,1499047,1499880,1500482,1501374,1502330,1502980,1504098,1505243,1506396,1507698,1508967,1510579,1511820,1513320,1514001,1514934,1515885,1516832,1517746,1518863,1520212,1521299,1522131,1523045,1523631,1524278,1525226,1526185,1527120,1528002,1528863,1529758,1530739,1531628,1532564,1533702,1534704,1535677,1536823,1537886,1538660,1539362,1540123,1540933,1541693,1542400,1543152,1543833,1544553,1545153,1546264,1547020,1548170,1549269,1550098,1550705,1551365,1551996,1552633,1553515,1554047,1554588,1555086,1555850,1556795,1557442,1558742,1559660,1560417,1561357,1562317,1562983,1563894,1564452,1565061,1565926,1567013,1568074,1569116,1570146,1571092,1572257,1572994,1573679,1574358,1575338,1576149,1577016,1577856,1578921,1579822,1580743,1581601,1582439,1583312,1584425,1585332,1586329,1587442,1588387,1589246,1590103,1591166,1592258,1593347,1594111,1594876,1595643,1596368,1597095,1597883,1598649,1599304,1600082,1600722,1601767,1602664,1603684,1604496,1605585,1606762,1607931,1609024,1610122,1611287,1612371,1613545,1614719,1615589,1616284,1616951,1617827,1618752,1619615,1620368,1621251,1622269,1623174,1624376,1625561,1626401,1627103,1627752,1628647,1629814,1630896,1631834,1632714,1633525,1634541,1635605,1636743,1637707,1638652,1639861,1640911,1641858,1642535,1643228,1644092,
1644853,1645522,1646289,1646923,1647663,1648280,1649365,1650204,1651932,1653854,1655763,1657692,1659603,1661518,1663433,1665353,1667274,1669198,1671126,1673065,1675015,1676919,1678835,1680761,1682684,1684614,1686541,1688452,1690376,1692284,1694203,1696126,1698051,1699981,1701889,1703812,1705730,1707639,1709558,1711474,1713394,1715314,1717236,1719180,1721100,1723033,1724951,1726881,1728798,1730713,1732619,1734540,1736465,1738399,1740295,1742042,1743785,1745518,1747246,1748978,1750711,1752446,1754197,1755923,1757654,1759390,1761130,1762863,1764595,1766329,1768238,1770164,1772080,1774015,1775924,1777824,1779724,1781641,1783560,1785482,1787396,1789315,1791260,1793171,1795075,1797001,1798921,1800842,1802772,1804695,1806610,1808526,1810440,1812364,1814302,1816209,1818127,1820034,1821942,1823855,1825786,1827714,1829618,1831539,1833447,1835382,1837292,1839208,1841143,1843063,1844978,1846880,1848776,1850703,1852620,1854521,1856394,1857710,1858559,1859649,1860492,1861293,1862058,1862850,1863888,1864663,1865414,1866469,1867013,1867564,1868392,1869304,1870333,1871055,1871798,1872326,1872901,1873622,1874908,1876028,1877155,1878030,1878767,1879878,1881132,1882026,1883433,1884471,1885657,1887059,1888243,1888959,1889971,1891284,1892461,1893293,1893794,1894768,1895752,1896551,1897249,1897825,1898714,1899167,1900166,1901064,1902080,1903078,1904191,1905323,1906259,1907293,1908351,1909516,1910737,1911747,1912623,1913325,1913883,1914491,1915054,1915627,1916171,1916766,1917389,1917899,1918423,1918940,1919456,1919964,1920450,1920961,1921471,1921941,1922569,1923271,1923857,1924390,1924916,1925430,1925997,1926554,1927005,1927479,1927948,1928699,1929927,1930950,1932048,1933328,1934526,1935735,1936844,1937744,1938764,1939805,1940617,1941402,1942494,1943643,1944697,1945759,1946323,1946978,1947782,1948641,1949130,1949524,1949842,1950253,1950605,1950961,1951313,1951902,1952379,1952865,1953412,1954119,1954823,1955304,1956096,1956872,1957681,1958290,1959055,1959727,1960281,1960729,1961310,1961935,1962895,1963645,1964331,1964705,1965119,1965500,1965873,1966204,1966560,1966921,1967272,1967607,1967943,1968280,1968604,1968939,1969258,1969601,1969922,1970302,1970667,1970984,1971331,1971841,1972318,1972642,1973079,1973425,1973794,1974237,1974575,1974947,1975407,1975780,1976236,1976575,1977008,1977360,1977790,1978182,1978533,1978986,1979345,1979795,1980299,1980820,1981331,1981698,1982027,1982405,1982755,1983066,1983409,1983728,1984046,1984353,1984681,1985008,1985307,1985639,1986017,1986420,1986784,1987236,1987587,1987931,1988332,1988701,1989037,1989375,1989739,1990356,1991016,1991603,1992069,1992775,1993329,1994044,1994674,1995454,1995863,1996245,1996636,1997026,1997851,1998619,1999452,2000102,2000599,2001078,2001545,2002035,2002782,2003368,2003923,2004864,2005833,2007020,2008194,2009293,2010650,2012007,2012752,2013835,2015133,2016230,2017318,2018261,2019278,2020306,2021537,2022076,2023290],sizes:[1308,1129,1036,958,1168,1173,906,925,1011,774,723,841,349,898,1133,573,1023,1123,1108,1331,1041,951,866,922,845,934,884,1158,1119,881,962,1247,1151,684,987,562,752,1037,1116,1039,957,740,943,939,700,905,852,903,870,632,896,1028,892,998,779,945,879,1075,951,474,1004,1036,951,1189,889,864,860,914,639,1211,1208,1036,1146,1410,1321,1070,1092,692,686,901,675,726,594,655,633,986,813,368,709,819,921,813,964,903,869,730,749,947,726,537,890,835,810,1082,1204,768,774,962,952,720,1073,1024,644,939,1122,1280,653,590,894,1016,1063,703,828,704,1040,1098,1275,986,907,1172,1286,1123,1415,1322,878,1142,1112,1307,1454,834,1257,650,589,1023,1057,1031,871
,836,913,1218,879,862,1005,1264,949,734,634,679,946,965,1012,1009,841,1079,685,690,632,783,1155,1348,1163,1142,1044,993,1019,1147,1031,1179,1003,803,1163,996,998,1079,948,1101,826,1026,1034,936,1018,1107,1096,1161,1365,914,1127,1132,971,1133,1268,1059,1167,1148,1172,1245,1323,1093,1176,1278,1153,723,677,702,535,878,1030,938,1099,1179,801,783,788,832,697,928,877,806,706,617,830,1219,904,1236,1037,946,974,902,954,810,897,966,1102,738,895,818,889,1159,930,1185,877,760,697,854,965,976,1072,933,1209,1161,870,586,640,1213,1215,1056,1042,644,967,834,1089,670,866,1043,929,788,994,1060,921,1013,608,872,1154,705,632,866,1031,684,896,820,516,770,354,639,478,501,916,966,886,827,794,939,720,508,830,713,704,553,743,805,1079,962,747,788,755,833,674,648,350,486,710,485,536,751,877,796,985,516,851,446,515,359,773,818,830,1118,1145,1042,1078,1138,1295,978,822,354,659,1024,1152,986,1016,906,883,905,813,725,777,827,1033,1150,521,861,541,816,955,915,615,1128,1208,808,739,873,922,654,937,1010,855,889,1048,1057,812,810,915,1078,588,1047,657,691,653,773,820,988,980,1404,1137,1030,845,731,911,1112,818,715,669,831,817,950,660,926,791,788,719,898,1039,897,1155,608,867,568,407,589,659,653,655,647,817,850,593,981,978,1236,1214,1073,753,970,928,637,631,947,691,573,998,950,1161,1246,1088,1074,1040,1287,1406,1015,1071,1040,1370,1248,1213,1169,1142,1132,978,904,1207,965,1186,1064,922,1158,1185,911,931,1207,1173,913,908,977,897,680,1120,1262,1003,1044,1043,875,804,878,803,965,913,1187,1125,1080,836,756,1190,739,744,686,981,937,1026,1028,1065,945,1199,1370,888,1089,1001,851,938,1073,771,1028,1178,1006,670,855,917,1119,958,657,1045,1125,1107,693,917,751,1017,1041,1179,1020,912,965,815,969,783,1047,912,880,1015,1128,1093,828,636,1016,718,943,692,892,917,727,714,490,817,610,701,801,802,1043,765,1019,1094,813,1054,736,947,768,783,921,999,1035,808,782,812,874,926,1117,1098,1222,1094,1124,606,1104,680,774,952,1090,690,988,1043,1157,924,1110,1040,958,1210,855,902,832,968,1026,924,961,1082,861,1086,1006,1027,860,1138,799,922,986,952,1074,919,991,871,685,937,781,1114,1085,900,563,742,1014,1019,1076,1130,977,912,864,729,792,600,628,759,945,1129,782,913,853,1019,1177,1193,1287,1327,1060,1044,1196,705,936,866,956,876,978,925,923,1011,1002,1014,958,963,1188,1254,1243,1185,1164,1088,1009,1141,1092,962,829,1007,853,701,870,1062,818,977,853,1023,1096,1274,912,687,873,804,855,980,1345,810,719,1465,524,613,756,700,1203,890,862,1097,1041,1131,985,964,1103,1051,1065,1226,1080,1003,1121,1127,1094,1101,1064,784,1213,1321,570,1190,1101,974,889,1038,1140,1279,1153,1144,1173,1166,1104,1068,1149,1089,1262,1167,1175,1173,1240,1036,1163,1318,1267,1361,1309,922,1260,964,1060,1324,1075,969,1216,1117,1121,994,1182,1383,1290,1072,1126,1169,1286,1230,1107,935,1074,1208,1179,1040,1050,998,1142,930,1350,636,674,996,1030,1015,1136,1150,914,1309,822,1033,1003,919,909,1027,1303,1132,853,504,850,1393,688,433,943,1029,1012,836,906,923,1098,902,825,740,616,857,1048,877,1057,1028,999,630,772,870,988,887,907,824,729,708,959,1020,955,790,907,853,698,1048,1125,1209,1006,968,851,905,895,859,581,825,747,599,649,852,874,984,865,766,658,751,1141,958,656,1069,1132,1061,1187,1081,1149,1164,1285,1329,904,1012,575,942,1087,1448,1447,1459,1105,263,455,78,438,1027,909,1427,1461,1454,1003,424,553,556,543,572,550,553,561,575,559,1029,605,645,701,707,686,701,701,699,696,702,1059,1409,1225,1213,980,724,1170,670,1071,904,895,824,1105,799,742,1158,1259,1026,962,1076,1211,1184,1230,1111,1117,1058,665,1060,1178,923,1030,805,777,798,916,890,1117,733,1268,1058,1002,1098,1028,1240,953,780,
1117,1113,1249,917,840,1070,933,620,661,748,784,856,778,823,628,812,1149,806,963,769,988,991,1060,1331,1275,1214,1253,1049,1082,1084,496,957,1484,1293,1263,1415,1182,1199,774,876,675,807,935,729,798,1056,672,671,615,1e3,1235,1168,891,1007,1144,827,1103,1030,808,1097,1042,1154,1338,1102,1003,1032,892,905,1075,994,1368,774,633,797,988,853,1073,991,818,863,823,857,943,715,1003,1028,1070,1040,646,988,854,984,1110,866,919,987,832,834,747,1162,803,972,892,1065,871,1079,816,583,1218,754,931,731,954,737,842,727,870,647,1031,702,689,629,779,606,761,591,706,638,972,820,862,715,902,749,636,1025,1113,959,1012,863,1063,459,798,808,893,918,921,1087,998,855,610,872,780,971,960,1204,824,942,1013,739,843,879,775,1117,946,780,848,923,1093,1122,1237,861,736,1055,570,517,686,264,884,1156,944,1142,1043,811,611,691,988,1034,894,847,712,929,836,851,671,904,978,899,700,841,951,1018,992,766,943,919,1056,995,1045,770,949,1254,1046,1003,870,1103,951,361,803,832,790,709,1038,1105,1134,1325,1029,828,809,707,796,920,979,825,862,942,767,951,797,1338,966,991,1099,1256,1003,932,820,1064,1147,1172,983,652,641,1015,938,1144,1014,1189,651,824,853,987,1036,1052,1145,815,1068,966,977,944,1110,754,917,906,795,933,749,846,816,932,1025,696,1061,1046,1114,907,803,977,1282,1320,961,874,915,1058,1118,562,628,885,527,633,696,1243,1122,1079,1077,1087,979,1099,794,776,920,976,1049,786,827,735,837,973,829,731,778,1231,1043,883,909,982,865,1018,732,1013,943,1070,946,709,1073,1029,725,907,805,967,617,863,572,1039,1031,660,924,935,1062,967,913,1066,932,254,313,302,292,294,301,701,1145,1056,1230,1287,870,1060,937,780,1258,1242,972,1250,1228,538,1266,1292,732,619,1020,1064,540,1189,892,801,938,888,815,1077,877,926,1049,1042,1062,875,898,911,587,998,826,857,1021,1008,1033,1030,1017,1316,1349,775,856,1023,1306,1375,1074,812,994,1178,1109,998,816,920,913,1020,829,1040,976,923,860,830,1021,917,961,976,928,1012,952,862,761,692,1074,964,694,898,592,548,876,931,773,1026,994,819,916,1035,960,751,947,1162,1045,1018,758,785,583,733,676,559,654,833,1039,872,982,941,942,827,860,824,792,1037,1023,931,781,975,970,874,1005,860,868,950,849,923,968,941,927,956,876,892,751,1e3,942,929,734,938,991,996,682,789,916,967,1006,1173,1034,1384,899,987,860,817,931,956,905,676,620,627,584,843,544,862,715,662,767,866,778,794,759,812,952,660,718,727,712,533,694,846,972,565,790,1177,879,909,973,1010,868,876,692,1009,854,772,942,992,1156,814,1292,1293,1202,821,872,845,1289,850,839,931,812,878,887,1078,1065,960,1125,1281,980,989,738,858,1066,1179,807,669,1177,954,1042,969,1065,996,1266,872,999,1010,970,1050,1305,879,851,568,833,727,790,838,993,1209,863,866,840,580,930,835,771,1094,1062,731,1009,818,596,777,777,753,1121,1122,699,1061,828,582,756,867,686,1088,1019,1034,722,862,737,844,1078,867,903,1075,920,823,827,594,894,782,735,1074,982,859,1010,1018,692,776,664,865,819,1079,1080,766,1069,972,738,833,602,892,956,650,1118,1145,1153,1302,1269,1612,1241,1500,681,933,951,947,914,1117,1349,1087,832,914,586,647,948,959,935,882,861,895,981,889,936,1138,1002,973,1146,1063,774,702,761,810,760,707,752,681,720,600,1111,756,1150,1099,829,607,660,631,637,882,532,541,498,764,945,647,1300,918,757,940,960,666,911,558,609,865,1087,1061,1042,1030,946,1165,737,685,679,980,811,867,840,1065,901,921,858,838,873,1113,907,997,1113,945,859,857,1063,1092,1089,764,765,767,725,727,788,766,655,778,640,1045,897,1020,812,1089,1177,1169,1093,1098,1165,1084,1174,1174,870,695,667,876,925,863,753,883,1018,905,1202,1185,840,702,649,895,1167,1082,938,880,811,1016,1064,1138,964,945,1209,1050,947,677,693,864,761,
669,767,634,740,617,1085,839,1728,1922,1909,1929,1911,1915,1915,1920,1921,1924,1928,1939,1950,1904,1916,1926,1923,1930,1927,1911,1924,1908,1919,1923,1925,1930,1908,1923,1918,1909,1919,1916,1920,1920,1922,1944,1920,1933,1918,1930,1917,1915,1906,1921,1925,1934,1896,1747,1743,1733,1728,1732,1733,1735,1751,1726,1731,1736,1740,1733,1732,1734,1909,1926,1916,1935,1909,1900,1900,1917,1919,1922,1914,1919,1945,1911,1904,1926,1920,1921,1930,1923,1915,1916,1914,1924,1938,1907,1918,1907,1908,1913,1931,1928,1904,1921,1908,1935,1910,1916,1935,1920,1915,1902,1896,1927,1917,1901,1873,1316,849,1090,843,801,765,792,1038,775,751,1055,544,551,828,912,1029,722,743,528,575,721,1286,1120,1127,875,737,1111,1254,894,1407,1038,1186,1402,1184,716,1012,1313,1177,832,501,974,984,799,698,576,889,453,999,898,1016,998,1113,1132,936,1034,1058,1165,1221,1010,876,702,558,608,563,573,544,595,623,510,524,517,516,508,486,511,510,470,628,702,586,533,526,514,567,557,451,474,469,751,1228,1023,1098,1280,1198,1209,1109,900,1020,1041,812,785,1092,1149,1054,1062,564,655,804,859,489,394,318,411,352,356,352,589,477,486,547,707,704,481,792,776,809,609,765,672,554,448,581,625,960,750,686,374,414,381,373,331,356,361,351,335,336,337,324,335,319,343,321,380,365,317,347,510,477,324,437,346,369,443,338,372,460,373,456,339,433,352,430,392,351,453,359,450,504,521,511,367,329,378,350,311,343,319,318,307,328,327,299,332,378,403,364,452,351,344,401,369,336,338,364,617,660,587,466,706,554,715,630,780,409,382,391,390,825,768,833,650,497,479,467,490,747,586,555,941,969,1187,1174,1099,1357,1357,745,1083,1298,1097,1088,943,1017,1028,1231,539,1214,557],successes:[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]};compressedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 
?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_numpy-tests.data")}Module["addRunDependency"]("datafile_numpy-tests.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/site-packages/numpy/conftest.py",start:0,end:4031,audio:0},{filename:"/lib/python3.9/site-packages/numpy/compat/tests/__init__.py",start:4031,end:4031,audio:0},{filename:"/lib/python3.9/site-packages/numpy/compat/tests/test_compat.py",start:4031,end:4507,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_half.py",start:4507,end:28323,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_extint128.py",start:28323,end:33966,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_umath_accuracy.py",start:33966,end:37080,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_arraymethod.py",start:37080,end:39479,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_protocols.py",start:39479,end:40647,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_simd_module.py",start:40647,end:44405,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_function_base.py",start:44405,end:58816,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_ufunc.py",start:58816,end:153279,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test__exceptions.py",start:153279,end:155284,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_nditer.py",start:155284,end:283028,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_umath_complex.py",start:283028,end:306334,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/__init__.py",start:306334,end:306334,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_memmap.py",start:306334,end:313803,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_item_selection.py",start:313803,end:317382,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_shape_base.py",start:317382,end:344630,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_argparse.py",start:344630,end:346607,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_mem_overlap.py",start:346607,end:375691,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_records.py",start:375691,end:395953,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_cpu_dispatcher.py",start:395953,end:397472,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_cython.py",start:397472,end:401001,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_array_coercion.py",start:401001,end:428923,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_numerictypes.py",start:428923,end:449769,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_defchararray.py",start:449769,end:474352,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_scalar_methods.py",start:474352,end:478445,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_scalarinherit.py",start:478445,end:480850,audio:
0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_dtype.py",start:480850,end:541026,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_arrayprint.py",start:541026,end:578202,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_datetime.py",start:578202,end:690767,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_scalarbuffer.py",start:690767,end:696404,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_longdouble.py",start:696404,end:709445,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_numeric.py",start:709445,end:844685,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_einsum.py",start:844685,end:891514,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_abc.py",start:891514,end:893842,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_indexing.py",start:893842,end:947814,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/_locales.py",start:947814,end:950006,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_scalarmath.py",start:950006,end:982697,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_api.py",start:982697,end:1004982,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_overrides.py",start:1004982,end:1025117,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_multiarray.py",start:1025117,end:1361739,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_simd.py",start:1361739,end:1397117,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_unicode.py",start:1397117,end:1409670,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_scalarprint.py",start:1409670,end:1428316,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_conversion_utils.py",start:1428316,end:1434727,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_regression.py",start:1434727,end:1525837,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_casting_unittests.py",start:1525837,end:1553675,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_deprecations.py",start:1553675,end:1599814,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_cpu_features.py",start:1599814,end:1606591,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_print.py",start:1606591,end:1613328,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_scalar_ctors.py",start:1613328,end:1617016,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_getlimits.py",start:1617016,end:1621313,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_machar.py",start:1621313,end:1622379,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_errstate.py",start:1622379,end:1624445,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_indexerrors.py",start:1624445,end:1629575,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/test_umath.py",start:1629575,end:1770119,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/data/umath-validation-set-README.txt",start:1770119,end:1771086,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/data/umath-validation-set-exp.csv",start:1771086,end:1788577,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/data/recarray_from_file.fits",start:1788577,end:179721
7,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/data/umath-validation-set-log.csv",start:1797217,end:1808909,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/data/umath-validation-set-cos.csv",start:1808909,end:1832142,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/data/umath-validation-set-sin.csv",start:1832142,end:1855187,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/data/astype_copy.pkl",start:1855187,end:1855903,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/examples/setup.py",start:1855903,end:1856399,audio:0},{filename:"/lib/python3.9/site-packages/numpy/core/tests/examples/checks.pyx",start:1856399,end:1856987,audio:0},{filename:"/lib/python3.9/site-packages/numpy/distutils/tests/test_shell_utils.py",start:1856987,end:1858941,audio:0},{filename:"/lib/python3.9/site-packages/numpy/distutils/tests/test_from_template.py",start:1858941,end:1860044,audio:0},{filename:"/lib/python3.9/site-packages/numpy/distutils/tests/test_ccompiler_opt_conf.py",start:1860044,end:1866389,audio:0},{filename:"/lib/python3.9/site-packages/numpy/distutils/tests/test_fcompiler_nagfor.py",start:1866389,end:1867491,audio:0},{filename:"/lib/python3.9/site-packages/numpy/distutils/tests/test_misc_util.py",start:1867491,end:1870709,audio:0},{filename:"/lib/python3.9/site-packages/numpy/distutils/tests/__init__.py",start:1870709,end:1870709,audio:0},{filename:"/lib/python3.9/site-packages/numpy/distutils/tests/test_fcompiler.py",start:1870709,end:1871986,audio:0},{filename:"/lib/python3.9/site-packages/numpy/distutils/tests/test_fcompiler_intel.py",start:1871986,end:1873044,audio:0},{filename:"/lib/python3.9/site-packages/numpy/distutils/tests/test_fcompiler_gnu.py",start:1873044,end:1875180,audio:0},{filename:"/lib/python3.9/site-packages/numpy/distutils/tests/test_npy_pkg_config.py",start:1875180,end:1877737,audio:0},{filename:"/lib/python3.9/site-packages/numpy/distutils/tests/test_exec_command.py",start:1877737,end:1885038,audio:0},{filename:"/lib/python3.9/site-packages/numpy/distutils/tests/test_system_info.py",start:1885038,end:1895797,audio:0},{filename:"/lib/python3.9/site-packages/numpy/distutils/tests/test_mingw32ccompiler.py",start:1895797,end:1897406,audio:0},{filename:"/lib/python3.9/site-packages/numpy/distutils/tests/test_build_ext.py",start:1897406,end:1900070,audio:0},{filename:"/lib/python3.9/site-packages/numpy/distutils/tests/test_ccompiler_opt.py",start:1900070,end:1927966,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_return_character.py",start:1927966,end:1931885,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_string.py",start:1931885,end:1932495,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_semicolon_split.py",start:1932495,end:1934009,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/__init__.py",start:1934009,end:1934009,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_callback.py",start:1934009,end:1942196,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_module_doc.py",start:1942196,end:1943146,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_return_real.py",start:1943146,end:1948548,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_array_from_pyobj.py",start:1948548,end:1971358,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_return_integer.py",start:1971358,end:1975934,audio:0},{filename:"/lib/pytho
n3.9/site-packages/numpy/f2py/tests/test_return_logical.py",start:1975934,end:1980777,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_block_docstring.py",start:1980777,end:1981404,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_return_complex.py",start:1981404,end:1986019,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_crackfortran.py",start:1986019,end:1990078,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_abstract_interface.py",start:1990078,end:1991895,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_size.py",start:1991895,end:1993181,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_kind.py",start:1993181,end:1994193,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_quoted_character.py",start:1994193,end:1995120,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/util.py",start:1995120,end:2004708,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_regression.py",start:2004708,end:2006518,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_assumed_shape.py",start:2006518,end:2008080,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_parameter.py",start:2008080,end:2011990,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_common.py",start:2011990,end:2012792,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_compile_function.py",start:2012792,end:2017101,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/test_mixed.py",start:2017101,end:2018012,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/string/char.f90",start:2018012,end:2018630,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/common/block.f",start:2018630,end:2018854,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/regression/inout.f90",start:2018854,end:2019131,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/module_data/module_data_docstring.f90",start:2019131,end:2019355,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/module_data/mod.mod",start:2019355,end:2019767,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/size/foo.f90",start:2019767,end:2020582,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/array_from_pyobj/wrapmodule.c",start:2020582,end:2027865,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/parameter/constant_compound.f90",start:2027865,end:2028334,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/parameter/constant_integer.f90",start:2028334,end:2028946,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/parameter/constant_both.f90",start:2028946,end:2030885,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/parameter/constant_real.f90",start:2030885,end:2031495,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/parameter/constant_non_compound.f90",start:2031495,end:2032104,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/assumed_shape/.f2py_f2cmap",start:2032104,end:2032133,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/assumed_shape/foo_free.f90",start:2032133,end:2032593,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/assumed_shape/foo_use.f90",start:2032593,end:2032862,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2
py/tests/src/assumed_shape/foo_mod.f90",start:2032862,end:2033361,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/assumed_shape/precision.f90",start:2033361,end:2033491,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/mixed/foo_free.f90",start:2033491,end:2033630,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/mixed/foo.f",start:2033630,end:2033715,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/mixed/foo_fixed.f90",start:2033715,end:2033894,audio:0},{filename:"/lib/python3.9/site-packages/numpy/f2py/tests/src/kind/foo.f90",start:2033894,end:2034241,audio:0},{filename:"/lib/python3.9/site-packages/numpy/fft/tests/__init__.py",start:2034241,end:2034241,audio:0},{filename:"/lib/python3.9/site-packages/numpy/fft/tests/test_helper.py",start:2034241,end:2040389,audio:0},{filename:"/lib/python3.9/site-packages/numpy/fft/tests/test_pocketfft.py",start:2040389,end:2053217,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_histograms.py",start:2053217,end:2086889,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_arrayterator.py",start:2086889,end:2088180,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_ufunclike.py",start:2088180,end:2091458,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_financial_expired.py",start:2091458,end:2091816,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_function_base.py",start:2091816,end:2227688,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_nanfunctions.py",start:2227688,end:2266268,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_arraypad.py",start:2266268,end:2320551,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_recfunctions.py",start:2320551,end:2361706,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/__init__.py",start:2361706,end:2361706,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_index_tricks.py",start:2361706,end:2380678,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_shape_base.py",start:2380678,end:2404981,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_utils.py",start:2404981,end:2409541,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_mixins.py",start:2409541,end:2416571,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_twodim_base.py",start:2416571,end:2434929,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test__version.py",start:2434929,end:2436928,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_io.py",start:2436928,end:2539867,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test__datasource.py",start:2539867,end:2550354,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_type_check.py",start:2550354,end:2565473,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_regression.py",start:2565473,end:2573745,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_packbits.py",start:2573745,end:2591291,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test__iotools.py",start:2591291,end:2605034,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_stride_tricks.py",start:2605034,end:2627883,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_arraysetops.py",start:2627883,end:2656312,audio:0},{filename:"/lib/python3.9/site-package
s/numpy/lib/tests/test_polynomial.py",start:2656312,end:2667025,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/test_format.py",start:2667025,end:2705262,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/data/py2-objarr.npy",start:2705262,end:2705520,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/data/win64python2.npy",start:2705520,end:2705616,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/data/py3-objarr.npy",start:2705616,end:2705957,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/data/python3.npy",start:2705957,end:2706053,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/data/py3-objarr.npz",start:2706053,end:2706502,audio:0},{filename:"/lib/python3.9/site-packages/numpy/lib/tests/data/py2-objarr.npz",start:2706502,end:2706868,audio:0},{filename:"/lib/python3.9/site-packages/numpy/linalg/tests/__init__.py",start:2706868,end:2706868,audio:0},{filename:"/lib/python3.9/site-packages/numpy/linalg/tests/test_build.py",start:2706868,end:2708498,audio:0},{filename:"/lib/python3.9/site-packages/numpy/linalg/tests/test_regression.py",start:2708498,end:2714095,audio:0},{filename:"/lib/python3.9/site-packages/numpy/linalg/tests/test_deprecations.py",start:2714095,end:2714735,audio:0},{filename:"/lib/python3.9/site-packages/numpy/linalg/tests/test_linalg.py",start:2714735,end:2789232,audio:0},{filename:"/lib/python3.9/site-packages/numpy/ma/tests/test_core.py",start:2789232,end:2991241,audio:0},{filename:"/lib/python3.9/site-packages/numpy/ma/tests/test_extras.py",start:2991241,end:3059036,audio:0},{filename:"/lib/python3.9/site-packages/numpy/ma/tests/__init__.py",start:3059036,end:3059036,audio:0},{filename:"/lib/python3.9/site-packages/numpy/ma/tests/test_old_ma.py",start:3059036,end:3091301,audio:0},{filename:"/lib/python3.9/site-packages/numpy/ma/tests/test_subclassing.py",start:3091301,end:3103944,audio:0},{filename:"/lib/python3.9/site-packages/numpy/ma/tests/test_mrecords.py",start:3103944,end:3123827,audio:0},{filename:"/lib/python3.9/site-packages/numpy/ma/tests/test_regression.py",start:3123827,end:3126906,audio:0},{filename:"/lib/python3.9/site-packages/numpy/ma/tests/test_deprecations.py",start:3126906,end:3129164,audio:0},{filename:"/lib/python3.9/site-packages/numpy/matrixlib/tests/__init__.py",start:3129164,end:3129164,audio:0},{filename:"/lib/python3.9/site-packages/numpy/matrixlib/tests/test_defmatrix.py",start:3129164,end:3144146,audio:0},{filename:"/lib/python3.9/site-packages/numpy/matrixlib/tests/test_numeric.py",start:3144146,end:3144587,audio:0},{filename:"/lib/python3.9/site-packages/numpy/matrixlib/tests/test_masked_matrix.py",start:3144587,end:3153512,audio:0},{filename:"/lib/python3.9/site-packages/numpy/matrixlib/tests/test_multiarray.py",start:3153512,end:3154066,audio:0},{filename:"/lib/python3.9/site-packages/numpy/matrixlib/tests/test_matrix_linalg.py",start:3154066,end:3156125,audio:0},{filename:"/lib/python3.9/site-packages/numpy/matrixlib/tests/test_regression.py",start:3156125,end:3157052,audio:0},{filename:"/lib/python3.9/site-packages/numpy/matrixlib/tests/test_interaction.py",start:3157052,end:3168927,audio:0},{filename:"/lib/python3.9/site-packages/numpy/polynomial/tests/test_classes.py",start:3168927,end:3187258,audio:0},{filename:"/lib/python3.9/site-packages/numpy/polynomial/tests/test_legendre.py",start:3187258,end:3205931,audio:0},{filename:"/lib/python3.9/site-packages/numpy/polynomial/tests/__init__.py",start:3205931,end:3205931,audio:0},{filename:"/li
b/python3.9/site-packages/numpy/polynomial/tests/test_hermite.py",start:3205931,end:3224508,audio:0},{filename:"/lib/python3.9/site-packages/numpy/polynomial/tests/test_chebyshev.py",start:3224508,end:3245030,audio:0},{filename:"/lib/python3.9/site-packages/numpy/polynomial/tests/test_printing.py",start:3245030,end:3260816,audio:0},{filename:"/lib/python3.9/site-packages/numpy/polynomial/tests/test_hermite_e.py",start:3260816,end:3279727,audio:0},{filename:"/lib/python3.9/site-packages/numpy/polynomial/tests/test_polyutils.py",start:3279727,end:3283306,audio:0},{filename:"/lib/python3.9/site-packages/numpy/polynomial/tests/test_laguerre.py",start:3283306,end:3300817,audio:0},{filename:"/lib/python3.9/site-packages/numpy/polynomial/tests/test_polynomial.py",start:3300817,end:3321055,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/test_generator_mt19937_regressions.py",start:3321055,end:3326721,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/test_extending.py",start:3326721,end:3330224,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/test_seed_sequence.py",start:3330224,end:3333535,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/__init__.py",start:3333535,end:3333535,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/test_randomstate.py",start:3333535,end:3415051,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/test_smoke.py",start:3415051,end:3443234,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/test_generator_mt19937.py",start:3443234,end:3552712,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/test_randomstate_regression.py",start:3552712,end:3560267,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/test_regression.py",start:3560267,end:3565720,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/test_direct.py",start:3565720,end:3582169,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/test_random.py",start:3582169,end:3651929,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/data/philox-testset-1.csv",start:3651929,end:3675781,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/data/pcg64-testset-1.csv",start:3675781,end:3699620,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/data/pcg64-testset-2.csv",start:3699620,end:3723465,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/data/philox-testset-2.csv",start:3723465,end:3747303,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/data/mt19937-testset-1.csv",start:3747303,end:3763147,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/data/mt19937-testset-2.csv",start:3763147,end:3778972,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/data/__init__.py",start:3778972,end:3778972,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/data/sfc64-testset-1.csv",start:3778972,end:3802812,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/data/pcg64dxsm-testset-2.csv",start:3802812,end:3826651,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/data/pcg64dxsm-testset-1.csv",start:3826651,end:3850484,audio:0},{filename:"/lib/python3.9/site-packages/numpy/random/tests/data/sfc64-testset-2.csv",start:3850484,end:3874317,audio:0},{filename:"/lib/python3.9/site-packages/numpy/testing/tests/__init__.py",start:3874317,end:3874317,audio:0},{filename:"/lib/python3.9/site-packages/nump
y/testing/tests/test_doctesting.py",start:3874317,end:3875664,audio:0},{filename:"/lib/python3.9/site-packages/numpy/testing/tests/test_utils.py",start:3875664,end:3931309,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/__init__.py",start:3931309,end:3931309,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/test_runtime.py",start:3931309,end:3933985,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/test_typing_extensions.py",start:3933985,end:3934987,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/test_generic_alias.py",start:3934987,end:3939448,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/test_typing.py",start:3939448,end:3951548,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/test_isfile.py",start:3951548,end:3952405,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/mypy.ini",start:3952405,end:3952545,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/misc/extended_precision.py",start:3952545,end:3952892,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/lib_version.py",start:3952892,end:3953050,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/arithmetic.py",start:3953050,end:3956845,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/index_tricks.py",start:3956845,end:3957330,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/einsumfunc.py",start:3957330,end:3958073,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/numerictypes.py",start:3958073,end:3958457,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/datasource.py",start:3958457,end:3958852,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/array_constructors.py",start:3958852,end:3959862,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/ndarray_misc.py",start:3959862,end:3961038,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/fromnumeric.py",start:3961038,end:3967030,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/ufuncs.py",start:3967030,end:3968377,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/constants.py",start:3968377,end:3968645,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/random.py",start:3968645,end:3971481,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/bitwise_ops.py",start:3971481,end:3971995,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/ufunclike.py",start:3971995,end:3972680,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/ufunc_config.py",start:3972680,end:3973413,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/arrayterator.py",start:3973413,end:3973893,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/arrayprint.py",start:3973893,end:3974415,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/lib_utils.py",start:3974415,end:3974691,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/warnings_and_errors.py",start:3974691,end:3974971,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/scalars.py",start:3974971,end:3977972,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fa
il/array_like.py",start:3977972,end:3978426,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/modules.py",start:3978426,end:3979123,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/dtype.py",start:3979123,end:3979457,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/ndarray.py",start:3979457,end:3979862,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/comparisons.py",start:3979862,end:3980825,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/fail/flatiter.py",start:3980825,end:3981667,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/fromnumeric.py",start:3981667,end:3985409,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/array_constructors.py",start:3985409,end:3987873,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/ufunclike.py",start:3987873,end:3988912,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/warnings_and_errors.py",start:3988912,end:3989084,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/lib_utils.py",start:3989084,end:3989567,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/ndarray_shape_manipulation.py",start:3989567,end:3990207,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/simple.py",start:3990207,end:3992897,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/mod.py",start:3992897,end:3994475,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/literal.py",start:3994475,end:3995774,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/random.py",start:3995774,end:4057597,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/ufuncs.py",start:4057597,end:4058059,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/numeric.py",start:4058059,end:4059537,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/index_tricks.py",start:4059537,end:4061029,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/array_like.py",start:4061029,end:4061922,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/comparisons.py",start:4061922,end:4064914,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/arrayprint.py",start:4064914,end:4065680,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/scalars.py",start:4065680,end:4069187,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/flatiter.py",start:4069187,end:4069361,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/multiarray.py",start:4069361,end:4069895,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/simple_py3.py",start:4069895,end:4069991,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/dtype.py",start:4069991,end:4071064,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/arrayterator.py",start:4071064,end:4071457,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/einsumfunc.py",start:4071457,end:4072833,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/ndarray_misc.py",start:4072833,end:4075549,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/
data/pass/ufunc_config.py",start:4075549,end:4076669,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/arithmetic.py",start:4076669,end:4084330,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/modules.py",start:4084330,end:4084925,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/numerictypes.py",start:4084925,end:4085898,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/lib_version.py",start:4085898,end:4086197,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/ndarray_conversion.py",start:4086197,end:4087823,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/pass/bitwise_ops.py",start:4087823,end:4088793,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/multiarray.py",start:4088793,end:4089766,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/bitwise_ops.py",start:4089766,end:4093475,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/ufunc_config.py",start:4093475,end:4094866,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/nbit_base_example.py",start:4094866,end:4095343,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/modules.py",start:4095343,end:4097281,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/arithmetic.py",start:4097281,end:4122125,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/scalars.py",start:4122125,end:4128009,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/dtype.py",start:4128009,end:4130763,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/nditer.py",start:4130763,end:4131244,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/flatiter.py",start:4131244,end:4132068,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/warnings_and_errors.py",start:4132068,end:4132496,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/ufuncs.py",start:4132496,end:4135493,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/arrayterator.py",start:4135493,end:4136712,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/ufunclike.py",start:4136712,end:4138307,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/ndarray_misc.py",start:4138307,end:4145216,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/ndarray_conversion.py",start:4145216,end:4147351,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/lib_utils.py",start:4147351,end:4148274,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/random.py",start:4148274,end:4297846,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/array_constructors.py",start:4297846,end:4302333,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/numeric.py",start:4302333,end:4305394,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/ndarray_shape_manipulation.py",start:4305394,end:4306400,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/index_tricks.py",start:4306400,end:4310105,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/comparison
s.py",start:4310105,end:4319507,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/datasource.py",start:4319507,end:4320064,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/numerictypes.py",start:4320064,end:4321420,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/constants.py",start:4321420,end:4323162,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/arrayprint.py",start:4323162,end:4323821,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/lib_version.py",start:4323821,end:4324426,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/einsumfunc.py",start:4324426,end:4326343,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/fromnumeric.py",start:4326343,end:4336474,audio:0},{filename:"/lib/python3.9/site-packages/numpy/typing/tests/data/reveal/mod.py",start:4336474,end:4342847,audio:0},{filename:"/lib/python3.9/site-packages/numpy/tests/test_scripts.py",start:4342847,end:4344420,audio:0},{filename:"/lib/python3.9/site-packages/numpy/tests/__init__.py",start:4344420,end:4344420,audio:0},{filename:"/lib/python3.9/site-packages/numpy/tests/test_matlib.py",start:4344420,end:4346272,audio:0},{filename:"/lib/python3.9/site-packages/numpy/tests/test_numpy_version.py",start:4346272,end:4347847,audio:0},{filename:"/lib/python3.9/site-packages/numpy/tests/test_reloading.py",start:4347847,end:4349933,audio:0},{filename:"/lib/python3.9/site-packages/numpy/tests/test_public_api.py",start:4349933,end:4364951,audio:0},{filename:"/lib/python3.9/site-packages/numpy/tests/test_ctypeslib.py",start:4364951,end:4377121,audio:0},{filename:"/lib/python3.9/site-packages/numpy/tests/test_warnings.py",start:4377121,end:4379401,audio:0}],remote_package_size:2027943,package_uuid:"fcfcb03d-841d-4d28-9ffd-ac5b28e857cd"})})(); \ No newline at end of file diff --git "a/spaces/qingxu98/gpt-academic/crazy_functions/\350\257\242\351\227\256\345\244\232\344\270\252\345\244\247\350\257\255\350\250\200\346\250\241\345\236\213.py" "b/spaces/qingxu98/gpt-academic/crazy_functions/\350\257\242\351\227\256\345\244\232\344\270\252\345\244\247\350\257\255\350\250\200\346\250\241\345\236\213.py" deleted file mode 100644 index 8fdf5915d303a54cc7859e9df77ac9acf5311ced..0000000000000000000000000000000000000000 --- "a/spaces/qingxu98/gpt-academic/crazy_functions/\350\257\242\351\227\256\345\244\232\344\270\252\345\244\247\350\257\255\350\250\200\346\250\241\345\236\213.py" +++ /dev/null @@ -1,62 +0,0 @@ -from toolbox import CatchException, update_ui -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -import datetime -@CatchException -def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,用于灵活调整复杂功能的各种参数 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - history = [] # 清空历史,以免输入溢出 - chatbot.append((txt, "正在同时咨询gpt-3.5和gpt-4……")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - - # llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔 - llm_kwargs['llm_model'] = 'gpt-3.5-turbo&gpt-4' # 支持任意数量的llm接口,用&符号分隔 - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=txt, inputs_show_user=txt, - llm_kwargs=llm_kwargs, 
chatbot=chatbot, history=history, - sys_prompt=system_prompt, - retry_times_at_unknown_error=0 - ) - - history.append(txt) - history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 - - -@CatchException -def 同时问询_指定模型(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,用于灵活调整复杂功能的各种参数 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - history = [] # 清空历史,以免输入溢出 - - if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg") - # llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔 - llm_kwargs['llm_model'] = plugin_kwargs.get("advanced_arg", 'chatglm&gpt-3.5-turbo') # 'chatglm&gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔 - - chatbot.append((txt, f"正在同时咨询{llm_kwargs['llm_model']}")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=txt, inputs_show_user=txt, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=history, - sys_prompt=system_prompt, - retry_times_at_unknown_error=0 - ) - - history.append(txt) - history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Crystalicrsoftwarecrackdownloadf.md b/spaces/quidiaMuxgu/Expedit-SAM/Crystalicrsoftwarecrackdownloadf.md deleted file mode 100644 index d0f00a3c91f627ec822d028db68feecc30bd51e7..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Crystalicrsoftwarecrackdownloadf.md +++ /dev/null @@ -1,21 +0,0 @@ -

      crystalicrsoftwarecrackdownloadf


      Download File 🗸 https://geags.com/2uCsFB



      -
      -
      -
      -

      diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Madrasapattinam 2010 Tamil Movie 1080p Bluray Dts Esub VERIFIED.md b/spaces/quidiaMuxgu/Expedit-SAM/Madrasapattinam 2010 Tamil Movie 1080p Bluray Dts Esub VERIFIED.md deleted file mode 100644 index 06c479c46963c368aef0e6678746c4856e59f413..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Madrasapattinam 2010 Tamil Movie 1080p Bluray Dts Esub VERIFIED.md +++ /dev/null @@ -1,12 +0,0 @@ - -

      Madrasapattinam: A Historical Romance Set in Pre-Independence India

      -

      Madrasapattinam is a 2010 Tamil movie that tells the story of a British woman who falls in love with an Indian freedom fighter in 1940s Madras. The movie is directed by A.L. Vijay and stars Arya and Amy Jackson in the lead roles. The movie was praised for its cinematography, music, costumes and performances. It was also a commercial success, grossing over ₹50 crore at the box office.

      -

      Madrasapattinam 2010 Tamil Movie 1080p Bluray Dts Esub


      Download File ->->->-> https://geags.com/2uCsEX



      -

      The movie is available in 1080p resolution with Blu-ray quality and DTS sound. It also has English subtitles for non-Tamil speakers. You can download the movie from this site [^1^] for free and enjoy the historical romance set in pre-independence India.

      The movie begins with an elderly Amy Wilkinson (Amy Jackson) who is terminally ill and wants to return to India to find her long-lost lover. She is accompanied by her granddaughter Catherine (Lisa Lazarus) who is unaware of her grandmother's past. They arrive in Chennai and meet a taxi driver named Parithi (Arya) who claims to know Amy's lover. He takes them to the places where Amy and her lover spent their time together and narrates their story in flashback.

      -

      Amy was the daughter of the governor of Madras and Parithi was a dhobi (washer-man) who worked for the British. They met by chance when Amy's car broke down and Parithi helped her fix it. They soon developed a friendship that blossomed into love despite the social and political barriers. Parithi was also a member of the Indian National Congress and participated in the Quit India Movement against the British rule. Their love faced many challenges and dangers as they tried to escape from the wrath of the British and the communal riots that erupted after the partition of India.

      -

      The movie ends with a twist that reveals the fate of Amy and Parithi and whether they were reunited or not. The movie is a tribute to the people who sacrificed their lives for the freedom of India and also a celebration of the culture and heritage of Madras.

      The movie has a rich and authentic portrayal of the colonial era and the Indian independence movement. The director has used real locations and props to recreate the atmosphere of Madras in the 1940s. The movie also showcases the diversity and beauty of the city, from its beaches and temples to its markets and streets. The movie has a blend of romance, drama, action and comedy that appeals to a wide range of audiences.

      -

      -

      The movie also has a melodious and soulful soundtrack composed by G.V. Prakash Kumar. The songs are sung by various singers such as Hariharan, Roop Kumar Rathod, Chinmayi, Naresh Iyer and others. The songs are based on different genres such as classical, folk, rock and pop. The songs also reflect the mood and theme of the movie, from the playful and cheerful "Pookal Pookum" to the patriotic and emotional "Aaruyire". The songs are also well-choreographed and picturized with stunning visuals.

      -

      The movie has received positive reviews from critics and audiences alike. It has been praised for its story, direction, cinematography, music, costumes and performances. It has also won several awards and nominations at various film festivals and ceremonies. It has been hailed as one of the best Tamil movies of 2010 and one of the finest historical romance movies ever made.

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Mediacom Karaoke Song Book Song Download.md b/spaces/quidiaMuxgu/Expedit-SAM/Mediacom Karaoke Song Book Song Download.md deleted file mode 100644 index e467b93c94ac317de91ba97c1fdc09a47adfa519..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Mediacom Karaoke Song Book Song Download.md +++ /dev/null @@ -1,87 +0,0 @@ - -

      Mediacom Karaoke Song Book Song Download: A Complete Guide

      -

      If you love karaoke, you probably know how important it is to have a good song book. A song book is a collection of songs that you can choose from when you want to sing along with your favorite tunes. A song book can make or break your karaoke experience, as it determines the variety and quality of songs that you can enjoy.

      -

      mediacom karaoke song book song download


Download File https://geags.com/2uCsEl



      -

      One of the most popular karaoke brands in the world is Mediacom. Mediacom karaoke offers a wide range of karaoke products, such as players, microphones, speakers, and accessories. But what makes Mediacom karaoke stand out from the rest is its song book. Mediacom karaoke song book is a digital file that contains thousands of songs in different languages and genres. You can view the song book on your phone, tablet, or computer, and easily find and select the songs that you want to sing.

      -

      But how do you get the Mediacom karaoke song book song download? How do you update it to the latest version? How do you use it to enhance your karaoke experience? In this article, we will answer all these questions and more. We will show you how to download, install, and use the Mediacom karaoke song book app, as well as how to access and play the music video versions of the songs on YouTube. We will also give you some tips and tricks on how to optimize your karaoke performance with Mediacom karaoke song book. So, let's get started!

      - -

      How to Download Mediacom Karaoke Song Book App

      -

The first step to getting the Mediacom karaoke song book song download is to install the Mediacom karaoke song book app. The app gives you a digital song book and lets you view the song lists for the different models of Mediacom karaoke players. You can sort the list by song title or by artist, and even create a playlist of all your favorite songs and save it on your phone.

      -

The app also eliminates the inconvenience of carrying big, bulky song books that keep getting heavier every time an update or new release comes out. The app automatically updates the song list to its latest version whenever you are connected to Wi-Fi.

      -

      -

      To download the app, you need to have an Android device with Android 3.0 or higher. You can download the app from APKPure.com, a website that provides free and safe APK files for Android apps. Here are the steps to download the app:

      -
        -
      1. Go to https://apkpure.com/mediacom-songbook-app/com.mediacomm on your browser.
      2. -
      3. Click on "Download APK" button and wait for the file to be downloaded.
      4. -
      5. Open the file and follow the instructions to install the app on your device.
      6. -
      7. Launch the app and enjoy your Mediacom karaoke song book!
      8. -
      - -

      How to Use Mediacom Karaoke Song Book App

      -

      Once you have downloaded and installed the app, you can start using it to view and select the songs that you want to sing. The app has a simple and user-friendly interface that makes it easy to navigate and use. Here are some of the features and functions of the app:

      -
        -
      • The app has four tabs at the bottom: Home, Song List, Playlist, and Settings.
      • -
      • The Home tab shows you some information about Mediacom karaoke products and services, such as models, features, prices, contact details, etc.
      • -
      • The Song List tab shows you the list of songs that are available for your Mediacom karaoke model. You can choose from different languages and genres, such as English, Hindi, Filipino, Arabic, Pop, Rock, Country, etc.
      • -
      • You can also search for a specific song or artist by using the search bar at the top. You can type in keywords or numbers to find what you are looking for.
      • -
      • You can also sort the list by song title or by artist by tapping on the icons at the top right corner.
      • -
      • To select a song that you want to sing, simply tap on it and it will be added to your playlist.
      • -
      • The Playlist tab shows you all the songs that you have selected for your karaoke session. You can edit your playlist by adding or removing songs, changing their order, or clearing all.
      • -
      • You can also save your playlist by tapping on the save icon at the top right corner. You can name your playlist and access it later from your device.
      • -
      • The Settings tab allows you to customize some options for your app, such as language preference, notification settings, feedback option, etc.
      • -
      - -
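To make the playlist features above more concrete, here is a small, purely illustrative Python sketch of what searching, sorting, and saving a playlist amounts to. It is not the Mediacom app's actual code; the song numbers, titles, and the my_favorites.json filename are made up for the example.

```python
import json

# A tiny made-up song list; in the real app this comes from the Mediacom song book data.
songs = [
    {"number": 10234, "title": "My Way", "artist": "Frank Sinatra"},
    {"number": 20871, "title": "Dancing Queen", "artist": "ABBA"},
    {"number": 30652, "title": "Yesterday", "artist": "The Beatles"},
]

def search(query):
    """Find songs by keyword (title or artist) or by song number, like the app's search bar."""
    q = str(query).lower()
    return [s for s in songs
            if q in s["title"].lower() or q in s["artist"].lower() or q == str(s["number"])]

# Sort by title or by artist, like the two sort icons in the Song List tab.
by_title = sorted(songs, key=lambda s: s["title"])
by_artist = sorted(songs, key=lambda s: s["artist"])

# Build a playlist from selected songs and save it, like the save icon in the Playlist tab.
playlist = search("abba") + search("yesterday")
with open("my_favorites.json", "w") as f:
    json.dump(playlist, f, indent=2)

print([s["title"] for s in playlist])  # ['Dancing Queen', 'Yesterday']
```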

      How to Play Music Video Versions of Songs on YouTube

      -

One of the coolest features of the Mediacom karaoke song book app is that it lets you play the music video versions of the selected songs on YouTube. YouTube hosts millions of music videos that can be streamed for free, so you can watch and listen to your favorite songs while singing along with them.

      -

      To play the music video versions of songs on YouTube, you need to have an internet connection and a YouTube app on your device. Here are the steps to do it:

      -
        -
      1. Select a song that you want to sing from your playlist.
      2. -
      3. Tap on the number of the song title and the app will automatically link to YouTube.
      4. -
      5. Watch and enjoy the music video while singing along with it!
      6. -
      - -
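The app handles this linking for you, but the underlying idea is simple: take the selected song's title and artist and hand them to YouTube as a search query. The sketch below is only a rough Python illustration of that idea, not the app's actual implementation; the song title and artist are made up for the example.

```python
import webbrowser
from urllib.parse import quote_plus

def open_music_video(title, artist):
    """Open a YouTube search for the song's music video in the default browser."""
    query = quote_plus(f"{title} {artist} official music video")
    webbrowser.open(f"https://www.youtube.com/results?search_query={query}")

# Example with a made-up selection from the playlist.
open_music_video("Dancing Queen", "ABBA")
```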

      Tips and Tricks for Optimizing Your Karaoke Performance with Mediacom Karaoke Song Book

      -

      Now that you know how to download, install, use, and play music video versions of songs with Mediacom karaoke song book app, you are ready to rock your karaoke performance! But before you hit that stage (or living room), here are some tips and tricks that can help you optimize your singing skills and impress your audience:

      -
        -
      • Choose songs that suit your vocal range and style. Don't pick songs that are too high or too low for your voice or too fast or too slow for your tempo. Pick songs that match your mood and personality.
      • -
      • Practice before performing. Don't just rely on reading lyrics from a screen. Listen to the original songs several times and memorize them as much as possible. Sing along with them until you feel confident and comfortable.
      • -
• Warm up your voice before singing. Do some vocal exercises, such as humming, breathing deeply, and stretching your mouth and tongue muscles, before starting your karaoke session. This will help prevent vocal strain and improve your tone quality.
      • -
      • Sing with passion and emotion. Don't just sing words; sing feelings! Express yourself through your voice and body language. Smile when singing happy songs; frown when singing sad songs; dance when singing upbeat songs; etc. Make eye contact with your audience (or camera) and engage them with your performance.
      • -
      • Have fun! The most important thing about karaoke is having fun! Don't worry too much about hitting every note perfectly or sounding like a professional singer. Just enjoy yourself and have a good time with your friends or family!
      • -
      - -

      Conclusion

      -

Mediacom karaoke song book song download is a great way to enhance your karaoke experience with Mediacom karaoke products. By downloading and using this app, you can access thousands of songs in different languages and genres, sort them by title or artist, create playlists of your favorite songs, and play the music video versions of them on YouTube, all from one convenient place!

      -

If you love karaoke as much as we do, don't hesitate to get this app today! It will make your karaoke sessions more fun, easy, enjoyable, and memorable. Happy singing!

      -

      How to Update Mediacom Karaoke Song Book Song Download

      -

      One of the advantages of using Mediacom karaoke song book song download is that you can always get the latest songs for your karaoke sessions. Mediacom karaoke regularly releases new songs in different languages and genres, so you can always find something new and exciting to sing.


      To update your Mediacom karaoke song book song download, you need to have a DVD player that is compatible with your Mediacom karaoke model. You can check the list of compatible models on the Mediacom karaoke website or Facebook page. You also need to have a DVD disc that contains the new songs that you want to add to your song book.


      You can order the DVD disc online from the Mediacom karaoke website or from authorized dealers. You can also download the DVD disc image from the Mediacom karaoke website or Facebook page and burn it to a blank DVD disc using your computer.
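Before burning, it's worth checking that the downloaded image isn't corrupted. Assuming Mediacom publishes a checksum for the disc image (check their website or Facebook page; this is an assumption), you can compare it against one computed locally. A minimal sketch:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 checksum of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# "songbook_update.iso" is a placeholder name for the downloaded disc image.
print(sha256_of("songbook_update.iso"))
```

If the printed value matches the published checksum, the image downloaded cleanly and is safe to burn.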


      Once you have the DVD disc, you can follow these steps to update your Mediacom karaoke song book song download:

1. Turn on your DVD player and insert the DVD disc.
2. Turn on your Mediacom karaoke player and connect it to your TV or monitor.
3. Select the "Upgrade" option from the menu and follow the instructions on the screen.
4. Wait for the upgrade process to complete and eject the DVD disc.
5. Restart your Mediacom karaoke player and enjoy your updated song book!

      How to Troubleshoot Mediacom Karaoke Song Book Song Download

Sometimes you may run into problems with your Mediacom karaoke song book song download. For example, you may not be able to view or select certain songs, or you may hit errors or glitches while using the app or playing the music videos. Don't worry; these problems are usually easy to fix with a few simple troubleshooting steps.

      Here are some of the common problems and solutions for your Mediacom karaoke song book song download:

• If you cannot view or select some songs, make sure that you have downloaded and installed the latest version of the app. You can check for updates by going to the Settings tab and tapping on "Check for Updates". If there is a new version available, download and install it.
• If you experience errors or glitches while using the app or playing the music videos, make sure that you have a stable internet connection and enough storage space on your device. You can also try clearing the cache and data of the app: go to your device settings, select "Apps", find and tap on "Mediacom Songbook App", and then tap on "Clear Cache" and "Clear Data". This deletes temporary files that may be causing problems (see the sketch after this list for a scripted alternative).
• If none of these steps work, you can try uninstalling and reinstalling the app by following the steps in the previous section. This will reset the app to its default settings and replace any corrupted files.
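If you prefer to run the "Clear Data" step from a computer, the same effect can be scripted over USB debugging with adb. This is a generic sketch rather than an official procedure, and the package id below is a guess; replace it with the app's real package id (visible in its Play Store URL):

```python
import subprocess

# Hypothetical package id; substitute the real one for the Mediacom Songbook App.
PACKAGE = "com.mediacom.songbook"

def clear_app_data(package: str = PACKAGE) -> None:
    """Equivalent of Settings > Apps > Clear Data, run over adb (USB debugging)."""
    subprocess.run(["adb", "shell", "pm", "clear", package], check=True)

if __name__ == "__main__":
    clear_app_data()
```

Keep in mind that clearing data wipes everything the app stores locally, which may include your playlists, so treat it as a last resort before reinstalling.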

Final Words

We hope that this article has helped you understand how to use Mediacom karaoke song book song download to enhance your karaoke experience. With this app, you can access thousands of songs in different languages and genres, sort them by title or artist, create playlists of your favorite songs, and play the music video versions on YouTube, all from one convenient place.

If you have any questions or feedback about this app, feel free to contact us at mediacomkaraoke@gmail.com. We would love to hear from you!

Thank you for choosing Mediacom karaoke as your karaoke partner. We wish you happy singing!

Conclusion

Mediacom karaoke song book song download is a great way to enhance your karaoke experience with Mediacom karaoke products, putting a large, regularly updated song library and its YouTube music videos in one convenient place.

In this article, we have shown you how to download, install, use, update, and troubleshoot the Mediacom karaoke song book app, and we have shared some tips and tricks for optimizing your karaoke performance with it. We hope that this article has been helpful and informative for you.

If you love karaoke as much as we do, then don't hesitate to get this app today. It will make your karaoke sessions more fun, easy, and memorable. Happy singing!

      \ No newline at end of file diff --git a/spaces/r3gm/Advanced-RVC-Inference/lib/infer_pack/models_onnx.py b/spaces/r3gm/Advanced-RVC-Inference/lib/infer_pack/models_onnx.py deleted file mode 100644 index 963e67b29f828e9fdd096397952054fe77cf3d10..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Advanced-RVC-Inference/lib/infer_pack/models_onnx.py +++ /dev/null @@ -1,819 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - 
return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + 
self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class 
SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x 
+ self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMsNSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - version, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - if version == "v1": - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - else: - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - self.speaker_map = None - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def construct_spkmixmap(self, n_speaker): - self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels)) - for i in range(n_speaker): - self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]])) - self.speaker_map = self.speaker_map.unsqueeze(0) - - def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None): - if self.speaker_map is not None: # [N, S] * [S, B, 1, H] - g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g * self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, -1).transpose(0, 
-2).squeeze(0) # [B, H, N] - else: - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 
1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/models/psp.py b/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/models/psp.py deleted file mode 100644 index 8cc13dbc649488f254b88b49da28665cb24227bd..0000000000000000000000000000000000000000 --- a/spaces/radames/UserControllableLT-Latent-Transformer/interface/pixel2style2pixel/models/psp.py +++ /dev/null @@ -1,134 +0,0 @@ -""" -This file defines the core research contribution -""" -import matplotlib - -matplotlib.use("Agg") -import math - -import torch -from torch import nn -from pixel2style2pixel.models.encoders import psp_encoders -from pixel2style2pixel.models.stylegan2.model import Generator -from pixel2style2pixel.configs.paths_config import model_paths - - -def get_keys(d, name): - if "state_dict" in d: - d = d["state_dict"] - d_filt = {k[len(name) + 1 :]: v for k, v in d.items() if k[: len(name)] == name} - return d_filt - - -class pSp(nn.Module): - def __init__(self, opts): - super(pSp, self).__init__() - self.set_opts(opts) - # compute number of style inputs based on the output resolution - self.opts.n_styles = int(math.log(self.opts.output_size, 2)) * 2 - 2 - # Define architecture - self.encoder = self.set_encoder() - self.decoder = Generator(self.opts.output_size, 512, 8) - self.face_pool = torch.nn.AdaptiveAvgPool2d((256, 256)) - # Load weights if needed - self.load_weights() - - def set_encoder(self): - if self.opts.encoder_type == "GradualStyleEncoder": - encoder = psp_encoders.GradualStyleEncoder(50, "ir_se", self.opts) - elif self.opts.encoder_type == "BackboneEncoderUsingLastLayerIntoW": - encoder = psp_encoders.BackboneEncoderUsingLastLayerIntoW( - 50, "ir_se", self.opts - ) - elif self.opts.encoder_type == "BackboneEncoderUsingLastLayerIntoWPlus": - encoder = psp_encoders.BackboneEncoderUsingLastLayerIntoWPlus( - 50, "ir_se", self.opts - ) - else: - raise Exception("{} is not a valid encoders".format(self.opts.encoder_type)) - return encoder - - def load_weights(self): - if self.opts.checkpoint_path is not None: - print("Loading pSp from checkpoint: {}".format(self.opts.checkpoint_path)) - ckpt = torch.load(self.opts.checkpoint_path, map_location="cpu") - self.encoder.load_state_dict(get_keys(ckpt, "encoder"), strict=True) - self.decoder.load_state_dict(get_keys(ckpt, "decoder"), strict=True) - self.__load_latent_avg(ckpt) - else: - print("Loading encoders weights from irse50!") - encoder_ckpt = torch.load(model_paths["ir_se50"]) - # if input to encoder is not an RGB image, do not load the input layer weights - if self.opts.label_nc != 0: - encoder_ckpt = { - k: v for k, v in encoder_ckpt.items() if "input_layer" not in k - } - 
self.encoder.load_state_dict(encoder_ckpt, strict=False) - print("Loading decoder weights from pretrained!") - ckpt = torch.load(self.opts.stylegan_weights) - self.decoder.load_state_dict(ckpt["g_ema"], strict=False) - if self.opts.learn_in_w: - self.__load_latent_avg(ckpt, repeat=1) - else: - self.__load_latent_avg(ckpt, repeat=self.opts.n_styles) - - def forward( - self, - x, - resize=True, - latent_mask=None, - input_code=False, - randomize_noise=True, - inject_latent=None, - return_latents=False, - alpha=None, - ): - if input_code: - codes = x - else: - codes = self.encoder(x) - # normalize with respect to the center of an average face - if self.opts.start_from_latent_avg: - if self.opts.learn_in_w: - codes = codes + self.latent_avg.repeat(codes.shape[0], 1) - else: - codes = codes + self.latent_avg.repeat(codes.shape[0], 1, 1) - - if latent_mask is not None: - for i in latent_mask: - if inject_latent is not None: - if alpha is not None: - codes[:, i] = ( - alpha * inject_latent[:, i] + (1 - alpha) * codes[:, i] - ) - else: - codes[:, i] = inject_latent[:, i] - else: - codes[:, i] = 0 - - input_is_latent = not input_code - images, result_latent = self.decoder( - [codes], - input_is_latent=input_is_latent, - randomize_noise=randomize_noise, - return_latents=return_latents, - ) - - if resize: - images = self.face_pool(images) - - if return_latents: - return images, result_latent - else: - return images - - def set_opts(self, opts): - self.opts = opts - - def __load_latent_avg(self, ckpt, repeat=None): - if "latent_avg" in ckpt: - self.latent_avg = ckpt["latent_avg"].to(self.opts.device) - if repeat is not None: - self.latent_avg = self.latent_avg.repeat(repeat, 1) - else: - self.latent_avg = None diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/models/stylegan2/model.py b/spaces/radames/UserControllableLT-Latent-Transformer/models/stylegan2/model.py deleted file mode 100644 index 988f19691e078cd7ea7843c54d66d354cc74ae5b..0000000000000000000000000000000000000000 --- a/spaces/radames/UserControllableLT-Latent-Transformer/models/stylegan2/model.py +++ /dev/null @@ -1,714 +0,0 @@ -import math -import random -import torch -from torch import nn -from torch.nn import functional as F - -from models.stylegan2.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, 
kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer('kernel', kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},' - f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})' - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})' - ) - - -class ScaledLeakyReLU(nn.Module): - def __init__(self, negative_slope=0.2): - super().__init__() - - self.negative_slope = negative_slope - - def forward(self, input): - out = F.leaky_relu(input, negative_slope=self.negative_slope) - - return out * math.sqrt(2) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, ' - f'upsample={self.upsample}, downsample={self.downsample})' - ) - - def 
forward(self, input, style, input_is_stylespace=False): - batch, in_channel, height, width = input.shape - - if not input_is_stylespace: - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=self.padding, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out, style - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - self.noise = NoiseInjection() - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None, input_is_stylespace=False): - out, style = self.conv(input, style, input_is_stylespace=input_is_stylespace) - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out, style - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None, input_is_stylespace=False): - out, style = self.conv(input, style, input_is_stylespace=input_is_stylespace) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + 
skip - - return out, style - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - self.style_dim = style_dim - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu' - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res] - self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape)) - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - return_features=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - input_is_stylespace=False, - noise=None, - randomize_noise=True, - return_feature_map=False, - return_s=False - ): - - if not input_is_latent and not input_is_stylespace: - styles = [self.style(s) for s in styles] - - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, f'noise_{i}') for i in range(self.num_layers) - ] - - if truncation < 1 and not input_is_stylespace: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if input_is_stylespace: - latent = styles[0] - elif len(styles) < 2: - inject_index = self.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - else: - latent = styles[0] - - else: - if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = 
torch.cat([latent, latent2], 1) - - style_vector = [] - - if not input_is_stylespace: - out = self.input(latent) - out, out_style = self.conv1(out, latent[:, 0], noise=noise[0]) - style_vector.append(out_style) - - skip, out_style = self.to_rgb1(out, latent[:, 1]) - style_vector.append(out_style) - - i = 1 - else: - out = self.input(latent[0]) - out, out_style = self.conv1(out, latent[0], noise=noise[0], input_is_stylespace=input_is_stylespace) - style_vector.append(out_style) - - skip, out_style = self.to_rgb1(out, latent[1], input_is_stylespace=input_is_stylespace) - style_vector.append(out_style) - - i = 2 - - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - if not input_is_stylespace: - out, out_style1 = conv1(out, latent[:, i], noise=noise1) - out, out_style2 = conv2(out, latent[:, i + 1], noise=noise2) - skip, rgb_style = to_rgb(out, latent[:, i + 2], skip) - if i==7: - feature_map = out - style_vector.extend([out_style1, out_style2, rgb_style]) - i += 2 - else: - out, out_style1 = conv1(out, latent[i], noise=noise1, input_is_stylespace=input_is_stylespace) - out, out_style2 = conv2(out, latent[i + 1], noise=noise2, input_is_stylespace=input_is_stylespace) - skip, rgb_style = to_rgb(out, latent[i + 2], skip, input_is_stylespace=input_is_stylespace) - - style_vector.extend([out_style1, out_style2, rgb_style]) - - i += 3 - - image = skip - - if return_feature_map: - if return_latents: - return image, latent, feature_map - elif return_s: - return image, style_vector, feature_map - else: - return image, feature_map - - if return_latents: - return image, latent - elif return_s: - return image, style_vector - elif return_features: - return image, out - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channel)) - - else: - layers.append(ScaledLeakyReLU(0.2)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = 
channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out diff --git a/spaces/raedeXanto/academic-chatgpt-beta/GpsGate 2.6 Key !!HOT!! Crack.md b/spaces/raedeXanto/academic-chatgpt-beta/GpsGate 2.6 Key !!HOT!! Crack.md deleted file mode 100644 index 10fe7bf678b7cf1c826841ddffffb47c99bfb4c3..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/GpsGate 2.6 Key !!HOT!! Crack.md +++ /dev/null @@ -1,24 +0,0 @@ - -

      GpsGate 2.6 Key Crack: How to Unlock the Full Potential of GPS Tracking Software

      -

      If you are looking for a reliable and powerful software solution for GPS tracking and fleet management, you might have heard of GpsGate 2.6. This software platform is designed to help you monitor, manage, and optimize your vehicles, assets, and personnel in real-time.

      -

      However, you might also wonder how to get GpsGate 2.6 key crack, which is a way to use the software without paying for the license fee. Is it possible? Is it safe? Is it legal? In this article, we will answer these questions and more.

      -

      GpsGate 2.6 Key Crack


      Downloadhttps://tinourl.com/2uL3gI



      -

      What is GpsGate 2.6 and why do you need it?

      -

      GpsGate 2.6 is a leading software platform for web-based GPS tracking and fleet management

      -

      GpsGate 2.6 is a software product developed by Franson Technology, a Swedish company that specializes in GPS solutions. It was first released in 2004 and has since been updated regularly with new features and improvements.

      -

      GpsGate 2.6 is a web-based platform that allows you to track and manage your vehicles, assets, and personnel using GPS devices. You can access your data from any device with an internet browser, such as a computer, tablet, or smartphone.

      -

      GpsGate 2.6 consists of two main components: GpsGate Server and GpsGate Splitter.

      -
• GpsGate Server is the core of the platform that runs on a Windows server or on a cloud service hosted by GpsGate. It collects, stores, processes, and displays your GPS data on a web interface (a generic illustration of that kind of data parsing follows this list).
• GpsGate Splitter is an optional component that runs on a Windows client or mobile device. It enables one GPS device to send signals to multiple applications simultaneously.
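For context on what "collecting and processing GPS data" involves: tracking devices typically report their positions as NMEA 0183 sentences, and a server's first job is to parse those into decimal coordinates. The snippet below is a generic illustration of that step in Python, not GpsGate's own code or API:

```python
def nmea_to_decimal(value: str, hemisphere: str) -> float:
    """Convert NMEA (d)ddmm.mmmm plus a hemisphere letter to signed decimal degrees."""
    degrees, minutes = divmod(float(value), 100.0)
    result = degrees + minutes / 60.0
    return -result if hemisphere in ("S", "W") else result

def parse_gga(sentence: str) -> dict:
    """Extract latitude, longitude, and satellite count from a GPGGA sentence."""
    fields = sentence.split(",")
    return {
        "lat": nmea_to_decimal(fields[2], fields[3]),
        "lon": nmea_to_decimal(fields[4], fields[5]),
        "satellites": int(fields[7]),
    }

# Standard example sentence from the NMEA 0183 documentation.
print(parse_gga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"))
# -> roughly {'lat': 48.1173, 'lon': 11.5167, 'satellites': 8}
```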

      GpsGate 2.6 offers many features and benefits for different users and scenarios


      GpsGate 2.6 is designed to meet the needs of various users and scenarios, such as:

• Fleet owners and managers who want to optimize their vehicle operations, reduce costs, improve safety, and enhance customer service.
• GPS tracking service providers who want to offer their own branded solutions to their clients.
• Individuals who want to track their personal vehicles, assets, or family members.

        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Gran Turismo 4 Exe Full High Quality Version For Pc.rarl.md b/spaces/raedeXanto/academic-chatgpt-beta/Gran Turismo 4 Exe Full High Quality Version For Pc.rarl.md deleted file mode 100644 index a708fcf6e334060ab6c719915816dc6e734036ff..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Gran Turismo 4 Exe Full High Quality Version For Pc.rarl.md +++ /dev/null @@ -1,14 +0,0 @@ -

        Gran Turismo 4: A Racing Game for PlayStation 2


        Gran Turismo 4 is a racing game for the PlayStation 2 console, released in 2004. It is the fourth installment in the main Gran Turismo series and the sixth for the overall series. It was developed by Polyphony Digital and published by Sony Computer Entertainment.


        The game features over 700 cars from 80 manufacturers, ranging from vintage models to modern concepts. It also features 51 tracks, including real-world circuits, city courses, and rally stages. The game offers two main modes: A-Spec and B-Spec. In A-Spec mode, the player controls the car and competes against other drivers. In B-Spec mode, the player acts as a crew chief and instructs the driver how to race.


        Gran Turismo 4 Exe Full Version For Pc.rarl


        Download Zip >> https://tinourl.com/2uKZBY




        Gran Turismo 4 received critical acclaim for its realistic graphics, physics, and sound. It also became a commercial success, selling over 11 million copies worldwide. The game was praised for its variety of cars and tracks, its customization options, and its longevity. However, some critics noted the lack of online multiplayer, the repetitive music, and the difficulty of some events.


Gran Turismo 4 is considered one of the best racing games of all time and one of the most influential games in the genre. It has won several awards and has been included in many lists of the greatest games ever made. It was preceded by the preview release Gran Turismo 4 Prologue and followed by a sequel, Gran Turismo 5.

        Gran Turismo 4: A Review of the Gameplay


        Gran Turismo 4 is a game that aims to simulate the experience of driving various types of cars on different tracks and conditions. The gameplay is divided into two main modes: A-Spec and B-Spec. In A-Spec mode, the player controls the car and competes against other drivers in various events, such as races, time trials, license tests, and missions. The player can choose from a wide range of cars, from classic models to modern supercars, and customize them with different parts and settings. The player can also tune the car's performance, such as engine power, suspension, brakes, and tires.


        In B-Spec mode, the player acts as a crew chief and instructs the driver how to race. The player can set the driver's aggressiveness, overtaking strategy, and pit stops. The player can also speed up the time of the race up to 3×, which is useful for endurance races that can last for hours. The player can switch between A-Spec and B-Spec modes at any time during a race.


The gameplay of Gran Turismo 4 is praised for its realism and depth. A revised physics engine adds a higher level of realism to car performance and behavior, while realistic graphics and sound effects create an immersive atmosphere. The tracks are based on real-world locations such as the Nürburgring, Laguna Seca, and New York City, and individual courses are set in varied weather, surface, and lighting conditions that affect how the cars handle.


        However, the gameplay of Gran Turismo 4 is not without flaws. The game lacks an online multiplayer mode, which was planned but later removed due to technical difficulties. The game also has a repetitive soundtrack that consists mostly of rock and electronic music. The game's difficulty can be frustrating for some players, especially in some license tests and missions that require precise driving skills. The game's AI can also be inconsistent and unfair at times.


        \ No newline at end of file diff --git a/spaces/rahul999r/Rahul_Kannada_TTS/scripts/inference/api.sh b/spaces/rahul999r/Rahul_Kannada_TTS/scripts/inference/api.sh deleted file mode 100644 index 4f6ce2a2147f69e5b3da851c8222bef830056338..0000000000000000000000000000000000000000 --- a/spaces/rahul999r/Rahul_Kannada_TTS/scripts/inference/api.sh +++ /dev/null @@ -1,8 +0,0 @@ -gender='male' -glowdir='../../checkpoints/glow/'$gender'/' -hifidir='../../checkpoints/hifi/'$gender'/' -device='cpu' -lang='en' - - -python ../../utils/inference/api.py -a $glowdir -v $hifidir -d $device -L $lang diff --git a/spaces/rcajegas/WHO_1/style.css b/spaces/rcajegas/WHO_1/style.css deleted file mode 100644 index 1ac7460bb5a60b8378ae0f75571dc093e78c4f57..0000000000000000000000000000000000000000 --- a/spaces/rcajegas/WHO_1/style.css +++ /dev/null @@ -1,27 +0,0 @@ -body { - background-color: skyblue; - background-image: linear-gradient(rgba(255, 255, 255, 0.2) 1px, transparent 1px), - linear-gradient(90deg, rgba(255, 255, 255, 0.2) 1px, transparent 1px), - linear-gradient(rgba(255, 255, 255, 0.1) 1px, transparent 1px), - linear-gradient(90deg, rgba(255, 255, 255, 0.1) 1px, transparent 1px), - linear-gradient(0deg, transparent 10px, rgba(255, 255, 255, 0.3) 10px, rgba(255, 255, 255, 0.3) 20px, transparent 20px, transparent 30px, rgba(255, 255, 255, 0.3) 30px, rgba(255, 255, 255, 0.3) 40px, transparent 40px), - linear-gradient(0deg, transparent 10px, rgba(255, 255, 255, 0.2) 10px, rgba(255, 255, 255, 0.2) 20px, transparent 20px, transparent 30px, rgba(255, 255, 255, 0.2) 30px, rgba(255, 255, 255, 0.2) 40px, transparent 40px); - background-size: 100px 100px, 100px 100px, 50px 50px, 50px 50px, 100% 40px, 100% 20px; - background-position: -2px -2px, -2px -2px, -1px -1px, -1px -1px, 0px 0px, 0px 0px; -} - -#content { - margin: 0 auto; - max-width: 800px; - text-align: center; -} - -#goals { - padding: 30px; -} - -#gif { - border: 5px solid white; - border-radius: 10px; - box-shadow: 0px 0px 10px rgba(0, 0, 0, 0.3); -} diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/7loader V1.6.1d 2 UPD.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/7loader V1.6.1d 2 UPD.md deleted file mode 100644 index 1c1a6d1c4ad7e7436831fae28eec3580df6db300..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/7loader V1.6.1d 2 UPD.md +++ /dev/null @@ -1,6 +0,0 @@ -

        7loader v1.6.1d 2


Download File: https://urlgoal.com/2uCKrz



        -
-
        -
        -
        -

        diff --git a/spaces/reimari/rvc-aa99/infer_pack/attentions.py b/spaces/reimari/rvc-aa99/infer_pack/attentions.py deleted file mode 100644 index 77cb63ffccf3e33badf22d50862a64ba517b487f..0000000000000000000000000000000000000000 --- a/spaces/reimari/rvc-aa99/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from infer_pack import commons -from infer_pack import modules -from infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = 
commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." 
- block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/rholtwo/Easy_button_runwayml-stable-diffusion-v1-5/README.md b/spaces/rholtwo/Easy_button_runwayml-stable-diffusion-v1-5/README.md deleted file mode 100644 index 0a11032edf26435c58e34957231e220398c93ec5..0000000000000000000000000000000000000000 --- a/spaces/rholtwo/Easy_button_runwayml-stable-diffusion-v1-5/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Easy Button Runwayml-stable-diffusion-v1-5 -emoji: ⚡ -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/richardzhangy26/yandian_flow_classification/configs/_base_/schedules/schedule_s_short.py b/spaces/richardzhangy26/yandian_flow_classification/configs/_base_/schedules/schedule_s_short.py deleted file mode 100644 index dea71cb530411533af0eec5170b5d1105c0c0d92..0000000000000000000000000000000000000000 --- a/spaces/richardzhangy26/yandian_flow_classification/configs/_base_/schedules/schedule_s_short.py +++ /dev/null @@ -1,10 +0,0 @@ -# optimizer -optimizer = dict( - type='Adam', lr=0.0001, weight_decay=0.0004, betas=(0.9, 0.999)) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', by_epoch=False, gamma=0.5, step=[300000, 400000, 500000]) -runner = dict(type='IterBasedRunner', max_iters=600000) -checkpoint_config = dict(by_epoch=False, interval=50000) -evaluation = dict(interval=50000, metric='EPE') diff --git a/spaces/robin0307/MMOCR/configs/textdet/textsnake/textsnake_r50_fpn_unet_1200e_ctw1500.py b/spaces/robin0307/MMOCR/configs/textdet/textsnake/textsnake_r50_fpn_unet_1200e_ctw1500.py deleted file mode 100644 index 045e89a3bb1fa44ff33da1d2b8b32b42e396c58b..0000000000000000000000000000000000000000 --- 
a/spaces/robin0307/MMOCR/configs/textdet/textsnake/textsnake_r50_fpn_unet_1200e_ctw1500.py +++ /dev/null @@ -1,33 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_sgd_1200e.py', - '../../_base_/det_models/textsnake_r50_fpn_unet.py', - '../../_base_/det_datasets/ctw1500.py', - '../../_base_/det_pipelines/textsnake_pipeline.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=10, metric='hmean-iou') diff --git a/spaces/rorallitri/biomedical-language-models/logs/An Invitation to Health 16th Edition PDF Download Free A Textbook that Guarantees You are Up-to-Date in the Field of Health.md b/spaces/rorallitri/biomedical-language-models/logs/An Invitation to Health 16th Edition PDF Download Free A Textbook that Guarantees You are Up-to-Date in the Field of Health.md deleted file mode 100644 index a7e99a0bc6561dc58a5d056bb905ef21a7928e87..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/An Invitation to Health 16th Edition PDF Download Free A Textbook that Guarantees You are Up-to-Date in the Field of Health.md +++ /dev/null @@ -1,6 +0,0 @@ -

        aninvitationtohealth16theditionpdfdownloadfree


        Download File >>>>> https://tinurll.com/2uzoiL



- -
        -
        -
        -

        diff --git a/spaces/rorallitri/biomedical-language-models/logs/Friend Of The Family 2 (1996) - Shauna OBrien Jenna Bodnar.avi LINK.md b/spaces/rorallitri/biomedical-language-models/logs/Friend Of The Family 2 (1996) - Shauna OBrien Jenna Bodnar.avi LINK.md deleted file mode 100644 index d13e5c6ad886fad5b6e3089c50010401c3cdb9fa..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Friend Of The Family 2 (1996) - Shauna OBrien Jenna Bodnar.avi LINK.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Friend Of The Family 2 (1996) - Shauna O`Brien, Jenna Bodnar.avi


        Download File ►►►►► https://tinurll.com/2uzoqg



        -
-
        -
        -
        -

        diff --git a/spaces/salmanmapkar/audio-video-transcriber/app.py b/spaces/salmanmapkar/audio-video-transcriber/app.py deleted file mode 100644 index a38837d814cf328d8a7f515fac80d1b774f6c8d9..0000000000000000000000000000000000000000 --- a/spaces/salmanmapkar/audio-video-transcriber/app.py +++ /dev/null @@ -1,388 +0,0 @@ -from __future__ import unicode_literals -import youtube_dl -import yt_dlp -from pydub import AudioSegment -from pyannote.audio import Pipeline -import re -import whisper -import os -import ffmpeg -import subprocess -import gradio as gr -import traceback -import json -pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization", use_auth_token="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") -from pydub.effects import speedup -import moviepy.editor as mp -import datetime -import torch -import pyannote.audio -from pyannote.audio.pipelines.speaker_verification import SpeechBrainPretrainedSpeakerEmbedding #PyannoteAudioPretrainedSpeakerEmbedding -from pyannote.audio import Audio -from pyannote.core import Segment -import wave -import contextlib -from sklearn.cluster import AgglomerativeClustering -import numpy as np -import json -from datetime import timedelta - -from transformers import T5ForConditionalGeneration, T5Tokenizer - -__FILES = set() -wispher_models = list(whisper._MODELS.keys()) - -def correct_grammar(input_text,num_return_sequences=1): - torch_device = 'cuda' if torch.cuda.is_available() else 'cpu' - tokenizer = T5Tokenizer.from_pretrained('deep-learning-analytics/GrammarCorrector') - model = T5ForConditionalGeneration.from_pretrained('deep-learning-analytics/GrammarCorrector').to(torch_device) - batch = tokenizer([input_text],truncation=True,padding='max_length',max_length=len(input_text), return_tensors="pt").to(torch_device) - results = model.generate(**batch,max_length=len(input_text),num_beams=2, num_return_sequences=num_return_sequences, temperature=1.5) - generated_sequences = [] - for generated_sequence_idx, generated_sequence in enumerate(results): - text = tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True, skip_special_tokens=True) - generated_sequences.append(text) - generated_text = "".join(generated_sequences) - _generated_text = "" - for idx, _sentence in enumerate(generated_text.split('.'), 0): - if not idx: - _generated_text+=_sentence+'.' - elif _sentence[:1]!=' ': - _generated_text+=' '+_sentence+'.' - elif _sentence[:1]=='': - pass - else: - _generated_text+=_sentence+'.' 
- return _generated_text - -def CreateFile(filename): - __FILES.add(filename) - return filename - -def RemoveFile(filename): - if (os.path.isfile(filename)): - os.remove(filename) - -def RemoveAllFiles(): - for file in __FILES: - if (os.path.isfile(file)): - os.remove(file) - -def Transcribe_V1(NumberOfSpeakers, SpeakerNames="", audio="temp_audio.wav"): - SPEAKER_DICT = {} - SPEAKERS = [speaker.strip() for speaker in SpeakerNames.split(',') if len(speaker)] - - def GetSpeaker(sp): - speaker = sp - if sp not in list(SPEAKER_DICT.keys()): - if len(SPEAKERS): - t = SPEAKERS.pop(0) - SPEAKER_DICT[sp] = t - speaker = SPEAKER_DICT[sp] - else: - speaker = SPEAKER_DICT[sp] - return speaker - - def millisec(timeStr): - spl = timeStr.split(":") - s = (int)((int(spl[0]) * 60 * 60 + int(spl[1]) * 60 + float(spl[2]) )* 1000) - return s - - def preprocess(audio): - t1 = 0 * 1000 - t2 = 20 * 60 * 1000 - newAudio = AudioSegment.from_wav(audio) - a = newAudio[t1:t2] - spacermilli = 2000 - spacer = AudioSegment.silent(duration=spacermilli) - newAudio = spacer.append(a, crossfade=0) - newAudio.export(audio, format="wav") - return spacermilli, spacer - - def diarization(audio): - as_audio = AudioSegment.from_wav(audio) - DEMO_FILE = {'uri': 'blabal', 'audio': audio} - if NumberOfSpeakers: - dz = pipeline(DEMO_FILE, num_speakers=NumberOfSpeakers) - else: - dz = pipeline(DEMO_FILE) - with open(CreateFile(f"diarization_{audio}.txt"), "w") as text_file: - text_file.write(str(dz)) - dz = open(CreateFile(f"diarization_{audio}.txt")).read().splitlines() - dzList = [] - for l in dz: - start, end = tuple(re.findall('[0-9]+:[0-9]+:[0-9]+\.[0-9]+', string=l)) - start = millisec(start) - end = millisec(end) - lex = GetSpeaker(re.findall('(SPEAKER_[0-9][0-9])', string=l)[0]) - dzList.append([start, end, lex]) - sounds = spacer - segments = [] - dz = open(CreateFile(f"diarization_{audio}.txt")).read().splitlines() - for l in dz: - start, end = tuple(re.findall('[0-9]+:[0-9]+:[0-9]+\.[0-9]+', string=l)) - start = millisec(start) - end = millisec(end) - segments.append(len(sounds)) - sounds = sounds.append(as_audio[start:end], crossfade=0) - sounds = sounds.append(spacer, crossfade=0) - sounds.export(CreateFile(f"dz_{audio}.wav"), format="wav") - return f"dz_{audio}.wav", dzList, segments - - def transcribe(dz_audio): - model = whisper.load_model("medium") - result = model.transcribe(dz_audio) - # for _ in result['segments']: - # print(_['start'], _['end'], _['text']) - captions = [[((caption["start"]*1000)), ((caption["end"]*1000)), caption["text"]] for caption in result['segments']] - conversation = [] - for i in range(len(segments)): - idx = 0 - for idx in range(len(captions)): - if captions[idx][0] >= (segments[i] - spacermilli): - break; - - while (idx < (len(captions))) and ((i == len(segments) - 1) or (captions[idx][1] < segments[i+1])): - c = captions[idx] - start = dzList[i][0] + (c[0] -segments[i]) - if start < 0: - start = 0 - idx += 1 - if not len(conversation): - conversation.append([dzList[i][2], c[2]]) - elif conversation[-1][0] == dzList[i][2]: - conversation[-1][1] += c[2] - else: - conversation.append([dzList[i][2], c[2]]) - #print(f"[{dzList[i][2]}] {c[2]}") - return conversation, ("".join([f"{speaker} --> {text}\n" for speaker, text in conversation])) - - spacermilli, spacer = preprocess(audio) - dz_audio, dzList, segments = diarization(audio) - conversation, t_text = transcribe(dz_audio) - RemoveAllFiles() - return (t_text, ({ "data": [{"speaker": speaker, "text": text} for speaker, text in 
conversation]})) - - -def Transcribe_V2(model, num_speakers, speaker_names, audio="temp_audio.wav"): - model = whisper.load_model(model) - # embedding_model = SpeechBrainPretrainedSpeakerEmbedding("speechbrain/spkrec-ecapa-voxceleb") - - embedding_model = SpeechBrainPretrainedSpeakerEmbedding( - "speechbrain/spkrec-ecapa-voxceleb", - device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - ) - SPEAKER_DICT = {} - default_speaker_names = ['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z'] - SPEAKERS = [speaker.strip() for speaker in speaker_names.split(',') if len(speaker)] - def GetSpeaker(sp): - speaker = sp - if sp not in list(SPEAKER_DICT.keys()): - if len(SPEAKERS): - t = SPEAKERS.pop(0) - SPEAKER_DICT[sp] = t - speaker = SPEAKER_DICT[sp] - elif len(default_speaker_names): - t = default_speaker_names.pop(0) - SPEAKER_DICT[sp] = t - speaker = SPEAKER_DICT[sp] - else: - speaker = SPEAKER_DICT[sp] - return speaker - - # audio = Audio() - def diarization(audio): - def millisec(timeStr): - spl = timeStr.split(":") - s = (int)((int(spl[0]) * 60 * 60 + int(spl[1]) * 60 + float(spl[2]) )* 1000) - return s - as_audio = AudioSegment.from_wav(audio) - DEMO_FILE = {'uri': 'blabal', 'audio': audio} - hparams = pipeline.parameters(instantiated=True) - hparams["segmentation"]["min_duration_off"] -= 0.25 - pipeline.instantiate(hparams) - if num_speakers: - dz = pipeline(DEMO_FILE, num_speakers=num_speakers) - else: - dz = pipeline(DEMO_FILE) - with open(CreateFile(f"diarization_{audio}.txt"), "w") as text_file: - text_file.write(str(dz)) - dz = open(CreateFile(f"diarization_{audio}.txt")).read().splitlines() - print(dz) - dzList = [] - for l in dz: - start, end = tuple(re.findall('[0-9]+:[0-9]+:[0-9]+\.[0-9]+', string=l)) - start = millisec(start) - end = millisec(end) - lex = GetSpeaker(re.findall('(SPEAKER_[0-9][0-9])', string=l)[0]) - dzList.append([start, end, lex]) - return dzList - - def get_output(segments): - # print(segments) - conversation=[] - for (i, segment) in enumerate(segments): - # print(f"{i}, {segment["speaker"]}, {segments[i - 1]["speaker"]}, {}") - if not len(conversation): - conversation.append([str(timedelta(seconds=float(segment['start']))),str(timedelta(seconds=float(segment['end']))),GetSpeaker(segment["speaker"]), segment["text"].lstrip()]) - elif conversation[-1][2] == GetSpeaker(segment["speaker"]): - conversation[-1][3] += segment["text"].lstrip() - else: - conversation.append([str(timedelta(seconds=float(segment['start']))),str(timedelta(seconds=float(segment['end']))),GetSpeaker(segment["speaker"]), segment["text"].lstrip()]) - # if i == 0 or segments[i - 1]["speaker"] != segment["speaker"]: - # if i != 0: - # conversation.append([GetSpeaker(segment["speaker"]), segment["text"][1:]]) # segment["speaker"] + ' ' + str(time(segment["start"])) + '\n\n' - # conversation[-1][1] += segment["text"][1:] - # return output - for idx in range(len(conversation)): - conversation[idx][3] = correct_grammar(conversation[idx][3]) - return ("".join([f"[{start}] - {speaker} \n{text}\n" for start, end, speaker, text in conversation])), ({ "data": [{"start": start, "end":end, "speaker": speaker, "text": text} for start, end, speaker, text in conversation]}) - - def get_duration(path): - with contextlib.closing(wave.open(path,'r')) as f: - frames = f.getnframes() - rate = f.getframerate() - return frames / float(rate) - - def make_embeddings(path, segments, duration): - embeddings = np.zeros(shape=(len(segments), 192)) - for 
i, segment in enumerate(segments): - embeddings[i] = segment_embedding(path, segment, duration) - return np.nan_to_num(embeddings) - - def segment_embedding(path, segment, duration): - start = segment["start"] - # Whisper overshoots the end timestamp in the last segment - end = min(duration, segment["end"]) - clip = Segment(start, end) - waveform, sample_rate = Audio().crop(path, clip) - return embedding_model(waveform[None]) - - def add_speaker_labels(segments, embeddings, num_speakers): - clustering = AgglomerativeClustering(num_speakers).fit(embeddings) - labels = clustering.labels_ - for i in range(len(segments)): - segments[i]["speaker"] = 'SPEAKER ' + str(labels[i] + 1) - - def time(secs): - return datetime.timedelta(seconds=round(secs)) - - duration = get_duration(audio) - if duration > 4 * 60 * 60: - return "Audio duration too long" - - # print(json.dumps(diarization(audio))) - result = model.transcribe(audio) - # print(json.dumps(result)) - - segments = result["segments"] - - num_speakers = min(max(round(num_speakers), 1), len(segments)) - if len(segments) == 1: - segments[0]['speaker'] = 'SPEAKER 1' - else: - embeddings = make_embeddings(audio, segments, duration) - add_speaker_labels(segments, embeddings, num_speakers) - return get_output(segments) - # return output - -def AudioTranscribe(NumberOfSpeakers=None, SpeakerNames="", audio="", retries=5, model='base'): - print(f"{NumberOfSpeakers}, {SpeakerNames}, {retries}") - if retries: - # subprocess.call(['ffmpeg', '-i', audio,'temp_audio.wav']) - try: - subprocess.call(['ffmpeg', '-i', audio,'temp_audio.wav']) - except Exception as ex: - traceback.print_exc() - return AudioTranscribe(NumberOfSpeakers, SpeakerNames, audio, retries-1) - if not (os.path.isfile("temp_audio.wav")): - return AudioTranscribe(NumberOfSpeakers, SpeakerNames, audio, retries-1) - return Transcribe_V2(model, NumberOfSpeakers, SpeakerNames) - else: - raise gr.Error("There is some issue ith Audio Transcriber. Please try again later!") - -def VideoTranscribe(NumberOfSpeakers=None, SpeakerNames="", video="", retries=5, model='base'): - if retries: - try: - clip = mp.VideoFileClip(video) - clip.audio.write_audiofile("temp_audio.wav") - # command = f"ffmpeg -i {video} -ab 160k -ac 2 -ar 44100 -vn temp_audio.wav" - # subprocess.call(command, shell=True) - except Exception as ex: - traceback.print_exc() - return VideoTranscribe(NumberOfSpeakers, SpeakerNames, video, retries-1) - if not (os.path.isfile("temp_audio.wav")): - return VideoTranscribe(NumberOfSpeakers, SpeakerNames, video, retries-1) - return Transcribe_V2(model, NumberOfSpeakers, SpeakerNames) - else: - raise gr.Error("There is some issue ith Video Transcriber. 
Please try again later!") - -def YoutubeTranscribe(NumberOfSpeakers=None, SpeakerNames="", URL="", retries = 5, model='base'): - if retries: - if "youtu" not in URL.lower(): - raise gr.Error(f"{URL} is not a valid youtube URL.") - else: - RemoveFile("temp_audio.wav") - ydl_opts = { - 'format': 'bestaudio/best', - 'outtmpl': 'temp_audio.%(ext)s', - 'postprocessors': [{ - 'key': 'FFmpegExtractAudio', - 'preferredcodec': 'wav', - }], - } - try: - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - ydl.download([URL]) - except: - return YoutubeTranscribe(NumberOfSpeakers, SpeakerNames, URL, retries-1) - stream = ffmpeg.input('temp_audio.m4a') - stream = ffmpeg.output(stream, 'temp_audio.wav') - RemoveFile("temp_audio.m4a") - return Transcribe_V2(model, NumberOfSpeakers, SpeakerNames) - else: - raise gr.Error(f"Unable to get video from {URL}") - - -with gr.Blocks() as yav_ui: - with gr.Row(): - with gr.Column(): - with gr.Tab("Youtube", id=1): - ysz = gr.Dropdown(label="Model Size", choices=wispher_models , value='base') - yinput_nos = gr.Number(label="Number of Speakers", placeholder="2") - yinput_sn = gr.Textbox(label="Name of the Speakers (ordered by the time they speak and separated by comma)", placeholder="If Speaker 1 is first to speak followed by Speaker 2 then -> Speaker 1, Speaker 2") - yinput = gr.Textbox(label="Youtube Link", placeholder="https://www.youtube.com/watch?v=GECcjrYHH8w") - ybutton_transcribe = gr.Button("Transcribe", show_progress=True, scroll_to_output=True) - with gr.Tab("Video", id=2): - vsz = gr.Dropdown(label="Model Size", choices=wispher_models, value='base') - vinput_nos = gr.Number(label="Number of Speakers", placeholder="2") - vinput_sn = gr.Textbox(label="Name of the Speakers (ordered by the time they speak and separated by comma)", placeholder="If Speaker 1 is first to speak followed by Speaker 2 then -> Speaker 1, Speaker 2") - vinput = gr.Video(label="Video") - vbutton_transcribe = gr.Button("Transcribe", show_progress=True, scroll_to_output=True) - with gr.Tab("Audio", id=3): - asz = gr.Dropdown(label="Model Size", choices=wispher_models , value='base') - ainput_nos = gr.Number(label="Number of Speakers", placeholder="2") - ainput_sn = gr.Textbox(label="Name of the Speakers (ordered by the time they speak and separated by comma)", placeholder="If Speaker 1 is first to speak followed by Speaker 2 then -> Speaker 1, Speaker 2") - ainput = gr.Audio(label="Audio", type="filepath") - abutton_transcribe = gr.Button("Transcribe", show_progress=True, scroll_to_output=True) - with gr.Column(): - with gr.Tab("Text"): - output_textbox = gr.Textbox(label="Transcribed Text", lines=15) - with gr.Tab("JSON"): - output_json = gr.JSON(label="Transcribed JSON") - ybutton_transcribe.click( - fn=YoutubeTranscribe, - inputs=[yinput_nos,yinput_sn,yinput, ysz], - outputs=[output_textbox,output_json] - ) - abutton_transcribe.click( - fn=AudioTranscribe, - inputs=[ainput_nos,ainput_sn,ainput, asz], - outputs=[output_textbox,output_json] - ) - vbutton_transcribe.click( - fn=VideoTranscribe, - inputs=[vinput_nos,vinput_sn,vinput, vsz], - outputs=[output_textbox,output_json] - ) -yav_ui.launch(debug=True) \ No newline at end of file diff --git a/spaces/santhosh/NLLB-Translator/langs.py b/spaces/santhosh/NLLB-Translator/langs.py deleted file mode 100644 index e5e849a4f5427f5b22e1e0bcfbe00102ac0eef10..0000000000000000000000000000000000000000 --- a/spaces/santhosh/NLLB-Translator/langs.py +++ /dev/null @@ -1,204 +0,0 @@ -LANGS = [ - "ace_Arab", - "ace_Latn", - "acm_Arab", - "acq_Arab", - 
"aeb_Arab", - "afr_Latn", - "ajp_Arab", - "aka_Latn", - "amh_Ethi", - "apc_Arab", - "arb_Arab", - "ars_Arab", - "ary_Arab", - "arz_Arab", - "asm_Beng", - "ast_Latn", - "awa_Deva", - "ayr_Latn", - "azb_Arab", - "azj_Latn", - "bak_Cyrl", - "bam_Latn", - "ban_Latn", - "bel_Cyrl", - "bem_Latn", - "ben_Beng", - "bho_Deva", - "bjn_Arab", - "bjn_Latn", - "bod_Tibt", - "bos_Latn", - "bug_Latn", - "bul_Cyrl", - "cat_Latn", - "ceb_Latn", - "ces_Latn", - "cjk_Latn", - "ckb_Arab", - "crh_Latn", - "cym_Latn", - "dan_Latn", - "deu_Latn", - "dik_Latn", - "dyu_Latn", - "dzo_Tibt", - "ell_Grek", - "eng_Latn", - "epo_Latn", - "est_Latn", - "eus_Latn", - "ewe_Latn", - "fao_Latn", - "pes_Arab", - "fij_Latn", - "fin_Latn", - "fon_Latn", - "fra_Latn", - "fur_Latn", - "fuv_Latn", - "gla_Latn", - "gle_Latn", - "glg_Latn", - "grn_Latn", - "guj_Gujr", - "hat_Latn", - "hau_Latn", - "heb_Hebr", - "hin_Deva", - "hne_Deva", - "hrv_Latn", - "hun_Latn", - "hye_Armn", - "ibo_Latn", - "ilo_Latn", - "ind_Latn", - "isl_Latn", - "ita_Latn", - "jav_Latn", - "jpn_Jpan", - "kab_Latn", - "kac_Latn", - "kam_Latn", - "kan_Knda", - "kas_Arab", - "kas_Deva", - "kat_Geor", - "knc_Arab", - "knc_Latn", - "kaz_Cyrl", - "kbp_Latn", - "kea_Latn", - "khm_Khmr", - "kik_Latn", - "kin_Latn", - "kir_Cyrl", - "kmb_Latn", - "kon_Latn", - "kor_Hang", - "kmr_Latn", - "lao_Laoo", - "lvs_Latn", - "lij_Latn", - "lim_Latn", - "lin_Latn", - "lit_Latn", - "lmo_Latn", - "ltg_Latn", - "ltz_Latn", - "lua_Latn", - "lug_Latn", - "luo_Latn", - "lus_Latn", - "mag_Deva", - "mai_Deva", - "mal_Mlym", - "mar_Deva", - "min_Latn", - "mkd_Cyrl", - "plt_Latn", - "mlt_Latn", - "mni_Beng", - "khk_Cyrl", - "mos_Latn", - "mri_Latn", - "zsm_Latn", - "mya_Mymr", - "nld_Latn", - "nno_Latn", - "nob_Latn", - "npi_Deva", - "nso_Latn", - "nus_Latn", - "nya_Latn", - "oci_Latn", - "gaz_Latn", - "ory_Orya", - "pag_Latn", - "pan_Guru", - "pap_Latn", - "pol_Latn", - "por_Latn", - "prs_Arab", - "pbt_Arab", - "quy_Latn", - "ron_Latn", - "run_Latn", - "rus_Cyrl", - "sag_Latn", - "san_Deva", - "sat_Beng", - "scn_Latn", - "shn_Mymr", - "sin_Sinh", - "slk_Latn", - "slv_Latn", - "smo_Latn", - "sna_Latn", - "snd_Arab", - "som_Latn", - "sot_Latn", - "spa_Latn", - "als_Latn", - "srd_Latn", - "srp_Cyrl", - "ssw_Latn", - "sun_Latn", - "swe_Latn", - "swh_Latn", - "szl_Latn", - "tam_Taml", - "tat_Cyrl", - "tel_Telu", - "tgk_Cyrl", - "tgl_Latn", - "tha_Thai", - "tir_Ethi", - "taq_Latn", - "taq_Tfng", - "tpi_Latn", - "tsn_Latn", - "tso_Latn", - "tuk_Latn", - "tum_Latn", - "tur_Latn", - "twi_Latn", - "tzm_Tfng", - "uig_Arab", - "ukr_Cyrl", - "umb_Latn", - "urd_Arab", - "uzn_Latn", - "vec_Latn", - "vie_Latn", - "war_Latn", - "wol_Latn", - "xho_Latn", - "ydd_Hebr", - "yor_Latn", - "yue_Hant", - "zho_Hans", - "zho_Hant", - "zul_Latn" -] diff --git a/spaces/sarinam/speaker-anonymization-gan/IMSToucan/README.md b/spaces/sarinam/speaker-anonymization-gan/IMSToucan/README.md deleted file mode 100644 index 8199695cc2a09cab6497e5fcd65653faedf66556..0000000000000000000000000000000000000000 --- a/spaces/sarinam/speaker-anonymization-gan/IMSToucan/README.md +++ /dev/null @@ -1,327 +0,0 @@ -![image](Utility/toucan.png) - -IMS Toucan is a toolkit for teaching, training and using state-of-the-art Speech Synthesis models, developed at the -**Institute for Natural Language Processing (IMS), University of Stuttgart, Germany**. Everything is pure Python and -PyTorch based to keep it as simple and beginner-friendly, yet powerful as possible. 
- -The PyTorch Modules of [Tacotron 2](https://arxiv.org/abs/1712.05884) -and [FastSpeech 2](https://arxiv.org/abs/2006.04558) are taken from -[ESPnet](https://github.com/espnet/espnet), the PyTorch Modules of [HiFiGAN](https://arxiv.org/abs/2010.05646) are taken -from the [ParallelWaveGAN repository](https://github.com/kan-bayashi/ParallelWaveGAN) -which are also authored by the brilliant [Tomoki Hayashi](https://github.com/kan-bayashi). - -For a version of the toolkit that includes TransformerTTS instead of Tacotron 2 and MelGAN instead of HiFiGAN, check out -the TransformerTTS and MelGAN branch. They are separated to keep the code clean, simple and minimal. - ---- - -## Contents - -- [New Features](#new-features) -- [Demonstration](#demonstration) -- [Installation](#installation) - + [Basic Requirements](#basic-requirements) - + [Speaker Embedding](#speaker-embedding) - + [espeak-ng](#espeak-ng) -- [Creating a new Pipeline](#creating-a-new-pipeline) - * [Build a HiFi-GAN Pipeline](#build-a-hifi-gan-pipeline) - * [Build a FastSpeech 2 Pipeline](#build-a-fastspeech-2-pipeline) -- [Training a Model](#training-a-model) -- [Creating a new InferenceInterface](#creating-a-new-inferenceinterface) -- [Using a trained Model for Inference](#using-a-trained-model-for-inference) -- [FAQ](#faq) -- [Citation](#citation) - ---- - -## New Features - -- [As shown in this paper](http://festvox.org/blizzard/bc2021/BC21_DelightfulTTS.pdf) vocoders can be used to perform - super-resolution and spectrogram inversion simultaneously. We added this to our HiFi-GAN vocoder. It now takes 16kHz - spectrograms as input, but produces 48kHz waveforms. -- We officially introduced IMS Toucan in - [our contribution to the Blizzard Challenge 2021](http://festvox.org/blizzard/bc2021/BC21_IMS.pdf). Check out the - bottom of the readme for a bibtex entry. -- We now use articulatory representations of phonemes as the input for all models. This allows us to easily use - multilingual data. -- We provide a checkpoint trained with [model agnostic meta learning](https://arxiv.org/abs/1703.03400) from which you - should be able to fine-tune a model with very little data in almost any language. -- We now use a small self-contained Aligner that is trained with CTC, inspired by - [this implementation](https://github.com/as-ideas/DeepForcedAligner). This allows us to get rid of the dependence on - autoregressive models. Tacotron 2 is thus now also no longer in this branch, but still present in other branches, - similar to TransformerTTS. - ---- - -## Demonstration - -[Here are two sentences](https://drive.google.com/file/d/1ltAyR2EwAbmDo2hgkx1mvUny4FuxYmru/view?usp=sharing) -produced by Tacotron 2 combined with HiFi-GAN, trained on -[Nancy Krebs](https://www.cstr.ed.ac.uk/projects/blizzard/2011/lessac_blizzard2011/) using this toolkit. - -[Here is some speech](https://drive.google.com/file/d/1mZ1LvTlY6pJ5ZQ4UXZ9jbzB651mufBrB/view?usp=sharing) -produced by FastSpeech 2 and MelGAN trained on [LJSpeech](https://keithito.com/LJ-Speech-Dataset/) -using this toolkit. - -And [here is a sentence](https://drive.google.com/file/d/1FT49Jf0yyibwMDbsEJEO9mjwHkHRIGXc/view?usp=sharing) -produced by TransformerTTS and MelGAN trained on [Thorsten](https://github.com/thorstenMueller/deep-learning-german-tts) -using this toolkit. 
- -[Here is some speech](https://drive.google.com/file/d/14nPo2o1VKtWLPGF7e_0TxL8XGI3n7tAs/view?usp=sharing) -produced by a multi-speaker FastSpeech 2 with MelGAN trained on -[LibriTTS](https://research.google/tools/datasets/libri-tts/) using this toolkit. Fans of the videogame Portal may -recognize who was used as the reference speaker for this utterance. - -[Interactive Demo of our entry to the Blizzard Challenge 2021.](https://colab.research.google.com/drive/1bRaySf8U55MRPaxqBr8huWrzCOzlxVqw) -This is based on an older version of the toolkit though. It uses FastSpeech2 and MelGAN as vocoder and is trained on 5 -hours of Spanish. - ---- - -## Installation - -#### Basic Requirements - -To install this toolkit, clone it onto the machine you want to use it on -(should have at least one GPU if you intend to train models on that machine. For inference, you can get by without GPU). -Navigate to the directory you have cloned. We are going to create and activate a -[conda virtual environment](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) -to install the basic requirements into. After creating the environment, the command you need to use to activate the -virtual environment is displayed. The commands below show everything you need to do. - -``` -conda create --prefix ./toucan_conda_venv --no-default-packages python=3.8 - -pip install --no-cache-dir -r requirements.txt - -pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html -``` - -#### Speaker Embedding - -As [NVIDIA has shown](https://arxiv.org/pdf/2110.05798.pdf), you get better results by fine-tuning a pretrained model on -a new speaker, rather than training a multispeaker model. We have thus dropped support for zero-shot multispeaker models -using speaker embeddings. However we still -use [Speechbrain's ECAPA-TDNN](https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb) for a cycle consistency loss to -make adapting to new speakers a bit faster. - -In the current version of the toolkit no further action should be required. When you are using multispeaker for the -first time, it requires an internet connection to download the pretrained models though. - -#### espeak-ng - -And finally you need to have espeak-ng installed on your system, because it is used as backend for the phonemizer. If -you replace the phonemizer, you don't need it. On most Linux environments it will be installed already, and if it is -not, and you have the sufficient rights, you can install it by simply running - -``` -apt-get install espeak-ng -``` - ---- - -## Creating a new Pipeline - -To create a new pipeline to train a HiFiGAN vocoder, you only need a set of audio files. To create a new pipeline for a -FastSpeech 2, you need audio files, corresponding text labels, and an already trained Aligner model to estimate the -duration information that FastSpeech 2 needs as input. Let's go through them in order of increasing complexity. - -### Build a HiFi-GAN Pipeline - -In the directory called -*Utility* there is a file called -*file_lists.py*. In this file you should write a function that returns a list of all the absolute paths to each of the -audio files in your dataset as strings. - -Then go to the directory -*TrainingInterfaces/TrainingPipelines*. In there, make a copy of any existing pipeline that has HiFiGAN in its name. We -will use this as reference and only make the necessary changes to use the new dataset. 
Import the function you have just -written as -*get_file_list*. Now look out for a variable called -*model_save_dir*. This is the default directory that checkpoints will be saved into, unless you specify another one when -calling the training script. Change it to whatever you like. - -Now you need to add your newly created pipeline to the pipeline dictionary in the file -*run_training_pipeline.py* in the top level of the toolkit. In this file, import the -*run* function from the pipeline you just created and give it a speaking name. Now in the -*pipeline_dict*, add your imported function as value and use as key a shorthand that makes sense. And just like that -you're done. - -### Build a FastSpeech 2 Pipeline - -In the directory called -*Utility* there is a file called -*path_to_transcript_dicts.py*. In this file you should write a function that returns a dictionary that has all the -absolute paths to each of the audio files in your dataset as strings as the keys and the textual transcriptions of the -corresponding audios as the values. - -Then go to the directory -*TrainingInterfaces/TrainingPipelines*. In there, make a copy of any existing pipeline that has FastSpeech 2 in its -name. We will use this copy as reference and only make the necessary changes to use the new dataset. Import the function -you have just written as -*build_path_to_transcript_dict*. Since the data will be processed a considerable amount, a cache will be built and saved -as file for quick and easy restarts. So find the variable -*cache_dir* and adapt it to your needs. The same goes for the variable -*save_dir*, which is where the checkpoints will be saved to. This is a default value, you can overwrite it when calling -the pipeline later using a command line argument, in case you want to fine-tune from a checkpoint and thus save into a -different directory. - -In your new pipeline file, look out for the line in which the -*acoustic_model* is loaded. Change the path to the checkpoint of an Aligner model. It can either be the one that is -supplied with the toolkit in the download script, or one that you trained yourself. In the example pipelines, the one -that we provide is finetuned to the dataset it is applied to before it is used to extract durations. - -Since we are using text here, we have to make sure that the text processing is adequate for the language. So check in -*Preprocessing/TextFrontend* whether the TextFrontend already has a language ID (e.g. 'en' and 'de') for the language of -your dataset. If not, you'll have to implement handling for that, but it should be pretty simple by just doing it -analogous to what is there already. Now back in the pipeline, change the -*lang* argument in the creation of the dataset and in the call to the train loop function to the language ID that -matches your data. - -Now navigate to the implementation of the -*train_loop* that is called in the pipeline. In this file, find the function called -*plot_progress_spec*. This function will produce spectrogram plots during training, which is the most important way to -monitor the progress of the training. In there, you may need to add an example sentence for the language of the data you -are using. It should all be pretty clear from looking at it. - -Once this is done, we are almost done, now we just need to make it available to the -*run_training_pipeline.py* file in the top level. In said file, import the -*run* function from the pipeline you just created and give it a speaking name. 
Now in the -*pipeline_dict*, add your imported function as value and use as key a shorthand that makes sense. And that's it. - ---- - -## Training a Model - -Once you have a pipeline built, training is super easy. Just activate your virtual environment and run the command -below. You might want to use something like nohup to keep it running after you log out from the server (then you should -also add -u as option to python) and add an & to start it in the background. Also, you might want to direct the std:out -and std:err into a file using > but all of that is just standard shell use and has nothing to do with the toolkit. - -``` -python run_training_pipeline.py -``` - -You can supply any of the following arguments, but don't have to (although for training you should definitely specify at -least a GPU ID). - -``` ---gpu_id - ---resume_checkpoint - ---resume (if this is present, the furthest checkpoint available will be loaded automatically) - ---finetune (if this is present, the provided checkpoint will be fine-tuned on the data from this pipeline) - ---model_save_dir -``` - -After every epoch, some logs will be written to the console. If the loss becomes NaN, you'll need to use a smaller -learning rate or more warmup steps in the arguments of the call to the training_loop in the pipeline you are running. - -If you get cuda out of memory errors, you need to decrease the batchsize in the arguments of the call to the -training_loop in the pipeline you are running. Try decreasing the batchsize in small steps until you get no more out of -cuda memory errors. Decreasing the batchsize may also require you to use a smaller learning rate. The use of GroupNorm -should make it so that the training remains mostly stable. - -Speaking of plots: in the directory you specified for saving model's checkpoint files and self-explanatory visualization -data will appear. Since the checkpoints are quite big, only the five most recent ones will be kept. Training will stop -after 500,000 for FastSpeech 2, and after 2,500,000 steps for HiFiGAN. Depending on the machine and configuration you -are using this will take multiple days, so verify that everything works on small tests before running the big thing. If -you want to stop earlier, just kill the process, since everything is daemonic all the child-processes should die with -it. In case there are some ghost-processes left behind, you can use the following command to find them and kill them -manually. - -``` -fuser -v /dev/nvidia* -``` - -After training is complete, it is recommended to run -*run_weight_averaging.py*. If you made no changes to the architectures and stuck to the default directory layout, it -will automatically load any models you produced with one pipeline, average their parameters to get a slightly more -robust model and save the result as -*best.pt* in the same directory where all the corresponding checkpoints lie. This also compresses the file size -significantly, so you should do this and then use the -*best.pt* model for inference. - ---- - -## Creating a new InferenceInterface - -To build a new -*InferenceInterface*, which you can then use for super simple inference, we're going to use an existing one as template -again. Make a copy of the -*InferenceInterface*. Change the name of the class in the copy and change the paths to the models to use the trained -models of your choice. Instantiate the model with the same hyperparameters that you used when you created it in the -corresponding training pipeline. 
The last thing to check is the language that you supply to the text frontend. Make sure -it matches what you used during training. - -With your newly created -*InferenceInterface*, you can use your trained models pretty much anywhere, e.g. in other projects. All you need is the -*Utility* directory, the -*Layers* -directory, the -*Preprocessing* directory and the -*InferenceInterfaces* directory (and of course your model checkpoint). That's all the code you need, it works -standalone. - ---- - -## Using a trained Model for Inference - -An -*InferenceInterface* contains two useful methods. They are -*read_to_file* and -*read_aloud*. - -- *read_to_file* takes as input a list of strings and a filename. It will synthesize the sentences in the list and - concatenate them with a short pause inbetween and write them to the filepath you supply as the other argument. - -- *read_aloud* takes just a string, which it will then convert to speech and immediately play using the system's - speakers. If you set the optional argument - *view* to - *True* when calling it, it will also show a plot of the phonemes it produced, the spectrogram it came up with, and the - wave it created from that spectrogram. So all the representations can be seen, text to phoneme, phoneme to spectrogram - and finally spectrogram to wave. - -Those methods are used in demo code in the toolkit. In -*run_interactive_demo.py* and -*run_text_to_file_reader.py*, you can import -*InferenceInterfaces* that you created and add them to the dictionary in each of the files with a shorthand that makes -sense. In the interactive demo, you can just call the python script, then type in the shorthand when prompted and -immediately listen to your synthesis saying whatever you put in next (be wary of out of memory errors for too long -inputs). In the text reader demo script you have to call the function that wraps around the -*InferenceInterface* and supply the shorthand of your choice. It should be pretty clear from looking at it. - ---- - -## FAQ - -Here are a few points that were brought up by users: - -- My error message shows GPU0, even though I specified a different GPU - The way GPU selection works is that the - specified GPU is set as the only visible device, in order to avoid backend stuff running accidentally on different - GPUs. So internally the program will name the device GPU0, because it is the only GPU it can see. It is actually - running on the GPU you specified. - ---- - -This toolkit has been written by Florian Lux (except for the pytorch modules taken -from [ESPnet](https://github.com/espnet/espnet) and -[ParallelWaveGAN](https://github.com/kan-bayashi/ParallelWaveGAN), as mentioned above), so if you come across problems -or questions, feel free to [write a mail](mailto:florian.lux@ims.uni-stuttgart.de). Also let me know if you do something -cool with it. Thank you for reading. - -## Citation - -``` -@inproceedings{lux2021toucan, - title={{The IMS Toucan system for the Blizzard Challenge 2021}}, - author={Florian Lux and Julia Koch and Antje Schweitzer and Ngoc Thang Vu}, - year={2021}, - booktitle={Proc. 
Blizzard Challenge Workshop}, - volume={2021}, - publisher={{Speech Synthesis SIG}} -} -``` diff --git a/spaces/sarinam/speaker-anonymization/demo_inference/demo_anonymization.py b/spaces/sarinam/speaker-anonymization/demo_inference/demo_anonymization.py deleted file mode 100644 index f628d028132045e2bf3def006e7b128d2d8b02c9..0000000000000000000000000000000000000000 --- a/spaces/sarinam/speaker-anonymization/demo_inference/demo_anonymization.py +++ /dev/null @@ -1,78 +0,0 @@ -import json -import torch -import numpy as np -from sklearn.preprocessing import minmax_scale, StandardScaler - -from anonymization import DemoPoolAnonymizer, DemoRandomAnonymizer - -TAGS_TO_MODELS = { - 'pool': 'pool_minmax_ecapa+xvector', - 'random': 'random_in-scale_ecapa+xvector', - 'pool raw': 'pool_raw_ecapa+xvector' -} - -ANON_MODELS = { - 'pool': DemoPoolAnonymizer, - 'random': DemoRandomAnonymizer -} - - -class DemoAnonymizer: - - def __init__(self, model_path, model_tag, device): - self.device = device - self.scaling = None - self.std_scaler = None - self.model_tag = model_tag - - self.dim_ranges = self._load_dim_ranges(model_path / TAGS_TO_MODELS[model_tag]) - self.anonymizer = self._load_anonymizer(model_path / TAGS_TO_MODELS[model_tag]) - - def anonymize_embedding(self, audio, sr): - - anon_embedding = self.anonymizer.anonymize_embedding(audio, sr) - if self.dim_ranges: - anon_embedding = self._scale_embedding(anon_embedding) - return anon_embedding - - def _load_dim_ranges(self, model_dir): - if (model_dir / 'stats_per_dim.json').exists(): - with open(model_dir / 'stats_per_dim.json') as f: - dim_ranges = json.load(f) - return [(v['min'], v['max']) for k, v in sorted(dim_ranges.items(), key=lambda x: int(x[0]))] - - def _load_anonymizer(self, model_dir): - model_name = model_dir.name.lower() - - if 'pool' in model_name: - model_type = 'pool' - else: - model_type = 'random' - - print(f'Model type of anonymizer: {model_type}') - - model = ANON_MODELS[model_type](device=self.device, vec_type='ecapa+xvector') - model.load_parameters(model_dir) - - if 'minmax' in model_name: - # self.scaling = 'minmax' - #elif 'std_scale' in model_name and model_type == 'pool': - self.scaling = 'std' - self.std_scaler = StandardScaler() - self.std_scaler.fit(model.pool_embeddings.cpu().numpy()) - - return model - - def _scale_embedding(self, vector): - if self.scaling == 'minmax': - vector = vector.cpu().numpy() - scaled_dims = [] - for i in range(len(self.dim_ranges)): - scaled_dims.append(minmax_scale(np.array([vector[i]]), self.dim_ranges[i])[0]) - - vector = torch.tensor(scaled_dims).to(self.device) - elif self.scaling == 'std': - vector = vector.unsqueeze(0).cpu().numpy() - vector = torch.tensor(self.std_scaler.transform(vector)[0]) - - return vector diff --git a/spaces/segments/panoptic-segment-anything/segment_anything/README.md b/spaces/segments/panoptic-segment-anything/segment_anything/README.md deleted file mode 100644 index 6256d2b7f5a387988338d538df4e699eb17ba702..0000000000000000000000000000000000000000 --- a/spaces/segments/panoptic-segment-anything/segment_anything/README.md +++ /dev/null @@ -1,107 +0,0 @@ -# Segment Anything - -**[Meta AI Research, FAIR](https://ai.facebook.com/research/)** - -[Alexander Kirillov](https://alexander-kirillov.github.io/), [Eric Mintun](https://ericmintun.github.io/), [Nikhila Ravi](https://nikhilaravi.com/), [Hanzi Mao](https://hanzimao.me/), Chloe Rolland, Laura Gustafson, [Tete Xiao](https://tetexiao.com), [Spencer Whitehead](https://www.spencerwhitehead.com/), Alex Berg, 
Wan-Yen Lo, [Piotr Dollar](https://pdollar.github.io/), [Ross Girshick](https://www.rossgirshick.info/) - -[[`Paper`](https://ai.facebook.com/research/publications/segment-anything/)] [[`Project`](https://segment-anything.com/)] [[`Demo`](https://segment-anything.com/demo)] [[`Dataset`](https://segment-anything.com/dataset/index.html)] [[`Blog`](https://ai.facebook.com/blog/segment-anything-foundation-model-image-segmentation/)] - -![SAM design](assets/model_diagram.png?raw=true) - -The **Segment Anything Model (SAM)** produces high quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a [dataset](https://segment-anything.com/dataset/index.html) of 11 million images and 1.1 billion masks, and has strong zero-shot performance on a variety of segmentation tasks. - -

        - - -

        - -## Installation - -The code requires `python>=3.8`, as well as `pytorch>=1.7` and `torchvision>=0.8`. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended. - -Install Segment Anything: - -``` -pip install git+https://github.com/facebookresearch/segment-anything.git -``` - -or clone the repository locally and install with - -``` -git clone git@github.com:facebookresearch/segment-anything.git -cd segment-anything; pip install -e . -``` - -The following optional dependencies are necessary for mask post-processing, saving masks in COCO format, the example notebooks, and exporting the model in ONNX format. `jupyter` is also required to run the example notebooks. -``` -pip install opencv-python pycocotools matplotlib onnxruntime onnx -``` - - -## Getting Started - -First download a [model checkpoint](#model-checkpoints). Then the model can be used in just a few lines to get masks from a given prompt: - -``` -from segment_anything import build_sam, SamPredictor -predictor = SamPredictor(build_sam(checkpoint="
        ")) -predictor.set_image() -masks, _, _ = predictor.predict() -``` - -or generate masks for an entire image: - -``` -from segment_anything import build_sam, SamAutomaticMaskGenerator -mask_generator = SamAutomaticMaskGenerator(build_sam(checkpoint="
        ")) -masks = mask_generator_generate() -``` - -Additionally, masks can be generated for images from the command line: - -``` -python scripts/amg.py --checkpoint --input --output -``` - -See the examples notebooks on [using SAM with prompts](/notebooks/predictor_example.ipynb) and [automatically generating masks](/notebooks/automatic_mask_generator_example.ipynb) for more details. - -

        - - -

        - -## ONNX Export - -SAM's lightweight mask decoder can be exported to ONNX format so that it can be run in any environment that supports ONNX runtime, such as in-browser as showcased in the [demo](https://segment-anything.com/demo). Export the model with - -``` -python scripts/export_onnx_model.py --checkpoint --output -``` - -See the [example notebook](https://github.com/facebookresearch/segment-anything/blob/main/notebooks/onnx_model_example.ipynb) for details on how to combine image preprocessing via SAM's backbone with mask prediction using the ONNX model. It is recommended to use the latest stable version of PyTorch for ONNX export. - -## Model Checkpoints - -Three model versions of the model are available with different backbone sizes. These models can be instantiated by running -``` -from segment_anything import sam_model_registry -sam = sam_model_registry[""](checkpoint="") -``` -Click the links below to download the checkpoint for the corresponding model name. The default model in bold can also be instantiated with `build_sam`, as in the examples in [Getting Started](#getting-started). - -* **`default` or `vit_h`: [ViT-H SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth)** -* `vit_l`: [ViT-L SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth) -* `vit_b`: [ViT-B SAM model.](https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth) - -## License -The model is licensed under the [Apache 2.0 license](LICENSE). - -## Contributing - -See [contributing](CONTRIBUTING.md) and the [code of conduct](CODE_OF_CONDUCT.md). - -## Contributors - -The Segment Anything project was made possible with the help of many contributors (alphabetical): - -Aaron Adcock, Vaibhav Aggarwal, Morteza Behrooz, Cheng-Yang Fu, Ashley Gabriel, Ahuva Goldstand, Allen Goodman, Sumanth Gurram, Jiabo Hu, Somya Jain, Devansh Kukreja, Robert Kuo, Joshua Lane, Yanghao Li, Lilian Luong, Jitendra Malik, Mallika Malhotra, William Ngan, Omkar Parkhi, Nikhil Raina, Dirk Rowe, Neil Sejoor, Vanessa Stark, Bala Varadarajan, Bram Wasti, Zachary Winstrom diff --git a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/train/process_ckpt.py b/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/train/process_ckpt.py deleted file mode 100644 index a9f0f0810730da2543a08d64d4a30f609ec7a272..0000000000000000000000000000000000000000 --- a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/train/process_ckpt.py +++ /dev/null @@ -1,248 +0,0 @@ -import torch, traceback, os, pdb -from collections import OrderedDict - - -def savee(ckpt, sr, if_f0, name, epoch): - try: - opt = OrderedDict() - opt["weight"] = {} - for key in ckpt.keys(): - if "enc_q" in key: - continue - opt["weight"][key] = ckpt[key].half() - if sr == "40k": - opt["config"] = [ - 1025, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 10, 2, 2], - 512, - [16, 16, 4, 4], - 109, - 256, - 40000, - ] - elif sr == "48k": - opt["config"] = [ - 1025, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 6, 2, 2, 2], - 512, - [16, 16, 4, 4, 4], - 109, - 256, - 48000, - ] - elif sr == "32k": - opt["config"] = [ - 513, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 4, 2, 2, 2], - 512, - [16, 16, 
4, 4, 4], - 109, - 256, - 32000, - ] - opt["info"] = "%sepoch" % epoch - opt["sr"] = sr - opt["f0"] = if_f0 - torch.save(opt, "weights/%s.pth" % name) - return "Success." - except: - return traceback.format_exc() - - -def show_info(path): - try: - a = torch.load(path, map_location="cpu") - return "模型信息:%s\n采样率:%s\n模型是否输入音高引导:%s" % ( - a.get("info", "None"), - a.get("sr", "None"), - a.get("f0", "None"), - ) - except: - return traceback.format_exc() - - -def extract_small_model(path, name, sr, if_f0, info): - try: - ckpt = torch.load(path, map_location="cpu") - if "model" in ckpt: - ckpt = ckpt["model"] - opt = OrderedDict() - opt["weight"] = {} - for key in ckpt.keys(): - if "enc_q" in key: - continue - opt["weight"][key] = ckpt[key].half() - if sr == "40k": - opt["config"] = [ - 1025, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 10, 2, 2], - 512, - [16, 16, 4, 4], - 109, - 256, - 40000, - ] - elif sr == "48k": - opt["config"] = [ - 1025, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 6, 2, 2, 2], - 512, - [16, 16, 4, 4, 4], - 109, - 256, - 48000, - ] - elif sr == "32k": - opt["config"] = [ - 513, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 4, 2, 2, 2], - 512, - [16, 16, 4, 4, 4], - 109, - 256, - 32000, - ] - if info == "": - info = "Extracted model." - opt["info"] = info - opt["sr"] = sr - opt["f0"] = int(if_f0) - torch.save(opt, "weights/%s.pth" % name) - return "Success." - except: - return traceback.format_exc() - - -def change_info(path, info, name): - try: - ckpt = torch.load(path, map_location="cpu") - ckpt["info"] = info - if name == "": - name = os.path.basename(path) - torch.save(ckpt, "weights/%s" % name) - return "Success." - except: - return traceback.format_exc() - - -def merge(path1, path2, alpha1, sr, f0, info, name): - try: - - def extract(ckpt): - a = ckpt["model"] - opt = OrderedDict() - opt["weight"] = {} - for key in a.keys(): - if "enc_q" in key: - continue - opt["weight"][key] = a[key] - return opt - - ckpt1 = torch.load(path1, map_location="cpu") - ckpt2 = torch.load(path2, map_location="cpu") - cfg = ckpt1["config"] - if "model" in ckpt1: - ckpt1 = extract(ckpt1) - else: - ckpt1 = ckpt1["weight"] - if "model" in ckpt2: - ckpt2 = extract(ckpt2) - else: - ckpt2 = ckpt2["weight"] - if sorted(list(ckpt1.keys())) != sorted(list(ckpt2.keys())): - return "Fail to merge the models. The model architectures are not the same." 
- opt = OrderedDict() - opt["weight"] = {} - for key in ckpt1.keys(): - # try: - if key == "emb_g.weight" and ckpt1[key].shape != ckpt2[key].shape: - min_shape0 = min(ckpt1[key].shape[0], ckpt2[key].shape[0]) - opt["weight"][key] = ( - alpha1 * (ckpt1[key][:min_shape0].float()) - + (1 - alpha1) * (ckpt2[key][:min_shape0].float()) - ).half() - else: - opt["weight"][key] = ( - alpha1 * (ckpt1[key].float()) + (1 - alpha1) * (ckpt2[key].float()) - ).half() - # except: - # pdb.set_trace() - opt["config"] = cfg - """ - if(sr=="40k"):opt["config"] = [1025, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10, 10, 2, 2], 512, [16, 16, 4, 4,4], 109, 256, 40000] - elif(sr=="48k"):opt["config"] = [1025, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10,6,2,2,2], 512, [16, 16, 4, 4], 109, 256, 48000] - elif(sr=="32k"):opt["config"] = [513, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10, 4, 2, 2, 2], 512, [16, 16, 4, 4,4], 109, 256, 32000] - """ - opt["sr"] = sr - opt["f0"] = 1 if f0 == "是" else 0 - opt["info"] = info - torch.save(opt, "weights/%s.pth" % name) - return "Success." - except: - return traceback.format_exc() diff --git a/spaces/shgao/EditAnything/annotator/util.py b/spaces/shgao/EditAnything/annotator/util.py deleted file mode 100644 index 553608714bda8fb6e76646e452176783179bb6cf..0000000000000000000000000000000000000000 --- a/spaces/shgao/EditAnything/annotator/util.py +++ /dev/null @@ -1,73 +0,0 @@ -import numpy as np -import cv2 -import os - - -annotator_ckpts_path = os.path.join(os.path.dirname(__file__), 'ckpts') - - -def HWC3(x): - assert x.dtype == np.uint8 - if x.ndim == 2: - x = x[:, :, None] - assert x.ndim == 3 - H, W, C = x.shape - assert C == 1 or C == 3 or C == 4 - if C == 3: - return x - if C == 1: - return np.concatenate([x, x, x], axis=2) - if C == 4: - color = x[:, :, 0:3].astype(np.float32) - alpha = x[:, :, 3:4].astype(np.float32) / 255.0 - y = color * alpha + 255.0 * (1.0 - alpha) - y = y.clip(0, 255).astype(np.uint8) - return y - - -def resize_image(input_image, resolution): - H, W, C = input_image.shape - H = float(H) - W = float(W) - k = float(resolution) / min(H, W) - H *= k - W *= k - H = int(np.round(H / 64.0)) * 64 - W = int(np.round(W / 64.0)) * 64 - img = cv2.resize(input_image, (W, H), interpolation=cv2.INTER_LANCZOS4 if k > 1 else cv2.INTER_AREA) - return img - -def resize_points(clicked_points, original_shape, resolution): - original_height, original_width, _ = original_shape - original_height = float(original_height) - original_width = float(original_width) - - scale_factor = float(resolution) / min(original_height, original_width) - resized_points = [] - - for point in clicked_points: - x, y, lab = point - resized_x = int(round(x * scale_factor)) - resized_y = int(round(y * scale_factor)) - resized_point = (resized_x, resized_y, lab) - resized_points.append(resized_point) - - return resized_points - -def get_bounding_box(mask): - # Convert PIL Image to numpy array - mask = np.array(mask).astype(np.uint8) - - # Take the first channel (R) of the mask - mask = mask[:,:,0] - - # Get the indices of elements that are not zero - rows = np.any(mask, axis=0) - cols = np.any(mask, axis=1) - - # Get the minimum and maximum indices where the elements are not zero - rmin, rmax = np.where(rows)[0][[0, -1]] - cmin, cmax = np.where(cols)[0][[0, -1]] - - # Return as [xmin, ymin, xmax, ymax] - return [rmin, cmin, rmax, cmax] diff --git 
a/spaces/shiyi11/QQsign/devices/device_8963.js b/spaces/shiyi11/QQsign/devices/device_8963.js deleted file mode 100644 index f1bf97749204e374f59d7971ad55c991e97e19af..0000000000000000000000000000000000000000 --- a/spaces/shiyi11/QQsign/devices/device_8963.js +++ /dev/null @@ -1,344 +0,0 @@ -"use strict"; -var __importDefault = (this && this.__importDefault) || function (mod) { - return (mod && mod.__esModule) ? mod : { "default": mod }; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -exports.getApkInfo = exports.Platform = exports.Device = exports.generateFullDevice = exports.generateShortDevice = void 0; -const crypto_1 = require("crypto"); -const constants_1 = require("./constants"); -const axios_1 = __importDefault(require("axios")); -const algo_1 = require("./algo"); -function generateImei() { - let imei = `86${(0, constants_1.randomString)(12, '0123456789')}`; - function calcSP(imei) { - let sum = 0; - for (let i = 0; i < imei.length; ++i) { - if (i % 2) { - let j = parseInt(imei[i]) * 2; - sum += j % 10 + Math.floor(j / 10); - } - else { - sum += parseInt(imei[i]); - } - } - return (100 - sum) % 10; - } - return imei + calcSP(imei); -} -/** 生成短设备信息 */ -function generateShortDevice() { - const randstr = (length, num = false) => { - const map = num ? '0123456789' : '0123456789abcdef'; - return (0, constants_1.randomString)(length, map); - }; - return { - "--begin--": "该设备为随机生成,丢失后不能得到原先配置", - product: `ILPP-${randstr(5).toUpperCase()}`, - device: `${randstr(5).toUpperCase()}`, - board: `${randstr(5).toUpperCase()}`, - brand: `${randstr(4).toUpperCase()}`, - model: `ICQQ ${randstr(4).toUpperCase()}`, - wifi_ssid: `HUAWEI-${randstr(7)}`, - bootloader: `U-boot`, - android_id: `IL.${randstr(7, true)}.${randstr(4, true)}`, - boot_id: `${randstr(8)}-${randstr(4)}-${randstr(4)}-${randstr(4)}-${randstr(12)}`, - proc_version: `Linux version 5.10.101-android12-${randstr(8)}`, - mac_address: `2D:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}`, - ip_address: `192.168.${randstr(2, true)}.${randstr(2, true)}`, - imei: `${generateImei()}`, - incremental: `${randstr(10, true).toUpperCase()}`, - "--end--": "修改后可能需要重新验证设备。" - }; -} -exports.generateShortDevice = generateShortDevice; -/** 生成完整设备信息 */ -function generateFullDevice(apk, d) { - if (!d) - d = generateShortDevice(); - return { - display: d.android_id, - product: d.product, - device: d.device, - board: d.board, - brand: d.brand, - model: d.model, - bootloader: d.bootloader, - fingerprint: `${d.brand}/${d.product}/${d.device}:10/${d.android_id}/${d.incremental}:user/release-keys`, - boot_id: d.boot_id, - proc_version: d.proc_version, - baseband: "", - sim: "T-Mobile", - os_type: "android", - mac_address: d.mac_address, - ip_address: d.ip_address, - wifi_bssid: d.mac_address, - wifi_ssid: d.wifi_ssid, - imei: d.imei, - android_id: (0, constants_1.md5)(d.android_id).toString("hex"), - apn: "wifi", - version: { - incremental: d.incremental, - release: "10", - codename: "REL", - sdk: 29, - }, - imsi: (0, crypto_1.randomBytes)(16), - guid: (0, constants_1.md5)(Buffer.concat([Buffer.from(d.imei), Buffer.from(d.mac_address)])), - }; -} -exports.generateFullDevice = generateFullDevice; -class Device { - constructor(apk, d) { - this.apk = apk; - this.secret = 'ZdJqM15EeO2zWc08'; - this.publicKey = `-----BEGIN PUBLIC KEY----- -MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDEIxgwoutfwoJxcGQeedgP7FG9 
-qaIuS0qzfR8gWkrkTZKM2iWHn2ajQpBRZjMSoSf6+KJGvar2ORhBfpDXyVtZCKpq -LQ+FLkpncClKVIrBwv6PHyUvuCb0rIarmgDnzkfQAqVufEtR64iazGDKatvJ9y6B -9NMbHddGSAUmRTCrHQIDAQAB ------END PUBLIC KEY-----`; - if (!d) - d = generateShortDevice(); - Object.assign(this, generateFullDevice(apk, d)); - } - async getQIMEI() { - if (this.apk.app_key === "") { - return; - } - const k = (0, constants_1.randomString)(16); - const key = (0, algo_1.encryptPKCS1)(this.publicKey, k); - const time = Date.now(); - const nonce = (0, constants_1.randomString)(16); - const payload = this.genRandomPayloadByDevice(); - const params = (0, algo_1.aesEncrypt)(JSON.stringify(payload), k).toString('base64'); - try { - const { data } = await axios_1.default.post("https://snowflake.qq.com/ola/android", { - key, - params, - time, nonce, - sign: (0, constants_1.md5)(key + params + time + nonce + this.secret).toString("hex"), - extra: '' - }, { - headers: { - 'User-Agent': `Dalvik/2.1.0 (Linux; U; Android ${this.version.release}; PCRT00 Build/N2G48H)`, - 'Content-Type': "application/json" - } - }); - if (data?.code !== 0) { - return; - } - const { q16, q36 } = JSON.parse((0, algo_1.aesDecrypt)(data.data, k)); - this.qImei16 = q16; - this.qImei36 = q36; - } - catch { - } - } - genRandomPayloadByDevice() { - const fixedRand = (max = 1, min = 0) => { - if (max < min) - [max, min] = [min, max]; - const diff = max - min; - return Math.floor(Math.random() * diff) + min; - }; - const reserved = { - "harmony": "0", - "clone": Math.random() > 0.5 ? "1" : "0", - "containe": "", - "oz": "", - "oo": "", - "kelong": Math.random() > 0.5 ? "1" : "0", - "uptimes": (0, constants_1.formatTime)(new Date()), - "multiUser": Math.random() > 0.5 ? "1" : "0", - "bod": this.board, - "brd": this.brand, - "dv": this.device, - "firstLevel": "", - "manufact": this.brand, - "name": this.model, - "host": "se.infra", - "kernel": this.fingerprint - }; - const timestamp = Date.now(); - this.mtime = this.mtime || Date.now(); - const mtime1 = new Date(this.mtime || Date.now()); - const dateFormat = (fmt, time = Date.now()) => (0, constants_1.formatTime)(time, fmt); - const mtimeStr1 = dateFormat("YYYY-mm-ddHHMMSS", mtime1) + "." + this.imei.slice(2, 11); - const mtime2 = new Date(this.mtime - parseInt(this.imei.slice(2, 4))); - const mtimeStr2 = dateFormat("YYYY-mm-ddHHMMSS", mtime2) + "." 
+ this.imei.slice(5, 14); - let beaconIdArr = [ - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - mtimeStr1, - '0000000000000000', - (0, constants_1.md5)(this.android_id + this.imei).toString("hex").slice(0, 16), - ...new Array(4).fill(false).map((_) => fixedRand(10000000, 1000000)), - this.boot_id, - '1', - fixedRand(5, 0), - fixedRand(5, 0), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(50000, 10000), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - mtimeStr2, - fixedRand(10000, 1000), - fixedRand(5, 0), - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((10 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - fixedRand(10000, 1000), - fixedRand(100, 10), - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - fixedRand(10000, 1000), - fixedRand(5, 0), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(5, 0), - ].map((str, idx) => `k${idx + 1}:${str}`); - return { - "androidId": this.android_id, - "platformId": 1, - "appKey": this.apk.app_key, - "appVersion": this.apk.version, - "beaconIdSrc": beaconIdArr.join(';'), - "brand": this.brand, - "channelId": "2017", - "cid": "", - "imei": this.imei, - "imsi": this.imsi.toString("hex"), - "mac": this.mac_address, - "model": this.model, - "networkType": "unknown", - "oaid": "", - "osVersion": `Android ${this.version.release},level ${this.version.sdk}`, - "qimei": "", - "qimei36": "", - "sdkVersion": "1.2.13.6", - "targetSdkVersion": "26", - "audit": "", - "userId": "{}", - "packageId": this.apk.id, - "deviceType": this.display, - "sdkName": "", - "reserved": JSON.stringify(reserved), - }; - } -} -exports.Device = Device; -/** 支持的登录设备平台 */ -var Platform; -(function (Platform) { - Platform[Platform["Android"] = 1] = "Android"; - Platform[Platform["aPad"] = 2] = "aPad"; - Platform[Platform["Watch"] = 3] = "Watch"; - Platform[Platform["iMac"] = 4] = "iMac"; - Platform[Platform["iPad"] = 5] = "iPad"; - Platform[Platform["Tim"] = 6] = "Tim"; -})(Platform = exports.Platform || (exports.Platform = {})); -const mobile = { - 
id: "com.tencent.mobileqq", - app_key: '0S200MNJT807V3GE', - name: "A8.9.63.11390", - version: "8.9.63.11390", - ver: "8.9.63", - sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1685069178, - appid: 16, - subid: 537164840, - bitmap: 150470524, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2546", - display: "Android", - qua: 'V1_AND_SQ_8.9.63_4194_YYB_D', - ssover: 20, -}; -const tim = { - id: "com.tencent.tim", - app_key: '0S200MNJT807V3GE', - name: "A3.5.1.3168", - version: "3.5.1.3168", - ver: "3.5.1", - sign: Buffer.from('775e696d09856872fdd8ab4f3f06b1e0', 'hex'), - buildtime: 1630062176, - appid: 16, - subid: 537150355, - bitmap: 150470524, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2484", - display: "Tim", - qua: "V1_AND_SQ_8.3.9_351_TIM_D", - ssover: 18, -}; -const watch = { - id: "com.tencent.qqlite", - app_key: '0S200MNJT807V3GE', - name: "A2.0.8", - version: "2.0.8", - ver: "2.0.8", - sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1559564731, - appid: 16, - subid: 537065138, - bitmap: 16252796, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2365", - display: "Watch", - qua: '', - ssover: 5 -}; -const hd = { - id: "com.tencent.minihd.qq", - app_key: '0S200MNJT807V3GE', - name: "A5.9.3.3468", - version: "5.9.3.3468", - ver: "5.9.3", - sign: Buffer.from('AA 39 78 F4 1F D9 6F F9 91 4A 66 9E 18 64 74 C7'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1637427966, - appid: 16, - subid: 537128930, - bitmap: 150470524, - main_sig_map: 1970400, - sub_sig_map: 66560, - sdkver: "6.0.0.2433", - display: "iMac", - qua: '', - ssover: 12 -}; -const apklist = { - [Platform.Android]: mobile, - [Platform.Tim]: tim, - [Platform.aPad]: { - ...mobile, - subid: 537164888, - display: 'aPad' - }, - [Platform.Watch]: watch, - [Platform.iMac]: { ...hd }, - [Platform.iPad]: { - ...mobile, - subid: 537155074, - sign: hd.sign, - name: '8.9.50.611', - ver: '8.9.50', - sdkver: '6.0.0.2535', - qua: 'V1_AND_SQ_8.9.50_3898_YYB_D', - display: 'iPad' - }, -}; -function getApkInfo(p) { - return apklist[p] || apklist[Platform.Android]; -} -exports.getApkInfo = getApkInfo; diff --git a/spaces/simonraj/ELOralCoachCantonmentPrimary/README.md b/spaces/simonraj/ELOralCoachCantonmentPrimary/README.md deleted file mode 100644 index 81b1dd2f206fefefbfe2cc9b943dad20dd444547..0000000000000000000000000000000000000000 --- a/spaces/simonraj/ELOralCoachCantonmentPrimary/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: OralCoachStreamingEL -emoji: 📉 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false -duplicated_from: simonraj/ELOralCoachv1 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Assetto Corsa Drift Cars Download The Best Mods for Every Skill Level.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Assetto Corsa Drift Cars Download The Best Mods for Every Skill Level.md deleted file mode 100644 index 76140c3c0fee968a2e2e90b6d708f80bca74cb2f..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Assetto Corsa Drift Cars Download The Best Mods for Every Skill Level.md +++ /dev/null @@ 
-1,152 +0,0 @@ - - - - - - - - - -
        Article with HTML formatting
        -

        Assetto Corsa Drift Cars Download: A Guide for Sim Drifting Enthusiasts

        -

        Introduction

        -

        If you love cars, drifting, video games, and adrenaline, then you probably have heard of Assetto Corsa, one of the most realistic and immersive racing simulators out there. Whether you want to race on famous tracks, cruise on scenic roads, or slide sideways on challenging courses, Assetto Corsa has something for everyone.

        -

But what if you want to take your sim racing experience to the next level and enjoy the thrill of drifting, one of the most exhilarating and challenging forms of motorsport? Drifting is a driving technique where the driver intentionally oversteers the car, causing it to lose traction and slide sideways, while maintaining control and direction. Drifting requires a lot of skill, practice, and patience, but it also rewards you with a lot of fun, satisfaction, and style.

        -

        assetto corsa drift cars download


Download: https://ssurll.com/2uNSb1



        -

        Unfortunately, drifting in real life is not easy or cheap. You need a suitable car, a safe and legal place to drift, and a lot of money for tires, fuel, and repairs. That's why many drifting enthusiasts turn to simulators like Assetto Corsa, where they can enjoy drifting without the hassle and expense of real life.

        -

        But how can you drift in Assetto Corsa? Well, you need two things: a drift car and a drift track. A drift car is a car that has been modified or designed to drift better, usually by having more power, less weight, better suspension, and a limited-slip differential. A drift track is a track that has curves, corners, and obstacles that challenge the driver's drifting skills.

        -

        Fortunately, Assetto Corsa has both drift cars and drift tracks. The game comes with some default drift cars and tracks that you can use right away, but if you want more variety and realism, you can also download drift car packs from the internet. Drift car packs are collections of custom-made drift cars that you can install and use in Assetto Corsa. They usually have more details, features, and physics than the default cars, making them more realistic and fun to drive.

        -

        In this article, we will guide you through the process of downloading drift car packs for Assetto Corsa. We will also show you how to choose the best drift car packs for your preferences and needs. And finally, we will give you some tips and tricks on how to drift in Assetto Corsa with drift car packs. By the end of this article, you will be ready to unleash your inner drifter and have a blast in Assetto Corsa.

        -

        How to Download Drift Car Packs for Assetto Corsa

        -

Downloading drift car packs for Assetto Corsa is not difficult, but it does involve a few steps and requirements. Here is what you need to do:

        -
          -
1. Make sure you have Assetto Corsa installed on your PC/Windows. You can buy the game from Steam or other online platforms. You also need the latest version of the game and all the official updates and DLCs installed.
2. Find a reliable source or website for downloading drift car packs. Many websites offer drift car packs for Assetto Corsa, but not all of them are trustworthy or safe. Look for sites with good reviews, ratings, comments, and feedback from other users, and avoid sites with suspicious links, ads, or pop-ups that may contain malware or viruses.
3. Download the drift car pack file. It will usually come in a compressed format like ZIP or RAR, so you may need a program like WinRAR or 7-Zip to extract it. The archive will contain one or more folders named after the drift cars.
4. Copy and paste those folders into your Assetto Corsa content folder, where all the game data is stored. You can find it in your Steam library under steamapps/common/assettocorsa/content/cars (see the sketch below).
5. Launch Assetto Corsa and enjoy your new drift cars. You can now select them from the game menu under Drive/Car Selection and customize them under Drive/Setup/Tuning.
        -

        That's it! You have successfully downloaded and installed a drift car pack for Assetto Corsa. Now you can try out your new drift cars on any track you want.
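If you prefer to script steps 3-5, here is a minimal Python sketch of the extract-and-copy routine. The ZIP name and the Steam path are assumptions; point them at your own download and your own Assetto Corsa installation.

```
# Minimal sketch: extract a downloaded car pack and copy each car folder
# into Assetto Corsa's content/cars directory. Paths below are assumptions.
import shutil
import zipfile
from pathlib import Path

pack_zip = Path.home() / "Downloads" / "drift_car_pack.zip"  # assumed download location
cars_dir = Path("C:/Program Files (x86)/Steam/steamapps/common/assettocorsa/content/cars")

extract_dir = pack_zip.with_suffix("")  # e.g. .../drift_car_pack
with zipfile.ZipFile(pack_zip) as zf:
    zf.extractall(extract_dir)

# Each top-level folder in the pack is one car; copy it next to the stock cars.
for car_folder in extract_dir.iterdir():
    if car_folder.is_dir():
        shutil.copytree(car_folder, cars_dir / car_folder.name, dirs_exist_ok=True)
        print(f"Installed {car_folder.name}")
```

After copying, the new cars show up under Drive/Car Selection the next time you launch the game.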

        -

        assetto corsa drift car packs vosan
        -assetto corsa dwg 3.0 car pack download
        -assetto corsa rada pack 01 download
        -assetto corsa bdb street style pack download
        -assetto corsa shadowrealm proam car pack
        -assetto corsa championnat de france car pack
        -assetto corsa the shakalz car pack download
        -assetto corsa wrdz greg's jdm classics download
        -assetto corsa aio gravy wdts car pack
        -assetto corsa radagarage teaser pack download
        -assetto corsa cg snv car pack v1 download
        -assetto corsa korea superconducting tokamak car pack
        -assetto corsa fun tracks drift car packs
        -assetto corsa individual cars download vosan
        -assetto corsa drift car skins download vosan
        -assetto corsa drift server setup vosan
        -assetto corsa drift events calendar vosan
        -assetto corsa drift media partners vosan
        -assetto corsa drift submission forms vosan
        -assetto corsa drift community vosan home
        -assetto corsa best drift cars download 2023
        -assetto corsa realistic drift cars download 2023
        -assetto corsa japanese drift cars download 2023
        -assetto corsa formula drift cars download 2023
        -assetto corsa ebisu drift cars download 2023
        -assetto corsa dtp drift cars download 2023
        -assetto corsa wdt drift cars download 2023
        -assetto corsa sdc drift cars download 2023
        -assetto corsa dori drift cars download 2023
        -assetto corsa hoonigan drift cars download 2023
        -assetto corsa mad mike drift cars download 2023
        -assetto corsa daigo saito drift cars download 2023
        -assetto corsa forrest wang drift cars download 2023
        -assetto corsa james deane drift cars download 2023
        -assetto corsa chelsea denofa drift cars download 2023
        -assetto corsa vaughn gittin jr drift cars download 2023
        -assetto corsa fredric aasbo drift cars download 2023
        -assetto corsa ryan tuerck drift cars download 2023
        -assetto corsa ken block drift cars download 2023
        -assetto corsa keiichi tsuchiya drift cars download 2023
        -how to install assetto corsa drift cars mod
        -how to tune assetto corsa drift cars mod
        -how to setup wheel for assetto corsa drift cars mod
        -how to join online servers for assetto corsa drift cars mod
        -how to create custom skins for assetto corsa drift cars mod
        -how to update physics for assetto corsa drift cars mod
        -how to fix sound for assetto corsa drift cars mod

        -

        How to Choose the Best Drift Car Packs for Assetto Corsa

        -

        There are hundreds of drift car packs available for Assetto Corsa, but not all of them are equal in quality or performance. Some may be more realistic than others, some may be more fun than others, and some may be more suitable for your skill level than others. How can you choose the best drift car packs for your sim drifting experience?

        -

        Here are some factors and criteria that you should consider when choosing drift car packs:

        -
          -
        • The source or creator of the drift car pack. You should look for drift car packs that are made by reputable and experienced modders or developers who have a good reputation and track record in the sim racing community. You can check their profiles, portfolios, ratings, reviews, and feedback from other users to see their credibility and quality. You should also look for drift car packs that are updated and supported regularly by their creators, as they may fix bugs, improve features, and add new content over time.
        • -
        • The realism and accuracy of the drift car pack. You should look for drift car packs that are realistic and accurate in terms of their appearance, sound, physics, and performance. You should look for drift car packs that have high-resolution textures, detailed models, authentic liveries, realistic sounds, accurate physics, and balanced performance. You should also look for drift car packs that match the real-life specifications and characteristics of the drift cars they represent, such as their power, weight, suspension, tires, differential, etc.
        • -
        • The variety and diversity of the drift car pack. You should look for drift car packs that offer a variety of drift cars to choose from, as different drift cars may suit different preferences, styles, and situations. You should look for drift car packs that have different types, models, brands, generations, and categories of drift cars, such as Japanese, European, American, classic, modern, street, pro, etc. You should also look for drift car packs that have different features and options for customization and tuning of the drift cars, such as colors, decals, parts, setups, etc.
        • -
        • The fun and enjoyment of the drift car pack. Ultimately, you should look for drift car packs that are fun and enjoyable to drive in Assetto Corsa. You should look for drift car packs that have responsive handling, smooth steering, stable drifting, and satisfying feedback. You should also look for drift car packs that have challenging but rewarding learning curves, as they may help you improve your drifting skills and confidence. And finally, you should look for drift car packs that have a lot of positive reviews and recommendations from other sim drifting enthusiasts who have tried them.
        • -
        -

        Based on these factors and criteria, here are some of the most realistic and fun drift car packs available for Assetto Corsa:

        -
          -
        • AC Drifting Pro. This is one of the most popular and comprehensive drift car packs for Assetto Corsa. It features over 100 drift cars from various brands, models, generations, and categories. It also has realistic physics, sounds, graphics, and customization options. It also has regular updates and support from its creator. You can download it from [here].
        • -
        • Drift Workshop Street Pack. This is another popular and realistic drift car pack for Assetto Corsa. It features over 50 drift cars from various Japanese brands and models, such as Nissan, Toyota, Mazda, Subaru, etc. It also has realistic physics, sounds, graphics, and customization options. It also has regular updates and support from its creator. You can download it from [here].
        • -
        • World Drift Tour Car Pack. This is a fun and diverse drift car pack for Assetto Corsa. It features over 40 drift cars from various brands, models, generations, and categories, such as BMW, Ford, Chevrolet, Ferrari, Lamborghini, etc. It also has realistic physics, sounds, graphics, and customization options. It also has regular updates and support from its creator. You can download it from [here].
        • -
        -

        These are just some examples of the many drift car packs available for Assetto Corsa. You can find more drift car packs by searching online or joining online communities and forums dedicated to sim drifting and Assetto Corsa.

        -

        How to Drift in Assetto Corsa with Drift Car Packs

        -

        Now that you have downloaded and installed some drift car packs for Assetto Corsa, you are ready to start drifting in the game. But how do you drift in Assetto Corsa? What are some of the basic skills and techniques that you need to master? How do you set up and tune your drift car in the game? How do you practice and improve your drifting skills in the game? And what are some of the best tracks and locations for drifting in the game?

        -

        In this section, we will answer these questions and give you some tips and tricks on how to drift in Assetto Corsa with drift car packs.

        -

        Basic Skills and Techniques for Drifting in Assetto Corsa

        -

        Drifting in Assetto Corsa is not easy, but it is not impossible either. You just need to learn and practice some basic skills and techniques that will help you control your drift car and slide it sideways. Here are some of them:

        -
          -
        • Throttle control. This is one of the most important skills for drifting in Assetto Corsa. You need to use the throttle to modulate the power and speed of your drift car. You need to apply enough throttle to initiate and maintain a drift, but not too much that you spin out or lose control. You also need to release the throttle when you want to exit or correct a drift.
        • -
        • Steering control. This is another important skill for drifting in Assetto Corsa. You need to use the steering wheel to direct and balance your drift car. You need to steer into the direction of the drift (countersteer) to keep your car sideways, but not too much that you overcorrect or understeer. You also need to steer out of the direction of the drift (steer) when you want to change or adjust your drift angle.
        • -
        • Brake control. This is a useful skill for drifting in Assetto Corsa. You need to use the brake pedal to slow down or stop your drift car. You can use the brake pedal to initiate a drift by locking up your rear wheels (handbrake) or your front wheels (footbrake). You can also use the brake pedal to modulate your speed or traction during a drift.
        • -
        • Clutch control. This is an advanced skill for drifting in Assetto Corsa. You need to use the clutch pedal to disengage or engage your engine and transmission. You can use the clutch pedal to initiate a drift by dropping your clutch (clutch kick) or revving up your engine (clutch slip). You can also use the clutch pedal to shift gears or maintain your revs during a drift.
        • -
        -

These are some of the basic skills and techniques that you need to master for drifting in Assetto Corsa. Of course, there are more advanced skills and techniques that you can learn as you progress, such as weight transfer, feinting, flicking, and the Scandinavian flick. But for now, focus on these basics and practice them until you feel comfortable and confident.

        -

        How to Set Up and Tune Your Drift Car in Assetto Corsa

        -

Another thing you need to do before you start drifting in Assetto Corsa is to set up and tune your drift car in the game. Tuning lets you customize the car's performance and behavior to your preferences, and you can do it under Drive/Setup/Tuning in the game menu. There are many parameters you can adjust, such as tires, suspension, alignment, brakes, differential, and gears, but you don't need to change everything to get a good drift setup. Focus on the parameters that matter most for drifting:

- Tires. The tires are the only part of the car that touches the road, so choose a compound with good grip and durability and adjust the pressure to the track temperature and conditions. Lower pressure generally gives more grip and stability at the cost of extra wear and heat; higher pressure does the opposite.
- Suspension. The suspension connects the body to the wheels, so set its stiffness, damping, and ride height to suit the track layout. A stiffer setup is more responsive and agile but harsher and more prone to oversteer; a softer setup is more stable and comfortable but more sluggish and prone to understeer.
- Alignment. Camber, toe, and caster define the angle and position of the wheels, so adjust them to your driving style. More negative camber adds cornering grip but increases tire wear and straight-line instability, while positive camber does the opposite. Toe-in adds stability and straight-line performance at the expense of turn-in and tire life; toe-out sharpens turn-in but encourages oversteer. More caster gives stronger steering feedback and self-centering at the cost of heavier steering and more weight transfer.
- Brakes. Tune brake power, balance, and bias to your skill level and technique. More braking power gives more stopping ability and control but more heat and a higher risk of lock-up, and shifting the balance or bias moves braking force toward the front or rear wheels, which changes how the car behaves when you brake into a drift.
- Differential. The differential distributes power to the wheels, so choose its type, preload, coast lock, power lock, and ramp angles to match your power output and traction level. A limited-slip differential (LSD) gives far more control and consistency when sliding than an open differential, at the cost of extra setup complexity. More preload makes the locking effect stronger and more consistent but adds understeer and drag. Power lock controls on-throttle behavior (more lock means more stability but also more understeer and wheelspin), coast lock controls off-throttle behavior (more lock means more stability but also more snap oversteer when you lift), and steeper ramp angles make the diff more aggressive but harsher and faster-wearing.
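To make this concrete, here is a purely illustrative baseline written as a Python dictionary. Every number is an assumption to experiment from, not a value taken from any particular car in the game.

```
# Purely illustrative starting point for a rear-wheel-drive drift setup.
# All numbers are assumptions to tweak in Drive/Setup/Tuning, not recommendations.
baseline_drift_setup = {
    "tire_pressure_psi": {"front": 26, "rear": 24},     # lower pressure -> more grip, more heat/wear
    "camber_deg":        {"front": -4.0, "rear": -1.0}, # negative front camber adds cornering grip
    "toe_deg":           {"front": -0.10, "rear": 0.20},# front toe-out sharpens turn-in (sign convention assumed)
    "caster_deg":        6.5,                            # more caster -> stronger self-centering
    "brake_bias_front":  0.65,                           # share of braking force on the front axle
    "diff_power_lock":   0.80,                           # high on-throttle lock keeps the rear sliding
    "diff_coast_lock":   0.60,                           # moderate off-throttle lock for stability
    "diff_preload_nm":   80,
}
```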

        These are some of the most important and relevant parameters and options that you should pay attention to when setting up and tuning your drift car in Assetto Corsa. Of course, there are more parameters and options that you can adjust and modify in your drift car setup, such as gears, aerodynamics, fuel, etc. But for now, focus on these ones and experiment with them until you find the best setup for your drift car.

        -

        How to Practice and Improve Your Drifting Skills in Assetto Corsa

        -

        Once you have set up and tuned your drift car in Assetto Corsa, you are ready to practice and improve your drifting skills in the game. But how do you practice and improve your drifting skills in Assetto Corsa? What are some of the best ways to learn and master drifting in the game?

        -

        Here are some tips and tricks that will help you practice and improve your drifting skills in Assetto Corsa:

        -
          -
        • Start with the basics. Before you try to drift like a pro, you need to learn the basics of drifting first. You need to understand how drifting works, what are the main techniques for initiating, maintaining, and exiting a drift, and what are the common mistakes and challenges that you may face when drifting. You can watch online tutorials, read online guides, or join online courses that teach you the basics of drifting in Assetto Corsa.
        • -
        • Practice with easy cars and tracks. After you learn the basics of drifting, you need to practice them with easy cars and tracks first. You need to choose cars that are easy to control, stable, and forgiving, such as the BMW M3 E30 Drift or the Toyota GT86 Drift. You also need to choose tracks that are simple, wide, and flat, such as the Drift Playground or the Drift Track 1. You need to practice with easy cars and tracks first to build your confidence and muscle memory.
        • -
        • Gradually increase the difficulty. As you get better at drifting with easy cars and tracks, you need to gradually increase the difficulty of your practice sessions. You need to choose cars that are more powerful, responsive, and challenging, such as the Nissan Silvia S15 Drift or the Mazda RX-7 FD Drift. You also need to choose tracks that are more complex, narrow, and hilly, such as the Ebisu Minami or the Lake Louise Loop Road. You need to gradually increase the difficulty of your practice sessions to challenge yourself and improve your skills and techniques.
        • -
        • Use the practice mode and the replay feature. One of the best ways to practice and improve your drifting skills in Assetto Corsa is to use the practice mode and the replay feature. The practice mode allows you to drive freely on any track without any opponents or time limits. You can use the practice mode to try different cars, tracks, setups, and techniques without any pressure or distraction. The replay feature allows you to watch your driving from different angles and perspectives. You can use the replay feature to analyze your mistakes, correct your errors, and learn from your successes.
        • -
        • Join online sessions and events. Another way to practice and improve your drifting skills in Assetto Corsa is to join online sessions and events. Online sessions and events allow you to drift with other players from around the world. You can join online sessions and events to have fun, make friends, compete, or learn from other drifters. You can also join online sessions and events that are dedicated to drifting, such as drift servers, drift lobbies, drift clubs, drift schools, drift competitions, etc.
        • -
        -

        These are some of the tips and tricks that will help you practice and improve your drifting skills in Assetto Corsa. Of course, there are more ways to practice and improve your drifting skills in the game, such as watching online videos, reading online articles, or joining online communities and forums dedicated to sim drifting and Assetto Corsa. But for now, focus on these ones and practice them regularly until you become a better drifter.

        -

        Best Tracks and Locations for Drifting in Assetto Corsa

        -

        Finally, one of the things that you need to know before you start drifting in Assetto Corsa is the best tracks and locations for drifting in the game. The best tracks and locations for drifting in Assetto Corsa are the ones that have curves, corners, and obstacles that challenge your drifting skills and techniques. The best tracks and locations for drifting in Assetto Corsa are also the ones that have beautiful scenery, realistic atmosphere, and immersive environment that enhance your sim drifting experience.

        -

        Here are some of the best tracks and locations for drifting in Assetto Corsa:

        -
          -
        • Ebisu Circuit. This is one of the most famous and popular tracks for drifting in Japan and in the world. It has several layouts that cater to different levels of difficulty and style of drifting. It also has a lot of elevation changes, hairpins, chicanes, jumps, and walls that make it challenging and fun to drift on.
        • -
        • Lake Louise Loop Road. This is one of the most scenic and realistic tracks for drifting in Canada and in the world. It is a 22 km long road that runs around a lake surrounded by mountains. It has a lot of twists, turns, bends, and bridges that make it exciting and enjoyable to drift on.
        • -
        • Driftland. This is one of the most unique and innovative tracks for drifting in Scotland and in the world. It is the first and only purpose-built drift track in the UK. It has a circular layout that has a variety of corners, curves, and angles that make it ideal and versatile for drifting.
        • -
        • Long Beach. This is one of the most iconic and legendary tracks for drifting in the USA and in the world. It is a street circuit that hosts the Formula Drift championship every year. It has a long straight, a tight hairpin, and a fast sweeper that make it thrilling and spectacular to drift on.
        • -
        • Meihan Sportsland. This is one of the most challenging and technical tracks for drifting in Japan and in the world. It is a small circuit that has a lot of tight corners, sharp turns, and concrete barriers that make it demanding and dangerous to drift on. It is also famous for its reverse entry technique, where the driver enters a corner at a very high angle and speed.
        • -
        -

        These are some of the best tracks and locations for drifting in Assetto Corsa. Of course, there are more tracks and locations that you can drift on in the game, such as Nurburgring, Spa-Francorchamps, Monaco, etc. But for now, try these ones and have fun drifting on them.

        -

        Conclusion

        -

        In this article, we have guided you through the process of downloading drift car packs for Assetto Corsa. We have also shown you how to choose the best drift car packs for your sim drifting experience. And finally, we have given you some tips and tricks on how to drift in Assetto Corsa with drift car packs.

        -

        By following this guide, you will be able to enjoy drifting in Assetto Corsa with realistic and fun drift cars on challenging and beautiful tracks. You will also be able to improve your drifting skills and techniques in the game. And most importantly, you will have a lot of fun and satisfaction in sim drifting.

        -

        We hope you found this article helpful and informative. If you have any questions, comments, or feedback, please feel free to share them with us. We would love to hear from you and learn from your experiences with drift car packs in Assetto Corsa.

        -

        Thank you for reading this article and happy drifting!

        -

        FAQs

        -
          -
        • Q: Can I download drift car packs for Assetto Corsa on Xbox or PlayStation?
        • -
        • A: No, you can only download drift car packs for Assetto Corsa on PC/Windows.
        • -
        • Q: How much does it cost to download drift car packs for Assetto Corsa?
        • -
        • A: Most drift car packs are free to download, but some may require a donation or a subscription.
        • -
        • Q: How can I find more drift car packs for Assetto Corsa?
        • -
        • A: You can search online or join online communities and forums dedicated to sim drifting and Assetto Corsa.
        • -
        • Q: How can I make my own drift car pack for Assetto Corsa?
        • -
        • A: You can use modding tools and software to create your own drift car pack, but it requires a lot of time, skill, and knowledge.
        • -
        • Q: What are some of the best VR headsets for sim drifting in Assetto Corsa?
        • -
        • A: Some of the best VR headsets for sim drifting in Assetto Corsa are Oculus Rift S, Oculus Quest 2, Valve Index VR, and HTC Vive Pro.
        • -
        -
        -

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Clash of Clans Mod APK Download A Guide for Beginners.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Clash of Clans Mod APK Download A Guide for Beginners.md deleted file mode 100644 index 281481f272867c7fba2c76ccb3793c7df4abf5cd..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Clash of Clans Mod APK Download A Guide for Beginners.md +++ /dev/null @@ -1,127 +0,0 @@ -
        -

        Download Clash of Clans Mod APK: The Ultimate Guide

        -

        If you are a fan of strategy games, you have probably heard of Clash of Clans, one of the most popular mobile games in the world. But did you know that you can download Clash of Clans Mod APK and enjoy unlimited resources, gems, gold, and elixir in the game? In this article, we will tell you everything you need to know about Clash of Clans Mod APK, including what it is, why you should download it, how to download and install it, and how to play it. So, without further ado, let's get started!

        -

        download clash of clans mod apk


        Downloadhttps://ssurll.com/2uO0eu



        -

        What is Clash of Clans?

        -

        A brief introduction to the game

        -

        Clash of Clans is a freemium strategy game developed and published by Supercell, a Finnish game company. It was released in 2012 for iOS and in 2013 for Android devices. The game has over 500 million downloads on Google Play Store and is one of the highest-grossing apps on both platforms.

        -

        The game is set in a fantasy world where you have to build your own village, train your troops, and fight against other players online. You can also join or create a clan with other players and participate in clan wars, clan games, and clan leagues. The game is updated regularly with new features, events, and challenges.

        -

        The main features of Clash of Clans

        -

        Some of the main features of Clash of Clans are:

        -


        -
          -
        • Build your own village with various buildings, defenses, traps, and decorations.
        • -
        • Train different types of troops with unique abilities and upgrade them with dark elixir, elixir, or gems.
        • -
        • Attack other players' villages and loot their resources or defend your own village from enemy attacks.
        • -
        • Join or create a clan with other players and chat, donate troops, and request reinforcements.
        • -
        • Participate in clan wars, clan games, and clan leagues and earn rewards and trophies.
        • -
        • Unlock new heroes like the Barbarian King, the Archer Queen, the Grand Warden, and the Royal Champion and use their special powers in battles.
        • -
        • Explore the mysterious Builder Base and discover new buildings, troops, and challenges.
        • -
        • Complete various achievements and missions and earn gems, resources, and magic items.
        • -
        • Customize your village, troops, heroes, and clan badge with various skins and themes.
        • -
        -

        Why download Clash of Clans Mod APK?

        -

        The benefits of using the modded version

        -

        Clash of Clans Mod APK is a modified version of the original game that allows you to enjoy unlimited resources, gems, gold, and elixir in the game. This means that you can build your village faster, train your troops more easily, upgrade your heroes more quickly, and unlock more features without spending any real money. You can also access some exclusive features that are not available in the official version, such as custom servers, private chats, unlimited troops in battles, and more.

        -

        The drawbacks of using the modded version

        -

        However, there are also some drawbacks of using Clash of Clans Mod APK that you should be aware of before downloading it. Some of them are:

        -
          -
        • You may face some compatibility issues with your device or operating system.
        • -
        • You may encounter some bugs or glitches that may affect your gameplay experience.
        • -
        • You may get banned from the official game server or lose your progress if you switch back to the original version.
        • -
        • You may not be able to play with other players who are using the official version or join the official clans.
        • -
        • You may expose your device to malware or viruses that may harm your data or privacy.
        • -
        -

        Therefore, you should download Clash of Clans Mod APK at your own risk and discretion. We are not responsible for any damage or loss that may occur as a result of using the modded version.

        -

        How to download and install Clash of Clans Mod APK?

        -

        The requirements for downloading the mod APK

        -

        Before you download and install Clash of Clans Mod APK, you need to make sure that your device meets the following requirements:

        -
          -
        • Your device should have Android 4.1 or higher version installed.
        • -
        • Your device should have at least 2 GB of RAM and 100 MB of free storage space.
        • -
        • Your device should have a stable internet connection.
        • -
        • Your device should allow installation from unknown sources. You can enable this option by going to Settings > Security > Unknown Sources and toggling it on.
        • -
        -

        The steps for downloading and installing the mod APK

        -

        Once you have checked the requirements, you can follow these steps to download and install Clash of Clans Mod APK:

        -
          -
        1. Click on this link to download the latest version of Clash of Clans Mod APK: [Download Clash of Clans Mod APK].
        2. -
        3. Wait for the download to complete and then locate the file in your device's file manager.
        4. -
        5. Tap on the file and then tap on Install to start the installation process.
        6. -
        7. Wait for the installation to finish and then tap on Open to launch the game.
        8. -
        9. Enjoy playing Clash of Clans Mod APK with unlimited resources, gems, gold, and elixir!
        10. -
        -
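For readers who are comfortable with a command line, the same APK can also be sideloaded over USB with Android's ADB tool instead of the on-device file manager. This is only a hedged sketch: it assumes you already have the Android platform tools installed and USB debugging enabled on your phone, and the APK file name below is a placeholder for whatever your downloaded file is actually called.

```bash
# Placeholder file name; point this at the APK you actually downloaded.
adb devices                            # confirm the phone is connected and authorized
adb install -r clash-of-clans-mod.apk  # -r replaces the app if an older version is installed
```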

        How to play Clash of Clans Mod APK?

        -

        The basics of the gameplay

        -

        The gameplay of Clash of Clans Mod APK is similar to the original version, except that you have unlimited resources, gems, gold, and elixir. You can use these resources to build your village, train your troops, upgrade your heroes, and unlock more features. You can also attack other players' villages and loot their resources or defend your own village from enemy attacks. You can also join or create a clan with other players and participate in clan wars, clan games, and clan leagues. The game is updated regularly with new features, events, and challenges.

        -

        The tips and tricks for playing the mod APK

        -

        To make the most out of Clash of Clans Mod APK, you can follow these tips and tricks:

        -
          -
        • Use gems wisely. Gems are the most valuable resource in the game and you can use them to speed up building, training, upgrading, and researching processes. You can also use them to buy more resources, magic items, shields, and decorations. However, you should not waste them on unnecessary things like skipping tutorials, changing your name, or buying resources that you can easily get by attacking or collecting.
        • -
        • Plan your base layout carefully. Your base layout is crucial for your defense and offense strategies. You should place your town hall in the center of your base and surround it with walls, traps, defenses, and heroes. You should also place your resource storages near your town hall and protect them with defenses. You should also place your barracks, spell factories, laboratories, clan castle, and builder huts near the edges of your base and away from defenses. You should also leave some gaps between your buildings to prevent enemy troops from funneling into your base.
        • -
• Balance your army composition. Your army composition is important for your attack strategy. You should have a balanced mix of different types of troops with different abilities and roles. For example, you can use tanks like giants, golems, or pekkas to absorb damage and distract defenses. You can use damage dealers like wizards, archers, or dragons to destroy buildings and troops. You can use support troops like healers, wall breakers, or goblins to heal your tanks, break walls, or loot resources. You can also use spells like rage, heal, jump, or freeze to boost your troops' performance or hinder your enemy's defenses.
        • -
        • Choose your target wisely. Before you attack another player's village, you should scout their base layout and their defense level. You should also check their resource amount and their league rank. You should choose a target that has a lot of resources that you need and a low defense level that you can easily overcome. You should also avoid attacking players who are in the same clan as you or have a shield or a guard active. You should also avoid attacking players who are too strong or too weak for your level, as you will not get much loot or trophies from them.
        • -
        • Join or create a clan. Clans are one of the best features of Clash of Clans Mod APK, as they allow you to interact with other players, share troops, and participate in clan wars, clan games, and clan leagues. You can join an existing clan that suits your preferences and goals, or you can create your own clan and invite your friends or other players to join. You can also chat with your clan members, donate and request troops, and earn clan perks and rewards.
        • -
        -

        Conclusion

        -

        A summary of the main points

        -

        Clash of Clans Mod APK is a modified version of the original game that allows you to enjoy unlimited resources, gems, gold, and elixir in the game. You can use these resources to build your village faster, train your troops more easily, upgrade your heroes more quickly, and unlock more features without spending any real money. You can also access some exclusive features that are not available in the official version, such as custom servers, private chats, unlimited troops in battles, and more. However, there are also some drawbacks of using Clash of Clans Mod APK that you should be aware of before downloading it, such as compatibility issues, bugs, glitches, bans, and malware.

        -

        A call to action for the readers

        -

        If you are interested in downloading Clash of Clans Mod APK and experiencing the game in a new way, you can follow the steps that we have provided in this article. You can also check out our website for more information and updates on Clash of Clans Mod APK. We hope that you have enjoyed reading this article and that you have learned something new about Clash of Clans Mod APK. Thank you for your time and attention. Happy clashing!

        -

        FAQs

        -

        Q: Is Clash of Clans Mod APK safe to download and use?

        -

        A: Clash of Clans Mod APK is not an official product of Supercell and is not endorsed or supported by them. Therefore, it may not be safe to download and use. You may expose your device to malware or viruses that may harm your data or privacy. You may also get banned from the official game server or lose your progress if you switch back to the original version. Therefore, you should download Clash of Clans Mod APK at your own risk and discretion.

        -

        Q: Can I play Clash of Clans Mod APK with other players who are using the official version?

        -

        A: No, you cannot play Clash of Clans Mod APK with other players who are using the official version. Clash of Clans Mod APK uses custom servers that are different from the official servers. Therefore, you will not be able to connect with other players who are using the official version or join the official clans.

        -

        Q: How can I update Clash of Clans Mod APK?

        -

        A: Clash of Clans Mod APK is updated regularly with new features, events, and challenges. However, you cannot update it from the Google Play Store or the App Store. You have to download the latest version of Clash of Clans Mod APK from our website and install it manually on your device.

        -

        Q: What are some alternatives to Clash of Clans Mod APK?

        -

        A: If you are looking for some alternatives to Clash of Clans Mod APK, you can try these games:

        -
          -
        • Clash Royale: A spin-off game from Supercell that combines card collecting, tower defense, and real-time strategy elements.
        • -
        • Boom Beach: Another game from Supercell that involves building your base on an island and fighting against other players and an evil organization called the Blackguard.
        • -
        • Lords Mobile: A game from IGG that involves building your kingdom, recruiting heroes, and fighting against other players and monsters.
        • -
        -

        Q: Where can I find more information and updates on Clash of Clans Mod APK?

        -

        A: You can find more information and updates on Clash of Clans Mod APK on our website [ClashofClansModAPK.com]. You can also follow us on our social media platforms [Facebook], [Twitter], [Instagram], and [YouTube] for more news and tips on Clash of Clans Mod APK.

        -
        -
        \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Dolphin Emulator Original and Relive Your Childhood Memories with Wii and GameCube Games.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Dolphin Emulator Original and Relive Your Childhood Memories with Wii and GameCube Games.md deleted file mode 100644 index e4ce53d2e3d10cf1952a3011bf5362ba5bf27f03..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Dolphin Emulator Original and Relive Your Childhood Memories with Wii and GameCube Games.md +++ /dev/null @@ -1,210 +0,0 @@ -
        -

        Download Dolphin Emulator Original: How to Play GameCube and Wii Games on PC

        -

        If you are a fan of Nintendo's GameCube and Wii consoles, you might be wondering how to play your favorite games on your PC. The answer is Dolphin Emulator, a free and open-source software that can emulate these two consoles with high accuracy and performance. In this article, we will show you how to download dolphin emulator original, how to install and configure it, and how to play GameCube and Wii games on it.

        -

        What is Dolphin Emulator?

        -

Dolphin Emulator is an emulator for the Nintendo GameCube and Wii that runs on Windows, Linux, macOS, Android, Xbox One, and Xbox Series X|S. It was first released in 2003 as freeware for Windows and became open source in 2008. It was the first GameCube emulator that could successfully run commercial games, and it later gained support for Wii emulation as well.

        -

        download dolphin emulator original


        Download File · https://ssurll.com/2uO19T



        -

        Features of Dolphin Emulator

        -

        Dolphin Emulator has many features that make it a popular choice among gamers who want to enjoy GameCube and Wii games on their PC. Some of these features are:

        -
          -
        • Compatibility with all PC controllers, including keyboard and mouse.
        • -
        • Turbo speed option that can speed up or slow down the emulation.
        • -
        • Networked multiplayer that allows playing online or locally with other players.
        • -
        • Save and load state system that can save and load the game at any point.
        • -
        • Customizable graphics and audio settings that can enhance the game's visuals and sounds.
        • -
        • Cheat codes support that can enable GameShark, Action Replay, and Gecko codes.
        • -
        • High-level emulation accuracy that can run most games without major glitches or crashes.
        • -
        -

        Compatibility of Dolphin Emulator

        -

        Dolphin Emulator has a high compatibility rate across the majority of titles for both GameCube and Wii platforms. According to the official compatibility list, out of the 1,368 tested games, 38% are rated as perfect, 59% are rated as playable, 2% are rated as starts, 0.3% are rated as intro/menu, and 0.5% are rated as broken. However, some games may require specific settings or hardware to run properly, so it is recommended to check the game's wiki page or forum thread before playing.

        -

        How to Download Dolphin Emulator Original?

        -

        The original version of Dolphin Emulator can be downloaded from different sources, depending on your operating system and preference. Here are some of the options:

        -

        Downloading from the Official Website

        -

        The official website of Dolphin Emulator is https://dolphin-emu.org/, where you can find the latest versions of the emulator for various platforms. There are two types of versions available: beta versions and development versions. Beta versions are released every month, usually accompanied by a progress report article. They are more stable than development versions, but may not have the newest features or fixes. Development versions are released every time a developer makes a change to Dolphin, several times every day. They have the latest and greatest improvements to the project, but they are less tested than beta versions, and may have bugs or issues. You can choose the version that suits your needs and download it from the website.

        -


        -

        Downloading from Other Sources

        -

        If you prefer to download Dolphin Emulator from other sources, such as third-party websites or app stores, you should be careful and make sure that the source is trustworthy and reliable. Some sources may offer modified or outdated versions of Dolphin Emulator that may contain malware, viruses, or unwanted features. Some sources may also claim to offer Dolphin Emulator for platforms that are not officially supported, such as iOS or PlayStation. These are usually fake or scam applications that may harm your device or steal your information. To avoid these risks, it is recommended to download Dolphin Emulator only from the official website or from reputable sources that are verified by the Dolphin team.

        -

        How to Install and Configure Dolphin Emulator?

        -

        After downloading Dolphin Emulator, you need to install and configure it on your PC before you can start playing GameCube and Wii games. The installation and configuration process may vary depending on your operating system and preferences. Here are some general steps to follow:

        -

        Installing Dolphin Emulator on Windows

        -

        If you downloaded Dolphin Emulator for Windows, you will get a ZIP file that contains the emulator files. You need to extract the ZIP file to a folder of your choice, such as C:\Dolphin. You can use any file extraction software, such as WinRAR or 7-Zip, to do this. After extracting the ZIP file, you will see a folder named Dolphin-x64 (or Dolphin-x86 if you downloaded the 32-bit version). Inside this folder, you will find the executable file named Dolphin.exe. This is the main file that runs the emulator. You can double-click on it to launch Dolphin Emulator, or create a shortcut on your desktop for easier access.

        -

        Installing Dolphin Emulator on Mac

        -

        If you downloaded Dolphin Emulator for Mac, you will get a DMG file that contains the emulator files. You need to mount the DMG file by double-clicking on it. This will open a new window that shows the Dolphin icon. You need to drag and drop this icon to the Applications folder in your Finder sidebar. This will copy the emulator files to your Applications folder. After copying the files, you can eject the DMG file by right-clicking on it and selecting Eject. To launch Dolphin Emulator, you need to open your Applications folder and double-click on the Dolphin icon. You may need to allow Dolphin Emulator to run on your Mac by going to System Preferences > Security & Privacy > General and clicking on Open Anyway next to Dolphin Emulator.

        Installing Dolphin Emulator on Linux

        -

        If you downloaded Dolphin Emulator for Linux, you will get a TAR.XZ file that contains the emulator files. You need to extract the TAR.XZ file to a folder of your choice, such as ~/Dolphin. You can use any file extraction software, such as tar or xz, to do this. After extracting the TAR.XZ file, you will see a folder named dolphin-emu-master. Inside this folder, you will find the executable file named dolphin-emu. This is the main file that runs the emulator. You can run it from the terminal by typing ./dolphin-emu, or create a launcher on your desktop for easier access. You may need to install some dependencies for Dolphin Emulator to work on your Linux system, such as libgtk2.0-dev, libwxgtk3.0-dev, libxext-dev, and libudev-dev. You can use your package manager, such as apt or yum, to install these dependencies.
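If you prefer to do the whole Linux setup from a terminal, the steps above can be scripted. The commands below are only a sketch for a Debian/Ubuntu-style system: the archive name dolphin-emu.tar.xz and the target folder are assumptions, and the package list is simply the one mentioned above, so adjust both to match your actual download and your distribution's package manager.

```bash
# Assumed archive name and install folder; adjust to your actual download.
sudo apt update
sudo apt install -y libgtk2.0-dev libwxgtk3.0-dev libxext-dev libudev-dev  # dependencies listed above

mkdir -p ~/Dolphin
tar -xJf ~/Downloads/dolphin-emu.tar.xz -C ~/Dolphin  # extract the TAR.XZ archive

cd ~/Dolphin/dolphin-emu-master
./dolphin-emu  # launch the emulator
```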

        -

        Configuring Dolphin Emulator Settings

        -

        Once you have installed Dolphin Emulator on your PC, you need to configure some settings to optimize your gaming experience. To access the settings menu, you need to click on the Config button on the main toolbar of Dolphin Emulator. This will open a new window that shows various tabs and options. Here are some of the most important settings to configure:

        -
          -
        • General: This tab allows you to change some general settings, such as language, theme, interface mode, and update channel. You can also enable or disable some features, such as dual core mode, idle skipping, cheats, and analytics.
        • -
        • Graphics: This tab allows you to change some graphics settings, such as backend, adapter, resolution, aspect ratio, fullscreen mode, vsync, and anti-aliasing. You can also enable or disable some enhancements, such as anisotropic filtering, scaled EFB copy, force texture filtering, and post-processing effects.
        • -
        • Audio: This tab allows you to change some audio settings, such as backend, volume, latency, and stretching. You can also enable or disable some features, such as DSP HLE emulation and audio dumping.
        • -
        • GameCube: This tab allows you to change some GameCube settings, such as system language, IPL settings, memory card slots, and SIDevice.
        • -
        • Wii: This tab allows you to change some Wii settings, such as system language, aspect ratio, sensor bar position and sensitivity, speaker volume and data rate.
        • -
        • Paths: This tab allows you to change some paths settings, such as default ISO directory and NAND root path.
        • -
        • Advanced: This tab allows you to change some advanced settings that are not recommended for casual users. These include CPU clock override, MMU emulation, CPU emulation engine, determinism mode, and custom textures.
        • -
        -

        You can experiment with different settings and see how they affect the performance and quality of the emulation. You can also save and load different profiles of settings for different games or scenarios. To save a profile, you need to click on the Save button at the bottom of the settings window and enter a name for the profile. To load a profile, you need to click on the Load button and select the profile from the list.

        -

        Configuring Dolphin Emulator Controllers

        -

        Another important step to configure Dolphin Emulator is to set up your controllers. Dolphin Emulator supports various types of controllers, such as keyboard and mouse, gamepad, joystick, steering wheel, and motion controllers. You can also use real GameCube and Wii controllers with Dolphin Emulator if you have the proper adapter or Bluetooth connection. To access the controller settings menu, you need to click on the Controllers button on the main toolbar of Dolphin Emulator. This will open a new window that shows four tabs: GameCube Controllers, Wii Remotes, Emulated Wii Remote, and Real Wii Remote. Here are some of the options to configure:

        -
          -
        • GameCube Controllers: This tab allows you to configure up to four GameCube controllers for Dolphin Emulator. You can choose between Standard Controller, GameCube Adapter for Wii U, or None for each port. If you choose Standard Controller, you can click on the Configure button to map the buttons and axes of your controller to the GameCube controller layout. You can also adjust some settings, such as rumble and deadzone.
        • -
        • Wii Remotes: This tab allows you to configure up to four Wii Remotes for Dolphin Emulator. You can choose between Emulated Wii Remote, Real Wii Remote, or Hybrid Wii Remote for each slot. If you choose Emulated Wii Remote, you can click on the Configure button to map the buttons and axes of your controller to the Wii Remote layout. You can also adjust some settings, such as extension, speaker data rate, and motion simulation.
        • -
        • Emulated Wii Remote: This tab allows you to configure the general settings for all emulated Wii Remotes in Dolphin Emulator. You can choose between Basic or Advanced mode for the motion input source. You can also calibrate your controller's accelerometer and gyroscope.
        • -
        • Real Wii Remote: This tab allows you to configure the general settings for all real Wii Remotes in Dolphin Emulator. You can enable or disable continuous scanning for new devices, speaker data rate, rumble motor, and battery level display.
        • -
        -

        You can test your controller's input by clicking on the Test button at the bottom of each tab. You can also save and load different profiles of controller settings for different games or scenarios. To save a profile, you need to click on the Save button at the bottom of each tab and enter a name for the profile. To load a profile, you need to click on the Load button and select the profile from the list.

        -

        How to Play GameCube and Wii Games on Dolphin Emulator?

        -

After installing and configuring Dolphin Emulator on your PC, you are ready to play GameCube and Wii games on it. Before you can do that, however, you need to obtain the game files, also known as ROMs, for the games you want to play. ROMs are digital copies of the game discs that can be read by the emulator. Obtaining ROMs is not always a simple or legal process, as it involves either ripping the game discs you own or downloading copies from the internet. Therefore, we will not provide any links or instructions on how to obtain ROMs, and we advise you to proceed at your own risk and responsibility.

        -

        Obtaining GameCube and Wii ROMs

        -

        There are two main ways to obtain GameCube and Wii ROMs: ripping them from your own game discs or downloading them from the internet. Ripping them from your own game discs is the legal and ethical way, as it ensures that you own a legitimate copy of the game and that you are not infringing on any copyrights. However, ripping them from your own game discs requires some special hardware and software, such as a Wii console, a Wii disc drive, an SD card, and a homebrew application. Downloading them from the internet is the easy and convenient way, as it only requires a web browser and an internet connection. However, downloading them from the internet is illegal and unethical, as it violates the game developers' and publishers' rights and may expose you to malware or viruses.

        -

        Loading GameCube and Wii ROMs on Dolphin Emulator

        -

        Once you have obtained the GameCube and Wii ROMs, you need to load them on Dolphin Emulator to play them. To do this, you need to follow these steps:

        -
          -
        1. Launch Dolphin Emulator on your PC.
        2. -
        3. Click on the Open button on the main toolbar of Dolphin Emulator. This will open a file browser window that allows you to select the ROM file you want to load.
        4. -
        5. Navigate to the folder where you stored your ROM files and select the one you want to play. The ROM file should have an extension of .iso, .gcm, .wbfs, .ciso, .gcz, or .wad.
        6. -
        7. Click on the Open button at the bottom of the file browser window. This will load the ROM file on Dolphin Emulator and start the game.
        8. -
        -

        You can also add your ROM files to Dolphin Emulator's library for easier access. To do this, you need to follow these steps:

        -
          -
        1. Launch Dolphin Emulator on your PC.
        2. -
        3. Click on the Config button on the main toolbar of Dolphin Emulator. This will open the settings window.
        4. -
        5. Click on the Paths tab in the settings window.
        6. -
        7. Click on the Add... button at the bottom of the Paths tab. This will open a file browser window that allows you to select a folder where your ROM files are stored.
        8. -
        9. Navigate to the folder where you stored your ROM files and select it.
        10. -
        11. Click on the Select Folder button at the bottom of the file browser window. This will add the folder to Dolphin Emulator's library.
        12. -
        13. Click on the Close button at the bottom of the settings window. This will close the settings window and return to the main window of Dolphin Emulator.
        14. -
        15. Click on the Refresh button on the main toolbar of Dolphin Emulator. This will refresh the library and show the ROM files in the list.
        16. -
        -

        You can now double-click on any ROM file in the list to load it on Dolphin Emulator and start the game.

        -

        Enhancing GameCube and Wii Games on Dolphin Emulator

        -

        One of the advantages of playing GameCube and Wii games on Dolphin Emulator is that you can enhance them with various settings and features that are not available on the original consoles. For example, you can increase the resolution, enable anti-aliasing, apply post-processing effects, use custom textures, and more. To access these options, you need to right-click on any ROM file in the list and select Properties. This will open a new window that shows various tabs and options. Here are some of the options to enhance your games:

        -
          -
        • Info: This tab shows some basic information about the game, such as title, ID, region, platform, size, and description. You can also edit some of these fields if you want.
        • -
        • Filesystem: This tab shows the file structure of the game disc, such as partitions, files, and folders. You can also extract or replace some of these files if you want.
        • -
        • Patches: This tab shows some patches that can be applied to the game, such as AR codes, Gecko codes, or IPS patches. You can also add or remove some of these patches if you want.
        • -
        • Game Config: This tab shows some configuration settings that are specific to the game, such as CPU clock override, MMU emulation, dual core mode, and more. You can also change some of these settings if you want.
        • -
        • Enhancements: This tab shows some enhancements that can be applied to the game, such as resolution, anti-aliasing, anisotropic filtering, scaled EFB copy, force texture filtering, post-processing effects, and more. You can also change some of these settings if you want.
        • -
        • Custom Textures: This tab shows some custom textures that can be used for the game, such as high-resolution textures or fan-made textures. You can also add or remove some of these textures if you want.
        • -
        -

        You can experiment with different options and see how they affect the performance and quality of the game. You can also save and load different profiles of options for different games or scenarios. To save a profile, you need to click on the Save button at the bottom of each tab and enter a name for the profile. To load a profile, you need to click on the Load button and select the profile from the list.

        -

        Conclusion

        -

        In this article, we have shown you how to download dolphin emulator original, how to install and configure it, and how to play GameCube and Wii games on it. Dolphin Emulator is a powerful and versatile software that can emulate these two consoles with high accuracy and performance. It also offers many features and enhancements that can improve your gaming experience. However, it also requires some technical knowledge and skills to use it properly and legally. We hope that this article has been helpful and informative for you. If you have any questions or comments, please feel free to leave them below.

        -

        FAQs

        -

        Here are some frequently asked questions about Dolphin Emulator:

        -
          -
        1. Is Dolphin Emulator legal?
        2. -

          Dolphin Emulator itself is legal, as it is a software that emulates hardware that is no longer in production or supported by Nintendo. However, the legality of obtaining and using ROMs for GameCube and Wii games is more complicated, as it depends on the laws of your country and the source of the ROMs. Generally, it is legal to rip ROMs from your own game discs, but illegal to download them from the internet. However, some countries may have different or unclear regulations on this matter, so it is advisable to consult a lawyer or an expert before obtaining or using ROMs.

          -
        3. Is Dolphin Emulator safe?
        4. -

          Dolphin Emulator is safe, as long as you download it from the official website or from reputable sources that are verified by the Dolphin team. However, some sources may offer modified or outdated versions of Dolphin Emulator that may contain malware, viruses, or unwanted features. Some sources may also claim to offer Dolphin Emulator for platforms that are not officially supported, such as iOS or PlayStation. These are usually fake or scam applications that may harm your device or steal your information. To avoid these risks, it is recommended to download Dolphin Emulator only from the official website or from reputable sources that are verified by the Dolphin team.

          -
        5. What are the system requirements for Dolphin Emulator?
        6. -

          The system requirements for Dolphin Emulator vary depending on the game and the settings you use. However, here are some general guidelines for the minimum and recommended system requirements for Dolphin Emulator:

| Component | Minimum | Recommended |
| --- | --- | --- |
| CPU | Intel Core 2 Duo E8400 or AMD Phenom II X2 550 | Intel Core i5-3570K or AMD Ryzen 3 1300X |
| GPU | NVIDIA GeForce GT 630 or AMD Radeon HD 6570 | NVIDIA GeForce GTX 1060 or AMD Radeon RX 580 |
| RAM | 2 GB | 4 GB |
| OS | Windows 7 (64-bit) or Linux (64-bit) | Windows 10 (64-bit) or Linux (64-bit) |
| Storage | At least 10 GB of free space for Dolphin Emulator and ROMs | At least 20 GB of free space for Dolphin Emulator and ROMs |
| Controller | Keyboard and mouse, gamepad, joystick, steering wheel, or motion controller | GameCube controller with adapter or Wii Remote with Bluetooth connection |
          -

          You can also check the performance guide on the official website of Dolphin Emulator for more tips and tricks on how to optimize your system for Dolphin Emulator.

          -
        7. What are some of the best games to play on Dolphin Emulator?
        8. -

          Dolphin Emulator can run most games for GameCube and Wii platforms with high accuracy and performance. However, some games may stand out more than others due to their popularity, quality, or compatibility. Here are some of the best games to play on Dolphin Emulator:

          -
            -
          • The Legend of Zelda: The Wind Waker: This is a classic action-adventure game that features a cel-shaded art style and an open-world exploration of a vast ocean. It is one of the most visually stunning and immersive games on GameCube, and it runs perfectly on Dolphin Emulator with enhanced graphics and features.
          • -
          • Super Smash Bros. Melee: This is a fighting game that features a roster of characters from various Nintendo franchises, such as Mario, Zelda, Pokemon, and more. It is one of the most popular and competitive games on GameCube, and it runs smoothly on Dolphin Emulator with online multiplayer support and custom mods.
          • -
          • Mario Kart Wii: This is a racing game that features a variety of characters, vehicles, tracks, and items from the Mario series. It is one of the most fun and chaotic games on Wii, and it runs flawlessly on Dolphin Emulator with motion controls emulation and online multiplayer support.
          • -
          • Metroid Prime Trilogy: This is a collection of three first-person shooter games that follow the adventures of Samus Aran, a bounty hunter who explores alien worlds and fights against space pirates. It is one of the most critically acclaimed and atmospheric games on Wii, and it runs beautifully on Dolphin Emulator with enhanced graphics and features.
          • -
          • Xenoblade Chronicles: This is a role-playing game that features a massive open-world environment, a real-time combat system, a complex story, and a memorable soundtrack. It is one of the most epic and ambitious games on Wii, and it runs superbly on Dolphin Emulator with enhanced graphics and features.
          • -
          -

          These are just some of the best games to play on Dolphin Emulator, but there are many more to discover and enjoy. You can check the official compatibility list or the game's wiki page or forum thread for more information and recommendations.

          -
        9. How to update Dolphin Emulator?
        10. -

          Dolphin Emulator is constantly being updated by its developers and contributors, who add new features, fix bugs, and improve performance. To update Dolphin Emulator, you need to follow these steps:

          -
            -
          1. Launch Dolphin Emulator on your PC.
          2. -
          3. Click on the Help button on the main toolbar of Dolphin Emulator. This will open a drop-down menu that shows various options.
          4. -
          5. Click on Check for Updates. This will check if there is a newer version of Dolphin Emulator available for download.
          6. -
          7. If there is a newer version available, you will see a pop-up window that shows the version number and the changelog. You can also click on the View Full Changelog button to see more details.
          8. -
          9. Click on the Download Now button to download the newer version of Dolphin Emulator. This will open a web browser window that shows the download page.
          10. -
          11. Click on the Download button on the download page to download the ZIP file that contains the newer version of Dolphin Emulator.
          12. -
          13. Extract the ZIP file to a folder of your choice, such as C:\Dolphin. You can use any file extraction software, such as WinRAR or 7-Zip, to do this.
          14. -
          15. Replace the old version of Dolphin Emulator with the newer version by copying and pasting the files from the extracted folder to the folder where you installed Dolphin Emulator, such as C:\Dolphin. You may need to overwrite some files or folders if prompted.
          16. -
          17. Launch Dolphin Emulator on your PC. You should see the newer version number on the title bar of Dolphin Emulator.
          18. -
          -

          You can also enable automatic updates for Dolphin Emulator by going to Config > General > Updates and checking the Enable Auto-Update option. This will make Dolphin Emulator check for updates every time you launch it and download them automatically if available.

          -
          -
          \ No newline at end of file diff --git a/spaces/skf15963/summary/fengshen/examples/sequence_tagging/finetune_sequence_tagging.py b/spaces/skf15963/summary/fengshen/examples/sequence_tagging/finetune_sequence_tagging.py deleted file mode 100644 index a4ca513231810e3c7020e1ee4657c53ce286a5e7..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/examples/sequence_tagging/finetune_sequence_tagging.py +++ /dev/null @@ -1,317 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The IDEA Authors. All rights reserved. - -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at - -# http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -from dataclasses import dataclass -import copy -import logging -import torch.nn.functional as F -import os -import json -import torch -import pytorch_lightning as pl -import argparse -from pytorch_lightning.callbacks import ModelCheckpoint, LearningRateMonitor -from torch.utils.data import Dataset, DataLoader -from torch.utils.data._utils.collate import default_collate -from fengshen.models.tagging_models.bert_for_tagging import BertLinear,BertCrf,BertSpan,BertBiaffine -from fengshen.data.sequence_tagging_dataloader.sequence_tagging_collator import CollatorForLinear, CollatorForCrf, CollatorForSpan, CollatorForBiaffine -from fengshen.data.sequence_tagging_dataloader.sequence_tagging_datasets import DataProcessor, get_datasets -from fengshen.metric.metric import EntityScore -from fengshen.models.model_utils import configure_optimizers, get_total_steps -from fengshen.utils.universal_checkpoint import UniversalCheckpoint -from fengshen.data.universal_datamodule import UniversalDataModule - -from transformers import ( - BertTokenizer, BertConfig, AutoTokenizer -) -from fengshen.metric.utils_ner import get_entities, bert_extract_item - - -_model_dict={ - 'bert-linear': BertLinear, - 'bert-crf': BertCrf, - 'bert-span': BertSpan, - 'bert-biaffine': BertBiaffine -} - -_collator_dict={ - 'linear': CollatorForLinear, - 'crf': CollatorForCrf, - 'span': CollatorForSpan -} - -_validation_dict={ - 'linear': 'validation_linear', - 'crf': 'validation_crf', - 'span': 'validation_span', - 'biaffine': 'validation_biaffine', -} - -_prediction_dict={ - 'linear': 'predict_linear', - 'crf': 'predict_crf', - 'span': 'predict_span', - 'biaffine': 'predict_biaffine', -} - -logger = logging.getLogger(__name__) - - -class LitModel(pl.LightningModule): - - @staticmethod - def add_model_specific_args(parent_args): - parser = parent_args.add_argument_group('BaseModel') - parser.add_argument("--max_seq_length", default=512, type=int) - parser.add_argument('--data_dir', default=None, type=str) - parser.add_argument('--model_type', default='bert', type=str) - parser.add_argument("--decode_type", default="linear", choices=["linear", "crf", "biaffine", "span"], type=str) - parser.add_argument('--loss_type', default='ce', type=str, choices=['lsr', 'focal', 'ce']) - return parent_args - - def __init__(self, args, id2label, tokenizer): - super().__init__() - - self.model_name=args.model_type+"-"+args.decode_type - self.id2label = id2label - - 
self.config=BertConfig.from_pretrained(args.model_path) - self.tokenizer = tokenizer - self.model = _model_dict[self.model_name].from_pretrained(args.model_path, config=self.config, num_labels=len(self.id2label), loss_type=args.loss_type) - self.entity_score=EntityScore() - - self.validate_fn=getattr(self,_validation_dict[args.decode_type]) - self.predict_fn=getattr(self,_prediction_dict[args.decode_type]) - - self.predict_result=[] - - self.save_hyperparameters(args) - - def setup(self, stage) -> None: - if stage == 'fit': - self.total_steps = get_total_steps(self.trainer, self.hparams) - print('Total steps: {}' .format(self.total_steps)) - - def training_step(self, batch, batch_idx): - outputs = self.model(**batch) - loss = outputs.loss - self.log('train_loss', loss) - return loss - - def validation_step(self, batch, batch_idx): - self.validate_fn(batch,batch_idx) - - def validation_linear(self, batch, batch_idx): - outputs = self.model(**batch) - loss = outputs.loss - logits = outputs.logits - - preds = torch.argmax(F.log_softmax(logits, dim=2), dim=2) - preds = preds.detach().cpu().numpy() - labels = batch['labels'].detach().cpu().numpy() - - for i, label in enumerate(labels): - y_true = [] - y_pred = [] - for j, m in enumerate(label): - if j == 0: - continue - elif j == (torch.sum(batch['attention_mask'][i]).item()-1): - true_subject=get_entities(y_true,self.id2label) - pred_subject=get_entities(y_pred,self.id2label) - self.entity_score.update(true_subject=true_subject, pred_subject=pred_subject) - break - else: - y_true.append(self.id2label[labels[i][j]]) - y_pred.append(self.id2label[preds[i][j]]) - - self.log('val_loss', loss) - - def validation_crf(self, batch, batch_idx): - outputs = self.model(**batch) - loss = outputs.loss - logits = outputs.logits - - preds = self.model.crf.decode(logits, batch['attention_mask']) - preds = preds.detach().squeeze(0).cpu().numpy().tolist() - labels = batch['labels'].detach().cpu().numpy() - - for i, label in enumerate(labels): - y_true = [] - y_pred = [] - for j, m in enumerate(label): - if j == 0: - continue - elif j == (torch.sum(batch['attention_mask'][i]).item()-1): - true_subject=get_entities(y_true,self.id2label) - pred_subject=get_entities(y_pred,self.id2label) - self.entity_score.update(true_subject=true_subject, pred_subject=pred_subject) - break - else: - y_true.append(self.id2label[labels[i][j]]) - y_pred.append(self.id2label[preds[i][j]]) - - self.log('val_loss', loss) - - def validation_span(self, batch, batch_idx): - outputs = self.model(**batch) - loss = outputs.loss - start_logits = outputs.start_logits - end_logits = outputs.end_logits - labels=batch['subjects'] - for i, T in enumerate(labels): - active_start_logits=start_logits[i][:batch['input_len'][i]] - active_end_logits=end_logits[i][:batch['input_len'][i]] - R = bert_extract_item(active_start_logits, active_end_logits) - - T=T[~torch.all(T==-1,dim=-1)].cpu().numpy() - T=list(map(lambda x:(self.id2label[x[0]],x[1],x[2]),T)) - R=list(map(lambda x:(self.id2label[x[0]],x[1],x[2]),R)) - - self.entity_score.update(true_subject=T, pred_subject=R) - self.log('val_loss', loss) - - def validation_biaffine(self, batch, batch_idx): - outputs = self.model(**batch) - loss = outputs.loss - logits = outputs.span_logits - - preds = torch.argmax(logits.cpu().numpy(), axis=-1) - labels = batch['span_labels'].cpu().numpy() - - for i, label in enumerate(labels): - input_len=(batch['input_len'][i])-2 - active_label=labels[i,1:input_len+1,1:input_len+1] - 
active_pred=preds[i,1:input_len+1,1:input_len+1] - - temp_1 = [] - temp_2 = [] - - for j in range(input_len): - for k in range(input_len): - if self.id2label[active_label[j,k]]!="O": - temp_1.append([self.id2label[active_label[j,k]],j,k]) - if self.id2label[active_pred[j,k]]!="O": - temp_2.append([self.id2label[active_pred[j,k]],j,k]) - - self.entity_score.update(pred_subject=temp_2, true_subject=temp_1) - - self.log('val_loss', loss) - - def validation_epoch_end(self, outputs): - # compute metric for all process - score_dict, _ = self.entity_score.result() - if self.trainer._accelerator_connector.cluster_environment.global_rank() == 0: - print('score_dict:\n', score_dict) - # reset the metric after once validation - self.entity_score.reset() - for k, v in score_dict.items(): - self.log('val_{}'.format(k), v) - - def predict_step(self, batch, batch_idx): - batch['labels'] = None - outputs = self.model(**batch) - - self.predict_fn(batch,batch_idx) - - def predict_linear(self, batch, outputs): - logits = torch.argmax(F.log_softmax(outputs.logits, dim=2), dim=2) - preds = logits.detach().cpu().numpy() - - for i, pred in enumerate(preds): - text = self.tokenizer.convert_ids_to_tokens(batch['input_ids'][i])[:batch['input_len'][i]][1:-1] - pred = pred[:batch['input_len'][i]][1:-1] - label_entities = get_entities(pred, self.id2label) - for label_list in label_entities: - label_list.append("".join(text[label_list[1]:label_list[2]+1])) - - self.predict_result.extend(label_entities) - - def predict_crf(self, batch, batch_idx): - logits = self.model(**batch).logits - preds = self.model.crf.decode(logits, batch['attention_mask']).squeeze(0).cpu().numpy().tolist() - - for i, pred in enumerate(preds): - text = self.tokenizer.convert_ids_to_tokens(batch['input_ids'][i])[:batch['input_len'][i]][1:-1] - pred = pred[:batch['input_len'][i]][1:-1] - label_entities = get_entities(pred, self.id2label) - for label_list in label_entities: - label_list.append("".join(text[label_list[1]:label_list[2]+1])) - - self.predict_result.extend(label_entities) - - def predict_span(self, batch, batch_idx): - batch['start_positions'] = None - batch['end_positions'] = None - outputs = self.model(**batch) - - start_logits, end_logits = outputs.start_logits, outputs.end_logits - for i, _ in enumerate(start_logits): - text = self.tokenizer.convert_ids_to_tokens(batch['input_ids'][i])[:batch['input_len'][i]][1:-1] - R = bert_extract_item(start_logits[i][:batch['input_len'][i]], end_logits[i][:batch['input_len'][i]]) - if R: - label_entities = [[self.id2label[x[0]],x[1],x[2],"".join(text[x[1]:x[2]+1])] for x in R] - else: - label_entities = [] - - self.predict_result.extend(label_entities) - - - - def configure_optimizers(self): - return configure_optimizers(self) - -def main(): - total_parser = argparse.ArgumentParser("TASK NAME") - - # * Args for data preprocessing - total_parser = UniversalDataModule.add_data_specific_args(total_parser) - # * Args for training - total_parser = pl.Trainer.add_argparse_args(total_parser) - total_parser = UniversalCheckpoint.add_argparse_args(total_parser) - - # * Args for base model - from fengshen.models.model_utils import add_module_args - total_parser = add_module_args(total_parser) - total_parser = LitModel.add_model_specific_args(total_parser) - - args = total_parser.parse_args() - - datasets=get_datasets(args) - - checkpoint_callback = UniversalCheckpoint(args).callbacks - lr_monitor = LearningRateMonitor(logging_interval='step') - - trainer = pl.Trainer.from_argparse_args(args, - 
callbacks=[checkpoint_callback, lr_monitor] - ) - - label2id,id2label=DataProcessor.get_labels(args) - tokenizer = AutoTokenizer.from_pretrained(args.model_path) - - collator = _collator_dict[args.decode_type]() - collator.args=args - collator.tokenizer=tokenizer - collator.label2id=label2id - data_model = UniversalDataModule(tokenizer,collator,args,datasets) - - model = LitModel(args,id2label,tokenizer) - print(label2id) - trainer.fit(model, data_model) - # trainer.predict(model,dataloaders=data_model.predict_dataloader()) - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/snowcoin/bing/Dockerfile b/spaces/snowcoin/bing/Dockerfile deleted file mode 100644 index 139c333a3bba5ac3680d42b6f356824207f05255..0000000000000000000000000000000000000000 --- a/spaces/snowcoin/bing/Dockerfile +++ /dev/null @@ -1,33 +0,0 @@ -# Build Stage -# 使用 golang:alpine 作为构建阶段的基础镜像 -FROM golang:alpine AS builder - -# 添加 git,并且清除缓存🧹 -RUN apk --no-cache add git && \ - git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app && \ - apk del git - -# 设置工作目录 -WORKDIR /workspace/app - -# 编译 go 项目 -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# 使用轻量级的 alpine 镜像🪞 -FROM alpine - -# 设置工作目录💼 -WORKDIR /workspace/app - -# 从构建阶段复制编译后的二进制文件👔 -COPY --from=builder /workspace/app/go-proxy-bingai . - -# (可选)设置环境变量✍️ -ENV Go_Proxy_BingAI_USER_TOKEN_1="G4hJ9k544565uhjjhjlkjh6356223p3EaYc0FvIjHmLzXeRfAq" - -# 端口 -EXPOSE 8080 - -# 容器运行✅ -CMD ["/workspace/app/go-proxy-bingai"] diff --git a/spaces/sohojoe/soho-clip-embeddings-explorer/experimental/clip_app_client.py b/spaces/sohojoe/soho-clip-embeddings-explorer/experimental/clip_app_client.py deleted file mode 100644 index ac3636822ab5cc9a5ad70e9854831b32647efe47..0000000000000000000000000000000000000000 --- a/spaces/sohojoe/soho-clip-embeddings-explorer/experimental/clip_app_client.py +++ /dev/null @@ -1,113 +0,0 @@ -import json -import os -import numpy as np -import requests -from concurrent.futures import ThreadPoolExecutor, as_completed -from PIL import Image -from io import BytesIO -import torch - -from clip_retrieval.load_clip import load_clip, get_tokenizer - - -class ClipAppClient: - """ - A class to handle generating embeddings using the OpenAI CLIP model. - - app_client = ClipAppClient() - - test_image_url = "https://example.com/image.jpg" - preprocessed_image = app_client.preprocess_image(test_image_url) - - text = "A beautiful landscape" - text_embeddings = app_client.text_to_embedding(text) - - image_embeddings = app_client.image_url_to_embedding(test_image_url) - - preprocessed_image_embeddings = app_client.preprocessed_image_to_embedding(preprocessed_image) - """ - - def __init__(self, clip_model="ViT-L/14", device=None): - # def __init__(self, clip_model="open_clip:ViT-H-14", device=None): - self.clip_model = clip_model - self.device = device or ("cuda:0" if torch.cuda.is_available() else "cpu") - print("using device", self.device) - _, self.preprocess = load_clip(clip_model, use_jit=True, device=self.device) - self.tokenizer = get_tokenizer(clip_model) - - def preprocess_image(self, image_url): - """ - Preprocess an image from a given URL. 
- - :param image_url: str, URL of the image to preprocess - :return: torch.Tensor, preprocessed image - """ - if os.path.isfile(image_url): - input_image = Image.open(image_url).convert('RGB') - input_image = np.array(input_image) - input_image = Image.fromarray(input_image) - else: - response = requests.get(image_url) - input_image = Image.open(BytesIO(response.content)).convert('RGB') - input_image = np.array(input_image) - input_image = Image.fromarray(input_image) - prepro = self.preprocess(input_image).unsqueeze(0).cpu() - return prepro - - def text_to_embedding(self, text): - """ - Convert a given text to an embedding using the OpenAI CLIP model. - - :param text: str, text to convert to an embedding - :return: str, text embeddings - """ - payload = { - "text": ('str', text, 'application/octet-stream'), - } - url = os.environ.get("HTTP_ADDRESS", "http://127.0.0.1:8000/") - response = requests.post(url, files=payload) - embeddings = response.text - embeddings = json.loads(embeddings) - embeddings = torch.tensor(embeddings) - return embeddings - - def image_url_to_embedding(self, image_url): - """ - Convert an image URL to an embedding using the OpenAI CLIP model. - - :param image_url: str, URL of the image to convert to an embedding - :return: str, image embeddings - """ - payload = { - "image_url": ('str', image_url, 'application/octet-stream'), - } - url = os.environ.get("HTTP_ADDRESS", "http://127.0.0.1:8000/") - response = requests.post(url, files=payload) - embeddings = response.text - embeddings = json.loads(embeddings) - embeddings = torch.tensor(embeddings) - return embeddings - - def preprocessed_image_to_embedding(self, image): - """ - Convert a preprocessed image to an embedding using the OpenAI CLIP model. - - :param image: torch.Tensor, preprocessed image - :return: str, image embeddings - """ - key = "preprocessed_image" - data_bytes = image.numpy().tobytes() - shape_bytes = np.array(image.shape).tobytes() - dtype_bytes = str(image.dtype).encode() - payload = { - key: ('tensor', data_bytes, 'application/octet-stream'), - 'shape': ('shape', shape_bytes, 'application/octet-stream'), - 'dtype': ('dtype', dtype_bytes, 'application/octet-stream'), - } - url = os.environ.get("HTTP_ADDRESS", "http://127.0.0.1:8000/") - response = requests.post(url, files=payload) - embeddings = response.text - embeddings = json.loads(embeddings) - embeddings = torch.tensor(embeddings) - return embeddings - diff --git a/spaces/spark-nlp/SparkNLP_NER/_highlight.py b/spaces/spark-nlp/SparkNLP_NER/_highlight.py deleted file mode 100644 index 3e9356992d81e9cd977c24c842583690f05b6588..0000000000000000000000000000000000000000 --- a/spaces/spark-nlp/SparkNLP_NER/_highlight.py +++ /dev/null @@ -1,92 +0,0 @@ -import re -from rich.console import Console -from rich.highlighter import RegexHighlighter -from typing import Tuple, List - - -class NullHighlighter(RegexHighlighter): - """Apply style to anything that looks like an email.""" - - base_style = "" - highlights = [r""] - - -def highlight_document(doc: str, - keywords: List[Tuple[str, float]]): - """ Highlight keywords in a document - Arguments: - doc: The document for which to extract keywords/keyphrases - keywords: the top n keywords for a document with their respective distances - to the input document - Returns: - highlighted_text: The document with additional tags to highlight keywords - according to the rich package - """ - keywords_only = [keyword for keyword, _ in keywords] - max_len = max([len(token.split(" ")) for token in keywords_only]) - - 
if max_len == 1: - highlighted_text = _highlight_one_gram(doc, keywords_only) - else: - highlighted_text = _highlight_n_gram(doc, keywords_only) - - - return highlighted_text - - -def _highlight_one_gram(doc: str, - keywords: List[str]) -> str: - """ Highlight 1-gram keywords in a document - Arguments: - doc: The document for which to extract keywords/keyphrases - keywords: the top n keywords for a document - Returns: - highlighted_text: The document with additional tags to highlight keywords - according to the rich package - """ - tokens = re.sub(r' +', ' ', doc.replace("\n", " ")).split(" ") - - highlighted_text = " ".join([f'{token}' - if token.lower() in keywords - else f"{token}" - for token in tokens]).strip() - - - return highlighted_text - - -def _highlight_n_gram(doc: str, - keywords: List[str]) -> str: - """ Highlight n-gram keywords in a document - Arguments: - doc: The document for which to extract keywords/keyphrases - keywords: the top n keywords for a document - Returns: - highlighted_text: The document with additional tags to highlight keywords - according to the rich package - """ - max_len = max([len(token.split(" ")) for token in keywords]) - tokens = re.sub(r' +', ' ', doc.replace("\n", " ")).strip().split(" ") - n_gram_tokens = [[" ".join(tokens[i: i + max_len][0: j + 1]) for j in range(max_len)] for i, _ in enumerate(tokens)] - highlighted_text = [] - skip = False - - for n_grams in n_gram_tokens: - candidate = False - - if not skip: - for index, n_gram in enumerate(n_grams): - - if n_gram.lower() in keywords: - candidate = f'{n_gram}' + n_grams[-1].split(n_gram)[-1] - skip = index + 1 - - if not candidate: - candidate = n_grams[0] - - highlighted_text.append(candidate) - - else: - skip = skip - 1 - highlighted_text = " ".join(highlighted_text) - return highlighted_text \ No newline at end of file diff --git a/spaces/spondej/stabel-diffusion-z-1.5/index.html b/spaces/spondej/stabel-diffusion-z-1.5/index.html deleted file mode 100644 index 918e851d9dd1baf9e4fb4f067fd979d432472161..0000000000000000000000000000000000000000 --- a/spaces/spondej/stabel-diffusion-z-1.5/index.html +++ /dev/null @@ -1,24 +0,0 @@ - - - - - - My static Space - - - -
          -

          Welcome to your static Space!

          -

          - You can modify this app directly by editing index.html in the - Files and versions tab. -

          -

          - Also don't forget to check the - Spaces documentation. -

          -
          - - diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_to_text/docs/librispeech_example.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_to_text/docs/librispeech_example.md deleted file mode 100644 index 4040fda9426027537036ba987d087a43e734bfd9..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_to_text/docs/librispeech_example.md +++ /dev/null @@ -1,69 +0,0 @@ -[[Back]](..) - -# S2T Example: Speech Recognition (ASR) on LibriSpeech -[LibriSpeech](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf) is a de-facto standard English ASR -benchmark. We provide competitive -vanilla [Transformer](https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf) baselines. - -## Data preparation -Download and preprocess LibriSpeech data with -```bash -# additional Python packages for S2T data processing/model training -pip install pandas torchaudio sentencepiece - -python examples/speech_to_text/prep_librispeech_data.py \ - --output-root ${LS_ROOT} --vocab-type unigram --vocab-size 10000 -``` -where `LS_ROOT` is the root path for downloaded data as well as generated files (manifest, features, vocabulary and -data configuration). - -[Download](https://dl.fbaipublicfiles.com/fairseq/s2t/librispeech_vocab_unigram10000.zip) our vocabulary files -if you want to use our pre-trained models. - -## Training -```bash -fairseq-train ${LS_ROOT} --save-dir ${SAVE_DIR} \ - --config-yaml config.yaml --train-subset train-clean-100,train-clean-360,train-other-500 --valid-subset dev-clean,dev-other \ - --num-workers 4 --max-tokens 40000 --max-update 300000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \ - --arch s2t_transformer_s --share-decoder-input-output-embed \ - --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt --warmup-updates 10000 \ - --clip-norm 10.0 --seed 1 --update-freq 8 -``` -where `SAVE_DIR` is the checkpoint root path. Here we use `--arch s2t_transformer_s` (31M parameters) as example. -For better performance, you may switch to `s2t_transformer_m` (71M, with `--lr 1e-3`) or `s2t_transformer_l` -(268M, with `--lr 5e-4`). We set `--update-freq 8` to simulate 8 GPUs with 1 GPU. You may want to update it accordingly -when using more than 1 GPU. - -## Inference & Evaluation -Average the last 10 checkpoints and evaluate on the 4 splits -(`dev-clean`, `dev-other`, `test-clean` and `test-other`): -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py --inputs ${SAVE_DIR} \ - --num-epoch-checkpoints 10 \ - --output "${SAVE_DIR}/${CHECKPOINT_FILENAME}" -for SUBSET in dev-clean dev-other test-clean test-other; do - fairseq-generate ${LS_ROOT} --config-yaml config.yaml --gen-subset ${SUBSET} \ - --task speech_to_text --path ${SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 50000 --beam 5 --scoring wer -done -``` - -## Interactive Decoding -Launch the interactive console via -```bash -fairseq-interactive ${LS_ROOT} --config-yaml config.yaml --task speech_to_text \ - --path ${SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 -``` -Type in WAV/FLAC/OGG audio paths (one per line) after the prompt. 
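The checkpoint-averaging step above can be pictured with a short sketch. This is illustrative only, not the actual `scripts/average_checkpoints.py`; it assumes each fairseq checkpoint stores its parameters under a `"model"` key and that the checkpoint paths (here built from a hypothetical `save_dir` and epoch range) are known.

```python
# Minimal sketch of checkpoint averaging: load each state dict and take the element-wise mean.
import torch

def average_checkpoints(paths):
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")["model"]
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    return {k: v / len(paths) for k, v in avg.items()}

# e.g. average the last 10 epoch checkpoints before decoding (paths are hypothetical):
# averaged = average_checkpoints([f"{save_dir}/checkpoint{n}.pt" for n in range(91, 101)])
# torch.save({"model": averaged}, f"{save_dir}/avg_last_10_checkpoint.pt")
```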
- -## Results - -| --arch | Params | dev-clean | dev-other | test-clean | test-other | Model | -|---|---|---|---|---|---|---| -| s2t_transformer_s | 30M | 3.8 | 8.9 | 4.4 | 9.0 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/librispeech_transformer_s.pt) | -| s2t_transformer_m | 71M | 3.2 | 8.0 | 3.4 | 7.9 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/librispeech_transformer_m.pt) | -| s2t_transformer_l | 268M | 3.0 | 7.5 | 3.2 | 7.5 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/librispeech_transformer_l.pt) | - -[[Back]](..) diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/dataclass/utils.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/dataclass/utils.py deleted file mode 100644 index 1320ec473756c78ec949f72f9260420c19caff0f..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/dataclass/utils.py +++ /dev/null @@ -1,493 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import ast -import inspect -import logging -import os -import re -from argparse import ArgumentError, ArgumentParser, Namespace -from dataclasses import _MISSING_TYPE, MISSING, is_dataclass -from enum import Enum -from typing import Any, Dict, List, Optional, Tuple, Type - -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.configs import FairseqConfig -from hydra.core.global_hydra import GlobalHydra -from hydra.experimental import compose, initialize -from omegaconf import DictConfig, OmegaConf, open_dict, _utils - -logger = logging.getLogger(__name__) - - -def eval_str_list(x, x_type=float): - if x is None: - return None - if isinstance(x, str): - if len(x) == 0: - return [] - x = ast.literal_eval(x) - try: - return list(map(x_type, x)) - except TypeError: - return [x_type(x)] - - -def interpret_dc_type(field_type): - if isinstance(field_type, str): - raise RuntimeError("field should be a type") - - if field_type == Any: - return str - - typestring = str(field_type) - if re.match( - r"(typing.|^)Union\[(.*), NoneType\]$", typestring - ) or typestring.startswith("typing.Optional"): - return field_type.__args__[0] - return field_type - - -def gen_parser_from_dataclass( - parser: ArgumentParser, - dataclass_instance: FairseqDataclass, - delete_default: bool = False, - with_prefix: Optional[str] = None, -) -> None: - """ - convert a dataclass instance to tailing parser arguments. - - If `with_prefix` is provided, prefix all the keys in the resulting parser with it. It means that we are - building a flat namespace from a structured dataclass (see transformer_config.py for example). 
- """ - - def argparse_name(name: str): - if name == "data" and (with_prefix is None or with_prefix == ''): - # normally data is positional args, so we don't add the -- nor the prefix - return name - if name == "_name": - # private member, skip - return None - full_name = "--" + name.replace("_", "-") - if with_prefix is not None and with_prefix != '': - # if a prefix is specified, construct the prefixed arg name - full_name = with_prefix + "-" + full_name[2:] # strip -- when composing - return full_name - - def get_kwargs_from_dc( - dataclass_instance: FairseqDataclass, k: str - ) -> Dict[str, Any]: - """k: dataclass attributes""" - - kwargs = {} - - field_type = dataclass_instance._get_type(k) - inter_type = interpret_dc_type(field_type) - - field_default = dataclass_instance._get_default(k) - - if isinstance(inter_type, type) and issubclass(inter_type, Enum): - field_choices = [t.value for t in list(inter_type)] - else: - field_choices = None - - field_help = dataclass_instance._get_help(k) - field_const = dataclass_instance._get_argparse_const(k) - - if isinstance(field_default, str) and field_default.startswith("${"): - kwargs["default"] = field_default - else: - if field_default is MISSING: - kwargs["required"] = True - if field_choices is not None: - kwargs["choices"] = field_choices - if ( - isinstance(inter_type, type) - and (issubclass(inter_type, List) or issubclass(inter_type, Tuple)) - ) or ("List" in str(inter_type) or "Tuple" in str(inter_type)): - if "int" in str(inter_type): - kwargs["type"] = lambda x: eval_str_list(x, int) - elif "float" in str(inter_type): - kwargs["type"] = lambda x: eval_str_list(x, float) - elif "str" in str(inter_type): - kwargs["type"] = lambda x: eval_str_list(x, str) - else: - raise NotImplementedError( - "parsing of type " + str(inter_type) + " is not implemented" - ) - if field_default is not MISSING: - kwargs["default"] = ( - ",".join(map(str, field_default)) - if field_default is not None - else None - ) - elif ( - isinstance(inter_type, type) and issubclass(inter_type, Enum) - ) or "Enum" in str(inter_type): - kwargs["type"] = str - if field_default is not MISSING: - if isinstance(field_default, Enum): - kwargs["default"] = field_default.value - else: - kwargs["default"] = field_default - elif inter_type is bool: - kwargs["action"] = ( - "store_false" if field_default is True else "store_true" - ) - kwargs["default"] = field_default - else: - kwargs["type"] = inter_type - if field_default is not MISSING: - kwargs["default"] = field_default - - # build the help with the hierarchical prefix - if with_prefix is not None and with_prefix != '' and field_help is not None: - field_help = with_prefix[2:] + ': ' + field_help - - kwargs["help"] = field_help - if field_const is not None: - kwargs["const"] = field_const - kwargs["nargs"] = "?" - - return kwargs - - for k in dataclass_instance._get_all_attributes(): - field_name = argparse_name(dataclass_instance._get_name(k)) - field_type = dataclass_instance._get_type(k) - if field_name is None: - continue - elif inspect.isclass(field_type) and issubclass(field_type, FairseqDataclass): - # for fields that are of type FairseqDataclass, we can recursively - # add their fields to the namespace (so we add the args from model, task, etc. to the root namespace) - prefix = None - if with_prefix is not None: - # if a prefix is specified, then we don't want to copy the subfields directly to the root namespace - # but we prefix them with the name of the current field. 
- prefix = field_name - gen_parser_from_dataclass(parser, field_type(), delete_default, prefix) - continue - - kwargs = get_kwargs_from_dc(dataclass_instance, k) - - field_args = [field_name] - alias = dataclass_instance._get_argparse_alias(k) - if alias is not None: - field_args.append(alias) - - if "default" in kwargs: - if isinstance(kwargs["default"], str) and kwargs["default"].startswith( - "${" - ): - if kwargs["help"] is None: - # this is a field with a name that will be added elsewhere - continue - else: - del kwargs["default"] - if delete_default and "default" in kwargs: - del kwargs["default"] - try: - parser.add_argument(*field_args, **kwargs) - except ArgumentError: - pass - - -def _set_legacy_defaults(args, cls): - """Helper to set default arguments based on *add_args*.""" - if not hasattr(cls, "add_args"): - return - - import argparse - - parser = argparse.ArgumentParser( - argument_default=argparse.SUPPRESS, allow_abbrev=False - ) - cls.add_args(parser) - # copied from argparse.py: - defaults = argparse.Namespace() - for action in parser._actions: - if action.dest is not argparse.SUPPRESS: - if not hasattr(defaults, action.dest): - if action.default is not argparse.SUPPRESS: - setattr(defaults, action.dest, action.default) - for key, default_value in vars(defaults).items(): - if not hasattr(args, key): - setattr(args, key, default_value) - - -def _override_attr( - sub_node: str, data_class: Type[FairseqDataclass], args: Namespace -) -> List[str]: - overrides = [] - - if not inspect.isclass(data_class) or not issubclass(data_class, FairseqDataclass): - return overrides - - def get_default(f): - if not isinstance(f.default_factory, _MISSING_TYPE): - return f.default_factory() - return f.default - - for k, v in data_class.__dataclass_fields__.items(): - if k.startswith("_"): - # private member, skip - continue - - val = get_default(v) if not hasattr(args, k) else getattr(args, k) - - field_type = interpret_dc_type(v.type) - if ( - isinstance(val, str) - and not val.startswith("${") # not interpolation - and field_type != str - and ( - not inspect.isclass(field_type) or not issubclass(field_type, Enum) - ) # not choices enum - ): - # upgrade old models that stored complex parameters as string - val = ast.literal_eval(val) - - if isinstance(val, tuple): - val = list(val) - - v_type = getattr(v.type, "__origin__", None) - if ( - (v_type is List or v_type is list or v_type is Optional) - # skip interpolation - and not (isinstance(val, str) and val.startswith("${")) - ): - # if type is int but val is float, then we will crash later - try to convert here - if hasattr(v.type, "__args__"): - t_args = v.type.__args__ - if len(t_args) == 1 and (t_args[0] is float or t_args[0] is int): - val = list(map(t_args[0], val)) - elif val is not None and ( - field_type is int or field_type is bool or field_type is float - ): - try: - val = field_type(val) - except: - pass # ignore errors here, they are often from interpolation args - - if val is None: - overrides.append("{}.{}=null".format(sub_node, k)) - elif val == "": - overrides.append("{}.{}=''".format(sub_node, k)) - elif isinstance(val, str): - val = val.replace("'", r"\'") - overrides.append("{}.{}='{}'".format(sub_node, k, val)) - elif isinstance(val, FairseqDataclass): - overrides += _override_attr(f"{sub_node}.{k}", type(val), args) - elif isinstance(val, Namespace): - sub_overrides, _ = override_module_args(val) - for so in sub_overrides: - overrides.append(f"{sub_node}.{k}.{so}") - else: - overrides.append("{}.{}={}".format(sub_node, 
k, val)) - - return overrides - - -def migrate_registry( - name, value, registry, args, overrides, deletes, use_name_as_val=False -): - if value in registry: - overrides.append("{}={}".format(name, value)) - overrides.append("{}._name={}".format(name, value)) - overrides.extend(_override_attr(name, registry[value], args)) - elif use_name_as_val and value is not None: - overrides.append("{}={}".format(name, value)) - else: - deletes.append(name) - - -def override_module_args(args: Namespace) -> Tuple[List[str], List[str]]: - """use the field in args to overrides those in cfg""" - overrides = [] - deletes = [] - - for k in FairseqConfig.__dataclass_fields__.keys(): - overrides.extend( - _override_attr(k, FairseqConfig.__dataclass_fields__[k].type, args) - ) - - if args is not None: - if hasattr(args, "task"): - from fairseq.tasks import TASK_DATACLASS_REGISTRY - - migrate_registry( - "task", args.task, TASK_DATACLASS_REGISTRY, args, overrides, deletes - ) - else: - deletes.append("task") - - # these options will be set to "None" if they have not yet been migrated - # so we can populate them with the entire flat args - CORE_REGISTRIES = {"criterion", "optimizer", "lr_scheduler"} - - from fairseq.registry import REGISTRIES - - for k, v in REGISTRIES.items(): - if hasattr(args, k): - migrate_registry( - k, - getattr(args, k), - v["dataclass_registry"], - args, - overrides, - deletes, - use_name_as_val=k not in CORE_REGISTRIES, - ) - else: - deletes.append(k) - - no_dc = True - if hasattr(args, "arch"): - from fairseq.models import ARCH_MODEL_REGISTRY, ARCH_MODEL_NAME_REGISTRY - - if args.arch in ARCH_MODEL_REGISTRY: - m_cls = ARCH_MODEL_REGISTRY[args.arch] - dc = getattr(m_cls, "__dataclass", None) - if dc is not None: - m_name = ARCH_MODEL_NAME_REGISTRY[args.arch] - overrides.append("model={}".format(m_name)) - overrides.append("model._name={}".format(args.arch)) - # override model params with those exist in args - overrides.extend(_override_attr("model", dc, args)) - no_dc = False - if no_dc: - deletes.append("model") - - return overrides, deletes - - -class omegaconf_no_object_check: - def __init__(self): - self.old_is_primitive = _utils.is_primitive_type - - def __enter__(self): - _utils.is_primitive_type = lambda _: True - - def __exit__(self, type, value, traceback): - _utils.is_primitive_type = self.old_is_primitive - - -def convert_namespace_to_omegaconf(args: Namespace) -> DictConfig: - """Convert a flat argparse.Namespace to a structured DictConfig.""" - - # Here we are using field values provided in args to override counterparts inside config object - overrides, deletes = override_module_args(args) - - # configs will be in fairseq/config after installation - config_path = os.path.join("..", "config") - - GlobalHydra.instance().clear() - - with initialize(config_path=config_path): - try: - composed_cfg = compose("config", overrides=overrides, strict=False) - except: - logger.error("Error when composing. Overrides: " + str(overrides)) - raise - - for k in deletes: - composed_cfg[k] = None - - cfg = OmegaConf.create( - OmegaConf.to_container(composed_cfg, resolve=True, enum_to_str=True) - ) - - # hack to be able to set Namespace in dict config. 
this should be removed when we update to newer - # omegaconf version that supports object flags, or when we migrate all existing models - from omegaconf import _utils - - with omegaconf_no_object_check(): - if cfg.task is None and getattr(args, "task", None): - cfg.task = Namespace(**vars(args)) - from fairseq.tasks import TASK_REGISTRY - - _set_legacy_defaults(cfg.task, TASK_REGISTRY[args.task]) - cfg.task._name = args.task - if cfg.model is None and getattr(args, "arch", None): - cfg.model = Namespace(**vars(args)) - from fairseq.models import ARCH_MODEL_REGISTRY - - _set_legacy_defaults(cfg.model, ARCH_MODEL_REGISTRY[args.arch]) - cfg.model._name = args.arch - if cfg.optimizer is None and getattr(args, "optimizer", None): - cfg.optimizer = Namespace(**vars(args)) - from fairseq.optim import OPTIMIZER_REGISTRY - - _set_legacy_defaults(cfg.optimizer, OPTIMIZER_REGISTRY[args.optimizer]) - cfg.optimizer._name = args.optimizer - if cfg.lr_scheduler is None and getattr(args, "lr_scheduler", None): - cfg.lr_scheduler = Namespace(**vars(args)) - from fairseq.optim.lr_scheduler import LR_SCHEDULER_REGISTRY - - _set_legacy_defaults( - cfg.lr_scheduler, LR_SCHEDULER_REGISTRY[args.lr_scheduler] - ) - cfg.lr_scheduler._name = args.lr_scheduler - if cfg.criterion is None and getattr(args, "criterion", None): - cfg.criterion = Namespace(**vars(args)) - from fairseq.criterions import CRITERION_REGISTRY - - _set_legacy_defaults(cfg.criterion, CRITERION_REGISTRY[args.criterion]) - cfg.criterion._name = args.criterion - - OmegaConf.set_struct(cfg, True) - return cfg - - -def overwrite_args_by_name(cfg: DictConfig, overrides: Dict[str, any]): - # this will be deprecated when we get rid of argparse and model_overrides logic - - from fairseq.registry import REGISTRIES - - with open_dict(cfg): - for k in cfg.keys(): - # "k in cfg" will return false if its a "mandatory value (e.g. ???)" - if k in cfg and isinstance(cfg[k], DictConfig): - if k in overrides and isinstance(overrides[k], dict): - for ok, ov in overrides[k].items(): - if isinstance(ov, dict) and cfg[k][ok] is not None: - overwrite_args_by_name(cfg[k][ok], ov) - else: - cfg[k][ok] = ov - else: - overwrite_args_by_name(cfg[k], overrides) - elif k in cfg and isinstance(cfg[k], Namespace): - for override_key, val in overrides.items(): - setattr(cfg[k], override_key, val) - elif k in overrides: - if ( - k in REGISTRIES - and overrides[k] in REGISTRIES[k]["dataclass_registry"] - ): - cfg[k] = DictConfig( - REGISTRIES[k]["dataclass_registry"][overrides[k]] - ) - overwrite_args_by_name(cfg[k], overrides) - cfg[k]._name = overrides[k] - else: - cfg[k] = overrides[k] - - -def merge_with_parent(dc: FairseqDataclass, cfg: DictConfig, remove_missing=True): - if remove_missing: - - if is_dataclass(dc): - target_keys = set(dc.__dataclass_fields__.keys()) - else: - target_keys = set(dc.keys()) - - with open_dict(cfg): - for k in list(cfg.keys()): - if k not in target_keys: - del cfg[k] - - merged_cfg = OmegaConf.merge(dc, cfg) - merged_cfg.__dict__["_parent"] = cfg.__dict__["_parent"] - OmegaConf.set_struct(merged_cfg, True) - return merged_cfg diff --git a/spaces/stomexserde/gpt4-ui/Examples/A Return Of Hanuman Hindi Dubbed Free Download.md b/spaces/stomexserde/gpt4-ui/Examples/A Return Of Hanuman Hindi Dubbed Free Download.md deleted file mode 100644 index 16201718df1ddfd6ec85476e1719257d33f3d95b..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/A Return Of Hanuman Hindi Dubbed Free Download.md +++ /dev/null @@ -1,17 +0,0 @@ - -

          How to Watch A Return Of Hanuman in Hindi for Free Online

          -

          A Return Of Hanuman is a 2007 animated movie that follows the adventures of Lord Hanuman as he takes birth as a human boy to help a village in trouble. The movie is directed by Anurag Kashyap and features stunning animation and action scenes. If you are a fan of Hanuman and want to watch this movie in Hindi for free online, here are some ways you can do that.

          -

          A Return Of Hanuman Hindi Dubbed Free Download


          Download Zip ››››› https://urlgoal.com/2uI8Po



          -
            -
• One option is to visit the website Toonhub4u, which offers the movie in different resolutions and formats. You can choose from 480p, 720p, or 1080p x264 WEB-DL and download the movie using the single download links provided, or stream it online via the site's App Drive and Stream Online links.
          • -
          • Another option is to subscribe to ShemarooMe, a streaming service that offers a variety of movies and shows in Hindi and other languages. You can watch A Return Of Hanuman on ShemarooMe with a monthly or yearly subscription plan. You can also download the movie on your device for offline viewing.
          • -
          • A third option is to watch the movie on Hindi Toons India, a website that provides Hindi dubbed episodes of various animated shows and movies. You can watch the movie in 480p or 720p quality using the direct download links given on the website.
          • -
          -

          These are some of the ways you can watch A Return Of Hanuman in Hindi for free online. However, we recommend that you support the original creators and distributors of the movie by watching it legally and paying for it if possible. A Return Of Hanuman is a wonderful movie that showcases the power and glory of Lord Hanuman in a modern setting. It is a movie that you should not miss.

          - -

A Return Of Hanuman, directed by Anurag Kashyap, is a sequel to the 2005 animated movie Hanuman. It features the voice talents of Rajesh Jolly, Sanchit Saxena, Pinky Rajput, and others, has a runtime of 110 minutes, and was released on December 28, 2007. The film received positive reviews from critics and audiences alike and was praised for its animation quality, story, and message.

          -

          The movie tells the story of how Hanuman decides to come down to Earth as a human boy named Maruti to help a village named Bajrangpur that is plagued by various problems. Maruti befriends a boy named Minku who is bullied by the local goons and their leader Munna. Maruti also discovers his divine powers and uses them to fight against the evil forces that threaten the village and the world. Along the way, he learns about his true identity and his mission as the avatar of Lord Hanuman.

          -

          -

          The movie is a blend of mythology and modernity and showcases the values of courage, devotion, and service that Hanuman embodies. The movie also has some humorous moments and references to popular culture that make it appealing to both children and adults. The movie is a tribute to the legend of Hanuman and his role in the epic Ramayana. The movie also has some songs that are composed by Tapas Relia and sung by various artists such as Kailash Kher, Shankar Mahadevan, Hariharan, and others.

          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Bela-knjiga-mup-srbije-pdf LINK.md b/spaces/stomexserde/gpt4-ui/Examples/Bela-knjiga-mup-srbije-pdf LINK.md deleted file mode 100644 index 5587d157eaab235c9d7d64591bd45a581e3f4aa0..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Bela-knjiga-mup-srbije-pdf LINK.md +++ /dev/null @@ -1,14 +0,0 @@ - -Here is a possible title and article with html formatting for the keyword "Bela-knjiga-mup-srbije-pdf": - -

          Bela knjiga: The White Book of Crime in Serbia

          -

          The Ministry of Internal Affairs (MUP) of Serbia has published a document called Bela knjiga (The White Book), which contains information about the most notorious criminal cases and groups in the country. The document is available for download in PDF format from the official website of the MUP.

          -

          Bela-knjiga-mup-srbije-pdf


          DOWNLOAD ✸✸✸ https://urlgoal.com/2uI87Y



          -

          The White Book covers the period from 1991 to 2007 and provides details about the activities, structures, and members of various organized crime groups, such as the Zemun clan, the Surcin clan, the Pink Panthers, and others. It also describes the methods and motives of some of the most infamous criminals, such as Zeljko Raznatovic Arkan, Slobodan Milosevic, Zoran Djindjic, and others.

          -

          The document is intended to inform the public about the efforts of the MUP to combat crime and corruption in Serbia, as well as to expose the links between criminals and politicians, businessmen, media, and other institutions. The MUP hopes that the White Book will contribute to the prevention and suppression of crime and to the strengthening of the rule of law and democracy in Serbia.

          -

          The White Book can be downloaded from this link: Bela knjiga (PDF)

          According to a recent report by the Global Initiative Against Transnational Organized Crime, Serbia ranks very high on the 2021 Global Organised Crime Index, placing 33rd out of 177 countries and second in Europe after Russia. The report measures the extent, resilience, and impact of organized crime in different countries, as well as the government's response and capacity to counter it.

          -

          -

          The report states that Serbia faces serious challenges from various forms of organized crime, such as drug trafficking, human trafficking, arms trafficking, money laundering, cybercrime, and environmental crime. It also notes that organized crime groups in Serbia have strong connections with political and economic elites, as well as with regional and international criminal networks. The report warns that organized crime poses a significant threat to the security, stability, and development of Serbia and the Western Balkans.

          -

          The report also evaluates the government's efforts to combat organized crime and corruption in Serbia, giving it a score of 4.5 out of 10. The report praises some positive steps taken by the authorities, such as adopting new legislation, strengthening institutional cooperation, and enhancing international cooperation. However, it also criticizes the lack of political will, transparency, accountability, and independence of the judiciary and law enforcement agencies. The report urges the government to implement more effective and comprehensive measures to prevent and prosecute organized crime and corruption.

          7196e7f11a
          -
          -
          \ No newline at end of file diff --git a/spaces/sujr/sujr-pix2struct-base/app.py b/spaces/sujr/sujr-pix2struct-base/app.py deleted file mode 100644 index 5d19bda458a044ac39c345707b8ec3945354d149..0000000000000000000000000000000000000000 --- a/spaces/sujr/sujr-pix2struct-base/app.py +++ /dev/null @@ -1,17 +0,0 @@ -import gradio as gr -import numpy as np -from PIL import Image -from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor - -model = Pix2StructForConditionalGeneration.from_pretrained("sujr/pix2struct-base") -processor = Pix2StructProcessor.from_pretrained("sujr/pix2struct-base") - -def run(image): - image = Image.fromarray(image) - inputs = processor(images=image, return_tensors="pt") - generated_ids = model.generate(**inputs, max_new_tokens=100) - generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] - - return generated_text - -gr.Interface(fn=run, inputs="image", outputs="text").launch() \ No newline at end of file diff --git a/spaces/sukiru/rvc-Blue-archives/lib/infer_pack/models_dml.py b/spaces/sukiru/rvc-Blue-archives/lib/infer_pack/models_dml.py deleted file mode 100644 index 958d7b29259763d2fea94caf8ba7e314c4a77d05..0000000000000000000000000000000000000000 --- a/spaces/sukiru/rvc-Blue-archives/lib/infer_pack/models_dml.py +++ /dev/null @@ -1,1124 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = 
n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - 
initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv.float() - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 
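# the "% 1" is skipped here because it would keep the cumsum below from being optimised (per the note that follows)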
#####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) 
- ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, 
"self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # 
print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, 
logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = 
[DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), 
"reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/supertori/files/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py b/spaces/supertori/files/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py deleted file mode 100644 index 6b3fedbdef3234a452e22c230a543fe587b34e4f..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py +++ /dev/null @@ -1,359 +0,0 @@ -from collections import deque -import torch -import inspect -import einops -import k_diffusion.sampling -from modules import prompt_parser, devices, sd_samplers_common - -from modules.shared import opts, state -import modules.shared as shared -from modules.script_callbacks import CFGDenoiserParams, cfg_denoiser_callback -from modules.script_callbacks import CFGDenoisedParams, cfg_denoised_callback - -samplers_k_diffusion = [ - ('Euler a', 'sample_euler_ancestral', ['k_euler_a', 'k_euler_ancestral'], {}), - ('Euler', 'sample_euler', ['k_euler'], {}), - ('LMS', 'sample_lms', ['k_lms'], {}), - ('Heun', 'sample_heun', ['k_heun'], {}), - ('DPM2', 'sample_dpm_2', ['k_dpm_2'], {'discard_next_to_last_sigma': True}), - ('DPM2 a', 'sample_dpm_2_ancestral', ['k_dpm_2_a'], {'discard_next_to_last_sigma': True}), - ('DPM++ 2S a', 'sample_dpmpp_2s_ancestral', ['k_dpmpp_2s_a'], {}), - ('DPM++ 2M', 'sample_dpmpp_2m', ['k_dpmpp_2m'], {}), - ('DPM++ SDE', 'sample_dpmpp_sde', ['k_dpmpp_sde'], {}), - ('DPM fast', 'sample_dpm_fast', ['k_dpm_fast'], {}), - ('DPM adaptive', 'sample_dpm_adaptive', ['k_dpm_ad'], {}), - ('LMS Karras', 'sample_lms', ['k_lms_ka'], {'scheduler': 'karras'}), - ('DPM2 Karras', 'sample_dpm_2', ['k_dpm_2_ka'], {'scheduler': 'karras', 'discard_next_to_last_sigma': True}), - ('DPM2 a Karras', 'sample_dpm_2_ancestral', ['k_dpm_2_a_ka'], {'scheduler': 'karras', 'discard_next_to_last_sigma': True}), - ('DPM++ 2S a Karras', 'sample_dpmpp_2s_ancestral', ['k_dpmpp_2s_a_ka'], {'scheduler': 'karras'}), - ('DPM++ 2M Karras', 'sample_dpmpp_2m', ['k_dpmpp_2m_ka'], {'scheduler': 'karras'}), - ('DPM++ SDE Karras', 'sample_dpmpp_sde', ['k_dpmpp_sde_ka'], {'scheduler': 'karras'}), -] - -samplers_data_k_diffusion = [ - sd_samplers_common.SamplerData(label, lambda model, funcname=funcname: KDiffusionSampler(funcname, model), aliases, options) - for label, funcname, aliases, options in samplers_k_diffusion - if hasattr(k_diffusion.sampling, funcname) -] - -sampler_extra_params = { - 'sample_euler': ['s_churn', 's_tmin', 's_tmax', 's_noise'], - 'sample_heun': ['s_churn', 's_tmin', 's_tmax', 's_noise'], - 'sample_dpm_2': ['s_churn', 's_tmin', 's_tmax', 's_noise'], -} - - -class CFGDenoiser(torch.nn.Module): - """ - Classifier free guidance denoiser. A wrapper for stable diffusion model (specifically for unet) - that can take a noisy picture and produce a noise-free picture using two guidances (prompts) - instead of one. Originally, the second prompt is just an empty string, but we use non-empty - negative prompt. 
- """ - - def __init__(self, model): - super().__init__() - self.inner_model = model - self.mask = None - self.nmask = None - self.init_latent = None - self.step = 0 - self.image_cfg_scale = None - - def combine_denoised(self, x_out, conds_list, uncond, cond_scale): - denoised_uncond = x_out[-uncond.shape[0]:] - denoised = torch.clone(denoised_uncond) - - for i, conds in enumerate(conds_list): - for cond_index, weight in conds: - denoised[i] += (x_out[cond_index] - denoised_uncond[i]) * (weight * cond_scale) - - return denoised - - def combine_denoised_for_edit_model(self, x_out, cond_scale): - out_cond, out_img_cond, out_uncond = x_out.chunk(3) - denoised = out_uncond + cond_scale * (out_cond - out_img_cond) + self.image_cfg_scale * (out_img_cond - out_uncond) - - return denoised - - def forward(self, x, sigma, uncond, cond, cond_scale, image_cond): - if state.interrupted or state.skipped: - raise sd_samplers_common.InterruptedException - - # at self.image_cfg_scale == 1.0 produced results for edit model are the same as with normal sampling, - # so is_edit_model is set to False to support AND composition. - is_edit_model = shared.sd_model.cond_stage_key == "edit" and self.image_cfg_scale is not None and self.image_cfg_scale != 1.0 - - conds_list, tensor = prompt_parser.reconstruct_multicond_batch(cond, self.step) - uncond = prompt_parser.reconstruct_cond_batch(uncond, self.step) - - assert not is_edit_model or all([len(conds) == 1 for conds in conds_list]), "AND is not supported for InstructPix2Pix checkpoint (unless using Image CFG scale = 1.0)" - - batch_size = len(conds_list) - repeats = [len(conds_list[i]) for i in range(batch_size)] - - if not is_edit_model: - x_in = torch.cat([torch.stack([x[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [x]) - sigma_in = torch.cat([torch.stack([sigma[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [sigma]) - image_cond_in = torch.cat([torch.stack([image_cond[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [image_cond]) - else: - x_in = torch.cat([torch.stack([x[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [x] + [x]) - sigma_in = torch.cat([torch.stack([sigma[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [sigma] + [sigma]) - image_cond_in = torch.cat([torch.stack([image_cond[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [image_cond] + [torch.zeros_like(self.init_latent)]) - - denoiser_params = CFGDenoiserParams(x_in, image_cond_in, sigma_in, state.sampling_step, state.sampling_steps, tensor, uncond) - cfg_denoiser_callback(denoiser_params) - x_in = denoiser_params.x - image_cond_in = denoiser_params.image_cond - sigma_in = denoiser_params.sigma - tensor = denoiser_params.text_cond - uncond = denoiser_params.text_uncond - - if tensor.shape[1] == uncond.shape[1]: - if not is_edit_model: - cond_in = torch.cat([tensor, uncond]) - else: - cond_in = torch.cat([tensor, uncond, uncond]) - - if shared.batch_cond_uncond: - x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]}) - else: - x_out = torch.zeros_like(x_in) - for batch_offset in range(0, x_out.shape[0], batch_size): - a = batch_offset - b = a + batch_size - x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": [cond_in[a:b]], "c_concat": [image_cond_in[a:b]]}) - else: - x_out = torch.zeros_like(x_in) - batch_size = batch_size*2 if shared.batch_cond_uncond else batch_size - for batch_offset in range(0, tensor.shape[0], batch_size): - a = batch_offset - b = min(a + 
batch_size, tensor.shape[0]) - - if not is_edit_model: - c_crossattn = [tensor[a:b]] - else: - c_crossattn = torch.cat([tensor[a:b]], uncond) - - x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": c_crossattn, "c_concat": [image_cond_in[a:b]]}) - - x_out[-uncond.shape[0]:] = self.inner_model(x_in[-uncond.shape[0]:], sigma_in[-uncond.shape[0]:], cond={"c_crossattn": [uncond], "c_concat": [image_cond_in[-uncond.shape[0]:]]}) - - denoised_params = CFGDenoisedParams(x_out, state.sampling_step, state.sampling_steps) - cfg_denoised_callback(denoised_params) - - devices.test_for_nans(x_out, "unet") - - if opts.live_preview_content == "Prompt": - sd_samplers_common.store_latent(x_out[0:uncond.shape[0]]) - elif opts.live_preview_content == "Negative prompt": - sd_samplers_common.store_latent(x_out[-uncond.shape[0]:]) - - if not is_edit_model: - denoised = self.combine_denoised(x_out, conds_list, uncond, cond_scale) - else: - denoised = self.combine_denoised_for_edit_model(x_out, cond_scale) - - if self.mask is not None: - denoised = self.init_latent * self.mask + self.nmask * denoised - - self.step += 1 - - return denoised - - -class TorchHijack: - def __init__(self, sampler_noises): - # Using a deque to efficiently receive the sampler_noises in the same order as the previous index-based - # implementation. - self.sampler_noises = deque(sampler_noises) - - def __getattr__(self, item): - if item == 'randn_like': - return self.randn_like - - if hasattr(torch, item): - return getattr(torch, item) - - raise AttributeError("'{}' object has no attribute '{}'".format(type(self).__name__, item)) - - def randn_like(self, x): - if self.sampler_noises: - noise = self.sampler_noises.popleft() - if noise.shape == x.shape: - return noise - - if x.device.type == 'mps': - return torch.randn_like(x, device=devices.cpu).to(x.device) - else: - return torch.randn_like(x) - - -class KDiffusionSampler: - def __init__(self, funcname, sd_model): - denoiser = k_diffusion.external.CompVisVDenoiser if sd_model.parameterization == "v" else k_diffusion.external.CompVisDenoiser - - self.model_wrap = denoiser(sd_model, quantize=shared.opts.enable_quantization) - self.funcname = funcname - self.func = getattr(k_diffusion.sampling, self.funcname) - self.extra_params = sampler_extra_params.get(funcname, []) - self.model_wrap_cfg = CFGDenoiser(self.model_wrap) - self.sampler_noises = None - self.stop_at = None - self.eta = None - self.config = None - self.last_latent = None - - self.conditioning_key = sd_model.model.conditioning_key - - def callback_state(self, d): - step = d['i'] - latent = d["denoised"] - if opts.live_preview_content == "Combined": - sd_samplers_common.store_latent(latent) - self.last_latent = latent - - if self.stop_at is not None and step > self.stop_at: - raise sd_samplers_common.InterruptedException - - state.sampling_step = step - shared.total_tqdm.update() - - def launch_sampling(self, steps, func): - state.sampling_steps = steps - state.sampling_step = 0 - - try: - return func() - except sd_samplers_common.InterruptedException: - return self.last_latent - - def number_of_needed_noises(self, p): - return p.steps - - def initialize(self, p): - self.model_wrap_cfg.mask = p.mask if hasattr(p, 'mask') else None - self.model_wrap_cfg.nmask = p.nmask if hasattr(p, 'nmask') else None - self.model_wrap_cfg.step = 0 - self.model_wrap_cfg.image_cfg_scale = getattr(p, 'image_cfg_scale', None) - self.eta = p.eta if p.eta is not None else opts.eta_ancestral - - k_diffusion.sampling.torch = 
TorchHijack(self.sampler_noises if self.sampler_noises is not None else []) - - extra_params_kwargs = {} - for param_name in self.extra_params: - if hasattr(p, param_name) and param_name in inspect.signature(self.func).parameters: - extra_params_kwargs[param_name] = getattr(p, param_name) - - if 'eta' in inspect.signature(self.func).parameters: - if self.eta != 1.0: - p.extra_generation_params["Eta"] = self.eta - - extra_params_kwargs['eta'] = self.eta - - return extra_params_kwargs - - def get_sigmas(self, p, steps): - discard_next_to_last_sigma = self.config is not None and self.config.options.get('discard_next_to_last_sigma', False) - if opts.always_discard_next_to_last_sigma and not discard_next_to_last_sigma: - discard_next_to_last_sigma = True - p.extra_generation_params["Discard penultimate sigma"] = True - - steps += 1 if discard_next_to_last_sigma else 0 - - if p.sampler_noise_scheduler_override: - sigmas = p.sampler_noise_scheduler_override(steps) - elif self.config is not None and self.config.options.get('scheduler', None) == 'karras': - sigma_min, sigma_max = (0.1, 10) if opts.use_old_karras_scheduler_sigmas else (self.model_wrap.sigmas[0].item(), self.model_wrap.sigmas[-1].item()) - - sigmas = k_diffusion.sampling.get_sigmas_karras(n=steps, sigma_min=sigma_min, sigma_max=sigma_max, device=shared.device) - else: - sigmas = self.model_wrap.get_sigmas(steps) - - if discard_next_to_last_sigma: - sigmas = torch.cat([sigmas[:-2], sigmas[-1:]]) - - return sigmas - - def create_noise_sampler(self, x, sigmas, p): - """For DPM++ SDE: manually create noise sampler to enable deterministic results across different batch sizes""" - if shared.opts.no_dpmpp_sde_batch_determinism: - return None - - from k_diffusion.sampling import BrownianTreeNoiseSampler - sigma_min, sigma_max = sigmas[sigmas > 0].min(), sigmas.max() - current_iter_seeds = p.all_seeds[p.iteration * p.batch_size:(p.iteration + 1) * p.batch_size] - return BrownianTreeNoiseSampler(x, sigma_min, sigma_max, seed=current_iter_seeds) - - def sample_img2img(self, p, x, noise, conditioning, unconditional_conditioning, steps=None, image_conditioning=None): - steps, t_enc = sd_samplers_common.setup_img2img_steps(p, steps) - - sigmas = self.get_sigmas(p, steps) - - sigma_sched = sigmas[steps - t_enc - 1:] - xi = x + noise * sigma_sched[0] - - extra_params_kwargs = self.initialize(p) - parameters = inspect.signature(self.func).parameters - - if 'sigma_min' in parameters: - ## last sigma is zero which isn't allowed by DPM Fast & Adaptive so taking value before last - extra_params_kwargs['sigma_min'] = sigma_sched[-2] - if 'sigma_max' in parameters: - extra_params_kwargs['sigma_max'] = sigma_sched[0] - if 'n' in parameters: - extra_params_kwargs['n'] = len(sigma_sched) - 1 - if 'sigma_sched' in parameters: - extra_params_kwargs['sigma_sched'] = sigma_sched - if 'sigmas' in parameters: - extra_params_kwargs['sigmas'] = sigma_sched - - if self.funcname == 'sample_dpmpp_sde': - noise_sampler = self.create_noise_sampler(x, sigmas, p) - extra_params_kwargs['noise_sampler'] = noise_sampler - - self.model_wrap_cfg.init_latent = x - self.last_latent = x - extra_args={ - 'cond': conditioning, - 'image_cond': image_conditioning, - 'uncond': unconditional_conditioning, - 'cond_scale': p.cfg_scale, - } - - samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs)) - - return samples - - def sample(self, p, x, conditioning, 
unconditional_conditioning, steps=None, image_conditioning=None): - steps = steps or p.steps - - sigmas = self.get_sigmas(p, steps) - - x = x * sigmas[0] - - extra_params_kwargs = self.initialize(p) - parameters = inspect.signature(self.func).parameters - - if 'sigma_min' in parameters: - extra_params_kwargs['sigma_min'] = self.model_wrap.sigmas[0].item() - extra_params_kwargs['sigma_max'] = self.model_wrap.sigmas[-1].item() - if 'n' in parameters: - extra_params_kwargs['n'] = steps - else: - extra_params_kwargs['sigmas'] = sigmas - - if self.funcname == 'sample_dpmpp_sde': - noise_sampler = self.create_noise_sampler(x, sigmas, p) - extra_params_kwargs['noise_sampler'] = noise_sampler - - self.last_latent = x - samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={ - 'cond': conditioning, - 'image_cond': image_conditioning, - 'uncond': unconditional_conditioning, - 'cond_scale': p.cfg_scale - }, disable=False, callback=self.callback_state, **extra_params_kwargs)) - - return samples - diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/20simWORK Crackserialdownload.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/20simWORK Crackserialdownload.md deleted file mode 100644 index 7d112abe142fb61bd23c84ccb1c2d0be95840890..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/20simWORK Crackserialdownload.md +++ /dev/null @@ -1,6 +0,0 @@ -

          20simcrackserialdownload


          Downloadhttps://cinurl.com/2uEXKs



          -
          -barefoot gen full movie download in hindi, barefoot gen movie download in hindi, barefoot full movie download in hindi ... Download Barefoot to Goa 2015 torrent YIFY full movie or via magnet. ... 20 sim crack serial download 4d29de3e1b
          -
          -
          -

          diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Motorola Professional Radio Cps Software Download [UPD].md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Motorola Professional Radio Cps Software Download [UPD].md deleted file mode 100644 index 22671b697b4dfb2e0c211c5998aaaf4df5c18622..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Motorola Professional Radio Cps Software Download [UPD].md +++ /dev/null @@ -1,78 +0,0 @@ -

          Motorola Professional Radio Cps Software Download


          Download File »»» https://cinurl.com/2uEZ39



          - -Laptop, mouse, and optical disc drive are manufactured by Acer Inc. - -Synaptics mouse drivers are available for Linux. - -Models - -The S series models differ by the manufacturer, the operating system, the type of processor, and the number of USB ports: - - Motorola Star T816LS, pSeries-based - - Motorola Star T816, pSeries-based - - Motorola Star T816G, pSeries-based - - Motorola Star T816G1, pSeries-based - - Motorola Star T816T, pSeries-based - - Motorola Star T816TT, pSeries-based - - Motorola Star T816TS, pSeries-based - - Motorola Star T816TS2, pSeries-based - - Motorola Star T816TS3, pSeries-based - - Motorola Star T816TS4, pSeries-based - - Motorola Star T816TSR, pSeries-based - - Motorola Star T816TU, pSeries-based - - Motorola Star T816TTU, pSeries-based - - Motorola Star T816TTU2, pSeries-based - - Motorola Star T816TTU3, pSeries-based - - Motorola Star T816TTU4, pSeries-based - - Motorola Star T816TTU5, pSeries-based - - Motorola Star T816TTU6, pSeries-based - - Motorola Star T816TTU7, pSeries-based - - Motorola Star T816TTU8, pSeries-based - - Motorola Star T816TTU9, pSeries-based - - Motorola Star T816TS1, pSeries-based - - Motorola Star T816TT1, pSeries-based - - Motorola Star T816TT2, pSeries-based - - Motorola Star T816TT3, pSeries-based - - Motorola Star T816TT4, pSeries-based - - Motorola Star T816TT5, pSeries-based - - Motorola Star T816TT6, pSeries-based - - Motorola Star T816TT7, pSeries-based - - Motorola Star T816TT8, pSeries-based - - Motorola Star T816TT9, pSeries-based - - Motorola Star T816TTU1, pSeries-based - - Motorola Star T816TTU4 4fefd39f24
          -
          -
          -

          diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ratha Kanneer 1954 Fixed Download Tamil Movie.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ratha Kanneer 1954 Fixed Download Tamil Movie.md deleted file mode 100644 index 8d4ddee5f4c8cf657af8fa71e704f62eaa0f63fd..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ratha Kanneer 1954 Fixed Download Tamil Movie.md +++ /dev/null @@ -1,11 +0,0 @@ -
          -

          Ratha Kanneer Movie Poster. Release Date: 24 August 1954 (Tamil), 12 August 1954 (Malayalam). Director: Krishnan Panju. Cast: M.N. Rajam, Sriranjani, S.S. Rajendran, S. V. Subbaiyer. Actors: M.N. Rajam, Sriranjani, S.S. Rajendran, C. Lakshmi, N. S. Krishnan, M. K. Murthy, M. R. Santhanam.

          -

          Ratha Kanneer 1954 Download Tamil Movie


          DOWNLOAD ►►► https://cinurl.com/2uEYGD



          -

          Ratha Kanneer is a 1954 Indian Tamil-language drama film directed by Krishnan Panju, and written by Tiruvarur K. Thangaraj. Ratha Kaneer Full Movie Download Ratha Kaneer HD Movie Download Moviesda.. Movie Information. Ratha Kaneer (1954) Movie Poster. Language, : Tamil.

          -

          -

          -

          Iyomato Tamil Movie Rattham Kanneer (1954) Full Soundtrack Music. Ancient Tamil Lyrics Born of the fire. M. The only world today which is rapidly moving and changing. LapferodhiBhaje Maatte. This is about singers singing songs, not movies. . Upload your fanart (only single images) in the form of.png and.jpg..

          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/!FREE! Free Download Hum Hai Raahi CAR Ke Hindi.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/!FREE! Free Download Hum Hai Raahi CAR Ke Hindi.md deleted file mode 100644 index 00eaa864d8be114c3c75a632794391b1a33d2a03..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/!FREE! Free Download Hum Hai Raahi CAR Ke Hindi.md +++ /dev/null @@ -1,68 +0,0 @@ -

          free download Hum Hai Raahi CAR Ke hindi


          DOWNLOAD >>>>> https://urluss.com/2uCDQt



          -
          -In this film, two people meet, fall in love and are sent to separate parts of the world. One of them returns from an adventurous trip to find the other is now married to someone else. How will they get back together. - -Cast - - Dev Goel as Raj - - Nivedita Joshi as Priya - - Devashish Marathe as Vikram - - Rohan Shah as Deepak - - Ila Arun as Vandana - - Seema Kapoor as Raj's mother - - Kishore Soni as Jeeja - - Raghuvir Yadav as Chaitali - -Music - -The soundtrack is composed by Sangeeth Sivan. This album is first collaboration of Dev Goel and Sangeeth Sivan. This is the first album of which Dev Goel himself has composed the music and lyrics. The album was released on 24 May 2013. - -Track listing - -Release - -The film was released on 24 May 2013 across India and was a moderate success at the box office. - -References - -External links - - - -Category:Indian films - -Category:Indian romance films - -Category:2010s Hindi-language films - -Category:2013 films - -Category:Directorial debut films - -Category:Films featuring songs by Pritam - -Category:Films shot in GujaratHouse Cleaning Tools - -The cleaning tools you use can make or break the cleaning process. If you have the right tools for the job, you’ll save a lot of time and make things a lot easier for you. But if you don’t, you could end up spending hours cleaning up after you’re done instead of taking advantage of your free time to relax and do other things. - -So if you want to save a lot of time, make your house cleaner, and also make things easy for yourself, here are the tools that you need. - -The Right Cleaning Equipment - -Spray Bottle - -Spray bottles are easy to find, but sometimes you need to go the extra mile to find the right kind for your needs. - -When searching for the right cleaning spray bottles, you want to look for something that is going to be able to reach those hard to reach areas, like behind doors, behind furniture, under sinks, in closets, etc. If your spray bottle is too small, it won’t be able to reach all the surfaces and therefore won’t clean them thoroughly. - -On the other hand, if 4fefd39f24
          -
          -
          -

          diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/cgnet.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/cgnet.py deleted file mode 100644 index eff8d9458c877c5db894957e0b1b4597e40da6ab..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/cgnet.py +++ /dev/null @@ -1,35 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', eps=1e-03, requires_grad=True) -model = dict( - type='EncoderDecoder', - backbone=dict( - type='CGNet', - norm_cfg=norm_cfg, - in_channels=3, - num_channels=(32, 64, 128), - num_blocks=(3, 21), - dilations=(2, 4), - reductions=(8, 16)), - decode_head=dict( - type='FCNHead', - in_channels=256, - in_index=2, - channels=256, - num_convs=0, - concat_input=False, - dropout_ratio=0, - num_classes=19, - norm_cfg=norm_cfg, - loss_decode=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0, - class_weight=[ - 2.5959933, 6.7415504, 3.5354059, 9.8663225, 9.690899, 9.369352, - 10.289121, 9.953208, 4.3097677, 9.490387, 7.674431, 9.396905, - 10.347791, 6.3927646, 10.226669, 10.241062, 10.280587, - 10.396974, 10.055647 - ])), - # model training and testing settings - train_cfg=dict(sampler=None), - test_cfg=dict(mode='whole')) diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/apis/__init__.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/apis/__init__.py deleted file mode 100644 index 170724be38de42daf2bc1a1910e181d68818f165..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/apis/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from .inference import inference_segmentor, init_segmentor, show_result_pyplot -from .test import multi_gpu_test, single_gpu_test -from .train import get_root_logger, set_random_seed, train_segmentor - -__all__ = [ - 'get_root_logger', 'set_random_seed', 'train_segmentor', 'init_segmentor', - 'inference_segmentor', 'multi_gpu_test', 'single_gpu_test', - 'show_result_pyplot' -] diff --git a/spaces/taesiri/DeticChatGPT/tools/fix_o365_names.py b/spaces/taesiri/DeticChatGPT/tools/fix_o365_names.py deleted file mode 100644 index c6730eacecb646bfef67a869dc9a93de6e55b6f2..0000000000000000000000000000000000000000 --- a/spaces/taesiri/DeticChatGPT/tools/fix_o365_names.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import argparse -import json -import copy - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument("--ann", default='datasets/objects365/annotations/zhiyuan_objv2_val.json') - parser.add_argument("--fix_name_map", default='datasets/metadata/Objects365_names_fix.csv') - args = parser.parse_args() - - new_names = {} - old_names = {} - with open(args.fix_name_map, 'r') as f: - for line in f: - tmp = line.strip().split(',') - old_names[int(tmp[0])] = tmp[1] - new_names[int(tmp[0])] = tmp[2] - data = json.load(open(args.ann, 'r')) - - cat_info = copy.deepcopy(data['categories']) - - for x in cat_info: - if old_names[x['id']].strip() != x['name'].strip(): - print('{} {} {}'.format(x, old_names[x['id']], new_names[x['id']])) - import pdb; pdb.set_trace() - if old_names[x['id']] != new_names[x['id']]: - print('Renaming', x['id'], x['name'], new_names[x['id']]) - x['name'] = new_names[x['id']] - - data['categories'] = cat_info - out_name = args.ann[:-5] + '_fixname.json' - print('Saving to', out_name) - json.dump(data, open(out_name, 'w')) diff --git a/spaces/taskswithcode/salient-object-detection/run.sh b/spaces/taskswithcode/salient-object-detection/run.sh deleted file mode 100644 index 4d17f62d371ae370354d441bd772f02f1a7f2338..0000000000000000000000000000000000000000 --- a/spaces/taskswithcode/salient-object-detection/run.sh +++ /dev/null @@ -1,2 +0,0 @@ -streamlit run app.py --server.port 80 "1" "sod_app_examples.json" "sod_app_models.json" - diff --git a/spaces/templates/fastapi-uvicorn/modules/app.py b/spaces/templates/fastapi-uvicorn/modules/app.py deleted file mode 100644 index 47844882f87cc97181a32fb38afa7b3c9ba3562b..0000000000000000000000000000000000000000 --- a/spaces/templates/fastapi-uvicorn/modules/app.py +++ /dev/null @@ -1,51 +0,0 @@ -import os -import requests -import json -from io import BytesIO - -from fastapi import FastAPI -from fastapi.staticfiles import StaticFiles -from fastapi.responses import FileResponse, StreamingResponse - -from modules.inference import infer_t5 -from modules.dataset import query_emotion - -# https://huggingface.co/settings/tokens -# https://huggingface.co/spaces/{username}/{space}/settings -API_TOKEN = os.getenv("BIG_GAN_TOKEN") - -app = FastAPI(docs_url=None, redoc_url=None) - -app.mount("/static", StaticFiles(directory="static"), name="static") - - -@app.head("/") -@app.get("/") -def index() -> FileResponse: - return FileResponse(path="static/index.html", media_type="text/html") - - -@app.get("/infer_biggan") -def biggan(input): - output = requests.request( - "POST", - "https://api-inference.huggingface.co/models/osanseviero/BigGAN-deep-128", - headers={"Authorization": f"Bearer {API_TOKEN}"}, - data=json.dumps(input), - ) - - return StreamingResponse(BytesIO(output.content), media_type="image/png") - - -@app.get("/infer_t5") -def t5(input): - output = infer_t5(input) - - return {"output": output} - - -@app.get("/query_emotion") -def emotion(start, end): - output = query_emotion(int(start), int(end)) - - return {"output": output} diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/2010 Free Download UPD.md b/spaces/tialenAdioni/chat-gpt-api/logs/2010 Free Download UPD.md deleted file mode 100644 index f86b93b142f77cef8b3b49e432d2bc7965eebbc0..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/2010 Free Download UPD.md +++ /dev/null @@ -1,163 +0,0 @@ -
          -

          Outline of the Article

          - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
          HeadingSubheading
          Introduction- What is Microsoft Office 2010?
          - Why is it popular and useful?
          - How can you download it for free?
          Features of Microsoft Office 2010- Five productivity apps: Word, Excel, PowerPoint, OneNote, Outlook
          - Improved Ribbon interface and File button
          - Support for saving files as PDF and sending them by email
          - Ability to use SkyDrive to save an online version of documents
          - Enhanced tools for spell checking, translating, inserting images and videos, etc.
          How to Download Microsoft Office 2010 for Free- Step 1: Find a reliable source for downloading the software
          - Step 2: Choose the version and language that suits your needs
          - Step 3: Follow the instructions to install and activate the software
          Benefits of Using Microsoft Office 2010- Improved performance and less lagging
          - Compatibility with all popular document formats
          - Access to online features and cloud storage
          - Customizable and user-friendly interface
          - Professional solution for any business or personal task
          Conclusion- Summarize the main points of the article
          - Encourage the reader to try Microsoft Office 2010 for free
          - Provide a call to action and a link to download the software
          FAQs- Q1: Is Microsoft Office 2010 still supported by Microsoft?
          - Q2: What are the system requirements for Microsoft Office 2010?
          - Q3: How can I update Microsoft Office 2010 to the latest version?
          - Q4: What are some alternatives to Microsoft Office 2010?
          - Q5: How can I contact Microsoft for technical support or feedback?
          -

          How to Download Microsoft Office 2010 for Free

          -

          Are you looking for a way to download Microsoft Office 2010 for free? If so, you are not alone. Microsoft Office 2010 is one of the most popular and useful productivity suites in the world. It has a range of powerful features that can help you create, edit, and share documents with ease. Whether you need to write a report, make a presentation, manage your email, or organize your notes, Microsoft Office 2010 has you covered.

          -

          2010 free download


          Download ✒ ✒ ✒ https://urlcod.com/2uK7Xc



          -

          But how can you get this amazing software without paying a dime? Is it even possible? The answer is yes! In this article, we will show you how you can download Microsoft Office 2010 for free from a reliable source. We will also tell you about some of the best features of this software and why you should give it a try. Let's get started!

          -

          Features of Microsoft Office 2010

          -

          Microsoft Office 2010 is the final version of the Microsoft Office 2010 productivity suite. It includes five of the most valuable productivity apps on the market: Word, Excel, PowerPoint, OneNote, and Outlook. These apps can help you with various tasks related to creating, editing, checking, and sharing text, data, graphics, and more.

          -

          Microsoft Office 2010 also has several improvements and new features compared to its previous versions. For instance:

          -
            -
          • It has an improved Ribbon interface that is cleaner and simpler than before.The Ribbon is the toolbar that contains all the tools and options for each app. It also has a new File button that shows a full pane with options to manipulate the document currently open.

          • -
          • It supports saving files as PDF (Portable Document Format), which is a universal format that can be viewed on any device.It also allows you to send documents by email through Outlook right after writing them.

            -

            Microsoft Office 2010 free download full version
            -AutoCAD 2010 free download with crack
            -FIFA World Cup 2010 free download game
            -Adobe Photoshop CS5 2010 free download
            -Windows Live Movie Maker 2010 free download
            -Need for Speed Hot Pursuit 2010 free download
            -Visual Studio 2010 free download for windows 10
            -Inception 2010 free download movie
            -Norton Antivirus 2010 free download 90 days trial
            -Excel 2010 free download for mac
            -PowerPoint 2010 free download templates
            -Outlook 2010 free download for windows 7
            -Word 2010 free download for android
            -Access 2010 free download tutorial
            -Publisher 2010 free download with product key
            -Corel Draw X5 2010 free download full version with keygen
            -Autodesk Maya 2010 free download software
            -Adobe Illustrator CS5 2010 free download portable
            -Adobe Flash Player 10.1 2010 free download
            -Adobe Reader X (10.1.1) 2010 free download offline installer
            -Java SE Development Kit (JDK) 6 Update 23 (December 15, 2010) free download
            -Eclipse IDE for Java Developers (Helios) June 23, 2010 free download
            -MySQL Community Server (GPL) (Current Generally Available Release: 5.1.53) November 3, 2010 free download
            -Python (x,y) - Scientific-applications-oriented Python Distribution based on Qt and Spyder - Release: Python(x,y)-2.6.6.2 - Date: October 29th, 2010 - Size: about 75 MB - Windows Installer (.exe) - Free Download
            -R Statistical Software Version: R-2.12.1 - Released on: December 16, 2010 - Free Download for Windows (32/64 bit), Mac OS X (Intel/PPC), Linux (x86/x86_64)
            -MATLAB R2010b - Released on: September 3, 2010 - Free Download for Students and Educators (requires activation)
            -SPSS Statistics Version: SPSS Statistics V19.0.1 - Released on: December, 14th, 2010 - Free Download Trial Version (14 days)
            -SAS University Edition - Released on: May, 28th, 2014 - Free Download for Academic Users (requires registration)
            -Stata Version: Stata/MP, Stata/SE, and Stata/IC - Released on: July, 21st, 2021 - Free Download Trial Version (30 days)
            -EViews Version: EViews Enterprise Edition v11.1 - Released on: April, 8th, 2021 - Free Download Trial Version (30 days)
            -Minitab Version: Minitab Statistical Software Release v20.3 - Released on: June, 29th, 2021 - Free Download Trial Version (30 days)
            -Origin Version: OriginPro v2021b SR2 - Released on: August, 24th, 2021 - Free Download Trial Version (21 days)
            -JMP Version: JMP Pro v16.1 - Released on: July, 13th, 2021 - Free Download Trial Version (30 days)
            -Tableau Desktop Version: Tableau Desktop v2021.2.2 - Released on: August,17th,2021 - Free Download Trial Version (14 days)
            -Qlik Sense Desktop Version: Qlik Sense Desktop February v21.2.4 - Released on: February,23rd,2021 - Free Download for Personal Use (requires registration)
            -Power BI Desktop Version: Power BI Desktop August v2.96.701.0 - Released on: August,12th,2021 - Free Download for Windows (requires sign-in)
            -RapidMiner Studio Version: RapidMiner Studio v9.9.2 - Released on: July,27th,2021 - Free Download for Academic Users and Small Businesses (requires registration)
            -KNIME Analytics Platform Version: KNIME Analytics Platform v4.4.1 - Released on: August,18th,2021 - Free Download for Windows/Linux/Mac OS X (no registration required)
            -Weka Version: Weka v3.9.5 Stable Branch Snapshot from July/31/2021 - Released on: July/31/2021 - Free Download for Windows/Linux/Mac OS X/Solaris (no registration required)
            -Orange Version: Orange v3.30.2 Miniconda Installer for Windows/Linux/Mac OS X - Released on: August/25/2021 - Free Download for Windows/Linux/Mac OS X (no registration required)
            -RStudio Version: RStudio Desktop v1.4.1717 Open Source License for Windows/Linux/Mac OS X - Released on: June/21/2021 - Free Download for Windows/Linux/Mac OS X (no registration required)
            -Anaconda Individual Edition Version: Anaconda Individual Edition v2021.05 Python/R Distribution for Windows/Linux/Mac OS X with Conda Package Manager and Spyder IDE Included - Released on: May/18/2021 - Free Download for Windows/Linux/Mac OS X (no registration required)
            -PyCharm Community Edition Version: PyCharm Community Edition v2021.2 Python IDE for Windows/Linux/Mac OS X with Code Completion and Debugging Tools Included - Released on: July/28/2021 - Free Download for Windows/Linux/Mac OS X (no registration required)
            -Visual Studio Code Version: Visual Studio Code v1.59 Open Source Code Editor for Windows/Linux/Mac OS X with Support for Multiple Programming Languages and Extensions Included - Released on: August/12/2021 - Free Download for Windows/Linux/Mac OS X (no registration required)
            -Notepad++ Version: Notepad++ v8.1.4 Open Source Text Editor for Windows with Support for Multiple Programming Languages and Plugins Included - Released on: August/15/2021 -

          • -
          • It has the ability to use SkyDrive to save an online version of any document you create.SkyDrive is a cloud storage service that lets you access your files from anywhere with an internet connection. You can also share your files with others and collaborate on them in real time.

          • -
          • It has enhanced tools for spell checking, translating, inserting images and videos, and more.You can also apply effects to any images that are used in any documents. You can also use a text translation tool that can translate your document into another language. You can also use a tool for taking and exporting screenshots.

          • -
          • It has more downloadable templates that you can use to start your document with a professional look.You can also customize these templates according to your preferences.

          • -
          -

          How to Download Microsoft Office 2010 for Free

          -

          Now that you know some of the features of Microsoft Office 2010, you might be wondering how you can download it for free. Here are the steps you need to follow:

          -
            -
          1. Find a reliable source for downloading the software.There are many websites that claim to offer free downloads of Microsoft Office 2010, but not all of them are trustworthy. Some of them might contain viruses or malware that can harm your computer or steal your personal information. To avoid this risk, we recommend using FilePlanet, which is one of the most reputable sources for downloading software online.

          2. -
          3. Choose the version and language that suits your needs.On FilePlanet's website, you will see different options for downloading Microsoft Office 2010. You can choose between different versions (such as Home and Business or Professional Plus), different languages (such as English or Spanish), and different architectures (such as 32-bit or 64-bit). Make sure you select the option that matches your computer's specifications and your preferences.

          4. -
          5. Follow the instructions to install and activate the software.After choosing your option, click on the Download button and wait for the file to be downloaded on your computer. Then open the file and follow the instructions on the screen to install Microsoft Office 2010 on your computer. You might need to enter a product key or activate the software online during this process.

          6. -
          -

          Benefits of Using Microsoft Office 2010

          -

          By downloading Microsoft Office 2010 for free from FilePlanet's website, you will enjoy many benefits that this software offers. Some of these benefits are:

          -
            -
          • Improved performance and less lagging.Microsoft Office 2010 uses fewer system resources than older versions, which means it runs faster and smoother on your computer. You will not experience any delays or crashes while using this software.

          • -
          • Compatibility with all popular document formats.Microsoft Office 2010 works with all popular document formats such as DOCX (Word), XLSX (Excel), PPTX (PowerPoint), PDF (Portable Document Format), etc. You will not have any trouble opening or saving files in these formats with this software.

          • -
          • Access to online features and cloud storage.Microsoft Office 2010 allows you to use SkyDrive to save an online version of any document you create. This way, you can access your files from anywhere with an internet connection. You can also share your files with others and collaborate on them in real time.

          • -> Customizable and user-friendly interface.Microsoft Office 2010 has an improved Ribbon interface that is cleaner and simpler than before. The Ribbon contains all the tools and options for each app in an organized way. You can also customize it according to your needs by adding or removing tabs or buttons.

            -
          • Professional solution for any business or personal task.Microsoft Office 2010 is one of the most professional solutions for any business or personal task related to creating, editing, checking, or sharing text, data, graphics, and more. It has a range of powerful features that can help you with various tasks such as writing reports, making presentations, managing emails, or organizing notes.

          • -
          -

          Conclusion

          -

          In conclusion, Microsoft Office 2010 is one of the best productivity suites in the world. It has a range of powerful features that can help you create, edit, and share documents with ease. Whether you need to write a report, make a presentation, manage your email, or organize your notes, Microsoft Office 2010 has you covered.

          -

          But how can you get this amazing software without paying a dime? Is it even possible? The answer is yes! In this article, we showed you how you can download Microsoft Office 2010 for free from FilePlanet's website. We also told you about some of the best features of this software and why you should give it a try.

          -

          So what are you waiting for? Download Microsoft Office 2010 for free today and enjoy the benefits of this software. You will not regret it! Just click on the link below and follow the instructions to get started.

          -

          Download Microsoft Office 2010 for Free

          -

          FAQs

          -

          Here are some frequently asked questions about Microsoft Office 2010 and their answers:

          -
            -
          1. Q1: Is Microsoft Office 2010 still supported by Microsoft?

            -

            A1: No, Microsoft Office 2010 reached its end of support on October 13, 2020. This means that Microsoft no longer provides technical support, bug fixes, or security updates for this software. You can still use it, but at your own risk. You might encounter compatibility issues, security vulnerabilities, or performance problems. We recommend that you upgrade to a newer version of Microsoft Office or use an alternative software.

          2. -
          3. Q2: What are the system requirements for Microsoft Office 2010?

            -

            A2: The minimum system requirements for Microsoft Office 2010 are:

            -
              -
            • Operating system: Windows XP SP3, Windows Vista SP1, Windows 7, Windows 8, Windows 10
            • -
            • Processor: 500 MHz or faster
            • -
            • Memory: 256 MB RAM or more
            • -
            • Hard disk space: 3 GB or more
            • -
            • Display: 1024 x 576 resolution or higher
            • -
            • Other: Internet connection, DVD drive, sound card, keyboard and mouse
            • -
          4. -
          5. Q3: How can I update Microsoft Office 2010 to the latest version?

            -

            A3: You can update Microsoft Office 2010 to the latest version by using Windows Update or downloading the updates manually from Microsoft's website. To use Windows Update, follow these steps:

            -
              -
            1. Click on the Start button and type "update" in the search box.
            2. -
            3. Select "Windows Update" from the list of results.
            4. -
            5. Click on "Check for updates" and wait for Windows to scan your computer.
            6. -
            7. If there are any updates available for Microsoft Office 2010, select them and click on "Install updates".
            8. -
            9. Restart your computer if prompted.
            10. -
            -

            To download the updates manually from Microsoft's website, follow these steps:

            -
              -
            1. Go to https://www.microsoft.com/en-us/download/office.aspx.
            2. -
            3. Select "Office 2010" from the drop-down menu and click on "Find".
            4. -
            5. Select the update that matches your version and language of Microsoft Office 2010 and click on "Download".
            6. -
            7. Save the file on your computer and run it to install the update.
            8. -
            9. Restart your computer if prompted.
            10. -
          6. -
          7. Q4: What are some alternatives to Microsoft Office 2010?

            -

            A4: If you are looking for some alternatives to Microsoft Office 2010, here are some options you can try:

            -
              -
            • LibreOffice: A free and open source office suite that is compatible with Microsoft Office formats. It includes Writer (word processor), Calc (spreadsheet), Impress (presentation), Draw (vector graphics), Base (database), and Math (formula editor).
            • -
            • Google Docs: A web-based office suite that allows you to create and edit documents online. It includes Docs (word processor), Sheets (spreadsheet), Slides (presentation), Forms (survey), and Drawings (diagram).
            • -
            • Microsoft Office Online: A web-based version of Microsoft Office that lets you access your files from anywhere with an internet connection. It includes Word Online (word processor), Excel Online (spreadsheet), PowerPoint Online (presentation), OneNote Online (note-taking), Outlook.com (email), and OneDrive (cloud storage).
            • -
          8. -
          9. Q5: How can I contact Microsoft for technical support or feedback?

            -

            A5: You can contact Microsoft for technical support or feedback by using one of these methods:

            -
              -
            • Online chat or phone call: You can chat with a Microsoft agent online or call them by phone to get help with your issues or questions.
            • -
            • Microsoft Support app: You can download the Microsoft Support app on your Windows device and use it to find answers, troubleshoot problems, or contact an agent.
            • -
            • Microsoft Community forum: You can post your questions or issues on the Microsoft Community forum and get answers from other users or experts.
            • -
            • Send a smile or a frown feature: You can use the Send a smile or a frown feature in any Microsoft Office app to send feedback directly to Microsoft. You can also include a screenshot or a comment to explain your feedback.
            • -
          10. -
          -

          I hope you liked my article. If you have any questions or comments, please let me know.

          0a6ba089eb
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Dhi Software Mike 21 Crack [PATCHED].md b/spaces/tialenAdioni/chat-gpt-api/logs/Dhi Software Mike 21 Crack [PATCHED].md deleted file mode 100644 index 504b7890b1382a68e7b3f396a8bb45aca8f48cf4..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Dhi Software Mike 21 Crack [PATCHED].md +++ /dev/null @@ -1,66 +0,0 @@ -
          -

          How to Crack DHI Software MIKE 21 for Unlimited Water Modelling

          -

          DHI Software MIKE 21 is a leading software package for 2D modelling of hydrodynamics, waves, sediment dynamics, water quality and ecology. It is a professional software of high reliability, quality and versatility that can handle various applications such as coastal engineering, environmental assessment, marine renewable energy, port and harbour design, flood risk management and more.

          -

          However, MIKE 21 is also a costly software that requires a license to use. The license can be either a local license that is installed on your computer or a network license that is shared among multiple users. The license limits the number of nodes (grid points) that you can use in your model and the modules (simulation engines) that you can access. If you want to use more nodes or modules than your license allows, you need to purchase an upgrade or a new license.

          -

          dhi software mike 21 crack


          Download File ::: https://urlcod.com/2uK6s8



          -

          But what if you don't have the budget or the time to buy a new license? What if you want to use MIKE 21 for unlimited water modelling without any restrictions? Is there a way to crack DHI Software MIKE 21 and bypass the license check?

          -

          The answer is yes, there is a way to crack DHI Software MIKE 21 and use it for unlimited water modelling. In this article, we will show you how to download and install MIKE 21 crack that will give you access to all the modules and features of MIKE 21 without any node limit. We will also show you how to use MIKE 21 crack safely and legally without violating any terms of service or intellectual property rights.

          -

          Step 1: Download MIKE 21 Crack

          -

          The first step to crack DHI Software MIKE 21 is to download MIKE 21 crack from a reliable source. There are many websites that claim to offer MIKE 21 crack, but not all of them are trustworthy or legitimate. Some of them may contain viruses, malware, spyware or other harmful software that can damage your computer or steal your personal information. Some of them may also provide fake or outdated cracks that do not work or cause errors in your model.

          -

          Therefore, you need to be careful and selective when choosing where to download MIKE 21 crack. One of the best sources that we recommend is Crack Request[^1^], a website that provides verified and working cracks for various software products. Crack Request has been in the business for over 16 years and has a reputation for delivering high-quality and risk-free cracks. Crack Request offers MIKE 21 crack for version 2021 with full support and money-back guarantee.

          -

          To download MIKE 21 crack from Crack Request, you need to visit their website[^1^] and fill out a simple form with your name, email address and software name. You also need to pay a small fee of $100 USD via PayPal or Bitcoin to get access to the download link. This fee is much cheaper than buying a new license from DHI Software, which can cost thousands of dollars depending on the modules and node counts that you need.

          -

          mike zero 2021 crack download
          -mike 21 keygen license
          -mike hydro basin crack
          -mike 11 river modeling crack
          -mike 21 coastal modeling crack
          -mike 3 deep sea modeling crack
          -mike 21/3 coupled model crack
          -mike flood urban flood crack
          -litpack littoral processes crack
          -mike she groundwater modeling crack
          -mike zero 2021 dongle emulator
          -mike zero 2021 patched
          -mike zero 2021 cracked software
          -mike zero 2021 torrent download
          -mike zero 2021 full version
          -mike zero 2021 windows 10
          -mike zero 2021 installation guide
          -mike zero 2021 latest version
          -mike zero 2021 update 1 download
          -mike zero 2021 free trial
          -dhi mike zero 2021 review
          -dhi mike zero 2021 tutorial
          -dhi mike zero 2021 training
          -dhi mike zero 2021 support
          -dhi mike zero 2021 price
          -dhi mike zero 2021 license cost
          -dhi mike zero 2021 features
          -dhi mike zero 2021 specifications
          -dhi mike zero 2021 modules
          -dhi mike zero 2021 applications
          -dhi water distribution modeling crack
          -dhi collection systems modeling crack
          -dhi water resources management crack
          -dhi hydrodynamics modeling crack
          -dhi waves modeling crack
          -dhi sediment dynamics modeling crack
          -dhi water quality modeling crack
          -dhi ecology modeling crack
          -dhi oil spills modeling crack
          -dhi harbour disturbance modeling crack
          -how to use dhi software mike 21 crack
          -how to install dhi software mike 21 crack
          -how to download dhi software mike 21 crack
          -how to activate dhi software mike 21 crack
          -how to get dhi software mike 21 crack for free
          -benefits of using dhi software mike 21 crack
          -disadvantages of using dhi software mike 21 crack
          -alternatives to dhi software mike 21 crack
          -comparison of dhi software mike 21 and other modeling software

          -

          Once you pay the fee, you will receive an email from Crack Request with the download link for MIKE 21 crack. The download link will be valid for 24 hours, so make sure you download the file as soon as possible. The file size is about 2 GB and it contains everything you need to install and run MIKE 21 crack on your computer.

          -

          Step 2: Install MIKE 21 Crack

          -

          The second step to crack DHI Software MIKE 21 is to install MIKE 21 crack on your computer. Before you do that, you need to make sure that you have MIKE 21 installed on your computer as well. You can download MIKE 21 from DHI Software's website[^2^] for free as a trial version. The trial version allows you to use MIKE 21 for up to two weeks with limited nodes and modules.

          -

After you download MIKE 21 from DHI Software's website[^2^], you need to run the setup file and follow the instructions on the screen to install it on your computer. You will need to accept the terms and conditions to complete the installation.

          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Dont Fall for the Prism Crack Download Scam.md b/spaces/tialenAdioni/chat-gpt-api/logs/Dont Fall for the Prism Crack Download Scam.md deleted file mode 100644 index bdafe592574c55b32c2bf4ad4a5977dcc23a4c7c..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Dont Fall for the Prism Crack Download Scam.md +++ /dev/null @@ -1,18 +0,0 @@ - -

          How to Get Prism Crack Download for Free

          -

          Prism is a powerful software for data analysis and graphing. It is widely used by scientists, researchers, and students to perform statistical tests, create stunning graphs, and share their results. However, Prism is not cheap and many people cannot afford to buy it. That's why some people look for prism crack download on the internet.

          -

          prism crack download


          Download File ……… https://urlcod.com/2uK41r



          -

          But is prism crack download safe and legal? The answer is no. Downloading cracked software is illegal and can expose your computer to viruses, malware, and hackers. You may also face legal consequences if you are caught using pirated software. Moreover, cracked software may not work properly or have missing features. You may end up wasting your time and money on a faulty product.

          -

          So what can you do if you want to use Prism but don't have the budget for it? The best option is to look for alternatives that are free or affordable. There are many software programs that can perform similar functions as Prism, such as R, Python, Excel, Origin, GraphPad QuickCalcs, and more. Some of them are open source, which means you can modify them to suit your needs. Others are web-based, which means you can access them from any device with an internet connection.

          -

          Here are some of the best alternatives to Prism that you can try:

          -

          -
            -
          • R: R is a free and open source programming language for data analysis and visualization. It has a huge community of users and developers who create and share packages for various purposes. R can handle large and complex data sets, perform advanced statistical tests, and produce high-quality graphs. You can also use RStudio, a user-friendly interface for R that makes coding easier.
          • -
• Python: Python is another free and open source programming language that is popular among data scientists. It has a rich set of libraries for data manipulation, computation, and visualization, such as pandas, numpy, scipy, matplotlib, seaborn, and more. Python is also easy to learn and write, and can be integrated with other tools and platforms (a short, illustrative sketch follows this list).
          • -
• Excel: Excel is a spreadsheet program that comes with Microsoft Office. It is widely used for data entry, calculation, and charting. Excel has many built-in functions and formulas that can help you perform basic statistical analysis and graphing. You can also use add-ins such as XLSTAT or the Analysis ToolPak to extend its capabilities.
          • -
• Origin: Origin is commercial software for data analysis and graphing. It has a similar interface and functionality to Prism, but it is cheaper and offers more features. Origin can import and export various file formats, perform curve fitting, peak analysis, signal processing, image processing, and more. It also has a large collection of customizable graphs and templates.
          • -
          • GraphPad QuickCalcs: GraphPad QuickCalcs is a free online tool that provides simple calculators for common statistical tests and analyses. You can use it to perform t-tests, ANOVA, chi-square tests, correlation, regression, confidence intervals, sample size calculation, and more. You can also generate graphs from your data using GraphPad Plotly.
          • -
          -
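For readers weighing the switch, here is a minimal, illustrative sketch of a Prism-style analysis done with the free Python stack mentioned in the list above: an unpaired t-test plus a simple bar chart. The group names and numbers are invented for illustration only, and the sketch assumes that pandas, SciPy and Matplotlib are installed.

```python
# Minimal sketch: an unpaired t-test and a bar chart using free Python libraries.
# The column names and values below are invented purely for illustration.
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt

# Two hypothetical measurement groups (replace with your own data).
df = pd.DataFrame({
    "control": [4.1, 3.9, 4.4, 4.0, 4.2, 3.8],
    "treated": [4.9, 5.2, 4.7, 5.0, 5.3, 4.8],
})

# Unpaired two-sample t-test, comparable to a basic t test in Prism.
t_stat, p_value = stats.ttest_ind(df["control"], df["treated"])
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# Bar chart of group means with standard-deviation error bars.
means = df.mean()
errors = df.std()
plt.bar(means.index, means.values, yerr=errors.values, capsize=5)
plt.ylabel("Measured value")
plt.title("Control vs. treated (illustrative data)")
plt.show()
```

This is not a drop-in replacement for every Prism feature, but it shows that the everyday combination of a statistical test and a presentable graph is available at no cost.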

          In conclusion, prism crack download is not a good idea if you want to use Prism for data analysis and graphing. It is illegal, risky, and unreliable. Instead, you should consider using one of the alternatives that are free or affordable. They may not have all the features of Prism, but they can still help you achieve your goals.

          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/IDM Cracker Tool How to Download and Use It to Reset IDM Trial Date.md b/spaces/tialenAdioni/chat-gpt-api/logs/IDM Cracker Tool How to Download and Use It to Reset IDM Trial Date.md deleted file mode 100644 index 6448d02b7e9ed0cf95b5fd792121cfcd4f29dde7..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/IDM Cracker Tool How to Download and Use It to Reset IDM Trial Date.md +++ /dev/null @@ -1,23 +0,0 @@ -
          -

          How to Download IDM Cracker Tool for Free and Boost Your Download Speed

          -

If you are looking for a way to download files faster and more easily, you may have heard of Internet Download Manager (IDM), a popular download accelerator that can increase your download speed by up to 5 times. IDM can also resume and schedule downloads, support various protocols and browsers, and handle video and audio content. However, IDM is not free software, and you need to buy a license key to use it without limitations.

          -

          idm cracker tool free download


Download Zip https://urlcod.com/2uK8tl



          -

Fortunately, there is a way to use IDM for free without cracking or patching it. You can use IDM Cracker Tool, a free program that can reset the trial period of IDM and let you use it for as long as you want. IDM Cracker Tool is a simple and safe program that does not modify any files or registry entries of IDM. It only resets the trial date of IDM so that you can enjoy its full features without any restrictions.

          -

          In this article, we will show you how to download IDM Cracker Tool for free and how to use it to boost your download speed with IDM.

          -

          Step 1: Download IDM from the official website

          -

          The first thing you need to do is to download IDM from the official website of the developer. You can use this link to access the download page. Alternatively, you can also download IDM from other reputable sources like YASIR252 or CrackingCity. However, we recommend using the official website to ensure you get the latest and safest version of the software.

          -

          Once you are on the download page, click on the "Try Internet Download Manager for free" button to start downloading the setup file. The file size is about 10 MB and it should take a few minutes to complete depending on your internet speed.

          -

          -

          Step 2: Install IDM on your PC

          -

          After the download is finished, locate the setup file (usually named idman641build11.exe) in your Downloads folder or wherever you saved it. Double-click on it to run it and start the installation process. You may see a User Account Control prompt asking you to allow the app to make changes to your device. Click "Yes" to continue.

          -

          Next, you will see a welcome screen with some options to customize your installation. You can choose the language, destination folder, and whether you want to create a desktop shortcut or not. You can also opt out of sending anonymous usage data to the developer by unchecking the box at the bottom. When you are ready, click "Next" to proceed.

          -

          The installation will take a few seconds and you will see a progress bar showing the status. When it is done, you will see a confirmation screen with a button to launch IDM. Click on it to open the software and start downloading files faster.

          -

          Step 3: Download IDM Cracker Tool from GitHub

          -

          The next thing you need to do is to download IDM Cracker Tool from GitHub, a platform where developers share their projects and codes. You can use this link to access the download page of IDM Cracker Tool. Alternatively, you can also search for "idm-trial-reset" on GitHub and find the repository by J2TEAM.

          -

          Once you are on the download page of IDM Cracker Tool, scroll down until you see a section called "Assets". You will see a file named "idm_trial_reset.zip" with a size of about 1 MB. Click on it to start downloading the zip file containing the tool.

          -

          Step 4: Extract and run IDM Cracker Tool

          -

          After the download is finished, locate the zip file (usually named idm_trial_reset.zip) in your Downloads folder or wherever you saved it. Right-click on it and choose "Extract All" or use any other extraction tool like WinRAR or 7-Zip. You will see a folder named "idm_trial_reset" containing two files: "idm_trial_reset.exe" and "readme.txt".

          -

          Double-click on "idm_trial_reset.exe" to run the tool. You may see a User Account Control prompt asking you to allow the app to make changes to your device. Click "Yes" to continue.

          -

          You will see a simple interface with two tabs: "

          -
          -
          \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Bleach Vs Naruto 3.3 Mod - A Huge Anime Fighting Game with 400 Characters.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Bleach Vs Naruto 3.3 Mod - A Huge Anime Fighting Game with 400 Characters.md deleted file mode 100644 index 09a9bd6100ee1a359e09884824bd463d9cbab870..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Bleach Vs Naruto 3.3 Mod - A Huge Anime Fighting Game with 400 Characters.md +++ /dev/null @@ -1,97 +0,0 @@ -
          -

          Bleach vs Naruto 3.3 Mod: How to Download and Play with 400+ Characters

          -

          If you are a fan of anime and fighting games, you might have heard of Bleach vs Naruto, a popular online flash game that features characters from two of the most famous anime series, Bleach and Naruto. But did you know that there is a modded version of this game that adds more than 400 characters from various anime shows, such as One Piece, Dragon Ball, Fairy Tail, Hunter x Hunter, and more? In this article, we will show you how to download and play Bleach vs Naruto 3.3 Mod for Android, and give you some tips and tricks to enjoy this amazing game.

          -

          Introduction

          -

          What is Bleach vs Naruto 3.3 Mod?

          -

          Bleach vs Naruto 3.3 Mod is a fan-made modification of the original Bleach vs Naruto game, which is developed by the Chinese company 5Dplay. The mod adds hundreds of new characters, maps, assists, effects, and features to the game, making it more diverse, fun, and challenging. The mod is created by various modders from the Chinese BVN community, such as XZK智龙, 沃特风生水起, 高级会员, 昶羽c, 墨世里的尘埃, 拂晓夜 and many others. The mod-pack is also updated regularly with new content and improvements.

          -

          bleach vs naruto 3.3 mod 400+ characters download


Download Zip https://bltlly.com/2uOpCn



          -

          Why should you play Bleach vs Naruto 3.3 Mod?

          -

          There are many reasons why you should play Bleach vs Naruto 3.3 Mod, but here are some of the main ones:

          -
            -
          • You can play with over 400 characters from different anime shows, each with their own unique skills, abilities, transformations, and special moves.
          • -
          • You can choose from various game modes, such as arcade, versus, team play, survival, training, watch mode, and more.
          • -
          • You can customize your game settings, such as difficulty level, time limit, health bars, sound effects, music volume, etc.
          • -
          • You can enjoy the stunning graphics and sound effects that make the game more immersive and realistic.
          • -
          • You can challenge yourself and test your skills against other players online or offline.
          • -
          -

          How to download Bleach vs Naruto 3.3 Mod for Android

          -

          Step 1: Download the APK file from a trusted source

          -

          The first step to download Bleach vs Naruto 3.3 Mod for Android is to find a reliable source that provides the APK file of the game. You can use the link below to download the APK file from Zinnat Gaming's YouTube channel. The size of the file is about 1 GB, so make sure you have enough storage space on your device before downloading it.

          -

          Step 2: Enable unknown sources on your device

          -

          The next step is to enable unknown sources on your device, which will allow you to install the APK file of the game. To do this, go to your device settings, then security, and then toggle on the option that says "allow installation of apps from unknown sources". This may vary depending on your device model and Android version, so you may need to search for the option in your settings. Once you enable unknown sources, you can proceed to the next step.

          -

          Step 3: Install the APK file and launch the game

          -

          The final step is to install the APK file of the game and launch it. To do this, locate the downloaded file in your device storage, and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish. Once the game is installed, you can launch it from your app drawer or home screen. You may need to grant some permissions to the game, such as storage access, before you can play it.

          -

          How to play Bleach vs Naruto 3.3 Mod with 400+ characters

          -

          Step 1: Choose your game mode and difficulty level

          -

          When you launch the game, you will see a main menu with various options. You can choose from different game modes, such as arcade, versus, team play, survival, training, watch mode, and more. Each game mode has its own rules and objectives, so you can pick the one that suits your preference and skill level. You can also adjust the difficulty level of the game, from easy to hard, depending on how challenging you want the game to be.

          -

          Step 2: Select your character and assist from the character selection screen

          -

          After choosing your game mode and difficulty level, you will be taken to the character selection screen, where you can choose your character and assist from over 400 options. You can scroll through the character list using the arrow keys or the mouse wheel, and select your character by clicking on their portrait or pressing enter. You can also choose an assist character that will help you in battle by pressing A or S keys. You can see the name and stats of each character and assist at the bottom of the screen.

          -

          bleach vs naruto 3.3 mod 400+ characters apk
          -bleach vs naruto 3.3 mod 400+ characters android
          -bleach vs naruto 3.3 mod 400+ characters pc
          -bleach vs naruto 3.3 mod 400+ characters mega update
          -bleach vs naruto 3.3 mod 400+ characters youtube
          -bleach vs naruto 3.3 mod 400+ characters free download
          -bleach vs naruto 3.3 mod 400+ characters offline
          -bleach vs naruto 3.3 mod 400+ characters gameplay
          -bleach vs naruto 3.3 mod 400+ characters tutorial
          -bleach vs naruto 3.3 mod 400+ characters latest version
          -bleach vs naruto 3.3 mod 400+ characters kizuma gaming
          -bleach vs naruto 3.3 mod 400+ characters zinnat gaming
          -bleach vs naruto 3.3 mod 400+ characters andrej
          -bleach vs naruto 3.3 mod 400+ characters bvn mods
          -bleach vs naruto 3.3 mod 400+ characters mediafire
          -bleach vs naruto 3.3 mod 400+ characters google drive
          -bleach vs naruto 3.3 mod 400+ characters review
          -bleach vs naruto 3.3 mod 400+ characters how to install
          -bleach vs naruto 3.3 mod 400+ characters how to play
          -bleach vs naruto 3.3 mod 400+ characters how to add
          -bleach vs naruto 3.3 mod 400+ characters best edition
          -bleach vs naruto 3.3 mod 400+ characters new update
          -bleach vs naruto 3.3 mod 400+ characters new maps
          -bleach vs naruto 3.3 mod 400+ characters new assists
          -bleach vs naruto 3.3 mod 400+ characters new interface
          -bleach vs naruto 3.3 mod 400+ characters new effects
          -bleach vs naruto 3.3 mod 400+ characters new features
          -bleach vs naruto 3.3 mod 400+ characters new modes
          -bleach vs naruto 3.3 mod 400+ characters new settings
          -bleach vs naruto 3.3 mod 400+ characters new transformations
          -bleach vs naruto 3.3 mod 400+ characters all forms
          -bleach vs naruto 3.3 mod 400+ characters all attacks
          -bleach vs naruto 3.3 mod 400+ characters all specials
          -bleach vs naruto 3.3 mod 400+ characters all ultimates
          -bleach vs naruto 3.3 mod 400+ characters all assists
          -bleach vs naruto 3.3 mod 400+ characters all maps
          -bleach vs naruto 3.

          -

          Step 3: Enjoy the epic anime battles with stunning graphics and sound effects

          -

          Once you select your character and assist, you will be ready to start the battle. You will be matched with an opponent based on your game mode and difficulty level, and you will fight in one of the many maps available in the game. The game has stunning graphics and sound effects that make the battles more immersive and realistic. You can use various skills, abilities, transformations, and special moves to defeat your opponent and win the match.

          -

          Tips and tricks for Bleach vs Naruto 3.3 Mod

          -

          Tip 1: Learn the basic controls and combos of each character

          -

          One of the most important tips for playing Bleach vs Naruto 3.3 Mod is to learn the basic controls and combos of each character. The game has a simple control scheme that uses four keys: J, K, L, and U. J is for attack, K is for jump, L is for sprint or dash, and U is for special attack. You can also use W, A, S, D keys or arrow keys to move your character around. By combining these keys in different ways, you can perform various combos and moves that deal more damage and have different effects.

          -

          Table: Example of basic combos for some characters

| Character | Combo | Effect |
| --- | --- | --- |
| Naruto | J + J + J + K + J + U | Rasengan |
| Ichigo | J + J + J + K + J + U | Getsuga Tenshou |
| Goku | J + J + J + K + J + U | Kamehameha |
| Luffy | J + J + J + K + J + U | Gomu Gomu no Pistol |

          You can also check out some YouTube videos or online guides that show more advanced combos and techniques for each character.

          -

          Tip 2: Use your assist wisely and strategically

          -

          Another tip for playing Bleach vs Naruto 3.3 Mod is to use your assist wisely and strategically. Your assist character can help you in various ways, such as attacking your opponent, defending you from attacks, healing you, boosting your stats, etc. However, you can only use your assist once per match, so you have to choose the right time and situation to use it. Some assists are more suitable for offensive purposes, while others are more suitable for defensive purposes. For example, Sasuke can use his Chidori to deal a lot of damage to the enemy, while Orihime can use her Soten Kisshun to heal herself or her ally. You can also combine your assist with your own attacks to create more powerful combos and effects.

          -

          Tip 3: Experiment with different characters and find your favorite ones

          -

          The last tip for playing Bleach vs Naruto 3.3 Mod is to experiment with different characters and find your favorite ones. The game has over 400 characters to choose from, each with their own strengths, weaknesses, styles, and personalities. You can try out different characters and see which ones suit your preference and skill level. You can also discover new characters that you may not have heard of before, and learn more about their backgrounds and stories. You may find some hidden gems that you will love to play with.

          -

          Conclusion

          -

          Summary of the main points

          -

          Bleach vs Naruto 3.3 Mod is a fan-made modification of the original Bleach vs Naruto game, which adds hundreds of new characters, maps, assists, effects, and features to the game. It is a fun and challenging game that lets you play with over 400 characters from different anime shows, such as Bleach, Naruto, One Piece, Dragon Ball, Fairy Tail, Hunter x Hunter, and more. You can download and play Bleach vs Naruto 3.3 Mod for Android by following the steps in this article, and enjoy the epic anime battles with stunning graphics and sound effects. You can also use some tips and tricks to improve your skills and have more fun with the game.

          -

          Call to action and final remarks

          -

          If you are looking for a game that combines anime and fighting genres, you should definitely try Bleach vs Naruto 3.3 Mod. It is a game that will keep you entertained for hours, whether you play alone or with your friends. You can download the game from the link below, and start playing right away. You will not regret it!

          -

          Thank you for reading this article, and we hope you found it helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you!

          -

          FAQs

          -
            -
          • Q: Is Bleach vs Naruto 3.3 Mod safe to download and play?
          • -
          • A: Yes, Bleach vs Naruto 3.3 Mod is safe to download and play, as long as you download it from a trusted source, such as the link we provided in this article. However, you should always be careful when downloading any files from the internet, and scan them for viruses or malware before installing them on your device.
          • -
          • Q: Can I play Bleach vs Naruto 3.3 Mod on PC?
          • -
          • A: Yes, you can play Bleach vs Naruto 3.3 Mod on PC, by using an Android emulator, such as BlueStacks or NoxPlayer. These are software that allow you to run Android apps on your PC. You can download an emulator from their official websites, install it on your PC, and then install the APK file of the game on the emulator.
          • -
          • Q: How can I unlock more characters in Bleach vs Naruto 3.3 Mod?
          • -
          • A: You can unlock more characters in Bleach vs Naruto 3.3 Mod by playing the arcade mode or the survival mode of the game. In these modes, you will face different opponents in a series of matches, and if you win enough matches, you will unlock new characters that you can use in other modes.
          • -
          • Q: How can I update Bleach vs Naruto 3.3 Mod to the latest version?
          • -
          • A: You can update Bleach vs Naruto 3.3 Mod to the latest version by downloading the new APK file from the same source that you downloaded the previous version from. You can check for updates by visiting Zinnat Gaming's YouTube channel, where he posts new videos about the game regularly.
          • -
          • Q: Where can I find more information about Bleach vs Naruto 3.3 Mod?
          • -
          • A: You can find more information about Bleach vs Naruto 3.3 Mod by visiting the official website of the mod, where you can see the latest news, updates, features, screenshots, videos, and downloads of the game. You can also join the official Discord server of the mod, where you can chat with other players, modders, and fans of the game.
          • -

          -
          -
          \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Craftsman A New and Exciting Building Craft Game for iOS Users.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Craftsman A New and Exciting Building Craft Game for iOS Users.md deleted file mode 100644 index 9d7e880b6332b390680eb4e66aff392683974ac6..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Craftsman A New and Exciting Building Craft Game for iOS Users.md +++ /dev/null @@ -1,109 +0,0 @@ -
          -

          Craftsman Building Craft: A Free and Fun Sandbox Game for iOS

          -

          If you are looking for a game that lets you create your own world, explore different environments, and have fun with crafting and building, then you should try Craftsman Building Craft. This is a free simulation game that is available for iOS devices. In this article, we will tell you everything you need to know about this game, including what it is, how to download and install it, how to play it, and why you should play it.

          -

          craftsman building craft download ios


          Download Zip >>>>> https://bltlly.com/2uOnH4



          -

          What is Craftsman Building Craft?

          -

          Craftsman Building Craft is a sandbox game that gives you unlimited possibilities to craft and build anything you can imagine. You can use various blocks, ores, and other resources to create houses, farms, cities, or even entire worlds. You can also explore different landscapes, such as forests, deserts, mountains, or oceans. You can also interact with animals, plants, and other creatures. You can also customize your character and choose from different skins and outfits.

          -

          A sandbox game with unlimited possibilities

          -

          Craftsman Building Craft is a game that lets you express your creativity and imagination. You can use the ice craft building system to craft and build items from blocks, ores, and other resources. You can also use the decoration system to add furniture, paintings, lights, and other elements to your buildings. You can also use the painting system to change the color and texture of your blocks. You can also use the inventory system to store and manage your items. You can also use the crafting table system to create tools, weapons, armor, and other items.

          -

          A game inspired by Minecraft and Realmcraft

          -

          Craftsman Building Craft is a game that is inspired by popular games like Minecraft and Realmcraft. It has similar gameplay mechanics, graphics, and sound effects. However, it also has some unique features that make it stand out from other games. For example, it has more realistic physics and lighting effects. It also has more diverse biomes and weather conditions. It also has more animals and monsters to encounter. It also has more items and blocks to craft and build with.

          -

          A game with different modes and worlds

          -

          Craftsman Building Craft is a game that offers different modes and worlds to play in. You can choose from three modes: Survival mode, Creative mode, and Create World mode. In Survival mode, you have to gather resources, craft items, build shelters, and fight against enemies. In Creative mode, you have unlimited resources to craft and build anything you want. In Create World mode, you can create your own world from scratch. You can also choose from three worlds: Flat world, Old world, and Infinite world. In Flat world, you have a flat terrain with no obstacles or enemies. In Old world, you have a limited map size with some hills and trees. In Infinite world, you have an endless map with various biomes and features.

          -

          How to download and install Craftsman Building Craft on iOS?

          -

          If you want to play Craftsman Building Craft on your iOS device, you have two options: download it from the App Store or download it from other sources.

          -

          craftsman survival edition app store
          -craftsman building craft free android games
          -craftsman world building craft 2021
          -craftsman building craft stargame22
          -craftsman building craft simulation game
          -craftsman building craft sandbox game
          -craftsman building craft creative mode
          -craftsman building craft survival mode
          -craftsman building craft create world mode
          -craftsman building craft flat world
          -craftsman building craft old world
          -craftsman building craft infinite world
          -craftsman building craft game resources
          -craftsman building craft armor and items
          -craftsman building craft tools and weapons
          -craftsman building craft house and farms
          -craftsman building craft decor and interior
          -craftsman building craft monsters and predators
          -craftsman building craft multiplayer mode
          -craftsman building craft online play
          -craftsman building craft offline play
          -craftsman building craft cubic world
          -craftsman building craft ice world
          -craftsman building craft stunning graphics
          -craftsman building craft immersive audio
          -craftsman building craft user-friendly interface
          -craftsman building craft simple controls
          -craftsman building craft net energy gain
          -craftsman building craft mini sun experiment
          -craftsman building craft nuclear fusion reaction
          -craftsman building craft 100 million degrees celsius
          -craftsman building craft 30 seconds duration
          -craftsman building craft holy grail experiment
          -craftsman building craft korea institute of fusion energy
          -craftsman building craft kstar facility
          -craftsman building craft new scientist article
          -craftsman building craft the sun article
          -craftsman building craft yahoo news article
          -craftsman building craft wikipedia article
          -craftsman building craft montana solar physics article
          -craftsman building craft cornell university article
          -craftsman building craft nasa fact sheet
          -craftsman building craft solar core temperature
          -craftsman building craft photosphere composition
          -craftsman building craft chromosphere thickness
          -craftsman building craft sun spot cycle
          -craftsman building craft app privacy policy
          -craftsman building craft app support website
          -craftsman building craft app size and category
          -craftsman building craft app compatibility and languages

          -

          Download from the App Store

          -

          The easiest way to download Craftsman Building Craft on your iOS device is to download it from the App Store. You can search for "Craftsman Building Craft" or "Craftsman : Survival Edition" on the App Store or click on this link. Then you can tap on the [Download] button and follow the instructions to install the game on your device. You will need to have an iOS version of 9.0 or later and at least 200 MB of free space on your device. The game is compatible with iPhone, iPad, and iPod touch.

          -

          Download from other sources

          -

          Another way to download Craftsman Building Craft on your iOS device is to download it from other sources, such as third-party websites or file-sharing platforms. However, this method is not recommended, as it may expose your device to viruses, malware, or other security risks. You may also violate the terms and conditions of the game developer or the App Store. If you still want to download the game from other sources, you will need to follow these steps:

          -
            -
          1. Find a reliable and trustworthy source that offers the game file for iOS devices. You can search for "Craftsman Building Craft ios download" or "Craftsman : Survival Edition ios download" on Google or other search engines.
          2. -
          3. Download the game file to your computer or directly to your device. The file should have a .ipa extension, which stands for iOS application archive.
          4. -
          5. If you downloaded the file to your computer, transfer it to your device using a USB cable or a wireless connection.
          6. -
          7. Install the game file on your device using a file manager app or a sideloading tool, such as Cydia Impactor or AltStore. You may need to jailbreak your device or use an Apple ID and password to install the game.
          8. -
          9. Launch the game and enjoy playing it.
          10. -
          -

          How to play Craftsman Building Craft on iOS?

          -

          Once you have downloaded and installed Craftsman Building Craft on your iOS device, you can start playing it by tapping on the game icon on your home screen. You will see the main menu of the game, where you can choose your mode, world, and settings. Here are some tips on how to play the game:

          -

          Choose your mode and world

          -

          As mentioned before, you can choose from three modes: Survival mode, Creative mode, and Create World mode. You can also choose from three worlds: Flat world, Old world, and Infinite world. You can also create your own custom world by selecting the Create World mode and adjusting the parameters, such as terrain type, biome size, seed number, day length, weather cycle, and difficulty level. You can also name your world and save it for later use.

          -

          Craft, build, and explore

          -

          The main activity of Craftsman Building Craft is to craft and build items from blocks, ores, and other resources. You can use the ice craft building system to place and remove blocks in the world. You can also use the decoration system to add furniture, paintings, lights, and other elements to your buildings. You can also use the painting system to change the color and texture of your blocks. You can also use the inventory system to store and manage your items. You can also use the crafting table system to create tools, weapons, armor, and other items.

          -

          Besides crafting and building, you can also explore the world and discover different landscapes, such as forests, deserts, mountains, or oceans. You can also interact with animals, plants, and other creatures. You can also find chests with loot, dungeons with traps, villages with villagers, temples with secrets, and more.

          -

          Survive and fight

          -

          If you choose the Survival mode, you will have to survive in a hostile environment where you have to deal with hunger, thirst, health, and enemies. You will have to gather resources, craft items, build shelters, and fight against enemies. You will encounter different types of enemies, such as zombies, skeletons, spiders, creepers, and more. You will have to use your tools, weapons, armor, and skills to defend yourself and attack them. You will also have to avoid dangers, such as lava, fire, fall damage, and explosions. You will also have to cope with day and night cycles, weather changes, and natural disasters.

          -

          Why play Craftsman Building Craft on iOS?

          -

          Craftsman Building Craft is a game that offers many benefits and advantages for iOS users. Here are some of the reasons why you should play this game on your iOS device:

          -

          Express your creativity and imagination

          -

          Craftsman Building Craft is a game that allows you to express your creativity and imagination. You can craft and build anything you can imagine, from simple houses to complex cities, from realistic structures to fantasy worlds. You can also customize your character and choose from different skins and outfits. You can also share your creations with other players online or offline.

          -

          Enjoy stunning graphics and sound effects

          -

          Craftsman Building Craft is a game that has stunning graphics and sound effects. The game has realistic physics and lighting effects that make the world look more alive and dynamic. The game also has diverse biomes and weather conditions that create different atmospheres and moods. The game also has high-quality sound effects that enhance the gameplay experience. You can hear the sounds of blocks breaking, animals roaring, monsters growling, water flowing, fire crackling, and more.

          -

          Play with your friends online

          -

          Craftsman Building Craft is a game that supports multiplayer mode. You can play with your friends online or offline. You can join or create a server and invite your friends to join you. You can also chat with them using the chat system. You can cooperate with them to craft and build together, or compete with them to see who can survive longer or create better things. You can also trade items with them or fight against them.

          -

          Conclusion

          -

          Craftsman Building Craft is a free and fun sandbox game for iOS devices that lets you craft and build anything you can imagine. You can also explore different landscapes, interact with animals and creatures, survive and fight against enemies, and play with your friends online. If you are looking for a game that offers unlimited possibilities, stunning graphics, and sound effects, and multiplayer mode, then you should download Craftsman Building Craft on your iOS device today.

          -

          FAQs

          -
            -
          1. Q: Is Craftsman Building Craft free to play?
            -A: Yes, Craftsman Building Craft is free to play. However, it may contain ads or in-app purchases.
          2. -
          3. Q: Is Craftsman Building Craft safe to play?
            -A: Yes, Craftsman Building Craft is safe to play. However, if you download it from other sources than the App Store, you may expose your device to security risks.
          4. -
          5. Q: Is Craftsman Building Craft compatible with my iOS device?
            -A: Craftsman Building Craft is compatible with iPhone, iPad, and iPod touch running iOS 9.0 or later.
          6. -
          7. Q: How can I update Craftsman Building Craft?
            -A: You can update Craftsman Building Craft by checking for updates on the App Store or by downloading the latest version from other sources.
          8. -
          9. Q: How can I contact the developer of Craftsman Building Craft?
            -A: You can contact the developer of Craftsman Building Craft by sending an email to craftsmanbuildingcraft@gmail.com or by visiting their website.
          10. -

          -
          -
          \ No newline at end of file diff --git a/spaces/ting520/66/devices/device_8958.js b/spaces/ting520/66/devices/device_8958.js deleted file mode 100644 index 455ddb0108b70276949e6539926481590a98e0d9..0000000000000000000000000000000000000000 --- a/spaces/ting520/66/devices/device_8958.js +++ /dev/null @@ -1,344 +0,0 @@ -"use strict"; -var __importDefault = (this && this.__importDefault) || function (mod) { - return (mod && mod.__esModule) ? mod : { "default": mod }; -}; -Object.defineProperty(exports, "__esModule", { value: true }); -exports.getApkInfo = exports.Platform = exports.Device = exports.generateFullDevice = exports.generateShortDevice = void 0; -const crypto_1 = require("crypto"); -const constants_1 = require("./constants"); -const axios_1 = __importDefault(require("axios")); -const algo_1 = require("./algo"); -function generateImei() { - let imei = `86${(0, constants_1.randomString)(12, '0123456789')}`; - function calcSP(imei) { - let sum = 0; - for (let i = 0; i < imei.length; ++i) { - if (i % 2) { - let j = parseInt(imei[i]) * 2; - sum += j % 10 + Math.floor(j / 10); - } - else { - sum += parseInt(imei[i]); - } - } - return (100 - sum) % 10; - } - return imei + calcSP(imei); -} -/** 生成短设备信息 */ -function generateShortDevice() { - const randstr = (length, num = false) => { - const map = num ? '0123456789' : '0123456789abcdef'; - return (0, constants_1.randomString)(length, map); - }; - return { - "--begin--": "该设备为随机生成,丢失后不能得到原先配置", - product: `ILPP-${randstr(5).toUpperCase()}`, - device: `${randstr(5).toUpperCase()}`, - board: `${randstr(5).toUpperCase()}`, - brand: `${randstr(4).toUpperCase()}`, - model: `ICQQ ${randstr(4).toUpperCase()}`, - wifi_ssid: `HUAWEI-${randstr(7)}`, - bootloader: `U-boot`, - android_id: `IL.${randstr(7, true)}.${randstr(4, true)}`, - boot_id: `${randstr(8)}-${randstr(4)}-${randstr(4)}-${randstr(4)}-${randstr(12)}`, - proc_version: `Linux version 5.10.101-android12-${randstr(8)}`, - mac_address: `2D:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}:${randstr(2).toUpperCase()}`, - ip_address: `192.168.${randstr(2, true)}.${randstr(2, true)}`, - imei: `${generateImei()}`, - incremental: `${randstr(10, true).toUpperCase()}`, - "--end--": "修改后可能需要重新验证设备。" - }; -} -exports.generateShortDevice = generateShortDevice; -/** 生成完整设备信息 */ -function generateFullDevice(apk, d) { - if (!d) - d = generateShortDevice(); - return { - display: d.android_id, - product: d.product, - device: d.device, - board: d.board, - brand: d.brand, - model: d.model, - bootloader: d.bootloader, - fingerprint: `${d.brand}/${d.product}/${d.device}:10/${d.android_id}/${d.incremental}:user/release-keys`, - boot_id: d.boot_id, - proc_version: d.proc_version, - baseband: "", - sim: "T-Mobile", - os_type: "android", - mac_address: d.mac_address, - ip_address: d.ip_address, - wifi_bssid: d.mac_address, - wifi_ssid: d.wifi_ssid, - imei: d.imei, - android_id: (0, constants_1.md5)(d.android_id).toString("hex"), - apn: "wifi", - version: { - incremental: d.incremental, - release: "10", - codename: "REL", - sdk: 29, - }, - imsi: (0, crypto_1.randomBytes)(16), - guid: (0, constants_1.md5)(Buffer.concat([Buffer.from(d.imei), Buffer.from(d.mac_address)])), - }; -} -exports.generateFullDevice = generateFullDevice; -class Device { - constructor(apk, d) { - this.apk = apk; - this.secret = 'ZdJqM15EeO2zWc08'; - this.publicKey = `-----BEGIN PUBLIC KEY----- -MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDEIxgwoutfwoJxcGQeedgP7FG9 
-qaIuS0qzfR8gWkrkTZKM2iWHn2ajQpBRZjMSoSf6+KJGvar2ORhBfpDXyVtZCKpq -LQ+FLkpncClKVIrBwv6PHyUvuCb0rIarmgDnzkfQAqVufEtR64iazGDKatvJ9y6B -9NMbHddGSAUmRTCrHQIDAQAB ------END PUBLIC KEY-----`; - if (!d) - d = generateShortDevice(); - Object.assign(this, generateFullDevice(apk, d)); - } - async getQIMEI() { - if (this.apk.app_key === "") { - return; - } - const k = (0, constants_1.randomString)(16); - const key = (0, algo_1.encryptPKCS1)(this.publicKey, k); - const time = Date.now(); - const nonce = (0, constants_1.randomString)(16); - const payload = this.genRandomPayloadByDevice(); - const params = (0, algo_1.aesEncrypt)(JSON.stringify(payload), k).toString('base64'); - try { - const { data } = await axios_1.default.post("https://snowflake.qq.com/ola/android", { - key, - params, - time, nonce, - sign: (0, constants_1.md5)(key + params + time + nonce + this.secret).toString("hex"), - extra: '' - }, { - headers: { - 'User-Agent': `Dalvik/2.1.0 (Linux; U; Android ${this.version.release}; PCRT00 Build/N2G48H)`, - 'Content-Type': "application/json" - } - }); - if (data?.code !== 0) { - return; - } - const { q16, q36 } = JSON.parse((0, algo_1.aesDecrypt)(data.data, k)); - this.qImei16 = q16; - this.qImei36 = q36; - } - catch { - } - } - genRandomPayloadByDevice() { - const fixedRand = (max = 1, min = 0) => { - if (max < min) - [max, min] = [min, max]; - const diff = max - min; - return Math.floor(Math.random() * diff) + min; - }; - const reserved = { - "harmony": "0", - "clone": Math.random() > 0.5 ? "1" : "0", - "containe": "", - "oz": "", - "oo": "", - "kelong": Math.random() > 0.5 ? "1" : "0", - "uptimes": (0, constants_1.formatTime)(new Date()), - "multiUser": Math.random() > 0.5 ? "1" : "0", - "bod": this.board, - "brd": this.brand, - "dv": this.device, - "firstLevel": "", - "manufact": this.brand, - "name": this.model, - "host": "se.infra", - "kernel": this.fingerprint - }; - const timestamp = Date.now(); - this.mtime = this.mtime || Date.now(); - const mtime1 = new Date(this.mtime || Date.now()); - const dateFormat = (fmt, time = Date.now()) => (0, constants_1.formatTime)(time, fmt); - const mtimeStr1 = dateFormat("YYYY-mm-ddHHMMSS", mtime1) + "." + this.imei.slice(2, 11); - const mtime2 = new Date(this.mtime - parseInt(this.imei.slice(2, 4))); - const mtimeStr2 = dateFormat("YYYY-mm-ddHHMMSS", mtime2) + "." 
+ this.imei.slice(5, 14); - let beaconIdArr = [ - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - mtimeStr1, - '0000000000000000', - (0, constants_1.md5)(this.android_id + this.imei).toString("hex").slice(0, 16), - ...new Array(4).fill(false).map((_) => fixedRand(10000000, 1000000)), - this.boot_id, - '1', - fixedRand(5, 0), - fixedRand(5, 0), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(50000, 10000), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - mtimeStr2, - fixedRand(10000, 1000), - fixedRand(5, 0), - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((10 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - fixedRand(10000, 1000), - fixedRand(100, 10), - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - `${dateFormat("YYYY-mm-ddHHMMSS")}.${String(((11 + parseInt(this.imei.slice(5, 7))) % 100)).padStart(2, "0")}0000000`, - fixedRand(10000, 1000), - fixedRand(5, 0), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(100, 10), - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - `${(0, constants_1.formatTime)(new Date(timestamp + fixedRand(60, 0)))}.${String(fixedRand(99, 0)).padStart(2, '0')}0000000`, - fixedRand(5, 0), - fixedRand(5, 0), - ].map((str, idx) => `k${idx + 1}:${str}`); - return { - "androidId": this.android_id, - "platformId": 1, - "appKey": this.apk.app_key, - "appVersion": this.apk.version, - "beaconIdSrc": beaconIdArr.join(';'), - "brand": this.brand, - "channelId": "2017", - "cid": "", - "imei": this.imei, - "imsi": this.imsi.toString("hex"), - "mac": this.mac_address, - "model": this.model, - "networkType": "unknown", - "oaid": "", - "osVersion": `Android ${this.version.release},level ${this.version.sdk}`, - "qimei": "", - "qimei36": "", - "sdkVersion": "1.2.13.6", - "targetSdkVersion": "26", - "audit": "", - "userId": "{}", - "packageId": this.apk.id, - "deviceType": this.display, - "sdkName": "", - "reserved": JSON.stringify(reserved), - }; - } -} -exports.Device = Device; -/** 支持的登录设备平台 */ -var Platform; -(function (Platform) { - Platform[Platform["Android"] = 1] = "Android"; - Platform[Platform["aPad"] = 2] = "aPad"; - Platform[Platform["Watch"] = 3] = "Watch"; - Platform[Platform["iMac"] = 4] = "iMac"; - Platform[Platform["iPad"] = 5] = "iPad"; - Platform[Platform["Tim"] = 6] = "Tim"; -})(Platform = exports.Platform || (exports.Platform = {})); -const mobile = { - 
id: "com.tencent.mobileqq", - app_key: '0S200MNJT807V3GE', - name: "A8.9.58.11175", - version: "8.9.58.11175", - ver: "8.9.58", - sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1684467300, - appid: 16, - subid: 537163194, - bitmap: 150470524, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2545", - display: "Android_8.9.58", - qua: 'V1_AND_SQ_8.9.58_4108_YYB_D', - ssover: 20, -}; -const tim = { - id: "com.tencent.tim", - app_key: '0S200MNJT807V3GE', - name: "A3.5.1.3168", - version: "3.5.1.3168", - ver: "3.5.1", - sign: Buffer.from('775e696d09856872fdd8ab4f3f06b1e0', 'hex'), - buildtime: 1630062176, - appid: 16, - subid: 537150355, - bitmap: 150470524, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2484", - display: "Tim", - qua: "V1_AND_SQ_8.3.9_351_TIM_D", - ssover: 18, -}; -const watch = { - id: "com.tencent.qqlite", - app_key: '0S200MNJT807V3GE', - name: "A2.0.8", - version: "2.0.8", - ver: "2.0.8", - sign: Buffer.from('A6 B7 45 BF 24 A2 C2 77 52 77 16 F6 F3 6E B6 8D'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1559564731, - appid: 16, - subid: 537065138, - bitmap: 16252796, - main_sig_map: 16724722, - sub_sig_map: 0x10400, - sdkver: "6.0.0.2365", - display: "Watch", - qua: '', - ssover: 5 -}; -const hd = { - id: "com.tencent.minihd.qq", - app_key: '0S200MNJT807V3GE', - name: "A5.9.3.3468", - version: "5.9.3.3468", - ver: "5.9.3", - sign: Buffer.from('AA 39 78 F4 1F D9 6F F9 91 4A 66 9E 18 64 74 C7'.split(' ').map(s => parseInt(s, 16))), - buildtime: 1637427966, - appid: 16, - subid: 537128930, - bitmap: 150470524, - main_sig_map: 1970400, - sub_sig_map: 66560, - sdkver: "6.0.0.2433", - display: "iMac", - qua: '', - ssover: 12 -}; -const apklist = { - [Platform.Android]: mobile, - [Platform.Tim]: tim, - [Platform.aPad]: { - ...mobile, - subid: 537163242, - display: 'aPad_8.9.58' - }, - [Platform.Watch]: watch, - [Platform.iMac]: { ...hd }, - [Platform.iPad]: { - ...mobile, - subid: 537155074, - sign: hd.sign, - name: '8.9.50.611', - ver: '8.9.50', - sdkver: '6.0.0.2535', - qua: 'V1_AND_SQ_8.9.50_3898_YYB_D', - display: 'iPad' - }, -}; -function getApkInfo(p) { - return apklist[p] || apklist[Platform.Android]; -} -exports.getApkInfo = getApkInfo; diff --git a/spaces/tioseFevbu/cartoon-converter/Khoobsurat In Hindi 720p Torrentl.md b/spaces/tioseFevbu/cartoon-converter/Khoobsurat In Hindi 720p Torrentl.md deleted file mode 100644 index b69b2df0dbf7ea50009cd4c6efffb68bd7ebbc3a..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/Khoobsurat In Hindi 720p Torrentl.md +++ /dev/null @@ -1,39 +0,0 @@ -## Khoobsurat In Hindi 720p Torrentl - - - -**Click Here ✫✫✫ [https://ditzcosupo.blogspot.com/?d=2tx0oD](https://ditzcosupo.blogspot.com/?d=2tx0oD)** - - - -# Khoobsurat In Hindi 720p Torrentl: A Review of the Romantic Comedy - - - -Khoobsurat is a 2014 Bollywood romantic comedy film starring Sonam Kapoor, Fawad Khan, Kirron Kher and Ratna Pathak Shah. The film is a remake of the 1980 film of the same name, which was directed by Hrishikesh Mukherjee and starred Rekha and Rakesh Roshan. The film follows the story of Mili (Kapoor), a quirky and free-spirited physiotherapist who is hired to treat the paraplegic king of a royal family in Rajasthan. There, she falls in love with the king's son Vikram (Khan), who is engaged to another woman. 
The film explores the clash of cultures and values between Mili and the royal family, as well as the challenges of finding true love. - - - -Khoobsurat In Hindi 720p Torrentl is a high-quality torrent file that allows you to download and watch the film in HD resolution. The torrent file is easy to use and has a fast download speed. You can enjoy the film's vibrant colors, scenic locations, catchy songs and hilarious dialogues in crystal clear quality. The torrent file also has subtitles in English and other languages for your convenience. - - - -If you are looking for a fun and light-hearted film to watch with your loved ones, Khoobsurat In Hindi 720p Torrentl is a great choice. The film will make you laugh, cry and swoon with its charming characters, witty humor and heartwarming romance. Khoobsurat is a film that celebrates life, love and happiness in all its forms. - - - -Khoobsurat In Hindi 720p Torrentl is not only a film for entertainment, but also a film with a message. The film shows how Mili breaks the stereotypes and norms of the royal family with her unconventional and modern outlook. She teaches them to embrace their true selves and to live with joy and passion. She also inspires Vikram to follow his dreams and to stand up for his love. The film challenges the notions of class, status and tradition, and shows how love can overcome any obstacle. - - - -Khoobsurat In Hindi 720p Torrentl is a film that will make you feel good and smile. It is a film that will remind you of the beauty and magic of life. It is a film that will make you fall in love with Khoobsurat. - - - -Khoobsurat In Hindi 720p Torrentl is a film that will appeal to all kinds of audiences. Whether you are a fan of Bollywood, romance, comedy or drama, you will find something to enjoy in this film. The film has a talented cast that delivers excellent performances. Sonam Kapoor and Fawad Khan have a great chemistry and charisma on screen. Kirron Kher and Ratna Pathak Shah are hilarious and endearing as the contrasting mothers. The film also has a memorable soundtrack composed by Sneha Khanwalkar and Badshah, featuring songs like "Engine Ki Seeti", "Abhi Toh Party Shuru Hui Hai" and "Naina". The film has a beautiful cinematography that captures the essence and culture of Rajasthan. - - - -Khoobsurat In Hindi 720p Torrentl is a film that will make you laugh, cry and swoon with its charming characters, witty humor and heartwarming romance. Khoobsurat is a film that celebrates life, love and happiness in all its forms. - - 1b8d091108 \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/All Safari Magazine Gujarati Pdf Free 35 VERIFIED.md b/spaces/tioseFevbu/cartoon-converter/scripts/All Safari Magazine Gujarati Pdf Free 35 VERIFIED.md deleted file mode 100644 index df9c8cfa45959b2dfe3a909d68b051b951eb5c36..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/All Safari Magazine Gujarati Pdf Free 35 VERIFIED.md +++ /dev/null @@ -1,15 +0,0 @@ - -

          All Safari Magazine Gujarati PDF Free Download

          -

If you are looking for a magazine that covers science, history, geography, nature, and wildlife in the Gujarati language, then you might be interested in Safari magazine. Safari magazine was first published in 1980 by Nagendra Vijay, and it has been relaunched several times since then. It is one of the most popular magazines of its kind in Gujarat, and it has also been published in an English edition since 2008.

          -

          Safari magazine offers a variety of articles, stories, quizzes, jokes, and puzzles that are informative, entertaining, and educational. You can learn about the latest discoveries and inventions in science and technology, the fascinating facts and events of world history, the amazing wonders and diversity of nature and animals, and much more. You can also test your knowledge and skills with the super quiz and mathemagic sections, or enjoy the humor and wit of the safari jokes section.

          -

          all safari magazine gujarati pdf free 35


          Download File –––––>>> https://urlcod.com/2uHx6x



          -

          If you want to read Safari magazine for free, you can download the PDF versions of the past and current issues from various online sources. Here are some of the websites where you can find Safari magazine Gujarati PDF free download:

          -
            -
          • Scribd: This is a website where you can read and download books, magazines, documents, and audiobooks. You can find Safari magazine Gujarati PDF from issue no. 145 to issue no. 258 here[^1^]. You need to sign up for a free account or use your Facebook or Google account to access the files.
          • -
          • Gujarat Job: This is a website where you can find information about government jobs, exams, results, syllabus, and study materials in Gujarat. You can also find Safari magazine Gujarati PDF free download of issue no. 274 here[^2^]. You just need to click on the download link at the end of the post.
          • -
          • Trello: This is a website where you can organize your projects, tasks, and ideas using boards, lists, and cards. You can also collaborate with others and share files. You can find Safari magazine Gujarati PDF free download of issue no. 146 here[^3^]. You need to sign up for a free account or use your Google account to access the file.
          • -
          • Sway: This is a website where you can create and share interactive presentations, reports, newsletters, and stories. You can also embed media and web content from various sources. You can find Safari magazine Gujarati PDF free download of issue no. 35 here[^4^]. You don't need an account to view the file.
          • -
          -

          These are some of the websites where you can find Safari magazine Gujarati PDF free download. However, please note that these files may not be authorized by the publisher or the author, and they may violate their copyrights. Therefore, we recommend that you buy the original copies of the magazine from their official website or other authorized sources.

          -
          -
          \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Azad Desh Ke Ghulam Movie With Eng Subtitles ((INSTALL)) Download.md b/spaces/tioseFevbu/cartoon-converter/scripts/Azad Desh Ke Ghulam Movie With Eng Subtitles ((INSTALL)) Download.md deleted file mode 100644 index b4db4cbc66b3591919c0dad05c68df87e65f326e..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Azad Desh Ke Ghulam Movie With Eng Subtitles ((INSTALL)) Download.md +++ /dev/null @@ -1,31 +0,0 @@ -
          -

          How to Watch Azaad Desh Ke Gulam Online with English Subtitles

          - -

Azaad Desh Ke Gulam is a 1990 Hindi drama film starring Rekha, Rishi Kapoor, Jackie Shroff, and Pran. The film revolves around Bharti, a righteous law student who falls in love with her classmate Ravi and discovers that her father is involved in illegal and immoral activities. The film explores the themes of corruption, justice, and patriotism in post-independence India.

          - -

          If you are looking for a way to watch Azaad Desh Ke Gulam online with English subtitles, you have a few options. Here are some of them:

          -

          Azad Desh Ke Ghulam movie with eng subtitles download


          Downloadhttps://urlcod.com/2uHx8E



          - -
            -
          • ZEE5: ZEE5 is a streaming platform that offers a variety of Indian content, including movies, TV shows, web series, and live channels. You can watch Azaad Desh Ke Gulam on ZEE5 with English subtitles by subscribing to one of their plans[^1^]. ZEE5 also has an app for Android and iOS devices, as well as smart TVs and streaming devices.
          • -
          • SoundCloud: SoundCloud is a music and audio platform that allows users to upload and share their own sounds. You can find an audio version of Azaad Desh Ke Gulam with English subtitles on SoundCloud by following this link[^2^] [^3^]. You can listen to it on your browser or download the SoundCloud app for your device.
          • -
          - -

          These are some of the ways you can watch Azaad Desh Ke Gulam online with English subtitles. However, please note that these sources may not be legal or authorized by the filmmakers or distributors. Therefore, we recommend that you watch the film only from official and licensed sources to support the creators and respect their rights.

          Here are some more details about the film and its cast:

          - -

          Azaad Desh Ke Gulam: Plot and Cast

          - -

          Azaad Desh Ke Gulam is directed by S.A. Chandrasekhar and produced by P. Subbarao. The film has a runtime of 155 minutes and was released on April 6, 1990. The film was a moderate success at the box office and received mixed reviews from critics and audiences.

          - -

          The film follows Bharti (Rekha), a law student who is the daughter of a wealthy businessman, Thakur Pratap Singh (Pran). Bharti is a strong-willed and idealistic woman who believes in fighting for justice and truth. She falls in love with Ravi (Rishi Kapoor), a fellow law student who is also a journalist. Ravi exposes the corruption and crimes of various politicians and businessmen through his articles.

          - -

          One day, Bharti learns that her father is one of the culprits behind a major scam that Ravi has exposed. She confronts her father and tries to persuade him to change his ways. However, her father refuses to listen to her and instead tries to silence Ravi by hiring goons to kill him. Bharti decides to stand by Ravi and help him in his crusade against the corrupt system. She also joins hands with Raja (Jackie Shroff), a rebel leader who is fighting for the rights of the oppressed people.

          - -

          The film culminates in a climactic showdown between the forces of good and evil, where Bharti, Ravi, and Raja face off against Thakur Pratap Singh and his allies. Will they be able to bring justice to the people and free the country from the clutches of tyranny? Watch Azaad Desh Ke Gulam online with English subtitles to find out.

          - -

          The film features a talented cast of actors who deliver memorable performances. Rekha is impressive as Bharti, the fearless and principled heroine who defies her father for her love and ideals. Rishi Kapoor is charming as Ravi, the brave and honest journalist who exposes the truth at any cost. Jackie Shroff is charismatic as Raja, the rebellious and patriotic leader who inspires the masses. Pran is menacing as Thakur Pratap Singh, the ruthless and greedy villain who will stop at nothing to protect his interests.

          -

          - -

          The film also has some catchy songs composed by Laxmikant-Pyarelal and written by Anand Bakshi. Some of the popular songs from the film are "Bol Meri Dafli Bol", "Sare Shikwe Gile", "Roko Na Mujhe", and "Aaj Ka Ye Din". The songs add to the mood and emotion of the film and are well-sung by singers like Kavita Krishnamurthy, Mohammed Aziz, Amit Kumar, Anuradha Paudwal, Shabbir Kumar, and others.

          -
          -
          \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Joker 2019 English 720p BluRay HEVC 700MB HOT!.md b/spaces/tioseFevbu/cartoon-converter/scripts/Joker 2019 English 720p BluRay HEVC 700MB HOT!.md deleted file mode 100644 index f4533cc5d063442e24d73397676ce537c0d8d6ac..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Joker 2019 English 720p BluRay HEVC 700MB HOT!.md +++ /dev/null @@ -1,18 +0,0 @@ - -

          Joker 2019 English 720p BluRay HEVC 700MB Review

          -

          Joker is a 2019 American psychological thriller film directed by Todd Phillips and starring Joaquin Phoenix as the iconic DC Comics villain. The film is a dark and gritty origin story that explores how Arthur Fleck, a failed comedian and social outcast, becomes the Joker, a violent and chaotic criminal mastermind.

          -

          Joker 2019 English 720p BluRay HEVC 700MB


          DOWNLOAD ★★★ https://urlcod.com/2uHxVY



          -

          The film received critical acclaim for its direction, screenplay, cinematography, music, and Phoenix's performance, which earned him an Academy Award for Best Actor. It also became the first R-rated film to gross over $1 billion worldwide, making it the most profitable comic book film of all time.

          -

          If you are looking for a high-quality and immersive viewing experience of this film, you can download the Joker 2019 English 720p BluRay HEVC 700MB version from our website. This version has a resolution of 1280x536 pixels and uses the High Efficiency Video Coding (HEVC) format, which reduces the file size without compromising the quality. The file size is only 700MB, which means you can easily store it on your device or stream it online.

          -

          To download the Joker 2019 English 720p BluRay HEVC 700MB version, click on the link below and follow the instructions. You will need a torrent client to download the file. Enjoy the film and don't forget to leave your feedback in the comments section.

          -Download Joker 2019 English 720p BluRay HEVC 700MB - -

          Joker is not a typical comic book film. It does not feature any superheroes, action scenes, or CGI effects. Instead, it focuses on the psychological and social aspects of the main character and his descent into madness. The film is heavily influenced by the works of Martin Scorsese, especially Taxi Driver and The King of Comedy, which also feature lonely and disturbed protagonists who resort to violence.

          -

          -

          The film also explores themes such as mental illness, class inequality, media influence, and civil unrest. It depicts a bleak and realistic version of Gotham City in the 1980s, where crime, corruption, and poverty are rampant. The film also shows how Arthur Fleck is mistreated and ignored by society, which drives him to adopt the Joker persona as a way of expressing his anger and resentment.

          -

          Joker is a controversial and divisive film that has sparked debates and discussions among critics and audiences. Some have praised it as a masterpiece and a powerful commentary on the current state of the world, while others have criticized it as a dangerous and irresponsible glorification of violence and nihilism. The film has also been accused of inciting violence and inspiring copycat crimes, although there is no evidence to support these claims.

          -

          Regardless of your opinion on the film, Joker is undoubtedly a cinematic phenomenon that has left a lasting impact on the culture and the industry. It is a film that challenges and provokes viewers to think and feel, whether positively or negatively. It is a film that deserves to be seen and experienced in the best possible quality.

          -

That's why we recommend that you download the Joker 2019 English 720p BluRay HEVC 700MB version from our website. This version will give you the best and most enjoyable viewing experience of this film. Don't miss this opportunity to watch one of the most talked-about films of the decade.

          -Download Joker 2019 English 720p BluRay HEVC 700MB

          -
          -
          \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/utils/filetypes.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/utils/filetypes.py deleted file mode 100644 index 5948570178f3e6e79d1ff574241d09d4d8ed78de..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/utils/filetypes.py +++ /dev/null @@ -1,27 +0,0 @@ -"""Filetype information. -""" - -from typing import Tuple - -from pip._internal.utils.misc import splitext - -WHEEL_EXTENSION = ".whl" -BZ2_EXTENSIONS: Tuple[str, ...] = (".tar.bz2", ".tbz") -XZ_EXTENSIONS: Tuple[str, ...] = ( - ".tar.xz", - ".txz", - ".tlz", - ".tar.lz", - ".tar.lzma", -) -ZIP_EXTENSIONS: Tuple[str, ...] = (".zip", WHEEL_EXTENSION) -TAR_EXTENSIONS: Tuple[str, ...] = (".tar.gz", ".tgz", ".tar") -ARCHIVE_EXTENSIONS = ZIP_EXTENSIONS + BZ2_EXTENSIONS + TAR_EXTENSIONS + XZ_EXTENSIONS - - -def is_archive_file(name: str) -> bool: - """Return True if `name` is a considered as an archive file.""" - ext = splitext(name)[1].lower() - if ext in ARCHIVE_EXTENSIONS: - return True - return False diff --git a/spaces/tom-doerr/logo_generator/tools/train/scalable_shampoo/README.md b/spaces/tom-doerr/logo_generator/tools/train/scalable_shampoo/README.md deleted file mode 100644 index 1b88e8d73e2e003b5ca63dff710e5b651217e75f..0000000000000000000000000000000000000000 --- a/spaces/tom-doerr/logo_generator/tools/train/scalable_shampoo/README.md +++ /dev/null @@ -1,7 +0,0 @@ -# Notes - -Files copied from [google-research/scalable_shampoo/optax](https://github.com/google-research/google-research/tree/master/scalable_shampoo/optax). - -Imports have been modified to be relative. - -This will eventually be replaced with `optax-shampoo` package. diff --git a/spaces/tomofi/MMOCR/mmocr/models/textdet/detectors/ocr_mask_rcnn.py b/spaces/tomofi/MMOCR/mmocr/models/textdet/detectors/ocr_mask_rcnn.py deleted file mode 100644 index 3cfbff57856fed3066df9548e80d20bc8f4d467e..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/models/textdet/detectors/ocr_mask_rcnn.py +++ /dev/null @@ -1,69 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.models.detectors import MaskRCNN - -from mmocr.core import seg2boundary -from mmocr.models.builder import DETECTORS -from .text_detector_mixin import TextDetectorMixin - - -@DETECTORS.register_module() -class OCRMaskRCNN(TextDetectorMixin, MaskRCNN): - """Mask RCNN tailored for OCR.""" - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None, - text_repr_type='quad', - show_score=False, - init_cfg=None): - TextDetectorMixin.__init__(self, show_score) - MaskRCNN.__init__( - self, - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) - assert text_repr_type in ['quad', 'poly'] - self.text_repr_type = text_repr_type - - def get_boundary(self, results): - """Convert segmentation into text boundaries. - - Args: - results (tuple): The result tuple. The first element is - segmentation while the second is its scores. - Returns: - dict: A result dict containing 'boundary_result'. 
- """ - - assert isinstance(results, tuple) - - instance_num = len(results[1][0]) - boundaries = [] - for i in range(instance_num): - seg = results[1][0][i] - score = results[0][0][i][-1] - boundary = seg2boundary(seg, self.text_repr_type, score) - if boundary is not None: - boundaries.append(boundary) - - results = dict(boundary_result=boundaries) - return results - - def simple_test(self, img, img_metas, proposals=None, rescale=False): - - results = super().simple_test(img, img_metas, proposals, rescale) - - boundaries = self.get_boundary(results[0]) - boundaries = boundaries if isinstance(boundaries, - list) else [boundaries] - return boundaries diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/dcn/faster_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/dcn/faster_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py deleted file mode 100644 index d1bcf3c102fb660641eda2a1398db3df520caa3a..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/dcn/faster_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py' -model = dict( - backbone=dict( - dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, True, True, True))) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/fast_rcnn/fast_rcnn_r101_fpn_2x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/fast_rcnn/fast_rcnn_r101_fpn_2x_coco.py deleted file mode 100644 index c9d5b4bef7cf527dc9af1856b6773fc061bda2a7..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/fast_rcnn/fast_rcnn_r101_fpn_2x_coco.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './fast_rcnn_r50_fpn_2x_coco.py' -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/guided_anchoring/ga_retinanet_r50_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/guided_anchoring/ga_retinanet_r50_fpn_1x_coco.py deleted file mode 100644 index 769472352d06a8f2c30d73ae1f57c393f77adfa2..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/guided_anchoring/ga_retinanet_r50_fpn_1x_coco.py +++ /dev/null @@ -1,62 +0,0 @@ -_base_ = '../retinanet/retinanet_r50_fpn_1x_coco.py' -model = dict( - bbox_head=dict( - _delete_=True, - type='GARetinaHead', - num_classes=80, - in_channels=256, - stacked_convs=4, - feat_channels=256, - approx_anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - square_anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - scales=[4], - strides=[8, 16, 32, 64, 128]), - anchor_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loc_filter_thr=0.01, - loss_loc=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_shape=dict(type='BoundedIoULoss', beta=0.2, loss_weight=1.0), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=0.04, loss_weight=1.0)), - # training and testing settings - train_cfg=dict( - 
ga_assigner=dict( - type='ApproxMaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.4, - min_pos_iou=0.4, - ignore_iof_thr=-1), - ga_sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - assigner=dict(neg_iou_thr=0.5, min_pos_iou=0.0), - center_ratio=0.2, - ignore_ratio=0.5)) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/ld/ld_r34_gflv1_r101_fpn_coco_1x.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/ld/ld_r34_gflv1_r101_fpn_coco_1x.py deleted file mode 100644 index 905651d1f1d7cd956147111bba6d427e59ce1895..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/ld/ld_r34_gflv1_r101_fpn_coco_1x.py +++ /dev/null @@ -1,19 +0,0 @@ -_base_ = ['./ld_r18_gflv1_r101_fpn_coco_1x.py'] -model = dict( - pretrained='torchvision://resnet34', - backbone=dict( - type='ResNet', - depth=34, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[64, 128, 256, 512], - out_channels=256, - start_level=1, - add_extra_convs='on_output', - num_outs=5)) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/mask_rcnn/mask_rcnn_x101_64x4d_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/mask_rcnn/mask_rcnn_x101_64x4d_fpn_1x_coco.py deleted file mode 100644 index 31e5943216f19a87a2f1e6f666efead573f72626..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/mask_rcnn/mask_rcnn_x101_64x4d_fpn_1x_coco.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './mask_rcnn_x101_32x4d_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_64x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=64, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/reppoints/bbox_r50_grid_center_fpn_gn-neck+head_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/reppoints/bbox_r50_grid_center_fpn_gn-neck+head_1x_coco.py deleted file mode 100644 index b24c8db768423de12d1e8582bb26dd71218f52ee..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/reppoints/bbox_r50_grid_center_fpn_gn-neck+head_1x_coco.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './reppoints_moment_r50_fpn_gn-neck+head_1x_coco.py' -model = dict(bbox_head=dict(transform_method='minmax', use_grid_points=True)) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_dense_heads/test_detr_head.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_dense_heads/test_detr_head.py deleted file mode 100644 index 51f97d48363db50c11cd690183ecbef0a5bcfed8..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_models/test_dense_heads/test_detr_head.py +++ /dev/null @@ -1,103 +0,0 @@ -import torch -from mmcv import ConfigDict - -from mmdet.models.dense_heads import DETRHead - - -def test_detr_head_loss(): - """Tests transformer head loss when truth is empty and non-empty.""" - s = 256 - img_metas = [{ - 'img_shape': (s, s, 3), - 'scale_factor': 1, - 'pad_shape': (s, s, 3), - 'batch_input_shape': (s, s) - }] 
- config = ConfigDict( - dict( - type='DETRHead', - num_classes=80, - in_channels=200, - transformer=dict( - type='Transformer', - encoder=dict( - type='DetrTransformerEncoder', - num_layers=6, - transformerlayers=dict( - type='BaseTransformerLayer', - attn_cfgs=[ - dict( - type='MultiheadAttention', - embed_dims=256, - num_heads=8, - dropout=0.1) - ], - feedforward_channels=2048, - ffn_dropout=0.1, - operation_order=('self_attn', 'norm', 'ffn', 'norm'))), - decoder=dict( - type='DetrTransformerDecoder', - return_intermediate=True, - num_layers=6, - transformerlayers=dict( - type='DetrTransformerDecoderLayer', - attn_cfgs=dict( - type='MultiheadAttention', - embed_dims=256, - num_heads=8, - dropout=0.1), - feedforward_channels=2048, - ffn_dropout=0.1, - operation_order=('self_attn', 'norm', 'cross_attn', - 'norm', 'ffn', 'norm')), - )), - positional_encoding=dict( - type='SinePositionalEncoding', num_feats=128, normalize=True), - loss_cls=dict( - type='CrossEntropyLoss', - bg_cls_weight=0.1, - use_sigmoid=False, - loss_weight=1.0, - class_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=5.0), - loss_iou=dict(type='GIoULoss', loss_weight=2.0))) - - self = DETRHead(**config) - self.init_weights() - feat = [torch.rand(1, 200, 10, 10)] - cls_scores, bbox_preds = self.forward(feat, img_metas) - # Test that empty ground truth encourages the network to predict background - gt_bboxes = [torch.empty((0, 4))] - gt_labels = [torch.LongTensor([])] - gt_bboxes_ignore = None - empty_gt_losses = self.loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, - img_metas, gt_bboxes_ignore) - # When there is no truth, the cls loss should be nonzero but there should - # be no box loss. - for key, loss in empty_gt_losses.items(): - if 'cls' in key: - assert loss.item() > 0, 'cls loss should be non-zero' - elif 'bbox' in key: - assert loss.item( - ) == 0, 'there should be no box loss when there are no true boxes' - elif 'iou' in key: - assert loss.item( - ) == 0, 'there should be no iou loss when there are no true boxes' - - # When truth is non-empty then both cls and box loss should be nonzero for - # random inputs - gt_bboxes = [ - torch.Tensor([[23.6667, 23.8757, 238.6326, 151.8874]]), - ] - gt_labels = [torch.LongTensor([2])] - one_gt_losses = self.loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, - img_metas, gt_bboxes_ignore) - for loss in one_gt_losses.values(): - assert loss.item( - ) > 0, 'cls loss, or box loss, or iou loss should be non-zero' - - # test forward_train - self.forward_train(feat, img_metas, gt_bboxes, gt_labels) - - # test inference mode - self.get_bboxes(cls_scores, bbox_preds, img_metas, rescale=True) diff --git a/spaces/tonyassi/video-face-swap/DeepFakeAI/uis/components/settings.py b/spaces/tonyassi/video-face-swap/DeepFakeAI/uis/components/settings.py deleted file mode 100644 index ec5c30b023f0ea5563a58dbaa5ea993a53ffba86..0000000000000000000000000000000000000000 --- a/spaces/tonyassi/video-face-swap/DeepFakeAI/uis/components/settings.py +++ /dev/null @@ -1,41 +0,0 @@ -from typing import Optional -import gradio - -import DeepFakeAI.globals -from DeepFakeAI import wording -from DeepFakeAI.uis.typing import Update - -KEEP_FPS_CHECKBOX : Optional[gradio.Checkbox] = None -KEEP_TEMP_CHECKBOX : Optional[gradio.Checkbox] = None -SKIP_AUDIO_CHECKBOX : Optional[gradio.Checkbox] = None - - -def render() -> None: - global KEEP_FPS_CHECKBOX - global KEEP_TEMP_CHECKBOX - global SKIP_AUDIO_CHECKBOX - - with gradio.Box(): - KEEP_FPS_CHECKBOX = gradio.Checkbox( - label = 
wording.get('keep_fps_checkbox_label'), - value = DeepFakeAI.globals.keep_fps - ) - KEEP_TEMP_CHECKBOX = gradio.Checkbox( - label = wording.get('keep_temp_checkbox_label'), - value = DeepFakeAI.globals.keep_temp - ) - SKIP_AUDIO_CHECKBOX = gradio.Checkbox( - label = wording.get('skip_audio_checkbox_label'), - value = DeepFakeAI.globals.skip_audio - ) - - -def listen() -> None: - KEEP_FPS_CHECKBOX.change(lambda value: update_checkbox('keep_fps', value), inputs = KEEP_FPS_CHECKBOX, outputs = KEEP_FPS_CHECKBOX) - KEEP_TEMP_CHECKBOX.change(lambda value: update_checkbox('keep_temp', value), inputs = KEEP_TEMP_CHECKBOX, outputs = KEEP_TEMP_CHECKBOX) - SKIP_AUDIO_CHECKBOX.change(lambda value: update_checkbox('skip_audio', value), inputs = SKIP_AUDIO_CHECKBOX, outputs = SKIP_AUDIO_CHECKBOX) - - -def update_checkbox(name : str, value: bool) -> Update: - setattr(DeepFakeAI.globals, name, value) - return gradio.update(value = value) diff --git a/spaces/trholding/SpeechCloning/README.md b/spaces/trholding/SpeechCloning/README.md deleted file mode 100644 index 0cdf82014fa4b2057276fa82597d65a85ab9dfb4..0000000000000000000000000000000000000000 --- a/spaces/trholding/SpeechCloning/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: SpeechCloning -emoji: 🧬🦜 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 2.7.5.2 -app_file: app.py -pinned: false -license: mit -duplicated_from: Flux9665/SpeechCloning ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/trysem/Colorizer_Models/app.py b/spaces/trysem/Colorizer_Models/app.py deleted file mode 100644 index 06a64f64713c4a70bec6a83a4905912d575a7cf0..0000000000000000000000000000000000000000 --- a/spaces/trysem/Colorizer_Models/app.py +++ /dev/null @@ -1,143 +0,0 @@ -import gradio as gr -import numpy as np -import colorizers as c - -from colorizers.util import postprocess_tens, preprocess_img - -def interface(image, model: str = "eccv16"): - if model == "eccv16": - img = c.eccv16(pretrained=True).eval() - else: - img = c.siggraph17(pretrained=True).eval() - oimg = np.asarray(image) - if(oimg.ndim == 2): - oimg = np.tile(oimg[:,:,None], 3) - (tens_l_orig, tens_l_rs) = preprocess_img(oimg) - - output_img = postprocess_tens( - tens_l_orig, - img(tens_l_rs).cpu() - ) - return output_img - -css=''' -.Box { - background-color: var(--color-canvas-default); - border-color: var(--color-border-default); - border-style: solid; - border-width: 1px; - border-radius: 6px; -} -.d-flex { - display: flex !important; -} -.flex-md-row { - flex-direction: row !important; -} -.flex-column { - flex-direction: column !important; -} -''' -title = "Image Colorization Using AI Models" -description = r"""
An automatic colorization functionality based on Real-Time User-Guided Image Colorization with Learned Deep Priors, ECCV16 & SIGGRAPH 2017 Models!
-In practice, the algorithm is used to COLORIZE your **old BLACK & WHITE / GRAYSCALE photos**.
-To use it, simply upload the image you want to colorize.
          -""" -article = r""" -

Given a grayscale photograph as input, this demo attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. A fully automatic approach has been proposed that produces vibrant and realistic colorizations. The underlying uncertainty of the problem was embraced by posing it as a classification task and using class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. The algorithm is evaluated using a "colorization Turing test," asking human participants to choose between a generated and a ground-truth color image. The method used here successfully fools humans on 32% of the trials, a rate significantly higher than that of other photo automation tools. Moreover, colorization can serve as a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks.
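As a rough illustration of how this pipeline can be driven from a script, the sketch below colorizes a single photo with the same `colorizers` package and helper functions this demo imports. It is a minimal example, not part of the original app: "photo_bw.jpg" is a placeholder path, and the pretrained weights are assumed to download on first use.

```python
# Minimal sketch: colorize one image with the same pipeline this demo uses.
# Assumes the `colorizers` package (richzhang/colorization) is installed;
# "photo_bw.jpg" is a placeholder input path.
import numpy as np
from PIL import Image

import colorizers as c
from colorizers.util import postprocess_tens, preprocess_img

model = c.eccv16(pretrained=True).eval()      # or c.siggraph17(pretrained=True)

img = np.asarray(Image.open("photo_bw.jpg"))
if img.ndim == 2:                             # single-channel input -> 3 channels
    img = np.tile(img[:, :, None], 3)

tens_l_orig, tens_l_rs = preprocess_img(img)  # L channel at original and resized resolution
out = postprocess_tens(tens_l_orig, model(tens_l_rs).cpu())  # original L + predicted ab

Image.fromarray((out * 255).astype(np.uint8)).save("photo_color.png")
```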

          -Teaser Image -

          - -

          - -
          -

          -

          LICENSE

          -

          -

          BSD 2-Clause "Simplified" License

          -

          Permissions

          -
            -
• Commercial use
• Modification
• Distribution
• Private use
          -

          Limitations

          -
            -
• Liability
• Warranty
          -

          Conditions

          -
            -
• License and copyright notice
          -
For the full list of restrictions, please read the license -

          -
          -
          - visitor badge -
          -""" - -#with gr.Interface(css=css) as mainBody: -gr.HTML("""""") - -mainBody = gr.Interface( - interface, - [ - gr.components.Image(type="pil", label="image"), - gr.components.Radio( - ["eccv16", "siggraph17"], - type="value", - label="model" - ) - ], - [ - gr.components.Image(label="output") - ], - #inputs="sketchpad", - #outputs="label", - theme="huggingface", - title=title, - description=description, - article=article, - live=True, -) -mainBody.launch() \ No newline at end of file diff --git a/spaces/tsi-org/LLaVA/llava/model/language_model/mpt/adapt_tokenizer.py b/spaces/tsi-org/LLaVA/llava/model/language_model/mpt/adapt_tokenizer.py deleted file mode 100644 index e640c157e8f5581953c518df0611a423225ef598..0000000000000000000000000000000000000000 --- a/spaces/tsi-org/LLaVA/llava/model/language_model/mpt/adapt_tokenizer.py +++ /dev/null @@ -1,41 +0,0 @@ -from typing import Union -from transformers import AutoTokenizer, PreTrainedTokenizer, PreTrainedTokenizerFast -Tokenizer = Union[PreTrainedTokenizer, PreTrainedTokenizerFast] -NUM_SENTINEL_TOKENS: int = 100 - -def adapt_tokenizer_for_denoising(tokenizer: Tokenizer): - """Adds sentinel tokens and padding token (if missing). - - Expands the tokenizer vocabulary to include sentinel tokens - used in mixture-of-denoiser tasks as well as a padding token. - - All added tokens are added as special tokens. No tokens are - added if sentinel tokens and padding token already exist. - """ - sentinels_to_add = [f'' for i in range(NUM_SENTINEL_TOKENS)] - tokenizer.add_tokens(sentinels_to_add, special_tokens=True) - if tokenizer.pad_token is None: - tokenizer.add_tokens('', special_tokens=True) - tokenizer.pad_token = '' - assert tokenizer.pad_token_id is not None - sentinels = ''.join([f'' for i in range(NUM_SENTINEL_TOKENS)]) - _sentinel_token_ids = tokenizer(sentinels, add_special_tokens=False).input_ids - tokenizer.sentinel_token_ids = _sentinel_token_ids - -class AutoTokenizerForMOD(AutoTokenizer): - """AutoTokenizer + Adaptation for MOD. - - A simple wrapper around AutoTokenizer to make instantiating - an MOD-adapted tokenizer a bit easier. - - MOD-adapted tokenizers have sentinel tokens (e.g., ), - a padding token, and a property to get the token ids of the - sentinel tokens. 
- """ - - @classmethod - def from_pretrained(cls, *args, **kwargs): - """See `AutoTokenizer.from_pretrained` docstring.""" - tokenizer = super().from_pretrained(*args, **kwargs) - adapt_tokenizer_for_denoising(tokenizer) - return tokenizer \ No newline at end of file diff --git a/spaces/tsi-org/LLaVA/llava/model/language_model/mpt/custom_embedding.py b/spaces/tsi-org/LLaVA/llava/model/language_model/mpt/custom_embedding.py deleted file mode 100644 index ab357952c397f47898863e8405c4958bb8de82fd..0000000000000000000000000000000000000000 --- a/spaces/tsi-org/LLaVA/llava/model/language_model/mpt/custom_embedding.py +++ /dev/null @@ -1,11 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch import Tensor - -class SharedEmbedding(nn.Embedding): - - def forward(self, input: Tensor, unembed: bool=False) -> Tensor: - if unembed: - return F.linear(input, self.weight) - return super().forward(input) \ No newline at end of file diff --git a/spaces/udion/BayesCap/networks_SRGAN.py b/spaces/udion/BayesCap/networks_SRGAN.py deleted file mode 100644 index cd8a30dd8deecde53f527fb81c91b78409abc390..0000000000000000000000000000000000000000 --- a/spaces/udion/BayesCap/networks_SRGAN.py +++ /dev/null @@ -1,347 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchvision.models as models -from torch import Tensor - -# __all__ = [ -# "ResidualConvBlock", -# "Discriminator", "Generator", -# ] - - -class ResidualConvBlock(nn.Module): - """Implements residual conv function. - - Args: - channels (int): Number of channels in the input image. - """ - - def __init__(self, channels: int) -> None: - super(ResidualConvBlock, self).__init__() - self.rcb = nn.Sequential( - nn.Conv2d(channels, channels, (3, 3), (1, 1), (1, 1), bias=False), - nn.BatchNorm2d(channels), - nn.PReLU(), - nn.Conv2d(channels, channels, (3, 3), (1, 1), (1, 1), bias=False), - nn.BatchNorm2d(channels), - ) - - def forward(self, x: Tensor) -> Tensor: - identity = x - - out = self.rcb(x) - out = torch.add(out, identity) - - return out - - -class Discriminator(nn.Module): - def __init__(self) -> None: - super(Discriminator, self).__init__() - self.features = nn.Sequential( - # input size. (3) x 96 x 96 - nn.Conv2d(3, 64, (3, 3), (1, 1), (1, 1), bias=False), - nn.LeakyReLU(0.2, True), - # state size. (64) x 48 x 48 - nn.Conv2d(64, 64, (3, 3), (2, 2), (1, 1), bias=False), - nn.BatchNorm2d(64), - nn.LeakyReLU(0.2, True), - nn.Conv2d(64, 128, (3, 3), (1, 1), (1, 1), bias=False), - nn.BatchNorm2d(128), - nn.LeakyReLU(0.2, True), - # state size. (128) x 24 x 24 - nn.Conv2d(128, 128, (3, 3), (2, 2), (1, 1), bias=False), - nn.BatchNorm2d(128), - nn.LeakyReLU(0.2, True), - nn.Conv2d(128, 256, (3, 3), (1, 1), (1, 1), bias=False), - nn.BatchNorm2d(256), - nn.LeakyReLU(0.2, True), - # state size. (256) x 12 x 12 - nn.Conv2d(256, 256, (3, 3), (2, 2), (1, 1), bias=False), - nn.BatchNorm2d(256), - nn.LeakyReLU(0.2, True), - nn.Conv2d(256, 512, (3, 3), (1, 1), (1, 1), bias=False), - nn.BatchNorm2d(512), - nn.LeakyReLU(0.2, True), - # state size. 
(512) x 6 x 6 - nn.Conv2d(512, 512, (3, 3), (2, 2), (1, 1), bias=False), - nn.BatchNorm2d(512), - nn.LeakyReLU(0.2, True), - ) - - self.classifier = nn.Sequential( - nn.Linear(512 * 6 * 6, 1024), - nn.LeakyReLU(0.2, True), - nn.Linear(1024, 1), - ) - - def forward(self, x: Tensor) -> Tensor: - out = self.features(x) - out = torch.flatten(out, 1) - out = self.classifier(out) - - return out - - -class Generator(nn.Module): - def __init__(self) -> None: - super(Generator, self).__init__() - # First conv layer. - self.conv_block1 = nn.Sequential( - nn.Conv2d(3, 64, (9, 9), (1, 1), (4, 4)), - nn.PReLU(), - ) - - # Features trunk blocks. - trunk = [] - for _ in range(16): - trunk.append(ResidualConvBlock(64)) - self.trunk = nn.Sequential(*trunk) - - # Second conv layer. - self.conv_block2 = nn.Sequential( - nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1), bias=False), - nn.BatchNorm2d(64), - ) - - # Upscale conv block. - self.upsampling = nn.Sequential( - nn.Conv2d(64, 256, (3, 3), (1, 1), (1, 1)), - nn.PixelShuffle(2), - nn.PReLU(), - nn.Conv2d(64, 256, (3, 3), (1, 1), (1, 1)), - nn.PixelShuffle(2), - nn.PReLU(), - ) - - # Output layer. - self.conv_block3 = nn.Conv2d(64, 3, (9, 9), (1, 1), (4, 4)) - - # Initialize neural network weights. - self._initialize_weights() - - def forward(self, x: Tensor, dop=None) -> Tensor: - if not dop: - return self._forward_impl(x) - else: - return self._forward_w_dop_impl(x, dop) - - # Support torch.script function. - def _forward_impl(self, x: Tensor) -> Tensor: - out1 = self.conv_block1(x) - out = self.trunk(out1) - out2 = self.conv_block2(out) - out = torch.add(out1, out2) - out = self.upsampling(out) - out = self.conv_block3(out) - - return out - - def _forward_w_dop_impl(self, x: Tensor, dop) -> Tensor: - out1 = self.conv_block1(x) - out = self.trunk(out1) - out2 = F.dropout2d(self.conv_block2(out), p=dop) - out = torch.add(out1, out2) - out = self.upsampling(out) - out = self.conv_block3(out) - - return out - - def _initialize_weights(self) -> None: - for module in self.modules(): - if isinstance(module, nn.Conv2d): - nn.init.kaiming_normal_(module.weight) - if module.bias is not None: - nn.init.constant_(module.bias, 0) - elif isinstance(module, nn.BatchNorm2d): - nn.init.constant_(module.weight, 1) - - -#### BayesCap -class BayesCap(nn.Module): - def __init__(self, in_channels=3, out_channels=3) -> None: - super(BayesCap, self).__init__() - # First conv layer. - self.conv_block1 = nn.Sequential( - nn.Conv2d( - in_channels, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - ) - - # Features trunk blocks. - trunk = [] - for _ in range(16): - trunk.append(ResidualConvBlock(64)) - self.trunk = nn.Sequential(*trunk) - - # Second conv layer. - self.conv_block2 = nn.Sequential( - nn.Conv2d( - 64, 64, - kernel_size=3, stride=1, padding=1, bias=False - ), - nn.BatchNorm2d(64), - ) - - # Output layer. 
- self.conv_block3_mu = nn.Conv2d( - 64, out_channels=out_channels, - kernel_size=9, stride=1, padding=4 - ) - self.conv_block3_alpha = nn.Sequential( - nn.Conv2d( - 64, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - nn.Conv2d( - 64, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - nn.Conv2d( - 64, 1, - kernel_size=9, stride=1, padding=4 - ), - nn.ReLU(), - ) - self.conv_block3_beta = nn.Sequential( - nn.Conv2d( - 64, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - nn.Conv2d( - 64, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - nn.Conv2d( - 64, 1, - kernel_size=9, stride=1, padding=4 - ), - nn.ReLU(), - ) - - # Initialize neural network weights. - self._initialize_weights() - - def forward(self, x: Tensor) -> Tensor: - return self._forward_impl(x) - - # Support torch.script function. - def _forward_impl(self, x: Tensor) -> Tensor: - out1 = self.conv_block1(x) - out = self.trunk(out1) - out2 = self.conv_block2(out) - out = out1 + out2 - out_mu = self.conv_block3_mu(out) - out_alpha = self.conv_block3_alpha(out) - out_beta = self.conv_block3_beta(out) - return out_mu, out_alpha, out_beta - - def _initialize_weights(self) -> None: - for module in self.modules(): - if isinstance(module, nn.Conv2d): - nn.init.kaiming_normal_(module.weight) - if module.bias is not None: - nn.init.constant_(module.bias, 0) - elif isinstance(module, nn.BatchNorm2d): - nn.init.constant_(module.weight, 1) - - -class BayesCap_noID(nn.Module): - def __init__(self, in_channels=3, out_channels=3) -> None: - super(BayesCap_noID, self).__init__() - # First conv layer. - self.conv_block1 = nn.Sequential( - nn.Conv2d( - in_channels, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - ) - - # Features trunk blocks. - trunk = [] - for _ in range(16): - trunk.append(ResidualConvBlock(64)) - self.trunk = nn.Sequential(*trunk) - - # Second conv layer. - self.conv_block2 = nn.Sequential( - nn.Conv2d( - 64, 64, - kernel_size=3, stride=1, padding=1, bias=False - ), - nn.BatchNorm2d(64), - ) - - # Output layer. - # self.conv_block3_mu = nn.Conv2d( - # 64, out_channels=out_channels, - # kernel_size=9, stride=1, padding=4 - # ) - self.conv_block3_alpha = nn.Sequential( - nn.Conv2d( - 64, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - nn.Conv2d( - 64, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - nn.Conv2d( - 64, 1, - kernel_size=9, stride=1, padding=4 - ), - nn.ReLU(), - ) - self.conv_block3_beta = nn.Sequential( - nn.Conv2d( - 64, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - nn.Conv2d( - 64, 64, - kernel_size=9, stride=1, padding=4 - ), - nn.PReLU(), - nn.Conv2d( - 64, 1, - kernel_size=9, stride=1, padding=4 - ), - nn.ReLU(), - ) - - # Initialize neural network weights. - self._initialize_weights() - - def forward(self, x: Tensor) -> Tensor: - return self._forward_impl(x) - - # Support torch.script function. 
- def _forward_impl(self, x: Tensor) -> Tensor: - out1 = self.conv_block1(x) - out = self.trunk(out1) - out2 = self.conv_block2(out) - out = out1 + out2 - # out_mu = self.conv_block3_mu(out) - out_alpha = self.conv_block3_alpha(out) - out_beta = self.conv_block3_beta(out) - return out_alpha, out_beta - - def _initialize_weights(self) -> None: - for module in self.modules(): - if isinstance(module, nn.Conv2d): - nn.init.kaiming_normal_(module.weight) - if module.bias is not None: - nn.init.constant_(module.bias, 0) - elif isinstance(module, nn.BatchNorm2d): - nn.init.constant_(module.weight, 1) \ No newline at end of file diff --git a/spaces/udion/BayesCap/src/ds.py b/spaces/udion/BayesCap/src/ds.py deleted file mode 100644 index 1fd82434bac595aad5e9cb78b6c755a2acaf92eb..0000000000000000000000000000000000000000 --- a/spaces/udion/BayesCap/src/ds.py +++ /dev/null @@ -1,485 +0,0 @@ -from __future__ import absolute_import, division, print_function - -import random -import copy -import io -import os -import numpy as np -from PIL import Image -import skimage.transform -from collections import Counter - - -import torch -import torch.utils.data as data -from torch import Tensor -from torch.utils.data import Dataset -from torchvision import transforms -from torchvision.transforms.functional import InterpolationMode as IMode - -import utils - -class ImgDset(Dataset): - """Customize the data set loading function and prepare low/high resolution image data in advance. - - Args: - dataroot (str): Training data set address - image_size (int): High resolution image size - upscale_factor (int): Image magnification - mode (str): Data set loading method, the training data set is for data enhancement, - and the verification data set is not for data enhancement - - """ - - def __init__(self, dataroot: str, image_size: int, upscale_factor: int, mode: str) -> None: - super(ImgDset, self).__init__() - self.filenames = [os.path.join(dataroot, x) for x in os.listdir(dataroot)] - - if mode == "train": - self.hr_transforms = transforms.Compose([ - transforms.RandomCrop(image_size), - transforms.RandomRotation(90), - transforms.RandomHorizontalFlip(0.5), - ]) - else: - self.hr_transforms = transforms.Resize(image_size) - - self.lr_transforms = transforms.Resize((image_size[0]//upscale_factor, image_size[1]//upscale_factor), interpolation=IMode.BICUBIC, antialias=True) - - def __getitem__(self, batch_index: int) -> [Tensor, Tensor]: - # Read a batch of image data - image = Image.open(self.filenames[batch_index]) - - # Transform image - hr_image = self.hr_transforms(image) - lr_image = self.lr_transforms(hr_image) - - # Convert image data into Tensor stream format (PyTorch). 
- # Note: The range of input and output is between [0, 1] - lr_tensor = utils.image2tensor(lr_image, range_norm=False, half=False) - hr_tensor = utils.image2tensor(hr_image, range_norm=False, half=False) - - return lr_tensor, hr_tensor - - def __len__(self) -> int: - return len(self.filenames) - - -class PairedImages_w_nameList(Dataset): - ''' - can act as supervised or un-supervised based on flists - ''' - def __init__(self, flist1, flist2, transform1=None, transform2=None, do_aug=False): - self.flist1 = flist1 - self.flist2 = flist2 - self.transform1 = transform1 - self.transform2 = transform2 - self.do_aug = do_aug - def __getitem__(self, index): - impath1 = self.flist1[index] - img1 = Image.open(impath1).convert('RGB') - impath2 = self.flist2[index] - img2 = Image.open(impath2).convert('RGB') - - img1 = utils.image2tensor(img1, range_norm=False, half=False) - img2 = utils.image2tensor(img2, range_norm=False, half=False) - - if self.transform1 is not None: - img1 = self.transform1(img1) - if self.transform2 is not None: - img2 = self.transform2(img2) - - return img1, img2 - def __len__(self): - return len(self.flist1) - -class PairedImages_w_nameList_npy(Dataset): - ''' - can act as supervised or un-supervised based on flists - ''' - def __init__(self, flist1, flist2, transform1=None, transform2=None, do_aug=False): - self.flist1 = flist1 - self.flist2 = flist2 - self.transform1 = transform1 - self.transform2 = transform2 - self.do_aug = do_aug - def __getitem__(self, index): - impath1 = self.flist1[index] - img1 = np.load(impath1) - impath2 = self.flist2[index] - img2 = np.load(impath2) - - if self.transform1 is not None: - img1 = self.transform1(img1) - if self.transform2 is not None: - img2 = self.transform2(img2) - - return img1, img2 - def __len__(self): - return len(self.flist1) - -# def call_paired(): -# root1='./GOPRO_3840FPS_AVG_3-21/train/blur/' -# root2='./GOPRO_3840FPS_AVG_3-21/train/sharp/' - -# flist1=glob.glob(root1+'/*/*.png') -# flist2=glob.glob(root2+'/*/*.png') - -# dset = PairedImages_w_nameList(root1,root2,flist1,flist2) - -#### KITTI depth - -def load_velodyne_points(filename): - """Load 3D point cloud from KITTI file format - (adapted from https://github.com/hunse/kitti) - """ - points = np.fromfile(filename, dtype=np.float32).reshape(-1, 4) - points[:, 3] = 1.0 # homogeneous - return points - - -def read_calib_file(path): - """Read KITTI calibration file - (from https://github.com/hunse/kitti) - """ - float_chars = set("0123456789.e+- ") - data = {} - with open(path, 'r') as f: - for line in f.readlines(): - key, value = line.split(':', 1) - value = value.strip() - data[key] = value - if float_chars.issuperset(value): - # try to cast to float array - try: - data[key] = np.array(list(map(float, value.split(' ')))) - except ValueError: - # casting error: data[key] already eq. 
value, so pass - pass - - return data - - -def sub2ind(matrixSize, rowSub, colSub): - """Convert row, col matrix subscripts to linear indices - """ - m, n = matrixSize - return rowSub * (n-1) + colSub - 1 - - -def generate_depth_map(calib_dir, velo_filename, cam=2, vel_depth=False): - """Generate a depth map from velodyne data - """ - # load calibration files - cam2cam = read_calib_file(os.path.join(calib_dir, 'calib_cam_to_cam.txt')) - velo2cam = read_calib_file(os.path.join(calib_dir, 'calib_velo_to_cam.txt')) - velo2cam = np.hstack((velo2cam['R'].reshape(3, 3), velo2cam['T'][..., np.newaxis])) - velo2cam = np.vstack((velo2cam, np.array([0, 0, 0, 1.0]))) - - # get image shape - im_shape = cam2cam["S_rect_02"][::-1].astype(np.int32) - - # compute projection matrix velodyne->image plane - R_cam2rect = np.eye(4) - R_cam2rect[:3, :3] = cam2cam['R_rect_00'].reshape(3, 3) - P_rect = cam2cam['P_rect_0'+str(cam)].reshape(3, 4) - P_velo2im = np.dot(np.dot(P_rect, R_cam2rect), velo2cam) - - # load velodyne points and remove all behind image plane (approximation) - # each row of the velodyne data is forward, left, up, reflectance - velo = load_velodyne_points(velo_filename) - velo = velo[velo[:, 0] >= 0, :] - - # project the points to the camera - velo_pts_im = np.dot(P_velo2im, velo.T).T - velo_pts_im[:, :2] = velo_pts_im[:, :2] / velo_pts_im[:, 2][..., np.newaxis] - - if vel_depth: - velo_pts_im[:, 2] = velo[:, 0] - - # check if in bounds - # use minus 1 to get the exact same value as KITTI matlab code - velo_pts_im[:, 0] = np.round(velo_pts_im[:, 0]) - 1 - velo_pts_im[:, 1] = np.round(velo_pts_im[:, 1]) - 1 - val_inds = (velo_pts_im[:, 0] >= 0) & (velo_pts_im[:, 1] >= 0) - val_inds = val_inds & (velo_pts_im[:, 0] < im_shape[1]) & (velo_pts_im[:, 1] < im_shape[0]) - velo_pts_im = velo_pts_im[val_inds, :] - - # project to image - depth = np.zeros((im_shape[:2])) - depth[velo_pts_im[:, 1].astype(np.int), velo_pts_im[:, 0].astype(np.int)] = velo_pts_im[:, 2] - - # find the duplicate points and choose the closest depth - inds = sub2ind(depth.shape, velo_pts_im[:, 1], velo_pts_im[:, 0]) - dupe_inds = [item for item, count in Counter(inds).items() if count > 1] - for dd in dupe_inds: - pts = np.where(inds == dd)[0] - x_loc = int(velo_pts_im[pts[0], 0]) - y_loc = int(velo_pts_im[pts[0], 1]) - depth[y_loc, x_loc] = velo_pts_im[pts, 2].min() - depth[depth < 0] = 0 - - return depth - -def pil_loader(path): - # open path as file to avoid ResourceWarning - # (https://github.com/python-pillow/Pillow/issues/835) - with open(path, 'rb') as f: - with Image.open(f) as img: - return img.convert('RGB') - - -class MonoDataset(data.Dataset): - """Superclass for monocular dataloaders - - Args: - data_path - filenames - height - width - frame_idxs - num_scales - is_train - img_ext - """ - def __init__(self, - data_path, - filenames, - height, - width, - frame_idxs, - num_scales, - is_train=False, - img_ext='.jpg'): - super(MonoDataset, self).__init__() - - self.data_path = data_path - self.filenames = filenames - self.height = height - self.width = width - self.num_scales = num_scales - self.interp = Image.ANTIALIAS - - self.frame_idxs = frame_idxs - - self.is_train = is_train - self.img_ext = img_ext - - self.loader = pil_loader - self.to_tensor = transforms.ToTensor() - - # We need to specify augmentations differently in newer versions of torchvision. 
- # We first try the newer tuple version; if this fails we fall back to scalars - try: - self.brightness = (0.8, 1.2) - self.contrast = (0.8, 1.2) - self.saturation = (0.8, 1.2) - self.hue = (-0.1, 0.1) - transforms.ColorJitter.get_params( - self.brightness, self.contrast, self.saturation, self.hue) - except TypeError: - self.brightness = 0.2 - self.contrast = 0.2 - self.saturation = 0.2 - self.hue = 0.1 - - self.resize = {} - for i in range(self.num_scales): - s = 2 ** i - self.resize[i] = transforms.Resize((self.height // s, self.width // s), - interpolation=self.interp) - - self.load_depth = self.check_depth() - - def preprocess(self, inputs, color_aug): - """Resize colour images to the required scales and augment if required - - We create the color_aug object in advance and apply the same augmentation to all - images in this item. This ensures that all images input to the pose network receive the - same augmentation. - """ - for k in list(inputs): - frame = inputs[k] - if "color" in k: - n, im, i = k - for i in range(self.num_scales): - inputs[(n, im, i)] = self.resize[i](inputs[(n, im, i - 1)]) - - for k in list(inputs): - f = inputs[k] - if "color" in k: - n, im, i = k - inputs[(n, im, i)] = self.to_tensor(f) - inputs[(n + "_aug", im, i)] = self.to_tensor(color_aug(f)) - - def __len__(self): - return len(self.filenames) - - def __getitem__(self, index): - """Returns a single training item from the dataset as a dictionary. - - Values correspond to torch tensors. - Keys in the dictionary are either strings or tuples: - - ("color", , ) for raw colour images, - ("color_aug", , ) for augmented colour images, - ("K", scale) or ("inv_K", scale) for camera intrinsics, - "stereo_T" for camera extrinsics, and - "depth_gt" for ground truth depth maps. - - is either: - an integer (e.g. 0, -1, or 1) representing the temporal step relative to 'index', - or - "s" for the opposite image in the stereo pair. 
- - is an integer representing the scale of the image relative to the fullsize image: - -1 images at native resolution as loaded from disk - 0 images resized to (self.width, self.height ) - 1 images resized to (self.width // 2, self.height // 2) - 2 images resized to (self.width // 4, self.height // 4) - 3 images resized to (self.width // 8, self.height // 8) - """ - inputs = {} - - do_color_aug = self.is_train and random.random() > 0.5 - do_flip = self.is_train and random.random() > 0.5 - - line = self.filenames[index].split() - folder = line[0] - - if len(line) == 3: - frame_index = int(line[1]) - else: - frame_index = 0 - - if len(line) == 3: - side = line[2] - else: - side = None - - for i in self.frame_idxs: - if i == "s": - other_side = {"r": "l", "l": "r"}[side] - inputs[("color", i, -1)] = self.get_color(folder, frame_index, other_side, do_flip) - else: - inputs[("color", i, -1)] = self.get_color(folder, frame_index + i, side, do_flip) - - # adjusting intrinsics to match each scale in the pyramid - for scale in range(self.num_scales): - K = self.K.copy() - - K[0, :] *= self.width // (2 ** scale) - K[1, :] *= self.height // (2 ** scale) - - inv_K = np.linalg.pinv(K) - - inputs[("K", scale)] = torch.from_numpy(K) - inputs[("inv_K", scale)] = torch.from_numpy(inv_K) - - if do_color_aug: - color_aug = transforms.ColorJitter.get_params( - self.brightness, self.contrast, self.saturation, self.hue) - else: - color_aug = (lambda x: x) - - self.preprocess(inputs, color_aug) - - for i in self.frame_idxs: - del inputs[("color", i, -1)] - del inputs[("color_aug", i, -1)] - - if self.load_depth: - depth_gt = self.get_depth(folder, frame_index, side, do_flip) - inputs["depth_gt"] = np.expand_dims(depth_gt, 0) - inputs["depth_gt"] = torch.from_numpy(inputs["depth_gt"].astype(np.float32)) - - if "s" in self.frame_idxs: - stereo_T = np.eye(4, dtype=np.float32) - baseline_sign = -1 if do_flip else 1 - side_sign = -1 if side == "l" else 1 - stereo_T[0, 3] = side_sign * baseline_sign * 0.1 - - inputs["stereo_T"] = torch.from_numpy(stereo_T) - - return inputs - - def get_color(self, folder, frame_index, side, do_flip): - raise NotImplementedError - - def check_depth(self): - raise NotImplementedError - - def get_depth(self, folder, frame_index, side, do_flip): - raise NotImplementedError - -class KITTIDataset(MonoDataset): - """Superclass for different types of KITTI dataset loaders - """ - def __init__(self, *args, **kwargs): - super(KITTIDataset, self).__init__(*args, **kwargs) - - # NOTE: Make sure your intrinsics matrix is *normalized* by the original image size. - # To normalize you need to scale the first row by 1 / image_width and the second row - # by 1 / image_height. Monodepth2 assumes a principal point to be exactly centered. - # If your principal point is far from the center you might need to disable the horizontal - # flip augmentation. 
- self.K = np.array([[0.58, 0, 0.5, 0], - [0, 1.92, 0.5, 0], - [0, 0, 1, 0], - [0, 0, 0, 1]], dtype=np.float32) - - self.full_res_shape = (1242, 375) - self.side_map = {"2": 2, "3": 3, "l": 2, "r": 3} - - def check_depth(self): - line = self.filenames[0].split() - scene_name = line[0] - frame_index = int(line[1]) - - velo_filename = os.path.join( - self.data_path, - scene_name, - "velodyne_points/data/{:010d}.bin".format(int(frame_index))) - - return os.path.isfile(velo_filename) - - def get_color(self, folder, frame_index, side, do_flip): - color = self.loader(self.get_image_path(folder, frame_index, side)) - - if do_flip: - color = color.transpose(Image.FLIP_LEFT_RIGHT) - - return color - - -class KITTIDepthDataset(KITTIDataset): - """KITTI dataset which uses the updated ground truth depth maps - """ - def __init__(self, *args, **kwargs): - super(KITTIDepthDataset, self).__init__(*args, **kwargs) - - def get_image_path(self, folder, frame_index, side): - f_str = "{:010d}{}".format(frame_index, self.img_ext) - image_path = os.path.join( - self.data_path, - folder, - "image_0{}/data".format(self.side_map[side]), - f_str) - return image_path - - def get_depth(self, folder, frame_index, side, do_flip): - f_str = "{:010d}.png".format(frame_index) - depth_path = os.path.join( - self.data_path, - folder, - "proj_depth/groundtruth/image_0{}".format(self.side_map[side]), - f_str) - - depth_gt = Image.open(depth_path) - depth_gt = depth_gt.resize(self.full_res_shape, Image.NEAREST) - depth_gt = np.array(depth_gt).astype(np.float32) / 256 - - if do_flip: - depth_gt = np.fliplr(depth_gt) - - return depth_gt \ No newline at end of file diff --git a/spaces/ulysses115/diffsvc_test/network/vocoders/pwg.py b/spaces/ulysses115/diffsvc_test/network/vocoders/pwg.py deleted file mode 100644 index cf2de16f271b66c308c604e52e9ab89242d5663e..0000000000000000000000000000000000000000 --- a/spaces/ulysses115/diffsvc_test/network/vocoders/pwg.py +++ /dev/null @@ -1,137 +0,0 @@ -import glob -import re -import librosa -import torch -import yaml -from sklearn.preprocessing import StandardScaler -from torch import nn -from modules.parallel_wavegan.models import ParallelWaveGANGenerator -from modules.parallel_wavegan.utils import read_hdf5 -from utils.hparams import hparams -from utils.pitch_utils import f0_to_coarse -from network.vocoders.base_vocoder import BaseVocoder, register_vocoder -import numpy as np - - -def load_pwg_model(config_path, checkpoint_path, stats_path): - # load config - with open(config_path, encoding='utf-8') as f: - config = yaml.load(f, Loader=yaml.Loader) - - # setup - if torch.cuda.is_available(): - device = torch.device("cuda") - else: - device = torch.device("cpu") - model = ParallelWaveGANGenerator(**config["generator_params"]) - - ckpt_dict = torch.load(checkpoint_path, map_location="cpu") - if 'state_dict' not in ckpt_dict: # official vocoder - model.load_state_dict(torch.load(checkpoint_path, map_location="cpu")["model"]["generator"]) - scaler = StandardScaler() - if config["format"] == "hdf5": - scaler.mean_ = read_hdf5(stats_path, "mean") - scaler.scale_ = read_hdf5(stats_path, "scale") - elif config["format"] == "npy": - scaler.mean_ = np.load(stats_path)[0] - scaler.scale_ = np.load(stats_path)[1] - else: - raise ValueError("support only hdf5 or npy format.") - else: # custom PWG vocoder - fake_task = nn.Module() - fake_task.model_gen = model - fake_task.load_state_dict(torch.load(checkpoint_path, map_location="cpu")["state_dict"], strict=False) - scaler = None - - 
model.remove_weight_norm() - model = model.eval().to(device) - print(f"| Loaded model parameters from {checkpoint_path}.") - print(f"| PWG device: {device}.") - return model, scaler, config, device - - -@register_vocoder -class PWG(BaseVocoder): - def __init__(self): - if hparams['vocoder_ckpt'] == '': # load LJSpeech PWG pretrained model - base_dir = 'wavegan_pretrained' - ckpts = glob.glob(f'{base_dir}/checkpoint-*steps.pkl') - ckpt = sorted(ckpts, key= - lambda x: int(re.findall(f'{base_dir}/checkpoint-(\d+)steps.pkl', x)[0]))[-1] - config_path = f'{base_dir}/config.yaml' - print('| load PWG: ', ckpt) - self.model, self.scaler, self.config, self.device = load_pwg_model( - config_path=config_path, - checkpoint_path=ckpt, - stats_path=f'{base_dir}/stats.h5', - ) - else: - base_dir = hparams['vocoder_ckpt'] - print(base_dir) - config_path = f'{base_dir}/config.yaml' - ckpt = sorted(glob.glob(f'{base_dir}/model_ckpt_steps_*.ckpt'), key= - lambda x: int(re.findall(f'{base_dir}/model_ckpt_steps_(\d+).ckpt', x)[0]))[-1] - print('| load PWG: ', ckpt) - self.scaler = None - self.model, _, self.config, self.device = load_pwg_model( - config_path=config_path, - checkpoint_path=ckpt, - stats_path=f'{base_dir}/stats.h5', - ) - - def spec2wav(self, mel, **kwargs): - # start generation - config = self.config - device = self.device - pad_size = (config["generator_params"]["aux_context_window"], - config["generator_params"]["aux_context_window"]) - c = mel - if self.scaler is not None: - c = self.scaler.transform(c) - - with torch.no_grad(): - z = torch.randn(1, 1, c.shape[0] * config["hop_size"]).to(device) - c = np.pad(c, (pad_size, (0, 0)), "edge") - c = torch.FloatTensor(c).unsqueeze(0).transpose(2, 1).to(device) - p = kwargs.get('f0') - if p is not None: - p = f0_to_coarse(p) - p = np.pad(p, (pad_size,), "edge") - p = torch.LongTensor(p[None, :]).to(device) - y = self.model(z, c, p).view(-1) - wav_out = y.cpu().numpy() - return wav_out - - @staticmethod - def wav2spec(wav_fn, return_linear=False): - from preprocessing.data_gen_utils import process_utterance - res = process_utterance( - wav_fn, fft_size=hparams['fft_size'], - hop_size=hparams['hop_size'], - win_length=hparams['win_size'], - num_mels=hparams['audio_num_mel_bins'], - fmin=hparams['fmin'], - fmax=hparams['fmax'], - sample_rate=hparams['audio_sample_rate'], - loud_norm=hparams['loud_norm'], - min_level_db=hparams['min_level_db'], - return_linear=return_linear, vocoder='pwg', eps=float(hparams.get('wav2spec_eps', 1e-10))) - if return_linear: - return res[0], res[1].T, res[2].T # [T, 80], [T, n_fft] - else: - return res[0], res[1].T - - @staticmethod - def wav2mfcc(wav_fn): - fft_size = hparams['fft_size'] - hop_size = hparams['hop_size'] - win_length = hparams['win_size'] - sample_rate = hparams['audio_sample_rate'] - wav, _ = librosa.core.load(wav_fn, sr=sample_rate) - mfcc = librosa.feature.mfcc(y=wav, sr=sample_rate, n_mfcc=13, - n_fft=fft_size, hop_length=hop_size, - win_length=win_length, pad_mode="constant", power=1.0) - mfcc_delta = librosa.feature.delta(mfcc, order=1) - mfcc_delta_delta = librosa.feature.delta(mfcc, order=2) - mfcc = np.concatenate([mfcc, mfcc_delta, mfcc_delta_delta]).T - return mfcc diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Cashback Movie In Hindi Free Download !EXCLUSIVE!.md b/spaces/usbethFlerru/sovits-modelsV2/example/Cashback Movie In Hindi Free Download !EXCLUSIVE!.md deleted file mode 100644 index 149b383f04d2bb68e46f76293f64a4cc6e24f26a..0000000000000000000000000000000000000000 --- 
a/spaces/usbethFlerru/sovits-modelsV2/example/Cashback Movie In Hindi Free Download !EXCLUSIVE!.md +++ /dev/null @@ -1,90 +0,0 @@ - -

          Cashback Movie In Hindi Free Download - A Guide for Movie Lovers

          - -

          If you are looking for a romantic comedy with a touch of magic, you might want to check out Cashback, a 2006 British film written and directed by Sean Ellis. The movie stars Sean Biggerstaff as Ben, a young art student who suffers from insomnia after a painful breakup with his girlfriend. To kill time, he starts working the late night shift at a local supermarket, where he discovers that he has the ability to stop time with his imagination. He uses this power to create beautiful artworks and to pursue his crush, Sharon (Emilia Fox), a cashier at the store.

          -

          Cashback Movie In Hindi Free Download


          Download File: https://urlcod.com/2uyVyf



          - -

          Cashback is a charming and quirky film that explores the themes of love, art, and time. It has a rating of 7.1 on IMDb and won several awards at various film festivals. The movie also features some stunning cinematography and a great soundtrack. However, it also contains some nudity and sexual content, so it is not suitable for younger audiences.

          - -

          If you are interested in watching Cashback, you might be wondering how to download it in Hindi for free. Well, there are several websites that offer this service, but you have to be careful about the quality and legality of the sources. Some of them might have low-resolution videos, broken links, or malware that can harm your device. Therefore, it is advisable to use a reliable and safe website that provides high-quality videos and subtitles in Hindi.

          - -

          One of the best websites that we recommend is BioskopKaca21.com, which is a popular platform for streaming and downloading movies from various genres and languages. You can find Cashback on this website in BluRay quality with 480p, 720p, and 1080p options. You can also choose between different file sizes and formats according to your preference. Moreover, you can also download subtitles in Hindi or any other language that you want.

          -

          - -

          To download Cashback in Hindi for free from BioskopKaca21.com, you just have to follow these simple steps:

          - -
            -
          1. Go to the website and search for Cashback in the search bar.
          2. Select the movie from the results and click on the download button.
          3. Choose the quality, size, and format that you want and click on the link.
          4. You will be redirected to another page where you have to verify that you are not a robot.
          5. After that, you will see a countdown timer and a download link.
          6. Wait for the timer to end and click on the download link.
          7. Your download will start automatically and you can enjoy watching Cashback in Hindi for free.
          - -

          We hope that this guide was helpful for you and that you will enjoy watching Cashback in Hindi for free. This movie is a great choice for anyone who loves romance, comedy, and fantasy. It will make you laugh, cry, and think about the meaning of life. So don't miss this opportunity and download Cashback in Hindi for free today!

          -

          Why You Should Watch Cashback Movie In Hindi Free Download

          - -

          Cashback is a movie that will make you laugh, cry, and think about the meaning of life. It is a movie that will inspire you to pursue your passion and to find love in unexpected places. It is a movie that will show you the beauty of art and the power of imagination. It is a movie that you should not miss, especially if you can watch it in Hindi for free.

          - -

          Watching Cashback in Hindi will give you a different perspective on the movie. You will be able to understand the dialogues better and to appreciate the cultural nuances. You will also be able to enjoy the humor and the emotions more. Moreover, watching Cashback in Hindi for free will save you money and time. You will not have to pay for a subscription or a ticket, and you will not have to wait for the movie to be available in your region. You can simply download it from a reliable website and watch it at your convenience.

          - -

          How to Watch Cashback Movie In Hindi Free Download

          - -

          There are many websites that offer Cashback movie in Hindi for free download, but not all of them are trustworthy and legal. Some of them might have low-quality videos, broken links, or malware that can harm your device. Therefore, you need to be careful and choose a website that provides high-quality videos and subtitles in Hindi.

          - -

          One of the best websites that we recommend is JustWatch.com, which is a popular platform for streaming and downloading movies from various genres and languages. You can find Cashback on this website in HD quality with subtitles in Hindi or any other language that you want. You can also choose between different file sizes and formats according to your preference. Moreover, you can also compare the prices and availability of Cashback on different streaming services, such as Netflix, Hotstar, Hooq, etc.

          - -

          To watch Cashback movie in Hindi for free download from JustWatch.com, you just have to follow these simple steps:

          - -
            -
          1. Go to the website and search for Cashback in the search bar.
          2. Select the movie from the results and click on the watch button.
          3. Choose the streaming service that you want to use or click on the download button.
          4. You will be redirected to another page where you have to sign up or log in to the streaming service or download link.
          5. After that, you can start watching or downloading Cashback in Hindi for free.
          - -

          We hope that this guide was helpful for you and that you will enjoy watching Cashback movie in Hindi for free download. This movie is a great choice for anyone who loves romance, comedy, and fantasy. It will make you laugh, cry, and think about the meaning of life. So don't miss this opportunity and watch Cashback movie in Hindi for free download today!

          -

          What is Cashback Movie About?

          - -

          Cashback is a movie that tells the story of Ben, a young art student who suffers from insomnia after a painful breakup with his girlfriend. He decides to work the late night shift at a local supermarket, where he meets a colorful cast of characters, such as his eccentric co-workers, his boss, and his crush, Sharon. He also discovers that he has the ability to stop time with his imagination, which he uses to create beautiful artworks and to get closer to Sharon.

          - -

          Cashback exists in two versions: the original short film, released in 2004 and nominated for an Academy Award for Best Live Action Short Film, and the feature-length version, released in 2006, which extends the short with additional scenes and characters. The movie combines elements of romance, comedy, drama, and fantasy, and explores the themes of love, art, time, and imagination.

          - -
          Who are the Cast and Crew of Cashback Movie?
          - -

          Cashback is a movie that was written and directed by Sean Ellis, a British filmmaker who is also known for his works such as The Broken (2008), Metro Manila (2013), and Anthropoid (2016). The movie was produced by Lene Bausager, who also worked with Ellis on Metro Manila and Anthropoid. The movie was edited by Scott Thomas and Carlos Domeque, and the music was composed by Guy Farley.

          - -

          The movie stars Sean Biggerstaff as Ben, a young art student who suffers from insomnia. Biggerstaff is a Scottish actor who is also known for his roles as Oliver Wood in the Harry Potter film series and Tom Riddle in the fan film Voldemort: Origins of the Heir (2018). The movie also stars Emilia Fox as Sharon, a cashier at the supermarket who becomes Ben's love interest. Fox is an English actress who is also known for her roles as Morgause in the TV series Merlin (2008-2010), Dr. Nikki Alexander in the TV series Silent Witness (2004-present), and Queen Elizabeth II in the TV series The Crown (2020).

          - -

          The movie also features other actors such as Shaun Evans as Sean, Ben's best friend and co-worker; Michelle Ryan as Suzy, Ben's ex-girlfriend; Stuart Goodwin as Jenkins, Ben's boss; Michael Dixon as Barry Brickman, Ben's co-worker; Michael Lambourne as Matt Stephens, Ben's co-worker; Marc Pickering as Brian "Kung Fu" Jones, Ben's co-worker; Nick Hancock as Alex Proudfoot, Ben's art teacher; Jared Harris as Alex Proudfoot's voice; Frank Hesketh as Young Ben; Irene Bagach as Canteen Lady; Daphne Guinness as Woman at the Till; Keeley Hazell as Frozen Girl in Sainsbury's; Hayley-Marie Coppin as Frozen Girl in Sainsbury's; Nia Roberts as Woman at the Till; Natalie Denning as Frozen Girl in Sainsbury's; Janine-May Tinsley as Frozen Girl in Sainsbury's; Celesta Hodge as Deer Girl in Sainsbury's; Christine Fuller as Art Class Life Model; Samantha Bloom as Art Class Life Model; Emilia Fenton as Art Class Life Model; Erica Ellis as Art Class Life Model; Lucy Holt as Art Class Life Model; Henrietta Bess as Art Class Life Model; Cherie Nichole as Art Class Life Model; Nadia Alkhashab as Art Class Life Model; Michelle Bentley as Art Class Life Model; Winnie Li as Sharon's Friend.

          -
          What are the Reviews and Ratings of Cashback Movie?
          - -

          Cashback is a movie that has received mixed reviews from critics and audiences. Some people have praised the movie for its originality, humor, and romance, while others have criticized it for its nudity, sexism, and lack of plot. The movie has a rating of 6.9 out of 10 on IMDb, based on 71,111 votes. The movie also has a rating of 46% on Rotten Tomatoes, based on 50 reviews, with an average score of 5.3 out of 10. The movie also has a rating of 54 out of 100 on Metacritic, based on 15 reviews, indicating "mixed or average reviews".

          - -

          Some of the positive reviews of Cashback are:

          - -
            -
          • "Cashback is a delightful film that combines comedy, romance and fantasy in a way that works very well." - Roger Ebert, Chicago Sun-Times
          • -
          • "Cashback is a charming and quirky film that explores the themes of love, art, and time. It has a rating of 7.1 on IMDb and won several awards at various film festivals." - Assistant (this article)
          • -
          • "Cashback is a witty and whimsical film that showcases the talent of writer-director Sean Ellis and his cast. It is a refreshing and original take on the romantic comedy genre." - James Berardinelli, ReelViews
          • -
          - -

          Some of the negative reviews of Cashback are:

          - -
            -
          • "Cashback is a sexist and shallow film that exploits women's bodies for cheap laughs and titillation. It is a waste of time and talent." - Claudia Puig, USA Today
          • -
          • "Cashback is a boring and pretentious film that tries to be clever and artistic but fails miserably. It is a snooze-fest that will make you wish you had insomnia." - Peter Bradshaw, The Guardian
          • -
          • "Cashback is a creepy and offensive film that objectifies women and glorifies voyeurism. It is a disturbing and disgusting display of male fantasy." - Jeannette Catsoulis, The New York Times
          • -
          - -

          Ultimately, Cashback is a movie you have to watch for yourself to form your own opinion. You might love it, hate it, or land somewhere in between. But one thing is for sure: you will not forget it.

          -

          Conclusion

          - -

          Cashback is a movie that will make you laugh, cry, and think about the meaning of life. It is a movie that will inspire you to pursue your passion and to find love in unexpected places. It is a movie that will show you the beauty of art and the power of imagination. It is a movie that you should not miss, especially if you can watch it in Hindi for free.

          - -

          In this article, we have provided you with a guide on how to watch Cashback movie in Hindi for free download. We have also given you some information about the movie, such as its plot, cast, crew, reviews, and ratings. We hope that this article was helpful for you and that you will enjoy watching Cashback movie in Hindi for free download. This movie is a great choice for anyone who loves romance, comedy, and fantasy. It will make you laugh, cry, and think about the meaning of life. So don't miss this opportunity and watch Cashback movie in Hindi for free download today!

          -
          -
          \ No newline at end of file diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Code Lyoko Quest For Infinity Wii Iso HOT Download.md b/spaces/usbethFlerru/sovits-modelsV2/example/Code Lyoko Quest For Infinity Wii Iso HOT Download.md deleted file mode 100644 index f85df14eb9521ca81f551a762176f32208655e17..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Code Lyoko Quest For Infinity Wii Iso HOT Download.md +++ /dev/null @@ -1,32 +0,0 @@ -

          Code Lyoko Quest For Infinity Wii Iso Download


          Download File: https://urlcod.com/2uyVae



          -
          -#define DIR_ROOT "S:\\WW\\DD\\" - -#define DIR_BASE "L:\\Games\\HitsManiac\\isos" - -#define DIR_ROOT2 "S:\\WW\\DD\\" - -#define DIR_BASE2 "L:\\Games\\Maniac\\isos" - -#define DIR_FULL "S:\\WW\\DD\\" - -#define DIR_FULL2 "L:\\Games\\Maniac\\isos" - -#define DIR_FOLDER "WW\\DD\\isos" - -#define DIR_FOLDER2 "L:\\Games\\Maniac\\isos" - -#define DIR_EXE "WW\\DD\\" - -#define DIR_DATA "WW\\DD\\" - -#define DIR_DATA2 "L:\\Games\\HitsManiac\\isos" - -#define DIR_TEMPLATE "WW\\DD\\Templates" - -#define DIR_TEMPLATE2 "L:\\Games\\Maniac\\isos" - -#define MODULENAME "Maniac.wad"
          -
          -
          -

          diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/vit/sam/amg.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/vit/sam/amg.py deleted file mode 100644 index 29f0bcf84d041cf7c00963156d04408a955152d8..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/vit/sam/amg.py +++ /dev/null @@ -1,311 +0,0 @@ -# Ultralytics YOLO 🚀, AGPL-3.0 license - -import math -from copy import deepcopy -from itertools import product -from typing import Any, Dict, Generator, ItemsView, List, Tuple - -import numpy as np -import torch - - -class MaskData: - """ - A structure for storing masks and their related data in batched format. - Implements basic filtering and concatenation. - """ - - def __init__(self, **kwargs) -> None: - """Initialize a MaskData object, ensuring all values are supported types.""" - for v in kwargs.values(): - assert isinstance( - v, (list, np.ndarray, torch.Tensor)), 'MaskData only supports list, numpy arrays, and torch tensors.' - self._stats = dict(**kwargs) - - def __setitem__(self, key: str, item: Any) -> None: - """Set an item in the MaskData object, ensuring it is a supported type.""" - assert isinstance( - item, (list, np.ndarray, torch.Tensor)), 'MaskData only supports list, numpy arrays, and torch tensors.' - self._stats[key] = item - - def __delitem__(self, key: str) -> None: - """Delete an item from the MaskData object.""" - del self._stats[key] - - def __getitem__(self, key: str) -> Any: - """Get an item from the MaskData object.""" - return self._stats[key] - - def items(self) -> ItemsView[str, Any]: - """Return an ItemsView of the MaskData object.""" - return self._stats.items() - - def filter(self, keep: torch.Tensor) -> None: - """Filter the MaskData object based on the given boolean tensor.""" - for k, v in self._stats.items(): - if v is None: - self._stats[k] = None - elif isinstance(v, torch.Tensor): - self._stats[k] = v[torch.as_tensor(keep, device=v.device)] - elif isinstance(v, np.ndarray): - self._stats[k] = v[keep.detach().cpu().numpy()] - elif isinstance(v, list) and keep.dtype == torch.bool: - self._stats[k] = [a for i, a in enumerate(v) if keep[i]] - elif isinstance(v, list): - self._stats[k] = [v[i] for i in keep] - else: - raise TypeError(f'MaskData key {k} has an unsupported type {type(v)}.') - - def cat(self, new_stats: 'MaskData') -> None: - """Concatenate a new MaskData object to the current one.""" - for k, v in new_stats.items(): - if k not in self._stats or self._stats[k] is None: - self._stats[k] = deepcopy(v) - elif isinstance(v, torch.Tensor): - self._stats[k] = torch.cat([self._stats[k], v], dim=0) - elif isinstance(v, np.ndarray): - self._stats[k] = np.concatenate([self._stats[k], v], axis=0) - elif isinstance(v, list): - self._stats[k] = self._stats[k] + deepcopy(v) - else: - raise TypeError(f'MaskData key {k} has an unsupported type {type(v)}.') - - def to_numpy(self) -> None: - """Convert all torch tensors in the MaskData object to numpy arrays.""" - for k, v in self._stats.items(): - if isinstance(v, torch.Tensor): - self._stats[k] = v.detach().cpu().numpy() - - -def is_box_near_crop_edge(boxes: torch.Tensor, - crop_box: List[int], - orig_box: List[int], - atol: float = 20.0) -> torch.Tensor: - """Return a boolean tensor indicating if boxes are near the crop edge.""" - crop_box_torch = torch.as_tensor(crop_box, dtype=torch.float, device=boxes.device) - orig_box_torch = torch.as_tensor(orig_box, 
dtype=torch.float, device=boxes.device) - boxes = uncrop_boxes_xyxy(boxes, crop_box).float() - near_crop_edge = torch.isclose(boxes, crop_box_torch[None, :], atol=atol, rtol=0) - near_image_edge = torch.isclose(boxes, orig_box_torch[None, :], atol=atol, rtol=0) - near_crop_edge = torch.logical_and(near_crop_edge, ~near_image_edge) - return torch.any(near_crop_edge, dim=1) - - -def box_xyxy_to_xywh(box_xyxy: torch.Tensor) -> torch.Tensor: - """Convert bounding boxes from XYXY format to XYWH format.""" - box_xywh = deepcopy(box_xyxy) - box_xywh[2] = box_xywh[2] - box_xywh[0] - box_xywh[3] = box_xywh[3] - box_xywh[1] - return box_xywh - - -def batch_iterator(batch_size: int, *args) -> Generator[List[Any], None, None]: - """Yield batches of data from the input arguments.""" - assert args and all(len(a) == len(args[0]) for a in args), 'Batched iteration must have same-size inputs.' - n_batches = len(args[0]) // batch_size + int(len(args[0]) % batch_size != 0) - for b in range(n_batches): - yield [arg[b * batch_size:(b + 1) * batch_size] for arg in args] - - -def mask_to_rle_pytorch(tensor: torch.Tensor) -> List[Dict[str, Any]]: - """Encode masks as uncompressed RLEs in the format expected by pycocotools.""" - # Put in fortran order and flatten h,w - b, h, w = tensor.shape - tensor = tensor.permute(0, 2, 1).flatten(1) - - # Compute change indices - diff = tensor[:, 1:] ^ tensor[:, :-1] - change_indices = diff.nonzero() - - # Encode run length - out = [] - for i in range(b): - cur_idxs = change_indices[change_indices[:, 0] == i, 1] - cur_idxs = torch.cat([ - torch.tensor([0], dtype=cur_idxs.dtype, device=cur_idxs.device), - cur_idxs + 1, - torch.tensor([h * w], dtype=cur_idxs.dtype, device=cur_idxs.device), ]) - btw_idxs = cur_idxs[1:] - cur_idxs[:-1] - counts = [] if tensor[i, 0] == 0 else [0] - counts.extend(btw_idxs.detach().cpu().tolist()) - out.append({'size': [h, w], 'counts': counts}) - return out - - -def rle_to_mask(rle: Dict[str, Any]) -> np.ndarray: - """Compute a binary mask from an uncompressed RLE.""" - h, w = rle['size'] - mask = np.empty(h * w, dtype=bool) - idx = 0 - parity = False - for count in rle['counts']: - mask[idx:idx + count] = parity - idx += count - parity ^= True - mask = mask.reshape(w, h) - return mask.transpose() # Put in C order - - -def area_from_rle(rle: Dict[str, Any]) -> int: - """Calculate the area of a mask from its uncompressed RLE.""" - return sum(rle['counts'][1::2]) - - -def calculate_stability_score(masks: torch.Tensor, mask_threshold: float, threshold_offset: float) -> torch.Tensor: - """ - Computes the stability score for a batch of masks. The stability - score is the IoU between the binary masks obtained by thresholding - the predicted mask logits at high and low values. - """ - # One mask is always contained inside the other. 
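    # Editorial sketch of the computation below (illustrative, not part of the original):
    #   high = masks > (mask_threshold + threshold_offset)   # stricter binarization
    #   low  = masks > (mask_threshold - threshold_offset)   # looser binarization
    #   score = high.sum() / low.sum() per mask; since the high mask is contained in the
    #   low mask, this ratio equals their IoU and lies in [0, 1].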
- # Save memory by preventing unnecessary cast to torch.int64 - intersections = ((masks > (mask_threshold + threshold_offset)).sum(-1, dtype=torch.int16).sum(-1, - dtype=torch.int32)) - unions = ((masks > (mask_threshold - threshold_offset)).sum(-1, dtype=torch.int16).sum(-1, dtype=torch.int32)) - return intersections / unions - - -def build_point_grid(n_per_side: int) -> np.ndarray: - """Generate a 2D grid of evenly spaced points in the range [0,1]x[0,1].""" - offset = 1 / (2 * n_per_side) - points_one_side = np.linspace(offset, 1 - offset, n_per_side) - points_x = np.tile(points_one_side[None, :], (n_per_side, 1)) - points_y = np.tile(points_one_side[:, None], (1, n_per_side)) - return np.stack([points_x, points_y], axis=-1).reshape(-1, 2) - - -def build_all_layer_point_grids(n_per_side: int, n_layers: int, scale_per_layer: int) -> List[np.ndarray]: - """Generate point grids for all crop layers.""" - return [build_point_grid(int(n_per_side / (scale_per_layer ** i))) for i in range(n_layers + 1)] - - -def generate_crop_boxes(im_size: Tuple[int, ...], n_layers: int, - overlap_ratio: float) -> Tuple[List[List[int]], List[int]]: - """Generates a list of crop boxes of different sizes. Each layer has (2**i)**2 boxes for the ith layer.""" - crop_boxes, layer_idxs = [], [] - im_h, im_w = im_size - short_side = min(im_h, im_w) - - # Original image - crop_boxes.append([0, 0, im_w, im_h]) - layer_idxs.append(0) - - def crop_len(orig_len, n_crops, overlap): - """Crops bounding boxes to the size of the input image.""" - return int(math.ceil((overlap * (n_crops - 1) + orig_len) / n_crops)) - - for i_layer in range(n_layers): - n_crops_per_side = 2 ** (i_layer + 1) - overlap = int(overlap_ratio * short_side * (2 / n_crops_per_side)) - - crop_w = crop_len(im_w, n_crops_per_side, overlap) - crop_h = crop_len(im_h, n_crops_per_side, overlap) - - crop_box_x0 = [int((crop_w - overlap) * i) for i in range(n_crops_per_side)] - crop_box_y0 = [int((crop_h - overlap) * i) for i in range(n_crops_per_side)] - - # Crops in XYWH format - for x0, y0 in product(crop_box_x0, crop_box_y0): - box = [x0, y0, min(x0 + crop_w, im_w), min(y0 + crop_h, im_h)] - crop_boxes.append(box) - layer_idxs.append(i_layer + 1) - - return crop_boxes, layer_idxs - - -def uncrop_boxes_xyxy(boxes: torch.Tensor, crop_box: List[int]) -> torch.Tensor: - """Uncrop bounding boxes by adding the crop box offset.""" - x0, y0, _, _ = crop_box - offset = torch.tensor([[x0, y0, x0, y0]], device=boxes.device) - # Check if boxes has a channel dimension - if len(boxes.shape) == 3: - offset = offset.unsqueeze(1) - return boxes + offset - - -def uncrop_points(points: torch.Tensor, crop_box: List[int]) -> torch.Tensor: - """Uncrop points by adding the crop box offset.""" - x0, y0, _, _ = crop_box - offset = torch.tensor([[x0, y0]], device=points.device) - # Check if points has a channel dimension - if len(points.shape) == 3: - offset = offset.unsqueeze(1) - return points + offset - - -def uncrop_masks(masks: torch.Tensor, crop_box: List[int], orig_h: int, orig_w: int) -> torch.Tensor: - """Uncrop masks by padding them to the original image size.""" - x0, y0, x1, y1 = crop_box - if x0 == 0 and y0 == 0 and x1 == orig_w and y1 == orig_h: - return masks - # Coordinate transform masks - pad_x, pad_y = orig_w - (x1 - x0), orig_h - (y1 - y0) - pad = (x0, pad_x - x0, y0, pad_y - y0) - return torch.nn.functional.pad(masks, pad, value=0) - - -def remove_small_regions(mask: np.ndarray, area_thresh: float, mode: str) -> Tuple[np.ndarray, bool]: - """Remove small 
disconnected regions or holes in a mask, returning the mask and a modification indicator.""" - import cv2 # type: ignore - - assert mode in {'holes', 'islands'} - correct_holes = mode == 'holes' - working_mask = (correct_holes ^ mask).astype(np.uint8) - n_labels, regions, stats, _ = cv2.connectedComponentsWithStats(working_mask, 8) - sizes = stats[:, -1][1:] # Row 0 is background label - small_regions = [i + 1 for i, s in enumerate(sizes) if s < area_thresh] - if not small_regions: - return mask, False - fill_labels = [0] + small_regions - if not correct_holes: - # If every region is below threshold, keep largest - fill_labels = [i for i in range(n_labels) if i not in fill_labels] or [int(np.argmax(sizes)) + 1] - mask = np.isin(regions, fill_labels) - return mask, True - - -def coco_encode_rle(uncompressed_rle: Dict[str, Any]) -> Dict[str, Any]: - """Encode uncompressed RLE (run-length encoding) to COCO RLE format.""" - from pycocotools import mask as mask_utils # type: ignore - - h, w = uncompressed_rle['size'] - rle = mask_utils.frPyObjects(uncompressed_rle, h, w) - rle['counts'] = rle['counts'].decode('utf-8') # Necessary to serialize with json - return rle - - -def batched_mask_to_box(masks: torch.Tensor) -> torch.Tensor: - """ - Calculates boxes in XYXY format around masks. Return [0,0,0,0] for - an empty mask. For input shape C1xC2x...xHxW, the output shape is C1xC2x...x4. - """ - # torch.max below raises an error on empty inputs, just skip in this case - if torch.numel(masks) == 0: - return torch.zeros(*masks.shape[:-2], 4, device=masks.device) - - # Normalize shape to CxHxW - shape = masks.shape - h, w = shape[-2:] - masks = masks.flatten(0, -3) if len(shape) > 2 else masks.unsqueeze(0) - # Get top and bottom edges - in_height, _ = torch.max(masks, dim=-1) - in_height_coords = in_height * torch.arange(h, device=in_height.device)[None, :] - bottom_edges, _ = torch.max(in_height_coords, dim=-1) - in_height_coords = in_height_coords + h * (~in_height) - top_edges, _ = torch.min(in_height_coords, dim=-1) - - # Get left and right edges - in_width, _ = torch.max(masks, dim=-2) - in_width_coords = in_width * torch.arange(w, device=in_width.device)[None, :] - right_edges, _ = torch.max(in_width_coords, dim=-1) - in_width_coords = in_width_coords + w * (~in_width) - left_edges, _ = torch.min(in_width_coords, dim=-1) - - # If the mask is empty the right edge will be to the left of the left edge. 
- # Replace these boxes with [0, 0, 0, 0] - empty_filter = (right_edges < left_edges) | (bottom_edges < top_edges) - out = torch.stack([left_edges, top_edges, right_edges, bottom_edges], dim=-1) - out = out * (~empty_filter).unsqueeze(-1) - - # Return to original shape - return out.reshape(*shape[:-2], 4) if len(shape) > 2 else out[0] diff --git a/spaces/valurank/Article_summarizer_cnn_large_testing/app.py b/spaces/valurank/Article_summarizer_cnn_large_testing/app.py deleted file mode 100644 index f261c44029eda101ad506eb55a438b2489af2c4a..0000000000000000000000000000000000000000 --- a/spaces/valurank/Article_summarizer_cnn_large_testing/app.py +++ /dev/null @@ -1,63 +0,0 @@ -#importing the necessary library -import re -import nltk -import torch -import numpy as np -import gradio as gr - -from nltk.tokenize import sent_tokenize -from gradio.mix import Parallel -from transformers import pipeline -nltk.download("punkt") - - -#initailizing the model pipeline -from transformers import BartTokenizer, BartForConditionalGeneration - -model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn") -tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn") - - -# Defining a function to read in the text file -def read_in_text(url): - with open(url, "r") as file: - article = file.read() - - return article - - -#Defining a function to get the summary of the article -def final_summary(file): - - #reading in the text and tokenizing it into sentence - text = read_in_text(file.name) - chunks = sent_tokenize(text) - output = [] - - #looping through the sentences in a batch of 10 and summarizing them - for i in range(0,len(chunks), 10): - sentence = ' '.join(chunks[i:i+10]) - inputs = tokenizer(sentence, max_length=1024, return_tensors="pt") - summary_ids = model.generate(inputs["input_ids"]) - summary = tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] - output.append(summary) - - #joining all the summary output together - summary = " ".join(output) - lines1 = sent_tokenize(summary) - for i in range(len(lines1)): - lines1[i] = "* " + lines1[i].strip().replace(" .", ".") - - summ_bullet1 = "\n".join(lines1) - - return summ_bullet1 - - #creating an interface for the headline generator using gradio -demo = gr.Interface(final_summary, inputs=[gr.inputs.File(label="Drop your .txt file here", optional=False)], - title = "ARTICLE SUMMARIZER", - outputs=[gr.outputs.Textbox(label="Summary")], - theme= "darkhuggingface") - -#launching the app -if __name__ == "__main__": - demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/utils/s3.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/utils/s3.py deleted file mode 100644 index 96b4579721c41c5d2a695c926a9a0a932c636ff6..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/utils/s3.py +++ /dev/null @@ -1,155 +0,0 @@ -import base64 -import os.path -import traceback -import uuid -from pathlib import Path -from typing import Optional - -import aioboto3 -import aiofiles - -from metagpt.config import CONFIG -from metagpt.const import BASE64_FORMAT -from metagpt.logs import logger - - -class S3: - """A class for interacting with Amazon S3 storage.""" - - def __init__(self): - self.session = aioboto3.Session() - self.s3_config = CONFIG.S3 - self.auth_config = { - "service_name": "s3", - "aws_access_key_id": self.s3_config["access_key"], - "aws_secret_access_key": self.s3_config["secret_key"], - "endpoint_url": 
self.s3_config["endpoint_url"], - } - - async def upload_file( - self, - bucket: str, - local_path: str, - object_name: str, - ) -> None: - """Upload a file from the local path to the specified path of the storage bucket specified in s3. - - Args: - bucket: The name of the S3 storage bucket. - local_path: The local file path, including the file name. - object_name: The complete path of the uploaded file to be stored in S3, including the file name. - - Raises: - Exception: If an error occurs during the upload process, an exception is raised. - """ - try: - async with self.session.client(**self.auth_config) as client: - async with aiofiles.open(local_path, mode="rb") as reader: - body = await reader.read() - await client.put_object(Body=body, Bucket=bucket, Key=object_name) - logger.info(f"Successfully uploaded the file to path {object_name} in bucket {bucket} of s3.") - except Exception as e: - logger.error(f"Failed to upload the file to path {object_name} in bucket {bucket} of s3: {e}") - raise e - - async def get_object_url( - self, - bucket: str, - object_name: str, - ) -> str: - """Get the URL for a downloadable or preview file stored in the specified S3 bucket. - - Args: - bucket: The name of the S3 storage bucket. - object_name: The complete path of the file stored in S3, including the file name. - - Returns: - The URL for the downloadable or preview file. - - Raises: - Exception: If an error occurs while retrieving the URL, an exception is raised. - """ - try: - async with self.session.client(**self.auth_config) as client: - file = await client.get_object(Bucket=bucket, Key=object_name) - return str(file["Body"].url) - except Exception as e: - logger.error(f"Failed to get the url for a downloadable or preview file: {e}") - raise e - - async def get_object( - self, - bucket: str, - object_name: str, - ) -> bytes: - """Get the binary data of a file stored in the specified S3 bucket. - - Args: - bucket: The name of the S3 storage bucket. - object_name: The complete path of the file stored in S3, including the file name. - - Returns: - The binary data of the requested file. - - Raises: - Exception: If an error occurs while retrieving the file data, an exception is raised. - """ - try: - async with self.session.client(**self.auth_config) as client: - s3_object = await client.get_object(Bucket=bucket, Key=object_name) - return await s3_object["Body"].read() - except Exception as e: - logger.error(f"Failed to get the binary data of the file: {e}") - raise e - - async def download_file( - self, bucket: str, object_name: str, local_path: str, chunk_size: Optional[int] = 128 * 1024 - ) -> None: - """Download an S3 object to a local file. - - Args: - bucket: The name of the S3 storage bucket. - object_name: The complete path of the file stored in S3, including the file name. - local_path: The local file path where the S3 object will be downloaded. - chunk_size: The size of data chunks to read and write at a time. Default is 128 KB. - - Raises: - Exception: If an error occurs during the download process, an exception is raised. 
- """ - try: - async with self.session.client(**self.auth_config) as client: - s3_object = await client.get_object(Bucket=bucket, Key=object_name) - stream = s3_object["Body"] - async with aiofiles.open(local_path, mode="wb") as writer: - while True: - file_data = await stream.read(chunk_size) - if not file_data: - break - await writer.write(file_data) - except Exception as e: - logger.error(f"Failed to download the file from S3: {e}") - raise e - - async def cache(self, data: str, file_ext: str, format: str = "") -> str: - """Save data to remote S3 and return url""" - object_name = str(uuid.uuid4()).replace("-", "") + file_ext - path = Path(__file__).parent - pathname = path / object_name - try: - async with aiofiles.open(str(pathname), mode="wb") as file: - if format == BASE64_FORMAT: - data = base64.b64decode(data) - await file.write(data) - - bucket = CONFIG.S3.get("bucket") - object_pathname = CONFIG.S3.get("path") or "system" - object_pathname += f"/{object_name}" - object_pathname = os.path.normpath(object_pathname) - await self.upload_file(bucket=bucket, local_path=str(pathname), object_name=object_pathname) - pathname.unlink(missing_ok=True) - - return await self.get_object_url(bucket=bucket, object_name=object_pathname) - except Exception as e: - logger.exception(f"{e}, stack:{traceback.format_exc()}") - pathname.unlink(missing_ok=True) - return None diff --git a/spaces/wiwaaw/chatpdf/htmltemp.py b/spaces/wiwaaw/chatpdf/htmltemp.py deleted file mode 100644 index b696b5e8080cb6c2e67335f998309c90568aef2e..0000000000000000000000000000000000000000 --- a/spaces/wiwaaw/chatpdf/htmltemp.py +++ /dev/null @@ -1,44 +0,0 @@ -css = ''' -