diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Cracker Template VERIFIED.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Cracker Template VERIFIED.md deleted file mode 100644 index bed1b8129effb8219483fb953451a24344ac3614..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Cracker Template VERIFIED.md +++ /dev/null @@ -1,47 +0,0 @@ -
-Title: How to Make Your Own Crackers with a Free Cracker Template - -Article: - -

How to Make Your Own Crackers with a Free Cracker Template

- -

Crackers are a delicious and versatile snack that can be enjoyed with cheese, dips, spreads, or on their own. They are also easy and fun to make at home with simple ingredients and tools. You can customize your crackers with different flavors, shapes, and sizes. You can also make them more festive and creative by using a free cracker template.

-

free cracker template


DOWNLOAD ★★★ https://byltly.com/2uKwiX



- -

A free cracker template is a printable pattern that you can use to cut out your cracker dough into various designs. You can find many free cracker templates online or create your own using a drawing software. Some examples of free cracker template designs are stars, hearts, flowers, animals, letters, numbers, and more.

- -

To make your own crackers with a free cracker template, you will need the following ingredients and tools:

- - - -

Here are the steps to make your own crackers with a free cracker template:

- -
    -
  1. Preheat your oven to 180°C (350°F) and line your baking sheet with parchment paper.
  2. -
  3. In a large bowl, mix the flour and salt together.
  4. -
  5. Add the oil and water and stir until a dough forms.
  6. -
  7. Knead the dough on a lightly floured surface for about 10 minutes or until smooth and elastic.
  8. -
  9. Divide the dough into four equal portions and roll out each portion into a thin rectangle.
  10. -
  11. Sprinkle your choice of seasonings over the dough and press lightly with the rolling pin.
  12. -
  13. Place your free cracker template over the dough and cut out the shapes with a knife or a cookie cutter.
  14. -
  15. Transfer the cut-out crackers to the prepared baking sheet and prick them with a fork to prevent them from puffing up.
  16. -
  17. Bake the crackers for 15 to 20 minutes or until golden and crisp.
  18. -
  19. Let the crackers cool completely on a wire rack before storing them in an airtight container.
  20. -
- -

Enjoy your homemade crackers with a free cracker template!

-

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Appgini Php Code Generator For Mysql 4 53 Incl Crackzip Turn Your MySQL Data into Dynamic Web Pages.md b/spaces/1gistliPinn/ChatGPT4/Examples/Appgini Php Code Generator For Mysql 4 53 Incl Crackzip Turn Your MySQL Data into Dynamic Web Pages.md deleted file mode 100644 index 0b91893967971d8f10918ba1990dfafb276e6e2d..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Appgini Php Code Generator For Mysql 4 53 Incl Crackzip Turn Your MySQL Data into Dynamic Web Pages.md +++ /dev/null @@ -1,6 +0,0 @@ -

Appgini Php Code Generator For Mysql 4 53 Incl Crackzip


Download: https://imgfil.com/2uxZSI



- -
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download White Cap Platinum Crack ((FREE)).md b/spaces/1gistliPinn/ChatGPT4/Examples/Download White Cap Platinum Crack ((FREE)).md deleted file mode 100644 index 5567e8cf54134f89fd961c58bbe7b9442548c3d7..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Download White Cap Platinum Crack ((FREE)).md +++ /dev/null @@ -1,50 +0,0 @@ -

Download White Cap Platinum Crack


Download Zip: https://imgfil.com/2uy11W



-
-_____. It has been a beautiful day, and you go to open your gate to the backyard, and find a starfish resting at the front door. - -14. ___________ get a little bit upset about this. It is raining in your house, and your pet cat is being fed a warm cup of milk. But you are not sure that she likes it. - -15. The man in the picture is speaking to you. Do you know who he is? - -16. The bad thing about being a mom is ___. - -17. After you have read these questions, you will probably think _____. - -18. _____________. There is a meteor shower tonight, and you are thinking about going to a movie. - -19. The woman in the picture is a _____. - -20. I need a little ____ for my cousin to have a good Christmas this year. - -21. Is that ____ on the bus? The bus is coming to your house. - -22. The man in the picture is playing in the snow. _____. - -23. What is that on the man’s ____? - -24. The person in the picture is running toward you. Do you think he is playing a game with you? - -25. The little boy in the picture is _____ a cat. - -26. The man in the picture is giving you a ____. - -27. The woman in the picture is coming to visit you. Do you think she has a good idea of what she is doing? - -28. The man in the picture is helping his wife. Do you think he is _____? - -29. Do you know where you are? You are in _____, and you have walked over to the store to get some _____ for your cousin. - -30. The man in the picture is _____. - -31. I am going to look at a picture of people from the past, and tell you if you are in it. The first picture I am looking at is a young man. Do you think it is you? - -32. How is it that you can see _____ here? - -33. If you were to take a picture of me, would you like _____? - -34. The man in the picture is taking a picture of _____. - -35. _____________. You are sitting in a classroom, and 4fefd39f24
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Bitcoin Software A Step-by-Step Tutorial.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Bitcoin Software A Step-by-Step Tutorial.md deleted file mode 100644 index ca002398fd781b1a85537f043611fd611e499e3b..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Bitcoin Software A Step-by-Step Tutorial.md +++ /dev/null @@ -1,108 +0,0 @@ - -

Download Bitcoin Software: A Complete Guide

-

Bitcoin is a digital currency that enables peer-to-peer transactions without intermediaries or central authorities. It is powered by a network of computers that run special software to validate and record transactions on a public ledger called the blockchain. To use bitcoin, you need to have some bitcoin software on your device. But what is bitcoin software and how do you choose, download, install, and use it? In this article, we will answer these questions and more.

-

download bitcoin software


DOWNLOAD https://urlin.us/2uSYAk



-

Types of Bitcoin Software

-

There are different types of bitcoin software that serve different purposes and functions. Here are the main ones:

- -

How to Choose the Best Bitcoin Software for Your Needs

-

There is no one-size-fits-all solution when it comes to choosing bitcoin software. Depending on your goals, preferences, and resources, you may want to use different types of software or even multiple ones. Here are some factors to consider when making your choice:

- -

How to Download and Install Bitcoin Software

-

The process of downloading and installing bitcoin software may vary depending on the type of software and the platform or device that you use. However, here are some general steps that you can follow:

-
    -
  1. Choose your software: Based on the factors mentioned above, choose the best bitcoin software for your needs. You can find various options on websites such as bitcoin.org, bitcoin.com, or bitcoincore.org.
  2. -
  3. Download your software: Go to the official website of your chosen software and click on the download link. Make sure that you download the latest version of the software from a trusted source (a minimal checksum-verification sketch follows this list). Avoid clicking on suspicious links or downloading files from untrusted sources. - Install your software: Once you have downloaded your software, open the file and follow the instructions to install it on your device. You may need to agree to some terms and conditions, choose a location, and create a shortcut. Some software may also require you to verify your identity or create an account.
  4. -
  5. Set up your software: After you have installed your software, you need to set it up according to your preferences and needs. You may need to choose a password, a recovery phrase, a network, a fee level, or other options. Some software may also require you to sync with the blockchain, which can take some time and space.
  6. -
-
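Because wallet installers are a common target for tampering, it is worth checking the downloaded file against the checksum published on the project's official site before running it. The snippet below is a minimal sketch in Python using only the standard library; the file name and the expected digest are placeholders for illustration, not values from any real release.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholders: substitute the installer you downloaded and the checksum
# published on the software's official website.
downloaded_file = "bitcoin-wallet-installer.exe"
published_checksum = "0000000000000000000000000000000000000000000000000000000000000000"

if sha256_of(downloaded_file) == published_checksum.lower():
    print("Checksum matches: the download appears intact.")
else:
    print("Checksum mismatch: do not install this file.")
```

If the two digests differ, download the file again from the official site rather than installing it.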

How to Use Bitcoin Software

-

Once you have downloaded and installed your bitcoin software, you are ready to use it. Here are some basic tips and best practices for using bitcoin software:

- -

Conclusion

-

Bitcoin software is essential for using bitcoin. It allows you to store, send, receive, and manage your bitcoins. There are different types of bitcoin software that serve different purposes and functions. You need to choose the best bitcoin software for your needs based on factors such as security, features, compatibility, and ease of use. You also need to download, install, and set up your bitcoin software properly. Finally, you need to use your bitcoin software wisely and safely by following some basic tips and best practices.

-

FAQ

-

What is the best bitcoin software?

-

There is no definitive answer to this question, as different users may have different preferences and needs. However, some of the most popular and reputable bitcoin software are:

- -

How do I update my bitcoin software?

-

To update your bitcoin software, you need to download the latest version of the software from the official website or source and install it on your device. You may need to uninstall the previous version first or overwrite it with the new one. You may also need to backup your wallet before updating.

-

How do I uninstall my bitcoin software?

-

To uninstall your bitcoin software, you need to delete the program files from your device. You may also need to delete the data files such as the blockchain or the wallet. However, before uninstalling your bitcoin software, you should make sure that you have backed up your wallet or transferred your bitcoins to another wallet.

-

Download Bitcoin Core latest version for Windows
-How to install Bitcoin Core on your desktop
-Bitcoin Core source code and release signatures
-Best Bitcoin wallets for Windows users
-Compare Bitcoin Core with other Bitcoin clients
-Download Bitcoin Core for Linux and Mac OS
-Troubleshooting Bitcoin Core installation issues
-How to run a full node with Bitcoin Core
-How to backup and restore your Bitcoin Core wallet
-How to use Tor with Bitcoin Core for privacy
-How to change fees and use RBF or CPFP with Bitcoin Core
-How to verify Bitcoin Core binaries and signatures
-How to contribute to Bitcoin Core development
-How to update Bitcoin Core to the latest version
-How to sync Bitcoin Core with the blockchain faster
-How to enable SegWit and Bech32 addresses with Bitcoin Core
-How to use Bitcoin Core as a cold storage wallet
-How to encrypt and secure your Bitcoin Core wallet
-How to send and receive bitcoins with Bitcoin Core
-How to use the console and debug window in Bitcoin Core
-How to connect Bitcoin Core to your hardware wallet
-How to use multi-signature wallets with Bitcoin Core
-How to import and export private keys with Bitcoin Core
-How to sign and verify messages with Bitcoin Core
-How to use the testnet and regtest modes with Bitcoin Core
-How to configure Bitcoin Core settings and options
-How to use the RPC interface and API with Bitcoin Core
-How to monitor network activity and performance with Bitcoin Core
-How to prune the blockchain and save disk space with Bitcoin Core
-How to run Bitcoin Core in headless mode or as a daemon
-How to compile Bitcoin Core from source code on Windows
-How to download and verify the checksums of Bitcoin Core binaries
-How to use the peer-to-peer network with Bitcoin Core
-How to report bugs and issues with Bitcoin Core
-How to join the Bitcoin Core community and mailing list
-How to donate to the Bitcoin Core project and developers
-How to review the code and documentation of Bitcoin Core
-How to test new features and improvements of Bitcoin Core
-How to understand the architecture and design of Bitcoin Core
-How to learn more about the history and vision of Bitcoin Core

-

How do I troubleshoot my bitcoin software?

-

To troubleshoot your bitcoin software, you need to identify the problem and find the possible solutions. Some common problems and solutions are:

- -

If none of these solutions work, you can also contact the customer support of your software or seek help from online forums or communities.

-

How do I secure my bitcoin software?

-

To secure your bitcoin software, you need to follow some basic security measures and precautions. Some of them are:

- -

-

This is the end of the article. I hope you found it useful and informative. If you have any questions or feedback, please let me know. Thank you for reading!

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download GTA 5 for Xbox Series XS Experience the Ultimate Grand Theft Auto V Adventure.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download GTA 5 for Xbox Series XS Experience the Ultimate Grand Theft Auto V Adventure.md deleted file mode 100644 index 1df181753d485efc5b4b246626a6f777badddc17..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download GTA 5 for Xbox Series XS Experience the Ultimate Grand Theft Auto V Adventure.md +++ /dev/null @@ -1,118 +0,0 @@ -
-

Download GTA 5 Xbox Series S: How to Experience the Ultimate Grand Theft Auto V on Your Console

-

Introduction

-

Grand Theft Auto V (GTA 5) is one of the most successful and influential video games of all time. It has sold over 150 million copies worldwide and has won numerous awards and accolades. It is also one of the most immersive and diverse open-world games ever created, featuring a rich story mode, a dynamic online multiplayer mode, and countless activities and missions to enjoy.

-

download gta 5 xbox series s


DOWNLOAD: https://urlin.us/2uSXnp



-

If you are a fan of GTA 5 or want to try it for the first time, you might be wondering how to download it on your Xbox Series S console. The good news is that GTA 5 is now available for Xbox Series S, with a range of technical upgrades and enhancements that make it even more amazing than before. In this article, we will show you how to download GTA 5 Xbox Series S, and how to enjoy it to the fullest.

-

How to download GTA 5 Xbox Series S

-

Step 1: Buy GTA 5 from the Xbox Store or a physical copy

-

The first step to download GTA 5 Xbox Series S is to buy the game from the Xbox Store or a physical copy. You can buy GTA 5 from the Xbox Store for $19.99 (on sale from $39.99) until March 21, 2023. You can also buy a physical copy of GTA 5 from various retailers, such as Amazon, Walmart, or GameStop.

-

Step 2: Install GTA 5 on your Xbox Series S

-

The next step is to install GTA 5 on your Xbox Series S console. If you bought the game from the Xbox Store, you can download it directly to your console by following the instructions on the screen. If you bought a physical copy of the game, you will need to insert the disc into your console and follow the prompts to install it.

-

How to download gta 5 on xbox series s
-Download gta 5 xbox series s free
-Download gta 5 xbox series s digital edition
-Download gta 5 xbox series s optimized version
-Download gta 5 xbox series s update
-Download gta 5 xbox series s online
-Download gta 5 xbox series s cheats
-Download gta 5 xbox series s mods
-Download gta 5 xbox series s disc
-Download gta 5 xbox series s size
-Download gta 5 xbox series s price
-Download gta 5 xbox series s release date
-Download gta 5 xbox series s gameplay
-Download gta 5 xbox series s trailer
-Download gta 5 xbox series s review
-Download gta 5 xbox series s graphics
-Download gta 5 xbox series s comparison
-Download gta 5 xbox series s backwards compatibility
-Download gta 5 xbox series s transfer progress
-Download gta 5 xbox series s best settings
-Download gta 5 xbox series s fps
-Download gta 5 xbox series s resolution
-Download gta 5 xbox series s loading time
-Download gta 5 xbox series s ray tracing
-Download gta 5 xbox series s enhanced edition
-Download gta 5 xbox series s expansion pack
-Download gta 5 xbox series s new features
-Download gta 5 xbox series s new cars
-Download gta 5 xbox series s new missions
-Download gta 5 xbox series s new map
-Download gta 5 xbox series s new weapons
-Download gta 5 xbox series s new characters
-Download gta 5 xbox series s new heists
-Download gta 5 xbox series s new radio stations
-Download gta 5 xbox series s new outfits
-Download gta 5 xbox series s new vehicles
-Download gta 5 xbox series s new activities
-Download gta 5 xbox series s new modes
-Download gta 5 xbox series s new events
-Download gta 5 xbox series s new challenges

-

The installation process may take some time, depending on your internet speed and storage space. The game requires about 100 GB of storage space, so make sure you have enough free space on your console before installing it.

-

Step 3: Transfer your GTA Online progress and characters from previous consoles

-

If you have played GTA Online on previous consoles, such as Xbox One or Xbox 360, you can transfer your progress and characters to your Xbox Series S console with a one-time migration. This way, you can continue your journey in GTA Online without losing any of your achievements, money, properties, vehicles, or items.

-

To transfer your GTA Online progress and characters, you will need to have a Rockstar Games Social Club account linked to both your previous console and your Xbox Series S console. You will also need to have played GTA Online at least once on both consoles. Then, you can follow these steps:

-
    -
  1. Launch GTA Online on your Xbox Series S console.
  2. -
  3. Select "Transfer Character" from the menu.
  4. -
  5. Log in with your Rockstar Games Social Club account.
  6. -
  7. Select the character you want to transfer from your previous console.
  8. -
  9. Confirm the transfer and wait for it to complete.
  10. How to enjoy GTA 5 Xbox Series S to the fullest -

    Explore the stunning visuals and performance enhancements of GTA 5 on Xbox Series S

    -

    One of the main reasons to download GTA 5 Xbox Series S is to experience the stunning visuals and performance enhancements that the game offers on the new console. GTA 5 on Xbox Series S runs at a smooth 60 frames per second, with improved resolution, textures, lighting, shadows, and reflections. The game also supports HDR (high dynamic range) and Dolby Atmos, which enhance the color and contrast of the image and the quality and immersion of the sound.

    -

    GTA 5 on Xbox Series S also features faster loading times, which means you can jump into the game and switch between characters more quickly and seamlessly. The game also takes advantage of the Xbox Series S's Quick Resume feature, which allows you to resume the game from where you left off without having to restart it.

    -

    Experience exclusive new content and features in GTA Online on Xbox Series S

    -

    Another reason to download GTA 5 Xbox Series S is to experience exclusive new content and features in GTA Online, the online multiplayer mode of GTA 5. GTA Online on Xbox Series S offers access to a range of new content and features that are not available on previous consoles, such as:

    - -

    Access all current and previous updates and expansions in GTA 5 and GTA Online on Xbox Series S

    -

    A final reason to download GTA 5 Xbox Series S is to access all current and previous updates and expansions in GTA 5 and GTA Online on your console. Since its release in 2013, GTA 5 has received numerous updates and expansions that have added new content, features, modes, missions, vehicles, weapons, characters, and more to the game. Some of the most notable updates and expansions include:

    - - - - - - - - - - - - - - - - - - - - - - - - - -
    Update/ExpansionDescription
    The Diamond Casino & ResortA luxurious casino and resort that offers a range of gambling games, entertainment options, penthouse suites, missions, vehicles, clothing, and more.
    The Doomsday HeistA three-part heist that involves saving the world from a rogue AI and a nuclear threat. You can team up with up to three other players and use futuristic vehicles, weapons, gadgets, and outfits.
    GunrunningA update that allows you to become an arms dealer and run your own bunker. You can research and manufacture new weapons, vehicles, mods, and upgrades. You can also access new missions, challenges, clothing, and more.
    Import/ExportA update that allows you to become a car thief and run your own vehicle warehouse. You can steal and sell high-end vehicles, customize them with new mods and features. You can also access new missions, vehicles, clothing, and more.
    BikersA update that allows you to become a biker gang leader and run your own clubhouse. You can recruit other players as prospects, run various businesses, access new missions, modes, vehicles, weapons, clothing, and more.
    -

    By downloading GTA 5 Xbox Series S, you can access all these updates and expansions on your console for free. You can also expect more updates and expansions in the future as Rockstar Games continues to support GTA 5 and GTA Online.

    -

    Conclusion

    -

    Summary of the main points

    -

    In conclusion, downloading GTA 5 Xbox Series S is a great way to experience the ultimate Grand Theft Auto V on your console. You can enjoy the stunning visuals and performance enhancements of GTA 5 on Xbox Series S, which runs at 60 fps, supports HDR and Dolby Atmos, and features faster loading times and Quick Resume. You can also experience exclusive new content and features in GTA Online on Xbox Series S, such as the Cayo Perico Heist, the Los Santos Tuners Update, and the Contract. Moreover, you can access all current and previous updates and expansions in GTA 5 and GTA Online on Xbox Series S, such as the Diamond Casino & Resort, the Doomsday Heist, Gunrunning, Import/Export, and Bikers.

    -

    Call to action and final thoughts

    -

    If you are ready to download GTA 5 Xbox Series S and enjoy the ultimate Grand Theft Auto V on your console, you can buy the game from the Xbox Store or a physical copy from various retailers. You can also transfer your GTA Online progress and characters from previous consoles with a one-time migration. GTA 5 Xbox Series S is a game that will keep you entertained for hours, days, weeks, and months with its endless possibilities and content. Don't miss this opportunity to experience one of the best games of all time on your Xbox Series S console.

    -

    FAQs

    -

    Q: Is GTA 5 Xbox Series S compatible with Xbox Series X?

    -

    A: Yes, GTA 5 Xbox Series S is compatible with Xbox Series X, as both consoles are part of the same generation. You can play GTA 5 on either console with the same disc or digital copy.

    -

    Q: Is GTA 5 Xbox Series S different from GTA 5 Xbox One?

    -

    A: Yes, GTA 5 Xbox Series S is different from GTA 5 Xbox One in terms of technical upgrades and enhancements. GTA 5 Xbox Series S runs at a higher frame rate, resolution, and quality than GTA 5 Xbox One. It also features faster loading times, Quick Resume, HDR, and Dolby Atmos support.

    -

    Q: How much does GTA 5 Xbox Series S cost?

    -

    A: GTA 5 Xbox Series S costs $19.99 (on sale from $39.99) until March 21, 2023 on the Xbox Store. The price may vary depending on the retailer if you buy a physical copy of the game.

    -

    Q: How long does it take to download GTA 5 Xbox Series S?

    -

    A: The download time of GTA 5 Xbox Series S depends on your internet speed and storage space. The game requires about 100 GB of storage space, so make sure you have enough free space on your console before installing it. The download time may range from a few minutes to a few hours.

    -

    Q: Can I play GTA Online with other players on different consoles?

    -

    A: Yes, you can play GTA Online with other players on different consoles as long as they are part of the same generation. For example, you can play GTA Online with players on Xbox Series X or Xbox One if you have an Xbox Series S console. However, you cannot play GTA Online with players on PlayStation or PC.

    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Chikii How to Stream Hundreds of Games on Android without Downloading.md b/spaces/1phancelerku/anime-remove-background/Chikii How to Stream Hundreds of Games on Android without Downloading.md deleted file mode 100644 index 347abcafa4ac078fba09593e60829cd830cdf220..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Chikii How to Stream Hundreds of Games on Android without Downloading.md +++ /dev/null @@ -1,129 +0,0 @@ -
    -

    Chiki Android: A Cloud Gaming Platform for PC and Console Games

    -

    Do you love playing PC and console games but don't have enough time, money or space to buy them? Do you wish you could play your favorite games on your phone without downloading or installing anything? If yes, then you should try Chiki Android, a cloud gaming platform that lets you play PC and console games on your phone with just a few taps.

    -

    chiki android


    Download File ››››› https://jinyurl.com/2uNT7T



    -

    What is Chiki Android?

    -

    Chiki Android is a mobile game app that lets you play PC and console games on your phone

    -

    Chiki Android is a mobile game app that allows you to stream PC and console games from the cloud to your phone. You don't need to download or install anything, just log in and play. You can use your phone's touchscreen, gyroscope or external controller to control the games. You can also chat with other players and share your gameplay screenshots and videos.

    -

    Chiki Android has over 400+ and 200+ 3A games in Steam, PS4, Xbox One and Switch game libraries

    -

    Chiki Android has a huge game library that includes over 400+ and 200+ 3A games from Steam, PS4, Xbox One and Switch platforms. You can find all kinds of genres and categories, such as action, adventure, racing, sports, simulation, RPG, strategy, horror, puzzle, etc. Some of the popular games that you can play on Chiki Android are:

    - - - - - - - - - -Chiki Android does not require download or installation, just log in and play -

    One of the best features of Chiki Android is that it does not require you to download or install any game on your phone. You can save your phone storage space and battery life by playing the games directly from the cloud. All you need is a Chiki Android account and a stable internet connection. You can log in with your email, Facebook, Google or Apple ID and start playing right away.

    -

    How to use Chiki Android?

    -

    Download Chiki Android from Google Play or APKCombo

    -

    To use Chiki Android, you need to download the app from Google Play or APKCombo. The app is free to download and has a size of about 30 MB. You can also scan the QR code on the official website to get the app. The app is compatible with Android 5.0 and above devices.

    -

    Create an account or log in with your existing account

    -

    After downloading the app, you need to create an account or log in with your existing account. You can use your email, Facebook, Google or Apple ID to sign up or sign in. You will also need to verify your phone number and agree to the terms and conditions of the app.

    -

    chiki android app download
    -chiki android cloud gaming
    -chiki android apk mod
    -chiki android play pc games
    -chiki android review
    -chiki android hack
    -chiki android emulator
    -chiki android controller support
    -chiki android alternative
    -chiki android free coins
    -chiki android games list
    -chiki android reddit
    -chiki android ios
    -chiki android offline
    -chiki android requirements
    -chiki android update
    -chiki android error
    -chiki android vpn
    -chiki android beta
    -chiki android login
    -chiki android not working
    -chiki android gta 5
    -chiki android spider man
    -chiki android naruto
    -chiki android dragon ball z
    -chiki android resident evil 4
    -chiki android god of war
    -chiki android forza horizon 5
    -chiki android red dead redemption 2
    -chiki android mortal kombat 11
    -chiki android elden ring
    -chiki android demon slayer
    -chiki android one piece pirate warriors 4
    -chiki android the last of us part i
    -chiki android marvel's spider man remastered
    -chiki android ghostwire tokyo
    -chiki android attack on titan
    -chiki android grand theft auto san andreas
    -chiki android cuphead
    -chiki android battlefield 5
    -chiki android tekken 7
    -chiki android resident evil village
    -chiki android dragonball fighter z
    -chiki android stray
    -chiki android sims 4
    -chikii andro

    -

    Browse the game library and choose a game to play

    -

    Once you have logged in, you can browse the game library and choose a game to play. You can filter the games by platform, genre, popularity, rating, etc. You can also search for a specific game by typing its name in the search bar. You can see the game details, screenshots, videos, reviews and ratings before playing it.

    -

    Enjoy the game on your phone with high-quality graphics and sound

    -

    When you have selected a game to play, you can tap on the play button and wait for a few seconds for the game to load. You can then enjoy the game on your phone with high-quality graphics and sound. You can adjust the game settings, such as resolution, frame rate, audio, etc., according to your preference. You can also use your phone's touchscreen, gyroscope or external controller to control the game.

    -

    What are the benefits of Chiki Android?

    -

    Chiki Android lets you play PC and console games anytime and anywhere

    -

    The main benefit of Chiki Android is that it lets you play PC and console games anytime and anywhere. You don't need to buy expensive gaming devices or accessories to play your favorite games. You can simply use your phone as a portable gaming console and play the games on the go. Whether you are at home, at work, at school, on a bus, on a plane or anywhere else, you can enjoy playing PC and console games on your phone with Chiki Android.

    -

    Chiki Android saves your phone storage space and battery life

    -

    Another benefit of Chiki Android is that it saves your phone storage space and battery life. Since you don't need to download or install any game on your phone, you can save a lot of space that you can use for other apps or files. You also don't need to worry about updating or deleting any game from your phone. Moreover, since you are playing the games from the cloud, you don't need to use much of your phone's CPU or GPU power, which means you can save your phone's battery life as well.

    -

    Chiki Android offers VIP subscription that allows you to play unpurchased games for free

    -

    A third benefit of Chiki Android is that it offers VIP subscription that allows you to play unpurchased games for free. If you want to play more games without buying them, you can subscribe to Chiki Android VIP plan for $9.99 per month or $99.99 per year. With this plan, you can access over 200+ 3A games that are normally paid on other platforms. You can also enjoy faster loading speed, higher resolution, unlimited gameplay time and exclusive VIP customer service.

    -

    Chiki Android provides online multiplayer mode to play with your friends

    -

    A fourth benefit of Chiki Android is that it provides online multiplayer mode to play with your friends. If you want to have more fun and challenge with other players, you can join the online multiplayer mode of Chiki Android. You can invite your friends to join your game room or join other players' game rooms. You can also chat with other players via voice or text messages and share your gameplay screenshots and videos.

    -

    What are the drawbacks of Chiki Android?

    -

    Chiki Android requires a stable internet connection to stream the games

    -

    The main drawback of Chiki Android is that it requires a stable internet connection to stream the games. You need to have at least 10 Mbps of download speed and 5 Mbps of upload speed to play the games smoothly. You also need to have a low ping and latency to avoid lag or delay. If your internet connection is slow, unstable or interrupted, you may experience poor game quality, buffering, freezing or disconnection.

    -

    Chiki Android may have some latency or lag issues depending on your network speed and location

    -

    Another drawback of Chiki Android is that it may have some latency or lag issues depending on your network speed and location. Since you are playing the games from the cloud, there may be some delay between your input and the game response. This may affect your game performance, especially in fast-paced or competitive games. The latency or lag may vary depending on your network speed, server location, game type, etc. You can check the ping and latency of each game before playing it.

    -

    Chiki Android may not support some games or devices due to compatibility issues

    -

    A third drawback of Chiki Android is that it may not support some games or devices due to compatibility issues. Some games may not be available on Chiki Android due to licensing or technical reasons. Some games may also have bugs or glitches that affect the game quality or functionality. Some devices may not be compatible with Chiki Android due to hardware or software limitations. You can check the game and device compatibility on the official website or app.

    -

    Conclusion

    -

    Chiki Android is a cloud gaming platform that lets you play PC and console games on your phone. It has many benefits, such as a large game library, no download or installation, VIP subscription and online multiplayer mode. It also has some drawbacks, such as internet connection requirement, latency or lag issues and compatibility issues. If you are looking for a way to enjoy PC and console games on your phone without spending much money or space, you should give Chiki Android a try.

    -

    FAQs

    -

    Q: How much does Chiki Android cost?

    -

    A: Chiki Android is free to download and use. You can play any game that you have purchased on other platforms for free on Chiki Android. You can also subscribe to Chiki Android VIP plan for $9.99 per month or $99.99 per year to play over 200+ 3A games that are normally paid on other platforms.

    -

    Q: What are the minimum requirements for Chiki Android?

    -

    A: The minimum requirements for Chiki Android are:

    - -

    Q: How can I contact Chiki Android customer service?

    -

    A: You can contact Chiki Android customer service by:

    - -

    Q: How can I improve my game quality on Chiki Android?

    -

    A: You can improve your game quality on Chiki Android by:

    - -

    Q: How can I share my feedback or suggestions on Chiki Android?

    -

    A: You can share your feedback or suggestions on Chiki Android by:

    - - - - - -

Conclusion

    -

Clash of Clans is a fun and addictive game that you can play on your Linux computer using Anbox, an "Android in a box" tool that lets you run Android apps on any Linux distribution. You just need to install Anbox using snap, download an APK file for Clash of Clans from APKPure, and use adb to install and run the game in Anbox.

    - -

We hope this article has helped you learn how to download Clash of Clans on Linux and have fun playing it. If you have any questions or comments, please let us know in the comments below. Happy clashing!

    -

Frequently Asked Questions

    -

Is Anbox the only way to play Clash of Clans on Linux?

    -

No, Anbox is not the only way to play Clash of Clans on Linux. There are other methods, such as using an Android emulator, a virtual machine, or a dual-boot setup. However, Anbox is one of the easiest and fastest ways to play Clash of Clans on Linux, since it does not require installing a separate operating system or creating a virtual device.

    -

Can I play Clash of Clans on Linux with other players online?

    -

Yes, you can play Clash of Clans on Linux with other players online, as long as you have a stable internet connection and a valid account. You can join or create clans, chat with other players, and take part in clan wars and events. However, you may not be able to access some features that require Google Play Services, such as Google Play Games or in-app purchases.

    -

Can I play Clash of Clans on Linux offline?

    -

No, you cannot play Clash of Clans offline on Linux. Clash of Clans is an online game that requires a constant internet connection to work. If you lose your internet connection or try to play offline, you will see an error message that says "Unable to connect to the server" and the game will close.

    -

Can I transfer my progress from my smartphone to my Linux computer?

    -

Yes, you can transfer your progress from your smartphone to your Linux computer, as long as you have linked your game account to a Google account or a Supercell ID. To do this, follow these steps:

    -
      -
1. On your smartphone, open Clash of Clans and go to Settings > Account > Link Device.
    2. -
3. Select "This is the old device" and choose whether to link your account to a Google account or a Supercell ID.
    4. - -
5. On your Linux computer, open Clash of Clans and go to Settings > Account > Link Device.
    6. -
7. Select "This is the new device" and choose whether to link your account to a Google account or a Supercell ID.
    8. -
9. Follow the on-screen instructions to link your account and load your existing village.
    10. -
    -

Can I update Clash of Clans on Linux?

    -

Yes, you can update Clash of Clans on Linux, but not automatically. You will have to manually download and install the latest version of the APK file for Clash of Clans from APKPure or another trusted source. To do this, follow these steps:

    -
      -
1. Delete the old version of the APK file for Clash of Clans from your Linux folder.
    2. -
3. Download the latest version of the APK file for Clash of Clans from APKPure or another trusted source.
    4. -
5. Install the new version of the APK file using adb by typing: adb install -r Clash-of-Clans.apk (replace Clash-of-Clans.apk with the name of your APK file); a scripted version of this step is sketched right after this list.
    6. -
7. Wait for the installation to complete and launch Clash of Clans in Anbox.
    8. -
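If you prefer to script the reinstall step rather than type the command by hand, the short sketch below wraps the same adb call with Python's standard library. It assumes adb is on your PATH and is already configured to reach the Anbox container; the APK file name is only an example, so replace it with the file you actually downloaded.

```python
import subprocess

apk_path = "Clash-of-Clans.apk"  # example name; use your downloaded APK file

# "-r" reinstalls the app while keeping its existing data,
# mirroring the manual command: adb install -r <apk>
result = subprocess.run(
    ["adb", "install", "-r", apk_path],
    capture_output=True,
    text=True,
)

if result.returncode == 0:
    print("APK installed. Launch Clash of Clans from the Anbox application list.")
else:
    print("adb reported an error:")
    print(result.stderr or result.stdout)
```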

    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/endpoint.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/endpoint.py deleted file mode 100644 index adc622c25a647faf2ad700d6a524584d2ccf4709..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/endpoint.py +++ /dev/null @@ -1,443 +0,0 @@ -# Copyright (c) 2012-2013 Mitch Garnaat http://garnaat.org/ -# Copyright 2012-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# http://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. See the License for the specific -# language governing permissions and limitations under the License. - -import datetime -import logging -import os -import threading -import time -import uuid - -from botocore import parsers -from botocore.awsrequest import create_request_object -from botocore.exceptions import HTTPClientError -from botocore.history import get_global_history_recorder -from botocore.hooks import first_non_none_response -from botocore.httpchecksum import handle_checksum_body -from botocore.httpsession import URLLib3Session -from botocore.response import StreamingBody -from botocore.utils import ( - get_environ_proxies, - is_valid_endpoint_url, - is_valid_ipv6_endpoint_url, -) - -logger = logging.getLogger(__name__) -history_recorder = get_global_history_recorder() -DEFAULT_TIMEOUT = 60 -MAX_POOL_CONNECTIONS = 10 - - -def convert_to_response_dict(http_response, operation_model): - """Convert an HTTP response object to a request dict. - - This converts the requests library's HTTP response object to - a dictionary. - - :type http_response: botocore.vendored.requests.model.Response - :param http_response: The HTTP response from an AWS service request. - - :rtype: dict - :return: A response dictionary which will contain the following keys: - * headers (dict) - * status_code (int) - * body (string or file-like object) - - """ - response_dict = { - 'headers': http_response.headers, - 'status_code': http_response.status_code, - 'context': { - 'operation_name': operation_model.name, - }, - } - if response_dict['status_code'] >= 300: - response_dict['body'] = http_response.content - elif operation_model.has_event_stream_output: - response_dict['body'] = http_response.raw - elif operation_model.has_streaming_output: - length = response_dict['headers'].get('content-length') - response_dict['body'] = StreamingBody(http_response.raw, length) - else: - response_dict['body'] = http_response.content - return response_dict - - -class Endpoint: - """ - Represents an endpoint for a particular service in a specific - region. Only an endpoint can make requests. - - :ivar service: The Service object that describes this endpoints - service. - :ivar host: The fully qualified endpoint hostname. - :ivar session: The session object. 
- """ - - def __init__( - self, - host, - endpoint_prefix, - event_emitter, - response_parser_factory=None, - http_session=None, - ): - self._endpoint_prefix = endpoint_prefix - self._event_emitter = event_emitter - self.host = host - self._lock = threading.Lock() - if response_parser_factory is None: - response_parser_factory = parsers.ResponseParserFactory() - self._response_parser_factory = response_parser_factory - self.http_session = http_session - if self.http_session is None: - self.http_session = URLLib3Session() - - def __repr__(self): - return f'{self._endpoint_prefix}({self.host})' - - def close(self): - self.http_session.close() - - def make_request(self, operation_model, request_dict): - logger.debug( - "Making request for %s with params: %s", - operation_model, - request_dict, - ) - return self._send_request(request_dict, operation_model) - - def create_request(self, params, operation_model=None): - request = create_request_object(params) - if operation_model: - request.stream_output = any( - [ - operation_model.has_streaming_output, - operation_model.has_event_stream_output, - ] - ) - service_id = operation_model.service_model.service_id.hyphenize() - event_name = 'request-created.{service_id}.{op_name}'.format( - service_id=service_id, op_name=operation_model.name - ) - self._event_emitter.emit( - event_name, - request=request, - operation_name=operation_model.name, - ) - prepared_request = self.prepare_request(request) - return prepared_request - - def _encode_headers(self, headers): - # In place encoding of headers to utf-8 if they are unicode. - for key, value in headers.items(): - if isinstance(value, str): - headers[key] = value.encode('utf-8') - - def prepare_request(self, request): - self._encode_headers(request.headers) - return request.prepare() - - def _calculate_ttl( - self, response_received_timestamp, date_header, read_timeout - ): - local_timestamp = datetime.datetime.utcnow() - date_conversion = datetime.datetime.strptime( - date_header, "%a, %d %b %Y %H:%M:%S %Z" - ) - estimated_skew = date_conversion - response_received_timestamp - ttl = ( - local_timestamp - + datetime.timedelta(seconds=read_timeout) - + estimated_skew - ) - return ttl.strftime('%Y%m%dT%H%M%SZ') - - def _set_ttl(self, retries_context, read_timeout, success_response): - response_date_header = success_response[0].headers.get('Date') - has_streaming_input = retries_context.get('has_streaming_input') - if response_date_header and not has_streaming_input: - try: - response_received_timestamp = datetime.datetime.utcnow() - retries_context['ttl'] = self._calculate_ttl( - response_received_timestamp, - response_date_header, - read_timeout, - ) - except Exception: - logger.debug( - "Exception received when updating retries context with TTL", - exc_info=True, - ) - - def _update_retries_context(self, context, attempt, success_response=None): - retries_context = context.setdefault('retries', {}) - retries_context['attempt'] = attempt - if 'invocation-id' not in retries_context: - retries_context['invocation-id'] = str(uuid.uuid4()) - - if success_response: - read_timeout = context['client_config'].read_timeout - self._set_ttl(retries_context, read_timeout, success_response) - - def _send_request(self, request_dict, operation_model): - attempts = 1 - context = request_dict['context'] - self._update_retries_context(context, attempts) - request = self.create_request(request_dict, operation_model) - success_response, exception = self._get_response( - request, operation_model, context - ) - while 
self._needs_retry( - attempts, - operation_model, - request_dict, - success_response, - exception, - ): - attempts += 1 - self._update_retries_context(context, attempts, success_response) - # If there is a stream associated with the request, we need - # to reset it before attempting to send the request again. - # This will ensure that we resend the entire contents of the - # body. - request.reset_stream() - # Create a new request when retried (including a new signature). - request = self.create_request(request_dict, operation_model) - success_response, exception = self._get_response( - request, operation_model, context - ) - if ( - success_response is not None - and 'ResponseMetadata' in success_response[1] - ): - # We want to share num retries, not num attempts. - total_retries = attempts - 1 - success_response[1]['ResponseMetadata'][ - 'RetryAttempts' - ] = total_retries - if exception is not None: - raise exception - else: - return success_response - - def _get_response(self, request, operation_model, context): - # This will return a tuple of (success_response, exception) - # and success_response is itself a tuple of - # (http_response, parsed_dict). - # If an exception occurs then the success_response is None. - # If no exception occurs then exception is None. - success_response, exception = self._do_get_response( - request, operation_model, context - ) - kwargs_to_emit = { - 'response_dict': None, - 'parsed_response': None, - 'context': context, - 'exception': exception, - } - if success_response is not None: - http_response, parsed_response = success_response - kwargs_to_emit['parsed_response'] = parsed_response - kwargs_to_emit['response_dict'] = convert_to_response_dict( - http_response, operation_model - ) - service_id = operation_model.service_model.service_id.hyphenize() - self._event_emitter.emit( - f"response-received.{service_id}.{operation_model.name}", - **kwargs_to_emit, - ) - return success_response, exception - - def _do_get_response(self, request, operation_model, context): - try: - logger.debug("Sending http request: %s", request) - history_recorder.record( - 'HTTP_REQUEST', - { - 'method': request.method, - 'headers': request.headers, - 'streaming': operation_model.has_streaming_input, - 'url': request.url, - 'body': request.body, - }, - ) - service_id = operation_model.service_model.service_id.hyphenize() - event_name = f"before-send.{service_id}.{operation_model.name}" - responses = self._event_emitter.emit(event_name, request=request) - http_response = first_non_none_response(responses) - if http_response is None: - http_response = self._send(request) - except HTTPClientError as e: - return (None, e) - except Exception as e: - logger.debug( - "Exception received when sending HTTP request.", exc_info=True - ) - return (None, e) - # This returns the http_response and the parsed_data. 
- response_dict = convert_to_response_dict( - http_response, operation_model - ) - handle_checksum_body( - http_response, - response_dict, - context, - operation_model, - ) - - http_response_record_dict = response_dict.copy() - http_response_record_dict[ - 'streaming' - ] = operation_model.has_streaming_output - history_recorder.record('HTTP_RESPONSE', http_response_record_dict) - - protocol = operation_model.metadata['protocol'] - parser = self._response_parser_factory.create_parser(protocol) - parsed_response = parser.parse( - response_dict, operation_model.output_shape - ) - # Do a second parsing pass to pick up on any modeled error fields - # NOTE: Ideally, we would push this down into the parser classes but - # they currently have no reference to the operation or service model - # The parsers should probably take the operation model instead of - # output shape but we can't change that now - if http_response.status_code >= 300: - self._add_modeled_error_fields( - response_dict, - parsed_response, - operation_model, - parser, - ) - history_recorder.record('PARSED_RESPONSE', parsed_response) - return (http_response, parsed_response), None - - def _add_modeled_error_fields( - self, - response_dict, - parsed_response, - operation_model, - parser, - ): - error_code = parsed_response.get("Error", {}).get("Code") - if error_code is None: - return - service_model = operation_model.service_model - error_shape = service_model.shape_for_error_code(error_code) - if error_shape is None: - return - modeled_parse = parser.parse(response_dict, error_shape) - # TODO: avoid naming conflicts with ResponseMetadata and Error - parsed_response.update(modeled_parse) - - def _needs_retry( - self, - attempts, - operation_model, - request_dict, - response=None, - caught_exception=None, - ): - service_id = operation_model.service_model.service_id.hyphenize() - event_name = f"needs-retry.{service_id}.{operation_model.name}" - responses = self._event_emitter.emit( - event_name, - response=response, - endpoint=self, - operation=operation_model, - attempts=attempts, - caught_exception=caught_exception, - request_dict=request_dict, - ) - handler_response = first_non_none_response(responses) - if handler_response is None: - return False - else: - # Request needs to be retried, and we need to sleep - # for the specified number of times. 
- logger.debug( - "Response received to retry, sleeping for %s seconds", - handler_response, - ) - time.sleep(handler_response) - return True - - def _send(self, request): - return self.http_session.send(request) - - -class EndpointCreator: - def __init__(self, event_emitter): - self._event_emitter = event_emitter - - def create_endpoint( - self, - service_model, - region_name, - endpoint_url, - verify=None, - response_parser_factory=None, - timeout=DEFAULT_TIMEOUT, - max_pool_connections=MAX_POOL_CONNECTIONS, - http_session_cls=URLLib3Session, - proxies=None, - socket_options=None, - client_cert=None, - proxies_config=None, - ): - if not is_valid_endpoint_url( - endpoint_url - ) and not is_valid_ipv6_endpoint_url(endpoint_url): - raise ValueError("Invalid endpoint: %s" % endpoint_url) - - if proxies is None: - proxies = self._get_proxies(endpoint_url) - endpoint_prefix = service_model.endpoint_prefix - - logger.debug('Setting %s timeout as %s', endpoint_prefix, timeout) - http_session = http_session_cls( - timeout=timeout, - proxies=proxies, - verify=self._get_verify_value(verify), - max_pool_connections=max_pool_connections, - socket_options=socket_options, - client_cert=client_cert, - proxies_config=proxies_config, - ) - - return Endpoint( - endpoint_url, - endpoint_prefix=endpoint_prefix, - event_emitter=self._event_emitter, - response_parser_factory=response_parser_factory, - http_session=http_session, - ) - - def _get_proxies(self, url): - # We could also support getting proxies from a config file, - # but for now proxy support is taken from the environment. - return get_environ_proxies(url) - - def _get_verify_value(self, verify): - # This is to account for: - # https://github.com/kennethreitz/requests/issues/1436 - # where we need to honor REQUESTS_CA_BUNDLE because we're creating our - # own request objects. - # First, if verify is not None, then the user explicitly specified - # a value so this automatically wins. - if verify is not None: - return verify - # Otherwise use the value from REQUESTS_CA_BUNDLE, or default to - # True if the env var does not exist. 
- return os.environ.get('REQUESTS_CA_BUNDLE', True) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/jmespath/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/jmespath/__init__.py deleted file mode 100644 index c2439e37d4748be9bb20714ecc780014f468f2f2..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/jmespath/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -from jmespath import parser -from jmespath.visitor import Options - -__version__ = '1.0.1' - - -def compile(expression): - return parser.Parser().parse(expression) - - -def search(expression, data, options=None): - return parser.Parser().parse(expression).search(data, options=options) diff --git a/spaces/CNXT/CHaTx/README.md b/spaces/CNXT/CHaTx/README.md deleted file mode 100644 index 721deb057ea8c20a1c1886020ef1c5254e609c90..0000000000000000000000000000000000000000 --- a/spaces/CNXT/CHaTx/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: CHaTx -emoji: 🦀 -colorFrom: pink -colorTo: indigo -sdk: docker -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/getting_started.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/getting_started.md deleted file mode 100644 index e90bde77a3197b77f4cfdce86ca8f96491650acd..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/tutorials/getting_started.md +++ /dev/null @@ -1 +0,0 @@ -../../GETTING_STARTED.md \ No newline at end of file diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/grid_feats/visual_genome.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/grid_feats/visual_genome.py deleted file mode 100644 index a34ac6f0d9dac77a4431a2cee0ad9b9729d0e8f6..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/grid-feats-vqa/grid_feats/visual_genome.py +++ /dev/null @@ -1,149 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import contextlib -import io -import logging -import os -from fvcore.common.file_io import PathManager -from fvcore.common.timer import Timer - -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.structures import BoxMode - - -logger = logging.getLogger(__name__) - -def load_coco_with_attributes_json(json_file, - image_root, - dataset_name=None, - extra_annotation_keys=None): - """ - Extend load_coco_json() with additional support for attributes - """ - from pycocotools.coco import COCO - - timer = Timer() - json_file = PathManager.get_local_path(json_file) - with contextlib.redirect_stdout(io.StringIO()): - coco_api = COCO(json_file) - if timer.seconds() > 1: - logger.info("Loading {} takes {:.2f} seconds.".format(json_file, timer.seconds())) - - id_map = None - if dataset_name is not None: - meta = MetadataCatalog.get(dataset_name) - cat_ids = sorted(coco_api.getCatIds()) - cats = coco_api.loadCats(cat_ids) - thing_classes = [c["name"] for c in sorted(cats, key=lambda x: x["id"])] - meta.thing_classes = thing_classes - if not (min(cat_ids) == 1 and max(cat_ids) == len(cat_ids)): - if "coco" not in dataset_name: - logger.warning( - """ -Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you. 
-""" - ) - id_map = {v: i for i, v in enumerate(cat_ids)} - meta.thing_dataset_id_to_contiguous_id = id_map - - img_ids = sorted(coco_api.imgs.keys()) - imgs = coco_api.loadImgs(img_ids) - anns = [coco_api.imgToAnns[img_id] for img_id in img_ids] - - if "minival" not in json_file: - ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image] - assert len(set(ann_ids)) == len(ann_ids), "Annotation ids in '{}' are not unique!".format( - json_file - ) - - imgs_anns = list(zip(imgs, anns)) - - logger.info("Loaded {} images in COCO format from {}".format(len(imgs_anns), json_file)) - - dataset_dicts = [] - - ann_keys = ["iscrowd", "bbox", "keypoints", "category_id"] + (extra_annotation_keys or []) - - num_instances_without_valid_segmentation = 0 - - for (img_dict, anno_dict_list) in imgs_anns: - record = {} - record["file_name"] = os.path.join(image_root, img_dict["file_name"]) - record["height"] = img_dict["height"] - record["width"] = img_dict["width"] - image_id = record["image_id"] = img_dict["id"] - - objs = [] - for anno in anno_dict_list: - assert anno["image_id"] == image_id - - assert anno.get("ignore", 0) == 0, '"ignore" in COCO json file is not supported.' - - obj = {key: anno[key] for key in ann_keys if key in anno} - - segm = anno.get("segmentation", None) - if segm: - if not isinstance(segm, dict): - segm = [poly for poly in segm if len(poly) % 2 == 0 and len(poly) >= 6] - if len(segm) == 0: - num_instances_without_valid_segmentation += 1 - continue - obj["segmentation"] = segm - - keypts = anno.get("keypoints", None) - if keypts: - for idx, v in enumerate(keypts): - if idx % 3 != 2: - keypts[idx] = v + 0.5 - obj["keypoints"] = keypts - - attrs = anno.get("attribute_ids", None) - if attrs: # list[int] - obj["attribute_ids"] = attrs - - obj["bbox_mode"] = BoxMode.XYWH_ABS - if id_map: - obj["category_id"] = id_map[obj["category_id"]] - objs.append(obj) - record["annotations"] = objs - dataset_dicts.append(record) - - if num_instances_without_valid_segmentation > 0: - logger.warning( - "Filtered out {} instances without valid segmentation. 
" - "There might be issues in your dataset generation process.".format( - num_instances_without_valid_segmentation - ) - ) - return dataset_dicts - -def register_coco_instances_with_attributes(name, metadata, json_file, image_root): - DatasetCatalog.register(name, lambda: load_coco_with_attributes_json(json_file, - image_root, - name)) - MetadataCatalog.get(name).set( - json_file=json_file, image_root=image_root, evaluator_type="coco", **metadata - ) - -# ==== Predefined splits for visual genome images =========== -_PREDEFINED_SPLITS_VG = { - "visual_genome_train": ("visual_genome/images", - "visual_genome/annotations/visual_genome_train.json"), - "visual_genome_val": ("visual_genome/images", - "visual_genome/annotations/visual_genome_val.json"), - "visual_genome_test": ("visual_genome/images", - "visual_genome/annotations/visual_genome_test.json"), -} - -def register_all_vg(root): - for key, (image_root, json_file) in _PREDEFINED_SPLITS_VG.items(): - register_coco_instances_with_attributes( - key, - {}, # no meta data - os.path.join(root, json_file), - os.path.join(root, image_root), - ) - -# Register them all under "./datasets" -_root = os.getenv("DETECTRON2_DATASETS", "datasets") -register_all_vg(_root) \ No newline at end of file diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_numpy_dtypes.cpp b/spaces/CVPR/LIVE/pybind11/tests/test_numpy_dtypes.cpp deleted file mode 100644 index 467e0253f7eb422da4fff3b4db7e4836fc2c11f2..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/test_numpy_dtypes.cpp +++ /dev/null @@ -1,474 +0,0 @@ -/* - tests/test_numpy_dtypes.cpp -- Structured and compound NumPy dtypes - - Copyright (c) 2016 Ivan Smirnov - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. 
-*/ - -#include "pybind11_tests.h" -#include - -#ifdef __GNUC__ -#define PYBIND11_PACKED(cls) cls __attribute__((__packed__)) -#else -#define PYBIND11_PACKED(cls) __pragma(pack(push, 1)) cls __pragma(pack(pop)) -#endif - -namespace py = pybind11; - -struct SimpleStruct { - bool bool_; - uint32_t uint_; - float float_; - long double ldbl_; -}; - -std::ostream& operator<<(std::ostream& os, const SimpleStruct& v) { - return os << "s:" << v.bool_ << "," << v.uint_ << "," << v.float_ << "," << v.ldbl_; -} - -struct SimpleStructReordered { - bool bool_; - float float_; - uint32_t uint_; - long double ldbl_; -}; - -PYBIND11_PACKED(struct PackedStruct { - bool bool_; - uint32_t uint_; - float float_; - long double ldbl_; -}); - -std::ostream& operator<<(std::ostream& os, const PackedStruct& v) { - return os << "p:" << v.bool_ << "," << v.uint_ << "," << v.float_ << "," << v.ldbl_; -} - -PYBIND11_PACKED(struct NestedStruct { - SimpleStruct a; - PackedStruct b; -}); - -std::ostream& operator<<(std::ostream& os, const NestedStruct& v) { - return os << "n:a=" << v.a << ";b=" << v.b; -} - -struct PartialStruct { - bool bool_; - uint32_t uint_; - float float_; - uint64_t dummy2; - long double ldbl_; -}; - -struct PartialNestedStruct { - uint64_t dummy1; - PartialStruct a; - uint64_t dummy2; -}; - -struct UnboundStruct { }; - -struct StringStruct { - char a[3]; - std::array b; -}; - -struct ComplexStruct { - std::complex cflt; - std::complex cdbl; -}; - -std::ostream& operator<<(std::ostream& os, const ComplexStruct& v) { - return os << "c:" << v.cflt << "," << v.cdbl; -} - -struct ArrayStruct { - char a[3][4]; - int32_t b[2]; - std::array c; - std::array d[4]; -}; - -PYBIND11_PACKED(struct StructWithUglyNames { - int8_t __x__; - uint64_t __y__; -}); - -enum class E1 : int64_t { A = -1, B = 1 }; -enum E2 : uint8_t { X = 1, Y = 2 }; - -PYBIND11_PACKED(struct EnumStruct { - E1 e1; - E2 e2; -}); - -std::ostream& operator<<(std::ostream& os, const StringStruct& v) { - os << "a='"; - for (size_t i = 0; i < 3 && v.a[i]; i++) os << v.a[i]; - os << "',b='"; - for (size_t i = 0; i < 3 && v.b[i]; i++) os << v.b[i]; - return os << "'"; -} - -std::ostream& operator<<(std::ostream& os, const ArrayStruct& v) { - os << "a={"; - for (int i = 0; i < 3; i++) { - if (i > 0) - os << ','; - os << '{'; - for (int j = 0; j < 3; j++) - os << v.a[i][j] << ','; - os << v.a[i][3] << '}'; - } - os << "},b={" << v.b[0] << ',' << v.b[1]; - os << "},c={" << int(v.c[0]) << ',' << int(v.c[1]) << ',' << int(v.c[2]); - os << "},d={"; - for (int i = 0; i < 4; i++) { - if (i > 0) - os << ','; - os << '{' << v.d[i][0] << ',' << v.d[i][1] << '}'; - } - return os << '}'; -} - -std::ostream& operator<<(std::ostream& os, const EnumStruct& v) { - return os << "e1=" << (v.e1 == E1::A ? "A" : "B") << ",e2=" << (v.e2 == E2::X ? 
"X" : "Y"); -} - -template -py::array mkarray_via_buffer(size_t n) { - return py::array(py::buffer_info(nullptr, sizeof(T), - py::format_descriptor::format(), - 1, { n }, { sizeof(T) })); -} - -#define SET_TEST_VALS(s, i) do { \ - s.bool_ = (i) % 2 != 0; \ - s.uint_ = (uint32_t) (i); \ - s.float_ = (float) (i) * 1.5f; \ - s.ldbl_ = (long double) (i) * -2.5L; } while (0) - -template -py::array_t create_recarray(size_t n) { - auto arr = mkarray_via_buffer(n); - auto req = arr.request(); - auto ptr = static_cast(req.ptr); - for (size_t i = 0; i < n; i++) { - SET_TEST_VALS(ptr[i], i); - } - return arr; -} - -template -py::list print_recarray(py::array_t arr) { - const auto req = arr.request(); - const auto ptr = static_cast(req.ptr); - auto l = py::list(); - for (ssize_t i = 0; i < req.size; i++) { - std::stringstream ss; - ss << ptr[i]; - l.append(py::str(ss.str())); - } - return l; -} - -py::array_t test_array_ctors(int i) { - using arr_t = py::array_t; - - std::vector data { 1, 2, 3, 4, 5, 6 }; - std::vector shape { 3, 2 }; - std::vector strides { 8, 4 }; - - auto ptr = data.data(); - auto vptr = (void *) ptr; - auto dtype = py::dtype("int32"); - - py::buffer_info buf_ndim1(vptr, 4, "i", 6); - py::buffer_info buf_ndim1_null(nullptr, 4, "i", 6); - py::buffer_info buf_ndim2(vptr, 4, "i", 2, shape, strides); - py::buffer_info buf_ndim2_null(nullptr, 4, "i", 2, shape, strides); - - auto fill = [](py::array arr) { - auto req = arr.request(); - for (int i = 0; i < 6; i++) ((int32_t *) req.ptr)[i] = i + 1; - return arr; - }; - - switch (i) { - // shape: (3, 2) - case 10: return arr_t(shape, strides, ptr); - case 11: return py::array(shape, strides, ptr); - case 12: return py::array(dtype, shape, strides, vptr); - case 13: return arr_t(shape, ptr); - case 14: return py::array(shape, ptr); - case 15: return py::array(dtype, shape, vptr); - case 16: return arr_t(buf_ndim2); - case 17: return py::array(buf_ndim2); - // shape: (3, 2) - post-fill - case 20: return fill(arr_t(shape, strides)); - case 21: return py::array(shape, strides, ptr); // can't have nullptr due to templated ctor - case 22: return fill(py::array(dtype, shape, strides)); - case 23: return fill(arr_t(shape)); - case 24: return py::array(shape, ptr); // can't have nullptr due to templated ctor - case 25: return fill(py::array(dtype, shape)); - case 26: return fill(arr_t(buf_ndim2_null)); - case 27: return fill(py::array(buf_ndim2_null)); - // shape: (6, ) - case 30: return arr_t(6, ptr); - case 31: return py::array(6, ptr); - case 32: return py::array(dtype, 6, vptr); - case 33: return arr_t(buf_ndim1); - case 34: return py::array(buf_ndim1); - // shape: (6, ) - case 40: return fill(arr_t(6)); - case 41: return py::array(6, ptr); // can't have nullptr due to templated ctor - case 42: return fill(py::array(dtype, 6)); - case 43: return fill(arr_t(buf_ndim1_null)); - case 44: return fill(py::array(buf_ndim1_null)); - } - return arr_t(); -} - -py::list test_dtype_ctors() { - py::list list; - list.append(py::dtype("int32")); - list.append(py::dtype(std::string("float64"))); - list.append(py::dtype::from_args(py::str("bool"))); - py::list names, offsets, formats; - py::dict dict; - names.append(py::str("a")); names.append(py::str("b")); dict["names"] = names; - offsets.append(py::int_(1)); offsets.append(py::int_(10)); dict["offsets"] = offsets; - formats.append(py::dtype("int32")); formats.append(py::dtype("float64")); dict["formats"] = formats; - dict["itemsize"] = py::int_(20); - list.append(py::dtype::from_args(dict)); - 
list.append(py::dtype(names, formats, offsets, 20)); - list.append(py::dtype(py::buffer_info((void *) 0, sizeof(unsigned int), "I", 1))); - list.append(py::dtype(py::buffer_info((void *) 0, 0, "T{i:a:f:b:}", 1))); - return list; -} - -struct A {}; -struct B {}; - -TEST_SUBMODULE(numpy_dtypes, m) { - try { py::module::import("numpy"); } - catch (...) { return; } - - // typeinfo may be registered before the dtype descriptor for scalar casts to work... - py::class_(m, "SimpleStruct"); - - PYBIND11_NUMPY_DTYPE(SimpleStruct, bool_, uint_, float_, ldbl_); - PYBIND11_NUMPY_DTYPE(SimpleStructReordered, bool_, uint_, float_, ldbl_); - PYBIND11_NUMPY_DTYPE(PackedStruct, bool_, uint_, float_, ldbl_); - PYBIND11_NUMPY_DTYPE(NestedStruct, a, b); - PYBIND11_NUMPY_DTYPE(PartialStruct, bool_, uint_, float_, ldbl_); - PYBIND11_NUMPY_DTYPE(PartialNestedStruct, a); - PYBIND11_NUMPY_DTYPE(StringStruct, a, b); - PYBIND11_NUMPY_DTYPE(ArrayStruct, a, b, c, d); - PYBIND11_NUMPY_DTYPE(EnumStruct, e1, e2); - PYBIND11_NUMPY_DTYPE(ComplexStruct, cflt, cdbl); - - // ... or after - py::class_(m, "PackedStruct"); - - PYBIND11_NUMPY_DTYPE_EX(StructWithUglyNames, __x__, "x", __y__, "y"); - - // If uncommented, this should produce a static_assert failure telling the user that the struct - // is not a POD type -// struct NotPOD { std::string v; NotPOD() : v("hi") {}; }; -// PYBIND11_NUMPY_DTYPE(NotPOD, v); - - // Check that dtypes can be registered programmatically, both from - // initializer lists of field descriptors and from other containers. - py::detail::npy_format_descriptor::register_dtype( - {} - ); - py::detail::npy_format_descriptor::register_dtype( - std::vector{} - ); - - // test_recarray, test_scalar_conversion - m.def("create_rec_simple", &create_recarray); - m.def("create_rec_packed", &create_recarray); - m.def("create_rec_nested", [](size_t n) { // test_signature - py::array_t arr = mkarray_via_buffer(n); - auto req = arr.request(); - auto ptr = static_cast(req.ptr); - for (size_t i = 0; i < n; i++) { - SET_TEST_VALS(ptr[i].a, i); - SET_TEST_VALS(ptr[i].b, i + 1); - } - return arr; - }); - m.def("create_rec_partial", &create_recarray); - m.def("create_rec_partial_nested", [](size_t n) { - py::array_t arr = mkarray_via_buffer(n); - auto req = arr.request(); - auto ptr = static_cast(req.ptr); - for (size_t i = 0; i < n; i++) { - SET_TEST_VALS(ptr[i].a, i); - } - return arr; - }); - m.def("print_rec_simple", &print_recarray); - m.def("print_rec_packed", &print_recarray); - m.def("print_rec_nested", &print_recarray); - - // test_format_descriptors - m.def("get_format_unbound", []() { return py::format_descriptor::format(); }); - m.def("print_format_descriptors", []() { - py::list l; - for (const auto &fmt : { - py::format_descriptor::format(), - py::format_descriptor::format(), - py::format_descriptor::format(), - py::format_descriptor::format(), - py::format_descriptor::format(), - py::format_descriptor::format(), - py::format_descriptor::format(), - py::format_descriptor::format(), - py::format_descriptor::format() - }) { - l.append(py::cast(fmt)); - } - return l; - }); - - // test_dtype - m.def("print_dtypes", []() { - py::list l; - for (const py::handle &d : { - py::dtype::of(), - py::dtype::of(), - py::dtype::of(), - py::dtype::of(), - py::dtype::of(), - py::dtype::of(), - py::dtype::of(), - py::dtype::of(), - py::dtype::of(), - py::dtype::of() - }) - l.append(py::str(d)); - return l; - }); - m.def("test_dtype_ctors", &test_dtype_ctors); - m.def("test_dtype_methods", []() { - py::list list; - auto dt1 = 
py::dtype::of(); - auto dt2 = py::dtype::of(); - list.append(dt1); list.append(dt2); - list.append(py::bool_(dt1.has_fields())); list.append(py::bool_(dt2.has_fields())); - list.append(py::int_(dt1.itemsize())); list.append(py::int_(dt2.itemsize())); - return list; - }); - struct TrailingPaddingStruct { - int32_t a; - char b; - }; - PYBIND11_NUMPY_DTYPE(TrailingPaddingStruct, a, b); - m.def("trailing_padding_dtype", []() { return py::dtype::of(); }); - - // test_string_array - m.def("create_string_array", [](bool non_empty) { - py::array_t arr = mkarray_via_buffer(non_empty ? 4 : 0); - if (non_empty) { - auto req = arr.request(); - auto ptr = static_cast(req.ptr); - for (ssize_t i = 0; i < req.size * req.itemsize; i++) - static_cast(req.ptr)[i] = 0; - ptr[1].a[0] = 'a'; ptr[1].b[0] = 'a'; - ptr[2].a[0] = 'a'; ptr[2].b[0] = 'a'; - ptr[3].a[0] = 'a'; ptr[3].b[0] = 'a'; - - ptr[2].a[1] = 'b'; ptr[2].b[1] = 'b'; - ptr[3].a[1] = 'b'; ptr[3].b[1] = 'b'; - - ptr[3].a[2] = 'c'; ptr[3].b[2] = 'c'; - } - return arr; - }); - m.def("print_string_array", &print_recarray); - - // test_array_array - m.def("create_array_array", [](size_t n) { - py::array_t arr = mkarray_via_buffer(n); - auto ptr = (ArrayStruct *) arr.mutable_data(); - for (size_t i = 0; i < n; i++) { - for (size_t j = 0; j < 3; j++) - for (size_t k = 0; k < 4; k++) - ptr[i].a[j][k] = char('A' + (i * 100 + j * 10 + k) % 26); - for (size_t j = 0; j < 2; j++) - ptr[i].b[j] = int32_t(i * 1000 + j); - for (size_t j = 0; j < 3; j++) - ptr[i].c[j] = uint8_t(i * 10 + j); - for (size_t j = 0; j < 4; j++) - for (size_t k = 0; k < 2; k++) - ptr[i].d[j][k] = float(i) * 100.0f + float(j) * 10.0f + float(k); - } - return arr; - }); - m.def("print_array_array", &print_recarray); - - // test_enum_array - m.def("create_enum_array", [](size_t n) { - py::array_t arr = mkarray_via_buffer(n); - auto ptr = (EnumStruct *) arr.mutable_data(); - for (size_t i = 0; i < n; i++) { - ptr[i].e1 = static_cast(-1 + ((int) i % 2) * 2); - ptr[i].e2 = static_cast(1 + (i % 2)); - } - return arr; - }); - m.def("print_enum_array", &print_recarray); - - // test_complex_array - m.def("create_complex_array", [](size_t n) { - py::array_t arr = mkarray_via_buffer(n); - auto ptr = (ComplexStruct *) arr.mutable_data(); - for (size_t i = 0; i < n; i++) { - ptr[i].cflt.real(float(i)); - ptr[i].cflt.imag(float(i) + 0.25f); - ptr[i].cdbl.real(double(i) + 0.5); - ptr[i].cdbl.imag(double(i) + 0.75); - } - return arr; - }); - m.def("print_complex_array", &print_recarray); - - // test_array_constructors - m.def("test_array_ctors", &test_array_ctors); - - // test_compare_buffer_info - struct CompareStruct { - bool x; - uint32_t y; - float z; - }; - PYBIND11_NUMPY_DTYPE(CompareStruct, x, y, z); - m.def("compare_buffer_info", []() { - py::list list; - list.append(py::bool_(py::detail::compare_buffer_info::compare(py::buffer_info(nullptr, sizeof(float), "f", 1)))); - list.append(py::bool_(py::detail::compare_buffer_info::compare(py::buffer_info(nullptr, sizeof(int), "I", 1)))); - list.append(py::bool_(py::detail::compare_buffer_info::compare(py::buffer_info(nullptr, sizeof(long), "l", 1)))); - list.append(py::bool_(py::detail::compare_buffer_info::compare(py::buffer_info(nullptr, sizeof(long), sizeof(long) == sizeof(int) ? 
"i" : "q", 1)))); - list.append(py::bool_(py::detail::compare_buffer_info::compare(py::buffer_info(nullptr, sizeof(CompareStruct), "T{?:x:3xI:y:f:z:}", 1)))); - return list; - }); - m.def("buffer_to_dtype", [](py::buffer& buf) { return py::dtype(buf.request()); }); - - // test_scalar_conversion - m.def("f_simple", [](SimpleStruct s) { return s.uint_ * 10; }); - m.def("f_packed", [](PackedStruct s) { return s.uint_ * 10; }); - m.def("f_nested", [](NestedStruct s) { return s.a.uint_ * 10; }); - - // test_register_dtype - m.def("register_dtype", []() { PYBIND11_NUMPY_DTYPE(SimpleStruct, bool_, uint_, float_, ldbl_); }); - - // test_str_leak - m.def("dtype_wrapper", [](py::object d) { return py::dtype::from_args(std::move(d)); }); -} diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/equal.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/equal.h deleted file mode 100644 index c6ae90664ad9538e73febfde86c334011de417c8..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/equal.h +++ /dev/null @@ -1,22 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system has no special version of this algorithm - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/execution_policy.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/execution_policy.h deleted file mode 100644 index 27e4db86264ba8c08ab34499565758f1bbce9bb9..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/execution_policy.h +++ /dev/null @@ -1,81 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/detail/sequential/execution_policy.h>
-
-namespace thrust
-{
-namespace system
-{
-// put the canonical tag in the same ns as the backend's entry points
-namespace cpp
-{
-namespace detail
-{
-
-// this awkward sequence of definitions arise
-// from the desire both for tag to derive
-// from execution_policy and for execution_policy
-// to convert to tag (when execution_policy is not
-// an ancestor of tag)
-
-// forward declaration of tag
-struct tag;
-
-// forward declaration of execution_policy
-template<typename> struct execution_policy;
-
-// specialize execution_policy for tag
-template<>
-  struct execution_policy<tag>
-    : thrust::system::detail::sequential::execution_policy<tag>
-{};
-
-// tag's definition comes before the
-// generic definition of execution_policy
-struct tag : execution_policy<tag> {};
-
-// allow conversion to tag when it is not a successor
-template<typename Derived>
-  struct execution_policy
-    : thrust::system::detail::sequential::execution_policy<Derived>
-{
-  typedef tag tag_type;
-  operator tag() const { return tag(); }
-};
-
-} // end detail
-
-// alias execution_policy and tag here
-using thrust::system::cpp::detail::execution_policy;
-using thrust::system::cpp::detail::tag;
-
-} // end cpp
-} // end system
-
-// alias items at top-level
-namespace cpp
-{
-
-using thrust::system::cpp::execution_policy;
-using thrust::system::cpp::tag;
-
-} // end cpp
-} // end thrust
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/scatter.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/scatter.h
deleted file mode 100644
index 95c5a14ba3df120019c9a5b6ed638db3f2555a5b..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/scatter.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */ - -#pragma once - -#include - -// this system inherits this algorithm -#include - diff --git a/spaces/CVPR/monoscene_lite/monoscene/DDR.py b/spaces/CVPR/monoscene_lite/monoscene/DDR.py deleted file mode 100644 index cc997ed7604d83ef562474d32cb484aac36f2adc..0000000000000000000000000000000000000000 --- a/spaces/CVPR/monoscene_lite/monoscene/DDR.py +++ /dev/null @@ -1,139 +0,0 @@ -""" -Most of the code in this file is taken from https://github.com/waterljwant/SSC/blob/master/models/DDR.py -""" - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class SimpleRB(nn.Module): - def __init__(self, in_channel, norm_layer, bn_momentum): - super(SimpleRB, self).__init__() - self.path = nn.Sequential( - nn.Conv3d(in_channel, in_channel, kernel_size=3, padding=1, bias=False), - norm_layer(in_channel, momentum=bn_momentum), - nn.ReLU(), - nn.Conv3d(in_channel, in_channel, kernel_size=3, padding=1, bias=False), - norm_layer(in_channel, momentum=bn_momentum), - ) - self.relu = nn.ReLU() - - def forward(self, x): - residual = x - conv_path = self.path(x) - out = residual + conv_path - out = self.relu(out) - return out - - -""" -3D Residual Block,3x3x3 conv ==> 3 smaller 3D conv, refered from DDRNet -""" - - -class Bottleneck3D(nn.Module): - def __init__( - self, - inplanes, - planes, - norm_layer, - stride=1, - dilation=[1, 1, 1], - expansion=4, - downsample=None, - fist_dilation=1, - multi_grid=1, - bn_momentum=0.0003, - ): - super(Bottleneck3D, self).__init__() - # often,planes = inplanes // 4 - self.expansion = expansion - self.conv1 = nn.Conv3d(inplanes, planes, kernel_size=1, bias=False) - self.bn1 = norm_layer(planes, momentum=bn_momentum) - self.conv2 = nn.Conv3d( - planes, - planes, - kernel_size=(1, 1, 3), - stride=(1, 1, stride), - dilation=(1, 1, dilation[0]), - padding=(0, 0, dilation[0]), - bias=False, - ) - self.bn2 = norm_layer(planes, momentum=bn_momentum) - self.conv3 = nn.Conv3d( - planes, - planes, - kernel_size=(1, 3, 1), - stride=(1, stride, 1), - dilation=(1, dilation[1], 1), - padding=(0, dilation[1], 0), - bias=False, - ) - self.bn3 = norm_layer(planes, momentum=bn_momentum) - self.conv4 = nn.Conv3d( - planes, - planes, - kernel_size=(3, 1, 1), - stride=(stride, 1, 1), - dilation=(dilation[2], 1, 1), - padding=(dilation[2], 0, 0), - bias=False, - ) - self.bn4 = norm_layer(planes, momentum=bn_momentum) - self.conv5 = nn.Conv3d( - planes, planes * self.expansion, kernel_size=(1, 1, 1), bias=False - ) - self.bn5 = norm_layer(planes * self.expansion, momentum=bn_momentum) - - self.relu = nn.ReLU(inplace=False) - self.relu_inplace = nn.ReLU(inplace=True) - self.downsample = downsample - self.dilation = dilation - self.stride = stride - - self.downsample2 = nn.Sequential( - nn.AvgPool3d(kernel_size=(1, stride, 1), stride=(1, stride, 1)), - nn.Conv3d(planes, planes, kernel_size=1, stride=1, bias=False), - norm_layer(planes, momentum=bn_momentum), - ) - self.downsample3 = nn.Sequential( - nn.AvgPool3d(kernel_size=(stride, 1, 1), stride=(stride, 1, 1)), - nn.Conv3d(planes, planes, kernel_size=1, stride=1, bias=False), - norm_layer(planes, momentum=bn_momentum), - ) - self.downsample4 = nn.Sequential( - nn.AvgPool3d(kernel_size=(stride, 1, 1), stride=(stride, 1, 1)), - nn.Conv3d(planes, planes, kernel_size=1, stride=1, bias=False), - norm_layer(planes, momentum=bn_momentum), - ) - - def forward(self, x): - residual = x - - out1 = self.relu(self.bn1(self.conv1(x))) - out2 = self.bn2(self.conv2(out1)) - out2_relu = self.relu(out2) - - out3 = 
self.bn3(self.conv3(out2_relu)) - if self.stride != 1: - out2 = self.downsample2(out2) - out3 = out3 + out2 - out3_relu = self.relu(out3) - - out4 = self.bn4(self.conv4(out3_relu)) - if self.stride != 1: - out2 = self.downsample3(out2) - out3 = self.downsample4(out3) - out4 = out4 + out2 + out3 - - out4_relu = self.relu(out4) - out5 = self.bn5(self.conv5(out4_relu)) - - if self.downsample is not None: - residual = self.downsample(x) - - out = out5 + residual - out_relu = self.relu(out) - - return out_relu diff --git a/spaces/Cicooo/vits-uma-genshin-honkai/text/cleaners.py b/spaces/Cicooo/vits-uma-genshin-honkai/text/cleaners.py deleted file mode 100644 index d26581deb399609163518054718ad80ecca5d934..0000000000000000000000000000000000000000 --- a/spaces/Cicooo/vits-uma-genshin-honkai/text/cleaners.py +++ /dev/null @@ -1,475 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - -import re -from unidecode import unidecode -import pyopenjtalk -from jamo import h2j, j2hcj -from pypinyin import lazy_pinyin, BOPOMOFO -import jieba, cn2an - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile(r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile(r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' 
% x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (Latin alphabet, bopomofo) pairs: -_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', 'ㄟˉ'), - ('b', 'ㄅㄧˋ'), - ('c', 'ㄙㄧˉ'), - ('d', 'ㄉㄧˋ'), - ('e', 'ㄧˋ'), - ('f', 'ㄝˊㄈㄨˋ'), - ('g', 'ㄐㄧˋ'), - ('h', 'ㄝˇㄑㄩˋ'), - ('i', 'ㄞˋ'), - ('j', 'ㄐㄟˋ'), - ('k', 'ㄎㄟˋ'), - ('l', 'ㄝˊㄛˋ'), - ('m', 'ㄝˊㄇㄨˋ'), - ('n', 'ㄣˉ'), - ('o', 'ㄡˉ'), - ('p', 'ㄆㄧˉ'), - ('q', 'ㄎㄧㄡˉ'), - ('r', 'ㄚˋ'), - ('s', 'ㄝˊㄙˋ'), - ('t', 'ㄊㄧˋ'), - ('u', 'ㄧㄡˉ'), - ('v', 'ㄨㄧˉ'), - ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'), - ('x', 'ㄝˉㄎㄨˋㄙˋ'), - ('y', 'ㄨㄞˋ'), - ('z', 'ㄗㄟˋ') -]] - - -# List of (bopomofo, romaji) pairs: -_bopomofo_to_romaji = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('ㄅㄛ', 'p⁼wo'), - ('ㄆㄛ', 'pʰwo'), - ('ㄇㄛ', 'mwo'), - ('ㄈㄛ', 'fwo'), - ('ㄅ', 'p⁼'), - ('ㄆ', 'pʰ'), - ('ㄇ', 'm'), - ('ㄈ', 'f'), - ('ㄉ', 't⁼'), - ('ㄊ', 'tʰ'), - ('ㄋ', 'n'), - ('ㄌ', 'l'), - ('ㄍ', 'k⁼'), - ('ㄎ', 'kʰ'), - ('ㄏ', 'h'), - ('ㄐ', 'ʧ⁼'), - ('ㄑ', 'ʧʰ'), - ('ㄒ', 'ʃ'), - ('ㄓ', 'ʦ`⁼'), - ('ㄔ', 'ʦ`ʰ'), - ('ㄕ', 's`'), - ('ㄖ', 'ɹ`'), - ('ㄗ', 'ʦ⁼'), - ('ㄘ', 'ʦʰ'), - ('ㄙ', 's'), - ('ㄚ', 'a'), - ('ㄛ', 'o'), - ('ㄜ', 'ə'), - ('ㄝ', 'e'), - ('ㄞ', 'ai'), - ('ㄟ', 'ei'), - ('ㄠ', 'au'), - ('ㄡ', 'ou'), - ('ㄧㄢ', 'yeNN'), - ('ㄢ', 'aNN'), - ('ㄧㄣ', 'iNN'), - ('ㄣ', 'əNN'), - ('ㄤ', 'aNg'), - ('ㄧㄥ', 'iNg'), - ('ㄨㄥ', 'uNg'), - ('ㄩㄥ', 'yuNg'), - ('ㄥ', 'əNg'), - ('ㄦ', 'əɻ'), - ('ㄧ', 'i'), - ('ㄨ', 'u'), - ('ㄩ', 'ɥ'), - ('ˉ', '→'), - ('ˊ', '↑'), - ('ˇ', '↓↑'), - ('ˋ', '↓'), - ('˙', ''), - (',', ','), - ('。', '.'), - ('!', '!'), - ('?', '?'), - ('—', '-') -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if 
re.match(_japanese_characters, sentence): - if text!='': - text+=' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil','pau']: - text += phoneme.replace('ch','ʧ').replace('sh','ʃ').replace('cl','Q') - else: - continue - n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil','pau']: - a2_next=-1 - else: - a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i # big endian! - version: H - numRecords: H - recordSize: l -""" - - -class _GlyphnamedList(Mapping): - def __init__(self, reverseGlyphOrder, data): - self._array = data - self._map = dict(reverseGlyphOrder) - - def __getitem__(self, k): - return self._array[self._map[k]] - - def __len__(self): - return len(self._map) - - def __iter__(self): - return iter(self._map) - - def keys(self): - return self._map.keys() - - -class table__h_d_m_x(DefaultTable.DefaultTable): - def decompile(self, data, ttFont): - numGlyphs = ttFont["maxp"].numGlyphs - glyphOrder = ttFont.getGlyphOrder() - dummy, data = sstruct.unpack2(hdmxHeaderFormat, data, self) - self.hdmx = {} - for i in range(self.numRecords): - ppem = byteord(data[0]) - maxSize = byteord(data[1]) - widths = _GlyphnamedList( - ttFont.getReverseGlyphMap(), array.array("B", data[2 : 2 + numGlyphs]) - ) - self.hdmx[ppem] = widths - data = data[self.recordSize :] - assert len(data) == 0, "too much hdmx data" - - def compile(self, ttFont): - self.version = 0 - numGlyphs = ttFont["maxp"].numGlyphs - glyphOrder = ttFont.getGlyphOrder() - self.recordSize = 4 * ((2 + numGlyphs + 3) // 4) - pad = (self.recordSize - 2 - numGlyphs) * b"\0" - self.numRecords = len(self.hdmx) - data = sstruct.pack(hdmxHeaderFormat, self) - items = sorted(self.hdmx.items()) - for ppem, widths in items: - data = data + bytechr(ppem) + bytechr(max(widths.values())) - for glyphID in range(len(glyphOrder)): - width = widths[glyphOrder[glyphID]] - data = data + bytechr(width) - data = data + pad - return data - - def toXML(self, writer, ttFont): - writer.begintag("hdmxData") - writer.newline() - ppems = sorted(self.hdmx.keys()) - records = [] - format = "" - for ppem in ppems: - widths = self.hdmx[ppem] - records.append(widths) - format = format + "%4d" - glyphNames = ttFont.getGlyphOrder()[:] - glyphNames.sort() - maxNameLen = max(map(len, glyphNames)) - format = "%" + repr(maxNameLen) + "s:" + format + " ;" - writer.write(format % (("ppem",) + tuple(ppems))) - writer.newline() - writer.newline() - for glyphName in glyphNames: - row = [] - for ppem in ppems: - widths = self.hdmx[ppem] - row.append(widths[glyphName]) - if ";" in glyphName: - glyphName = "\\x3b".join(glyphName.split(";")) - writer.write(format % ((glyphName,) + tuple(row))) - writer.newline() - writer.endtag("hdmxData") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name != "hdmxData": - return - content = strjoin(content) - lines = content.split(";") - topRow = lines[0].split() - assert topRow[0] == "ppem:", "illegal hdmx format" - ppems = list(map(int, topRow[1:])) 
- self.hdmx = hdmx = {} - for ppem in ppems: - hdmx[ppem] = {} - lines = (line.split() for line in lines[1:]) - for line in lines: - if not line: - continue - assert line[0][-1] == ":", "illegal hdmx format" - glyphName = line[0][:-1] - if "\\" in glyphName: - from fontTools.misc.textTools import safeEval - - glyphName = safeEval('"""' + glyphName + '"""') - line = list(map(int, line[1:])) - assert len(line) == len(ppems), "illegal hdmx format" - for i in range(len(ppems)): - hdmx[ppems[i]][glyphName] = line[i] diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/conftest.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/conftest.py deleted file mode 100644 index 6874a42c4895c3c7b973dc5d63fd4488a4e60b44..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fsspec/conftest.py +++ /dev/null @@ -1,55 +0,0 @@ -import os -import shutil -import subprocess -import sys -import time - -import pytest - -import fsspec -from fsspec.implementations.cached import CachingFileSystem - - -@pytest.fixture() -def m(): - """ - Fixture providing a memory filesystem. - """ - m = fsspec.filesystem("memory") - m.store.clear() - m.pseudo_dirs.clear() - m.pseudo_dirs.append("") - try: - yield m - finally: - m.store.clear() - m.pseudo_dirs.clear() - m.pseudo_dirs.append("") - - -@pytest.fixture -def ftp_writable(tmpdir): - """ - Fixture providing a writable FTP filesystem. - """ - pytest.importorskip("pyftpdlib") - from fsspec.implementations.ftp import FTPFileSystem - - FTPFileSystem.clear_instance_cache() # remove lingering connections - CachingFileSystem.clear_instance_cache() - d = str(tmpdir) - with open(os.path.join(d, "out"), "wb") as f: - f.write(b"hello" * 10000) - P = subprocess.Popen( - [sys.executable, "-m", "pyftpdlib", "-d", d, "-u", "user", "-P", "pass", "-w"] - ) - try: - time.sleep(1) - yield "localhost", 2121, "user", "pass" - finally: - P.terminate() - P.wait() - try: - shutil.rmtree(tmpdir) - except Exception: - pass diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-150cb53b.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-150cb53b.js deleted file mode 100644 index cf91e96cd58a2151d97e33ccdda263fc37c40e0f..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-150cb53b.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as U,e as G,s as H,N as S,k as R,O as K,K as o,p as B,M as C,o as T,ap as E,Q as k,aw as N,z,v as A,A as M,x as D,a1 as W,B as X,am as Y,P as Z,R as y,E as p,ae as x,h as O,j as P,q as $,r as ee,t as Q,F}from"./index-3370be2a.js";/* empty css */import{B as le}from"./Button-89624748.js";import{B as ne}from"./BlockTitle-bcf8c05e.js";import"./Info-5611e10f.js";function ie(e){let n;return{c(){n=Z(e[5])},m(i,a){B(i,n,a)},p(i,a){a&32&&y(n,i[5])},d(i){i&&M(n)}}}function ae(e){let n,i,a,m,_,s,c,f,d,r,g;return m=new ne({props:{show_label:e[7],info:e[6],$$slots:{default:[ie]},$$scope:{ctx:e}}}),{c(){n=S("div"),i=S("div"),a=S("label"),R(m.$$.fragment),_=K(),s=S("input"),c=K(),f=S("input"),o(a,"for",e[8]),o(s,"data-testid","number-input"),o(s,"type","number"),o(s,"min",e[1]),o(s,"max",e[2]),o(s,"step",e[3]),s.disabled=e[4],o(s,"class","svelte-1cl284s"),o(i,"class","head svelte-1cl284s"),o(n,"class","wrap 
svelte-1cl284s"),o(f,"type","range"),o(f,"id",e[8]),o(f,"name","cowbell"),o(f,"min",e[1]),o(f,"max",e[2]),o(f,"step",e[3]),f.disabled=e[4],o(f,"class","svelte-1cl284s")},m(l,u){B(l,n,u),C(n,i),C(i,a),T(m,a,null),C(i,_),C(i,s),E(s,e[0]),B(l,c,u),B(l,f,u),E(f,e[0]),d=!0,r||(g=[k(s,"input",e[12]),k(s,"blur",e[10]),k(s,"pointerup",e[9]),k(f,"change",e[13]),k(f,"input",e[13]),k(f,"pointerup",e[9])],r=!0)},p(l,[u]){const v={};u&128&&(v.show_label=l[7]),u&64&&(v.info=l[6]),u&65568&&(v.$$scope={dirty:u,ctx:l}),m.$set(v),(!d||u&2)&&o(s,"min",l[1]),(!d||u&4)&&o(s,"max",l[2]),(!d||u&8)&&o(s,"step",l[3]),(!d||u&16)&&(s.disabled=l[4]),u&1&&N(s.value)!==l[0]&&E(s,l[0]),(!d||u&2)&&o(f,"min",l[1]),(!d||u&4)&&o(f,"max",l[2]),(!d||u&8)&&o(f,"step",l[3]),(!d||u&16)&&(f.disabled=l[4]),u&1&&E(f,l[0])},i(l){d||(z(m.$$.fragment,l),d=!0)},o(l){A(m.$$.fragment,l),d=!1},d(l){l&&(M(n),M(c),M(f)),D(m),r=!1,W(g)}}}let te=0;function ue(e,n,i){let{value:a=0}=n,{value_is_output:m=!1}=n,{minimum:_=0}=n,{maximum:s=100}=n,{step:c=1}=n,{disabled:f=!1}=n,{label:d}=n,{info:r=void 0}=n,{show_label:g}=n;const l=`range_id_${te++}`,u=X();function v(){u("change",a),m||u("input")}Y(()=>{i(11,m=!1)});function h(b){u("release",a)}const j=()=>{u("release",a),i(0,a=Math.min(Math.max(a,_),s))};function q(){a=N(this.value),i(0,a)}function w(){a=N(this.value),i(0,a)}return e.$$set=b=>{"value"in b&&i(0,a=b.value),"value_is_output"in b&&i(11,m=b.value_is_output),"minimum"in b&&i(1,_=b.minimum),"maximum"in b&&i(2,s=b.maximum),"step"in b&&i(3,c=b.step),"disabled"in b&&i(4,f=b.disabled),"label"in b&&i(5,d=b.label),"info"in b&&i(6,r=b.info),"show_label"in b&&i(7,g=b.show_label)},e.$$.update=()=>{e.$$.dirty&1&&v()},[a,_,s,c,f,d,r,g,l,h,j,m,q,w]}class se extends U{constructor(n){super(),G(this,n,ue,ae,H,{value:0,value_is_output:11,minimum:1,maximum:2,step:3,disabled:4,label:5,info:6,show_label:7})}}function me(e){let n,i,a,m,_,s;const c=[e[15]];let f={};for(let l=0;lP(a,"value",d)),O.push(()=>P(a,"value_is_output",r)),a.$on("input",e[18]),a.$on("change",e[19]),a.$on("release",e[20]),{c(){R(n.$$.fragment),i=K(),R(a.$$.fragment)},m(l,u){T(n,l,u),B(l,i,u),T(a,l,u),s=!0},p(l,u){const v=u&32768?$(c,[ee(l[15])]):{};n.$set(v);const h={};u&32&&(h.label=l[5]),u&64&&(h.info=l[6]),u&16384&&(h.show_label=l[14]),u&1024&&(h.minimum=l[10]),u&2048&&(h.maximum=l[11]),u&4096&&(h.step=l[12]),u&8192&&(h.disabled=l[13]==="static"),!m&&u&1&&(m=!0,h.value=l[0],Q(()=>m=!1)),!_&&u&2&&(_=!0,h.value_is_output=l[1],Q(()=>_=!1)),a.$set(h)},i(l){s||(z(n.$$.fragment,l),z(a.$$.fragment,l),s=!0)},o(l){A(n.$$.fragment,l),A(a.$$.fragment,l),s=!1},d(l){l&&M(i),D(n,l),D(a,l)}}}function fe(e){let n,i;return n=new le({props:{visible:e[4],elem_id:e[2],elem_classes:e[3],container:e[7],scale:e[8],min_width:e[9],$$slots:{default:[me]},$$scope:{ctx:e}}}),{c(){R(n.$$.fragment)},m(a,m){T(n,a,m),i=!0},p(a,[m]){const _={};m&16&&(_.visible=a[4]),m&4&&(_.elem_id=a[2]),m&8&&(_.elem_classes=a[3]),m&128&&(_.container=a[7]),m&256&&(_.scale=a[8]),m&512&&(_.min_width=a[9]),m&2161763&&(_.$$scope={dirty:m,ctx:a}),n.$set(_)},i(a){i||(z(n.$$.fragment,a),i=!0)},o(a){A(n.$$.fragment,a),i=!1},d(a){D(n,a)}}}function _e(e,n,i){let{elem_id:a=""}=n,{elem_classes:m=[]}=n,{visible:_=!0}=n,{value:s=0}=n,{label:c="Slider"}=n,{info:f=void 0}=n,{container:d=!0}=n,{scale:r=null}=n,{min_width:g=void 0}=n,{minimum:l}=n,{maximum:u}=n,{step:v}=n,{mode:h}=n,{show_label:j}=n,{loading_status:q}=n,{value_is_output:w=!1}=n;function b(t){s=t,i(0,s)}function I(t){w=t,i(1,w)}function J(t){F.call(this,e,t)}function 
L(t){F.call(this,e,t)}function V(t){F.call(this,e,t)}return e.$$set=t=>{"elem_id"in t&&i(2,a=t.elem_id),"elem_classes"in t&&i(3,m=t.elem_classes),"visible"in t&&i(4,_=t.visible),"value"in t&&i(0,s=t.value),"label"in t&&i(5,c=t.label),"info"in t&&i(6,f=t.info),"container"in t&&i(7,d=t.container),"scale"in t&&i(8,r=t.scale),"min_width"in t&&i(9,g=t.min_width),"minimum"in t&&i(10,l=t.minimum),"maximum"in t&&i(11,u=t.maximum),"step"in t&&i(12,v=t.step),"mode"in t&&i(13,h=t.mode),"show_label"in t&&i(14,j=t.show_label),"loading_status"in t&&i(15,q=t.loading_status),"value_is_output"in t&&i(1,w=t.value_is_output)},[s,w,a,m,_,c,f,d,r,g,l,u,v,h,j,q,b,I,J,L,V]}class oe extends U{constructor(n){super(),G(this,n,_e,fe,H,{elem_id:2,elem_classes:3,visible:4,value:0,label:5,info:6,container:7,scale:8,min_width:9,minimum:10,maximum:11,step:12,mode:13,show_label:14,loading_status:15,value_is_output:1})}}const re=oe,ve=["static","dynamic"],we=e=>({type:{payload:"number"},description:{payload:"selected value"},example_data:e.value??e.minimum});export{re as Component,we as document,ve as modes}; -//# sourceMappingURL=index-150cb53b.js.map diff --git a/spaces/Datasculptor/MusicGen/audiocraft/utils/utils.py b/spaces/Datasculptor/MusicGen/audiocraft/utils/utils.py deleted file mode 100644 index 86e1448d065fa182ca69aae00d2f2a7eea55d8a4..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/MusicGen/audiocraft/utils/utils.py +++ /dev/null @@ -1,234 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from concurrent.futures import ProcessPoolExecutor -from functools import wraps -import hashlib -import logging -import typing as tp - -import flashy -import flashy.distrib -import omegaconf -import torch -from torch.nn.utils.rnn import pad_sequence - - -logger = logging.getLogger(__name__) - - -def dict_from_config(cfg: omegaconf.DictConfig) -> dict: - """Convenience function to map an omegaconf configuration to a dictionary. - - Args: - cfg (omegaconf.DictConfig): Original configuration to map to dict. - Returns: - dict: Config as dictionary object. - """ - dct = omegaconf.OmegaConf.to_container(cfg, resolve=True) - assert isinstance(dct, dict) - return dct - - -def random_subset(dataset, max_samples: int, seed: int = 42) -> torch.utils.data.Subset: - if max_samples >= len(dataset): - return dataset - - generator = torch.Generator().manual_seed(seed) - perm = torch.randperm(len(dataset), generator=generator) - return torch.utils.data.Subset(dataset, perm[:max_samples].tolist()) - - -def get_loader(dataset, num_samples: tp.Optional[int], batch_size: int, - num_workers: int, seed: int, **kwargs) -> torch.utils.data.DataLoader: - """Convenience function to load dataset into a dataloader with optional subset sampling. - - Args: - dataset: Dataset to load. - num_samples (Optional[int]): Number of samples to limit subset size. - batch_size (int): Batch size. - num_workers (int): Number of workers for data loading. - seed (int): Random seed. 
- """ - if num_samples is not None: - dataset = random_subset(dataset, num_samples, seed) - - dataloader = flashy.distrib.loader( - dataset, - batch_size=batch_size, - num_workers=num_workers, - **kwargs - ) - return dataloader - - -def get_dataset_from_loader(dataloader): - dataset = dataloader.dataset - if isinstance(dataset, torch.utils.data.Subset): - return dataset.dataset - else: - return dataset - - -def multinomial(input: torch.Tensor, num_samples: int, replacement=False, *, generator=None): - """torch.multinomial with arbitrary number of dimensions, and number of candidates on the last dimension. - - Args: - input (torch.Tensor): The input tensor containing probabilities. - num_samples (int): Number of samples to draw. - replacement (bool): Whether to draw with replacement or not. - Keywords args: - generator (torch.Generator): A pseudorandom number generator for sampling. - Returns: - torch.Tensor: Last dimension contains num_samples indices - sampled from the multinomial probability distribution - located in the last dimension of tensor input. - """ - input_ = input.reshape(-1, input.shape[-1]) - output_ = torch.multinomial(input_, num_samples=num_samples, replacement=replacement, generator=generator) - output = output_.reshape(*list(input.shape[:-1]), -1) - return output - - -def sample_top_k(probs: torch.Tensor, k: int) -> torch.Tensor: - """Sample next token from top K values along the last dimension of the input probs tensor. - - Args: - probs (torch.Tensor): Input probabilities with token candidates on the last dimension. - k (int): The k in “top-k”. - Returns: - torch.Tensor: Sampled tokens. - """ - top_k_value, _ = torch.topk(probs, k, dim=-1) - min_value_top_k = top_k_value[..., [-1]] - probs *= (probs >= min_value_top_k).float() - probs.div_(probs.sum(dim=-1, keepdim=True)) - next_token = multinomial(probs, num_samples=1) - return next_token - - -def sample_top_p(probs: torch.Tensor, p: float) -> torch.Tensor: - """Sample next token from top P probabilities along the last dimension of the input probs tensor. - - Args: - probs (torch.Tensor): Input probabilities with token candidates on the last dimension. - p (int): The p in “top-p”. - Returns: - torch.Tensor: Sampled tokens. - """ - probs_sort, probs_idx = torch.sort(probs, dim=-1, descending=True) - probs_sum = torch.cumsum(probs_sort, dim=-1) - mask = probs_sum - probs_sort > p - probs_sort *= (~mask).float() - probs_sort.div_(probs_sort.sum(dim=-1, keepdim=True)) - next_token = multinomial(probs_sort, num_samples=1) - next_token = torch.gather(probs_idx, -1, next_token) - return next_token - - -class DummyPoolExecutor: - """Dummy pool executor to use when we actually have only 1 worker. - (e.g. instead of ProcessPoolExecutor). 
- """ - class DummyResult: - def __init__(self, func, *args, **kwargs): - self.func = func - self.args = args - self.kwargs = kwargs - - def result(self): - return self.func(*self.args, **self.kwargs) - - def __init__(self, workers, mp_context=None): - pass - - def submit(self, func, *args, **kwargs): - return DummyPoolExecutor.DummyResult(func, *args, **kwargs) - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, exc_tb): - return - - -def get_pool_executor(num_workers: int, mp_context=None): - return ProcessPoolExecutor(num_workers, mp_context) if num_workers > 1 else DummyPoolExecutor(1) - - -def length_to_mask(lengths: torch.Tensor, max_len: tp.Optional[int] = None) -> torch.Tensor: - """Utility function to convert a tensor of sequence lengths to a mask (useful when working on padded sequences). - For example: [3, 5] => [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]] - - Args: - lengths (torch.Tensor): tensor with lengths - max_len (int): can set the max length manually. Defaults to None. - Returns: - torch.Tensor: mask with 0s where there is pad tokens else 1s - """ - assert len(lengths.shape) == 1, "Length shape should be 1 dimensional." - final_length = lengths.max().item() if not max_len else max_len - final_length = max(final_length, 1) # if all seqs are of len zero we don't want a zero-size tensor - return torch.arange(final_length)[None, :].to(lengths.device) < lengths[:, None] - - -def hash_trick(word: str, vocab_size: int) -> int: - """Hash trick to pair each word with an index - - Args: - word (str): word we wish to convert to an index - vocab_size (int): size of the vocabulary - Returns: - int: index of the word in the embedding LUT - """ - hash = int(hashlib.sha256(word.encode("utf-8")).hexdigest(), 16) - return hash % vocab_size - - -def with_rank_rng(base_seed: int = 1234): - """Decorator for a function so that the function will use a Random Number Generator - whose state depend on the GPU rank. The original RNG state is restored upon returning. - - Args: - base_seed (int): Random seed. - """ - def _decorator(fun: tp.Callable): - @wraps(fun) - def _decorated(*args, **kwargs): - state = torch.get_rng_state() - seed = base_seed ^ flashy.distrib.rank() - torch.manual_seed(seed) - logger.debug('Rank dependent seed set to %d', seed) - try: - return fun(*args, **kwargs) - finally: - torch.set_rng_state(state) - logger.debug('RNG state restored.') - return _decorated - return _decorator - - -def collate(tensors: tp.List[torch.Tensor], dim: int = 0) -> tp.Tuple[torch.Tensor, torch.Tensor]: - """Get a list of tensors and collate them to a single tensor. according to the following logic: - - `dim` specifies the time dimension which will be stacked and padded. - - The output will contain 1 new dimension (dimension index 0) which will be the size of - of the original list. - - Args: - tensors (tp.List[torch.Tensor]): List of tensors to collate. - dim (int): Dimension which will be stacked and padded. - Returns: - tp.Tuple[torch.Tensor, torch.Tensor]: - torch.Tensor: Stacked and padded tensor. The output will contain 1 new dimension - (dimension index 0) which will be the size of the original list. - torch.Tensor: Tensor containing length of original tensor sizes (without padding). 
- """ - tensors = [x.transpose(0, dim) for x in tensors] - lens = torch.LongTensor([len(x) for x in tensors]) - padded_tensors = pad_sequence(tensors) - padded_tensors = padded_tensors.transpose(0, 1) - padded_tensors = padded_tensors.transpose(1, dim + 1) - return padded_tensors, lens diff --git a/spaces/Datasculptor/StyleGAN-NADA/e4e/utils/__init__.py b/spaces/Datasculptor/StyleGAN-NADA/e4e/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Dauzy/whisper-webui/src/prompts/prependPromptStrategy.py b/spaces/Dauzy/whisper-webui/src/prompts/prependPromptStrategy.py deleted file mode 100644 index 6f8b6eba5b98310f57a656db73b5e415de3af958..0000000000000000000000000000000000000000 --- a/spaces/Dauzy/whisper-webui/src/prompts/prependPromptStrategy.py +++ /dev/null @@ -1,31 +0,0 @@ -from src.config import VadInitialPromptMode -from src.prompts.abstractPromptStrategy import AbstractPromptStrategy - -class PrependPromptStrategy(AbstractPromptStrategy): - """ - A simple prompt strategy that prepends a single prompt to all segments of audio, or prepends the prompt to the first segment of audio. - """ - def __init__(self, initial_prompt: str, initial_prompt_mode: VadInitialPromptMode): - """ - Parameters - ---------- - initial_prompt: str - The initial prompt to use for the transcription. - initial_prompt_mode: VadInitialPromptMode - The mode to use for the initial prompt. If set to PREPEND_FIRST_SEGMENT, the initial prompt will be prepended to the first segment of audio. - If set to PREPEND_ALL_SEGMENTS, the initial prompt will be prepended to all segments of audio. - """ - self.initial_prompt = initial_prompt - self.initial_prompt_mode = initial_prompt_mode - - # This is a simple prompt strategy, so we only support these two modes - if initial_prompt_mode not in [VadInitialPromptMode.PREPEND_ALL_SEGMENTS, VadInitialPromptMode.PREPREND_FIRST_SEGMENT]: - raise ValueError(f"Unsupported initial prompt mode {initial_prompt_mode}") - - def get_segment_prompt(self, segment_index: int, whisper_prompt: str, detected_language: str) -> str: - if (self.initial_prompt_mode == VadInitialPromptMode.PREPEND_ALL_SEGMENTS): - return self._concat_prompt(self.initial_prompt, whisper_prompt) - elif (self.initial_prompt_mode == VadInitialPromptMode.PREPREND_FIRST_SEGMENT): - return self._concat_prompt(self.initial_prompt, whisper_prompt) if segment_index == 0 else whisper_prompt - else: - raise ValueError(f"Unknown initial prompt mode {self.initial_prompt_mode}") \ No newline at end of file diff --git a/spaces/DeepDrivePL/PaddleSeg-Matting/matting/transforms.py b/spaces/DeepDrivePL/PaddleSeg-Matting/matting/transforms.py deleted file mode 100644 index 01866c4978c1ce054a8a7f558da5bc41f6c35499..0000000000000000000000000000000000000000 --- a/spaces/DeepDrivePL/PaddleSeg-Matting/matting/transforms.py +++ /dev/null @@ -1,530 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. - -import random - -import cv2 -import numpy as np -from paddleseg.transforms import functional -from paddleseg.cvlibs import manager -from PIL import Image - - -@manager.TRANSFORMS.add_component -class Compose: - """ - Do transformation on input data with corresponding pre-processing and augmentation operations. - The shape of input data to all operations is [height, width, channels]. - """ - - def __init__(self, transforms, to_rgb=True): - if not isinstance(transforms, list): - raise TypeError('The transforms must be a list!') - self.transforms = transforms - self.to_rgb = to_rgb - - def __call__(self, data): - """ - Args: - data (dict): The data to transform. - - Returns: - dict: Data after transformation - """ - if 'trans_info' not in data: - data['trans_info'] = [] - for op in self.transforms: - data = op(data) - if data is None: - return None - - data['img'] = np.transpose(data['img'], (2, 0, 1)) - for key in data.get('gt_fields', []): - if len(data[key].shape) == 2: - continue - data[key] = np.transpose(data[key], (2, 0, 1)) - - return data - - -@manager.TRANSFORMS.add_component -class LoadImages: - def __init__(self, to_rgb=True): - self.to_rgb = to_rgb - - def __call__(self, data): - if isinstance(data['img'], str): - data['img'] = cv2.imread(data['img']) - for key in data.get('gt_fields', []): - if isinstance(data[key], str): - data[key] = cv2.imread(data[key], cv2.IMREAD_UNCHANGED) - # if alpha and trimap has 3 channels, extract one. - if key in ['alpha', 'trimap']: - if len(data[key].shape) > 2: - data[key] = data[key][:, :, 0] - - if self.to_rgb: - data['img'] = cv2.cvtColor(data['img'], cv2.COLOR_BGR2RGB) - for key in data.get('gt_fields', []): - if len(data[key].shape) == 2: - continue - data[key] = cv2.cvtColor(data[key], cv2.COLOR_BGR2RGB) - - return data - - -@manager.TRANSFORMS.add_component -class Resize: - def __init__(self, target_size=(512, 512)): - if isinstance(target_size, list) or isinstance(target_size, tuple): - if len(target_size) != 2: - raise ValueError( - '`target_size` should include 2 elements, but it is {}'. - format(target_size)) - else: - raise TypeError( - "Type of `target_size` is invalid. It should be list or tuple, but it is {}" - .format(type(target_size))) - - self.target_size = target_size - - def __call__(self, data): - data['trans_info'].append(('resize', data['img'].shape[0:2])) - data['img'] = functional.resize(data['img'], self.target_size) - for key in data.get('gt_fields', []): - data[key] = functional.resize(data[key], self.target_size) - return data - - -@manager.TRANSFORMS.add_component -class ResizeByLong: - """ - Resize the long side of an image to given size, and then scale the other side proportionally. - - Args: - long_size (int): The target size of long side. - """ - - def __init__(self, long_size): - self.long_size = long_size - - def __call__(self, data): - data['trans_info'].append(('resize', data['img'].shape[0:2])) - data['img'] = functional.resize_long(data['img'], self.long_size) - for key in data.get('gt_fields', []): - data[key] = functional.resize_long(data[key], self.long_size) - return data - - -@manager.TRANSFORMS.add_component -class ResizeByShort: - """ - Resize the short side of an image to given size, and then scale the other side proportionally. - - Args: - short_size (int): The target size of short side. 
-    """
-
-    def __init__(self, short_size):
-        self.short_size = short_size
-
-    def __call__(self, data):
-        data['trans_info'].append(('resize', data['img'].shape[0:2]))
-        data['img'] = functional.resize_short(data['img'], self.short_size)
-        for key in data.get('gt_fields', []):
-            data[key] = functional.resize_short(data[key], self.short_size)
-        return data
-
-
-@manager.TRANSFORMS.add_component
-class ResizeToIntMult:
-    """
-    Resize to some int multiple, e.g. 32.
-    """
-
-    def __init__(self, mult_int=32):
-        self.mult_int = mult_int
-
-    def __call__(self, data):
-        data['trans_info'].append(('resize', data['img'].shape[0:2]))
-
-        h, w = data['img'].shape[0:2]
-        # Round height and width down to the nearest multiple of mult_int.
-        rw = w - w % self.mult_int
-        rh = h - h % self.mult_int
-        data['img'] = functional.resize(data['img'], (rw, rh))
-        for key in data.get('gt_fields', []):
-            data[key] = functional.resize(data[key], (rw, rh))
-
-        return data
-
-
-@manager.TRANSFORMS.add_component
-class Normalize:
-    """
-    Normalize an image.
-
-    Args:
-        mean (list, optional): The mean value of a data set. Default: [0.5, 0.5, 0.5].
-        std (list, optional): The standard deviation of a data set. Default: [0.5, 0.5, 0.5].
-
-    Raises:
-        ValueError: When mean/std is not list or any value in std is 0.
-    """
-
-    def __init__(self, mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)):
-        self.mean = mean
-        self.std = std
-        if not (isinstance(self.mean, (list, tuple))
-                and isinstance(self.std, (list, tuple))):
-            raise ValueError(
-                "{}: input type is invalid. It should be list or tuple".format(
-                    self))
-        from functools import reduce
-        if reduce(lambda x, y: x * y, self.std) == 0:
-            raise ValueError('{}: std is invalid!'.format(self))
-
-    def __call__(self, data):
-        mean = np.array(self.mean)[np.newaxis, np.newaxis, :]
-        std = np.array(self.std)[np.newaxis, np.newaxis, :]
-        data['img'] = functional.normalize(data['img'], mean, std)
-        if 'fg' in data.get('gt_fields', []):
-            data['fg'] = functional.normalize(data['fg'], mean, std)
-        if 'bg' in data.get('gt_fields', []):
-            data['bg'] = functional.normalize(data['bg'], mean, std)
-
-        return data
-
-
-@manager.TRANSFORMS.add_component
-class RandomCropByAlpha:
-    """
-    Randomly crop, centering on the uncertain area with a certain probability.
-
-    Args:
-        crop_size (tuple|list): The size you want to crop from the image.
-        prob (float): The probability of centering the crop on the uncertain area.
- - """ - - def __init__(self, crop_size=((320, 320), (480, 480), (640, 640)), - prob=0.5): - self.crop_size = crop_size - self.prob = prob - - def __call__(self, data): - idex = np.random.randint(low=0, high=len(self.crop_size)) - crop_w, crop_h = self.crop_size[idex] - - img_h = data['img'].shape[0] - img_w = data['img'].shape[1] - if np.random.rand() < self.prob: - crop_center = np.where((data['alpha'] > 0) & (data['alpha'] < 255)) - center_h_array, center_w_array = crop_center - if len(center_h_array) == 0: - return data - rand_ind = np.random.randint(len(center_h_array)) - center_h = center_h_array[rand_ind] - center_w = center_w_array[rand_ind] - delta_h = crop_h // 2 - delta_w = crop_w // 2 - start_h = max(0, center_h - delta_h) - start_w = max(0, center_w - delta_w) - else: - start_h = 0 - start_w = 0 - if img_h > crop_h: - start_h = np.random.randint(img_h - crop_h + 1) - if img_w > crop_w: - start_w = np.random.randint(img_w - crop_w + 1) - - end_h = min(img_h, start_h + crop_h) - end_w = min(img_w, start_w + crop_w) - - data['img'] = data['img'][start_h:end_h, start_w:end_w] - for key in data.get('gt_fields', []): - data[key] = data[key][start_h:end_h, start_w:end_w] - - return data - - -@manager.TRANSFORMS.add_component -class RandomCrop: - """ - Randomly crop - - Args: - crop_size (tuple|list): The size you want to crop from image. - """ - - def __init__(self, crop_size=((320, 320), (480, 480), (640, 640))): - if not isinstance(crop_size[0], (list, tuple)): - crop_size = [crop_size] - self.crop_size = crop_size - - def __call__(self, data): - idex = np.random.randint(low=0, high=len(self.crop_size)) - crop_w, crop_h = self.crop_size[idex] - img_h, img_w = data['img'].shape[0:2] - - start_h = 0 - start_w = 0 - if img_h > crop_h: - start_h = np.random.randint(img_h - crop_h + 1) - if img_w > crop_w: - start_w = np.random.randint(img_w - crop_w + 1) - - end_h = min(img_h, start_h + crop_h) - end_w = min(img_w, start_w + crop_w) - - data['img'] = data['img'][start_h:end_h, start_w:end_w] - for key in data.get('gt_fields', []): - data[key] = data[key][start_h:end_h, start_w:end_w] - - return data - - -@manager.TRANSFORMS.add_component -class LimitLong: - """ - Limit the long edge of image. - - If the long edge is larger than max_long, resize the long edge - to max_long, while scale the short edge proportionally. - - If the long edge is smaller than min_long, resize the long edge - to min_long, while scale the short edge proportionally. - - Args: - max_long (int, optional): If the long edge of image is larger than max_long, - it will be resize to max_long. Default: None. - min_long (int, optional): If the long edge of image is smaller than min_long, - it will be resize to min_long. Default: None. - """ - - def __init__(self, max_long=None, min_long=None): - if max_long is not None: - if not isinstance(max_long, int): - raise TypeError( - "Type of `max_long` is invalid. It should be int, but it is {}" - .format(type(max_long))) - if min_long is not None: - if not isinstance(min_long, int): - raise TypeError( - "Type of `min_long` is invalid. 
It should be int, but it is {}" - .format(type(min_long))) - if (max_long is not None) and (min_long is not None): - if min_long > max_long: - raise ValueError( - '`max_long should not smaller than min_long, but they are {} and {}' - .format(max_long, min_long)) - self.max_long = max_long - self.min_long = min_long - - def __call__(self, data): - h, w = data['img'].shape[:2] - long_edge = max(h, w) - target = long_edge - if (self.max_long is not None) and (long_edge > self.max_long): - target = self.max_long - elif (self.min_long is not None) and (long_edge < self.min_long): - target = self.min_long - - if target != long_edge: - data['trans_info'].append(('resize', data['img'].shape[0:2])) - data['img'] = functional.resize_long(data['img'], target) - for key in data.get('gt_fields', []): - data[key] = functional.resize_long(data[key], target) - - return data - - -@manager.TRANSFORMS.add_component -class RandomHorizontalFlip: - """ - Flip an image horizontally with a certain probability. - - Args: - prob (float, optional): A probability of horizontally flipping. Default: 0.5. - """ - - def __init__(self, prob=0.5): - self.prob = prob - - def __call__(self, data): - if random.random() < self.prob: - data['img'] = functional.horizontal_flip(data['img']) - for key in data.get('gt_fields', []): - data[key] = functional.horizontal_flip(data[key]) - - return data - - -@manager.TRANSFORMS.add_component -class RandomBlur: - """ - Blurring an image by a Gaussian function with a certain probability. - - Args: - prob (float, optional): A probability of blurring an image. Default: 0.1. - """ - - def __init__(self, prob=0.1): - self.prob = prob - - def __call__(self, data): - if self.prob <= 0: - n = 0 - elif self.prob >= 1: - n = 1 - else: - n = int(1.0 / self.prob) - if n > 0: - if np.random.randint(0, n) == 0: - radius = np.random.randint(3, 10) - if radius % 2 != 1: - radius = radius + 1 - if radius > 9: - radius = 9 - data['img'] = cv2.GaussianBlur(data['img'], (radius, radius), 0, - 0) - for key in data.get('gt_fields', []): - data[key] = cv2.GaussianBlur(data[key], (radius, radius), 0, - 0) - return data - - -@manager.TRANSFORMS.add_component -class RandomDistort: - """ - Distort an image with random configurations. - - Args: - brightness_range (float, optional): A range of brightness. Default: 0.5. - brightness_prob (float, optional): A probability of adjusting brightness. Default: 0.5. - contrast_range (float, optional): A range of contrast. Default: 0.5. - contrast_prob (float, optional): A probability of adjusting contrast. Default: 0.5. - saturation_range (float, optional): A range of saturation. Default: 0.5. - saturation_prob (float, optional): A probability of adjusting saturation. Default: 0.5. - hue_range (int, optional): A range of hue. Default: 18. - hue_prob (float, optional): A probability of adjusting hue. Default: 0.5. 
- """ - - def __init__(self, - brightness_range=0.5, - brightness_prob=0.5, - contrast_range=0.5, - contrast_prob=0.5, - saturation_range=0.5, - saturation_prob=0.5, - hue_range=18, - hue_prob=0.5): - self.brightness_range = brightness_range - self.brightness_prob = brightness_prob - self.contrast_range = contrast_range - self.contrast_prob = contrast_prob - self.saturation_range = saturation_range - self.saturation_prob = saturation_prob - self.hue_range = hue_range - self.hue_prob = hue_prob - - def __call__(self, data): - brightness_lower = 1 - self.brightness_range - brightness_upper = 1 + self.brightness_range - contrast_lower = 1 - self.contrast_range - contrast_upper = 1 + self.contrast_range - saturation_lower = 1 - self.saturation_range - saturation_upper = 1 + self.saturation_range - hue_lower = -self.hue_range - hue_upper = self.hue_range - ops = [ - functional.brightness, functional.contrast, functional.saturation, - functional.hue - ] - random.shuffle(ops) - params_dict = { - 'brightness': { - 'brightness_lower': brightness_lower, - 'brightness_upper': brightness_upper - }, - 'contrast': { - 'contrast_lower': contrast_lower, - 'contrast_upper': contrast_upper - }, - 'saturation': { - 'saturation_lower': saturation_lower, - 'saturation_upper': saturation_upper - }, - 'hue': { - 'hue_lower': hue_lower, - 'hue_upper': hue_upper - } - } - prob_dict = { - 'brightness': self.brightness_prob, - 'contrast': self.contrast_prob, - 'saturation': self.saturation_prob, - 'hue': self.hue_prob - } - - im = data['img'].astype('uint8') - im = Image.fromarray(im) - for id in range(len(ops)): - params = params_dict[ops[id].__name__] - params['im'] = im - prob = prob_dict[ops[id].__name__] - if np.random.uniform(0, 1) < prob: - im = ops[id](**params) - data['img'] = np.asarray(im) - - for key in data.get('gt_fields', []): - if key in ['alpha', 'trimap']: - continue - else: - im = data[key].astype('uint8') - im = Image.fromarray(im) - for id in range(len(ops)): - params = params_dict[ops[id].__name__] - params['im'] = im - prob = prob_dict[ops[id].__name__] - if np.random.uniform(0, 1) < prob: - im = ops[id](**params) - data[key] = np.asarray(im) - return data - - -if __name__ == "__main__": - transforms = [RandomDistort()] - transforms = Compose(transforms) - fg_path = '/ssd1/home/chenguowei01/github/PaddleSeg/contrib/matting/data/matting/human_matting/Distinctions-646/train/fg/13(2).png' - alpha_path = fg_path.replace('fg', 'alpha') - bg_path = '/ssd1/home/chenguowei01/github/PaddleSeg/contrib/matting/data/matting/human_matting/bg/unsplash_bg/attic/photo-1443884590026-2e4d21aee71c?crop=entropy&cs=tinysrgb&fit=max&fm=jpg&ixid=MnwxMjA3fDB8MXxzZWFyY2h8Nzh8fGF0dGljfGVufDB8fHx8MTYyOTY4MDcxNQ&ixlib=rb-1.2.1&q=80&w=400.jpg' - data = {} - data['fg'] = cv2.imread(fg_path) - data['bg'] = cv2.imread(bg_path) - h, w, c = data['fg'].shape - data['bg'] = cv2.resize(data['bg'], (w, h)) - alpha = cv2.imread(alpha_path) - data['alpha'] = alpha[:, :, 0] - alpha = alpha / 255. 
- data['img'] = alpha * data['fg'] + (1 - alpha) * data['bg'] - - data['gt_fields'] = ['fg', 'bg'] - print(data['img'].shape) - for key in data['gt_fields']: - print(data[key].shape) -# import pdb -# pdb.set_trace() - data = transforms(data) - print(data['img'].dtype, data['img'].shape) - cv2.imwrite('distort_img.jpg', data['img'].transpose([1, 2, 0])) diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/training_scripts/sg2/training/networks.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/training_scripts/sg2/training/networks.py deleted file mode 100644 index 291d1f6d157aeab10896bc106c15fe4d03fcb145..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/training_scripts/sg2/training/networks.py +++ /dev/null @@ -1,966 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import numpy as np -import torch -from torch_utils import misc -from torch_utils import persistence -from torch_utils.ops import conv2d_resample -from torch_utils.ops import upfirdn2d -from torch_utils.ops import bias_act -from torch_utils.ops import fma - -# ---------------------------------------------------------------------------- - - -@misc.profiled_function -def normalize_2nd_moment(x, dim=1, eps=1e-8): - return x * (x.square().mean(dim=dim, keepdim=True) + eps).rsqrt() - -# ---------------------------------------------------------------------------- - - -@misc.profiled_function -def modulated_conv2d( - # Input tensor of shape [batch_size, in_channels, in_height, in_width]. - x, - # Weight tensor of shape [out_channels, in_channels, kernel_height, kernel_width]. - weight, - # Modulation coefficients of shape [batch_size, in_channels]. - styles, - noise=None, # Optional noise tensor to add to the output activations. - up=1, # Integer upsampling factor. - down=1, # Integer downsampling factor. - padding=0, # Padding with respect to the upsampled image. - # Low-pass filter to apply when resampling activations. Must be prepared beforehand by calling upfirdn2d.setup_filter(). - resample_filter=None, - demodulate=True, # Apply weight demodulation? - # False = convolution, True = correlation (matches torch.nn.functional.conv2d). - flip_weight=True, - # Perform modulation, convolution, and demodulation as a single fused operation? - fused_modconv=True, -): - batch_size = x.shape[0] - out_channels, in_channels, kh, kw = weight.shape - misc.assert_shape(weight, [out_channels, in_channels, kh, kw]) # [OIkk] - misc.assert_shape(x, [batch_size, in_channels, None, None]) # [NIHW] - misc.assert_shape(styles, [batch_size, in_channels]) # [NI] - - # Pre-normalize inputs to avoid FP16 overflow. - if x.dtype == torch.float16 and demodulate: - weight = weight * (1 / np.sqrt(in_channels * kh * kw) / - weight.norm(float('inf'), dim=[1, 2, 3], keepdim=True)) # max_Ikk - styles = styles / \ - styles.norm(float('inf'), dim=1, keepdim=True) # max_I - - # Calculate per-sample weights and demodulation coefficients. 
- w = None - dcoefs = None - if demodulate or fused_modconv: - w = weight.unsqueeze(0) # [NOIkk] - w = w * styles.reshape(batch_size, 1, -1, 1, 1) # [NOIkk] - if demodulate: - dcoefs = (w.square().sum(dim=[2, 3, 4]) + 1e-8).rsqrt() # [NO] - if demodulate and fused_modconv: - w = w * dcoefs.reshape(batch_size, -1, 1, 1, 1) # [NOIkk] - - # Execute by scaling the activations before and after the convolution. - if not fused_modconv: - x = x * styles.to(x.dtype).reshape(batch_size, -1, 1, 1) - x = conv2d_resample.conv2d_resample(x=x, w=weight.to( - x.dtype), f=resample_filter, up=up, down=down, padding=padding, flip_weight=flip_weight) - if demodulate and noise is not None: - x = fma.fma(x, dcoefs.to(x.dtype).reshape( - batch_size, -1, 1, 1), noise.to(x.dtype)) - elif demodulate: - x = x * dcoefs.to(x.dtype).reshape(batch_size, -1, 1, 1) - elif noise is not None: - x = x.add_(noise.to(x.dtype)) - return x - - # Execute as one fused op using grouped convolution. - with misc.suppress_tracer_warnings(): # this value will be treated as a constant - batch_size = int(batch_size) - misc.assert_shape(x, [batch_size, in_channels, None, None]) - x = x.reshape(1, -1, *x.shape[2:]) - w = w.reshape(-1, in_channels, kh, kw) - x = conv2d_resample.conv2d_resample(x=x, w=w.to( - x.dtype), f=resample_filter, up=up, down=down, padding=padding, groups=batch_size, flip_weight=flip_weight) - x = x.reshape(batch_size, -1, *x.shape[2:]) - if noise is not None: - x = x.add_(noise) - return x - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class FullyConnectedLayer(torch.nn.Module): - def __init__(self, - in_features, # Number of input features. - out_features, # Number of output features. - bias=True, # Apply additive bias before the activation function? - # Activation function: 'relu', 'lrelu', etc. - activation='linear', - lr_multiplier=1, # Learning rate multiplier. - bias_init=0, # Initial value for the additive bias. - ): - super().__init__() - self.activation = activation - self.weight = torch.nn.Parameter(torch.randn( - [out_features, in_features]) / lr_multiplier) - self.bias = torch.nn.Parameter(torch.full( - [out_features], np.float32(bias_init))) if bias else None - self.weight_gain = lr_multiplier / np.sqrt(in_features) - self.bias_gain = lr_multiplier - - def forward(self, x): - w = self.weight.to(x.dtype) * self.weight_gain - b = self.bias - if b is not None: - b = b.to(x.dtype) - if self.bias_gain != 1: - b = b * self.bias_gain - - if self.activation == 'linear' and b is not None: - x = torch.addmm(b.unsqueeze(0), x, w.t()) - else: - x = x.matmul(w.t()) - x = bias_act.bias_act(x, b, act=self.activation) - return x - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class Conv2dLayer(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels. - out_channels, # Number of output channels. - # Width and height of the convolution kernel. - kernel_size, - bias=True, # Apply additive bias before the activation function? - # Activation function: 'relu', 'lrelu', etc. - activation='linear', - up=1, # Integer upsampling factor. - down=1, # Integer downsampling factor. - # Low-pass filter to apply when resampling activations. - resample_filter=[1, 3, 3, 1], - # Clamp the output to +-X, None = disable clamping. - conv_clamp=None, - channels_last=False, # Expect the input to have memory_format=channels_last? 
- trainable=True, # Update the weights of this layer during training? - ): - super().__init__() - self.activation = activation - self.up = up - self.down = down - self.conv_clamp = conv_clamp - self.register_buffer( - 'resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.padding = kernel_size // 2 - self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2)) - self.act_gain = bias_act.activation_funcs[activation].def_gain - - memory_format = torch.channels_last if channels_last else torch.contiguous_format - weight = torch.randn([out_channels, in_channels, kernel_size, kernel_size]).to( - memory_format=memory_format) - bias = torch.zeros([out_channels]) if bias else None - if trainable: - self.weight = torch.nn.Parameter(weight) - self.bias = torch.nn.Parameter(bias) if bias is not None else None - else: - self.register_buffer('weight', weight) - if bias is not None: - self.register_buffer('bias', bias) - else: - self.bias = None - - def forward(self, x, gain=1): - w = self.weight * self.weight_gain - b = self.bias.to(x.dtype) if self.bias is not None else None - flip_weight = (self.up == 1) # slightly faster - x = conv2d_resample.conv2d_resample(x=x, w=w.to( - x.dtype), f=self.resample_filter, up=self.up, down=self.down, padding=self.padding, flip_weight=flip_weight) - - act_gain = self.act_gain * gain - act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None - x = bias_act.bias_act(x, b, act=self.activation, - gain=act_gain, clamp=act_clamp) - return x - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class MappingNetwork(torch.nn.Module): - def __init__(self, - # Input latent (Z) dimensionality, 0 = no latent. - z_dim, - # Conditioning label (C) dimensionality, 0 = no label. - c_dim, - # Intermediate latent (W) dimensionality. - w_dim, - # Number of intermediate latents to output, None = do not broadcast. - num_ws, - num_layers=8, # Number of mapping layers. - # Label embedding dimensionality, None = same as w_dim. - embed_features=None, - # Number of intermediate features in the mapping layers, None = same as w_dim. - layer_features=None, - # Activation function: 'relu', 'lrelu', etc. - activation='lrelu', - # Learning rate multiplier for the mapping layers. - lr_multiplier=0.01, - # Decay for tracking the moving average of W during training, None = do not track. - w_avg_beta=0.995, - ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.num_ws = num_ws - self.num_layers = num_layers - self.w_avg_beta = w_avg_beta - - if embed_features is None: - embed_features = w_dim - if c_dim == 0: - embed_features = 0 - if layer_features is None: - layer_features = w_dim - features_list = [z_dim + embed_features] + \ - [layer_features] * (num_layers - 1) + [w_dim] - - if c_dim > 0: - self.embed = FullyConnectedLayer(c_dim, embed_features) - for idx in range(num_layers): - in_features = features_list[idx] - out_features = features_list[idx + 1] - layer = FullyConnectedLayer( - in_features, out_features, activation=activation, lr_multiplier=lr_multiplier) - setattr(self, f'fc{idx}', layer) - - if num_ws is not None and w_avg_beta is not None: - self.register_buffer('w_avg', torch.zeros([w_dim])) - - def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, skip_w_avg_update=False): - # Embed, normalize, and concat inputs. 
- x = None - with torch.autograd.profiler.record_function('input'): - if self.z_dim > 0: - misc.assert_shape(z, [None, self.z_dim]) - x = normalize_2nd_moment(z.to(torch.float32)) - if self.c_dim > 0: - misc.assert_shape(c, [None, self.c_dim]) - y = normalize_2nd_moment(self.embed(c.to(torch.float32))) - x = torch.cat([x, y], dim=1) if x is not None else y - - # Main layers. - for idx in range(self.num_layers): - layer = getattr(self, f'fc{idx}') - x = layer(x) - - # Update moving average of W. - if self.w_avg_beta is not None and self.training and not skip_w_avg_update: - with torch.autograd.profiler.record_function('update_w_avg'): - self.w_avg.copy_(x.detach().mean( - dim=0).lerp(self.w_avg, self.w_avg_beta)) - - # Broadcast. - if self.num_ws is not None: - with torch.autograd.profiler.record_function('broadcast'): - x = x.unsqueeze(1).repeat([1, self.num_ws, 1]) - - # Apply truncation. - if truncation_psi != 1: - with torch.autograd.profiler.record_function('truncate'): - assert self.w_avg_beta is not None - if self.num_ws is None or truncation_cutoff is None: - x = self.w_avg.lerp(x, truncation_psi) - else: - x[:, :truncation_cutoff] = self.w_avg.lerp( - x[:, :truncation_cutoff], truncation_psi) - return x - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class SynthesisLayer(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels. - out_channels, # Number of output channels. - # Intermediate latent (W) dimensionality. - w_dim, - resolution, # Resolution of this layer. - kernel_size=3, # Convolution kernel size. - up=1, # Integer upsampling factor. - use_noise=True, # Enable noise input? - # Activation function: 'relu', 'lrelu', etc. - activation='lrelu', - # Low-pass filter to apply when resampling activations. - resample_filter=[1, 3, 3, 1], - # Clamp the output of convolution layers to +-X, None = disable clamping. - conv_clamp=None, - channels_last=False, # Use channels_last format for the weights? 
- square=False, # default if for rectangle images - ): - super().__init__() - self.resolution = resolution - self.up = up - self.use_noise = use_noise - self.activation = activation - self.conv_clamp = conv_clamp - self.register_buffer( - 'resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.padding = kernel_size // 2 - self.act_gain = bias_act.activation_funcs[activation].def_gain - self.square = square - - self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1) - memory_format = torch.channels_last if channels_last else torch.contiguous_format - self.weight = torch.nn.Parameter(torch.randn( - [out_channels, in_channels, kernel_size, kernel_size]).to(memory_format=memory_format)) - if use_noise: - if self.square: - self.register_buffer( - 'noise_const', torch.randn([resolution, resolution])) - else: - self.register_buffer('noise_const', torch.randn( - [resolution, resolution // 2])) - self.noise_strength = torch.nn.Parameter(torch.zeros([])) - self.bias = torch.nn.Parameter(torch.zeros([out_channels])) - - def forward(self, x, w, noise_mode='random', fused_modconv=True, gain=1): - assert noise_mode in ['random', 'const', 'none'] - in_resolution = self.resolution // self.up - if self.square: - misc.assert_shape( - x, [None, self.weight.shape[1], in_resolution, in_resolution]) - else: - misc.assert_shape( - x, [None, self.weight.shape[1], in_resolution, in_resolution // 2]) - styles = self.affine(w) - - noise = None - if self.use_noise and noise_mode == 'random': - if self.square: - noise = torch.randn( - [x.shape[0], 1, self.resolution, self.resolution], device=x.device) * self.noise_strength - else: - noise = torch.randn( - [x.shape[0], 1, self.resolution, self.resolution // 2], device=x.device) * self.noise_strength - if self.use_noise and noise_mode == 'const': - noise = self.noise_const * self.noise_strength - - flip_weight = (self.up == 1) # slightly faster - x = modulated_conv2d(x=x, weight=self.weight, styles=styles, noise=noise, up=self.up, - padding=self.padding, resample_filter=self.resample_filter, flip_weight=flip_weight, fused_modconv=fused_modconv) - - act_gain = self.act_gain * gain - act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None - x = bias_act.bias_act(x, self.bias.to( - x.dtype), act=self.activation, gain=act_gain, clamp=act_clamp) - return x - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class ToRGBLayer(torch.nn.Module): - def __init__(self, in_channels, out_channels, w_dim, kernel_size=1, conv_clamp=None, channels_last=False): - super().__init__() - self.conv_clamp = conv_clamp - self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1) - memory_format = torch.channels_last if channels_last else torch.contiguous_format - self.weight = torch.nn.Parameter(torch.randn( - [out_channels, in_channels, kernel_size, kernel_size]).to(memory_format=memory_format)) - self.bias = torch.nn.Parameter(torch.zeros([out_channels])) - self.weight_gain = 1 / np.sqrt(in_channels * (kernel_size ** 2)) - - def forward(self, x, w, fused_modconv=True): - styles = self.affine(w) * self.weight_gain - x = modulated_conv2d(x=x, weight=self.weight, styles=styles, - demodulate=False, fused_modconv=fused_modconv) - x = bias_act.bias_act(x, self.bias.to(x.dtype), clamp=self.conv_clamp) - return x - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class SynthesisBlock(torch.nn.Module): - def 
__init__(self, - # Number of input channels, 0 = first block. - in_channels, - # Number of output channels. - out_channels, - # Intermediate latent (W) dimensionality. - w_dim, - # Resolution of this block. - resolution, - # Number of output color channels. - img_channels, - is_last, # Is this the last block? - # Architecture: 'orig', 'skip', 'resnet'. - architecture='skip', - # Low-pass filter to apply when resampling activations. - resample_filter=[1, 3, 3, 1], - # Clamp the output of convolution layers to +-X, None = disable clamping. - conv_clamp=None, - use_fp16=False, # Use FP16 for this block? - fp16_channels_last=False, # Use channels-last memory format with FP16? - square=False, # default is for rectangle images - # Arguments for SynthesisLayer. - **layer_kwargs, - ): - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.w_dim = w_dim - self.resolution = resolution - self.img_channels = img_channels - self.is_last = is_last - self.architecture = architecture - self.use_fp16 = use_fp16 - self.channels_last = (use_fp16 and fp16_channels_last) - self.register_buffer( - 'resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.num_conv = 0 - self.num_torgb = 0 - self.square = square - - if in_channels == 0: - if self.square: - self.const = torch.nn.Parameter(torch.randn( - [out_channels, resolution, resolution])) - else: # rectangle - self.const = torch.nn.Parameter(torch.randn( - [out_channels, resolution, resolution // 2])) - - if in_channels != 0: - self.conv0 = SynthesisLayer(in_channels, out_channels, w_dim=w_dim, resolution=resolution, up=2, - resample_filter=resample_filter, conv_clamp=conv_clamp, channels_last=self.channels_last, square=square, **layer_kwargs) - self.num_conv += 1 - - self.conv1 = SynthesisLayer(out_channels, out_channels, w_dim=w_dim, resolution=resolution, - conv_clamp=conv_clamp, channels_last=self.channels_last, square=square, **layer_kwargs) - self.num_conv += 1 - - if is_last or architecture == 'skip': - self.torgb = ToRGBLayer(out_channels, img_channels, w_dim=w_dim, - conv_clamp=conv_clamp, channels_last=self.channels_last) - self.num_torgb += 1 - - if in_channels != 0 and architecture == 'resnet': - self.skip = Conv2dLayer(in_channels, out_channels, kernel_size=1, bias=False, up=2, - resample_filter=resample_filter, channels_last=self.channels_last) - - def forward(self, x, img, ws, force_fp32=False, fused_modconv=None, **layer_kwargs): - misc.assert_shape( - ws, [None, self.num_conv + self.num_torgb, self.w_dim]) - w_iter = iter(ws.unbind(dim=1)) - dtype = torch.float16 if self.use_fp16 and not force_fp32 else torch.float32 - memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format - if fused_modconv is None: - with misc.suppress_tracer_warnings(): # this value will be treated as a constant - fused_modconv = (not self.training) and ( - dtype == torch.float32 or int(x.shape[0]) == 1) - - # Input. - if self.in_channels == 0: - x = self.const.to(dtype=dtype, memory_format=memory_format) - x = x.unsqueeze(0).repeat([ws.shape[0], 1, 1, 1]) - else: - if self.square: - misc.assert_shape( - x, [None, self.in_channels, self.resolution // 2, self.resolution // 2]) - else: # rectangle - misc.assert_shape( - x, [None, self.in_channels, self.resolution // 2, self.resolution // 4]) - x = x.to(dtype=dtype, memory_format=memory_format) - - # Main layers. 
- if self.in_channels == 0: - x = self.conv1(x, next(w_iter), - fused_modconv=fused_modconv, **layer_kwargs) - elif self.architecture == 'resnet': - y = self.skip(x, gain=np.sqrt(0.5)) - x = self.conv0(x, next(w_iter), - fused_modconv=fused_modconv, **layer_kwargs) - x = self.conv1(x, next(w_iter), fused_modconv=fused_modconv, - gain=np.sqrt(0.5), **layer_kwargs) - x = y.add_(x) - else: - x = self.conv0(x, next(w_iter), - fused_modconv=fused_modconv, **layer_kwargs) - x = self.conv1(x, next(w_iter), - fused_modconv=fused_modconv, **layer_kwargs) - - # ToRGB. - if img is not None: - if self.square: - misc.assert_shape( - img, [None, self.img_channels, self.resolution // 2, self.resolution // 2]) - else: - misc.assert_shape( - img, [None, self.img_channels, self.resolution // 2, self.resolution // 4]) - img = upfirdn2d.upsample2d(img, self.resample_filter) - if self.is_last or self.architecture == 'skip': - y = self.torgb(x, next(w_iter), fused_modconv=fused_modconv) - y = y.to(dtype=torch.float32, - memory_format=torch.contiguous_format) - img = img.add_(y) if img is not None else y - - assert x.dtype == dtype - assert img is None or img.dtype == torch.float32 - return x, img - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class SynthesisNetwork(torch.nn.Module): - def __init__(self, - # Intermediate latent (W) dimensionality. - w_dim, - img_resolution, # Output image resolution. - img_channels, # Number of color channels. - square, - # Overall multiplier for the number of channels. - channel_base=32768, - # Maximum number of channels in any layer. - channel_max=512, - # Use FP16 for the N highest resolutions. - num_fp16_res=0, - **block_kwargs, # Arguments for SynthesisBlock. - ): - assert img_resolution >= 4 and img_resolution & ( - img_resolution - 1) == 0 - super().__init__() - self.w_dim = w_dim - self.img_resolution = img_resolution - self.img_resolution_log2 = int(np.log2(img_resolution)) - self.img_channels = img_channels - self.square = square - self.block_resolutions = [ - 2 ** i for i in range(2, self.img_resolution_log2 + 1)] - channels_dict = {res: min(channel_base // res, channel_max) - for res in self.block_resolutions} - fp16_resolution = max( - 2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8) - - self.num_ws = 0 - for res in self.block_resolutions: - in_channels = channels_dict[res // 2] if res > 4 else 0 - out_channels = channels_dict[res] - use_fp16 = (res >= fp16_resolution) - is_last = (res == self.img_resolution) - block = SynthesisBlock(in_channels, out_channels, w_dim=w_dim, resolution=res, - img_channels=img_channels, is_last=is_last, use_fp16=use_fp16, square=square, **block_kwargs) - self.num_ws += block.num_conv - if is_last: - self.num_ws += block.num_torgb - setattr(self, f'b{res}', block) - - def forward(self, ws, return_feature=False, **block_kwargs): - block_ws = [] - features = [] - with torch.autograd.profiler.record_function('split_ws'): - misc.assert_shape(ws, [None, self.num_ws, self.w_dim]) - ws = ws.to(torch.float32) - w_idx = 0 - for res in self.block_resolutions: - block = getattr(self, f'b{res}') - block_ws.append( - ws.narrow(1, w_idx, block.num_conv + block.num_torgb)) - w_idx += block.num_conv - - x = img = None - for res, cur_ws in zip(self.block_resolutions, block_ws): - block = getattr(self, f'b{res}') - x, img = block(x, img, cur_ws, **block_kwargs) - features.append(x) - if return_feature: - return img, features - else: - return img - -# 
---------------------------------------------------------------------------- - - -@persistence.persistent_class -class Generator(torch.nn.Module): - def __init__(self, - z_dim, # Input latent (Z) dimensionality. - # Conditioning label (C) dimensionality. - c_dim, - # Intermediate latent (W) dimensionality. - w_dim, - img_resolution, # Output resolution. - square, - img_channels, # Number of output color channels. - mapping_kwargs={}, # Arguments for MappingNetwork. - synthesis_kwargs={}, # Arguments for SynthesisNetwork. - padding=False - ): - super().__init__() - self.z_dim = z_dim - self.c_dim = c_dim - self.w_dim = w_dim - self.square = square - self.img_resolution = img_resolution - self.img_channels = img_channels - self.padding = padding - self.synthesis = SynthesisNetwork( - w_dim=w_dim, img_resolution=img_resolution, img_channels=img_channels, square=square, **synthesis_kwargs) - self.num_ws = self.synthesis.num_ws - self.mapping = MappingNetwork( - z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs) - - def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, input_is_w=False, return_feature=False, **synthesis_kwargs): - if input_is_w: - ws = z - if ws.dim() == 2: - ws = ws.unsqueeze(1).repeat([1, self.mapping.num_ws, 1]) - else: - ws = self.mapping(z, c, truncation_psi=truncation_psi, - truncation_cutoff=truncation_cutoff) - img = self.synthesis( - ws, return_feature=return_feature, **synthesis_kwargs) - if return_feature: - img, feature = img - if self.padding: - pad = (img.size(2) - img.size(3)) // 2 - img = torch.nn.functional.pad(img, (pad, pad), "constant", 1) - if return_feature: - for i, feat in enumerate(feature): - pad = (feat.size(2) - feat.size(3)) // 2 - feature[i] = torch.nn.functional.pad( - feat, (pad, pad), "constant", 0) - if return_feature: - return img, feature - else: - return img - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class DiscriminatorBlock(torch.nn.Module): - def __init__(self, - # Number of input channels, 0 = first block. - in_channels, - # Number of intermediate channels. - tmp_channels, - # Number of output channels. - out_channels, - # Resolution of this block. - resolution, - # Number of input color channels. - img_channels, - # Index of the first layer. - first_layer_idx, - # Architecture: 'orig', 'skip', 'resnet'. - architecture='resnet', - # Activation function: 'relu', 'lrelu', etc. - activation='lrelu', - # Low-pass filter to apply when resampling activations. - resample_filter=[1, 3, 3, 1], - # Clamp the output of convolution layers to +-X, None = disable clamping. - conv_clamp=None, - use_fp16=False, # Use FP16 for this block? - fp16_channels_last=False, # Use channels-last memory format with FP16? - # Freeze-D: Number of layers to freeze. 
- freeze_layers=0, - square=False, - ): - assert in_channels in [0, tmp_channels] - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.resolution = resolution - self.img_channels = img_channels - self.first_layer_idx = first_layer_idx - self.architecture = architecture - self.use_fp16 = use_fp16 - self.channels_last = (use_fp16 and fp16_channels_last) - self.register_buffer( - 'resample_filter', upfirdn2d.setup_filter(resample_filter)) - self.square = square - - self.num_layers = 0 - - def trainable_gen(): - while True: - layer_idx = self.first_layer_idx + self.num_layers - trainable = (layer_idx >= freeze_layers) - self.num_layers += 1 - yield trainable - trainable_iter = trainable_gen() - - if in_channels == 0 or architecture == 'skip': - self.fromrgb = Conv2dLayer(img_channels, tmp_channels, kernel_size=1, activation=activation, - trainable=next(trainable_iter), conv_clamp=conv_clamp, channels_last=self.channels_last) - - self.conv0 = Conv2dLayer(tmp_channels, tmp_channels, kernel_size=3, activation=activation, - trainable=next(trainable_iter), conv_clamp=conv_clamp, channels_last=self.channels_last) - - self.conv1 = Conv2dLayer(tmp_channels, out_channels, kernel_size=3, activation=activation, down=2, - trainable=next(trainable_iter), resample_filter=resample_filter, conv_clamp=conv_clamp, channels_last=self.channels_last) - - if architecture == 'resnet': - self.skip = Conv2dLayer(tmp_channels, out_channels, kernel_size=1, bias=False, down=2, - trainable=next(trainable_iter), resample_filter=resample_filter, channels_last=self.channels_last) - - def forward(self, x, img, force_fp32=False): - dtype = torch.float16 if self.use_fp16 and not force_fp32 else torch.float32 - memory_format = torch.channels_last if self.channels_last and not force_fp32 else torch.contiguous_format - - # Input. - if x is not None: - if self.square: - misc.assert_shape( - x, [None, self.in_channels, self.resolution, self.resolution]) - else: - misc.assert_shape( - x, [None, self.in_channels, self.resolution, self.resolution // 2]) - x = x.to(dtype=dtype, memory_format=memory_format) - - # FromRGB. - if self.in_channels == 0 or self.architecture == 'skip': - if self.square: - misc.assert_shape( - img, [None, self.img_channels, self.resolution, self.resolution]) - else: - misc.assert_shape( - img, [None, self.img_channels, self.resolution, self.resolution // 2]) - img = img.to(dtype=dtype, memory_format=memory_format) - y = self.fromrgb(img) - x = x + y if x is not None else y - img = upfirdn2d.downsample2d( - img, self.resample_filter) if self.architecture == 'skip' else None - - # Main layers. 
- if self.architecture == 'resnet': - y = self.skip(x, gain=np.sqrt(0.5)) - x = self.conv0(x) - x = self.conv1(x, gain=np.sqrt(0.5)) - x = y.add_(x) - else: - x = self.conv0(x) - x = self.conv1(x) - - assert x.dtype == dtype - return x, img - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class MinibatchStdLayer(torch.nn.Module): - def __init__(self, group_size, num_channels=1): - super().__init__() - self.group_size = group_size - self.num_channels = num_channels - - def forward(self, x): - N, C, H, W = x.shape - with misc.suppress_tracer_warnings(): # as_tensor results are registered as constants - G = torch.min(torch.as_tensor(self.group_size), torch.as_tensor( - N)) if self.group_size is not None else N - F = self.num_channels - c = C // F - - # [GnFcHW] Split minibatch N into n groups of size G, and channels C into F groups of size c. - y = x.reshape(G, -1, F, c, H, W) - # [GnFcHW] Subtract mean over group. - y = y - y.mean(dim=0) - # [nFcHW] Calc variance over group. - y = y.square().mean(dim=0) - y = (y + 1e-8).sqrt() # [nFcHW] Calc stddev over group. - # [nF] Take average over channels and pixels. - y = y.mean(dim=[2, 3, 4]) - y = y.reshape(-1, F, 1, 1) # [nF11] Add missing dimensions. - # [NFHW] Replicate over group and pixels. - y = y.repeat(G, 1, H, W) - # [NCHW] Append to input as new channels. - x = torch.cat([x, y], dim=1) - return x - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class DiscriminatorEpilogue(torch.nn.Module): - def __init__(self, - in_channels, # Number of input channels. - # Dimensionality of mapped conditioning label, 0 = no label. - cmap_dim, - resolution, # Resolution of this block. - # Number of input color channels. - img_channels, - # Architecture: 'orig', 'skip', 'resnet'. - architecture='resnet', - # Group size for the minibatch standard deviation layer, None = entire minibatch. - mbstd_group_size=4, - # Number of features for the minibatch standard deviation layer, 0 = disable. - mbstd_num_channels=1, - # Activation function: 'relu', 'lrelu', etc. - activation='lrelu', - # Clamp the output of convolution layers to +-X, None = disable clamping. 
- conv_clamp=None, - square=False, - ): - assert architecture in ['orig', 'skip', 'resnet'] - super().__init__() - self.in_channels = in_channels - self.cmap_dim = cmap_dim - self.resolution = resolution - self.img_channels = img_channels - self.architecture = architecture - self.square = square - - if architecture == 'skip': - self.fromrgb = Conv2dLayer( - img_channels, in_channels, kernel_size=1, activation=activation) - self.mbstd = MinibatchStdLayer( - group_size=mbstd_group_size, num_channels=mbstd_num_channels) if mbstd_num_channels > 0 else None - self.conv = Conv2dLayer(in_channels + mbstd_num_channels, in_channels, - kernel_size=3, activation=activation, conv_clamp=conv_clamp) - - if self.square: - self.fc = FullyConnectedLayer( - in_channels * (resolution ** 2), in_channels, activation=activation) - else: - self.fc = FullyConnectedLayer( - in_channels * (resolution ** 2 // 2), in_channels, activation=activation) - - self.out = FullyConnectedLayer( - in_channels, 1 if cmap_dim == 0 else cmap_dim) - - def forward(self, x, img, cmap, force_fp32=False): - if self.square: - misc.assert_shape(x, [None, self.in_channels, - self.resolution, self.resolution]) - else: - misc.assert_shape( - x, [None, self.in_channels, self.resolution, self.resolution // 2]) # [NCHW] - _ = force_fp32 # unused - dtype = torch.float32 - memory_format = torch.contiguous_format - - # FromRGB. - x = x.to(dtype=dtype, memory_format=memory_format) - if self.architecture == 'skip': - if self.square: - misc.assert_shape( - img, [None, self.img_channels, self.resolution, self.resolution]) - else: - misc.assert_shape( - img, [None, self.img_channels, self.resolution, self.resolution // 2]) - img = img.to(dtype=dtype, memory_format=memory_format) - x = x + self.fromrgb(img) - - # Main layers. - if self.mbstd is not None: - x = self.mbstd(x) - x = self.conv(x) - x = self.fc(x.flatten(1)) - x = self.out(x) - - # Conditioning. - if self.cmap_dim > 0: - misc.assert_shape(cmap, [None, self.cmap_dim]) - x = (x * cmap).sum(dim=1, keepdim=True) * \ - (1 / np.sqrt(self.cmap_dim)) - - assert x.dtype == dtype - return x - -# ---------------------------------------------------------------------------- - - -@persistence.persistent_class -class Discriminator(torch.nn.Module): - def __init__(self, - # Conditioning label (C) dimensionality. - c_dim, - img_resolution, # Input resolution. - # Number of input color channels. - img_channels, - # Architecture: 'orig', 'skip', 'resnet'. - architecture='resnet', - # Overall multiplier for the number of channels. - channel_base=32768, - # Maximum number of channels in any layer. - channel_max=512, - # Use FP16 for the N highest resolutions. - num_fp16_res=0, - # Clamp the output of convolution layers to +-X, None = disable clamping. - conv_clamp=None, - # Dimensionality of mapped conditioning label, None = default. - cmap_dim=None, - square=False, # default for rectangle images - block_kwargs={}, # Arguments for DiscriminatorBlock. - mapping_kwargs={}, # Arguments for MappingNetwork. - # Arguments for DiscriminatorEpilogue. 
- epilogue_kwargs={}, - ): - super().__init__() - self.c_dim = c_dim - self.img_resolution = img_resolution - self.img_resolution_log2 = int(np.log2(img_resolution)) - self.img_channels = img_channels - self.square = square - self.block_resolutions = [ - 2 ** i for i in range(self.img_resolution_log2, 2, -1)] - channels_dict = {res: min(channel_base // res, channel_max) - for res in self.block_resolutions + [4]} - fp16_resolution = max( - 2 ** (self.img_resolution_log2 + 1 - num_fp16_res), 8) - - if cmap_dim is None: - cmap_dim = channels_dict[4] - if c_dim == 0: - cmap_dim = 0 - - common_kwargs = dict(img_channels=img_channels, - architecture=architecture, conv_clamp=conv_clamp) - cur_layer_idx = 0 - for res in self.block_resolutions: - in_channels = channels_dict[res] if res < img_resolution else 0 - tmp_channels = channels_dict[res] - out_channels = channels_dict[res // 2] - use_fp16 = (res >= fp16_resolution) - block = DiscriminatorBlock(in_channels, tmp_channels, out_channels, resolution=res, - first_layer_idx=cur_layer_idx, use_fp16=use_fp16, square=square, **block_kwargs, **common_kwargs) - setattr(self, f'b{res}', block) - cur_layer_idx += block.num_layers - if c_dim > 0: - self.mapping = MappingNetwork( - z_dim=0, c_dim=c_dim, w_dim=cmap_dim, num_ws=None, w_avg_beta=None, **mapping_kwargs) - self.b4 = DiscriminatorEpilogue( - channels_dict[4], cmap_dim=cmap_dim, resolution=4, square=square, **epilogue_kwargs, **common_kwargs) - - def forward(self, img, c, **block_kwargs): - x = None - for res in self.block_resolutions: - block = getattr(self, f'b{res}') - x, img = block(x, img, **block_kwargs) - - cmap = None - if self.c_dim > 0: - cmap = self.mapping(None, c) - x = self.b4(x, img, cmap) - return x - -# ---------------------------------------------------------------------------- diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/utils/log_utils.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/utils/log_utils.py deleted file mode 100644 index 7b4528dda762802b1161b7148c4348a5d360ad83..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/utils/log_utils.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. 
- - -import numpy as np -from PIL import Image -import wandb -from pti.pti_configs import global_config -import torch -import matplotlib.pyplot as plt - - -def log_image_from_w(w, G, name): - img = get_image_from_w(w, G) - pillow_image = Image.fromarray(img) - wandb.log( - {f"{name}": [ - wandb.Image(pillow_image, caption=f"current inversion {name}")]}, - step=global_config.training_step) - - -def log_images_from_w(ws, G, names): - for name, w in zip(names, ws): - w = w.to(global_config.device) - log_image_from_w(w, G, name) - - -def plot_image_from_w(w, G): - img = get_image_from_w(w, G) - pillow_image = Image.fromarray(img) - plt.imshow(pillow_image) - plt.show() - - -def plot_image(img): - img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, - 255).to(torch.uint8).detach().cpu().numpy() - pillow_image = Image.fromarray(img[0]) - plt.imshow(pillow_image) - plt.show() - - -def save_image(name, method_type, results_dir, image, run_id): - image.save(f'{results_dir}/{method_type}_{name}_{run_id}.jpg') - - -def save_w(w, G, name, method_type, results_dir): - im = get_image_from_w(w, G) - im = Image.fromarray(im, mode='RGB') - save_image(name, method_type, results_dir, im) - - -def save_concat_image(base_dir, image_latents, new_inv_image_latent, new_G, - old_G, - file_name, - extra_image=None): - images_to_save = [] - if extra_image is not None: - images_to_save.append(extra_image) - for latent in image_latents: - images_to_save.append(get_image_from_w(latent, old_G)) - images_to_save.append(get_image_from_w(new_inv_image_latent, new_G)) - result_image = create_alongside_images(images_to_save) - result_image.save(f'{base_dir}/{file_name}.jpg') - - -def save_single_image(base_dir, image_latent, G, file_name): - image_to_save = get_image_from_w(image_latent, G) - image_to_save = Image.fromarray(image_to_save, mode='RGB') - image_to_save.save(f'{base_dir}/{file_name}.jpg') - - -def create_alongside_images(images): - res = np.concatenate([np.array(image) for image in images], axis=1) - return Image.fromarray(res, mode='RGB') - - -def get_image_from_w(w, G): - if len(w.size()) <= 2: - w = w.unsqueeze(0) - with torch.no_grad(): - img = G.synthesis(w, noise_mode='const') - img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, - 255).to(torch.uint8).detach().cpu().numpy() - return img[0] diff --git a/spaces/EXPOSUREEE/Ai-Image-Enhancer/tests/test_dataset.py b/spaces/EXPOSUREEE/Ai-Image-Enhancer/tests/test_dataset.py deleted file mode 100644 index 715b4082645c131d43d728ae8f65bcc2430aa8c9..0000000000000000000000000000000000000000 --- a/spaces/EXPOSUREEE/Ai-Image-Enhancer/tests/test_dataset.py +++ /dev/null @@ -1,151 +0,0 @@ -import pytest -import yaml - -from realesrgan.data.realesrgan_dataset import RealESRGANDataset -from realesrgan.data.realesrgan_paired_dataset import RealESRGANPairedDataset - - -def test_realesrgan_dataset(): - - with open('tests/data/test_realesrgan_dataset.yml', mode='r') as f: - opt = yaml.load(f, Loader=yaml.FullLoader) - - dataset = RealESRGANDataset(opt) - assert dataset.io_backend_opt['type'] == 'disk' # io backend - assert len(dataset) == 2 # whether to read correct meta info - assert dataset.kernel_list == [ - 'iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso' - ] # correct initialization the degradation configurations - assert dataset.betag_range2 == [0.5, 4] - - # test __getitem__ - result = dataset.__getitem__(0) - # check returned keys - expected_keys = ['gt', 'kernel1', 'kernel2', 'sinc_kernel', 'gt_path'] - assert 
set(expected_keys).issubset(set(result.keys())) - # check shape and contents - assert result['gt'].shape == (3, 400, 400) - assert result['kernel1'].shape == (21, 21) - assert result['kernel2'].shape == (21, 21) - assert result['sinc_kernel'].shape == (21, 21) - assert result['gt_path'] == 'tests/data/gt/baboon.png' - - # ------------------ test lmdb backend -------------------- # - opt['dataroot_gt'] = 'tests/data/gt.lmdb' - opt['io_backend']['type'] = 'lmdb' - - dataset = RealESRGANDataset(opt) - assert dataset.io_backend_opt['type'] == 'lmdb' # io backend - assert len(dataset.paths) == 2 # whether to read correct meta info - assert dataset.kernel_list == [ - 'iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso' - ] # correct initialization the degradation configurations - assert dataset.betag_range2 == [0.5, 4] - - # test __getitem__ - result = dataset.__getitem__(1) - # check returned keys - expected_keys = ['gt', 'kernel1', 'kernel2', 'sinc_kernel', 'gt_path'] - assert set(expected_keys).issubset(set(result.keys())) - # check shape and contents - assert result['gt'].shape == (3, 400, 400) - assert result['kernel1'].shape == (21, 21) - assert result['kernel2'].shape == (21, 21) - assert result['sinc_kernel'].shape == (21, 21) - assert result['gt_path'] == 'comic' - - # ------------------ test with sinc_prob = 0 -------------------- # - opt['dataroot_gt'] = 'tests/data/gt.lmdb' - opt['io_backend']['type'] = 'lmdb' - opt['sinc_prob'] = 0 - opt['sinc_prob2'] = 0 - opt['final_sinc_prob'] = 0 - dataset = RealESRGANDataset(opt) - result = dataset.__getitem__(0) - # check returned keys - expected_keys = ['gt', 'kernel1', 'kernel2', 'sinc_kernel', 'gt_path'] - assert set(expected_keys).issubset(set(result.keys())) - # check shape and contents - assert result['gt'].shape == (3, 400, 400) - assert result['kernel1'].shape == (21, 21) - assert result['kernel2'].shape == (21, 21) - assert result['sinc_kernel'].shape == (21, 21) - assert result['gt_path'] == 'baboon' - - # ------------------ lmdb backend should have paths ends with lmdb -------------------- # - with pytest.raises(ValueError): - opt['dataroot_gt'] = 'tests/data/gt' - opt['io_backend']['type'] = 'lmdb' - dataset = RealESRGANDataset(opt) - - -def test_realesrgan_paired_dataset(): - - with open('tests/data/test_realesrgan_paired_dataset.yml', mode='r') as f: - opt = yaml.load(f, Loader=yaml.FullLoader) - - dataset = RealESRGANPairedDataset(opt) - assert dataset.io_backend_opt['type'] == 'disk' # io backend - assert len(dataset) == 2 # whether to read correct meta info - - # test __getitem__ - result = dataset.__getitem__(0) - # check returned keys - expected_keys = ['gt', 'lq', 'gt_path', 'lq_path'] - assert set(expected_keys).issubset(set(result.keys())) - # check shape and contents - assert result['gt'].shape == (3, 128, 128) - assert result['lq'].shape == (3, 32, 32) - assert result['gt_path'] == 'tests/data/gt/baboon.png' - assert result['lq_path'] == 'tests/data/lq/baboon.png' - - # ------------------ test lmdb backend -------------------- # - opt['dataroot_gt'] = 'tests/data/gt.lmdb' - opt['dataroot_lq'] = 'tests/data/lq.lmdb' - opt['io_backend']['type'] = 'lmdb' - - dataset = RealESRGANPairedDataset(opt) - assert dataset.io_backend_opt['type'] == 'lmdb' # io backend - assert len(dataset) == 2 # whether to read correct meta info - - # test __getitem__ - result = dataset.__getitem__(1) - # check returned keys - expected_keys = ['gt', 'lq', 'gt_path', 'lq_path'] - assert 
set(expected_keys).issubset(set(result.keys())) - # check shape and contents - assert result['gt'].shape == (3, 128, 128) - assert result['lq'].shape == (3, 32, 32) - assert result['gt_path'] == 'comic' - assert result['lq_path'] == 'comic' - - # ------------------ test paired_paths_from_folder -------------------- # - opt['dataroot_gt'] = 'tests/data/gt' - opt['dataroot_lq'] = 'tests/data/lq' - opt['io_backend'] = dict(type='disk') - opt['meta_info'] = None - - dataset = RealESRGANPairedDataset(opt) - assert dataset.io_backend_opt['type'] == 'disk' # io backend - assert len(dataset) == 2 # whether to read correct meta info - - # test __getitem__ - result = dataset.__getitem__(0) - # check returned keys - expected_keys = ['gt', 'lq', 'gt_path', 'lq_path'] - assert set(expected_keys).issubset(set(result.keys())) - # check shape and contents - assert result['gt'].shape == (3, 128, 128) - assert result['lq'].shape == (3, 32, 32) - - # ------------------ test normalization -------------------- # - dataset.mean = [0.5, 0.5, 0.5] - dataset.std = [0.5, 0.5, 0.5] - # test __getitem__ - result = dataset.__getitem__(0) - # check returned keys - expected_keys = ['gt', 'lq', 'gt_path', 'lq_path'] - assert set(expected_keys).issubset(set(result.keys())) - # check shape and contents - assert result['gt'].shape == (3, 128, 128) - assert result['lq'].shape == (3, 32, 32) diff --git a/spaces/ElainaFanBoy/MusicGen/CONTRIBUTING.md b/spaces/ElainaFanBoy/MusicGen/CONTRIBUTING.md deleted file mode 100644 index 55b99140204d785d572ada9761dd77f302ae31c6..0000000000000000000000000000000000000000 --- a/spaces/ElainaFanBoy/MusicGen/CONTRIBUTING.md +++ /dev/null @@ -1,35 +0,0 @@ -# Contributing to Audiocraft - -We want to make contributing to this project as easy and transparent as -possible. - -## Pull Requests - -Audiocraft is the implementation of a research paper. -Therefore, we do not plan on accepting many pull requests for new features. -We certainly welcome them for bug fixes. - -1. Fork the repo and create your branch from `main`. -2. If you've added code that should be tested, add tests. -3. If you've changed APIs, update the documentation. -4. Ensure the test suite passes. -5. Make sure your code lints. -6. If you haven't already, complete the Contributor License Agreement ("CLA"). - -## Contributor License Agreement ("CLA") -In order to accept your pull request, we need you to submit a CLA. You only need -to do this once to work on any of Meta's open source projects. - -Complete your CLA here: - -## Issues -We use GitHub issues to track public bugs. Please ensure your description is -clear and has sufficient instructions to be able to reproduce the issue. - -Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe -disclosure of security bugs. In those cases, please go through the process -outlined on that page and do not file a public issue. - -## License -By contributing to encodec, you agree that your contributions will be licensed -under the LICENSE file in the root directory of this source tree. diff --git a/spaces/EsoCode/text-generation-webui/docs/Training-LoRAs.md b/spaces/EsoCode/text-generation-webui/docs/Training-LoRAs.md deleted file mode 100644 index 83e6d5a7251eea080cd7dfe8d19a2e42d6d3a822..0000000000000000000000000000000000000000 --- a/spaces/EsoCode/text-generation-webui/docs/Training-LoRAs.md +++ /dev/null @@ -1,174 +0,0 @@ -## Training Your Own LoRAs - -The WebUI seeks to make training your own LoRAs as easy as possible. 
It comes down to just a few simple steps:

### **Step 1**: Make a plan.
- What base model do you want to use? The LoRA you make has to be matched up to a single architecture (eg LLaMA-13B) and cannot be transferred to others (eg LLaMA-7B, StableLM, etc. would all be different). Derivatives of the same model (eg Alpaca finetune of LLaMA-13B) might be transferable, but even then it's best to train exactly on what you plan to use.
- What model format do you want? At time of writing, 8-bit models are most stable, and 4-bit are supported but experimental. In the near future it is likely that 4-bit will be the best option for most users.
- What are you training it on? Do you want it to learn real information, a simple format, ...?

### **Step 2**: Gather a dataset.
- If you use a dataset similar to the [Alpaca](https://github.com/gururise/AlpacaDataCleaned/blob/main/alpaca_data_cleaned.json) format, that is natively supported by the `Formatted Dataset` input in the WebUI, with premade formatter options.
- If you use a dataset that isn't matched to Alpaca's format, but uses the same basic JSON structure, you can make your own format file by copying `training/formats/alpaca-format.json` to a new file and [editing its content](#format-files).
- If you can get the dataset into a simple text file, that works too! You can train using the `Raw text file` input option.
    - This means you can for example just copy/paste a chatlog/documentation page/whatever you want, shove it in a plain text file, and train on it.
- If you use a structured dataset not in this format, you may have to find an external way to convert it - or open an issue to request native support.

### **Step 3**: Do the training.
- **3.1**: Load the WebUI, and your model.
    - Make sure you don't have any LoRAs already loaded (unless you want to train for multi-LoRA usage).
- **3.2**: Open the `Training` tab at the top, `Train LoRA` sub-tab.
- **3.3**: Fill in the name of the LoRA, and select your dataset in the dataset options.
- **3.4**: Select other parameters to your preference. See [parameters below](#parameters).
- **3.5**: Click `Start LoRA Training`, and wait.
    - It can take a few hours for a large dataset, or just a few minutes if doing a small run.
    - You may want to monitor your [loss value](#loss) while it goes.

### **Step 4**: Evaluate your results.
- Load the LoRA under the Models Tab.
- You can go test-drive it on the `Text generation` tab, or you can use the `Perplexity evaluation` sub-tab of the `Training` tab.
- If you used the `Save every n steps` option, you can grab prior copies of the model from sub-folders within the LoRA model's folder and try them instead.

### **Step 5**: Re-run if you're unhappy.
- Make sure to unload the LoRA before training it.
- You can simply resume a prior run - use `Copy parameters from` to select your LoRA, and edit parameters. Note that you cannot change the `Rank` of an already created LoRA.
    - If you want to resume from a checkpoint saved along the way, simply copy the contents of the checkpoint folder into the LoRA's folder.
    - (Note: `adapter_model.bin` is the important file that holds the actual LoRA content).
    - This will reset the Learning Rate and Steps back to the start. If you want to resume as if you were midway through, you can adjust your Learning Rate to the last reported LR in logs and reduce your epochs.
- Or, you can start over entirely if you prefer.
- If your model is producing corrupted outputs, you probably need to start over and use a lower Learning Rate.
- If your model isn't learning detailed information but you want it to, you might need to just run more epochs, or you might need a higher Rank.
- If your model is enforcing a format you didn't want, you may need to tweak your dataset, or start over and not train as far.

## Format Files

If using JSON formatted datasets, they are presumed to be in the following approximate format:

```json
[
    {
        "somekey": "somevalue",
        "key2": "value2"
    },
    {
        // etc
    }
]
```

Where the keys (eg `somekey`, `key2` above) are standardized, and relatively consistent across the dataset, and the values (eg `somevalue`, `value2`) contain the content actually intended to be trained.

For Alpaca, the keys are `instruction`, `input`, and `output`, wherein `input` is sometimes blank.

A simple format file for Alpaca to be used as a chat bot is:

```json
{
    "instruction,output": "User: %instruction%\nAssistant: %output%",
    "instruction,input,output": "User: %instruction%: %input%\nAssistant: %output%"
}
```

Note that the keys (eg `instruction,output`) are a comma-separated list of dataset keys, and the values are a simple template string that references those keys surrounded by `%` signs.

So for example if a dataset has `"instruction": "answer my question"`, then the format file's `User: %instruction%\n` will be automatically filled in as `User: answer my question\n`.

If you have different sets of key inputs, you can make your own format file to match it. This format file is designed to be as simple as possible to enable easy editing to match your needs.

## Raw Text File Settings

When using raw text files as your dataset, the text is automatically split into chunks based on your `Cutoff Length`, and you get a few basic options for configuring those chunks.
- `Overlap Length` is how much to overlap chunks by. Overlapping chunks helps prevent the model from learning strange mid-sentence cuts, and instead learn continuous sentences that flow from earlier text.
- `Prefer Newline Cut Length` sets a maximum distance in characters to shift the chunk cut towards newlines. Doing this helps prevent lines from starting or ending mid-sentence, preventing the model from learning to cut off sentences randomly.
- `Hard Cut String` sets a string that indicates there must be a hard cut without overlap. This defaults to `\n\n\n`, meaning 3 newlines. No trained chunk will ever contain this string. This allows you to insert unrelated sections of text in the same text file, but still ensure the model won't be taught to randomly change the subject.

## Parameters

The basic purpose and function of each parameter is documented on-page in the WebUI, so read through them in the UI to understand your options.

That said, here's a guide to the most important parameter choices you should consider:

### VRAM

- First, you must consider your VRAM availability.
    - Generally, under default settings, VRAM usage for training with default parameters is very close to when generating text (with 1000+ tokens of context) (ie, if you can generate text, you can train LoRAs).
    - Note: this is worse by default in the 4-bit monkeypatch currently. Reduce `Micro Batch Size` to `1` to restore this to expectations.
    - If you have VRAM to spare, setting higher batch sizes will use more VRAM and get you better quality training in exchange.
    - If you have large data, setting a higher cutoff length may be beneficial, but will cost significant VRAM.
If you can spare some, set your batch size to `1` and see how high you can push your cutoff length.
    - If you're low on VRAM, reducing batch size or cutoff length will of course improve that.
    - Don't be afraid to just try it and see what happens. If it's too much, it will just error out, and you can lower settings and try again.

### Rank

- Second, you want to consider the amount of learning you want.
    - For example, you may wish to just learn a dialogue format (as in the case of Alpaca) in which case setting a low `Rank` value (32 or lower) works great.
    - Or, you might be training on project documentation you want the bot to understand and be able to answer questions about, in which case the higher the rank, the better.
    - Generally, higher Rank = more precise learning = more total content learned = more VRAM usage while training.

### Learning Rate and Epochs

- Third, how carefully you want it to be learned.
    - In other words, how okay or not you are with the model losing unrelated understandings.
- You can control this with 3 key settings: the Learning Rate, its scheduler, and your total epochs.
- The learning rate controls how much change is made to the model by each token it sees.
    - It's in scientific notation normally, so for example `3e-4` means `3 * 10^-4` which is `0.0003`. The number after `e-` controls how many `0`s are in the number.
    - Higher values let training run faster, but also are more likely to corrupt prior data in the model.
- You essentially have two variables to balance: the LR, and Epochs.
    - If you make LR higher, you can set Epochs equally lower to match. High LR + low epochs = very fast, low-quality training.
    - If you make LR low, set epochs high. Low LR + high epochs = slow but high-quality training.
    - The scheduler controls change-over-time as you train - it starts high, and then goes low. This helps balance getting data in, and having decent quality, at the same time.
    - You can see graphs of the different scheduler options [in the HuggingFace docs here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_1/en/main_classes/optimizer_schedules#transformers.SchedulerType)

## Loss

When you're running training, the WebUI's console window will log reports that include, among other things, a numeric value named `Loss`. It will start as a high number, and gradually get lower and lower as it goes.

"Loss" in the world of AI training theoretically means "how close is the model to perfect", with `0` meaning "absolutely perfect". This is calculated by measuring the difference between the model outputting exactly the text you're training it to output, and what it actually outputs.

In practice, a good LLM should have a very complex variable range of ideas running in its artificial head, so a loss of `0` would indicate that the model has broken and forgotten how to think about anything other than what you trained it on.

So, in effect, Loss is a balancing game: you want to get it low enough that it understands your data, but high enough that it isn't forgetting everything else. Generally, if it goes below `1.0`, it's going to start forgetting its prior memories, and you should stop training. In some cases you may prefer to take it as low as `0.5` (if you want it to be very, very predictable). Different goals have different needs, so don't be afraid to experiment and see what works best for you.
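For intuition, the reported value is essentially the average next-token cross-entropy over your training text: the model predicts a distribution for each next token, and the loss measures how far that distribution is from the token your data actually contains. A minimal sketch of that calculation (illustrative only - the function and variable names here are placeholders, not the WebUI's internal code):

```python
import torch.nn.functional as F

def causal_lm_loss(logits, input_ids):
    # logits: (batch, seq_len, vocab_size) raw model outputs
    # input_ids: (batch, seq_len) token IDs from the training text
    # Shift by one position so each token is predicted from the tokens before it.
    pred = logits[:, :-1, :].contiguous()
    target = input_ids[:, 1:].contiguous()
    # Cross-entropy between what the model outputs and what you trained it to output.
    return F.cross_entropy(pred.view(-1, pred.size(-1)), target.view(-1))
```

A loss of exactly `0` would mean the model assigns probability 1.0 to every single training token, which is the kind of total memorization the paragraphs above warn against.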
- -Note: if you see Loss start at or suddenly jump to exactly `0`, it is likely something has gone wrong in your training process (eg model corruption). - -## Note: 4-Bit Monkeypatch - -The [4-bit LoRA monkeypatch](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode) works for training, but has side effects: -- VRAM usage is higher currently. You can reduce the `Micro Batch Size` to `1` to compensate. -- Models do funky things. LoRAs apply themselves, or refuse to apply, or spontaneously error out, or etc. It can be helpful to reload base model or restart the WebUI between training/usage to minimize chances of anything going haywire. -- Loading or working with multiple LoRAs at the same time doesn't currently work. -- Generally, recognize and treat the monkeypatch as the dirty temporary hack it is - it works, but isn't very stable. It will get better in time when everything is merged upstream for full official support. - -## Legacy notes - -LoRA training was contributed by [mcmonkey4eva](https://github.com/mcmonkey4eva) in PR [#570](https://github.com/oobabooga/text-generation-webui/pull/570). - -### Using the original alpaca-lora code - -Kept here for reference. The Training tab has much more features than this method. - -``` -conda activate textgen -git clone https://github.com/tloen/alpaca-lora -``` - -Edit those two lines in `alpaca-lora/finetune.py` to use your existing model folder instead of downloading everything from decapoda: - -``` -model = LlamaForCausalLM.from_pretrained( - "models/llama-7b", - load_in_8bit=True, - device_map="auto", -) -tokenizer = LlamaTokenizer.from_pretrained( - "models/llama-7b", add_eos_token=True -) -``` - -Run the script with: - -``` -python finetune.py -``` - -It just works. It runs at 22.32s/it, with 1170 iterations in total, so about 7 hours and a half for training a LoRA. RTX 3090, 18153MiB VRAM used, drawing maximum power (350W, room heater mode). 
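If you trained with this legacy script, you can sanity-check the resulting adapter by loading it back onto the base model with the `peft` library. A rough sketch (assumptions: `lora-alpaca` is the script's output directory and the prompt template is Alpaca-style; adjust both to your setup):

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base = LlamaForCausalLM.from_pretrained(
    "models/llama-7b", load_in_8bit=True, device_map="auto"
)
tokenizer = LlamaTokenizer.from_pretrained("models/llama-7b")

# Attach the trained LoRA weights (the folder containing adapter_model.bin).
model = PeftModel.from_pretrained(base, "lora-alpaca")
model.eval()

prompt = "### Instruction:\nName three primary colors.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```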
diff --git a/spaces/FelixLuoX/stable_diffusion_test/share_btn.py b/spaces/FelixLuoX/stable_diffusion_test/share_btn.py deleted file mode 100644 index 4bf271fe915e78e6df33a9df53b47ad68e620e2e..0000000000000000000000000000000000000000 --- a/spaces/FelixLuoX/stable_diffusion_test/share_btn.py +++ /dev/null @@ -1,60 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - const gradioEl = document.querySelector('body > gradio-app'); - const imgEls = gradioEl.querySelectorAll('#gallery img'); - const promptTxt = gradioEl.querySelector('#prompt-text-input input').value; - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!imgEls.length){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - const files = await Promise.all( - [...imgEls].map(async (imgEl) => { - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const fileName = `diffuse-the-rest-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }) - ); - const urls = await Promise.all(files.map((f) => uploadFile(f))); - const htmlImgs = urls.map(url => ``); - const descriptionMd = `
    -${htmlImgs.join(`\n`)} -
    `; - const params = new URLSearchParams({ - title: promptTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/stabilityai/stable-diffusion/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/Flux9665/IMS-Toucan/Layers/Conformer.py b/spaces/Flux9665/IMS-Toucan/Layers/Conformer.py deleted file mode 100644 index 5ca87bfbf18bcfb84830501dc3d00e3a38916966..0000000000000000000000000000000000000000 --- a/spaces/Flux9665/IMS-Toucan/Layers/Conformer.py +++ /dev/null @@ -1,144 +0,0 @@ -""" -Taken from ESPNet -""" - -import torch -import torch.nn.functional as F - -from Layers.Attention import RelPositionMultiHeadedAttention -from Layers.Convolution import ConvolutionModule -from Layers.EncoderLayer import EncoderLayer -from Layers.LayerNorm import LayerNorm -from Layers.MultiLayeredConv1d import MultiLayeredConv1d -from Layers.MultiSequential import repeat -from Layers.PositionalEncoding import RelPositionalEncoding -from Layers.Swish import Swish - - -class Conformer(torch.nn.Module): - """ - Conformer encoder module. - - Args: - idim (int): Input dimension. - attention_dim (int): Dimension of attention. - attention_heads (int): The number of heads of multi head attention. - linear_units (int): The number of units of position-wise feed forward. - num_blocks (int): The number of decoder blocks. - dropout_rate (float): Dropout rate. - positional_dropout_rate (float): Dropout rate after adding positional encoding. - attention_dropout_rate (float): Dropout rate in attention. - input_layer (Union[str, torch.nn.Module]): Input layer type. - normalize_before (bool): Whether to use layer_norm before the first block. - concat_after (bool): Whether to concat attention layer's input and output. - if True, additional linear will be applied. - i.e. x -> x + linear(concat(x, att(x))) - if False, no additional linear will be applied. i.e. x -> x + att(x) - positionwise_layer_type (str): "linear", "conv1d", or "conv1d-linear". - positionwise_conv_kernel_size (int): Kernel size of positionwise conv1d layer. - macaron_style (bool): Whether to use macaron style for positionwise layer. - pos_enc_layer_type (str): Conformer positional encoding layer type. - selfattention_layer_type (str): Conformer attention layer type. - activation_type (str): Conformer activation function type. - use_cnn_module (bool): Whether to use convolution module. - cnn_module_kernel (int): Kernerl size of convolution module. - padding_idx (int): Padding idx for input_layer=embed. 
- - """ - - def __init__(self, idim, attention_dim=256, attention_heads=4, linear_units=2048, num_blocks=6, dropout_rate=0.1, positional_dropout_rate=0.1, - attention_dropout_rate=0.0, input_layer="conv2d", normalize_before=True, concat_after=False, positionwise_conv_kernel_size=1, - macaron_style=False, use_cnn_module=False, cnn_module_kernel=31, zero_triu=False, utt_embed=None, connect_utt_emb_at_encoder_out=True, - spk_emb_bottleneck_size=128, lang_embs=None): - super(Conformer, self).__init__() - - activation = Swish() - self.conv_subsampling_factor = 1 - - if isinstance(input_layer, torch.nn.Module): - self.embed = input_layer - self.pos_enc = RelPositionalEncoding(attention_dim, positional_dropout_rate) - elif input_layer is None: - self.embed = None - self.pos_enc = torch.nn.Sequential(RelPositionalEncoding(attention_dim, positional_dropout_rate)) - else: - raise ValueError("unknown input_layer: " + input_layer) - - self.normalize_before = normalize_before - - self.connect_utt_emb_at_encoder_out = connect_utt_emb_at_encoder_out - if utt_embed is not None: - self.hs_emb_projection = torch.nn.Linear(attention_dim + spk_emb_bottleneck_size, attention_dim) - # embedding projection derived from https://arxiv.org/pdf/1705.08947.pdf - self.embedding_projection = torch.nn.Sequential(torch.nn.Linear(utt_embed, spk_emb_bottleneck_size), - torch.nn.Softsign()) - if lang_embs is not None: - self.language_embedding = torch.nn.Embedding(num_embeddings=lang_embs, embedding_dim=attention_dim) - - # self-attention module definition - encoder_selfattn_layer = RelPositionMultiHeadedAttention - encoder_selfattn_layer_args = (attention_heads, attention_dim, attention_dropout_rate, zero_triu) - - # feed-forward module definition - positionwise_layer = MultiLayeredConv1d - positionwise_layer_args = (attention_dim, linear_units, positionwise_conv_kernel_size, dropout_rate,) - - # convolution module definition - convolution_layer = ConvolutionModule - convolution_layer_args = (attention_dim, cnn_module_kernel, activation) - - self.encoders = repeat(num_blocks, lambda lnum: EncoderLayer(attention_dim, encoder_selfattn_layer(*encoder_selfattn_layer_args), - positionwise_layer(*positionwise_layer_args), - positionwise_layer(*positionwise_layer_args) if macaron_style else None, - convolution_layer(*convolution_layer_args) if use_cnn_module else None, dropout_rate, - normalize_before, concat_after)) - if self.normalize_before: - self.after_norm = LayerNorm(attention_dim) - - def forward(self, xs, masks, utterance_embedding=None, lang_ids=None): - """ - Encode input sequence. - - Args: - utterance_embedding: embedding containing lots of conditioning signals - step: indicator for when to start updating the embedding function - xs (torch.Tensor): Input tensor (#batch, time, idim). - masks (torch.Tensor): Mask tensor (#batch, time). - - Returns: - torch.Tensor: Output tensor (#batch, time, attention_dim). - torch.Tensor: Mask tensor (#batch, time). 
- - """ - - if self.embed is not None: - xs = self.embed(xs) - - if lang_ids is not None: - lang_embs = self.language_embedding(lang_ids) - xs = xs + lang_embs # offset the phoneme distribution of a language - - if utterance_embedding is not None and not self.connect_utt_emb_at_encoder_out: - xs = self._integrate_with_utt_embed(xs, utterance_embedding) - - xs = self.pos_enc(xs) - - xs, masks = self.encoders(xs, masks) - if isinstance(xs, tuple): - xs = xs[0] - - if self.normalize_before: - xs = self.after_norm(xs) - - if utterance_embedding is not None and self.connect_utt_emb_at_encoder_out: - xs = self._integrate_with_utt_embed(xs, utterance_embedding) - - return xs, masks - - def _integrate_with_utt_embed(self, hs, utt_embeddings): - # project embedding into smaller space - speaker_embeddings_projected = self.embedding_projection(utt_embeddings) - # concat hidden states with spk embeds and then apply projection - speaker_embeddings_expanded = F.normalize(speaker_embeddings_projected).unsqueeze(1).expand(-1, hs.size(1), -1) - hs = self.hs_emb_projection(torch.cat([hs, speaker_embeddings_expanded], dim=-1)) - return hs diff --git a/spaces/FredZhang7/paint-journey-demo/app.py b/spaces/FredZhang7/paint-journey-demo/app.py deleted file mode 100644 index 4ca9c102f3c72cd8ed22edbeb47cf2d15b624fbb..0000000000000000000000000000000000000000 --- a/spaces/FredZhang7/paint-journey-demo/app.py +++ /dev/null @@ -1,83 +0,0 @@ -import os - -user_home = r"/home/user/app" -os.chdir(user_home) -#clone stable-diffusion-webui repo -print("cloning stable-diffusion-webui repo") -os.system("git clone \"https://github.com/AUTOMATIC1111/stable-diffusion-webui.git\" "+user_home+r"/stable-diffusion-webui") -#install extensions -print("installing extensions") -os.system(r"git clone !git clone https://huggingface.co/embed/negative "+user_home+"/stable-diffusion-webui/embeddings/negative") -os.system(r"git clone https://huggingface.co/embed/lora "+user_home+"/stable-diffusion-webui/models/Lora/positive") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/embed/upscale/resolve/main/4x-UltraSharp.pth -d "+user_home+"/stable-diffusion-webui/models/ESRGAN -o 4x-UltraSharp.ptharia2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/embed/upscale/resolve/main/4x-UltraSharp.pth -d "+user_home+"/stable-diffusion-webui/models/ESRGAN -o 4x-UltraSharp.pth") -os.system(r"wget https://raw.githubusercontent.com/camenduru/stable-diffusion-webui-scripts/main/run_n_times.py -O "+user_home+"/stable-diffusion-webui/scripts/run_n_times.py") -os.system(r"git clone https://github.com/AlUlkesh/stable-diffusion-webui-images-browser "+user_home+"/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser") -os.system(r"git clone https://github.com/camenduru/stable-diffusion-webui-huggingface "+user_home+"/stable-diffusion-webui/extensions/stable-diffusion-webui-huggingface") -os.system(r"git clone https://github.com/Mikubill/sd-webui-controlnet "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet") -os.system(r"git clone https://github.com/jexom/sd-webui-depth-lib "+user_home+"/stable-diffusion-webui/extensions/sd-webui-depth-lib") -os.system(r"git clone https://github.com/hnmr293/posex "+user_home+"/stable-diffusion-webui/extensions/posex") -os.system(r"git clone https://github.com/nonnonstop/sd-webui-3d-open-pose-editor "+user_home+"/stable-diffusion-webui/extensions/sd-webui-3d-open-pose-editor") -os.system(r"git clone 
https://github.com/imrayya/stable-diffusion-webui-Prompt_Generator "+user_home+"/stable-diffusion-webui/extensions/sd-webui-Prompt_Generator") -os.chdir(os.path.join(user_home,r"stable-diffusion-webui")) -os.system(r"git reset --hard") -os.system(r"git -C "+user_home+"/stable-diffusion-webui/repositories/stable-diffusion-stability-ai reset --hard") - -#download ControlNet models -print("downloading ControlNet models") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_ip2p_fp16.safetensors -d "+user_home+""+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11e_sd15_ip2p_fp16.safetensors") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_shuffle_fp16.safetensors -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11e_sd15_shuffle_fp16.safetensors") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny_fp16.safetensors -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15_canny_fp16.safetensors") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1p_sd15_depth_fp16.safetensors -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11f1p_sd15_depth_fp16.safetensors") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15_inpaint_fp16.safetensors") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_lineart_fp16.safetensors -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15_lineart_fp16.safetensors") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_mlsd_fp16.safetensors -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15_mlsd_fp16.safetensors") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_normalbae_fp16.safetensors -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15_normalbae_fp16.safetensors") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_openpose_fp16.safetensors -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15_openpose_fp16.safetensors") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_scribble_fp16.safetensors -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15_scribble_fp16.safetensors") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_seg_fp16.safetensors -d 
"+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15_seg_fp16.safetensors") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_softedge_fp16.safetensors -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15_softedge_fp16.safetensors") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15s2_lineart_anime_fp16.safetensors") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1e_sd15_tile_fp16.safetensors -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11f1e_sd15_tile_fp16.safetensors") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_ip2p_fp16.yaml -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11e_sd15_ip2p_fp16.yaml") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_shuffle_fp16.yaml -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11e_sd15_shuffle_fp16.yaml") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_canny_fp16.yaml -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15_canny_fp16.yaml") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1p_sd15_depth_fp16.yaml -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11f1p_sd15_depth_fp16.yaml") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_inpaint_fp16.yaml -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15_inpaint_fp16.yaml") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_lineart_fp16.yaml -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15_lineart_fp16.yaml") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_mlsd_fp16.yaml -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15_mlsd_fp16.yaml") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_normalbae_fp16.yaml -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15_normalbae_fp16.yaml") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_openpose_fp16.yaml -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15_openpose_fp16.yaml") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M 
https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_scribble_fp16.yaml -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15_scribble_fp16.yaml") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_seg_fp16.yaml -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15_seg_fp16.yaml") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_softedge_fp16.yaml -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15_softedge_fp16.yaml") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15s2_lineart_anime_fp16.yaml -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11p_sd15s2_lineart_anime_fp16.yaml") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1e_sd15_tile_fp16.yaml -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o control_v11f1e_sd15_tile_fp16.yaml") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_style_sd14v1.pth -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o t2iadapter_style_sd14v1.pth") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd14v1.pth -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o t2iadapter_sketch_sd14v1.pth") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_seg_sd14v1.pth -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o t2iadapter_seg_sd14v1.pth") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_openpose_sd14v1.pth -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o t2iadapter_openpose_sd14v1.pth") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_keypose_sd14v1.pth -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o t2iadapter_keypose_sd14v1.pth") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd14v1.pth -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o t2iadapter_depth_sd14v1.pth") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd14v1.pth -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o t2iadapter_depth_sd14v1.pth") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd14v1.pth -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o t2iadapter_canny_sd14v1.pth") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M 
https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd15v2.pth -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o t2iadapter_canny_sd15v2.pth") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd15v2.pth -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o t2iadapter_depth_sd15v2.pth") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd15v2.pth -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o t2iadapter_sketch_sd15v2.pth") -os.system(r"aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_zoedepth_sd15v1.pth -d "+user_home+"/stable-diffusion-webui/extensions/sd-webui-controlnet/models -o t2iadapter_zoedepth_sd15v1.pth") - -try: - os.mkdir(os.path.join(user_home,"stable-diffusion-webui/models/Stable-diffusion")) -except(FileExistsError): - print("exist") - -print("downloading model") -os.system(f"wget -q https://huggingface.co/FredZhang7/paint-journey-v2/resolve/main/paint_journey_v2_cpu_only.ckpt -O {user_home}/stable-diffusion-webui/models/Stable-diffusion/paint_journey_v2_cpu_only.ckpt") -os.system(f"wget -q https://huggingface.co/FredZhang7/paint-journey-v2/resolve/main/paint_journey_v2.vae.pt -O {user_home}/stable-diffusion-webui/models/Stable-diffusion/paint_journey_v2_cpu_only.vae.pt") - - -#strt webui -print("Done\nStarting Webui...") -os.system(r"python3 launch.py --precision full --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --no-half --no-half-vae --enable-insecure-extension-access --medvram --skip-torch-cuda-test ") - - -del os ,user_home \ No newline at end of file diff --git a/spaces/GIZ/SDSN-demo/style.css b/spaces/GIZ/SDSN-demo/style.css deleted file mode 100644 index cce4715afa124c31f1e3a28423fc2adbaa9a3a8d..0000000000000000000000000000000000000000 --- a/spaces/GIZ/SDSN-demo/style.css +++ /dev/null @@ -1,180 +0,0 @@ - -.row-widget.stTextInput > div:first-of-type { - background: #fff; - display: flex; - border: 1px solid #dfe1e5; - box-shadow: none; - border-radius: 24px; - height: 50px; - width: auto; - margin: 10px auto 30px; -} - -.row-widget.stTextInput > div:first-of-type:hover, -.row-widget.stTextInput > div:first-of-type:focus { - box-shadow: 1px 1px 2px 1px rgba(0, 0, 0, 0.2); -} - -.row-widget.stTextInput .st-bq { - background-color: #fff; -} - -.row-widget.stTextInput > label { - color: #b3b3b3; -} - -.row-widget.stButton > button { - border-radius: 24px; - background-color: #B6C9B1; - color: #fff; - border: none; - padding: 6px 20px; - float: right; - background-image: none; -} - -.row-widget.stButton > button:hover { - box-shadow: 1px 1px 2px 1px rgba(0, 0, 0, 0.2); -} - -.row-widget.stButton > button:focus { - border: none; - color: #fff; -} - -.footer-custom { - position: fixed; - bottom: 0; - width: 100%; - color: var(--text-color); - max-width: 698px; - font-size: 14px; - height: 50px; - padding: 10px 0; - z-index: 50; -} - -.main { - padding: 20px; -} - -footer { - display: none !important; -} - -.footer-custom a { - color: var(--text-color); -} - -#wikipedia-assistant { - font-size: 36px; -} - -.generated-answer p { - font-size: 16px; - font-weight: bold; -} - -.react-json-view { - margin: 40px 0 80px; -} - -.tooltip { - text-align: center; - 
line-height: 20px; - display: table-caption; - font-size: 10px; - border-radius: 50%; - height: 20px; - width: 20px; - position: relative; - cursor: pointer; - color:#000; -} - -.tooltip .tooltiptext { - visibility: hidden; - width: 280px; - text-align: center; - border-radius: 6px; - padding: 10px; - position: absolute; - z-index: 1; - top: 25px; - left: 50%; - margin-left: -140px; - font-size: 14px; - background-color: #fff; - border: 1px solid #ccc; - box-shadow: 0px 0px 3px 1px rgba(0, 0, 0, 0.16); - color: #000; -} - -.tooltip:hover .tooltiptext { - visibility: visible; -} - -.sentence-wrapper { - border-left: 4px solid #ffc423; - padding-left: 20px; - margin-bottom: 40px; -} - -#context { - padding: 2rem 0 1rem; -} - -hr { - margin: 2em 0 1em; -} - - -.technical-details-info { - margin-bottom: 100px; -} - -.loader-wrapper { - display: flex; - align-items: center; - background-color: rgba(250, 202, 43, 0.2); - padding: 15px 20px; - border-radius: 6px; -} - -.loader-wrapper p { - margin-bottom: 0; - margin-left: 20px; -} - -.loader { - width: 30px; - height: 30px; - border: dotted 5px #868686; - border-radius: 100%; - animation: spin 1s linear infinite; -} - -.loader-note { - font-size: 14px; - color: #b3b3b3; - margin-left: 5px; -} - -@keyframes spin { - 0% { - transform: rotate(0deg) scale(0.8); - border-top-color: transparent; - border-right-color: transparent; - } - 50% { transform: rotate(180deg) scale(1.2); - border-color: #949494; - border-top-color: transparent; - border-right-color: transparent; - } - 100% { transform: rotate(360deg) scale(0.8); - border-color: #bbbbbb; - border-top-color: transparent; - border-right-color: transparent; - } -} - diff --git a/spaces/Godrose0728/sound-link/text/__init__.py b/spaces/Godrose0728/sound-link/text/__init__.py deleted file mode 100644 index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000 --- a/spaces/Godrose0728/sound-link/text/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/Gradio-Blocks/beat-interpolator/examples/models/mnist/model.py b/spaces/Gradio-Blocks/beat-interpolator/examples/models/mnist/model.py deleted file mode 100644 index f3d650720c5d5533609779c52a6f645ffbbc66d2..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/beat-interpolator/examples/models/mnist/model.py +++ /dev/null @@ -1,69 +0,0 @@ -import os -import numpy as np -import torch -import torch.nn as nn - - -class Generator(nn.Module): - '''Refer to https://github.com/safwankdb/Vanilla-GAN''' - def __init__(self): - super(Generator, self).__init__() - self.n_features = 128 - self.n_out = 784 - self.fc0 = nn.Sequential( - nn.Linear(self.n_features, 256), - nn.LeakyReLU(0.2) - ) - self.fc1 = nn.Sequential( - nn.Linear(256, 512), - nn.LeakyReLU(0.2) - ) - self.fc2 = nn.Sequential( - nn.Linear(512, 784), - nn.Tanh() - ) - def forward(self, x): - x = self.fc0(x) - x = self.fc1(x) - x = self.fc2(x) - x = x.view(-1, 1, 28, 28) - return x - - -def create_mnist_inference(): - device = 'cuda' if torch.cuda.is_available() else 'cpu' - mnist = Generator() - state = torch.load( - os.path.join( - os.path.dirname(__file__), - 'mnist_generator.pretrained' - ), - map_location='cpu' - ) - mnist.load_state_dict(state) - mnist.to(device) - mnist.eval() - - @torch.inference_mode() - def mnist_generator(latents): - latents = [torch.from_numpy(latent).float().to(device) for latent in latents] - latents = torch.stack(latents) - out = mnist(latents) - outs = [] - for out_i in out: - out_i = ((out_i[0] + 1) * 127.5).clamp(0,255).cpu().numpy() - out_i = np.uint8(out_i) - out_i = np.stack([out_i]*3, -1) - outs.append(out_i) - return outs - - return { - 'name': 'MNIST', - 'generator': mnist_generator, - 'latent_dim': 128, - 'fps': 20, - 'batch_size': 8, - 'strength': 0.75, - 'max_duration': 30, - 'use_peak': True - } diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/model_cards/MUSICGEN_MODEL_CARD.md b/spaces/GrandaddyShmax/AudioCraft_Plus/model_cards/MUSICGEN_MODEL_CARD.md deleted file mode 100644 index 10ba9f9790841be06cd3e459cf667c1af6291343..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/model_cards/MUSICGEN_MODEL_CARD.md +++ /dev/null @@ -1,90 +0,0 @@ -# MusicGen Model Card - -## Model details - -**Organization developing the model:** The FAIR team of Meta AI. - -**Model date:** MusicGen was trained between April 2023 and May 2023. - -**Model version:** This is the version 1 of the model. - -**Model type:** MusicGen consists of an EnCodec model for audio tokenization, an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters ; and two variants: a model trained for text-to-music generation task and a model trained for melody-guided music generation. 
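For a concrete sense of how these variants are driven in practice, here is a minimal text-to-music sketch with the `audiocraft` Python API (the checkpoint name, duration, and prompt below are illustrative choices):

```python
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load one of the released checkpoints (small / medium / large / melody).
model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)  # seconds of audio to generate

# The language model predicts EnCodec tokens, which are decoded back to a waveform.
wavs = model.generate(["lo-fi hip hop beat with warm piano"])
for i, wav in enumerate(wavs):
    audio_write(f"musicgen_sample_{i}", wav.cpu(), model.sample_rate, strategy="loudness")
```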
- -**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation][arxiv]. - -**Citation details:** See [our paper][arxiv] - -**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0. - -**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [GitHub repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue. - -## Intended use -**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including: - -- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science -- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs - -**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateur seeking to better understand those models. - -**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. - -## Metrics - -**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark: - -- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish) -- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST) -- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model - -Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes: - -- Overall quality of the music samples; -- Text relevance to the provided text input; -- Adherence to the melody for melody-guided music generation. - -More details on performance measures and human studies can be found in the paper. - -**Decision thresholds:** Not applicable. - -## Evaluation datasets - -The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set. - -## Training datasets - -The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing. - -## Evaluation results - -Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper. 
- -| Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity | -|---|---|---|---|---| -| facebook/musicgen-small | 4.88 | 1.28 | 0.27 | - | -| facebook/musicgen-medium | 5.14 | 1.24 | 0.28 | - | -| facebook/musicgen-large | 5.48 | 1.22 | 0.28 | - | -| facebook/musicgen-melody | 4.93 | 1.26 | 0.27 | 0.44 | - -More information can be found in the paper [Simple and Controllable Music Generation][arxiv], in the Results section. - -## Limitations and biases - -**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data, we believe that scaling the model on larger datasets can further improve the performance of the model. - -**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs). - -**Limitations:** - -- The model is not able to generate realistic vocals. -- The model has been trained with English descriptions and will not perform as well in other languages. -- The model does not perform equally well for all music styles and cultures. -- The model sometimes generates end of songs, collapsing to silence. -- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results. - -**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive. - -**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow to broaden the application to new and more representative data. - -**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks. - -[arxiv]: https://arxiv.org/abs/2306.05284 diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/tests/common_utils/__init__.py b/spaces/GrandaddyShmax/AudioCraft_Plus/tests/common_utils/__init__.py deleted file mode 100644 index 74ffcfef96fec35c99b2a1a053a61f44f7a8bbe9..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/tests/common_utils/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -# flake8: noqa -from .temp_utils import TempDirMixin -from .wav_utils import get_batch_white_noise, get_white_noise, save_wav diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/tests/models/test_encodec_model.py b/spaces/GrandaddyShmax/AudioCraft_Plus/tests/models/test_encodec_model.py deleted file mode 100644 index 2f9c1db3f69a45f02451b71da95f44356811acbb..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/tests/models/test_encodec_model.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import random - -import numpy as np -import torch - -from audiocraft.models import EncodecModel -from audiocraft.modules import SEANetEncoder, SEANetDecoder -from audiocraft.quantization import DummyQuantizer - - -class TestEncodecModel: - - def _create_encodec_model(self, - sample_rate: int, - channels: int, - dim: int = 5, - n_filters: int = 3, - n_residual_layers: int = 1, - ratios: list = [5, 4, 3, 2], - **kwargs): - frame_rate = np.prod(ratios) - encoder = SEANetEncoder(channels=channels, dimension=dim, n_filters=n_filters, - n_residual_layers=n_residual_layers, ratios=ratios) - decoder = SEANetDecoder(channels=channels, dimension=dim, n_filters=n_filters, - n_residual_layers=n_residual_layers, ratios=ratios) - quantizer = DummyQuantizer() - model = EncodecModel(encoder, decoder, quantizer, frame_rate=frame_rate, - sample_rate=sample_rate, channels=channels, **kwargs) - return model - - def test_model(self): - random.seed(1234) - sample_rate = 24_000 - channels = 1 - model = self._create_encodec_model(sample_rate, channels) - for _ in range(10): - length = random.randrange(1, 10_000) - x = torch.randn(2, channels, length) - res = model(x) - assert res.x.shape == x.shape - - def test_model_renorm(self): - random.seed(1234) - sample_rate = 24_000 - channels = 1 - model_nonorm = self._create_encodec_model(sample_rate, channels, renormalize=False) - model_renorm = self._create_encodec_model(sample_rate, channels, renormalize=True) - - for _ in range(10): - length = random.randrange(1, 10_000) - x = torch.randn(2, channels, length) - codes, scales = model_nonorm.encode(x) - codes, scales = model_renorm.encode(x) - assert scales is not None diff --git a/spaces/Grezz/generate_human_motion/pyrender/pyrender/version.py b/spaces/Grezz/generate_human_motion/pyrender/pyrender/version.py deleted file mode 100644 index a33fc87f61f528780e3319a5160769cc84512b1b..0000000000000000000000000000000000000000 --- a/spaces/Grezz/generate_human_motion/pyrender/pyrender/version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = '0.1.45' diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/README.md b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/README.md deleted file mode 100644 index 02892bc9dd4344e550596d238e2b71870cfc7dd3..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/README.md +++ /dev/null @@ -1,220 +0,0 @@ -# vakyansh-tts -Text to Speech for Indic languages - -## 1. 
Installation and Setup for training - -Clone repo -Note : for multspeaker glow-tts training use branch [multispeaker](https://github.com/Open-Speech-EkStep/vakyansh-tts/tree/multispeaker) -``` -git clone https://github.com/Open-Speech-EkStep/vakyansh-tts -``` -Build conda virtual environment -``` -cd ./vakyansh-tts -conda create --name python=3.7 -conda activate -pip install -r requirements.txt -``` -Install [apex](https://github.com/NVIDIA/apex); commit: 37cdaf4 for Mixed-precision training - -Note : used only for glow-tts -``` -cd .. -git clone https://github.com/NVIDIA/apex -cd apex -git checkout 37cdaf4 -pip install -v --disable-pip-version-check --no-cache-dir ./ -cd ../vakyansh-tts -``` -Build Monotonic Alignment Search Code (Cython) - -Note : used only for glow-tts -``` -bash install.sh -``` - -## 2. Data Resampling - -The data format should have a folder containing all the .wav files for glow-tts and a text file containing filenames with their sentences. - -Directory structure: - -langauge_folder_name -``` -language_folder_name -|-- ./wav/*.wav -|-- ./text_file_name.txt -``` -The format for text_file_name.txt (Text file is only needed for glow-tts training) - -``` -( audio1.wav "Sentence1." ) -( audio2.wav "Sentence2." ) -``` - -To resample the .wav files to 22050 sample rate, change the following parameters in the vakyansh-tts/scripts/data/resample.sh - -``` -input_wav_path : absolute path to wav file folder in vakyansh_tts/data/ -output_wav_path : absolute path to vakyansh_tts/data/resampled_wav_folder_name -output_sample_rate : 22050 (or any other desired sample rate) -``` - -To run: -```bash -cd scripts/data/ -bash resample.sh -``` - - -## 3. Spectogram Training (glow-tts) - -### 3.1 Data Preparation - - -To prepare the data edit the vakyansh-tts/scripts/glow/prepare_data.sh file and change the following parameters -``` -input_text_path : absolute path to vakyansh_tts/data/text_file_name.txt -input_wav_path : absolute path to vakyansh_tts/data/resampled_wav_folder_name -gender : female or male voice -``` -To run: -```bash -cd scripts/glow/ -bash prepare_data.sh -``` -### 3.2 Training glow-tts - -To start the spectogram-training edit the vakyansh-tts/scripts/glow/train_glow.sh file and change the following parameter: -``` -gender : female or male voice -``` -Make sure that the gender is same as that of the prepare_data.sh file - -To start the training, run: -```bash -cd scripts/glow/ -bash train_glow.sh -``` -## 4. Vocoder Training (hifi-gan) - -### 4.1 Data Preparation - -To prepare the data edit the vakyansh-tts/scripts/hifi/prepare_data.sh file and change the following parameters -``` -input_wav_path : absolute path to vakyansh_tts/data/resampled_wav_folder_name -gender : female or male voice -``` -To run: -```bash -cd scripts/hifi/ -bash prepare_data.sh -``` -### 4.2 Training hifi-gan - -To start the spectogram-training edit the vakyansh-tts/scripts/hifi/train_hifi.sh file and change the following parameter: -``` -gender : female or male voice -``` -Make sure that the gender is same as that of the prepare_data.sh file - -To start the training, run: -```bash -cd scripts/hifi/ -bash train_hifi.sh -``` - -## 5. 
Inference - -### 5.1 Using Gradio - -To use the gradio link edit the following parameters in the vakyansh-tts/scripts/inference/gradio.sh file: -``` -gender : female or male voice -device : cpu or cuda -lang : langauge code -``` - -To run: -```bash -cd scripts/inference/ -bash gradio.sh -``` -### 5.2 Using fast API -To use the fast api link edit the parameters in the vakyansh-tts/scripts/inference/api.sh file similar to section 5.1 - -To run: -```bash -cd scripts/inference/ -bash api.sh -``` - -### 5.3 Direct Inference using text -To infer, edit the parameters in the vakyansh-tts/scripts/inference/infer.sh file similar to section 5.1 and set the text to the text variable - -To run: -```bash -cd scripts/inference/ -bash infer.sh -``` - -To configure other parameters there is a version that runs the advanced inference as well. Additional Parameters: -``` -noise_scale : can vary from 0 to 1 for noise factor -length_scale : can vary from 0 to 2 for changing the speed of the generated audio -transliteration : whether to switch on/off transliteration. 1: ON, 0: OFF -number_conversion : whether to switch on/off number to words conversion. 1: ON, 0: OFF -split_sentences : whether to switch on/off splitting of sentences. 1: ON, 0: OFF -``` -To run: -``` -cd scripts/inference/ -bash advanced_infer.sh -``` - -### 5.4 Installation of tts_infer package - -In tts_infer package, we currently have two components: - - 1. Transliteration (AI4bharat's open sourced models) (Languages supported: {'hi', 'gu', 'mr', 'bn', 'te', 'ta', 'kn', 'pa', 'gom', 'mai', 'ml', 'sd', 'si', 'ur'} ) - - 2. Num to Word (Languages supported: {'en', 'hi', 'gu', 'mr', 'bn', 'te', 'ta', 'kn', 'or', 'pa'} ) -``` -git clone https://github.com/Open-Speech-EkStep/vakyansh-tts -cd vakyansh-tts -bash install.sh -python setup.py bdist_wheel -pip install -e . -cd tts_infer -gsutil -m cp -r gs://vakyaansh-open-models/translit_models . -``` - -Usage: Refer to example file in tts_infer/ -``` -from tts_infer.tts import TextToMel, MelToWav -from tts_infer.transliterate import XlitEngine -from tts_infer.num_to_word_on_sent import normalize_nums - -import re -from scipy.io.wavfile import write - -text_to_mel = TextToMel(glow_model_dir='/path/to/glow-tts/checkpoint/dir', device='cuda') -mel_to_wav = MelToWav(hifi_model_dir='/path/to/hifi/checkpoint/dir', device='cuda') - -def translit(text, lang): - reg = re.compile(r'[a-zA-Z]') - engine = XlitEngine(lang) - words = [engine.translit_word(word, topk=1)[lang][0] if reg.match(word) else word for word in text.split()] - updated_sent = ' '.join(words) - return updated_sent - -def run_tts(text, lang): - text = text.replace('।', '.') # only for hindi models - text_num_to_word = normalize_nums(text, lang) # converting numbers to words in lang - text_num_to_word_and_transliterated = translit(text_num_to_word, lang) # transliterating english words to lang - - mel = text_to_mel.generate_mel(text_num_to_word_and_transliterated) - audio, sr = mel_to_wav.generate_wav(mel) - write(filename='temp.wav', rate=sr, data=audio) # for saving wav file, if needed - return (sr, audio) -``` diff --git a/spaces/Hina4867/bingo/README.md b/spaces/Hina4867/bingo/README.md deleted file mode 100644 index 0db8128b259b3b8df0437188b25589a1e0804259..0000000000000000000000000000000000000000 --- a/spaces/Hina4867/bingo/README.md +++ /dev/null @@ -1,194 +0,0 @@ ---- -title: bingo -emoji: 📉 -colorFrom: red -colorTo: red -sdk: docker -pinned: true -license: mit -duplicated_from: hf4all/bingo ---- - -
    - -# Bingo - -Bingo,一个让你呼吸顺畅 New Bing。 - -高度还原 New Bing 网页版的主要操作,国内可用,兼容绝大多数微软 Bing AI 的功能,可自行部署使用。 - -![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars) -![Gthub issues](https://img.shields.io/github/issues/weaigc/bingo) -[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license) - -
    - -## 演示站点 - -https://bing.github1s.tk - - - -[![img](./docs/images/demo.png)](https://bing.github1s.tk) - -## 功能和特点 - -- 完全基于 Next.js 重写,高度还原 New Bing Web 版 UI,使用体验和 Bing AI 基本一致。 -- 支持 Docker 构建,方便快捷地部署和访问。 -- Cookie 可全局配置,全局共享。 -- 支持持续语音对话 - -## RoadMap - - - [x] 支持 wss 转发 - - [x] 支持一键部署 - - [x] 优化移动端展示 - - [x] 支持画图 - - [x] 支持语音输入(支持语音指令,目前仅支持 PC 版 Edge 及 Chrome 浏览器) - - [x] 支持语音输出(需要手动开启) - - [x] 支持图片输入 - - [x] 支持自定义域名 - - [ ] 支持历史记录 - - [ ] 适配深色模式 - - [ ] 支持内置提示词 - - [ ] 支持离线访问 - - [ ] 国际化翻译 - -## 一键部署 -你也可以一键部署自己的 New Bing AI 到 🤗 HuggingFace 。 - -### 部署到 Huggingface -1. 点击此图标 -[![Deploy to HuggingFace](https://img.shields.io/badge/%E7%82%B9%E5%87%BB%E9%83%A8%E7%BD%B2-%F0%9F%A4%97-fff)](https://huggingface.co/login?next=%2Fspaces%2Fhf4all%2Fbingo%3Fduplicate%3Dtrue%26visibility%3Dpublic),配置可以不改。 - -2. 部署署完成后,点击“设置” 》“站点域名”,点一下,复制一下 HF 域名信息,然后分享给别人即可。 - -> Huggingface 不支持绑定自己的域名,不过我们可以使用曲线救国的方式来达到这个目的 -> 1. 方式二,借助 Cloudflare Workers [部署Cloudflare Workers](#使用Cloudflare-Workers自定义域名) -> 2. 方式一,借助 Github Pages 及 iframe [如何绑定域名](https://github.com/weaigc/bingo/issues/4) - -### 使用Cloudflare Workers自定义域名 - -> 核心代码 [worker.js](./cloudflare/worker.js) - -- [注册 Cloudflare 账号](https://dash.cloudflare.com/sign-up) - -- 添加一个新的网站,需要你有自己的域名并且将域名`Name Server`托管给 Cloudflare 才行(更多信息可自行 Google) - -- 通过左侧菜单进入「Workers」,并点击「Create a Worker」。 - -- 创建 Worker 服务,复制 [worker.js](./cloudflare/worker.js) 全部代码,粘贴至创建的服务中,根据注释进行改动,保存并部署。 - -- 触发器 中自定义访问域名。 - -### 部署其它平台 -
    - -由于其他平台目前遭到 New Bing 封杀,会遇到很多问题,不再做推荐,有需要的可以自行查看 - - -#### 部署到 Netlify -[![Deploy to Netlify Button](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo) - -#### 部署到 Vercel -如果你是 Vercel 付费用户,可以点以下链接一键部署到 Vercel。免费版本有[接口超时限制](https://vercel.com/docs/concepts/limits/overview),不推荐使用 - -[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example) - -#### 部署到 Render - -[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/weaigc/bingo) -
    - -## 环境和依赖 - -- Node.js >= 18 -- Bing AI 的[身份信息](#如何获取-BING_HEADER)) - -## 安装和使用 - -* 使用 Node 启动 - -```bash -git clone https://github.com/weaigc/bingo.git -npm i # 推荐使用 pnpm i -npm run build -npm run start -``` - -* 使用 Docker 启动 -```bash -docker run --rm -it -p 7860:7860 weaigc/bingo -# 或者 -docker run --rm -it -e BING_HEADER=xxxx -p 7860:7860 weaigc/bingo -``` - -## 如何获取 BING_HEADER -> 配置了 BING_HEADER 意味着你将自己的账号共享给所有使用此服务的人,如果不需要免登录画图的功能,不建议设置此变量 - -打开 https://www.bing.com 并登录,然后访问 https://www.bing.com/turing/captcha/challenge,通过人机校验,然后 - -![BING HEADER](./docs/images/curl.png) - -> 复制出来的内容应该如下所示。确认格式无误后,打开 https://effulgent-bubblegum-e2f5df.netlify.app/#dialog=%22settings%22 ,粘贴进去,点击“转成 BING_HEADER 并复制”,然后从剪切板粘贴即可得到。(你也可以先在网页上进行验证) - -以下是格式参考,需要注意的是,网页端保存的格式是以`curl`开头, 而服务端配置的 `BING_HEADER` 是 `base64` 格式,两者不能互通。 -
    -正常格式/网页端保存的格式(格式仅供参考) - -``` -curl 'https://www.bing.com/turing/captcha/challenge' \ - -H 'authority: www.bing.com' \ - -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \ - -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \ - -H 'cache-control: max-age=0' \ - -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; 
USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \ - -H 'dnt: 1' \ - -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \ - -H 'sec-ch-ua-arch: "x86"' \ - -H 'sec-ch-ua-bitness: "64"' \ - -H 'sec-ch-ua-full-version: "116.0.1938.29"' \ - -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \ - -H 'sec-ch-ua-mobile: ?0' \ - -H 'sec-ch-ua-model: ""' \ - -H 'sec-ch-ua-platform: "Windows"' \ - -H 'sec-ch-ua-platform-version: "15.0.0"' \ - -H 'sec-fetch-dest: document' \ - -H 'sec-fetch-mode: navigate' \ - -H 'sec-fetch-site: none' \ - -H 'sec-fetch-user: ?1' \ - -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \ - -H 'sec-ms-gec-version: 1-116.0.1938.29' \ - -H 'upgrade-insecure-requests: 1' \ - -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \ - -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \ - -H 'x-edge-shopping-flag: 1' \ - --compressed -``` -
    - -
    -转成base64之后的格式(BING_HEADER只能使用 base64 之后的格式) - -``` -Y3VybCAnaHR0cHM6Ly93d3cuYmluZy5jb20vdHVyaW5nL2NvbnZlcnNhdGlvbi9jcmVhdGUnIFwgICAtSCAnYXV0aG9yaXR5OiB3d3cuYmluZy5jb20nIFwgICAtSCAnYWNjZXB0OiB0ZXh0L2h0bWwsYXBwbGljYXRpb24veGh0bWwreG1sLGFwcGxpY2F0aW9uL3htbDtxPTAuOSxpbWFnZS93ZWJwLGltYWdlL2FwbmcsKi8qO3E9MC44LGFwcGxpY2F0aW9uL3NpZ25lZC1leGNoYW5nZTt2PWIzO3E9MC43JyBcICAgLUggJ2FjY2VwdC1sYW5ndWFnZTogemgtQ04semg7cT0wLjksZW47cT0wLjgsZW4tR0I7cT0wLjcsZW4tVVM7cT0wLjYnIFwgICAtSCAnY2FjaGUtY29udHJvbDogbWF4LWFnZT0wJyBcICAgLUggJ2Nvb2tpZTogTWljcm9zb2Z0QXBwbGljYXRpb25zVGVsZW1ldHJ5RGV2aWNlSWQ9MzM5OWMwMDQtZmQwZS00OGVjLWJiOTItZDgyYTI3YjJiYmQ0OyBfRURHRV9WPTE7IFNSQ0hEPUFGPU5PRk9STTsgU1JDSFVJRD1WPTImR1VJRD0yOUVCRERBNEU2Njc0MzI5QUNDRjFBMEE0MjNDM0U5OCZkbW5jaGc9MTsgX1VSPVFTPTAmVFFTPTA7IF9IUFZOPUNTPWV5SlFiaUk2ZXlKRGJpSTZNU3dpVTNRaU9qQXNJbEZ6SWpvd0xDSlFjbTlrSWpvaVVDSjlMQ0pUWXlJNmV5SkRiaUk2TVN3aVUzUWlPakFzSWxGeklqb3dMQ0pRY205a0lqb2lTQ0o5TENKUmVpSTZleUpEYmlJNk1Td2lVM1FpT2pBc0lsRnpJam93TENKUWNtOWtJam9pVkNKOUxDSkJjQ0k2ZEhKMVpTd2lUWFYwWlNJNmRISjFaU3dpVEdGa0lqb2lNakF5TXkwd055MHlOVlF3TURvd01Eb3dNRm9pTENKSmIzUmtJam93TENKSGQySWlPakFzSWtSbWRDSTZiblZzYkN3aVRYWnpJam93TENKR2JIUWlPakFzSWtsdGNDSTZNbjA9OyBfUndCZj1pbHQ9MSZpaHBkPTEmaXNwZD0wJnJjPTAmcmI9MCZnYj0wJnJnPTIwMCZwYz0wJm10dT0wJnJiYj0wJmc9MCZjaWQ9JmNsbz0wJnY9MSZsPTIwMjMtMDctMjVUMDc6MDA6MDAuMDAwMDAwMFombGZ0PTAwMDEtMDEtMDFUMDA6MDA6MDAuMDAwMDAwMCZhb2Y9MCZvPTImcD0mYz0mdD0wJnM9MDAwMS0wMS0wMVQwMDowMDowMC4wMDAwMDAwKzAwOjAwJnRzPTIwMjMtMDctMjVUMTE6MDA6MzEuNzExMTU0OCswMDowMCZyd3JlZD0wJndscz0mbGthPTAmbGt0PTAmVEg9JmRjaT0wOyBBTk9OPUE9MDA0M0M2NTkwRUE4MDhFRDZFMzk1MDU5RkZGRkZGRkYmRT0xYzhiJlc9MTsgTkFQPVY9MS45JkU9MWMzMSZDPURuYU1TYkROXzRlZlpfeFhxQkYzRGFvcmpyNTNrWXFZb2FQOFlIc3Vwam1pWG55c1g3YTM3QSZXPTE7IFBQTFN0YXRlPTE7IEtpZXZSUFNTZWNBdXRoPUZBQlNCQlJhVE9KSUx0RnNNa3BMVldTRzZBTjZDL3N2UndObUFBQUVnQUFBQ01HVUE3RUdWU2pHRUFRQkdIdE5zYzVzTkw3dW5tSnNmUEoydDZpbWZvNEJlVUpsQWlhM0lwTVR0TVV5NFBVL0M1UUF6Ukk1cE9EdHNJZWUwK2JsZ2xsWHQvNUlpV3dHandtZGhpdnNGTTU5N3BSUGtqQVJQZndzUGhOTFBOYkpyQ1BOUEhkamU0SXM3OE1uQ0FEWHc2L05CcTJGTDhWMi9ieXcyZkg2SXVBTUQyTXZOL1Z2cXBFYTlaeGlEalp0RU5qNEhFajBtTzJTZ3pqZnlFaFZBa2p2em5KcVUycncvUTJ0SG1YOTROQU0ya3psektGL2hXUGhDQ1VtdThJSEx2Q25IRFM2bVNwdHZKRERQL3NwM292dHpPWGtQMW1sTS9YanU1ZnRlc1V2Y2NWRVFHZmZYT1JhMWRFNWhFTWJLSWlLWHoxdERkZHVTWEUxOWc5LyttUk1BamFRaHB3aEk4WG1pbENUeDFhZGIxTGw1cUsrVmpDOUdOZkVaemNic0dCUFZhT2wrYW5HOHJFTXErWG5oam83SitOcVROb2xhdkhnY3VWOGtKc0NlSlpJZ2VkMzNVQThlT1plRm8rd0FFQ01ndXhNb1NxZ3BHSCtzdGhxeW52RC9GSkQ2ci90aVUyTjN1cVZxOE5FOFYzN2Fzck42VDE0WjBGR0JKT2U2RVQxK1BHQXBtM3MxMU9ZOS94aEZFQjlUNUJFUFVHRWJ2UmNMY1cybmNGUVgwRVUreHdlaVBxbzFRMWhOVWcvZEN0U0krbFo3YzJIOFhoZWVQWmF2WjBUSlE4b05DU0F1S2lUcUptSTBmVkdwd2JYd2ZhQURrRWlwdWF3ejNmSXVNSkJOZ01VME90QTdIbTU5djJmR0xJQnV2aTZZZUtTNkdnVmszQklQZitQL2VLYWh3b3pyeFFaYUZub0hUU3FNa3ZjdDd4Q1A0YXRCUk9mWEtmNVd3MENjRktwKzJXWDlCSXNrVE9vMmpqazZiQXl5WUorRWxVQjFmZ0xLTms1bS9ZU01jOWlZQ0xJQk1JR044RjBZdnkzdFo3Y3ZoN1VlNUtsbzk4VVMvSStuVzFHN1pKTUhSZ1VPOGg4bHBuZUhxRU1lZ0tkOGd5bk80VkY3UnBDakprdW5EbVcwVGErUmtYQVA2MTlwZzBkcUhNRmtvT2drbk43OG9CYkdUVjZmSlVLb3R2K3ZpNjFrTGhBZVhaR1dvSEdDUlhoMndVQzZZZ2ZQZ0tBNkVTUk5IdEZuN0U1QjNISHBMYzVyVk1EU05oS1pZZmRodXBWNEV6ZjYrNURoTWNaTFpoaTBraytpdkRpTjFnZEhsVnRTTjU1eHB2ZitjK1haRHpSMHVoZ2N2Z3kwTEFibXpnazZ5NFdiWUgrTFFzTXB6Tk5qK2FDNzJ2TWlXb3ZXcktoOWpZNE1ZQ21kZ3hzUy9za1B0TGRwMThtdWlFSVJYVGJaUUdVbWh4RnBKQUliQklzQ3NjTXB6TDBCZ2V1anhVd001d3I3OVNkOXI0eHdiZ1NNd21CbEJmVUhSVkJkTnlnOGZlZXBlSmJDUzYzbkQ2ZUhPdUxxTVJzUElpbzN3L2tpL0VBYTkyVVVFaVplYXZMc01VRC95L3FBdldVZHpkUDVZK0MvVE0rQ01HUy9rR0w0TEVkWS8yOE1RZVR2VTFxdjFYMjFrUXQyYWlhajNwUFZMMzZoQXp4YmNMZ3FjTW85b3ltRFJ5ODdrZE
NYVy8rZzRvS0x0TWg2Zm0vRzZXNlkvQjAxSmx4b2h5eXZ1ZUhRSUc1NTd1emtFa1RKM0ZuT1ZPRFNLQktwYjNXWjY1ckV4ZlY3MXpTWmEyNUYzR21wYUlHNkhpWXJYMllZaFFBa0lFOXBLRVFCSGJud0h1d05ER290dFpUWFp3PTsgV0xTPUM9OWRmM2Y5ZDg1MThmYWUxOSZOPXdlbjsgV0xJRD1wR1k4SGdXQ3U0cDVYWUNPazJvYTArREJkZnRrTVVmbU5JbjhYdFNqU1RLc2d2L0lsN0dVbFlzMEpwamYvRTEyalpNZ1Y3eDQ0RHkzZlhPZ2pqVW9KeDdZL0NsTHJMaHNrMjBUSGtzSkpvST07IF9FREdFX1M9Rj0xJlNJRD0xN0NGNkVFMDA2NDI2NDQ4MjEzQzdEQjkwNzQzNjU4OCZta3Q9emgtQ047IE1VSUQ9MjI1NjIxMDkzRDhBNkMyNzMwMTYzMjQxM0MwRTZEMDg7IE1VSURCPTIyNTYyMTA5M0Q4QTZDMjczMDE2MzI0MTNDMEU2RDA4OyBTVUlEPUE7IFNOUkhPUD1JPSZUUz07IF9VPW5HeXpLUXJ1RXNEd0xpdTY1ZlpGSUc2ZTEyaGYybHdUSm1yb1dfX2s4am9VSklLbUczT0lqYXlYS0dXOWRDVlIzc05oRjc2bUVWeHlXNnlqVUdQb2RPZmp0U2EzczNKX0R4TU9yRUsxQnFYQ09CSTliQzY2c3BBSUFTVjdwcnNZRmxWQUp6NzNqVk5FTnBfdEJ1YkxISnk2RWJUMEJLUmU0QWpyWWtILTl1TW5tQ0tCOFpteWc7IF9TUz1TSUQ9MTdDRjZFRTAwNjQyNjQ0ODIxM0M3REI5MDc0MzY1ODgmUj0wJlJCPTAmR0I9MCZSRz0yMDAmUlA9MCZQQz1VNTMxOyBTUkNIUz1QQz1VNTMxOyBVU1JMT0M9SFM9MSZFTE9DPUxBVD0yMi41MDE1Mjk2OTM2MDM1MTZ8TE9OPTExMy45MjYzNjg3MTMzNzg5fE49JUU1JThEJTk3JUU1JUIxJUIxJUU1JThDJUJBJUVGJUJDJThDJUU1JUI5JUJGJUU0JUI4JTlDJUU3JTlDJTgxfEVMVD0yfCZDTE9DPUxBVD0yMi41MDE1MzAyOTA0NjQ2MXxMT049MTEzLjkyNjM3MDcwNjMyOTI4fEE9NzMzLjQ0NjQ1ODYxMjA4MzJ8VFM9MjMwNzI2MTUxMDM0fFNSQz1XOyBTUkNIVVNSPURPQj0yMDIzMDcyNSZUPTE2OTAzODQ5MDgwMDAmUE9FWD1XOyBpcHY2PWhpdD0xNjkwMzg4NTA5OTc0JnQ9NjsgU1JDSEhQR1VTUj1IVj0xNjkwMzg0OTQ1JlNSQ0hMQU5HPXpoLUhhbnMmUFY9MTUuMC4wJkJSVz1NVyZCUkg9TVQmQ1c9NDEwJkNIPTc5NCZTQ1c9NDEwJlNDSD03OTQmRFBSPTEuNSZVVEM9NDgwJkRNPTAmV1RTPTYzODI1ODc5NjI3JlBSVkNXPTQxMCZQUlZDSD03OTQmUFI9MS41OyBjY3Q9QWpXSUJZT29WUC1BZnE2Z1d3dHg4MElmNnlIbjZpQnVFVkhBMVhIZEFLcG55NllfQ1Z5aV9NU3lNOTRWeU1XbmpkWWtrY2NWdG0zY3pvSUF0WFVHUUE7IEdDPUFqV0lCWU9vVlAtQWZxNmdXd3R4ODBJZjZ5SG42aUJ1RVZIQTFYSGRBS3BSM1lfRDlZdGNrczRIdDZYaGFkWGs3NWR2aHpQNFlPVVMwVW1vRXlxeXh3JyBcICAgLUggJ2RudDogMScgXCAgIC1IICdzZWMtY2gtdWE6ICJDaHJvbWl1bSI7dj0iMTE2IiwgIk5vdClBO0JyYW5kIjt2PSIyNCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2IicgXCAgIC1IICdzZWMtY2gtdWEtYXJjaDogIng4NiInIFwgICAtSCAnc2VjLWNoLXVhLWJpdG5lc3M6ICI2NCInIFwgICAtSCAnc2VjLWNoLXVhLWZ1bGwtdmVyc2lvbjogIjExNi4wLjE5MzguMjkiJyBcICAgLUggJ3NlYy1jaC11YS1mdWxsLXZlcnNpb24tbGlzdDogIkNocm9taXVtIjt2PSIxMTYuMC41ODQ1LjQyIiwgIk5vdClBO0JyYW5kIjt2PSIyNC4wLjAuMCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2LjAuMTkzOC4yOSInIFwgICAtSCAnc2VjLWNoLXVhLW1vYmlsZTogPzAnIFwgICAtSCAnc2VjLWNoLXVhLW1vZGVsOiAiIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm06ICJXaW5kb3dzIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm0tdmVyc2lvbjogIjE1LjAuMCInIFwgICAtSCAnc2VjLWZldGNoLWRlc3Q6IGRvY3VtZW50JyBcICAgLUggJ3NlYy1mZXRjaC1tb2RlOiBuYXZpZ2F0ZScgXCAgIC1IICdzZWMtZmV0Y2gtc2l0ZTogbm9uZScgXCAgIC1IICdzZWMtZmV0Y2gtdXNlcjogPzEnIFwgICAtSCAnc2VjLW1zLWdlYzogQjNGNDdBRDRBMjgzQ0FCMzc0QzA0NTFDNDZBQUZEMTQ3QzZBNERBQ0FGRjZBMUMxM0YzNEIyQzcyQjAyNDQ5NCcgXCAgIC1IICdzZWMtbXMtZ2VjLXZlcnNpb246IDEtMTE2LjAuMTkzOC4yOScgXCAgIC1IICd1cGdyYWRlLWluc2VjdXJlLXJlcXVlc3RzOiAxJyBcICAgLUggJ3VzZXItYWdlbnQ6IE1vemlsbGEvNS4wIChXaW5kb3dzIE5UIDEwLjA7IFdpbjY0OyB4NjQpIEFwcGxlV2ViS2l0LzUzNy4zNiAoS0hUTUwsIGxpa2UgR2Vja28pIENocm9tZS8xMTYuMC4wLjAgU2FmYXJpLzUzNy4zNiBFZGcvMTE2LjAuMC4wJyBcICAgLUggJ3gtY2xpZW50LWRhdGE6IGV5SXhJam9pTWlJc0lqRXdJam9pWENKVE1HZzNSMDVIT1RGMmFEUTFUVVpTVW5aNU5ITjJha1JtTVdkbGFWSktlbk54TmxBM2FVMVdibkYzUFZ3aUlpd2lNaUk2SWpFaUxDSXpJam9pTVNJc0lqUWlPaUl5TVRVNE9EUTVOVE00TWpZNE9UTTVOVEEzSWl3aU5TSTZJbHdpU205R1VXcFBURGszT1M5TWJrUlJabmxDZDJOMU0yRnNPVU4zZVRaVFFtZGFNR05ZTVhCdE9XVk1aejFjSWlJc0lqWWlPaUppWlhSaElpd2lOeUk2SWpFNE1ETTRPRFl5TmpRek5TSXNJamtpT2lKa1pYTnJkRzl3SW4wPScgXCAgIC1IICd4LWVkZ2Utc2hvcHBpbmctZmxhZzogMScgXCAgIC0tY29tcHJlc3NlZA== -``` -
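To sanity-check a configured value locally, note that `BING_HEADER` is simply the base64 encoding of the curl-style dump shown in the previous block, so decoding it should yield a string that begins with `curl 'https://www.bing.com/...`. A minimal sketch, assuming the value has already been exported as the `BING_HEADER` environment variable:

```bash
# Decode the configured BING_HEADER and inspect the first characters;
# a valid value decodes back to the curl command captured from the browser.
echo "$BING_HEADER" | base64 --decode | head -c 80
```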
    - - -## 鸣谢 - - 感谢 [EdgeGPT](https://github.com/acheong08/EdgeGPT) 提供的代理 API 的方法。 - - 感谢 [Vercel AI](https://github.com/vercel-labs/ai-chatbot) 提供的基础脚手架和 [ChatHub](https://github.com/chathub-dev/chathub) [go-proxy-bingai](https://github.com/adams549659584/go-proxy-bingai) 提供的部分代码。 - - -## 答疑及交流 - - - -## License - -MIT © [LICENSE](https://github.com/weaigc/bingo/blob/main/LICENSE). - - diff --git a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/scripts/remove_silence.py b/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/scripts/remove_silence.py deleted file mode 100644 index fac88b989703262a84b242b2761df621bf02c739..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/wav2vec/unsupervised/scripts/remove_silence.py +++ /dev/null @@ -1,63 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -get intervals from .vads file, specify output data, and this script removes silences and saves the audio data in out path folder -paths=shards/train.tsv -vads=shards/train.vads -python remove_silence.py --paths $paths --vads $vads -""" - -import os -import argparse -import torch -import torchaudio -import tqdm - - -parser = argparse.ArgumentParser() -parser.add_argument("--tsv", default="", type=str) -parser.add_argument("--vads", default="", type=str) -parser.add_argument("--out", type=str) -params = parser.parse_args() - -# load paths -paths = [] -with open(params.tsv) as f: - root = next(f).rstrip() - for line in f: - paths.append(os.path.join(root, line.rstrip().split("\t")[0])) - -# load vads -list_intervals = [] -with open(params.vads) as f: - for line in f: - interval = [ - [int(w.split(":")[0]), int(w.split(":")[1])] for w in line.rstrip().split() - ] - list_intervals.append(interval) - - -# load audio and keep only intervals (i.e. remove silences) -for i in tqdm.trange(len(paths)): - data, _ = torchaudio.load(paths[i]) - if len(list_intervals[i]) > 0: - data_filtered = torch.cat( - [data[0][int(it[0]) : int(it[1])] for it in list_intervals[i]] - ).unsqueeze(0) - else: - data_filtered = data - - # YOU MAY NEED TO MODIFY THIS TO GET THE RIGHT SUBPATH - # outpath = params.out + '/'.join(paths[i].split('/')[-1]) - outpath = params.out + "/" + "/".join(paths[i].split("/")[-2:]) - - if not os.path.isdir("/".join(outpath.split("/")[:-1])): - os.makedirs("/".join(outpath.split("/")[:-1])) - if not os.path.exists(outpath): - torchaudio.save(outpath, data_filtered, sample_rate=16000) - else: - print(outpath, "exists!") diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/speech_to_text/__init__.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/speech_to_text/__init__.py deleted file mode 100644 index 1c5189c0f7fb4d66077d9d6498cb78cacff76de8..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/models/speech_to_text/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from .berard import * # noqa -from .convtransformer import * # noqa -from .s2t_transformer import * # noqa -from .xm_transformer import * # noqa diff --git a/spaces/ICML2022/resefa/utils/formatting_utils.py b/spaces/ICML2022/resefa/utils/formatting_utils.py deleted file mode 100644 index 20f9f14050da889b7b9be0867e9373ff54ebe42d..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/resefa/utils/formatting_utils.py +++ /dev/null @@ -1,178 +0,0 @@ -# python3.7 -"""Contains utility functions used for formatting.""" - -import cv2 -import numpy as np - -__all__ = [ - 'format_time', 'format_range', 'format_image_size', 'format_image', - 'raw_label_to_one_hot', 'one_hot_to_raw_label' -] - - -def format_time(seconds): - """Formats seconds to readable time string. - - Args: - seconds: Number of seconds to format. - - Returns: - The formatted time string. - - Raises: - ValueError: If the input `seconds` is less than 0. - """ - if seconds < 0: - raise ValueError(f'Input `seconds` should be greater than or equal to ' - f'0, but `{seconds}` is received!') - - # Returns seconds as float if less than 1 minute. - if seconds < 10: - return f'{seconds:7.3f} s' - if seconds < 60: - return f'{seconds:7.2f} s' - - seconds = int(seconds + 0.5) - days, seconds = divmod(seconds, 86400) - hours, seconds = divmod(seconds, 3600) - minutes, seconds = divmod(seconds, 60) - if days: - return f'{days:2d} d {hours:02d} h' - if hours: - return f'{hours:2d} h {minutes:02d} m' - return f'{minutes:2d} m {seconds:02d} s' - - -def format_range(obj, min_val=None, max_val=None): - """Formats the given object to a valid range. - - If `min_val` or `max_val` is provided, both the starting value and the end - value will be clamped to range `[min_val, max_val]`. - - NOTE: (a, b) is regarded as a valid range if and only if `a <= b`. - - Args: - obj: The input object to format. - min_val: The minimum value to cut off the input range. If not provided, - the default minimum value is negative infinity. (default: None) - max_val: The maximum value to cut off the input range. If not provided, - the default maximum value is infinity. (default: None) - - Returns: - A two-elements tuple, indicating the start and the end of the range. - - Raises: - ValueError: If the input object is an invalid range. - """ - if not isinstance(obj, (tuple, list)): - raise ValueError(f'Input object must be a tuple or a list, ' - f'but `{type(obj)}` received!') - if len(obj) != 2: - raise ValueError(f'Input object is expected to contain two elements, ' - f'but `{len(obj)}` received!') - if obj[0] > obj[1]: - raise ValueError(f'The second element is expected to be equal to or ' - f'greater than the first one, ' - f'but `({obj[0]}, {obj[1]})` received!') - - obj = list(obj) - if min_val is not None: - obj[0] = max(obj[0], min_val) - obj[1] = max(obj[1], min_val) - if max_val is not None: - obj[0] = min(obj[0], max_val) - obj[1] = min(obj[1], max_val) - return tuple(obj) - - -def format_image_size(size): - """Formats the given image size to a two-element tuple. - - A valid image size can be an integer, indicating both the height and the - width, OR can be a two-element list or tuple. Both height and width are - assumed to be positive integer. - - Args: - size: The input size to format. - - Returns: - A two-elements tuple, indicating the height and the width, respectively. - - Raises: - ValueError: If the input size is invalid. 
- """ - if not isinstance(size, (int, tuple, list)): - raise ValueError(f'Input size must be an integer, a tuple, or a list, ' - f'but `{type(size)}` received!') - if isinstance(size, int): - size = (size, size) - else: - if len(size) == 1: - size = (size[0], size[0]) - if not len(size) == 2: - raise ValueError(f'Input size is expected to have two numbers at ' - f'most, but `{len(size)}` numbers received!') - if not isinstance(size[0], int) or size[0] < 0: - raise ValueError(f'The height is expected to be a non-negative ' - f'integer, but `{size[0]}` received!') - if not isinstance(size[1], int) or size[1] < 0: - raise ValueError(f'The width is expected to be a non-negative ' - f'integer, but `{size[1]}` received!') - return tuple(size) - - -def format_image(image): - """Formats an image read from `cv2`. - - NOTE: This function will always return a 3-dimensional image (i.e., with - shape [H, W, C]) in pixel range [0, 255]. For color images, the channel - order of the input is expected to be with `BGR` or `BGRA`, which is the - raw image decoded by `cv2`; while the channel order of the output is set to - `RGB` or `RGBA` by default. - - Args: - image: `np.ndarray`, an image read by `cv2.imread()` or - `cv2.imdecode()`. - - Returns: - An image with shape [H, W, C] (where `C = 1` for grayscale image). - """ - if image.ndim == 2: # add additional axis if given a grayscale image - image = image[:, :, np.newaxis] - - assert isinstance(image, np.ndarray) - assert image.dtype == np.uint8 - assert image.ndim == 3 and image.shape[2] in [1, 3, 4] - - if image.shape[2] == 3: # BGR image - return cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - if image.shape[2] == 4: # BGRA image - return cv2.cvtColor(image, cv2.COLOR_BGRA2RGBA) - return image - - -def raw_label_to_one_hot(raw_label, num_classes): - """Converts a single label into one-hot vector. - - Args: - raw_label: The raw label. - num_classes: Total number of classes. - - Returns: - one-hot vector of the given raw label. - """ - one_hot = np.zeros(num_classes, dtype=np.float32) - one_hot[raw_label] = 1.0 - return one_hot - - -def one_hot_to_raw_label(one_hot): - """Converts a one-hot vector to a single value label. - - Args: - one_hot: `np.ndarray`, a one-hot encoded vector. - - Returns: - A single integer to represent the category. 
- """ - return np.argmax(one_hot) diff --git a/spaces/IELTS8/ISF/README.md b/spaces/IELTS8/ISF/README.md deleted file mode 100644 index e1ca27b0482b5b337d56cbd7287f3178a2fd4a60..0000000000000000000000000000000000000000 --- a/spaces/IELTS8/ISF/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ISF -emoji: 📉 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Iceclear/StableSR/StableSR/ldm/models/autoencoder.py b/spaces/Iceclear/StableSR/StableSR/ldm/models/autoencoder.py deleted file mode 100644 index 7b4156448b61788681c7bcdcdc9123a89a732ec8..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/ldm/models/autoencoder.py +++ /dev/null @@ -1,919 +0,0 @@ -import torch -import pytorch_lightning as pl -import torch.nn.functional as F -from contextlib import contextmanager - -from taming.modules.vqvae.quantize import VectorQuantizer2 as VectorQuantizer - -from ldm.modules.diffusionmodules.model import Encoder, Decoder, Decoder_Mix -from ldm.modules.distributions.distributions import DiagonalGaussianDistribution - -from ldm.util import instantiate_from_config - -from basicsr.utils import DiffJPEG, USMSharp -from basicsr.utils.img_process_util import filter2D -from basicsr.data.transforms import paired_random_crop, triplet_random_crop -from basicsr.data.degradations import random_add_gaussian_noise_pt, random_add_poisson_noise_pt, random_add_speckle_noise_pt, random_add_saltpepper_noise_pt -import random - -import torchvision.transforms as transforms - - -class VQModel(pl.LightningModule): - def __init__(self, - ddconfig, - lossconfig, - n_embed, - embed_dim, - ckpt_path=None, - ignore_keys=[], - image_key="image", - colorize_nlabels=None, - monitor=None, - batch_resize_range=None, - scheduler_config=None, - lr_g_factor=1.0, - remap=None, - sane_index_shape=False, # tell vector quantizer to return indices as bhw - use_ema=False - ): - super().__init__() - self.embed_dim = embed_dim - self.n_embed = n_embed - self.image_key = image_key - self.encoder = Encoder(**ddconfig) - self.decoder = Decoder(**ddconfig) - self.loss = instantiate_from_config(lossconfig) - self.quantize = VectorQuantizer(n_embed, embed_dim, beta=0.25, - remap=remap, - sane_index_shape=sane_index_shape) - self.quant_conv = torch.nn.Conv2d(ddconfig["z_channels"], embed_dim, 1) - self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1) - if colorize_nlabels is not None: - assert type(colorize_nlabels)==int - self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1)) - if monitor is not None: - self.monitor = monitor - self.batch_resize_range = batch_resize_range - if self.batch_resize_range is not None: - print(f"{self.__class__.__name__}: Using per-batch resizing in range {batch_resize_range}.") - - self.use_ema = use_ema - if self.use_ema: - self.model_ema = LitEma(self) - print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.") - - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys) - self.scheduler_config = scheduler_config - self.lr_g_factor = lr_g_factor - - @contextmanager - def ema_scope(self, context=None): - if self.use_ema: - self.model_ema.store(self.parameters()) - self.model_ema.copy_to(self) - if context is not None: - print(f"{context}: Switched to EMA weights") - try: - yield None - finally: - if self.use_ema: - 
self.model_ema.restore(self.parameters()) - if context is not None: - print(f"{context}: Restored training weights") - - def init_from_ckpt(self, path, ignore_keys=list()): - sd = torch.load(path, map_location="cpu")["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - missing, unexpected = self.load_state_dict(sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - print(f"Unexpected Keys: {unexpected}") - - def on_train_batch_end(self, *args, **kwargs): - if self.use_ema: - self.model_ema(self) - - def encode(self, x): - h = self.encoder(x) - h = self.quant_conv(h) - quant, emb_loss, info = self.quantize(h) - return quant, emb_loss, info - - def encode_to_prequant(self, x): - h = self.encoder(x) - h = self.quant_conv(h) - return h - - def decode(self, quant): - quant = self.post_quant_conv(quant) - dec = self.decoder(quant) - return dec - - def decode_code(self, code_b): - quant_b = self.quantize.embed_code(code_b) - dec = self.decode(quant_b) - return dec - - def forward(self, input, return_pred_indices=False): - quant, diff, (_,_,ind) = self.encode(input) - dec = self.decode(quant) - if return_pred_indices: - return dec, diff, ind - return dec, diff - - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float() - if self.batch_resize_range is not None: - lower_size = self.batch_resize_range[0] - upper_size = self.batch_resize_range[1] - if self.global_step <= 4: - # do the first few batches with max size to avoid later oom - new_resize = upper_size - else: - new_resize = np.random.choice(np.arange(lower_size, upper_size+16, 16)) - if new_resize != x.shape[2]: - x = F.interpolate(x, size=new_resize, mode="bicubic") - x = x.detach() - return x - - def training_step(self, batch, batch_idx, optimizer_idx): - # https://github.com/pytorch/pytorch/issues/37142 - # try not to fool the heuristics - x = self.get_input(batch, self.image_key) - xrec, qloss, ind = self(x, return_pred_indices=True) - - if optimizer_idx == 0: - # autoencode - aeloss, log_dict_ae = self.loss(qloss, x, xrec, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train", - predicted_indices=ind) - - self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=True) - return aeloss - - if optimizer_idx == 1: - # discriminator - discloss, log_dict_disc = self.loss(qloss, x, xrec, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=True) - return discloss - - def validation_step(self, batch, batch_idx): - log_dict = self._validation_step(batch, batch_idx) - with self.ema_scope(): - log_dict_ema = self._validation_step(batch, batch_idx, suffix="_ema") - return log_dict - - def _validation_step(self, batch, batch_idx, suffix=""): - x = self.get_input(batch, self.image_key) - xrec, qloss, ind = self(x, return_pred_indices=True) - aeloss, log_dict_ae = self.loss(qloss, x, xrec, 0, - self.global_step, - last_layer=self.get_last_layer(), - split="val"+suffix, - predicted_indices=ind - ) - - discloss, log_dict_disc = self.loss(qloss, x, xrec, 1, - self.global_step, - last_layer=self.get_last_layer(), - split="val"+suffix, - 
predicted_indices=ind - ) - rec_loss = log_dict_ae[f"val{suffix}/rec_loss"] - self.log(f"val{suffix}/rec_loss", rec_loss, - prog_bar=True, logger=True, on_step=False, on_epoch=True, sync_dist=True) - self.log(f"val{suffix}/aeloss", aeloss, - prog_bar=True, logger=True, on_step=False, on_epoch=True, sync_dist=True) - if version.parse(pl.__version__) >= version.parse('1.4.0'): - del log_dict_ae[f"val{suffix}/rec_loss"] - self.log_dict(log_dict_ae) - self.log_dict(log_dict_disc) - return self.log_dict - - def configure_optimizers(self): - lr_d = self.learning_rate - lr_g = self.lr_g_factor*self.learning_rate - print("lr_d", lr_d) - print("lr_g", lr_g) - opt_ae = torch.optim.Adam(list(self.encoder.parameters())+ - list(self.decoder.parameters())+ - list(self.quantize.parameters())+ - list(self.quant_conv.parameters())+ - list(self.post_quant_conv.parameters()), - lr=lr_g, betas=(0.5, 0.9)) - opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(), - lr=lr_d, betas=(0.5, 0.9)) - - if self.scheduler_config is not None: - scheduler = instantiate_from_config(self.scheduler_config) - - print("Setting up LambdaLR scheduler...") - scheduler = [ - { - 'scheduler': LambdaLR(opt_ae, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }, - { - 'scheduler': LambdaLR(opt_disc, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }, - ] - return [opt_ae, opt_disc], scheduler - return [opt_ae, opt_disc], [] - - def get_last_layer(self): - return self.decoder.conv_out.weight - - def log_images(self, batch, only_inputs=False, plot_ema=False, **kwargs): - log = dict() - x = self.get_input(batch, self.image_key) - x = x.to(self.device) - if only_inputs: - log["inputs"] = x - return log - xrec, _ = self(x) - if x.shape[1] > 3: - # colorize with random projection - assert xrec.shape[1] > 3 - x = self.to_rgb(x) - xrec = self.to_rgb(xrec) - log["inputs"] = x - log["reconstructions"] = xrec - if plot_ema: - with self.ema_scope(): - xrec_ema, _ = self(x) - if x.shape[1] > 3: xrec_ema = self.to_rgb(xrec_ema) - log["reconstructions_ema"] = xrec_ema - return log - - def to_rgb(self, x): - assert self.image_key == "segmentation" - if not hasattr(self, "colorize"): - self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x)) - x = F.conv2d(x, weight=self.colorize) - x = 2.*(x-x.min())/(x.max()-x.min()) - 1. 
- return x - -class VQModelInterface(VQModel): - def __init__(self, embed_dim, *args, **kwargs): - super().__init__(embed_dim=embed_dim, *args, **kwargs) - self.embed_dim = embed_dim - - def encode(self, x): - h = self.encoder(x) - h = self.quant_conv(h) - return h - - def decode(self, h, force_not_quantize=False): - # also go through quantization layer - if not force_not_quantize: - quant, emb_loss, info = self.quantize(h) - else: - quant = h - quant = self.post_quant_conv(quant) - dec = self.decoder(quant) - return dec - -class AutoencoderKL(pl.LightningModule): - def __init__(self, - ddconfig, - lossconfig, - embed_dim, - ckpt_path=None, - ignore_keys=[], - image_key="image", - colorize_nlabels=None, - monitor=None, - ): - super().__init__() - self.image_key = image_key - self.encoder = Encoder(**ddconfig) - self.decoder = Decoder(**ddconfig) - self.loss = instantiate_from_config(lossconfig) - assert ddconfig["double_z"] - self.quant_conv = torch.nn.Conv2d(2*ddconfig["z_channels"], 2*embed_dim, 1) - self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1) - self.embed_dim = embed_dim - if colorize_nlabels is not None: - assert type(colorize_nlabels)==int - self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1)) - if monitor is not None: - self.monitor = monitor - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys) - - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - if 'first_stage_model' in k: - sd[k[18:]] = sd[k] - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( - sd, strict=False) - print(f"Encoder Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - # if len(unexpected) > 0: - # print(f"Unexpected Keys: {unexpected}") - - def encode(self, x, return_encfea=False): - h = self.encoder(x) - moments = self.quant_conv(h) - posterior = DiagonalGaussianDistribution(moments) - if return_encfea: - return posterior, moments - return posterior - - def encode_gt(self, x, new_encoder): - h = new_encoder(x) - moments = self.quant_conv(h) - posterior = DiagonalGaussianDistribution(moments) - return posterior, moments - - def decode(self, z): - z = self.post_quant_conv(z) - dec = self.decoder(z) - return dec - - def forward(self, input, sample_posterior=True): - posterior = self.encode(input) - if sample_posterior: - z = posterior.sample() - else: - z = posterior.mode() - dec = self.decode(z) - return dec, posterior - - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - # x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float() - x = x.to(memory_format=torch.contiguous_format).float() - # x = x*2.0-1.0 - return x - - def training_step(self, batch, batch_idx, optimizer_idx): - inputs = self.get_input(batch, self.image_key) - reconstructions, posterior = self(inputs) - - if optimizer_idx == 0: - # train encoder+decoder+logvar - aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - self.log("aeloss", aeloss, prog_bar=True, logger=True, 
on_step=True, on_epoch=True) - self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=False) - return aeloss - - if optimizer_idx == 1: - # train the discriminator - discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - - self.log("discloss", discloss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=False) - return discloss - - def validation_step(self, batch, batch_idx): - inputs = self.get_input(batch, self.image_key) - reconstructions, posterior = self(inputs) - aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, 0, self.global_step, - last_layer=self.get_last_layer(), split="val") - - discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, 1, self.global_step, - last_layer=self.get_last_layer(), split="val") - - self.log("val/rec_loss", log_dict_ae["val/rec_loss"]) - self.log_dict(log_dict_ae) - self.log_dict(log_dict_disc) - return self.log_dict - - def configure_optimizers(self): - lr = self.learning_rate - opt_ae = torch.optim.Adam(list(self.encoder.parameters())+ - list(self.decoder.parameters())+ - list(self.quant_conv.parameters())+ - list(self.post_quant_conv.parameters()), - lr=lr, betas=(0.5, 0.9)) - opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(), - lr=lr, betas=(0.5, 0.9)) - return [opt_ae, opt_disc], [] - - def get_last_layer(self): - return self.decoder.conv_out.weight - - @torch.no_grad() - def log_images(self, batch, only_inputs=False, **kwargs): - log = dict() - x = self.get_input(batch, self.image_key) - x = x.to(self.device) - if not only_inputs: - xrec, posterior = self(x) - if x.shape[1] > 3: - # colorize with random projection - assert xrec.shape[1] > 3 - x = self.to_rgb(x) - xrec = self.to_rgb(xrec) - # log["samples"] = self.decode(torch.randn_like(posterior.sample())) - log["reconstructions"] = xrec - log["inputs"] = x - return log - - def to_rgb(self, x): - assert self.image_key == "segmentation" - if not hasattr(self, "colorize"): - self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x)) - x = F.conv2d(x, weight=self.colorize) - x = 2.*(x-x.min())/(x.max()-x.min()) - 1. 
- return x - -class IdentityFirstStage(torch.nn.Module): - def __init__(self, *args, vq_interface=False, **kwargs): - self.vq_interface = vq_interface # TODO: Should be true by default but check to not break older stuff - super().__init__() - - def encode(self, x, *args, **kwargs): - return x - - def decode(self, x, *args, **kwargs): - return x - - def quantize(self, x, *args, **kwargs): - if self.vq_interface: - return x, None, [None, None, None] - return x - - def forward(self, x, *args, **kwargs): - return x - -class AutoencoderKLResi(pl.LightningModule): - def __init__(self, - ddconfig, - lossconfig, - embed_dim, - ckpt_path=None, - ignore_keys=[], - image_key="image", - colorize_nlabels=None, - monitor=None, - fusion_w=1.0, - freeze_dec=True, - synthesis_data=False, - use_usm=False, - test_gt=False, - ): - super().__init__() - self.image_key = image_key - self.encoder = Encoder(**ddconfig) - self.decoder = Decoder_Mix(**ddconfig) - self.decoder.fusion_w = fusion_w - self.loss = instantiate_from_config(lossconfig) - self.quant_conv = torch.nn.Conv2d(2*ddconfig["z_channels"], 2*embed_dim, 1) - self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1) - self.embed_dim = embed_dim - if colorize_nlabels is not None: - assert type(colorize_nlabels)==int - self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1)) - if monitor is not None: - self.monitor = monitor - if ckpt_path is not None: - missing_list = self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys) - else: - missing_list = [] - - print('>>>>>>>>>>>>>>>>>missing>>>>>>>>>>>>>>>>>>>') - print(missing_list) - self.synthesis_data = synthesis_data - self.use_usm = use_usm - self.test_gt = test_gt - - if freeze_dec: - for name, param in self.named_parameters(): - if 'fusion_layer' in name: - param.requires_grad = True - # elif 'encoder' in name: - # param.requires_grad = True - # elif 'quant_conv' in name and 'post_quant_conv' not in name: - # param.requires_grad = True - elif 'loss.discriminator' in name: - param.requires_grad = True - else: - param.requires_grad = False - - print('>>>>>>>>>>>>>>>>>trainable_list>>>>>>>>>>>>>>>>>>>') - trainable_list = [] - for name, params in self.named_parameters(): - if params.requires_grad: - trainable_list.append(name) - print(trainable_list) - - print('>>>>>>>>>>>>>>>>>Untrainable_list>>>>>>>>>>>>>>>>>>>') - untrainable_list = [] - for name, params in self.named_parameters(): - if not params.requires_grad: - untrainable_list.append(name) - print(untrainable_list) - # untrainable_list = list(set(trainable_list).difference(set(missing_list))) - # print('>>>>>>>>>>>>>>>>>untrainable_list>>>>>>>>>>>>>>>>>>>') - # print(untrainable_list) - - # def init_from_ckpt(self, path, ignore_keys=list()): - # sd = torch.load(path, map_location="cpu")["state_dict"] - # keys = list(sd.keys()) - # for k in keys: - # for ik in ignore_keys: - # if k.startswith(ik): - # print("Deleting key {} from state_dict.".format(k)) - # del sd[k] - # self.load_state_dict(sd, strict=False) - # print(f"Restored from {path}") - - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - if 'first_stage_model' in k: - sd[k[18:]] = sd[k] - del sd[k] - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model 
else self.model.load_state_dict( - sd, strict=False) - print(f"Encoder Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - if len(unexpected) > 0: - print(f"Unexpected Keys: {unexpected}") - return missing - - def encode(self, x): - h, enc_fea = self.encoder(x, return_fea=True) - moments = self.quant_conv(h) - posterior = DiagonalGaussianDistribution(moments) - # posterior = h - return posterior, enc_fea - - def encode_gt(self, x, new_encoder): - h = new_encoder(x) - moments = self.quant_conv(h) - posterior = DiagonalGaussianDistribution(moments) - return posterior, moments - - def decode(self, z, enc_fea): - z = self.post_quant_conv(z) - dec = self.decoder(z, enc_fea) - return dec - - def forward(self, input, latent, sample_posterior=True): - posterior, enc_fea_lq = self.encode(input) - dec = self.decode(latent, enc_fea_lq) - return dec, posterior - - @torch.no_grad() - def _dequeue_and_enqueue(self): - """It is the training pair pool for increasing the diversity in a batch. - - Batch processing limits the diversity of synthetic degradations in a batch. For example, samples in a - batch could not have different resize scaling factors. Therefore, we employ this training pair pool - to increase the degradation diversity in a batch. - """ - # initialize - b, c, h, w = self.lq.size() - _, c_, h_, w_ = self.latent.size() - if b == self.configs.data.params.batch_size: - if not hasattr(self, 'queue_size'): - self.queue_size = self.configs.data.params.train.params.get('queue_size', b*50) - if not hasattr(self, 'queue_lr'): - assert self.queue_size % b == 0, f'queue size {self.queue_size} should be divisible by batch size {b}' - self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda() - _, c, h, w = self.gt.size() - self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda() - self.queue_sample = torch.zeros(self.queue_size, c, h, w).cuda() - self.queue_latent = torch.zeros(self.queue_size, c_, h_, w_).cuda() - self.queue_ptr = 0 - if self.queue_ptr == self.queue_size: # the pool is full - # do dequeue and enqueue - # shuffle - idx = torch.randperm(self.queue_size) - self.queue_lr = self.queue_lr[idx] - self.queue_gt = self.queue_gt[idx] - self.queue_sample = self.queue_sample[idx] - self.queue_latent = self.queue_latent[idx] - # get first b samples - lq_dequeue = self.queue_lr[0:b, :, :, :].clone() - gt_dequeue = self.queue_gt[0:b, :, :, :].clone() - sample_dequeue = self.queue_sample[0:b, :, :, :].clone() - latent_dequeue = self.queue_latent[0:b, :, :, :].clone() - # update the queue - self.queue_lr[0:b, :, :, :] = self.lq.clone() - self.queue_gt[0:b, :, :, :] = self.gt.clone() - self.queue_sample[0:b, :, :, :] = self.sample.clone() - self.queue_latent[0:b, :, :, :] = self.latent.clone() - - self.lq = lq_dequeue - self.gt = gt_dequeue - self.sample = sample_dequeue - self.latent = latent_dequeue - else: - # only do enqueue - self.queue_lr[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.lq.clone() - self.queue_gt[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.gt.clone() - self.queue_sample[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.sample.clone() - self.queue_latent[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.latent.clone() - self.queue_ptr = self.queue_ptr + b - - def get_input(self, batch): - input = batch['lq'] - gt = batch['gt'] - latent = batch['latent'] - sample = batch['sample'] - - assert not torch.isnan(latent).any() - - input = 
input.to(memory_format=torch.contiguous_format).float() - gt = gt.to(memory_format=torch.contiguous_format).float() - latent = latent.to(memory_format=torch.contiguous_format).float() / 0.18215 - - gt = gt * 2.0 - 1.0 - input = input * 2.0 - 1.0 - sample = sample * 2.0 -1.0 - - return input, gt, latent, sample - - @torch.no_grad() - def get_input_synthesis(self, batch, val=False, test_gt=False): - - jpeger = DiffJPEG(differentiable=False).cuda() # simulate JPEG compression artifacts - im_gt = batch['gt'].cuda() - if self.use_usm: - usm_sharpener = USMSharp().cuda() # do usm sharpening - im_gt = usm_sharpener(im_gt) - im_gt = im_gt.to(memory_format=torch.contiguous_format).float() - kernel1 = batch['kernel1'].cuda() - kernel2 = batch['kernel2'].cuda() - sinc_kernel = batch['sinc_kernel'].cuda() - - ori_h, ori_w = im_gt.size()[2:4] - - # ----------------------- The first degradation process ----------------------- # - # blur - out = filter2D(im_gt, kernel1) - # random resize - updown_type = random.choices( - ['up', 'down', 'keep'], - self.configs.degradation['resize_prob'], - )[0] - if updown_type == 'up': - scale = random.uniform(1, self.configs.degradation['resize_range'][1]) - elif updown_type == 'down': - scale = random.uniform(self.configs.degradation['resize_range'][0], 1) - else: - scale = 1 - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate(out, scale_factor=scale, mode=mode) - # add noise - gray_noise_prob = self.configs.degradation['gray_noise_prob'] - if random.random() < self.configs.degradation['gaussian_noise_prob']: - out = random_add_gaussian_noise_pt( - out, - sigma_range=self.configs.degradation['noise_range'], - clip=True, - rounds=False, - gray_prob=gray_noise_prob, - ) - else: - out = random_add_poisson_noise_pt( - out, - scale_range=self.configs.degradation['poisson_scale_range'], - gray_prob=gray_noise_prob, - clip=True, - rounds=False) - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.configs.degradation['jpeg_range']) - out = torch.clamp(out, 0, 1) # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts - out = jpeger(out, quality=jpeg_p) - - # ----------------------- The second degradation process ----------------------- # - # blur - if random.random() < self.configs.degradation['second_blur_prob']: - out = filter2D(out, kernel2) - # random resize - updown_type = random.choices( - ['up', 'down', 'keep'], - self.configs.degradation['resize_prob2'], - )[0] - if updown_type == 'up': - scale = random.uniform(1, self.configs.degradation['resize_range2'][1]) - elif updown_type == 'down': - scale = random.uniform(self.configs.degradation['resize_range2'][0], 1) - else: - scale = 1 - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate( - out, - size=(int(ori_h / self.configs.sf * scale), - int(ori_w / self.configs.sf * scale)), - mode=mode, - ) - # add noise - gray_noise_prob = self.configs.degradation['gray_noise_prob2'] - if random.random() < self.configs.degradation['gaussian_noise_prob2']: - out = random_add_gaussian_noise_pt( - out, - sigma_range=self.configs.degradation['noise_range2'], - clip=True, - rounds=False, - gray_prob=gray_noise_prob, - ) - else: - out = random_add_poisson_noise_pt( - out, - scale_range=self.configs.degradation['poisson_scale_range2'], - gray_prob=gray_noise_prob, - clip=True, - rounds=False, - ) - - # JPEG compression + the final sinc filter - # We also need to resize images to desired sizes. 
We group [resize back + sinc filter] together - # as one operation. - # We consider two orders: - # 1. [resize back + sinc filter] + JPEG compression - # 2. JPEG compression + [resize back + sinc filter] - # Empirically, we find other combinations (sinc + JPEG + Resize) will introduce twisted lines. - if random.random() < 0.5: - # resize back + the final sinc filter - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate( - out, - size=(ori_h // self.configs.sf, - ori_w // self.configs.sf), - mode=mode, - ) - out = filter2D(out, sinc_kernel) - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.configs.degradation['jpeg_range2']) - out = torch.clamp(out, 0, 1) - out = jpeger(out, quality=jpeg_p) - else: - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.configs.degradation['jpeg_range2']) - out = torch.clamp(out, 0, 1) - out = jpeger(out, quality=jpeg_p) - # resize back + the final sinc filter - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate( - out, - size=(ori_h // self.configs.sf, - ori_w // self.configs.sf), - mode=mode, - ) - out = filter2D(out, sinc_kernel) - - # clamp and round - im_lq = torch.clamp(out, 0, 1.0) - - # random crop - gt_size = self.configs.degradation['gt_size'] - im_gt, im_lq = paired_random_crop(im_gt, im_lq, gt_size, self.configs.sf) - self.lq, self.gt = im_lq, im_gt - - self.lq = F.interpolate( - self.lq, - size=(self.gt.size(-2), - self.gt.size(-1)), - mode='bicubic', - ) - - self.latent = batch['latent'] / 0.18215 - self.sample = batch['sample'] * 2 - 1.0 - # training pair pool - if not val: - self._dequeue_and_enqueue() - # sharpen self.gt again, as we have changed the self.gt with self._dequeue_and_enqueue - self.lq = self.lq.contiguous() # for the warning: grad and param do not obey the gradient layout contract - self.lq = self.lq*2 - 1.0 - self.gt = self.gt*2 - 1.0 - - self.lq = torch.clamp(self.lq, -1.0, 1.0) - - x = self.lq - y = self.gt - x = x.to(self.device) - y = y.to(self.device) - - if self.test_gt: - return y, y, self.latent.to(self.device), self.sample.to(self.device) - else: - return x, y, self.latent.to(self.device), self.sample.to(self.device) - - def training_step(self, batch, batch_idx, optimizer_idx): - if self.synthesis_data: - inputs, gts, latents, _ = self.get_input_synthesis(batch, val=False) - else: - inputs, gts, latents, _ = self.get_input(batch) - reconstructions, posterior = self(inputs, latents) - - if optimizer_idx == 0: - # train encoder+decoder+logvar - aeloss, log_dict_ae = self.loss(gts, reconstructions, posterior, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - self.log("aeloss", aeloss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=False) - return aeloss - - if optimizer_idx == 1: - # train the discriminator - discloss, log_dict_disc = self.loss(gts, reconstructions, posterior, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - - self.log("discloss", discloss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=False) - return discloss - - def validation_step(self, batch, batch_idx): - inputs, gts, latents, _ = self.get_input(batch) - - reconstructions, posterior = self(inputs, latents) - aeloss, log_dict_ae = self.loss(gts, reconstructions, posterior, 0, self.global_step, - 
last_layer=self.get_last_layer(), split="val") - - discloss, log_dict_disc = self.loss(gts, reconstructions, posterior, 1, self.global_step, - last_layer=self.get_last_layer(), split="val") - - self.log("val/rec_loss", log_dict_ae["val/rec_loss"]) - self.log_dict(log_dict_ae) - self.log_dict(log_dict_disc) - return self.log_dict - - def configure_optimizers(self): - lr = self.learning_rate - opt_ae = torch.optim.Adam(list(self.encoder.parameters())+ - list(self.decoder.parameters())+ - # list(self.quant_conv.parameters())+ - list(self.post_quant_conv.parameters()), - lr=lr, betas=(0.5, 0.9)) - opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(), - lr=lr, betas=(0.5, 0.9)) - return [opt_ae, opt_disc], [] - - def get_last_layer(self): - return self.decoder.conv_out.weight - - @torch.no_grad() - def log_images(self, batch, only_inputs=False, **kwargs): - log = dict() - if self.synthesis_data: - x, gts, latents, samples = self.get_input_synthesis(batch, val=False) - else: - x, gts, latents, samples = self.get_input(batch) - x = x.to(self.device) - latents = latents.to(self.device) - samples = samples.to(self.device) - if not only_inputs: - xrec, posterior = self(x, latents) - if x.shape[1] > 3: - # colorize with random projection - assert xrec.shape[1] > 3 - x = self.to_rgb(x) - gts = self.to_rgb(gts) - samples = self.to_rgb(samples) - xrec = self.to_rgb(xrec) - # log["samples"] = self.decode(torch.randn_like(posterior.sample())) - log["reconstructions"] = xrec - log["inputs"] = x - log["gts"] = gts - log["samples"] = samples - return log - - def to_rgb(self, x): - assert self.image_key == "segmentation" - if not hasattr(self, "colorize"): - self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x)) - x = F.conv2d(x, weight=self.colorize) - x = 2.*(x-x.min())/(x.max()-x.min()) - 1. - return x diff --git a/spaces/Illia56/fastest-whisper-v3-large/README.md b/spaces/Illia56/fastest-whisper-v3-large/README.md deleted file mode 100644 index cfdb26d345a1882d03385447a6468ed628c2b6a4..0000000000000000000000000000000000000000 --- a/spaces/Illia56/fastest-whisper-v3-large/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Whisper Large V3 -emoji: ⚡ -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.38.0 -app_file: app.py -pinned: false -tags: -- whisper-event ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Illumotion/Koboldcpp/otherarch/rwkv_v3.h b/spaces/Illumotion/Koboldcpp/otherarch/rwkv_v3.h deleted file mode 100644 index b9e0d57e2d6c78714fea820d4c0cb2f9da17e9cb..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/otherarch/rwkv_v3.h +++ /dev/null @@ -1,180 +0,0 @@ -#ifndef RWKV_H -#define RWKV_H - -#include -#include -#include - -#ifdef RWKV_SHARED -# if defined(_WIN32) && !defined(__MINGW32__) -# ifdef RWKV_BUILD -# define RWKV_API __declspec(dllexport) -# else -# define RWKV_API __declspec(dllimport) -# endif -# else -# define RWKV_API __attribute__ ((visibility ("default"))) -# endif -#else -# define RWKV_API -#endif - -// 'ggmf' in hex. -#define RWKV_FILE_MAGIC 0x67676d66 - -#define RWKV_FILE_VERSION_0 100 -#define RWKV_FILE_VERSION_1 101 -#define RWKV_FILE_VERSION_MIN RWKV_FILE_VERSION_0 -#define RWKV_FILE_VERSION_MAX RWKV_FILE_VERSION_1 -// Default file version is the latest version. 
-#define RWKV_FILE_VERSION RWKV_FILE_VERSION_MAX - -#ifdef __cplusplus -extern "C" { -#endif - - // Represents an error encountered during a function call. - // These are flags, so an actual value might contain multiple errors. - enum rwkv_error_flags { - RWKV_ERROR_NONE = 0, - - RWKV_ERROR_ARGS = 1 << 8, - RWKV_ERROR_FILE = 2 << 8, - RWKV_ERROR_MODEL = 3 << 8, - RWKV_ERROR_MODEL_PARAMS = 4 << 8, - RWKV_ERROR_GRAPH = 5 << 8, - RWKV_ERROR_CTX = 6 << 8, - - RWKV_ERROR_ALLOC = 1, - RWKV_ERROR_FILE_OPEN = 2, - RWKV_ERROR_FILE_STAT = 3, - RWKV_ERROR_FILE_READ = 4, - RWKV_ERROR_FILE_WRITE = 5, - RWKV_ERROR_FILE_MAGIC = 6, - RWKV_ERROR_FILE_VERSION = 7, - RWKV_ERROR_DATA_TYPE = 8, - RWKV_ERROR_UNSUPPORTED = 9, - RWKV_ERROR_SHAPE = 10, - RWKV_ERROR_DIMENSION = 11, - RWKV_ERROR_KEY = 12, - RWKV_ERROR_DATA = 13, - RWKV_ERROR_PARAM_MISSING = 14 - }; - - // RWKV context that can be used for inference. - // All functions that operate on rwkv_context are thread-safe. - // rwkv_context can be sent to different threads between calls to rwkv_eval. - // There is no requirement for rwkv_context to be freed on the creating thread. - struct rwkv_context; - - // Sets whether errors are automatically printed to stderr. - // If this is set to false, you are responsible for calling rwkv_last_error manually if an operation fails. - // - ctx: the context to suppress error messages for. - // If NULL, affects model load (rwkv_init_from_file) and quantization (rwkv_quantize_model_file) errors, - // as well as the default for new context. - // - print_errors: whether error messages should be automatically printed. - RWKV_API void rwkv_set_print_errors(struct rwkv_context * ctx, bool print_errors); - - // Gets whether errors are automatically printed to stderr. - // - ctx: the context to retrieve the setting for, or NULL for the global setting. - RWKV_API bool rwkv_get_print_errors(struct rwkv_context * ctx); - - // Retrieves and clears the error flags. - // - ctx: the context the retrieve the error for, or NULL for the global error. - RWKV_API enum rwkv_error_flags rwkv_get_last_error(struct rwkv_context * ctx); - - // Loads the model from a file and prepares it for inference. - // Returns NULL on any error. - // - model_file_path: path to model file in ggml format. - // - n_threads: count of threads to use, must be positive. - RWKV_API struct rwkv_context * rwkv_init_from_file(const char * model_file_path, const uint32_t n_threads); - - // Creates a new context from an existing one. - // This can allow you to run multiple rwkv_eval's in parallel, without having to load a single model multiple times. - // Each rwkv_context can have one eval running at a time. - // Every rwkv_context must be freed using rwkv_free. - // - ctx: context to be cloned. - // - n_threads: count of threads to use, must be positive. - RWKV_API struct rwkv_context * rwkv_clone_context(struct rwkv_context * ctx, const uint32_t n_threads); - - // Offloads specified count of model layers onto the GPU. Offloaded layers are evaluated using cuBLAS. - // Returns true if at least one layer was offloaded. - // If rwkv.cpp was compiled without cuBLAS support, this function is a no-op and always returns false. - RWKV_API bool rwkv_gpu_offload_layers(struct rwkv_context * ctx, const uint32_t n_layers); - - // Evaluates the model for a single token. - // Not thread-safe. For parallel inference, call rwkv_clone_context to create one rwkv_context for each thread. - // Returns false on any error. 
- // You can pass NULL to logits_out whenever logits are not needed. This can improve speed by ~10ms per iteration - // that you do not calculate logits. - // - token: next token index, in range 0 <= token < n_vocab. - // - state_in: FP32 buffer of size rwkv_get_state_len(); or NULL, if this is a first pass. - // - state_out: FP32 buffer of size rwkv_get_state_len(). This buffer will be written to if non-NULL. - // - logits_out: FP32 buffer of size rwkv_get_logits_len(). This buffer will be written to if non-NULL. - RWKV_API bool rwkv_eval(struct rwkv_context *, const int n_threads, const uint32_t token, const float * state_in, float * state_out, float * logits_out); - - // Evaluates the model for a sequence of tokens. - // Uses a faster algorithm than rwkv_eval if you do not need the state and logits for every token. Best used with batch sizes of 64 or so. - // Has to build a computation graph on the first call for a given sequence, but will use this cached graph for subsequent calls of the same sequence length. - // Not thread-safe. For parallel inference, call rwkv_clone_context to create one rwkv_context for each thread. - // Returns false on any error. - // You can pass NULL to logits_out whenever logits are not needed. This can improve speed by ~10ms per iteration - // that you do not calculate logits. - // - tokens: pointer to an array of tokens. If NULL, the graph will be built and cached, but not executed: this can be useful for initialization. - // - sequence_len: number of tokens to read from the array. - // - state_in: FP32 buffer of size rwkv_get_state_len(), or NULL if this is a first pass. - // - state_out: FP32 buffer of size rwkv_get_state_len(). This buffer will be written to if non-NULL. - // - logits_out: FP32 buffer of size rwkv_get_logits_len(). This buffer will be written to if non-NULL. - RWKV_API bool rwkv_eval_sequence(struct rwkv_context * ctx, const int n_threads, const uint32_t * tokens, size_t sequence_len, const float * state_in, float * state_out, float * logits_out); - - // Returns the number of tokens in the given model's vocabulary. - // Useful for telling 20B_tokenizer models (n_vocab = 50277) apart from World models (n_vocab = 65536). - RWKV_API size_t rwkv_get_n_vocab(const struct rwkv_context * ctx); - - // Returns the number of elements in the given model's embedding. - // Useful for reading individual fields of a model's hidden state. - RWKV_API size_t rwkv_get_n_embed(const struct rwkv_context * ctx); - - // Returns the number of layers in the given model. - // Useful for always offloading the entire model to GPU. - RWKV_API size_t rwkv_get_n_layer(const struct rwkv_context * ctx); - - // Returns the number of float elements in a complete state for the given model. - // This is the number of elements you'll need to allocate for a call to rwkv_eval, rwkv_eval_sequence, or rwkv_init_state. - RWKV_API size_t rwkv_get_state_len(const struct rwkv_context * ctx); - - // Returns the number of float elements in the logits output of a given model. - // This is currently always identical to n_vocab. - RWKV_API size_t rwkv_get_logits_len(const struct rwkv_context * ctx); - - // Initializes the given state so that passing it to rwkv_eval or rwkv_eval_sequence would be identical to passing NULL. - // Useful in cases where tracking the first call to these functions may be annoying or expensive. - // State must be initialized for behavior to be defined, passing a zeroed state to rwkv.cpp functions will result in NaNs. 
- // - state: FP32 buffer of size rwkv_get_state_len() to initialize - RWKV_API void rwkv_init_state(const struct rwkv_context * ctx, float * state); - - // Frees all allocated memory and the context. - // Does not need to be called on the same thread that created the rwkv_context. - RWKV_API void rwkv_free(struct rwkv_context * ctx); - - // Quantizes FP32 or FP16 model to one of quantized formats. - // Returns false on any error. Error messages would be printed to stderr. - // - model_file_path_in: path to model file in ggml format, must be either FP32 or FP16. - // - model_file_path_out: quantized model will be written here. - // - format_name: must be one of available format names below. - // Available format names: - // - Q4_0 - // - Q4_1 - // - Q5_0 - // - Q5_1 - // - Q8_0 - RWKV_API bool rwkv_quantize_model_file(const char * model_file_path_in, const char * model_file_path_out, const char * format_name); - - // Returns system information string. - RWKV_API const char * rwkv_get_system_info_string(void); - -#ifdef __cplusplus -} -#endif - -#endif \ No newline at end of file diff --git a/spaces/JMalott/ai_architecture/dalle/utils/__init__.py b/spaces/JMalott/ai_architecture/dalle/utils/__init__.py deleted file mode 100644 index 776dd3a6ef93a2d905cbcaec159b6db320bdf3db..0000000000000000000000000000000000000000 --- a/spaces/JMalott/ai_architecture/dalle/utils/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .utils import * -from .config import * -from .sampling import * \ No newline at end of file diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion/safety_checker.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion/safety_checker.py deleted file mode 100644 index 1476c1ede62c6f2189c9025598ddab02169c5f69..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/stable_diffusion/safety_checker.py +++ /dev/null @@ -1,124 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import numpy as np -import torch -import torch.nn as nn - -from transformers import CLIPConfig, CLIPVisionModel, PreTrainedModel - -from ...utils import logging - - -logger = logging.get_logger(__name__) - - -def cosine_distance(image_embeds, text_embeds): - normalized_image_embeds = nn.functional.normalize(image_embeds) - normalized_text_embeds = nn.functional.normalize(text_embeds) - return torch.mm(normalized_image_embeds, normalized_text_embeds.t()) - - -class StableDiffusionSafetyChecker(PreTrainedModel): - config_class = CLIPConfig - - _no_split_modules = ["CLIPEncoderLayer"] - - def __init__(self, config: CLIPConfig): - super().__init__(config) - - self.vision_model = CLIPVisionModel(config.vision_config) - self.visual_projection = nn.Linear(config.vision_config.hidden_size, config.projection_dim, bias=False) - - self.concept_embeds = nn.Parameter(torch.ones(17, config.projection_dim), requires_grad=False) - self.special_care_embeds = nn.Parameter(torch.ones(3, config.projection_dim), requires_grad=False) - - self.concept_embeds_weights = nn.Parameter(torch.ones(17), requires_grad=False) - self.special_care_embeds_weights = nn.Parameter(torch.ones(3), requires_grad=False) - - @torch.no_grad() - def forward(self, clip_input, images): - pooled_output = self.vision_model(clip_input)[1] # pooled_output - image_embeds = self.visual_projection(pooled_output) - - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16 - special_cos_dist = cosine_distance(image_embeds, self.special_care_embeds).cpu().float().numpy() - cos_dist = cosine_distance(image_embeds, self.concept_embeds).cpu().float().numpy() - - result = [] - batch_size = image_embeds.shape[0] - for i in range(batch_size): - result_img = {"special_scores": {}, "special_care": [], "concept_scores": {}, "bad_concepts": []} - - # increase this value to create a stronger `nfsw` filter - # at the cost of increasing the possibility of filtering benign images - adjustment = 0.0 - - for concept_idx in range(len(special_cos_dist[0])): - concept_cos = special_cos_dist[i][concept_idx] - concept_threshold = self.special_care_embeds_weights[concept_idx].item() - result_img["special_scores"][concept_idx] = round(concept_cos - concept_threshold + adjustment, 3) - if result_img["special_scores"][concept_idx] > 0: - result_img["special_care"].append({concept_idx, result_img["special_scores"][concept_idx]}) - adjustment = 0.01 - - for concept_idx in range(len(cos_dist[0])): - concept_cos = cos_dist[i][concept_idx] - concept_threshold = self.concept_embeds_weights[concept_idx].item() - result_img["concept_scores"][concept_idx] = round(concept_cos - concept_threshold + adjustment, 3) - if result_img["concept_scores"][concept_idx] > 0: - result_img["bad_concepts"].append(concept_idx) - - result.append(result_img) - - # has_nsfw_concepts = [len(res["bad_concepts"]) > 0 for res in result] - has_nsfw_concepts = [False] - - for idx, has_nsfw_concept in enumerate(has_nsfw_concepts): - if has_nsfw_concept: - images[idx] = np.zeros(images[idx].shape) # black image - - if any(has_nsfw_concepts): - logger.warning( - "Potential NSFW content was detected in one or more images. A black image will be returned instead." - " Try again with a different prompt and/or seed." 
- ) - - return images, has_nsfw_concepts - - @torch.no_grad() - def forward_onnx(self, clip_input: torch.FloatTensor, images: torch.FloatTensor): - pooled_output = self.vision_model(clip_input)[1] # pooled_output - image_embeds = self.visual_projection(pooled_output) - - special_cos_dist = cosine_distance(image_embeds, self.special_care_embeds) - cos_dist = cosine_distance(image_embeds, self.concept_embeds) - - # increase this value to create a stronger `nsfw` filter - # at the cost of increasing the possibility of filtering benign images - adjustment = 0.0 - - special_scores = special_cos_dist - self.special_care_embeds_weights + adjustment - # special_scores = special_scores.round(decimals=3) - special_care = torch.any(special_scores > 0, dim=1) - special_adjustment = special_care * 0.01 - special_adjustment = special_adjustment.unsqueeze(1).expand(-1, cos_dist.shape[1]) - - concept_scores = (cos_dist - self.concept_embeds_weights) + special_adjustment - # concept_scores = concept_scores.round(decimals=3) - has_nsfw_concepts = torch.any(concept_scores > 0, dim=1) - - images[has_nsfw_concepts] = 0.0 # black image - - return images, has_nsfw_concepts diff --git a/spaces/Jamel887/Rv-percobaan887/lib/infer_pack/attentions.py b/spaces/Jamel887/Rv-percobaan887/lib/infer_pack/attentions.py deleted file mode 100644 index 05501be1871643f78dddbeaa529c96667031a8db..0000000000000000000000000000000000000000 --- a/spaces/Jamel887/Rv-percobaan887/lib/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from lib.infer_pack import commons -from lib.infer_pack import modules -from lib.infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - 
self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t 
= (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. 
- x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/JunchuanYu/Tools/app.py b/spaces/JunchuanYu/Tools/app.py deleted file mode 100644 index e1c314d1f33d8faa2697bf7e53bda65109fe7512..0000000000000000000000000000000000000000 --- a/spaces/JunchuanYu/Tools/app.py +++ /dev/null @@ -1,84 +0,0 @@ - -from gramformer import Gramformer -import spacy -import gradio as gr -from transformers import pipeline -spacy.load('en_core_web_sm') -# from spacy.lang.en import English - - -def extract_str(text): - text=str(text) - start = text.find("{'") - end = text.find("'}") - return text[start+2:end] - -def gramacorrect(sentence): - gf = Gramformer(models=1, use_gpu=False) - res = gf.correct(sentence) - return extract_str(res) - -def translate_zh(from_text): - translation_pipeline = pipeline("translation", model="Helsinki-NLP/opus-mt-en-zh") - res = translation_pipeline(from_text)[0] - return res['translation_text'] - -def translate_en(from_text): - translation_pipeline = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-en") - res = translation_pipeline(from_text)[0] - 
return res['translation_text'] - -def generator(from_text): - english_generator = pipeline("text-generation", model="distilgpt2") - english_text = english_generator(from_text)[0]["generated_text"] - return english_text - - -with gr.Blocks() as demo: - with gr.Tab("Translator"): - gr.Markdown(""" - #### English to Chinese. - """) - - with gr.Row(): - text_input1 = gr.Textbox(lines=4, placeholder="Enter sentence here...") - chinese = gr.Textbox(lines=4, placeholder="Chinese") - zh_button = gr.Button("RUN") - gr.Markdown(""" - #### Chinese to English. - """) - - with gr.Row(): - text_input2 = gr.Textbox(lines=4, placeholder="Enter sentence here...") - english = gr.Textbox(lines=4, placeholder="English") - en_button = gr.Button("RUN") - - with gr.Tab("Gramachecker"): - gr.Markdown(""" - #### English grama checker. - """) - with gr.Row(): - text_input3 = gr.Textbox(lines=4, placeholder="Enter sentence here...") - check = gr.Textbox(lines=4, placeholder="Grama Check") - check_button = gr.Button("RUN") - - gr.Markdown(""" - #### English text generator. - """) - with gr.Row(): - text_input4 = gr.Textbox(lines=2, placeholder="Enter sentence here...") - txtgenerator = gr.Textbox(lines=6, placeholder="Text Generator") - gen_button = gr.Button("RUN") - - - - zh_button.click(translate_zh, inputs=text_input1, outputs=chinese,api_name="translate_zh") - en_button.click(translate_en, inputs=text_input2, outputs=english,api_name="translate_en") - - check_button.click(gramacorrect, inputs=text_input3, outputs=check,api_name="gramacorrect") - gen_button.click(generator, inputs=text_input4, outputs=txtgenerator,api_name="generator") -demo.launch() - - - - diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/sublayer/common/batch_norm_conv.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/sublayer/common/batch_norm_conv.py deleted file mode 100644 index 0d07a4a9495657bcd434111ec0b6f16ca35211c2..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/models/sublayer/common/batch_norm_conv.py +++ /dev/null @@ -1,14 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F - -class BatchNormConv(nn.Module): - def __init__(self, in_channels, out_channels, kernel, relu=True): - super().__init__() - self.conv = nn.Conv1d(in_channels, out_channels, kernel, stride=1, padding=kernel // 2, bias=False) - self.bnorm = nn.BatchNorm1d(out_channels) - self.relu = relu - - def forward(self, x): - x = self.conv(x) - x = F.relu(x) if self.relu is True else x - return self.bnorm(x) \ No newline at end of file diff --git a/spaces/Kreaols/ChuanhuChatGPT/custom.css b/spaces/Kreaols/ChuanhuChatGPT/custom.css deleted file mode 100644 index 5143eb138ea2469d8c457c71cb210fd3fb7cbe15..0000000000000000000000000000000000000000 --- a/spaces/Kreaols/ChuanhuChatGPT/custom.css +++ /dev/null @@ -1,162 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2.5em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#chuanhu_chatbot, #status_display { - transition: all 0.6s; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 亮色 */ -#chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; -} -[data-testid = "bot"] { - background-color: 
#FFFFFF !important; -} -[data-testid = "user"] { - background-color: #95EC69 !important; -} -/* 对话气泡 */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1.4em 1.2em 0em 1.4em; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight 
.nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/KyanChen/FunSR/datasets/datasets_loader.py b/spaces/KyanChen/FunSR/datasets/datasets_loader.py deleted file mode 100644 index 9b61dbeee997a129e0b48e7f884bc6a1478683d9..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/FunSR/datasets/datasets_loader.py +++ /dev/null @@ -1,69 +0,0 @@ -import os -import json -from PIL import Image - -import pickle -import imageio -import numpy as np -import torch -from torch.utils.data import Dataset -from torchvision import transforms - -from datasets import register - - -@register('hr_data_loader') -class HRImgLoader(Dataset): - def __init__(self, root_path, split_file, split_key, first_k=None, cache='none'): - self.cache = cache - with open(split_file, 'r') as f: - filenames = json.load(f)[split_key] - if first_k is not None: - filenames = filenames[:first_k] - - self.files = [] - for filename in filenames: - file = os.path.join(root_path, filename) - - if cache == 'none': - self.files.append(file) - - elif cache == 'bin': - bin_root = os.path.join(os.path.dirname(root_path), - '_bin_' + os.path.basename(root_path)) - if not os.path.exists(bin_root): - os.mkdir(bin_root) - print('mkdir', bin_root) - bin_file = os.path.join( - bin_root, filename.split('.')[0] + '.pkl') - if not os.path.exists(bin_file): - with open(bin_file, 'wb') as f: - pickle.dump(imageio.imread(file), f) - print('dump', bin_file) - self.files.append(bin_file) - - elif cache == 'in_memory': - self.files.append(transforms.ToTensor()( - Image.open(file).convert('RGB'))) - - def __len__(self): - return len(self.files) - - def __getitem__(self, idx): - x = self.files[idx] - file_name = x - - if self.cache == 'none': - 
return transforms.ToTensor()(Image.open(x).convert('RGB')), file_name - - elif self.cache == 'bin': - with open(x, 'rb') as f: - x = pickle.load(f) - x = np.ascontiguousarray(x.transpose(2, 0, 1)) - x = torch.from_numpy(x).float() / 255 - return x, file_name - - elif self.cache == 'in_memory': - return x, file_name - - diff --git a/spaces/L0SG/BigVGAN/alias_free_torch/act.py b/spaces/L0SG/BigVGAN/alias_free_torch/act.py deleted file mode 100644 index 028debd697dd60458aae75010057df038bd3518a..0000000000000000000000000000000000000000 --- a/spaces/L0SG/BigVGAN/alias_free_torch/act.py +++ /dev/null @@ -1,28 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. - -import torch.nn as nn -from .resample import UpSample1d, DownSample1d - - -class Activation1d(nn.Module): - def __init__(self, - activation, - up_ratio: int = 2, - down_ratio: int = 2, - up_kernel_size: int = 12, - down_kernel_size: int = 12): - super().__init__() - self.up_ratio = up_ratio - self.down_ratio = down_ratio - self.act = activation - self.upsample = UpSample1d(up_ratio, up_kernel_size) - self.downsample = DownSample1d(down_ratio, down_kernel_size) - - # x: [B,C,T] - def forward(self, x): - x = self.upsample(x) - x = self.act(x) - x = self.downsample(x) - - return x \ No newline at end of file diff --git a/spaces/L0SG/BigVGAN/utils.py b/spaces/L0SG/BigVGAN/utils.py deleted file mode 100644 index edf3d4e4fef2dff646a29ce49f20cc794ab81ecb..0000000000000000000000000000000000000000 --- a/spaces/L0SG/BigVGAN/utils.py +++ /dev/null @@ -1,80 +0,0 @@ -# Adapted from https://github.com/jik876/hifi-gan under the MIT license. -# LICENSE is in incl_licenses directory. - -import glob -import os -import matplotlib -import torch -from torch.nn.utils import weight_norm -matplotlib.use("Agg") -import matplotlib.pylab as plt -from meldataset import MAX_WAV_VALUE -from scipy.io.wavfile import write - - -def plot_spectrogram(spectrogram): - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - - fig.canvas.draw() - plt.close() - - return fig - - -def plot_spectrogram_clipped(spectrogram, clip_max=2.): - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none', vmin=1e-6, vmax=clip_max) - plt.colorbar(im, ax=ax) - - fig.canvas.draw() - plt.close() - - return fig - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def save_checkpoint(filepath, obj): - print("Saving checkpoint to {}".format(filepath)) - torch.save(obj, filepath) - print("Complete.") - - -def scan_checkpoint(cp_dir, prefix): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) - if len(cp_list) == 0: - return None - return sorted(cp_list)[-1] - -def save_audio(audio, path, sr): - # wav: torch with 1d shape - audio = audio * MAX_WAV_VALUE - audio = 
audio.cpu().numpy().astype('int16') - write(path, sr, audio) \ No newline at end of file diff --git a/spaces/Laihiujin/OneFormer/deform_setup.sh b/spaces/Laihiujin/OneFormer/deform_setup.sh deleted file mode 100644 index a9e31922423a94acf918def8436a25876203d065..0000000000000000000000000000000000000000 --- a/spaces/Laihiujin/OneFormer/deform_setup.sh +++ /dev/null @@ -1,21 +0,0 @@ -#!/usr/bin/env bash - -# ln -s ./oneformer/modeling/pixel_decoder/ops/ ./ -# ls -# cd ops/ && bash make.sh && cd .. -echo '----------------------------------------------------------------' -echo '----------------------------------------------------------------' -pip3 freeze | grep MultiScaleDeformableAttention -pip3 freeze | grep torch -pip3 freeze | grep detectron2 -pip3 freeze | grep natten -echo '----------------------------------------------------------------' -echo '----------------------------------------------------------------' - -# echo '----------------------------------------------------------------' -# echo '----------------------------------------------------------------' -# cd /home/user/.pyenv/versions/3.8.15/lib/python3.8/site-packages -# ls -# ls | grep MultiScale -# echo '----------------------------------------------------------------' -# echo '----------------------------------------------------------------' diff --git a/spaces/Libra7578/Promt-to-Image-diffusions/README.md b/spaces/Libra7578/Promt-to-Image-diffusions/README.md deleted file mode 100644 index fd591f24e42e1d998f0ba10d76d094ae90c3140d..0000000000000000000000000000000000000000 --- a/spaces/Libra7578/Promt-to-Image-diffusions/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Magic Prompt -emoji: 🎆 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: lambdalabs/Promt-to-Image-diffusions ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Liu-LAB/GPT-academic/crazy_functions/test_project/cpp/cppipc/waiter.h b/spaces/Liu-LAB/GPT-academic/crazy_functions/test_project/cpp/cppipc/waiter.h deleted file mode 100644 index ee45fe3517be95ac1688a3e3540189edeb0d860c..0000000000000000000000000000000000000000 --- a/spaces/Liu-LAB/GPT-academic/crazy_functions/test_project/cpp/cppipc/waiter.h +++ /dev/null @@ -1,83 +0,0 @@ -#pragma once - -#include -#include -#include -#include - -#include "libipc/def.h" -#include "libipc/mutex.h" -#include "libipc/condition.h" -#include "libipc/platform/detail.h" - -namespace ipc { -namespace detail { - -class waiter { - ipc::sync::condition cond_; - ipc::sync::mutex lock_; - std::atomic quit_ {false}; - -public: - static void init(); - - waiter() = default; - waiter(char const *name) { - open(name); - } - - ~waiter() { - close(); - } - - bool valid() const noexcept { - return cond_.valid() && lock_.valid(); - } - - bool open(char const *name) noexcept { - quit_.store(false, std::memory_order_relaxed); - if (!cond_.open((std::string{"_waiter_cond_"} + name).c_str())) { - return false; - } - if (!lock_.open((std::string{"_waiter_lock_"} + name).c_str())) { - cond_.close(); - return false; - } - return valid(); - } - - void close() noexcept { - cond_.close(); - lock_.close(); - } - - template - bool wait_if(F &&pred, std::uint64_t tm = ipc::invalid_value) noexcept { - IPC_UNUSED_ std::lock_guard guard {lock_}; - while ([this, &pred] { - return !quit_.load(std::memory_order_relaxed) - && std::forward(pred)(); - }()) { - if (!cond_.wait(lock_, tm)) return 
false; - } - return true; - } - - bool notify() noexcept { - std::lock_guard{lock_}; // barrier - return cond_.notify(lock_); - } - - bool broadcast() noexcept { - std::lock_guard{lock_}; // barrier - return cond_.broadcast(lock_); - } - - bool quit_waiting() { - quit_.store(true, std::memory_order_release); - return broadcast(); - } -}; - -} // namespace detail -} // namespace ipc diff --git a/spaces/LucasCodeBreak/MusicGen/audiocraft/modules/conv.py b/spaces/LucasCodeBreak/MusicGen/audiocraft/modules/conv.py deleted file mode 100644 index 972938ab84712eb06e1b10cea25444eee51d6637..0000000000000000000000000000000000000000 --- a/spaces/LucasCodeBreak/MusicGen/audiocraft/modules/conv.py +++ /dev/null @@ -1,245 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math -import typing as tp -import warnings - -import torch -from torch import nn -from torch.nn import functional as F -from torch.nn.utils import spectral_norm, weight_norm - - -CONV_NORMALIZATIONS = frozenset(['none', 'weight_norm', 'spectral_norm', - 'time_group_norm']) - - -def apply_parametrization_norm(module: nn.Module, norm: str = 'none'): - assert norm in CONV_NORMALIZATIONS - if norm == 'weight_norm': - return weight_norm(module) - elif norm == 'spectral_norm': - return spectral_norm(module) - else: - # We already check was in CONV_NORMALIZATION, so any other choice - # doesn't need reparametrization. - return module - - -def get_norm_module(module: nn.Module, causal: bool = False, norm: str = 'none', **norm_kwargs): - """Return the proper normalization module. If causal is True, this will ensure the returned - module is causal, or return an error if the normalization doesn't support causal evaluation. - """ - assert norm in CONV_NORMALIZATIONS - if norm == 'time_group_norm': - if causal: - raise ValueError("GroupNorm doesn't support causal evaluation.") - assert isinstance(module, nn.modules.conv._ConvNd) - return nn.GroupNorm(1, module.out_channels, **norm_kwargs) - else: - return nn.Identity() - - -def get_extra_padding_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, - padding_total: int = 0) -> int: - """See `pad_for_conv1d`. - """ - length = x.shape[-1] - n_frames = (length - kernel_size + padding_total) / stride + 1 - ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total) - return ideal_length - length - - -def pad_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, padding_total: int = 0): - """Pad for a convolution to make sure that the last window is full. - Extra padding is added at the end. This is required to ensure that we can rebuild - an output of the same length, as otherwise, even with padding, some time steps - might get removed. - For instance, with total padding = 4, kernel size = 4, stride = 2: - 0 0 1 2 3 4 5 0 0 # (0s are padding) - 1 2 3 # (output frames of a convolution, last 0 is never used) - 0 0 1 2 3 4 5 0 # (output of tr. conv., but pos. 5 is going to get removed as padding) - 1 2 3 4 # once you removed padding, we are missing one time step ! - """ - extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total) - return F.pad(x, (0, extra_padding)) - - -def pad1d(x: torch.Tensor, paddings: tp.Tuple[int, int], mode: str = 'constant', value: float = 0.): - """Tiny wrapper around F.pad, just to allow for reflect padding on small input. 
- If this is the case, we insert extra 0 padding to the right before the reflection happen. - """ - length = x.shape[-1] - padding_left, padding_right = paddings - assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right) - if mode == 'reflect': - max_pad = max(padding_left, padding_right) - extra_pad = 0 - if length <= max_pad: - extra_pad = max_pad - length + 1 - x = F.pad(x, (0, extra_pad)) - padded = F.pad(x, paddings, mode, value) - end = padded.shape[-1] - extra_pad - return padded[..., :end] - else: - return F.pad(x, paddings, mode, value) - - -def unpad1d(x: torch.Tensor, paddings: tp.Tuple[int, int]): - """Remove padding from x, handling properly zero padding. Only for 1d! - """ - padding_left, padding_right = paddings - assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right) - assert (padding_left + padding_right) <= x.shape[-1] - end = x.shape[-1] - padding_right - return x[..., padding_left: end] - - -class NormConv1d(nn.Module): - """Wrapper around Conv1d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, causal: bool = False, norm: str = 'none', - norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.conv = apply_parametrization_norm(nn.Conv1d(*args, **kwargs), norm) - self.norm = get_norm_module(self.conv, causal, norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.conv(x) - x = self.norm(x) - return x - - -class NormConv2d(nn.Module): - """Wrapper around Conv2d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.conv = apply_parametrization_norm(nn.Conv2d(*args, **kwargs), norm) - self.norm = get_norm_module(self.conv, causal=False, norm=norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.conv(x) - x = self.norm(x) - return x - - -class NormConvTranspose1d(nn.Module): - """Wrapper around ConvTranspose1d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, causal: bool = False, norm: str = 'none', - norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.convtr = apply_parametrization_norm(nn.ConvTranspose1d(*args, **kwargs), norm) - self.norm = get_norm_module(self.convtr, causal, norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.convtr(x) - x = self.norm(x) - return x - - -class NormConvTranspose2d(nn.Module): - """Wrapper around ConvTranspose2d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.convtr = apply_parametrization_norm(nn.ConvTranspose2d(*args, **kwargs), norm) - self.norm = get_norm_module(self.convtr, causal=False, norm=norm, **norm_kwargs) - - def forward(self, x): - x = self.convtr(x) - x = self.norm(x) - return x - - -class StreamableConv1d(nn.Module): - """Conv1d with some builtin handling of asymmetric or causal padding - and normalization. 
- """ - def __init__(self, in_channels: int, out_channels: int, - kernel_size: int, stride: int = 1, dilation: int = 1, - groups: int = 1, bias: bool = True, causal: bool = False, - norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, - pad_mode: str = 'reflect'): - super().__init__() - # warn user on unusual setup between dilation and stride - if stride > 1 and dilation > 1: - warnings.warn('StreamableConv1d has been initialized with stride > 1 and dilation > 1' - f' (kernel_size={kernel_size} stride={stride}, dilation={dilation}).') - self.conv = NormConv1d(in_channels, out_channels, kernel_size, stride, - dilation=dilation, groups=groups, bias=bias, causal=causal, - norm=norm, norm_kwargs=norm_kwargs) - self.causal = causal - self.pad_mode = pad_mode - - def forward(self, x): - B, C, T = x.shape - kernel_size = self.conv.conv.kernel_size[0] - stride = self.conv.conv.stride[0] - dilation = self.conv.conv.dilation[0] - kernel_size = (kernel_size - 1) * dilation + 1 # effective kernel size with dilations - padding_total = kernel_size - stride - extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total) - if self.causal: - # Left padding for causal - x = pad1d(x, (padding_total, extra_padding), mode=self.pad_mode) - else: - # Asymmetric padding required for odd strides - padding_right = padding_total // 2 - padding_left = padding_total - padding_right - x = pad1d(x, (padding_left, padding_right + extra_padding), mode=self.pad_mode) - return self.conv(x) - - -class StreamableConvTranspose1d(nn.Module): - """ConvTranspose1d with some builtin handling of asymmetric or causal padding - and normalization. - """ - def __init__(self, in_channels: int, out_channels: int, - kernel_size: int, stride: int = 1, causal: bool = False, - norm: str = 'none', trim_right_ratio: float = 1., - norm_kwargs: tp.Dict[str, tp.Any] = {}): - super().__init__() - self.convtr = NormConvTranspose1d(in_channels, out_channels, kernel_size, stride, - causal=causal, norm=norm, norm_kwargs=norm_kwargs) - self.causal = causal - self.trim_right_ratio = trim_right_ratio - assert self.causal or self.trim_right_ratio == 1., \ - "`trim_right_ratio` != 1.0 only makes sense for causal convolutions" - assert self.trim_right_ratio >= 0. and self.trim_right_ratio <= 1. - - def forward(self, x): - kernel_size = self.convtr.convtr.kernel_size[0] - stride = self.convtr.convtr.stride[0] - padding_total = kernel_size - stride - - y = self.convtr(x) - - # We will only trim fixed padding. Extra padding from `pad_for_conv1d` would be - # removed at the very end, when keeping only the right length for the output, - # as removing it here would require also passing the length at the matching layer - # in the encoder. 
- if self.causal: - # Trim the padding on the right according to the specified ratio - # if trim_right_ratio = 1.0, trim everything from right - padding_right = math.ceil(padding_total * self.trim_right_ratio) - padding_left = padding_total - padding_right - y = unpad1d(y, (padding_left, padding_right)) - else: - # Asymmetric padding required for odd strides - padding_right = padding_total // 2 - padding_left = padding_total - padding_right - y = unpad1d(y, (padding_left, padding_right)) - return y diff --git a/spaces/MAPS-research/GEMRec-Gallery/css/style.css b/spaces/MAPS-research/GEMRec-Gallery/css/style.css deleted file mode 100644 index e8cf79c6ff1829ed9d09c8399a8d98fc28652d83..0000000000000000000000000000000000000000 --- a/spaces/MAPS-research/GEMRec-Gallery/css/style.css +++ /dev/null @@ -1,27 +0,0 @@ -div.row-widget.stRadio > div { - flex-direction: row; - align-items: stretch; -} - -div.row-widget.stRadio > div[role="radiogroup"] > label[data-baseweb="radio"] { - /*background-color: rgb(240, 242, 246);*/ - padding-right: 10px; - /*padding-left: 4px;*/ - padding-bottom: 3px; - margin: 4px 0px; - border-radius: 0; - border-bottom: 2px solid rgba(169, 169, 169, 0.3); - transition: border-bottom-color 0.2s ease 0s; -} - -/*change the background color of the parent label of the selected radio button*/ -div.row-widget.stRadio > div[role="radiogroup"] > label[data-baseweb="radio"]:has( > input[type="radio"]:checked) { - /*background-color: blanchedalmond;*/ - border-bottom: 2px solid red; - transition: border-bottom-color 0.2s ease 0s; -} - -/*hide the circle of the radio button*/ -div.row-widget.stRadio > div[role="radiogroup"] > label[data-baseweb="radio"] > div:first-child { - display: none; -} diff --git a/spaces/MCkernick/Image_Restoration_Colorization/Global/util/__init__.py b/spaces/MCkernick/Image_Restoration_Colorization/Global/util/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/MGLDZM/chgpt/static/js/windowHandler.js b/spaces/MGLDZM/chgpt/static/js/windowHandler.js deleted file mode 100644 index 4c8f7aa819940606a13c358e177d805cff4a5f22..0000000000000000000000000000000000000000 --- a/spaces/MGLDZM/chgpt/static/js/windowHandler.js +++ /dev/null @@ -1,218 +0,0 @@ -class WindowHandler{ - constructor(conversacion, index){ - this.index = index - this.template = $('
    '); - this.active = false; - this.mensaje = ""; - this.interacted = false - - this.cargarChat(conversacion) - - - this.ctx.on("chat:enviar", (event, params) => this.recalcularTextarea()); - this.ctx.on("chat:eliminar:pre", () => this.eliminarChat()); - - this.ctx.find(".input-text").keypress((event) => { - if (!event.shiftKey && event.keyCode === 13) { - this.manejadorEnviar(); - }}); - this.ctx.find(".input-send").click(() => this.manejadorEnviar()); - this.ctx.find(".input-text").keypress(() => this.recalcularTextarea()); - this.ctx.find(".input-text").keyup(() => this.recalcularTextarea()); - this.ctx.find(".input-text").keydown(() => this.recalcularTextarea()); - this.ctx.find(".input-delete").click(()=> this.ctx.trigger("chat:eliminar:pre")) - - this.ctx.on("precarga:inicio", (event, params) => { - this.respuestaInicio(params) - }); - this.ctx.on("precarga:iniciada", (event, params) => { - this.respuestaIniciada() - }); - this.ctx.on("precarga:status", (event, params) => { - this.respuestaStatus(...Object.values(params)) - }); - this.ctx.on("precarga:mensaje", (event, params) => { - this.respuestaMensaje(params)} - ); - this.ctx.on("precarga:error", (event, params) => { - this.respuestaError(params) - }); - - } - - cargarChat(conversacion){ - let tempTabSel = $(".tab-label-template").clone(); - tempTabSel.removeClass("tab-label-template") - tempTabSel.addClass("tab-label") - tempTabSel.find("div").text("Tab "+this.index); - tempTabSel.find("input").val(this.index); - if(this.index==0){ - tempTabSel.find("input").prop("checked", true); - } - $(".tabs").append(tempTabSel) - - let tempTab = $(".tab-template").clone(); - tempTab.removeClass("tab-template") - tempTab.addClass("tab") - $(".wrapper").append(tempTab) - this.ctx = tempTab - this.chatbox = this.ctx.find(".chat"); - - - for(let mensaje of conversacion){ - if(mensaje.role!="system"){ - let clone = this.template.clone(); - let texto = mensaje.content; - if(mensaje.role=="user") { - clone.addClass("me"); - clone.find("div p").text(texto.replace(/\n/g, "
    ")); - }else{ - texto = this.procesarTexto(texto); - clone.find("div p").html(texto); - } - this.chatbox.append(clone); - this.active = clone; - Prism.highlightAllUnder(this.active[0]) - this.active = false; - this.chatbox.scrollTop(this.chatbox[0].scrollHeight); - this.interacted=true - } - } - - - - - - } - - eliminarChat(){ - if(confirm("¿Estás seguro que quieres eliminar esta conversación?") == true){ - $(document).trigger("chat:eliminar", {ctx:this.ctx, index:this.index}) - - } - - } - - manejadorEnviar(){ - let mensaje = this.ctx.find(".input-text").val(); - if(mensaje==""){ - return false; - } - $(document).trigger("chat:enviar", {mensaje:mensaje, ctx:this.ctx}); - } - - recalcularTextarea(){ - this.ctx.find(".input-box").css("height", "30px"); - let height = parseInt((this.ctx.find(".input-text").prop('scrollHeight')+15)/15)*15; - this.ctx.find(".input-box").css("height", height+"px"); - height -= 30; - this.ctx.find(".chat").css("--textarea", height+"px"); - } - - procesarTexto(texto){ - - let resultado = ""; - let codigo = false; - for(let actual of texto.split("```")){ - if(codigo){ - let temp = actual.split("\n",1); - resultado += "
    "+temp+"
    "; - }else{ - resultado += $("
    ").text(actual).html().replace(/`([^`]+?)`/gm, "$1").replace(/\n/g, "
    "); - } - codigo = !codigo; - } - - return resultado - } - - - - respuestaInicio(mensaje){ - this.ctx.find(".input-text").val(""); - this.ctx.find("button").prop("disabled", true); - this.ctx.find("textarea").prop("disabled", true); - this.mensaje = "" - let clone = this.template.clone(); - clone.addClass("me"); - clone.find("div p").text(mensaje); - this.chatbox.append(clone); - - clone = this.template.clone(); - clone.find("div p").html('
    '); - this.chatbox.append(clone); - this.active = clone; - this.chatbox.scrollTop(this.chatbox[0].scrollHeight); - } - - respuestaIniciada(){ - this.active.find(".loader").addClass("firststage") - } - respuestaStatus(mensaje, modo=false){ - let temp = $("
    ") - temp.text(mensaje) - switch(modo){ - case "enlinea": - this.active.find("div p div:last-child").text(this.active.find("div p div:last-child").text() + mensaje); - break; - case "reemplazar": - this.active.find("div p div:not(.loader-wrap)").remove(); - this.active.find("div p").append(temp) - break; - default: - this.active.find("div p").append(temp) - this.chatbox.scrollTop(this.chatbox[0].scrollHeight); - - } - - } - - respuestaMensaje(mensaje){ - this.active.find("div p").html(""); - mensaje = this.procesarTexto(mensaje); - this.active.find("div p").html(mensaje); - Prism.highlightAllUnder(this.active[0]); - this.active = false; - this.ctx.find("button").prop("disabled", false); - this.ctx.find("textarea").prop("disabled", false); - this.ctx.find("textarea").focus(); - - this.chatbox.scrollTop(this.chatbox[0].scrollHeight); - this.interacted = true; - } - - respuestaError(error){ - this.ctx.find("button").prop("disabled", false); - this.ctx.find("textarea").prop("disabled", false); - this.ctx.find("textarea").focus(); - if(error.hasOwnProperty("responseJSON")){ - this.active.find("div p").html(error.responseJSON.detail) - }else{ - this.active.find("div p").html("El API no responde, la conexión pudo haberse caido") - } - - switch(error.status | 0){ - case 404: - this.active.addClass("error") - break; - case 408: - this.active.addClass("warning") - break; - default: - this.active.addClass("error") - - } - this.active = false; - this.chatbox.scrollTop(this.chatbox[0].scrollHeight) - - - - } - - - - -} \ No newline at end of file diff --git a/spaces/ML701G7/taim-gan/src/data/__init__.py b/spaces/ML701G7/taim-gan/src/data/__init__.py deleted file mode 100644 index aa29aef109004daac476c8f18628dd3525599103..0000000000000000000000000000000000000000 --- a/spaces/ML701G7/taim-gan/src/data/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -"""Dataset and custom collate function to load""" - -from .collate import custom_collate -from .datasets import TextImageDataset -from .tokenizer import TAIMGANTokenizer diff --git a/spaces/MMMMQZ/MQZGPT/modules/base_model.py b/spaces/MMMMQZ/MQZGPT/modules/base_model.py deleted file mode 100644 index 2b55623f6b0989f60d818be6e0e77f5948484b82..0000000000000000000000000000000000000000 --- a/spaces/MMMMQZ/MQZGPT/modules/base_model.py +++ /dev/null @@ -1,561 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import traceback - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum - -from .presets import * -from .llama_func import * -from .utils import * -from . 
import shared -from .config import retrieve_proxy - - -class ModelType(Enum): - Unknown = -1 - OpenAI = 0 - ChatGLM = 1 - LLaMA = 2 - XMChat = 3 - - @classmethod - def get_type(cls, model_name: str): - model_type = None - model_name_lower = model_name.lower() - if "gpt" in model_name_lower: - model_type = ModelType.OpenAI - elif "chatglm" in model_name_lower: - model_type = ModelType.ChatGLM - elif "llama" in model_name_lower or "alpaca" in model_name_lower: - model_type = ModelType.LLaMA - elif "xmchat" in model_name_lower: - model_type = ModelType.XMChat - else: - model_type = ModelType.Unknown - return model_type - - -class BaseLLMModel: - def __init__( - self, - model_name, - system_prompt="", - temperature=1.0, - top_p=1.0, - n_choices=1, - stop=None, - max_generation_token=None, - presence_penalty=0, - frequency_penalty=0, - logit_bias=None, - user="", - ) -> None: - self.history = [] - self.all_token_counts = [] - self.model_name = model_name - self.model_type = ModelType.get_type(model_name) - try: - self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name] - except KeyError: - self.token_upper_limit = DEFAULT_TOKEN_LIMIT - self.interrupted = False - self.system_prompt = system_prompt - self.api_key = None - self.need_api_key = False - self.single_turn = False - - self.temperature = temperature - self.top_p = top_p - self.n_choices = n_choices - self.stop_sequence = stop - self.max_generation_token = None - self.presence_penalty = presence_penalty - self.frequency_penalty = frequency_penalty - self.logit_bias = logit_bias - self.user_identifier = user - - def get_answer_stream_iter(self): - """stream predict, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - should return a generator, each time give the next word (str) in the answer - """ - logging.warning("stream predict not implemented, using at once predict instead") - response, _ = self.get_answer_at_once() - yield response - - def get_answer_at_once(self): - """predict at once, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - Should return: - the answer (str) - total token count (int) - """ - logging.warning("at once predict not implemented, using stream predict instead") - response_iter = self.get_answer_stream_iter() - count = 0 - for response in response_iter: - count += 1 - return response, sum(self.all_token_counts) + count - - def billing_info(self): - """get billing infomation, inplement if needed""" - logging.warning("billing info not implemented, using default") - return BILLING_NOT_APPLICABLE_MSG - - def count_token(self, user_input): - """get token count from input, implement if needed""" - logging.warning("token count not implemented, using default") - return len(user_input) - - def stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""): - def get_return_value(): - return chatbot, status_text - - status_text = i18n("开始实时传输回答……") - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - logging.debug(f"输入token计数: {user_token_count}") - - stream_iter = self.get_answer_stream_iter() - - for partial_text in stream_iter: - chatbot[-1] = (chatbot[-1][0], partial_text + display_append) - self.all_token_counts[-1] += 1 - status_text = self.token_message() - yield get_return_value() - if self.interrupted: - self.recover() - break - 
self.history.append(construct_assistant(partial_text)) - - def next_chatbot_at_once(self, inputs, chatbot, fake_input=None, display_append=""): - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - if fake_input is not None: - user_token_count = self.count_token(fake_input) - else: - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - ai_reply, total_token_count = self.get_answer_at_once() - self.history.append(construct_assistant(ai_reply)) - if fake_input is not None: - self.history[-2] = construct_user(fake_input) - chatbot[-1] = (chatbot[-1][0], ai_reply + display_append) - if fake_input is not None: - self.all_token_counts[-1] += count_token(construct_assistant(ai_reply)) - else: - self.all_token_counts[-1] = total_token_count - sum(self.all_token_counts) - status_text = self.token_message() - return chatbot, status_text - - def handle_file_upload(self, files, chatbot): - """if the model accepts multi modal input, implement this function""" - status = gr.Markdown.update() - if files: - construct_index(self.api_key, file_src=files) - status = "索引构建完成" - return gr.Files.update(), chatbot, status - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = None - display_append = [] - limited_context = False - fake_inputs = real_inputs - if files: - from llama_index.indices.vector_store.base_query import GPTVectorStoreIndexQuery - from llama_index.indices.query.schema import QueryBundle - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from langchain.chat_models import ChatOpenAI - from llama_index import ( - GPTSimpleVectorIndex, - ServiceContext, - LangchainEmbedding, - OpenAIEmbedding, - ) - limited_context = True - msg = "加载索引中……" - logging.info(msg) - # yield chatbot + [(inputs, "")], msg - index = construct_index(self.api_key, file_src=files) - assert index is not None, "获取索引失败" - msg = "索引获取成功,生成回答中……" - logging.info(msg) - if local_embedding or self.model_type != ModelType.OpenAI: - embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name = "sentence-transformers/distiluse-base-multilingual-cased-v2")) - else: - embed_model = OpenAIEmbedding() - # yield chatbot + [(inputs, "")], msg - with retrieve_proxy(): - prompt_helper = PromptHelper( - max_input_size=4096, - num_output=5, - max_chunk_overlap=20, - chunk_size_limit=600, - ) - from llama_index import ServiceContext - - service_context = ServiceContext.from_defaults( - prompt_helper=prompt_helper, embed_model=embed_model - ) - query_object = GPTVectorStoreIndexQuery( - index.index_struct, - service_context=service_context, - similarity_top_k=5, - vector_store=index._vector_store, - docstore=index._docstore, - ) - query_bundle = QueryBundle(real_inputs) - nodes = query_object.retrieve(query_bundle) - reference_results = [n.node.text for n in nodes] - reference_results = add_source_numbers(reference_results, use_source=False) - display_append = add_details(reference_results) - display_append = "\n\n" + "".join(display_append) - real_inputs = ( - replace_today(PROMPT_TEMPLATE) - .replace("{query_str}", real_inputs) - .replace("{context_str}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - elif use_websearch: - limited_context = True - search_results = ddg(real_inputs, max_results=5) - reference_results = [] - for idx, result in enumerate(search_results): - logging.debug(f"搜索结果{idx + 1}:{result}") - domain_name = 
urllib3.util.parse_url(result["href"]).host - reference_results.append([result["body"], result["href"]]) - display_append.append( - # f"{idx+1}. [{domain_name}]({result['href']})\n" - f"
    \n" - ) - reference_results = add_source_numbers(reference_results) - display_append = "
      \n\n" + "".join(display_append) + "
    " - real_inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", real_inputs) - .replace("{web_results}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - else: - display_append = "" - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def predict( - self, - inputs, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - should_check_token_count=True, - ): # repetition_penalty, top_k - - status_text = "开始生成回答……" - logging.info( - "输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL - ) - if should_check_token_count: - yield chatbot + [(inputs, "")], status_text - if reply_language == "跟随问题语言(不稳定)": - reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch." - - limited_context, fake_inputs, display_append, inputs, chatbot = self.prepare_inputs(real_inputs=inputs, use_websearch=use_websearch, files=files, reply_language=reply_language, chatbot=chatbot) - yield chatbot + [(fake_inputs, "")], status_text - - if ( - self.need_api_key and - self.api_key is None - and not shared.state.multi_api_key - ): - status_text = STANDARD_ERROR_MSG + NO_APIKEY_MSG - logging.info(status_text) - chatbot.append((inputs, "")) - if len(self.history) == 0: - self.history.append(construct_user(inputs)) - self.history.append("") - self.all_token_counts.append(0) - else: - self.history[-2] = construct_user(inputs) - yield chatbot + [(inputs, "")], status_text - return - elif len(inputs.strip()) == 0: - status_text = STANDARD_ERROR_MSG + NO_INPUT_MSG - logging.info(status_text) - yield chatbot + [(inputs, "")], status_text - return - - if self.single_turn: - self.history = [] - self.all_token_counts = [] - self.history.append(construct_user(inputs)) - - try: - if stream: - logging.debug("使用流式传输") - iter = self.stream_next_chatbot( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - for chatbot, status_text in iter: - yield chatbot, status_text - else: - logging.debug("不使用流式传输") - chatbot, status_text = self.next_chatbot_at_once( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - yield chatbot, status_text - except Exception as e: - traceback.print_exc() - status_text = STANDARD_ERROR_MSG + str(e) - yield chatbot, status_text - - if len(self.history) > 1 and self.history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{self.history[-1]['content']}" - + colorama.Style.RESET_ALL - ) - - if limited_context: - # self.history = self.history[-4:] - # self.all_token_counts = self.all_token_counts[-2:] - self.history = [] - self.all_token_counts = [] - - max_token = self.token_upper_limit - TOKEN_OFFSET - - if sum(self.all_token_counts) > max_token and should_check_token_count: - count = 0 - while ( - sum(self.all_token_counts) - > self.token_upper_limit * REDUCE_TOKEN_FACTOR - and sum(self.all_token_counts) > 0 - ): - count += 1 - del self.all_token_counts[0] - del self.history[:2] - logging.info(status_text) - status_text = f"为了防止token超限,模型忘记了早期的 {count} 轮对话" - yield chatbot, status_text - - def retry( - self, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - ): - logging.debug("重试中……") - if len(self.history) > 0: - inputs = self.history[-2]["content"] - del self.history[-2:] - self.all_token_counts.pop() - elif len(chatbot) > 0: - inputs = chatbot[-1][0] - else: - yield chatbot, 
f"{STANDARD_ERROR_MSG}上下文是空的" - return - - iter = self.predict( - inputs, - chatbot, - stream=stream, - use_websearch=use_websearch, - files=files, - reply_language=reply_language, - ) - for x in iter: - yield x - logging.debug("重试完毕") - - # def reduce_token_size(self, chatbot): - # logging.info("开始减少token数量……") - # chatbot, status_text = self.next_chatbot_at_once( - # summarize_prompt, - # chatbot - # ) - # max_token_count = self.token_upper_limit * REDUCE_TOKEN_FACTOR - # num_chat = find_n(self.all_token_counts, max_token_count) - # logging.info(f"previous_token_count: {self.all_token_counts}, keeping {num_chat} chats") - # chatbot = chatbot[:-1] - # self.history = self.history[-2*num_chat:] if num_chat > 0 else [] - # self.all_token_counts = self.all_token_counts[-num_chat:] if num_chat > 0 else [] - # msg = f"保留了最近{num_chat}轮对话" - # logging.info(msg) - # logging.info("减少token数量完毕") - # return chatbot, msg + "," + self.token_message(self.all_token_counts if len(self.all_token_counts) > 0 else [0]) - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_token_upper_limit(self, new_upper_limit): - self.token_upper_limit = new_upper_limit - print(f"token上限设置为{new_upper_limit}") - - def set_temperature(self, new_temperature): - self.temperature = new_temperature - - def set_top_p(self, new_top_p): - self.top_p = new_top_p - - def set_n_choices(self, new_n_choices): - self.n_choices = new_n_choices - - def set_stop_sequence(self, new_stop_sequence: str): - new_stop_sequence = new_stop_sequence.split(",") - self.stop_sequence = new_stop_sequence - - def set_max_tokens(self, new_max_tokens): - self.max_generation_token = new_max_tokens - - def set_presence_penalty(self, new_presence_penalty): - self.presence_penalty = new_presence_penalty - - def set_frequency_penalty(self, new_frequency_penalty): - self.frequency_penalty = new_frequency_penalty - - def set_logit_bias(self, logit_bias): - logit_bias = logit_bias.split() - bias_map = {} - encoding = tiktoken.get_encoding("cl100k_base") - for line in logit_bias: - word, bias_amount = line.split(":") - if word: - for token in encoding.encode(word): - bias_map[token] = float(bias_amount) - self.logit_bias = bias_map - - def set_user_identifier(self, new_user_identifier): - self.user_identifier = new_user_identifier - - def set_system_prompt(self, new_system_prompt): - self.system_prompt = new_system_prompt - - def set_key(self, new_access_key): - self.api_key = new_access_key.strip() - msg = i18n("API密钥更改为了") + hide_middle_chars(self.api_key) - logging.info(msg) - return self.api_key, msg - - def set_single_turn(self, new_single_turn): - self.single_turn = new_single_turn - - def reset(self): - self.history = [] - self.all_token_counts = [] - self.interrupted = False - return [], self.token_message([0]) - - def delete_first_conversation(self): - if self.history: - del self.history[:2] - del self.all_token_counts[0] - return self.token_message() - - def delete_last_conversation(self, chatbot): - if len(chatbot) > 0 and STANDARD_ERROR_MSG in chatbot[-1][1]: - msg = "由于包含报错信息,只删除chatbot记录" - chatbot.pop() - return chatbot, self.history - if len(self.history) > 0: - self.history.pop() - self.history.pop() - if len(chatbot) > 0: - msg = "删除了一组chatbot对话" - chatbot.pop() - if len(self.all_token_counts) > 0: - msg = "删除了一组对话的token计数记录" - self.all_token_counts.pop() - msg = "删除了一组对话" - return chatbot, msg - - def token_message(self, token_lst=None): - if token_lst is None: - token_lst = 
self.all_token_counts - token_sum = 0 - for i in range(len(token_lst)): - token_sum += sum(token_lst[: i + 1]) - return i18n("Token 计数: ") + f"{sum(token_lst)}" + i18n(",本次对话累计消耗了 ") + f"{token_sum} tokens" - - def save_chat_history(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def export_markdown(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def load_chat_history(self, filename, chatbot, user_name): - logging.debug(f"{user_name} 加载对话历史中……") - if type(filename) != str: - filename = filename.name - try: - with open(os.path.join(HISTORY_DIR, user_name, filename), "r") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - # 没有对话历史 - pass - logging.debug(f"{user_name} 加载对话历史完毕") - self.history = json_s["history"] - return filename, json_s["system"], json_s["chatbot"] - except FileNotFoundError: - logging.warning(f"{user_name} 没有找到对话历史文件,不执行任何操作") - return filename, self.system_prompt, chatbot - - def like(self): - """like the last response, implement if needed - """ - return gr.update() - - def dislike(self): - """dislike the last response, implement if needed - """ - return gr.update() diff --git a/spaces/MWilinski/bot/tests/bot/__init__.py b/spaces/MWilinski/bot/tests/bot/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Madhur-01/text-summarizer/README.md b/spaces/Madhur-01/text-summarizer/README.md deleted file mode 100644 index 4e610b936749e058b902a777e231faee318f4e6b..0000000000000000000000000000000000000000 --- a/spaces/Madhur-01/text-summarizer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text Summarizer -emoji: 🐢 -colorFrom: indigo -colorTo: indigo -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/modeling/text/text_encoder.py b/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/modeling/text/text_encoder.py deleted file mode 100644 index 3ec5090c290ee5ecf1dd49915b70d6b4cc2b84d9..0000000000000000000000000000000000000000 --- a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/detic/modeling/text/text_encoder.py +++ /dev/null @@ -1,189 +0,0 @@ -# This code is modified from https://github.com/openai/CLIP/blob/main/clip/clip.py -# Modified by Xingyi Zhou -# The original code is under MIT license -# Copyright (c) Facebook, Inc. and its affiliates. 
-from typing import Union, List -from collections import OrderedDict -import torch -from torch import nn -import torch - -from clip.simple_tokenizer import SimpleTokenizer as _Tokenizer - -__all__ = ["tokenize"] - -count = 0 - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - ret = super().forward(x.type(torch.float32)) - return ret.type(orig_type) - - -class QuickGELU(nn.Module): - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class ResidualAttentionBlock(nn.Module): - def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None): - super().__init__() - - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, d_model * 4)), - ("gelu", QuickGELU()), - ("c_proj", nn.Linear(d_model * 4, d_model)) - ])) - self.ln_2 = LayerNorm(d_model) - self.attn_mask = attn_mask - - def attention(self, x: torch.Tensor): - self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None - return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0] - - def forward(self, x: torch.Tensor): - x = x + self.attention(self.ln_1(x)) - x = x + self.mlp(self.ln_2(x)) - return x - - -class Transformer(nn.Module): - def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None): - super().__init__() - self.width = width - self.layers = layers - self.resblocks = nn.Sequential( - *[ResidualAttentionBlock(width, heads, attn_mask) \ - for _ in range(layers)]) - - def forward(self, x: torch.Tensor): - return self.resblocks(x) - -class CLIPTEXT(nn.Module): - def __init__(self, - embed_dim=512, - # text - context_length=77, - vocab_size=49408, - transformer_width=512, - transformer_heads=8, - transformer_layers=12 - ): - super().__init__() - - self._tokenizer = _Tokenizer() - self.context_length = context_length - - self.transformer = Transformer( - width=transformer_width, - layers=transformer_layers, - heads=transformer_heads, - attn_mask=self.build_attention_mask() - ) - - self.vocab_size = vocab_size - self.token_embedding = nn.Embedding(vocab_size, transformer_width) - self.positional_embedding = nn.Parameter(torch.empty(self.context_length, transformer_width)) - self.ln_final = LayerNorm(transformer_width) - - self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim)) - # self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - - self.initialize_parameters() - - def initialize_parameters(self): - nn.init.normal_(self.token_embedding.weight, std=0.02) - nn.init.normal_(self.positional_embedding, std=0.01) - - proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5) - attn_std = self.transformer.width ** -0.5 - fc_std = (2 * self.transformer.width) ** -0.5 - for block in self.transformer.resblocks: - nn.init.normal_(block.attn.in_proj_weight, std=attn_std) - nn.init.normal_(block.attn.out_proj.weight, std=proj_std) - nn.init.normal_(block.mlp.c_fc.weight, std=fc_std) - nn.init.normal_(block.mlp.c_proj.weight, std=proj_std) - - if self.text_projection is not None: - nn.init.normal_(self.text_projection, std=self.transformer.width ** -0.5) - - def build_attention_mask(self): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = 
torch.empty(self.context_length, self.context_length) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - @property - def device(self): - return self.text_projection.device - - @property - def dtype(self): - return self.text_projection.dtype - - def tokenize(self, - texts: Union[str, List[str]], \ - context_length: int = 77) -> torch.LongTensor: - """ - """ - if isinstance(texts, str): - texts = [texts] - - sot_token = self._tokenizer.encoder["<|startoftext|>"] - eot_token = self._tokenizer.encoder["<|endoftext|>"] - all_tokens = [[sot_token] + self._tokenizer.encode(text) + [eot_token] for text in texts] - result = torch.zeros(len(all_tokens), context_length, dtype=torch.long) - - for i, tokens in enumerate(all_tokens): - if len(tokens) > context_length: - st = torch.randint( - len(tokens) - context_length + 1, (1,))[0].item() - tokens = tokens[st: st + context_length] - # raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}") - result[i, :len(tokens)] = torch.tensor(tokens) - - return result - - def encode_text(self, text): - x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model] - x = x + self.positional_embedding.type(self.dtype) - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_final(x).type(self.dtype) - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection - return x - - def forward(self, captions): - ''' - captions: list of strings - ''' - text = self.tokenize(captions).to(self.device) # B x L x D - features = self.encode_text(text) # B x D - return features - - -def build_text_encoder(pretrain=True): - text_encoder = CLIPTEXT() - if pretrain: - import clip - pretrained_model, _ = clip.load("ViT-B/32", device='cpu') - state_dict = pretrained_model.state_dict() - to_delete_keys = ["logit_scale", "input_resolution", \ - "context_length", "vocab_size"] + \ - [k for k in state_dict.keys() if k.startswith('visual.')] - for k in to_delete_keys: - if k in state_dict: - del state_dict[k] - print('Loading pretrained CLIP') - text_encoder.load_state_dict(state_dict) - # import pdb; pdb.set_trace() - return text_encoder \ No newline at end of file diff --git a/spaces/MedicalAILabo/Xp-age/lib/options.py b/spaces/MedicalAILabo/Xp-age/lib/options.py deleted file mode 100644 index e75af536f08ca708e190524e78c72a3af6271f6d..0000000000000000000000000000000000000000 --- a/spaces/MedicalAILabo/Xp-age/lib/options.py +++ /dev/null @@ -1,655 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - -import argparse -from distutils.util import strtobool -from pathlib import Path -import pandas as pd -import json -import torch -from .logger import BaseLogger -from typing import List, Dict, Tuple, Union - - -logger = BaseLogger.get_logger(__name__) - - -class Options: - """ - Class for options. - """ - def __init__(self, datetime: str = None, isTrain: bool = None) -> None: - """ - Args: - datetime (str, optional): date time Args: - isTrain (bool, optional): Variable indicating whether training or not. Defaults to None. - """ - self.parser = argparse.ArgumentParser(description='Options for training or test') - - # CSV - self.parser.add_argument('--csvpath', type=str, required=True, help='path to csv for training or test') - - # GPU Ids - self.parser.add_argument('--gpu_ids', type=str, default='cpu', help='gpu ids: e.g. 0, 0-1-2, 0-2. 
Use cpu for CPU (Default: cpu)') - - if isTrain: - # Task - self.parser.add_argument('--task', type=str, required=True, choices=['classification', 'regression', 'deepsurv'], help='Task') - - # Model - self.parser.add_argument('--model', type=str, required=True, help='model: MLP, CNN, ViT, or MLP+(CNN or ViT)') - self.parser.add_argument('--pretrained', type=strtobool, default=False, help='For use of pretrained model(CNN or ViT)') - - # Training and Internal validation - self.parser.add_argument('--criterion', type=str, required=True, choices=['CEL', 'MSE', 'RMSE', 'MAE', 'NLL'], help='criterion') - self.parser.add_argument('--optimizer', type=str, default='Adam', choices=['SGD', 'Adadelta', 'RMSprop', 'Adam', 'RAdam'], help='optimizer') - self.parser.add_argument('--lr', type=float, metavar='N', help='learning rate') - self.parser.add_argument('--epochs', type=int, default=10, metavar='N', help='number of epochs (Default: 10)') - - # Batch size - self.parser.add_argument('--batch_size', type=int, required=True, metavar='N', help='batch size in training') - - # Preprocess for image - self.parser.add_argument('--augmentation', type=str, default='no', choices=['xrayaug', 'trivialaugwide', 'randaug', 'no'], help='kind of augmentation') - self.parser.add_argument('--normalize_image', type=str, choices=['yes', 'no'], default='yes', help='image normalization: yes, no (Default: yes)') - - # Sampler - self.parser.add_argument('--sampler', type=str, default='no', choices=['yes', 'no'], help='sample data in training or not, yes or no') - - # Input channel - self.parser.add_argument('--in_channel', type=int, required=True, choices=[1, 3], help='channel of input image') - self.parser.add_argument('--vit_image_size', type=int, default=0, help='input image size for ViT. Set 0 if not used ViT (Default: 0)') - - # Weight saving strategy - self.parser.add_argument('--save_weight_policy', type=str, choices=['best', 'each'], default='best', help='Save weight policy: best, or each(ie. save each time loss decreases when multi-label output) (Default: best)') - - else: - # Directory of weight at training - self.parser.add_argument('--weight_dir', type=str, default=None, help='directory of weight to be used when test. If None, the latest one is selected') - - # Test bash size - self.parser.add_argument('--test_batch_size', type=int, default=1, metavar='N', help='batch size for test (Default: 1)') - - # Splits for test - self.parser.add_argument('--test_splits', type=str, default='train-val-test', help='splits for test: e.g. test, val-test, train-val-test. (Default: train-val-test)') - - self.args = self.parser.parse_args() - - if datetime is not None: - self.args.datetime = datetime - - assert isinstance(isTrain, bool), 'isTrain should be bool.' - self.args.isTrain = isTrain - - def get_args(self) -> argparse.Namespace: - """ - Return arguments. - - Returns: - argparse.Namespace: arguments - """ - return self.args - - -class CSVParser: - """ - Class to get information of csv and cast csv. 
- """ - def __init__(self, csvpath: str, task: str, isTrain: bool = None) -> None: - """ - Args: - csvpath (str): path to csv - task (str): task - isTrain (bool): if training or not - """ - self.csvpath = csvpath - self.task = task - - _df_source = pd.read_csv(self.csvpath) - _df_source = _df_source[_df_source['split'] != 'exclude'] - - self.input_list = list(_df_source.columns[_df_source.columns.str.startswith('input')]) - self.label_list = list(_df_source.columns[_df_source.columns.str.startswith('label')]) - if self.task == 'deepsurv': - _period_name_list = list(_df_source.columns[_df_source.columns.str.startswith('period')]) - assert (len(_period_name_list) == 1), f"One column of period should be contained in {self.csvpath} when deepsurv." - self.period_name = _period_name_list[0] - - _df_source = self._cast(_df_source, self.task) - - # If no column of group, add it. - if 'group' not in _df_source.columns: - _df_source = _df_source.assign(group='all') - - self.df_source = _df_source - - if isTrain: - self.mlp_num_inputs = len(self.input_list) - self.num_outputs_for_label = self._define_num_outputs_for_label(self.df_source, self.label_list, self.task) - - def _cast(self, df_source: pd.DataFrame, task: str) -> pd.DataFrame: - """ - Make dictionary of cast depending on task. - - Args: - df_source (pd.DataFrame): excluded DataFrame - task: (str): task - - Returns: - DataFrame: csv excluded and cast depending on task - """ - _cast_input = {input_name: float for input_name in self.input_list} - - if task == 'classification': - _cast_label = {label_name: int for label_name in self.label_list} - _casts = {**_cast_input, **_cast_label} - df_source = df_source.astype(_casts) - return df_source - - elif task == 'regression': - _cast_label = {label_name: float for label_name in self.label_list} - _casts = {**_cast_input, **_cast_label} - df_source = df_source.astype(_casts) - return df_source - - elif task == 'deepsurv': - _cast_label = {label_name: int for label_name in self.label_list} - _cast_period = {self.period_name: int} - _casts = {**_cast_input, **_cast_label, **_cast_period} - df_source = df_source.astype(_casts) - return df_source - - else: - raise ValueError(f"Invalid task: {self.task}.") - - def _define_num_outputs_for_label(self, df_source: pd.DataFrame, label_list: List[str], task :str) -> Dict[str, int]: - """ - Define the number of outputs for each label. - - Args: - df_source (pd.DataFrame): DataFrame of csv - label_list (List[str]): list of labels - task: str - - Returns: - Dict[str, int]: dictionary of the number of outputs for each label - eg. - classification: _num_outputs_for_label = {label_A: 2, label_B: 3, ...} - regression, deepsurv: _num_outputs_for_label = {label_A: 1, label_B: 1, ...} - deepsurv: _num_outputs_for_label = {label_A: 1} - """ - if task == 'classification': - _num_outputs_for_label = {label_name: df_source[label_name].nunique() for label_name in label_list} - return _num_outputs_for_label - - elif (task == 'regression') or (task == 'deepsurv'): - _num_outputs_for_label = {label_name: 1 for label_name in label_list} - return _num_outputs_for_label - - else: - raise ValueError(f"Invalid task: {task}.") - - -def _parse_model(model_name: str) -> Tuple[Union[str, None], Union[str, None]]: - """ - Parse model name. - - Args: - model_name (str): model name (eg. MLP, ResNey18, or MLP+ResNet18) - - Returns: - Tuple[str, str]: MLP, CNN or Vision Transformer name - eg. 
'MLP', 'ResNet18', 'MLP+ResNet18' -> - ['MLP'], ['ResNet18'], ['MLP', 'ResNet18'] - """ - _model = model_name.split('+') - mlp = 'MLP' if 'MLP' in _model else None - _net = [_n for _n in _model if _n != 'MLP'] - net = _net[0] if _net != [] else None - return mlp, net - - -def _parse_gpu_ids(gpu_ids: str) -> List[int]: - """ - Parse GPU ids concatenated with '-' to list of integers of GPU ids. - eg. '0-1-2' -> [0, 1, 2], '-1' -> [] - - Args: - gpu_ids (str): GPU Ids - - Returns: - List[int]: list of GPU ids - """ - if (gpu_ids == 'cpu') or (gpu_ids == 'cpu\r'): - str_ids = [] - else: - str_ids = gpu_ids.split('-') - _gpu_ids = [] - for str_id in str_ids: - id = int(str_id) - if id >= 0: - _gpu_ids.append(id) - return _gpu_ids - - -def _get_latest_weight_dir() -> str: - """ - Return the latest path to directory of weight made at training. - - Returns: - str: path to directory of the latest weight - eg. 'results//trials/2022-09-30-15-56-60/weights' - """ - _weight_dirs = list(Path('results').glob('*/trials/*/weights')) - assert (_weight_dirs != []), 'No directory of weight.' - weight_dir = max(_weight_dirs, key=lambda weight_dir: weight_dir.stat().st_mtime) - return str(weight_dir) - - -def _collect_weight_paths(weight_dir: str) -> List[str]: - """ - Return list of weight paths. - - Args: - weight_dir (str): path to directory of weights - - Returns: - List[str]: list of weight paths - """ - _weight_paths = list(Path(weight_dir).glob('*.pt')) - assert _weight_paths != [], f"No weight in {weight_dir}." - _weight_paths.sort(key=lambda path: path.stat().st_mtime) - _weight_paths = [str(weight_path) for weight_path in _weight_paths] - return _weight_paths - - -class ParamTable: - """ - Class to make table to dispatch parameters by group. - """ - def __init__(self) -> None: - # groups - # key is abbreviation, value is group name - self.groups = { - 'mo': 'model', - 'dl': 'dataloader', - 'trc': 'train_conf', - 'tsc': 'test_conf', - 'sa': 'save', - 'lo': 'load', - 'trp': 'train_print', - 'tsp': 'test_print' - } - - mo = self.groups['mo'] - dl = self.groups['dl'] - trc = self.groups['trc'] - tsc = self.groups['tsc'] - sa = self.groups['sa'] - lo = self.groups['lo'] - trp = self.groups['trp'] - tsp = self.groups['tsp'] - - # The below shows that which group each parameter dispatches to. 
- self.dispatch = { - 'datetime': [sa], - 'project': [sa, trp, tsp], - 'csvpath': [sa, trp, tsp], - 'task': [dl, tsc, sa, lo, trp, tsp], - 'isTrain': [dl, trp, tsp], - - 'model': [sa, lo, trp, tsp], - 'vit_image_size': [mo, sa, lo, trp, tsp], - 'pretrained': [mo, sa, trp], - 'mlp': [mo, dl], - 'net': [mo, dl], - - 'weight_dir': [tsc, tsp], - 'weight_paths': [tsc], - - 'criterion': [trc, sa, trp], - 'optimizer': [trc, sa, trp], - 'lr': [trc, sa, trp], - 'epochs': [trc, sa, trp], - - 'batch_size': [dl, sa, trp], - 'test_batch_size': [dl, tsp], - 'test_splits': [tsc, tsp], - - 'in_channel': [mo, dl, sa, lo, trp, tsp], - 'normalize_image': [dl, sa, lo, trp, tsp], - 'augmentation': [dl, sa, trp], - 'sampler': [dl, sa, trp], - - 'df_source': [dl], - 'label_list': [dl, trc, sa, lo], - 'input_list': [dl, sa, lo], - 'period_name': [dl, sa, lo], - 'mlp_num_inputs': [mo, sa, lo], - 'num_outputs_for_label': [mo, sa, lo, tsc], - - 'save_weight_policy': [sa, trp, trc], - 'scaler_path': [dl, tsp], - 'save_datetime_dir': [trc, tsc, trp, tsp], - - 'gpu_ids': [trc, tsc, sa, trp, tsp], - 'device': [mo, trc, tsc], - 'dataset_info': [trc, sa, trp, tsp] - } - - self.table = self._make_table() - - def _make_table(self) -> pd.DataFrame: - """ - Make table to dispatch parameters by group. - - Returns: - pd.DataFrame: table which shows that which group each parameter belongs to. - """ - df_table = pd.DataFrame([], index=self.dispatch.keys(), columns=self.groups.values()).fillna('no') - for param, grps in self.dispatch.items(): - for grp in grps: - df_table.loc[param, grp] = 'yes' - - df_table = df_table.reset_index() - df_table = df_table.rename(columns={'index': 'parameter'}) - return df_table - - def get_by_group(self, group_name: str) -> List[str]: - """ - Return list of parameters which belong to group - - Args: - group_name (str): group name - - Returns: - List[str]: list of parameters - """ - _df_table = self.table - _param_names = _df_table[_df_table[group_name] == 'yes']['parameter'].tolist() - return _param_names - - -Param_Table = ParamTable() - - -class ParamSet: - """ - Class to store required parameters for each group. - """ - pass - - -def _dispatch_by_group(args: argparse.Namespace, group_name: str) -> ParamSet: - """ - Dispatch parameters depending on group. - - Args: - args (argparse.Namespace): arguments - group_name (str): group - - Returns: - ParamSet: class containing parameters for group - """ - _param_names = Param_Table.get_by_group(group_name) - param_set = ParamSet() - for param_name in _param_names: - if hasattr(args, param_name): - _arg = getattr(args, param_name) - setattr(param_set, param_name, _arg) - return param_set - - -def save_parameter(params: ParamSet, save_path: str) -> None: - """ - Save parameters. - - Args: - params (ParamSet): parameters - - save_path (str): save path for parameters - """ - _saved = {_param: _arg for _param, _arg in vars(params).items()} - save_dir = Path(save_path).parents[0] - save_dir.mkdir(parents=True, exist_ok=True) - with open(save_path, 'w') as f: - json.dump(_saved, f, indent=4) - - -def _retrieve_parameter(parameter_path: str) -> Dict[str, Union[str, int, float]]: - """ - Retrieve only parameters required at test from parameters at training. 
- - Args: - parameter_path (str): path to parameter_path - - Returns: - Dict[str, Union[str, int, float]]: parameters at training - """ - with open(parameter_path) as f: - params = json.load(f) - - _required = Param_Table.get_by_group('load') - params = {p: v for p, v in params.items() if p in _required} - return params - - -def print_parameter(params: ParamSet) -> None: - """ - Print parameters. - - Args: - params (ParamSet): parameters - """ - - LINE_LENGTH = 82 - - if params.isTrain: - phase = 'Training' - else: - phase = 'Test' - - _header = f" Configuration of {phase} " - _padding = (LINE_LENGTH - len(_header) + 1) // 2 # round up - _header = ('-' * _padding) + _header + ('-' * _padding) + '\n' - - _footer = ' End ' - _padding = (LINE_LENGTH - len(_footer) + 1) // 2 - _footer = ('-' * _padding) + _footer + ('-' * _padding) + '\n' - - message = '' - message += _header - - _params_dict = vars(params) - del _params_dict['isTrain'] - for _param, _arg in _params_dict.items(): - _str_arg = _arg2str(_param, _arg) - message += f"{_param:>30}: {_str_arg:<40}\n" - - message += _footer - logger.info(message) - - -def _arg2str(param: str, arg: Union[str, int, float]) -> str: - """ - Convert argument to string. - - Args: - param (str): parameter - arg (Union[str, int, float]): argument - - Returns: - str: strings of argument - """ - if param == 'lr': - if arg is None: - str_arg = 'Default' - else: - str_arg = str(param) - return str_arg - elif param == 'gpu_ids': - if arg == []: - str_arg = 'CPU selected' - else: - str_arg = f"{arg} (Primary GPU:{arg[0]})" - return str_arg - elif param == 'test_splits': - str_arg = ', '.join(arg) - return str_arg - elif param == 'dataset_info': - str_arg = '' - for i, (split, total) in enumerate(arg.items()): - if i < len(arg) - 1: - str_arg += (f"{split}_data={total}, ") - else: - str_arg += (f"{split}_data={total}") - return str_arg - else: - if arg is None: - str_arg = 'No need' - else: - str_arg = str(arg) - return str_arg - - -def _check_if_valid_criterion(task: str = None, criterion: str = None) -> None: - """ - Check if criterion is valid. - - Args: - task (str): task - criterion (str): criterion - """ - valid_criterion = { - 'classification': ['CEL'], - 'regression': ['MSE', 'RMSE', 'MAE'], - 'deepsurv': ['NLL'] - } - if criterion in valid_criterion[task]: - pass - else: - raise ValueError(f"Invalid criterion for task: task={task}, criterion={criterion}.") - - -def _train_parse(args: argparse.Namespace) -> Dict[str, ParamSet]: - """ - Parse parameters required at training. - - Args: - args (argparse.Namespace): arguments - - Returns: - Dict[str, ParamSet]: parameters dispatched by group - """ - # Check if criterion is valid. 
- _check_if_valid_criterion(task=args.task, criterion=args.criterion) - - args.project = Path(args.csvpath).stem - args.gpu_ids = _parse_gpu_ids(args.gpu_ids) - args.device = torch.device(f"cuda:{args.gpu_ids[0]}") if args.gpu_ids != [] else torch.device('cpu') - args.mlp, args.net = _parse_model(args.model) - args.pretrained = bool(args.pretrained) # strtobool('False') = 0 (== False) - args.save_datetime_dir = str(Path('results', args.project, 'trials', args.datetime)) - - # Parse csv - _csvparser = CSVParser(args.csvpath, args.task, args.isTrain) - args.df_source = _csvparser.df_source - args.dataset_info = {split: len(args.df_source[args.df_source['split'] == split]) for split in ['train', 'val']} - args.input_list = _csvparser.input_list - args.label_list = _csvparser.label_list - args.mlp_num_inputs = _csvparser.mlp_num_inputs - args.num_outputs_for_label = _csvparser.num_outputs_for_label - if args.task == 'deepsurv': - args.period_name = _csvparser.period_name - - # Dispatch parameters - return { - 'args_model': _dispatch_by_group(args, 'model'), - 'args_dataloader': _dispatch_by_group(args, 'dataloader'), - 'args_conf': _dispatch_by_group(args, 'train_conf'), - 'args_print': _dispatch_by_group(args, 'train_print'), - 'args_save': _dispatch_by_group(args, 'save') - } - - -def _test_parse(args: argparse.Namespace) -> Dict[str, ParamSet]: - """ - Parse parameters required at test. - - Args: - args (argparse.Namespace): arguments - - Returns: - Dict[str, ParamSet]: parameters dispatched by group - """ - args.project = Path(args.csvpath).stem - args.gpu_ids = _parse_gpu_ids(args.gpu_ids) - args.device = torch.device(f"cuda:{args.gpu_ids[0]}") if args.gpu_ids != [] else torch.device('cpu') - - # Collect weight paths - if args.weight_dir is None: - args.weight_dir = _get_latest_weight_dir() - args.weight_paths = _collect_weight_paths(args.weight_dir) - - # Get datetime at training - _train_datetime_dir = Path(args.weight_dir).parents[0] - _train_datetime = _train_datetime_dir.name - - args.save_datetime_dir = str(Path('results', args.project, 'trials', _train_datetime)) - - # Retrieve only parameters required at test - _parameter_path = str(Path(_train_datetime_dir, 'parameters.json')) - params = _retrieve_parameter(_parameter_path) - for _param, _arg in params.items(): - setattr(args, _param, _arg) - - # When test, the followings are always fixed. - args.augmentation = 'no' - args.sampler = 'no' - args.pretrained = False - - args.mlp, args.net = _parse_model(args.model) - if args.mlp is not None: - args.scaler_path = str(Path(_train_datetime_dir, 'scaler.pkl')) - - # Parse csv - _csvparser = CSVParser(args.csvpath, args.task) - args.df_source = _csvparser.df_source - - # Align test_splits - args.test_splits = args.test_splits.split('-') - _splits = args.df_source['split'].unique().tolist() - if set(_splits) < set(args.test_splits): - args.test_splits = _splits - - args.dataset_info = {split: len(args.df_source[args.df_source['split'] == split]) for split in args.test_splits} - - # Dispatch parameters - return { - 'args_model': _dispatch_by_group(args, 'model'), - 'args_dataloader': _dispatch_by_group(args, 'dataloader'), - 'args_conf': _dispatch_by_group(args, 'test_conf'), - 'args_print': _dispatch_by_group(args, 'test_print') - } - -def set_options(datetime_name: str = None, phase: str = None) -> argparse.Namespace: - """ - Parse options for training or test. - - Args: - datetime_name (str, optional): datetime name. Defaults to None. - phase (str, optional): train or test. 
Defaults to None. - - Returns: - argparse.Namespace: arguments - """ - if phase == 'train': - opt = Options(datetime=datetime_name, isTrain=True) - _args = opt.get_args() - args = _train_parse(_args) - return args - else: - opt = Options(isTrain=False) - _args = opt.get_args() - args = _test_parse(_args) - return args diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/midas/midas/__init__.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/midas/midas/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/openpose/model.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/openpose/model.py deleted file mode 100644 index 5dfc80de827a17beccb9b0f3f7588545be78c9de..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/openpose/model.py +++ /dev/null @@ -1,219 +0,0 @@ -import torch -from collections import OrderedDict - -import torch -import torch.nn as nn - -def make_layers(block, no_relu_layers): - layers = [] - for layer_name, v in block.items(): - if 'pool' in layer_name: - layer = nn.MaxPool2d(kernel_size=v[0], stride=v[1], - padding=v[2]) - layers.append((layer_name, layer)) - else: - conv2d = nn.Conv2d(in_channels=v[0], out_channels=v[1], - kernel_size=v[2], stride=v[3], - padding=v[4]) - layers.append((layer_name, conv2d)) - if layer_name not in no_relu_layers: - layers.append(('relu_'+layer_name, nn.ReLU(inplace=True))) - - return nn.Sequential(OrderedDict(layers)) - -class bodypose_model(nn.Module): - def __init__(self): - super(bodypose_model, self).__init__() - - # these layers have no relu layer - no_relu_layers = ['conv5_5_CPM_L1', 'conv5_5_CPM_L2', 'Mconv7_stage2_L1',\ - 'Mconv7_stage2_L2', 'Mconv7_stage3_L1', 'Mconv7_stage3_L2',\ - 'Mconv7_stage4_L1', 'Mconv7_stage4_L2', 'Mconv7_stage5_L1',\ - 'Mconv7_stage5_L2', 'Mconv7_stage6_L1', 'Mconv7_stage6_L1'] - blocks = {} - block0 = OrderedDict([ - ('conv1_1', [3, 64, 3, 1, 1]), - ('conv1_2', [64, 64, 3, 1, 1]), - ('pool1_stage1', [2, 2, 0]), - ('conv2_1', [64, 128, 3, 1, 1]), - ('conv2_2', [128, 128, 3, 1, 1]), - ('pool2_stage1', [2, 2, 0]), - ('conv3_1', [128, 256, 3, 1, 1]), - ('conv3_2', [256, 256, 3, 1, 1]), - ('conv3_3', [256, 256, 3, 1, 1]), - ('conv3_4', [256, 256, 3, 1, 1]), - ('pool3_stage1', [2, 2, 0]), - ('conv4_1', [256, 512, 3, 1, 1]), - ('conv4_2', [512, 512, 3, 1, 1]), - ('conv4_3_CPM', [512, 256, 3, 1, 1]), - ('conv4_4_CPM', [256, 128, 3, 1, 1]) - ]) - - - # Stage 1 - block1_1 = OrderedDict([ - ('conv5_1_CPM_L1', [128, 128, 3, 1, 1]), - ('conv5_2_CPM_L1', [128, 128, 3, 1, 1]), - ('conv5_3_CPM_L1', [128, 128, 3, 1, 1]), - ('conv5_4_CPM_L1', [128, 512, 1, 1, 0]), - ('conv5_5_CPM_L1', [512, 38, 1, 1, 0]) - ]) - - block1_2 = OrderedDict([ - ('conv5_1_CPM_L2', [128, 128, 3, 1, 1]), - ('conv5_2_CPM_L2', [128, 128, 3, 1, 1]), - ('conv5_3_CPM_L2', [128, 128, 3, 1, 1]), - ('conv5_4_CPM_L2', [128, 512, 1, 1, 0]), - ('conv5_5_CPM_L2', [512, 19, 1, 1, 0]) - ]) - blocks['block1_1'] = block1_1 - blocks['block1_2'] = block1_2 - - self.model0 = make_layers(block0, no_relu_layers) - - # Stages 2 - 6 - for i in range(2, 7): - blocks['block%d_1' % i] = OrderedDict([ - ('Mconv1_stage%d_L1' % i, [185, 128, 7, 1, 3]), - ('Mconv2_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv3_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv4_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv5_stage%d_L1' % i, [128, 128, 7, 1, 3]), - ('Mconv6_stage%d_L1' % i, [128, 128, 1, 1, 0]), - ('Mconv7_stage%d_L1' % 
i, [128, 38, 1, 1, 0]) - ]) - - blocks['block%d_2' % i] = OrderedDict([ - ('Mconv1_stage%d_L2' % i, [185, 128, 7, 1, 3]), - ('Mconv2_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv3_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv4_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv5_stage%d_L2' % i, [128, 128, 7, 1, 3]), - ('Mconv6_stage%d_L2' % i, [128, 128, 1, 1, 0]), - ('Mconv7_stage%d_L2' % i, [128, 19, 1, 1, 0]) - ]) - - for k in blocks.keys(): - blocks[k] = make_layers(blocks[k], no_relu_layers) - - self.model1_1 = blocks['block1_1'] - self.model2_1 = blocks['block2_1'] - self.model3_1 = blocks['block3_1'] - self.model4_1 = blocks['block4_1'] - self.model5_1 = blocks['block5_1'] - self.model6_1 = blocks['block6_1'] - - self.model1_2 = blocks['block1_2'] - self.model2_2 = blocks['block2_2'] - self.model3_2 = blocks['block3_2'] - self.model4_2 = blocks['block4_2'] - self.model5_2 = blocks['block5_2'] - self.model6_2 = blocks['block6_2'] - - - def forward(self, x): - - out1 = self.model0(x) - - out1_1 = self.model1_1(out1) - out1_2 = self.model1_2(out1) - out2 = torch.cat([out1_1, out1_2, out1], 1) - - out2_1 = self.model2_1(out2) - out2_2 = self.model2_2(out2) - out3 = torch.cat([out2_1, out2_2, out1], 1) - - out3_1 = self.model3_1(out3) - out3_2 = self.model3_2(out3) - out4 = torch.cat([out3_1, out3_2, out1], 1) - - out4_1 = self.model4_1(out4) - out4_2 = self.model4_2(out4) - out5 = torch.cat([out4_1, out4_2, out1], 1) - - out5_1 = self.model5_1(out5) - out5_2 = self.model5_2(out5) - out6 = torch.cat([out5_1, out5_2, out1], 1) - - out6_1 = self.model6_1(out6) - out6_2 = self.model6_2(out6) - - return out6_1, out6_2 - -class handpose_model(nn.Module): - def __init__(self): - super(handpose_model, self).__init__() - - # these layers have no relu layer - no_relu_layers = ['conv6_2_CPM', 'Mconv7_stage2', 'Mconv7_stage3',\ - 'Mconv7_stage4', 'Mconv7_stage5', 'Mconv7_stage6'] - # stage 1 - block1_0 = OrderedDict([ - ('conv1_1', [3, 64, 3, 1, 1]), - ('conv1_2', [64, 64, 3, 1, 1]), - ('pool1_stage1', [2, 2, 0]), - ('conv2_1', [64, 128, 3, 1, 1]), - ('conv2_2', [128, 128, 3, 1, 1]), - ('pool2_stage1', [2, 2, 0]), - ('conv3_1', [128, 256, 3, 1, 1]), - ('conv3_2', [256, 256, 3, 1, 1]), - ('conv3_3', [256, 256, 3, 1, 1]), - ('conv3_4', [256, 256, 3, 1, 1]), - ('pool3_stage1', [2, 2, 0]), - ('conv4_1', [256, 512, 3, 1, 1]), - ('conv4_2', [512, 512, 3, 1, 1]), - ('conv4_3', [512, 512, 3, 1, 1]), - ('conv4_4', [512, 512, 3, 1, 1]), - ('conv5_1', [512, 512, 3, 1, 1]), - ('conv5_2', [512, 512, 3, 1, 1]), - ('conv5_3_CPM', [512, 128, 3, 1, 1]) - ]) - - block1_1 = OrderedDict([ - ('conv6_1_CPM', [128, 512, 1, 1, 0]), - ('conv6_2_CPM', [512, 22, 1, 1, 0]) - ]) - - blocks = {} - blocks['block1_0'] = block1_0 - blocks['block1_1'] = block1_1 - - # stage 2-6 - for i in range(2, 7): - blocks['block%d' % i] = OrderedDict([ - ('Mconv1_stage%d' % i, [150, 128, 7, 1, 3]), - ('Mconv2_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv3_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv4_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv5_stage%d' % i, [128, 128, 7, 1, 3]), - ('Mconv6_stage%d' % i, [128, 128, 1, 1, 0]), - ('Mconv7_stage%d' % i, [128, 22, 1, 1, 0]) - ]) - - for k in blocks.keys(): - blocks[k] = make_layers(blocks[k], no_relu_layers) - - self.model1_0 = blocks['block1_0'] - self.model1_1 = blocks['block1_1'] - self.model2 = blocks['block2'] - self.model3 = blocks['block3'] - self.model4 = blocks['block4'] - self.model5 = blocks['block5'] - self.model6 = blocks['block6'] - - def forward(self, x): - out1_0 = 
self.model1_0(x) - out1_1 = self.model1_1(out1_0) - concat_stage2 = torch.cat([out1_1, out1_0], 1) - out_stage2 = self.model2(concat_stage2) - concat_stage3 = torch.cat([out_stage2, out1_0], 1) - out_stage3 = self.model3(concat_stage3) - concat_stage4 = torch.cat([out_stage3, out1_0], 1) - out_stage4 = self.model4(concat_stage4) - concat_stage5 = torch.cat([out_stage4, out1_0], 1) - out_stage5 = self.model5(concat_stage5) - concat_stage6 = torch.cat([out_stage5, out1_0], 1) - out_stage6 = self.model6(concat_stage6) - return out_stage6 - - diff --git a/spaces/MirageML/sjc/ncsn/__init__.py b/spaces/MirageML/sjc/ncsn/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/apps/crop_img.py b/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/apps/crop_img.py deleted file mode 100644 index 4854d1f5a6361963659a9d79f41c404d801e9193..0000000000000000000000000000000000000000 --- a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/apps/crop_img.py +++ /dev/null @@ -1,75 +0,0 @@ -import os -import cv2 -import numpy as np - -from pathlib import Path -import argparse - -def get_bbox(msk): - rows = np.any(msk, axis=1) - cols = np.any(msk, axis=0) - rmin, rmax = np.where(rows)[0][[0,-1]] - cmin, cmax = np.where(cols)[0][[0,-1]] - - return rmin, rmax, cmin, cmax - -def process_img(img, msk, bbox=None): - if bbox is None: - bbox = get_bbox(msk > 100) - cx = (bbox[3] + bbox[2])//2 - cy = (bbox[1] + bbox[0])//2 - - w = img.shape[1] - h = img.shape[0] - height = int(1.138*(bbox[1] - bbox[0])) - hh = height//2 - - # crop - dw = min(cx, w-cx, hh) - if cy-hh < 0: - img = cv2.copyMakeBorder(img,hh-cy,0,0,0,cv2.BORDER_CONSTANT,value=[0,0,0]) - msk = cv2.copyMakeBorder(msk,hh-cy,0,0,0,cv2.BORDER_CONSTANT,value=0) - cy = hh - if cy+hh > h: - img = cv2.copyMakeBorder(img,0,cy+hh-h,0,0,cv2.BORDER_CONSTANT,value=[0,0,0]) - msk = cv2.copyMakeBorder(msk,0,cy+hh-h,0,0,cv2.BORDER_CONSTANT,value=0) - img = img[cy-hh:(cy+hh),cx-dw:cx+dw,:] - msk = msk[cy-hh:(cy+hh),cx-dw:cx+dw] - dw = img.shape[0] - img.shape[1] - if dw != 0: - img = cv2.copyMakeBorder(img,0,0,dw//2,dw//2,cv2.BORDER_CONSTANT,value=[0,0,0]) - msk = cv2.copyMakeBorder(msk,0,0,dw//2,dw//2,cv2.BORDER_CONSTANT,value=0) - img = cv2.resize(img, (512, 512)) - msk = cv2.resize(msk, (512, 512)) - - kernel = np.ones((3,3),np.uint8) - msk = cv2.erode((255*(msk > 100)).astype(np.uint8), kernel, iterations = 1) - - return img, msk - -def main(): - ''' - given foreground mask, this script crops and resizes an input image and mask for processing. 
- ''' - parser = argparse.ArgumentParser() - parser.add_argument('-i', '--input_image', type=str, help='if the image has alpha channel, it will be used as mask') - parser.add_argument('-m', '--input_mask', type=str) - parser.add_argument('-o', '--out_path', type=str, default='./sample_images') - args = parser.parse_args() - - img = cv2.imread(args.input_image, cv2.IMREAD_UNCHANGED) - if img.shape[2] == 4: - msk = img[:,:,3:] - img = img[:,:,:3] - else: - msk = cv2.imread(args.input_mask, cv2.IMREAD_GRAYSCALE) - - img_new, msk_new = process_img(img, msk) - - img_name = Path(args.input_image).stem - - cv2.imwrite(os.path.join(args.out_path, img_name + '.png'), img_new) - cv2.imwrite(os.path.join(args.out_path, img_name + '_mask.png'), msk_new) - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/MohamedRashad/Diffusion4Fashion/app.py b/spaces/MohamedRashad/Diffusion4Fashion/app.py deleted file mode 100644 index afcfb3a4fec36579ee8f131bb6ea3544d2740d99..0000000000000000000000000000000000000000 --- a/spaces/MohamedRashad/Diffusion4Fashion/app.py +++ /dev/null @@ -1,23 +0,0 @@ -import gradio as gr -from diffusers import StableDiffusionPipeline -import torch - -pipe = StableDiffusionPipeline.from_pretrained("MohamedRashad/diffusion_fashion", torch_dtype=torch.float32) -pipe.to("cpu") - -def generate_image(text): - images = pipe(text).images - image = images[0] - return image - -diffusion_interface = gr.Interface( - generate_image, - gr.Textbox(lines=1, label="Input"), - gr.Image(type="pil", label="Output"), - title="Diffusion4Fashion: Generate cool clothes!", - description="

    Enter a description about a piece of cloth and the model will generate an image.

    ", - examples=["A photo of a dress, made in 2019, color is Red, Casual usage, Women's cloth, something for the summer season, on white background"], - cache_examples=True, -) - -diffusion_interface.launch() \ No newline at end of file diff --git a/spaces/Munna0912/URL_CLASSIFIER/Utils/Evaluation.py b/spaces/Munna0912/URL_CLASSIFIER/Utils/Evaluation.py deleted file mode 100644 index 9368a2ad0f16935abb170e1711ff387c4e1ea4ca..0000000000000000000000000000000000000000 --- a/spaces/Munna0912/URL_CLASSIFIER/Utils/Evaluation.py +++ /dev/null @@ -1,51 +0,0 @@ -from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score, f1_score -from Utils.DataProcessing import process_links -import numpy as np -def round_th(a,th): - if a= 2: - num = int(scope_names[1]) - pointer = pointer[num] - if m_name.endswith("_embeddings"): - pointer = getattr(pointer, "weight") - elif m_name == "kernel": - array = np.transpose(array) - try: - if pointer.shape != array.shape: - raise ValueError(f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched") - except AssertionError as e: - e.args += (pointer.shape, array.shape) - raise - print(f"Initialize PyTorch weight {name}", original_name) - pointer.data = torch.from_numpy(array) - except AttributeError as e: - print(f"Skipping {original_name}", name, e) - continue - return model - - -class ElectraEmbeddings(nn.Module): - """Construct the embeddings from word, position and token_type embeddings.""" - - def __init__(self, config): - super().__init__() - self.word_embeddings = nn.Embedding(config.vocab_size, config.embedding_size, padding_idx=config.pad_token_id) - self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.embedding_size) - self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.embedding_size) - - # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load - # any TensorFlow checkpoint file - self.LayerNorm = nn.LayerNorm(config.embedding_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - # position_ids (1, len position emb) is contiguous in memory and exported when serialized - self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))) - self.position_embedding_type = getattr(config, "position_embedding_type", "absolute") - if version.parse(torch.__version__) > version.parse("1.6.0"): - self.register_buffer( - "token_type_ids", - torch.zeros(self.position_ids.size(), dtype=torch.long), - persistent=False, - ) - - # Copied from transformers.models.bert.modeling_bert.BertEmbeddings.forward - def forward( - self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0 - ): - if input_ids is not None: - input_shape = input_ids.size() - else: - input_shape = inputs_embeds.size()[:-1] - - seq_length = input_shape[1] - - if position_ids is None: - position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length] - - # Setting the token_type_ids to the registered buffer in constructor where it is all zeros, which usually occurs - # when its auto-generated, registered buffer helps users when tracing the model without passing token_type_ids, solves - # issue #5664 - if token_type_ids is None: - if hasattr(self, "token_type_ids"): - buffered_token_type_ids = self.token_type_ids[:, :seq_length] - buffered_token_type_ids_expanded = buffered_token_type_ids.expand(input_shape[0], 
seq_length) - token_type_ids = buffered_token_type_ids_expanded - else: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device) - - if inputs_embeds is None: - inputs_embeds = self.word_embeddings(input_ids) - token_type_embeddings = self.token_type_embeddings(token_type_ids) - - embeddings = inputs_embeds + token_type_embeddings - if self.position_embedding_type == "absolute": - position_embeddings = self.position_embeddings(position_ids) - embeddings += position_embeddings - embeddings = self.LayerNorm(embeddings) - embeddings = self.dropout(embeddings) - return embeddings - - -# Copied from transformers.models.bert.modeling_bert.BertSelfAttention with Bert->Electra -class ElectraSelfAttention(nn.Module): - def __init__(self, config, position_embedding_type=None): - super().__init__() - if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"): - raise ValueError( - f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention " - f"heads ({config.num_attention_heads})" - ) - - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - self.query = nn.Linear(config.hidden_size, self.all_head_size) - self.key = nn.Linear(config.hidden_size, self.all_head_size) - self.value = nn.Linear(config.hidden_size, self.all_head_size) - - self.dropout = nn.Dropout(config.attention_probs_dropout_prob) - self.position_embedding_type = position_embedding_type or getattr( - config, "position_embedding_type", "absolute" - ) - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - self.max_position_embeddings = config.max_position_embeddings - self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size) - - self.is_decoder = config.is_decoder - - def transpose_for_scores(self, x): - new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size) - x = x.view(new_x_shape) - return x.permute(0, 2, 1, 3) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_value: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - output_attentions: Optional[bool] = False, - output_norms: Optional[bool] = False, # added by Fayyaz / Modarressi - ) -> Tuple[torch.Tensor]: - mixed_query_layer = self.query(hidden_states) - - # If this is instantiated as a cross-attention module, the keys - # and values come from an encoder; the attention mask needs to be - # such that the encoder's padding tokens are not attended to. 
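-        # The branches below build the key/value projections either from the encoder states (cross-attention)
-        # or from the current hidden states (self-attention), reusing cached projections from past_key_value
-        # when they are available.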
- is_cross_attention = encoder_hidden_states is not None - - if is_cross_attention and past_key_value is not None: - # reuse k,v, cross_attentions - key_layer = past_key_value[0] - value_layer = past_key_value[1] - attention_mask = encoder_attention_mask - elif is_cross_attention: - key_layer = self.transpose_for_scores(self.key(encoder_hidden_states)) - value_layer = self.transpose_for_scores(self.value(encoder_hidden_states)) - attention_mask = encoder_attention_mask - elif past_key_value is not None: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - key_layer = torch.cat([past_key_value[0], key_layer], dim=2) - value_layer = torch.cat([past_key_value[1], value_layer], dim=2) - else: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - - query_layer = self.transpose_for_scores(mixed_query_layer) - - if self.is_decoder: - # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states. - # Further calls to cross_attention layer can then reuse all cross-attention - # key/value_states (first "if" case) - # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of - # all previous decoder key/value_states. Further calls to uni-directional self-attention - # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case) - # if encoder bi-directional self-attention `past_key_value` is always `None` - past_key_value = (key_layer, value_layer) - - # Take the dot product between "query" and "key" to get the raw attention scores. - attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) - - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - seq_length = hidden_states.size()[1] - position_ids_l = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(-1, 1) - position_ids_r = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(1, -1) - distance = position_ids_l - position_ids_r - positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1) - positional_embedding = positional_embedding.to(dtype=query_layer.dtype) # fp16 compatibility - - if self.position_embedding_type == "relative_key": - relative_position_scores = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores - elif self.position_embedding_type == "relative_key_query": - relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key - - attention_scores = attention_scores / math.sqrt(self.attention_head_size) - if attention_mask is not None: - # Apply the attention mask is (precomputed for all layers in ElectraModel forward() function) - attention_scores = attention_scores + attention_mask - - # Normalize the attention scores to probabilities. - attention_probs = nn.functional.softmax(attention_scores, dim=-1) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. 
- attention_probs = self.dropout(attention_probs) - - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - - context_layer = torch.matmul(attention_probs, value_layer) - - context_layer = context_layer.permute(0, 2, 1, 3).contiguous() - new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) - context_layer = context_layer.view(new_context_layer_shape) - - # added by Fayyaz / Modarressi - # ------------------------------- - if output_norms: - outputs = (context_layer, attention_probs, value_layer) - return outputs - # ------------------------------- - - outputs = (context_layer, attention_probs) if output_attentions else (context_layer,) - - if self.is_decoder: - outputs = outputs + (past_key_value,) - return outputs - - -# Copied from transformers.models.bert.modeling_bert.BertSelfOutput -class ElectraSelfOutput(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor, - output_norms=False): # added by Fayyaz / Modarressi - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - # hidden_states = self.LayerNorm(hidden_states + input_tensor) - # return hidden_states - pre_ln_states = hidden_states + input_tensor # added by Fayyaz / Modarressi - post_ln_states = self.LayerNorm(pre_ln_states) # added by Fayyaz / Modarressi - # added by Fayyaz / Modarressi - if output_norms: - return post_ln_states, pre_ln_states - else: - return post_ln_states - - -class BertNormOutput(nn.Module): # This class was added by Goro Kobayashi - def __init__(self, config): - super().__init__() - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - def forward(self, hidden_states, attention_probs, value_layer, dense, LayerNorm, pre_ln_states): - # Args: - # hidden_states: Representations from previous layer and inputs to self-attention. (batch, seq_length, all_head_size) - # attention_probs: Attention weights calculated in self-attention. (batch, num_heads, seq_length, seq_length) - # value_layer: Value vectors calculated in self-attention. (batch, num_heads, seq_length, head_size) - # dense: Dense layer in self-attention. nn.Linear(all_head_size, all_head_size) - # LayerNorm: nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - # pre_ln_states: Vectors just before LayerNorm (batch, seq_length, all_head_size) - - with torch.no_grad(): - # Make transformed vectors f(x) from Value vectors (value_layer) and weight matrix (dense). 
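-            # The block below decomposes the attention output into per-token contributions αf(x) and tracks
-            # their norms at three stages: after summing over attended tokens (||Σαf(x)||), after adding the
-            # residual (||Σαf(x) + x||), and after re-deriving LayerNorm term by term. For each stage it also
-            # computes a mixing ratio: the share of the norm coming from other tokens rather than from the
-            # token's own (diagonal) contribution.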
-            dense = dense.weight.view(self.all_head_size, self.num_attention_heads,
-                                      self.attention_head_size)  # W^o (768, 768)
-            transformed_layer = torch.einsum('bhsv,dhv->bhsd', value_layer, dense)  # V * W^o (z=(qk)v)
-
-            # Make weighted vectors αf(x) from transformed vectors (transformed_layer)
-            # and attention weights (attentions):
-            # (batch, num_heads, seq_length, seq_length, all_head_size)
-            weighted_layer = torch.einsum('bhks,bhsd->bhksd', attention_probs,
-                                          transformed_layer)  # attention_probs(Q*K^t) * V * W^o
-            weighted_norm = torch.norm(weighted_layer, dim=-1)  # norm of attended tokens representations
-
-            # Sum each weighted vectors αf(x) over all heads:
-            # (batch, seq_length, seq_length, all_head_size)
-            summed_weighted_layer = weighted_layer.sum(dim=1)  # sum over heads
-            summed_weighted_norm = torch.norm(summed_weighted_layer, dim=-1)  # norm of ||Σαf(x)||
-
-            """From here on is the new part"""
-            # Make residual matrix (batch, seq_length, seq_length, all_head_size)
-            hidden_shape = hidden_states.size()  # (batch, seq_length, all_head_size)
-            device = hidden_states.device
-            residual = torch.einsum('sk,bsd->bskd', torch.eye(hidden_shape[1]).to(device),
-                                    hidden_states)  # diagonal representations (hidden states)
-
-            # Make matrix of summed weighted vector + residual vectors
-            residual_weighted_layer = summed_weighted_layer + residual
-            residual_weighted_norm = torch.norm(residual_weighted_layer, dim=-1)  # ||Σαf(x) + x||
-
-            # consider layernorm
-            ln_weight = LayerNorm.weight.data  # gamma
-            ln_eps = LayerNorm.eps
-
-            # Compute the mean and variance of pre_ln_states, the vectors that are actually fed to LayerNorm
-            mean = pre_ln_states.mean(-1, keepdim=True)  # (batch, seq_len, 1) m(y=Σy_j)
-            var = (pre_ln_states - mean).pow(2).mean(-1, keepdim=True).unsqueeze(dim=2)  # (batch, seq_len, 1, 1) s(y)
-
-            # Compute the mean of each vector inside the attention + residual sum
-            each_mean = residual_weighted_layer.mean(-1, keepdim=True)  # (batch, seq_len, seq_len, 1) m(y_j)
-
-            # Subtract its mean from each vector in the attention + residual sum and divide by the standard deviation
-            # (equivalent to applying the normalization step of LayerNorm to each vector in the sum individually)
-            normalized_layer = torch.div(residual_weighted_layer - each_mean,
-                                         (var + ln_eps) ** (1 / 2))  # (batch, seq_len, seq_len, all_head_size)
-
-            # Then take the element-wise product of each vector with the LayerNorm weight
-            post_ln_layer = torch.einsum('bskd,d->bskd', normalized_layer,
-                                         ln_weight)  # (batch, seq_len, seq_len, all_head_size)
-            post_ln_norm = torch.norm(post_ln_layer, dim=-1)  # (batch, seq_len, seq_len)
-
-            # Mixing ratio for Attn-N
-            attn_preserving = torch.diagonal(summed_weighted_layer, dim1=1, dim2=2).permute(0, 2, 1)
-            attn_mixing = torch.sum(summed_weighted_layer, dim=2) - attn_preserving
-            attn_preserving_norm = torch.norm(attn_preserving, dim=-1)
-            attn_mixing_norm = torch.norm(attn_mixing, dim=-1)
-            attn_n_mixing_ratio = attn_mixing_norm / (attn_mixing_norm + attn_preserving_norm)
-
-            # Mixing ratio for AttnRes-N
-            before_ln_preserving = torch.diagonal(residual_weighted_layer, dim1=1, dim2=2).permute(0, 2, 1)
-            before_ln_mixing = torch.sum(residual_weighted_layer, dim=2) - before_ln_preserving
-            before_ln_preserving_norm = torch.norm(before_ln_preserving, dim=-1)
-            before_ln_mixing_norm = torch.norm(before_ln_mixing, dim=-1)
-            attnres_n_mixing_ratio = before_ln_mixing_norm / (before_ln_mixing_norm + before_ln_preserving_norm)
-
-            # Mixing ratio for AttnResLn-N
-            post_ln_preserving = torch.diagonal(post_ln_layer, dim1=1, dim2=2).permute(0, 2, 1)
-            post_ln_mixing = torch.sum(post_ln_layer, dim=2) - post_ln_preserving
-            post_ln_preserving_norm = torch.norm(post_ln_preserving, dim=-1)
-            post_ln_mixing_norm = torch.norm(post_ln_mixing,
dim=-1) - attnresln_n_mixing_ratio = post_ln_mixing_norm / (post_ln_mixing_norm + post_ln_preserving_norm) - - outputs = (weighted_norm, # ||αf(x)|| - summed_weighted_norm, # ||Σαf(x)|| - residual_weighted_norm, # ||Σαf(x) + x|| - post_ln_norm, # Norm of vectors after LayerNorm - post_ln_layer, - attn_n_mixing_ratio, # Mixing ratio for Attn-N - attnres_n_mixing_ratio, # Mixing ratio for AttnRes-N - attnresln_n_mixing_ratio, # Mixing ratio for AttnResLn-N - ) - return outputs - - -# Copied from transformers.models.bert.modeling_bert.BertAttention with Bert->Electra -class ElectraAttention(nn.Module): - def __init__(self, config, position_embedding_type=None): - super().__init__() - self.self = ElectraSelfAttention(config, position_embedding_type=position_embedding_type) - self.output = ElectraSelfOutput(config) - self.pruned_heads = set() - self.norm = BertNormOutput(config) # added by Goro Kobayashi - - def prune_heads(self, heads): - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices( - heads, self.self.num_attention_heads, self.self.attention_head_size, self.pruned_heads - ) - - # Prune linear layers - self.self.query = prune_linear_layer(self.self.query, index) - self.self.key = prune_linear_layer(self.self.key, index) - self.self.value = prune_linear_layer(self.self.value, index) - self.output.dense = prune_linear_layer(self.output.dense, index, dim=1) - - # Update hyper params and store pruned heads - self.self.num_attention_heads = self.self.num_attention_heads - len(heads) - self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads - self.pruned_heads = self.pruned_heads.union(heads) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_value: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - output_attentions: Optional[bool] = False, - output_norms: Optional[bool] = False, # added by Goro Kobayashi - ) -> Tuple[torch.Tensor]: - self_outputs = self.self( - hidden_states, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - output_norms=output_norms, # added by Goro Kobayashi - ) - attention_output = self.output( - self_outputs[0], - hidden_states, - output_norms=output_norms, # added by Goro Kobayashi - ) - - # Added by Fayyaz / Modarressi - # ------------------------------- - if output_norms: - _, attention_probs, value_layer = self_outputs - attention_output, pre_ln_states = attention_output - norms_outputs = self.norm( - hidden_states, - attention_probs, - value_layer, - self.output.dense, - self.output.LayerNorm, - pre_ln_states, - ) - outputs = (attention_output, attention_probs,) + norms_outputs # add attentions and norms if we output them - """ - # outputs: - attention_output - attention_probs - transformed_norm - summed_weighted_norm - residual_weighted_norm - post_ln_norm - before_ln_mixing_ratio - post_ln_mixing_ratio - """ - return outputs - # ------------------------------- - - outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them - return outputs - - -# Copied from transformers.models.bert.modeling_bert.BertIntermediate -class ElectraIntermediate(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.intermediate_size) 
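-        # config.hidden_act may be given either as a string key into ACT2FN (e.g. "gelu") or directly as a callable.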
- if isinstance(config.hidden_act, str): - self.intermediate_act_fn = ACT2FN[config.hidden_act] - else: - self.intermediate_act_fn = config.hidden_act - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - hidden_states = self.dense(hidden_states) - hidden_states = self.intermediate_act_fn(hidden_states) - return hidden_states - - -# Copied from transformers.models.bert.modeling_bert.BertOutput -class ElectraOutput(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.intermediate_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor): - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - # hidden_states = self.LayerNorm(hidden_states + input_tensor) - # return hidden_states - # Added by Fayyaz / Modarressi - # ------------------------------- - pre_ln_states = hidden_states + input_tensor - hidden_states = self.LayerNorm(pre_ln_states) - return hidden_states, pre_ln_states - # ------------------------------- - - -# Copied from transformers.models.bert.modeling_bert.BertLayer with Bert->Electra -class ElectraLayer(nn.Module): - def __init__(self, config): - super().__init__() - self.chunk_size_feed_forward = config.chunk_size_feed_forward - self.seq_len_dim = 1 - self.attention = ElectraAttention(config) - self.is_decoder = config.is_decoder - self.add_cross_attention = config.add_cross_attention - if self.add_cross_attention: - if not self.is_decoder: - raise ValueError(f"{self} should be used as a decoder model if cross attention is added") - self.crossattention = ElectraAttention(config, position_embedding_type="absolute") - self.intermediate = ElectraIntermediate(config) - self.output = ElectraOutput(config) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_value: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - output_attentions: Optional[bool] = False, - output_norms: Optional[bool] = False, # added by Goro Kobayashi - ) -> Tuple[torch.Tensor]: - # decoder uni-directional self-attention cached key/values tuple is at positions 1,2 - # self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None - # self_attention_outputs = self.attention( - # hidden_states, - # attention_mask, - # head_mask, - # output_attentions=output_attentions, - # past_key_value=self_attn_past_key_value, - # ) - self_attention_outputs = self.attention( - hidden_states, - attention_mask, - head_mask, - output_attentions=output_attentions, - output_norms=output_norms, - ) # changed by Goro Kobayashi - attention_output = self_attention_outputs[0] - - # if decoder, the last output is tuple of self-attn cache - if self.is_decoder: - outputs = self_attention_outputs[1:-1] - present_key_value = self_attention_outputs[-1] - else: - outputs = self_attention_outputs[1:] # add self attentions if we output attention weights - - cross_attn_present_key_value = None - if self.is_decoder and encoder_hidden_states is not None: - if not hasattr(self, "crossattention"): - raise ValueError( - f"If `encoder_hidden_states` are passed, {self} has to be instantiated with cross-attention layers by 
setting `config.add_cross_attention=True`" - ) - - # cross_attn cached key/values tuple is at positions 3,4 of past_key_value tuple - cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None - cross_attention_outputs = self.crossattention( - attention_output, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - cross_attn_past_key_value, - output_attentions, - ) - attention_output = cross_attention_outputs[0] - outputs = outputs + cross_attention_outputs[1:-1] # add cross attentions if we output attention weights - - # add cross-attn cache to positions 3,4 of present_key_value tuple - cross_attn_present_key_value = cross_attention_outputs[-1] - present_key_value = present_key_value + cross_attn_present_key_value - - # layer_output = apply_chunking_to_forward( - # self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output - # ) - - # Added by Fayyaz / Modarressi - # ------------------------------- - intermediate_output = self.intermediate(attention_output) - layer_output, pre_ln2_states = self.output(intermediate_output, attention_output) - if output_norms: - post_ln_layer = outputs[5] - each_mean = post_ln_layer.mean(-1, keepdim=True) - - mean = pre_ln2_states.mean(-1, keepdim=True) - var = (pre_ln2_states - mean).pow(2).mean(-1, keepdim=True).unsqueeze(dim=2) - - normalized_layer = torch.div(post_ln_layer - each_mean, (var + self.output.LayerNorm.eps) ** (1 / 2)) - post_ln2_layer = torch.einsum('bskd,d->bskd', normalized_layer, self.output.LayerNorm.weight) - post_ln2_norm = torch.norm(post_ln2_layer, dim=-1) - - # N-ResOut mixing ratio - post_ln2_preserving = torch.diagonal(post_ln2_layer, dim1=1, dim2=2).permute(0, 2, 1) - post_ln2_mixing = torch.sum(post_ln2_layer, dim=2) - post_ln2_preserving - post_ln2_preserving_norm = torch.norm(post_ln2_preserving, dim=-1) - post_ln2_mixing_norm = torch.norm(post_ln2_mixing, dim=-1) - attnresln2_n_mixing_ratio = post_ln2_mixing_norm / (post_ln2_mixing_norm + post_ln2_preserving_norm) - - new_outputs = outputs[:5] + (post_ln2_norm,) + outputs[6:] + (attnresln2_n_mixing_ratio,) - return (layer_output,) + new_outputs - # ------------------------------- - - outputs = (layer_output,) + outputs - - # if decoder, return the attn key/values as the last output - if self.is_decoder: - outputs = outputs + (present_key_value,) - - return outputs - - def feed_forward_chunk(self, attention_output): - intermediate_output = self.intermediate(attention_output) - layer_output = self.output(intermediate_output, attention_output) - return layer_output - - -# Copied from transformers.models.bert.modeling_bert.BertEncoder with Bert->Electra -class ElectraEncoder(nn.Module): - def __init__(self, config): - super().__init__() - self.config = config - self.layer = nn.ModuleList([ElectraLayer(config) for _ in range(config.num_hidden_layers)]) - self.gradient_checkpointing = False - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = False, - output_hidden_states: Optional[bool] = False, - return_dict: Optional[bool] = True, - output_norms: Optional[bool] = False, # added by Goro Kobayashi - ) -> 
Union[Tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: - all_hidden_states = () if output_hidden_states else None - all_self_attentions = () if output_attentions else None - all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None - - next_decoder_cache = () if use_cache else None - all_norms = () # added by Goro Kobayashi - for i, layer_module in enumerate(self.layer): - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - layer_head_mask = head_mask[i] if head_mask is not None else None - past_key_value = past_key_values[i] if past_key_values is not None else None - - if self.gradient_checkpointing and self.training: - - if use_cache: - logger.warning( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." - ) - use_cache = False - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs, past_key_value, output_attentions) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(layer_module), - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - ) - else: - layer_outputs = layer_module( - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - output_norms, # added by Goro Kobayashi - ) - - hidden_states = layer_outputs[0] - if use_cache: - next_decoder_cache += (layer_outputs[-1],) - if output_attentions: - all_self_attentions = all_self_attentions + (layer_outputs[1],) - if self.config.add_cross_attention: - all_cross_attentions = all_cross_attentions + (layer_outputs[2],) - - # added by Goro Kobayashi - if output_norms: - all_norms = all_norms + (layer_outputs[2:],) - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple( - v - for v in [ - hidden_states, - next_decoder_cache, - all_hidden_states, - all_self_attentions, - all_cross_attentions, - all_norms, # Added by Fayyaz / Modarressi - ] - if v is not None - ) - return BaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - past_key_values=next_decoder_cache, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - cross_attentions=all_cross_attentions, - ) - - -class ElectraDiscriminatorPredictions(nn.Module): - """Prediction module for the discriminator, made up of two dense layers.""" - - def __init__(self, config): - super().__init__() - - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.dense_prediction = nn.Linear(config.hidden_size, 1) - self.config = config - - def forward(self, discriminator_hidden_states): - hidden_states = self.dense(discriminator_hidden_states) - hidden_states = get_activation(self.config.hidden_act)(hidden_states) - logits = self.dense_prediction(hidden_states).squeeze(-1) - - return logits - - -class ElectraGeneratorPredictions(nn.Module): - """Prediction module for the generator, made up of two dense layers.""" - - def __init__(self, config): - super().__init__() - - self.LayerNorm = nn.LayerNorm(config.embedding_size, eps=config.layer_norm_eps) - self.dense = nn.Linear(config.hidden_size, config.embedding_size) - - def forward(self, generator_hidden_states): - hidden_states = self.dense(generator_hidden_states) - hidden_states = get_activation("gelu")(hidden_states) - hidden_states = self.LayerNorm(hidden_states) - - return 
hidden_states - - -class ElectraPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = ElectraConfig - load_tf_weights = load_tf_weights_in_electra - base_model_prefix = "electra" - supports_gradient_checkpointing = True - _keys_to_ignore_on_load_missing = [r"position_ids"] - _keys_to_ignore_on_load_unexpected = [r"electra\.embeddings_project\.weight", r"electra\.embeddings_project\.bias"] - - # Copied from transformers.models.bert.modeling_bert.BertPreTrainedModel._init_weights - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, nn.Linear): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, ElectraEncoder): - module.gradient_checkpointing = value - - -@dataclass -class ElectraForPreTrainingOutput(ModelOutput): - """ - Output type of [`ElectraForPreTraining`]. - - Args: - loss (*optional*, returned when `labels` is provided, `torch.FloatTensor` of shape `(1,)`): - Total loss of the ELECTRA objective. - logits (`torch.FloatTensor` of shape `(batch_size, sequence_length)`): - Prediction scores of the head (scores for each token before SoftMax). - hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of - shape `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: Optional[torch.FloatTensor] = None - logits: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - - -ELECTRA_START_DOCSTRING = r""" - - This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`ElectraConfig`]): Model configuration class with all the parameters of the model. 
- Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -ELECTRA_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `({0})`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`ElectraTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.FloatTensor` of shape `({0})`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`torch.LongTensor` of shape `({0})`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. - - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`torch.LongTensor` of shape `({0})`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - encoder_hidden_states (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (`torch.FloatTensor` of shape `({0})`, *optional*): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -@add_start_docstrings( - "The bare Electra Model transformer outputting raw hidden-states without any specific head on top. Identical to " - "the BERT model except that it uses an additional linear layer between the embedding layer and the encoder if the " - "hidden size and embedding size are different. 
" - "" - "Both the generator and discriminator checkpoints may be loaded into this model.", - ELECTRA_START_DOCSTRING, -) -class ElectraModel(ElectraPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.embeddings = ElectraEmbeddings(config) - - if config.embedding_size != config.hidden_size: - self.embeddings_project = nn.Linear(config.embedding_size, config.hidden_size) - - self.encoder = ElectraEncoder(config) - self.config = config - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.embeddings.word_embeddings - - def set_input_embeddings(self, value): - self.embeddings.word_embeddings = value - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - for layer, heads in heads_to_prune.items(): - self.encoder.layer[layer].attention.prune_heads(heads) - - @add_start_docstrings_to_model_forward(ELECTRA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutputWithCrossAttentions, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - output_norms: Optional[bool] = None, # added by Goro Kobayashi - ) -> Union[Tuple, BaseModelOutputWithCrossAttentions]: - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = input_ids.size() - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - batch_size, seq_length = input_shape - device = input_ids.device if input_ids is not None else inputs_embeds.device - - # past_key_values_length - past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0 - - if attention_mask is None: - attention_mask = torch.ones(input_shape, device=device) - if token_type_ids is None: - if hasattr(self.embeddings, "token_type_ids"): - buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length] - buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length) - token_type_ids = buffered_token_type_ids_expanded - else: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) - - extended_attention_mask = self.get_extended_attention_mask(attention_mask, 
input_shape, device) - - # If a 2D or 3D attention mask is provided for the cross-attention - # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] - if self.config.is_decoder and encoder_hidden_states is not None: - encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() - encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) - if encoder_attention_mask is None: - encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) - encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) - else: - encoder_extended_attention_mask = None - - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - - hidden_states = self.embeddings( - input_ids=input_ids, - position_ids=position_ids, - token_type_ids=token_type_ids, - inputs_embeds=inputs_embeds, - past_key_values_length=past_key_values_length, - ) - - if hasattr(self, "embeddings_project"): - hidden_states = self.embeddings_project(hidden_states) - - hidden_states = self.encoder( - hidden_states, - attention_mask=extended_attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_extended_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - output_norms=output_norms, # added by Goro Kobayashi - ) - - return hidden_states - - -class ElectraClassificationHead(nn.Module): - """Head for sentence-level classification tasks.""" - - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - classifier_dropout = ( - config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(classifier_dropout) - self.out_proj = nn.Linear(config.hidden_size, config.num_labels) - - def forward(self, features, **kwargs): - x = features[:, 0, :] # take token (equiv. to [CLS]) - x = self.dropout(x) - x = self.dense(x) - x = get_activation("gelu")(x) # although BERT uses tanh here, it seems Electra authors used gelu here - x = self.dropout(x) - x = self.out_proj(x) - return x - - -@add_start_docstrings( - """ - ELECTRA Model transformer with a sequence classification/regression head on top (a linear layer on top of the - pooled output) e.g. for GLUE tasks. 
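-
-    This copy of the class also accepts the `output_norms` flag wired through [`ElectraModel`] above. A minimal
-    sketch of retrieving the per-layer norm tensors (the checkpoint name is only an assumption for illustration;
-    the norm tuples are exposed only in the tuple output, i.e. with `return_dict=False`):
-
-    ```python
-    >>> from transformers import ElectraTokenizer
-    >>> tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-discriminator")
-    >>> model = ElectraForSequenceClassification.from_pretrained("google/electra-small-discriminator")
-    >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
-    >>> logits, norms = model(**inputs, output_norms=True, return_dict=False)
-    >>> len(norms) == model.config.num_hidden_layers  # one tuple of norm / mixing-ratio tensors per layer
-    True
-    ```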
- """, - ELECTRA_START_DOCSTRING, -) -class ElectraForSequenceClassification(ElectraPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - self.config = config - self.electra = ElectraModel(config) - self.classifier = ElectraClassificationHead(config) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(ELECTRA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=SequenceClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - output_norms: Optional[bool] = None, # added by Fayyaz / Modarressi - ) -> Union[Tuple, SequenceClassifierOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - discriminator_hidden_states = self.electra( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - output_norms=output_norms, # added by Fayyaz / Modarressi - ) - - sequence_output = discriminator_hidden_states[0] - logits = self.classifier(sequence_output) - - loss = None - if labels is not None: - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(logits, labels) - - if not return_dict: - output = (logits,) + discriminator_hidden_states[1:] - return ((loss,) + output) if loss is not None else output - - return SequenceClassifierOutput( - loss=loss, - logits=logits, - hidden_states=discriminator_hidden_states.hidden_states, - attentions=discriminator_hidden_states.attentions, - ) - - -@add_start_docstrings( - """ - Electra model with a binary classification head on top as used during pretraining for identifying generated tokens. 
- - It is recommended to load the discriminator checkpoint into that model. - """, - ELECTRA_START_DOCSTRING, -) -class ElectraForPreTraining(ElectraPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.electra = ElectraModel(config) - self.discriminator_predictions = ElectraDiscriminatorPredictions(config) - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(ELECTRA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=ElectraForPreTrainingOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, ElectraForPreTrainingOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the ELECTRA loss. Input should be a sequence of tokens (see `input_ids` docstring) - Indices should be in `[0, 1]`: - - - 0 indicates the token is an original token, - - 1 indicates the token was replaced. - - Returns: - - Examples: - - ```python - >>> from transformers import ElectraTokenizer, ElectraForPreTraining - >>> import torch - - >>> tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-discriminator") - >>> model = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator") - - >>> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze( - ... 0 - >>> ) # Batch size 1 - >>> logits = model(input_ids).logits - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - discriminator_hidden_states = self.electra( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - discriminator_sequence_output = discriminator_hidden_states[0] - - logits = self.discriminator_predictions(discriminator_sequence_output) - - loss = None - if labels is not None: - loss_fct = nn.BCEWithLogitsLoss() - if attention_mask is not None: - active_loss = attention_mask.view(-1, discriminator_sequence_output.shape[1]) == 1 - active_logits = logits.view(-1, discriminator_sequence_output.shape[1])[active_loss] - active_labels = labels[active_loss] - loss = loss_fct(active_logits, active_labels.float()) - else: - loss = loss_fct(logits.view(-1, discriminator_sequence_output.shape[1]), labels.float()) - - if not return_dict: - output = (logits,) + discriminator_hidden_states[1:] - return ((loss,) + output) if loss is not None else output - - return ElectraForPreTrainingOutput( - loss=loss, - logits=logits, - hidden_states=discriminator_hidden_states.hidden_states, - attentions=discriminator_hidden_states.attentions, - ) - - -@add_start_docstrings( - """ - Electra model with a language modeling head on top. 
- - Even though both the discriminator and generator may be loaded into this model, the generator is the only model of - the two to have been trained for the masked language modeling task. - """, - ELECTRA_START_DOCSTRING, -) -class ElectraForMaskedLM(ElectraPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.electra = ElectraModel(config) - self.generator_predictions = ElectraGeneratorPredictions(config) - - self.generator_lm_head = nn.Linear(config.embedding_size, config.vocab_size) - # Initialize weights and apply final processing - self.post_init() - - def get_output_embeddings(self): - return self.generator_lm_head - - def set_output_embeddings(self, word_embeddings): - self.generator_lm_head = word_embeddings - - @add_start_docstrings_to_model_forward(ELECTRA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=MaskedLMOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, MaskedLMOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., - config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the - loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - generator_hidden_states = self.electra( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - generator_sequence_output = generator_hidden_states[0] - - prediction_scores = self.generator_predictions(generator_sequence_output) - prediction_scores = self.generator_lm_head(prediction_scores) - - loss = None - # Masked language modeling softmax layer - if labels is not None: - loss_fct = nn.CrossEntropyLoss() # -100 index = padding token - loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) - - if not return_dict: - output = (prediction_scores,) + generator_hidden_states[1:] - return ((loss,) + output) if loss is not None else output - - return MaskedLMOutput( - loss=loss, - logits=prediction_scores, - hidden_states=generator_hidden_states.hidden_states, - attentions=generator_hidden_states.attentions, - ) - - -@add_start_docstrings( - """ - Electra model with a token classification head on top. - - Both the discriminator and generator may be loaded into this model. 
- """, - ELECTRA_START_DOCSTRING, -) -class ElectraForTokenClassification(ElectraPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - - self.electra = ElectraModel(config) - classifier_dropout = ( - config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(classifier_dropout) - self.classifier = nn.Linear(config.hidden_size, config.num_labels) - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(ELECTRA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TokenClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, TokenClassifierOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`. - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - discriminator_hidden_states = self.electra( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - discriminator_sequence_output = discriminator_hidden_states[0] - - discriminator_sequence_output = self.dropout(discriminator_sequence_output) - logits = self.classifier(discriminator_sequence_output) - - loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - - if not return_dict: - output = (logits,) + discriminator_hidden_states[1:] - return ((loss,) + output) if loss is not None else output - - return TokenClassifierOutput( - loss=loss, - logits=logits, - hidden_states=discriminator_hidden_states.hidden_states, - attentions=discriminator_hidden_states.attentions, - ) - - -@add_start_docstrings( - """ - ELECTRA Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear - layers on top of the hidden-states output to compute `span start logits` and `span end logits`). 
- """, - ELECTRA_START_DOCSTRING, -) -class ElectraForQuestionAnswering(ElectraPreTrainedModel): - config_class = ElectraConfig - base_model_prefix = "electra" - - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - - self.electra = ElectraModel(config) - self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(ELECTRA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=QuestionAnsweringModelOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - start_positions: Optional[torch.Tensor] = None, - end_positions: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, QuestionAnsweringModelOutput]: - r""" - start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the start of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. - end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the end of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - discriminator_hidden_states = self.electra( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - ) - - sequence_output = discriminator_hidden_states[0] - - logits = self.qa_outputs(sequence_output) - start_logits, end_logits = logits.split(1, dim=-1) - start_logits = start_logits.squeeze(-1).contiguous() - end_logits = end_logits.squeeze(-1).contiguous() - - total_loss = None - if start_positions is not None and end_positions is not None: - # If we are on multi-GPU, split add a dimension - if len(start_positions.size()) > 1: - start_positions = start_positions.squeeze(-1) - if len(end_positions.size()) > 1: - end_positions = end_positions.squeeze(-1) - # sometimes the start/end positions are outside our model inputs, we ignore these terms - ignored_index = start_logits.size(1) - start_positions = start_positions.clamp(0, ignored_index) - end_positions = end_positions.clamp(0, ignored_index) - - loss_fct = CrossEntropyLoss(ignore_index=ignored_index) - start_loss = loss_fct(start_logits, start_positions) - end_loss = loss_fct(end_logits, end_positions) - total_loss = (start_loss + end_loss) / 2 - - if not return_dict: - output = ( - start_logits, - end_logits, - ) + discriminator_hidden_states[1:] - return ((total_loss,) + output) if total_loss is not None else output - - return QuestionAnsweringModelOutput( - loss=total_loss, - start_logits=start_logits, - end_logits=end_logits, - hidden_states=discriminator_hidden_states.hidden_states, - attentions=discriminator_hidden_states.attentions, - ) - - -@add_start_docstrings( - """ - ELECTRA Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a - softmax) e.g. for RocStories/SWAG tasks. - """, - ELECTRA_START_DOCSTRING, -) -class ElectraForMultipleChoice(ElectraPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.electra = ElectraModel(config) - self.sequence_summary = SequenceSummary(config) - self.classifier = nn.Linear(config.hidden_size, 1) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(ELECTRA_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=MultipleChoiceModelOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, MultipleChoiceModelOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., - num_choices-1]` where `num_choices` is the size of the second dimension of the input tensors. 
(See - `input_ids` above) - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1] - - input_ids = input_ids.view(-1, input_ids.size(-1)) if input_ids is not None else None - attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None - token_type_ids = token_type_ids.view(-1, token_type_ids.size(-1)) if token_type_ids is not None else None - position_ids = position_ids.view(-1, position_ids.size(-1)) if position_ids is not None else None - inputs_embeds = ( - inputs_embeds.view(-1, inputs_embeds.size(-2), inputs_embeds.size(-1)) - if inputs_embeds is not None - else None - ) - - discriminator_hidden_states = self.electra( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = discriminator_hidden_states[0] - - pooled_output = self.sequence_summary(sequence_output) - logits = self.classifier(pooled_output) - reshaped_logits = logits.view(-1, num_choices) - - loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - loss = loss_fct(reshaped_logits, labels) - - if not return_dict: - output = (reshaped_logits,) + discriminator_hidden_states[1:] - return ((loss,) + output) if loss is not None else output - - return MultipleChoiceModelOutput( - loss=loss, - logits=reshaped_logits, - hidden_states=discriminator_hidden_states.hidden_states, - attentions=discriminator_hidden_states.attentions, - ) - - -@add_start_docstrings( - """ELECTRA Model with a `language modeling` head on top for CLM fine-tuning.""", ELECTRA_START_DOCSTRING -) -class ElectraForCausalLM(ElectraPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - if not config.is_decoder: - logger.warning("If you want to use `ElectraForCausalLM` as a standalone, add `is_decoder=True.`") - - self.electra = ElectraModel(config) - self.generator_predictions = ElectraGeneratorPredictions(config) - self.generator_lm_head = nn.Linear(config.embedding_size, config.vocab_size) - - self.init_weights() - - def get_output_embeddings(self): - return self.generator_lm_head - - def set_output_embeddings(self, new_embeddings): - self.generator_lm_head = new_embeddings - - @add_start_docstrings_to_model_forward(ELECTRA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=CausalLMOutputWithCrossAttentions, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - past_key_values: Optional[List[torch.Tensor]] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, CausalLMOutputWithCrossAttentions]: - r""" - encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, 
*optional*): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in - `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are - ignored (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that - don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all - `decoder_input_ids` of shape `(batch_size, sequence_length)`. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past_key_values`). - - Returns: - - Example: - - ```python - >>> from transformers import ElectraTokenizer, ElectraForCausalLM, ElectraConfig - >>> import torch - - >>> tokenizer = ElectraTokenizer.from_pretrained("google/electra-base-generator") - >>> config = ElectraConfig.from_pretrained("google/electra-base-generator") - >>> config.is_decoder = True - >>> model = ElectraForCausalLM.from_pretrained("google/electra-base-generator", config=config) - - >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") - >>> outputs = model(**inputs) - - >>> prediction_logits = outputs.logits - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - if labels is not None: - use_cache = False - - outputs = self.electra( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - prediction_scores = self.generator_lm_head(self.generator_predictions(sequence_output)) - - lm_loss = None - if labels is not None: - # we are doing next-token prediction; shift prediction scores and input ids by one - shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous() - labels = labels[:, 1:].contiguous() - loss_fct = CrossEntropyLoss() - lm_loss = loss_fct(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) - - if not return_dict: - output = (prediction_scores,) + outputs[1:] - return ((lm_loss,) + output) if lm_loss is not None else output - - return 
CausalLMOutputWithCrossAttentions( - loss=lm_loss, - logits=prediction_scores, - past_key_values=outputs.past_key_values, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - cross_attentions=outputs.cross_attentions, - ) - - # Copied from transformers.models.roberta.modeling_roberta.RobertaForCausalLM.prepare_inputs_for_generation - def prepare_inputs_for_generation(self, input_ids, past=None, attention_mask=None, **model_kwargs): - input_shape = input_ids.shape - # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly - if attention_mask is None: - attention_mask = input_ids.new_ones(input_shape) - - # cut decoder_input_ids if past is used - if past is not None: - input_ids = input_ids[:, -1:] - - return {"input_ids": input_ids, "attention_mask": attention_mask, "past_key_values": past} - - # Copied from transformers.models.roberta.modeling_roberta.RobertaForCausalLM._reorder_cache - def _reorder_cache(self, past, beam_idx): - reordered_past = () - for layer_past in past: - reordered_past += (tuple(past_state.index_select(0, beam_idx) for past_state in layer_past),) - return reordered_past diff --git a/spaces/NCTCMumbai/NCTC/models/official/benchmark/bert_benchmark_utils.py b/spaces/NCTCMumbai/NCTC/models/official/benchmark/bert_benchmark_utils.py deleted file mode 100644 index 705a243315616080fe15c70925ed74a905818cdc..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/benchmark/bert_benchmark_utils.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Utility functions or classes shared between BERT benchmarks.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import time - -# pylint: disable=g-bad-import-order -import numpy as np -from absl import flags -import tensorflow as tf -# pylint: enable=g-bad-import-order - -from official.utils.flags import core as flags_core -from official.benchmark.perfzero_benchmark import PerfZeroBenchmark - -FLAGS = flags.FLAGS - - -class BenchmarkTimerCallback(tf.keras.callbacks.Callback): - """Callback that records time it takes to run each batch.""" - - def __init__(self, num_batches_to_skip=10): - super(BenchmarkTimerCallback, self).__init__() - self.batch_start_times = {} - self.batch_stop_times = {} - - def on_batch_begin(self, batch, logs=None): - self.batch_start_times[batch] = time.time() - - def on_batch_end(self, batch, logs=None): - # If there are multiple steps_per_loop, the end batch index will not be the - # same as the starting index. Use the last starting index instead. 
- if batch not in self.batch_start_times: - batch = max(self.batch_start_times.keys()) - - self.batch_stop_times[batch] = time.time() - - def get_examples_per_sec(self, batch_size, num_batches_to_skip=1): - batch_durations = [] - for batch in self.batch_start_times: - if batch in self.batch_stop_times and batch >= num_batches_to_skip: - batch_durations.append(self.batch_stop_times[batch] - - self.batch_start_times[batch]) - return batch_size / np.mean(batch_durations) - - def get_startup_time(self, program_start_time): - return self.batch_start_times[0] - program_start_time - - -class BertBenchmarkBase(PerfZeroBenchmark): - """Base class to hold methods common to test classes.""" - local_flags = None - - def __init__(self, output_dir=None, tpu=None, **kwargs): - super(BertBenchmarkBase, self).__init__( - output_dir=output_dir, tpu=tpu, **kwargs) - self.num_gpus = 8 - self.timer_callback = None - - def _setup(self): - """Sets up and resets flags before each test.""" - super(BertBenchmarkBase, self)._setup() - self.timer_callback = BenchmarkTimerCallback() - - def _report_benchmark(self, stats, wall_time_sec, min_accuracy, max_accuracy): - """Report benchmark results by writing to local protobuf file. - - Args: - stats: dict returned from BERT models with known entries. - wall_time_sec: the during of the benchmark execution in seconds - min_accuracy: Minimum classification accuracy constraint to verify - correctness of the model. - max_accuracy: Maximum classification accuracy constraint to verify - correctness of the model. - """ - metrics = [{ - 'name': 'training_loss', - 'value': stats['train_loss'], - }] - if self.timer_callback: - metrics.append({ - 'name': - 'exp_per_second', - 'value': - self.timer_callback.get_examples_per_sec(FLAGS.train_batch_size * - FLAGS.steps_per_loop) - }) - else: - metrics.append({ - 'name': 'exp_per_second', - 'value': 0.0, - }) - if self.timer_callback and 'start_time_sec' in stats: - metrics.append({ - 'name': 'startup_time', - 'value': self.timer_callback.get_startup_time(stats['start_time_sec']) - }) - - if 'eval_metrics' in stats: - metrics.append({ - 'name': 'eval_accuracy', - 'value': stats['eval_metrics'], - 'min_value': min_accuracy, - 'max_value': max_accuracy, - }) - flags_str = flags_core.get_nondefault_flags_as_str() - self.report_benchmark( - iters=stats['total_training_steps'], - wall_time=wall_time_sec, - metrics=metrics, - extras={'flags': flags_str}) diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/tokenization_test.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/tokenization_test.py deleted file mode 100644 index 4a0503c3ed6999e3bd81aec4de8f7d64ec733bd9..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/bert/tokenization_test.py +++ /dev/null @@ -1,160 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import os -import tempfile - -import six -import tensorflow as tf - -from official.nlp.bert import tokenization - - -class TokenizationTest(tf.test.TestCase): - """Tokenization test. - - The implementation is forked from - https://github.com/google-research/bert/blob/master/tokenization_test.py." - """ - - def test_full_tokenizer(self): - vocab_tokens = [ - "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn", - "##ing", "," - ] - with tempfile.NamedTemporaryFile(delete=False) as vocab_writer: - if six.PY2: - vocab_writer.write("".join([x + "\n" for x in vocab_tokens])) - else: - vocab_writer.write("".join([x + "\n" for x in vocab_tokens - ]).encode("utf-8")) - - vocab_file = vocab_writer.name - - tokenizer = tokenization.FullTokenizer(vocab_file) - os.unlink(vocab_file) - - tokens = tokenizer.tokenize(u"UNwant\u00E9d,running") - self.assertAllEqual(tokens, ["un", "##want", "##ed", ",", "runn", "##ing"]) - - self.assertAllEqual( - tokenizer.convert_tokens_to_ids(tokens), [7, 4, 5, 10, 8, 9]) - - def test_chinese(self): - tokenizer = tokenization.BasicTokenizer() - - self.assertAllEqual( - tokenizer.tokenize(u"ah\u535A\u63A8zz"), - [u"ah", u"\u535A", u"\u63A8", u"zz"]) - - def test_basic_tokenizer_lower(self): - tokenizer = tokenization.BasicTokenizer(do_lower_case=True) - - self.assertAllEqual( - tokenizer.tokenize(u" \tHeLLo!how \n Are yoU? "), - ["hello", "!", "how", "are", "you", "?"]) - self.assertAllEqual(tokenizer.tokenize(u"H\u00E9llo"), ["hello"]) - - def test_basic_tokenizer_no_lower(self): - tokenizer = tokenization.BasicTokenizer(do_lower_case=False) - - self.assertAllEqual( - tokenizer.tokenize(u" \tHeLLo!how \n Are yoU? "), - ["HeLLo", "!", "how", "Are", "yoU", "?"]) - - def test_basic_tokenizer_no_split_on_punc(self): - tokenizer = tokenization.BasicTokenizer( - do_lower_case=True, split_on_punc=False) - - self.assertAllEqual( - tokenizer.tokenize(u" \tHeLLo!how \n Are yoU? "), - ["hello!how", "are", "you?"]) - - def test_wordpiece_tokenizer(self): - vocab_tokens = [ - "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn", - "##ing", "##!", "!" 
- ] - - vocab = {} - for (i, token) in enumerate(vocab_tokens): - vocab[token] = i - tokenizer = tokenization.WordpieceTokenizer(vocab=vocab) - - self.assertAllEqual(tokenizer.tokenize(""), []) - - self.assertAllEqual( - tokenizer.tokenize("unwanted running"), - ["un", "##want", "##ed", "runn", "##ing"]) - - self.assertAllEqual( - tokenizer.tokenize("unwanted running !"), - ["un", "##want", "##ed", "runn", "##ing", "!"]) - - self.assertAllEqual( - tokenizer.tokenize("unwanted running!"), - ["un", "##want", "##ed", "runn", "##ing", "##!"]) - - self.assertAllEqual( - tokenizer.tokenize("unwantedX running"), ["[UNK]", "runn", "##ing"]) - - def test_convert_tokens_to_ids(self): - vocab_tokens = [ - "[UNK]", "[CLS]", "[SEP]", "want", "##want", "##ed", "wa", "un", "runn", - "##ing" - ] - - vocab = {} - for (i, token) in enumerate(vocab_tokens): - vocab[token] = i - - self.assertAllEqual( - tokenization.convert_tokens_to_ids( - vocab, ["un", "##want", "##ed", "runn", "##ing"]), [7, 4, 5, 8, 9]) - - def test_is_whitespace(self): - self.assertTrue(tokenization._is_whitespace(u" ")) - self.assertTrue(tokenization._is_whitespace(u"\t")) - self.assertTrue(tokenization._is_whitespace(u"\r")) - self.assertTrue(tokenization._is_whitespace(u"\n")) - self.assertTrue(tokenization._is_whitespace(u"\u00A0")) - - self.assertFalse(tokenization._is_whitespace(u"A")) - self.assertFalse(tokenization._is_whitespace(u"-")) - - def test_is_control(self): - self.assertTrue(tokenization._is_control(u"\u0005")) - - self.assertFalse(tokenization._is_control(u"A")) - self.assertFalse(tokenization._is_control(u" ")) - self.assertFalse(tokenization._is_control(u"\t")) - self.assertFalse(tokenization._is_control(u"\r")) - self.assertFalse(tokenization._is_control(u"\U0001F4A9")) - - def test_is_punctuation(self): - self.assertTrue(tokenization._is_punctuation(u"-")) - self.assertTrue(tokenization._is_punctuation(u"$")) - self.assertTrue(tokenization._is_punctuation(u"`")) - self.assertTrue(tokenization._is_punctuation(u".")) - - self.assertFalse(tokenization._is_punctuation(u"A")) - self.assertFalse(tokenization._is_punctuation(u" ")) - - -if __name__ == "__main__": - tf.test.main() diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/configs/encoders.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/configs/encoders.py deleted file mode 100644 index 0af5b733d9a7b60af21a8be9021fafdfa085e34a..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/configs/encoders.py +++ /dev/null @@ -1,62 +0,0 @@ -# Lint as: python3 -# Copyright 2020 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Transformer Encoders. - -Includes configurations and instantiation methods. 
-""" - -import dataclasses -import tensorflow as tf - -from official.modeling import tf_utils -from official.modeling.hyperparams import base_config -from official.nlp.modeling import networks - - -@dataclasses.dataclass -class TransformerEncoderConfig(base_config.Config): - """BERT encoder configuration.""" - vocab_size: int = 30522 - hidden_size: int = 768 - num_layers: int = 12 - num_attention_heads: int = 12 - hidden_activation: str = "gelu" - intermediate_size: int = 3072 - dropout_rate: float = 0.1 - attention_dropout_rate: float = 0.1 - max_position_embeddings: int = 512 - type_vocab_size: int = 2 - initializer_range: float = 0.02 - - -def instantiate_encoder_from_cfg( - config: TransformerEncoderConfig) -> networks.TransformerEncoder: - """Instantiate a Transformer encoder network from TransformerEncoderConfig.""" - encoder_network = networks.TransformerEncoder( - vocab_size=config.vocab_size, - hidden_size=config.hidden_size, - num_layers=config.num_layers, - num_attention_heads=config.num_attention_heads, - intermediate_size=config.intermediate_size, - activation=tf_utils.get_activation(config.hidden_activation), - dropout_rate=config.dropout_rate, - attention_dropout_rate=config.attention_dropout_rate, - sequence_length=None, - max_sequence_length=config.max_position_embeddings, - type_vocab_size=config.type_vocab_size, - initializer=tf.keras.initializers.TruncatedNormal( - stddev=config.initializer_range)) - return encoder_network diff --git a/spaces/Navneet574/Kidney_Stone_Prediction/app.py b/spaces/Navneet574/Kidney_Stone_Prediction/app.py deleted file mode 100644 index 6be5678b633a407f44b92ece3138e3425dbf0e35..0000000000000000000000000000000000000000 --- a/spaces/Navneet574/Kidney_Stone_Prediction/app.py +++ /dev/null @@ -1,44 +0,0 @@ -import gradio as gr -from joblib import load -import numpy as np -import pandas as pd - -def predict_price(gravity, ph, osmo, cond, urea, calc): - model = load('Stone_Prediction.joblib') - - data = { - 'gravity': [gravity], - 'ph': [ph], - 'osmo': [osmo], - 'cond': [cond], - 'urea': [urea], - 'calc': [calc], - } - - Xinp = pd.DataFrame(data) - print(Xinp) - - stone = model.predict(Xinp) - - return stone[0] - -ui = gr.Interface( - fn=predict_price,inputs=[ - gr.inputs.Textbox(placeholder='gravity', default=0, - numeric=True, label='gravity (normal is between 1.002 - 1.030)'), - gr.inputs.Textbox(placeholder='PH', - default=4.5, numeric=True, label='PH value (normal is between 4.5 - 8.0)'), - gr.inputs.Textbox(placeholder='Osmolarity', default='500', numeric=True, label='Osmolarity (normal is between 500 mOsm/kg - 800 mOsm/kg)'), - gr.inputs.Textbox(placeholder='Conductivity', - default='50', numeric=True, label='Conductivity (normal is between 50 - 1500 µS/cm or 0.05 - 1.5 mS/cm)'), - gr.inputs.Textbox(placeholder='Urea', - default='12', numeric=True, label='Urea (normal is between 12 to 20 mg/dL or 3.6 to 7.1 mmol/L.)'), - gr.inputs.Textbox(placeholder='Calc', - default='2.5', numeric=True, label='Calc (normal is between 100 to 300 mg/dL or 2.5 to 7.5 mmol/L)'), - ], outputs=[ - "text" - ] -) - -if __name__ == "__main__": - ui.launch() \ No newline at end of file diff --git a/spaces/NickyGenN1/ImageClassification/README.md b/spaces/NickyGenN1/ImageClassification/README.md deleted file mode 100644 index dab3a42676d2eda99a5c150c2aca67b1099e2df1..0000000000000000000000000000000000000000 --- a/spaces/NickyGenN1/ImageClassification/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ImageClassification -emoji: 🦀 -colorFrom: yellow -colorTo: 
yellow -sdk: gradio -sdk_version: 3.11.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/inference_realesrgan_video.py b/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/inference_realesrgan_video.py deleted file mode 100644 index 170fb23971d135ebf0c854c652a0005d3f31abaa..0000000000000000000000000000000000000000 --- a/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/inference_realesrgan_video.py +++ /dev/null @@ -1,566 +0,0 @@ -import argparse -import cv2 -import glob -import mimetypes -import numpy as np -import os -import shutil -import subprocess -import torch -from basicsr.archs.rrdbnet_arch import RRDBNet -from basicsr.utils.download_util import load_file_from_url -from os import path as osp -from tqdm import tqdm - -from realesrgan import RealESRGANer -from realesrgan.archs.srvgg_arch import SRVGGNetCompact - -try: - import ffmpeg -except ImportError: - import pip - - pip.main(["install", "--user", "ffmpeg-python"]) - import ffmpeg - - -def get_video_meta_info(video_path): - ret = {} - probe = ffmpeg.probe(video_path) - video_streams = [ - stream for stream in probe["streams"] if stream["codec_type"] == "video" - ] - has_audio = any(stream["codec_type"] == "audio" for stream in probe["streams"]) - ret["width"] = video_streams[0]["width"] - ret["height"] = video_streams[0]["height"] - ret["fps"] = eval(video_streams[0]["avg_frame_rate"]) - ret["audio"] = ffmpeg.input(video_path).audio if has_audio else None - ret["nb_frames"] = int(video_streams[0]["nb_frames"]) - return ret - - -def get_sub_video(args, num_process, process_idx): - if num_process == 1: - return args.input - meta = get_video_meta_info(args.input) - duration = int(meta["nb_frames"] / meta["fps"]) - part_time = duration // num_process - print(f"duration: {duration}, part_time: {part_time}") - os.makedirs( - osp.join(args.output, f"{args.video_name}_inp_tmp_videos"), exist_ok=True - ) - out_path = osp.join( - args.output, f"{args.video_name}_inp_tmp_videos", f"{process_idx:03d}.mp4" - ) - cmd = [ - args.ffmpeg_bin, - f"-i {args.input}", - "-ss", - f"{part_time * process_idx}", - f"-to {part_time * (process_idx + 1)}" - if process_idx != num_process - 1 - else "", - "-async 1", - out_path, - "-y", - ] - print(" ".join(cmd)) - subprocess.call(" ".join(cmd), shell=True) - return out_path - - -class Reader: - def __init__(self, args, total_workers=1, worker_idx=0): - self.args = args - input_type = mimetypes.guess_type(args.input)[0] - self.input_type = "folder" if input_type is None else input_type - self.paths = [] # for image&folder type - self.audio = None - self.input_fps = None - if self.input_type.startswith("video"): - video_path = get_sub_video(args, total_workers, worker_idx) - self.stream_reader = ( - ffmpeg.input(video_path) - .output("pipe:", format="rawvideo", pix_fmt="bgr24", loglevel="error") - .run_async(pipe_stdin=True, pipe_stdout=True, cmd=args.ffmpeg_bin) - ) - meta = get_video_meta_info(video_path) - self.width = meta["width"] - self.height = meta["height"] - self.input_fps = meta["fps"] - self.audio = meta["audio"] - self.nb_frames = meta["nb_frames"] - - else: - if self.input_type.startswith("image"): - self.paths = [args.input] - else: - paths = sorted(glob.glob(os.path.join(args.input, "*"))) - tot_frames = len(paths) - num_frame_per_worker = tot_frames // total_workers + ( - 1 if tot_frames % 
total_workers else 0 - ) - self.paths = paths[ - num_frame_per_worker - * worker_idx : num_frame_per_worker - * (worker_idx + 1) - ] - - self.nb_frames = len(self.paths) - assert self.nb_frames > 0, "empty folder" - from PIL import Image - - tmp_img = Image.open(self.paths[0]) - self.width, self.height = tmp_img.size - self.idx = 0 - - def get_resolution(self): - return self.height, self.width - - def get_fps(self): - if self.args.fps is not None: - return self.args.fps - elif self.input_fps is not None: - return self.input_fps - return 24 - - def get_audio(self): - return self.audio - - def __len__(self): - return self.nb_frames - - def get_frame_from_stream(self): - img_bytes = self.stream_reader.stdout.read( - self.width * self.height * 3 - ) # 3 bytes for one pixel - if not img_bytes: - return None - img = np.frombuffer(img_bytes, np.uint8).reshape([self.height, self.width, 3]) - return img - - def get_frame_from_list(self): - if self.idx >= self.nb_frames: - return None - img = cv2.imread(self.paths[self.idx]) - self.idx += 1 - return img - - def get_frame(self): - if self.input_type.startswith("video"): - return self.get_frame_from_stream() - else: - return self.get_frame_from_list() - - def close(self): - if self.input_type.startswith("video"): - self.stream_reader.stdin.close() - self.stream_reader.wait() - - -class Writer: - def __init__(self, args, audio, height, width, video_save_path, fps): - out_width, out_height = int(width * args.outscale), int(height * args.outscale) - if out_height > 2160: - print( - "You are generating video that is larger than 4K, which will be very slow due to IO speed.", - "We highly recommend to decrease the outscale(aka, -s).", - ) - - if audio is not None: - self.stream_writer = ( - ffmpeg.input( - "pipe:", - format="rawvideo", - pix_fmt="bgr24", - s=f"{out_width}x{out_height}", - framerate=fps, - ) - .output( - audio, - video_save_path, - pix_fmt="yuv420p", - vcodec="libx264", - loglevel="error", - acodec="copy", - ) - .overwrite_output() - .run_async(pipe_stdin=True, pipe_stdout=True, cmd=args.ffmpeg_bin) - ) - else: - self.stream_writer = ( - ffmpeg.input( - "pipe:", - format="rawvideo", - pix_fmt="bgr24", - s=f"{out_width}x{out_height}", - framerate=fps, - ) - .output( - video_save_path, - pix_fmt="yuv420p", - vcodec="libx264", - loglevel="error", - ) - .overwrite_output() - .run_async(pipe_stdin=True, pipe_stdout=True, cmd=args.ffmpeg_bin) - ) - - def write_frame(self, frame): - frame = frame.astype(np.uint8).tobytes() - self.stream_writer.stdin.write(frame) - - def close(self): - self.stream_writer.stdin.close() - self.stream_writer.wait() - - -def inference_video(args, video_save_path, device=None, total_workers=1, worker_idx=0): - # ---------------------- determine models according to model names ---------------------- # - args.model_name = args.model_name.split(".pth")[0] - if args.model_name == "RealESRGAN_x4plus": # x4 RRDBNet model - model = RRDBNet( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_block=23, - num_grow_ch=32, - scale=4, - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth" - ] - elif args.model_name == "RealESRNet_x4plus": # x4 RRDBNet model - model = RRDBNet( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_block=23, - num_grow_ch=32, - scale=4, - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth" - ] - elif ( - args.model_name == "RealESRGAN_x4plus_anime_6B" - ): # x4 
RRDBNet model with 6 blocks - model = RRDBNet( - num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4 - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth" - ] - elif args.model_name == "RealESRGAN_x2plus": # x2 RRDBNet model - model = RRDBNet( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_block=23, - num_grow_ch=32, - scale=2, - ) - netscale = 2 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth" - ] - elif args.model_name == "realesr-animevideov3": # x4 VGG-style model (XS size) - model = SRVGGNetCompact( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_conv=16, - upscale=4, - act_type="prelu", - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth" - ] - elif args.model_name == "realesr-general-x4v3": # x4 VGG-style model (S size) - model = SRVGGNetCompact( - num_in_ch=3, - num_out_ch=3, - num_feat=64, - num_conv=32, - upscale=4, - act_type="prelu", - ) - netscale = 4 - file_url = [ - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-wdn-x4v3.pth", - "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth", - ] - - # ---------------------- determine model paths ---------------------- # - model_path = os.path.join("weights", args.model_name + ".pth") - if not os.path.isfile(model_path): - ROOT_DIR = os.path.dirname(os.path.abspath(__file__)) - for url in file_url: - # model_path will be updated - model_path = load_file_from_url( - url=url, - model_dir=os.path.join(ROOT_DIR, "weights"), - progress=True, - file_name=None, - ) - - # use dni to control the denoise strength - dni_weight = None - if args.model_name == "realesr-general-x4v3" and args.denoise_strength != 1: - wdn_model_path = model_path.replace( - "realesr-general-x4v3", "realesr-general-wdn-x4v3" - ) - model_path = [model_path, wdn_model_path] - dni_weight = [args.denoise_strength, 1 - args.denoise_strength] - - # restorer - upsampler = RealESRGANer( - scale=netscale, - model_path=model_path, - dni_weight=dni_weight, - model=model, - tile=args.tile, - tile_pad=args.tile_pad, - pre_pad=args.pre_pad, - half=not args.fp32, - device=device, - ) - - if "anime" in args.model_name and args.face_enhance: - print( - "face_enhance is not supported in anime models, we turned this option off for you. " - "if you insist on turning it on, please manually comment the relevant lines of code." 
- ) - args.face_enhance = False - - if args.face_enhance: # Use GFPGAN for face enhancement - from gfpgan import GFPGANer - - face_enhancer = GFPGANer( - model_path="https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth", - upscale=args.outscale, - arch="clean", - channel_multiplier=2, - bg_upsampler=upsampler, - ) # TODO support custom device - else: - face_enhancer = None - - reader = Reader(args, total_workers, worker_idx) - audio = reader.get_audio() - height, width = reader.get_resolution() - fps = reader.get_fps() - writer = Writer(args, audio, height, width, video_save_path, fps) - - pbar = tqdm(total=len(reader), unit="frame", desc="inference") - while True: - img = reader.get_frame() - if img is None: - break - - try: - if args.face_enhance: - _, _, output = face_enhancer.enhance( - img, has_aligned=False, only_center_face=False, paste_back=True - ) - else: - output, _ = upsampler.enhance(img, outscale=args.outscale) - except RuntimeError as error: - print("Error", error) - print( - "If you encounter CUDA out of memory, try to set --tile with a smaller number." - ) - else: - writer.write_frame(output) - - torch.cuda.synchronize(device) - pbar.update(1) - - reader.close() - writer.close() - - -def run(args): - args.video_name = osp.splitext(os.path.basename(args.input))[0] - video_save_path = osp.join(args.output, f"{args.video_name}_{args.suffix}.mp4") - - if args.extract_frame_first: - tmp_frames_folder = osp.join(args.output, f"{args.video_name}_inp_tmp_frames") - os.makedirs(tmp_frames_folder, exist_ok=True) - os.system( - f"ffmpeg -i {args.input} -qscale:v 1 -qmin 1 -qmax 1 -vsync 0 {tmp_frames_folder}/frame%08d.png" - ) - args.input = tmp_frames_folder - - num_gpus = torch.cuda.device_count() - num_process = num_gpus * args.num_process_per_gpu - if num_process == 1: - inference_video(args, video_save_path) - return - - ctx = torch.multiprocessing.get_context("spawn") - pool = ctx.Pool(num_process) - os.makedirs( - osp.join(args.output, f"{args.video_name}_out_tmp_videos"), exist_ok=True - ) - pbar = tqdm(total=num_process, unit="sub_video", desc="inference") - for i in range(num_process): - sub_video_save_path = osp.join( - args.output, f"{args.video_name}_out_tmp_videos", f"{i:03d}.mp4" - ) - pool.apply_async( - inference_video, - args=( - args, - sub_video_save_path, - torch.device(i % num_gpus), - num_process, - i, - ), - callback=lambda arg: pbar.update(1), - ) - pool.close() - pool.join() - - # combine sub videos - # prepare vidlist.txt - with open(f"{args.output}/{args.video_name}_vidlist.txt", "w") as f: - for i in range(num_process): - f.write(f"file '{args.video_name}_out_tmp_videos/{i:03d}.mp4'\n") - - cmd = [ - args.ffmpeg_bin, - "-f", - "concat", - "-safe", - "0", - "-i", - f"{args.output}/{args.video_name}_vidlist.txt", - "-c", - "copy", - f"{video_save_path}", - ] - print(" ".join(cmd)) - subprocess.call(cmd) - shutil.rmtree(osp.join(args.output, f"{args.video_name}_out_tmp_videos")) - if osp.exists(osp.join(args.output, f"{args.video_name}_inp_tmp_videos")): - shutil.rmtree(osp.join(args.output, f"{args.video_name}_inp_tmp_videos")) - os.remove(f"{args.output}/{args.video_name}_vidlist.txt") - - -def main(): - """Inference demo for Real-ESRGAN. - It mainly for restoring anime videos. 
- - """ - parser = argparse.ArgumentParser() - parser.add_argument( - "-i", "--input", type=str, default="inputs", help="Input video, image or folder" - ) - parser.add_argument( - "-n", - "--model_name", - type=str, - default="realesr-animevideov3", - help=( - "Model names: realesr-animevideov3 | RealESRGAN_x4plus_anime_6B | RealESRGAN_x4plus | RealESRNet_x4plus |" - " RealESRGAN_x2plus | realesr-general-x4v3" - "Default:realesr-animevideov3" - ), - ) - parser.add_argument( - "-o", "--output", type=str, default="results", help="Output folder" - ) - parser.add_argument( - "-dn", - "--denoise_strength", - type=float, - default=0.5, - help=( - "Denoise strength. 0 for weak denoise (keep noise), 1 for strong denoise ability. " - "Only used for the realesr-general-x4v3 model" - ), - ) - parser.add_argument( - "-s", - "--outscale", - type=float, - default=4, - help="The final upsampling scale of the image", - ) - parser.add_argument( - "--suffix", type=str, default="out", help="Suffix of the restored video" - ) - parser.add_argument( - "-t", - "--tile", - type=int, - default=0, - help="Tile size, 0 for no tile during testing", - ) - parser.add_argument("--tile_pad", type=int, default=10, help="Tile padding") - parser.add_argument( - "--pre_pad", type=int, default=0, help="Pre padding size at each border" - ) - parser.add_argument( - "--face_enhance", action="store_true", help="Use GFPGAN to enhance face" - ) - parser.add_argument( - "--fp32", - action="store_true", - help="Use fp32 precision during inference. Default: fp16 (half precision).", - ) - parser.add_argument( - "--fps", type=float, default=None, help="FPS of the output video" - ) - parser.add_argument( - "--ffmpeg_bin", type=str, default="ffmpeg", help="The path to ffmpeg" - ) - parser.add_argument("--extract_frame_first", action="store_true") - parser.add_argument("--num_process_per_gpu", type=int, default=1) - - parser.add_argument( - "--alpha_upsampler", - type=str, - default="realesrgan", - help="The upsampler for the alpha channels. Options: realesrgan | bicubic", - ) - parser.add_argument( - "--ext", - type=str, - default="auto", - help="Image extension. Options: auto | jpg | png, auto means using the same extension as inputs", - ) - args = parser.parse_args() - - args.input = args.input.rstrip("/").rstrip("\\") - os.makedirs(args.output, exist_ok=True) - - if mimetypes.guess_type(args.input)[0] is not None and mimetypes.guess_type( - args.input - )[0].startswith("video"): - is_video = True - else: - is_video = False - - if is_video and args.input.endswith(".flv"): - mp4_path = args.input.replace(".flv", ".mp4") - os.system(f"ffmpeg -i {args.input} -codec copy {mp4_path}") - args.input = mp4_path - - if args.extract_frame_first and not is_video: - args.extract_frame_first = False - - run(args) - - if args.extract_frame_first: - tmp_frames_folder = osp.join(args.output, f"{args.video_name}_inp_tmp_frames") - shutil.rmtree(tmp_frames_folder) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/linformer/linformer_src/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/linformer/linformer_src/__init__.py deleted file mode 100644 index 1c52f135ea6f99d0effe8ce1f7d77cbd66be3745..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/linformer/linformer_src/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
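
For reference, the inference_realesrgan_video.py script deleted above is driven entirely by the command-line flags defined in its main(); a minimal invocation might look like the sketch below. The flag names and the default model name come straight from the argparse definitions in the deleted file, while the input path, scale value and output folder are placeholders.

```python
# Hypothetical invocation of the deleted inference_realesrgan_video.py script.
# Flag names (-i, -n, -s, -o) and the default model "realesr-animevideov3" are
# taken from its argparse setup; the file paths are made-up placeholders.
import subprocess

subprocess.call([
    "python", "inference_realesrgan_video.py",
    "-i", "inputs/demo.mp4",        # input video (placeholder path)
    "-n", "realesr-animevideov3",   # anime-video model (the script's default)
    "-s", "2",                      # final upsampling scale
    "-o", "results",                # output folder
])
# With the default --suffix "out", the upscaled video would be written to
# results/demo_out.mp4.
```
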
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .models import linformer_roberta # noqa diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/data/collaters.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/data/collaters.py deleted file mode 100644 index 6acfec876b87e5a00bc92083b1181301a2a18e3f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/data/collaters.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" - This module contains collection of classes which implement - collate functionalities for various tasks. - - Collaters should know what data to expect for each sample - and they should pack / collate them into batches -""" - - -from __future__ import absolute_import, division, print_function, unicode_literals - -import numpy as np -import torch -from fairseq.data import data_utils as fairseq_data_utils - - -class Seq2SeqCollater(object): - """ - Implements collate function mainly for seq2seq tasks - This expects each sample to contain feature (src_tokens) and - targets. - This collator is also used for aligned training task. - """ - - def __init__( - self, - feature_index=0, - label_index=1, - pad_index=1, - eos_index=2, - move_eos_to_beginning=True, - ): - self.feature_index = feature_index - self.label_index = label_index - self.pad_index = pad_index - self.eos_index = eos_index - self.move_eos_to_beginning = move_eos_to_beginning - - def _collate_frames(self, frames): - """Convert a list of 2d frames into a padded 3d tensor - Args: - frames (list): list of 2d frames of size L[i]*f_dim. Where L[i] is - length of i-th frame and f_dim is static dimension of features - Returns: - 3d tensor of size len(frames)*len_max*f_dim where len_max is max of L[i] - """ - len_max = max(frame.size(0) for frame in frames) - f_dim = frames[0].size(1) - res = frames[0].new(len(frames), len_max, f_dim).fill_(0.0) - - for i, v in enumerate(frames): - res[i, : v.size(0)] = v - - return res - - def collate(self, samples): - """ - utility function to collate samples into batch for speech recognition. 
- """ - if len(samples) == 0: - return {} - - # parse samples into torch tensors - parsed_samples = [] - for s in samples: - # skip invalid samples - if s["data"][self.feature_index] is None: - continue - source = s["data"][self.feature_index] - if isinstance(source, (np.ndarray, np.generic)): - source = torch.from_numpy(source) - target = s["data"][self.label_index] - if isinstance(target, (np.ndarray, np.generic)): - target = torch.from_numpy(target).long() - elif isinstance(target, list): - target = torch.LongTensor(target) - - parsed_sample = {"id": s["id"], "source": source, "target": target} - parsed_samples.append(parsed_sample) - samples = parsed_samples - - id = torch.LongTensor([s["id"] for s in samples]) - frames = self._collate_frames([s["source"] for s in samples]) - # sort samples by descending number of frames - frames_lengths = torch.LongTensor([s["source"].size(0) for s in samples]) - frames_lengths, sort_order = frames_lengths.sort(descending=True) - id = id.index_select(0, sort_order) - frames = frames.index_select(0, sort_order) - - target = None - target_lengths = None - prev_output_tokens = None - if samples[0].get("target", None) is not None: - ntokens = sum(len(s["target"]) for s in samples) - target = fairseq_data_utils.collate_tokens( - [s["target"] for s in samples], - self.pad_index, - self.eos_index, - left_pad=False, - move_eos_to_beginning=False, - ) - target = target.index_select(0, sort_order) - target_lengths = torch.LongTensor( - [s["target"].size(0) for s in samples] - ).index_select(0, sort_order) - prev_output_tokens = fairseq_data_utils.collate_tokens( - [s["target"] for s in samples], - self.pad_index, - self.eos_index, - left_pad=False, - move_eos_to_beginning=self.move_eos_to_beginning, - ) - prev_output_tokens = prev_output_tokens.index_select(0, sort_order) - else: - ntokens = sum(len(s["source"]) for s in samples) - - batch = { - "id": id, - "ntokens": ntokens, - "net_input": {"src_tokens": frames, "src_lengths": frames_lengths}, - "target": target, - "target_lengths": target_lengths, - "nsentences": len(samples), - } - if prev_output_tokens is not None: - batch["net_input"]["prev_output_tokens"] = prev_output_tokens - return batch diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/libri_labels.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/libri_labels.py deleted file mode 100644 index 694a202604c7a4a480550550679ce6c16bd10e42..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/libri_labels.py +++ /dev/null @@ -1,56 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
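
To make the Seq2SeqCollater defined in collaters.py above a little more concrete, here is a small usage sketch; the import path mirrors the file's location in the fairseq examples tree (an assumption), and the feature dimensions and token ids are arbitrary placeholders.

```python
# Minimal sketch (not from the original repo) of feeding samples to the
# Seq2SeqCollater defined above: each sample carries an id plus a "data" list
# holding acoustic frames at feature_index and target tokens at label_index.
import numpy as np
from examples.speech_recognition.data.collaters import Seq2SeqCollater  # assumed import path

collater = Seq2SeqCollater(feature_index=0, label_index=1, pad_index=1, eos_index=2)
samples = [
    {"id": 0, "data": [np.random.rand(120, 80).astype(np.float32), [4, 5, 6, 2]]},
    {"id": 1, "data": [np.random.rand(95, 80).astype(np.float32), [7, 8, 2]]},
]
batch = collater.collate(samples)
# batch["net_input"]["src_tokens"]  -> frames padded to shape (2, 120, 80)
# batch["net_input"]["src_lengths"] -> tensor([120, 95]) (sorted longest first)
# batch["target"], batch["target_lengths"], batch["ntokens"] cover the labels
```
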
- -""" -Helper script to pre-compute embeddings for a flashlight (previously called wav2letter++) dataset -""" - -import argparse -import os - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("tsv") - parser.add_argument("--output-dir", required=True) - parser.add_argument("--output-name", required=True) - args = parser.parse_args() - - os.makedirs(args.output_dir, exist_ok=True) - - transcriptions = {} - - with open(args.tsv, "r") as tsv, open( - os.path.join(args.output_dir, args.output_name + ".ltr"), "w" - ) as ltr_out, open( - os.path.join(args.output_dir, args.output_name + ".wrd"), "w" - ) as wrd_out: - root = next(tsv).strip() - for line in tsv: - line = line.strip() - dir = os.path.dirname(line) - if dir not in transcriptions: - parts = dir.split(os.path.sep) - trans_path = f"{parts[-2]}-{parts[-1]}.trans.txt" - path = os.path.join(root, dir, trans_path) - assert os.path.exists(path) - texts = {} - with open(path, "r") as trans_f: - for tline in trans_f: - items = tline.strip().split() - texts[items[0]] = " ".join(items[1:]) - transcriptions[dir] = texts - part = os.path.basename(line).split(".")[0] - assert part in transcriptions[dir] - print(transcriptions[dir][part], file=wrd_out) - print( - " ".join(list(transcriptions[dir][part].replace(" ", "|"))) + " |", - file=ltr_out, - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_cluster_faiss.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_cluster_faiss.py deleted file mode 100644 index 632a69e9f4bd98d33abb689c15557c818d0e35ea..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_cluster_faiss.py +++ /dev/null @@ -1,210 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
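
As a quick illustration of what libri_labels.py above emits: the word-level (.wrd) line is the raw transcript, while the letter-level (.ltr) line replaces spaces with "|" and separates every character. The sketch below mirrors the expression used in the deleted script, applied to a made-up transcript.

```python
# Sketch of the .wrd / .ltr label lines produced by libri_labels.py above for a
# single (hypothetical) LibriSpeech transcript; the .ltr expression mirrors the
# one in the deleted script.
transcript = "HELLO WORLD"

wrd_line = transcript
ltr_line = " ".join(list(transcript.replace(" ", "|"))) + " |"

print(wrd_line)  # HELLO WORLD
print(ltr_line)  # H E L L O | W O R L D |
```
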
- -import argparse -import gc -import os -import os.path as osp -import random -import numpy as np -import tqdm -import torch - -from collections import namedtuple - -import faiss - -import fairseq -import soundfile as sf - - -def get_parser(): - parser = argparse.ArgumentParser( - description="compute kmeans codebook from kaldi-computed feats" - ) - # fmt: off - parser.add_argument('data', help='location of tsv files') - parser.add_argument('--save-dir', help='where to save the output', required=True) - parser.add_argument('--checkpoint', type=str, help='checkpoint for wav2vec model (if using wav2vec features)', required=True) - parser.add_argument('--sample-pct', '-r', type=float, help='percentage of timesteps to sample', default=0) - parser.add_argument('--layer', '-l', type=int, help='which layer to read', default=14) - parser.add_argument('--faiss-specs', '-f', type=str, - help='faiss index specs; separated by space ' - 'format is: PCAx_NORM_CLUSx_SPHERICAL -> ' - 'PCAx if exists first apply PCA ' - 'NORM if exists, normalize the vector by L2 norm ' - 'CLUSx must exist, cluster to x clusters ' - 'SPEHRICAL if exists, apply spherical kmeans', - default='l2') - # fmt: on - - return parser - - -faiss_spec = namedtuple("faiss_spec", ["pca", "norm", "n_clus", "sphere", "spec_str"]) - - -def parse_faiss_specs(specs_str): - specs = [] - for ss in specs_str.split(): - comps = ss.split("_") - pca = 0 - norm = False - n_clus = 0 - sphere = False - for c in comps: - if c.startswith("PCA"): - pca = int(c[3:]) - elif c == "NORM": - norm = True - elif c.startswith("CLUS"): - n_clus = int(c[4:]) - elif c == "SPHERICAL": - sphere = True - assert n_clus > 0 - specs.append( - faiss_spec(pca=pca, norm=norm, n_clus=n_clus, sphere=sphere, spec_str=ss) - ) - return specs - - -class Wav2VecFeatureReader(object): - def __init__(self, cp_file, layer): - state = fairseq.checkpoint_utils.load_checkpoint_to_cpu(cp_file) - - self.layer = layer - - if "cfg" in state: - w2v_args = state["cfg"] - task = fairseq.tasks.setup_task(w2v_args.task) - model = task.build_model(w2v_args.model) - else: - w2v_args = state["args"] - task = fairseq.tasks.setup_task(w2v_args) - model = task.build_model(w2v_args) - model.load_state_dict(state["model"], strict=True) - model.eval() - model.cuda() - self.model = model - - def read_audio(self, fname): - """Load an audio file and return PCM along with the sample rate""" - wav, sr = sf.read(fname) - assert sr == 16e3 - - return wav - - def get_feats(self, loc): - x = self.read_audio(loc) - with torch.no_grad(): - source = torch.from_numpy(x).view(1, -1).float().cuda() - res = self.model( - source=source, mask=False, features_only=True, layer=self.layer - ) - return res["layer_results"][self.layer][0].squeeze(1) - - -def get_iterator(args): - with open(args.data, "r") as fp: - lines = fp.read().split("\n") - root = lines.pop(0).strip() - files = [osp.join(root, line.split("\t")[0]) for line in lines if len(line) > 0] - - if getattr(args, "sample_pct", 0) > 0: - files = random.sample(files, int(args.sample_pct * len(files))) - num = len(files) - reader = Wav2VecFeatureReader(args.checkpoint, args.layer) - - def iterate(): - for fname in files: - feats = reader.get_feats(fname) - yield feats.cpu().numpy() - - return iterate, num - - -def main(): - parser = get_parser() - args = parser.parse_args() - - faiss_specs = parse_faiss_specs(args.faiss_specs) - print("Faiss Specs:", faiss_specs) - - feat_path = osp.join(args.save_dir, "features") - if osp.exists(feat_path + ".npy"): - feats = 
np.load(feat_path + ".npy") - else: - generator, num = get_iterator(args) - iterator = generator() - - feats = [] - for f in tqdm.tqdm(iterator, total=num): - feats.append(f) - - del iterator - del generator - - feats = np.concatenate(feats) - - print(feats.shape) - - os.makedirs(args.save_dir, exist_ok=True) - # np.save(feat_path, feats) - - gc.collect() - torch.cuda.empty_cache() - - reload = False - for spec in faiss_specs: - print("Processing spec", spec) - - if reload: - print("Reloading...") - del feats - gc.collect() - feats = np.load(feat_path + ".npy") - - save_path = osp.join(args.save_dir, spec.spec_str) - os.makedirs(save_path, exist_ok=True) - d = feats.shape[-1] - x = feats - if spec.pca > 0: - print("Computing PCA") - pca = faiss.PCAMatrix(d, spec.pca) - pca.train(x) - d = spec.pca - b = faiss.vector_to_array(pca.b) - A = faiss.vector_to_array(pca.A).reshape(pca.d_out, pca.d_in) - np.save(osp.join(save_path, "pca_A"), A.T) - np.save(osp.join(save_path, "pca_b"), b) - print("Applying PCA") - x = pca.apply_py(x) - - if spec.norm: - reload = spec.pca <= 0 - print("Normalizing") - faiss.normalize_L2(x) - - print("Computing kmeans") - kmeans = faiss.Kmeans( - d, - spec.n_clus, - niter=50, - verbose=True, - spherical=spec.sphere, - max_points_per_centroid=feats.shape[0], - gpu=True, - nredo=3, - ) - kmeans.train(x) - np.save(osp.join(save_path, "centroids"), kmeans.centroids) - del kmeans - del x - gc.collect() - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/cluster_kmeans.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/cluster_kmeans.py deleted file mode 100644 index 7cf844a95a075ee9ad318dc11dd71537d1ef6a5b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/cluster_kmeans.py +++ /dev/null @@ -1,212 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -import os -import time - -import numpy as np -from sklearn.cluster import MiniBatchKMeans - -import joblib -from examples.textless_nlp.gslm.speech2unit.pretrained.utils import ( - get_and_dump_features, - get_features, -) - - -def get_logger(): - log_format = "[%(asctime)s] [%(levelname)s]: %(message)s" - logging.basicConfig(format=log_format, level=logging.INFO) - logger = logging.getLogger(__name__) - return logger - - -def get_parser(): - parser = argparse.ArgumentParser( - description="Learn K-means clustering over acoustic features." 
- ) - - # Features arguments - parser.add_argument( - "--in_features_path", type=str, default=None, help="Features file path" - ) - parser.add_argument( - "--feature_type", - type=str, - choices=["logmel", "hubert", "w2v2", "cpc"], - default=None, - help="Acoustic feature type", - ) - parser.add_argument( - "--manifest_path", - type=str, - default=None, - help="Manifest file containing the root dir and file names", - ) - parser.add_argument( - "--out_features_path", - type=str, - default=None, - help="Features file path to write to", - ) - parser.add_argument( - "--checkpoint_path", - type=str, - help="Pretrained acoustic model checkpoint", - ) - parser.add_argument( - "--layer", - type=int, - help="The layer of the pretrained model to extract features from", - default=-1, - ) - parser.add_argument( - "--sample_pct", - type=float, - help="Percent data to use for K-means training", - default=0.1, - ) - - # K-means arguments - parser.add_argument( - "--num_clusters", type=int, help="Nubmer of clusters", default=50 - ) - parser.add_argument("--init", default="k-means++") - parser.add_argument( - "--max_iter", - type=int, - help="Maximum number of iterations for K-means training", - default=150, - ) - parser.add_argument( - "--batch_size", - type=int, - help="Batch size for K-means training", - default=10000, - ) - parser.add_argument("--tol", default=0.0, type=float) - parser.add_argument("--max_no_improvement", default=100, type=int) - parser.add_argument("--n_init", default=20, type=int) - parser.add_argument("--reassignment_ratio", default=0.5, type=float) - parser.add_argument( - "--out_kmeans_model_path", - type=str, - required=True, - help="Path to save K-means model", - ) - - # Leftovers - parser.add_argument( - "--seed", - type=int, - help="Random seed to use for K-means training", - default=1369, - ) - - return parser - - -def get_kmeans_model( - n_clusters, - init, - max_iter, - batch_size, - tol, - max_no_improvement, - n_init, - reassignment_ratio, - random_state, -): - return MiniBatchKMeans( - n_clusters=n_clusters, - init=init, - max_iter=max_iter, - batch_size=batch_size, - tol=tol, - max_no_improvement=max_no_improvement, - n_init=n_init, - reassignment_ratio=reassignment_ratio, - random_state=random_state, - verbose=1, - compute_labels=True, - init_size=None, - ) - - -def train_kmeans(kmeans_model, features_batch): - start_time = time.time() - kmeans_model.fit(features_batch) - time_taken = round((time.time() - start_time) // 60, 2) - return kmeans_model, time_taken - - -def main(args, logger): - # Features loading/extraction for K-means - if args.in_features_path: - # Feature loading - logger.info(f"Loading features from {args.in_features_path}...") - features_batch = np.load(args.in_features_path, allow_pickle=True) - else: - # Feature extraction - logger.info(f"Extracting {args.feature_type} acoustic features...") - features_batch = ( - get_features( - feature_type=args.feature_type, - checkpoint_path=args.checkpoint_path, - layer=args.layer, - manifest_path=args.manifest_path, - sample_pct=args.sample_pct, - flatten=True, - ) - if not args.out_features_path - else get_and_dump_features( - feature_type=args.feature_type, - checkpoint_path=args.checkpoint_path, - layer=args.layer, - manifest_path=args.manifest_path, - sample_pct=args.sample_pct, - flatten=True, - out_features_path=args.out_features_path, - ) - ) - if args.out_features_path: - logger.info( - f"Saved extracted features at {args.out_features_path}" - ) - logger.info(f"Features shape = 
{features_batch.shape}\n") - - # Learn and save K-means model - kmeans_model = get_kmeans_model( - n_clusters=args.num_clusters, - init=args.init, - max_iter=args.max_iter, - batch_size=args.batch_size, - tol=args.tol, - max_no_improvement=args.max_no_improvement, - n_init=args.n_init, - reassignment_ratio=args.reassignment_ratio, - random_state=args.seed, - ) - logger.info("Starting k-means training...") - kmeans_model, time_taken = train_kmeans( - kmeans_model=kmeans_model, features_batch=features_batch - ) - logger.info(f"...done k-means training in {time_taken} minutes") - inertia = -kmeans_model.score(features_batch) / len(features_batch) - logger.info(f"Total intertia: {round(inertia, 2)}\n") - - logger.info(f"Saving k-means model to {args.out_kmeans_model_path}") - os.makedirs(os.path.dirname(args.out_kmeans_model_path), exist_ok=True) - joblib.dump(kmeans_model, open(args.out_kmeans_model_path, "wb")) - - -if __name__ == "__main__": - parser = get_parser() - args = parser.parse_args() - logger = get_logger() - logger.info(args) - main(args, logger) diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/adaptive_span/adaptive_span_model_wrapper.py b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/adaptive_span/adaptive_span_model_wrapper.py deleted file mode 100644 index 5b147fe11f9d730438d036321a2d4a5d776efaa2..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/adaptive_span/adaptive_span_model_wrapper.py +++ /dev/null @@ -1,145 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from dataclasses import dataclass -from typing import Dict, List, Optional - -import torch -from fairseq.dataclass import FairseqDataclass -from fairseq.models import ( - FairseqIncrementalDecoder, - FairseqLanguageModel, - register_model, -) -from .adaptive_span_model import TransformerSeq as AdaptiveSpanTransformerModel - - -logger = logging.getLogger(__name__) - - -@dataclass -class AdaptiveSpanSmallConfig(FairseqDataclass): - # defaults come from https://github.com/facebookresearch/adaptive-span/blob/master/experiments/enwik8_small.sh - vocab_size: int = 50 - d_model: int = 256 - n_head: int = 4 - d_inner: int = 1024 - n_layer: int = 8 - attn_span: int = 1024 - dropout: float = 0.0 - emb_dropout: float = 0.0 - adapt_span_ramp: int = 32 - adapt_span_init: float = 0.0 - aux_loss_scaler: float = 0.000002 - adapt_span_layer: bool = False - - -@register_model("adaptive_span", dataclass=AdaptiveSpanSmallConfig) -class AdaptiveSpanTransformer(FairseqLanguageModel): - @classmethod - def build_model(cls, cfg: AdaptiveSpanSmallConfig, task): - return cls(AdaptiveSpanDecoder(cfg, task)) - - def get_aux_loss(self): - return self.decoder.get_aux_loss() - - def get_current_max_span(self): - return self.decoder.get_current_max_span() - - def get_current_avg_span(self): - return self.decoder.get_current_avg_span() - - -class AdaptiveSpanDecoder(FairseqIncrementalDecoder): - def __init__(self, cfg, task): - - super().__init__(task.target_dictionary) - - self.config = cfg - config = AdaptiveSpanSmallConfig( - vocab_size=len(task.target_dictionary), - d_model=cfg.d_model, - n_head=cfg.n_head, - d_inner=cfg.d_inner, - n_layer=cfg.n_layer, - attn_span=cfg.attn_span, - dropout=cfg.dropout, - emb_dropout=cfg.emb_dropout, - adapt_span_ramp=cfg.adapt_span_ramp, - adapt_span_init=cfg.adapt_span_init, - 
aux_loss_scaler=cfg.aux_loss_scaler, - adapt_span_layer=cfg.adapt_span_layer, - ) - logger.info(config) - self.model = AdaptiveSpanTransformerModel(**config.__dict__) - - self._mems = None - - def forward( - self, - src_tokens, - incremental_state: Optional[Dict[str, List[torch.Tensor]]] = None, - encoder_out=None, - ): - bsz = src_tokens.size(0) - if incremental_state is not None: # used during inference - mems = self.get_incremental_state("mems") - src_tokens = src_tokens[:, -1:] # only keep the most recent token - else: - mems = self._mems - - if mems is None: - # first time init - mems = self.init_hid_cache(bsz) - output = self.model(x=src_tokens, h_cache=mems,) - if incremental_state is not None: - self.set_incremental_state(incremental_state, "mems", output[1]) - else: - self._mems = output[1] - return (output[0],) - - def max_positions(self): - return self.config.attn_span - - def init_hid_cache(self, batch_sz): - hid = [] - for layer in self.model.layers: - param = next(self.model.parameters()) - h = torch.zeros( - batch_sz, - layer.get_cache_size(), - self.config.d_model, - dtype=param.dtype, - device=param.device, - ) - hid.append(h) - return hid - - def get_aux_loss(self): - return self.model.get_aux_loss() - - def get_current_max_span(self): - return self.model.get_current_max_span() - - def get_current_avg_span(self): - return self.model.get_current_avg_span() - - def reorder_incremental_state( - self, - incremental_state: Dict[str, Dict[str, Optional[torch.Tensor]]], - new_order: torch.Tensor, - ): - """Reorder incremental state. - - This will be called when the order of the input has changed from the - previous time step. A typical use case is beam search, where the input - order changes between time steps based on the selection of beams. - """ - raise NotImplementedError("This is required for generation/beam search") - # mems = self.get_incremental_state(incremental_state, "mems") - # if mems is not None: - # new_mems = [mems_i.index_select(1, new_order) for mems_i in mems] - # self.set_incremental_state(incremental_state, "mems", new_mems) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/quantize_with_kmeans.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/quantize_with_kmeans.py deleted file mode 100644 index 2c87445d810cd790f887d1a135287a334cbdf223..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/quantize_with_kmeans.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -import os - -import numpy as np - -import joblib -from examples.textless_nlp.gslm.speech2unit.clustering.utils import ( - get_audio_files, -) -from examples.textless_nlp.gslm.speech2unit.pretrained.utils import ( - get_features, -) - - -def get_logger(): - log_format = "[%(asctime)s] [%(levelname)s]: %(message)s" - logging.basicConfig(format=log_format, level=logging.INFO) - logger = logging.getLogger(__name__) - return logger - - -def get_parser(): - parser = argparse.ArgumentParser( - description="Quantize using K-means clustering over acoustic features." 
- ) - parser.add_argument( - "--feature_type", - type=str, - choices=["logmel", "hubert", "w2v2", "cpc"], - default=None, - required=True, - help="Acoustic feature type", - ) - parser.add_argument( - "--acoustic_model_path", - type=str, - help="Pretrained acoustic model checkpoint" - ) - parser.add_argument( - "--layer", - type=int, - help="The layer of the pretrained model to extract features from", - default=-1, - ) - parser.add_argument( - "--kmeans_model_path", - type=str, - required=True, - help="K-means model file path to use for inference", - ) - parser.add_argument( - "--features_path", - type=str, - default=None, - help="Features file path. You don't need to enter acoustic model details if you have dumped features", - ) - parser.add_argument( - "--manifest_path", - type=str, - default=None, - help="Manifest file containing the root dir and file names", - ) - parser.add_argument( - "--out_quantized_file_path", - required=True, - type=str, - help="File path of quantized output.", - ) - parser.add_argument( - "--extension", type=str, default=".flac", help="Features file path" - ) - return parser - - -def main(args, logger): - # Feature extraction - if args.features_path is not None: - logger.info(f"Loading acoustic features from {args.features_path}...") - features_batch = np.load(args.features_path) - else: - logger.info(f"Extracting {args.feature_type} acoustic features...") - features_batch = get_features( - feature_type=args.feature_type, - checkpoint_path=args.acoustic_model_path, - layer=args.layer, - manifest_path=args.manifest_path, - sample_pct=1.0, - flatten=False, - ) - logger.info( - f"Features extracted for {len(features_batch)} utterances.\n" - ) - logger.info( - f"Dimensionality of representation = {features_batch[0].shape[1]}" - ) - - # K-means model - logger.info(f"Loading K-means model from {args.kmeans_model_path} ...") - kmeans_model = joblib.load(open(args.kmeans_model_path, "rb")) - kmeans_model.verbose = False - - _, fnames, _ = get_audio_files(args.manifest_path) - - os.makedirs(os.path.dirname(args.out_quantized_file_path), exist_ok=True) - print(f"Writing quantized predictions to {args.out_quantized_file_path}") - with open(args.out_quantized_file_path, "w") as fout: - for i, feats in enumerate(features_batch): - pred = kmeans_model.predict(feats) - pred_str = " ".join(str(p) for p in pred) - base_fname = os.path.basename(fnames[i]).rstrip(args.extension) - fout.write(f"{base_fname}|{pred_str}\n") - - -if __name__ == "__main__": - parser = get_parser() - args = parser.parse_args() - logger = get_logger() - logger.info(args) - main(args, logger) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/gpu/test_ema_gpu.py b/spaces/OFA-Sys/OFA-vqa/fairseq/tests/gpu/test_ema_gpu.py deleted file mode 100644 index 337107d69a2626652d1f34321a555dde02b3c1a9..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/gpu/test_ema_gpu.py +++ /dev/null @@ -1,200 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import unittest -from copy import deepcopy -from dataclasses import dataclass -from typing import Optional - -import torch -from fairseq.models.ema import EMA - - -class DummyModule(torch.nn.Module): - def __init__(self) -> None: - """LightningModule for testing purposes - - Args: - epoch_min_loss_override (int, optional): Pass in an epoch that will be set to the minimum - validation loss for testing purposes (zero based). If None this is ignored. Defaults to None. - """ - super().__init__() - self.layer = torch.nn.Linear(in_features=32, out_features=2) - self.another_layer = torch.nn.Linear(in_features=2, out_features=2) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.layer(x) - return self.another_layer(x) - - -@dataclass -class EMAConfig(object): - ema_decay: float = 0.99 - ema_start_update: int = 0 - ema_fp32: bool = False - ema_seed_model: Optional[str] = None - - -@unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU") -class TestEMAGPU(unittest.TestCase): - def assertTorchAllClose(self, x, y, atol=1e-8, rtol=1e-5, msg=None): - diff = x.float() - y.float() - diff_norm = torch.norm(diff) - other_norm = torch.norm(y.float()) - - if msg is None: - msg = "|input - other| > {} + {} * |other|".format( - atol, rtol - ) - - self.assertLessEqual( - diff_norm, - atol + rtol * other_norm, - msg=msg, - ) - - def test_ema(self): - model = DummyModule().cuda() - optimizer = torch.optim.SGD(model.parameters(), lr=0.01) - state = deepcopy(model.state_dict()) - config = EMAConfig() - ema = EMA(model, config) - - # set decay - ema._set_decay(config.ema_decay) - self.assertEqual(ema.get_decay(), config.ema_decay) - - # get model - self.assertEqual(ema.get_model(), ema.model) - - # Since fp32 params is not used, it should be of size 0 - self.assertEqual(len(ema.fp32_params), 0) - - # EMA step - x = torch.randn(32).cuda() - y = model(x) - loss = y.sum() - loss.backward() - optimizer.step() - - ema.step(model) - - ema_state_dict = ema.get_model().state_dict() - - for key, param in model.state_dict().items(): - prev_param = state[key] - ema_param = ema_state_dict[key] - - if "version" in key: - # Do not decay a model.version pytorch param - continue - self.assertTorchAllClose( - ema_param, - config.ema_decay * prev_param + (1 - config.ema_decay) * param, - ) - - # Since fp32 params is not used, it should be of size 0 - self.assertEqual(len(ema.fp32_params), 0) - - # Load EMA into model - model2 = DummyModule().cuda() - ema.reverse(model2) - - for key, param in model2.state_dict().items(): - ema_param = ema_state_dict[key] - self.assertTrue( - torch.allclose(ema_param, param) - ) - - def test_ema_fp32(self): - model = DummyModule().cuda().half() - optimizer = torch.optim.SGD(model.parameters(), lr=0.01) - state = deepcopy(model.state_dict()) - config = EMAConfig(ema_fp32=True) - ema = EMA(model, config) - - x = torch.randn(32).cuda() - y = model(x.half()) - loss = y.sum() - loss.backward() - optimizer.step() - - ema.step(model) - - for key, param in model.state_dict().items(): - prev_param = state[key] - ema_param = ema.get_model().state_dict()[key] - - if "version" in key: - # Do not decay a model.version pytorch param - continue - self.assertIn(key, ema.fp32_params) - - # EMA update is done in fp32, and hence the EMA param must be - # closer to the EMA update done in fp32 than in fp16. 
- self.assertLessEqual( - torch.norm( - ema_param.float() - - (config.ema_decay * prev_param.float() + (1 - config.ema_decay) * param.float()).half().float() - ), - torch.norm( - ema_param.float() - - (config.ema_decay * prev_param + (1 - config.ema_decay) * param).float() - ), - ) - self.assertTorchAllClose( - ema_param, - (config.ema_decay * prev_param.float() + (1 - config.ema_decay) * param.float()).half(), - ) - - def test_ema_fp16(self): - model = DummyModule().cuda().half() - optimizer = torch.optim.SGD(model.parameters(), lr=0.01) - state = deepcopy(model.state_dict()) - config = EMAConfig(ema_fp32=False) - ema = EMA(model, config) - - # Since fp32 params is not used, it should be of size 0 - self.assertEqual(len(ema.fp32_params), 0) - - x = torch.randn(32).cuda() - y = model(x.half()) - loss = y.sum() - loss.backward() - optimizer.step() - - ema.step(model) - - for key, param in model.state_dict().items(): - prev_param = state[key] - ema_param = ema.get_model().state_dict()[key] - - if "version" in key: - # Do not decay a model.version pytorch param - continue - - # EMA update is done in fp16, and hence the EMA param must be - # closer to the EMA update done in fp16 than in fp32. - self.assertLessEqual( - torch.norm( - ema_param.float() - - (config.ema_decay * prev_param + (1 - config.ema_decay) * param).float() - ), - torch.norm( - ema_param.float() - - (config.ema_decay * prev_param.float() + (1 - config.ema_decay) * param.float()).half().float() - ), - ) - self.assertTorchAllClose( - ema_param, - config.ema_decay * prev_param + (1 - config.ema_decay) * param, - ) - - # Since fp32 params is not used, it should be of size 0 - self.assertEqual(len(ema.fp32_params), 0) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/OptimalScale/Robin-7b/lmflow/pipeline/base_pipeline.py b/spaces/OptimalScale/Robin-7b/lmflow/pipeline/base_pipeline.py deleted file mode 100644 index e5d03b91e20b7cc8cf9b49762fb9882db2d851dd..0000000000000000000000000000000000000000 --- a/spaces/OptimalScale/Robin-7b/lmflow/pipeline/base_pipeline.py +++ /dev/null @@ -1,9 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -""" BasePipeline. -""" - -from abc import ABC # abstract class - -class BasePipeline(ABC): - pass diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/datasets/voc.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/datasets/voc.py deleted file mode 100644 index a8855203b14ee0dc4da9099a2945d4aedcffbcd6..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/datasets/voc.py +++ /dev/null @@ -1,29 +0,0 @@ -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class PascalVOCDataset(CustomDataset): - """Pascal VOC dataset. - - Args: - split (str): Split txt file for Pascal VOC. 
- """ - - CLASSES = ('background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', - 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', - 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', - 'train', 'tvmonitor') - - PALETTE = [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0], [0, 0, 128], - [128, 0, 128], [0, 128, 128], [128, 128, 128], [64, 0, 0], - [192, 0, 0], [64, 128, 0], [192, 128, 0], [64, 0, 128], - [192, 0, 128], [64, 128, 128], [192, 128, 128], [0, 64, 0], - [128, 64, 0], [0, 192, 0], [128, 192, 0], [0, 64, 128]] - - def __init__(self, split, **kwargs): - super(PascalVOCDataset, self).__init__( - img_suffix='.jpg', seg_map_suffix='.png', split=split, **kwargs) - assert osp.exists(self.img_dir) and self.split is not None diff --git a/spaces/PSLD/PSLD/stable-diffusion/ldm/modules/encoders/modules.py b/spaces/PSLD/PSLD/stable-diffusion/ldm/modules/encoders/modules.py deleted file mode 100644 index ededbe43e9e0466b9979079060692e38f561d4d3..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/stable-diffusion/ldm/modules/encoders/modules.py +++ /dev/null @@ -1,234 +0,0 @@ -import torch -import torch.nn as nn -from functools import partial -import clip -from einops import rearrange, repeat -from transformers import CLIPTokenizer, CLIPTextModel -import kornia - -from ldm.modules.x_transformer import Encoder, TransformerWrapper # TODO: can we directly rely on lucidrains code and simply add this as a reuirement? --> test - - -class AbstractEncoder(nn.Module): - def __init__(self): - super().__init__() - - def encode(self, *args, **kwargs): - raise NotImplementedError - - - -class ClassEmbedder(nn.Module): - def __init__(self, embed_dim, n_classes=1000, key='class'): - super().__init__() - self.key = key - self.embedding = nn.Embedding(n_classes, embed_dim) - - def forward(self, batch, key=None): - if key is None: - key = self.key - # this is for use in crossattn - c = batch[key][:, None] - c = self.embedding(c) - return c - - -class TransformerEmbedder(AbstractEncoder): - """Some transformer encoder layers""" - def __init__(self, n_embed, n_layer, vocab_size, max_seq_len=77, device="cuda"): - super().__init__() - self.device = device - self.transformer = TransformerWrapper(num_tokens=vocab_size, max_seq_len=max_seq_len, - attn_layers=Encoder(dim=n_embed, depth=n_layer)) - - def forward(self, tokens): - tokens = tokens.to(self.device) # meh - z = self.transformer(tokens, return_embeddings=True) - return z - - def encode(self, x): - return self(x) - - -class BERTTokenizer(AbstractEncoder): - """ Uses a pretrained BERT tokenizer by huggingface. 
Vocab size: 30522 (?)""" - def __init__(self, device="cuda", vq_interface=True, max_length=77): - super().__init__() - from transformers import BertTokenizerFast # TODO: add to reuquirements - self.tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased") - self.device = device - self.vq_interface = vq_interface - self.max_length = max_length - - def forward(self, text): - batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True, - return_overflowing_tokens=False, padding="max_length", return_tensors="pt") - tokens = batch_encoding["input_ids"].to(self.device) - return tokens - - @torch.no_grad() - def encode(self, text): - tokens = self(text) - if not self.vq_interface: - return tokens - return None, None, [None, None, tokens] - - def decode(self, text): - return text - - -class BERTEmbedder(AbstractEncoder): - """Uses the BERT tokenizr model and add some transformer encoder layers""" - def __init__(self, n_embed, n_layer, vocab_size=30522, max_seq_len=77, - device="cuda",use_tokenizer=True, embedding_dropout=0.0): - super().__init__() - self.use_tknz_fn = use_tokenizer - if self.use_tknz_fn: - self.tknz_fn = BERTTokenizer(vq_interface=False, max_length=max_seq_len) - self.device = device - self.transformer = TransformerWrapper(num_tokens=vocab_size, max_seq_len=max_seq_len, - attn_layers=Encoder(dim=n_embed, depth=n_layer), - emb_dropout=embedding_dropout) - - def forward(self, text): - if self.use_tknz_fn: - tokens = self.tknz_fn(text)#.to(self.device) - else: - tokens = text - z = self.transformer(tokens, return_embeddings=True) - return z - - def encode(self, text): - # output of length 77 - return self(text) - - -class SpatialRescaler(nn.Module): - def __init__(self, - n_stages=1, - method='bilinear', - multiplier=0.5, - in_channels=3, - out_channels=None, - bias=False): - super().__init__() - self.n_stages = n_stages - assert self.n_stages >= 0 - assert method in ['nearest','linear','bilinear','trilinear','bicubic','area'] - self.multiplier = multiplier - self.interpolator = partial(torch.nn.functional.interpolate, mode=method) - self.remap_output = out_channels is not None - if self.remap_output: - print(f'Spatial Rescaler mapping from {in_channels} to {out_channels} channels after resizing.') - self.channel_mapper = nn.Conv2d(in_channels,out_channels,1,bias=bias) - - def forward(self,x): - for stage in range(self.n_stages): - x = self.interpolator(x, scale_factor=self.multiplier) - - - if self.remap_output: - x = self.channel_mapper(x) - return x - - def encode(self, x): - return self(x) - -class FrozenCLIPEmbedder(AbstractEncoder): - """Uses the CLIP transformer encoder for text (from Hugging Face)""" - def __init__(self, version="openai/clip-vit-large-patch14", device="cuda", max_length=77): - super().__init__() - self.tokenizer = CLIPTokenizer.from_pretrained(version) - self.transformer = CLIPTextModel.from_pretrained(version) - self.device = device - self.max_length = max_length - self.freeze() - - def freeze(self): - self.transformer = self.transformer.eval() - for param in self.parameters(): - param.requires_grad = False - - def forward(self, text): - batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True, - return_overflowing_tokens=False, padding="max_length", return_tensors="pt") - tokens = batch_encoding["input_ids"].to(self.device) - outputs = self.transformer(input_ids=tokens) - - z = outputs.last_hidden_state - return z - - def encode(self, text): - return self(text) - 
- -class FrozenCLIPTextEmbedder(nn.Module): - """ - Uses the CLIP transformer encoder for text. - """ - def __init__(self, version='ViT-L/14', device="cuda", max_length=77, n_repeat=1, normalize=True): - super().__init__() - self.model, _ = clip.load(version, jit=False, device="cpu") - self.device = device - self.max_length = max_length - self.n_repeat = n_repeat - self.normalize = normalize - - def freeze(self): - self.model = self.model.eval() - for param in self.parameters(): - param.requires_grad = False - - def forward(self, text): - tokens = clip.tokenize(text).to(self.device) - z = self.model.encode_text(tokens) - if self.normalize: - z = z / torch.linalg.norm(z, dim=1, keepdim=True) - return z - - def encode(self, text): - z = self(text) - if z.ndim==2: - z = z[:, None, :] - z = repeat(z, 'b 1 d -> b k d', k=self.n_repeat) - return z - - -class FrozenClipImageEmbedder(nn.Module): - """ - Uses the CLIP image encoder. - """ - def __init__( - self, - model, - jit=False, - device='cuda' if torch.cuda.is_available() else 'cpu', - antialias=False, - ): - super().__init__() - self.model, _ = clip.load(name=model, device=device, jit=jit) - - self.antialias = antialias - - self.register_buffer('mean', torch.Tensor([0.48145466, 0.4578275, 0.40821073]), persistent=False) - self.register_buffer('std', torch.Tensor([0.26862954, 0.26130258, 0.27577711]), persistent=False) - - def preprocess(self, x): - # normalize to [0,1] - x = kornia.geometry.resize(x, (224, 224), - interpolation='bicubic',align_corners=True, - antialias=self.antialias) - x = (x + 1.) / 2. - # renormalize according to clip - x = kornia.enhance.normalize(x, self.mean, self.std) - return x - - def forward(self, x): - # x is assumed to be in range [-1,1] - return self.model.encode_image(self.preprocess(x)) - - -if __name__ == "__main__": - from ldm.util import count_params - model = FrozenCLIPEmbedder() - count_params(model, verbose=True) \ No newline at end of file diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/specialize-numbers.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/specialize-numbers.go deleted file mode 100644 index 5a3b5134115a7babd98dd7f39593fd783ef43449..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/specialize-numbers.go and /dev/null differ diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/command/build_py.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/command/build_py.py deleted file mode 100644 index 47c6158e0f74033bfcfeb7424df227a3815651de..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/command/build_py.py +++ /dev/null @@ -1,407 +0,0 @@ -"""distutils.command.build_py - -Implements the Distutils 'build_py' command.""" - -import os -import importlib.util -import sys -import glob - -from distutils.core import Command -from distutils.errors import DistutilsOptionError, DistutilsFileError -from distutils.util import convert_path -from distutils import log - - -class build_py(Command): - - description = "\"build\" pure Python modules (copy to build directory)" - - user_options = [ - ('build-lib=', 'd', "directory to \"build\" (copy) to"), - ('compile', 'c', "compile .py to .pyc"), - ('no-compile', None, "don't compile .py files [default]"), - ( - 'optimize=', - 'O', - "also 
compile with optimization: -O1 for \"python -O\", " - "-O2 for \"python -OO\", and -O0 to disable [default: -O0]", - ), - ('force', 'f', "forcibly build everything (ignore file timestamps)"), - ] - - boolean_options = ['compile', 'force'] - negative_opt = {'no-compile': 'compile'} - - def initialize_options(self): - self.build_lib = None - self.py_modules = None - self.package = None - self.package_data = None - self.package_dir = None - self.compile = 0 - self.optimize = 0 - self.force = None - - def finalize_options(self): - self.set_undefined_options( - 'build', ('build_lib', 'build_lib'), ('force', 'force') - ) - - # Get the distribution options that are aliases for build_py - # options -- list of packages and list of modules. - self.packages = self.distribution.packages - self.py_modules = self.distribution.py_modules - self.package_data = self.distribution.package_data - self.package_dir = {} - if self.distribution.package_dir: - for name, path in self.distribution.package_dir.items(): - self.package_dir[name] = convert_path(path) - self.data_files = self.get_data_files() - - # Ick, copied straight from install_lib.py (fancy_getopt needs a - # type system! Hell, *everything* needs a type system!!!) - if not isinstance(self.optimize, int): - try: - self.optimize = int(self.optimize) - assert 0 <= self.optimize <= 2 - except (ValueError, AssertionError): - raise DistutilsOptionError("optimize must be 0, 1, or 2") - - def run(self): - # XXX copy_file by default preserves atime and mtime. IMHO this is - # the right thing to do, but perhaps it should be an option -- in - # particular, a site administrator might want installed files to - # reflect the time of installation rather than the last - # modification time before the installed release. - - # XXX copy_file by default preserves mode, which appears to be the - # wrong thing to do: if a file is read-only in the working - # directory, we want it to be installed read/write so that the next - # installation of the same module distribution can overwrite it - # without problems. (This might be a Unix-specific issue.) Thus - # we turn off 'preserve_mode' when copying to the build directory, - # since the build directory is supposed to be exactly what the - # installation will look like (ie. we preserve mode when - # installing). - - # Two options control which modules will be installed: 'packages' - # and 'py_modules'. The former lets us work with whole packages, not - # specifying individual modules at all; the latter is for - # specifying modules one-at-a-time. 
- - if self.py_modules: - self.build_modules() - if self.packages: - self.build_packages() - self.build_package_data() - - self.byte_compile(self.get_outputs(include_bytecode=0)) - - def get_data_files(self): - """Generate list of '(package,src_dir,build_dir,filenames)' tuples""" - data = [] - if not self.packages: - return data - for package in self.packages: - # Locate package source directory - src_dir = self.get_package_dir(package) - - # Compute package build directory - build_dir = os.path.join(*([self.build_lib] + package.split('.'))) - - # Length of path to strip from found files - plen = 0 - if src_dir: - plen = len(src_dir) + 1 - - # Strip directory from globbed filenames - filenames = [file[plen:] for file in self.find_data_files(package, src_dir)] - data.append((package, src_dir, build_dir, filenames)) - return data - - def find_data_files(self, package, src_dir): - """Return filenames for package's data files in 'src_dir'""" - globs = self.package_data.get('', []) + self.package_data.get(package, []) - files = [] - for pattern in globs: - # Each pattern has to be converted to a platform-specific path - filelist = glob.glob( - os.path.join(glob.escape(src_dir), convert_path(pattern)) - ) - # Files that match more than one pattern are only added once - files.extend( - [fn for fn in filelist if fn not in files and os.path.isfile(fn)] - ) - return files - - def build_package_data(self): - """Copy data files into build directory""" - for package, src_dir, build_dir, filenames in self.data_files: - for filename in filenames: - target = os.path.join(build_dir, filename) - self.mkpath(os.path.dirname(target)) - self.copy_file( - os.path.join(src_dir, filename), target, preserve_mode=False - ) - - def get_package_dir(self, package): - """Return the directory, relative to the top of the source - distribution, where package 'package' should be found - (at least according to the 'package_dir' option, if any).""" - path = package.split('.') - - if not self.package_dir: - if path: - return os.path.join(*path) - else: - return '' - else: - tail = [] - while path: - try: - pdir = self.package_dir['.'.join(path)] - except KeyError: - tail.insert(0, path[-1]) - del path[-1] - else: - tail.insert(0, pdir) - return os.path.join(*tail) - else: - # Oops, got all the way through 'path' without finding a - # match in package_dir. If package_dir defines a directory - # for the root (nameless) package, then fallback on it; - # otherwise, we might as well have not consulted - # package_dir at all, as we just use the directory implied - # by 'tail' (which should be the same as the original value - # of 'path' at this point). - pdir = self.package_dir.get('') - if pdir is not None: - tail.insert(0, pdir) - - if tail: - return os.path.join(*tail) - else: - return '' - - def check_package(self, package, package_dir): - # Empty dir name means current directory, which we can probably - # assume exists. Also, os.path.exists and isdir don't know about - # my "empty string means current dir" convention, so we have to - # circumvent them. - if package_dir != "": - if not os.path.exists(package_dir): - raise DistutilsFileError( - "package directory '%s' does not exist" % package_dir - ) - if not os.path.isdir(package_dir): - raise DistutilsFileError( - "supposed package directory '%s' exists, " - "but is not a directory" % package_dir - ) - - # Directories without __init__.py are namespace packages (PEP 420). 
- if package: - init_py = os.path.join(package_dir, "__init__.py") - if os.path.isfile(init_py): - return init_py - - # Either not in a package at all (__init__.py not expected), or - # __init__.py doesn't exist -- so don't return the filename. - return None - - def check_module(self, module, module_file): - if not os.path.isfile(module_file): - log.warn("file %s (for module %s) not found", module_file, module) - return False - else: - return True - - def find_package_modules(self, package, package_dir): - self.check_package(package, package_dir) - module_files = glob.glob(os.path.join(glob.escape(package_dir), "*.py")) - modules = [] - setup_script = os.path.abspath(self.distribution.script_name) - - for f in module_files: - abs_f = os.path.abspath(f) - if abs_f != setup_script: - module = os.path.splitext(os.path.basename(f))[0] - modules.append((package, module, f)) - else: - self.debug_print("excluding %s" % setup_script) - return modules - - def find_modules(self): - """Finds individually-specified Python modules, ie. those listed by - module name in 'self.py_modules'. Returns a list of tuples (package, - module_base, filename): 'package' is a tuple of the path through - package-space to the module; 'module_base' is the bare (no - packages, no dots) module name, and 'filename' is the path to the - ".py" file (relative to the distribution root) that implements the - module. - """ - # Map package names to tuples of useful info about the package: - # (package_dir, checked) - # package_dir - the directory where we'll find source files for - # this package - # checked - true if we have checked that the package directory - # is valid (exists, contains __init__.py, ... ?) - packages = {} - - # List of (package, module, filename) tuples to return - modules = [] - - # We treat modules-in-packages almost the same as toplevel modules, - # just the "package" for a toplevel is empty (either an empty - # string or empty list, depending on context). Differences: - # - don't check for __init__.py in directory for empty package - for module in self.py_modules: - path = module.split('.') - package = '.'.join(path[0:-1]) - module_base = path[-1] - - try: - (package_dir, checked) = packages[package] - except KeyError: - package_dir = self.get_package_dir(package) - checked = 0 - - if not checked: - init_py = self.check_package(package, package_dir) - packages[package] = (package_dir, 1) - if init_py: - modules.append((package, "__init__", init_py)) - - # XXX perhaps we should also check for just .pyc files - # (so greedy closed-source bastards can distribute Python - # modules too) - module_file = os.path.join(package_dir, module_base + ".py") - if not self.check_module(module, module_file): - continue - - modules.append((package, module_base, module_file)) - - return modules - - def find_all_modules(self): - """Compute the list of all modules that will be built, whether - they are specified one-module-at-a-time ('self.py_modules') or - by whole packages ('self.packages'). 
Return a list of tuples - (package, module, module_file), just like 'find_modules()' and - 'find_package_modules()' do.""" - modules = [] - if self.py_modules: - modules.extend(self.find_modules()) - if self.packages: - for package in self.packages: - package_dir = self.get_package_dir(package) - m = self.find_package_modules(package, package_dir) - modules.extend(m) - return modules - - def get_source_files(self): - return [module[-1] for module in self.find_all_modules()] - - def get_module_outfile(self, build_dir, package, module): - outfile_path = [build_dir] + list(package) + [module + ".py"] - return os.path.join(*outfile_path) - - def get_outputs(self, include_bytecode=1): - modules = self.find_all_modules() - outputs = [] - for (package, module, module_file) in modules: - package = package.split('.') - filename = self.get_module_outfile(self.build_lib, package, module) - outputs.append(filename) - if include_bytecode: - if self.compile: - outputs.append( - importlib.util.cache_from_source(filename, optimization='') - ) - if self.optimize > 0: - outputs.append( - importlib.util.cache_from_source( - filename, optimization=self.optimize - ) - ) - - outputs += [ - os.path.join(build_dir, filename) - for package, src_dir, build_dir, filenames in self.data_files - for filename in filenames - ] - - return outputs - - def build_module(self, module, module_file, package): - if isinstance(package, str): - package = package.split('.') - elif not isinstance(package, (list, tuple)): - raise TypeError( - "'package' must be a string (dot-separated), list, or tuple" - ) - - # Now put the module source file into the "build" area -- this is - # easy, we just copy it somewhere under self.build_lib (the build - # directory for Python source). - outfile = self.get_module_outfile(self.build_lib, package, module) - dir = os.path.dirname(outfile) - self.mkpath(dir) - return self.copy_file(module_file, outfile, preserve_mode=0) - - def build_modules(self): - modules = self.find_modules() - for (package, module, module_file) in modules: - # Now "build" the module -- ie. copy the source file to - # self.build_lib (the build directory for Python source). - # (Actually, it gets copied to the directory for this package - # under self.build_lib.) - self.build_module(module, module_file, package) - - def build_packages(self): - for package in self.packages: - # Get list of (package, module, module_file) tuples based on - # scanning the package directory. 'package' is only included - # in the tuple so that 'find_modules()' and - # 'find_package_tuples()' have a consistent interface; it's - # ignored here (apart from a sanity check). Also, 'module' is - # the *unqualified* module name (ie. no dots, no package -- we - # already know its package!), and 'module_file' is the path to - # the .py file, relative to the current directory - # (ie. including 'package_dir'). - package_dir = self.get_package_dir(package) - modules = self.find_package_modules(package, package_dir) - - # Now loop over the modules we found, "building" each one (just - # copy it to self.build_lib). 
- for (package_, module, module_file) in modules: - assert package == package_ - self.build_module(module, module_file, package) - - def byte_compile(self, files): - if sys.dont_write_bytecode: - self.warn('byte-compiling is disabled, skipping.') - return - - from distutils.util import byte_compile - - prefix = self.build_lib - if prefix[-1] != os.sep: - prefix = prefix + os.sep - - # XXX this code is essentially the same as the 'byte_compile() - # method of the "install_lib" command, except for the determination - # of the 'prefix' string. Hmmm. - if self.compile: - byte_compile( - files, optimize=0, force=self.force, prefix=prefix, dry_run=self.dry_run - ) - if self.optimize > 0: - byte_compile( - files, - optimize=self.optimize, - force=self.force, - prefix=prefix, - dry_run=self.dry_run, - ) diff --git a/spaces/Reha2704/VToonify/vtoonify/model/dualstylegan.py b/spaces/Reha2704/VToonify/vtoonify/model/dualstylegan.py deleted file mode 100644 index 60d9850ad049a2751781871d6ae0c2779ecc863f..0000000000000000000000000000000000000000 --- a/spaces/Reha2704/VToonify/vtoonify/model/dualstylegan.py +++ /dev/null @@ -1,203 +0,0 @@ -import random -import torch -from torch import nn -from model.stylegan.model import ConvLayer, PixelNorm, EqualLinear, Generator - -class AdaptiveInstanceNorm(nn.Module): - def __init__(self, fin, style_dim=512): - super().__init__() - - self.norm = nn.InstanceNorm2d(fin, affine=False) - self.style = nn.Linear(style_dim, fin * 2) - - self.style.bias.data[:fin] = 1 - self.style.bias.data[fin:] = 0 - - def forward(self, input, style): - style = self.style(style).unsqueeze(2).unsqueeze(3) - gamma, beta = style.chunk(2, 1) - out = self.norm(input) - out = gamma * out + beta - return out - -# modulative residual blocks (ModRes) -class AdaResBlock(nn.Module): - def __init__(self, fin, style_dim=512, dilation=1): # modified - super().__init__() - - self.conv = ConvLayer(fin, fin, 3, dilation=dilation) # modified - self.conv2 = ConvLayer(fin, fin, 3, dilation=dilation) # modified - self.norm = AdaptiveInstanceNorm(fin, style_dim) - self.norm2 = AdaptiveInstanceNorm(fin, style_dim) - - # model initialization - # the convolution filters are set to values close to 0 to produce negligible residual features - self.conv[0].weight.data *= 0.01 - self.conv2[0].weight.data *= 0.01 - - def forward(self, x, s, w=1): - skip = x - if w == 0: - return skip - out = self.conv(self.norm(x, s)) - out = self.conv2(self.norm2(out, s)) - out = out * w + skip - return out - -class DualStyleGAN(nn.Module): - def __init__(self, size, style_dim, n_mlp, channel_multiplier=2, twoRes=True, res_index=6): - super().__init__() - - layers = [PixelNorm()] - for i in range(n_mlp-6): - layers.append(EqualLinear(512, 512, lr_mul=0.01, activation="fused_lrelu")) - # color transform blocks T_c - self.style = nn.Sequential(*layers) - # StyleGAN2 - self.generator = Generator(size, style_dim, n_mlp, channel_multiplier) - # The extrinsic style path - self.res = nn.ModuleList() - self.res_index = res_index//2 * 2 - self.res.append(AdaResBlock(self.generator.channels[2 ** 2])) # for conv1 - for i in range(3, self.generator.log_size + 1): - out_channel = self.generator.channels[2 ** i] - if i < 3 + self.res_index//2: - # ModRes - self.res.append(AdaResBlock(out_channel)) - self.res.append(AdaResBlock(out_channel)) - else: - # structure transform block T_s - self.res.append(EqualLinear(512, 512)) - # FC layer is initialized with identity matrices, meaning no changes to the input latent code - self.res[-1].weight.data = 
torch.eye(512) * 512.0**0.5 + torch.randn(512, 512) * 0.01 - self.res.append(EqualLinear(512, 512)) - self.res[-1].weight.data = torch.eye(512) * 512.0**0.5 + torch.randn(512, 512) * 0.01 - self.res.append(EqualLinear(512, 512)) # for to_rgb7 - self.res[-1].weight.data = torch.eye(512) * 512.0**0.5 + torch.randn(512, 512) * 0.01 - self.size = self.generator.size - self.style_dim = self.generator.style_dim - self.log_size = self.generator.log_size - self.num_layers = self.generator.num_layers - self.n_latent = self.generator.n_latent - self.channels = self.generator.channels - - def forward( - self, - styles, # intrinsic style code - exstyles, # extrinsic style code - return_latents=False, - return_feat=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - randomize_noise=True, - z_plus_latent=False, # intrinsic style code is z+ or z - use_res=True, # whether to use the extrinsic style path - fuse_index=18, # layers > fuse_index do not use the extrinsic style path - interp_weights=[1]*18, # weight vector for style combination of two paths - ): - - if not input_is_latent: - if not z_plus_latent: - styles = [self.generator.style(s) for s in styles] - else: - styles = [self.generator.style(s.reshape(s.shape[0]*s.shape[1], s.shape[2])).reshape(s.shape) for s in styles] - - if noise is None: - if randomize_noise: - noise = [None] * self.generator.num_layers - else: - noise = [ - getattr(self.generator.noises, f"noise_{i}") for i in range(self.generator.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) < 2: - inject_index = self.generator.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - - else: - latent = styles[0] - - else: - if inject_index is None: - inject_index = random.randint(1, self.generator.n_latent - 1) - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.generator.n_latent - inject_index, 1) - - latent = torch.cat([latent, latent2], 1) - else: - latent = torch.cat([styles[0][:,0:inject_index], styles[1][:,inject_index:]], 1) - - if use_res: - if exstyles.ndim < 3: - resstyles = self.style(exstyles).unsqueeze(1).repeat(1, self.generator.n_latent, 1) - adastyles = exstyles.unsqueeze(1).repeat(1, self.generator.n_latent, 1) - else: - nB, nL, nD = exstyles.shape - resstyles = self.style(exstyles.reshape(nB*nL, nD)).reshape(nB, nL, nD) - adastyles = exstyles - - out = self.generator.input(latent) - out = self.generator.conv1(out, latent[:, 0], noise=noise[0]) - if use_res and fuse_index > 0: - out = self.res[0](out, resstyles[:, 0], interp_weights[0]) - - skip = self.generator.to_rgb1(out, latent[:, 1]) - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.generator.convs[::2], self.generator.convs[1::2], noise[1::2], noise[2::2], self.generator.to_rgbs): - if use_res and fuse_index >= i and i > self.res_index: - out = conv1(out, interp_weights[i] * self.res[i](adastyles[:, i]) + - (1-interp_weights[i]) * latent[:, i], noise=noise1) - else: - out = conv1(out, latent[:, i], noise=noise1) - if use_res and fuse_index >= i and i <= self.res_index: - out = self.res[i](out, resstyles[:, i], interp_weights[i]) - if use_res and fuse_index >= (i+1) and i > self.res_index: - out = conv2(out, interp_weights[i+1] * self.res[i+1](adastyles[:, i+1]) + 
- (1-interp_weights[i+1]) * latent[:, i+1], noise=noise2) - else: - out = conv2(out, latent[:, i + 1], noise=noise2) - if use_res and fuse_index >= (i+1) and i <= self.res_index: - out = self.res[i+1](out, resstyles[:, i+1], interp_weights[i+1]) - if use_res and fuse_index >= (i+2) and i >= self.res_index-1: - skip = to_rgb(out, interp_weights[i+2] * self.res[i+2](adastyles[:, i+2]) + - (1-interp_weights[i+2]) * latent[:, i + 2], skip) - else: - skip = to_rgb(out, latent[:, i + 2], skip) - i += 2 - if i > self.res_index and return_feat: - return out, skip - - image = skip - - if return_latents: - return image, latent - - else: - return image, None - - def make_noise(self): - return self.generator.make_noise() - - def mean_latent(self, n_latent): - return self.generator.mean_latent(n_latent) - - def get_latent(self, input): - return self.generator.style(input) \ No newline at end of file diff --git a/spaces/Riksarkivet/htr_demo/helper/text/help/fasttrack/fast_track.md b/spaces/Riksarkivet/htr_demo/helper/text/help/fasttrack/fast_track.md deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Riksarkivet/htr_demo/src/htr_pipeline/models.py b/spaces/Riksarkivet/htr_demo/src/htr_pipeline/models.py deleted file mode 100644 index 04597a4df55df9d9f6e52f9d9942b1182ce2e046..0000000000000000000000000000000000000000 --- a/spaces/Riksarkivet/htr_demo/src/htr_pipeline/models.py +++ /dev/null @@ -1,65 +0,0 @@ -import os - -import torch -from huggingface_hub import snapshot_download -from mmdet.apis import DetInferencer - -# from mmengine import Config -from mmocr.apis import TextRecInferencer - - -class HtrModels: - def __init__(self, local_run=False): - self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - - model_folder = "./models" - self.region_config = f"{model_folder}/RmtDet_regions/rtmdet_m_textregions_2_concat.py" - self.region_checkpoint = f"{model_folder}/RmtDet_regions/epoch_12.pth" - - self.line_config = f"{model_folder}/RmtDet_lines/rtmdet_m_textlines_2_concat.py" - self.line_checkpoint = f"{model_folder}/RmtDet_lines/epoch_12.pth" - - self.mmocr_config = f"{model_folder}/SATRN/_base_satrn_shallow_concat.py" - self.mmocr_checkpoint = f"{model_folder}/SATRN/epoch_5.pth" - - # Check if model files exist at the specified paths, if not, get the config - if not ( - os.path.exists(self.region_checkpoint) - and os.path.exists(self.line_checkpoint) - and os.path.exists(self.mmocr_checkpoint) - ): - config_path = self.get_config() - self.region_checkpoint = config_path["region_checkpoint"] - self.line_checkpoint = config_path["line_checkpoint"] - self.mmocr_checkpoint = config_path["mmocr_checkpoint"] - - def load_region_model(self): - # build the model from a config file and a checkpoint file - return DetInferencer(self.region_config, self.region_checkpoint, device=self.device) - - def load_line_model(self): - return DetInferencer(self.line_config, self.line_checkpoint, device=self.device) - - def load_htr_model(self): - inferencer = TextRecInferencer(self.mmocr_config, self.mmocr_checkpoint, device=self.device) - return inferencer - - @staticmethod - def get_config(): - path_models = snapshot_download( - "Riksarkivet/HTR_pipeline_models", - allow_patterns=["*.pth"], - token="__INSERT__FINS_HUGGINFACE_TOKEN__", - cache_dir="./", - ) - config_path = { - "region_checkpoint": os.path.join(path_models, "RmtDet_regions/epoch_12.pth"), - "line_checkpoint": os.path.join(path_models, 
"RmtDet_lines/epoch_12.pth"), - "mmocr_checkpoint": os.path.join(path_models, "SATRN/epoch_5.pth"), - } - - return config_path - - -if __name__ == "__main__": - pass diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/backbones/ssd_vgg.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/backbones/ssd_vgg.py deleted file mode 100644 index cbc4fbb2301afc002f47abb9ed133a500d6cf23f..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/backbones/ssd_vgg.py +++ /dev/null @@ -1,169 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import VGG, constant_init, kaiming_init, normal_init, xavier_init -from mmcv.runner import load_checkpoint - -from mmdet.utils import get_root_logger -from ..builder import BACKBONES - - -@BACKBONES.register_module() -class SSDVGG(VGG): - """VGG Backbone network for single-shot-detection. - - Args: - input_size (int): width and height of input, from {300, 512}. - depth (int): Depth of vgg, from {11, 13, 16, 19}. - out_indices (Sequence[int]): Output from which stages. - - Example: - >>> self = SSDVGG(input_size=300, depth=11) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 300, 300) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 1024, 19, 19) - (1, 512, 10, 10) - (1, 256, 5, 5) - (1, 256, 3, 3) - (1, 256, 1, 1) - """ - extra_setting = { - 300: (256, 'S', 512, 128, 'S', 256, 128, 256, 128, 256), - 512: (256, 'S', 512, 128, 'S', 256, 128, 'S', 256, 128, 'S', 256, 128), - } - - def __init__(self, - input_size, - depth, - with_last_pool=False, - ceil_mode=True, - out_indices=(3, 4), - out_feature_indices=(22, 34), - l2_norm_scale=20.): - # TODO: in_channels for mmcv.VGG - super(SSDVGG, self).__init__( - depth, - with_last_pool=with_last_pool, - ceil_mode=ceil_mode, - out_indices=out_indices) - assert input_size in (300, 512) - self.input_size = input_size - - self.features.add_module( - str(len(self.features)), - nn.MaxPool2d(kernel_size=3, stride=1, padding=1)) - self.features.add_module( - str(len(self.features)), - nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6)) - self.features.add_module( - str(len(self.features)), nn.ReLU(inplace=True)) - self.features.add_module( - str(len(self.features)), nn.Conv2d(1024, 1024, kernel_size=1)) - self.features.add_module( - str(len(self.features)), nn.ReLU(inplace=True)) - self.out_feature_indices = out_feature_indices - - self.inplanes = 1024 - self.extra = self._make_extra_layers(self.extra_setting[input_size]) - self.l2_norm = L2Norm( - self.features[out_feature_indices[0] - 1].out_channels, - l2_norm_scale) - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
- """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.features.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - elif isinstance(m, nn.Linear): - normal_init(m, std=0.01) - else: - raise TypeError('pretrained must be a str or None') - - for m in self.extra.modules(): - if isinstance(m, nn.Conv2d): - xavier_init(m, distribution='uniform') - - constant_init(self.l2_norm, self.l2_norm.scale) - - def forward(self, x): - """Forward function.""" - outs = [] - for i, layer in enumerate(self.features): - x = layer(x) - if i in self.out_feature_indices: - outs.append(x) - for i, layer in enumerate(self.extra): - x = F.relu(layer(x), inplace=True) - if i % 2 == 1: - outs.append(x) - outs[0] = self.l2_norm(outs[0]) - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def _make_extra_layers(self, outplanes): - layers = [] - kernel_sizes = (1, 3) - num_layers = 0 - outplane = None - for i in range(len(outplanes)): - if self.inplanes == 'S': - self.inplanes = outplane - continue - k = kernel_sizes[num_layers % 2] - if outplanes[i] == 'S': - outplane = outplanes[i + 1] - conv = nn.Conv2d( - self.inplanes, outplane, k, stride=2, padding=1) - else: - outplane = outplanes[i] - conv = nn.Conv2d( - self.inplanes, outplane, k, stride=1, padding=0) - layers.append(conv) - self.inplanes = outplanes[i] - num_layers += 1 - if self.input_size == 512: - layers.append(nn.Conv2d(self.inplanes, 256, 4, padding=1)) - - return nn.Sequential(*layers) - - -class L2Norm(nn.Module): - - def __init__(self, n_dims, scale=20., eps=1e-10): - """L2 normalization layer. - - Args: - n_dims (int): Number of dimensions to be normalized - scale (float, optional): Defaults to 20.. - eps (float, optional): Used to avoid division by zero. - Defaults to 1e-10. - """ - super(L2Norm, self).__init__() - self.n_dims = n_dims - self.weight = nn.Parameter(torch.Tensor(self.n_dims)) - self.eps = eps - self.scale = scale - - def forward(self, x): - """Forward function.""" - # normalization layer convert to FP32 in FP16 training - x_float = x.float() - norm = x_float.pow(2).sum(1, keepdim=True).sqrt() + self.eps - return (self.weight[None, :, None, None].float().expand_as(x_float) * - x_float / norm).type_as(x) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/losses/pisa_loss.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/losses/pisa_loss.py deleted file mode 100644 index 4a48adfcd400bb07b719a6fbd5a8af0508820629..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/losses/pisa_loss.py +++ /dev/null @@ -1,183 +0,0 @@ -import mmcv -import torch - -from mmdet.core import bbox_overlaps - - -@mmcv.jit(derivate=True, coderize=True) -def isr_p(cls_score, - bbox_pred, - bbox_targets, - rois, - sampling_results, - loss_cls, - bbox_coder, - k=2, - bias=0, - num_class=80): - """Importance-based Sample Reweighting (ISR_P), positive part. - - Args: - cls_score (Tensor): Predicted classification scores. - bbox_pred (Tensor): Predicted bbox deltas. - bbox_targets (tuple[Tensor]): A tuple of bbox targets, the are - labels, label_weights, bbox_targets, bbox_weights, respectively. - rois (Tensor): Anchors (single_stage) in shape (n, 4) or RoIs - (two_stage) in shape (n, 5). - sampling_results (obj): Sampling results. 
- loss_cls (func): Classification loss func of the head. - bbox_coder (obj): BBox coder of the head. - k (float): Power of the non-linear mapping. - bias (float): Shift of the non-linear mapping. - num_class (int): Number of classes, default: 80. - - Return: - tuple([Tensor]): labels, imp_based_label_weights, bbox_targets, - bbox_target_weights - """ - - labels, label_weights, bbox_targets, bbox_weights = bbox_targets - pos_label_inds = ((labels >= 0) & - (labels < num_class)).nonzero().reshape(-1) - pos_labels = labels[pos_label_inds] - - # if no positive samples, return the original targets - num_pos = float(pos_label_inds.size(0)) - if num_pos == 0: - return labels, label_weights, bbox_targets, bbox_weights - - # merge pos_assigned_gt_inds of per image to a single tensor - gts = list() - last_max_gt = 0 - for i in range(len(sampling_results)): - gt_i = sampling_results[i].pos_assigned_gt_inds - gts.append(gt_i + last_max_gt) - if len(gt_i) != 0: - last_max_gt = gt_i.max() + 1 - gts = torch.cat(gts) - assert len(gts) == num_pos - - cls_score = cls_score.detach() - bbox_pred = bbox_pred.detach() - - # For single stage detectors, rois here indicate anchors, in shape (N, 4) - # For two stage detectors, rois are in shape (N, 5) - if rois.size(-1) == 5: - pos_rois = rois[pos_label_inds][:, 1:] - else: - pos_rois = rois[pos_label_inds] - - if bbox_pred.size(-1) > 4: - bbox_pred = bbox_pred.view(bbox_pred.size(0), -1, 4) - pos_delta_pred = bbox_pred[pos_label_inds, pos_labels].view(-1, 4) - else: - pos_delta_pred = bbox_pred[pos_label_inds].view(-1, 4) - - # compute iou of the predicted bbox and the corresponding GT - pos_delta_target = bbox_targets[pos_label_inds].view(-1, 4) - pos_bbox_pred = bbox_coder.decode(pos_rois, pos_delta_pred) - target_bbox_pred = bbox_coder.decode(pos_rois, pos_delta_target) - ious = bbox_overlaps(pos_bbox_pred, target_bbox_pred, is_aligned=True) - - pos_imp_weights = label_weights[pos_label_inds] - # Two steps to compute IoU-HLR. 
Samples are first sorted by IoU locally, - # then sorted again within the same-rank group - max_l_num = pos_labels.bincount().max() - for label in pos_labels.unique(): - l_inds = (pos_labels == label).nonzero().view(-1) - l_gts = gts[l_inds] - for t in l_gts.unique(): - t_inds = l_inds[l_gts == t] - t_ious = ious[t_inds] - _, t_iou_rank_idx = t_ious.sort(descending=True) - _, t_iou_rank = t_iou_rank_idx.sort() - ious[t_inds] += max_l_num - t_iou_rank.float() - l_ious = ious[l_inds] - _, l_iou_rank_idx = l_ious.sort(descending=True) - _, l_iou_rank = l_iou_rank_idx.sort() # IoU-HLR - # linearly map HLR to label weights - pos_imp_weights[l_inds] *= (max_l_num - l_iou_rank.float()) / max_l_num - - pos_imp_weights = (bias + pos_imp_weights * (1 - bias)).pow(k) - - # normalize to make the new weighted loss value equal to the original loss - pos_loss_cls = loss_cls( - cls_score[pos_label_inds], pos_labels, reduction_override='none') - if pos_loss_cls.dim() > 1: - ori_pos_loss_cls = pos_loss_cls * label_weights[pos_label_inds][:, - None] - new_pos_loss_cls = pos_loss_cls * pos_imp_weights[:, None] - else: - ori_pos_loss_cls = pos_loss_cls * label_weights[pos_label_inds] - new_pos_loss_cls = pos_loss_cls * pos_imp_weights - pos_loss_cls_ratio = ori_pos_loss_cls.sum() / new_pos_loss_cls.sum() - pos_imp_weights = pos_imp_weights * pos_loss_cls_ratio - label_weights[pos_label_inds] = pos_imp_weights - - bbox_targets = labels, label_weights, bbox_targets, bbox_weights - return bbox_targets - - -@mmcv.jit(derivate=True, coderize=True) -def carl_loss(cls_score, - labels, - bbox_pred, - bbox_targets, - loss_bbox, - k=1, - bias=0.2, - avg_factor=None, - sigmoid=False, - num_class=80): - """Classification-Aware Regression Loss (CARL). - - Args: - cls_score (Tensor): Predicted classification scores. - labels (Tensor): Targets of classification. - bbox_pred (Tensor): Predicted bbox deltas. - bbox_targets (Tensor): Target of bbox regression. - loss_bbox (func): Regression loss func of the head. - bbox_coder (obj): BBox coder of the head. - k (float): Power of the non-linear mapping. - bias (float): Shift of the non-linear mapping. - avg_factor (int): Average factor used in regression loss. - sigmoid (bool): Activation of the classification score. - num_class (int): Number of classes, default: 80. - - Return: - dict: CARL loss dict. - """ - pos_label_inds = ((labels >= 0) & - (labels < num_class)).nonzero().reshape(-1) - if pos_label_inds.numel() == 0: - return dict(loss_carl=cls_score.sum()[None] * 0.) 
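-    # Worked sketch of the weighting below (numbers are illustrative, not from
-    # the original code): with bias=0.2 and k=1, a positive box with
-    # classification score 0.8 gets weight 0.2 + 0.8 * 0.8 = 0.84, while a
-    # score of 0.2 gets 0.2 + 0.8 * 0.2 = 0.36, so better-classified boxes
-    # contribute more to the regression loss before the weights are
-    # renormalized to sum to the number of positives.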
- pos_labels = labels[pos_label_inds] - - # multiply pos_cls_score with the corresponding bbox weight - # and remain gradient - if sigmoid: - pos_cls_score = cls_score.sigmoid()[pos_label_inds, pos_labels] - else: - pos_cls_score = cls_score.softmax(-1)[pos_label_inds, pos_labels] - carl_loss_weights = (bias + (1 - bias) * pos_cls_score).pow(k) - - # normalize carl_loss_weight to make its sum equal to num positive - num_pos = float(pos_cls_score.size(0)) - weight_ratio = num_pos / carl_loss_weights.sum() - carl_loss_weights *= weight_ratio - - if avg_factor is None: - avg_factor = bbox_targets.size(0) - # if is class agnostic, bbox pred is in shape (N, 4) - # otherwise, bbox pred is in shape (N, #classes, 4) - if bbox_pred.size(-1) > 4: - bbox_pred = bbox_pred.view(bbox_pred.size(0), -1, 4) - pos_bbox_preds = bbox_pred[pos_label_inds, pos_labels] - else: - pos_bbox_preds = bbox_pred[pos_label_inds] - ori_loss_reg = loss_bbox( - pos_bbox_preds, - bbox_targets[pos_label_inds], - reduction_override='none') / avg_factor - loss_carl = (ori_loss_reg * carl_loss_weights[:, None]).sum() - return dict(loss_carl=loss_carl[None]) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/deform_conv.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/deform_conv.py deleted file mode 100644 index a3f8c75ee774823eea334e3b3732af6a18f55038..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/deform_conv.py +++ /dev/null @@ -1,405 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Tuple, Union - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch import Tensor -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair, _single - -from annotator.uniformer.mmcv.utils import deprecated_api_warning -from ..cnn import CONV_LAYERS -from ..utils import ext_loader, print_log - -ext_module = ext_loader.load_ext('_ext', [ - 'deform_conv_forward', 'deform_conv_backward_input', - 'deform_conv_backward_parameters' -]) - - -class DeformConv2dFunction(Function): - - @staticmethod - def symbolic(g, - input, - offset, - weight, - stride, - padding, - dilation, - groups, - deform_groups, - bias=False, - im2col_step=32): - return g.op( - 'mmcv::MMCVDeformConv2d', - input, - offset, - weight, - stride_i=stride, - padding_i=padding, - dilation_i=dilation, - groups_i=groups, - deform_groups_i=deform_groups, - bias_i=bias, - im2col_step_i=im2col_step) - - @staticmethod - def forward(ctx, - input, - offset, - weight, - stride=1, - padding=0, - dilation=1, - groups=1, - deform_groups=1, - bias=False, - im2col_step=32): - if input is not None and input.dim() != 4: - raise ValueError( - f'Expected 4D tensor as input, got {input.dim()}D tensor \ - instead.') - assert bias is False, 'Only support bias is False.' - ctx.stride = _pair(stride) - ctx.padding = _pair(padding) - ctx.dilation = _pair(dilation) - ctx.groups = groups - ctx.deform_groups = deform_groups - ctx.im2col_step = im2col_step - - # When pytorch version >= 1.6.0, amp is adopted for fp16 mode; - # amp won't cast the type of model (float32), but "offset" is cast - # to float16 by nn.Conv2d automatically, leading to the type - # mismatch with input (when it is float32) or weight. 
- # The flag for whether to use fp16 or amp is the type of "offset", - # we cast weight and input to temporarily support fp16 and amp - # whatever the pytorch version is. - input = input.type_as(offset) - weight = weight.type_as(input) - ctx.save_for_backward(input, offset, weight) - - output = input.new_empty( - DeformConv2dFunction._output_size(ctx, input, weight)) - - ctx.bufs_ = [input.new_empty(0), input.new_empty(0)] # columns, ones - - cur_im2col_step = min(ctx.im2col_step, input.size(0)) - assert (input.size(0) % - cur_im2col_step) == 0, 'im2col step must divide batchsize' - ext_module.deform_conv_forward( - input, - weight, - offset, - output, - ctx.bufs_[0], - ctx.bufs_[1], - kW=weight.size(3), - kH=weight.size(2), - dW=ctx.stride[1], - dH=ctx.stride[0], - padW=ctx.padding[1], - padH=ctx.padding[0], - dilationW=ctx.dilation[1], - dilationH=ctx.dilation[0], - group=ctx.groups, - deformable_group=ctx.deform_groups, - im2col_step=cur_im2col_step) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, offset, weight = ctx.saved_tensors - - grad_input = grad_offset = grad_weight = None - - cur_im2col_step = min(ctx.im2col_step, input.size(0)) - assert (input.size(0) % cur_im2col_step - ) == 0, 'batch size must be divisible by im2col_step' - - grad_output = grad_output.contiguous() - if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]: - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - ext_module.deform_conv_backward_input( - input, - offset, - grad_output, - grad_input, - grad_offset, - weight, - ctx.bufs_[0], - kW=weight.size(3), - kH=weight.size(2), - dW=ctx.stride[1], - dH=ctx.stride[0], - padW=ctx.padding[1], - padH=ctx.padding[0], - dilationW=ctx.dilation[1], - dilationH=ctx.dilation[0], - group=ctx.groups, - deformable_group=ctx.deform_groups, - im2col_step=cur_im2col_step) - - if ctx.needs_input_grad[2]: - grad_weight = torch.zeros_like(weight) - ext_module.deform_conv_backward_parameters( - input, - offset, - grad_output, - grad_weight, - ctx.bufs_[0], - ctx.bufs_[1], - kW=weight.size(3), - kH=weight.size(2), - dW=ctx.stride[1], - dH=ctx.stride[0], - padW=ctx.padding[1], - padH=ctx.padding[0], - dilationW=ctx.dilation[1], - dilationH=ctx.dilation[0], - group=ctx.groups, - deformable_group=ctx.deform_groups, - scale=1, - im2col_step=cur_im2col_step) - - return grad_input, grad_offset, grad_weight, \ - None, None, None, None, None, None, None - - @staticmethod - def _output_size(ctx, input, weight): - channels = weight.size(0) - output_size = (input.size(0), channels) - for d in range(input.dim() - 2): - in_size = input.size(d + 2) - pad = ctx.padding[d] - kernel = ctx.dilation[d] * (weight.size(d + 2) - 1) + 1 - stride_ = ctx.stride[d] - output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, ) - if not all(map(lambda s: s > 0, output_size)): - raise ValueError( - 'convolution input is too small (output would be ' + - 'x'.join(map(str, output_size)) + ')') - return output_size - - -deform_conv2d = DeformConv2dFunction.apply - - -class DeformConv2d(nn.Module): - r"""Deformable 2D convolution. - - Applies a deformable 2D convolution over an input signal composed of - several input planes. DeformConv2d was described in the paper - `Deformable Convolutional Networks - `_ - - Note: - The argument ``im2col_step`` was added in version 1.3.17, which means - number of samples processed by the ``im2col_cuda_kernel`` per call. 
- It enables users to define ``batch_size`` and ``im2col_step`` more - flexibly and solved `issue mmcv#1440 - `_. - - Args: - in_channels (int): Number of channels in the input image. - out_channels (int): Number of channels produced by the convolution. - kernel_size(int, tuple): Size of the convolving kernel. - stride(int, tuple): Stride of the convolution. Default: 1. - padding (int or tuple): Zero-padding added to both sides of the input. - Default: 0. - dilation (int or tuple): Spacing between kernel elements. Default: 1. - groups (int): Number of blocked connections from input. - channels to output channels. Default: 1. - deform_groups (int): Number of deformable group partitions. - bias (bool): If True, adds a learnable bias to the output. - Default: False. - im2col_step (int): Number of samples processed by im2col_cuda_kernel - per call. It will work when ``batch_size`` > ``im2col_step``, but - ``batch_size`` must be divisible by ``im2col_step``. Default: 32. - `New in version 1.3.17.` - """ - - @deprecated_api_warning({'deformable_groups': 'deform_groups'}, - cls_name='DeformConv2d') - def __init__(self, - in_channels: int, - out_channels: int, - kernel_size: Union[int, Tuple[int, ...]], - stride: Union[int, Tuple[int, ...]] = 1, - padding: Union[int, Tuple[int, ...]] = 0, - dilation: Union[int, Tuple[int, ...]] = 1, - groups: int = 1, - deform_groups: int = 1, - bias: bool = False, - im2col_step: int = 32) -> None: - super(DeformConv2d, self).__init__() - - assert not bias, \ - f'bias={bias} is not supported in DeformConv2d.' - assert in_channels % groups == 0, \ - f'in_channels {in_channels} cannot be divisible by groups {groups}' - assert out_channels % groups == 0, \ - f'out_channels {out_channels} cannot be divisible by groups \ - {groups}' - - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.deform_groups = deform_groups - self.im2col_step = im2col_step - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - # only weight, no bias - self.weight = nn.Parameter( - torch.Tensor(out_channels, in_channels // self.groups, - *self.kernel_size)) - - self.reset_parameters() - - def reset_parameters(self): - # switch the initialization of `self.weight` to the standard kaiming - # method described in `Delving deep into rectifiers: Surpassing - # human-level performance on ImageNet classification` - He, K. et al. - # (2015), using a uniform distribution - nn.init.kaiming_uniform_(self.weight, nonlinearity='relu') - - def forward(self, x: Tensor, offset: Tensor) -> Tensor: - """Deformable Convolutional forward function. - - Args: - x (Tensor): Input feature, shape (B, C_in, H_in, W_in) - offset (Tensor): Offset for deformable convolution, shape - (B, deform_groups*kernel_size[0]*kernel_size[1]*2, - H_out, W_out), H_out, W_out are equal to the output's. - - An offset is like `[y0, x0, y1, x1, y2, x2, ..., y8, x8]`. - The spatial arrangement is like: - - .. code:: text - - (x0, y0) (x1, y1) (x2, y2) - (x3, y3) (x4, y4) (x5, y5) - (x6, y6) (x7, y7) (x8, y8) - - Returns: - Tensor: Output of the layer. 
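-
-        Example (a minimal sketch; the shapes assume ``kernel_size=3`` and
-        ``deform_groups=1`` and are chosen only for illustration):
-
-            >>> conv = DeformConv2d(16, 32, kernel_size=3, padding=1)
-            >>> x = torch.rand(1, 16, 8, 8)
-            >>> # offset needs deform_groups * 2 * kH * kW = 18 channels
-            >>> offset = torch.zeros(1, 18, 8, 8)
-            >>> out = conv(x, offset)  # shape (1, 32, 8, 8)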
- """ - # To fix an assert error in deform_conv_cuda.cpp:128 - # input image is smaller than kernel - input_pad = (x.size(2) < self.kernel_size[0]) or (x.size(3) < - self.kernel_size[1]) - if input_pad: - pad_h = max(self.kernel_size[0] - x.size(2), 0) - pad_w = max(self.kernel_size[1] - x.size(3), 0) - x = F.pad(x, (0, pad_w, 0, pad_h), 'constant', 0).contiguous() - offset = F.pad(offset, (0, pad_w, 0, pad_h), 'constant', 0) - offset = offset.contiguous() - out = deform_conv2d(x, offset, self.weight, self.stride, self.padding, - self.dilation, self.groups, self.deform_groups, - False, self.im2col_step) - if input_pad: - out = out[:, :, :out.size(2) - pad_h, :out.size(3) - - pad_w].contiguous() - return out - - def __repr__(self): - s = self.__class__.__name__ - s += f'(in_channels={self.in_channels},\n' - s += f'out_channels={self.out_channels},\n' - s += f'kernel_size={self.kernel_size},\n' - s += f'stride={self.stride},\n' - s += f'padding={self.padding},\n' - s += f'dilation={self.dilation},\n' - s += f'groups={self.groups},\n' - s += f'deform_groups={self.deform_groups},\n' - # bias is not supported in DeformConv2d. - s += 'bias=False)' - return s - - -@CONV_LAYERS.register_module('DCN') -class DeformConv2dPack(DeformConv2d): - """A Deformable Conv Encapsulation that acts as normal Conv layers. - - The offset tensor is like `[y0, x0, y1, x1, y2, x2, ..., y8, x8]`. - The spatial arrangement is like: - - .. code:: text - - (x0, y0) (x1, y1) (x2, y2) - (x3, y3) (x4, y4) (x5, y5) - (x6, y6) (x7, y7) (x8, y8) - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. - padding (int or tuple[int]): Same as nn.Conv2d. - dilation (int or tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super(DeformConv2dPack, self).__init__(*args, **kwargs) - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deform_groups * 2 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=_pair(self.stride), - padding=_pair(self.padding), - dilation=_pair(self.dilation), - bias=True) - self.init_offset() - - def init_offset(self): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x): - offset = self.conv_offset(x) - return deform_conv2d(x, offset, self.weight, self.stride, self.padding, - self.dilation, self.groups, self.deform_groups, - False, self.im2col_step) - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - version = local_metadata.get('version', None) - - if version is None or version < 2: - # the key is different in early versions - # In version < 2, DeformConvPack loads previous benchmark models. 
- if (prefix + 'conv_offset.weight' not in state_dict - and prefix[:-1] + '_offset.weight' in state_dict): - state_dict[prefix + 'conv_offset.weight'] = state_dict.pop( - prefix[:-1] + '_offset.weight') - if (prefix + 'conv_offset.bias' not in state_dict - and prefix[:-1] + '_offset.bias' in state_dict): - state_dict[prefix + - 'conv_offset.bias'] = state_dict.pop(prefix[:-1] + - '_offset.bias') - - if version is not None and version > 1: - print_log( - f'DeformConv2dPack {prefix.rstrip(".")} is upgraded to ' - 'version 2.', - logger='root') - - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, unexpected_keys, - error_msgs) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/momentum_updater.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/momentum_updater.py deleted file mode 100644 index 60437756ceedf06055ec349df69a25465738d3f0..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/momentum_updater.py +++ /dev/null @@ -1,493 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import annotator.uniformer.mmcv as mmcv -from .hook import HOOKS, Hook -from .lr_updater import annealing_cos, annealing_linear, format_param - - -class MomentumUpdaterHook(Hook): - - def __init__(self, - by_epoch=True, - warmup=None, - warmup_iters=0, - warmup_ratio=0.9): - # validate the "warmup" argument - if warmup is not None: - if warmup not in ['constant', 'linear', 'exp']: - raise ValueError( - f'"{warmup}" is not a supported type for warming up, valid' - ' types are "constant" and "linear"') - if warmup is not None: - assert warmup_iters > 0, \ - '"warmup_iters" must be a positive integer' - assert 0 < warmup_ratio <= 1.0, \ - '"warmup_momentum" must be in range (0,1]' - - self.by_epoch = by_epoch - self.warmup = warmup - self.warmup_iters = warmup_iters - self.warmup_ratio = warmup_ratio - - self.base_momentum = [] # initial momentum for all param groups - self.regular_momentum = [ - ] # expected momentum if no warming up is performed - - def _set_momentum(self, runner, momentum_groups): - if isinstance(runner.optimizer, dict): - for k, optim in runner.optimizer.items(): - for param_group, mom in zip(optim.param_groups, - momentum_groups[k]): - if 'momentum' in param_group.keys(): - param_group['momentum'] = mom - elif 'betas' in param_group.keys(): - param_group['betas'] = (mom, param_group['betas'][1]) - else: - for param_group, mom in zip(runner.optimizer.param_groups, - momentum_groups): - if 'momentum' in param_group.keys(): - param_group['momentum'] = mom - elif 'betas' in param_group.keys(): - param_group['betas'] = (mom, param_group['betas'][1]) - - def get_momentum(self, runner, base_momentum): - raise NotImplementedError - - def get_regular_momentum(self, runner): - if isinstance(runner.optimizer, dict): - momentum_groups = {} - for k in runner.optimizer.keys(): - _momentum_group = [ - self.get_momentum(runner, _base_momentum) - for _base_momentum in self.base_momentum[k] - ] - momentum_groups.update({k: _momentum_group}) - return momentum_groups - else: - return [ - self.get_momentum(runner, _base_momentum) - for _base_momentum in self.base_momentum - ] - - def get_warmup_momentum(self, cur_iters): - - def _get_warmup_momentum(cur_iters, regular_momentum): - if self.warmup == 'constant': - warmup_momentum = [ - _momentum / self.warmup_ratio - for _momentum in self.regular_momentum - ] - elif self.warmup == 
'linear': - k = (1 - cur_iters / self.warmup_iters) * (1 - - self.warmup_ratio) - warmup_momentum = [ - _momentum / (1 - k) for _momentum in self.regular_mom - ] - elif self.warmup == 'exp': - k = self.warmup_ratio**(1 - cur_iters / self.warmup_iters) - warmup_momentum = [ - _momentum / k for _momentum in self.regular_mom - ] - return warmup_momentum - - if isinstance(self.regular_momentum, dict): - momentum_groups = {} - for key, regular_momentum in self.regular_momentum.items(): - momentum_groups[key] = _get_warmup_momentum( - cur_iters, regular_momentum) - return momentum_groups - else: - return _get_warmup_momentum(cur_iters, self.regular_momentum) - - def before_run(self, runner): - # NOTE: when resuming from a checkpoint, - # if 'initial_momentum' is not saved, - # it will be set according to the optimizer params - if isinstance(runner.optimizer, dict): - self.base_momentum = {} - for k, optim in runner.optimizer.items(): - for group in optim.param_groups: - if 'momentum' in group.keys(): - group.setdefault('initial_momentum', group['momentum']) - else: - group.setdefault('initial_momentum', group['betas'][0]) - _base_momentum = [ - group['initial_momentum'] for group in optim.param_groups - ] - self.base_momentum.update({k: _base_momentum}) - else: - for group in runner.optimizer.param_groups: - if 'momentum' in group.keys(): - group.setdefault('initial_momentum', group['momentum']) - else: - group.setdefault('initial_momentum', group['betas'][0]) - self.base_momentum = [ - group['initial_momentum'] - for group in runner.optimizer.param_groups - ] - - def before_train_epoch(self, runner): - if not self.by_epoch: - return - self.regular_mom = self.get_regular_momentum(runner) - self._set_momentum(runner, self.regular_mom) - - def before_train_iter(self, runner): - cur_iter = runner.iter - if not self.by_epoch: - self.regular_mom = self.get_regular_momentum(runner) - if self.warmup is None or cur_iter >= self.warmup_iters: - self._set_momentum(runner, self.regular_mom) - else: - warmup_momentum = self.get_warmup_momentum(cur_iter) - self._set_momentum(runner, warmup_momentum) - elif self.by_epoch: - if self.warmup is None or cur_iter > self.warmup_iters: - return - elif cur_iter == self.warmup_iters: - self._set_momentum(runner, self.regular_mom) - else: - warmup_momentum = self.get_warmup_momentum(cur_iter) - self._set_momentum(runner, warmup_momentum) - - -@HOOKS.register_module() -class StepMomentumUpdaterHook(MomentumUpdaterHook): - """Step momentum scheduler with min value clipping. - - Args: - step (int | list[int]): Step to decay the momentum. If an int value is - given, regard it as the decay interval. If a list is given, decay - momentum at these steps. - gamma (float, optional): Decay momentum ratio. Default: 0.5. - min_momentum (float, optional): Minimum momentum value to keep. If - momentum after decay is lower than this value, it will be clipped - accordingly. If None is given, we don't perform lr clipping. - Default: None. 
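-
-    Example (worked numbers, illustrative only): with ``step=[8, 11]`` and
-    ``gamma=0.5``, a base momentum of 0.9 stays at 0.9 for epochs 0-7, drops
-    to 0.45 for epochs 8-10 and to 0.225 from epoch 11 on, clipped to
-    ``min_momentum`` if it would otherwise fall below that value.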
- """ - - def __init__(self, step, gamma=0.5, min_momentum=None, **kwargs): - if isinstance(step, list): - assert mmcv.is_list_of(step, int) - assert all([s > 0 for s in step]) - elif isinstance(step, int): - assert step > 0 - else: - raise TypeError('"step" must be a list or integer') - self.step = step - self.gamma = gamma - self.min_momentum = min_momentum - super(StepMomentumUpdaterHook, self).__init__(**kwargs) - - def get_momentum(self, runner, base_momentum): - progress = runner.epoch if self.by_epoch else runner.iter - - # calculate exponential term - if isinstance(self.step, int): - exp = progress // self.step - else: - exp = len(self.step) - for i, s in enumerate(self.step): - if progress < s: - exp = i - break - - momentum = base_momentum * (self.gamma**exp) - if self.min_momentum is not None: - # clip to a minimum value - momentum = max(momentum, self.min_momentum) - return momentum - - -@HOOKS.register_module() -class CosineAnnealingMomentumUpdaterHook(MomentumUpdaterHook): - - def __init__(self, min_momentum=None, min_momentum_ratio=None, **kwargs): - assert (min_momentum is None) ^ (min_momentum_ratio is None) - self.min_momentum = min_momentum - self.min_momentum_ratio = min_momentum_ratio - super(CosineAnnealingMomentumUpdaterHook, self).__init__(**kwargs) - - def get_momentum(self, runner, base_momentum): - if self.by_epoch: - progress = runner.epoch - max_progress = runner.max_epochs - else: - progress = runner.iter - max_progress = runner.max_iters - if self.min_momentum_ratio is not None: - target_momentum = base_momentum * self.min_momentum_ratio - else: - target_momentum = self.min_momentum - return annealing_cos(base_momentum, target_momentum, - progress / max_progress) - - -@HOOKS.register_module() -class CyclicMomentumUpdaterHook(MomentumUpdaterHook): - """Cyclic momentum Scheduler. - - Implement the cyclical momentum scheduler policy described in - https://arxiv.org/pdf/1708.07120.pdf - - This momentum scheduler usually used together with the CyclicLRUpdater - to improve the performance in the 3D detection area. - - Attributes: - target_ratio (tuple[float]): Relative ratio of the lowest momentum and - the highest momentum to the initial momentum. - cyclic_times (int): Number of cycles during training - step_ratio_up (float): The ratio of the increasing process of momentum - in the total cycle. - by_epoch (bool): Whether to update momentum by epoch. 
- """ - - def __init__(self, - by_epoch=False, - target_ratio=(0.85 / 0.95, 1), - cyclic_times=1, - step_ratio_up=0.4, - **kwargs): - if isinstance(target_ratio, float): - target_ratio = (target_ratio, target_ratio / 1e5) - elif isinstance(target_ratio, tuple): - target_ratio = (target_ratio[0], target_ratio[0] / 1e5) \ - if len(target_ratio) == 1 else target_ratio - else: - raise ValueError('target_ratio should be either float ' - f'or tuple, got {type(target_ratio)}') - - assert len(target_ratio) == 2, \ - '"target_ratio" must be list or tuple of two floats' - assert 0 <= step_ratio_up < 1.0, \ - '"step_ratio_up" must be in range [0,1)' - - self.target_ratio = target_ratio - self.cyclic_times = cyclic_times - self.step_ratio_up = step_ratio_up - self.momentum_phases = [] # init momentum_phases - # currently only support by_epoch=False - assert not by_epoch, \ - 'currently only support "by_epoch" = False' - super(CyclicMomentumUpdaterHook, self).__init__(by_epoch, **kwargs) - - def before_run(self, runner): - super(CyclicMomentumUpdaterHook, self).before_run(runner) - # initiate momentum_phases - # total momentum_phases are separated as up and down - max_iter_per_phase = runner.max_iters // self.cyclic_times - iter_up_phase = int(self.step_ratio_up * max_iter_per_phase) - self.momentum_phases.append( - [0, iter_up_phase, max_iter_per_phase, 1, self.target_ratio[0]]) - self.momentum_phases.append([ - iter_up_phase, max_iter_per_phase, max_iter_per_phase, - self.target_ratio[0], self.target_ratio[1] - ]) - - def get_momentum(self, runner, base_momentum): - curr_iter = runner.iter - for (start_iter, end_iter, max_iter_per_phase, start_ratio, - end_ratio) in self.momentum_phases: - curr_iter %= max_iter_per_phase - if start_iter <= curr_iter < end_iter: - progress = curr_iter - start_iter - return annealing_cos(base_momentum * start_ratio, - base_momentum * end_ratio, - progress / (end_iter - start_iter)) - - -@HOOKS.register_module() -class OneCycleMomentumUpdaterHook(MomentumUpdaterHook): - """OneCycle momentum Scheduler. - - This momentum scheduler usually used together with the OneCycleLrUpdater - to improve the performance. - - Args: - base_momentum (float or list): Lower momentum boundaries in the cycle - for each parameter group. Note that momentum is cycled inversely - to learning rate; at the peak of a cycle, momentum is - 'base_momentum' and learning rate is 'max_lr'. - Default: 0.85 - max_momentum (float or list): Upper momentum boundaries in the cycle - for each parameter group. Functionally, - it defines the cycle amplitude (max_momentum - base_momentum). - Note that momentum is cycled inversely - to learning rate; at the start of a cycle, momentum is - 'max_momentum' and learning rate is 'base_lr' - Default: 0.95 - pct_start (float): The percentage of the cycle (in number of steps) - spent increasing the learning rate. - Default: 0.3 - anneal_strategy (str): {'cos', 'linear'} - Specifies the annealing strategy: 'cos' for cosine annealing, - 'linear' for linear annealing. - Default: 'cos' - three_phase (bool): If three_phase is True, use a third phase of the - schedule to annihilate the learning rate according to - final_div_factor instead of modifying the second phase (the first - two phases will be symmetrical about the step indicated by - pct_start). 
- Default: False - """ - - def __init__(self, - base_momentum=0.85, - max_momentum=0.95, - pct_start=0.3, - anneal_strategy='cos', - three_phase=False, - **kwargs): - # validate by_epoch, currently only support by_epoch=False - if 'by_epoch' not in kwargs: - kwargs['by_epoch'] = False - else: - assert not kwargs['by_epoch'], \ - 'currently only support "by_epoch" = False' - if not isinstance(base_momentum, (float, list, dict)): - raise ValueError('base_momentum must be the type among of float,' - 'list or dict.') - self._base_momentum = base_momentum - if not isinstance(max_momentum, (float, list, dict)): - raise ValueError('max_momentum must be the type among of float,' - 'list or dict.') - self._max_momentum = max_momentum - # validate pct_start - if pct_start < 0 or pct_start > 1 or not isinstance(pct_start, float): - raise ValueError('Expected float between 0 and 1 pct_start, but ' - f'got {pct_start}') - self.pct_start = pct_start - # validate anneal_strategy - if anneal_strategy not in ['cos', 'linear']: - raise ValueError('anneal_strategy must by one of "cos" or ' - f'"linear", instead got {anneal_strategy}') - elif anneal_strategy == 'cos': - self.anneal_func = annealing_cos - elif anneal_strategy == 'linear': - self.anneal_func = annealing_linear - self.three_phase = three_phase - self.momentum_phases = [] # init momentum_phases - super(OneCycleMomentumUpdaterHook, self).__init__(**kwargs) - - def before_run(self, runner): - if isinstance(runner.optimizer, dict): - for k, optim in runner.optimizer.items(): - if ('momentum' not in optim.defaults - and 'betas' not in optim.defaults): - raise ValueError('optimizer must support momentum with' - 'option enabled') - self.use_beta1 = 'betas' in optim.defaults - _base_momentum = format_param(k, optim, self._base_momentum) - _max_momentum = format_param(k, optim, self._max_momentum) - for group, b_momentum, m_momentum in zip( - optim.param_groups, _base_momentum, _max_momentum): - if self.use_beta1: - _, beta2 = group['betas'] - group['betas'] = (m_momentum, beta2) - else: - group['momentum'] = m_momentum - group['base_momentum'] = b_momentum - group['max_momentum'] = m_momentum - else: - optim = runner.optimizer - if ('momentum' not in optim.defaults - and 'betas' not in optim.defaults): - raise ValueError('optimizer must support momentum with' - 'option enabled') - self.use_beta1 = 'betas' in optim.defaults - k = type(optim).__name__ - _base_momentum = format_param(k, optim, self._base_momentum) - _max_momentum = format_param(k, optim, self._max_momentum) - for group, b_momentum, m_momentum in zip(optim.param_groups, - _base_momentum, - _max_momentum): - if self.use_beta1: - _, beta2 = group['betas'] - group['betas'] = (m_momentum, beta2) - else: - group['momentum'] = m_momentum - group['base_momentum'] = b_momentum - group['max_momentum'] = m_momentum - - if self.three_phase: - self.momentum_phases.append({ - 'end_iter': - float(self.pct_start * runner.max_iters) - 1, - 'start_momentum': - 'max_momentum', - 'end_momentum': - 'base_momentum' - }) - self.momentum_phases.append({ - 'end_iter': - float(2 * self.pct_start * runner.max_iters) - 2, - 'start_momentum': - 'base_momentum', - 'end_momentum': - 'max_momentum' - }) - self.momentum_phases.append({ - 'end_iter': runner.max_iters - 1, - 'start_momentum': 'max_momentum', - 'end_momentum': 'max_momentum' - }) - else: - self.momentum_phases.append({ - 'end_iter': - float(self.pct_start * runner.max_iters) - 1, - 'start_momentum': - 'max_momentum', - 'end_momentum': - 'base_momentum' - }) 
- self.momentum_phases.append({ - 'end_iter': runner.max_iters - 1, - 'start_momentum': 'base_momentum', - 'end_momentum': 'max_momentum' - }) - - def _set_momentum(self, runner, momentum_groups): - if isinstance(runner.optimizer, dict): - for k, optim in runner.optimizer.items(): - for param_group, mom in zip(optim.param_groups, - momentum_groups[k]): - if 'momentum' in param_group.keys(): - param_group['momentum'] = mom - elif 'betas' in param_group.keys(): - param_group['betas'] = (mom, param_group['betas'][1]) - else: - for param_group, mom in zip(runner.optimizer.param_groups, - momentum_groups): - if 'momentum' in param_group.keys(): - param_group['momentum'] = mom - elif 'betas' in param_group.keys(): - param_group['betas'] = (mom, param_group['betas'][1]) - - def get_momentum(self, runner, param_group): - curr_iter = runner.iter - start_iter = 0 - for i, phase in enumerate(self.momentum_phases): - end_iter = phase['end_iter'] - if curr_iter <= end_iter or i == len(self.momentum_phases) - 1: - pct = (curr_iter - start_iter) / (end_iter - start_iter) - momentum = self.anneal_func( - param_group[phase['start_momentum']], - param_group[phase['end_momentum']], pct) - break - start_iter = end_iter - return momentum - - def get_regular_momentum(self, runner): - if isinstance(runner.optimizer, dict): - momentum_groups = {} - for k, optim in runner.optimizer.items(): - _momentum_group = [ - self.get_momentum(runner, param_group) - for param_group in optim.param_groups - ] - momentum_groups.update({k: _momentum_group}) - return momentum_groups - else: - momentum_groups = [] - for param_group in runner.optimizer.param_groups: - momentum_groups.append(self.get_momentum(runner, param_group)) - return momentum_groups diff --git a/spaces/Saketh-Reddy/webhook_space/Dockerfile b/spaces/Saketh-Reddy/webhook_space/Dockerfile deleted file mode 100644 index b742a1870b92ce033b776c0defec1a9996889d50..0000000000000000000000000000000000000000 --- a/spaces/Saketh-Reddy/webhook_space/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -COPY . . - -CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"] \ No newline at end of file diff --git a/spaces/Saturdays/CardioSight_dup/app.py b/spaces/Saturdays/CardioSight_dup/app.py deleted file mode 100644 index f32a3293c4acec979d39c140e610725bec11976d..0000000000000000000000000000000000000000 --- a/spaces/Saturdays/CardioSight_dup/app.py +++ /dev/null @@ -1,61 +0,0 @@ -import gradio as gr -import pandas as pd -from joblib import load - -def cardio(age,gender,ap_hi,ap_lo,cholesterol,gluc,smoke,alco,active,height,weight): - model = load('cardiosight.joblib') - df = pd.DataFrame.from_dict( - { - "age": [age*365], - "gender":[0 if gender=='Male' else 1], - "ap_hi": [ap_hi], - "ap_lo": [ap_lo], - "cholesterol": [cholesterol + 1], - "gluc": [gluc + 1], - "smoke":[1 if smoke=='Yes' else 0], - "alco": [1 if alco=='Yes' else 0], - "active": [1 if active=='Yes' else 0], - "newvalues_height": [height], - "newvalues_weight": [weight], - "New_values_BMI": weight/((height/100)**2), - - } - ) - - pred = model.predict(df)[0] - if pred==1: - predicted="Tiene un riesgo alto de sufrir problemas cardiovasculares" - else: - predicted="Su riesgo de sufrir problemas cardiovasculares es muy bajo. Siga así." - return "Su IMC es de "+str(round(df['New_values_BMI'][0], 2))+'. 
'+predicted - -iface = gr.Interface( - cardio, - [ - gr.Slider(1,99,label="Age"), - gr.Dropdown(choices=['Male', 'Female'], label='Gender', value='Female'), - gr.Slider(10,250,label="Diastolic Preassure"), - gr.Slider(10,250,label="Sistolic Preassure"), - gr.Radio(["Normal","High","Very High"],type="index",label="Cholesterol"), - gr.Radio(["Normal","High","Very High"],type="index",label="Glucosa Level"), - gr.Dropdown(choices=['Yes', 'No'], label='Smoke', value='No'), - gr.Dropdown(choices=['Yes', 'No'], label='Alcohol', value='No'), - gr.Dropdown(choices=['Yes', 'No'], label='Active', value='Yes'), - gr.Slider(30,220,label="Height in cm"), - gr.Slider(10,300,label="Weight in Kg"), - ], - - "text", - examples=[ - [20,'Male',110,60,"Normal","Normal",'No','No','Yes',168,60], - [30,'Female',120,70,"High","High",'No','Yes','Yes',143,70], - [40,'Male',130,80,"Very High","Very High",'Yes','Yes','No',185,80], - [50,'Female',140,90,"Normal","High",'Yes','No','No',165,90], - [60,'Male',150,100,"High","Very High",'No','No','Yes',175,100], - [70,'Female',160,90,"Very High","Normal",'Yes','Yes','No',185,110], - ], - title = 'Calculadora de Riesgo Cardiovascular mediante Inteligencia Artificial', - description = 'Duplicación del proyecto de CARDIOSIGHT. He cambiado los botones tipo check por dropdown y calculado el IMC a partir de la altura y el peso. Más información: https://saturdays.ai/2022/03/16/cardiosight-machine-learning-para-calcular-riesgo-cardiovascular/' -) - -iface.launch() \ No newline at end of file diff --git a/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/lib_v5/nets_123812KB.py b/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/lib_v5/nets_123812KB.py deleted file mode 100644 index 167d4cb2198863cf43e93440f7e63c5342fc7605..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/infer/lib/uvr5_pack/lib_v5/nets_123812KB.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn - -from . 
import layers_123821KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 32) - self.stg1_high_band_net = BaseASPPNet(2, 32) - - self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(16, 32) - - self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(32, 64) - - self.out = nn.Conv2d(64, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(32, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(32, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h b/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h deleted file mode 100644 index b2b88e8c46f19b6db0933163e57ccdb51180f517..0000000000000000000000000000000000000000 --- a/spaces/ShilongLiu/Grounding_DINO_demo/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h +++ /dev/null @@ -1,35 
+0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#pragma once -#include - -namespace groundingdino { - -at::Tensor -ms_deform_attn_cpu_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step); - -std::vector -ms_deform_attn_cpu_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step); - -} // namespace groundingdino diff --git a/spaces/StatsByZach/app/app.py b/spaces/StatsByZach/app/app.py deleted file mode 100644 index c122f6e6a174595063210ebda3f34263f9cb2e6a..0000000000000000000000000000000000000000 --- a/spaces/StatsByZach/app/app.py +++ /dev/null @@ -1,39 +0,0 @@ -##### app.py ##### -# Main shiny app -# Zach Andrews - -#Import modules -from starlette.applications import Starlette -from starlette.routing import Mount -from starlette.staticfiles import StaticFiles -from shiny import App, ui -import shinyswatch - -#Import pages -from home import home -from about import about -from gsax_timeline import gsax_timeline -from on_ice_xg_rates import on_ice_xg -from gsax_leaderboard import gsax_leaderboard -from on_ice_xgfp import on_ice_xgfp -from team_xg_rates import team_xg_rates -from gsax_comparison import gsax_comparison -from game import game -from games import games - -# Create app -routes = [ - Mount('/home', app=home), - Mount('/about', app=about), - Mount('/gsax-timeline', app=gsax_timeline), - Mount('/skater-xg-rates', app=on_ice_xg), - Mount('/gsax-leaderboard', app=gsax_leaderboard), - Mount('/skater-xg-percentages', app=on_ice_xgfp), - Mount('/team-xg-rates', app=team_xg_rates), - Mount('/gsax-comparison',app=gsax_comparison), - Mount('/games',app=games), - Mount('/game/{game_id}',app=game) -] - -#Run App -app = Starlette(routes=routes) \ No newline at end of file diff --git a/spaces/Stearns/Soar/pysoarlib/TimeConnector.py b/spaces/Stearns/Soar/pysoarlib/TimeConnector.py deleted file mode 100644 index 8be71108d70af37b42a922f2c23befea847d43b9..0000000000000000000000000000000000000000 --- a/spaces/Stearns/Soar/pysoarlib/TimeConnector.py +++ /dev/null @@ -1,181 +0,0 @@ - -import time -import datetime -current_time_ms = lambda: int(round(time.time() * 1000)) - -from .AgentConnector import AgentConnector -from .SoarWME import SoarWME - -class TimeConnector(AgentConnector): - """ An agent connector that will maintain time info on the input-link - - The input link will look like: - ( ^time ) - ( ^seconds # real-time seconds elapsed since start of agent - ^milliseconds # real-time milliseconds elapsed since start - ^steps # number of decision cycles since start of agent - ^clock ) - ( ^hour
    # 0-23 - ^minute # 0-59 - ^second # 0-59 - ^millisecond # 0-999 - ^epoch # Unix epoch time in seconds) - - Also, if using a simulated clock, the agent can send the following output-command: - ( ^set-time ) ( ^hour ^minute ^second ) - - Settings: - clock_include_ms: bool [default=True] - If true, includes milliseconds with both elapsed time and clock time - sim_clock: bool [default=False] - If true, uses a simulated clock that starts at 8AM and advances a fixed amount every DC - If false, will use the local real time - clock_step_ms: int [default=5000] - If using the simulated clock, this is the number of milliseconds it will increase every DC - - """ - def __init__(self, client, clock_include_ms=True, sim_clock=False, clock_step_ms=50, **kwargs): - """ Initializes the connector with the time info - - clock_include_ms - If True: will include millisecond resolution on clock/elapsed - (Setting to false will mean fewer changes to the input-link, slightly faster) - sim_clock - If False: clock uses real-time. If True: clock is simulated - clock_step_ms - If sim_clock=True, this is how much the clock advances every DC - """ - AgentConnector.__init__(self, client) - - self.include_ms = clock_include_ms - self.sim_clock = sim_clock - self.clock_step_ms = int(clock_step_ms) - - self.time_id = None - self.seconds = SoarWME("seconds", 0) # number of real-time seconds elapsed since start of agent - self.milsecs = SoarWME("milliseconds", 0) # number of real-time milliseconds elapsed since start of agent - self.steps = SoarWME("steps", 0) # number of decision cycles the agent has taken - - # Output Link Command: ( ^set-time ) ( ^hour ^minute ^second ) - self.add_output_command("set-time") - - # Clock info, hour minute second millisecond - self.clock_id = None - self.clock_info = [0, 0, 0, 0, 0] - self.clock_wmes = [ SoarWME("hour", 0), SoarWME("minute", 0), SoarWME("second", 0), SoarWME("millisecond", 0), SoarWME("epoch", 0) ] - self.reset_time() - - def advance_clock(self, num_ms): - """ Advances the simulated clock by the given number of milliseconds """ - self.clock_info[3] += num_ms - # MS - if self.clock_info[3] >= 1000: - self.clock_info[2] += self.clock_info[3] // 1000 - self.clock_info[4] += self.clock_info[3] // 1000 - self.clock_info[3] = self.clock_info[3] % 1000 - # Seconds - if self.clock_info[2] >= 60: - self.clock_info[1] += self.clock_info[2] // 60 - self.clock_info[2] = self.clock_info[2] % 60 - # Minutes - if self.clock_info[1] >= 60: - self.clock_info[0] += self.clock_info[1] // 60 - self.clock_info[1] = self.clock_info[1] % 60 - # Hours - self.clock_info[0] = self.clock_info[0] % 24 - - def update_clock(self): - """ Updates the clock with the real time """ - localtime = time.localtime() - self.clock_info[0] = localtime.tm_hour - self.clock_info[1] = localtime.tm_min - self.clock_info[2] = localtime.tm_sec - self.clock_info[3] = current_time_ms() % 1000 - self.clock_info[4] = int(time.time()) - - def reset_time(self): - """ Resets the time info """ - # If simulating clock, default epoch is Jan 1, 2020 at 8 AM - default_epoch = int(time.mktime(datetime.datetime(2020, 1, 1, 8, 0, 0, 0).timetuple())) - self.clock_info = [8, 0, 0, 0, default_epoch] # [ hour, min, sec, ms, epoch ] - self.milsecs.set_value(0) - self.seconds.set_value(0) - self.steps.set_value(0) - self.start_time = current_time_ms() - - def on_init_soar(self): - self._remove_from_wm() - self.reset_time() - - def set_time(self, hour, min, sec=0, ms=0): - if not self.sim_clock: - return - self.clock_info[0] = hour - 
self.clock_info[1] = (0 if min is None else min) - self.clock_info[2] = (0 if sec is None else sec) - self.clock_info[3] = ms - self.clock_info[4] = int(time.mktime(datetime.datetime(2020, 1, 1, hour, min, sec, ms).timetuple())) - - def on_input_phase(self, input_link): - # Update the global timers (time since agent start) - self.milsecs.set_value(int(current_time_ms() - self.start_time)) - self.seconds.set_value(int((current_time_ms() - self.start_time)/1000)) - self.steps.set_value(self.steps.get_value() + 1) - - # Update the clock, either real-time or simulated - if self.sim_clock: - self.advance_clock(self.clock_step_ms) - else: - self.update_clock() - - # Update working memory - if self.time_id is None: - self._add_to_wm(input_link) - else: - self._update_wm() - - def on_output_event(self, command_name, root_id): - if command_name == "set-time": - self.process_set_time_command(root_id) - - def process_set_time_command(self, time_id): - h = time_id.GetChildInt('hour') - m = time_id.GetChildInt('minute') - s = time_id.GetChildInt('second') - self.set_time(h, m, s) - time_id.CreateStringWME('status', 'complete') - - ### Internal methods - - def _add_to_wm(self, parent_id): - self.time_id = parent_id.CreateIdWME("time") - if self.include_ms: - self.milsecs.add_to_wm(self.time_id) - self.seconds.add_to_wm(self.time_id) - self.steps.add_to_wm(self.time_id) - - self.clock_id = self.time_id.CreateIdWME("clock") - for i, wme in enumerate(self.clock_wmes): - if i == 3 and not self.include_ms: - continue - wme.set_value(self.clock_info[i]) - wme.add_to_wm(self.clock_id) - - def _update_wm(self): - if self.include_ms: - self.milsecs.update_wm() - self.seconds.update_wm() - self.steps.update_wm() - for i, wme in enumerate(self.clock_wmes): - wme.set_value(self.clock_info[i]) - wme.update_wm() - - def _remove_from_wm(self): - if self.time_id is None: - return - for wme in self.clock_wmes: - wme.remove_from_wm() - self.milsecs.remove_from_wm() - self.seconds.remove_from_wm() - self.steps.remove_from_wm() - self.time_id.DestroyWME() - self.time_id = None - self.clock_id = None - diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/localinterfaces.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/localinterfaces.py deleted file mode 100644 index 2f911222d8d623ebccf295e21fe6fdc428bccdd5..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/localinterfaces.py +++ /dev/null @@ -1,5 +0,0 @@ -from warnings import warn - -warn("IPython.utils.localinterfaces has moved to jupyter_client.localinterfaces", stacklevel=2) - -from jupyter_client.localinterfaces import * diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImagePalette.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImagePalette.py deleted file mode 100644 index e455c04596c2c77f434dc61070eb332d6bc2bfee..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImagePalette.py +++ /dev/null @@ -1,272 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# image palette object -# -# History: -# 1996-03-11 fl Rewritten. -# 1997-01-03 fl Up and running. -# 1997-08-23 fl Added load hack -# 2001-04-16 fl Fixed randint shadow bug in random() -# -# Copyright (c) 1997-2001 by Secret Labs AB -# Copyright (c) 1996-1997 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import array - -from . 
import GimpGradientFile, GimpPaletteFile, ImageColor, PaletteFile -from ._deprecate import deprecate - - -class ImagePalette: - """ - Color palette for palette mapped images - - :param mode: The mode to use for the palette. See: - :ref:`concept-modes`. Defaults to "RGB" - :param palette: An optional palette. If given, it must be a bytearray, - an array or a list of ints between 0-255. The list must consist of - all channels for one color followed by the next color (e.g. RGBRGBRGB). - Defaults to an empty palette. - """ - - def __init__(self, mode="RGB", palette=None, size=0): - self.mode = mode - self.rawmode = None # if set, palette contains raw data - self.palette = palette or bytearray() - self.dirty = None - if size != 0: - deprecate("The size parameter", 10, None) - if size != len(self.palette): - msg = "wrong palette size" - raise ValueError(msg) - - @property - def palette(self): - return self._palette - - @palette.setter - def palette(self, palette): - self._colors = None - self._palette = palette - - @property - def colors(self): - if self._colors is None: - mode_len = len(self.mode) - self._colors = {} - for i in range(0, len(self.palette), mode_len): - color = tuple(self.palette[i : i + mode_len]) - if color in self._colors: - continue - self._colors[color] = i // mode_len - return self._colors - - @colors.setter - def colors(self, colors): - self._colors = colors - - def copy(self): - new = ImagePalette() - - new.mode = self.mode - new.rawmode = self.rawmode - if self.palette is not None: - new.palette = self.palette[:] - new.dirty = self.dirty - - return new - - def getdata(self): - """ - Get palette contents in format suitable for the low-level - ``im.putpalette`` primitive. - - .. warning:: This method is experimental. - """ - if self.rawmode: - return self.rawmode, self.palette - return self.mode, self.tobytes() - - def tobytes(self): - """Convert palette to bytes. - - .. warning:: This method is experimental. - """ - if self.rawmode: - msg = "palette contains raw palette data" - raise ValueError(msg) - if isinstance(self.palette, bytes): - return self.palette - arr = array.array("B", self.palette) - return arr.tobytes() - - # Declare tostring as an alias for tobytes - tostring = tobytes - - def getcolor(self, color, image=None): - """Given an rgb tuple, allocate palette entry. - - .. warning:: This method is experimental. 
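-
-        Example (a minimal sketch, not taken from the original file):
-
-            >>> p = ImagePalette("RGB")
-            >>> p.getcolor((255, 0, 0))   # allocates the first free index
-            0
-            >>> p.getcolor((0, 255, 0))   # next free index
-            1
-            >>> p.getcolor((255, 0, 0))   # existing colors are reused
-            0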
- """ - if self.rawmode: - msg = "palette contains raw palette data" - raise ValueError(msg) - if isinstance(color, tuple): - if self.mode == "RGB": - if len(color) == 4: - if color[3] != 255: - msg = "cannot add non-opaque RGBA color to RGB palette" - raise ValueError(msg) - color = color[:3] - elif self.mode == "RGBA": - if len(color) == 3: - color += (255,) - try: - return self.colors[color] - except KeyError as e: - # allocate new color slot - if not isinstance(self.palette, bytearray): - self._palette = bytearray(self.palette) - index = len(self.palette) // 3 - special_colors = () - if image: - special_colors = ( - image.info.get("background"), - image.info.get("transparency"), - ) - while index in special_colors: - index += 1 - if index >= 256: - if image: - # Search for an unused index - for i, count in reversed(list(enumerate(image.histogram()))): - if count == 0 and i not in special_colors: - index = i - break - if index >= 256: - msg = "cannot allocate more than 256 colors" - raise ValueError(msg) from e - self.colors[color] = index - if index * 3 < len(self.palette): - self._palette = ( - self.palette[: index * 3] - + bytes(color) - + self.palette[index * 3 + 3 :] - ) - else: - self._palette += bytes(color) - self.dirty = 1 - return index - else: - msg = f"unknown color specifier: {repr(color)}" - raise ValueError(msg) - - def save(self, fp): - """Save palette to text file. - - .. warning:: This method is experimental. - """ - if self.rawmode: - msg = "palette contains raw palette data" - raise ValueError(msg) - if isinstance(fp, str): - fp = open(fp, "w") - fp.write("# Palette\n") - fp.write(f"# Mode: {self.mode}\n") - for i in range(256): - fp.write(f"{i}") - for j in range(i * len(self.mode), (i + 1) * len(self.mode)): - try: - fp.write(f" {self.palette[j]}") - except IndexError: - fp.write(" 0") - fp.write("\n") - fp.close() - - -# -------------------------------------------------------------------- -# Internal - - -def raw(rawmode, data): - palette = ImagePalette() - palette.rawmode = rawmode - palette.palette = data - palette.dirty = 1 - return palette - - -# -------------------------------------------------------------------- -# Factories - - -def make_linear_lut(black, white): - lut = [] - if black == 0: - for i in range(256): - lut.append(white * i // 255) - else: - raise NotImplementedError # FIXME - return lut - - -def make_gamma_lut(exp): - lut = [] - for i in range(256): - lut.append(int(((i / 255.0) ** exp) * 255.0 + 0.5)) - return lut - - -def negative(mode="RGB"): - palette = list(range(256 * len(mode))) - palette.reverse() - return ImagePalette(mode, [i // len(mode) for i in palette]) - - -def random(mode="RGB"): - from random import randint - - palette = [] - for i in range(256 * len(mode)): - palette.append(randint(0, 255)) - return ImagePalette(mode, palette) - - -def sepia(white="#fff0c0"): - bands = [make_linear_lut(0, band) for band in ImageColor.getrgb(white)] - return ImagePalette("RGB", [bands[i % 3][i // 3] for i in range(256 * 3)]) - - -def wedge(mode="RGB"): - palette = list(range(256 * len(mode))) - return ImagePalette(mode, [i // len(mode) for i in palette]) - - -def load(filename): - # FIXME: supports GIMP gradients only - - with open(filename, "rb") as fp: - for paletteHandler in [ - GimpPaletteFile.GimpPaletteFile, - GimpGradientFile.GimpGradientFile, - PaletteFile.PaletteFile, - ]: - try: - fp.seek(0) - lut = paletteHandler(fp).getpalette() - if lut: - break - except (SyntaxError, ValueError): - # import traceback - # traceback.print_exc() - 
pass - else: - msg = "cannot load palette" - raise OSError(msg) - - return lut # data, rawmode diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/linux_and_mac/compile_mac.sh b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/linux_and_mac/compile_mac.sh deleted file mode 100644 index 5c614fe2b45a6773e8f9386ae98e0d8076491af4..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/linux_and_mac/compile_mac.sh +++ /dev/null @@ -1,5 +0,0 @@ -g++ -fPIC -D_REENTRANT -std=c++11 -arch x86_64 -c -o attach_x86_64.o attach.cpp -g++ -dynamiclib -nostartfiles -arch x86_64 -o attach_x86_64.dylib attach_x86_64.o -lc -rm attach_x86_64.o -mv attach_x86_64.dylib ../attach_x86_64.dylib - diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/win32/gdi32.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/win32/gdi32.py deleted file mode 100644 index c3b5e6ebc3e5cc6c2e408f3beaa7b2dc436bcab3..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/winappdbg/win32/gdi32.py +++ /dev/null @@ -1,507 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - -# Copyright (c) 2009-2014, Mario Vilas -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are met: -# -# * Redistributions of source code must retain the above copyright notice, -# this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright -# notice,this list of conditions and the following disclaimer in the -# documentation and/or other materials provided with the distribution. -# * Neither the name of the copyright holder nor the names of its -# contributors may be used to endorse or promote products derived from -# this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. - -""" -Wrapper for gdi32.dll in ctypes. -""" - -__revision__ = "$Id$" - -from winappdbg.win32.defines import * -from winappdbg.win32.kernel32 import GetLastError, SetLastError - -#============================================================================== -# This is used later on to calculate the list of exported symbols. 
-_all = None -_all = set(vars().keys()) -#============================================================================== - -#--- Helpers ------------------------------------------------------------------ - -#--- Types -------------------------------------------------------------------- - -#--- Constants ---------------------------------------------------------------- - -# GDI object types -OBJ_PEN = 1 -OBJ_BRUSH = 2 -OBJ_DC = 3 -OBJ_METADC = 4 -OBJ_PAL = 5 -OBJ_FONT = 6 -OBJ_BITMAP = 7 -OBJ_REGION = 8 -OBJ_METAFILE = 9 -OBJ_MEMDC = 10 -OBJ_EXTPEN = 11 -OBJ_ENHMETADC = 12 -OBJ_ENHMETAFILE = 13 -OBJ_COLORSPACE = 14 -GDI_OBJ_LAST = OBJ_COLORSPACE - -# Ternary raster operations -SRCCOPY = 0x00CC0020 # dest = source -SRCPAINT = 0x00EE0086 # dest = source OR dest -SRCAND = 0x008800C6 # dest = source AND dest -SRCINVERT = 0x00660046 # dest = source XOR dest -SRCERASE = 0x00440328 # dest = source AND (NOT dest) -NOTSRCCOPY = 0x00330008 # dest = (NOT source) -NOTSRCERASE = 0x001100A6 # dest = (NOT src) AND (NOT dest) -MERGECOPY = 0x00C000CA # dest = (source AND pattern) -MERGEPAINT = 0x00BB0226 # dest = (NOT source) OR dest -PATCOPY = 0x00F00021 # dest = pattern -PATPAINT = 0x00FB0A09 # dest = DPSnoo -PATINVERT = 0x005A0049 # dest = pattern XOR dest -DSTINVERT = 0x00550009 # dest = (NOT dest) -BLACKNESS = 0x00000042 # dest = BLACK -WHITENESS = 0x00FF0062 # dest = WHITE -NOMIRRORBITMAP = 0x80000000 # Do not Mirror the bitmap in this call -CAPTUREBLT = 0x40000000 # Include layered windows - -# Region flags -ERROR = 0 -NULLREGION = 1 -SIMPLEREGION = 2 -COMPLEXREGION = 3 -RGN_ERROR = ERROR - -# CombineRgn() styles -RGN_AND = 1 -RGN_OR = 2 -RGN_XOR = 3 -RGN_DIFF = 4 -RGN_COPY = 5 -RGN_MIN = RGN_AND -RGN_MAX = RGN_COPY - -# StretchBlt() modes -BLACKONWHITE = 1 -WHITEONBLACK = 2 -COLORONCOLOR = 3 -HALFTONE = 4 -MAXSTRETCHBLTMODE = 4 -STRETCH_ANDSCANS = BLACKONWHITE -STRETCH_ORSCANS = WHITEONBLACK -STRETCH_DELETESCANS = COLORONCOLOR -STRETCH_HALFTONE = HALFTONE - -# PolyFill() modes -ALTERNATE = 1 -WINDING = 2 -POLYFILL_LAST = 2 - -# Layout orientation options -LAYOUT_RTL = 0x00000001 # Right to left -LAYOUT_BTT = 0x00000002 # Bottom to top -LAYOUT_VBH = 0x00000004 # Vertical before horizontal -LAYOUT_ORIENTATIONMASK = LAYOUT_RTL + LAYOUT_BTT + LAYOUT_VBH -LAYOUT_BITMAPORIENTATIONPRESERVED = 0x00000008 - -# Stock objects -WHITE_BRUSH = 0 -LTGRAY_BRUSH = 1 -GRAY_BRUSH = 2 -DKGRAY_BRUSH = 3 -BLACK_BRUSH = 4 -NULL_BRUSH = 5 -HOLLOW_BRUSH = NULL_BRUSH -WHITE_PEN = 6 -BLACK_PEN = 7 -NULL_PEN = 8 -OEM_FIXED_FONT = 10 -ANSI_FIXED_FONT = 11 -ANSI_VAR_FONT = 12 -SYSTEM_FONT = 13 -DEVICE_DEFAULT_FONT = 14 -DEFAULT_PALETTE = 15 -SYSTEM_FIXED_FONT = 16 - -# Metafile functions -META_SETBKCOLOR = 0x0201 -META_SETBKMODE = 0x0102 -META_SETMAPMODE = 0x0103 -META_SETROP2 = 0x0104 -META_SETRELABS = 0x0105 -META_SETPOLYFILLMODE = 0x0106 -META_SETSTRETCHBLTMODE = 0x0107 -META_SETTEXTCHAREXTRA = 0x0108 -META_SETTEXTCOLOR = 0x0209 -META_SETTEXTJUSTIFICATION = 0x020A -META_SETWINDOWORG = 0x020B -META_SETWINDOWEXT = 0x020C -META_SETVIEWPORTORG = 0x020D -META_SETVIEWPORTEXT = 0x020E -META_OFFSETWINDOWORG = 0x020F -META_SCALEWINDOWEXT = 0x0410 -META_OFFSETVIEWPORTORG = 0x0211 -META_SCALEVIEWPORTEXT = 0x0412 -META_LINETO = 0x0213 -META_MOVETO = 0x0214 -META_EXCLUDECLIPRECT = 0x0415 -META_INTERSECTCLIPRECT = 0x0416 -META_ARC = 0x0817 -META_ELLIPSE = 0x0418 -META_FLOODFILL = 0x0419 -META_PIE = 0x081A -META_RECTANGLE = 0x041B -META_ROUNDRECT = 0x061C -META_PATBLT = 0x061D -META_SAVEDC = 0x001E -META_SETPIXEL = 0x041F -META_OFFSETCLIPRGN 
= 0x0220 -META_TEXTOUT = 0x0521 -META_BITBLT = 0x0922 -META_STRETCHBLT = 0x0B23 -META_POLYGON = 0x0324 -META_POLYLINE = 0x0325 -META_ESCAPE = 0x0626 -META_RESTOREDC = 0x0127 -META_FILLREGION = 0x0228 -META_FRAMEREGION = 0x0429 -META_INVERTREGION = 0x012A -META_PAINTREGION = 0x012B -META_SELECTCLIPREGION = 0x012C -META_SELECTOBJECT = 0x012D -META_SETTEXTALIGN = 0x012E -META_CHORD = 0x0830 -META_SETMAPPERFLAGS = 0x0231 -META_EXTTEXTOUT = 0x0a32 -META_SETDIBTODEV = 0x0d33 -META_SELECTPALETTE = 0x0234 -META_REALIZEPALETTE = 0x0035 -META_ANIMATEPALETTE = 0x0436 -META_SETPALENTRIES = 0x0037 -META_POLYPOLYGON = 0x0538 -META_RESIZEPALETTE = 0x0139 -META_DIBBITBLT = 0x0940 -META_DIBSTRETCHBLT = 0x0b41 -META_DIBCREATEPATTERNBRUSH = 0x0142 -META_STRETCHDIB = 0x0f43 -META_EXTFLOODFILL = 0x0548 -META_SETLAYOUT = 0x0149 -META_DELETEOBJECT = 0x01f0 -META_CREATEPALETTE = 0x00f7 -META_CREATEPATTERNBRUSH = 0x01F9 -META_CREATEPENINDIRECT = 0x02FA -META_CREATEFONTINDIRECT = 0x02FB -META_CREATEBRUSHINDIRECT = 0x02FC -META_CREATEREGION = 0x06FF - -# Metafile escape codes -NEWFRAME = 1 -ABORTDOC = 2 -NEXTBAND = 3 -SETCOLORTABLE = 4 -GETCOLORTABLE = 5 -FLUSHOUTPUT = 6 -DRAFTMODE = 7 -QUERYESCSUPPORT = 8 -SETABORTPROC = 9 -STARTDOC = 10 -ENDDOC = 11 -GETPHYSPAGESIZE = 12 -GETPRINTINGOFFSET = 13 -GETSCALINGFACTOR = 14 -MFCOMMENT = 15 -GETPENWIDTH = 16 -SETCOPYCOUNT = 17 -SELECTPAPERSOURCE = 18 -DEVICEDATA = 19 -PASSTHROUGH = 19 -GETTECHNOLGY = 20 -GETTECHNOLOGY = 20 -SETLINECAP = 21 -SETLINEJOIN = 22 -SETMITERLIMIT = 23 -BANDINFO = 24 -DRAWPATTERNRECT = 25 -GETVECTORPENSIZE = 26 -GETVECTORBRUSHSIZE = 27 -ENABLEDUPLEX = 28 -GETSETPAPERBINS = 29 -GETSETPRINTORIENT = 30 -ENUMPAPERBINS = 31 -SETDIBSCALING = 32 -EPSPRINTING = 33 -ENUMPAPERMETRICS = 34 -GETSETPAPERMETRICS = 35 -POSTSCRIPT_DATA = 37 -POSTSCRIPT_IGNORE = 38 -MOUSETRAILS = 39 -GETDEVICEUNITS = 42 -GETEXTENDEDTEXTMETRICS = 256 -GETEXTENTTABLE = 257 -GETPAIRKERNTABLE = 258 -GETTRACKKERNTABLE = 259 -EXTTEXTOUT = 512 -GETFACENAME = 513 -DOWNLOADFACE = 514 -ENABLERELATIVEWIDTHS = 768 -ENABLEPAIRKERNING = 769 -SETKERNTRACK = 770 -SETALLJUSTVALUES = 771 -SETCHARSET = 772 -STRETCHBLT = 2048 -METAFILE_DRIVER = 2049 -GETSETSCREENPARAMS = 3072 -QUERYDIBSUPPORT = 3073 -BEGIN_PATH = 4096 -CLIP_TO_PATH = 4097 -END_PATH = 4098 -EXT_DEVICE_CAPS = 4099 -RESTORE_CTM = 4100 -SAVE_CTM = 4101 -SET_ARC_DIRECTION = 4102 -SET_BACKGROUND_COLOR = 4103 -SET_POLY_MODE = 4104 -SET_SCREEN_ANGLE = 4105 -SET_SPREAD = 4106 -TRANSFORM_CTM = 4107 -SET_CLIP_BOX = 4108 -SET_BOUNDS = 4109 -SET_MIRROR_MODE = 4110 -OPENCHANNEL = 4110 -DOWNLOADHEADER = 4111 -CLOSECHANNEL = 4112 -POSTSCRIPT_PASSTHROUGH = 4115 -ENCAPSULATED_POSTSCRIPT = 4116 -POSTSCRIPT_IDENTIFY = 4117 -POSTSCRIPT_INJECTION = 4118 -CHECKJPEGFORMAT = 4119 -CHECKPNGFORMAT = 4120 -GET_PS_FEATURESETTING = 4121 -GDIPLUS_TS_QUERYVER = 4122 -GDIPLUS_TS_RECORD = 4123 -SPCLPASSTHROUGH2 = 4568 - -#--- Structures --------------------------------------------------------------- - -# typedef struct _RECT { -# LONG left; -# LONG top; -# LONG right; -# LONG bottom; -# }RECT, *PRECT; -class RECT(Structure): - _fields_ = [ - ('left', LONG), - ('top', LONG), - ('right', LONG), - ('bottom', LONG), - ] -PRECT = POINTER(RECT) -LPRECT = PRECT - -# typedef struct tagPOINT { -# LONG x; -# LONG y; -# } POINT; -class POINT(Structure): - _fields_ = [ - ('x', LONG), - ('y', LONG), - ] -PPOINT = POINTER(POINT) -LPPOINT = PPOINT - -# typedef struct tagBITMAP { -# LONG bmType; -# LONG bmWidth; -# LONG bmHeight; -# LONG bmWidthBytes; -# WORD bmPlanes; -# WORD 
bmBitsPixel; -# LPVOID bmBits; -# } BITMAP, *PBITMAP; -class BITMAP(Structure): - _fields_ = [ - ("bmType", LONG), - ("bmWidth", LONG), - ("bmHeight", LONG), - ("bmWidthBytes", LONG), - ("bmPlanes", WORD), - ("bmBitsPixel", WORD), - ("bmBits", LPVOID), - ] -PBITMAP = POINTER(BITMAP) -LPBITMAP = PBITMAP - -#--- High level classes ------------------------------------------------------- - -#--- gdi32.dll ---------------------------------------------------------------- - -# HDC GetDC( -# __in HWND hWnd -# ); -def GetDC(hWnd): - _GetDC = windll.gdi32.GetDC - _GetDC.argtypes = [HWND] - _GetDC.restype = HDC - _GetDC.errcheck = RaiseIfZero - return _GetDC(hWnd) - -# HDC GetWindowDC( -# __in HWND hWnd -# ); -def GetWindowDC(hWnd): - _GetWindowDC = windll.gdi32.GetWindowDC - _GetWindowDC.argtypes = [HWND] - _GetWindowDC.restype = HDC - _GetWindowDC.errcheck = RaiseIfZero - return _GetWindowDC(hWnd) - -# int ReleaseDC( -# __in HWND hWnd, -# __in HDC hDC -# ); -def ReleaseDC(hWnd, hDC): - _ReleaseDC = windll.gdi32.ReleaseDC - _ReleaseDC.argtypes = [HWND, HDC] - _ReleaseDC.restype = ctypes.c_int - _ReleaseDC.errcheck = RaiseIfZero - _ReleaseDC(hWnd, hDC) - -# HGDIOBJ SelectObject( -# __in HDC hdc, -# __in HGDIOBJ hgdiobj -# ); -def SelectObject(hdc, hgdiobj): - _SelectObject = windll.gdi32.SelectObject - _SelectObject.argtypes = [HDC, HGDIOBJ] - _SelectObject.restype = HGDIOBJ - _SelectObject.errcheck = RaiseIfZero - return _SelectObject(hdc, hgdiobj) - -# HGDIOBJ GetStockObject( -# __in int fnObject -# ); -def GetStockObject(fnObject): - _GetStockObject = windll.gdi32.GetStockObject - _GetStockObject.argtypes = [ctypes.c_int] - _GetStockObject.restype = HGDIOBJ - _GetStockObject.errcheck = RaiseIfZero - return _GetStockObject(fnObject) - -# DWORD GetObjectType( -# __in HGDIOBJ h -# ); -def GetObjectType(h): - _GetObjectType = windll.gdi32.GetObjectType - _GetObjectType.argtypes = [HGDIOBJ] - _GetObjectType.restype = DWORD - _GetObjectType.errcheck = RaiseIfZero - return _GetObjectType(h) - -# int GetObject( -# __in HGDIOBJ hgdiobj, -# __in int cbBuffer, -# __out LPVOID lpvObject -# ); -def GetObject(hgdiobj, cbBuffer = None, lpvObject = None): - _GetObject = windll.gdi32.GetObject - _GetObject.argtypes = [HGDIOBJ, ctypes.c_int, LPVOID] - _GetObject.restype = ctypes.c_int - _GetObject.errcheck = RaiseIfZero - - # Both cbBuffer and lpvObject can be omitted, the correct - # size and structure to return are automatically deduced. - # If lpvObject is given it must be a ctypes object, not a pointer. - # Always returns a ctypes object. 
- - if cbBuffer is not None: - if lpvObject is None: - lpvObject = ctypes.create_string_buffer("", cbBuffer) - elif lpvObject is not None: - cbBuffer = sizeof(lpvObject) - else: # most likely case, both are None - t = GetObjectType(hgdiobj) - if t == OBJ_PEN: - cbBuffer = sizeof(LOGPEN) - lpvObject = LOGPEN() - elif t == OBJ_BRUSH: - cbBuffer = sizeof(LOGBRUSH) - lpvObject = LOGBRUSH() - elif t == OBJ_PAL: - cbBuffer = _GetObject(hgdiobj, 0, None) - lpvObject = (WORD * (cbBuffer // sizeof(WORD)))() - elif t == OBJ_FONT: - cbBuffer = sizeof(LOGFONT) - lpvObject = LOGFONT() - elif t == OBJ_BITMAP: # try the two possible types of bitmap - cbBuffer = sizeof(DIBSECTION) - lpvObject = DIBSECTION() - try: - _GetObject(hgdiobj, cbBuffer, byref(lpvObject)) - return lpvObject - except WindowsError: - cbBuffer = sizeof(BITMAP) - lpvObject = BITMAP() - elif t == OBJ_EXTPEN: - cbBuffer = sizeof(LOGEXTPEN) - lpvObject = LOGEXTPEN() - else: - cbBuffer = _GetObject(hgdiobj, 0, None) - lpvObject = ctypes.create_string_buffer("", cbBuffer) - _GetObject(hgdiobj, cbBuffer, byref(lpvObject)) - return lpvObject - -# LONG GetBitmapBits( -# __in HBITMAP hbmp, -# __in LONG cbBuffer, -# __out LPVOID lpvBits -# ); -def GetBitmapBits(hbmp): - _GetBitmapBits = windll.gdi32.GetBitmapBits - _GetBitmapBits.argtypes = [HBITMAP, LONG, LPVOID] - _GetBitmapBits.restype = LONG - _GetBitmapBits.errcheck = RaiseIfZero - - bitmap = GetObject(hbmp, lpvObject = BITMAP()) - cbBuffer = bitmap.bmWidthBytes * bitmap.bmHeight - lpvBits = ctypes.create_string_buffer("", cbBuffer) - _GetBitmapBits(hbmp, cbBuffer, byref(lpvBits)) - return lpvBits.raw - -# HBITMAP CreateBitmapIndirect( -# __in const BITMAP *lpbm -# ); -def CreateBitmapIndirect(lpbm): - _CreateBitmapIndirect = windll.gdi32.CreateBitmapIndirect - _CreateBitmapIndirect.argtypes = [PBITMAP] - _CreateBitmapIndirect.restype = HBITMAP - _CreateBitmapIndirect.errcheck = RaiseIfZero - return _CreateBitmapIndirect(lpbm) - -#============================================================================== -# This calculates the list of exported symbols. 
-_all = set(vars().keys()).difference(_all) -__all__ = [_x for _x in _all if not _x.startswith('_')] -__all__.sort() -#============================================================================== diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/index/backends/in_memory.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/index/backends/in_memory.py deleted file mode 100644 index 41cb248a9c4a9c96b9c5da4eac295390f9e1e24b..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/index/backends/in_memory.py +++ /dev/null @@ -1,335 +0,0 @@ -import os -from collections import defaultdict -from dataclasses import dataclass, field -from typing import ( - Any, - Dict, - Generator, - Generic, - List, - Optional, - Sequence, - Tuple, - Type, - TypeVar, - Union, - cast, -) - -import numpy as np - -from docarray import BaseDoc, DocList -from docarray.index.abstract import BaseDocIndex, _raise_not_supported -from docarray.index.backends.helper import ( - _collect_query_args, - _execute_find_and_filter_query, -) -from docarray.typing import AnyTensor, NdArray -from docarray.typing.tensor.abstract_tensor import AbstractTensor -from docarray.utils.filter import filter_docs -from docarray.utils.find import ( - FindResult, - FindResultBatched, - _FindResult, - _FindResultBatched, - find, - find_batched, -) - -TSchema = TypeVar('TSchema', bound=BaseDoc) - - -class InMemoryExactNNIndex(BaseDocIndex, Generic[TSchema]): - def __init__( - self, - docs: Optional[DocList] = None, - index_file_path: Optional[str] = None, - **kwargs, - ): - """Initialize InMemoryExactNNIndex""" - super().__init__(db_config=None, **kwargs) - self._runtime_config = self.RuntimeConfig() - - if docs and index_file_path: - raise ValueError( - 'Initialize `InMemoryExactNNIndex` with either `docs` or ' - '`index_file_path`, not both. Provide `docs` for a fresh index, or ' - '`index_file_path` to use an existing file.' - ) - - if index_file_path: - if os.path.exists(index_file_path): - self._logger.info( - f'Loading index from a binary file: {index_file_path}' - ) - self._docs = DocList.__class_getitem__( - cast(Type[BaseDoc], self._schema) - ).load_binary(file=index_file_path) - else: - self._logger.warning( - f'Index file does not exist: {index_file_path}. ' - f'Initializing empty InMemoryExactNNIndex.' - ) - self._docs = DocList.__class_getitem__( - cast(Type[BaseDoc], self._schema) - )() - else: - if docs: - self._logger.info('Docs provided. Initializing with provided docs.') - self._docs = docs - else: - self._logger.info( - 'No docs or index file provided. Initializing empty InMemoryExactNNIndex.' - ) - self._docs = DocList.__class_getitem__( - cast(Type[BaseDoc], self._schema) - )() - - def python_type_to_db_type(self, python_type: Type) -> Any: - """Map python type to database type. - Takes any python type and returns the corresponding database column type. - - :param python_type: a python type. - :return: the corresponding database column type, - or None if ``python_type`` is not supported. 
- """ - return python_type - - class QueryBuilder(BaseDocIndex.QueryBuilder): - def __init__(self, query: Optional[List[Tuple[str, Dict]]] = None): - super().__init__() - # list of tuples (method name, kwargs) - self._queries: List[Tuple[str, Dict]] = query or [] - - def build(self, *args, **kwargs) -> Any: - """Build the query object.""" - return self._queries - - find = _collect_query_args('find') - find_batched = _collect_query_args('find_batched') - filter = _collect_query_args('filter') - filter_batched = _raise_not_supported('find_batched') - text_search = _raise_not_supported('text_search') - text_search_batched = _raise_not_supported('text_search') - - @dataclass - class DBConfig(BaseDocIndex.DBConfig): - """Dataclass that contains all "static" configurations of InMemoryExactNNIndex.""" - - pass - - @dataclass - class RuntimeConfig(BaseDocIndex.RuntimeConfig): - """Dataclass that contains all "dynamic" configurations of InMemoryExactNNIndex.""" - - default_column_config: Dict[Type, Dict[str, Any]] = field( - default_factory=lambda: defaultdict( - dict, - { - AbstractTensor: {'space': 'cosine_sim'}, - }, - ) - ) - - def index(self, docs: Union[BaseDoc, Sequence[BaseDoc]], **kwargs): - """index Documents into the index. - - !!! note - Passing a sequence of Documents that is not a DocList - (such as a List of Docs) comes at a performance penalty. - This is because the Index needs to check compatibility between itself and - the data. With a DocList as input this is a single check; for other inputs - compatibility needs to be checked for every Document individually. - - :param docs: Documents to index. - """ - # implementing the public option because conversion to column dict is not needed - docs = self._validate_docs(docs) - self._docs.extend(docs) - - def _index(self, column_to_data: Dict[str, Generator[Any, None, None]]): - raise NotImplementedError - - def num_docs(self) -> int: - """ - Get the number of documents. - """ - return len(self._docs) - - def _del_items(self, doc_ids: Sequence[str]): - """Delete Documents from the index. - - :param doc_ids: ids to delete from the Document Store - """ - indices = [] - for i, doc in enumerate(self._docs): - if doc.id in doc_ids: - indices.append(i) - - del self._docs[indices] - - def _get_items( - self, doc_ids: Sequence[str] - ) -> Union[Sequence[TSchema], Sequence[Dict[str, Any]]]: - """Get Documents from the index, by `id`. - If no document is found, a KeyError is raised. - - :param doc_ids: ids to get from the Document index - :return: Sequence of Documents, sorted corresponding to the order of `doc_ids`. - Duplicate `doc_ids` can be omitted in the output. - """ - indices = [] - for i, doc in enumerate(self._docs): - if doc.id in doc_ids: - indices.append(i) - return self._docs[indices] - - def execute_query(self, query: List[Tuple[str, Dict]], *args, **kwargs) -> Any: - """ - Execute a query on the InMemoryExactNNIndex. - - Can take two kinds of inputs: - - 1. A native query of the underlying database. This is meant as a passthrough so that you - can enjoy any functionality that is not available through the Document index API. - 2. The output of this Document index' `QueryBuilder.build()` method. 
- - :param query: the query to execute - :param args: positional arguments to pass to the query - :param kwargs: keyword arguments to pass to the query - :return: the result of the query - """ - if args or kwargs: - raise ValueError( - f'args and kwargs not supported for `execute_query` on {type(self)}' - ) - find_res = _execute_find_and_filter_query( - doc_index=self, - query=query, - ) - return find_res - - def find( - self, - query: Union[AnyTensor, BaseDoc], - search_field: str = '', - limit: int = 10, - **kwargs, - ) -> FindResult: - """Find Documents in the index using nearest-neighbor search. - - :param query: query vector for KNN/ANN search. - Can be either a tensor-like (np.array, torch.Tensor, etc.) - with a single axis, or a Document - :param search_field: name of the field to search on. - Documents in the index are retrieved based on this similarity - of this field to the query. - :param limit: maximum number of Documents to return - :return: a named tuple containing `documents` and `scores` - """ - self._logger.debug(f'Executing `find` for search field {search_field}') - self._validate_search_field(search_field) - - if self.num_docs() == 0: - return FindResult(documents=[], scores=[]) # type: ignore - - config = self._column_infos[search_field].config - - docs, scores = find( - index=self._docs, - query=query, - search_field=search_field, - limit=limit, - metric=config['space'], - ) - docs_with_schema = DocList.__class_getitem__(cast(Type[BaseDoc], self._schema))( - docs - ) - return FindResult(documents=docs_with_schema, scores=scores) - - def _find( - self, query: np.ndarray, limit: int, search_field: str = '' - ) -> _FindResult: - raise NotImplementedError - - def find_batched( - self, - queries: Union[AnyTensor, DocList], - search_field: str = '', - limit: int = 10, - **kwargs, - ) -> FindResultBatched: - """Find Documents in the index using nearest-neighbor search. - - :param queries: query vector for KNN/ANN search. - Can be either a tensor-like (np.array, torch.Tensor, etc.) with a, - or a DocList. - If a tensor-like is passed, it should have shape (batch_size, vector_dim) - :param search_field: name of the field to search on. - Documents in the index are retrieved based on this similarity - of this field to the query. 
- :param limit: maximum number of documents to return per query - :return: a named tuple containing `documents` and `scores` - """ - self._logger.debug(f'Executing `find_batched` for search field {search_field}') - self._validate_search_field(search_field) - - if self.num_docs() == 0: - return FindResultBatched(documents=[], scores=[]) # type: ignore - - config = self._column_infos[search_field].config - - find_res = find_batched( - index=self._docs, - query=cast(NdArray, queries), - search_field=search_field, - limit=limit, - metric=config['space'], - ) - - return find_res - - def _find_batched( - self, queries: np.ndarray, limit: int, search_field: str = '' - ) -> _FindResultBatched: - raise NotImplementedError - - def filter( - self, - filter_query: Any, - limit: int = 10, - **kwargs, - ) -> DocList: - """Find documents in the index based on a filter query - - :param filter_query: the filter query to execute following the query - language of - :param limit: maximum number of documents to return - :return: a DocList containing the documents that match the filter query - """ - self._logger.debug(f'Executing `filter` for the query {filter_query}') - - docs = filter_docs(docs=self._docs, query=filter_query) - return cast(DocList, docs) - - def _filter(self, filter_query: Any, limit: int) -> Union[DocList, List[Dict]]: - raise NotImplementedError - - def _filter_batched( - self, filter_queries: Any, limit: int - ) -> Union[List[DocList], List[List[Dict]]]: - raise NotImplementedError(f'{type(self)} does not support filtering.') - - def _text_search( - self, query: str, limit: int, search_field: str = '' - ) -> _FindResult: - raise NotImplementedError(f'{type(self)} does not support text search.') - - def _text_search_batched( - self, queries: Sequence[str], limit: int, search_field: str = '' - ) -> _FindResultBatched: - raise NotImplementedError(f'{type(self)} does not support text search.') - - def persist(self, file: str = 'in_memory_index.bin') -> None: - """Persist InMemoryExactNNIndex into a binary file.""" - self._docs.save_binary(file=file) diff --git a/spaces/Tahnik/spreadsight-demo/README.md b/spaces/Tahnik/spreadsight-demo/README.md deleted file mode 100644 index a13001938a29106cfad9bd5a7b722ec10a3e3708..0000000000000000000000000000000000000000 --- a/spaces/Tahnik/spreadsight-demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: "Chat with PDF •\_OpenAI" -emoji: 📄🤖 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.27.0 -python_version: 3.10.9 -app_file: app.py -pinned: false -duplicated_from: fedor-ch/langchain-ynp-test ---- -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/TencentARC/MasaCtrl/app.py b/spaces/TencentARC/MasaCtrl/app.py deleted file mode 100644 index 9aa0327692ac932ac11ef35fc44dda6b3cfc7129..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/MasaCtrl/app.py +++ /dev/null @@ -1,54 +0,0 @@ -import os -import gradio as gr -import numpy as np -import torch -from diffusers import DDIMScheduler -from pytorch_lightning import seed_everything - -from masactrl.diffuser_utils import MasaCtrlPipeline -from masactrl.masactrl_utils import (AttentionBase, - regiter_attention_editor_diffusers) - -torch.set_grad_enabled(False) - -from gradio_app.image_synthesis_app import create_demo_synthesis -from gradio_app.real_image_editing_app import create_demo_editing - -from gradio_app.app_utils import global_context - - -SPACE_ID = os.getenv('SPACE_ID') -TITLE = '# 
[MasaCtrl](https://ljzycmd.github.io/projects/MasaCtrl/)' -DESCRIPTION = '' -DESCRIPTION += f'Gradio demo for MasaCtrl: [Github], [Paper]. If MasaCtrl is helpful, please help to ⭐ the Github Repo 😊' -DESCRIPTION += f'For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings. Duplicate Space' -DESCRIPTION += '
    ' - -with gr.Blocks(css="style.css") as demo: - gr.Markdown(TITLE) - gr.HTML(DESCRIPTION) - model_path_gr = gr.Dropdown( - ["xyn-ai/anything-v4.0", - "CompVis/stable-diffusion-v1-4", - "runwayml/stable-diffusion-v1-5"], - value="xyn-ai/anything-v4.0", - label="Model", info="Select the model to use!" - ) - with gr.Tab("Consistent Synthesis"): - create_demo_synthesis() - with gr.Tab("Real Editing"): - create_demo_editing() - - def reload_ckpt(model_path): - print("Reloading model from", model_path) - global_context["model"] = MasaCtrlPipeline.from_pretrained( - model_path, scheduler=global_context["scheduler"]).to(global_context["device"]) - - model_path_gr.select( - reload_ckpt, - [model_path_gr] - ) - - -if __name__ == "__main__": - demo.queue().launch() diff --git a/spaces/TencentARC/T2I-Adapter-SDXL/app.py b/spaces/TencentARC/T2I-Adapter-SDXL/app.py deleted file mode 100644 index 17f553f890e98dc497676396821756c152784676..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/T2I-Adapter-SDXL/app.py +++ /dev/null @@ -1,36 +0,0 @@ -#!/usr/bin/env python - -import os - -import gradio as gr -import torch - -from app_base import create_demo as create_demo_base -from app_sketch import create_demo as create_demo_sketch -from model import ADAPTER_NAMES, Model, download_all_adapters - -DESCRIPTION = "# T2I-Adapter-SDXL" - -if not torch.cuda.is_available(): - DESCRIPTION += "\n

    Running on CPU 🥶 This demo does not work on CPU.
    " - - -download_all_adapters() -model = Model(ADAPTER_NAMES[0]) - - -with gr.Blocks(css="style.css") as demo: - gr.Markdown(DESCRIPTION) - gr.DuplicateButton( - value="Duplicate Space for private use", - elem_id="duplicate-button", - visible=os.getenv("SHOW_DUPLICATE_BUTTON") == "1", - ) - with gr.Tabs(): - with gr.Tab(label="Base"): - create_demo_base(model) - with gr.Tab(label="Sketch"): - create_demo_sketch(model) - -if __name__ == "__main__": - demo.queue(max_size=20).launch() diff --git a/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/pasearch.py b/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/pasearch.py deleted file mode 100644 index f3482b1c7cab06437601538af2eabccbe16db0a0..0000000000000000000000000000000000000000 --- a/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/pasearch.py +++ /dev/null @@ -1,243 +0,0 @@ -#!/Users/pranab/Tools/anaconda/bin/python - -# Package imports -import os -import sys -import numpy as np -import sklearn as sk -import random -import jprops -import abc -import math -import random -sys.path.append(os.path.abspath("../lib")) -from util import * - -#base parameter search -class BaseParameterSearch(object): - __metaclass__ = abc.ABCMeta - - def __init__(self, verbose): - self.verbose = verbose - self.parameters = [] - self.paramData = {} - self.currentParams = [] - self.curIter = 0 - self.bestSolution = None - - # add param name and type - def addParam(self, param): - self.parameters.append(param) - - # add param data - def addParamVaues(self, paramName, paramData): - self.paramData[paramName] = paramData - - # max iterations - def setMaxIter(self, maxIter): - self.maxIter = maxIter - - @abc.abstractmethod - def prepare(self): - pass - - @abc.abstractmethod - def nextParamValues(self): - pass - - @abc.abstractmethod - def setCost(self, cost): - pass - - # get best solution - def getBestSolution(self): - return self.bestSolution - -#enumerate through provided list of param values -class GuidedParameterSearch: - def __init__(self, verbose=False): - self.verbose = verbose - self.parameters = [] - self.paramData = {} - self.paramIndexes = [] - self.numParamValues = [] - self.currentParams = [] - self.bestSolution = None - - # max iterations - def setMaxIter(self,maxIter): - self.maxIter = maxIter - - # add param name and type - def addParam(self, param): - self.parameters.append(param) - - # add param data - def addParamVaues(self, paramName, paramData): - self.paramData[paramName] = paramData - - # prepare - def prepare(self): - self.numParams = len(self.parameters) - for i in range(self.numParams): - self.paramIndexes.append(0) - - #number of values for each parameter - paramName = self.parameters[i][0] - self.numParamValues.append(len(self.paramData[paramName])) - self.curParamIndex = 0 - - paramValueCombList = [] - paramValueComb = [] - paramValueCombList.append(paramValueComb) - - # all params - for i in range(self.numParams): - paramValueCombListTemp = [] - for paramValueComb in paramValueCombList: - # all param values - for j in range(self.numParamValues[i]): - paramValueCombTemp = paramValueComb[:] - paramValueCombTemp.append(j) - paramValueCombListTemp.append(paramValueCombTemp) - paramValueCombList = paramValueCombListTemp - self.paramValueCombList = paramValueCombList - self.numParamValueComb = len(self.paramValueCombList) - self.curParamValueCombIndx = 0; - - # next param combination - def nextParamValues(self): - retParamNameValue = None - if self.curParamValueCombIndx < len(self.paramValueCombList): - retParamNameValue 
= [] - curParams = self.paramValueCombList[self.curParamValueCombIndx] - print (curParams) - for i in range(len(curParams)): - paramName = self.parameters[i][0] - paramValue = self.paramData[paramName][curParams[i]] - retParamNameValue.append((paramName, paramValue)) - self.curParamValueCombIndx = self.curParamValueCombIndx + 1 - self.currentParams = retParamNameValue - return retParamNameValue - - # set cost of current parameter set - def setCost(self, cost): - if self.bestSolution is not None: - if cost < self.bestSolution[1]: - self.bestSolution = (self.currentParams, cost) - else: - self.bestSolution = (self.currentParams, cost) - - # get best solution - def getBestSolution(self): - return self.bestSolution - -#random search through provided list of parameter values -class RandomParameterSearch(BaseParameterSearch): - def __init__(self, verbose=False): - super(RandomParameterSearch, self).__init__(verbose) - - - # prepare - def prepare(self): - pass - - # next param combination - def nextParamValues(self): - retParamNameValue = None - if (self.curIter < self.maxIter): - retParamNameValue = [] - for pName, pValues in self.paramData.iteritems(): - pValue = selectRandomFromList(pValues) - retParamNameValue.append((pName, pValue)) - self.curIter = self.curIter + 1 - self.currentParams = retParamNameValue - return retParamNameValue - - # set cost of current parameter set - def setCost(self, cost): - if self.bestSolution is not None: - if cost < self.bestSolution[1]: - self.bestSolution = (self.currentParams, cost) - else: - self.bestSolution = (self.currentParams, cost) - -#random search through provided list of parameter values -class SimulatedAnnealingParameterSearch(BaseParameterSearch): - def __init__(self, verbose=False): - self.curSolution = None - self.nextSolution = None - super(SimulatedAnnealingParameterSearch, self).__init__(verbose) - - # prepare - def prepare(self): - pass - - def setTemp(self, temp): - self.temp = temp - - def setTempReductionRate(self, tempRedRate): - self.tempRedRate = tempRedRate - - # next param combination - def nextParamValues(self): - retParamNameValue = None - if (self.curIter == 0): - #initial random solution - retParamNameValue = [] - for pName, pValues in self.paramData.iteritems(): - pValue = selectRandomFromList(pValues) - retParamNameValue.append((pName, pValue)) - self.curIter = self.curIter + 1 - self.currentParams = retParamNameValue - elif (self.curIter < self.maxIter): - #perturb current solution - retParamNameValue = [] - - #randomly mutate one parameter value - (pNameSel, pValue) = selectRandomFromList(self.currentParams) - pValueNext = selectRandomFromList(self.paramData[pNameSel]) - while (pValueNext == pValue): - pValueNext = selectRandomFromList(self.paramData[pNameSel]) - - #copy - for (pName, pValue) in self.currentParams: - if (pName == pNameSel): - pValueNew = pValueNext - else: - pValueNew = pValue - retParamNameValue.append((pName, pValueNew)) - self.curIter = self.curIter + 1 - self.currentParams = retParamNameValue - return retParamNameValue - - # set cost of current parameter set - def setCost(self, cost): - if self.curSolution is None: - self.curSolution = (self.currentParams, cost) - self.bestSolution = (self.currentParams, cost) - else: - self.nextSolution = (self.currentParams, cost) - if (self.nextSolution[1] < self.curSolution[1]): - if (self.verbose): - print ("next soln better") - self.curSolution = self.nextSolution - if (self.nextSolution[1] < self.bestSolution[1]): - if (self.verbose): - print ("next soln better 
than best") - self.bestSolution = self.nextSolution - else: - if (self.verbose): - print ("next soln worst") - pr = math.exp((self.curSolution[1] - self.nextSolution[1]) / self.temp) - if (pr > random.random()): - self.curSolution = self.nextSolution - if (self.verbose): - print ("next soln worst but accepted") - else: - if (self.verbose): - print ("next soln worst and rejected") - - self.temp = self.temp * self.tempRedRate - - \ No newline at end of file diff --git a/spaces/Tune-A-Video-library/Tune-A-Video-Training-UI/uploader.py b/spaces/Tune-A-Video-library/Tune-A-Video-Training-UI/uploader.py deleted file mode 100644 index a2924401a256d4369b0b8b0a12898a78cae6daa4..0000000000000000000000000000000000000000 --- a/spaces/Tune-A-Video-library/Tune-A-Video-Training-UI/uploader.py +++ /dev/null @@ -1,66 +0,0 @@ -from __future__ import annotations - -import os -import pathlib -import shlex -import subprocess - -import slugify -from huggingface_hub import HfApi - -from constants import ( - MODEL_LIBRARY_ORG_NAME, - URL_TO_JOIN_MODEL_LIBRARY_ORG, - UploadTarget, -) - - -def join_model_library_org(hf_token: str) -> None: - subprocess.run( - shlex.split( - f'curl -X POST -H "Authorization: Bearer {hf_token}" -H "Content-Type: application/json" {URL_TO_JOIN_MODEL_LIBRARY_ORG}' - ) - ) - - -def upload( - local_folder_path: str, - target_repo_name: str, - upload_to: str, - private: bool = True, - delete_existing_repo: bool = False, - hf_token: str = "", -) -> str: - hf_token = os.getenv("HF_TOKEN") or hf_token - if not hf_token: - raise ValueError - api = HfApi(token=hf_token) - - if not local_folder_path: - raise ValueError - if not target_repo_name: - target_repo_name = pathlib.Path(local_folder_path).name - target_repo_name = slugify.slugify(target_repo_name) - - if upload_to == UploadTarget.PERSONAL_PROFILE.value: - organization = api.whoami()["name"] - elif upload_to == UploadTarget.MODEL_LIBRARY.value: - organization = MODEL_LIBRARY_ORG_NAME - join_model_library_org(hf_token) - else: - raise ValueError - - repo_id = f"{organization}/{target_repo_name}" - if delete_existing_repo: - try: - api.delete_repo(repo_id, repo_type="model") - except Exception: - pass - try: - api.create_repo(repo_id, repo_type="model", private=private) - api.upload_folder(repo_id=repo_id, folder_path=local_folder_path, path_in_repo=".", repo_type="model") - url = f"https://huggingface.co/{repo_id}" - message = f"Your model was successfully uploaded to {url}." - except Exception as e: - message = str(e) - return message diff --git a/spaces/VISION23/V23ChatBot/app.py b/spaces/VISION23/V23ChatBot/app.py deleted file mode 100644 index ad9dcf78507e44ecd0be46f307abec7b59ad70d8..0000000000000000000000000000000000000000 --- a/spaces/VISION23/V23ChatBot/app.py +++ /dev/null @@ -1,55 +0,0 @@ -import os -import openai -import gradio as gr - -#if you have OpenAI API key as an environment variable, enable the below -#openai.api_key = os.getenv("OPENAI_API_KEY") - -#if you have OpenAI API key as a string, enable the below -openai.api_key = "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" - -start_sequence = "\nAI:" -restart_sequence = "\nHuman: " - -prompt = "The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.\n\nHuman: Hello, who are you?\nAI: I am an AI created by OpenAI. 
How can I help you today?\nHuman: " - -def openai_create(prompt): - - response = openai.Completion.create( - model="text-davinci-003", - prompt=prompt, - temperature=0.9, - max_tokens=150, - top_p=1, - frequency_penalty=0, - presence_penalty=0.6, - stop=[" Human:", " AI:"] - ) - - return response.choices[0].text - - - -def chatgpt_clone(input, history): - history = history or [] - s = list(sum(history, ())) - s.append(input) - inp = ' '.join(s) - output = openai_create(inp) - history.append((input, output)) - return history, history - - -block = gr.Blocks() - - -with block: - gr.Markdown("""

    V23 CHATBOT
    - """) - chatbot = gr.Chatbot() - message = gr.Textbox(placeholder=prompt) - state = gr.State() - submit = gr.Button("SEND") - submit.click(chatgpt_clone, inputs=[message, state], outputs=[chatbot, state]) - -block.launch(debug = True) \ No newline at end of file diff --git a/spaces/VickyKira/NASAGPT/client/css/field.css b/spaces/VickyKira/NASAGPT/client/css/field.css deleted file mode 100644 index 914425a75d9e62e6428bdb8f5de2c66c91f10d33..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/client/css/field.css +++ /dev/null @@ -1,11 +0,0 @@ -.field { - display: flex; - align-items: center; - padding: 4px; -} - -@media screen and (max-width: 990px) { - .field { - flex-wrap: nowrap; - } -} diff --git a/spaces/VickyKira/NASAGPT/client/css/message-input.css b/spaces/VickyKira/NASAGPT/client/css/message-input.css deleted file mode 100644 index de5f58388133bd3b2b2333dd99cecf0110002367..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/client/css/message-input.css +++ /dev/null @@ -1,27 +0,0 @@ -#message-input { - margin-right: 30px; - height: 64px; -} - -#message-input::-webkit-scrollbar { - width: 5px; -} - -#message-input::-webkit-scrollbar-track { - background: #f1f1f1; -} - -#message-input::-webkit-scrollbar-thumb { - background: #c7a2ff; -} - -#message-input::-webkit-scrollbar-thumb:hover { - background: #8b3dff; -} - -@media screen and (max-width: 360px) { - #message-input { - margin: 0; - } -} - diff --git a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Xiaor.py b/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Xiaor.py deleted file mode 100644 index 5757f9971157116cbbfabbe5420e3b7e88fed4e7..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Xiaor.py +++ /dev/null @@ -1,39 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints - -url = 'https://xiaor.eu.org' -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', - 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - headers = { - 'Content-Type': 'application/json', - } - data = { - 'model': model, - 'temperature': 0.7, - 'presence_penalty': 0, - 'messages': messages, - } - response = requests.post(url + '/p1/v1/chat/completions', - json=data, stream=True) - - if stream: - for chunk in response.iter_content(chunk_size=None): - chunk = chunk.decode('utf-8') - if chunk.strip(): - message = json.loads(chunk)['choices'][0]['message']['content'] - yield message - else: - message = response.json()['choices'][0]['message']['content'] - yield message - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join( - [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/Widium/Image-Recreation/functions/processing.py b/spaces/Widium/Image-Recreation/functions/processing.py deleted file mode 100644 index c1deff81de3c243cbde86a4018f7b8b46ccacdd6..0000000000000000000000000000000000000000 --- a/spaces/Widium/Image-Recreation/functions/processing.py +++ /dev/null @@ -1,94 +0,0 @@ -# *************************************************************************** # -# # -# processing.py # -# # -# By: Widium # -# Github : https://github.com/widium # -# # -# Created: 2022/11/10 09:10:04 by ebennace # -# Updated: 2023/05/04 
11:37:55 by Widium # -# # -# **************************************************************************** ## =============== Import =================== # -import tensorflow as tf -import numpy as np - -from numpy import ndarray -from tensorflow import Tensor -from keras.applications.vgg19 import preprocess_input - -# ======================================== # - -def create_batch_image(img : Tensor): - """ - Create a batch of images with a single image by expanding its dimensions. - - Args: - img: The input image as a tensor. - - Returns: - Tensor: The batched image tensor. - """ - img = tf.expand_dims(tf.constant(img),axis=0) - return (img) - -# ======================================== # - -def remove_batch_dimension(array : ndarray): - """Remove the batch dimension from a NumPy array. - - Args: - array: The input NumPy array with a batch dimension. - - Returns: - np.ndarray: The reshaped array without the batch dimension. - """ - array = np.reshape(array, (array.shape[1], array.shape[2], array.shape[3])) - return (array) - -# ======================================== # - -def preprocessing_img(img : Tensor): - """ - Preprocess an image for input into a VGG network. - - Args: - img: The input image as a tensor. - - Returns: - Tensor: The preprocessed image tensor. - """ - img = inverse_normalize_image(img) - preprocessed_img = preprocess_input(img) - return preprocessed_img - -# ======================================== # - -def Normalize_image(img : Tensor): - """ - Normalize an image by dividing its pixel values by 255. - - Args: - img: The input image as a tensor. - - Returns: - Tensor: The normalized image tensor. - """ - img = img / 255. - return (img) - -# ======================================== # - -def inverse_normalize_image(img : Tensor): - """ - Inverse the normalization of an image by multiplying its pixel values by 255. - - Args: - img: The input image as a tensor. - - Returns: - Tensor: The denormalized image tensor. 
- """ - img = img * 255 - return (img) - -# ======================================== # \ No newline at end of file diff --git a/spaces/XzJosh/Ava2-Bert-VITS2/README.md b/spaces/XzJosh/Ava2-Bert-VITS2/README.md deleted file mode 100644 index 3ee0f304476c09653428c8a09a157093ac70a026..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Ava2-Bert-VITS2/README.md +++ /dev/null @@ -1,5 +0,0 @@ ---- -license: mit -sdk: gradio -title: AI向晚② ---- \ No newline at end of file diff --git a/spaces/XzJosh/Echo-Bert-VITS2/utils.py b/spaces/XzJosh/Echo-Bert-VITS2/utils.py deleted file mode 100644 index c6aa6cfc64c33e2eed33e9845239e831fc1c4a1a..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Echo-Bert-VITS2/utils.py +++ /dev/null @@ -1,293 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None and not skip_optimizer and checkpoint_dict['optimizer'] is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - elif optimizer is None and not skip_optimizer: - #else: #Disable this line if Infer ,and enable the line upper - new_opt_dict = optimizer.state_dict() - new_opt_dict_params = new_opt_dict['param_groups'][0]['params'] - new_opt_dict['param_groups'] = checkpoint_dict['optimizer']['param_groups'] - new_opt_dict['param_groups'][0]['params'] = new_opt_dict_params - optimizer.load_state_dict(new_opt_dict) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - #assert "emb_g" not in k - # print("load", k) - new_state_dict[k] = saved_state_dict[k] - assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape) - except: - print("error, %s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict, strict=False) - else: - model.load_state_dict(new_state_dict, strict=False) - print("load ") - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, 
v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, default="./OUTPUT_MODEL", - help='Model name') - parser.add_argument('--cont', dest='cont', action="store_true", default=False, help="whether to continue training on the latest checkpoint") - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - hparams.cont = args.cont - return hparams - - -def clean_checkpoints(path_to_models='logs/44k/', n_ckpts_to_keep=2, sort_by_time=True): - """Freeing up space by deleting saved ckpts - - Arguments: - path_to_models -- Path to the model directory - n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth - sort_by_time -- True -> chronologically delete ckpts - False -> lexicographically delete ckpts - """ - import re - ckpts_files = [f for f in 
os.listdir(path_to_models) if os.path.isfile(os.path.join(path_to_models, f))] - name_key = (lambda _f: int(re.compile('._(\d+)\.pth').match(_f).group(1))) - time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f))) - sort_key = time_key if sort_by_time else name_key - x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')], - key=sort_key) - to_del = [os.path.join(path_to_models, fn) for fn in - (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])] - del_info = lambda fn: logger.info(f".. Free up space by deleting ckpt {fn}") - del_routine = lambda x: [os.remove(x), del_info(x)] - rs = [del_routine(fn) for fn in to_del] - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/XzJosh/ranran-Bert-VITS2/attentions.py b/spaces/XzJosh/ranran-Bert-VITS2/attentions.py deleted file mode 100644 index 1192dd7268c20c11010e73a6017ed09549695afe..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/ranran-Bert-VITS2/attentions.py +++ /dev/null @@ -1,344 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import logging - -logger = logging.getLogger(__name__) - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = 
nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, isflow = True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - #if isflow: - # cond_layer = torch.nn.Conv1d(256, 2*hidden_channels*n_layers, 1) - # self.cond_pre = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, 1) - # self.cond_layer = weight_norm(cond_layer, name='weight') - # self.gin_channels = 256 - self.cond_layer_idx = self.n_layers - if 'gin_channels' in kwargs: - self.gin_channels = kwargs['gin_channels'] - if self.gin_channels != 0: - self.spk_emb_linear = nn.Linear(self.gin_channels, self.hidden_channels) - # vits2 says 3rd block, so idx is 2 by default - self.cond_layer_idx = kwargs['cond_layer_idx'] if 'cond_layer_idx' in kwargs else 2 - logging.debug(self.gin_channels, self.cond_layer_idx) - assert self.cond_layer_idx < self.n_layers, 'cond_layer_idx should be less than n_layers' - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - def forward(self, x, x_mask, g=None): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - if i == self.cond_layer_idx and g is not None: - g = self.spk_emb_linear(g.transpose(1, 2)) - g = g.transpose(1, 2) - x = x + g - x = x * x_mask - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - 
self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." 
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. 
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.py b/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.py deleted file mode 100644 index f490c4bbd598a35de43d36ceafcbd769e7ff21bf..0000000000000000000000000000000000000000 --- a/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/groundingdino/config/GroundingDINO_SwinB.py +++ /dev/null @@ -1,43 +0,0 @@ -batch_size = 1 -modelname = "groundingdino" -backbone = "swin_B_384_22k" -position_embedding = "sine" -pe_temperatureH = 20 -pe_temperatureW = 20 -return_interm_indices = [1, 2, 3] -backbone_freeze_keywords = None -enc_layers = 6 -dec_layers = 6 -pre_norm = False -dim_feedforward = 2048 -hidden_dim = 256 -dropout = 0.0 -nheads = 8 -num_queries = 900 -query_dim = 4 -num_patterns = 0 -num_feature_levels = 4 -enc_n_points = 4 -dec_n_points = 4 -two_stage_type = "standard" -two_stage_bbox_embed_share = False -two_stage_class_embed_share = False -transformer_activation = "relu" -dec_pred_bbox_embed_share = True -dn_box_noise_scale = 1.0 -dn_label_noise_ratio = 
0.5 -dn_label_coef = 1.0 -dn_bbox_coef = 1.0 -embed_init_tgt = True -dn_labelbook_size = 2000 -max_text_len = 256 -text_encoder_type = "bert-base-uncased" -use_text_enhancer = True -use_fusion_layer = True -use_checkpoint = True -use_transformer_ckpt = True -use_text_cross_attention = True -text_dropout = 0.0 -fusion_dropout = 0.0 -fusion_droppath = 0.1 -sub_sentence_present = True diff --git a/spaces/YouLiXiya/Mobile-SAM/segment_anything/segment_anything/build_sam.py b/spaces/YouLiXiya/Mobile-SAM/segment_anything/segment_anything/build_sam.py deleted file mode 100644 index 8985c43bafaaaa4ff76ef06eb34a6fdea5c8edcf..0000000000000000000000000000000000000000 --- a/spaces/YouLiXiya/Mobile-SAM/segment_anything/segment_anything/build_sam.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from functools import partial - -from .modeling import ImageEncoderViT, MaskDecoder, PromptEncoder, Sam, TwoWayTransformer - -def build_sam(checkpoint=None): - sam_version = checkpoint.split('.')[0].split('_')[2] - if sam_version == 'b': - return build_sam_vit_b(checkpoint) - elif sam_version == 'l': - return build_sam_vit_l(checkpoint) - else: - return build_sam_vit_h(checkpoint) - -def build_sam_vit_h(checkpoint=None): - return _build_sam( - encoder_embed_dim=1280, - encoder_depth=32, - encoder_num_heads=16, - encoder_global_attn_indexes=[7, 15, 23, 31], - checkpoint=checkpoint, - ) - -def build_sam_vit_l(checkpoint=None): - return _build_sam( - encoder_embed_dim=1024, - encoder_depth=24, - encoder_num_heads=16, - encoder_global_attn_indexes=[5, 11, 17, 23], - checkpoint=checkpoint, - ) - - -def build_sam_vit_b(checkpoint=None): - return _build_sam( - encoder_embed_dim=768, - encoder_depth=12, - encoder_num_heads=12, - encoder_global_attn_indexes=[2, 5, 8, 11], - checkpoint=checkpoint, - ) - - -sam_model_registry = { - "default": build_sam, - "vit_h": build_sam, - "vit_l": build_sam_vit_l, - "vit_b": build_sam_vit_b, -} - - -def _build_sam( - encoder_embed_dim, - encoder_depth, - encoder_num_heads, - encoder_global_attn_indexes, - checkpoint=None, -): - prompt_embed_dim = 256 - image_size = 1024 - vit_patch_size = 16 - image_embedding_size = image_size // vit_patch_size - sam = Sam( - image_encoder=ImageEncoderViT( - depth=encoder_depth, - embed_dim=encoder_embed_dim, - img_size=image_size, - mlp_ratio=4, - norm_layer=partial(torch.nn.LayerNorm, eps=1e-6), - num_heads=encoder_num_heads, - patch_size=vit_patch_size, - qkv_bias=True, - use_rel_pos=True, - global_attn_indexes=encoder_global_attn_indexes, - window_size=14, - out_chans=prompt_embed_dim, - ), - prompt_encoder=PromptEncoder( - embed_dim=prompt_embed_dim, - image_embedding_size=(image_embedding_size, image_embedding_size), - input_image_size=(image_size, image_size), - mask_in_chans=16, - ), - mask_decoder=MaskDecoder( - num_multimask_outputs=3, - transformer=TwoWayTransformer( - depth=2, - embedding_dim=prompt_embed_dim, - mlp_dim=2048, - num_heads=8, - ), - transformer_dim=prompt_embed_dim, - iou_head_depth=3, - iou_head_hidden_dim=256, - ), - pixel_mean=[123.675, 116.28, 103.53], - pixel_std=[58.395, 57.12, 57.375], - ) - sam.eval() - if checkpoint is not None: - with open(checkpoint, "rb") as f: - state_dict = torch.load(f) - sam.load_state_dict(state_dict) - return sam diff --git 
a/spaces/abdvl/datahub_qa_bot/docs/quick-ingestion-guides/redshift/setup.md b/spaces/abdvl/datahub_qa_bot/docs/quick-ingestion-guides/redshift/setup.md deleted file mode 100644 index 8308b09b1f7823460e5f04b3c5c25510e8243903..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/quick-ingestion-guides/redshift/setup.md +++ /dev/null @@ -1,35 +0,0 @@ ---- -title: Setup ---- -# Redshift Ingestion Guide: Setup & Prerequisites - -To configure ingestion from Redshift, you'll need a [User](https://docs.aws.amazon.com/redshift/latest/gsg/t_adding_redshift_user_cmd.html) configured with the proper permission sets, and an associated. - -This setup guide will walk you through the steps you'll need to take via your Google Cloud Console. - -## Redshift Prerequisites - -1. Connect to your Amazon Redshift cluster using an SQL client such as SQL Workbench/J or Amazon Redshift Query Editor with your Admin user. -2. Create a [Redshift User](https://docs.aws.amazon.com/redshift/latest/gsg/t_adding_redshift_user_cmd.html) that will be used to perform the metadata extraction if you don't have one already. -For example: - -```sql -CREATE USER datahub WITH PASSWORD 'Datahub1234'; -``` - -## Redshift Setup - -1. Grant the following permission to your `datahub` user: - -```sql -ALTER USER datahub WITH SYSLOG ACCESS UNRESTRICTED; -GRANT SELECT ON pg_catalog.svv_table_info to datahub; -GRANT SELECT ON pg_catalog.svl_user_info to datahub; - -``` - -## Next Steps - -Once you've confirmed all of the above in Redshift, it's time to [move on](configuration.md) to configure the actual ingestion source within the DataHub UI. - -*Need more help? Join the conversation in [Slack](http://slack.datahubproject.io)!* diff --git a/spaces/abdvl/datahub_qa_bot/docs/sync-status.md b/spaces/abdvl/datahub_qa_bot/docs/sync-status.md deleted file mode 100644 index 7ece80c95a38f1176794092ff74cbaa1865170cf..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/sync-status.md +++ /dev/null @@ -1,46 +0,0 @@ -import FeatureAvailability from '@site/src/components/FeatureAvailability'; - -# About DataHub Sync Status - - - -When looking at metadata in DataHub, it's useful to know if the information you're looking at is relevant. -Specifically, if metadata is stale, or hasn't been updated in a while, then you should consider refreshing that metadata -using [metadata ingestion](./../metadata-ingestion/README.md) or [deleting](./how/delete-metadata.md) it if it no longer exists. - -## Sync Status Setup, Prerequisites, and Permissions - -The sync status feature is enabled by default and does not require any special setup. - -## Using Sync Status - -The DataHub UI will display the sync status in the top right corner of the page. - -The last synchronized date is basically the last time an ingestion run saw an entity. It is computed as the most recent update to the entity, excluding changes done through the UI. If an ingestion run restates an entity but doesn't actually cause any changes, we still count that as an update for the purposes of sync status. - -
    - Technical details: computing the last synchronized timestamp - -To compute the last synchronized timestamp, we look at the system metadata of all aspects associated with the entity. -We exclude any aspects where the system metadata `runId` value is unset or equal to `no-run-id-provided`, as this is what filters out changes made through the UI. -Finally, we take the most recent system metadata `lastObserved` timestamp across the aspects and use that as the last synchronized timestamp. - -
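The computation described above can be illustrated with a short sketch. This is not DataHub's implementation; it is a minimal Python sketch assuming each aspect's system metadata is available as a dict with `runId` and `lastObserved` (epoch milliseconds) keys:

```python
from typing import Optional

def last_synchronized(aspects: list) -> Optional[int]:
    """Illustrative sketch: derive the last-synchronized timestamp (epoch millis)
    from the system metadata of an entity's aspects."""
    observed = [
        a["lastObserved"]
        for a in aspects
        # changes made through the UI carry no ingestion runId, so skip them
        if a.get("runId") not in (None, "", "no-run-id-provided")
        and a.get("lastObserved") is not None
    ]
    # the most recent observation across the remaining aspects wins
    return max(observed) if observed else None
```

Under this reading, an entity whose only recent writes came from the UI would report no last-synchronized time at all.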
    - -We'll automatically assign a color based on the sync status recency: - -- Green: last synchronized in the past week -- Yellow: last synchronized in the past month -- Red: last synchronized more than a month ago - -You can hover over the sync status message in the UI to view the exact timestamp of the most recent sync. - -
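The thresholds above translate directly into a small helper. This is a sketch of the rule as described, not the UI's actual code:

```python
from datetime import datetime, timezone

def sync_status_color(last_synchronized_ms: int) -> str:
    """Map a last-synchronized timestamp (epoch millis) to the colors described above."""
    last_seen = datetime.fromtimestamp(last_synchronized_ms / 1000, tz=timezone.utc)
    age_days = (datetime.now(timezone.utc) - last_seen).days
    if age_days <= 7:
        return "green"   # synchronized within the past week
    if age_days <= 30:
        return "yellow"  # synchronized within the past month
    return "red"         # more than a month since the last sync
```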

    - -_Need more help? Join the conversation in [Slack](http://slack.datahubproject.io)!_ diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/core/seg/sampler/ohem_pixel_sampler.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/core/seg/sampler/ohem_pixel_sampler.py deleted file mode 100644 index 88bb10d44026ba9f21756eaea9e550841cd59b9f..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/core/seg/sampler/ohem_pixel_sampler.py +++ /dev/null @@ -1,76 +0,0 @@ -import torch -import torch.nn.functional as F - -from ..builder import PIXEL_SAMPLERS -from .base_pixel_sampler import BasePixelSampler - - -@PIXEL_SAMPLERS.register_module() -class OHEMPixelSampler(BasePixelSampler): - """Online Hard Example Mining Sampler for segmentation. - - Args: - context (nn.Module): The context of sampler, subclass of - :obj:`BaseDecodeHead`. - thresh (float, optional): The threshold for hard example selection. - Below which, are prediction with low confidence. If not - specified, the hard examples will be pixels of top ``min_kept`` - loss. Default: None. - min_kept (int, optional): The minimum number of predictions to keep. - Default: 100000. - """ - - def __init__(self, context, thresh=None, min_kept=100000): - super(OHEMPixelSampler, self).__init__() - self.context = context - assert min_kept > 1 - self.thresh = thresh - self.min_kept = min_kept - - def sample(self, seg_logit, seg_label): - """Sample pixels that have high loss or with low prediction confidence. - - Args: - seg_logit (torch.Tensor): segmentation logits, shape (N, C, H, W) - seg_label (torch.Tensor): segmentation label, shape (N, 1, H, W) - - Returns: - torch.Tensor: segmentation weight, shape (N, H, W) - """ - with torch.no_grad(): - assert seg_logit.shape[2:] == seg_label.shape[2:] - assert seg_label.shape[1] == 1 - seg_label = seg_label.squeeze(1).long() - batch_kept = self.min_kept * seg_label.size(0) - valid_mask = seg_label != self.context.ignore_index - seg_weight = seg_logit.new_zeros(size=seg_label.size()) - valid_seg_weight = seg_weight[valid_mask] - if self.thresh is not None: - seg_prob = F.softmax(seg_logit, dim=1) - - tmp_seg_label = seg_label.clone().unsqueeze(1) - tmp_seg_label[tmp_seg_label == self.context.ignore_index] = 0 - seg_prob = seg_prob.gather(1, tmp_seg_label).squeeze(1) - sort_prob, sort_indices = seg_prob[valid_mask].sort() - - if sort_prob.numel() > 0: - min_threshold = sort_prob[min(batch_kept, - sort_prob.numel() - 1)] - else: - min_threshold = 0.0 - threshold = max(min_threshold, self.thresh) - valid_seg_weight[seg_prob[valid_mask] < threshold] = 1. - else: - losses = self.context.loss_decode( - seg_logit, - seg_label, - weight=None, - ignore_index=self.context.ignore_index, - reduction_override='none') - # faster than topk according to https://github.com/pytorch/pytorch/issues/22812 # noqa - _, sort_indices = losses[valid_mask].sort(descending=True) - valid_seg_weight[sort_indices[:batch_kept]] = 1. 
- - seg_weight[valid_mask] = valid_seg_weight - - return seg_weight diff --git a/spaces/abidlabs/middle-ages-islamic-art/app.py b/spaces/abidlabs/middle-ages-islamic-art/app.py deleted file mode 100644 index 70022585cfb2755b811680a5dea1a7d0a8adb1b8..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/middle-ages-islamic-art/app.py +++ /dev/null @@ -1,15 +0,0 @@ -import os -import gradio as gr - -API_KEY=os.environ.get('HUGGING_FACE_HUB_TOKEN', None) - -article = """--- -This space was created using [SD Space Creator](https://huggingface.co/spaces/anzorq/sd-space-creator).""" - -gr.Interface.load( - name="models/abidlabs/middle-ages-islamic-art", - title="""Middle Ages Islamic Art""", - description="""Demo for Middle Ages Islamic Art Stable Diffusion model.""", - article=article, - api_key=API_KEY, - ).queue(concurrency_count=20).launch() diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/skeleton.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/skeleton.py deleted file mode 100644 index 6de56af0c29ae7cccbd7178f912459413f87c646..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/skeleton.py +++ /dev/null @@ -1,199 +0,0 @@ -from utils.quaternion import * -import scipy.ndimage.filters as filters - -class Skeleton(object): - def __init__(self, offset, kinematic_tree, device): - self.device = device - self._raw_offset_np = offset.numpy() - self._raw_offset = offset.clone().detach().to(device).float() - self._kinematic_tree = kinematic_tree - self._offset = None - self._parents = [0] * len(self._raw_offset) - self._parents[0] = -1 - for chain in self._kinematic_tree: - for j in range(1, len(chain)): - self._parents[chain[j]] = chain[j-1] - - def njoints(self): - return len(self._raw_offset) - - def offset(self): - return self._offset - - def set_offset(self, offsets): - self._offset = offsets.clone().detach().to(self.device).float() - - def kinematic_tree(self): - return self._kinematic_tree - - def parents(self): - return self._parents - - # joints (batch_size, joints_num, 3) - def get_offsets_joints_batch(self, joints): - assert len(joints.shape) == 3 - _offsets = self._raw_offset.expand(joints.shape[0], -1, -1).clone() - for i in range(1, self._raw_offset.shape[0]): - _offsets[:, i] = torch.norm(joints[:, i] - joints[:, self._parents[i]], p=2, dim=1)[:, None] * _offsets[:, i] - - self._offset = _offsets.detach() - return _offsets - - # joints (joints_num, 3) - def get_offsets_joints(self, joints): - assert len(joints.shape) == 2 - _offsets = self._raw_offset.clone() - for i in range(1, self._raw_offset.shape[0]): - # print(joints.shape) - _offsets[i] = torch.norm(joints[i] - joints[self._parents[i]], p=2, dim=0) * _offsets[i] - - self._offset = _offsets.detach() - return _offsets - - # face_joint_idx should follow the order of right hip, left hip, right shoulder, left shoulder - # joints (batch_size, joints_num, 3) - def inverse_kinematics_np(self, joints, face_joint_idx, smooth_forward=False): - assert len(face_joint_idx) == 4 - '''Get Forward Direction''' - l_hip, r_hip, sdr_r, sdr_l = face_joint_idx - across1 = joints[:, r_hip] - joints[:, l_hip] - across2 = joints[:, sdr_r] - joints[:, sdr_l] - across = across1 + across2 - across = across / np.sqrt((across**2).sum(axis=-1))[:, np.newaxis] - # print(across1.shape, across2.shape) - - # forward (batch_size, 3) - forward = np.cross(np.array([[0, 1, 0]]), across, axis=-1) - if smooth_forward: - forward = filters.gaussian_filter1d(forward, 20, axis=0, 
mode='nearest') - # forward (batch_size, 3) - forward = forward / np.sqrt((forward**2).sum(axis=-1))[..., np.newaxis] - - '''Get Root Rotation''' - target = np.array([[0,0,1]]).repeat(len(forward), axis=0) - root_quat = qbetween_np(forward, target) - - '''Inverse Kinematics''' - # quat_params (batch_size, joints_num, 4) - # print(joints.shape[:-1]) - quat_params = np.zeros(joints.shape[:-1] + (4,)) - # print(quat_params.shape) - root_quat[0] = np.array([[1.0, 0.0, 0.0, 0.0]]) - quat_params[:, 0] = root_quat - # quat_params[0, 0] = np.array([[1.0, 0.0, 0.0, 0.0]]) - for chain in self._kinematic_tree: - R = root_quat - for j in range(len(chain) - 1): - # (batch, 3) - u = self._raw_offset_np[chain[j+1]][np.newaxis,...].repeat(len(joints), axis=0) - # print(u.shape) - # (batch, 3) - v = joints[:, chain[j+1]] - joints[:, chain[j]] - v = v / np.sqrt((v**2).sum(axis=-1))[:, np.newaxis] - # print(u.shape, v.shape) - rot_u_v = qbetween_np(u, v) - - R_loc = qmul_np(qinv_np(R), rot_u_v) - - quat_params[:,chain[j + 1], :] = R_loc - R = qmul_np(R, R_loc) - - return quat_params - - # Be sure root joint is at the beginning of kinematic chains - def forward_kinematics(self, quat_params, root_pos, skel_joints=None, do_root_R=True): - # quat_params (batch_size, joints_num, 4) - # joints (batch_size, joints_num, 3) - # root_pos (batch_size, 3) - if skel_joints is not None: - offsets = self.get_offsets_joints_batch(skel_joints) - if len(self._offset.shape) == 2: - offsets = self._offset.expand(quat_params.shape[0], -1, -1) - joints = torch.zeros(quat_params.shape[:-1] + (3,)).to(self.device) - joints[:, 0] = root_pos - for chain in self._kinematic_tree: - if do_root_R: - R = quat_params[:, 0] - else: - R = torch.tensor([[1.0, 0.0, 0.0, 0.0]]).expand(len(quat_params), -1).detach().to(self.device) - for i in range(1, len(chain)): - R = qmul(R, quat_params[:, chain[i]]) - offset_vec = offsets[:, chain[i]] - joints[:, chain[i]] = qrot(R, offset_vec) + joints[:, chain[i-1]] - return joints - - # Be sure root joint is at the beginning of kinematic chains - def forward_kinematics_np(self, quat_params, root_pos, skel_joints=None, do_root_R=True): - # quat_params (batch_size, joints_num, 4) - # joints (batch_size, joints_num, 3) - # root_pos (batch_size, 3) - if skel_joints is not None: - skel_joints = torch.from_numpy(skel_joints) - offsets = self.get_offsets_joints_batch(skel_joints) - if len(self._offset.shape) == 2: - offsets = self._offset.expand(quat_params.shape[0], -1, -1) - offsets = offsets.numpy() - joints = np.zeros(quat_params.shape[:-1] + (3,)) - joints[:, 0] = root_pos - for chain in self._kinematic_tree: - if do_root_R: - R = quat_params[:, 0] - else: - R = np.array([[1.0, 0.0, 0.0, 0.0]]).repeat(len(quat_params), axis=0) - for i in range(1, len(chain)): - R = qmul_np(R, quat_params[:, chain[i]]) - offset_vec = offsets[:, chain[i]] - joints[:, chain[i]] = qrot_np(R, offset_vec) + joints[:, chain[i - 1]] - return joints - - def forward_kinematics_cont6d_np(self, cont6d_params, root_pos, skel_joints=None, do_root_R=True): - # cont6d_params (batch_size, joints_num, 6) - # joints (batch_size, joints_num, 3) - # root_pos (batch_size, 3) - if skel_joints is not None: - skel_joints = torch.from_numpy(skel_joints) - offsets = self.get_offsets_joints_batch(skel_joints) - if len(self._offset.shape) == 2: - offsets = self._offset.expand(cont6d_params.shape[0], -1, -1) - offsets = offsets.numpy() - joints = np.zeros(cont6d_params.shape[:-1] + (3,)) - joints[:, 0] = root_pos - for chain in self._kinematic_tree: - 
if do_root_R: - matR = cont6d_to_matrix_np(cont6d_params[:, 0]) - else: - matR = np.eye(3)[np.newaxis, :].repeat(len(cont6d_params), axis=0) - for i in range(1, len(chain)): - matR = np.matmul(matR, cont6d_to_matrix_np(cont6d_params[:, chain[i]])) - offset_vec = offsets[:, chain[i]][..., np.newaxis] - # print(matR.shape, offset_vec.shape) - joints[:, chain[i]] = np.matmul(matR, offset_vec).squeeze(-1) + joints[:, chain[i-1]] - return joints - - def forward_kinematics_cont6d(self, cont6d_params, root_pos, skel_joints=None, do_root_R=True): - # cont6d_params (batch_size, joints_num, 6) - # joints (batch_size, joints_num, 3) - # root_pos (batch_size, 3) - if skel_joints is not None: - # skel_joints = torch.from_numpy(skel_joints) - offsets = self.get_offsets_joints_batch(skel_joints) - if len(self._offset.shape) == 2: - offsets = self._offset.expand(cont6d_params.shape[0], -1, -1) - joints = torch.zeros(cont6d_params.shape[:-1] + (3,)).to(cont6d_params.device) - joints[..., 0, :] = root_pos - for chain in self._kinematic_tree: - if do_root_R: - matR = cont6d_to_matrix(cont6d_params[:, 0]) - else: - matR = torch.eye(3).expand((len(cont6d_params), -1, -1)).detach().to(cont6d_params.device) - for i in range(1, len(chain)): - matR = torch.matmul(matR, cont6d_to_matrix(cont6d_params[:, chain[i]])) - offset_vec = offsets[:, chain[i]].unsqueeze(-1) - # print(matR.shape, offset_vec.shape) - joints[:, chain[i]] = torch.matmul(matR, offset_vec).squeeze(-1) + joints[:, chain[i-1]] - return joints - - - - - diff --git a/spaces/acmyu/frame_interpolation_prototype/metrics/plotmetrics.py b/spaces/acmyu/frame_interpolation_prototype/metrics/plotmetrics.py deleted file mode 100644 index 0b5a4344d08ca7f297b568d1bfe5ca3a485b725d..0000000000000000000000000000000000000000 --- a/spaces/acmyu/frame_interpolation_prototype/metrics/plotmetrics.py +++ /dev/null @@ -1,22 +0,0 @@ -import matplotlib.pyplot as plt -import json - -with open("metrics.json") as metricsfile: - metrics = json.load(metricsfile) - - -x = [float(m['epoch']) + float(m['batch'])/1000.0 for m in metrics] -g = [float(m['g_loss']) for m in metrics] -dr = [float(m['real_loss']) for m in metrics] -df = [float(m['fake_loss'])for m in metrics] -d = [(float(m['real_loss']) + float(m['fake_loss']))/2.0 for m in metrics] -plt.plot(x, g, label = "g_loss") -plt.plot(x, d, label = "d_loss") -plt.plot(x, dr, label = "d_real_loss") -plt.plot(x, df, label = "d_fake_loss") - -plt.legend() -plt.xlabel('epoch') -plt.title('frame_interpolation_GAN (DCGAN)') - -plt.show() diff --git a/spaces/ajitrajasekharan/Bio-medical-NER-Model-Gradio-Demo/app.py b/spaces/ajitrajasekharan/Bio-medical-NER-Model-Gradio-Demo/app.py deleted file mode 100644 index 3f9d5c28e1ce8e14c08c8c17ba0adbf92f903f7f..0000000000000000000000000000000000000000 --- a/spaces/ajitrajasekharan/Bio-medical-NER-Model-Gradio-Demo/app.py +++ /dev/null @@ -1,8 +0,0 @@ -import gradio as gr -title = "Model for Biomedical NER" -description = "Gradio Demo of a pretrained model used for NER without fine-tuning. To test model predictions, simply add your text, or click one of the examples to load them. These predictions are used to perform NER as described in the link below." -article = "

    Model pretrained on biomedical corpus and used for NER without fine-tuning | HF model page


    Note: The Streamlit version of this app is a better choice for examining the model than this app:
    - Control over the number of results to display
    - Examine both masked position and [CLS] predictions
    - Compare this model's results with other pretrained BERT models.

    " -examples = [ - ["Lou Gehrig who works for XCorp suffers from [MASK]"],["A [MASK] level below 60 indicates chronic kidney disease"],["There are no specific treatment options specifically indicated for [MASK]"],["Paul Erdos died at [MASK]"] -] -gr.Interface.load("huggingface/ajitrajasekharan/biomedical",title=title,description=description,article=article, examples=examples, allow_flagging="never",enable_queue=True).launch() diff --git a/spaces/ajitrajasekharan/Image-Text-Detection/app.py b/spaces/ajitrajasekharan/Image-Text-Detection/app.py deleted file mode 100644 index d953285bd01bd3e75816d0e941a2fb36a66ccd4f..0000000000000000000000000000000000000000 --- a/spaces/ajitrajasekharan/Image-Text-Detection/app.py +++ /dev/null @@ -1,62 +0,0 @@ -import PIL -from PIL import ImageDraw -from PIL import Image -import streamlit as st -import os - - -def load_image(image_file): - img = PIL.Image.open(image_file) - return img - -def init_session_states(): - if 'disp' not in st.session_state: - st.session_state['disp'] = st.empty() - st.session_state['disp'].text("Setting up environment with latest build of easyocr. This will take about a minute ") - if 'init' not in st.session_state: - st.session_state['init'] = 1 - os.system('pip install git+git://github.com/jaidedai/easyocr.git') - os.system('pip install git+https://github.com/huggingface/transformers.git --upgrade') - - - -init_session_states() -import easyocr -from transformers import TrOCRProcessor, VisionEncoderDecoderModel - -def text_recognition(image): - processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten") - model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten") - #processor = TrOCRProcessor.from_pretrained("microsoft/trocr-large-handwritten") - #model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-large-handwritten") - - pixel_values = processor(image, return_tensors="pt").pixel_values - generated_ids = model.generate(pixel_values) - generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] - st.write(generated_text) - -def main(): - - st.session_state['disp'].text("Env setup up Complete") - uploaded_file = st.file_uploader("Choose image file to detect text",type=['jpeg','jpg']) - if uploaded_file is not None: - file_details = {"FileName":uploaded_file.name,"FileType":uploaded_file.type,"FileSize":uploaded_file.size} - st.write(file_details) - image = load_image(uploaded_file) - st.image(image,width=500) - st.write("Detecting text bounding box and Take 1 recognition...") - reader = easyocr.Reader(['en'],gpu=True) - bound = reader.readtext(image) - st.write("Bounding box Detection complete") - st.write(str(bound)) - st.write("Recognizing text - Take 2....") - text_recognition(image) - - - -if __name__ == "__main__": - main() - - - - \ No newline at end of file diff --git a/spaces/akdeniz27/zero-shot-text-classification-with-multilingual-t5/app.py b/spaces/akdeniz27/zero-shot-text-classification-with-multilingual-t5/app.py deleted file mode 100644 index f5b409d7185e551be98327392f0aa8e068b2a9fb..0000000000000000000000000000000000000000 --- a/spaces/akdeniz27/zero-shot-text-classification-with-multilingual-t5/app.py +++ /dev/null @@ -1,77 +0,0 @@ -# Zero-Shot Text Classification with Multilingual T5 (mT5) - -import streamlit as st -import plotly.graph_objects as go -from mT5Model import runModel - -text_1 = """Bilim insanları Botsvana’da Covid-19’un şu ana kadar en çok mutasyona uğramış varyantını tespit etti. 
\ -Resmi olarak B.1.1.529 koduyla bilinen bu varyantı ise “Nu varyantı” adı verildi. Uzmanlar bu varyant içerisinde \ -tam 32 farklı mutasyon tespit edildiğini açıklarken, bu virüsün corona virüsü aşılarına karşı daha dirençli olabileceğini duyurdu.""" - -text_2 = """Argentina beat Australia 2-1 on Saturday and will take on the Netherlands in the World Cup quarterfinals. \ -It was a historic night for Lionel Messi as the Argentine superstar took to the pitch for his 1,000th match for club and country. \ -He also scored in the match. Messi scored the opening goal in the 35th minute as his low shot in the box beat Australian goalkeeper Mathew Ryan.""" - -@st.cache(allow_output_mutation=True) -def list2text(label_list): - labels = "" - for label in label_list: - labels = labels + label + "," - labels = labels[:-1] - return labels - -label_list_1 = ["dünya", "ekonomi", "kültür", "sağlık", "siyaset", "spor", "teknoloji"] -label_list_2 = ["positive", "negative", "neutral"] - -hypothesis_1 = "Bu yazı {} konusundadır" -hypothesis_2 = "This text is in {} subject" - -st.title("Multilingual Zero-Shot Text Classification with mT5") - -model_name = "alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli" - -st.sidebar.write("For details of used model:") -st.sidebar.write("https://huggingface.co/alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli") - -st.sidebar.write("For Xtreme XNLI Dataset:") -st.sidebar.write("https://www.tensorflow.org/datasets/catalog/xtreme_xnli") - -st.subheader("Select Text, Label List and Hyphothesis") -st.text_area("Text #1", text_1, height=128) -st.text_area("Text #2", text_2, height=128) -st.write(f"Label List #1: {list2text(label_list_1)}") -st.write(f"Label List #2: {list2text(label_list_2)}") -st.write(f"Hypothesis #1: {hypothesis_1}") -st.write(f"Hypothesis #2: {hypothesis_2}") - -text = st.radio("Select Text", ("Text #1", "Text #2", "New Text")) -labels = st.radio("Select Label List", ("Label List #1", "Label List #2", "New Label List")) -hypothesis = st.radio("Select Hypothesis", ("Hypothesis #1", "Hypothesis #2", "New Hypothesis")) - -if text == "Text #1": sequence_to_classify = text_1 -elif text == "Text #2": sequence_to_classify = text_2 -elif text == "New Text": - sequence_to_classify = st.text_area("New Text", value="", height=128) - -if labels == "Label List #1": candidate_labels = label_list_1 -elif labels == "Label List #2": candidate_labels = label_list_2 -elif labels == "New Label List": - candidate_labels = st.text_area("New Label List (Pls Input as comma-separated)", value="", height=16).split(",") - -if hypothesis == "Hypothesis #1": hypothesis_template = hypothesis_1 -elif hypothesis == "Hypothesis #2": hypothesis_template = hypothesis_2 -elif labels == "New Hypothesis": - hypothesis_template = st.text_area("Hypothesis Template for NLI (Pls use similar format of examples)", value="", height=16) - -Run_Button = st.button("Run", key=None) -if Run_Button == True: - with st.spinner('Model is running...'): - output = runModel(model_name, sequence_to_classify, candidate_labels, hypothesis_template) - output_labels = list(output.keys()) - output_scores = list(output.values()) - - st.header("Result") - fig = go.Figure([go.Bar(x=output_labels, y=output_scores)]) - st.plotly_chart(fig, use_container_width=False, sharing="streamlit") - st.success('Done!') - diff --git a/spaces/alphunt/diffdock-alphunt-demo/utils/sampling.py b/spaces/alphunt/diffdock-alphunt-demo/utils/sampling.py deleted file mode 100644 index 
c764eeb718f47060f25b2b48964c536570ad5ee9..0000000000000000000000000000000000000000 --- a/spaces/alphunt/diffdock-alphunt-demo/utils/sampling.py +++ /dev/null @@ -1,114 +0,0 @@ -import numpy as np -import torch -from torch_geometric.loader import DataLoader - -from utils.diffusion_utils import modify_conformer, set_time -from utils.torsion import modify_conformer_torsion_angles -from scipy.spatial.transform import Rotation as R - - -def randomize_position(data_list, no_torsion, no_random, tr_sigma_max): - # in place modification of the list - if not no_torsion: - # randomize torsion angles - for complex_graph in data_list: - torsion_updates = np.random.uniform(low=-np.pi, high=np.pi, size=complex_graph['ligand'].edge_mask.sum()) - complex_graph['ligand'].pos = \ - modify_conformer_torsion_angles(complex_graph['ligand'].pos, - complex_graph['ligand', 'ligand'].edge_index.T[ - complex_graph['ligand'].edge_mask], - complex_graph['ligand'].mask_rotate[0], torsion_updates) - - for complex_graph in data_list: - # randomize position - molecule_center = torch.mean(complex_graph['ligand'].pos, dim=0, keepdim=True) - random_rotation = torch.from_numpy(R.random().as_matrix()).float() - complex_graph['ligand'].pos = (complex_graph['ligand'].pos - molecule_center) @ random_rotation.T - # base_rmsd = np.sqrt(np.sum((complex_graph['ligand'].pos.cpu().numpy() - orig_complex_graph['ligand'].pos.numpy()) ** 2, axis=1).mean()) - - if not no_random: # note for now the torsion angles are still randomised - tr_update = torch.normal(mean=0, std=tr_sigma_max, size=(1, 3)) - complex_graph['ligand'].pos += tr_update - - -def sampling(data_list, model, inference_steps, tr_schedule, rot_schedule, tor_schedule, device, t_to_sigma, model_args, - no_random=False, ode=False, visualization_list=None, confidence_model=None, confidence_data_list=None, - confidence_model_args=None, batch_size=32, no_final_step_noise=False): - N = len(data_list) - - for t_idx in range(inference_steps): - t_tr, t_rot, t_tor = tr_schedule[t_idx], rot_schedule[t_idx], tor_schedule[t_idx] - dt_tr = tr_schedule[t_idx] - tr_schedule[t_idx + 1] if t_idx < inference_steps - 1 else tr_schedule[t_idx] - dt_rot = rot_schedule[t_idx] - rot_schedule[t_idx + 1] if t_idx < inference_steps - 1 else rot_schedule[t_idx] - dt_tor = tor_schedule[t_idx] - tor_schedule[t_idx + 1] if t_idx < inference_steps - 1 else tor_schedule[t_idx] - - loader = DataLoader(data_list, batch_size=batch_size) - new_data_list = [] - - for complex_graph_batch in loader: - b = complex_graph_batch.num_graphs - complex_graph_batch = complex_graph_batch.to(device) - - tr_sigma, rot_sigma, tor_sigma = t_to_sigma(t_tr, t_rot, t_tor) - set_time(complex_graph_batch, t_tr, t_rot, t_tor, b, model_args.all_atoms, device) - - with torch.no_grad(): - tr_score, rot_score, tor_score = model(complex_graph_batch) - - tr_g = tr_sigma * torch.sqrt(torch.tensor(2 * np.log(model_args.tr_sigma_max / model_args.tr_sigma_min))) - rot_g = 2 * rot_sigma * torch.sqrt(torch.tensor(np.log(model_args.rot_sigma_max / model_args.rot_sigma_min))) - - if ode: - tr_perturb = (0.5 * tr_g ** 2 * dt_tr * tr_score.cpu()).cpu() - rot_perturb = (0.5 * rot_score.cpu() * dt_rot * rot_g ** 2).cpu() - else: - tr_z = torch.zeros((b, 3)) if no_random or (no_final_step_noise and t_idx == inference_steps - 1) \ - else torch.normal(mean=0, std=1, size=(b, 3)) - tr_perturb = (tr_g ** 2 * dt_tr * tr_score.cpu() + tr_g * np.sqrt(dt_tr) * tr_z).cpu() - - rot_z = torch.zeros((b, 3)) if no_random or (no_final_step_noise and t_idx == 
inference_steps - 1) \ - else torch.normal(mean=0, std=1, size=(b, 3)) - rot_perturb = (rot_score.cpu() * dt_rot * rot_g ** 2 + rot_g * np.sqrt(dt_rot) * rot_z).cpu() - - if not model_args.no_torsion: - tor_g = tor_sigma * torch.sqrt(torch.tensor(2 * np.log(model_args.tor_sigma_max / model_args.tor_sigma_min))) - if ode: - tor_perturb = (0.5 * tor_g ** 2 * dt_tor * tor_score.cpu()).numpy() - else: - tor_z = torch.zeros(tor_score.shape) if no_random or (no_final_step_noise and t_idx == inference_steps - 1) \ - else torch.normal(mean=0, std=1, size=tor_score.shape) - tor_perturb = (tor_g ** 2 * dt_tor * tor_score.cpu() + tor_g * np.sqrt(dt_tor) * tor_z).numpy() - torsions_per_molecule = tor_perturb.shape[0] // b - else: - tor_perturb = None - - # Apply noise - new_data_list.extend([modify_conformer(complex_graph, tr_perturb[i:i + 1], rot_perturb[i:i + 1].squeeze(0), - tor_perturb[i * torsions_per_molecule:(i + 1) * torsions_per_molecule] if not model_args.no_torsion else None) - for i, complex_graph in enumerate(complex_graph_batch.to('cpu').to_data_list())]) - data_list = new_data_list - - if visualization_list is not None: - for idx, visualization in enumerate(visualization_list): - visualization.add((data_list[idx]['ligand'].pos + data_list[idx].original_center).detach().cpu(), - part=1, order=t_idx + 2) - - with torch.no_grad(): - if confidence_model is not None: - loader = DataLoader(data_list, batch_size=batch_size) - confidence_loader = iter(DataLoader(confidence_data_list, batch_size=batch_size)) - confidence = [] - for complex_graph_batch in loader: - complex_graph_batch = complex_graph_batch.to(device) - if confidence_data_list is not None: - confidence_complex_graph_batch = next(confidence_loader).to(device) - confidence_complex_graph_batch['ligand'].pos = complex_graph_batch['ligand'].pos - set_time(confidence_complex_graph_batch, 0, 0, 0, N, confidence_model_args.all_atoms, device) - confidence.append(confidence_model(confidence_complex_graph_batch)) - else: - confidence.append(confidence_model(complex_graph_batch)) - confidence = torch.cat(confidence, dim=0) - else: - confidence = None - - return data_list, confidence diff --git a/spaces/alvin888/GeoGenie/README.md b/spaces/alvin888/GeoGenie/README.md deleted file mode 100644 index 99a86a74f8e4f4f0bfa54db55973da0db7da012e..0000000000000000000000000000000000000000 --- a/spaces/alvin888/GeoGenie/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: GeoGenie -emoji: 👀 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 4.1.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/amitjamadagni/qs-benchmarks/plot_scripts/map_packages_colors_all.py b/spaces/amitjamadagni/qs-benchmarks/plot_scripts/map_packages_colors_all.py deleted file mode 100644 index ea769a9123d00ca0de027819893a63dc68586d39..0000000000000000000000000000000000000000 --- a/spaces/amitjamadagni/qs-benchmarks/plot_scripts/map_packages_colors_all.py +++ /dev/null @@ -1,87 +0,0 @@ -import matplotlib.pyplot as plt -import numpy as np -from matplotlib import rc -import matplotlib.ticker as ticker -from matplotlib.ticker import MaxNLocator -# from matplotlib import pyplot - -fig_width_pt = 246.0 # Get this from LaTeX using \showthe\columnwidth -inches_per_pt = 1.0/72.27 # Convert pt to inch -golden_mean = (np.sqrt(5)-1.0)/2.0 # Aesthetic ratio -fig_width = fig_width_pt*inches_per_pt # width in inches -fig_height = fig_width*golden_mean # height in inches -# 
fig_size = [fig_width+1.25,fig_height+1.25] -# rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']}) -params = {'backend': 'ps', - 'axes.labelsize': 14, - 'axes.titlesize': 12, - 'font.size': 8, - 'legend.fontsize': 12, - 'xtick.labelsize': 14, - 'ytick.labelsize': 14,} -# 'text.usetex': True} -# 'figure.figsize': fig_size} -# plt.rc('text.latex', preamble=r'\usepackage{braket}') -plt.rcParams.update(params) - -# cm = plt.get_cmap('tab20') -# n_colors = 20 -# x_arr = [cm(1.*i/n_colors) for i in range(n_colors)] -# s_arr = ["o", "*", "s", "^", "D", "v"] -# s_arr = Line2D.filled_markers*100 - -x_arr = ['grey', 'indianred', 'thistle', 'red', 'saddlebrown', 'peru', 'darkorange', 'gold', 'darkkhaki', 'limegreen', 'darkslategray', 'deepskyblue', 'mediumpurple', 'darkorchid', 'magenta', 'aqua', 'lightgreen', 'lightcoral', 'chocolate', 'pink', 'darkmagenta', 'lightsalmon', 'darkcyan', 'tan'] - -s_arr = ['o', 'v', '^', '<', '>', '8', 's', 'p', '*', 'h', 'H', 'D', 'd', 'P', 'X', '+', '2', '4']*50 - -pkg_str = ['cirq', 'hybridq', 'intel_qs_cpp', 'pennylane_l', 'projectq', 'qcgpu', 'qibojit', 'qsimcirq', 'quest', 'svsim', 'yao', 'hiq', 'pennylane', 'qibo', 'qiskit', 'qrack_sch', 'qulacs', 'cuquantum_qiskit', 'cuquantum_qsimcirq', 'qpanda', 'qpp', 'myqlm', 'myqlm_cpp', 'braket'] - -task = ['hdyn', 'rqc', 'qft'] - -com_cap = ['singlethread', 'multithread', 'gpu'] - -prec = ['sp', 'dp'] - -storage_dict = {} -for pkg in pkg_str: - storage_dict.update({pkg:pkg}) - -label_dict = {} -for pkg in pkg_str: - for t in task: - for cc in com_cap: - for p in prec: - label_dict.update({pkg+'_'+t+'_'+cc+'_'+p:pkg}) - -label_dict.update({'cuquantum_qiskit_hdyn_gpu_sp':'cuquantum(qiskit)'}) -label_dict.update({'cuquantum_qiskit_hdyn_gpu_dp':'cuquantum(qiskit)'}) -label_dict.update({'cuquantum_qiskit_rqc_gpu_sp':'cuquantum(qiskit)'}) -label_dict.update({'cuquantum_qiskit_rqc_gpu_dp':'cuquantum(qiskit)'}) -label_dict.update({'cuquantum_qiskit_qft_gpu_sp':'cuquantum(qiskit)'}) -label_dict.update({'cuquantum_qiskit_qft_gpu_dp':'cuquantum(qiskit)'}) - -label_dict.update({'cuquantum_qsimcirq_hdyn_gpu_sp':'cuquantum(qsimcirq)'}) -label_dict.update({'cuquantum_qsimcirq_rqc_gpu_sp':'cuquantum(qsimcirq)'}) -label_dict.update({'cuquantum_qsimcirq_qft_gpu_sp':'cuquantum(qsimcirq)'}) -label_dict.update({'gate_count':'ngates ratio'}) - -color_dict = {} -n_c = 0 -for pkg in pkg_str: - for t in task: - for cc in com_cap: - for p in prec: - color_dict.update({pkg+'_'+t+'_'+cc+'_'+p:x_arr[n_c]}) - n_c = n_c + 1 - -color_dict.update({'gate_count':'black'}) - -symbol_dict = {} -n_s = 0 -for pkg in pkg_str: - for t in task: - for cc in com_cap: - for p in prec: - symbol_dict.update({pkg+'_'+t+'_'+cc+'_'+p:s_arr[n_s]}) - n_s = n_s + 1 -symbol_dict.update({'gate_count':'3'}) diff --git a/spaces/armanokka/nllb-translation-demo/app.py b/spaces/armanokka/nllb-translation-demo/app.py deleted file mode 100644 index 5774f14ebfc26e3c45df7f2f497826d3967d9018..0000000000000000000000000000000000000000 --- a/spaces/armanokka/nllb-translation-demo/app.py +++ /dev/null @@ -1,88 +0,0 @@ -import os -import torch -import gradio as gr -import time -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline -from flores200_codes import flores_codes - -#print(f"Is CUDA available: {torch.cuda.is_available()}") -# True -#print(f"CUDA device: {torch.cuda.get_device_name(torch.cuda.current_device())}") - -def load_models(): - # build model and tokenizer - model_name_dict = {'nllb-distilled-600M': 'facebook/nllb-200-distilled-600M', - 
#'nllb-1.3B': 'facebook/nllb-200-1.3B', - #'nllb-distilled-1.3B': 'facebook/nllb-200-distilled-1.3B', - #'nllb-3.3B': 'facebook/nllb-200-3.3B', - } - - model_dict = {} - - for call_name, real_name in model_name_dict.items(): - print('\tLoading model: %s' % call_name) - model = AutoModelForSeq2SeqLM.from_pretrained(real_name) - tokenizer = AutoTokenizer.from_pretrained(real_name) - model_dict[call_name+'_model'] = model - model_dict[call_name+'_tokenizer'] = tokenizer - - return model_dict - - -def translation(source, target, text): - if len(model_dict) == 2: - model_name = 'nllb-distilled-600M' - - start_time = time.time() - source = flores_codes[source] - target = flores_codes[target] - - model = model_dict[model_name + '_model'] - tokenizer = model_dict[model_name + '_tokenizer'] - - translator = pipeline('translation', model=model, tokenizer=tokenizer, src_lang=source, tgt_lang=target) - output = translator(text, max_length=4098) - - end_time = time.time() - - output = output[0]['translation_text'] - result = {'inference_time': end_time - start_time, - 'source': source, - 'target': target, - 'result': output} - return result - - -if __name__ == '__main__': - print('\tinit models') - - global model_dict - - model_dict = load_models() - - # define gradio demo - lang_codes = list(flores_codes.keys()) - #inputs = [gr.inputs.Radio(['nllb-distilled-600M', 'nllb-1.3B', 'nllb-distilled-1.3B'], label='NLLB Model'), - inputs = [gr.inputs.Dropdown(lang_codes, default='English', label='Source'), - gr.inputs.Dropdown(lang_codes, default='Korean', label='Target'), - gr.inputs.Textbox(lines=5, label="Input text"), - ] - - outputs = gr.outputs.JSON() - - title = "NLLB distilled 600M demo" - - demo_status = "Demo is running on CPU" - description = f"Details: https://github.com/facebookresearch/fairseq/tree/nllb. {demo_status}" - examples = [ - ['English', 'Korean', 'Hi. nice to meet you'] - ] - - gr.Interface(translation, - inputs, - outputs, - title=title, - description=description, - ).launch() - - diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/wind_vector_map.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/wind_vector_map.py deleted file mode 100644 index f16ad285a72e1c89973a76b44e3a545d162cbbb3..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/wind_vector_map.py +++ /dev/null @@ -1,24 +0,0 @@ -""" -Wind Vector Map ---------------- -An example showing a vector array map showing wind speed and direction using ``wedge`` -as shape for ``mark_point`` and ``angle`` encoding for the wind direction. -This is adapted from this corresponding Vega-Lite Example: -`Wind Vector Map `_. 
-""" -# category: scatter plots - -import altair as alt -from vega_datasets import data - -source = data.windvectors() - -alt.Chart(source).mark_point(shape="wedge", filled=True).encode( - latitude="latitude", - longitude="longitude", - color=alt.Color( - "dir", scale=alt.Scale(domain=[0, 360], scheme="rainbow"), legend=None - ), - angle=alt.Angle("dir", scale=alt.Scale(domain=[0, 360], range=[180, 540])), - size=alt.Size("speed", scale=alt.Scale(rangeMax=500)), -).project("equalEarth") diff --git a/spaces/asd998877/TsGpt/ChuanhuChatbot.py b/spaces/asd998877/TsGpt/ChuanhuChatbot.py deleted file mode 100644 index cbf63e52857a1852658fdf2009ca26f9fb0a6bec..0000000000000000000000000000000000000000 --- a/spaces/asd998877/TsGpt/ChuanhuChatbot.py +++ /dev/null @@ -1,470 +0,0 @@ -# -*- coding:utf-8 -*- -import os -import logging -import sys - -import gradio as gr - -from modules import config -from modules.config import * -from modules.utils import * -from modules.presets import * -from modules.overwrites import * -from modules.models import get_model - - -gr.Chatbot._postprocess_chat_messages = postprocess_chat_messages -gr.Chatbot.postprocess = postprocess -PromptHelper.compact_text_chunks = compact_text_chunks - -with open("assets/custom.css", "r", encoding="utf-8") as f: - customCSS = f.read() - -def create_new_model(): - return get_model(model_name = MODELS[DEFAULT_MODEL], access_key = my_api_key)[0] - -with gr.Blocks(css=customCSS, theme=small_and_beautiful_theme) as demo: - user_name = gr.State("") - promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - user_question = gr.State("") - user_api_key = gr.State(my_api_key) - current_model = gr.State(create_new_model) - - topic = gr.State(i18n("未命名对话历史记录")) - - with gr.Row(): - gr.HTML(CHUANHU_TITLE, elem_id="app_title") - status_display = gr.Markdown(get_geoip(), elem_id="status_display") - with gr.Row(elem_id="float_display"): - user_info = gr.Markdown(value="getting user info...", elem_id="user_info") - - # https://github.com/gradio-app/gradio/pull/3296 - def create_greeting(request: gr.Request): - if hasattr(request, "username") and request.username: # is not None or is not "" - logging.info(f"Get User Name: {request.username}") - return gr.Markdown.update(value=f"User: {request.username}"), request.username - else: - return gr.Markdown.update(value=f"User: default", visible=False), "" - demo.load(create_greeting, inputs=None, outputs=[user_info, user_name]) - - with gr.Row().style(equal_height=True): - with gr.Column(scale=5): - with gr.Row(): - chatbot = gr.Chatbot(elem_id="chuanhu_chatbot").style(height="100%") - with gr.Row(): - with gr.Column(min_width=225, scale=12): - user_input = gr.Textbox( - elem_id="user_input_tb", - show_label=False, placeholder=i18n("在这里输入") - ).style(container=False) - with gr.Column(min_width=42, scale=1): - submitBtn = gr.Button(value="", variant="primary", elem_id="submit_btn") - cancelBtn = gr.Button(value="", variant="secondary", visible=False, elem_id="cancel_btn") - with gr.Row(): - emptyBtn = gr.Button( - i18n("🧹 新的对话"), - ) - retryBtn = gr.Button(i18n("🔄 重新生成")) - delFirstBtn = gr.Button(i18n("🗑️ 删除最旧对话")) - delLastBtn = gr.Button(i18n("🗑️ 删除最新对话")) - with gr.Row(visible=False) as like_dislike_area: - with gr.Column(min_width=20, scale=1): - likeBtn = gr.Button(i18n("👍")) - with gr.Column(min_width=20, scale=1): - dislikeBtn = gr.Button(i18n("👎")) - - with gr.Column(): - with gr.Column(min_width=50, scale=1): - with gr.Tab(label=i18n("模型")): - keyTxt = gr.Textbox( - 
show_label=True, - placeholder=f"Your API-key...", - value=hide_middle_chars(user_api_key.value), - type="password", - visible=not HIDE_MY_KEY, - label="API-Key", - ) - if multi_api_key: - usageTxt = gr.Markdown(i18n("多账号模式已开启,无需输入key,可直接开始对话"), elem_id="usage_display", elem_classes="insert_block") - else: - usageTxt = gr.Markdown(i18n("**发送消息** 或 **提交key** 以显示额度"), elem_id="usage_display", elem_classes="insert_block") - model_select_dropdown = gr.Dropdown( - label=i18n("选择模型"), choices=MODELS, multiselect=False, value=MODELS[DEFAULT_MODEL], interactive=True - ) - lora_select_dropdown = gr.Dropdown( - label=i18n("选择LoRA模型"), choices=[], multiselect=False, interactive=True, visible=False - ) - with gr.Row(): - use_streaming_checkbox = gr.Checkbox( - label=i18n("实时传输回答"), value=True, visible=ENABLE_STREAMING_OPTION - ) - single_turn_checkbox = gr.Checkbox(label=i18n("单轮对话"), value=False) - use_websearch_checkbox = gr.Checkbox(label=i18n("使用在线搜索"), value=False) - language_select_dropdown = gr.Dropdown( - label=i18n("选择回复语言(针对搜索&索引功能)"), - choices=REPLY_LANGUAGES, - multiselect=False, - value=REPLY_LANGUAGES[0], - ) - index_files = gr.Files(label=i18n("上传"), type="file") - two_column = gr.Checkbox(label=i18n("双栏pdf"), value=advance_docs["pdf"].get("two_column", False)) - # TODO: 公式ocr - # formula_ocr = gr.Checkbox(label=i18n("识别公式"), value=advance_docs["pdf"].get("formula_ocr", False)) - - with gr.Tab(label="Prompt"): - systemPromptTxt = gr.Textbox( - show_label=True, - placeholder=i18n("在这里输入System Prompt..."), - label="System prompt", - value=INITIAL_SYSTEM_PROMPT, - lines=10, - ).style(container=False) - with gr.Accordion(label=i18n("加载Prompt模板"), open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown( - label=i18n("选择Prompt模板集合文件"), - choices=get_template_names(plain=True), - multiselect=False, - value=get_template_names(plain=True)[0], - ).style(container=False) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button(i18n("🔄 刷新")) - with gr.Row(): - with gr.Column(): - templateSelectDropdown = gr.Dropdown( - label=i18n("从Prompt模板中加载"), - choices=load_template( - get_template_names(plain=True)[0], mode=1 - ), - multiselect=False, - ).style(container=False) - - with gr.Tab(label=i18n("保存/加载")): - with gr.Accordion(label=i18n("保存/加载对话历史记录"), open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown( - label=i18n("从列表中加载对话"), - choices=get_history_names(plain=True), - multiselect=False, - value=get_history_names(plain=True)[0], - ) - with gr.Column(scale=1): - historyRefreshBtn = gr.Button(i18n("🔄 刷新")) - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, - placeholder=i18n("设置文件名: 默认为.json,可选为.md"), - label=i18n("设置保存文件名"), - value=i18n("对话历史记录"), - ).style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button(i18n("💾 保存对话")) - exportMarkdownBtn = gr.Button(i18n("📝 导出为Markdown")) - gr.Markdown(i18n("默认保存于history文件夹")) - with gr.Row(): - with gr.Column(): - downloadFile = gr.File(interactive=True) - - with gr.Tab(label=i18n("高级")): - gr.Markdown(i18n("# ⚠️ 务必谨慎更改 ⚠️\n\n如果无法使用请恢复默认设置")) - gr.HTML(APPEARANCE_SWITCHER, elem_classes="insert_block") - with gr.Accordion(i18n("参数"), open=False): - temperature_slider = gr.Slider( - minimum=-0, - maximum=2.0, - value=1.0, - step=0.1, - interactive=True, - label="temperature", - ) - top_p_slider = gr.Slider( - minimum=-0, - maximum=1.0, - value=1.0, - step=0.05, - 
interactive=True, - label="top-p", - ) - n_choices_slider = gr.Slider( - minimum=1, - maximum=10, - value=1, - step=1, - interactive=True, - label="n choices", - ) - stop_sequence_txt = gr.Textbox( - show_label=True, - placeholder=i18n("在这里输入停止符,用英文逗号隔开..."), - label="stop", - value="", - lines=1, - ) - max_context_length_slider = gr.Slider( - minimum=1, - maximum=32768, - value=2000, - step=1, - interactive=True, - label="max context", - ) - max_generation_slider = gr.Slider( - minimum=1, - maximum=32768, - value=1000, - step=1, - interactive=True, - label="max generations", - ) - presence_penalty_slider = gr.Slider( - minimum=-2.0, - maximum=2.0, - value=0.0, - step=0.01, - interactive=True, - label="presence penalty", - ) - frequency_penalty_slider = gr.Slider( - minimum=-2.0, - maximum=2.0, - value=0.0, - step=0.01, - interactive=True, - label="frequency penalty", - ) - logit_bias_txt = gr.Textbox( - show_label=True, - placeholder=f"word:likelihood", - label="logit bias", - value="", - lines=1, - ) - user_identifier_txt = gr.Textbox( - show_label=True, - placeholder=i18n("用于定位滥用行为"), - label=i18n("用户名"), - value=user_name.value, - lines=1, - ) - - with gr.Accordion(i18n("网络设置"), open=False): - # 优先展示自定义的api_host - apihostTxt = gr.Textbox( - show_label=True, - placeholder=i18n("在这里输入API-Host..."), - label="API-Host", - value=config.api_host or shared.API_HOST, - lines=1, - ) - changeAPIURLBtn = gr.Button(i18n("🔄 切换API地址")) - proxyTxt = gr.Textbox( - show_label=True, - placeholder=i18n("在这里输入代理地址..."), - label=i18n("代理地址(示例:http://127.0.0.1:10809)"), - value="", - lines=2, - ) - changeProxyBtn = gr.Button(i18n("🔄 设置代理地址")) - default_btn = gr.Button(i18n("🔙 恢复默认设置")) - - gr.Markdown(CHUANHU_DESCRIPTION, elem_id="description") - gr.HTML(FOOTER.format(versions=versions_html()), elem_id="footer") - demo.load(refresh_ui_elements_on_load, [current_model, model_select_dropdown], [like_dislike_area], show_progress=False) - chatgpt_predict_args = dict( - fn=predict, - inputs=[ - current_model, - user_question, - chatbot, - use_streaming_checkbox, - use_websearch_checkbox, - index_files, - language_select_dropdown, - ], - outputs=[chatbot, status_display], - show_progress=True, - ) - - start_outputing_args = dict( - fn=start_outputing, - inputs=[], - outputs=[submitBtn, cancelBtn], - show_progress=True, - ) - - end_outputing_args = dict( - fn=end_outputing, inputs=[], outputs=[submitBtn, cancelBtn] - ) - - reset_textbox_args = dict( - fn=reset_textbox, inputs=[], outputs=[user_input] - ) - - transfer_input_args = dict( - fn=transfer_input, inputs=[user_input], outputs=[user_question, user_input, submitBtn, cancelBtn], show_progress=True - ) - - get_usage_args = dict( - fn=billing_info, inputs=[current_model], outputs=[usageTxt], show_progress=False - ) - - load_history_from_file_args = dict( - fn=load_chat_history, - inputs=[current_model, historyFileSelectDropdown, chatbot, user_name], - outputs=[saveFileName, systemPromptTxt, chatbot] - ) - - - # Chatbot - cancelBtn.click(interrupt, [current_model], []) - - user_input.submit(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - user_input.submit(**get_usage_args) - - submitBtn.click(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - submitBtn.click(**get_usage_args) - - index_files.change(handle_file_upload, [current_model, index_files, chatbot], [index_files, chatbot, status_display]) - - emptyBtn.click( - reset, - inputs=[current_model], - outputs=[chatbot, status_display], - 
show_progress=True, - ) - - retryBtn.click(**start_outputing_args).then( - retry, - [ - current_model, - chatbot, - use_streaming_checkbox, - use_websearch_checkbox, - index_files, - language_select_dropdown, - ], - [chatbot, status_display], - show_progress=True, - ).then(**end_outputing_args) - retryBtn.click(**get_usage_args) - - delFirstBtn.click( - delete_first_conversation, - [current_model], - [status_display], - ) - - delLastBtn.click( - delete_last_conversation, - [current_model, chatbot], - [chatbot, status_display], - show_progress=False - ) - - likeBtn.click( - like, - [current_model], - [status_display], - show_progress=False - ) - - dislikeBtn.click( - dislike, - [current_model], - [status_display], - show_progress=False - ) - - two_column.change(update_doc_config, [two_column], None) - - # LLM Models - keyTxt.change(set_key, [current_model, keyTxt], [user_api_key, status_display]).then(**get_usage_args) - keyTxt.submit(**get_usage_args) - single_turn_checkbox.change(set_single_turn, [current_model, single_turn_checkbox], None) - model_select_dropdown.change(get_model, [model_select_dropdown, lora_select_dropdown, user_api_key, temperature_slider, top_p_slider, systemPromptTxt], [current_model, status_display, lora_select_dropdown], show_progress=True) - model_select_dropdown.change(toggle_like_btn_visibility, [model_select_dropdown], [like_dislike_area], show_progress=False) - lora_select_dropdown.change(get_model, [model_select_dropdown, lora_select_dropdown, user_api_key, temperature_slider, top_p_slider, systemPromptTxt], [current_model, status_display], show_progress=True) - - # Template - systemPromptTxt.change(set_system_prompt, [current_model, systemPromptTxt], None) - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - templateFileSelectDropdown.change( - load_template, - [templateFileSelectDropdown], - [promptTemplates, templateSelectDropdown], - show_progress=True, - ) - templateSelectDropdown.change( - get_template_content, - [promptTemplates, templateSelectDropdown, systemPromptTxt], - [systemPromptTxt], - show_progress=True, - ) - - # S&L - saveHistoryBtn.click( - save_chat_history, - [current_model, saveFileName, chatbot, user_name], - downloadFile, - show_progress=True, - ) - saveHistoryBtn.click(get_history_names, [gr.State(False), user_name], [historyFileSelectDropdown]) - exportMarkdownBtn.click( - export_markdown, - [current_model, saveFileName, chatbot, user_name], - downloadFile, - show_progress=True, - ) - historyRefreshBtn.click(get_history_names, [gr.State(False), user_name], [historyFileSelectDropdown]) - historyFileSelectDropdown.change(**load_history_from_file_args) - downloadFile.change(**load_history_from_file_args) - - # Advanced - max_context_length_slider.change(set_token_upper_limit, [current_model, max_context_length_slider], None) - temperature_slider.change(set_temperature, [current_model, temperature_slider], None) - top_p_slider.change(set_top_p, [current_model, top_p_slider], None) - n_choices_slider.change(set_n_choices, [current_model, n_choices_slider], None) - stop_sequence_txt.change(set_stop_sequence, [current_model, stop_sequence_txt], None) - max_generation_slider.change(set_max_tokens, [current_model, max_generation_slider], None) - presence_penalty_slider.change(set_presence_penalty, [current_model, presence_penalty_slider], None) - frequency_penalty_slider.change(set_frequency_penalty, [current_model, frequency_penalty_slider], None) - logit_bias_txt.change(set_logit_bias, [current_model, 
logit_bias_txt], None) - user_identifier_txt.change(set_user_identifier, [current_model, user_identifier_txt], None) - - default_btn.click( - reset_default, [], [apihostTxt, proxyTxt, status_display], show_progress=True - ) - changeAPIURLBtn.click( - change_api_host, - [apihostTxt], - [status_display], - show_progress=True, - ) - changeProxyBtn.click( - change_proxy, - [proxyTxt], - [status_display], - show_progress=True, - ) - -logging.info( - colorama.Back.GREEN - + "\n川虎的温馨提示:访问 http://localhost:7860 查看界面" - + colorama.Style.RESET_ALL -) -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = i18n("川虎Chat 🚀") - -if __name__ == "__main__": - reload_javascript() - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - favicon_path="./assets/favicon.ico", - ) - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860,auth=("在这里填写用户名", "在这里填写密码")) # 可设置用户名与密码 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(auth=("在这里填写用户名", "在这里填写密码")) # 适合Nginx反向代理 diff --git a/spaces/awacke1/ChatGPT-Genius-Assistant-4Writers/README.md b/spaces/awacke1/ChatGPT-Genius-Assistant-4Writers/README.md deleted file mode 100644 index eb4039b54d525563eaae17b99c89f10dba9f12b8..0000000000000000000000000000000000000000 --- a/spaces/awacke1/ChatGPT-Genius-Assistant-4Writers/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ChatGPT Genius Assistant 4Writers -emoji: 🐢 -colorFrom: yellow -colorTo: yellow -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Text2Speech-0721/app.py b/spaces/awacke1/Text2Speech-0721/app.py deleted file mode 100644 index 624711103fff0eb591bc05f07ae20c47fbe03cd2..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Text2Speech-0721/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/facebook/fastspeech2-en-ljspeech").launch() \ No newline at end of file diff --git a/spaces/awacke1/acw-dr-llama-7b-chat/README.md b/spaces/awacke1/acw-dr-llama-7b-chat/README.md deleted file mode 100644 index 060fb2620353d1f4ae83bdfdb3ffb2a97e08d59c..0000000000000000000000000000000000000000 --- a/spaces/awacke1/acw-dr-llama-7b-chat/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🐪Llama Whisper🦙 Voice Chat 🌟 -emoji: 🐪🦙 -colorFrom: red -colorTo: red -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: true -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/geometries/BoxLineGeometry.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/geometries/BoxLineGeometry.js deleted file mode 100644 index 4c7ed50602e03a37af40b18cccc436d3005db1ea..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/geometries/BoxLineGeometry.js +++ /dev/null @@ -1,67 +0,0 @@ -/** - * @author mrdoob / http://mrdoob.com/ - */ - -THREE.BoxLineGeometry = function ( width, height, depth, widthSegments, heightSegments, depthSegments ) { - - THREE.BufferGeometry.call( this ); - - width = width || 1; - height = height || 1; - depth = depth || 1; - - widthSegments = Math.floor( widthSegments ) || 1; - heightSegments = Math.floor( 
heightSegments ) || 1; - depthSegments = Math.floor( depthSegments ) || 1; - - var widthHalf = width / 2; - var heightHalf = height / 2; - var depthHalf = depth / 2; - - var segmentWidth = width / widthSegments; - var segmentHeight = height / heightSegments; - var segmentDepth = depth / depthSegments; - - var vertices = []; - - var x = - widthHalf, y = - heightHalf, z = - depthHalf; - - for ( var i = 0; i <= widthSegments; i ++ ) { - - vertices.push( x, - heightHalf, - depthHalf, x, heightHalf, - depthHalf ); - vertices.push( x, heightHalf, - depthHalf, x, heightHalf, depthHalf ); - vertices.push( x, heightHalf, depthHalf, x, - heightHalf, depthHalf ); - vertices.push( x, - heightHalf, depthHalf, x, - heightHalf, - depthHalf ); - - x += segmentWidth; - - } - - for ( var i = 0; i <= heightSegments; i ++ ) { - - vertices.push( - widthHalf, y, - depthHalf, widthHalf, y, - depthHalf ); - vertices.push( widthHalf, y, - depthHalf, widthHalf, y, depthHalf ); - vertices.push( widthHalf, y, depthHalf, - widthHalf, y, depthHalf ); - vertices.push( - widthHalf, y, depthHalf, - widthHalf, y, - depthHalf ); - - y += segmentHeight; - - } - - for ( var i = 0; i <= depthSegments; i ++ ) { - - vertices.push( - widthHalf, - heightHalf, z, - widthHalf, heightHalf, z ); - vertices.push( - widthHalf, heightHalf, z, widthHalf, heightHalf, z ); - vertices.push( widthHalf, heightHalf, z, widthHalf, - heightHalf, z ); - vertices.push( widthHalf, - heightHalf, z, - widthHalf, - heightHalf, z ); - - z += segmentDepth; - - } - - this.addAttribute( 'position', new THREE.Float32BufferAttribute( vertices, 3 ) ); - -} - -THREE.BoxLineGeometry.prototype = Object.create( THREE.BufferGeometry.prototype ); -THREE.BoxLineGeometry.prototype.constructor = THREE.BoxLineGeometry; diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/THREE.Nodes.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/THREE.Nodes.js deleted file mode 100644 index 6e5df2537a174d99da66852c600cd160e80705e0..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/THREE.Nodes.js +++ /dev/null @@ -1,225 +0,0 @@ -import { - - // core - - Node, - TempNode, - InputNode, - ConstNode, - VarNode, - StructNode, - AttributeNode, - FunctionNode, - ExpressionNode, - FunctionCallNode, - NodeLib, - NodeUtils, - NodeFrame, - NodeUniform, - NodeBuilder, - - // inputs - - BoolNode, - IntNode, - FloatNode, - Vector2Node, - Vector3Node, - Vector4Node, - ColorNode, - Matrix3Node, - Matrix4Node, - TextureNode, - CubeTextureNode, - ScreenNode, - ReflectorNode, - PropertyNode, - RTTNode, - - // accessors - - UVNode, - ColorsNode, - PositionNode, - NormalNode, - CameraNode, - LightNode, - ReflectNode, - ScreenUVNode, - ResolutionNode, - - // math - - Math1Node, - Math2Node, - Math3Node, - OperatorNode, - CondNode, - - // procedural - - NoiseNode, - CheckerNode, - - // bsdfs - - BlinnShininessExponentNode, - BlinnExponentToRoughnessNode, - RoughnessToBlinnExponentNode, - - // misc - - TextureCubeUVNode, - TextureCubeNode, - NormalMapNode, - BumpMapNode, - - // utils - - BypassNode, - JoinNode, - SwitchNode, - TimerNode, - VelocityNode, - UVTransformNode, - MaxMIPLevelNode, - ColorSpaceNode, - - // effects - - BlurNode, - ColorAdjustmentNode, - LuminanceNode, - - // material nodes - - RawNode, - SpriteNode, - PhongNode, - StandardNode, - MeshStandardNode, - - // materials - - NodeMaterial, - SpriteNodeMaterial, - PhongNodeMaterial, - StandardNodeMaterial, - MeshStandardNodeMaterial, 
- - // post-processing - - NodePostProcessing - -} from './Nodes.js'; - -// core - -THREE.Node = Node; -THREE.TempNode = TempNode; -THREE.InputNode = InputNode; -THREE.ConstNode = ConstNode; -THREE.VarNode = VarNode; -THREE.StructNode = StructNode; -THREE.AttributeNode = AttributeNode; -THREE.FunctionNode = FunctionNode; -THREE.ExpressionNode = ExpressionNode; -THREE.FunctionCallNode = FunctionCallNode; -THREE.NodeLib = NodeLib; -THREE.NodeUtils = NodeUtils; -THREE.NodeFrame = NodeFrame; -THREE.NodeUniform = NodeUniform; -THREE.NodeBuilder = NodeBuilder; - -// inputs - -THREE.BoolNode = BoolNode; -THREE.IntNode = IntNode; -THREE.FloatNode = FloatNode; -THREE.Vector2Node = Vector2Node; -THREE.Vector3Node = Vector3Node; -THREE.Vector4Node = Vector4Node; -THREE.ColorNode = ColorNode; -THREE.Matrix3Node = Matrix3Node; -THREE.Matrix4Node = Matrix4Node; -THREE.TextureNode = TextureNode; -THREE.CubeTextureNode = CubeTextureNode; -THREE.ScreenNode = ScreenNode; -THREE.ReflectorNode = ReflectorNode; -THREE.PropertyNode = PropertyNode; -THREE.RTTNode = RTTNode; - -// accessors - -THREE.UVNode = UVNode; -THREE.ColorsNode = ColorsNode; -THREE.PositionNode = PositionNode; -THREE.NormalNode = NormalNode; -THREE.CameraNode = CameraNode; -THREE.LightNode = LightNode; -THREE.ReflectNode = ReflectNode; -THREE.ScreenUVNode = ScreenUVNode; -THREE.ResolutionNode = ResolutionNode; - -// math - -THREE.Math1Node = Math1Node; -THREE.Math2Node = Math2Node; -THREE.Math3Node = Math3Node; -THREE.OperatorNode = OperatorNode; -THREE.CondNode = CondNode; - -// procedural - -THREE.NoiseNode = NoiseNode; -THREE.CheckerNode = CheckerNode; - -// bsdfs - -THREE.BlinnShininessExponentNode = BlinnShininessExponentNode; -THREE.BlinnExponentToRoughnessNode = BlinnExponentToRoughnessNode; -THREE.RoughnessToBlinnExponentNode = RoughnessToBlinnExponentNode; - -// misc - -THREE.TextureCubeUVNode = TextureCubeUVNode; -THREE.TextureCubeNode = TextureCubeNode; -THREE.NormalMapNode = NormalMapNode; -THREE.BumpMapNode = BumpMapNode; - -// utils - -THREE.BypassNode = BypassNode; -THREE.JoinNode = JoinNode; -THREE.SwitchNode = SwitchNode; -THREE.TimerNode = TimerNode; -THREE.VelocityNode = VelocityNode; -THREE.UVTransformNode = UVTransformNode; -THREE.MaxMIPLevelNode = MaxMIPLevelNode; -THREE.ColorSpaceNode = ColorSpaceNode; - -// effects - -THREE.BlurNode = BlurNode; -THREE.ColorAdjustmentNode = ColorAdjustmentNode; -THREE.LuminanceNode = LuminanceNode; - -// material nodes - -THREE.RawNode = RawNode; -THREE.SpriteNode = SpriteNode; -THREE.PhongNode = PhongNode; -THREE.StandardNode = StandardNode; -THREE.MeshStandardNode = MeshStandardNode; - -// materials - -THREE.NodeMaterial = NodeMaterial; -THREE.SpriteNodeMaterial = SpriteNodeMaterial; -THREE.PhongNodeMaterial = PhongNodeMaterial; -THREE.StandardNodeMaterial = StandardNodeMaterial; -THREE.MeshStandardNodeMaterial = MeshStandardNodeMaterial; - -// post-processing - -THREE.NodePostProcessing = NodePostProcessing; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/fog_fragment.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/fog_fragment.glsl.js deleted file mode 100644 index 24dc2fba66dde5dc195b382c4b9201bcfe46de6b..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/fog_fragment.glsl.js +++ /dev/null @@ -1,17 +0,0 @@ -export default /* glsl */` -#ifdef USE_FOG - - #ifdef FOG_EXP2 - - float fogFactor = whiteCompliment( exp2( 
- fogDensity * fogDensity * fogDepth * fogDepth * LOG2 ) ); - - #else - - float fogFactor = smoothstep( fogNear, fogFar, fogDepth ); - - #endif - - gl_FragColor.rgb = mix( gl_FragColor.rgb, fogColor, fogFactor ); - -#endif -`; diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326225603.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326225603.py deleted file mode 100644 index 2e0a37be3ba26cc71d1a25ff33b06b64b6322c36..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326225603.py +++ /dev/null @@ -1,68 +0,0 @@ -import os -os.system("pip install gfpgan") - -os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - - - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - return Image.fromarray(restored_faces[0][:,:,::-1]) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once" -article = "

    Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo

    visitor badge
    " -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/better57/CHATGPT/run_Windows.bat b/spaces/better57/CHATGPT/run_Windows.bat deleted file mode 100644 index 4c18f9ccaeea0af972301ffdf48778641221f76d..0000000000000000000000000000000000000000 --- a/spaces/better57/CHATGPT/run_Windows.bat +++ /dev/null @@ -1,5 +0,0 @@ -@echo off -echo Opening ChuanhuChatGPT... - -REM Open powershell via bat -start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py" diff --git a/spaces/bhavyapandya/Next-Word-Prediction/README.md b/spaces/bhavyapandya/Next-Word-Prediction/README.md deleted file mode 100644 index 45cdf346d3aa87bcbab12597d50fad4224b61e7e..0000000000000000000000000000000000000000 --- a/spaces/bhavyapandya/Next-Word-Prediction/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Next Word Prediction -emoji: 🚀 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bigcode/bigcode-models-leaderboard/src/add_json_csv.py b/spaces/bigcode/bigcode-models-leaderboard/src/add_json_csv.py deleted file mode 100644 index f6040f3069a5566fcbf566b651e8f425b8457017..0000000000000000000000000000000000000000 --- a/spaces/bigcode/bigcode-models-leaderboard/src/add_json_csv.py +++ /dev/null @@ -1,49 +0,0 @@ -import csv -import json - -# Given mapping -mapping = { - "humaneval": "humaneval-python", - "multiple-lua": "lua", - "multiple-java": "java", - "multiple-jl": "julia", - "multiple-cpp": "cpp", - "multiple-rs": "rust", - "multiple-rkt": "racket", - "multiple-php": "php", - "multiple-r": "r", - "multiple-js": "javascript", - "multiple-d": "d", - "multiple-swift": "swift" -} - -# JSON Data (replace this with your actual loaded JSON) -json_path = "/fsx/loubna/bigcode-models-leaderboard/community_results/WisdomShell_CodeShell_ruixie/WisdomShell_CodeShell_ruixie.json" -with open(json_path, "r") as f: - json_data = json.load(f) -parsed_data = json_data['results'] - -# Create a dictionary with column names as keys and empty values -csv_columns = ["Models", "Size (B)", "Throughput (tokens/s)", "Seq_length", "#Languages", "humaneval-python", "java", "javascript", "cpp", "php", "julia", "d", "lua", "r", "racket", "rust", "swift", "Throughput (tokens/s) bs=50", "Peak Memory (MB)"] -row_data = {col: '' for col in csv_columns} - -# Fill the dictionary with data from the JSON -for item in parsed_data: - csv_col = mapping.get(item['task']) - if csv_col: - row_data[csv_col] = round(item['pass@1'] * 100, 2) - -# Set model name under the 'Models' column -row_data['Models'] = json_data['meta']['model'] - -# Write to CSV -csv_file = "/fsx/loubna/bigcode-models-leaderboard/data/raw_scores.csv" -with open(csv_file, 'a', newline='') as csvfile: - writer = csv.DictWriter(csvfile, fieldnames=row_data.keys()) - writer.writerow(row_data) - -# print last 3 rows in csv -with open(csv_file, 'r') as f: - lines = f.readlines() - for line in lines[-3:]: - print(line) diff --git a/spaces/bigslime/stablediffusion-infinity/utils.py b/spaces/bigslime/stablediffusion-infinity/utils.py deleted file mode 100644 index 
bebc4f7f4da8f6de637b148f39aa6a5ef60679c5..0000000000000000000000000000000000000000 --- a/spaces/bigslime/stablediffusion-infinity/utils.py +++ /dev/null @@ -1,217 +0,0 @@ -from PIL import Image -from PIL import ImageFilter -import cv2 -import numpy as np -import scipy -import scipy.signal -from scipy.spatial import cKDTree - -import os -from perlin2d import * - -patch_match_compiled = True - -try: - from PyPatchMatch import patch_match -except Exception as e: - try: - import patch_match - except Exception as e: - patch_match_compiled = False - -try: - patch_match -except NameError: - print("patch_match compiling failed, will fall back to edge_pad") - patch_match_compiled = False - - - - -def edge_pad(img, mask, mode=1): - if mode == 0: - nmask = mask.copy() - nmask[nmask > 0] = 1 - res0 = 1 - nmask - res1 = nmask - p0 = np.stack(res0.nonzero(), axis=0).transpose() - p1 = np.stack(res1.nonzero(), axis=0).transpose() - min_dists, min_dist_idx = cKDTree(p1).query(p0, 1) - loc = p1[min_dist_idx] - for (a, b), (c, d) in zip(p0, loc): - img[a, b] = img[c, d] - elif mode == 1: - record = {} - kernel = [[1] * 3 for _ in range(3)] - nmask = mask.copy() - nmask[nmask > 0] = 1 - res = scipy.signal.convolve2d( - nmask, kernel, mode="same", boundary="fill", fillvalue=1 - ) - res[nmask < 1] = 0 - res[res == 9] = 0 - res[res > 0] = 1 - ylst, xlst = res.nonzero() - queue = [(y, x) for y, x in zip(ylst, xlst)] - # bfs here - cnt = res.astype(np.float32) - acc = img.astype(np.float32) - step = 1 - h = acc.shape[0] - w = acc.shape[1] - offset = [(1, 0), (-1, 0), (0, 1), (0, -1)] - while queue: - target = [] - for y, x in queue: - val = acc[y][x] - for yo, xo in offset: - yn = y + yo - xn = x + xo - if 0 <= yn < h and 0 <= xn < w and nmask[yn][xn] < 1: - if record.get((yn, xn), step) == step: - acc[yn][xn] = acc[yn][xn] * cnt[yn][xn] + val - cnt[yn][xn] += 1 - acc[yn][xn] /= cnt[yn][xn] - if (yn, xn) not in record: - record[(yn, xn)] = step - target.append((yn, xn)) - step += 1 - queue = target - img = acc.astype(np.uint8) - else: - nmask = mask.copy() - ylst, xlst = nmask.nonzero() - yt, xt = ylst.min(), xlst.min() - yb, xb = ylst.max(), xlst.max() - content = img[yt : yb + 1, xt : xb + 1] - img = np.pad( - content, - ((yt, mask.shape[0] - yb - 1), (xt, mask.shape[1] - xb - 1), (0, 0)), - mode="edge", - ) - return img, mask - - -def perlin_noise(img, mask): - lin = np.linspace(0, 5, mask.shape[0], endpoint=False) - x, y = np.meshgrid(lin, lin) - avg = img.mean(axis=0).mean(axis=0) - # noise=[((perlin(x, y)+1)*128+avg[i]).astype(np.uint8) for i in range(3)] - noise = [((perlin(x, y) + 1) * 0.5 * 255).astype(np.uint8) for i in range(3)] - noise = np.stack(noise, axis=-1) - # mask=skimage.measure.block_reduce(mask,(8,8),np.min) - # mask=mask.repeat(8, axis=0).repeat(8, axis=1) - # mask_image=Image.fromarray(mask) - # mask_image=mask_image.filter(ImageFilter.GaussianBlur(radius = 4)) - # mask=np.array(mask_image) - nmask = mask.copy() - # nmask=nmask/255.0 - nmask[mask > 0] = 1 - img = nmask[:, :, np.newaxis] * img + (1 - nmask[:, :, np.newaxis]) * noise - # img=img.astype(np.uint8) - return img, mask - - -def gaussian_noise(img, mask): - noise = np.random.randn(mask.shape[0], mask.shape[1], 3) - noise = (noise + 1) / 2 * 255 - noise = noise.astype(np.uint8) - nmask = mask.copy() - nmask[mask > 0] = 1 - img = nmask[:, :, np.newaxis] * img + (1 - nmask[:, :, np.newaxis]) * noise - return img, mask - - -def cv2_telea(img, mask): - ret = cv2.inpaint(img, 255 - mask, 5, cv2.INPAINT_TELEA) - return ret, mask - - -def 
cv2_ns(img, mask): - ret = cv2.inpaint(img, 255 - mask, 5, cv2.INPAINT_NS) - return ret, mask - - -def patch_match_func(img, mask): - ret = patch_match.inpaint(img, mask=255 - mask, patch_size=3) - return ret, mask - - -def mean_fill(img, mask): - avg = img.mean(axis=0).mean(axis=0) - img[mask < 1] = avg - return img, mask - -def g_diffuser(img,mask): - return img, mask - -def dummy_fill(img,mask): - return img,mask -functbl = { - "gaussian": gaussian_noise, - "perlin": perlin_noise, - "edge_pad": edge_pad, - "patchmatch": patch_match_func if patch_match_compiled else edge_pad, - "cv2_ns": cv2_ns, - "cv2_telea": cv2_telea, - "g_diffuser": g_diffuser, - "g_diffuser_lib": dummy_fill, -} - -try: - from postprocess import PhotometricCorrection - correction_func = PhotometricCorrection() -except Exception as e: - print(e, "so PhotometricCorrection is disabled") - class DummyCorrection: - def __init__(self): - self.backend="" - pass - def run(self,a,b,**kwargs): - return b - correction_func=DummyCorrection() - -if "taichi" in correction_func.backend: - import sys - import io - import base64 - from PIL import Image - def base64_to_pil(base64_str): - data = base64.b64decode(str(base64_str)) - pil = Image.open(io.BytesIO(data)) - return pil - - def pil_to_base64(out_pil): - out_buffer = io.BytesIO() - out_pil.save(out_buffer, format="PNG") - out_buffer.seek(0) - base64_bytes = base64.b64encode(out_buffer.read()) - base64_str = base64_bytes.decode("ascii") - return base64_str - from subprocess import Popen, PIPE, STDOUT - class SubprocessCorrection: - def __init__(self): - self.backend=correction_func.backend - self.child= Popen(["python", "postprocess.py"], stdin=PIPE, stdout=PIPE, stderr=STDOUT) - def run(self,img_input,img_inpainted,mode): - if mode=="disabled": - return img_inpainted - base64_str_input = pil_to_base64(img_input) - base64_str_inpainted = pil_to_base64(img_inpainted) - try: - if self.child.poll(): - self.child= Popen(["python", "postprocess.py"], stdin=PIPE, stdout=PIPE, stderr=STDOUT) - self.child.stdin.write(f"{base64_str_input},{base64_str_inpainted},{mode}\n".encode()) - self.child.stdin.flush() - out = self.child.stdout.readline() - base64_str=out.decode().strip() - while base64_str and base64_str[0]=="[": - print(base64_str) - out = self.child.stdout.readline() - base64_str=out.decode().strip() - ret=base64_to_pil(base64_str) - except: - print("[PIE] not working, photometric correction is disabled") - ret=img_inpainted - return ret - correction_func = SubprocessCorrection() diff --git a/spaces/binker/interpreter/response_parser.py b/spaces/binker/interpreter/response_parser.py deleted file mode 100644 index 685b11d8b62223cd92bc603c111e661dddd089e9..0000000000000000000000000000000000000000 --- a/spaces/binker/interpreter/response_parser.py +++ /dev/null @@ -1,200 +0,0 @@ -from abc import ABCMeta, abstractmethod -from functional import * - - -class ChoiceStrategy(metaclass=ABCMeta): - def __init__(self, choice): - self.choice = choice - self.delta = choice['delta'] - - @abstractmethod - def support(self): - pass - - @abstractmethod - def execute(self, bot_backend: BotBackend, history: List, whether_exit: bool): - pass - - -class RoleChoiceStrategy(ChoiceStrategy): - - def support(self): - return 'role' in self.delta - - def execute(self, bot_backend: BotBackend, history: List, whether_exit: bool): - bot_backend.set_assistant_role_name(assistant_role_name=self.delta['role']) - return history, whether_exit - - -class ContentChoiceStrategy(ChoiceStrategy): - def support(self): - 
return 'content' in self.delta and self.delta['content'] is not None - # null value of content often occur in function call: - # { - # "role": "assistant", - # "content": null, - # "function_call": { - # "name": "python", - # "arguments": "" - # } - # } - - def execute(self, bot_backend: BotBackend, history: List, whether_exit: bool): - bot_backend.add_content(content=self.delta.get('content', '')) - history[-1][1] = bot_backend.content - return history, whether_exit - - -class NameFunctionCallChoiceStrategy(ChoiceStrategy): - def support(self): - return 'function_call' in self.delta and 'name' in self.delta['function_call'] - - def execute(self, bot_backend: BotBackend, history: List, whether_exit: bool): - function_dict = bot_backend.jupyter_kernel.available_functions - bot_backend.set_function_name(function_name=self.delta['function_call']['name']) - bot_backend.copy_current_bot_history(bot_history=history) - if bot_backend.function_name not in function_dict: - history.append( - [ - None, - f'GPT attempted to call a function that does ' - f'not exist: {bot_backend.function_name}\n ' - ] - ) - whether_exit = True - - return history, whether_exit - - -class ArgumentsFunctionCallChoiceStrategy(ChoiceStrategy): - - def support(self): - return 'function_call' in self.delta and 'arguments' in self.delta['function_call'] - - def execute(self, bot_backend: BotBackend, history: List, whether_exit: bool): - bot_backend.add_function_args_str(function_args_str=self.delta['function_call']['arguments']) - - if bot_backend.function_name == 'python': # handle hallucinatory function calls - """ - In practice, we have noticed that GPT, especially GPT-3.5, may occasionally produce hallucinatory - function calls. These calls involve a non-existent function named `python` with arguments consisting - solely of raw code text (not a JSON format). 
- """ - temp_code_str = bot_backend.function_args_str - bot_backend.update_display_code_block( - display_code_block="\n🔴Working:\n```python\n{}\n```".format(temp_code_str) - ) - history = copy.deepcopy(bot_backend.bot_history) - history[-1][1] += bot_backend.display_code_block - else: - temp_code_str = parse_json(function_args=bot_backend.function_args_str, finished=False) - if temp_code_str is not None: - bot_backend.update_display_code_block( - display_code_block="\n🔴Working:\n```python\n{}\n```".format( - temp_code_str - ) - ) - history = copy.deepcopy(bot_backend.bot_history) - history[-1][1] += bot_backend.display_code_block - - return history, whether_exit - - -class FinishReasonChoiceStrategy(ChoiceStrategy): - def support(self): - return self.choice['finish_reason'] is not None - - def execute(self, bot_backend: BotBackend, history: List, whether_exit: bool): - function_dict = bot_backend.jupyter_kernel.available_functions - - if bot_backend.content: - bot_backend.add_gpt_response_content_message() - - bot_backend.update_finish_reason(finish_reason=self.choice['finish_reason']) - if bot_backend.finish_reason == 'function_call': - try: - - code_str = self.get_code_str(bot_backend) - - bot_backend.update_display_code_block( - display_code_block="\n🟢Working:\n```python\n{}\n```".format(code_str) - ) - history = copy.deepcopy(bot_backend.bot_history) - history[-1][1] += bot_backend.display_code_block - - # function response - text_to_gpt, content_to_display = function_dict[ - bot_backend.function_name - ](code_str) - - # add function call to conversion - bot_backend.add_function_call_response_message(function_response=text_to_gpt, save_tokens=True) - - add_function_response_to_bot_history( - content_to_display=content_to_display, history=history, unique_id=bot_backend.unique_id - ) - - except json.JSONDecodeError: - history.append( - [None, f"GPT generate wrong function args: {bot_backend.function_args_str}"] - ) - whether_exit = True - return history, whether_exit - - except Exception as e: - history.append([None, f'Backend error: {e}']) - whether_exit = True - return history, whether_exit - - bot_backend.reset_gpt_response_log_values(exclude=['finish_reason']) - - return history, whether_exit - - @staticmethod - def get_code_str(bot_backend): - if bot_backend.function_name == 'python': - code_str = bot_backend.function_args_str - else: - code_str = parse_json(function_args=bot_backend.function_args_str, finished=True) - if code_str is None: - raise json.JSONDecodeError - return code_str - - -class ChoiceHandler: - strategies = [ - RoleChoiceStrategy, ContentChoiceStrategy, NameFunctionCallChoiceStrategy, - ArgumentsFunctionCallChoiceStrategy, FinishReasonChoiceStrategy - ] - - def __init__(self, choice): - self.choice = choice - - def handle(self, bot_backend: BotBackend, history: List, whether_exit: bool): - for Strategy in self.strategies: - strategy_instance = Strategy(choice=self.choice) - if not strategy_instance.support(): - continue - history, whether_exit = strategy_instance.execute( - bot_backend=bot_backend, - history=history, - whether_exit=whether_exit - ) - return history, whether_exit - - -def parse_response(chunk, history, bot_backend: BotBackend): - """ - :return: history, whether_exit - """ - whether_exit = False - if chunk['choices']: - choice = chunk['choices'][0] - choice_handler = ChoiceHandler(choice=choice) - history, whether_exit = choice_handler.handle( - history=history, - bot_backend=bot_backend, - whether_exit=whether_exit - ) - - return history, 
whether_exit diff --git a/spaces/bioriAsaeru/text-to-voice/Abelssoft WashAndGo Crack v25.1 Build 264 With Serial Key Free Download and Review.md b/spaces/bioriAsaeru/text-to-voice/Abelssoft WashAndGo Crack v25.1 Build 264 With Serial Key Free Download and Review.md deleted file mode 100644 index 0c057329bdb2789f5f3ccc3b3d73c706d9b75c78..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Abelssoft WashAndGo Crack v25.1 Build 264 With Serial Key Free Download and Review.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Abelssoft WashAndGo Crack v25.1 Build 264 With Serial Key


    Download File →→→ https://urloso.com/2uyQvU



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Detective Byomkesh Bakshy! ((INSTALL)) Full Movie Torrent Download.md b/spaces/bioriAsaeru/text-to-voice/Detective Byomkesh Bakshy! ((INSTALL)) Full Movie Torrent Download.md deleted file mode 100644 index ede4e4d8e1662beb8b6c2abd84ffeac98717fed4..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Detective Byomkesh Bakshy! ((INSTALL)) Full Movie Torrent Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Detective Byomkesh Bakshy! full movie torrent download


Download https://urloso.com/2uyS8y



    -
    -part 2 full movie torrent download Game Paisa Ladki .. New Tamil,Telugu Hindi ... Spider Man Far From Home Hindi Dubbed Torrent Movie ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Foca 3.0 Disponible The Best Way to Scan and Extract Metadata from Files.md b/spaces/bioriAsaeru/text-to-voice/Foca 3.0 Disponible The Best Way to Scan and Extract Metadata from Files.md deleted file mode 100644 index d50660e2c1dede84858342a293b9d2d1c2bdf5c5..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Foca 3.0 Disponible The Best Way to Scan and Extract Metadata from Files.md +++ /dev/null @@ -1,12 +0,0 @@ - -

The hooded seal (Cystophora cristata) is a carnivorous mammal of the family Phocidae. It lives only in the North Atlantic, in an area that stretches from Svalbard in the east to the Gulf of St. Lawrence in the west. It is the only species of the genus Cystophora.

    -

Yesterday afternoon the FOCA Free 3.1.1 build was uploaded to the website, so the engine of the latest available version of this tool can now be downloaded. As always, we hope you will use it and send us ideas and suggestions, as well as any bugs that may show up. A lot of work has gone into FOCA's stability, and the tool's memory consumption has been improved.

There are several differences between FOCA PRO 3.1.1 and FOCA FREE 3.1.1, and since we are always being asked about them we have put together a comparison table of what sets them apart. Broadly speaking, the exploiting features, the advertising, the reporting and the speed are the most significant aspects.

The next seminar for getting FOCA PRO 3.1.1 will take place on April 9 and 10, but you can always stay informed on the website where we publish the upcoming Informática 64 events.
    Saludos Malignos!

    -

    Foca 3.0 Disponible


    Download Zip ✏ ✏ ✏ https://urloso.com/2uyRLz



    -

Hi, in my case it throws an error when I run it. It is most likely something on my machine, but if anyone knows about this, any help is appreciated :-) (Windows 7 SP1 machine, all fixes up to date, 32-bit).


Problem Event Name: CLR20r3
Problem Signature 01: foca free.exe
Problem Signature 02: 3.1.1.0
Problem Signature 03: 4f60b291
Problem Signature 04: FOCA Free
Problem Signature 05: 3.1.1.0
Problem Signature 06: 4f60b291
Problem Signature 07: c3
Problem Signature 08: 1d
Problem Signature 09: System.IO.FileNotFoundException
OS Version: 6.1.7601.2.1.0.256.48
Locale ID: 3082

    -

The first of the attacks we find are the attacks on IPv6 networks, which gave rise to the posts on elladodelmal, the Informática 64 books on the subject and Evil FOCA itself. These attacks on IPv6 networks, and in particular the SLAAC attack, are very well documented in the following posts:

    -

The second attack the tool lets us perform is a "Man in the Middle" attack using ARP spoofing on IPv4. For this we simply select our network's gateway and our victim's IP address, then press the start button to make the victim believe we are the gateway and the gateway believe we are the victim, so that traffic in both directions always passes through us.

    -

The first member of the Ocean series, the BYD Dolphin, was presented last year. The company itself has already confirmed that this compact will arrive in Europe. In 2022 three more models will join the family: the urban Seagull, the mid-size Seal sedan and the family SUV Sealion.

    -

Although there is still very little data on the Seagull and the Sealion, a series of leaks has let us learn some details of the imminent Seal, which will be 4.77 metres long and have a 2.9-metre wheelbase. It will be available in three versions, two rear-wheel drive and one all-wheel drive, delivering 204 PS (150 kW), 245 PS (180 kW) and 489 PS (360 kW) respectively.

    -

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/HD Online Player (Pacific Rim Uprising English Hd 1080).md b/spaces/bioriAsaeru/text-to-voice/HD Online Player (Pacific Rim Uprising English Hd 1080).md deleted file mode 100644 index 979c44ee115a12d51ccbd0d0cf62d3c7fbf4a64e..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/HD Online Player (Pacific Rim Uprising English Hd 1080).md +++ /dev/null @@ -1,20 +0,0 @@ -

    HD Online Player (Pacific Rim Uprising English Hd 1080)


    Download ✸✸✸ https://urloso.com/2uyPu9



    -
    -www.theplanet.co. Pacific Rim Uprising English Hd 1080. 2017-12-16T17:24:27.The hd feature lists of. Hd Online Player Pacific Rim Uprising English Hd 1080, Linux (Ubuntu) Game Hd Online Player Pacific Rim Uprising English Hd 1080. www.theplanet.co. What is the hd online player-pacific-rim-uprising-english-hd-1080-darnraes. Hd Online Player Pacific Rim Uprising English Hd 1080, Linux (Ubuntu) Game Pacific Rim Uprising English Hd 1080. www.theplanet.co. Hd Online Player Pacific Rim Uprising English Hd 1080, Linux (Ubuntu) Game. Hd Online Player Pacific Rim Uprising English Hd 1080, Linux (Ubuntu) Game. 2018-04-10T19:12:08.Rights groups say the arrest of a Sudanese man at New York’s John F. Kennedy airport has sparked concerns that prosecutors are acting too quickly to deport him for a crime he says he didn’t commit. - -Security camera footage of the arrest shows the unidentified man bleeding heavily from his nose and mouth after he was wrestled to the ground in a scuffle with a U.S. Customs and Border Protection officer. - -Sudan’s embassy in Washington says it has no information about the incident. - -U.S. Immigration and Customs Enforcement spokesman Carl Rusnok says the man was arrested by Customs officers on Sunday after they learned he had arrived in the U.S. on a tourist visa with counterfeit documents. Rusnok says the man will be processed at the nearest U.S. court. - -Ishaq Mohammed says he was working for a construction company at the airport on Monday when a U.S. Customs and Border Protection officer saw him speaking Arabic and asked him if he was Muslim. He says the officer then asked him about his origin and checked his passport. - -Mohammed, who was born in India, says he was confused by the officer’s questions and felt he had to prove his loyalty to the United States. - -He says he presented his passport, but the officer told him it was not valid. He says the officer then handcuffed him and walked him to the back of a van, where he was patted down and handcuffed again. - -Mohammed says he was later told he would be deported 4fefd39f24
    -
    -
    -

    diff --git a/spaces/birdortyedi/cifr-pytorch/modeling/arch.py b/spaces/birdortyedi/cifr-pytorch/modeling/arch.py deleted file mode 100644 index 9f7f2a59593bfd78f320bfee4027287c1a4f4f2b..0000000000000000000000000000000000000000 --- a/spaces/birdortyedi/cifr-pytorch/modeling/arch.py +++ /dev/null @@ -1,272 +0,0 @@ -import torch -from torch import nn -from torch.nn.utils import spectral_norm - -from modeling.base import BaseNetwork -from layers.blocks import DestyleResBlock, Destyler, ResBlock - - -class IFRNet(BaseNetwork): - def __init__(self, base_n_channels, destyler_n_channels): - super(IFRNet, self).__init__() - self.destyler = Destyler(in_features=32768, num_features=destyler_n_channels) # from vgg features - - self.ds_fc1 = nn.Linear(destyler_n_channels, base_n_channels * 2) - self.ds_res1 = DestyleResBlock(channels_in=3, channels_out=base_n_channels, kernel_size=5, stride=1, padding=2) - self.ds_fc2 = nn.Linear(destyler_n_channels, base_n_channels * 4) - self.ds_res2 = DestyleResBlock(channels_in=base_n_channels, channels_out=base_n_channels * 2, kernel_size=3, stride=2, padding=1) - self.ds_fc3 = nn.Linear(destyler_n_channels, base_n_channels * 4) - self.ds_res3 = DestyleResBlock(channels_in=base_n_channels * 2, channels_out=base_n_channels * 2, kernel_size=3, stride=1, padding=1) - self.ds_fc4 = nn.Linear(destyler_n_channels, base_n_channels * 8) - self.ds_res4 = DestyleResBlock(channels_in=base_n_channels * 2, channels_out=base_n_channels * 4, kernel_size=3, stride=2, padding=1) - self.ds_fc5 = nn.Linear(destyler_n_channels, base_n_channels * 8) - self.ds_res5 = DestyleResBlock(channels_in=base_n_channels * 4, channels_out=base_n_channels * 4, kernel_size=3, stride=1, padding=1) - self.ds_fc6 = nn.Linear(destyler_n_channels, base_n_channels * 16) - self.ds_res6 = DestyleResBlock(channels_in=base_n_channels * 4, channels_out=base_n_channels * 8, kernel_size=3, stride=2, padding=1) - - self.upsample = nn.UpsamplingNearest2d(scale_factor=2.0) - - self.res1 = ResBlock(channels_in=base_n_channels * 8, channels_out=base_n_channels * 4, kernel_size=3, stride=1, padding=1) - self.res2 = ResBlock(channels_in=base_n_channels * 4, channels_out=base_n_channels * 4, kernel_size=3, stride=1, padding=1) - self.res3 = ResBlock(channels_in=base_n_channels * 4, channels_out=base_n_channels * 2, kernel_size=3, stride=1, padding=1) - self.res4 = ResBlock(channels_in=base_n_channels * 2, channels_out=base_n_channels * 2, kernel_size=3, stride=1, padding=1) - self.res5 = ResBlock(channels_in=base_n_channels * 2, channels_out=base_n_channels, kernel_size=3, stride=1, padding=1) - - self.conv1 = nn.Conv2d(base_n_channels, 3, kernel_size=3, stride=1, padding=1) - - self.init_weights(init_type="normal", gain=0.02) - - def forward(self, x, vgg_feat): - b_size, ch, h, w = vgg_feat.size() - vgg_feat = vgg_feat.view(b_size, ch * h * w) - vgg_feat = self.destyler(vgg_feat) - - out = self.ds_res1(x, self.ds_fc1(vgg_feat)) - out = self.ds_res2(out, self.ds_fc2(vgg_feat)) - out = self.ds_res3(out, self.ds_fc3(vgg_feat)) - out = self.ds_res4(out, self.ds_fc4(vgg_feat)) - out = self.ds_res5(out, self.ds_fc5(vgg_feat)) - aux = self.ds_res6(out, self.ds_fc6(vgg_feat)) - - out = self.upsample(aux) - out = self.res1(out) - out = self.res2(out) - out = self.upsample(out) - out = self.res3(out) - out = self.res4(out) - out = self.upsample(out) - out = self.res5(out) - out = self.conv1(out) - - return out, aux - - -class CIFR_Encoder(IFRNet): - def __init__(self, base_n_channels, destyler_n_channels): - 
super(CIFR_Encoder, self).__init__(base_n_channels, destyler_n_channels) - - def forward(self, x, vgg_feat): - b_size, ch, h, w = vgg_feat.size() - vgg_feat = vgg_feat.view(b_size, ch * h * w) - vgg_feat = self.destyler(vgg_feat) - - feat1 = self.ds_res1(x, self.ds_fc1(vgg_feat)) - feat2 = self.ds_res2(feat1, self.ds_fc2(vgg_feat)) - feat3 = self.ds_res3(feat2, self.ds_fc3(vgg_feat)) - feat4 = self.ds_res4(feat3, self.ds_fc4(vgg_feat)) - feat5 = self.ds_res5(feat4, self.ds_fc5(vgg_feat)) - feat6 = self.ds_res6(feat5, self.ds_fc6(vgg_feat)) - - feats = [feat1, feat2, feat3, feat4, feat5, feat6] - - out = self.upsample(feat6) - out = self.res1(out) - out = self.res2(out) - out = self.upsample(out) - out = self.res3(out) - out = self.res4(out) - out = self.upsample(out) - out = self.res5(out) - out = self.conv1(out) - - return out, feats - - -class Normalize(nn.Module): - def __init__(self, power=2): - super(Normalize, self).__init__() - self.power = power - - def forward(self, x): - norm = x.pow(self.power).sum(1, keepdim=True).pow(1. / self.power) - out = x.div(norm + 1e-7) - return out - - -class PatchSampleF(BaseNetwork): - def __init__(self, base_n_channels, style_or_content, use_mlp=False, nc=256): - # potential issues: currently, we use the same patch_ids for multiple images in the batch - super(PatchSampleF, self).__init__() - self.is_content = True if style_or_content == "content" else False - self.l2norm = Normalize(2) - self.use_mlp = use_mlp - self.nc = nc # hard-coded - - self.mlp_0 = nn.Sequential(*[nn.Linear(base_n_channels, self.nc), nn.ReLU(), nn.Linear(self.nc, self.nc)]).cuda() - self.mlp_1 = nn.Sequential(*[nn.Linear(base_n_channels * 2, self.nc), nn.ReLU(), nn.Linear(self.nc, self.nc)]).cuda() - self.mlp_2 = nn.Sequential(*[nn.Linear(base_n_channels * 2, self.nc), nn.ReLU(), nn.Linear(self.nc, self.nc)]).cuda() - self.mlp_3 = nn.Sequential(*[nn.Linear(base_n_channels * 4, self.nc), nn.ReLU(), nn.Linear(self.nc, self.nc)]).cuda() - self.mlp_4 = nn.Sequential(*[nn.Linear(base_n_channels * 4, self.nc), nn.ReLU(), nn.Linear(self.nc, self.nc)]).cuda() - self.mlp_5 = nn.Sequential(*[nn.Linear(base_n_channels * 8, self.nc), nn.ReLU(), nn.Linear(self.nc, self.nc)]).cuda() - self.init_weights(init_type="normal", gain=0.02) - - @staticmethod - def gram_matrix(x): - # a, b, c, d = x.size() # a=batch size(=1) - a, b = x.size() - # b=number of feature maps - # (c,d)=dimensions of a f. map (N=c*d) - - # features = x.view(a * b, c * d) # resise F_XL into \hat F_XL - - G = torch.mm(x, x.t()) # compute the gram product - - # we 'normalize' the values of the gram matrix - # by dividing by the number of element in each feature maps. 
- return G.div(a * b) - - def forward(self, feats, num_patches=64, patch_ids=None): - return_ids = [] - return_feats = [] - - for feat_id, feat in enumerate(feats): - B, C, H, W = feat.shape - feat_reshape = feat.permute(0, 2, 3, 1).flatten(1, 2) - if num_patches > 0: - if patch_ids is not None: - patch_id = patch_ids[feat_id] - else: - patch_id = torch.randperm(feat_reshape.shape[1], device=feats[0].device) - patch_id = patch_id[:int(min(num_patches, patch_id.shape[0]))] # .to(patch_ids.device) - x_sample = feat_reshape[:, patch_id, :].flatten(0, 1) # reshape(-1, x.shape[1]) - else: - x_sample = feat_reshape - patch_id = [] - if self.use_mlp: - mlp = getattr(self, 'mlp_%d' % feat_id) - x_sample = mlp(x_sample) - if not self.is_content: - x_sample = self.gram_matrix(x_sample) - return_ids.append(patch_id) - x_sample = self.l2norm(x_sample) - - if num_patches == 0: - x_sample = x_sample.permute(0, 2, 1).reshape([B, x_sample.shape[-1], H, W]) - return_feats.append(x_sample) - return return_feats, return_ids - - -class MLP(nn.Module): - def __init__(self, base_n_channels, out_features=14): - super(MLP, self).__init__() - self.aux_classifier = nn.Sequential( - nn.Conv2d(base_n_channels * 8, base_n_channels * 4, kernel_size=3, stride=1, padding=1), - nn.MaxPool2d(2), - nn.Conv2d(base_n_channels * 4, base_n_channels * 2, kernel_size=3, stride=1, padding=1), - nn.MaxPool2d(2), - # nn.Conv2d(base_n_channels * 2, base_n_channels * 1, kernel_size=3, stride=1, padding=1), - # nn.MaxPool2d(2), - Flatten(), - nn.Linear(base_n_channels * 8 * 8 * 2, out_features), - # nn.Softmax(dim=-1) - ) - - def forward(self, x): - return self.aux_classifier(x) - - -class Flatten(nn.Module): - def forward(self, input): - """ - Note that input.size(0) is usually the batch size. - So what it does is that given any input with input.size(0) # of batches, - will flatten to be 1 * nb_elements. - """ - batch_size = input.size(0) - out = input.view(batch_size, -1) - return out # (batch_size, *size) - - -class Discriminator(BaseNetwork): - def __init__(self, base_n_channels): - """ - img_size : (int, int, int) - Height and width must be powers of 2. E.g. (32, 32, 1) or - (64, 128, 3). Last number indicates number of channels, e.g. 
1 for - grayscale or 3 for RGB - """ - super(Discriminator, self).__init__() - - self.image_to_features = nn.Sequential( - spectral_norm(nn.Conv2d(3, base_n_channels, 5, 2, 2)), - nn.LeakyReLU(0.2, inplace=True), - spectral_norm(nn.Conv2d(base_n_channels, 2 * base_n_channels, 5, 2, 2)), - nn.LeakyReLU(0.2, inplace=True), - spectral_norm(nn.Conv2d(2 * base_n_channels, 2 * base_n_channels, 5, 2, 2)), - nn.LeakyReLU(0.2, inplace=True), - spectral_norm(nn.Conv2d(2 * base_n_channels, 4 * base_n_channels, 5, 2, 2)), - nn.LeakyReLU(0.2, inplace=True), - # spectral_norm(nn.Conv2d(4 * base_n_channels, 4 * base_n_channels, 5, 2, 2)), - # nn.LeakyReLU(0.2, inplace=True), - spectral_norm(nn.Conv2d(4 * base_n_channels, 8 * base_n_channels, 5, 1, 1)), - nn.LeakyReLU(0.2, inplace=True), - ) - - output_size = 8 * base_n_channels * 3 * 3 - self.features_to_prob = nn.Sequential( - spectral_norm(nn.Conv2d(8 * base_n_channels, 2 * base_n_channels, 5, 2, 1)), - Flatten(), - nn.Linear(output_size, 1) - ) - - self.init_weights(init_type="normal", gain=0.02) - - def forward(self, input_data): - x = self.image_to_features(input_data) - return self.features_to_prob(x) - - -class PatchDiscriminator(Discriminator): - def __init__(self, base_n_channels): - super(PatchDiscriminator, self).__init__(base_n_channels) - - self.features_to_prob = nn.Sequential( - spectral_norm(nn.Conv2d(8 * base_n_channels, 1, 1)), - Flatten() - ) - - def forward(self, input_data): - x = self.image_to_features(input_data) - return self.features_to_prob(x) - - -if __name__ == '__main__': - import torchvision - ifrnet = CIFR_Encoder(32, 128).cuda() - x = torch.rand((2, 3, 256, 256)).cuda() - vgg16 = torchvision.models.vgg16(pretrained=True).features.eval().cuda() - with torch.no_grad(): - vgg_feat = vgg16(x) - output, feats = ifrnet(x, vgg_feat) - print(output.size()) - for i, feat in enumerate(feats): - print(i, feat.size()) - - disc = Discriminator(32).cuda() - d_out = disc(output) - print(d_out.size()) - - patch_disc = PatchDiscriminator(32).cuda() - p_d_out = patch_disc(output) - print(p_d_out.size()) - diff --git a/spaces/bkhalaf/testapp/README.md b/spaces/bkhalaf/testapp/README.md deleted file mode 100644 index fd10dfaf552e0d530774b3d9cc8d27f700b4fa9c..0000000000000000000000000000000000000000 --- a/spaces/bkhalaf/testapp/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Testapp -emoji: 🚀 -colorFrom: blue -colorTo: purple -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/__init__.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/__init__.py deleted file mode 100644 index 70643517cd1a8b4e712eca90e23411ae89937795..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
-"""Dora Grids.""" diff --git a/spaces/breadlicker45/gpt-ya-gen/app.py b/spaces/breadlicker45/gpt-ya-gen/app.py deleted file mode 100644 index a26a9ae28656de335fa9a0e8889a3460868cf07a..0000000000000000000000000000000000000000 --- a/spaces/breadlicker45/gpt-ya-gen/app.py +++ /dev/null @@ -1,57 +0,0 @@ -import streamlit as st -import time -from transformers import pipeline -import torch -trust_remote_code=True -st.markdown('## Text-generation gpt-ya from Breadlicker45') -use_auth_token=True -@st.cache(allow_output_mutation=True, suppress_st_warning =True, show_spinner=False) -def get_model(): - return pipeline('text-generation', model=model, do_sample=False) - -col1, col2 = st.columns([2,1]) - -with st.sidebar: - st.markdown('## Model Parameters') - - max_length = st.slider('Max text length', 0, 500, 80) - - num_beams = st.slider('N° tree beams search', 1, 15, 2) - - early_stopping = st.selectbox( - 'Early stopping text generation', - ('True', 'False'), key={'True' : True, 'False': False}, index=0) - - no_ngram_repeat = st.slider('Max repetition limit', 1, 5, 2) - -with col1: - prompt= st.text_area('Your prompt here', - '''What is the meaning of life?''') - -with col2: - select_model = st.radio( - "Select the model to use:", - ('gpt-ya', 'gpt-ya-1-1', 'gpt-ya-1-1-160M'), index = 2) - - if select_model == 'gpt-ya': - model = 'breadlicker45/gpt-ya' - elif select_model == 'gpt-ya-1-1': - model = 'BreadAi/gpt-YA-1-1_70M' - elif select_model == 'gpt-ya-1-1-160M': - model = 'BreadAi/gpt-YA-1-1_160M' - - with st.spinner('Loading Model... (This may take a while)'): - generator = get_model() - st.success('Model loaded correctly!') - -gen = st.info('Generating text...') -answer = generator(prompt, max_length=max_length, no_repeat_ngram_size=no_ngram_repeat, - early_stopping=early_stopping, num_beams=num_beams, do_sample=False) -gen.empty() - -lst = answer[0]['generated_text'] - -t = st.empty() -for i in range(len(lst)): - t.markdown("#### %s" % lst[0:i]) - time.sleep(0.04) \ No newline at end of file diff --git a/spaces/cakiki/arxiv-downloads/index.html b/spaces/cakiki/arxiv-downloads/index.html deleted file mode 100644 index 506b17abb4e310670fadb3360c60dcd7a49644d0..0000000000000000000000000000000000000000 --- a/spaces/cakiki/arxiv-downloads/index.html +++ /dev/null @@ -1,85 +0,0 @@ - - - - - - - - - - - Monthly arXiv downloads since 1994 - - - - - - - - - - - - - - - - - - - - - - - -
    - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/utils.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/utils.py deleted file mode 100644 index 2e76eb9535a68dcb4ccb065556c55289294e42c8..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/utils.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from torch import nn - - -def initialize_module_params(module: nn.Module) -> None: - for name, param in module.named_parameters(): - if "bias" in name: - nn.init.constant_(param, 0) - elif "weight" in name: - nn.init.kaiming_normal_(param, mode="fan_out", nonlinearity="relu") diff --git a/spaces/chenxx/ChuanhuChatGPT/modules/chat_func.py b/spaces/chenxx/ChuanhuChatGPT/modules/chat_func.py deleted file mode 100644 index 342246ca11999fb5e15f035f8b34711c23be067c..0000000000000000000000000000000000000000 --- a/spaces/chenxx/ChuanhuChatGPT/modules/chat_func.py +++ /dev/null @@ -1,473 +0,0 @@ -# -*- coding:utf-8 -*- -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import os -import requests -import urllib3 - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp - -from modules.presets import * -from modules.llama_func import * -from modules.utils import * -import modules.shared as shared - -# logging.basicConfig(level=logging.INFO, format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s") - -if TYPE_CHECKING: - from typing import TypedDict - - class DataframeData(TypedDict): - headers: List[str] - data: List[List[str | int | bool]] - - -initial_prompt = "You are a helpful assistant." 
-HISTORY_DIR = "history" -TEMPLATES_DIR = "templates" - -def get_response( - openai_api_key, system_prompt, history, temperature, top_p, stream, selected_model -): - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {openai_api_key}", - } - - history = [construct_system(system_prompt), *history] - - payload = { - "model": selected_model, - "messages": history, # [{"role": "user", "content": f"{inputs}"}], - "temperature": temperature, # 1.0, - "top_p": top_p, # 1.0, - "n": 1, - "stream": stream, - "presence_penalty": 0, - "frequency_penalty": 0, - } - if stream: - timeout = timeout_streaming - else: - timeout = timeout_all - - # 获取环境变量中的代理设置 - http_proxy = os.environ.get("HTTP_PROXY") or os.environ.get("http_proxy") - https_proxy = os.environ.get("HTTPS_PROXY") or os.environ.get("https_proxy") - - # 如果存在代理设置,使用它们 - proxies = {} - if http_proxy: - logging.info(f"使用 HTTP 代理: {http_proxy}") - proxies["http"] = http_proxy - if https_proxy: - logging.info(f"使用 HTTPS 代理: {https_proxy}") - proxies["https"] = https_proxy - - # 如果有自定义的api-url,使用自定义url发送请求,否则使用默认设置发送请求 - if shared.state.api_url != API_URL: - logging.info(f"使用自定义API URL: {shared.state.api_url}") - if proxies: - response = requests.post( - shared.state.api_url, - headers=headers, - json=payload, - stream=True, - timeout=timeout, - proxies=proxies, - ) - else: - response = requests.post( - shared.state.api_url, - headers=headers, - json=payload, - stream=True, - timeout=timeout, - ) - return response - - -def stream_predict( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - selected_model, - fake_input=None, - display_append="" -): - def get_return_value(): - return chatbot, history, status_text, all_token_counts - - logging.info("实时回答模式") - partial_words = "" - counter = 0 - status_text = "开始实时传输回答……" - history.append(construct_user(inputs)) - history.append(construct_assistant("")) - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - user_token_count = 0 - if len(all_token_counts) == 0: - system_prompt_token_count = count_token(construct_system(system_prompt)) - user_token_count = ( - count_token(construct_user(inputs)) + system_prompt_token_count - ) - else: - user_token_count = count_token(construct_user(inputs)) - all_token_counts.append(user_token_count) - logging.info(f"输入token计数: {user_token_count}") - yield get_return_value() - try: - response = get_response( - openai_api_key, - system_prompt, - history, - temperature, - top_p, - True, - selected_model, - ) - except requests.exceptions.ConnectTimeout: - status_text = ( - standard_error_msg + connection_timeout_prompt + error_retrieve_prompt - ) - yield get_return_value() - return - except requests.exceptions.ReadTimeout: - status_text = standard_error_msg + read_timeout_prompt + error_retrieve_prompt - yield get_return_value() - return - - yield get_return_value() - error_json_str = "" - - for chunk in response.iter_lines(): - if counter == 0: - counter += 1 - continue - counter += 1 - # check whether each line is non-empty - if chunk: - chunk = chunk.decode() - chunklength = len(chunk) - try: - chunk = json.loads(chunk[6:]) - except json.JSONDecodeError: - logging.info(chunk) - error_json_str += chunk - status_text = f"JSON解析错误。请重置对话。收到的内容: {error_json_str}" - yield get_return_value() - continue - # decode each line as response data is in bytes - if chunklength > 6 and "delta" in chunk["choices"][0]: - finish_reason = 
chunk["choices"][0]["finish_reason"] - status_text = construct_token_message( - sum(all_token_counts), stream=True - ) - if finish_reason == "stop": - yield get_return_value() - break - try: - partial_words = ( - partial_words + chunk["choices"][0]["delta"]["content"] - ) - except KeyError: - status_text = ( - standard_error_msg - + "API回复中找不到内容。很可能是Token计数达到上限了。请重置对话。当前Token计数: " - + str(sum(all_token_counts)) - ) - yield get_return_value() - break - history[-1] = construct_assistant(partial_words) - chatbot[-1] = (chatbot[-1][0], partial_words+display_append) - all_token_counts[-1] += 1 - yield get_return_value() - - -def predict_all( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - selected_model, - fake_input=None, - display_append="" -): - logging.info("一次性回答模式") - history.append(construct_user(inputs)) - history.append(construct_assistant("")) - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - all_token_counts.append(count_token(construct_user(inputs))) - try: - response = get_response( - openai_api_key, - system_prompt, - history, - temperature, - top_p, - False, - selected_model, - ) - except requests.exceptions.ConnectTimeout: - status_text = ( - standard_error_msg + connection_timeout_prompt + error_retrieve_prompt - ) - return chatbot, history, status_text, all_token_counts - except requests.exceptions.ProxyError: - status_text = standard_error_msg + proxy_error_prompt + error_retrieve_prompt - return chatbot, history, status_text, all_token_counts - except requests.exceptions.SSLError: - status_text = standard_error_msg + ssl_error_prompt + error_retrieve_prompt - return chatbot, history, status_text, all_token_counts - response = json.loads(response.text) - content = response["choices"][0]["message"]["content"] - history[-1] = construct_assistant(content) - chatbot[-1] = (chatbot[-1][0], content+display_append) - total_token_count = response["usage"]["total_tokens"] - all_token_counts[-1] = total_token_count - sum(all_token_counts) - status_text = construct_token_message(total_token_count) - return chatbot, history, status_text, all_token_counts - - -def predict( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - stream=False, - selected_model=MODELS[0], - use_websearch=False, - files = None, - reply_language="中文", - should_check_token_count=True, -): # repetition_penalty, top_k - logging.info("输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL) - yield chatbot+[(inputs, "")], history, "开始生成回答……", all_token_counts - if reply_language == "跟随问题语言(不稳定)": - reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch." 
- if files: - msg = "构建索引中……(这可能需要比较久的时间)" - logging.info(msg) - yield chatbot+[(inputs, "")], history, msg, all_token_counts - index = construct_index(openai_api_key, file_src=files) - msg = "索引构建完成,获取回答中……" - yield chatbot+[(inputs, "")], history, msg, all_token_counts - history, chatbot, status_text = chat_ai(openai_api_key, index, inputs, history, chatbot, reply_language) - yield chatbot, history, status_text, all_token_counts - return - - old_inputs = "" - link_references = [] - if use_websearch: - search_results = ddg(inputs, max_results=5) - old_inputs = inputs - web_results = [] - for idx, result in enumerate(search_results): - logging.info(f"搜索结果{idx + 1}:{result}") - domain_name = urllib3.util.parse_url(result["href"]).host - web_results.append(f'[{idx+1}]"{result["body"]}"\nURL: {result["href"]}') - link_references.append(f"{idx+1}. [{domain_name}]({result['href']})\n") - link_references = "\n\n" + "".join(link_references) - inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", inputs) - .replace("{web_results}", "\n\n".join(web_results)) - .replace("{reply_language}", reply_language ) - ) - else: - link_references = "" - - if len(openai_api_key) != 51: - status_text = standard_error_msg + no_apikey_msg - logging.info(status_text) - chatbot.append((inputs, "")) - if len(history) == 0: - history.append(construct_user(inputs)) - history.append("") - all_token_counts.append(0) - else: - history[-2] = construct_user(inputs) - yield chatbot+[(inputs, "")], history, status_text, all_token_counts - return - elif len(inputs.strip()) == 0: - status_text = standard_error_msg + no_input_msg - logging.info(status_text) - yield chatbot+[(inputs, "")], history, status_text, all_token_counts - return - - if stream: - logging.info("使用流式传输") - iter = stream_predict( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - selected_model, - fake_input=old_inputs, - display_append=link_references - ) - for chatbot, history, status_text, all_token_counts in iter: - if shared.state.interrupted: - shared.state.recover() - return - yield chatbot, history, status_text, all_token_counts - else: - logging.info("不使用流式传输") - chatbot, history, status_text, all_token_counts = predict_all( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - all_token_counts, - top_p, - temperature, - selected_model, - fake_input=old_inputs, - display_append=link_references - ) - yield chatbot, history, status_text, all_token_counts - - logging.info(f"传输完毕。当前token计数为{all_token_counts}") - if len(history) > 1 and history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{history[-1]['content']}" - + colorama.Style.RESET_ALL - ) - - if stream: - max_token = max_token_streaming - else: - max_token = max_token_all - - if sum(all_token_counts) > max_token and should_check_token_count: - status_text = f"精简token中{all_token_counts}/{max_token}" - logging.info(status_text) - yield chatbot, history, status_text, all_token_counts - iter = reduce_token_size( - openai_api_key, - system_prompt, - history, - chatbot, - all_token_counts, - top_p, - temperature, - max_token//2, - selected_model=selected_model, - ) - for chatbot, history, status_text, all_token_counts in iter: - status_text = f"Token 达到上限,已自动降低Token计数至 {status_text}" - yield chatbot, history, status_text, all_token_counts - - -def retry( - openai_api_key, - system_prompt, - history, - chatbot, - token_count, - top_p, - temperature, - stream=False, - 
selected_model=MODELS[0], - reply_language="中文", -): - logging.info("重试中……") - if len(history) == 0: - yield chatbot, history, f"{standard_error_msg}上下文是空的", token_count - return - history.pop() - inputs = history.pop()["content"] - token_count.pop() - iter = predict( - openai_api_key, - system_prompt, - history, - inputs, - chatbot, - token_count, - top_p, - temperature, - stream=stream, - selected_model=selected_model, - reply_language=reply_language, - ) - logging.info("重试中……") - for x in iter: - yield x - logging.info("重试完毕") - - -def reduce_token_size( - openai_api_key, - system_prompt, - history, - chatbot, - token_count, - top_p, - temperature, - max_token_count, - selected_model=MODELS[0], - reply_language="中文", -): - logging.info("开始减少token数量……") - iter = predict( - openai_api_key, - system_prompt, - history, - summarize_prompt, - chatbot, - token_count, - top_p, - temperature, - selected_model=selected_model, - should_check_token_count=False, - reply_language=reply_language, - ) - logging.info(f"chatbot: {chatbot}") - flag = False - for chatbot, history, status_text, previous_token_count in iter: - num_chat = find_n(previous_token_count, max_token_count) - if flag: - chatbot = chatbot[:-1] - flag = True - history = history[-2*num_chat:] if num_chat > 0 else [] - token_count = previous_token_count[-num_chat:] if num_chat > 0 else [] - msg = f"保留了最近{num_chat}轮对话" - yield chatbot, history, msg + "," + construct_token_message( - sum(token_count) if len(token_count) > 0 else 0, - ), token_count - logging.info(msg) - logging.info("减少token数量完毕") diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/_core/_subprocesses.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/_core/_subprocesses.py deleted file mode 100644 index 1a26ac8c7ff908341c25d2464972160fbe170a65..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/_core/_subprocesses.py +++ /dev/null @@ -1,135 +0,0 @@ -from __future__ import annotations - -from io import BytesIO -from os import PathLike -from subprocess import DEVNULL, PIPE, CalledProcessError, CompletedProcess -from typing import ( - IO, - Any, - AsyncIterable, - Mapping, - Sequence, - cast, -) - -from ..abc import Process -from ._eventloop import get_asynclib -from ._tasks import create_task_group - - -async def run_process( - command: str | bytes | Sequence[str | bytes], - *, - input: bytes | None = None, - stdout: int | IO[Any] | None = PIPE, - stderr: int | IO[Any] | None = PIPE, - check: bool = True, - cwd: str | bytes | PathLike[str] | None = None, - env: Mapping[str, str] | None = None, - start_new_session: bool = False, -) -> CompletedProcess[bytes]: - """ - Run an external command in a subprocess and wait until it completes. - - .. 
seealso:: :func:`subprocess.run` - - :param command: either a string to pass to the shell, or an iterable of strings containing the - executable name or path and its arguments - :param input: bytes passed to the standard input of the subprocess - :param stdout: either :data:`subprocess.PIPE` or :data:`subprocess.DEVNULL` - :param stderr: one of :data:`subprocess.PIPE`, :data:`subprocess.DEVNULL` or - :data:`subprocess.STDOUT` - :param check: if ``True``, raise :exc:`~subprocess.CalledProcessError` if the process - terminates with a return code other than 0 - :param cwd: If not ``None``, change the working directory to this before running the command - :param env: if not ``None``, this mapping replaces the inherited environment variables from the - parent process - :param start_new_session: if ``true`` the setsid() system call will be made in the child - process prior to the execution of the subprocess. (POSIX only) - :return: an object representing the completed process - :raises ~subprocess.CalledProcessError: if ``check`` is ``True`` and the process exits with a - nonzero return code - - """ - - async def drain_stream(stream: AsyncIterable[bytes], index: int) -> None: - buffer = BytesIO() - async for chunk in stream: - buffer.write(chunk) - - stream_contents[index] = buffer.getvalue() - - async with await open_process( - command, - stdin=PIPE if input else DEVNULL, - stdout=stdout, - stderr=stderr, - cwd=cwd, - env=env, - start_new_session=start_new_session, - ) as process: - stream_contents: list[bytes | None] = [None, None] - try: - async with create_task_group() as tg: - if process.stdout: - tg.start_soon(drain_stream, process.stdout, 0) - if process.stderr: - tg.start_soon(drain_stream, process.stderr, 1) - if process.stdin and input: - await process.stdin.send(input) - await process.stdin.aclose() - - await process.wait() - except BaseException: - process.kill() - raise - - output, errors = stream_contents - if check and process.returncode != 0: - raise CalledProcessError(cast(int, process.returncode), command, output, errors) - - return CompletedProcess(command, cast(int, process.returncode), output, errors) - - -async def open_process( - command: str | bytes | Sequence[str | bytes], - *, - stdin: int | IO[Any] | None = PIPE, - stdout: int | IO[Any] | None = PIPE, - stderr: int | IO[Any] | None = PIPE, - cwd: str | bytes | PathLike[str] | None = None, - env: Mapping[str, str] | None = None, - start_new_session: bool = False, -) -> Process: - """ - Start an external command in a subprocess. - - .. seealso:: :class:`subprocess.Popen` - - :param command: either a string to pass to the shell, or an iterable of strings containing the - executable name or path and its arguments - :param stdin: one of :data:`subprocess.PIPE`, :data:`subprocess.DEVNULL`, a - file-like object, or ``None`` - :param stdout: one of :data:`subprocess.PIPE`, :data:`subprocess.DEVNULL`, - a file-like object, or ``None`` - :param stderr: one of :data:`subprocess.PIPE`, :data:`subprocess.DEVNULL`, - :data:`subprocess.STDOUT`, a file-like object, or ``None`` - :param cwd: If not ``None``, the working directory is changed before executing - :param env: If env is not ``None``, it must be a mapping that defines the environment - variables for the new process - :param start_new_session: if ``true`` the setsid() system call will be made in the child - process prior to the execution of the subprocess. 
(POSIX only) - :return: an asynchronous process object - - """ - shell = isinstance(command, str) - return await get_asynclib().open_process( - command, - shell=shell, - stdin=stdin, - stdout=stdout, - stderr=stderr, - cwd=cwd, - env=env, - start_new_session=start_new_session, - ) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/common.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/common.py deleted file mode 100644 index 5b06d3f4fa9942b0804d78e2bec4eead4e0e9148..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/common.py +++ /dev/null @@ -1,206 +0,0 @@ -import array -import struct -import sys - -from typing import Sequence, MutableSequence, Dict, Optional, Union, Generator - -from clickhouse_connect.driver.exceptions import ProgrammingError, StreamClosedError -from clickhouse_connect.driver.types import Closable - -# pylint: disable=invalid-name -must_swap = sys.byteorder == 'big' -int_size = array.array('i').itemsize -low_card_version = 1 - -array_map = {1: 'b', 2: 'h', 4: 'i', 8: 'q'} -decimal_prec = {32: 9, 64: 18, 128: 38, 256: 79} - -if int_size == 2: - array_map[4] = 'l' - -array_sizes = {v: k for k, v in array_map.items()} -array_sizes['f'] = 4 -array_sizes['d'] = 8 -np_date_types = {0: '[s]', 3: '[ms]', 6: '[us]', 9: '[ns]'} - - -def array_type(size: int, signed: bool): - """ - Determines the Python array.array code for the requested byte size - :param size: byte size - :param signed: whether int types should be signed or unsigned - :return: Python array.array code - """ - try: - code = array_map[size] - except KeyError: - return None - return code if signed else code.upper() - - -def write_array(code: str, column: Sequence, dest: MutableSequence): - """ - Write a column of native Python data matching the array.array code - :param code: Python array.array code matching the column data type - :param column: Column of native Python values - :param dest: Destination byte buffer - """ - if len(column) and not isinstance(column[0], (int, float)): - if code in ('f', 'F', 'd', 'D'): - column = [float(x) for x in column] - else: - column = [int(x) for x in column] - try: - buff = struct.Struct(f'<{len(column)}{code}') - dest += buff.pack(*column) - except (TypeError, OverflowError, struct.error) as ex: - raise ProgrammingError('Unable to create Python array. 
This is usually caused by trying to insert None ' + - 'values into a ClickHouse column that is not Nullable') from ex - - -def write_uint64(value: int, dest: MutableSequence): - """ - Write a single UInt64 value to a binary write buffer - :param value: UInt64 value to write - :param dest: Destination byte buffer - """ - dest.extend(value.to_bytes(8, 'little')) - - -def write_leb128(value: int, dest: MutableSequence): - """ - Write a LEB128 encoded integer to a target binary buffer - :param value: Integer value (positive only) - :param dest: Target buffer - """ - while True: - b = value & 0x7f - value >>= 7 - if value == 0: - dest.append(b) - return - dest.append(0x80 | b) - - -def decimal_size(prec: int): - """ - Determine the bit size of a ClickHouse or Python Decimal needed to store a value of the requested precision - :param prec: Precision of the Decimal in total number of base 10 digits - :return: Required bit size - """ - if prec < 1 or prec > 79: - raise ArithmeticError(f'Invalid precision {prec} for ClickHouse Decimal type') - if prec < 10: - return 32 - if prec < 19: - return 64 - if prec < 39: - return 128 - return 256 - - -def unescape_identifier(x: str) -> str: - if x.startswith('`') and x.endswith('`'): - return x[1:-1] - return x - - -def dict_copy(source: Dict = None, update: Optional[Dict] = None) -> Dict: - copy = source.copy() if source else {} - if update: - copy.update(update) - return copy - - -def empty_gen(): - yield from () - - -def coerce_int(val: Optional[Union[str, int]]) -> int: - if not val: - return 0 - return int(val) - - -def coerce_bool(val: Optional[Union[str, bool]]): - if not val: - return False - return val in (True, 'True', 'true', '1') - - -class SliceView(Sequence): - """ - Provides a view into a sequence rather than copying. Borrows liberally from - https://gist.github.com/mathieucaroff/0cf094325fb5294fb54c6a577f05a2c1 - Also see the discussion on SO: https://stackoverflow.com/questions/3485475/can-i-create-a-view-on-a-python-list - """ - slots = ('_source', '_range') - - def __init__(self, source: Sequence, source_slice: Optional[slice] = None): - if isinstance(source, SliceView): - self._source = source._source - self._range = source._range[source_slice] - else: - self._source = source - if source_slice is None: - self._range = range(len(source)) - else: - self._range = range(len(source))[source_slice] - - def __len__(self): - return len(self._range) - - def __getitem__(self, i): - if isinstance(i, slice): - return SliceView(self._source, i) - return self._source[self._range[i]] - - def __str__(self): - r = self._range - return str(self._source[slice(r.start, r.stop, r.step)]) - - def __repr__(self): - r = self._range - return f'SliceView({self._source[slice(r.start, r.stop, r.step)]})' - - def __eq__(self, other): - if self is other: - return True - if len(self) != len(other): - return False - for v, w in zip(self, other): - if v != w: - return False - return True - - -class StreamContext: - """ - Wraps a generator and its "source" in a Context. 
This ensures that the source will be "closed" even if the - generator is not fully consumed or there is an exception during consumption - """ - __slots__ = 'source', 'gen', '_in_context' - - def __init__(self, source: Closable, gen: Generator): - self.source = source - self.gen = gen - self._in_context = False - - def __iter__(self): - return self - - def __next__(self): - if not self._in_context: - raise ProgrammingError('Stream should be used within a context') - return next(self.gen) - - def __enter__(self): - if not self.gen: - raise StreamClosedError - self._in_context = True - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - self._in_context = False - self.source.close() - self.gen = None diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/pens/cu2quPen.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/pens/cu2quPen.py deleted file mode 100644 index f182aed44a0e8a6dfd906c385f10a5f3a14c332e..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/pens/cu2quPen.py +++ /dev/null @@ -1,325 +0,0 @@ -# Copyright 2016 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import operator -from fontTools.cu2qu import curve_to_quadratic, curves_to_quadratic -from fontTools.pens.basePen import decomposeSuperBezierSegment -from fontTools.pens.filterPen import FilterPen -from fontTools.pens.reverseContourPen import ReverseContourPen -from fontTools.pens.pointPen import BasePointToSegmentPen -from fontTools.pens.pointPen import ReverseContourPointPen - - -class Cu2QuPen(FilterPen): - """A filter pen to convert cubic bezier curves to quadratic b-splines - using the FontTools SegmentPen protocol. - - Args: - - other_pen: another SegmentPen used to draw the transformed outline. - max_err: maximum approximation error in font units. For optimal results, - if you know the UPEM of the font, we recommend setting this to a - value equal, or close to UPEM / 1000. - reverse_direction: flip the contours' direction but keep starting point. - stats: a dictionary counting the point numbers of quadratic segments. - all_quadratic: if True (default), only quadratic b-splines are generated. - if False, quadratic curves or cubic curves are generated depending - on which one is more economical. 
- """ - - def __init__( - self, - other_pen, - max_err, - reverse_direction=False, - stats=None, - all_quadratic=True, - ): - if reverse_direction: - other_pen = ReverseContourPen(other_pen) - super().__init__(other_pen) - self.max_err = max_err - self.stats = stats - self.all_quadratic = all_quadratic - - def _convert_curve(self, pt1, pt2, pt3): - curve = (self.current_pt, pt1, pt2, pt3) - result = curve_to_quadratic(curve, self.max_err, self.all_quadratic) - if self.stats is not None: - n = str(len(result) - 2) - self.stats[n] = self.stats.get(n, 0) + 1 - if self.all_quadratic: - self.qCurveTo(*result[1:]) - else: - if len(result) == 3: - self.qCurveTo(*result[1:]) - else: - assert len(result) == 4 - super().curveTo(*result[1:]) - - def curveTo(self, *points): - n = len(points) - if n == 3: - # this is the most common case, so we special-case it - self._convert_curve(*points) - elif n > 3: - for segment in decomposeSuperBezierSegment(points): - self._convert_curve(*segment) - else: - self.qCurveTo(*points) - - -class Cu2QuPointPen(BasePointToSegmentPen): - """A filter pen to convert cubic bezier curves to quadratic b-splines - using the FontTools PointPen protocol. - - Args: - other_point_pen: another PointPen used to draw the transformed outline. - max_err: maximum approximation error in font units. For optimal results, - if you know the UPEM of the font, we recommend setting this to a - value equal, or close to UPEM / 1000. - reverse_direction: reverse the winding direction of all contours. - stats: a dictionary counting the point numbers of quadratic segments. - all_quadratic: if True (default), only quadratic b-splines are generated. - if False, quadratic curves or cubic curves are generated depending - on which one is more economical. - """ - - __points_required = { - "move": (1, operator.eq), - "line": (1, operator.eq), - "qcurve": (2, operator.ge), - "curve": (3, operator.eq), - } - - def __init__( - self, - other_point_pen, - max_err, - reverse_direction=False, - stats=None, - all_quadratic=True, - ): - BasePointToSegmentPen.__init__(self) - if reverse_direction: - self.pen = ReverseContourPointPen(other_point_pen) - else: - self.pen = other_point_pen - self.max_err = max_err - self.stats = stats - self.all_quadratic = all_quadratic - - def _flushContour(self, segments): - assert len(segments) >= 1 - closed = segments[0][0] != "move" - new_segments = [] - prev_points = segments[-1][1] - prev_on_curve = prev_points[-1][0] - for segment_type, points in segments: - if segment_type == "curve": - for sub_points in self._split_super_bezier_segments(points): - on_curve, smooth, name, kwargs = sub_points[-1] - bcp1, bcp2 = sub_points[0][0], sub_points[1][0] - cubic = [prev_on_curve, bcp1, bcp2, on_curve] - quad = curve_to_quadratic(cubic, self.max_err, self.all_quadratic) - if self.stats is not None: - n = str(len(quad) - 2) - self.stats[n] = self.stats.get(n, 0) + 1 - new_points = [(pt, False, None, {}) for pt in quad[1:-1]] - new_points.append((on_curve, smooth, name, kwargs)) - if self.all_quadratic or len(new_points) == 2: - new_segments.append(["qcurve", new_points]) - else: - new_segments.append(["curve", new_points]) - prev_on_curve = sub_points[-1][0] - else: - new_segments.append([segment_type, points]) - prev_on_curve = points[-1][0] - if closed: - # the BasePointToSegmentPen.endPath method that calls _flushContour - # rotates the point list of closed contours so that they end with - # the first on-curve point. We restore the original starting point. 
- new_segments = new_segments[-1:] + new_segments[:-1] - self._drawPoints(new_segments) - - def _split_super_bezier_segments(self, points): - sub_segments = [] - # n is the number of control points - n = len(points) - 1 - if n == 2: - # a simple bezier curve segment - sub_segments.append(points) - elif n > 2: - # a "super" bezier; decompose it - on_curve, smooth, name, kwargs = points[-1] - num_sub_segments = n - 1 - for i, sub_points in enumerate( - decomposeSuperBezierSegment([pt for pt, _, _, _ in points]) - ): - new_segment = [] - for point in sub_points[:-1]: - new_segment.append((point, False, None, {})) - if i == (num_sub_segments - 1): - # the last on-curve keeps its original attributes - new_segment.append((on_curve, smooth, name, kwargs)) - else: - # on-curves of sub-segments are always "smooth" - new_segment.append((sub_points[-1], True, None, {})) - sub_segments.append(new_segment) - else: - raise AssertionError("expected 2 control points, found: %d" % n) - return sub_segments - - def _drawPoints(self, segments): - pen = self.pen - pen.beginPath() - last_offcurves = [] - points_required = self.__points_required - for i, (segment_type, points) in enumerate(segments): - if segment_type in points_required: - n, op = points_required[segment_type] - assert op(len(points), n), ( - f"illegal {segment_type!r} segment point count: " - f"expected {n}, got {len(points)}" - ) - offcurves = points[:-1] - if i == 0: - # any off-curve points preceding the first on-curve - # will be appended at the end of the contour - last_offcurves = offcurves - else: - for (pt, smooth, name, kwargs) in offcurves: - pen.addPoint(pt, None, smooth, name, **kwargs) - pt, smooth, name, kwargs = points[-1] - if pt is None: - assert segment_type == "qcurve" - # special quadratic contour with no on-curve points: - # we need to skip the "None" point. See also the Pen - # protocol's qCurveTo() method and fontTools.pens.basePen - pass - else: - pen.addPoint(pt, segment_type, smooth, name, **kwargs) - else: - raise AssertionError("unexpected segment type: %r" % segment_type) - for (pt, smooth, name, kwargs) in last_offcurves: - pen.addPoint(pt, None, smooth, name, **kwargs) - pen.endPath() - - def addComponent(self, baseGlyphName, transformation): - assert self.currentPath is None - self.pen.addComponent(baseGlyphName, transformation) - - -class Cu2QuMultiPen: - """A filter multi-pen to convert cubic bezier curves to quadratic b-splines - in a interpolation-compatible manner, using the FontTools SegmentPen protocol. - - Args: - - other_pens: list of SegmentPens used to draw the transformed outlines. - max_err: maximum approximation error in font units. For optimal results, - if you know the UPEM of the font, we recommend setting this to a - value equal, or close to UPEM / 1000. - reverse_direction: flip the contours' direction but keep starting point. - - This pen does not follow the normal SegmentPen protocol. Instead, its - moveTo/lineTo/qCurveTo/curveTo methods take a list of tuples that are - arguments that would normally be passed to a SegmentPen, one item for - each of the pens in other_pens. 
- """ - - # TODO Simplify like 3e8ebcdce592fe8a59ca4c3a294cc9724351e1ce - # Remove start_pts and _add_moveTO - - def __init__(self, other_pens, max_err, reverse_direction=False): - if reverse_direction: - other_pens = [ - ReverseContourPen(pen, outputImpliedClosingLine=True) - for pen in other_pens - ] - self.pens = other_pens - self.max_err = max_err - self.start_pts = None - self.current_pts = None - - def _check_contour_is_open(self): - if self.current_pts is None: - raise AssertionError("moveTo is required") - - def _check_contour_is_closed(self): - if self.current_pts is not None: - raise AssertionError("closePath or endPath is required") - - def _add_moveTo(self): - if self.start_pts is not None: - for pt, pen in zip(self.start_pts, self.pens): - pen.moveTo(*pt) - self.start_pts = None - - def moveTo(self, pts): - self._check_contour_is_closed() - self.start_pts = self.current_pts = pts - self._add_moveTo() - - def lineTo(self, pts): - self._check_contour_is_open() - self._add_moveTo() - for pt, pen in zip(pts, self.pens): - pen.lineTo(*pt) - self.current_pts = pts - - def qCurveTo(self, pointsList): - self._check_contour_is_open() - if len(pointsList[0]) == 1: - self.lineTo([(points[0],) for points in pointsList]) - return - self._add_moveTo() - current_pts = [] - for points, pen in zip(pointsList, self.pens): - pen.qCurveTo(*points) - current_pts.append((points[-1],)) - self.current_pts = current_pts - - def _curves_to_quadratic(self, pointsList): - curves = [] - for current_pt, points in zip(self.current_pts, pointsList): - curves.append(current_pt + points) - quadratics = curves_to_quadratic(curves, [self.max_err] * len(curves)) - pointsList = [] - for quadratic in quadratics: - pointsList.append(quadratic[1:]) - self.qCurveTo(pointsList) - - def curveTo(self, pointsList): - self._check_contour_is_open() - self._curves_to_quadratic(pointsList) - - def closePath(self): - self._check_contour_is_open() - if self.start_pts is None: - for pen in self.pens: - pen.closePath() - self.current_pts = self.start_pts = None - - def endPath(self): - self._check_contour_is_open() - if self.start_pts is None: - for pen in self.pens: - pen.endPath() - self.current_pts = self.start_pts = None - - def addComponent(self, glyphName, transformations): - self._check_contour_is_closed() - for trans, pen in zip(transformations, self.pens): - pen.addComponent(glyphName, trans) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_C_.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_C_.py deleted file mode 100644 index 573b3f9c3970766ea817994509f4939ef4f70f0c..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_C_.py +++ /dev/null @@ -1,5 +0,0 @@ -from .otBase import BaseTTXConverter - - -class table_T_S_I_C_(BaseTTXConverter): - pass diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/unicodedata/Scripts.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/unicodedata/Scripts.py deleted file mode 100644 index 68bb91b396d62b03a8bfd650c64ce0b7375e1e48..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/unicodedata/Scripts.py +++ /dev/null @@ -1,3509 +0,0 @@ -# -*- coding: utf-8 -*- -# -# NOTE: This file was auto-generated with 
MetaTools/buildUCD.py. -# Source: https://unicode.org/Public/UNIDATA/Scripts.txt -# License: http://unicode.org/copyright.html#License -# -# Scripts-15.0.0.txt -# Date: 2022-04-26, 23:15:02 GMT -# © 2022 Unicode®, Inc. -# Unicode and the Unicode Logo are registered trademarks of Unicode, Inc. in the U.S. and other countries. -# For terms of use, see https://www.unicode.org/terms_of_use.html -# -# Unicode Character Database -# For documentation, see https://www.unicode.org/reports/tr44/ -# For more information, see: -# UAX #24, Unicode Script Property: https://www.unicode.org/reports/tr24/ -# Especially the sections: -# https://www.unicode.org/reports/tr24/#Assignment_Script_Values -# https://www.unicode.org/reports/tr24/#Assignment_ScriptX_Values -# - - -RANGES = [ - 0x0000, # .. 0x0040 ; Common - 0x0041, # .. 0x005A ; Latin - 0x005B, # .. 0x0060 ; Common - 0x0061, # .. 0x007A ; Latin - 0x007B, # .. 0x00A9 ; Common - 0x00AA, # .. 0x00AA ; Latin - 0x00AB, # .. 0x00B9 ; Common - 0x00BA, # .. 0x00BA ; Latin - 0x00BB, # .. 0x00BF ; Common - 0x00C0, # .. 0x00D6 ; Latin - 0x00D7, # .. 0x00D7 ; Common - 0x00D8, # .. 0x00F6 ; Latin - 0x00F7, # .. 0x00F7 ; Common - 0x00F8, # .. 0x02B8 ; Latin - 0x02B9, # .. 0x02DF ; Common - 0x02E0, # .. 0x02E4 ; Latin - 0x02E5, # .. 0x02E9 ; Common - 0x02EA, # .. 0x02EB ; Bopomofo - 0x02EC, # .. 0x02FF ; Common - 0x0300, # .. 0x036F ; Inherited - 0x0370, # .. 0x0373 ; Greek - 0x0374, # .. 0x0374 ; Common - 0x0375, # .. 0x0377 ; Greek - 0x0378, # .. 0x0379 ; Unknown - 0x037A, # .. 0x037D ; Greek - 0x037E, # .. 0x037E ; Common - 0x037F, # .. 0x037F ; Greek - 0x0380, # .. 0x0383 ; Unknown - 0x0384, # .. 0x0384 ; Greek - 0x0385, # .. 0x0385 ; Common - 0x0386, # .. 0x0386 ; Greek - 0x0387, # .. 0x0387 ; Common - 0x0388, # .. 0x038A ; Greek - 0x038B, # .. 0x038B ; Unknown - 0x038C, # .. 0x038C ; Greek - 0x038D, # .. 0x038D ; Unknown - 0x038E, # .. 0x03A1 ; Greek - 0x03A2, # .. 0x03A2 ; Unknown - 0x03A3, # .. 0x03E1 ; Greek - 0x03E2, # .. 0x03EF ; Coptic - 0x03F0, # .. 0x03FF ; Greek - 0x0400, # .. 0x0484 ; Cyrillic - 0x0485, # .. 0x0486 ; Inherited - 0x0487, # .. 0x052F ; Cyrillic - 0x0530, # .. 0x0530 ; Unknown - 0x0531, # .. 0x0556 ; Armenian - 0x0557, # .. 0x0558 ; Unknown - 0x0559, # .. 0x058A ; Armenian - 0x058B, # .. 0x058C ; Unknown - 0x058D, # .. 0x058F ; Armenian - 0x0590, # .. 0x0590 ; Unknown - 0x0591, # .. 0x05C7 ; Hebrew - 0x05C8, # .. 0x05CF ; Unknown - 0x05D0, # .. 0x05EA ; Hebrew - 0x05EB, # .. 0x05EE ; Unknown - 0x05EF, # .. 0x05F4 ; Hebrew - 0x05F5, # .. 0x05FF ; Unknown - 0x0600, # .. 0x0604 ; Arabic - 0x0605, # .. 0x0605 ; Common - 0x0606, # .. 0x060B ; Arabic - 0x060C, # .. 0x060C ; Common - 0x060D, # .. 0x061A ; Arabic - 0x061B, # .. 0x061B ; Common - 0x061C, # .. 0x061E ; Arabic - 0x061F, # .. 0x061F ; Common - 0x0620, # .. 0x063F ; Arabic - 0x0640, # .. 0x0640 ; Common - 0x0641, # .. 0x064A ; Arabic - 0x064B, # .. 0x0655 ; Inherited - 0x0656, # .. 0x066F ; Arabic - 0x0670, # .. 0x0670 ; Inherited - 0x0671, # .. 0x06DC ; Arabic - 0x06DD, # .. 0x06DD ; Common - 0x06DE, # .. 0x06FF ; Arabic - 0x0700, # .. 0x070D ; Syriac - 0x070E, # .. 0x070E ; Unknown - 0x070F, # .. 0x074A ; Syriac - 0x074B, # .. 0x074C ; Unknown - 0x074D, # .. 0x074F ; Syriac - 0x0750, # .. 0x077F ; Arabic - 0x0780, # .. 0x07B1 ; Thaana - 0x07B2, # .. 0x07BF ; Unknown - 0x07C0, # .. 0x07FA ; Nko - 0x07FB, # .. 0x07FC ; Unknown - 0x07FD, # .. 0x07FF ; Nko - 0x0800, # .. 0x082D ; Samaritan - 0x082E, # .. 0x082F ; Unknown - 0x0830, # .. 0x083E ; Samaritan - 0x083F, # .. 
0x083F ; Unknown - 0x0840, # .. 0x085B ; Mandaic - 0x085C, # .. 0x085D ; Unknown - 0x085E, # .. 0x085E ; Mandaic - 0x085F, # .. 0x085F ; Unknown - 0x0860, # .. 0x086A ; Syriac - 0x086B, # .. 0x086F ; Unknown - 0x0870, # .. 0x088E ; Arabic - 0x088F, # .. 0x088F ; Unknown - 0x0890, # .. 0x0891 ; Arabic - 0x0892, # .. 0x0897 ; Unknown - 0x0898, # .. 0x08E1 ; Arabic - 0x08E2, # .. 0x08E2 ; Common - 0x08E3, # .. 0x08FF ; Arabic - 0x0900, # .. 0x0950 ; Devanagari - 0x0951, # .. 0x0954 ; Inherited - 0x0955, # .. 0x0963 ; Devanagari - 0x0964, # .. 0x0965 ; Common - 0x0966, # .. 0x097F ; Devanagari - 0x0980, # .. 0x0983 ; Bengali - 0x0984, # .. 0x0984 ; Unknown - 0x0985, # .. 0x098C ; Bengali - 0x098D, # .. 0x098E ; Unknown - 0x098F, # .. 0x0990 ; Bengali - 0x0991, # .. 0x0992 ; Unknown - 0x0993, # .. 0x09A8 ; Bengali - 0x09A9, # .. 0x09A9 ; Unknown - 0x09AA, # .. 0x09B0 ; Bengali - 0x09B1, # .. 0x09B1 ; Unknown - 0x09B2, # .. 0x09B2 ; Bengali - 0x09B3, # .. 0x09B5 ; Unknown - 0x09B6, # .. 0x09B9 ; Bengali - 0x09BA, # .. 0x09BB ; Unknown - 0x09BC, # .. 0x09C4 ; Bengali - 0x09C5, # .. 0x09C6 ; Unknown - 0x09C7, # .. 0x09C8 ; Bengali - 0x09C9, # .. 0x09CA ; Unknown - 0x09CB, # .. 0x09CE ; Bengali - 0x09CF, # .. 0x09D6 ; Unknown - 0x09D7, # .. 0x09D7 ; Bengali - 0x09D8, # .. 0x09DB ; Unknown - 0x09DC, # .. 0x09DD ; Bengali - 0x09DE, # .. 0x09DE ; Unknown - 0x09DF, # .. 0x09E3 ; Bengali - 0x09E4, # .. 0x09E5 ; Unknown - 0x09E6, # .. 0x09FE ; Bengali - 0x09FF, # .. 0x0A00 ; Unknown - 0x0A01, # .. 0x0A03 ; Gurmukhi - 0x0A04, # .. 0x0A04 ; Unknown - 0x0A05, # .. 0x0A0A ; Gurmukhi - 0x0A0B, # .. 0x0A0E ; Unknown - 0x0A0F, # .. 0x0A10 ; Gurmukhi - 0x0A11, # .. 0x0A12 ; Unknown - 0x0A13, # .. 0x0A28 ; Gurmukhi - 0x0A29, # .. 0x0A29 ; Unknown - 0x0A2A, # .. 0x0A30 ; Gurmukhi - 0x0A31, # .. 0x0A31 ; Unknown - 0x0A32, # .. 0x0A33 ; Gurmukhi - 0x0A34, # .. 0x0A34 ; Unknown - 0x0A35, # .. 0x0A36 ; Gurmukhi - 0x0A37, # .. 0x0A37 ; Unknown - 0x0A38, # .. 0x0A39 ; Gurmukhi - 0x0A3A, # .. 0x0A3B ; Unknown - 0x0A3C, # .. 0x0A3C ; Gurmukhi - 0x0A3D, # .. 0x0A3D ; Unknown - 0x0A3E, # .. 0x0A42 ; Gurmukhi - 0x0A43, # .. 0x0A46 ; Unknown - 0x0A47, # .. 0x0A48 ; Gurmukhi - 0x0A49, # .. 0x0A4A ; Unknown - 0x0A4B, # .. 0x0A4D ; Gurmukhi - 0x0A4E, # .. 0x0A50 ; Unknown - 0x0A51, # .. 0x0A51 ; Gurmukhi - 0x0A52, # .. 0x0A58 ; Unknown - 0x0A59, # .. 0x0A5C ; Gurmukhi - 0x0A5D, # .. 0x0A5D ; Unknown - 0x0A5E, # .. 0x0A5E ; Gurmukhi - 0x0A5F, # .. 0x0A65 ; Unknown - 0x0A66, # .. 0x0A76 ; Gurmukhi - 0x0A77, # .. 0x0A80 ; Unknown - 0x0A81, # .. 0x0A83 ; Gujarati - 0x0A84, # .. 0x0A84 ; Unknown - 0x0A85, # .. 0x0A8D ; Gujarati - 0x0A8E, # .. 0x0A8E ; Unknown - 0x0A8F, # .. 0x0A91 ; Gujarati - 0x0A92, # .. 0x0A92 ; Unknown - 0x0A93, # .. 0x0AA8 ; Gujarati - 0x0AA9, # .. 0x0AA9 ; Unknown - 0x0AAA, # .. 0x0AB0 ; Gujarati - 0x0AB1, # .. 0x0AB1 ; Unknown - 0x0AB2, # .. 0x0AB3 ; Gujarati - 0x0AB4, # .. 0x0AB4 ; Unknown - 0x0AB5, # .. 0x0AB9 ; Gujarati - 0x0ABA, # .. 0x0ABB ; Unknown - 0x0ABC, # .. 0x0AC5 ; Gujarati - 0x0AC6, # .. 0x0AC6 ; Unknown - 0x0AC7, # .. 0x0AC9 ; Gujarati - 0x0ACA, # .. 0x0ACA ; Unknown - 0x0ACB, # .. 0x0ACD ; Gujarati - 0x0ACE, # .. 0x0ACF ; Unknown - 0x0AD0, # .. 0x0AD0 ; Gujarati - 0x0AD1, # .. 0x0ADF ; Unknown - 0x0AE0, # .. 0x0AE3 ; Gujarati - 0x0AE4, # .. 0x0AE5 ; Unknown - 0x0AE6, # .. 0x0AF1 ; Gujarati - 0x0AF2, # .. 0x0AF8 ; Unknown - 0x0AF9, # .. 0x0AFF ; Gujarati - 0x0B00, # .. 0x0B00 ; Unknown - 0x0B01, # .. 0x0B03 ; Oriya - 0x0B04, # .. 0x0B04 ; Unknown - 0x0B05, # .. 0x0B0C ; Oriya - 0x0B0D, # .. 
0x0B0E ; Unknown - 0x0B0F, # .. 0x0B10 ; Oriya - 0x0B11, # .. 0x0B12 ; Unknown - 0x0B13, # .. 0x0B28 ; Oriya - 0x0B29, # .. 0x0B29 ; Unknown - 0x0B2A, # .. 0x0B30 ; Oriya - 0x0B31, # .. 0x0B31 ; Unknown - 0x0B32, # .. 0x0B33 ; Oriya - 0x0B34, # .. 0x0B34 ; Unknown - 0x0B35, # .. 0x0B39 ; Oriya - 0x0B3A, # .. 0x0B3B ; Unknown - 0x0B3C, # .. 0x0B44 ; Oriya - 0x0B45, # .. 0x0B46 ; Unknown - 0x0B47, # .. 0x0B48 ; Oriya - 0x0B49, # .. 0x0B4A ; Unknown - 0x0B4B, # .. 0x0B4D ; Oriya - 0x0B4E, # .. 0x0B54 ; Unknown - 0x0B55, # .. 0x0B57 ; Oriya - 0x0B58, # .. 0x0B5B ; Unknown - 0x0B5C, # .. 0x0B5D ; Oriya - 0x0B5E, # .. 0x0B5E ; Unknown - 0x0B5F, # .. 0x0B63 ; Oriya - 0x0B64, # .. 0x0B65 ; Unknown - 0x0B66, # .. 0x0B77 ; Oriya - 0x0B78, # .. 0x0B81 ; Unknown - 0x0B82, # .. 0x0B83 ; Tamil - 0x0B84, # .. 0x0B84 ; Unknown - 0x0B85, # .. 0x0B8A ; Tamil - 0x0B8B, # .. 0x0B8D ; Unknown - 0x0B8E, # .. 0x0B90 ; Tamil - 0x0B91, # .. 0x0B91 ; Unknown - 0x0B92, # .. 0x0B95 ; Tamil - 0x0B96, # .. 0x0B98 ; Unknown - 0x0B99, # .. 0x0B9A ; Tamil - 0x0B9B, # .. 0x0B9B ; Unknown - 0x0B9C, # .. 0x0B9C ; Tamil - 0x0B9D, # .. 0x0B9D ; Unknown - 0x0B9E, # .. 0x0B9F ; Tamil - 0x0BA0, # .. 0x0BA2 ; Unknown - 0x0BA3, # .. 0x0BA4 ; Tamil - 0x0BA5, # .. 0x0BA7 ; Unknown - 0x0BA8, # .. 0x0BAA ; Tamil - 0x0BAB, # .. 0x0BAD ; Unknown - 0x0BAE, # .. 0x0BB9 ; Tamil - 0x0BBA, # .. 0x0BBD ; Unknown - 0x0BBE, # .. 0x0BC2 ; Tamil - 0x0BC3, # .. 0x0BC5 ; Unknown - 0x0BC6, # .. 0x0BC8 ; Tamil - 0x0BC9, # .. 0x0BC9 ; Unknown - 0x0BCA, # .. 0x0BCD ; Tamil - 0x0BCE, # .. 0x0BCF ; Unknown - 0x0BD0, # .. 0x0BD0 ; Tamil - 0x0BD1, # .. 0x0BD6 ; Unknown - 0x0BD7, # .. 0x0BD7 ; Tamil - 0x0BD8, # .. 0x0BE5 ; Unknown - 0x0BE6, # .. 0x0BFA ; Tamil - 0x0BFB, # .. 0x0BFF ; Unknown - 0x0C00, # .. 0x0C0C ; Telugu - 0x0C0D, # .. 0x0C0D ; Unknown - 0x0C0E, # .. 0x0C10 ; Telugu - 0x0C11, # .. 0x0C11 ; Unknown - 0x0C12, # .. 0x0C28 ; Telugu - 0x0C29, # .. 0x0C29 ; Unknown - 0x0C2A, # .. 0x0C39 ; Telugu - 0x0C3A, # .. 0x0C3B ; Unknown - 0x0C3C, # .. 0x0C44 ; Telugu - 0x0C45, # .. 0x0C45 ; Unknown - 0x0C46, # .. 0x0C48 ; Telugu - 0x0C49, # .. 0x0C49 ; Unknown - 0x0C4A, # .. 0x0C4D ; Telugu - 0x0C4E, # .. 0x0C54 ; Unknown - 0x0C55, # .. 0x0C56 ; Telugu - 0x0C57, # .. 0x0C57 ; Unknown - 0x0C58, # .. 0x0C5A ; Telugu - 0x0C5B, # .. 0x0C5C ; Unknown - 0x0C5D, # .. 0x0C5D ; Telugu - 0x0C5E, # .. 0x0C5F ; Unknown - 0x0C60, # .. 0x0C63 ; Telugu - 0x0C64, # .. 0x0C65 ; Unknown - 0x0C66, # .. 0x0C6F ; Telugu - 0x0C70, # .. 0x0C76 ; Unknown - 0x0C77, # .. 0x0C7F ; Telugu - 0x0C80, # .. 0x0C8C ; Kannada - 0x0C8D, # .. 0x0C8D ; Unknown - 0x0C8E, # .. 0x0C90 ; Kannada - 0x0C91, # .. 0x0C91 ; Unknown - 0x0C92, # .. 0x0CA8 ; Kannada - 0x0CA9, # .. 0x0CA9 ; Unknown - 0x0CAA, # .. 0x0CB3 ; Kannada - 0x0CB4, # .. 0x0CB4 ; Unknown - 0x0CB5, # .. 0x0CB9 ; Kannada - 0x0CBA, # .. 0x0CBB ; Unknown - 0x0CBC, # .. 0x0CC4 ; Kannada - 0x0CC5, # .. 0x0CC5 ; Unknown - 0x0CC6, # .. 0x0CC8 ; Kannada - 0x0CC9, # .. 0x0CC9 ; Unknown - 0x0CCA, # .. 0x0CCD ; Kannada - 0x0CCE, # .. 0x0CD4 ; Unknown - 0x0CD5, # .. 0x0CD6 ; Kannada - 0x0CD7, # .. 0x0CDC ; Unknown - 0x0CDD, # .. 0x0CDE ; Kannada - 0x0CDF, # .. 0x0CDF ; Unknown - 0x0CE0, # .. 0x0CE3 ; Kannada - 0x0CE4, # .. 0x0CE5 ; Unknown - 0x0CE6, # .. 0x0CEF ; Kannada - 0x0CF0, # .. 0x0CF0 ; Unknown - 0x0CF1, # .. 0x0CF3 ; Kannada - 0x0CF4, # .. 0x0CFF ; Unknown - 0x0D00, # .. 0x0D0C ; Malayalam - 0x0D0D, # .. 0x0D0D ; Unknown - 0x0D0E, # .. 0x0D10 ; Malayalam - 0x0D11, # .. 0x0D11 ; Unknown - 0x0D12, # .. 0x0D44 ; Malayalam - 0x0D45, # .. 
0x0D45 ; Unknown - 0x0D46, # .. 0x0D48 ; Malayalam - 0x0D49, # .. 0x0D49 ; Unknown - 0x0D4A, # .. 0x0D4F ; Malayalam - 0x0D50, # .. 0x0D53 ; Unknown - 0x0D54, # .. 0x0D63 ; Malayalam - 0x0D64, # .. 0x0D65 ; Unknown - 0x0D66, # .. 0x0D7F ; Malayalam - 0x0D80, # .. 0x0D80 ; Unknown - 0x0D81, # .. 0x0D83 ; Sinhala - 0x0D84, # .. 0x0D84 ; Unknown - 0x0D85, # .. 0x0D96 ; Sinhala - 0x0D97, # .. 0x0D99 ; Unknown - 0x0D9A, # .. 0x0DB1 ; Sinhala - 0x0DB2, # .. 0x0DB2 ; Unknown - 0x0DB3, # .. 0x0DBB ; Sinhala - 0x0DBC, # .. 0x0DBC ; Unknown - 0x0DBD, # .. 0x0DBD ; Sinhala - 0x0DBE, # .. 0x0DBF ; Unknown - 0x0DC0, # .. 0x0DC6 ; Sinhala - 0x0DC7, # .. 0x0DC9 ; Unknown - 0x0DCA, # .. 0x0DCA ; Sinhala - 0x0DCB, # .. 0x0DCE ; Unknown - 0x0DCF, # .. 0x0DD4 ; Sinhala - 0x0DD5, # .. 0x0DD5 ; Unknown - 0x0DD6, # .. 0x0DD6 ; Sinhala - 0x0DD7, # .. 0x0DD7 ; Unknown - 0x0DD8, # .. 0x0DDF ; Sinhala - 0x0DE0, # .. 0x0DE5 ; Unknown - 0x0DE6, # .. 0x0DEF ; Sinhala - 0x0DF0, # .. 0x0DF1 ; Unknown - 0x0DF2, # .. 0x0DF4 ; Sinhala - 0x0DF5, # .. 0x0E00 ; Unknown - 0x0E01, # .. 0x0E3A ; Thai - 0x0E3B, # .. 0x0E3E ; Unknown - 0x0E3F, # .. 0x0E3F ; Common - 0x0E40, # .. 0x0E5B ; Thai - 0x0E5C, # .. 0x0E80 ; Unknown - 0x0E81, # .. 0x0E82 ; Lao - 0x0E83, # .. 0x0E83 ; Unknown - 0x0E84, # .. 0x0E84 ; Lao - 0x0E85, # .. 0x0E85 ; Unknown - 0x0E86, # .. 0x0E8A ; Lao - 0x0E8B, # .. 0x0E8B ; Unknown - 0x0E8C, # .. 0x0EA3 ; Lao - 0x0EA4, # .. 0x0EA4 ; Unknown - 0x0EA5, # .. 0x0EA5 ; Lao - 0x0EA6, # .. 0x0EA6 ; Unknown - 0x0EA7, # .. 0x0EBD ; Lao - 0x0EBE, # .. 0x0EBF ; Unknown - 0x0EC0, # .. 0x0EC4 ; Lao - 0x0EC5, # .. 0x0EC5 ; Unknown - 0x0EC6, # .. 0x0EC6 ; Lao - 0x0EC7, # .. 0x0EC7 ; Unknown - 0x0EC8, # .. 0x0ECE ; Lao - 0x0ECF, # .. 0x0ECF ; Unknown - 0x0ED0, # .. 0x0ED9 ; Lao - 0x0EDA, # .. 0x0EDB ; Unknown - 0x0EDC, # .. 0x0EDF ; Lao - 0x0EE0, # .. 0x0EFF ; Unknown - 0x0F00, # .. 0x0F47 ; Tibetan - 0x0F48, # .. 0x0F48 ; Unknown - 0x0F49, # .. 0x0F6C ; Tibetan - 0x0F6D, # .. 0x0F70 ; Unknown - 0x0F71, # .. 0x0F97 ; Tibetan - 0x0F98, # .. 0x0F98 ; Unknown - 0x0F99, # .. 0x0FBC ; Tibetan - 0x0FBD, # .. 0x0FBD ; Unknown - 0x0FBE, # .. 0x0FCC ; Tibetan - 0x0FCD, # .. 0x0FCD ; Unknown - 0x0FCE, # .. 0x0FD4 ; Tibetan - 0x0FD5, # .. 0x0FD8 ; Common - 0x0FD9, # .. 0x0FDA ; Tibetan - 0x0FDB, # .. 0x0FFF ; Unknown - 0x1000, # .. 0x109F ; Myanmar - 0x10A0, # .. 0x10C5 ; Georgian - 0x10C6, # .. 0x10C6 ; Unknown - 0x10C7, # .. 0x10C7 ; Georgian - 0x10C8, # .. 0x10CC ; Unknown - 0x10CD, # .. 0x10CD ; Georgian - 0x10CE, # .. 0x10CF ; Unknown - 0x10D0, # .. 0x10FA ; Georgian - 0x10FB, # .. 0x10FB ; Common - 0x10FC, # .. 0x10FF ; Georgian - 0x1100, # .. 0x11FF ; Hangul - 0x1200, # .. 0x1248 ; Ethiopic - 0x1249, # .. 0x1249 ; Unknown - 0x124A, # .. 0x124D ; Ethiopic - 0x124E, # .. 0x124F ; Unknown - 0x1250, # .. 0x1256 ; Ethiopic - 0x1257, # .. 0x1257 ; Unknown - 0x1258, # .. 0x1258 ; Ethiopic - 0x1259, # .. 0x1259 ; Unknown - 0x125A, # .. 0x125D ; Ethiopic - 0x125E, # .. 0x125F ; Unknown - 0x1260, # .. 0x1288 ; Ethiopic - 0x1289, # .. 0x1289 ; Unknown - 0x128A, # .. 0x128D ; Ethiopic - 0x128E, # .. 0x128F ; Unknown - 0x1290, # .. 0x12B0 ; Ethiopic - 0x12B1, # .. 0x12B1 ; Unknown - 0x12B2, # .. 0x12B5 ; Ethiopic - 0x12B6, # .. 0x12B7 ; Unknown - 0x12B8, # .. 0x12BE ; Ethiopic - 0x12BF, # .. 0x12BF ; Unknown - 0x12C0, # .. 0x12C0 ; Ethiopic - 0x12C1, # .. 0x12C1 ; Unknown - 0x12C2, # .. 0x12C5 ; Ethiopic - 0x12C6, # .. 0x12C7 ; Unknown - 0x12C8, # .. 0x12D6 ; Ethiopic - 0x12D7, # .. 0x12D7 ; Unknown - 0x12D8, # .. 
0x1310 ; Ethiopic - 0x1311, # .. 0x1311 ; Unknown - 0x1312, # .. 0x1315 ; Ethiopic - 0x1316, # .. 0x1317 ; Unknown - 0x1318, # .. 0x135A ; Ethiopic - 0x135B, # .. 0x135C ; Unknown - 0x135D, # .. 0x137C ; Ethiopic - 0x137D, # .. 0x137F ; Unknown - 0x1380, # .. 0x1399 ; Ethiopic - 0x139A, # .. 0x139F ; Unknown - 0x13A0, # .. 0x13F5 ; Cherokee - 0x13F6, # .. 0x13F7 ; Unknown - 0x13F8, # .. 0x13FD ; Cherokee - 0x13FE, # .. 0x13FF ; Unknown - 0x1400, # .. 0x167F ; Canadian_Aboriginal - 0x1680, # .. 0x169C ; Ogham - 0x169D, # .. 0x169F ; Unknown - 0x16A0, # .. 0x16EA ; Runic - 0x16EB, # .. 0x16ED ; Common - 0x16EE, # .. 0x16F8 ; Runic - 0x16F9, # .. 0x16FF ; Unknown - 0x1700, # .. 0x1715 ; Tagalog - 0x1716, # .. 0x171E ; Unknown - 0x171F, # .. 0x171F ; Tagalog - 0x1720, # .. 0x1734 ; Hanunoo - 0x1735, # .. 0x1736 ; Common - 0x1737, # .. 0x173F ; Unknown - 0x1740, # .. 0x1753 ; Buhid - 0x1754, # .. 0x175F ; Unknown - 0x1760, # .. 0x176C ; Tagbanwa - 0x176D, # .. 0x176D ; Unknown - 0x176E, # .. 0x1770 ; Tagbanwa - 0x1771, # .. 0x1771 ; Unknown - 0x1772, # .. 0x1773 ; Tagbanwa - 0x1774, # .. 0x177F ; Unknown - 0x1780, # .. 0x17DD ; Khmer - 0x17DE, # .. 0x17DF ; Unknown - 0x17E0, # .. 0x17E9 ; Khmer - 0x17EA, # .. 0x17EF ; Unknown - 0x17F0, # .. 0x17F9 ; Khmer - 0x17FA, # .. 0x17FF ; Unknown - 0x1800, # .. 0x1801 ; Mongolian - 0x1802, # .. 0x1803 ; Common - 0x1804, # .. 0x1804 ; Mongolian - 0x1805, # .. 0x1805 ; Common - 0x1806, # .. 0x1819 ; Mongolian - 0x181A, # .. 0x181F ; Unknown - 0x1820, # .. 0x1878 ; Mongolian - 0x1879, # .. 0x187F ; Unknown - 0x1880, # .. 0x18AA ; Mongolian - 0x18AB, # .. 0x18AF ; Unknown - 0x18B0, # .. 0x18F5 ; Canadian_Aboriginal - 0x18F6, # .. 0x18FF ; Unknown - 0x1900, # .. 0x191E ; Limbu - 0x191F, # .. 0x191F ; Unknown - 0x1920, # .. 0x192B ; Limbu - 0x192C, # .. 0x192F ; Unknown - 0x1930, # .. 0x193B ; Limbu - 0x193C, # .. 0x193F ; Unknown - 0x1940, # .. 0x1940 ; Limbu - 0x1941, # .. 0x1943 ; Unknown - 0x1944, # .. 0x194F ; Limbu - 0x1950, # .. 0x196D ; Tai_Le - 0x196E, # .. 0x196F ; Unknown - 0x1970, # .. 0x1974 ; Tai_Le - 0x1975, # .. 0x197F ; Unknown - 0x1980, # .. 0x19AB ; New_Tai_Lue - 0x19AC, # .. 0x19AF ; Unknown - 0x19B0, # .. 0x19C9 ; New_Tai_Lue - 0x19CA, # .. 0x19CF ; Unknown - 0x19D0, # .. 0x19DA ; New_Tai_Lue - 0x19DB, # .. 0x19DD ; Unknown - 0x19DE, # .. 0x19DF ; New_Tai_Lue - 0x19E0, # .. 0x19FF ; Khmer - 0x1A00, # .. 0x1A1B ; Buginese - 0x1A1C, # .. 0x1A1D ; Unknown - 0x1A1E, # .. 0x1A1F ; Buginese - 0x1A20, # .. 0x1A5E ; Tai_Tham - 0x1A5F, # .. 0x1A5F ; Unknown - 0x1A60, # .. 0x1A7C ; Tai_Tham - 0x1A7D, # .. 0x1A7E ; Unknown - 0x1A7F, # .. 0x1A89 ; Tai_Tham - 0x1A8A, # .. 0x1A8F ; Unknown - 0x1A90, # .. 0x1A99 ; Tai_Tham - 0x1A9A, # .. 0x1A9F ; Unknown - 0x1AA0, # .. 0x1AAD ; Tai_Tham - 0x1AAE, # .. 0x1AAF ; Unknown - 0x1AB0, # .. 0x1ACE ; Inherited - 0x1ACF, # .. 0x1AFF ; Unknown - 0x1B00, # .. 0x1B4C ; Balinese - 0x1B4D, # .. 0x1B4F ; Unknown - 0x1B50, # .. 0x1B7E ; Balinese - 0x1B7F, # .. 0x1B7F ; Unknown - 0x1B80, # .. 0x1BBF ; Sundanese - 0x1BC0, # .. 0x1BF3 ; Batak - 0x1BF4, # .. 0x1BFB ; Unknown - 0x1BFC, # .. 0x1BFF ; Batak - 0x1C00, # .. 0x1C37 ; Lepcha - 0x1C38, # .. 0x1C3A ; Unknown - 0x1C3B, # .. 0x1C49 ; Lepcha - 0x1C4A, # .. 0x1C4C ; Unknown - 0x1C4D, # .. 0x1C4F ; Lepcha - 0x1C50, # .. 0x1C7F ; Ol_Chiki - 0x1C80, # .. 0x1C88 ; Cyrillic - 0x1C89, # .. 0x1C8F ; Unknown - 0x1C90, # .. 0x1CBA ; Georgian - 0x1CBB, # .. 0x1CBC ; Unknown - 0x1CBD, # .. 0x1CBF ; Georgian - 0x1CC0, # .. 0x1CC7 ; Sundanese - 0x1CC8, # .. 
0x1CCF ; Unknown - 0x1CD0, # .. 0x1CD2 ; Inherited - 0x1CD3, # .. 0x1CD3 ; Common - 0x1CD4, # .. 0x1CE0 ; Inherited - 0x1CE1, # .. 0x1CE1 ; Common - 0x1CE2, # .. 0x1CE8 ; Inherited - 0x1CE9, # .. 0x1CEC ; Common - 0x1CED, # .. 0x1CED ; Inherited - 0x1CEE, # .. 0x1CF3 ; Common - 0x1CF4, # .. 0x1CF4 ; Inherited - 0x1CF5, # .. 0x1CF7 ; Common - 0x1CF8, # .. 0x1CF9 ; Inherited - 0x1CFA, # .. 0x1CFA ; Common - 0x1CFB, # .. 0x1CFF ; Unknown - 0x1D00, # .. 0x1D25 ; Latin - 0x1D26, # .. 0x1D2A ; Greek - 0x1D2B, # .. 0x1D2B ; Cyrillic - 0x1D2C, # .. 0x1D5C ; Latin - 0x1D5D, # .. 0x1D61 ; Greek - 0x1D62, # .. 0x1D65 ; Latin - 0x1D66, # .. 0x1D6A ; Greek - 0x1D6B, # .. 0x1D77 ; Latin - 0x1D78, # .. 0x1D78 ; Cyrillic - 0x1D79, # .. 0x1DBE ; Latin - 0x1DBF, # .. 0x1DBF ; Greek - 0x1DC0, # .. 0x1DFF ; Inherited - 0x1E00, # .. 0x1EFF ; Latin - 0x1F00, # .. 0x1F15 ; Greek - 0x1F16, # .. 0x1F17 ; Unknown - 0x1F18, # .. 0x1F1D ; Greek - 0x1F1E, # .. 0x1F1F ; Unknown - 0x1F20, # .. 0x1F45 ; Greek - 0x1F46, # .. 0x1F47 ; Unknown - 0x1F48, # .. 0x1F4D ; Greek - 0x1F4E, # .. 0x1F4F ; Unknown - 0x1F50, # .. 0x1F57 ; Greek - 0x1F58, # .. 0x1F58 ; Unknown - 0x1F59, # .. 0x1F59 ; Greek - 0x1F5A, # .. 0x1F5A ; Unknown - 0x1F5B, # .. 0x1F5B ; Greek - 0x1F5C, # .. 0x1F5C ; Unknown - 0x1F5D, # .. 0x1F5D ; Greek - 0x1F5E, # .. 0x1F5E ; Unknown - 0x1F5F, # .. 0x1F7D ; Greek - 0x1F7E, # .. 0x1F7F ; Unknown - 0x1F80, # .. 0x1FB4 ; Greek - 0x1FB5, # .. 0x1FB5 ; Unknown - 0x1FB6, # .. 0x1FC4 ; Greek - 0x1FC5, # .. 0x1FC5 ; Unknown - 0x1FC6, # .. 0x1FD3 ; Greek - 0x1FD4, # .. 0x1FD5 ; Unknown - 0x1FD6, # .. 0x1FDB ; Greek - 0x1FDC, # .. 0x1FDC ; Unknown - 0x1FDD, # .. 0x1FEF ; Greek - 0x1FF0, # .. 0x1FF1 ; Unknown - 0x1FF2, # .. 0x1FF4 ; Greek - 0x1FF5, # .. 0x1FF5 ; Unknown - 0x1FF6, # .. 0x1FFE ; Greek - 0x1FFF, # .. 0x1FFF ; Unknown - 0x2000, # .. 0x200B ; Common - 0x200C, # .. 0x200D ; Inherited - 0x200E, # .. 0x2064 ; Common - 0x2065, # .. 0x2065 ; Unknown - 0x2066, # .. 0x2070 ; Common - 0x2071, # .. 0x2071 ; Latin - 0x2072, # .. 0x2073 ; Unknown - 0x2074, # .. 0x207E ; Common - 0x207F, # .. 0x207F ; Latin - 0x2080, # .. 0x208E ; Common - 0x208F, # .. 0x208F ; Unknown - 0x2090, # .. 0x209C ; Latin - 0x209D, # .. 0x209F ; Unknown - 0x20A0, # .. 0x20C0 ; Common - 0x20C1, # .. 0x20CF ; Unknown - 0x20D0, # .. 0x20F0 ; Inherited - 0x20F1, # .. 0x20FF ; Unknown - 0x2100, # .. 0x2125 ; Common - 0x2126, # .. 0x2126 ; Greek - 0x2127, # .. 0x2129 ; Common - 0x212A, # .. 0x212B ; Latin - 0x212C, # .. 0x2131 ; Common - 0x2132, # .. 0x2132 ; Latin - 0x2133, # .. 0x214D ; Common - 0x214E, # .. 0x214E ; Latin - 0x214F, # .. 0x215F ; Common - 0x2160, # .. 0x2188 ; Latin - 0x2189, # .. 0x218B ; Common - 0x218C, # .. 0x218F ; Unknown - 0x2190, # .. 0x2426 ; Common - 0x2427, # .. 0x243F ; Unknown - 0x2440, # .. 0x244A ; Common - 0x244B, # .. 0x245F ; Unknown - 0x2460, # .. 0x27FF ; Common - 0x2800, # .. 0x28FF ; Braille - 0x2900, # .. 0x2B73 ; Common - 0x2B74, # .. 0x2B75 ; Unknown - 0x2B76, # .. 0x2B95 ; Common - 0x2B96, # .. 0x2B96 ; Unknown - 0x2B97, # .. 0x2BFF ; Common - 0x2C00, # .. 0x2C5F ; Glagolitic - 0x2C60, # .. 0x2C7F ; Latin - 0x2C80, # .. 0x2CF3 ; Coptic - 0x2CF4, # .. 0x2CF8 ; Unknown - 0x2CF9, # .. 0x2CFF ; Coptic - 0x2D00, # .. 0x2D25 ; Georgian - 0x2D26, # .. 0x2D26 ; Unknown - 0x2D27, # .. 0x2D27 ; Georgian - 0x2D28, # .. 0x2D2C ; Unknown - 0x2D2D, # .. 0x2D2D ; Georgian - 0x2D2E, # .. 0x2D2F ; Unknown - 0x2D30, # .. 0x2D67 ; Tifinagh - 0x2D68, # .. 0x2D6E ; Unknown - 0x2D6F, # .. 0x2D70 ; Tifinagh - 0x2D71, # .. 
0x2D7E ; Unknown - 0x2D7F, # .. 0x2D7F ; Tifinagh - 0x2D80, # .. 0x2D96 ; Ethiopic - 0x2D97, # .. 0x2D9F ; Unknown - 0x2DA0, # .. 0x2DA6 ; Ethiopic - 0x2DA7, # .. 0x2DA7 ; Unknown - 0x2DA8, # .. 0x2DAE ; Ethiopic - 0x2DAF, # .. 0x2DAF ; Unknown - 0x2DB0, # .. 0x2DB6 ; Ethiopic - 0x2DB7, # .. 0x2DB7 ; Unknown - 0x2DB8, # .. 0x2DBE ; Ethiopic - 0x2DBF, # .. 0x2DBF ; Unknown - 0x2DC0, # .. 0x2DC6 ; Ethiopic - 0x2DC7, # .. 0x2DC7 ; Unknown - 0x2DC8, # .. 0x2DCE ; Ethiopic - 0x2DCF, # .. 0x2DCF ; Unknown - 0x2DD0, # .. 0x2DD6 ; Ethiopic - 0x2DD7, # .. 0x2DD7 ; Unknown - 0x2DD8, # .. 0x2DDE ; Ethiopic - 0x2DDF, # .. 0x2DDF ; Unknown - 0x2DE0, # .. 0x2DFF ; Cyrillic - 0x2E00, # .. 0x2E5D ; Common - 0x2E5E, # .. 0x2E7F ; Unknown - 0x2E80, # .. 0x2E99 ; Han - 0x2E9A, # .. 0x2E9A ; Unknown - 0x2E9B, # .. 0x2EF3 ; Han - 0x2EF4, # .. 0x2EFF ; Unknown - 0x2F00, # .. 0x2FD5 ; Han - 0x2FD6, # .. 0x2FEF ; Unknown - 0x2FF0, # .. 0x2FFB ; Common - 0x2FFC, # .. 0x2FFF ; Unknown - 0x3000, # .. 0x3004 ; Common - 0x3005, # .. 0x3005 ; Han - 0x3006, # .. 0x3006 ; Common - 0x3007, # .. 0x3007 ; Han - 0x3008, # .. 0x3020 ; Common - 0x3021, # .. 0x3029 ; Han - 0x302A, # .. 0x302D ; Inherited - 0x302E, # .. 0x302F ; Hangul - 0x3030, # .. 0x3037 ; Common - 0x3038, # .. 0x303B ; Han - 0x303C, # .. 0x303F ; Common - 0x3040, # .. 0x3040 ; Unknown - 0x3041, # .. 0x3096 ; Hiragana - 0x3097, # .. 0x3098 ; Unknown - 0x3099, # .. 0x309A ; Inherited - 0x309B, # .. 0x309C ; Common - 0x309D, # .. 0x309F ; Hiragana - 0x30A0, # .. 0x30A0 ; Common - 0x30A1, # .. 0x30FA ; Katakana - 0x30FB, # .. 0x30FC ; Common - 0x30FD, # .. 0x30FF ; Katakana - 0x3100, # .. 0x3104 ; Unknown - 0x3105, # .. 0x312F ; Bopomofo - 0x3130, # .. 0x3130 ; Unknown - 0x3131, # .. 0x318E ; Hangul - 0x318F, # .. 0x318F ; Unknown - 0x3190, # .. 0x319F ; Common - 0x31A0, # .. 0x31BF ; Bopomofo - 0x31C0, # .. 0x31E3 ; Common - 0x31E4, # .. 0x31EF ; Unknown - 0x31F0, # .. 0x31FF ; Katakana - 0x3200, # .. 0x321E ; Hangul - 0x321F, # .. 0x321F ; Unknown - 0x3220, # .. 0x325F ; Common - 0x3260, # .. 0x327E ; Hangul - 0x327F, # .. 0x32CF ; Common - 0x32D0, # .. 0x32FE ; Katakana - 0x32FF, # .. 0x32FF ; Common - 0x3300, # .. 0x3357 ; Katakana - 0x3358, # .. 0x33FF ; Common - 0x3400, # .. 0x4DBF ; Han - 0x4DC0, # .. 0x4DFF ; Common - 0x4E00, # .. 0x9FFF ; Han - 0xA000, # .. 0xA48C ; Yi - 0xA48D, # .. 0xA48F ; Unknown - 0xA490, # .. 0xA4C6 ; Yi - 0xA4C7, # .. 0xA4CF ; Unknown - 0xA4D0, # .. 0xA4FF ; Lisu - 0xA500, # .. 0xA62B ; Vai - 0xA62C, # .. 0xA63F ; Unknown - 0xA640, # .. 0xA69F ; Cyrillic - 0xA6A0, # .. 0xA6F7 ; Bamum - 0xA6F8, # .. 0xA6FF ; Unknown - 0xA700, # .. 0xA721 ; Common - 0xA722, # .. 0xA787 ; Latin - 0xA788, # .. 0xA78A ; Common - 0xA78B, # .. 0xA7CA ; Latin - 0xA7CB, # .. 0xA7CF ; Unknown - 0xA7D0, # .. 0xA7D1 ; Latin - 0xA7D2, # .. 0xA7D2 ; Unknown - 0xA7D3, # .. 0xA7D3 ; Latin - 0xA7D4, # .. 0xA7D4 ; Unknown - 0xA7D5, # .. 0xA7D9 ; Latin - 0xA7DA, # .. 0xA7F1 ; Unknown - 0xA7F2, # .. 0xA7FF ; Latin - 0xA800, # .. 0xA82C ; Syloti_Nagri - 0xA82D, # .. 0xA82F ; Unknown - 0xA830, # .. 0xA839 ; Common - 0xA83A, # .. 0xA83F ; Unknown - 0xA840, # .. 0xA877 ; Phags_Pa - 0xA878, # .. 0xA87F ; Unknown - 0xA880, # .. 0xA8C5 ; Saurashtra - 0xA8C6, # .. 0xA8CD ; Unknown - 0xA8CE, # .. 0xA8D9 ; Saurashtra - 0xA8DA, # .. 0xA8DF ; Unknown - 0xA8E0, # .. 0xA8FF ; Devanagari - 0xA900, # .. 0xA92D ; Kayah_Li - 0xA92E, # .. 0xA92E ; Common - 0xA92F, # .. 0xA92F ; Kayah_Li - 0xA930, # .. 0xA953 ; Rejang - 0xA954, # .. 0xA95E ; Unknown - 0xA95F, # .. 
0xA95F ; Rejang - 0xA960, # .. 0xA97C ; Hangul - 0xA97D, # .. 0xA97F ; Unknown - 0xA980, # .. 0xA9CD ; Javanese - 0xA9CE, # .. 0xA9CE ; Unknown - 0xA9CF, # .. 0xA9CF ; Common - 0xA9D0, # .. 0xA9D9 ; Javanese - 0xA9DA, # .. 0xA9DD ; Unknown - 0xA9DE, # .. 0xA9DF ; Javanese - 0xA9E0, # .. 0xA9FE ; Myanmar - 0xA9FF, # .. 0xA9FF ; Unknown - 0xAA00, # .. 0xAA36 ; Cham - 0xAA37, # .. 0xAA3F ; Unknown - 0xAA40, # .. 0xAA4D ; Cham - 0xAA4E, # .. 0xAA4F ; Unknown - 0xAA50, # .. 0xAA59 ; Cham - 0xAA5A, # .. 0xAA5B ; Unknown - 0xAA5C, # .. 0xAA5F ; Cham - 0xAA60, # .. 0xAA7F ; Myanmar - 0xAA80, # .. 0xAAC2 ; Tai_Viet - 0xAAC3, # .. 0xAADA ; Unknown - 0xAADB, # .. 0xAADF ; Tai_Viet - 0xAAE0, # .. 0xAAF6 ; Meetei_Mayek - 0xAAF7, # .. 0xAB00 ; Unknown - 0xAB01, # .. 0xAB06 ; Ethiopic - 0xAB07, # .. 0xAB08 ; Unknown - 0xAB09, # .. 0xAB0E ; Ethiopic - 0xAB0F, # .. 0xAB10 ; Unknown - 0xAB11, # .. 0xAB16 ; Ethiopic - 0xAB17, # .. 0xAB1F ; Unknown - 0xAB20, # .. 0xAB26 ; Ethiopic - 0xAB27, # .. 0xAB27 ; Unknown - 0xAB28, # .. 0xAB2E ; Ethiopic - 0xAB2F, # .. 0xAB2F ; Unknown - 0xAB30, # .. 0xAB5A ; Latin - 0xAB5B, # .. 0xAB5B ; Common - 0xAB5C, # .. 0xAB64 ; Latin - 0xAB65, # .. 0xAB65 ; Greek - 0xAB66, # .. 0xAB69 ; Latin - 0xAB6A, # .. 0xAB6B ; Common - 0xAB6C, # .. 0xAB6F ; Unknown - 0xAB70, # .. 0xABBF ; Cherokee - 0xABC0, # .. 0xABED ; Meetei_Mayek - 0xABEE, # .. 0xABEF ; Unknown - 0xABF0, # .. 0xABF9 ; Meetei_Mayek - 0xABFA, # .. 0xABFF ; Unknown - 0xAC00, # .. 0xD7A3 ; Hangul - 0xD7A4, # .. 0xD7AF ; Unknown - 0xD7B0, # .. 0xD7C6 ; Hangul - 0xD7C7, # .. 0xD7CA ; Unknown - 0xD7CB, # .. 0xD7FB ; Hangul - 0xD7FC, # .. 0xF8FF ; Unknown - 0xF900, # .. 0xFA6D ; Han - 0xFA6E, # .. 0xFA6F ; Unknown - 0xFA70, # .. 0xFAD9 ; Han - 0xFADA, # .. 0xFAFF ; Unknown - 0xFB00, # .. 0xFB06 ; Latin - 0xFB07, # .. 0xFB12 ; Unknown - 0xFB13, # .. 0xFB17 ; Armenian - 0xFB18, # .. 0xFB1C ; Unknown - 0xFB1D, # .. 0xFB36 ; Hebrew - 0xFB37, # .. 0xFB37 ; Unknown - 0xFB38, # .. 0xFB3C ; Hebrew - 0xFB3D, # .. 0xFB3D ; Unknown - 0xFB3E, # .. 0xFB3E ; Hebrew - 0xFB3F, # .. 0xFB3F ; Unknown - 0xFB40, # .. 0xFB41 ; Hebrew - 0xFB42, # .. 0xFB42 ; Unknown - 0xFB43, # .. 0xFB44 ; Hebrew - 0xFB45, # .. 0xFB45 ; Unknown - 0xFB46, # .. 0xFB4F ; Hebrew - 0xFB50, # .. 0xFBC2 ; Arabic - 0xFBC3, # .. 0xFBD2 ; Unknown - 0xFBD3, # .. 0xFD3D ; Arabic - 0xFD3E, # .. 0xFD3F ; Common - 0xFD40, # .. 0xFD8F ; Arabic - 0xFD90, # .. 0xFD91 ; Unknown - 0xFD92, # .. 0xFDC7 ; Arabic - 0xFDC8, # .. 0xFDCE ; Unknown - 0xFDCF, # .. 0xFDCF ; Arabic - 0xFDD0, # .. 0xFDEF ; Unknown - 0xFDF0, # .. 0xFDFF ; Arabic - 0xFE00, # .. 0xFE0F ; Inherited - 0xFE10, # .. 0xFE19 ; Common - 0xFE1A, # .. 0xFE1F ; Unknown - 0xFE20, # .. 0xFE2D ; Inherited - 0xFE2E, # .. 0xFE2F ; Cyrillic - 0xFE30, # .. 0xFE52 ; Common - 0xFE53, # .. 0xFE53 ; Unknown - 0xFE54, # .. 0xFE66 ; Common - 0xFE67, # .. 0xFE67 ; Unknown - 0xFE68, # .. 0xFE6B ; Common - 0xFE6C, # .. 0xFE6F ; Unknown - 0xFE70, # .. 0xFE74 ; Arabic - 0xFE75, # .. 0xFE75 ; Unknown - 0xFE76, # .. 0xFEFC ; Arabic - 0xFEFD, # .. 0xFEFE ; Unknown - 0xFEFF, # .. 0xFEFF ; Common - 0xFF00, # .. 0xFF00 ; Unknown - 0xFF01, # .. 0xFF20 ; Common - 0xFF21, # .. 0xFF3A ; Latin - 0xFF3B, # .. 0xFF40 ; Common - 0xFF41, # .. 0xFF5A ; Latin - 0xFF5B, # .. 0xFF65 ; Common - 0xFF66, # .. 0xFF6F ; Katakana - 0xFF70, # .. 0xFF70 ; Common - 0xFF71, # .. 0xFF9D ; Katakana - 0xFF9E, # .. 0xFF9F ; Common - 0xFFA0, # .. 0xFFBE ; Hangul - 0xFFBF, # .. 0xFFC1 ; Unknown - 0xFFC2, # .. 0xFFC7 ; Hangul - 0xFFC8, # .. 0xFFC9 ; Unknown - 0xFFCA, # .. 
0xFFCF ; Hangul - 0xFFD0, # .. 0xFFD1 ; Unknown - 0xFFD2, # .. 0xFFD7 ; Hangul - 0xFFD8, # .. 0xFFD9 ; Unknown - 0xFFDA, # .. 0xFFDC ; Hangul - 0xFFDD, # .. 0xFFDF ; Unknown - 0xFFE0, # .. 0xFFE6 ; Common - 0xFFE7, # .. 0xFFE7 ; Unknown - 0xFFE8, # .. 0xFFEE ; Common - 0xFFEF, # .. 0xFFF8 ; Unknown - 0xFFF9, # .. 0xFFFD ; Common - 0xFFFE, # .. 0xFFFF ; Unknown - 0x10000, # .. 0x1000B ; Linear_B - 0x1000C, # .. 0x1000C ; Unknown - 0x1000D, # .. 0x10026 ; Linear_B - 0x10027, # .. 0x10027 ; Unknown - 0x10028, # .. 0x1003A ; Linear_B - 0x1003B, # .. 0x1003B ; Unknown - 0x1003C, # .. 0x1003D ; Linear_B - 0x1003E, # .. 0x1003E ; Unknown - 0x1003F, # .. 0x1004D ; Linear_B - 0x1004E, # .. 0x1004F ; Unknown - 0x10050, # .. 0x1005D ; Linear_B - 0x1005E, # .. 0x1007F ; Unknown - 0x10080, # .. 0x100FA ; Linear_B - 0x100FB, # .. 0x100FF ; Unknown - 0x10100, # .. 0x10102 ; Common - 0x10103, # .. 0x10106 ; Unknown - 0x10107, # .. 0x10133 ; Common - 0x10134, # .. 0x10136 ; Unknown - 0x10137, # .. 0x1013F ; Common - 0x10140, # .. 0x1018E ; Greek - 0x1018F, # .. 0x1018F ; Unknown - 0x10190, # .. 0x1019C ; Common - 0x1019D, # .. 0x1019F ; Unknown - 0x101A0, # .. 0x101A0 ; Greek - 0x101A1, # .. 0x101CF ; Unknown - 0x101D0, # .. 0x101FC ; Common - 0x101FD, # .. 0x101FD ; Inherited - 0x101FE, # .. 0x1027F ; Unknown - 0x10280, # .. 0x1029C ; Lycian - 0x1029D, # .. 0x1029F ; Unknown - 0x102A0, # .. 0x102D0 ; Carian - 0x102D1, # .. 0x102DF ; Unknown - 0x102E0, # .. 0x102E0 ; Inherited - 0x102E1, # .. 0x102FB ; Common - 0x102FC, # .. 0x102FF ; Unknown - 0x10300, # .. 0x10323 ; Old_Italic - 0x10324, # .. 0x1032C ; Unknown - 0x1032D, # .. 0x1032F ; Old_Italic - 0x10330, # .. 0x1034A ; Gothic - 0x1034B, # .. 0x1034F ; Unknown - 0x10350, # .. 0x1037A ; Old_Permic - 0x1037B, # .. 0x1037F ; Unknown - 0x10380, # .. 0x1039D ; Ugaritic - 0x1039E, # .. 0x1039E ; Unknown - 0x1039F, # .. 0x1039F ; Ugaritic - 0x103A0, # .. 0x103C3 ; Old_Persian - 0x103C4, # .. 0x103C7 ; Unknown - 0x103C8, # .. 0x103D5 ; Old_Persian - 0x103D6, # .. 0x103FF ; Unknown - 0x10400, # .. 0x1044F ; Deseret - 0x10450, # .. 0x1047F ; Shavian - 0x10480, # .. 0x1049D ; Osmanya - 0x1049E, # .. 0x1049F ; Unknown - 0x104A0, # .. 0x104A9 ; Osmanya - 0x104AA, # .. 0x104AF ; Unknown - 0x104B0, # .. 0x104D3 ; Osage - 0x104D4, # .. 0x104D7 ; Unknown - 0x104D8, # .. 0x104FB ; Osage - 0x104FC, # .. 0x104FF ; Unknown - 0x10500, # .. 0x10527 ; Elbasan - 0x10528, # .. 0x1052F ; Unknown - 0x10530, # .. 0x10563 ; Caucasian_Albanian - 0x10564, # .. 0x1056E ; Unknown - 0x1056F, # .. 0x1056F ; Caucasian_Albanian - 0x10570, # .. 0x1057A ; Vithkuqi - 0x1057B, # .. 0x1057B ; Unknown - 0x1057C, # .. 0x1058A ; Vithkuqi - 0x1058B, # .. 0x1058B ; Unknown - 0x1058C, # .. 0x10592 ; Vithkuqi - 0x10593, # .. 0x10593 ; Unknown - 0x10594, # .. 0x10595 ; Vithkuqi - 0x10596, # .. 0x10596 ; Unknown - 0x10597, # .. 0x105A1 ; Vithkuqi - 0x105A2, # .. 0x105A2 ; Unknown - 0x105A3, # .. 0x105B1 ; Vithkuqi - 0x105B2, # .. 0x105B2 ; Unknown - 0x105B3, # .. 0x105B9 ; Vithkuqi - 0x105BA, # .. 0x105BA ; Unknown - 0x105BB, # .. 0x105BC ; Vithkuqi - 0x105BD, # .. 0x105FF ; Unknown - 0x10600, # .. 0x10736 ; Linear_A - 0x10737, # .. 0x1073F ; Unknown - 0x10740, # .. 0x10755 ; Linear_A - 0x10756, # .. 0x1075F ; Unknown - 0x10760, # .. 0x10767 ; Linear_A - 0x10768, # .. 0x1077F ; Unknown - 0x10780, # .. 0x10785 ; Latin - 0x10786, # .. 0x10786 ; Unknown - 0x10787, # .. 0x107B0 ; Latin - 0x107B1, # .. 0x107B1 ; Unknown - 0x107B2, # .. 0x107BA ; Latin - 0x107BB, # .. 0x107FF ; Unknown - 0x10800, # .. 
0x10805 ; Cypriot - 0x10806, # .. 0x10807 ; Unknown - 0x10808, # .. 0x10808 ; Cypriot - 0x10809, # .. 0x10809 ; Unknown - 0x1080A, # .. 0x10835 ; Cypriot - 0x10836, # .. 0x10836 ; Unknown - 0x10837, # .. 0x10838 ; Cypriot - 0x10839, # .. 0x1083B ; Unknown - 0x1083C, # .. 0x1083C ; Cypriot - 0x1083D, # .. 0x1083E ; Unknown - 0x1083F, # .. 0x1083F ; Cypriot - 0x10840, # .. 0x10855 ; Imperial_Aramaic - 0x10856, # .. 0x10856 ; Unknown - 0x10857, # .. 0x1085F ; Imperial_Aramaic - 0x10860, # .. 0x1087F ; Palmyrene - 0x10880, # .. 0x1089E ; Nabataean - 0x1089F, # .. 0x108A6 ; Unknown - 0x108A7, # .. 0x108AF ; Nabataean - 0x108B0, # .. 0x108DF ; Unknown - 0x108E0, # .. 0x108F2 ; Hatran - 0x108F3, # .. 0x108F3 ; Unknown - 0x108F4, # .. 0x108F5 ; Hatran - 0x108F6, # .. 0x108FA ; Unknown - 0x108FB, # .. 0x108FF ; Hatran - 0x10900, # .. 0x1091B ; Phoenician - 0x1091C, # .. 0x1091E ; Unknown - 0x1091F, # .. 0x1091F ; Phoenician - 0x10920, # .. 0x10939 ; Lydian - 0x1093A, # .. 0x1093E ; Unknown - 0x1093F, # .. 0x1093F ; Lydian - 0x10940, # .. 0x1097F ; Unknown - 0x10980, # .. 0x1099F ; Meroitic_Hieroglyphs - 0x109A0, # .. 0x109B7 ; Meroitic_Cursive - 0x109B8, # .. 0x109BB ; Unknown - 0x109BC, # .. 0x109CF ; Meroitic_Cursive - 0x109D0, # .. 0x109D1 ; Unknown - 0x109D2, # .. 0x109FF ; Meroitic_Cursive - 0x10A00, # .. 0x10A03 ; Kharoshthi - 0x10A04, # .. 0x10A04 ; Unknown - 0x10A05, # .. 0x10A06 ; Kharoshthi - 0x10A07, # .. 0x10A0B ; Unknown - 0x10A0C, # .. 0x10A13 ; Kharoshthi - 0x10A14, # .. 0x10A14 ; Unknown - 0x10A15, # .. 0x10A17 ; Kharoshthi - 0x10A18, # .. 0x10A18 ; Unknown - 0x10A19, # .. 0x10A35 ; Kharoshthi - 0x10A36, # .. 0x10A37 ; Unknown - 0x10A38, # .. 0x10A3A ; Kharoshthi - 0x10A3B, # .. 0x10A3E ; Unknown - 0x10A3F, # .. 0x10A48 ; Kharoshthi - 0x10A49, # .. 0x10A4F ; Unknown - 0x10A50, # .. 0x10A58 ; Kharoshthi - 0x10A59, # .. 0x10A5F ; Unknown - 0x10A60, # .. 0x10A7F ; Old_South_Arabian - 0x10A80, # .. 0x10A9F ; Old_North_Arabian - 0x10AA0, # .. 0x10ABF ; Unknown - 0x10AC0, # .. 0x10AE6 ; Manichaean - 0x10AE7, # .. 0x10AEA ; Unknown - 0x10AEB, # .. 0x10AF6 ; Manichaean - 0x10AF7, # .. 0x10AFF ; Unknown - 0x10B00, # .. 0x10B35 ; Avestan - 0x10B36, # .. 0x10B38 ; Unknown - 0x10B39, # .. 0x10B3F ; Avestan - 0x10B40, # .. 0x10B55 ; Inscriptional_Parthian - 0x10B56, # .. 0x10B57 ; Unknown - 0x10B58, # .. 0x10B5F ; Inscriptional_Parthian - 0x10B60, # .. 0x10B72 ; Inscriptional_Pahlavi - 0x10B73, # .. 0x10B77 ; Unknown - 0x10B78, # .. 0x10B7F ; Inscriptional_Pahlavi - 0x10B80, # .. 0x10B91 ; Psalter_Pahlavi - 0x10B92, # .. 0x10B98 ; Unknown - 0x10B99, # .. 0x10B9C ; Psalter_Pahlavi - 0x10B9D, # .. 0x10BA8 ; Unknown - 0x10BA9, # .. 0x10BAF ; Psalter_Pahlavi - 0x10BB0, # .. 0x10BFF ; Unknown - 0x10C00, # .. 0x10C48 ; Old_Turkic - 0x10C49, # .. 0x10C7F ; Unknown - 0x10C80, # .. 0x10CB2 ; Old_Hungarian - 0x10CB3, # .. 0x10CBF ; Unknown - 0x10CC0, # .. 0x10CF2 ; Old_Hungarian - 0x10CF3, # .. 0x10CF9 ; Unknown - 0x10CFA, # .. 0x10CFF ; Old_Hungarian - 0x10D00, # .. 0x10D27 ; Hanifi_Rohingya - 0x10D28, # .. 0x10D2F ; Unknown - 0x10D30, # .. 0x10D39 ; Hanifi_Rohingya - 0x10D3A, # .. 0x10E5F ; Unknown - 0x10E60, # .. 0x10E7E ; Arabic - 0x10E7F, # .. 0x10E7F ; Unknown - 0x10E80, # .. 0x10EA9 ; Yezidi - 0x10EAA, # .. 0x10EAA ; Unknown - 0x10EAB, # .. 0x10EAD ; Yezidi - 0x10EAE, # .. 0x10EAF ; Unknown - 0x10EB0, # .. 0x10EB1 ; Yezidi - 0x10EB2, # .. 0x10EFC ; Unknown - 0x10EFD, # .. 0x10EFF ; Arabic - 0x10F00, # .. 0x10F27 ; Old_Sogdian - 0x10F28, # .. 0x10F2F ; Unknown - 0x10F30, # .. 
0x10F59 ; Sogdian - 0x10F5A, # .. 0x10F6F ; Unknown - 0x10F70, # .. 0x10F89 ; Old_Uyghur - 0x10F8A, # .. 0x10FAF ; Unknown - 0x10FB0, # .. 0x10FCB ; Chorasmian - 0x10FCC, # .. 0x10FDF ; Unknown - 0x10FE0, # .. 0x10FF6 ; Elymaic - 0x10FF7, # .. 0x10FFF ; Unknown - 0x11000, # .. 0x1104D ; Brahmi - 0x1104E, # .. 0x11051 ; Unknown - 0x11052, # .. 0x11075 ; Brahmi - 0x11076, # .. 0x1107E ; Unknown - 0x1107F, # .. 0x1107F ; Brahmi - 0x11080, # .. 0x110C2 ; Kaithi - 0x110C3, # .. 0x110CC ; Unknown - 0x110CD, # .. 0x110CD ; Kaithi - 0x110CE, # .. 0x110CF ; Unknown - 0x110D0, # .. 0x110E8 ; Sora_Sompeng - 0x110E9, # .. 0x110EF ; Unknown - 0x110F0, # .. 0x110F9 ; Sora_Sompeng - 0x110FA, # .. 0x110FF ; Unknown - 0x11100, # .. 0x11134 ; Chakma - 0x11135, # .. 0x11135 ; Unknown - 0x11136, # .. 0x11147 ; Chakma - 0x11148, # .. 0x1114F ; Unknown - 0x11150, # .. 0x11176 ; Mahajani - 0x11177, # .. 0x1117F ; Unknown - 0x11180, # .. 0x111DF ; Sharada - 0x111E0, # .. 0x111E0 ; Unknown - 0x111E1, # .. 0x111F4 ; Sinhala - 0x111F5, # .. 0x111FF ; Unknown - 0x11200, # .. 0x11211 ; Khojki - 0x11212, # .. 0x11212 ; Unknown - 0x11213, # .. 0x11241 ; Khojki - 0x11242, # .. 0x1127F ; Unknown - 0x11280, # .. 0x11286 ; Multani - 0x11287, # .. 0x11287 ; Unknown - 0x11288, # .. 0x11288 ; Multani - 0x11289, # .. 0x11289 ; Unknown - 0x1128A, # .. 0x1128D ; Multani - 0x1128E, # .. 0x1128E ; Unknown - 0x1128F, # .. 0x1129D ; Multani - 0x1129E, # .. 0x1129E ; Unknown - 0x1129F, # .. 0x112A9 ; Multani - 0x112AA, # .. 0x112AF ; Unknown - 0x112B0, # .. 0x112EA ; Khudawadi - 0x112EB, # .. 0x112EF ; Unknown - 0x112F0, # .. 0x112F9 ; Khudawadi - 0x112FA, # .. 0x112FF ; Unknown - 0x11300, # .. 0x11303 ; Grantha - 0x11304, # .. 0x11304 ; Unknown - 0x11305, # .. 0x1130C ; Grantha - 0x1130D, # .. 0x1130E ; Unknown - 0x1130F, # .. 0x11310 ; Grantha - 0x11311, # .. 0x11312 ; Unknown - 0x11313, # .. 0x11328 ; Grantha - 0x11329, # .. 0x11329 ; Unknown - 0x1132A, # .. 0x11330 ; Grantha - 0x11331, # .. 0x11331 ; Unknown - 0x11332, # .. 0x11333 ; Grantha - 0x11334, # .. 0x11334 ; Unknown - 0x11335, # .. 0x11339 ; Grantha - 0x1133A, # .. 0x1133A ; Unknown - 0x1133B, # .. 0x1133B ; Inherited - 0x1133C, # .. 0x11344 ; Grantha - 0x11345, # .. 0x11346 ; Unknown - 0x11347, # .. 0x11348 ; Grantha - 0x11349, # .. 0x1134A ; Unknown - 0x1134B, # .. 0x1134D ; Grantha - 0x1134E, # .. 0x1134F ; Unknown - 0x11350, # .. 0x11350 ; Grantha - 0x11351, # .. 0x11356 ; Unknown - 0x11357, # .. 0x11357 ; Grantha - 0x11358, # .. 0x1135C ; Unknown - 0x1135D, # .. 0x11363 ; Grantha - 0x11364, # .. 0x11365 ; Unknown - 0x11366, # .. 0x1136C ; Grantha - 0x1136D, # .. 0x1136F ; Unknown - 0x11370, # .. 0x11374 ; Grantha - 0x11375, # .. 0x113FF ; Unknown - 0x11400, # .. 0x1145B ; Newa - 0x1145C, # .. 0x1145C ; Unknown - 0x1145D, # .. 0x11461 ; Newa - 0x11462, # .. 0x1147F ; Unknown - 0x11480, # .. 0x114C7 ; Tirhuta - 0x114C8, # .. 0x114CF ; Unknown - 0x114D0, # .. 0x114D9 ; Tirhuta - 0x114DA, # .. 0x1157F ; Unknown - 0x11580, # .. 0x115B5 ; Siddham - 0x115B6, # .. 0x115B7 ; Unknown - 0x115B8, # .. 0x115DD ; Siddham - 0x115DE, # .. 0x115FF ; Unknown - 0x11600, # .. 0x11644 ; Modi - 0x11645, # .. 0x1164F ; Unknown - 0x11650, # .. 0x11659 ; Modi - 0x1165A, # .. 0x1165F ; Unknown - 0x11660, # .. 0x1166C ; Mongolian - 0x1166D, # .. 0x1167F ; Unknown - 0x11680, # .. 0x116B9 ; Takri - 0x116BA, # .. 0x116BF ; Unknown - 0x116C0, # .. 0x116C9 ; Takri - 0x116CA, # .. 0x116FF ; Unknown - 0x11700, # .. 0x1171A ; Ahom - 0x1171B, # .. 0x1171C ; Unknown - 0x1171D, # .. 
0x1172B ; Ahom - 0x1172C, # .. 0x1172F ; Unknown - 0x11730, # .. 0x11746 ; Ahom - 0x11747, # .. 0x117FF ; Unknown - 0x11800, # .. 0x1183B ; Dogra - 0x1183C, # .. 0x1189F ; Unknown - 0x118A0, # .. 0x118F2 ; Warang_Citi - 0x118F3, # .. 0x118FE ; Unknown - 0x118FF, # .. 0x118FF ; Warang_Citi - 0x11900, # .. 0x11906 ; Dives_Akuru - 0x11907, # .. 0x11908 ; Unknown - 0x11909, # .. 0x11909 ; Dives_Akuru - 0x1190A, # .. 0x1190B ; Unknown - 0x1190C, # .. 0x11913 ; Dives_Akuru - 0x11914, # .. 0x11914 ; Unknown - 0x11915, # .. 0x11916 ; Dives_Akuru - 0x11917, # .. 0x11917 ; Unknown - 0x11918, # .. 0x11935 ; Dives_Akuru - 0x11936, # .. 0x11936 ; Unknown - 0x11937, # .. 0x11938 ; Dives_Akuru - 0x11939, # .. 0x1193A ; Unknown - 0x1193B, # .. 0x11946 ; Dives_Akuru - 0x11947, # .. 0x1194F ; Unknown - 0x11950, # .. 0x11959 ; Dives_Akuru - 0x1195A, # .. 0x1199F ; Unknown - 0x119A0, # .. 0x119A7 ; Nandinagari - 0x119A8, # .. 0x119A9 ; Unknown - 0x119AA, # .. 0x119D7 ; Nandinagari - 0x119D8, # .. 0x119D9 ; Unknown - 0x119DA, # .. 0x119E4 ; Nandinagari - 0x119E5, # .. 0x119FF ; Unknown - 0x11A00, # .. 0x11A47 ; Zanabazar_Square - 0x11A48, # .. 0x11A4F ; Unknown - 0x11A50, # .. 0x11AA2 ; Soyombo - 0x11AA3, # .. 0x11AAF ; Unknown - 0x11AB0, # .. 0x11ABF ; Canadian_Aboriginal - 0x11AC0, # .. 0x11AF8 ; Pau_Cin_Hau - 0x11AF9, # .. 0x11AFF ; Unknown - 0x11B00, # .. 0x11B09 ; Devanagari - 0x11B0A, # .. 0x11BFF ; Unknown - 0x11C00, # .. 0x11C08 ; Bhaiksuki - 0x11C09, # .. 0x11C09 ; Unknown - 0x11C0A, # .. 0x11C36 ; Bhaiksuki - 0x11C37, # .. 0x11C37 ; Unknown - 0x11C38, # .. 0x11C45 ; Bhaiksuki - 0x11C46, # .. 0x11C4F ; Unknown - 0x11C50, # .. 0x11C6C ; Bhaiksuki - 0x11C6D, # .. 0x11C6F ; Unknown - 0x11C70, # .. 0x11C8F ; Marchen - 0x11C90, # .. 0x11C91 ; Unknown - 0x11C92, # .. 0x11CA7 ; Marchen - 0x11CA8, # .. 0x11CA8 ; Unknown - 0x11CA9, # .. 0x11CB6 ; Marchen - 0x11CB7, # .. 0x11CFF ; Unknown - 0x11D00, # .. 0x11D06 ; Masaram_Gondi - 0x11D07, # .. 0x11D07 ; Unknown - 0x11D08, # .. 0x11D09 ; Masaram_Gondi - 0x11D0A, # .. 0x11D0A ; Unknown - 0x11D0B, # .. 0x11D36 ; Masaram_Gondi - 0x11D37, # .. 0x11D39 ; Unknown - 0x11D3A, # .. 0x11D3A ; Masaram_Gondi - 0x11D3B, # .. 0x11D3B ; Unknown - 0x11D3C, # .. 0x11D3D ; Masaram_Gondi - 0x11D3E, # .. 0x11D3E ; Unknown - 0x11D3F, # .. 0x11D47 ; Masaram_Gondi - 0x11D48, # .. 0x11D4F ; Unknown - 0x11D50, # .. 0x11D59 ; Masaram_Gondi - 0x11D5A, # .. 0x11D5F ; Unknown - 0x11D60, # .. 0x11D65 ; Gunjala_Gondi - 0x11D66, # .. 0x11D66 ; Unknown - 0x11D67, # .. 0x11D68 ; Gunjala_Gondi - 0x11D69, # .. 0x11D69 ; Unknown - 0x11D6A, # .. 0x11D8E ; Gunjala_Gondi - 0x11D8F, # .. 0x11D8F ; Unknown - 0x11D90, # .. 0x11D91 ; Gunjala_Gondi - 0x11D92, # .. 0x11D92 ; Unknown - 0x11D93, # .. 0x11D98 ; Gunjala_Gondi - 0x11D99, # .. 0x11D9F ; Unknown - 0x11DA0, # .. 0x11DA9 ; Gunjala_Gondi - 0x11DAA, # .. 0x11EDF ; Unknown - 0x11EE0, # .. 0x11EF8 ; Makasar - 0x11EF9, # .. 0x11EFF ; Unknown - 0x11F00, # .. 0x11F10 ; Kawi - 0x11F11, # .. 0x11F11 ; Unknown - 0x11F12, # .. 0x11F3A ; Kawi - 0x11F3B, # .. 0x11F3D ; Unknown - 0x11F3E, # .. 0x11F59 ; Kawi - 0x11F5A, # .. 0x11FAF ; Unknown - 0x11FB0, # .. 0x11FB0 ; Lisu - 0x11FB1, # .. 0x11FBF ; Unknown - 0x11FC0, # .. 0x11FF1 ; Tamil - 0x11FF2, # .. 0x11FFE ; Unknown - 0x11FFF, # .. 0x11FFF ; Tamil - 0x12000, # .. 0x12399 ; Cuneiform - 0x1239A, # .. 0x123FF ; Unknown - 0x12400, # .. 0x1246E ; Cuneiform - 0x1246F, # .. 0x1246F ; Unknown - 0x12470, # .. 0x12474 ; Cuneiform - 0x12475, # .. 0x1247F ; Unknown - 0x12480, # .. 0x12543 ; Cuneiform - 0x12544, # .. 
0x12F8F ; Unknown - 0x12F90, # .. 0x12FF2 ; Cypro_Minoan - 0x12FF3, # .. 0x12FFF ; Unknown - 0x13000, # .. 0x13455 ; Egyptian_Hieroglyphs - 0x13456, # .. 0x143FF ; Unknown - 0x14400, # .. 0x14646 ; Anatolian_Hieroglyphs - 0x14647, # .. 0x167FF ; Unknown - 0x16800, # .. 0x16A38 ; Bamum - 0x16A39, # .. 0x16A3F ; Unknown - 0x16A40, # .. 0x16A5E ; Mro - 0x16A5F, # .. 0x16A5F ; Unknown - 0x16A60, # .. 0x16A69 ; Mro - 0x16A6A, # .. 0x16A6D ; Unknown - 0x16A6E, # .. 0x16A6F ; Mro - 0x16A70, # .. 0x16ABE ; Tangsa - 0x16ABF, # .. 0x16ABF ; Unknown - 0x16AC0, # .. 0x16AC9 ; Tangsa - 0x16ACA, # .. 0x16ACF ; Unknown - 0x16AD0, # .. 0x16AED ; Bassa_Vah - 0x16AEE, # .. 0x16AEF ; Unknown - 0x16AF0, # .. 0x16AF5 ; Bassa_Vah - 0x16AF6, # .. 0x16AFF ; Unknown - 0x16B00, # .. 0x16B45 ; Pahawh_Hmong - 0x16B46, # .. 0x16B4F ; Unknown - 0x16B50, # .. 0x16B59 ; Pahawh_Hmong - 0x16B5A, # .. 0x16B5A ; Unknown - 0x16B5B, # .. 0x16B61 ; Pahawh_Hmong - 0x16B62, # .. 0x16B62 ; Unknown - 0x16B63, # .. 0x16B77 ; Pahawh_Hmong - 0x16B78, # .. 0x16B7C ; Unknown - 0x16B7D, # .. 0x16B8F ; Pahawh_Hmong - 0x16B90, # .. 0x16E3F ; Unknown - 0x16E40, # .. 0x16E9A ; Medefaidrin - 0x16E9B, # .. 0x16EFF ; Unknown - 0x16F00, # .. 0x16F4A ; Miao - 0x16F4B, # .. 0x16F4E ; Unknown - 0x16F4F, # .. 0x16F87 ; Miao - 0x16F88, # .. 0x16F8E ; Unknown - 0x16F8F, # .. 0x16F9F ; Miao - 0x16FA0, # .. 0x16FDF ; Unknown - 0x16FE0, # .. 0x16FE0 ; Tangut - 0x16FE1, # .. 0x16FE1 ; Nushu - 0x16FE2, # .. 0x16FE3 ; Han - 0x16FE4, # .. 0x16FE4 ; Khitan_Small_Script - 0x16FE5, # .. 0x16FEF ; Unknown - 0x16FF0, # .. 0x16FF1 ; Han - 0x16FF2, # .. 0x16FFF ; Unknown - 0x17000, # .. 0x187F7 ; Tangut - 0x187F8, # .. 0x187FF ; Unknown - 0x18800, # .. 0x18AFF ; Tangut - 0x18B00, # .. 0x18CD5 ; Khitan_Small_Script - 0x18CD6, # .. 0x18CFF ; Unknown - 0x18D00, # .. 0x18D08 ; Tangut - 0x18D09, # .. 0x1AFEF ; Unknown - 0x1AFF0, # .. 0x1AFF3 ; Katakana - 0x1AFF4, # .. 0x1AFF4 ; Unknown - 0x1AFF5, # .. 0x1AFFB ; Katakana - 0x1AFFC, # .. 0x1AFFC ; Unknown - 0x1AFFD, # .. 0x1AFFE ; Katakana - 0x1AFFF, # .. 0x1AFFF ; Unknown - 0x1B000, # .. 0x1B000 ; Katakana - 0x1B001, # .. 0x1B11F ; Hiragana - 0x1B120, # .. 0x1B122 ; Katakana - 0x1B123, # .. 0x1B131 ; Unknown - 0x1B132, # .. 0x1B132 ; Hiragana - 0x1B133, # .. 0x1B14F ; Unknown - 0x1B150, # .. 0x1B152 ; Hiragana - 0x1B153, # .. 0x1B154 ; Unknown - 0x1B155, # .. 0x1B155 ; Katakana - 0x1B156, # .. 0x1B163 ; Unknown - 0x1B164, # .. 0x1B167 ; Katakana - 0x1B168, # .. 0x1B16F ; Unknown - 0x1B170, # .. 0x1B2FB ; Nushu - 0x1B2FC, # .. 0x1BBFF ; Unknown - 0x1BC00, # .. 0x1BC6A ; Duployan - 0x1BC6B, # .. 0x1BC6F ; Unknown - 0x1BC70, # .. 0x1BC7C ; Duployan - 0x1BC7D, # .. 0x1BC7F ; Unknown - 0x1BC80, # .. 0x1BC88 ; Duployan - 0x1BC89, # .. 0x1BC8F ; Unknown - 0x1BC90, # .. 0x1BC99 ; Duployan - 0x1BC9A, # .. 0x1BC9B ; Unknown - 0x1BC9C, # .. 0x1BC9F ; Duployan - 0x1BCA0, # .. 0x1BCA3 ; Common - 0x1BCA4, # .. 0x1CEFF ; Unknown - 0x1CF00, # .. 0x1CF2D ; Inherited - 0x1CF2E, # .. 0x1CF2F ; Unknown - 0x1CF30, # .. 0x1CF46 ; Inherited - 0x1CF47, # .. 0x1CF4F ; Unknown - 0x1CF50, # .. 0x1CFC3 ; Common - 0x1CFC4, # .. 0x1CFFF ; Unknown - 0x1D000, # .. 0x1D0F5 ; Common - 0x1D0F6, # .. 0x1D0FF ; Unknown - 0x1D100, # .. 0x1D126 ; Common - 0x1D127, # .. 0x1D128 ; Unknown - 0x1D129, # .. 0x1D166 ; Common - 0x1D167, # .. 0x1D169 ; Inherited - 0x1D16A, # .. 0x1D17A ; Common - 0x1D17B, # .. 0x1D182 ; Inherited - 0x1D183, # .. 0x1D184 ; Common - 0x1D185, # .. 0x1D18B ; Inherited - 0x1D18C, # .. 0x1D1A9 ; Common - 0x1D1AA, # .. 
0x1D1AD ; Inherited - 0x1D1AE, # .. 0x1D1EA ; Common - 0x1D1EB, # .. 0x1D1FF ; Unknown - 0x1D200, # .. 0x1D245 ; Greek - 0x1D246, # .. 0x1D2BF ; Unknown - 0x1D2C0, # .. 0x1D2D3 ; Common - 0x1D2D4, # .. 0x1D2DF ; Unknown - 0x1D2E0, # .. 0x1D2F3 ; Common - 0x1D2F4, # .. 0x1D2FF ; Unknown - 0x1D300, # .. 0x1D356 ; Common - 0x1D357, # .. 0x1D35F ; Unknown - 0x1D360, # .. 0x1D378 ; Common - 0x1D379, # .. 0x1D3FF ; Unknown - 0x1D400, # .. 0x1D454 ; Common - 0x1D455, # .. 0x1D455 ; Unknown - 0x1D456, # .. 0x1D49C ; Common - 0x1D49D, # .. 0x1D49D ; Unknown - 0x1D49E, # .. 0x1D49F ; Common - 0x1D4A0, # .. 0x1D4A1 ; Unknown - 0x1D4A2, # .. 0x1D4A2 ; Common - 0x1D4A3, # .. 0x1D4A4 ; Unknown - 0x1D4A5, # .. 0x1D4A6 ; Common - 0x1D4A7, # .. 0x1D4A8 ; Unknown - 0x1D4A9, # .. 0x1D4AC ; Common - 0x1D4AD, # .. 0x1D4AD ; Unknown - 0x1D4AE, # .. 0x1D4B9 ; Common - 0x1D4BA, # .. 0x1D4BA ; Unknown - 0x1D4BB, # .. 0x1D4BB ; Common - 0x1D4BC, # .. 0x1D4BC ; Unknown - 0x1D4BD, # .. 0x1D4C3 ; Common - 0x1D4C4, # .. 0x1D4C4 ; Unknown - 0x1D4C5, # .. 0x1D505 ; Common - 0x1D506, # .. 0x1D506 ; Unknown - 0x1D507, # .. 0x1D50A ; Common - 0x1D50B, # .. 0x1D50C ; Unknown - 0x1D50D, # .. 0x1D514 ; Common - 0x1D515, # .. 0x1D515 ; Unknown - 0x1D516, # .. 0x1D51C ; Common - 0x1D51D, # .. 0x1D51D ; Unknown - 0x1D51E, # .. 0x1D539 ; Common - 0x1D53A, # .. 0x1D53A ; Unknown - 0x1D53B, # .. 0x1D53E ; Common - 0x1D53F, # .. 0x1D53F ; Unknown - 0x1D540, # .. 0x1D544 ; Common - 0x1D545, # .. 0x1D545 ; Unknown - 0x1D546, # .. 0x1D546 ; Common - 0x1D547, # .. 0x1D549 ; Unknown - 0x1D54A, # .. 0x1D550 ; Common - 0x1D551, # .. 0x1D551 ; Unknown - 0x1D552, # .. 0x1D6A5 ; Common - 0x1D6A6, # .. 0x1D6A7 ; Unknown - 0x1D6A8, # .. 0x1D7CB ; Common - 0x1D7CC, # .. 0x1D7CD ; Unknown - 0x1D7CE, # .. 0x1D7FF ; Common - 0x1D800, # .. 0x1DA8B ; SignWriting - 0x1DA8C, # .. 0x1DA9A ; Unknown - 0x1DA9B, # .. 0x1DA9F ; SignWriting - 0x1DAA0, # .. 0x1DAA0 ; Unknown - 0x1DAA1, # .. 0x1DAAF ; SignWriting - 0x1DAB0, # .. 0x1DEFF ; Unknown - 0x1DF00, # .. 0x1DF1E ; Latin - 0x1DF1F, # .. 0x1DF24 ; Unknown - 0x1DF25, # .. 0x1DF2A ; Latin - 0x1DF2B, # .. 0x1DFFF ; Unknown - 0x1E000, # .. 0x1E006 ; Glagolitic - 0x1E007, # .. 0x1E007 ; Unknown - 0x1E008, # .. 0x1E018 ; Glagolitic - 0x1E019, # .. 0x1E01A ; Unknown - 0x1E01B, # .. 0x1E021 ; Glagolitic - 0x1E022, # .. 0x1E022 ; Unknown - 0x1E023, # .. 0x1E024 ; Glagolitic - 0x1E025, # .. 0x1E025 ; Unknown - 0x1E026, # .. 0x1E02A ; Glagolitic - 0x1E02B, # .. 0x1E02F ; Unknown - 0x1E030, # .. 0x1E06D ; Cyrillic - 0x1E06E, # .. 0x1E08E ; Unknown - 0x1E08F, # .. 0x1E08F ; Cyrillic - 0x1E090, # .. 0x1E0FF ; Unknown - 0x1E100, # .. 0x1E12C ; Nyiakeng_Puachue_Hmong - 0x1E12D, # .. 0x1E12F ; Unknown - 0x1E130, # .. 0x1E13D ; Nyiakeng_Puachue_Hmong - 0x1E13E, # .. 0x1E13F ; Unknown - 0x1E140, # .. 0x1E149 ; Nyiakeng_Puachue_Hmong - 0x1E14A, # .. 0x1E14D ; Unknown - 0x1E14E, # .. 0x1E14F ; Nyiakeng_Puachue_Hmong - 0x1E150, # .. 0x1E28F ; Unknown - 0x1E290, # .. 0x1E2AE ; Toto - 0x1E2AF, # .. 0x1E2BF ; Unknown - 0x1E2C0, # .. 0x1E2F9 ; Wancho - 0x1E2FA, # .. 0x1E2FE ; Unknown - 0x1E2FF, # .. 0x1E2FF ; Wancho - 0x1E300, # .. 0x1E4CF ; Unknown - 0x1E4D0, # .. 0x1E4F9 ; Nag_Mundari - 0x1E4FA, # .. 0x1E7DF ; Unknown - 0x1E7E0, # .. 0x1E7E6 ; Ethiopic - 0x1E7E7, # .. 0x1E7E7 ; Unknown - 0x1E7E8, # .. 0x1E7EB ; Ethiopic - 0x1E7EC, # .. 0x1E7EC ; Unknown - 0x1E7ED, # .. 0x1E7EE ; Ethiopic - 0x1E7EF, # .. 0x1E7EF ; Unknown - 0x1E7F0, # .. 0x1E7FE ; Ethiopic - 0x1E7FF, # .. 0x1E7FF ; Unknown - 0x1E800, # .. 
0x1E8C4 ; Mende_Kikakui - 0x1E8C5, # .. 0x1E8C6 ; Unknown - 0x1E8C7, # .. 0x1E8D6 ; Mende_Kikakui - 0x1E8D7, # .. 0x1E8FF ; Unknown - 0x1E900, # .. 0x1E94B ; Adlam - 0x1E94C, # .. 0x1E94F ; Unknown - 0x1E950, # .. 0x1E959 ; Adlam - 0x1E95A, # .. 0x1E95D ; Unknown - 0x1E95E, # .. 0x1E95F ; Adlam - 0x1E960, # .. 0x1EC70 ; Unknown - 0x1EC71, # .. 0x1ECB4 ; Common - 0x1ECB5, # .. 0x1ED00 ; Unknown - 0x1ED01, # .. 0x1ED3D ; Common - 0x1ED3E, # .. 0x1EDFF ; Unknown - 0x1EE00, # .. 0x1EE03 ; Arabic - 0x1EE04, # .. 0x1EE04 ; Unknown - 0x1EE05, # .. 0x1EE1F ; Arabic - 0x1EE20, # .. 0x1EE20 ; Unknown - 0x1EE21, # .. 0x1EE22 ; Arabic - 0x1EE23, # .. 0x1EE23 ; Unknown - 0x1EE24, # .. 0x1EE24 ; Arabic - 0x1EE25, # .. 0x1EE26 ; Unknown - 0x1EE27, # .. 0x1EE27 ; Arabic - 0x1EE28, # .. 0x1EE28 ; Unknown - 0x1EE29, # .. 0x1EE32 ; Arabic - 0x1EE33, # .. 0x1EE33 ; Unknown - 0x1EE34, # .. 0x1EE37 ; Arabic - 0x1EE38, # .. 0x1EE38 ; Unknown - 0x1EE39, # .. 0x1EE39 ; Arabic - 0x1EE3A, # .. 0x1EE3A ; Unknown - 0x1EE3B, # .. 0x1EE3B ; Arabic - 0x1EE3C, # .. 0x1EE41 ; Unknown - 0x1EE42, # .. 0x1EE42 ; Arabic - 0x1EE43, # .. 0x1EE46 ; Unknown - 0x1EE47, # .. 0x1EE47 ; Arabic - 0x1EE48, # .. 0x1EE48 ; Unknown - 0x1EE49, # .. 0x1EE49 ; Arabic - 0x1EE4A, # .. 0x1EE4A ; Unknown - 0x1EE4B, # .. 0x1EE4B ; Arabic - 0x1EE4C, # .. 0x1EE4C ; Unknown - 0x1EE4D, # .. 0x1EE4F ; Arabic - 0x1EE50, # .. 0x1EE50 ; Unknown - 0x1EE51, # .. 0x1EE52 ; Arabic - 0x1EE53, # .. 0x1EE53 ; Unknown - 0x1EE54, # .. 0x1EE54 ; Arabic - 0x1EE55, # .. 0x1EE56 ; Unknown - 0x1EE57, # .. 0x1EE57 ; Arabic - 0x1EE58, # .. 0x1EE58 ; Unknown - 0x1EE59, # .. 0x1EE59 ; Arabic - 0x1EE5A, # .. 0x1EE5A ; Unknown - 0x1EE5B, # .. 0x1EE5B ; Arabic - 0x1EE5C, # .. 0x1EE5C ; Unknown - 0x1EE5D, # .. 0x1EE5D ; Arabic - 0x1EE5E, # .. 0x1EE5E ; Unknown - 0x1EE5F, # .. 0x1EE5F ; Arabic - 0x1EE60, # .. 0x1EE60 ; Unknown - 0x1EE61, # .. 0x1EE62 ; Arabic - 0x1EE63, # .. 0x1EE63 ; Unknown - 0x1EE64, # .. 0x1EE64 ; Arabic - 0x1EE65, # .. 0x1EE66 ; Unknown - 0x1EE67, # .. 0x1EE6A ; Arabic - 0x1EE6B, # .. 0x1EE6B ; Unknown - 0x1EE6C, # .. 0x1EE72 ; Arabic - 0x1EE73, # .. 0x1EE73 ; Unknown - 0x1EE74, # .. 0x1EE77 ; Arabic - 0x1EE78, # .. 0x1EE78 ; Unknown - 0x1EE79, # .. 0x1EE7C ; Arabic - 0x1EE7D, # .. 0x1EE7D ; Unknown - 0x1EE7E, # .. 0x1EE7E ; Arabic - 0x1EE7F, # .. 0x1EE7F ; Unknown - 0x1EE80, # .. 0x1EE89 ; Arabic - 0x1EE8A, # .. 0x1EE8A ; Unknown - 0x1EE8B, # .. 0x1EE9B ; Arabic - 0x1EE9C, # .. 0x1EEA0 ; Unknown - 0x1EEA1, # .. 0x1EEA3 ; Arabic - 0x1EEA4, # .. 0x1EEA4 ; Unknown - 0x1EEA5, # .. 0x1EEA9 ; Arabic - 0x1EEAA, # .. 0x1EEAA ; Unknown - 0x1EEAB, # .. 0x1EEBB ; Arabic - 0x1EEBC, # .. 0x1EEEF ; Unknown - 0x1EEF0, # .. 0x1EEF1 ; Arabic - 0x1EEF2, # .. 0x1EFFF ; Unknown - 0x1F000, # .. 0x1F02B ; Common - 0x1F02C, # .. 0x1F02F ; Unknown - 0x1F030, # .. 0x1F093 ; Common - 0x1F094, # .. 0x1F09F ; Unknown - 0x1F0A0, # .. 0x1F0AE ; Common - 0x1F0AF, # .. 0x1F0B0 ; Unknown - 0x1F0B1, # .. 0x1F0BF ; Common - 0x1F0C0, # .. 0x1F0C0 ; Unknown - 0x1F0C1, # .. 0x1F0CF ; Common - 0x1F0D0, # .. 0x1F0D0 ; Unknown - 0x1F0D1, # .. 0x1F0F5 ; Common - 0x1F0F6, # .. 0x1F0FF ; Unknown - 0x1F100, # .. 0x1F1AD ; Common - 0x1F1AE, # .. 0x1F1E5 ; Unknown - 0x1F1E6, # .. 0x1F1FF ; Common - 0x1F200, # .. 0x1F200 ; Hiragana - 0x1F201, # .. 0x1F202 ; Common - 0x1F203, # .. 0x1F20F ; Unknown - 0x1F210, # .. 0x1F23B ; Common - 0x1F23C, # .. 0x1F23F ; Unknown - 0x1F240, # .. 0x1F248 ; Common - 0x1F249, # .. 0x1F24F ; Unknown - 0x1F250, # .. 0x1F251 ; Common - 0x1F252, # .. 
0x1F25F ; Unknown - 0x1F260, # .. 0x1F265 ; Common - 0x1F266, # .. 0x1F2FF ; Unknown - 0x1F300, # .. 0x1F6D7 ; Common - 0x1F6D8, # .. 0x1F6DB ; Unknown - 0x1F6DC, # .. 0x1F6EC ; Common - 0x1F6ED, # .. 0x1F6EF ; Unknown - 0x1F6F0, # .. 0x1F6FC ; Common - 0x1F6FD, # .. 0x1F6FF ; Unknown - 0x1F700, # .. 0x1F776 ; Common - 0x1F777, # .. 0x1F77A ; Unknown - 0x1F77B, # .. 0x1F7D9 ; Common - 0x1F7DA, # .. 0x1F7DF ; Unknown - 0x1F7E0, # .. 0x1F7EB ; Common - 0x1F7EC, # .. 0x1F7EF ; Unknown - 0x1F7F0, # .. 0x1F7F0 ; Common - 0x1F7F1, # .. 0x1F7FF ; Unknown - 0x1F800, # .. 0x1F80B ; Common - 0x1F80C, # .. 0x1F80F ; Unknown - 0x1F810, # .. 0x1F847 ; Common - 0x1F848, # .. 0x1F84F ; Unknown - 0x1F850, # .. 0x1F859 ; Common - 0x1F85A, # .. 0x1F85F ; Unknown - 0x1F860, # .. 0x1F887 ; Common - 0x1F888, # .. 0x1F88F ; Unknown - 0x1F890, # .. 0x1F8AD ; Common - 0x1F8AE, # .. 0x1F8AF ; Unknown - 0x1F8B0, # .. 0x1F8B1 ; Common - 0x1F8B2, # .. 0x1F8FF ; Unknown - 0x1F900, # .. 0x1FA53 ; Common - 0x1FA54, # .. 0x1FA5F ; Unknown - 0x1FA60, # .. 0x1FA6D ; Common - 0x1FA6E, # .. 0x1FA6F ; Unknown - 0x1FA70, # .. 0x1FA7C ; Common - 0x1FA7D, # .. 0x1FA7F ; Unknown - 0x1FA80, # .. 0x1FA88 ; Common - 0x1FA89, # .. 0x1FA8F ; Unknown - 0x1FA90, # .. 0x1FABD ; Common - 0x1FABE, # .. 0x1FABE ; Unknown - 0x1FABF, # .. 0x1FAC5 ; Common - 0x1FAC6, # .. 0x1FACD ; Unknown - 0x1FACE, # .. 0x1FADB ; Common - 0x1FADC, # .. 0x1FADF ; Unknown - 0x1FAE0, # .. 0x1FAE8 ; Common - 0x1FAE9, # .. 0x1FAEF ; Unknown - 0x1FAF0, # .. 0x1FAF8 ; Common - 0x1FAF9, # .. 0x1FAFF ; Unknown - 0x1FB00, # .. 0x1FB92 ; Common - 0x1FB93, # .. 0x1FB93 ; Unknown - 0x1FB94, # .. 0x1FBCA ; Common - 0x1FBCB, # .. 0x1FBEF ; Unknown - 0x1FBF0, # .. 0x1FBF9 ; Common - 0x1FBFA, # .. 0x1FFFF ; Unknown - 0x20000, # .. 0x2A6DF ; Han - 0x2A6E0, # .. 0x2A6FF ; Unknown - 0x2A700, # .. 0x2B739 ; Han - 0x2B73A, # .. 0x2B73F ; Unknown - 0x2B740, # .. 0x2B81D ; Han - 0x2B81E, # .. 0x2B81F ; Unknown - 0x2B820, # .. 0x2CEA1 ; Han - 0x2CEA2, # .. 0x2CEAF ; Unknown - 0x2CEB0, # .. 0x2EBE0 ; Han - 0x2EBE1, # .. 0x2F7FF ; Unknown - 0x2F800, # .. 0x2FA1D ; Han - 0x2FA1E, # .. 0x2FFFF ; Unknown - 0x30000, # .. 0x3134A ; Han - 0x3134B, # .. 0x3134F ; Unknown - 0x31350, # .. 0x323AF ; Han - 0x323B0, # .. 0xE0000 ; Unknown - 0xE0001, # .. 0xE0001 ; Common - 0xE0002, # .. 0xE001F ; Unknown - 0xE0020, # .. 0xE007F ; Common - 0xE0080, # .. 0xE00FF ; Unknown - 0xE0100, # .. 0xE01EF ; Inherited - 0xE01F0, # .. 
0x10FFFF ; Unknown -] - -VALUES = [ - "Zyyy", # 0000..0040 ; Common - "Latn", # 0041..005A ; Latin - "Zyyy", # 005B..0060 ; Common - "Latn", # 0061..007A ; Latin - "Zyyy", # 007B..00A9 ; Common - "Latn", # 00AA..00AA ; Latin - "Zyyy", # 00AB..00B9 ; Common - "Latn", # 00BA..00BA ; Latin - "Zyyy", # 00BB..00BF ; Common - "Latn", # 00C0..00D6 ; Latin - "Zyyy", # 00D7..00D7 ; Common - "Latn", # 00D8..00F6 ; Latin - "Zyyy", # 00F7..00F7 ; Common - "Latn", # 00F8..02B8 ; Latin - "Zyyy", # 02B9..02DF ; Common - "Latn", # 02E0..02E4 ; Latin - "Zyyy", # 02E5..02E9 ; Common - "Bopo", # 02EA..02EB ; Bopomofo - "Zyyy", # 02EC..02FF ; Common - "Zinh", # 0300..036F ; Inherited - "Grek", # 0370..0373 ; Greek - "Zyyy", # 0374..0374 ; Common - "Grek", # 0375..0377 ; Greek - "Zzzz", # 0378..0379 ; Unknown - "Grek", # 037A..037D ; Greek - "Zyyy", # 037E..037E ; Common - "Grek", # 037F..037F ; Greek - "Zzzz", # 0380..0383 ; Unknown - "Grek", # 0384..0384 ; Greek - "Zyyy", # 0385..0385 ; Common - "Grek", # 0386..0386 ; Greek - "Zyyy", # 0387..0387 ; Common - "Grek", # 0388..038A ; Greek - "Zzzz", # 038B..038B ; Unknown - "Grek", # 038C..038C ; Greek - "Zzzz", # 038D..038D ; Unknown - "Grek", # 038E..03A1 ; Greek - "Zzzz", # 03A2..03A2 ; Unknown - "Grek", # 03A3..03E1 ; Greek - "Copt", # 03E2..03EF ; Coptic - "Grek", # 03F0..03FF ; Greek - "Cyrl", # 0400..0484 ; Cyrillic - "Zinh", # 0485..0486 ; Inherited - "Cyrl", # 0487..052F ; Cyrillic - "Zzzz", # 0530..0530 ; Unknown - "Armn", # 0531..0556 ; Armenian - "Zzzz", # 0557..0558 ; Unknown - "Armn", # 0559..058A ; Armenian - "Zzzz", # 058B..058C ; Unknown - "Armn", # 058D..058F ; Armenian - "Zzzz", # 0590..0590 ; Unknown - "Hebr", # 0591..05C7 ; Hebrew - "Zzzz", # 05C8..05CF ; Unknown - "Hebr", # 05D0..05EA ; Hebrew - "Zzzz", # 05EB..05EE ; Unknown - "Hebr", # 05EF..05F4 ; Hebrew - "Zzzz", # 05F5..05FF ; Unknown - "Arab", # 0600..0604 ; Arabic - "Zyyy", # 0605..0605 ; Common - "Arab", # 0606..060B ; Arabic - "Zyyy", # 060C..060C ; Common - "Arab", # 060D..061A ; Arabic - "Zyyy", # 061B..061B ; Common - "Arab", # 061C..061E ; Arabic - "Zyyy", # 061F..061F ; Common - "Arab", # 0620..063F ; Arabic - "Zyyy", # 0640..0640 ; Common - "Arab", # 0641..064A ; Arabic - "Zinh", # 064B..0655 ; Inherited - "Arab", # 0656..066F ; Arabic - "Zinh", # 0670..0670 ; Inherited - "Arab", # 0671..06DC ; Arabic - "Zyyy", # 06DD..06DD ; Common - "Arab", # 06DE..06FF ; Arabic - "Syrc", # 0700..070D ; Syriac - "Zzzz", # 070E..070E ; Unknown - "Syrc", # 070F..074A ; Syriac - "Zzzz", # 074B..074C ; Unknown - "Syrc", # 074D..074F ; Syriac - "Arab", # 0750..077F ; Arabic - "Thaa", # 0780..07B1 ; Thaana - "Zzzz", # 07B2..07BF ; Unknown - "Nkoo", # 07C0..07FA ; Nko - "Zzzz", # 07FB..07FC ; Unknown - "Nkoo", # 07FD..07FF ; Nko - "Samr", # 0800..082D ; Samaritan - "Zzzz", # 082E..082F ; Unknown - "Samr", # 0830..083E ; Samaritan - "Zzzz", # 083F..083F ; Unknown - "Mand", # 0840..085B ; Mandaic - "Zzzz", # 085C..085D ; Unknown - "Mand", # 085E..085E ; Mandaic - "Zzzz", # 085F..085F ; Unknown - "Syrc", # 0860..086A ; Syriac - "Zzzz", # 086B..086F ; Unknown - "Arab", # 0870..088E ; Arabic - "Zzzz", # 088F..088F ; Unknown - "Arab", # 0890..0891 ; Arabic - "Zzzz", # 0892..0897 ; Unknown - "Arab", # 0898..08E1 ; Arabic - "Zyyy", # 08E2..08E2 ; Common - "Arab", # 08E3..08FF ; Arabic - "Deva", # 0900..0950 ; Devanagari - "Zinh", # 0951..0954 ; Inherited - "Deva", # 0955..0963 ; Devanagari - "Zyyy", # 0964..0965 ; Common - "Deva", # 0966..097F ; Devanagari - "Beng", # 0980..0983 ; Bengali - "Zzzz", # 
0984..0984 ; Unknown - "Beng", # 0985..098C ; Bengali - "Zzzz", # 098D..098E ; Unknown - "Beng", # 098F..0990 ; Bengali - "Zzzz", # 0991..0992 ; Unknown - "Beng", # 0993..09A8 ; Bengali - "Zzzz", # 09A9..09A9 ; Unknown - "Beng", # 09AA..09B0 ; Bengali - "Zzzz", # 09B1..09B1 ; Unknown - "Beng", # 09B2..09B2 ; Bengali - "Zzzz", # 09B3..09B5 ; Unknown - "Beng", # 09B6..09B9 ; Bengali - "Zzzz", # 09BA..09BB ; Unknown - "Beng", # 09BC..09C4 ; Bengali - "Zzzz", # 09C5..09C6 ; Unknown - "Beng", # 09C7..09C8 ; Bengali - "Zzzz", # 09C9..09CA ; Unknown - "Beng", # 09CB..09CE ; Bengali - "Zzzz", # 09CF..09D6 ; Unknown - "Beng", # 09D7..09D7 ; Bengali - "Zzzz", # 09D8..09DB ; Unknown - "Beng", # 09DC..09DD ; Bengali - "Zzzz", # 09DE..09DE ; Unknown - "Beng", # 09DF..09E3 ; Bengali - "Zzzz", # 09E4..09E5 ; Unknown - "Beng", # 09E6..09FE ; Bengali - "Zzzz", # 09FF..0A00 ; Unknown - "Guru", # 0A01..0A03 ; Gurmukhi - "Zzzz", # 0A04..0A04 ; Unknown - "Guru", # 0A05..0A0A ; Gurmukhi - "Zzzz", # 0A0B..0A0E ; Unknown - "Guru", # 0A0F..0A10 ; Gurmukhi - "Zzzz", # 0A11..0A12 ; Unknown - "Guru", # 0A13..0A28 ; Gurmukhi - "Zzzz", # 0A29..0A29 ; Unknown - "Guru", # 0A2A..0A30 ; Gurmukhi - "Zzzz", # 0A31..0A31 ; Unknown - "Guru", # 0A32..0A33 ; Gurmukhi - "Zzzz", # 0A34..0A34 ; Unknown - "Guru", # 0A35..0A36 ; Gurmukhi - "Zzzz", # 0A37..0A37 ; Unknown - "Guru", # 0A38..0A39 ; Gurmukhi - "Zzzz", # 0A3A..0A3B ; Unknown - "Guru", # 0A3C..0A3C ; Gurmukhi - "Zzzz", # 0A3D..0A3D ; Unknown - "Guru", # 0A3E..0A42 ; Gurmukhi - "Zzzz", # 0A43..0A46 ; Unknown - "Guru", # 0A47..0A48 ; Gurmukhi - "Zzzz", # 0A49..0A4A ; Unknown - "Guru", # 0A4B..0A4D ; Gurmukhi - "Zzzz", # 0A4E..0A50 ; Unknown - "Guru", # 0A51..0A51 ; Gurmukhi - "Zzzz", # 0A52..0A58 ; Unknown - "Guru", # 0A59..0A5C ; Gurmukhi - "Zzzz", # 0A5D..0A5D ; Unknown - "Guru", # 0A5E..0A5E ; Gurmukhi - "Zzzz", # 0A5F..0A65 ; Unknown - "Guru", # 0A66..0A76 ; Gurmukhi - "Zzzz", # 0A77..0A80 ; Unknown - "Gujr", # 0A81..0A83 ; Gujarati - "Zzzz", # 0A84..0A84 ; Unknown - "Gujr", # 0A85..0A8D ; Gujarati - "Zzzz", # 0A8E..0A8E ; Unknown - "Gujr", # 0A8F..0A91 ; Gujarati - "Zzzz", # 0A92..0A92 ; Unknown - "Gujr", # 0A93..0AA8 ; Gujarati - "Zzzz", # 0AA9..0AA9 ; Unknown - "Gujr", # 0AAA..0AB0 ; Gujarati - "Zzzz", # 0AB1..0AB1 ; Unknown - "Gujr", # 0AB2..0AB3 ; Gujarati - "Zzzz", # 0AB4..0AB4 ; Unknown - "Gujr", # 0AB5..0AB9 ; Gujarati - "Zzzz", # 0ABA..0ABB ; Unknown - "Gujr", # 0ABC..0AC5 ; Gujarati - "Zzzz", # 0AC6..0AC6 ; Unknown - "Gujr", # 0AC7..0AC9 ; Gujarati - "Zzzz", # 0ACA..0ACA ; Unknown - "Gujr", # 0ACB..0ACD ; Gujarati - "Zzzz", # 0ACE..0ACF ; Unknown - "Gujr", # 0AD0..0AD0 ; Gujarati - "Zzzz", # 0AD1..0ADF ; Unknown - "Gujr", # 0AE0..0AE3 ; Gujarati - "Zzzz", # 0AE4..0AE5 ; Unknown - "Gujr", # 0AE6..0AF1 ; Gujarati - "Zzzz", # 0AF2..0AF8 ; Unknown - "Gujr", # 0AF9..0AFF ; Gujarati - "Zzzz", # 0B00..0B00 ; Unknown - "Orya", # 0B01..0B03 ; Oriya - "Zzzz", # 0B04..0B04 ; Unknown - "Orya", # 0B05..0B0C ; Oriya - "Zzzz", # 0B0D..0B0E ; Unknown - "Orya", # 0B0F..0B10 ; Oriya - "Zzzz", # 0B11..0B12 ; Unknown - "Orya", # 0B13..0B28 ; Oriya - "Zzzz", # 0B29..0B29 ; Unknown - "Orya", # 0B2A..0B30 ; Oriya - "Zzzz", # 0B31..0B31 ; Unknown - "Orya", # 0B32..0B33 ; Oriya - "Zzzz", # 0B34..0B34 ; Unknown - "Orya", # 0B35..0B39 ; Oriya - "Zzzz", # 0B3A..0B3B ; Unknown - "Orya", # 0B3C..0B44 ; Oriya - "Zzzz", # 0B45..0B46 ; Unknown - "Orya", # 0B47..0B48 ; Oriya - "Zzzz", # 0B49..0B4A ; Unknown - "Orya", # 0B4B..0B4D ; Oriya - "Zzzz", # 0B4E..0B54 ; Unknown - "Orya", # 0B55..0B57 ; 
Oriya - "Zzzz", # 0B58..0B5B ; Unknown - "Orya", # 0B5C..0B5D ; Oriya - "Zzzz", # 0B5E..0B5E ; Unknown - "Orya", # 0B5F..0B63 ; Oriya - "Zzzz", # 0B64..0B65 ; Unknown - "Orya", # 0B66..0B77 ; Oriya - "Zzzz", # 0B78..0B81 ; Unknown - "Taml", # 0B82..0B83 ; Tamil - "Zzzz", # 0B84..0B84 ; Unknown - "Taml", # 0B85..0B8A ; Tamil - "Zzzz", # 0B8B..0B8D ; Unknown - "Taml", # 0B8E..0B90 ; Tamil - "Zzzz", # 0B91..0B91 ; Unknown - "Taml", # 0B92..0B95 ; Tamil - "Zzzz", # 0B96..0B98 ; Unknown - "Taml", # 0B99..0B9A ; Tamil - "Zzzz", # 0B9B..0B9B ; Unknown - "Taml", # 0B9C..0B9C ; Tamil - "Zzzz", # 0B9D..0B9D ; Unknown - "Taml", # 0B9E..0B9F ; Tamil - "Zzzz", # 0BA0..0BA2 ; Unknown - "Taml", # 0BA3..0BA4 ; Tamil - "Zzzz", # 0BA5..0BA7 ; Unknown - "Taml", # 0BA8..0BAA ; Tamil - "Zzzz", # 0BAB..0BAD ; Unknown - "Taml", # 0BAE..0BB9 ; Tamil - "Zzzz", # 0BBA..0BBD ; Unknown - "Taml", # 0BBE..0BC2 ; Tamil - "Zzzz", # 0BC3..0BC5 ; Unknown - "Taml", # 0BC6..0BC8 ; Tamil - "Zzzz", # 0BC9..0BC9 ; Unknown - "Taml", # 0BCA..0BCD ; Tamil - "Zzzz", # 0BCE..0BCF ; Unknown - "Taml", # 0BD0..0BD0 ; Tamil - "Zzzz", # 0BD1..0BD6 ; Unknown - "Taml", # 0BD7..0BD7 ; Tamil - "Zzzz", # 0BD8..0BE5 ; Unknown - "Taml", # 0BE6..0BFA ; Tamil - "Zzzz", # 0BFB..0BFF ; Unknown - "Telu", # 0C00..0C0C ; Telugu - "Zzzz", # 0C0D..0C0D ; Unknown - "Telu", # 0C0E..0C10 ; Telugu - "Zzzz", # 0C11..0C11 ; Unknown - "Telu", # 0C12..0C28 ; Telugu - "Zzzz", # 0C29..0C29 ; Unknown - "Telu", # 0C2A..0C39 ; Telugu - "Zzzz", # 0C3A..0C3B ; Unknown - "Telu", # 0C3C..0C44 ; Telugu - "Zzzz", # 0C45..0C45 ; Unknown - "Telu", # 0C46..0C48 ; Telugu - "Zzzz", # 0C49..0C49 ; Unknown - "Telu", # 0C4A..0C4D ; Telugu - "Zzzz", # 0C4E..0C54 ; Unknown - "Telu", # 0C55..0C56 ; Telugu - "Zzzz", # 0C57..0C57 ; Unknown - "Telu", # 0C58..0C5A ; Telugu - "Zzzz", # 0C5B..0C5C ; Unknown - "Telu", # 0C5D..0C5D ; Telugu - "Zzzz", # 0C5E..0C5F ; Unknown - "Telu", # 0C60..0C63 ; Telugu - "Zzzz", # 0C64..0C65 ; Unknown - "Telu", # 0C66..0C6F ; Telugu - "Zzzz", # 0C70..0C76 ; Unknown - "Telu", # 0C77..0C7F ; Telugu - "Knda", # 0C80..0C8C ; Kannada - "Zzzz", # 0C8D..0C8D ; Unknown - "Knda", # 0C8E..0C90 ; Kannada - "Zzzz", # 0C91..0C91 ; Unknown - "Knda", # 0C92..0CA8 ; Kannada - "Zzzz", # 0CA9..0CA9 ; Unknown - "Knda", # 0CAA..0CB3 ; Kannada - "Zzzz", # 0CB4..0CB4 ; Unknown - "Knda", # 0CB5..0CB9 ; Kannada - "Zzzz", # 0CBA..0CBB ; Unknown - "Knda", # 0CBC..0CC4 ; Kannada - "Zzzz", # 0CC5..0CC5 ; Unknown - "Knda", # 0CC6..0CC8 ; Kannada - "Zzzz", # 0CC9..0CC9 ; Unknown - "Knda", # 0CCA..0CCD ; Kannada - "Zzzz", # 0CCE..0CD4 ; Unknown - "Knda", # 0CD5..0CD6 ; Kannada - "Zzzz", # 0CD7..0CDC ; Unknown - "Knda", # 0CDD..0CDE ; Kannada - "Zzzz", # 0CDF..0CDF ; Unknown - "Knda", # 0CE0..0CE3 ; Kannada - "Zzzz", # 0CE4..0CE5 ; Unknown - "Knda", # 0CE6..0CEF ; Kannada - "Zzzz", # 0CF0..0CF0 ; Unknown - "Knda", # 0CF1..0CF3 ; Kannada - "Zzzz", # 0CF4..0CFF ; Unknown - "Mlym", # 0D00..0D0C ; Malayalam - "Zzzz", # 0D0D..0D0D ; Unknown - "Mlym", # 0D0E..0D10 ; Malayalam - "Zzzz", # 0D11..0D11 ; Unknown - "Mlym", # 0D12..0D44 ; Malayalam - "Zzzz", # 0D45..0D45 ; Unknown - "Mlym", # 0D46..0D48 ; Malayalam - "Zzzz", # 0D49..0D49 ; Unknown - "Mlym", # 0D4A..0D4F ; Malayalam - "Zzzz", # 0D50..0D53 ; Unknown - "Mlym", # 0D54..0D63 ; Malayalam - "Zzzz", # 0D64..0D65 ; Unknown - "Mlym", # 0D66..0D7F ; Malayalam - "Zzzz", # 0D80..0D80 ; Unknown - "Sinh", # 0D81..0D83 ; Sinhala - "Zzzz", # 0D84..0D84 ; Unknown - "Sinh", # 0D85..0D96 ; Sinhala - "Zzzz", # 0D97..0D99 ; Unknown - "Sinh", # 
0D9A..0DB1 ; Sinhala - "Zzzz", # 0DB2..0DB2 ; Unknown - "Sinh", # 0DB3..0DBB ; Sinhala - "Zzzz", # 0DBC..0DBC ; Unknown - "Sinh", # 0DBD..0DBD ; Sinhala - "Zzzz", # 0DBE..0DBF ; Unknown - "Sinh", # 0DC0..0DC6 ; Sinhala - "Zzzz", # 0DC7..0DC9 ; Unknown - "Sinh", # 0DCA..0DCA ; Sinhala - "Zzzz", # 0DCB..0DCE ; Unknown - "Sinh", # 0DCF..0DD4 ; Sinhala - "Zzzz", # 0DD5..0DD5 ; Unknown - "Sinh", # 0DD6..0DD6 ; Sinhala - "Zzzz", # 0DD7..0DD7 ; Unknown - "Sinh", # 0DD8..0DDF ; Sinhala - "Zzzz", # 0DE0..0DE5 ; Unknown - "Sinh", # 0DE6..0DEF ; Sinhala - "Zzzz", # 0DF0..0DF1 ; Unknown - "Sinh", # 0DF2..0DF4 ; Sinhala - "Zzzz", # 0DF5..0E00 ; Unknown - "Thai", # 0E01..0E3A ; Thai - "Zzzz", # 0E3B..0E3E ; Unknown - "Zyyy", # 0E3F..0E3F ; Common - "Thai", # 0E40..0E5B ; Thai - "Zzzz", # 0E5C..0E80 ; Unknown - "Laoo", # 0E81..0E82 ; Lao - "Zzzz", # 0E83..0E83 ; Unknown - "Laoo", # 0E84..0E84 ; Lao - "Zzzz", # 0E85..0E85 ; Unknown - "Laoo", # 0E86..0E8A ; Lao - "Zzzz", # 0E8B..0E8B ; Unknown - "Laoo", # 0E8C..0EA3 ; Lao - "Zzzz", # 0EA4..0EA4 ; Unknown - "Laoo", # 0EA5..0EA5 ; Lao - "Zzzz", # 0EA6..0EA6 ; Unknown - "Laoo", # 0EA7..0EBD ; Lao - "Zzzz", # 0EBE..0EBF ; Unknown - "Laoo", # 0EC0..0EC4 ; Lao - "Zzzz", # 0EC5..0EC5 ; Unknown - "Laoo", # 0EC6..0EC6 ; Lao - "Zzzz", # 0EC7..0EC7 ; Unknown - "Laoo", # 0EC8..0ECE ; Lao - "Zzzz", # 0ECF..0ECF ; Unknown - "Laoo", # 0ED0..0ED9 ; Lao - "Zzzz", # 0EDA..0EDB ; Unknown - "Laoo", # 0EDC..0EDF ; Lao - "Zzzz", # 0EE0..0EFF ; Unknown - "Tibt", # 0F00..0F47 ; Tibetan - "Zzzz", # 0F48..0F48 ; Unknown - "Tibt", # 0F49..0F6C ; Tibetan - "Zzzz", # 0F6D..0F70 ; Unknown - "Tibt", # 0F71..0F97 ; Tibetan - "Zzzz", # 0F98..0F98 ; Unknown - "Tibt", # 0F99..0FBC ; Tibetan - "Zzzz", # 0FBD..0FBD ; Unknown - "Tibt", # 0FBE..0FCC ; Tibetan - "Zzzz", # 0FCD..0FCD ; Unknown - "Tibt", # 0FCE..0FD4 ; Tibetan - "Zyyy", # 0FD5..0FD8 ; Common - "Tibt", # 0FD9..0FDA ; Tibetan - "Zzzz", # 0FDB..0FFF ; Unknown - "Mymr", # 1000..109F ; Myanmar - "Geor", # 10A0..10C5 ; Georgian - "Zzzz", # 10C6..10C6 ; Unknown - "Geor", # 10C7..10C7 ; Georgian - "Zzzz", # 10C8..10CC ; Unknown - "Geor", # 10CD..10CD ; Georgian - "Zzzz", # 10CE..10CF ; Unknown - "Geor", # 10D0..10FA ; Georgian - "Zyyy", # 10FB..10FB ; Common - "Geor", # 10FC..10FF ; Georgian - "Hang", # 1100..11FF ; Hangul - "Ethi", # 1200..1248 ; Ethiopic - "Zzzz", # 1249..1249 ; Unknown - "Ethi", # 124A..124D ; Ethiopic - "Zzzz", # 124E..124F ; Unknown - "Ethi", # 1250..1256 ; Ethiopic - "Zzzz", # 1257..1257 ; Unknown - "Ethi", # 1258..1258 ; Ethiopic - "Zzzz", # 1259..1259 ; Unknown - "Ethi", # 125A..125D ; Ethiopic - "Zzzz", # 125E..125F ; Unknown - "Ethi", # 1260..1288 ; Ethiopic - "Zzzz", # 1289..1289 ; Unknown - "Ethi", # 128A..128D ; Ethiopic - "Zzzz", # 128E..128F ; Unknown - "Ethi", # 1290..12B0 ; Ethiopic - "Zzzz", # 12B1..12B1 ; Unknown - "Ethi", # 12B2..12B5 ; Ethiopic - "Zzzz", # 12B6..12B7 ; Unknown - "Ethi", # 12B8..12BE ; Ethiopic - "Zzzz", # 12BF..12BF ; Unknown - "Ethi", # 12C0..12C0 ; Ethiopic - "Zzzz", # 12C1..12C1 ; Unknown - "Ethi", # 12C2..12C5 ; Ethiopic - "Zzzz", # 12C6..12C7 ; Unknown - "Ethi", # 12C8..12D6 ; Ethiopic - "Zzzz", # 12D7..12D7 ; Unknown - "Ethi", # 12D8..1310 ; Ethiopic - "Zzzz", # 1311..1311 ; Unknown - "Ethi", # 1312..1315 ; Ethiopic - "Zzzz", # 1316..1317 ; Unknown - "Ethi", # 1318..135A ; Ethiopic - "Zzzz", # 135B..135C ; Unknown - "Ethi", # 135D..137C ; Ethiopic - "Zzzz", # 137D..137F ; Unknown - "Ethi", # 1380..1399 ; Ethiopic - "Zzzz", # 139A..139F ; Unknown - "Cher", # 13A0..13F5 ; Cherokee 
- "Zzzz", # 13F6..13F7 ; Unknown - "Cher", # 13F8..13FD ; Cherokee - "Zzzz", # 13FE..13FF ; Unknown - "Cans", # 1400..167F ; Canadian_Aboriginal - "Ogam", # 1680..169C ; Ogham - "Zzzz", # 169D..169F ; Unknown - "Runr", # 16A0..16EA ; Runic - "Zyyy", # 16EB..16ED ; Common - "Runr", # 16EE..16F8 ; Runic - "Zzzz", # 16F9..16FF ; Unknown - "Tglg", # 1700..1715 ; Tagalog - "Zzzz", # 1716..171E ; Unknown - "Tglg", # 171F..171F ; Tagalog - "Hano", # 1720..1734 ; Hanunoo - "Zyyy", # 1735..1736 ; Common - "Zzzz", # 1737..173F ; Unknown - "Buhd", # 1740..1753 ; Buhid - "Zzzz", # 1754..175F ; Unknown - "Tagb", # 1760..176C ; Tagbanwa - "Zzzz", # 176D..176D ; Unknown - "Tagb", # 176E..1770 ; Tagbanwa - "Zzzz", # 1771..1771 ; Unknown - "Tagb", # 1772..1773 ; Tagbanwa - "Zzzz", # 1774..177F ; Unknown - "Khmr", # 1780..17DD ; Khmer - "Zzzz", # 17DE..17DF ; Unknown - "Khmr", # 17E0..17E9 ; Khmer - "Zzzz", # 17EA..17EF ; Unknown - "Khmr", # 17F0..17F9 ; Khmer - "Zzzz", # 17FA..17FF ; Unknown - "Mong", # 1800..1801 ; Mongolian - "Zyyy", # 1802..1803 ; Common - "Mong", # 1804..1804 ; Mongolian - "Zyyy", # 1805..1805 ; Common - "Mong", # 1806..1819 ; Mongolian - "Zzzz", # 181A..181F ; Unknown - "Mong", # 1820..1878 ; Mongolian - "Zzzz", # 1879..187F ; Unknown - "Mong", # 1880..18AA ; Mongolian - "Zzzz", # 18AB..18AF ; Unknown - "Cans", # 18B0..18F5 ; Canadian_Aboriginal - "Zzzz", # 18F6..18FF ; Unknown - "Limb", # 1900..191E ; Limbu - "Zzzz", # 191F..191F ; Unknown - "Limb", # 1920..192B ; Limbu - "Zzzz", # 192C..192F ; Unknown - "Limb", # 1930..193B ; Limbu - "Zzzz", # 193C..193F ; Unknown - "Limb", # 1940..1940 ; Limbu - "Zzzz", # 1941..1943 ; Unknown - "Limb", # 1944..194F ; Limbu - "Tale", # 1950..196D ; Tai_Le - "Zzzz", # 196E..196F ; Unknown - "Tale", # 1970..1974 ; Tai_Le - "Zzzz", # 1975..197F ; Unknown - "Talu", # 1980..19AB ; New_Tai_Lue - "Zzzz", # 19AC..19AF ; Unknown - "Talu", # 19B0..19C9 ; New_Tai_Lue - "Zzzz", # 19CA..19CF ; Unknown - "Talu", # 19D0..19DA ; New_Tai_Lue - "Zzzz", # 19DB..19DD ; Unknown - "Talu", # 19DE..19DF ; New_Tai_Lue - "Khmr", # 19E0..19FF ; Khmer - "Bugi", # 1A00..1A1B ; Buginese - "Zzzz", # 1A1C..1A1D ; Unknown - "Bugi", # 1A1E..1A1F ; Buginese - "Lana", # 1A20..1A5E ; Tai_Tham - "Zzzz", # 1A5F..1A5F ; Unknown - "Lana", # 1A60..1A7C ; Tai_Tham - "Zzzz", # 1A7D..1A7E ; Unknown - "Lana", # 1A7F..1A89 ; Tai_Tham - "Zzzz", # 1A8A..1A8F ; Unknown - "Lana", # 1A90..1A99 ; Tai_Tham - "Zzzz", # 1A9A..1A9F ; Unknown - "Lana", # 1AA0..1AAD ; Tai_Tham - "Zzzz", # 1AAE..1AAF ; Unknown - "Zinh", # 1AB0..1ACE ; Inherited - "Zzzz", # 1ACF..1AFF ; Unknown - "Bali", # 1B00..1B4C ; Balinese - "Zzzz", # 1B4D..1B4F ; Unknown - "Bali", # 1B50..1B7E ; Balinese - "Zzzz", # 1B7F..1B7F ; Unknown - "Sund", # 1B80..1BBF ; Sundanese - "Batk", # 1BC0..1BF3 ; Batak - "Zzzz", # 1BF4..1BFB ; Unknown - "Batk", # 1BFC..1BFF ; Batak - "Lepc", # 1C00..1C37 ; Lepcha - "Zzzz", # 1C38..1C3A ; Unknown - "Lepc", # 1C3B..1C49 ; Lepcha - "Zzzz", # 1C4A..1C4C ; Unknown - "Lepc", # 1C4D..1C4F ; Lepcha - "Olck", # 1C50..1C7F ; Ol_Chiki - "Cyrl", # 1C80..1C88 ; Cyrillic - "Zzzz", # 1C89..1C8F ; Unknown - "Geor", # 1C90..1CBA ; Georgian - "Zzzz", # 1CBB..1CBC ; Unknown - "Geor", # 1CBD..1CBF ; Georgian - "Sund", # 1CC0..1CC7 ; Sundanese - "Zzzz", # 1CC8..1CCF ; Unknown - "Zinh", # 1CD0..1CD2 ; Inherited - "Zyyy", # 1CD3..1CD3 ; Common - "Zinh", # 1CD4..1CE0 ; Inherited - "Zyyy", # 1CE1..1CE1 ; Common - "Zinh", # 1CE2..1CE8 ; Inherited - "Zyyy", # 1CE9..1CEC ; Common - "Zinh", # 1CED..1CED ; Inherited - "Zyyy", # 
1CEE..1CF3 ; Common - "Zinh", # 1CF4..1CF4 ; Inherited - "Zyyy", # 1CF5..1CF7 ; Common - "Zinh", # 1CF8..1CF9 ; Inherited - "Zyyy", # 1CFA..1CFA ; Common - "Zzzz", # 1CFB..1CFF ; Unknown - "Latn", # 1D00..1D25 ; Latin - "Grek", # 1D26..1D2A ; Greek - "Cyrl", # 1D2B..1D2B ; Cyrillic - "Latn", # 1D2C..1D5C ; Latin - "Grek", # 1D5D..1D61 ; Greek - "Latn", # 1D62..1D65 ; Latin - "Grek", # 1D66..1D6A ; Greek - "Latn", # 1D6B..1D77 ; Latin - "Cyrl", # 1D78..1D78 ; Cyrillic - "Latn", # 1D79..1DBE ; Latin - "Grek", # 1DBF..1DBF ; Greek - "Zinh", # 1DC0..1DFF ; Inherited - "Latn", # 1E00..1EFF ; Latin - "Grek", # 1F00..1F15 ; Greek - "Zzzz", # 1F16..1F17 ; Unknown - "Grek", # 1F18..1F1D ; Greek - "Zzzz", # 1F1E..1F1F ; Unknown - "Grek", # 1F20..1F45 ; Greek - "Zzzz", # 1F46..1F47 ; Unknown - "Grek", # 1F48..1F4D ; Greek - "Zzzz", # 1F4E..1F4F ; Unknown - "Grek", # 1F50..1F57 ; Greek - "Zzzz", # 1F58..1F58 ; Unknown - "Grek", # 1F59..1F59 ; Greek - "Zzzz", # 1F5A..1F5A ; Unknown - "Grek", # 1F5B..1F5B ; Greek - "Zzzz", # 1F5C..1F5C ; Unknown - "Grek", # 1F5D..1F5D ; Greek - "Zzzz", # 1F5E..1F5E ; Unknown - "Grek", # 1F5F..1F7D ; Greek - "Zzzz", # 1F7E..1F7F ; Unknown - "Grek", # 1F80..1FB4 ; Greek - "Zzzz", # 1FB5..1FB5 ; Unknown - "Grek", # 1FB6..1FC4 ; Greek - "Zzzz", # 1FC5..1FC5 ; Unknown - "Grek", # 1FC6..1FD3 ; Greek - "Zzzz", # 1FD4..1FD5 ; Unknown - "Grek", # 1FD6..1FDB ; Greek - "Zzzz", # 1FDC..1FDC ; Unknown - "Grek", # 1FDD..1FEF ; Greek - "Zzzz", # 1FF0..1FF1 ; Unknown - "Grek", # 1FF2..1FF4 ; Greek - "Zzzz", # 1FF5..1FF5 ; Unknown - "Grek", # 1FF6..1FFE ; Greek - "Zzzz", # 1FFF..1FFF ; Unknown - "Zyyy", # 2000..200B ; Common - "Zinh", # 200C..200D ; Inherited - "Zyyy", # 200E..2064 ; Common - "Zzzz", # 2065..2065 ; Unknown - "Zyyy", # 2066..2070 ; Common - "Latn", # 2071..2071 ; Latin - "Zzzz", # 2072..2073 ; Unknown - "Zyyy", # 2074..207E ; Common - "Latn", # 207F..207F ; Latin - "Zyyy", # 2080..208E ; Common - "Zzzz", # 208F..208F ; Unknown - "Latn", # 2090..209C ; Latin - "Zzzz", # 209D..209F ; Unknown - "Zyyy", # 20A0..20C0 ; Common - "Zzzz", # 20C1..20CF ; Unknown - "Zinh", # 20D0..20F0 ; Inherited - "Zzzz", # 20F1..20FF ; Unknown - "Zyyy", # 2100..2125 ; Common - "Grek", # 2126..2126 ; Greek - "Zyyy", # 2127..2129 ; Common - "Latn", # 212A..212B ; Latin - "Zyyy", # 212C..2131 ; Common - "Latn", # 2132..2132 ; Latin - "Zyyy", # 2133..214D ; Common - "Latn", # 214E..214E ; Latin - "Zyyy", # 214F..215F ; Common - "Latn", # 2160..2188 ; Latin - "Zyyy", # 2189..218B ; Common - "Zzzz", # 218C..218F ; Unknown - "Zyyy", # 2190..2426 ; Common - "Zzzz", # 2427..243F ; Unknown - "Zyyy", # 2440..244A ; Common - "Zzzz", # 244B..245F ; Unknown - "Zyyy", # 2460..27FF ; Common - "Brai", # 2800..28FF ; Braille - "Zyyy", # 2900..2B73 ; Common - "Zzzz", # 2B74..2B75 ; Unknown - "Zyyy", # 2B76..2B95 ; Common - "Zzzz", # 2B96..2B96 ; Unknown - "Zyyy", # 2B97..2BFF ; Common - "Glag", # 2C00..2C5F ; Glagolitic - "Latn", # 2C60..2C7F ; Latin - "Copt", # 2C80..2CF3 ; Coptic - "Zzzz", # 2CF4..2CF8 ; Unknown - "Copt", # 2CF9..2CFF ; Coptic - "Geor", # 2D00..2D25 ; Georgian - "Zzzz", # 2D26..2D26 ; Unknown - "Geor", # 2D27..2D27 ; Georgian - "Zzzz", # 2D28..2D2C ; Unknown - "Geor", # 2D2D..2D2D ; Georgian - "Zzzz", # 2D2E..2D2F ; Unknown - "Tfng", # 2D30..2D67 ; Tifinagh - "Zzzz", # 2D68..2D6E ; Unknown - "Tfng", # 2D6F..2D70 ; Tifinagh - "Zzzz", # 2D71..2D7E ; Unknown - "Tfng", # 2D7F..2D7F ; Tifinagh - "Ethi", # 2D80..2D96 ; Ethiopic - "Zzzz", # 2D97..2D9F ; Unknown - "Ethi", # 2DA0..2DA6 ; Ethiopic - 
"Zzzz", # 2DA7..2DA7 ; Unknown - "Ethi", # 2DA8..2DAE ; Ethiopic - "Zzzz", # 2DAF..2DAF ; Unknown - "Ethi", # 2DB0..2DB6 ; Ethiopic - "Zzzz", # 2DB7..2DB7 ; Unknown - "Ethi", # 2DB8..2DBE ; Ethiopic - "Zzzz", # 2DBF..2DBF ; Unknown - "Ethi", # 2DC0..2DC6 ; Ethiopic - "Zzzz", # 2DC7..2DC7 ; Unknown - "Ethi", # 2DC8..2DCE ; Ethiopic - "Zzzz", # 2DCF..2DCF ; Unknown - "Ethi", # 2DD0..2DD6 ; Ethiopic - "Zzzz", # 2DD7..2DD7 ; Unknown - "Ethi", # 2DD8..2DDE ; Ethiopic - "Zzzz", # 2DDF..2DDF ; Unknown - "Cyrl", # 2DE0..2DFF ; Cyrillic - "Zyyy", # 2E00..2E5D ; Common - "Zzzz", # 2E5E..2E7F ; Unknown - "Hani", # 2E80..2E99 ; Han - "Zzzz", # 2E9A..2E9A ; Unknown - "Hani", # 2E9B..2EF3 ; Han - "Zzzz", # 2EF4..2EFF ; Unknown - "Hani", # 2F00..2FD5 ; Han - "Zzzz", # 2FD6..2FEF ; Unknown - "Zyyy", # 2FF0..2FFB ; Common - "Zzzz", # 2FFC..2FFF ; Unknown - "Zyyy", # 3000..3004 ; Common - "Hani", # 3005..3005 ; Han - "Zyyy", # 3006..3006 ; Common - "Hani", # 3007..3007 ; Han - "Zyyy", # 3008..3020 ; Common - "Hani", # 3021..3029 ; Han - "Zinh", # 302A..302D ; Inherited - "Hang", # 302E..302F ; Hangul - "Zyyy", # 3030..3037 ; Common - "Hani", # 3038..303B ; Han - "Zyyy", # 303C..303F ; Common - "Zzzz", # 3040..3040 ; Unknown - "Hira", # 3041..3096 ; Hiragana - "Zzzz", # 3097..3098 ; Unknown - "Zinh", # 3099..309A ; Inherited - "Zyyy", # 309B..309C ; Common - "Hira", # 309D..309F ; Hiragana - "Zyyy", # 30A0..30A0 ; Common - "Kana", # 30A1..30FA ; Katakana - "Zyyy", # 30FB..30FC ; Common - "Kana", # 30FD..30FF ; Katakana - "Zzzz", # 3100..3104 ; Unknown - "Bopo", # 3105..312F ; Bopomofo - "Zzzz", # 3130..3130 ; Unknown - "Hang", # 3131..318E ; Hangul - "Zzzz", # 318F..318F ; Unknown - "Zyyy", # 3190..319F ; Common - "Bopo", # 31A0..31BF ; Bopomofo - "Zyyy", # 31C0..31E3 ; Common - "Zzzz", # 31E4..31EF ; Unknown - "Kana", # 31F0..31FF ; Katakana - "Hang", # 3200..321E ; Hangul - "Zzzz", # 321F..321F ; Unknown - "Zyyy", # 3220..325F ; Common - "Hang", # 3260..327E ; Hangul - "Zyyy", # 327F..32CF ; Common - "Kana", # 32D0..32FE ; Katakana - "Zyyy", # 32FF..32FF ; Common - "Kana", # 3300..3357 ; Katakana - "Zyyy", # 3358..33FF ; Common - "Hani", # 3400..4DBF ; Han - "Zyyy", # 4DC0..4DFF ; Common - "Hani", # 4E00..9FFF ; Han - "Yiii", # A000..A48C ; Yi - "Zzzz", # A48D..A48F ; Unknown - "Yiii", # A490..A4C6 ; Yi - "Zzzz", # A4C7..A4CF ; Unknown - "Lisu", # A4D0..A4FF ; Lisu - "Vaii", # A500..A62B ; Vai - "Zzzz", # A62C..A63F ; Unknown - "Cyrl", # A640..A69F ; Cyrillic - "Bamu", # A6A0..A6F7 ; Bamum - "Zzzz", # A6F8..A6FF ; Unknown - "Zyyy", # A700..A721 ; Common - "Latn", # A722..A787 ; Latin - "Zyyy", # A788..A78A ; Common - "Latn", # A78B..A7CA ; Latin - "Zzzz", # A7CB..A7CF ; Unknown - "Latn", # A7D0..A7D1 ; Latin - "Zzzz", # A7D2..A7D2 ; Unknown - "Latn", # A7D3..A7D3 ; Latin - "Zzzz", # A7D4..A7D4 ; Unknown - "Latn", # A7D5..A7D9 ; Latin - "Zzzz", # A7DA..A7F1 ; Unknown - "Latn", # A7F2..A7FF ; Latin - "Sylo", # A800..A82C ; Syloti_Nagri - "Zzzz", # A82D..A82F ; Unknown - "Zyyy", # A830..A839 ; Common - "Zzzz", # A83A..A83F ; Unknown - "Phag", # A840..A877 ; Phags_Pa - "Zzzz", # A878..A87F ; Unknown - "Saur", # A880..A8C5 ; Saurashtra - "Zzzz", # A8C6..A8CD ; Unknown - "Saur", # A8CE..A8D9 ; Saurashtra - "Zzzz", # A8DA..A8DF ; Unknown - "Deva", # A8E0..A8FF ; Devanagari - "Kali", # A900..A92D ; Kayah_Li - "Zyyy", # A92E..A92E ; Common - "Kali", # A92F..A92F ; Kayah_Li - "Rjng", # A930..A953 ; Rejang - "Zzzz", # A954..A95E ; Unknown - "Rjng", # A95F..A95F ; Rejang - "Hang", # A960..A97C ; Hangul - "Zzzz", # 
A97D..A97F ; Unknown - "Java", # A980..A9CD ; Javanese - "Zzzz", # A9CE..A9CE ; Unknown - "Zyyy", # A9CF..A9CF ; Common - "Java", # A9D0..A9D9 ; Javanese - "Zzzz", # A9DA..A9DD ; Unknown - "Java", # A9DE..A9DF ; Javanese - "Mymr", # A9E0..A9FE ; Myanmar - "Zzzz", # A9FF..A9FF ; Unknown - "Cham", # AA00..AA36 ; Cham - "Zzzz", # AA37..AA3F ; Unknown - "Cham", # AA40..AA4D ; Cham - "Zzzz", # AA4E..AA4F ; Unknown - "Cham", # AA50..AA59 ; Cham - "Zzzz", # AA5A..AA5B ; Unknown - "Cham", # AA5C..AA5F ; Cham - "Mymr", # AA60..AA7F ; Myanmar - "Tavt", # AA80..AAC2 ; Tai_Viet - "Zzzz", # AAC3..AADA ; Unknown - "Tavt", # AADB..AADF ; Tai_Viet - "Mtei", # AAE0..AAF6 ; Meetei_Mayek - "Zzzz", # AAF7..AB00 ; Unknown - "Ethi", # AB01..AB06 ; Ethiopic - "Zzzz", # AB07..AB08 ; Unknown - "Ethi", # AB09..AB0E ; Ethiopic - "Zzzz", # AB0F..AB10 ; Unknown - "Ethi", # AB11..AB16 ; Ethiopic - "Zzzz", # AB17..AB1F ; Unknown - "Ethi", # AB20..AB26 ; Ethiopic - "Zzzz", # AB27..AB27 ; Unknown - "Ethi", # AB28..AB2E ; Ethiopic - "Zzzz", # AB2F..AB2F ; Unknown - "Latn", # AB30..AB5A ; Latin - "Zyyy", # AB5B..AB5B ; Common - "Latn", # AB5C..AB64 ; Latin - "Grek", # AB65..AB65 ; Greek - "Latn", # AB66..AB69 ; Latin - "Zyyy", # AB6A..AB6B ; Common - "Zzzz", # AB6C..AB6F ; Unknown - "Cher", # AB70..ABBF ; Cherokee - "Mtei", # ABC0..ABED ; Meetei_Mayek - "Zzzz", # ABEE..ABEF ; Unknown - "Mtei", # ABF0..ABF9 ; Meetei_Mayek - "Zzzz", # ABFA..ABFF ; Unknown - "Hang", # AC00..D7A3 ; Hangul - "Zzzz", # D7A4..D7AF ; Unknown - "Hang", # D7B0..D7C6 ; Hangul - "Zzzz", # D7C7..D7CA ; Unknown - "Hang", # D7CB..D7FB ; Hangul - "Zzzz", # D7FC..F8FF ; Unknown - "Hani", # F900..FA6D ; Han - "Zzzz", # FA6E..FA6F ; Unknown - "Hani", # FA70..FAD9 ; Han - "Zzzz", # FADA..FAFF ; Unknown - "Latn", # FB00..FB06 ; Latin - "Zzzz", # FB07..FB12 ; Unknown - "Armn", # FB13..FB17 ; Armenian - "Zzzz", # FB18..FB1C ; Unknown - "Hebr", # FB1D..FB36 ; Hebrew - "Zzzz", # FB37..FB37 ; Unknown - "Hebr", # FB38..FB3C ; Hebrew - "Zzzz", # FB3D..FB3D ; Unknown - "Hebr", # FB3E..FB3E ; Hebrew - "Zzzz", # FB3F..FB3F ; Unknown - "Hebr", # FB40..FB41 ; Hebrew - "Zzzz", # FB42..FB42 ; Unknown - "Hebr", # FB43..FB44 ; Hebrew - "Zzzz", # FB45..FB45 ; Unknown - "Hebr", # FB46..FB4F ; Hebrew - "Arab", # FB50..FBC2 ; Arabic - "Zzzz", # FBC3..FBD2 ; Unknown - "Arab", # FBD3..FD3D ; Arabic - "Zyyy", # FD3E..FD3F ; Common - "Arab", # FD40..FD8F ; Arabic - "Zzzz", # FD90..FD91 ; Unknown - "Arab", # FD92..FDC7 ; Arabic - "Zzzz", # FDC8..FDCE ; Unknown - "Arab", # FDCF..FDCF ; Arabic - "Zzzz", # FDD0..FDEF ; Unknown - "Arab", # FDF0..FDFF ; Arabic - "Zinh", # FE00..FE0F ; Inherited - "Zyyy", # FE10..FE19 ; Common - "Zzzz", # FE1A..FE1F ; Unknown - "Zinh", # FE20..FE2D ; Inherited - "Cyrl", # FE2E..FE2F ; Cyrillic - "Zyyy", # FE30..FE52 ; Common - "Zzzz", # FE53..FE53 ; Unknown - "Zyyy", # FE54..FE66 ; Common - "Zzzz", # FE67..FE67 ; Unknown - "Zyyy", # FE68..FE6B ; Common - "Zzzz", # FE6C..FE6F ; Unknown - "Arab", # FE70..FE74 ; Arabic - "Zzzz", # FE75..FE75 ; Unknown - "Arab", # FE76..FEFC ; Arabic - "Zzzz", # FEFD..FEFE ; Unknown - "Zyyy", # FEFF..FEFF ; Common - "Zzzz", # FF00..FF00 ; Unknown - "Zyyy", # FF01..FF20 ; Common - "Latn", # FF21..FF3A ; Latin - "Zyyy", # FF3B..FF40 ; Common - "Latn", # FF41..FF5A ; Latin - "Zyyy", # FF5B..FF65 ; Common - "Kana", # FF66..FF6F ; Katakana - "Zyyy", # FF70..FF70 ; Common - "Kana", # FF71..FF9D ; Katakana - "Zyyy", # FF9E..FF9F ; Common - "Hang", # FFA0..FFBE ; Hangul - "Zzzz", # FFBF..FFC1 ; Unknown - "Hang", # FFC2..FFC7 ; Hangul - 
"Zzzz", # FFC8..FFC9 ; Unknown - "Hang", # FFCA..FFCF ; Hangul - "Zzzz", # FFD0..FFD1 ; Unknown - "Hang", # FFD2..FFD7 ; Hangul - "Zzzz", # FFD8..FFD9 ; Unknown - "Hang", # FFDA..FFDC ; Hangul - "Zzzz", # FFDD..FFDF ; Unknown - "Zyyy", # FFE0..FFE6 ; Common - "Zzzz", # FFE7..FFE7 ; Unknown - "Zyyy", # FFE8..FFEE ; Common - "Zzzz", # FFEF..FFF8 ; Unknown - "Zyyy", # FFF9..FFFD ; Common - "Zzzz", # FFFE..FFFF ; Unknown - "Linb", # 10000..1000B ; Linear_B - "Zzzz", # 1000C..1000C ; Unknown - "Linb", # 1000D..10026 ; Linear_B - "Zzzz", # 10027..10027 ; Unknown - "Linb", # 10028..1003A ; Linear_B - "Zzzz", # 1003B..1003B ; Unknown - "Linb", # 1003C..1003D ; Linear_B - "Zzzz", # 1003E..1003E ; Unknown - "Linb", # 1003F..1004D ; Linear_B - "Zzzz", # 1004E..1004F ; Unknown - "Linb", # 10050..1005D ; Linear_B - "Zzzz", # 1005E..1007F ; Unknown - "Linb", # 10080..100FA ; Linear_B - "Zzzz", # 100FB..100FF ; Unknown - "Zyyy", # 10100..10102 ; Common - "Zzzz", # 10103..10106 ; Unknown - "Zyyy", # 10107..10133 ; Common - "Zzzz", # 10134..10136 ; Unknown - "Zyyy", # 10137..1013F ; Common - "Grek", # 10140..1018E ; Greek - "Zzzz", # 1018F..1018F ; Unknown - "Zyyy", # 10190..1019C ; Common - "Zzzz", # 1019D..1019F ; Unknown - "Grek", # 101A0..101A0 ; Greek - "Zzzz", # 101A1..101CF ; Unknown - "Zyyy", # 101D0..101FC ; Common - "Zinh", # 101FD..101FD ; Inherited - "Zzzz", # 101FE..1027F ; Unknown - "Lyci", # 10280..1029C ; Lycian - "Zzzz", # 1029D..1029F ; Unknown - "Cari", # 102A0..102D0 ; Carian - "Zzzz", # 102D1..102DF ; Unknown - "Zinh", # 102E0..102E0 ; Inherited - "Zyyy", # 102E1..102FB ; Common - "Zzzz", # 102FC..102FF ; Unknown - "Ital", # 10300..10323 ; Old_Italic - "Zzzz", # 10324..1032C ; Unknown - "Ital", # 1032D..1032F ; Old_Italic - "Goth", # 10330..1034A ; Gothic - "Zzzz", # 1034B..1034F ; Unknown - "Perm", # 10350..1037A ; Old_Permic - "Zzzz", # 1037B..1037F ; Unknown - "Ugar", # 10380..1039D ; Ugaritic - "Zzzz", # 1039E..1039E ; Unknown - "Ugar", # 1039F..1039F ; Ugaritic - "Xpeo", # 103A0..103C3 ; Old_Persian - "Zzzz", # 103C4..103C7 ; Unknown - "Xpeo", # 103C8..103D5 ; Old_Persian - "Zzzz", # 103D6..103FF ; Unknown - "Dsrt", # 10400..1044F ; Deseret - "Shaw", # 10450..1047F ; Shavian - "Osma", # 10480..1049D ; Osmanya - "Zzzz", # 1049E..1049F ; Unknown - "Osma", # 104A0..104A9 ; Osmanya - "Zzzz", # 104AA..104AF ; Unknown - "Osge", # 104B0..104D3 ; Osage - "Zzzz", # 104D4..104D7 ; Unknown - "Osge", # 104D8..104FB ; Osage - "Zzzz", # 104FC..104FF ; Unknown - "Elba", # 10500..10527 ; Elbasan - "Zzzz", # 10528..1052F ; Unknown - "Aghb", # 10530..10563 ; Caucasian_Albanian - "Zzzz", # 10564..1056E ; Unknown - "Aghb", # 1056F..1056F ; Caucasian_Albanian - "Vith", # 10570..1057A ; Vithkuqi - "Zzzz", # 1057B..1057B ; Unknown - "Vith", # 1057C..1058A ; Vithkuqi - "Zzzz", # 1058B..1058B ; Unknown - "Vith", # 1058C..10592 ; Vithkuqi - "Zzzz", # 10593..10593 ; Unknown - "Vith", # 10594..10595 ; Vithkuqi - "Zzzz", # 10596..10596 ; Unknown - "Vith", # 10597..105A1 ; Vithkuqi - "Zzzz", # 105A2..105A2 ; Unknown - "Vith", # 105A3..105B1 ; Vithkuqi - "Zzzz", # 105B2..105B2 ; Unknown - "Vith", # 105B3..105B9 ; Vithkuqi - "Zzzz", # 105BA..105BA ; Unknown - "Vith", # 105BB..105BC ; Vithkuqi - "Zzzz", # 105BD..105FF ; Unknown - "Lina", # 10600..10736 ; Linear_A - "Zzzz", # 10737..1073F ; Unknown - "Lina", # 10740..10755 ; Linear_A - "Zzzz", # 10756..1075F ; Unknown - "Lina", # 10760..10767 ; Linear_A - "Zzzz", # 10768..1077F ; Unknown - "Latn", # 10780..10785 ; Latin - "Zzzz", # 10786..10786 ; Unknown - "Latn", 
# 10787..107B0 ; Latin - "Zzzz", # 107B1..107B1 ; Unknown - "Latn", # 107B2..107BA ; Latin - "Zzzz", # 107BB..107FF ; Unknown - "Cprt", # 10800..10805 ; Cypriot - "Zzzz", # 10806..10807 ; Unknown - "Cprt", # 10808..10808 ; Cypriot - "Zzzz", # 10809..10809 ; Unknown - "Cprt", # 1080A..10835 ; Cypriot - "Zzzz", # 10836..10836 ; Unknown - "Cprt", # 10837..10838 ; Cypriot - "Zzzz", # 10839..1083B ; Unknown - "Cprt", # 1083C..1083C ; Cypriot - "Zzzz", # 1083D..1083E ; Unknown - "Cprt", # 1083F..1083F ; Cypriot - "Armi", # 10840..10855 ; Imperial_Aramaic - "Zzzz", # 10856..10856 ; Unknown - "Armi", # 10857..1085F ; Imperial_Aramaic - "Palm", # 10860..1087F ; Palmyrene - "Nbat", # 10880..1089E ; Nabataean - "Zzzz", # 1089F..108A6 ; Unknown - "Nbat", # 108A7..108AF ; Nabataean - "Zzzz", # 108B0..108DF ; Unknown - "Hatr", # 108E0..108F2 ; Hatran - "Zzzz", # 108F3..108F3 ; Unknown - "Hatr", # 108F4..108F5 ; Hatran - "Zzzz", # 108F6..108FA ; Unknown - "Hatr", # 108FB..108FF ; Hatran - "Phnx", # 10900..1091B ; Phoenician - "Zzzz", # 1091C..1091E ; Unknown - "Phnx", # 1091F..1091F ; Phoenician - "Lydi", # 10920..10939 ; Lydian - "Zzzz", # 1093A..1093E ; Unknown - "Lydi", # 1093F..1093F ; Lydian - "Zzzz", # 10940..1097F ; Unknown - "Mero", # 10980..1099F ; Meroitic_Hieroglyphs - "Merc", # 109A0..109B7 ; Meroitic_Cursive - "Zzzz", # 109B8..109BB ; Unknown - "Merc", # 109BC..109CF ; Meroitic_Cursive - "Zzzz", # 109D0..109D1 ; Unknown - "Merc", # 109D2..109FF ; Meroitic_Cursive - "Khar", # 10A00..10A03 ; Kharoshthi - "Zzzz", # 10A04..10A04 ; Unknown - "Khar", # 10A05..10A06 ; Kharoshthi - "Zzzz", # 10A07..10A0B ; Unknown - "Khar", # 10A0C..10A13 ; Kharoshthi - "Zzzz", # 10A14..10A14 ; Unknown - "Khar", # 10A15..10A17 ; Kharoshthi - "Zzzz", # 10A18..10A18 ; Unknown - "Khar", # 10A19..10A35 ; Kharoshthi - "Zzzz", # 10A36..10A37 ; Unknown - "Khar", # 10A38..10A3A ; Kharoshthi - "Zzzz", # 10A3B..10A3E ; Unknown - "Khar", # 10A3F..10A48 ; Kharoshthi - "Zzzz", # 10A49..10A4F ; Unknown - "Khar", # 10A50..10A58 ; Kharoshthi - "Zzzz", # 10A59..10A5F ; Unknown - "Sarb", # 10A60..10A7F ; Old_South_Arabian - "Narb", # 10A80..10A9F ; Old_North_Arabian - "Zzzz", # 10AA0..10ABF ; Unknown - "Mani", # 10AC0..10AE6 ; Manichaean - "Zzzz", # 10AE7..10AEA ; Unknown - "Mani", # 10AEB..10AF6 ; Manichaean - "Zzzz", # 10AF7..10AFF ; Unknown - "Avst", # 10B00..10B35 ; Avestan - "Zzzz", # 10B36..10B38 ; Unknown - "Avst", # 10B39..10B3F ; Avestan - "Prti", # 10B40..10B55 ; Inscriptional_Parthian - "Zzzz", # 10B56..10B57 ; Unknown - "Prti", # 10B58..10B5F ; Inscriptional_Parthian - "Phli", # 10B60..10B72 ; Inscriptional_Pahlavi - "Zzzz", # 10B73..10B77 ; Unknown - "Phli", # 10B78..10B7F ; Inscriptional_Pahlavi - "Phlp", # 10B80..10B91 ; Psalter_Pahlavi - "Zzzz", # 10B92..10B98 ; Unknown - "Phlp", # 10B99..10B9C ; Psalter_Pahlavi - "Zzzz", # 10B9D..10BA8 ; Unknown - "Phlp", # 10BA9..10BAF ; Psalter_Pahlavi - "Zzzz", # 10BB0..10BFF ; Unknown - "Orkh", # 10C00..10C48 ; Old_Turkic - "Zzzz", # 10C49..10C7F ; Unknown - "Hung", # 10C80..10CB2 ; Old_Hungarian - "Zzzz", # 10CB3..10CBF ; Unknown - "Hung", # 10CC0..10CF2 ; Old_Hungarian - "Zzzz", # 10CF3..10CF9 ; Unknown - "Hung", # 10CFA..10CFF ; Old_Hungarian - "Rohg", # 10D00..10D27 ; Hanifi_Rohingya - "Zzzz", # 10D28..10D2F ; Unknown - "Rohg", # 10D30..10D39 ; Hanifi_Rohingya - "Zzzz", # 10D3A..10E5F ; Unknown - "Arab", # 10E60..10E7E ; Arabic - "Zzzz", # 10E7F..10E7F ; Unknown - "Yezi", # 10E80..10EA9 ; Yezidi - "Zzzz", # 10EAA..10EAA ; Unknown - "Yezi", # 10EAB..10EAD ; Yezidi - "Zzzz", # 
10EAE..10EAF ; Unknown - "Yezi", # 10EB0..10EB1 ; Yezidi - "Zzzz", # 10EB2..10EFC ; Unknown - "Arab", # 10EFD..10EFF ; Arabic - "Sogo", # 10F00..10F27 ; Old_Sogdian - "Zzzz", # 10F28..10F2F ; Unknown - "Sogd", # 10F30..10F59 ; Sogdian - "Zzzz", # 10F5A..10F6F ; Unknown - "Ougr", # 10F70..10F89 ; Old_Uyghur - "Zzzz", # 10F8A..10FAF ; Unknown - "Chrs", # 10FB0..10FCB ; Chorasmian - "Zzzz", # 10FCC..10FDF ; Unknown - "Elym", # 10FE0..10FF6 ; Elymaic - "Zzzz", # 10FF7..10FFF ; Unknown - "Brah", # 11000..1104D ; Brahmi - "Zzzz", # 1104E..11051 ; Unknown - "Brah", # 11052..11075 ; Brahmi - "Zzzz", # 11076..1107E ; Unknown - "Brah", # 1107F..1107F ; Brahmi - "Kthi", # 11080..110C2 ; Kaithi - "Zzzz", # 110C3..110CC ; Unknown - "Kthi", # 110CD..110CD ; Kaithi - "Zzzz", # 110CE..110CF ; Unknown - "Sora", # 110D0..110E8 ; Sora_Sompeng - "Zzzz", # 110E9..110EF ; Unknown - "Sora", # 110F0..110F9 ; Sora_Sompeng - "Zzzz", # 110FA..110FF ; Unknown - "Cakm", # 11100..11134 ; Chakma - "Zzzz", # 11135..11135 ; Unknown - "Cakm", # 11136..11147 ; Chakma - "Zzzz", # 11148..1114F ; Unknown - "Mahj", # 11150..11176 ; Mahajani - "Zzzz", # 11177..1117F ; Unknown - "Shrd", # 11180..111DF ; Sharada - "Zzzz", # 111E0..111E0 ; Unknown - "Sinh", # 111E1..111F4 ; Sinhala - "Zzzz", # 111F5..111FF ; Unknown - "Khoj", # 11200..11211 ; Khojki - "Zzzz", # 11212..11212 ; Unknown - "Khoj", # 11213..11241 ; Khojki - "Zzzz", # 11242..1127F ; Unknown - "Mult", # 11280..11286 ; Multani - "Zzzz", # 11287..11287 ; Unknown - "Mult", # 11288..11288 ; Multani - "Zzzz", # 11289..11289 ; Unknown - "Mult", # 1128A..1128D ; Multani - "Zzzz", # 1128E..1128E ; Unknown - "Mult", # 1128F..1129D ; Multani - "Zzzz", # 1129E..1129E ; Unknown - "Mult", # 1129F..112A9 ; Multani - "Zzzz", # 112AA..112AF ; Unknown - "Sind", # 112B0..112EA ; Khudawadi - "Zzzz", # 112EB..112EF ; Unknown - "Sind", # 112F0..112F9 ; Khudawadi - "Zzzz", # 112FA..112FF ; Unknown - "Gran", # 11300..11303 ; Grantha - "Zzzz", # 11304..11304 ; Unknown - "Gran", # 11305..1130C ; Grantha - "Zzzz", # 1130D..1130E ; Unknown - "Gran", # 1130F..11310 ; Grantha - "Zzzz", # 11311..11312 ; Unknown - "Gran", # 11313..11328 ; Grantha - "Zzzz", # 11329..11329 ; Unknown - "Gran", # 1132A..11330 ; Grantha - "Zzzz", # 11331..11331 ; Unknown - "Gran", # 11332..11333 ; Grantha - "Zzzz", # 11334..11334 ; Unknown - "Gran", # 11335..11339 ; Grantha - "Zzzz", # 1133A..1133A ; Unknown - "Zinh", # 1133B..1133B ; Inherited - "Gran", # 1133C..11344 ; Grantha - "Zzzz", # 11345..11346 ; Unknown - "Gran", # 11347..11348 ; Grantha - "Zzzz", # 11349..1134A ; Unknown - "Gran", # 1134B..1134D ; Grantha - "Zzzz", # 1134E..1134F ; Unknown - "Gran", # 11350..11350 ; Grantha - "Zzzz", # 11351..11356 ; Unknown - "Gran", # 11357..11357 ; Grantha - "Zzzz", # 11358..1135C ; Unknown - "Gran", # 1135D..11363 ; Grantha - "Zzzz", # 11364..11365 ; Unknown - "Gran", # 11366..1136C ; Grantha - "Zzzz", # 1136D..1136F ; Unknown - "Gran", # 11370..11374 ; Grantha - "Zzzz", # 11375..113FF ; Unknown - "Newa", # 11400..1145B ; Newa - "Zzzz", # 1145C..1145C ; Unknown - "Newa", # 1145D..11461 ; Newa - "Zzzz", # 11462..1147F ; Unknown - "Tirh", # 11480..114C7 ; Tirhuta - "Zzzz", # 114C8..114CF ; Unknown - "Tirh", # 114D0..114D9 ; Tirhuta - "Zzzz", # 114DA..1157F ; Unknown - "Sidd", # 11580..115B5 ; Siddham - "Zzzz", # 115B6..115B7 ; Unknown - "Sidd", # 115B8..115DD ; Siddham - "Zzzz", # 115DE..115FF ; Unknown - "Modi", # 11600..11644 ; Modi - "Zzzz", # 11645..1164F ; Unknown - "Modi", # 11650..11659 ; Modi - "Zzzz", # 1165A..1165F ; 
Unknown - "Mong", # 11660..1166C ; Mongolian - "Zzzz", # 1166D..1167F ; Unknown - "Takr", # 11680..116B9 ; Takri - "Zzzz", # 116BA..116BF ; Unknown - "Takr", # 116C0..116C9 ; Takri - "Zzzz", # 116CA..116FF ; Unknown - "Ahom", # 11700..1171A ; Ahom - "Zzzz", # 1171B..1171C ; Unknown - "Ahom", # 1171D..1172B ; Ahom - "Zzzz", # 1172C..1172F ; Unknown - "Ahom", # 11730..11746 ; Ahom - "Zzzz", # 11747..117FF ; Unknown - "Dogr", # 11800..1183B ; Dogra - "Zzzz", # 1183C..1189F ; Unknown - "Wara", # 118A0..118F2 ; Warang_Citi - "Zzzz", # 118F3..118FE ; Unknown - "Wara", # 118FF..118FF ; Warang_Citi - "Diak", # 11900..11906 ; Dives_Akuru - "Zzzz", # 11907..11908 ; Unknown - "Diak", # 11909..11909 ; Dives_Akuru - "Zzzz", # 1190A..1190B ; Unknown - "Diak", # 1190C..11913 ; Dives_Akuru - "Zzzz", # 11914..11914 ; Unknown - "Diak", # 11915..11916 ; Dives_Akuru - "Zzzz", # 11917..11917 ; Unknown - "Diak", # 11918..11935 ; Dives_Akuru - "Zzzz", # 11936..11936 ; Unknown - "Diak", # 11937..11938 ; Dives_Akuru - "Zzzz", # 11939..1193A ; Unknown - "Diak", # 1193B..11946 ; Dives_Akuru - "Zzzz", # 11947..1194F ; Unknown - "Diak", # 11950..11959 ; Dives_Akuru - "Zzzz", # 1195A..1199F ; Unknown - "Nand", # 119A0..119A7 ; Nandinagari - "Zzzz", # 119A8..119A9 ; Unknown - "Nand", # 119AA..119D7 ; Nandinagari - "Zzzz", # 119D8..119D9 ; Unknown - "Nand", # 119DA..119E4 ; Nandinagari - "Zzzz", # 119E5..119FF ; Unknown - "Zanb", # 11A00..11A47 ; Zanabazar_Square - "Zzzz", # 11A48..11A4F ; Unknown - "Soyo", # 11A50..11AA2 ; Soyombo - "Zzzz", # 11AA3..11AAF ; Unknown - "Cans", # 11AB0..11ABF ; Canadian_Aboriginal - "Pauc", # 11AC0..11AF8 ; Pau_Cin_Hau - "Zzzz", # 11AF9..11AFF ; Unknown - "Deva", # 11B00..11B09 ; Devanagari - "Zzzz", # 11B0A..11BFF ; Unknown - "Bhks", # 11C00..11C08 ; Bhaiksuki - "Zzzz", # 11C09..11C09 ; Unknown - "Bhks", # 11C0A..11C36 ; Bhaiksuki - "Zzzz", # 11C37..11C37 ; Unknown - "Bhks", # 11C38..11C45 ; Bhaiksuki - "Zzzz", # 11C46..11C4F ; Unknown - "Bhks", # 11C50..11C6C ; Bhaiksuki - "Zzzz", # 11C6D..11C6F ; Unknown - "Marc", # 11C70..11C8F ; Marchen - "Zzzz", # 11C90..11C91 ; Unknown - "Marc", # 11C92..11CA7 ; Marchen - "Zzzz", # 11CA8..11CA8 ; Unknown - "Marc", # 11CA9..11CB6 ; Marchen - "Zzzz", # 11CB7..11CFF ; Unknown - "Gonm", # 11D00..11D06 ; Masaram_Gondi - "Zzzz", # 11D07..11D07 ; Unknown - "Gonm", # 11D08..11D09 ; Masaram_Gondi - "Zzzz", # 11D0A..11D0A ; Unknown - "Gonm", # 11D0B..11D36 ; Masaram_Gondi - "Zzzz", # 11D37..11D39 ; Unknown - "Gonm", # 11D3A..11D3A ; Masaram_Gondi - "Zzzz", # 11D3B..11D3B ; Unknown - "Gonm", # 11D3C..11D3D ; Masaram_Gondi - "Zzzz", # 11D3E..11D3E ; Unknown - "Gonm", # 11D3F..11D47 ; Masaram_Gondi - "Zzzz", # 11D48..11D4F ; Unknown - "Gonm", # 11D50..11D59 ; Masaram_Gondi - "Zzzz", # 11D5A..11D5F ; Unknown - "Gong", # 11D60..11D65 ; Gunjala_Gondi - "Zzzz", # 11D66..11D66 ; Unknown - "Gong", # 11D67..11D68 ; Gunjala_Gondi - "Zzzz", # 11D69..11D69 ; Unknown - "Gong", # 11D6A..11D8E ; Gunjala_Gondi - "Zzzz", # 11D8F..11D8F ; Unknown - "Gong", # 11D90..11D91 ; Gunjala_Gondi - "Zzzz", # 11D92..11D92 ; Unknown - "Gong", # 11D93..11D98 ; Gunjala_Gondi - "Zzzz", # 11D99..11D9F ; Unknown - "Gong", # 11DA0..11DA9 ; Gunjala_Gondi - "Zzzz", # 11DAA..11EDF ; Unknown - "Maka", # 11EE0..11EF8 ; Makasar - "Zzzz", # 11EF9..11EFF ; Unknown - "Kawi", # 11F00..11F10 ; Kawi - "Zzzz", # 11F11..11F11 ; Unknown - "Kawi", # 11F12..11F3A ; Kawi - "Zzzz", # 11F3B..11F3D ; Unknown - "Kawi", # 11F3E..11F59 ; Kawi - "Zzzz", # 11F5A..11FAF ; Unknown - "Lisu", # 11FB0..11FB0 ; Lisu - "Zzzz", 
# 11FB1..11FBF ; Unknown - "Taml", # 11FC0..11FF1 ; Tamil - "Zzzz", # 11FF2..11FFE ; Unknown - "Taml", # 11FFF..11FFF ; Tamil - "Xsux", # 12000..12399 ; Cuneiform - "Zzzz", # 1239A..123FF ; Unknown - "Xsux", # 12400..1246E ; Cuneiform - "Zzzz", # 1246F..1246F ; Unknown - "Xsux", # 12470..12474 ; Cuneiform - "Zzzz", # 12475..1247F ; Unknown - "Xsux", # 12480..12543 ; Cuneiform - "Zzzz", # 12544..12F8F ; Unknown - "Cpmn", # 12F90..12FF2 ; Cypro_Minoan - "Zzzz", # 12FF3..12FFF ; Unknown - "Egyp", # 13000..13455 ; Egyptian_Hieroglyphs - "Zzzz", # 13456..143FF ; Unknown - "Hluw", # 14400..14646 ; Anatolian_Hieroglyphs - "Zzzz", # 14647..167FF ; Unknown - "Bamu", # 16800..16A38 ; Bamum - "Zzzz", # 16A39..16A3F ; Unknown - "Mroo", # 16A40..16A5E ; Mro - "Zzzz", # 16A5F..16A5F ; Unknown - "Mroo", # 16A60..16A69 ; Mro - "Zzzz", # 16A6A..16A6D ; Unknown - "Mroo", # 16A6E..16A6F ; Mro - "Tnsa", # 16A70..16ABE ; Tangsa - "Zzzz", # 16ABF..16ABF ; Unknown - "Tnsa", # 16AC0..16AC9 ; Tangsa - "Zzzz", # 16ACA..16ACF ; Unknown - "Bass", # 16AD0..16AED ; Bassa_Vah - "Zzzz", # 16AEE..16AEF ; Unknown - "Bass", # 16AF0..16AF5 ; Bassa_Vah - "Zzzz", # 16AF6..16AFF ; Unknown - "Hmng", # 16B00..16B45 ; Pahawh_Hmong - "Zzzz", # 16B46..16B4F ; Unknown - "Hmng", # 16B50..16B59 ; Pahawh_Hmong - "Zzzz", # 16B5A..16B5A ; Unknown - "Hmng", # 16B5B..16B61 ; Pahawh_Hmong - "Zzzz", # 16B62..16B62 ; Unknown - "Hmng", # 16B63..16B77 ; Pahawh_Hmong - "Zzzz", # 16B78..16B7C ; Unknown - "Hmng", # 16B7D..16B8F ; Pahawh_Hmong - "Zzzz", # 16B90..16E3F ; Unknown - "Medf", # 16E40..16E9A ; Medefaidrin - "Zzzz", # 16E9B..16EFF ; Unknown - "Plrd", # 16F00..16F4A ; Miao - "Zzzz", # 16F4B..16F4E ; Unknown - "Plrd", # 16F4F..16F87 ; Miao - "Zzzz", # 16F88..16F8E ; Unknown - "Plrd", # 16F8F..16F9F ; Miao - "Zzzz", # 16FA0..16FDF ; Unknown - "Tang", # 16FE0..16FE0 ; Tangut - "Nshu", # 16FE1..16FE1 ; Nushu - "Hani", # 16FE2..16FE3 ; Han - "Kits", # 16FE4..16FE4 ; Khitan_Small_Script - "Zzzz", # 16FE5..16FEF ; Unknown - "Hani", # 16FF0..16FF1 ; Han - "Zzzz", # 16FF2..16FFF ; Unknown - "Tang", # 17000..187F7 ; Tangut - "Zzzz", # 187F8..187FF ; Unknown - "Tang", # 18800..18AFF ; Tangut - "Kits", # 18B00..18CD5 ; Khitan_Small_Script - "Zzzz", # 18CD6..18CFF ; Unknown - "Tang", # 18D00..18D08 ; Tangut - "Zzzz", # 18D09..1AFEF ; Unknown - "Kana", # 1AFF0..1AFF3 ; Katakana - "Zzzz", # 1AFF4..1AFF4 ; Unknown - "Kana", # 1AFF5..1AFFB ; Katakana - "Zzzz", # 1AFFC..1AFFC ; Unknown - "Kana", # 1AFFD..1AFFE ; Katakana - "Zzzz", # 1AFFF..1AFFF ; Unknown - "Kana", # 1B000..1B000 ; Katakana - "Hira", # 1B001..1B11F ; Hiragana - "Kana", # 1B120..1B122 ; Katakana - "Zzzz", # 1B123..1B131 ; Unknown - "Hira", # 1B132..1B132 ; Hiragana - "Zzzz", # 1B133..1B14F ; Unknown - "Hira", # 1B150..1B152 ; Hiragana - "Zzzz", # 1B153..1B154 ; Unknown - "Kana", # 1B155..1B155 ; Katakana - "Zzzz", # 1B156..1B163 ; Unknown - "Kana", # 1B164..1B167 ; Katakana - "Zzzz", # 1B168..1B16F ; Unknown - "Nshu", # 1B170..1B2FB ; Nushu - "Zzzz", # 1B2FC..1BBFF ; Unknown - "Dupl", # 1BC00..1BC6A ; Duployan - "Zzzz", # 1BC6B..1BC6F ; Unknown - "Dupl", # 1BC70..1BC7C ; Duployan - "Zzzz", # 1BC7D..1BC7F ; Unknown - "Dupl", # 1BC80..1BC88 ; Duployan - "Zzzz", # 1BC89..1BC8F ; Unknown - "Dupl", # 1BC90..1BC99 ; Duployan - "Zzzz", # 1BC9A..1BC9B ; Unknown - "Dupl", # 1BC9C..1BC9F ; Duployan - "Zyyy", # 1BCA0..1BCA3 ; Common - "Zzzz", # 1BCA4..1CEFF ; Unknown - "Zinh", # 1CF00..1CF2D ; Inherited - "Zzzz", # 1CF2E..1CF2F ; Unknown - "Zinh", # 1CF30..1CF46 ; Inherited - "Zzzz", # 1CF47..1CF4F ; 
Unknown - "Zyyy", # 1CF50..1CFC3 ; Common - "Zzzz", # 1CFC4..1CFFF ; Unknown - "Zyyy", # 1D000..1D0F5 ; Common - "Zzzz", # 1D0F6..1D0FF ; Unknown - "Zyyy", # 1D100..1D126 ; Common - "Zzzz", # 1D127..1D128 ; Unknown - "Zyyy", # 1D129..1D166 ; Common - "Zinh", # 1D167..1D169 ; Inherited - "Zyyy", # 1D16A..1D17A ; Common - "Zinh", # 1D17B..1D182 ; Inherited - "Zyyy", # 1D183..1D184 ; Common - "Zinh", # 1D185..1D18B ; Inherited - "Zyyy", # 1D18C..1D1A9 ; Common - "Zinh", # 1D1AA..1D1AD ; Inherited - "Zyyy", # 1D1AE..1D1EA ; Common - "Zzzz", # 1D1EB..1D1FF ; Unknown - "Grek", # 1D200..1D245 ; Greek - "Zzzz", # 1D246..1D2BF ; Unknown - "Zyyy", # 1D2C0..1D2D3 ; Common - "Zzzz", # 1D2D4..1D2DF ; Unknown - "Zyyy", # 1D2E0..1D2F3 ; Common - "Zzzz", # 1D2F4..1D2FF ; Unknown - "Zyyy", # 1D300..1D356 ; Common - "Zzzz", # 1D357..1D35F ; Unknown - "Zyyy", # 1D360..1D378 ; Common - "Zzzz", # 1D379..1D3FF ; Unknown - "Zyyy", # 1D400..1D454 ; Common - "Zzzz", # 1D455..1D455 ; Unknown - "Zyyy", # 1D456..1D49C ; Common - "Zzzz", # 1D49D..1D49D ; Unknown - "Zyyy", # 1D49E..1D49F ; Common - "Zzzz", # 1D4A0..1D4A1 ; Unknown - "Zyyy", # 1D4A2..1D4A2 ; Common - "Zzzz", # 1D4A3..1D4A4 ; Unknown - "Zyyy", # 1D4A5..1D4A6 ; Common - "Zzzz", # 1D4A7..1D4A8 ; Unknown - "Zyyy", # 1D4A9..1D4AC ; Common - "Zzzz", # 1D4AD..1D4AD ; Unknown - "Zyyy", # 1D4AE..1D4B9 ; Common - "Zzzz", # 1D4BA..1D4BA ; Unknown - "Zyyy", # 1D4BB..1D4BB ; Common - "Zzzz", # 1D4BC..1D4BC ; Unknown - "Zyyy", # 1D4BD..1D4C3 ; Common - "Zzzz", # 1D4C4..1D4C4 ; Unknown - "Zyyy", # 1D4C5..1D505 ; Common - "Zzzz", # 1D506..1D506 ; Unknown - "Zyyy", # 1D507..1D50A ; Common - "Zzzz", # 1D50B..1D50C ; Unknown - "Zyyy", # 1D50D..1D514 ; Common - "Zzzz", # 1D515..1D515 ; Unknown - "Zyyy", # 1D516..1D51C ; Common - "Zzzz", # 1D51D..1D51D ; Unknown - "Zyyy", # 1D51E..1D539 ; Common - "Zzzz", # 1D53A..1D53A ; Unknown - "Zyyy", # 1D53B..1D53E ; Common - "Zzzz", # 1D53F..1D53F ; Unknown - "Zyyy", # 1D540..1D544 ; Common - "Zzzz", # 1D545..1D545 ; Unknown - "Zyyy", # 1D546..1D546 ; Common - "Zzzz", # 1D547..1D549 ; Unknown - "Zyyy", # 1D54A..1D550 ; Common - "Zzzz", # 1D551..1D551 ; Unknown - "Zyyy", # 1D552..1D6A5 ; Common - "Zzzz", # 1D6A6..1D6A7 ; Unknown - "Zyyy", # 1D6A8..1D7CB ; Common - "Zzzz", # 1D7CC..1D7CD ; Unknown - "Zyyy", # 1D7CE..1D7FF ; Common - "Sgnw", # 1D800..1DA8B ; SignWriting - "Zzzz", # 1DA8C..1DA9A ; Unknown - "Sgnw", # 1DA9B..1DA9F ; SignWriting - "Zzzz", # 1DAA0..1DAA0 ; Unknown - "Sgnw", # 1DAA1..1DAAF ; SignWriting - "Zzzz", # 1DAB0..1DEFF ; Unknown - "Latn", # 1DF00..1DF1E ; Latin - "Zzzz", # 1DF1F..1DF24 ; Unknown - "Latn", # 1DF25..1DF2A ; Latin - "Zzzz", # 1DF2B..1DFFF ; Unknown - "Glag", # 1E000..1E006 ; Glagolitic - "Zzzz", # 1E007..1E007 ; Unknown - "Glag", # 1E008..1E018 ; Glagolitic - "Zzzz", # 1E019..1E01A ; Unknown - "Glag", # 1E01B..1E021 ; Glagolitic - "Zzzz", # 1E022..1E022 ; Unknown - "Glag", # 1E023..1E024 ; Glagolitic - "Zzzz", # 1E025..1E025 ; Unknown - "Glag", # 1E026..1E02A ; Glagolitic - "Zzzz", # 1E02B..1E02F ; Unknown - "Cyrl", # 1E030..1E06D ; Cyrillic - "Zzzz", # 1E06E..1E08E ; Unknown - "Cyrl", # 1E08F..1E08F ; Cyrillic - "Zzzz", # 1E090..1E0FF ; Unknown - "Hmnp", # 1E100..1E12C ; Nyiakeng_Puachue_Hmong - "Zzzz", # 1E12D..1E12F ; Unknown - "Hmnp", # 1E130..1E13D ; Nyiakeng_Puachue_Hmong - "Zzzz", # 1E13E..1E13F ; Unknown - "Hmnp", # 1E140..1E149 ; Nyiakeng_Puachue_Hmong - "Zzzz", # 1E14A..1E14D ; Unknown - "Hmnp", # 1E14E..1E14F ; Nyiakeng_Puachue_Hmong - "Zzzz", # 1E150..1E28F ; Unknown - "Toto", # 
1E290..1E2AE ; Toto - "Zzzz", # 1E2AF..1E2BF ; Unknown - "Wcho", # 1E2C0..1E2F9 ; Wancho - "Zzzz", # 1E2FA..1E2FE ; Unknown - "Wcho", # 1E2FF..1E2FF ; Wancho - "Zzzz", # 1E300..1E4CF ; Unknown - "Nagm", # 1E4D0..1E4F9 ; Nag_Mundari - "Zzzz", # 1E4FA..1E7DF ; Unknown - "Ethi", # 1E7E0..1E7E6 ; Ethiopic - "Zzzz", # 1E7E7..1E7E7 ; Unknown - "Ethi", # 1E7E8..1E7EB ; Ethiopic - "Zzzz", # 1E7EC..1E7EC ; Unknown - "Ethi", # 1E7ED..1E7EE ; Ethiopic - "Zzzz", # 1E7EF..1E7EF ; Unknown - "Ethi", # 1E7F0..1E7FE ; Ethiopic - "Zzzz", # 1E7FF..1E7FF ; Unknown - "Mend", # 1E800..1E8C4 ; Mende_Kikakui - "Zzzz", # 1E8C5..1E8C6 ; Unknown - "Mend", # 1E8C7..1E8D6 ; Mende_Kikakui - "Zzzz", # 1E8D7..1E8FF ; Unknown - "Adlm", # 1E900..1E94B ; Adlam - "Zzzz", # 1E94C..1E94F ; Unknown - "Adlm", # 1E950..1E959 ; Adlam - "Zzzz", # 1E95A..1E95D ; Unknown - "Adlm", # 1E95E..1E95F ; Adlam - "Zzzz", # 1E960..1EC70 ; Unknown - "Zyyy", # 1EC71..1ECB4 ; Common - "Zzzz", # 1ECB5..1ED00 ; Unknown - "Zyyy", # 1ED01..1ED3D ; Common - "Zzzz", # 1ED3E..1EDFF ; Unknown - "Arab", # 1EE00..1EE03 ; Arabic - "Zzzz", # 1EE04..1EE04 ; Unknown - "Arab", # 1EE05..1EE1F ; Arabic - "Zzzz", # 1EE20..1EE20 ; Unknown - "Arab", # 1EE21..1EE22 ; Arabic - "Zzzz", # 1EE23..1EE23 ; Unknown - "Arab", # 1EE24..1EE24 ; Arabic - "Zzzz", # 1EE25..1EE26 ; Unknown - "Arab", # 1EE27..1EE27 ; Arabic - "Zzzz", # 1EE28..1EE28 ; Unknown - "Arab", # 1EE29..1EE32 ; Arabic - "Zzzz", # 1EE33..1EE33 ; Unknown - "Arab", # 1EE34..1EE37 ; Arabic - "Zzzz", # 1EE38..1EE38 ; Unknown - "Arab", # 1EE39..1EE39 ; Arabic - "Zzzz", # 1EE3A..1EE3A ; Unknown - "Arab", # 1EE3B..1EE3B ; Arabic - "Zzzz", # 1EE3C..1EE41 ; Unknown - "Arab", # 1EE42..1EE42 ; Arabic - "Zzzz", # 1EE43..1EE46 ; Unknown - "Arab", # 1EE47..1EE47 ; Arabic - "Zzzz", # 1EE48..1EE48 ; Unknown - "Arab", # 1EE49..1EE49 ; Arabic - "Zzzz", # 1EE4A..1EE4A ; Unknown - "Arab", # 1EE4B..1EE4B ; Arabic - "Zzzz", # 1EE4C..1EE4C ; Unknown - "Arab", # 1EE4D..1EE4F ; Arabic - "Zzzz", # 1EE50..1EE50 ; Unknown - "Arab", # 1EE51..1EE52 ; Arabic - "Zzzz", # 1EE53..1EE53 ; Unknown - "Arab", # 1EE54..1EE54 ; Arabic - "Zzzz", # 1EE55..1EE56 ; Unknown - "Arab", # 1EE57..1EE57 ; Arabic - "Zzzz", # 1EE58..1EE58 ; Unknown - "Arab", # 1EE59..1EE59 ; Arabic - "Zzzz", # 1EE5A..1EE5A ; Unknown - "Arab", # 1EE5B..1EE5B ; Arabic - "Zzzz", # 1EE5C..1EE5C ; Unknown - "Arab", # 1EE5D..1EE5D ; Arabic - "Zzzz", # 1EE5E..1EE5E ; Unknown - "Arab", # 1EE5F..1EE5F ; Arabic - "Zzzz", # 1EE60..1EE60 ; Unknown - "Arab", # 1EE61..1EE62 ; Arabic - "Zzzz", # 1EE63..1EE63 ; Unknown - "Arab", # 1EE64..1EE64 ; Arabic - "Zzzz", # 1EE65..1EE66 ; Unknown - "Arab", # 1EE67..1EE6A ; Arabic - "Zzzz", # 1EE6B..1EE6B ; Unknown - "Arab", # 1EE6C..1EE72 ; Arabic - "Zzzz", # 1EE73..1EE73 ; Unknown - "Arab", # 1EE74..1EE77 ; Arabic - "Zzzz", # 1EE78..1EE78 ; Unknown - "Arab", # 1EE79..1EE7C ; Arabic - "Zzzz", # 1EE7D..1EE7D ; Unknown - "Arab", # 1EE7E..1EE7E ; Arabic - "Zzzz", # 1EE7F..1EE7F ; Unknown - "Arab", # 1EE80..1EE89 ; Arabic - "Zzzz", # 1EE8A..1EE8A ; Unknown - "Arab", # 1EE8B..1EE9B ; Arabic - "Zzzz", # 1EE9C..1EEA0 ; Unknown - "Arab", # 1EEA1..1EEA3 ; Arabic - "Zzzz", # 1EEA4..1EEA4 ; Unknown - "Arab", # 1EEA5..1EEA9 ; Arabic - "Zzzz", # 1EEAA..1EEAA ; Unknown - "Arab", # 1EEAB..1EEBB ; Arabic - "Zzzz", # 1EEBC..1EEEF ; Unknown - "Arab", # 1EEF0..1EEF1 ; Arabic - "Zzzz", # 1EEF2..1EFFF ; Unknown - "Zyyy", # 1F000..1F02B ; Common - "Zzzz", # 1F02C..1F02F ; Unknown - "Zyyy", # 1F030..1F093 ; Common - "Zzzz", # 1F094..1F09F ; Unknown - "Zyyy", # 1F0A0..1F0AE 
; Common - "Zzzz", # 1F0AF..1F0B0 ; Unknown - "Zyyy", # 1F0B1..1F0BF ; Common - "Zzzz", # 1F0C0..1F0C0 ; Unknown - "Zyyy", # 1F0C1..1F0CF ; Common - "Zzzz", # 1F0D0..1F0D0 ; Unknown - "Zyyy", # 1F0D1..1F0F5 ; Common - "Zzzz", # 1F0F6..1F0FF ; Unknown - "Zyyy", # 1F100..1F1AD ; Common - "Zzzz", # 1F1AE..1F1E5 ; Unknown - "Zyyy", # 1F1E6..1F1FF ; Common - "Hira", # 1F200..1F200 ; Hiragana - "Zyyy", # 1F201..1F202 ; Common - "Zzzz", # 1F203..1F20F ; Unknown - "Zyyy", # 1F210..1F23B ; Common - "Zzzz", # 1F23C..1F23F ; Unknown - "Zyyy", # 1F240..1F248 ; Common - "Zzzz", # 1F249..1F24F ; Unknown - "Zyyy", # 1F250..1F251 ; Common - "Zzzz", # 1F252..1F25F ; Unknown - "Zyyy", # 1F260..1F265 ; Common - "Zzzz", # 1F266..1F2FF ; Unknown - "Zyyy", # 1F300..1F6D7 ; Common - "Zzzz", # 1F6D8..1F6DB ; Unknown - "Zyyy", # 1F6DC..1F6EC ; Common - "Zzzz", # 1F6ED..1F6EF ; Unknown - "Zyyy", # 1F6F0..1F6FC ; Common - "Zzzz", # 1F6FD..1F6FF ; Unknown - "Zyyy", # 1F700..1F776 ; Common - "Zzzz", # 1F777..1F77A ; Unknown - "Zyyy", # 1F77B..1F7D9 ; Common - "Zzzz", # 1F7DA..1F7DF ; Unknown - "Zyyy", # 1F7E0..1F7EB ; Common - "Zzzz", # 1F7EC..1F7EF ; Unknown - "Zyyy", # 1F7F0..1F7F0 ; Common - "Zzzz", # 1F7F1..1F7FF ; Unknown - "Zyyy", # 1F800..1F80B ; Common - "Zzzz", # 1F80C..1F80F ; Unknown - "Zyyy", # 1F810..1F847 ; Common - "Zzzz", # 1F848..1F84F ; Unknown - "Zyyy", # 1F850..1F859 ; Common - "Zzzz", # 1F85A..1F85F ; Unknown - "Zyyy", # 1F860..1F887 ; Common - "Zzzz", # 1F888..1F88F ; Unknown - "Zyyy", # 1F890..1F8AD ; Common - "Zzzz", # 1F8AE..1F8AF ; Unknown - "Zyyy", # 1F8B0..1F8B1 ; Common - "Zzzz", # 1F8B2..1F8FF ; Unknown - "Zyyy", # 1F900..1FA53 ; Common - "Zzzz", # 1FA54..1FA5F ; Unknown - "Zyyy", # 1FA60..1FA6D ; Common - "Zzzz", # 1FA6E..1FA6F ; Unknown - "Zyyy", # 1FA70..1FA7C ; Common - "Zzzz", # 1FA7D..1FA7F ; Unknown - "Zyyy", # 1FA80..1FA88 ; Common - "Zzzz", # 1FA89..1FA8F ; Unknown - "Zyyy", # 1FA90..1FABD ; Common - "Zzzz", # 1FABE..1FABE ; Unknown - "Zyyy", # 1FABF..1FAC5 ; Common - "Zzzz", # 1FAC6..1FACD ; Unknown - "Zyyy", # 1FACE..1FADB ; Common - "Zzzz", # 1FADC..1FADF ; Unknown - "Zyyy", # 1FAE0..1FAE8 ; Common - "Zzzz", # 1FAE9..1FAEF ; Unknown - "Zyyy", # 1FAF0..1FAF8 ; Common - "Zzzz", # 1FAF9..1FAFF ; Unknown - "Zyyy", # 1FB00..1FB92 ; Common - "Zzzz", # 1FB93..1FB93 ; Unknown - "Zyyy", # 1FB94..1FBCA ; Common - "Zzzz", # 1FBCB..1FBEF ; Unknown - "Zyyy", # 1FBF0..1FBF9 ; Common - "Zzzz", # 1FBFA..1FFFF ; Unknown - "Hani", # 20000..2A6DF ; Han - "Zzzz", # 2A6E0..2A6FF ; Unknown - "Hani", # 2A700..2B739 ; Han - "Zzzz", # 2B73A..2B73F ; Unknown - "Hani", # 2B740..2B81D ; Han - "Zzzz", # 2B81E..2B81F ; Unknown - "Hani", # 2B820..2CEA1 ; Han - "Zzzz", # 2CEA2..2CEAF ; Unknown - "Hani", # 2CEB0..2EBE0 ; Han - "Zzzz", # 2EBE1..2F7FF ; Unknown - "Hani", # 2F800..2FA1D ; Han - "Zzzz", # 2FA1E..2FFFF ; Unknown - "Hani", # 30000..3134A ; Han - "Zzzz", # 3134B..3134F ; Unknown - "Hani", # 31350..323AF ; Han - "Zzzz", # 323B0..E0000 ; Unknown - "Zyyy", # E0001..E0001 ; Common - "Zzzz", # E0002..E001F ; Unknown - "Zyyy", # E0020..E007F ; Common - "Zzzz", # E0080..E00FF ; Unknown - "Zinh", # E0100..E01EF ; Inherited - "Zzzz", # E01F0..10FFFF ; Unknown -] - -NAMES = { - "Adlm": "Adlam", - "Aghb": "Caucasian_Albanian", - "Ahom": "Ahom", - "Arab": "Arabic", - "Armi": "Imperial_Aramaic", - "Armn": "Armenian", - "Avst": "Avestan", - "Bali": "Balinese", - "Bamu": "Bamum", - "Bass": "Bassa_Vah", - "Batk": "Batak", - "Beng": "Bengali", - "Bhks": "Bhaiksuki", - "Bopo": "Bopomofo", - "Brah": "Brahmi", - 
"Brai": "Braille", - "Bugi": "Buginese", - "Buhd": "Buhid", - "Cakm": "Chakma", - "Cans": "Canadian_Aboriginal", - "Cari": "Carian", - "Cham": "Cham", - "Cher": "Cherokee", - "Chrs": "Chorasmian", - "Copt": "Coptic", - "Cpmn": "Cypro_Minoan", - "Cprt": "Cypriot", - "Cyrl": "Cyrillic", - "Deva": "Devanagari", - "Diak": "Dives_Akuru", - "Dogr": "Dogra", - "Dsrt": "Deseret", - "Dupl": "Duployan", - "Egyp": "Egyptian_Hieroglyphs", - "Elba": "Elbasan", - "Elym": "Elymaic", - "Ethi": "Ethiopic", - "Geor": "Georgian", - "Glag": "Glagolitic", - "Gong": "Gunjala_Gondi", - "Gonm": "Masaram_Gondi", - "Goth": "Gothic", - "Gran": "Grantha", - "Grek": "Greek", - "Gujr": "Gujarati", - "Guru": "Gurmukhi", - "Hang": "Hangul", - "Hani": "Han", - "Hano": "Hanunoo", - "Hatr": "Hatran", - "Hebr": "Hebrew", - "Hira": "Hiragana", - "Hluw": "Anatolian_Hieroglyphs", - "Hmng": "Pahawh_Hmong", - "Hmnp": "Nyiakeng_Puachue_Hmong", - "Hrkt": "Katakana_Or_Hiragana", - "Hung": "Old_Hungarian", - "Ital": "Old_Italic", - "Java": "Javanese", - "Kali": "Kayah_Li", - "Kana": "Katakana", - "Kawi": "Kawi", - "Khar": "Kharoshthi", - "Khmr": "Khmer", - "Khoj": "Khojki", - "Kits": "Khitan_Small_Script", - "Knda": "Kannada", - "Kthi": "Kaithi", - "Lana": "Tai_Tham", - "Laoo": "Lao", - "Latn": "Latin", - "Lepc": "Lepcha", - "Limb": "Limbu", - "Lina": "Linear_A", - "Linb": "Linear_B", - "Lisu": "Lisu", - "Lyci": "Lycian", - "Lydi": "Lydian", - "Mahj": "Mahajani", - "Maka": "Makasar", - "Mand": "Mandaic", - "Mani": "Manichaean", - "Marc": "Marchen", - "Medf": "Medefaidrin", - "Mend": "Mende_Kikakui", - "Merc": "Meroitic_Cursive", - "Mero": "Meroitic_Hieroglyphs", - "Mlym": "Malayalam", - "Modi": "Modi", - "Mong": "Mongolian", - "Mroo": "Mro", - "Mtei": "Meetei_Mayek", - "Mult": "Multani", - "Mymr": "Myanmar", - "Nagm": "Nag_Mundari", - "Nand": "Nandinagari", - "Narb": "Old_North_Arabian", - "Nbat": "Nabataean", - "Newa": "Newa", - "Nkoo": "Nko", - "Nshu": "Nushu", - "Ogam": "Ogham", - "Olck": "Ol_Chiki", - "Orkh": "Old_Turkic", - "Orya": "Oriya", - "Osge": "Osage", - "Osma": "Osmanya", - "Ougr": "Old_Uyghur", - "Palm": "Palmyrene", - "Pauc": "Pau_Cin_Hau", - "Perm": "Old_Permic", - "Phag": "Phags_Pa", - "Phli": "Inscriptional_Pahlavi", - "Phlp": "Psalter_Pahlavi", - "Phnx": "Phoenician", - "Plrd": "Miao", - "Prti": "Inscriptional_Parthian", - "Rjng": "Rejang", - "Rohg": "Hanifi_Rohingya", - "Runr": "Runic", - "Samr": "Samaritan", - "Sarb": "Old_South_Arabian", - "Saur": "Saurashtra", - "Sgnw": "SignWriting", - "Shaw": "Shavian", - "Shrd": "Sharada", - "Sidd": "Siddham", - "Sind": "Khudawadi", - "Sinh": "Sinhala", - "Sogd": "Sogdian", - "Sogo": "Old_Sogdian", - "Sora": "Sora_Sompeng", - "Soyo": "Soyombo", - "Sund": "Sundanese", - "Sylo": "Syloti_Nagri", - "Syrc": "Syriac", - "Tagb": "Tagbanwa", - "Takr": "Takri", - "Tale": "Tai_Le", - "Talu": "New_Tai_Lue", - "Taml": "Tamil", - "Tang": "Tangut", - "Tavt": "Tai_Viet", - "Telu": "Telugu", - "Tfng": "Tifinagh", - "Tglg": "Tagalog", - "Thaa": "Thaana", - "Thai": "Thai", - "Tibt": "Tibetan", - "Tirh": "Tirhuta", - "Tnsa": "Tangsa", - "Toto": "Toto", - "Ugar": "Ugaritic", - "Vaii": "Vai", - "Vith": "Vithkuqi", - "Wara": "Warang_Citi", - "Wcho": "Wancho", - "Xpeo": "Old_Persian", - "Xsux": "Cuneiform", - "Yezi": "Yezidi", - "Yiii": "Yi", - "Zanb": "Zanabazar_Square", - "Zinh": "Inherited", - "Zyyy": "Common", - "Zzzz": "Unknown", -} diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-2232b20b.js 
b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-2232b20b.js deleted file mode 100644 index c798b4928ec092801d6ac5060321efb8a7a0dff7..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-2232b20b.js +++ /dev/null @@ -1,2 +0,0 @@ -import{E as u,L as v}from"./index-41d42cd1.js";import{s as k,t,h as S,L as w,i as z,w as x,f as R,a as U,b as _,I as T,x as V}from"./index-ebba85cc.js";import"./index-f877dfd5.js";import"./Blocks-adc2d4ca.js";import"./Button-11a87b79.js";import"./BlockLabel-7929e88d.js";import"./Empty-2159e5e9.js";import"./Copy-534f8e58.js";import"./Download-a587c81f.js";const Y=94,g=1,C=95,Z=96,f=2,$=[9,10,11,12,13,32,133,160,5760,8192,8193,8194,8195,8196,8197,8198,8199,8200,8201,8202,8232,8233,8239,8287,12288],G=58,N=40,X=95,q=91,c=45,E=46,j=35,D=37;function p(e){return e>=65&&e<=90||e>=97&&e<=122||e>=161}function I(e){return e>=48&&e<=57}const B=new u((e,o)=>{for(let r=!1,a=0,O=0;;O++){let{next:l}=e;if(p(l)||l==c||l==X||r&&I(l))!r&&(l!=c||O>0)&&(r=!0),a===O&&l==c&&a++,e.advance();else{r&&e.acceptToken(l==N?C:a==2&&o.canShift(f)?f:Z);break}}}),A=new u(e=>{if($.includes(e.peek(-1))){let{next:o}=e;(p(o)||o==X||o==j||o==E||o==q||o==G||o==c)&&e.acceptToken(Y)}}),F=new u(e=>{if(!$.includes(e.peek(-1))){let{next:o}=e;if(o==D&&(e.advance(),e.acceptToken(g)),p(o)){do e.advance();while(p(e.next));e.acceptToken(g)}}}),L=k({"AtKeyword import charset namespace keyframes media supports":t.definitionKeyword,"from to selector":t.keyword,NamespaceName:t.namespace,KeyframeName:t.labelName,TagName:t.tagName,ClassName:t.className,PseudoClassName:t.constant(t.className),IdName:t.labelName,"FeatureName PropertyName":t.propertyName,AttributeName:t.attributeName,NumberLiteral:t.number,KeywordQuery:t.keyword,UnaryQueryOp:t.operatorKeyword,"CallTag ValueName":t.atom,VariableName:t.variableName,Callee:t.operatorKeyword,Unit:t.unit,"UniversalSelector NestingSelector":t.definitionOperator,MatchOp:t.compareOperator,"ChildOp SiblingOp, LogicOp":t.logicOperator,BinOp:t.arithmeticOperator,Important:t.modifier,Comment:t.blockComment,ParenthesizedContent:t.special(t.name),ColorLiteral:t.color,StringLiteral:t.string,":":t.punctuation,"PseudoOp #":t.derefOperator,"; ,":t.separator,"( )":t.paren,"[ ]":t.squareBracket,"{ 
}":t.brace}),K={__proto__:null,lang:32,"nth-child":32,"nth-last-child":32,"nth-of-type":32,"nth-last-of-type":32,dir:32,"host-context":32,url:60,"url-prefix":60,domain:60,regexp:60,selector:134},J={__proto__:null,"@import":114,"@media":138,"@charset":142,"@namespace":146,"@keyframes":152,"@supports":164},H={__proto__:null,not:128,only:128,from:158,to:160},M=v.deserialize({version:14,states:"7WQYQ[OOO#_Q[OOOOQP'#Cd'#CdOOQP'#Cc'#CcO#fQ[O'#CfO$YQXO'#CaO$aQ[O'#ChO$lQ[O'#DPO$qQ[O'#DTOOQP'#Ed'#EdO$vQdO'#DeO%bQ[O'#DrO$vQdO'#DtO%sQ[O'#DvO&OQ[O'#DyO&TQ[O'#EPO&cQ[O'#EROOQS'#Ec'#EcOOQS'#ET'#ETQYQ[OOO&jQXO'#CdO'_QWO'#DaO'dQWO'#EjO'oQ[O'#EjQOQWOOOOQP'#Cg'#CgOOQP,59Q,59QO#fQ[O,59QO'yQ[O'#EWO(eQWO,58{O(mQ[O,59SO$lQ[O,59kO$qQ[O,59oO'yQ[O,59sO'yQ[O,59uO'yQ[O,59vO(xQ[O'#D`OOQS,58{,58{OOQP'#Ck'#CkOOQO'#C}'#C}OOQP,59S,59SO)PQWO,59SO)UQWO,59SOOQP'#DR'#DROOQP,59k,59kOOQO'#DV'#DVO)ZQ`O,59oOOQS'#Cp'#CpO$vQdO'#CqO)cQvO'#CsO*pQtO,5:POOQO'#Cx'#CxO)UQWO'#CwO+UQWO'#CyOOQS'#Eg'#EgOOQO'#Dh'#DhO+ZQ[O'#DoO+iQWO'#EkO&TQ[O'#DmO+wQWO'#DpOOQO'#El'#ElO(hQWO,5:^O+|QpO,5:`OOQS'#Dx'#DxO,UQWO,5:bO,ZQ[O,5:bOOQO'#D{'#D{O,cQWO,5:eO,hQWO,5:kO,pQWO,5:mOOQS-E8R-E8RO$vQdO,59{O,xQ[O'#EYO-VQWO,5;UO-VQWO,5;UOOQP1G.l1G.lO-|QXO,5:rOOQO-E8U-E8UOOQS1G.g1G.gOOQP1G.n1G.nO)PQWO1G.nO)UQWO1G.nOOQP1G/V1G/VO.ZQ`O1G/ZO.tQXO1G/_O/[QXO1G/aO/rQXO1G/bO0YQWO,59zO0_Q[O'#DOO0fQdO'#CoOOQP1G/Z1G/ZO$vQdO1G/ZO0mQpO,59]OOQS,59_,59_O$vQdO,59aO0uQWO1G/kOOQS,59c,59cO0zQ!bO,59eO1SQWO'#DhO1_QWO,5:TO1dQWO,5:ZO&TQ[O,5:VO&TQ[O'#EZO1lQWO,5;VO1wQWO,5:XO'yQ[O,5:[OOQS1G/x1G/xOOQS1G/z1G/zOOQS1G/|1G/|O2YQWO1G/|O2_QdO'#D|OOQS1G0P1G0POOQS1G0V1G0VOOQS1G0X1G0XO2mQtO1G/gOOQO,5:t,5:tO3TQ[O,5:tOOQO-E8W-E8WO3bQWO1G0pOOQP7+$Y7+$YOOQP7+$u7+$uO$vQdO7+$uOOQS1G/f1G/fO3mQXO'#EiO3tQWO,59jO3yQtO'#EUO4nQdO'#EfO4xQWO,59ZO4}QpO7+$uOOQS1G.w1G.wOOQS1G.{1G.{OOQS7+%V7+%VO5VQWO1G/PO$vQdO1G/oOOQO1G/u1G/uOOQO1G/q1G/qO5[QWO,5:uOOQO-E8X-E8XO5jQXO1G/vOOQS7+%h7+%hO5qQYO'#CsO(hQWO'#E[O5yQdO,5:hOOQS,5:h,5:hO6XQtO'#EXO$vQdO'#EXO7VQdO7+%ROOQO7+%R7+%ROOQO1G0`1G0`O7jQpO<T![;'S%^;'S;=`%o<%lO%^^;TUoWOy%^z!Q%^!Q![;g![;'S%^;'S;=`%o<%lO%^^;nYoW#[UOy%^z!Q%^!Q![;g![!g%^!g!h<^!h#X%^#X#Y<^#Y;'S%^;'S;=`%o<%lO%^^[[oW#[UOy%^z!O%^!O!P;g!P!Q%^!Q![>T![!g%^!g!h<^!h#X%^#X#Y<^#Y;'S%^;'S;=`%o<%lO%^_?VSpVOy%^z;'S%^;'S;=`%o<%lO%^^?hWjSOy%^z!O%^!O!P;O!P!Q%^!Q![>T![;'S%^;'S;=`%o<%lO%^_@VU#XPOy%^z!Q%^!Q![;g![;'S%^;'S;=`%o<%lO%^~@nTjSOy%^z{@}{;'S%^;'S;=`%o<%lO%^~ASUoWOy@}yzAfz{Bm{;'S@};'S;=`Co<%lO@}~AiTOzAfz{Ax{;'SAf;'S;=`Bg<%lOAf~A{VOzAfz{Ax{!PAf!P!QBb!Q;'SAf;'S;=`Bg<%lOAf~BgOR~~BjP;=`<%lAf~BrWoWOy@}yzAfz{Bm{!P@}!P!QC[!Q;'S@};'S;=`Co<%lO@}~CcSoWR~Oy%^z;'S%^;'S;=`%o<%lO%^~CrP;=`<%l@}^Cz[#[UOy%^z!O%^!O!P;g!P!Q%^!Q![>T![!g%^!g!h<^!h#X%^#X#Y<^#Y;'S%^;'S;=`%o<%lO%^XDuU]POy%^z![%^![!]EX!];'S%^;'S;=`%o<%lO%^XE`S^PoWOy%^z;'S%^;'S;=`%o<%lO%^_EqS!WVOy%^z;'S%^;'S;=`%o<%lO%^YFSSzQOy%^z;'S%^;'S;=`%o<%lO%^XFeU|POy%^z!`%^!`!aFw!a;'S%^;'S;=`%o<%lO%^XGOS|PoWOy%^z;'S%^;'S;=`%o<%lO%^XG_WOy%^z!c%^!c!}Gw!}#T%^#T#oGw#o;'S%^;'S;=`%o<%lO%^XHO[!YPoWOy%^z}%^}!OGw!O!Q%^!Q![Gw![!c%^!c!}Gw!}#T%^#T#oGw#o;'S%^;'S;=`%o<%lO%^XHySxPOy%^z;'S%^;'S;=`%o<%lO%^^I[SvUOy%^z;'S%^;'S;=`%o<%lO%^XIkUOy%^z#b%^#b#cI}#c;'S%^;'S;=`%o<%lO%^XJSUoWOy%^z#W%^#W#XJf#X;'S%^;'S;=`%o<%lO%^XJmS!`PoWOy%^z;'S%^;'S;=`%o<%lO%^XJ|UOy%^z#f%^#f#gJf#g;'S%^;'S;=`%o<%lO%^XKeS!RPOy%^z;'S%^;'S;=`%o<%lO%^_KvS!QVOy%^z;'S%^;'S;=`%o<%lO%^ZLXU!PPOy%^z!_%^!_!`6y!`;'S%^;'S;=`%o<%lO%^WLnP;=`<%l$}",tokenizers:[A,F,B,0,1,2,3],topRules:{StyleSheet:[0,4],Styles:[1,84]},specialized:[{term:95,get:e=>K[e]||-1},{term:56,get:e=>J[e]||-1},{term:96,get:e=>H[e]||-1}],tokenPrec:1123});let Q=null;function m(){if(!Q&&typeof 
document=="object"&&document.body){let{style:e}=document.body,o=[],r=new Set;for(let a in e)a!="cssText"&&a!="cssFloat"&&typeof e[a]=="string"&&(/[A-Z]/.test(a)&&(a=a.replace(/[A-Z]/g,O=>"-"+O.toLowerCase())),r.has(a)||(o.push(a),r.add(a)));Q=o.sort().map(a=>({type:"property",label:a}))}return Q||[]}const h=["active","after","any-link","autofill","backdrop","before","checked","cue","default","defined","disabled","empty","enabled","file-selector-button","first","first-child","first-letter","first-line","first-of-type","focus","focus-visible","focus-within","fullscreen","has","host","host-context","hover","in-range","indeterminate","invalid","is","lang","last-child","last-of-type","left","link","marker","modal","not","nth-child","nth-last-child","nth-last-of-type","nth-of-type","only-child","only-of-type","optional","out-of-range","part","placeholder","placeholder-shown","read-only","read-write","required","right","root","scope","selection","slotted","target","target-text","valid","visited","where"].map(e=>({type:"class",label:e})),b=["above","absolute","activeborder","additive","activecaption","after-white-space","ahead","alias","all","all-scroll","alphabetic","alternate","always","antialiased","appworkspace","asterisks","attr","auto","auto-flow","avoid","avoid-column","avoid-page","avoid-region","axis-pan","background","backwards","baseline","below","bidi-override","blink","block","block-axis","bold","bolder","border","border-box","both","bottom","break","break-all","break-word","bullets","button","button-bevel","buttonface","buttonhighlight","buttonshadow","buttontext","calc","capitalize","caps-lock-indicator","caption","captiontext","caret","cell","center","checkbox","circle","cjk-decimal","clear","clip","close-quote","col-resize","collapse","color","color-burn","color-dodge","column","column-reverse","compact","condensed","contain","content","contents","content-box","context-menu","continuous","copy","counter","counters","cover","crop","cross","crosshair","currentcolor","cursive","cyclic","darken","dashed","decimal","decimal-leading-zero","default","default-button","dense","destination-atop","destination-in","destination-out","destination-over","difference","disc","discard","disclosure-closed","disclosure-open","document","dot-dash","dot-dot-dash","dotted","double","down","e-resize","ease","ease-in","ease-in-out","ease-out","element","ellipse","ellipsis","embed","end","ethiopic-abegede-gez","ethiopic-halehame-aa-er","ethiopic-halehame-gez","ew-resize","exclusion","expanded","extends","extra-condensed","extra-expanded","fantasy","fast","fill","fill-box","fixed","flat","flex","flex-end","flex-start","footnotes","forwards","from","geometricPrecision","graytext","grid","groove","hand","hard-light","help","hidden","hide","higher","highlight","highlighttext","horizontal","hsl","hsla","hue","icon","ignore","inactiveborder","inactivecaption","inactivecaptiontext","infinite","infobackground","infotext","inherit","initial","inline","inline-axis","inline-block","inline-flex","inline-grid","inline-table","inset","inside","intrinsic","invert","italic","justify","keep-all","landscape","large","larger","left","level","lighter","lighten","line-through","linear","linear-gradient","lines","list-item","listbox","listitem","local","logical","loud","lower","lower-hexadecimal","lower-latin","lower-norwegian","lowercase","ltr","luminosity","manipulation","match","matrix","matrix3d","medium","menu","menutext","message-box","middle","min-intrinsic","mix","monospace","move","multiple","multiple_mask_images","mult
iply","n-resize","narrower","ne-resize","nesw-resize","no-close-quote","no-drop","no-open-quote","no-repeat","none","normal","not-allowed","nowrap","ns-resize","numbers","numeric","nw-resize","nwse-resize","oblique","opacity","open-quote","optimizeLegibility","optimizeSpeed","outset","outside","outside-shape","overlay","overline","padding","padding-box","painted","page","paused","perspective","pinch-zoom","plus-darker","plus-lighter","pointer","polygon","portrait","pre","pre-line","pre-wrap","preserve-3d","progress","push-button","radial-gradient","radio","read-only","read-write","read-write-plaintext-only","rectangle","region","relative","repeat","repeating-linear-gradient","repeating-radial-gradient","repeat-x","repeat-y","reset","reverse","rgb","rgba","ridge","right","rotate","rotate3d","rotateX","rotateY","rotateZ","round","row","row-resize","row-reverse","rtl","run-in","running","s-resize","sans-serif","saturation","scale","scale3d","scaleX","scaleY","scaleZ","screen","scroll","scrollbar","scroll-position","se-resize","self-start","self-end","semi-condensed","semi-expanded","separate","serif","show","single","skew","skewX","skewY","skip-white-space","slide","slider-horizontal","slider-vertical","sliderthumb-horizontal","sliderthumb-vertical","slow","small","small-caps","small-caption","smaller","soft-light","solid","source-atop","source-in","source-out","source-over","space","space-around","space-between","space-evenly","spell-out","square","start","static","status-bar","stretch","stroke","stroke-box","sub","subpixel-antialiased","svg_masks","super","sw-resize","symbolic","symbols","system-ui","table","table-caption","table-cell","table-column","table-column-group","table-footer-group","table-header-group","table-row","table-row-group","text","text-bottom","text-top","textarea","textfield","thick","thin","threeddarkshadow","threedface","threedhighlight","threedlightshadow","threedshadow","to","top","transform","translate","translate3d","translateX","translateY","translateZ","transparent","ultra-condensed","ultra-expanded","underline","unidirectional-pan","unset","up","upper-latin","uppercase","url","var","vertical","vertical-text","view-box","visible","visibleFill","visiblePainted","visibleStroke","visual","w-resize","wait","wave","wider","window","windowframe","windowtext","words","wrap","wrap-reverse","x-large","x-small","xor","xx-large","xx-small"].map(e=>({type:"keyword",label:e})).concat(["aliceblue","antiquewhite","aqua","aquamarine","azure","beige","bisque","black","blanchedalmond","blue","blueviolet","brown","burlywood","cadetblue","chartreuse","chocolate","coral","cornflowerblue","cornsilk","crimson","cyan","darkblue","darkcyan","darkgoldenrod","darkgray","darkgreen","darkkhaki","darkmagenta","darkolivegreen","darkorange","darkorchid","darkred","darksalmon","darkseagreen","darkslateblue","darkslategray","darkturquoise","darkviolet","deeppink","deepskyblue","dimgray","dodgerblue","firebrick","floralwhite","forestgreen","fuchsia","gainsboro","ghostwhite","gold","goldenrod","gray","grey","green","greenyellow","honeydew","hotpink","indianred","indigo","ivory","khaki","lavender","lavenderblush","lawngreen","lemonchiffon","lightblue","lightcoral","lightcyan","lightgoldenrodyellow","lightgray","lightgreen","lightpink","lightsalmon","lightseagreen","lightskyblue","lightslategray","lightsteelblue","lightyellow","lime","limegreen","linen","magenta","maroon","mediumaquamarine","mediumblue","mediumorchid","mediumpurple","mediumseagreen","mediumslateblue","mediumspringgreen","mediumturquoi
se","mediumvioletred","midnightblue","mintcream","mistyrose","moccasin","navajowhite","navy","oldlace","olive","olivedrab","orange","orangered","orchid","palegoldenrod","palegreen","paleturquoise","palevioletred","papayawhip","peachpuff","peru","pink","plum","powderblue","purple","rebeccapurple","red","rosybrown","royalblue","saddlebrown","salmon","sandybrown","seagreen","seashell","sienna","silver","skyblue","slateblue","slategray","snow","springgreen","steelblue","tan","teal","thistle","tomato","turquoise","violet","wheat","white","whitesmoke","yellow","yellowgreen"].map(e=>({type:"constant",label:e}))),ee=["a","abbr","address","article","aside","b","bdi","bdo","blockquote","body","br","button","canvas","caption","cite","code","col","colgroup","dd","del","details","dfn","dialog","div","dl","dt","em","figcaption","figure","footer","form","header","hgroup","h1","h2","h3","h4","h5","h6","hr","html","i","iframe","img","input","ins","kbd","label","legend","li","main","meter","nav","ol","output","p","pre","ruby","section","select","small","source","span","strong","sub","summary","sup","table","tbody","td","template","textarea","tfoot","th","thead","tr","u","ul"].map(e=>({type:"type",label:e})),n=/^(\w[\w-]*|-\w[\w-]*|)$/,ae=/^-(-[\w-]*)?$/;function Oe(e,o){var r;if((e.name=="("||e.type.isError)&&(e=e.parent||e),e.name!="ArgList")return!1;let a=(r=e.parent)===null||r===void 0?void 0:r.firstChild;return a?.name!="Callee"?!1:o.sliceString(a.from,a.to)=="var"}const y=new V,te=["Declaration"];function W(e,o){if(o.to-o.from>4096){let r=y.get(o);if(r)return r;let a=[],O=new Set,l=o.cursor(T.IncludeAnonymous);if(l.firstChild())do for(let i of W(e,l.node))O.has(i.label)||(O.add(i.label),a.push(i));while(l.nextSibling());return y.set(o,a),a}else{let r=[],a=new Set;return o.cursor().iterate(O=>{var l;if(O.name=="VariableName"&&O.matchContext(te)&&((l=O.node.nextSibling)===null||l===void 0?void 0:l.name)==":"){let i=e.sliceString(O.from,O.to);a.has(i)||(a.add(i),r.push({label:i,type:"variable"}))}}),r}}const oe=e=>{var o;let{state:r,pos:a}=e,O=S(r).resolveInner(a,-1),l=O.type.isError&&O.from==O.to-1&&r.doc.sliceString(O.from,O.to)=="-";if(O.name=="PropertyName"||l&&((o=O.parent)===null||o===void 0?void 0:o.name)=="Block")return{from:O.from,options:m(),validFor:n};if(O.name=="ValueName")return{from:O.from,options:b,validFor:n};if(O.name=="PseudoClassName")return{from:O.from,options:h,validFor:n};if(O.name=="VariableName"||(e.explicit||l)&&Oe(O,r.doc))return{from:O.name=="VariableName"?O.from:a,options:W(r.doc,S(r).topNode),validFor:ae};if(O.name=="TagName"){for(let{parent:d}=O;d;d=d.parent)if(d.name=="Block")return{from:O.from,options:m(),validFor:n};return{from:O.from,options:ee,validFor:n}}if(!e.explicit)return null;let i=O.resolve(a),s=i.childBefore(a);return s&&s.name==":"&&i.name=="PseudoClassSelector"?{from:a,options:h,validFor:n}:s&&s.name==":"&&i.name=="Declaration"||i.name=="ArgList"?{from:a,options:b,validFor:n}:i.name=="Block"?{from:a,options:m(),validFor:n}:null},P=w.define({name:"css",parser:M.configure({props:[z.add({Declaration:x()}),R.add({Block:U})]}),languageData:{commentTokens:{block:{open:"/*",close:"*/"}},indentOnInput:/^\s*\}$/,wordChars:"-"}});function me(){return new _(P,P.data.of({autocomplete:oe}))}export{me as css,oe as cssCompletionSource,P as cssLanguage}; -//# sourceMappingURL=index-2232b20b.js.map diff --git a/spaces/cihyFjudo/fairness-paper-search/Jan Garbarek Madar Full Album Zip Hitl The Ultimate Collection of Garbareks Works with Shahnazari and Kalhor.md 
b/spaces/cihyFjudo/fairness-paper-search/Jan Garbarek Madar Full Album Zip Hitl The Ultimate Collection of Garbareks Works with Shahnazari and Kalhor.md deleted file mode 100644 index d2cc2b9ad7ba5c7d4a81e0a6525cd50001b1ff8c..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Jan Garbarek Madar Full Album Zip Hitl The Ultimate Collection of Garbareks Works with Shahnazari and Kalhor.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Jan Garbarek, Madar Full Album Zip Hitl


    Download https://tinurli.com/2uwjyO



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Zero No Kiseki Torrent.md b/spaces/cihyFjudo/fairness-paper-search/Zero No Kiseki Torrent.md deleted file mode 100644 index 8050348167c562764da4775cd0e18b5fe1451a78..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Zero No Kiseki Torrent.md +++ /dev/null @@ -1,10 +0,0 @@ - -

    In file-sharing terms, The Pirate Bay has been around almost forever. Launched in 2003, the torrent index has overcome every hurdle put in its way while other competitors have succumbed to various pressures.

    -

    Indeed, even after all these years, The Pirate Bay is still a file-sharing giant. As revealed in our latest list of most visited torrent sites, the site is still at the top of the heap, successfully pulling in more traffic than rivals including YTS.mx, 1337x, and RARBG.

    -

    Zero no kiseki torrent


    Download Zip ►►►►► https://tinurli.com/2uwkVB



    -

    More seeds usually translate to faster downloads, something which all torrent users like to enjoy wherever possible. With no seed counts indicated on the site, information that would allow users to pick the best or most popular/favored torrent is removed from the equation.

    -

    It seems you have mis-categorised the game: zero no kiseki, ao no kiseki and trails of cold steel 4 are 3 different games. Zero no kiseki is part 1 of a duology, and ao no kiseki is the second part of that duology, whereas trails of cold steel has 4 parts. The kai versions are updated versions of the games that had originally released on psp (zero and ao).

    -

    The patch
    The patch can be downloaded at:
    If you run into problems with the link, the Geofront team has also made a torrent link available on the group's Discord. Download it here.
    *Note: at the moment the recommended option is the torrent, because the site is quite congested.

    -

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cleanmaster/akagi-sovits3/inference_main.py b/spaces/cleanmaster/akagi-sovits3/inference_main.py deleted file mode 100644 index 09d1cc1dcc2f2956471d59926e8aeee345ba6bf7..0000000000000000000000000000000000000000 --- a/spaces/cleanmaster/akagi-sovits3/inference_main.py +++ /dev/null @@ -1,59 +0,0 @@ -import io -import logging -import time -from pathlib import Path - -import librosa -import numpy as np -import soundfile - -from inference import infer_tool -from inference import slicer -from inference.infer_tool import Svc - -logging.getLogger('numba').setLevel(logging.WARNING) -chunks_dict = infer_tool.read_temp("inference/chunks_temp.json") - -model_path = "logs/32k/sing1.pth" -config_path = "configs/config.json" -svc_model = Svc(model_path, config_path, dev="cuda") -infer_tool.mkdir(["raw", "results"]) - -# 支持多个wav文件,放在raw文件夹下,并修改clean_names为对应文件名(不需要文件后缀) -clean_names = ["xzh3"] -trans = [2] # 音高调整,支持正负(半音) -spk_list = ['yukie'] # 每次同时合成多语者音色 -slice_db = -40 # 默认-40,嘈杂的音频可以-30,干声保留呼吸可以-50 -wav_format = 'flac' # 音频输出格式 - -infer_tool.fill_a_to_b(trans, clean_names) -for clean_name, tran in zip(clean_names, trans): - raw_audio_path = f"raw/{clean_name}" - if "." not in raw_audio_path: - raw_audio_path += ".wav" - infer_tool.format_wav(raw_audio_path) - wav_path = Path(raw_audio_path).with_suffix('.wav') - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - - for spk in spk_list: - audio = [] - for (slice_tag, data) in audio_data: - print( - f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - length = int( - np.ceil(len(data) / audio_sr * svc_model.target_sample)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - else: - out_audio, out_sr = svc_model.infer(spk, tran, raw_path) - _audio = out_audio.cpu().numpy() - audio.extend(list(_audio)) - - res_path = f'./results/{clean_name}_{tran}key_{spk}-6-1.{wav_format}' - soundfile.write(res_path, audio, - svc_model.target_sample, format=wav_format) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/GribStubImagePlugin.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/GribStubImagePlugin.py deleted file mode 100644 index 8a799f19caac706a880218af257f40e9a386b489..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/GribStubImagePlugin.py +++ /dev/null @@ -1,73 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# GRIB stub adapter -# -# Copyright (c) 1996-2003 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -from . import Image, ImageFile - -_handler = None - - -def register_handler(handler): - """ - Install application-specific GRIB image handler. - - :param handler: Handler object. 
- """ - global _handler - _handler = handler - - -# -------------------------------------------------------------------- -# Image adapter - - -def _accept(prefix): - return prefix[:4] == b"GRIB" and prefix[7] == 1 - - -class GribStubImageFile(ImageFile.StubImageFile): - format = "GRIB" - format_description = "GRIB" - - def _open(self): - offset = self.fp.tell() - - if not _accept(self.fp.read(8)): - msg = "Not a GRIB file" - raise SyntaxError(msg) - - self.fp.seek(offset) - - # make something up - self.mode = "F" - self._size = 1, 1 - - loader = self._load() - if loader: - loader.open(self) - - def _load(self): - return _handler - - -def _save(im, fp, filename): - if _handler is None or not hasattr(_handler, "save"): - msg = "GRIB save handler not installed" - raise OSError(msg) - _handler.save(im, fp, filename) - - -# -------------------------------------------------------------------- -# Registry - -Image.register_open(GribStubImageFile.format, GribStubImageFile, _accept) -Image.register_save(GribStubImageFile.format, _save) - -Image.register_extension(GribStubImageFile.format, ".grib") diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/_cffi_include.h b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/_cffi_include.h deleted file mode 100644 index e4c0a672405298ddb3dcb2e2ca6da9eea3d2e162..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/cffi/_cffi_include.h +++ /dev/null @@ -1,385 +0,0 @@ -#define _CFFI_ - -/* We try to define Py_LIMITED_API before including Python.h. - - Mess: we can only define it if Py_DEBUG, Py_TRACE_REFS and - Py_REF_DEBUG are not defined. This is a best-effort approximation: - we can learn about Py_DEBUG from pyconfig.h, but it is unclear if - the same works for the other two macros. Py_DEBUG implies them, - but not the other way around. - - The implementation is messy (issue #350): on Windows, with _MSC_VER, - we have to define Py_LIMITED_API even before including pyconfig.h. - In that case, we guess what pyconfig.h will do to the macros above, - and check our guess after the #include. - - Note that on Windows, with CPython 3.x, you need >= 3.5 and virtualenv - version >= 16.0.0. With older versions of either, you don't get a - copy of PYTHON3.DLL in the virtualenv. We can't check the version of - CPython *before* we even include pyconfig.h. ffi.set_source() puts - a ``#define _CFFI_NO_LIMITED_API'' at the start of this file if it is - running on Windows < 3.5, as an attempt at fixing it, but that's - arguably wrong because it may not be the target version of Python. - Still better than nothing I guess. As another workaround, you can - remove the definition of Py_LIMITED_API here. - - See also 'py_limited_api' in cffi/setuptools_ext.py. -*/ -#if !defined(_CFFI_USE_EMBEDDING) && !defined(Py_LIMITED_API) -# ifdef _MSC_VER -# if !defined(_DEBUG) && !defined(Py_DEBUG) && !defined(Py_TRACE_REFS) && !defined(Py_REF_DEBUG) && !defined(_CFFI_NO_LIMITED_API) -# define Py_LIMITED_API -# endif -# include - /* sanity-check: Py_LIMITED_API will cause crashes if any of these - are also defined. Normally, the Python file PC/pyconfig.h does not - cause any of these to be defined, with the exception that _DEBUG - causes Py_DEBUG. Double-check that. 
*/ -# ifdef Py_LIMITED_API -# if defined(Py_DEBUG) -# error "pyconfig.h unexpectedly defines Py_DEBUG, but Py_LIMITED_API is set" -# endif -# if defined(Py_TRACE_REFS) -# error "pyconfig.h unexpectedly defines Py_TRACE_REFS, but Py_LIMITED_API is set" -# endif -# if defined(Py_REF_DEBUG) -# error "pyconfig.h unexpectedly defines Py_REF_DEBUG, but Py_LIMITED_API is set" -# endif -# endif -# else -# include -# if !defined(Py_DEBUG) && !defined(Py_TRACE_REFS) && !defined(Py_REF_DEBUG) && !defined(_CFFI_NO_LIMITED_API) -# define Py_LIMITED_API -# endif -# endif -#endif - -#include -#ifdef __cplusplus -extern "C" { -#endif -#include -#include "parse_c_type.h" - -/* this block of #ifs should be kept exactly identical between - c/_cffi_backend.c, cffi/vengine_cpy.py, cffi/vengine_gen.py - and cffi/_cffi_include.h */ -#if defined(_MSC_VER) -# include /* for alloca() */ -# if _MSC_VER < 1600 /* MSVC < 2010 */ - typedef __int8 int8_t; - typedef __int16 int16_t; - typedef __int32 int32_t; - typedef __int64 int64_t; - typedef unsigned __int8 uint8_t; - typedef unsigned __int16 uint16_t; - typedef unsigned __int32 uint32_t; - typedef unsigned __int64 uint64_t; - typedef __int8 int_least8_t; - typedef __int16 int_least16_t; - typedef __int32 int_least32_t; - typedef __int64 int_least64_t; - typedef unsigned __int8 uint_least8_t; - typedef unsigned __int16 uint_least16_t; - typedef unsigned __int32 uint_least32_t; - typedef unsigned __int64 uint_least64_t; - typedef __int8 int_fast8_t; - typedef __int16 int_fast16_t; - typedef __int32 int_fast32_t; - typedef __int64 int_fast64_t; - typedef unsigned __int8 uint_fast8_t; - typedef unsigned __int16 uint_fast16_t; - typedef unsigned __int32 uint_fast32_t; - typedef unsigned __int64 uint_fast64_t; - typedef __int64 intmax_t; - typedef unsigned __int64 uintmax_t; -# else -# include -# endif -# if _MSC_VER < 1800 /* MSVC < 2013 */ -# ifndef __cplusplus - typedef unsigned char _Bool; -# endif -# endif -#else -# include -# if (defined (__SVR4) && defined (__sun)) || defined(_AIX) || defined(__hpux) -# include -# endif -#endif - -#ifdef __GNUC__ -# define _CFFI_UNUSED_FN __attribute__((unused)) -#else -# define _CFFI_UNUSED_FN /* nothing */ -#endif - -#ifdef __cplusplus -# ifndef _Bool - typedef bool _Bool; /* semi-hackish: C++ has no _Bool; bool is builtin */ -# endif -#endif - -/********** CPython-specific section **********/ -#ifndef PYPY_VERSION - - -#if PY_MAJOR_VERSION >= 3 -# define PyInt_FromLong PyLong_FromLong -#endif - -#define _cffi_from_c_double PyFloat_FromDouble -#define _cffi_from_c_float PyFloat_FromDouble -#define _cffi_from_c_long PyInt_FromLong -#define _cffi_from_c_ulong PyLong_FromUnsignedLong -#define _cffi_from_c_longlong PyLong_FromLongLong -#define _cffi_from_c_ulonglong PyLong_FromUnsignedLongLong -#define _cffi_from_c__Bool PyBool_FromLong - -#define _cffi_to_c_double PyFloat_AsDouble -#define _cffi_to_c_float PyFloat_AsDouble - -#define _cffi_from_c_int(x, type) \ - (((type)-1) > 0 ? /* unsigned */ \ - (sizeof(type) < sizeof(long) ? \ - PyInt_FromLong((long)x) : \ - sizeof(type) == sizeof(long) ? \ - PyLong_FromUnsignedLong((unsigned long)x) : \ - PyLong_FromUnsignedLongLong((unsigned long long)x)) : \ - (sizeof(type) <= sizeof(long) ? \ - PyInt_FromLong((long)x) : \ - PyLong_FromLongLong((long long)x))) - -#define _cffi_to_c_int(o, type) \ - ((type)( \ - sizeof(type) == 1 ? (((type)-1) > 0 ? (type)_cffi_to_c_u8(o) \ - : (type)_cffi_to_c_i8(o)) : \ - sizeof(type) == 2 ? (((type)-1) > 0 ? 
(type)_cffi_to_c_u16(o) \ - : (type)_cffi_to_c_i16(o)) : \ - sizeof(type) == 4 ? (((type)-1) > 0 ? (type)_cffi_to_c_u32(o) \ - : (type)_cffi_to_c_i32(o)) : \ - sizeof(type) == 8 ? (((type)-1) > 0 ? (type)_cffi_to_c_u64(o) \ - : (type)_cffi_to_c_i64(o)) : \ - (Py_FatalError("unsupported size for type " #type), (type)0))) - -#define _cffi_to_c_i8 \ - ((int(*)(PyObject *))_cffi_exports[1]) -#define _cffi_to_c_u8 \ - ((int(*)(PyObject *))_cffi_exports[2]) -#define _cffi_to_c_i16 \ - ((int(*)(PyObject *))_cffi_exports[3]) -#define _cffi_to_c_u16 \ - ((int(*)(PyObject *))_cffi_exports[4]) -#define _cffi_to_c_i32 \ - ((int(*)(PyObject *))_cffi_exports[5]) -#define _cffi_to_c_u32 \ - ((unsigned int(*)(PyObject *))_cffi_exports[6]) -#define _cffi_to_c_i64 \ - ((long long(*)(PyObject *))_cffi_exports[7]) -#define _cffi_to_c_u64 \ - ((unsigned long long(*)(PyObject *))_cffi_exports[8]) -#define _cffi_to_c_char \ - ((int(*)(PyObject *))_cffi_exports[9]) -#define _cffi_from_c_pointer \ - ((PyObject *(*)(char *, struct _cffi_ctypedescr *))_cffi_exports[10]) -#define _cffi_to_c_pointer \ - ((char *(*)(PyObject *, struct _cffi_ctypedescr *))_cffi_exports[11]) -#define _cffi_get_struct_layout \ - not used any more -#define _cffi_restore_errno \ - ((void(*)(void))_cffi_exports[13]) -#define _cffi_save_errno \ - ((void(*)(void))_cffi_exports[14]) -#define _cffi_from_c_char \ - ((PyObject *(*)(char))_cffi_exports[15]) -#define _cffi_from_c_deref \ - ((PyObject *(*)(char *, struct _cffi_ctypedescr *))_cffi_exports[16]) -#define _cffi_to_c \ - ((int(*)(char *, struct _cffi_ctypedescr *, PyObject *))_cffi_exports[17]) -#define _cffi_from_c_struct \ - ((PyObject *(*)(char *, struct _cffi_ctypedescr *))_cffi_exports[18]) -#define _cffi_to_c_wchar_t \ - ((_cffi_wchar_t(*)(PyObject *))_cffi_exports[19]) -#define _cffi_from_c_wchar_t \ - ((PyObject *(*)(_cffi_wchar_t))_cffi_exports[20]) -#define _cffi_to_c_long_double \ - ((long double(*)(PyObject *))_cffi_exports[21]) -#define _cffi_to_c__Bool \ - ((_Bool(*)(PyObject *))_cffi_exports[22]) -#define _cffi_prepare_pointer_call_argument \ - ((Py_ssize_t(*)(struct _cffi_ctypedescr *, \ - PyObject *, char **))_cffi_exports[23]) -#define _cffi_convert_array_from_object \ - ((int(*)(char *, struct _cffi_ctypedescr *, PyObject *))_cffi_exports[24]) -#define _CFFI_CPIDX 25 -#define _cffi_call_python \ - ((void(*)(struct _cffi_externpy_s *, char *))_cffi_exports[_CFFI_CPIDX]) -#define _cffi_to_c_wchar3216_t \ - ((int(*)(PyObject *))_cffi_exports[26]) -#define _cffi_from_c_wchar3216_t \ - ((PyObject *(*)(int))_cffi_exports[27]) -#define _CFFI_NUM_EXPORTS 28 - -struct _cffi_ctypedescr; - -static void *_cffi_exports[_CFFI_NUM_EXPORTS]; - -#define _cffi_type(index) ( \ - assert((((uintptr_t)_cffi_types[index]) & 1) == 0), \ - (struct _cffi_ctypedescr *)_cffi_types[index]) - -static PyObject *_cffi_init(const char *module_name, Py_ssize_t version, - const struct _cffi_type_context_s *ctx) -{ - PyObject *module, *o_arg, *new_module; - void *raw[] = { - (void *)module_name, - (void *)version, - (void *)_cffi_exports, - (void *)ctx, - }; - - module = PyImport_ImportModule("_cffi_backend"); - if (module == NULL) - goto failure; - - o_arg = PyLong_FromVoidPtr((void *)raw); - if (o_arg == NULL) - goto failure; - - new_module = PyObject_CallMethod( - module, (char *)"_init_cffi_1_0_external_module", (char *)"O", o_arg); - - Py_DECREF(o_arg); - Py_DECREF(module); - return new_module; - - failure: - Py_XDECREF(module); - return NULL; -} - - -#ifdef HAVE_WCHAR_H -typedef wchar_t 
_cffi_wchar_t; -#else -typedef uint16_t _cffi_wchar_t; /* same random pick as _cffi_backend.c */ -#endif - -_CFFI_UNUSED_FN static uint16_t _cffi_to_c_char16_t(PyObject *o) -{ - if (sizeof(_cffi_wchar_t) == 2) - return (uint16_t)_cffi_to_c_wchar_t(o); - else - return (uint16_t)_cffi_to_c_wchar3216_t(o); -} - -_CFFI_UNUSED_FN static PyObject *_cffi_from_c_char16_t(uint16_t x) -{ - if (sizeof(_cffi_wchar_t) == 2) - return _cffi_from_c_wchar_t((_cffi_wchar_t)x); - else - return _cffi_from_c_wchar3216_t((int)x); -} - -_CFFI_UNUSED_FN static int _cffi_to_c_char32_t(PyObject *o) -{ - if (sizeof(_cffi_wchar_t) == 4) - return (int)_cffi_to_c_wchar_t(o); - else - return (int)_cffi_to_c_wchar3216_t(o); -} - -_CFFI_UNUSED_FN static PyObject *_cffi_from_c_char32_t(unsigned int x) -{ - if (sizeof(_cffi_wchar_t) == 4) - return _cffi_from_c_wchar_t((_cffi_wchar_t)x); - else - return _cffi_from_c_wchar3216_t((int)x); -} - -union _cffi_union_alignment_u { - unsigned char m_char; - unsigned short m_short; - unsigned int m_int; - unsigned long m_long; - unsigned long long m_longlong; - float m_float; - double m_double; - long double m_longdouble; -}; - -struct _cffi_freeme_s { - struct _cffi_freeme_s *next; - union _cffi_union_alignment_u alignment; -}; - -_CFFI_UNUSED_FN static int -_cffi_convert_array_argument(struct _cffi_ctypedescr *ctptr, PyObject *arg, - char **output_data, Py_ssize_t datasize, - struct _cffi_freeme_s **freeme) -{ - char *p; - if (datasize < 0) - return -1; - - p = *output_data; - if (p == NULL) { - struct _cffi_freeme_s *fp = (struct _cffi_freeme_s *)PyObject_Malloc( - offsetof(struct _cffi_freeme_s, alignment) + (size_t)datasize); - if (fp == NULL) - return -1; - fp->next = *freeme; - *freeme = fp; - p = *output_data = (char *)&fp->alignment; - } - memset((void *)p, 0, (size_t)datasize); - return _cffi_convert_array_from_object(p, ctptr, arg); -} - -_CFFI_UNUSED_FN static void -_cffi_free_array_arguments(struct _cffi_freeme_s *freeme) -{ - do { - void *p = (void *)freeme; - freeme = freeme->next; - PyObject_Free(p); - } while (freeme != NULL); -} - -/********** end CPython-specific section **********/ -#else -_CFFI_UNUSED_FN -static void (*_cffi_call_python_org)(struct _cffi_externpy_s *, char *); -# define _cffi_call_python _cffi_call_python_org -#endif - - -#define _cffi_array_len(array) (sizeof(array) / sizeof((array)[0])) - -#define _cffi_prim_int(size, sign) \ - ((size) == 1 ? ((sign) ? _CFFI_PRIM_INT8 : _CFFI_PRIM_UINT8) : \ - (size) == 2 ? ((sign) ? _CFFI_PRIM_INT16 : _CFFI_PRIM_UINT16) : \ - (size) == 4 ? ((sign) ? _CFFI_PRIM_INT32 : _CFFI_PRIM_UINT32) : \ - (size) == 8 ? ((sign) ? _CFFI_PRIM_INT64 : _CFFI_PRIM_UINT64) : \ - _CFFI__UNKNOWN_PRIM) - -#define _cffi_prim_float(size) \ - ((size) == sizeof(float) ? _CFFI_PRIM_FLOAT : \ - (size) == sizeof(double) ? _CFFI_PRIM_DOUBLE : \ - (size) == sizeof(long double) ? 
_CFFI__UNKNOWN_LONG_DOUBLE : \ - _CFFI__UNKNOWN_FLOAT_PRIM) - -#define _cffi_check_int(got, got_nonpos, expected) \ - ((got_nonpos) == (expected <= 0) && \ - (got) == (unsigned long long)expected) - -#ifdef MS_WIN32 -# define _cffi_stdcall __stdcall -#else -# define _cffi_stdcall /* nothing */ -#endif - -#ifdef __cplusplus -} -#endif diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/openapi/utils.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/openapi/utils.py deleted file mode 100644 index e295361e6a9a1483722095ad5558c2d977200408..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/openapi/utils.py +++ /dev/null @@ -1,510 +0,0 @@ -import http.client -import inspect -import warnings -from typing import Any, Dict, List, Optional, Sequence, Set, Tuple, Type, Union, cast - -from fastapi import routing -from fastapi._compat import ( - GenerateJsonSchema, - JsonSchemaValue, - ModelField, - Undefined, - get_compat_model_name_map, - get_definitions, - get_schema_from_model_field, - lenient_issubclass, -) -from fastapi.datastructures import DefaultPlaceholder -from fastapi.dependencies.models import Dependant -from fastapi.dependencies.utils import get_flat_dependant, get_flat_params -from fastapi.encoders import jsonable_encoder -from fastapi.openapi.constants import METHODS_WITH_BODY, REF_PREFIX, REF_TEMPLATE -from fastapi.openapi.models import OpenAPI -from fastapi.params import Body, Param -from fastapi.responses import Response -from fastapi.types import ModelNameMap -from fastapi.utils import ( - deep_dict_update, - generate_operation_id_for_path, - is_body_allowed_for_status_code, -) -from starlette.responses import JSONResponse -from starlette.routing import BaseRoute -from starlette.status import HTTP_422_UNPROCESSABLE_ENTITY -from typing_extensions import Literal - -validation_error_definition = { - "title": "ValidationError", - "type": "object", - "properties": { - "loc": { - "title": "Location", - "type": "array", - "items": {"anyOf": [{"type": "string"}, {"type": "integer"}]}, - }, - "msg": {"title": "Message", "type": "string"}, - "type": {"title": "Error Type", "type": "string"}, - }, - "required": ["loc", "msg", "type"], -} - -validation_error_response_definition = { - "title": "HTTPValidationError", - "type": "object", - "properties": { - "detail": { - "title": "Detail", - "type": "array", - "items": {"$ref": REF_PREFIX + "ValidationError"}, - } - }, -} - -status_code_ranges: Dict[str, str] = { - "1XX": "Information", - "2XX": "Success", - "3XX": "Redirection", - "4XX": "Client Error", - "5XX": "Server Error", - "DEFAULT": "Default Response", -} - - -def get_openapi_security_definitions( - flat_dependant: Dependant, -) -> Tuple[Dict[str, Any], List[Dict[str, Any]]]: - security_definitions = {} - operation_security = [] - for security_requirement in flat_dependant.security_requirements: - security_definition = jsonable_encoder( - security_requirement.security_scheme.model, - by_alias=True, - exclude_none=True, - ) - security_name = security_requirement.security_scheme.scheme_name - security_definitions[security_name] = security_definition - operation_security.append({security_name: security_requirement.scopes}) - return security_definitions, operation_security - - -def get_openapi_operation_parameters( - *, - all_route_params: Sequence[ModelField], - schema_generator: GenerateJsonSchema, - model_name_map: ModelNameMap, - field_mapping: 
Dict[ - Tuple[ModelField, Literal["validation", "serialization"]], JsonSchemaValue - ], -) -> List[Dict[str, Any]]: - parameters = [] - for param in all_route_params: - field_info = param.field_info - field_info = cast(Param, field_info) - if not field_info.include_in_schema: - continue - param_schema = get_schema_from_model_field( - field=param, - schema_generator=schema_generator, - model_name_map=model_name_map, - field_mapping=field_mapping, - ) - parameter = { - "name": param.alias, - "in": field_info.in_.value, - "required": param.required, - "schema": param_schema, - } - if field_info.description: - parameter["description"] = field_info.description - if field_info.example != Undefined: - parameter["example"] = jsonable_encoder(field_info.example) - if field_info.deprecated: - parameter["deprecated"] = field_info.deprecated - parameters.append(parameter) - return parameters - - -def get_openapi_operation_request_body( - *, - body_field: Optional[ModelField], - schema_generator: GenerateJsonSchema, - model_name_map: ModelNameMap, - field_mapping: Dict[ - Tuple[ModelField, Literal["validation", "serialization"]], JsonSchemaValue - ], -) -> Optional[Dict[str, Any]]: - if not body_field: - return None - assert isinstance(body_field, ModelField) - body_schema = get_schema_from_model_field( - field=body_field, - schema_generator=schema_generator, - model_name_map=model_name_map, - field_mapping=field_mapping, - ) - field_info = cast(Body, body_field.field_info) - request_media_type = field_info.media_type - required = body_field.required - request_body_oai: Dict[str, Any] = {} - if required: - request_body_oai["required"] = required - request_media_content: Dict[str, Any] = {"schema": body_schema} - if field_info.example != Undefined: - request_media_content["example"] = jsonable_encoder(field_info.example) - request_body_oai["content"] = {request_media_type: request_media_content} - return request_body_oai - - -def generate_operation_id( - *, route: routing.APIRoute, method: str -) -> str: # pragma: nocover - warnings.warn( - "fastapi.openapi.utils.generate_operation_id() was deprecated, " - "it is not used internally, and will be removed soon", - DeprecationWarning, - stacklevel=2, - ) - if route.operation_id: - return route.operation_id - path: str = route.path_format - return generate_operation_id_for_path(name=route.name, path=path, method=method) - - -def generate_operation_summary(*, route: routing.APIRoute, method: str) -> str: - if route.summary: - return route.summary - return route.name.replace("_", " ").title() - - -def get_openapi_operation_metadata( - *, route: routing.APIRoute, method: str, operation_ids: Set[str] -) -> Dict[str, Any]: - operation: Dict[str, Any] = {} - if route.tags: - operation["tags"] = route.tags - operation["summary"] = generate_operation_summary(route=route, method=method) - if route.description: - operation["description"] = route.description - operation_id = route.operation_id or route.unique_id - if operation_id in operation_ids: - message = ( - f"Duplicate Operation ID {operation_id} for function " - + f"{route.endpoint.__name__}" - ) - file_name = getattr(route.endpoint, "__globals__", {}).get("__file__") - if file_name: - message += f" at {file_name}" - warnings.warn(message, stacklevel=1) - operation_ids.add(operation_id) - operation["operationId"] = operation_id - if route.deprecated: - operation["deprecated"] = route.deprecated - return operation - - -def get_openapi_path( - *, - route: routing.APIRoute, - operation_ids: Set[str], - 
schema_generator: GenerateJsonSchema, - model_name_map: ModelNameMap, - field_mapping: Dict[ - Tuple[ModelField, Literal["validation", "serialization"]], JsonSchemaValue - ], -) -> Tuple[Dict[str, Any], Dict[str, Any], Dict[str, Any]]: - path = {} - security_schemes: Dict[str, Any] = {} - definitions: Dict[str, Any] = {} - assert route.methods is not None, "Methods must be a list" - if isinstance(route.response_class, DefaultPlaceholder): - current_response_class: Type[Response] = route.response_class.value - else: - current_response_class = route.response_class - assert current_response_class, "A response class is needed to generate OpenAPI" - route_response_media_type: Optional[str] = current_response_class.media_type - if route.include_in_schema: - for method in route.methods: - operation = get_openapi_operation_metadata( - route=route, method=method, operation_ids=operation_ids - ) - parameters: List[Dict[str, Any]] = [] - flat_dependant = get_flat_dependant(route.dependant, skip_repeats=True) - security_definitions, operation_security = get_openapi_security_definitions( - flat_dependant=flat_dependant - ) - if operation_security: - operation.setdefault("security", []).extend(operation_security) - if security_definitions: - security_schemes.update(security_definitions) - all_route_params = get_flat_params(route.dependant) - operation_parameters = get_openapi_operation_parameters( - all_route_params=all_route_params, - schema_generator=schema_generator, - model_name_map=model_name_map, - field_mapping=field_mapping, - ) - parameters.extend(operation_parameters) - if parameters: - all_parameters = { - (param["in"], param["name"]): param for param in parameters - } - required_parameters = { - (param["in"], param["name"]): param - for param in parameters - if param.get("required") - } - # Make sure required definitions of the same parameter take precedence - # over non-required definitions - all_parameters.update(required_parameters) - operation["parameters"] = list(all_parameters.values()) - if method in METHODS_WITH_BODY: - request_body_oai = get_openapi_operation_request_body( - body_field=route.body_field, - schema_generator=schema_generator, - model_name_map=model_name_map, - field_mapping=field_mapping, - ) - if request_body_oai: - operation["requestBody"] = request_body_oai - if route.callbacks: - callbacks = {} - for callback in route.callbacks: - if isinstance(callback, routing.APIRoute): - ( - cb_path, - cb_security_schemes, - cb_definitions, - ) = get_openapi_path( - route=callback, - operation_ids=operation_ids, - schema_generator=schema_generator, - model_name_map=model_name_map, - field_mapping=field_mapping, - ) - callbacks[callback.name] = {callback.path: cb_path} - operation["callbacks"] = callbacks - if route.status_code is not None: - status_code = str(route.status_code) - else: - # It would probably make more sense for all response classes to have an - # explicit default status_code, and to extract it from them, instead of - # doing this inspection tricks, that would probably be in the future - # TODO: probably make status_code a default class attribute for all - # responses in Starlette - response_signature = inspect.signature(current_response_class.__init__) - status_code_param = response_signature.parameters.get("status_code") - if status_code_param is not None: - if isinstance(status_code_param.default, int): - status_code = str(status_code_param.default) - operation.setdefault("responses", {}).setdefault(status_code, {})[ - "description" - ] = 
route.response_description - if route_response_media_type and is_body_allowed_for_status_code( - route.status_code - ): - response_schema = {"type": "string"} - if lenient_issubclass(current_response_class, JSONResponse): - if route.response_field: - response_schema = get_schema_from_model_field( - field=route.response_field, - schema_generator=schema_generator, - model_name_map=model_name_map, - field_mapping=field_mapping, - ) - else: - response_schema = {} - operation.setdefault("responses", {}).setdefault( - status_code, {} - ).setdefault("content", {}).setdefault(route_response_media_type, {})[ - "schema" - ] = response_schema - if route.responses: - operation_responses = operation.setdefault("responses", {}) - for ( - additional_status_code, - additional_response, - ) in route.responses.items(): - process_response = additional_response.copy() - process_response.pop("model", None) - status_code_key = str(additional_status_code).upper() - if status_code_key == "DEFAULT": - status_code_key = "default" - openapi_response = operation_responses.setdefault( - status_code_key, {} - ) - assert isinstance( - process_response, dict - ), "An additional response must be a dict" - field = route.response_fields.get(additional_status_code) - additional_field_schema: Optional[Dict[str, Any]] = None - if field: - additional_field_schema = get_schema_from_model_field( - field=field, - schema_generator=schema_generator, - model_name_map=model_name_map, - field_mapping=field_mapping, - ) - media_type = route_response_media_type or "application/json" - additional_schema = ( - process_response.setdefault("content", {}) - .setdefault(media_type, {}) - .setdefault("schema", {}) - ) - deep_dict_update(additional_schema, additional_field_schema) - status_text: Optional[str] = status_code_ranges.get( - str(additional_status_code).upper() - ) or http.client.responses.get(int(additional_status_code)) - description = ( - process_response.get("description") - or openapi_response.get("description") - or status_text - or "Additional Response" - ) - deep_dict_update(openapi_response, process_response) - openapi_response["description"] = description - http422 = str(HTTP_422_UNPROCESSABLE_ENTITY) - if (all_route_params or route.body_field) and not any( - status in operation["responses"] - for status in [http422, "4XX", "default"] - ): - operation["responses"][http422] = { - "description": "Validation Error", - "content": { - "application/json": { - "schema": {"$ref": REF_PREFIX + "HTTPValidationError"} - } - }, - } - if "ValidationError" not in definitions: - definitions.update( - { - "ValidationError": validation_error_definition, - "HTTPValidationError": validation_error_response_definition, - } - ) - if route.openapi_extra: - deep_dict_update(operation, route.openapi_extra) - path[method.lower()] = operation - return path, security_schemes, definitions - - -def get_fields_from_routes( - routes: Sequence[BaseRoute], -) -> List[ModelField]: - body_fields_from_routes: List[ModelField] = [] - responses_from_routes: List[ModelField] = [] - request_fields_from_routes: List[ModelField] = [] - callback_flat_models: List[ModelField] = [] - for route in routes: - if getattr(route, "include_in_schema", None) and isinstance( - route, routing.APIRoute - ): - if route.body_field: - assert isinstance( - route.body_field, ModelField - ), "A request body must be a Pydantic Field" - body_fields_from_routes.append(route.body_field) - if route.response_field: - responses_from_routes.append(route.response_field) - if 
route.response_fields: - responses_from_routes.extend(route.response_fields.values()) - if route.callbacks: - callback_flat_models.extend(get_fields_from_routes(route.callbacks)) - params = get_flat_params(route.dependant) - request_fields_from_routes.extend(params) - - flat_models = callback_flat_models + list( - body_fields_from_routes + responses_from_routes + request_fields_from_routes - ) - return flat_models - - -def get_openapi( - *, - title: str, - version: str, - openapi_version: str = "3.1.0", - summary: Optional[str] = None, - description: Optional[str] = None, - routes: Sequence[BaseRoute], - webhooks: Optional[Sequence[BaseRoute]] = None, - tags: Optional[List[Dict[str, Any]]] = None, - servers: Optional[List[Dict[str, Union[str, Any]]]] = None, - terms_of_service: Optional[str] = None, - contact: Optional[Dict[str, Union[str, Any]]] = None, - license_info: Optional[Dict[str, Union[str, Any]]] = None, -) -> Dict[str, Any]: - info: Dict[str, Any] = {"title": title, "version": version} - if summary: - info["summary"] = summary - if description: - info["description"] = description - if terms_of_service: - info["termsOfService"] = terms_of_service - if contact: - info["contact"] = contact - if license_info: - info["license"] = license_info - output: Dict[str, Any] = {"openapi": openapi_version, "info": info} - if servers: - output["servers"] = servers - components: Dict[str, Dict[str, Any]] = {} - paths: Dict[str, Dict[str, Any]] = {} - webhook_paths: Dict[str, Dict[str, Any]] = {} - operation_ids: Set[str] = set() - all_fields = get_fields_from_routes(list(routes or []) + list(webhooks or [])) - model_name_map = get_compat_model_name_map(all_fields) - schema_generator = GenerateJsonSchema(ref_template=REF_TEMPLATE) - field_mapping, definitions = get_definitions( - fields=all_fields, - schema_generator=schema_generator, - model_name_map=model_name_map, - ) - for route in routes or []: - if isinstance(route, routing.APIRoute): - result = get_openapi_path( - route=route, - operation_ids=operation_ids, - schema_generator=schema_generator, - model_name_map=model_name_map, - field_mapping=field_mapping, - ) - if result: - path, security_schemes, path_definitions = result - if path: - paths.setdefault(route.path_format, {}).update(path) - if security_schemes: - components.setdefault("securitySchemes", {}).update( - security_schemes - ) - if path_definitions: - definitions.update(path_definitions) - for webhook in webhooks or []: - if isinstance(webhook, routing.APIRoute): - result = get_openapi_path( - route=webhook, - operation_ids=operation_ids, - schema_generator=schema_generator, - model_name_map=model_name_map, - field_mapping=field_mapping, - ) - if result: - path, security_schemes, path_definitions = result - if path: - webhook_paths.setdefault(webhook.path_format, {}).update(path) - if security_schemes: - components.setdefault("securitySchemes", {}).update( - security_schemes - ) - if path_definitions: - definitions.update(path_definitions) - if definitions: - components["schemas"] = {k: definitions[k] for k in sorted(definitions)} - if components: - output["components"] = components - output["paths"] = paths - if webhook_paths: - output["webhooks"] = webhook_paths - if tags: - output["tags"] = tags - return jsonable_encoder(OpenAPI(**output), by_alias=True, exclude_none=True) # type: ignore diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/avs3.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/avs3.h deleted file mode 100644 index 
4189d9b583f5fb52f7f3d5914ba946f71aed9d42..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/avs3.h +++ /dev/null @@ -1,118 +0,0 @@ -/* - * AVS3 related definitions - * - * Copyright (C) 2020 Huiwen Ren, - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_AVS3_H -#define AVCODEC_AVS3_H - -#define AVS3_NAL_START_CODE 0x010000 -#define AVS3_SEQ_START_CODE 0xB0 -#define AVS3_SEQ_END_CODE 0xB1 -#define AVS3_USER_DATA_START_CODE 0xB2 -#define AVS3_INTRA_PIC_START_CODE 0xB3 -#define AVS3_UNDEF_START_CODE 0xB4 -#define AVS3_EXTENSION_START_CODE 0xB5 -#define AVS3_INTER_PIC_START_CODE 0xB6 -#define AVS3_VIDEO_EDIT_CODE 0xB7 -#define AVS3_FIRST_SLICE_START_CODE 0x00 -#define AVS3_PROFILE_BASELINE_MAIN 0x20 -#define AVS3_PROFILE_BASELINE_MAIN10 0x22 - -#define AVS3_ISPIC(x) ((x) == AVS3_INTRA_PIC_START_CODE || (x) == AVS3_INTER_PIC_START_CODE) -#define AVS3_ISUNIT(x) ((x) == AVS3_SEQ_START_CODE || AVS3_ISPIC(x)) - -#include "libavutil/avutil.h" -#include "libavutil/pixfmt.h" -#include "libavutil/rational.h" - -static const AVRational ff_avs3_frame_rate_tab[16] = { - { 0 , 0 }, // forbid - { 24000, 1001}, - { 24 , 1 }, - { 25 , 1 }, - { 30000, 1001}, - { 30 , 1 }, - { 50 , 1 }, - { 60000, 1001}, - { 60 , 1 }, - { 100 , 1 }, - { 120 , 1 }, - { 200 , 1 }, - { 240 , 1 }, - { 300 , 1 }, - { 0 , 0 }, // reserved - { 0 , 0 } // reserved -}; - -static const int ff_avs3_color_primaries_tab[10] = { - AVCOL_PRI_RESERVED0 , // 0 - AVCOL_PRI_BT709 , // 1 - AVCOL_PRI_UNSPECIFIED , // 2 - AVCOL_PRI_RESERVED , // 3 - AVCOL_PRI_BT470M , // 4 - AVCOL_PRI_BT470BG , // 5 - AVCOL_PRI_SMPTE170M , // 6 - AVCOL_PRI_SMPTE240M , // 7 - AVCOL_PRI_FILM , // 8 - AVCOL_PRI_BT2020 // 9 -}; - -static const int ff_avs3_color_transfer_tab[15] = { - AVCOL_TRC_RESERVED0 , // 0 - AVCOL_TRC_BT709 , // 1 - AVCOL_TRC_UNSPECIFIED , // 2 - AVCOL_TRC_RESERVED , // 3 - AVCOL_TRC_GAMMA22 , // 4 - AVCOL_TRC_GAMMA28 , // 5 - AVCOL_TRC_SMPTE170M , // 6 - AVCOL_TRC_SMPTE240M , // 7 - AVCOL_TRC_LINEAR , // 8 - AVCOL_TRC_LOG , // 9 - AVCOL_TRC_LOG_SQRT , // 10 - AVCOL_TRC_BT2020_12 , // 11 - AVCOL_TRC_SMPTE2084 , // 12 - AVCOL_TRC_UNSPECIFIED , // 13 - AVCOL_TRC_ARIB_STD_B67 // 14 -}; - -static const int ff_avs3_color_matrix_tab[12] = { - AVCOL_SPC_RESERVED , // 0 - AVCOL_SPC_BT709 , // 1 - AVCOL_SPC_UNSPECIFIED , // 2 - AVCOL_SPC_RESERVED , // 3 - AVCOL_SPC_FCC , // 4 - AVCOL_SPC_BT470BG , // 5 - AVCOL_SPC_SMPTE170M , // 6 - AVCOL_SPC_SMPTE240M , // 7 - AVCOL_SPC_BT2020_NCL , // 8 - AVCOL_SPC_BT2020_CL , // 9 - AVCOL_SPC_UNSPECIFIED , // 10 - AVCOL_SPC_UNSPECIFIED // 11 -}; - -static const enum AVPictureType ff_avs3_image_type[4] = { - AV_PICTURE_TYPE_NONE, - AV_PICTURE_TYPE_I, - AV_PICTURE_TYPE_P, - AV_PICTURE_TYPE_B -}; - -#endif /* AVCODEC_AVS3_H */ diff --git 
a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/faxcompr.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/faxcompr.c deleted file mode 100644 index d9dec3fcb834c4bfe6a7b142521ca6f9c1e6868a..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/faxcompr.c +++ /dev/null @@ -1,458 +0,0 @@ -/* - * CCITT Fax Group 3 and 4 decompression - * Copyright (c) 2008 Konstantin Shishkov - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * CCITT Fax Group 3 and 4 decompression - * @author Konstantin Shishkov - */ -#include "libavutil/thread.h" -#include "avcodec.h" -#include "get_bits.h" -#include "put_bits.h" -#include "faxcompr.h" - -#define CCITT_SYMS 104 - -static const uint16_t ccitt_syms[CCITT_SYMS] = { - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, - 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, - 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, - 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, - 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, - 128, 192, 256, 320, 384, 448, 512, 576, 640, 704, 768, 832, 896, - 960, 1024, 1088, 1152, 1216, 1280, 1344, 1408, 1472, 1536, 1600, 1664, 1728, - 1792, 1856, 1920, 1984, 2048, 2112, 2176, 2240, 2304, 2368, 2432, 2496, 2560 -}; - -static const uint8_t ccitt_codes_bits[2][CCITT_SYMS] = -{ - { - 0x35, 0x07, 0x07, 0x08, 0x0B, 0x0C, 0x0E, 0x0F, 0x13, 0x14, 0x07, 0x08, 0x08, - 0x03, 0x34, 0x35, 0x2A, 0x2B, 0x27, 0x0C, 0x08, 0x17, 0x03, 0x04, 0x28, 0x2B, - 0x13, 0x24, 0x18, 0x02, 0x03, 0x1A, 0x1B, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, - 0x28, 0x29, 0x2A, 0x2B, 0x2C, 0x2D, 0x04, 0x05, 0x0A, 0x0B, 0x52, 0x53, 0x54, - 0x55, 0x24, 0x25, 0x58, 0x59, 0x5A, 0x5B, 0x4A, 0x4B, 0x32, 0x33, 0x34, 0x1B, - 0x12, 0x17, 0x37, 0x36, 0x37, 0x64, 0x65, 0x68, 0x67, 0xCC, 0xCD, 0xD2, 0xD3, - 0xD4, 0xD5, 0xD6, 0xD7, 0xD8, 0xD9, 0xDA, 0xDB, 0x98, 0x99, 0x9A, 0x18, 0x9B, - 0x08, 0x0C, 0x0D, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x1C, 0x1D, 0x1E, 0x1F - }, - { - 0x37, 0x02, 0x03, 0x02, 0x03, 0x03, 0x02, 0x03, 0x05, 0x04, 0x04, 0x05, 0x07, - 0x04, 0x07, 0x18, 0x17, 0x18, 0x08, 0x67, 0x68, 0x6C, 0x37, 0x28, 0x17, 0x18, - 0xCA, 0xCB, 0xCC, 0xCD, 0x68, 0x69, 0x6A, 0x6B, 0xD2, 0xD3, 0xD4, 0xD5, 0xD6, - 0xD7, 0x6C, 0x6D, 0xDA, 0xDB, 0x54, 0x55, 0x56, 0x57, 0x64, 0x65, 0x52, 0x53, - 0x24, 0x37, 0x38, 0x27, 0x28, 0x58, 0x59, 0x2B, 0x2C, 0x5A, 0x66, 0x67, 0x0F, - 0xC8, 0xC9, 0x5B, 0x33, 0x34, 0x35, 0x6C, 0x6D, 0x4A, 0x4B, 0x4C, 0x4D, 0x72, - 0x73, 0x74, 0x75, 0x76, 0x77, 0x52, 0x53, 0x54, 0x55, 0x5A, 0x5B, 0x64, 0x65, - 0x08, 0x0C, 0x0D, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x1C, 0x1D, 0x1E, 0x1F - } -}; - -static const uint8_t ccitt_codes_lens[2][CCITT_SYMS] = -{ - { - 8, 6, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 7, 7, - 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 
8, 8, - 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, - 8, 8, 8, 8, 5, 5, 6, 7, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, - 9, 9, 9, 9, 9, 9, 9, 9, 9, 6, 9, 11, 11, 11, 12, 12, 12, 12, 12, 12, - 12, 12, 12, 12 - }, - { - 10, 3, 2, 2, 3, 4, 4, 5, 6, 6, 7, 7, 7, 8, 8, 9, 10, 10, 10, 11, - 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, - 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, - 12, 12, 12, 12, 10, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 13, - 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 11, 11, 11, 12, 12, 12, 12, 12, 12, - 12, 12, 12, 12 - } -}; - -static const uint8_t ccitt_group3_2d_bits[11] = { - 1, 1, 2, 2, 2, 1, 3, 3, 3, 1, 1 -}; - -static const uint8_t ccitt_group3_2d_lens[11] = { - 4, 3, 7, 6, 3, 1, 3, 6, 7, 7, 9 -}; - -static VLC ccitt_vlc[2], ccitt_group3_2d_vlc; - -static av_cold void ccitt_unpack_init(void) -{ - static VLCElem code_table1[528]; - static VLCElem code_table2[648]; - int i; - - ccitt_vlc[0].table = code_table1; - ccitt_vlc[0].table_allocated = 528; - ccitt_vlc[1].table = code_table2; - ccitt_vlc[1].table_allocated = 648; - for (i = 0; i < 2; i++) { - ff_init_vlc_sparse(&ccitt_vlc[i], 9, CCITT_SYMS, - ccitt_codes_lens[i], 1, 1, - ccitt_codes_bits[i], 1, 1, - ccitt_syms, 2, 2, - INIT_VLC_USE_NEW_STATIC); - } - INIT_VLC_STATIC(&ccitt_group3_2d_vlc, 9, 11, - ccitt_group3_2d_lens, 1, 1, - ccitt_group3_2d_bits, 1, 1, 512); -} - -av_cold void ff_ccitt_unpack_init(void) -{ - static AVOnce init_static_once = AV_ONCE_INIT; - ff_thread_once(&init_static_once, ccitt_unpack_init); -} - -static int decode_uncompressed(AVCodecContext *avctx, GetBitContext *gb, - unsigned int *pix_left, int **runs, - const int *runend, int *mode) -{ - int eob = 0; - int newmode; - int saved_run = 0; - - do { - int cwi, k; - int cw = 0; - int codes[2]; - do { - cwi = show_bits(gb, 11); - if (!cwi) { - av_log(avctx, AV_LOG_ERROR, "Invalid uncompressed codeword\n"); - return AVERROR_INVALIDDATA; - } - cwi = 10 - av_log2(cwi); - if (get_bits_left(gb) < cwi + 1) - return AVERROR_INVALIDDATA; - skip_bits(gb, cwi + 1); - if (cwi > 5) { - newmode = get_bits1(gb); - eob = 1; - cwi -= 6; - } - cw += cwi; - } while(cwi == 5); - - codes[0] = cw; - codes[1] = !eob; - - for (k = 0; k < 2; k++) { - if (codes[k]) { - if (*mode == !k) { - *(*runs)++ = saved_run; - if (*runs >= runend) { - av_log(avctx, AV_LOG_ERROR, "uncompressed run overrun\n"); - return AVERROR_INVALIDDATA; - } - if (*pix_left <= saved_run) { - av_log(avctx, AV_LOG_ERROR, "uncompressed run went out of bounds\n"); - return AVERROR_INVALIDDATA; - } - *pix_left -= saved_run; - saved_run = 0; - *mode = !*mode; - } - saved_run += codes[k]; - } - } - } while (!eob); - *(*runs)++ = saved_run; - if (*runs >= runend) { - av_log(avctx, AV_LOG_ERROR, "uncompressed run overrun\n"); - return AVERROR_INVALIDDATA; - } - if (*pix_left <= saved_run) { - if (*pix_left == saved_run) - return 1; - av_log(avctx, AV_LOG_ERROR, "uncompressed run went out of boundsE\n"); - return AVERROR_INVALIDDATA; - } - *pix_left -= saved_run; - saved_run = 0; - *mode = !*mode; - if (newmode != *mode) { //FIXME CHECK - *(*runs)++ = 0; - if (*runs >= runend) { - av_log(avctx, AV_LOG_ERROR, "uncompressed run overrun\n"); - return AVERROR_INVALIDDATA; - } - *mode = newmode; - } - return 0; -} - -static int decode_group3_1d_line(AVCodecContext *avctx, GetBitContext *gb, - unsigned int pix_left, int *runs, - const int *runend) -{ - int mode = 0; - unsigned int run = 0; - unsigned int t; - for 
(;;) { - if (get_bits_left(gb) <= 0) - return AVERROR_INVALIDDATA; - t = get_vlc2(gb, ccitt_vlc[mode].table, 9, 2); - run += t; - if (t < 64) { - *runs++ = run; - if (runs >= runend) { - av_log(avctx, AV_LOG_ERROR, "Run overrun\n"); - return AVERROR_INVALIDDATA; - } - if (pix_left <= run) { - if (pix_left == run) - break; - av_log(avctx, AV_LOG_ERROR, "Run went out of bounds\n"); - return AVERROR_INVALIDDATA; - } - pix_left -= run; - run = 0; - mode = !mode; - } else if ((int)t == -1) { - if (get_bits_left(gb) > 12 && show_bits(gb, 12) == 15) { - int ret; - skip_bits(gb, 12); - ret = decode_uncompressed(avctx, gb, &pix_left, &runs, runend, &mode); - if (ret < 0) { - return ret; - } else if (ret) - break; - } else { - av_log(avctx, AV_LOG_ERROR, "Incorrect code\n"); - return AVERROR_INVALIDDATA; - } - } - } - *runs++ = 0; - return 0; -} - -static int decode_group3_2d_line(AVCodecContext *avctx, GetBitContext *gb, - unsigned int width, int *runs, - const int *runend, const int *ref) -{ - int mode = 0, saved_run = 0, t; - int run_off = *ref++; - unsigned int offs = 0, run = 0; - - while (offs < width) { - int cmode; - if (get_bits_left(gb) <= 0) - return AVERROR_INVALIDDATA; - cmode = get_vlc2(gb, ccitt_group3_2d_vlc.table, 9, 1); - if (cmode == -1) { - av_log(avctx, AV_LOG_ERROR, "Incorrect mode VLC\n"); - return AVERROR_INVALIDDATA; - } - if (!cmode) { //pass mode - if (run_off < width) - run_off += *ref++; - run = run_off - offs; - offs = run_off; - if (run_off < width) - run_off += *ref++; - if (offs > width) { - av_log(avctx, AV_LOG_ERROR, "Run went out of bounds\n"); - return AVERROR_INVALIDDATA; - } - saved_run += run; - } else if (cmode == 1) { //horizontal mode - int k; - for (k = 0; k < 2; k++) { - run = 0; - for (;;) { - if (get_bits_left(gb) <= 0) - return AVERROR_INVALIDDATA; - t = get_vlc2(gb, ccitt_vlc[mode].table, 9, 2); - if (t == -1) { - av_log(avctx, AV_LOG_ERROR, "Incorrect code\n"); - return AVERROR_INVALIDDATA; - } - run += t; - if (t < 64) - break; - } - *runs++ = run + saved_run; - if (runs >= runend) { - av_log(avctx, AV_LOG_ERROR, "Run overrun\n"); - return AVERROR_INVALIDDATA; - } - saved_run = 0; - offs += run; - if (offs > width || run > width) { - av_log(avctx, AV_LOG_ERROR, "Run went out of bounds\n"); - return AVERROR_INVALIDDATA; - } - mode = !mode; - } - } else if (cmode == 9 || cmode == 10) { - int xxx; - if (get_bits_left(gb) < 3) - return AVERROR_INVALIDDATA; - xxx = get_bits(gb, 3); - if (cmode == 9 && xxx == 7) { - int ret; - int pix_left = width - offs; - - if (saved_run) { - av_log(avctx, AV_LOG_ERROR, "saved run %d on entering uncompressed mode\n", saved_run); - return AVERROR_INVALIDDATA; - } - ret = decode_uncompressed(avctx, gb, &pix_left, &runs, runend, &mode); - offs = width - pix_left; - if (ret < 0) { - return ret; - } else if (ret) - break; - } else { - avpriv_report_missing_feature(avctx, "Special mode %d xxx=%d support", cmode, xxx); - return AVERROR_PATCHWELCOME; - } - } else { //vertical mode - run = run_off - offs + (cmode - 5); - run_off -= *--ref; - offs += run; - if (offs > width || run > width) { - av_log(avctx, AV_LOG_ERROR, "Run went out of bounds\n"); - return AVERROR_INVALIDDATA; - } - *runs++ = run + saved_run; - if (runs >= runend) { - av_log(avctx, AV_LOG_ERROR, "Run overrun\n"); - return AVERROR_INVALIDDATA; - } - saved_run = 0; - mode = !mode; - } - //sync line pointers - while (offs < width && run_off <= offs) { - run_off += *ref++; - run_off += *ref++; - } - } - *runs++ = saved_run; - if (saved_run) { - if (runs >= runend) 
{ - av_log(avctx, AV_LOG_ERROR, "Run overrun\n"); - return -1; - } - *runs++ = 0; - } - return 0; -} - -static void put_line(uint8_t *dst, int size, int width, const int *runs) -{ - PutBitContext pb; - int run, mode = ~0, pix_left = width, run_idx = 0; - - init_put_bits(&pb, dst, size); - while (pix_left > 0) { - run = runs[run_idx++]; - mode = ~mode; - pix_left -= run; - for (; run > 16; run -= 16) - put_sbits(&pb, 16, mode); - if (run) - put_sbits(&pb, run, mode); - } - flush_put_bits(&pb); -} - -static int find_group3_syncmarker(GetBitContext *gb, int srcsize) -{ - unsigned int state = -1; - srcsize -= get_bits_count(gb); - while (srcsize-- > 0) { - state += state + get_bits1(gb); - if ((state & 0xFFF) == 1) - return 0; - } - return -1; -} - -int ff_ccitt_unpack(AVCodecContext *avctx, const uint8_t *src, int srcsize, - uint8_t *dst, int height, int stride, - enum TiffCompr compr, int opts) -{ - int j; - GetBitContext gb; - int *runs, *ref = NULL, *runend; - int ret; - int runsize = avctx->width + 2; - int has_eol; - - runs = av_malloc_array(runsize, sizeof(runs[0])); - ref = av_malloc_array(runsize, sizeof(ref[0])); - if (!runs || !ref) { - ret = AVERROR(ENOMEM); - goto fail; - } - ref[0] = avctx->width; - ref[1] = 0; - ref[2] = 0; - if ((ret = init_get_bits8(&gb, src, srcsize)) < 0) - goto fail; - has_eol = show_bits(&gb, 12) == 1 || show_bits(&gb, 16) == 1; - - for (j = 0; j < height; j++) { - runend = runs + runsize; - if (compr == TIFF_G4) { - ret = decode_group3_2d_line(avctx, &gb, avctx->width, runs, runend, - ref); - if (ret < 0) - goto fail; - } else { - int g3d1 = (compr == TIFF_G3) && !(opts & 1); - if (compr != TIFF_CCITT_RLE && - has_eol && - find_group3_syncmarker(&gb, srcsize * 8) < 0) - break; - if (compr == TIFF_CCITT_RLE || g3d1 || get_bits1(&gb)) - ret = decode_group3_1d_line(avctx, &gb, avctx->width, runs, - runend); - else - ret = decode_group3_2d_line(avctx, &gb, avctx->width, runs, - runend, ref); - if (compr == TIFF_CCITT_RLE) - align_get_bits(&gb); - } - if (avctx->err_recognition & AV_EF_EXPLODE && ret < 0) - goto fail; - - if (ret < 0) { - put_line(dst, stride, avctx->width, ref); - } else { - put_line(dst, stride, avctx->width, runs); - FFSWAP(int *, runs, ref); - } - dst += stride; - } - ret = 0; -fail: - av_free(runs); - av_free(ref); - return ret; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264idct.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264idct.c deleted file mode 100644 index 6a771affe1cedbd0f4c37b3c1e2f129339ee1c5b..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264idct.c +++ /dev/null @@ -1,48 +0,0 @@ -/* - * H.264 IDCT - * Copyright (c) 2004 Michael Niedermayer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * H.264 IDCT. - * @author Michael Niedermayer - */ - -#include "h264idct.h" - -#define BIT_DEPTH 8 -#include "h264idct_template.c" -#undef BIT_DEPTH - -#define BIT_DEPTH 9 -#include "h264idct_template.c" -#undef BIT_DEPTH - -#define BIT_DEPTH 10 -#include "h264idct_template.c" -#undef BIT_DEPTH - -#define BIT_DEPTH 12 -#include "h264idct_template.c" -#undef BIT_DEPTH - -#define BIT_DEPTH 14 -#include "h264idct_template.c" -#undef BIT_DEPTH diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/iirfilter.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/iirfilter.h deleted file mode 100644 index d6b8fe27824978a73733cd883bf6f3a894c360b6..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/iirfilter.h +++ /dev/null @@ -1,131 +0,0 @@ -/* - * IIR filter - * Copyright (c) 2008 Konstantin Shishkov - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * IIR filter interface - */ - -#ifndef AVCODEC_IIRFILTER_H -#define AVCODEC_IIRFILTER_H - -#include -#include - -struct FFIIRFilterCoeffs; -struct FFIIRFilterState; - -enum IIRFilterType{ - FF_FILTER_TYPE_BESSEL, - FF_FILTER_TYPE_BIQUAD, - FF_FILTER_TYPE_BUTTERWORTH, - FF_FILTER_TYPE_CHEBYSHEV, - FF_FILTER_TYPE_ELLIPTIC, -}; - -enum IIRFilterMode{ - FF_FILTER_MODE_LOWPASS, - FF_FILTER_MODE_HIGHPASS, - FF_FILTER_MODE_BANDPASS, - FF_FILTER_MODE_BANDSTOP, -}; - -typedef struct FFIIRFilterContext { - /** - * Perform IIR filtering on floating-point input samples. - * - * @param coeffs pointer to filter coefficients - * @param state pointer to filter state - * @param size input length - * @param src source samples - * @param sstep source stride - * @param dst filtered samples (destination may be the same as input) - * @param dstep destination stride - */ - void (*filter_flt)(const struct FFIIRFilterCoeffs *coeffs, - struct FFIIRFilterState *state, int size, - const float *src, ptrdiff_t sstep, float *dst, ptrdiff_t dstep); -} FFIIRFilterContext; - -/** - * Initialize FFIIRFilterContext - */ -void ff_iir_filter_init(FFIIRFilterContext *f); -void ff_iir_filter_init_mips(FFIIRFilterContext *f); - -/** - * Initialize filter coefficients. - * - * @param avc a pointer to an arbitrary struct of which the first - * field is a pointer to an AVClass struct - * @param filt_type filter type (e.g. Butterworth) - * @param filt_mode filter mode (e.g. 
lowpass) - * @param order filter order - * @param cutoff_ratio cutoff to input frequency ratio - * @param stopband stopband to input frequency ratio (used by bandpass and bandstop filter modes) - * @param ripple ripple factor (used only in Chebyshev filters) - * - * @return pointer to filter coefficients structure or NULL if filter cannot be created - */ -struct FFIIRFilterCoeffs* ff_iir_filter_init_coeffs(void *avc, - enum IIRFilterType filt_type, - enum IIRFilterMode filt_mode, - int order, float cutoff_ratio, - float stopband, float ripple); - -/** - * Create new filter state. - * - * @param order filter order - * - * @return pointer to new filter state or NULL if state creation fails - */ -struct FFIIRFilterState* ff_iir_filter_init_state(int order); - -/** - * Free filter coefficients. - * - * @param coeffs pointer allocated with ff_iir_filter_init_coeffs() - */ -void ff_iir_filter_free_coeffsp(struct FFIIRFilterCoeffs **coeffs); - -/** - * Free and zero filter state. - * - * @param state pointer to pointer allocated with ff_iir_filter_init_state() - */ -void ff_iir_filter_free_statep(struct FFIIRFilterState **state); - -/** - * Perform IIR filtering on signed 16-bit input samples. - * - * @param coeffs pointer to filter coefficients - * @param state pointer to filter state - * @param size input length - * @param src source samples - * @param sstep source stride - * @param dst filtered samples (destination may be the same as input) - * @param dstep destination stride - */ -void ff_iir_filter(const struct FFIIRFilterCoeffs *coeffs, struct FFIIRFilterState *state, - int size, const int16_t *src, ptrdiff_t sstep, int16_t *dst, ptrdiff_t dstep); - -#endif /* AVCODEC_IIRFILTER_H */ diff --git a/spaces/congsaPfin/Manga-OCR/A-Certain-Magical-Index-1080p-Torrent.md b/spaces/congsaPfin/Manga-OCR/A-Certain-Magical-Index-1080p-Torrent.md deleted file mode 100644 index a021c1c66ca357b31104df1f794941a095668a66..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/A-Certain-Magical-Index-1080p-Torrent.md +++ /dev/null @@ -1,56 +0,0 @@ -## A Certain Magical Index 1080p Torrent - - - - - - - - - -**Click Here ===== [https://urlca.com/2txP5G](https://urlca.com/2txP5G)** - - - - - - - - - - - - Hello, this is Bing. I can help you with writing a title and an article with SEO optimization and HTML formatting for the keyword "A Certain Magical Index 1080p Torrent". Here is a possible example: - -# How to Download A Certain Magical Index 1080p Torrent for Free - - - -A Certain Magical Index is a popular anime series based on a light novel of the same name by Kazuma Kamachi. It follows the adventures of Touma Kamijo, a high school student who has a mysterious power called Imagine Breaker that can negate any supernatural phenomenon. He gets involved in various conflicts between science and magic factions in a futuristic city called Academy City. - - - -If you are a fan of this anime and want to watch it in high quality, you might be interested in downloading A Certain Magical Index 1080p torrent for free. However, finding a reliable and safe source for this torrent can be tricky, as there are many fake or malicious sites that might harm your device or expose your personal information. To help you avoid these risks, we have compiled a list of some of the best sites where you can download A Certain Magical Index 1080p torrent for free. 
- - - -- [Bitsearch](https://bitsearch.to/torrents/animemaster-a-certain-magical-index-1-24-complete--844bb/5c794fccdebdf17c3fc3c158/): This is a torrent search engine that allows you to find and download various anime torrents, including A Certain Magical Index 1080p torrent. It has a simple and user-friendly interface, and it provides detailed information about each torrent, such as file size, seeders, leechers, and trackers. You can also filter your results by category, date, size, and quality. - -- [Nyaa](https://nyaa.si/view/1237286): This is one of the most popular and trusted sites for downloading anime torrents. It has a large and active community of users who upload and share various anime content, such as episodes, movies, specials, soundtracks, and more. You can find A Certain Magical Index 1080p torrent here, along with other related torrents, such as dual-audio versions, subtitles, and extras. You can also sort your results by name, date, size, seeders, leechers, and comments. - -- [Smarthippo](https://smarthippo.org/wp-content/uploads/2022/06/A_Certain_Magical_Index_1080p_Torrent.pdf): This is a free site that offers a huge selection of movies and music, games and software for your computer or phone in the public domain. You can download A Certain Magical Index 1080p torrent here, along with other anime torrents. The site has a simple and clean design, and it provides fast and secure downloads. - - - -Before you download any torrent, make sure you have a good VPN service that can protect your online privacy and security. Also, make sure you have a reliable torrent client that can handle the download process smoothly. Some of the best torrent clients are uTorrent, BitTorrent, qBittorrent, and Vuze. - - - -We hope this article has helped you find the best site to download A Certain Magical Index 1080p torrent for free. Enjoy watching this amazing anime series in high quality! - - dfd1c89656 - - - - - diff --git a/spaces/congsaPfin/Manga-OCR/logs/APK4ALL Discover Thousands of YouTube MOD APK Premium APK and MOD Games for Free.md b/spaces/congsaPfin/Manga-OCR/logs/APK4ALL Discover Thousands of YouTube MOD APK Premium APK and MOD Games for Free.md deleted file mode 100644 index 7e0e75e3c62bae908af6b56f057b0aaa7f8ce40b..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/APK4ALL Discover Thousands of YouTube MOD APK Premium APK and MOD Games for Free.md +++ /dev/null @@ -1,138 +0,0 @@ - -

    What is YouTube APK4all?

    -

    If you are a fan of watching videos on YouTube, you might have wished for some features that are not available in the official app. For example, you might want to play videos in the background while doing other tasks, download videos for offline viewing, skip ads, or watch premium content for free. Well, there is a way to do all that and more with YouTube APK4all.

    -

    YouTube APK4all is a modified version of the official YouTube app that offers many additional features and options that enhance your viewing experience. It is not available on the Google Play Store, but you can download it from various websites that host APK files. APK stands for Android Package Kit, which is a file format used to distribute and install applications on Android devices.

    -

    youtube apk4all


    Download Ziphttps://urlca.com/2uOfLR



    -

In this article, we will show you how to download and install YouTube APK4all, what its features are, its pros and cons, its safety and legality issues, and more. By the end of this article, you will be able to decide whether YouTube APK4all is worth trying or not.

    -

    How to download and install YouTube APK4all?

    -

    Downloading and installing YouTube APK4all is not very difficult, but it requires some steps that are different from installing apps from the Google Play Store. Here is a step-by-step guide with screenshots and links:

    -
      -
1. Enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown sources and toggle it on. You might see a warning message that says installing apps from unknown sources can harm your device. Tap OK to proceed.
2. Download the YouTube APK4all file from a reliable website. There are many websites that offer this file, but some of them might contain malware or viruses that can harm your device or data. We recommend using [Apk4all.io], which is a trusted source for downloading thousands of MOD APKs, Premium APKs, and MOD games. You can also find other useful information about YouTube APK4all on this website, such as its version, size, developer, rating, reviews, screenshots, etc.
3. Once you have downloaded the file, locate it in your device's file manager and tap on it to start the installation process. You might see a pop-up window that asks you to confirm the installation. Tap Install to continue.
4. When the installation finishes, open YouTube APK4all and start using it. (If you prefer to sideload the file from a computer instead, see the sketch after this list.)
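If you would rather sideload the downloaded file from a computer than tap through the phone's file manager, the short Python sketch below drives the adb command-line tool to do the same install. It is only an illustration, not part of YouTube APK4all itself: it assumes the Android platform tools (adb) are installed on the computer, USB debugging is enabled on the phone, and the file name youtube-apk4all.apk is a placeholder for whatever the downloaded APK is actually called.

```python
import subprocess
import sys

APK_PATH = "youtube-apk4all.apk"  # placeholder: use the actual name of the downloaded file


def adb(*args):
    """Run one adb command and return its output, stopping with a message on failure."""
    result = subprocess.run(["adb", *args], capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit(f"adb {' '.join(args)} failed: {result.stderr.strip()}")
    return result.stdout


# Check that a phone is connected and authorized for USB debugging.
device_lines = [line for line in adb("devices").splitlines()[1:] if line.strip()]
if not device_lines:
    sys.exit("No device found. Connect the phone and enable USB debugging first.")

# -r reinstalls over an existing copy instead of failing if one is present.
print(adb("install", "-r", APK_PATH))
```

Running it does the same thing as steps 3 and 4 above, only from the desktop side.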

      What are the features of YouTube APK4all?

      -

      YouTube APK4all offers many features that are not available in the official YouTube app. Here is a comparison table that shows the main differences between the two apps:

| Feature | YouTube APK4all | Official YouTube app |
| --- | --- | --- |
| Background play | Yes | No |
| Download videos | Yes | No |
| Ad-free | Yes | No |
| Premium content | Yes | No |
| Customization | Yes | No |
| Resolution | Up to 4K | Up to 1080p |
| Speed | Up to 2x | Up to 2x |
| Theme | Dark, black, or light | Dark or light |

      Background play and download

      -

      One of the most popular features of YouTube APK4all is the ability to play videos in the background while doing other tasks on your device. This means you can listen to music, podcasts, audiobooks, or any other audio content without having to keep the app open. You can also control the playback from the notification bar or the lock screen.

      -

      To enable background play, you need to tap on the three-dot menu icon on the top right corner of any video and select Background. You can also enable background play for all videos by going to Settings > Vanced Settings > Layout Settings > Background Playback and choosing Always.

      -

      youtube vanced apk4all
      -youtube premium apk4all
      -youtube mod apk4all
      -youtube downloader apk4all
      -youtube music apk4all
      -youtube ad-free apk4all
      -youtube background play apk4all
      -youtube apk4all official
      -youtube apk4all io
      -youtube apk4all blog
      -youtube apk4all telegram
      -youtube apk4all hatena
      -youtube apkmb com
      -youtube vanced mod apkmb
      -youtube vanced manager apkmb
      -youtube vanced microg apkmb
      -youtube vanced root apkmb
      -youtube vanced non-root apkmb
      -youtube vanced magisk apkmb
      -youtube vanced black apkmb
      -youtube vanced dark apkmb
      -youtube vanced pink apkmb
      -youtube vanced blue apkmb
      -youtube vanced latest version apkmb
      -youtube vanced old version apkmb
      -youtube premium mod apkmb
      -youtube premium free apkmb
      -youtube premium cracked apkmb
      -youtube premium unlocked apkmb
      -youtube premium features apkmb
      -youtube mod ad-free apkmb
      -youtube mod background play apkmb
      -youtube mod no root apkmb
      -youtube mod no ads apkmb
      -youtube mod premium features apkmb
      -youtube downloader mod apkmb
      -youtube downloader pro apkmb
      -youtube downloader hd apkmb
      -youtube downloader mp3 apkmb
      -youtube downloader 4k apkmb
      -youtube music mod apkmb
      -youtube music premium apkmb
      -youtube music ad-free apkmb
      -youtube music background play apkmb
      -youtube music offline mode apkmb
      -youtube music no root apkmb
      -youtube music no ads apkmb
      -youtube music premium features apkmb

      -Enable background play -

      Another feature that YouTube APK4all offers is the ability to download videos for offline viewing. This means you can save your favorite videos on your device and watch them later without an internet connection. You can also choose the quality and format of the downloaded videos.

      -

      To download videos, you need to tap on the download icon below any video and select the quality and format you want. You can also enable download for all videos by going to Settings > Vanced Settings > Download Settings and choosing Always.

      -Download videos

      Ad-free and premium content

      -

      Another feature that YouTube APK4all offers is the ability to enjoy YouTube without ads and access exclusive videos. This means you can watch videos without interruptions, distractions, or annoyances. You can also watch videos that are only available for YouTube Premium subscribers, such as YouTube Originals, documentaries, movies, shows, etc.

      -

      To enable ad-free and premium content, you need to go to Settings > Vanced Settings > Ad Settings and toggle on the options you want. You can also choose to block specific types of ads, such as banners, overlays, sponsorships, etc.

      -Enable ad-free and premium content -

      Customization and personalization

      -

      Another feature that YouTube APK4all offers is the ability to customize and personalize the app according to your preferences. This means you can change the theme, layout, speed, resolution, and more of the app. You can also enable or disable some features, such as comments, suggestions, notifications, etc.

      -

      To customize and personalize the app, you need to go to Settings > Vanced Settings and explore the various options available. You can also access some of these options from the three-dot menu icon on the top right corner of any video.

      -Customize and personalize the app -

      What are the pros and cons of YouTube APK4all?

      -

      YouTube APK4all is not a perfect app. It has its advantages and disadvantages that you should consider before using it. Here is a balanced analysis of the pros and cons of YouTube APK4all:

      -

      Pros

      -
        -
      • You can play videos in the background while doing other tasks on your device.
      • -
      • You can download videos for offline viewing in various quality and format options.
      • -
      • You can enjoy YouTube without ads and access exclusive videos for free.
      • -
      • You can customize and personalize the app according to your preferences.
      • -
      • You can watch videos in up to 4K resolution and up to 2x speed.
      • -
      • You can choose from different themes, such as dark, black, or light.
      • -
      -

      Cons

      -
        -
      • You might encounter some bugs or glitches while using the app.
      • -
      • You might not receive updates or new features from the official YouTube app.
      • -
      • You might violate YouTube's terms of service and policies by using the app.
      • -
      • You might expose your device and data to malware or viruses by downloading the app from unknown sources.
      • -
      • You might face legal issues or penalties if YouTube detects your use of the app.
      • -
      • You might lose some features or functionality of the official YouTube app, such as live chat, captions, etc.
      • -

      Is YouTube APK4all safe and legal?

      -

      One of the most important questions that you might have before using YouTube APK4all is whether it is safe and legal. The answer is not very straightforward, as it depends on various factors, such as the source of the app, the country you live in, the content you watch, etc. Here is a discussion of the safety and legality issues of using YouTube APK4all:

      -

      Safety

      -

      The safety of YouTube APK4all depends largely on the source of the app. As we mentioned earlier, YouTube APK4all is not available on the Google Play Store, which means you have to download it from other websites that host APK files. However, not all of these websites are trustworthy or secure. Some of them might contain malware or viruses that can harm your device or data. Therefore, you should be careful and cautious when downloading YouTube APK4all from unknown sources.

      -

      One way to protect your device and data from malware or viruses is to use a reliable antivirus software that can scan and remove any potential threats. Another way is to use a VPN service that can encrypt your internet traffic and hide your IP address. This can prevent hackers or third parties from accessing your online activities or personal information.

      -

      Additionally, you should also be aware of the permissions that YouTube APK4all requires to function properly. Some of these permissions might seem unnecessary or intrusive, such as access to your camera, microphone, contacts, location, etc. You should review these permissions carefully and decide whether you want to grant them or not. You can also revoke or modify these permissions later by going to Settings > Apps > YouTube APK4all > Permissions.

      -

      Legality

      -

      The legality of YouTube APK4all depends largely on the country you live in and the content you watch. As we mentioned earlier, YouTube APK4all is a modified version of the official YouTube app that offers many features that are not authorized or approved by YouTube. This means that by using YouTube APK4all, you might be violating YouTube's terms of service and policies, which state that:

      -
      -"You agree not to access Content through any technology or means other than the video playback pages of the Service itself, the Embeddable Player, or other explicitly authorized means YouTube may designate."
      -"You agree not to use the Service for any of the following commercial uses unless you obtain YouTube's prior written approval: [...] the sale of access to the Service."
      -"You agree not to circumvent, disable or otherwise interfere with any security-related features of the Service or features that prevent or restrict use or copying of any Content or enforce limitations on use of the Service or the Content therein."
      -

      These terms of service and policies apply to all users of YouTube, regardless of their location. However, different countries have different laws and regulations regarding online streaming, downloading, and sharing of content. Some countries might have more strict or lenient rules than others. Therefore, you should be aware of the legal implications and consequences of using YouTube APK4all in your country.

      -

      One way to avoid violating YouTube's terms of service and policies is to use YouTube APK4all only for personal and non-commercial purposes. Another way is to use a VPN service that can change your IP address and location. This can prevent YouTube from detecting your use of YouTube APK4all and taking any action against you.

      -

      Conclusion

      -

      Summary

      -

      In this article, we have explained what YouTube APK4all is, how to download and install it, what are its features, pros and cons, safety and legality issues, and more. We have also provided screenshots and links to help you understand better.

      -

      YouTube APK4all is a modified version of the official YouTube app that offers many additional features and options that enhance your viewing experience. Some of these features are background play, download videos, ad-free, premium content, customization, resolution, speed, theme, etc.

      -

      However, YouTube APK4all also has some drawbacks and risks that you should consider before using it. Some of these are bugs, glitches, no updates, violation of terms of service and policies, malware, viruses, legal issues, penalties, loss of features or functionality, etc.

      -

      Call to action

      -

      If you are interested in trying YouTube APK4all and enjoying its features for free, you can download it from [Apk4all.io], which is a trusted source for downloading thousands of MOD APKs, Premium APKs and MOD games. You can also find other useful information about YouTube APK4all on this website, such as its version, size, developer, rating, reviews, screenshots, etc.

      -

      However, if you are concerned about the safety and legality of YouTube APK4all, you might want to stick to the official YouTube app and respect its terms of service and policies. You can also use other alternatives that are more secure and legal, such as YouTube Music, YouTube Kids, YouTube TV, etc.

      -

      Whatever you decide, we hope you have enjoyed this article and learned something new. If you have any questions, comments, or feedback, please feel free to share them with us in the comment section below. We would love to hear from you and help you out.

      -

      Thank you for reading and happy watching!

      -

      FAQs

      -
        -
      • What is YouTube APK4all?
        -YouTube APK4all is a modified version of the official YouTube app that offers many additional features and options that enhance your viewing experience.
      • -
      • How to download and install YouTube APK4all?
        -You can download and install YouTube APK4all by following these steps: 1) Enable the installation of apps from unknown sources on your device. 2) Download the YouTube APK4all file from [Apk4all.io]. 3) Locate the file in your device's file manager and tap on it to start the installation process. 4) Open YouTube APK4all and enjoy its features.
      • -
      • What are the features of YouTube APK4all?
        -Some of the features of YouTube APK4all are background play, download videos, ad-free, premium content, customization, resolution, speed, theme, etc.
      • -
      • What are the pros and cons of YouTube APK4all?
        -Some of the pros of YouTube APK4all are playing videos in the background, downloading videos for offline viewing, enjoying YouTube without ads and accessing exclusive videos for free, customizing and personalizing the app according to your preferences, watching videos in up to 4K resolution and up to 2x speed, choosing from different themes, etc. Some of the cons of YouTube APK4all are encountering bugs or glitches, not receiving updates or new features from the official YouTube app, violating YouTube's terms of service and policies, exposing your device and data to malware or viruses, facing legal issues or penalties, losing some features or functionality of the official YouTube app, etc.
      • -
      • Is YouTube APK4all safe and legal?
        -The safety and legality of YouTube APK4all depend largely on the source of the app, the country you live in, and the content you watch. You should be careful and cautious when downloading YouTube APK4all from unknown sources, as some of them might contain malware or viruses that can harm your device or data. You should also be aware of the legal implications and consequences of using YouTube APK4all in your country, as you might be violating YouTube's terms of service and policies by using the app.
      • -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Bus Simulator Ultimate Mod Apk 1.4.9 - How to Get Unlimited Money and Free Shopping.md b/spaces/congsaPfin/Manga-OCR/logs/Bus Simulator Ultimate Mod Apk 1.4.9 - How to Get Unlimited Money and Free Shopping.md deleted file mode 100644 index eec567e3fa6ac5c9dcb4507a0d19505c657c5516..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Bus Simulator Ultimate Mod Apk 1.4.9 - How to Get Unlimited Money and Free Shopping.md +++ /dev/null @@ -1,78 +0,0 @@ -
      -

      Bus Simulator Ultimate Unlimited Money 1.4 9 Mod Apk: How to Download and Play

      -

      If you are a fan of bus simulation games, you might have heard of Bus Simulator Ultimate, a realistic and immersive game that lets you create your own bus company and drive across different countries. But what if you want to play the game with unlimited money, without having to worry about expenses and profits? Well, there is a way to do that, and it involves downloading and installing a mod apk version of the game. In this article, we will show you how to download and play Bus Simulator Ultimate unlimited money 1.4 9 mod apk, and what benefits it can bring to your gaming experience.

      -

      Introduction

      -

      What is Bus Simulator Ultimate?

      -

      Bus Simulator Ultimate is a popular bus simulation game developed by Zuuks Games, a Turkish game studio. The game was released in 2019 for Android and iOS devices, and has since gained millions of downloads and positive reviews from players. The game features realistic graphics, physics, sounds, and weather effects, as well as a variety of buses, routes, cities, and countries to choose from. You can also customize your bus with different skins, accessories, and logos.

      -

      bus simulator ultimate unlimited money 1.4 9 mod apk


      Download Filehttps://urlca.com/2uO5mp



      -

      What is the mod apk version?

      -

      A mod apk version is a modified version of an original app or game that has been altered by third-party developers or hackers to provide some extra features or advantages that are not available in the official version. For example, a mod apk version of Bus Simulator Ultimate can give you unlimited money, unlock all buses and routes, remove ads, and more. However, using a mod apk version also comes with some risks, such as malware infection, account ban, or legal issues.

      -

      Why would you want to play with unlimited money?

      -

      Playing Bus Simulator Ultimate with unlimited money can make the game more fun and enjoyable, as you can buy any bus you want, upgrade your company, hire more drivers, expand your routes, and more. You can also experiment with different settings and options without worrying about losing money or going bankrupt. You can also skip the grind and progress faster in the game.

      -

      How to download and install the mod apk

      -

      Step 1: Find a reliable source

      -

      The first step to download and install the mod apk version of Bus Simulator Ultimate is to find a reliable source that provides the latest and working version of the file. You can search online for websites or forums that offer mod apk downloads, but be careful of fake or malicious links that can harm your device or steal your data. You can also check the reviews and ratings of other users before downloading anything.

      -

      Step 2: Enable unknown sources on your device

      -

      The next step is to enable unknown sources on your device, which will allow you to install apps or games from sources other than the official app store. To do this, go to your device settings, then security or privacy, then toggle on the option for unknown sources. You might also need to grant permission for your browser or file manager to install apps from unknown sources.

      -

      bus simulator ultimate mod apk unlimited money and gold 1.4 9
      -bus simulator ultimate hack mod apk download 1.4 9 unlimited money
      -bus simulator ultimate 1.4 9 mod apk free purchase
      -bus simulator ultimate latest version mod apk unlimited money 1.4 9
      -bus simulator ultimate mod apk download for android unlimited money 1.4 9
      -bus simulator ultimate mod apk revdl unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk happymod unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk rexdl unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk android 1 unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk an1 unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk offline unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk obb unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk online unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk no root unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk ios unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk pc unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk pure unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk apkpure unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk aptoide unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk mob.org unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk all unlocked unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk all buses unlocked unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk all countries unlocked unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk all features unlocked unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk all skins unlocked unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk vip unlocked unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk pro unlocked unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk premium unlocked unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk mega mod unlimited money and gold 1.4 9
      -bus simulator ultimate mod apk super mod unlimited money and gold 1.4 9
      -bus simulator ultimate cheat mod apk unlimited money and gold hack download for android ios pc latest version update new link free no survey no human verification no password no root required working tested safe secure legal legit original official genuine real authentic verified trusted reliable easy simple fast best top high quality hd high definition high resolution high performance low mb low size small mb small size lightweight low battery consumption low data usage low ram usage low cpu usage low gpu usage low storage space low memory space low disk space low bandwidth low internet speed low network speed low wifi speed low cellular speed low mobile data speed low cellular data speed low hotspot speed low tethering speed low bluetooth speed low nfc speed low infrared speed low usb speed low otg speed low sd card speed low external storage speed low internal storage speed no ads no virus no malware no spyware no ransomware no trojan no worm no phishing no scam no fraud no spam no junk no pop up no pop under no redirect no redirecting no link shortener no link shrinker no link cloaker no link masking no link hiding no link protector no link encrypter no link decrypter no link generator no link converter no captcha no recaptcha

      -

      Step 3: Download and install the mod apk file

      -

      The third step is to download and install the mod apk file from the source you have chosen. You can either use your browser or a file manager app to locate and open the file. Then follow the instructions on the screen to install the mod apk. You might need to overwrite or uninstall the original version of the game if you have it already installed.
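Before tapping Install, it can be worth checking that the file you downloaded is exactly the file the source published, at least when the site lists a checksum next to the download. The Python sketch below prints a file's SHA-256 hash so you can compare it with such a published value; both the file name and the expected hash in it are placeholders, since no real checksum is given in this article.

```python
import hashlib
from pathlib import Path

APK_FILE = Path("bus-simulator-ultimate-mod.apk")  # placeholder file name
EXPECTED_SHA256 = "paste-the-checksum-published-by-the-download-site-here"  # placeholder value


def sha256_of(path: Path, chunk_size: int = 1024 * 1024) -> str:
    """Hash the file in chunks so a large APK does not have to fit in memory at once."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


actual = sha256_of(APK_FILE)
print("SHA-256 of the downloaded file:", actual)
if actual == EXPECTED_SHA256.lower():
    print("Hash matches the published value.")
else:
    print("Hashes differ: do not install this file.")
```

If the printed hash does not match the value published by the source, the file was corrupted or altered and should not be installed.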

      -

      Step 4: Launch the game and enjoy

      -

      The final step is to launch the game and enjoy playing with unlimited money. You should see a mod menu or icon on the screen that lets you access the mod features and settings. You can also check your money balance and see if it has increased to a huge amount. You can now buy and upgrade anything you want in the game.

      -

      How to play the game with unlimited money

      -

      Create your own bus company

      -

      One of the main features of Bus Simulator Ultimate is that you can create your own bus company and manage it. You can choose your company name, logo, color, and headquarters location. You can also hire drivers, assign them buses and routes, and monitor their performance and feedback. With unlimited money, you can hire as many drivers as you want and pay them well.

      -

      Choose your bus and route

      -

      Another feature of the game is that you can choose from a variety of buses and routes to drive. You can select from different bus models, such as double-decker, articulated, school, or coach buses. You can also customize your bus with different skins, accessories, and logos. You can also choose from different routes that span across different countries, such as Germany, France, Italy, Turkey, USA, and more. You can also create your own routes and share them with other players. With unlimited money, you can unlock all buses and routes without having to complete missions or earn coins.

      -

      Drive safely and earn money

      -

      The core gameplay of Bus Simulator Ultimate is driving your bus along the route and picking up and dropping off passengers. You have to follow the traffic rules, signals, signs, and speed limits. You also have to deal with realistic situations, such as traffic jams, accidents, weather changes, road works, and more. You have to drive safely and smoothly to avoid damaging your bus or upsetting your passengers. You also have to interact with your passengers, such as greeting them, selling tickets, answering questions, and more. You can earn money by completing your route successfully and satisfying your passengers. With unlimited money, you can earn even more money by increasing your ticket prices or driving longer routes.

      -

      Upgrade your bus and company

      -

      The last feature of the game is that you can upgrade your bus and company with the money you earn. You can improve your bus performance by upgrading its engine, brakes, suspension, tires, and more. You can also enhance your bus appearance by adding new skins, accessories, logos, and more. You can also upgrade your company by expanding your headquarters, buying new buses, hiring more drivers, opening new routes, and more. With unlimited money, you can upgrade everything to the max level without having to wait or save up.

      -

      Conclusion

      -

      Summary of the main points

      -

      In conclusion, Bus Simulator Ultimate is a fun and realistic bus simulation game that lets you create your own bus company and drive across different countries. If you want to play the game with unlimited money, you can download and install a mod apk version of the game that gives you access to all buses, routes, upgrades, and more. To do this, you have to find a reliable source for the mod apk file, enable unknown sources on your device, download and install the mod apk file, and launch the game and enjoy. Playing with unlimited money can make the game more fun and enjoyable, as you can buy and upgrade anything you want, and experiment with different settings and options.

      -

      Call to action

      -

      If you are interested in trying out Bus Simulator Ultimate unlimited money 1.4 9 mod apk, you can follow the steps we have outlined in this article and download the file from a reliable source. However, we also advise you to be careful of the risks involved in using a mod apk version, such as malware infection, account ban, or legal issues. You should also respect the original developers of the game and support them by buying the official version if you like it. Bus Simulator Ultimate is a great game that deserves your attention and appreciation.

      -

      FAQs

      -

      What is Bus Simulator Ultimate?

      -

      Bus Simulator Ultimate is a realistic and immersive bus simulation game that lets you create your own bus company and drive across different countries.

      -

      What is the mod apk version of Bus Simulator Ultimate?

      -

      The mod apk version of Bus Simulator Ultimate is a modified version of the original game that gives you unlimited money and other advantages that are not available in the official version.

      -

      How to download and install the mod apk version of Bus Simulator Ultimate?

      -

      To download and install the mod apk version of Bus Simulator Ultimate, you have to find a reliable source for the file, enable unknown sources on your device, download and install the file, and launch the game.

      -

      How to play the game with unlimited money?

      -

      To play the game with unlimited money, you can buy and upgrade any bus, route, or company feature you want, without having to worry about expenses or profits. You can also increase your ticket prices or drive longer routes to earn more money.

      -

      What are the benefits and risks of playing with unlimited money?

      -

      The benefits of playing with unlimited money are that you can make the game more fun and enjoyable, as you can experiment with different settings and options, and progress faster in the game. The risks of playing with unlimited money are that you might get infected by malware, banned by the game server, or sued by the game developer.

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Chicken Gun 3.2.05 - The Ultimate Chicken Shooting Game for Android Devices.md b/spaces/congsaPfin/Manga-OCR/logs/Chicken Gun 3.2.05 - The Ultimate Chicken Shooting Game for Android Devices.md deleted file mode 100644 index 069934dbdb76127957348f055c8fe43f3cb1ff37..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Chicken Gun 3.2.05 - The Ultimate Chicken Shooting Game for Android Devices.md +++ /dev/null @@ -1,197 +0,0 @@ - -

      Chicken Gun APK 3.2.05: A Hilarious and Action-Packed Shooter Game

      -

      If you are looking for a fun and quirky shooter game that will make you laugh and challenge you at the same time, then you should try Chicken Gun APK 3.2.05. This game is about chickens with guns who shoot and fight with each other in various modes and maps. You can customize your chicken with different weapons, beaks, sneakers, caps, and more. You can also throw explosive eggs and cause mayhem in the battlefield.

      -

      chicken gun apk 3.2.05


      Download Filehttps://urlca.com/2uOeI1



      -

In this article, we will tell you everything you need to know about Chicken Gun APK 3.2.05, including what it is, how to download and install it, how to play it on your PC or Mac, how to master it and have more fun, and what the pros and cons of playing it are.

      -

      What is Chicken Gun APK 3.2.05?

      -

      Chicken Gun APK 3.2.05 is the latest version of Chicken Gun, a popular action game developed by ChaloApps for Android devices. The game has over 50 million downloads and a 4.4-star rating on Google Play Store.

      -

      The concept and features of the game

      -

      The concept of Chicken Gun is simple but hilarious: armed chickens shoot and fight with each other in two modes: 5 vs 5 teams or against all. You can choose from different chicken characters with different abilities and personalities.

      -

      chicken gun game download apk 3.2.05
      -chicken gun mod apk 3.2.05 unlimited money
      -chicken gun apk 3.2.05 latest version
      -chicken gun apk 3.2.05 free download
      -chicken gun apk 3.2.05 android
      -chicken gun apk 3.2.05 offline
      -chicken gun apk 3.2.05 update
      -chicken gun apk 3.2.05 hack
      -chicken gun apk 3.2.05 online
      -chicken gun apk 3.2.05 for pc
      -chicken gun apk 3.2.05 gameplay
      -chicken gun apk 3.2.05 review
      -chicken gun apk 3.2.05 features
      -chicken gun apk 3.2.05 tips and tricks
      -chicken gun apk 3.2.05 cheats
      -chicken gun apk 3.2.05 no ads
      -chicken gun apk 3.2.05 premium
      -chicken gun apk 3.2.05 pro
      -chicken gun apk 3.2.05 unlocked
      -chicken gun apk 3.2.05 full version
      -chicken gun apk 3.2.05 best weapons
      -chicken gun apk 3.2.05 skins
      -chicken gun apk 3.2.05 maps
      -chicken gun apk 3.2.05 modes
      -chicken gun apk 3.2.05 teams
      -chicken gun apk 3.2.05 eggs
      -chicken gun apk 3.2.05 fun
      -chicken gun apk 3.2.05 action
      -chicken gun apk 3.2.05 shooting
      -chicken gun apk 3.2.05 multiplayer
      -chicken gun apk 3.2.05 co-op
      -chicken gun apk 3.2.05 pvp
      -chicken gun apk 3.2.05 vs all
      -chicken gun apk 3.2.05 rooster
      -chicken gun apk 3.2.05 beak
      -chicken gun apk 3.2.05 sneakers
      -chicken gun apk 3.2.05 caps
      -chicken gun apk 3

      -

      The game also has many features that make it more enjoyable and engaging, such as:

      -
        -
      • You can cool your rooster with various weapons, beaks, sneakers, caps, and other accessories.
      • -
      • You can throw explosive eggs that can cause massive damage to your enemies.
      • -
      • You can chat with other players in the lobby or during the match.
      • -
      • You can join clans or create your own clan with your friends.
      • -
      • You can play on different maps with different themes and obstacles.
      • -
      • You can earn coins by playing matches or watching ads.
      • -
      • You can use coins to buy new items or upgrade your existing ones.
      • -
      -

      The latest version and updates of the game

      -

The latest version of Chicken Gun APK is 3.2.05, which was released on February 11, 2023. This version brings several new features and improvements.

      Some of the new features and improvements of Chicken Gun APK 3.2.05 are:

      -
        -
      • You can now play on a new map called Factory, which has a lot of pipes, crates, and machines to hide behind or use as cover.
      • -
      • You can now use a new weapon called Flamethrower, which can set your enemies on fire and deal continuous damage over time.
      • -
      • You can now buy a new accessory called Jetpack, which can let you fly in the air and dodge bullets or surprise your enemies from above.
      • -
      • You can now see your kill streaks and multi-kills on the screen, which can boost your confidence and motivation.
      • -
      • You can now enjoy better graphics, performance, and stability, as well as bug fixes and optimizations.
      • -
      -

      How to download and install Chicken Gun APK 3.2.05 on your device?

      -

      If you want to play Chicken Gun APK 3.2.05 on your Android device, you have two options: you can either download and install it from Google Play Store or from APKCombo. Here are the steps for both methods:

      -

      The steps to download and install the game from Google Play Store

      -
        -
      1. Open Google Play Store on your device and search for Chicken Gun.
      2. -
      3. Select the game from the search results and tap on Install.
      4. -
      5. Wait for the game to download and install on your device.
      6. -
      7. Once the installation is complete, tap on Open to launch the game.
      8. -
      9. Enjoy playing Chicken Gun APK 3.2.05 on your device.
      10. -
      -

      The steps to download and install the game from APKCombo

      -
        -
      1. Open a web browser on your device and go to APKCombo.com.
      2. -
      3. Search for Chicken Gun in the search bar and select the game from the search results.
      4. -
      5. Tap on Download APK (288 MB) to download the game file on your device.
      6. -
      7. Once the download is complete, locate the file in your device's file manager and tap on it to install it.
      8. -
      9. If you see a warning message that says "For your security, your phone is not allowed to install unknown apps from this source", go to your device's settings and enable the option to allow installation from unknown sources.
      10. -
      11. After enabling the option, go back to the file manager and tap on the file again to install it.
      12. -
      13. Once the installation is complete, tap on Open to launch the game.
      14. -
      15. Enjoy playing Chicken Gun APK 3.2.05 on your device.
      16. -
      -

      How to play Chicken Gun APK 3.2.05 on your PC or Mac?

      -

      If you want to play Chicken Gun APK 3.2.05 on your PC or Mac, you need to use an Android emulator that can run Android apps and games on your computer. One of the best Android emulators that you can use is BlueStacks, which is free, fast, and easy to use. Here are the steps to play Chicken Gun APK 3.2.05 on your PC or Mac using BlueStacks:

      -

      The benefits of playing the game on PC or Mac

      -

      Playing Chicken Gun APK 3.2.05 on your PC or Mac has some advantages over playing it on your mobile device, such as:

      -
        -
      • You can enjoy a bigger screen and better graphics quality.
      • -
      • You can use a keyboard and mouse for more precise and comfortable controls.
      • -
      • You can save battery life and storage space on your mobile device.
      • -
      • You can play with multiple accounts or switch between different devices easily.
      • -
      -

      The steps to play the game on PC or Mac using BlueStacks emulator

      -
        -
      1. Go to BlueStacks.com and download the latest version of BlueStacks for your PC or Mac.
      2. -
      3. Install BlueStacks on your computer by following the instructions on the screen.
      4. -
      5. Launch BlueStacks and sign in with your Google account or create a new one if you don't have one.
      6. -
      7. In BlueStacks, go to Google Play Store and search for Chicken Gun.
      8. -
9. Select the game from the search results and tap on Install.
      11. -
      12. Wait for the game to download and install on BlueStacks.
      13. -
      14. Once the installation is complete, tap on Open to launch the game.
      15. -
      16. Enjoy playing Chicken Gun APK 3.2.05 on your PC or Mac.
      17. -
      -

      How to master Chicken Gun APK 3.2.05 and have more fun?

      -

      If you want to master Chicken Gun APK 3.2.05 and have more fun, you need to improve your skills and strategy in the game. You also need to know the best weapons, accessories, and maps to use in the game. Here are some tips and tricks that can help you:

      -

      The tips and tricks to improve your skills and strategy in the game

      -

      Some of the tips and tricks that can help you improve your skills and strategy in Chicken Gun APK 3.2.05 are:

      -
        -
      • Aim for the head of your enemies to deal more damage and get headshots.
      • -
      • Use the explosive eggs wisely, as they can damage both your enemies and yourself.
      • -
      • Move around and avoid staying in one spot for too long, as you can become an easy target.
      • -
      • Cover behind objects and walls to protect yourself from enemy fire.
      • -
      • Use the jetpack to fly over obstacles and surprise your enemies from above.
      • -
      • Switch between different weapons depending on the situation and your preference.
      • -
      • Work with your teammates and communicate with them using the chat feature.
      • -
      • Join or create a clan to play with other players who share your interests and goals.
      • -
      -

      The best weapons, accessories, and maps to use in the game

      -

      Some of the best weapons, accessories, and maps to use in Chicken Gun APK 3.2.05 are:

      -
| Weapon | Description |
| --- | --- |
| Flamethrower | This weapon can set your enemies on fire and deal continuous damage over time. It is effective at close range and can spread fire to multiple targets. |
| Rocket Launcher | This weapon can launch rockets that explode on impact and deal splash damage to nearby enemies. It is effective at long range and can destroy objects and walls. |
| Sniper Rifle | This weapon can shoot bullets that pierce through enemies and deal high damage. It is effective at long range and can zoom in for better accuracy. |
| Shotgun | This weapon can shoot pellets that spread out and deal moderate damage. It is effective at close range and can hit multiple targets at once. |
| Pistol | This weapon can shoot bullets that deal low damage but have a high fire rate. It is effective at medium range and can be used as a backup weapon. |

| Accessory | Description |
| --- | --- |
| Jetpack | This accessory can let you fly in the air and dodge bullets or surprise your enemies from above. It has a limited fuel capacity that recharges over time. |
| Grenade Belt | This accessory can let you carry more explosive eggs that you can throw at your enemies. It increases your egg capacity by 50%. |
| Bulletproof Vest | This accessory can protect you from enemy fire and reduce the damage you take by 25%. It also increases your health by 25%. |
| Night Vision Goggles | This accessory can help you see better in dark environments and spot hidden enemies. It also increases your accuracy by 10%. |
| Camo Cap | This accessory can help you blend in with the environment and avoid detection by enemies. It also increases your stealth by 10%. |

| Map | Description |
| --- | --- |
| Factory | This map has a lot of pipes, crates, and machines to hide behind or use as cover. It also has some conveyor belts that can move you or your enemies around. |
| Farm | This map has a lot of crops, animals, and barns to explore or use as cover. It also has some tractors and hay bales that can move you or your enemies around. |
| City | This map has a lot of buildings, cars, and streets to navigate or use as cover. It also has some bridges and tunnels that can connect you or your enemies to different areas. |
| Desert | This map has a lot of sand, rocks, and cacti to hide behind or use as cover. It also has some oases and wells that can provide you or your enemies with water. |
| Forest | This map has a lot of trees, bushes, and flowers to blend in with or use as cover. It also has some rivers and lakes that can drown you or your enemies. |
    -

    What are the pros and cons of Chicken Gun APK 3.2.05?

    -

    Chicken Gun APK 3.2.05 is a fun and quirky shooter game that can make you laugh and challenge you at the same time. However, like any game, it also has some pros and cons that you should be aware of before playing it. Here are some of them:

    -

    The advantages of playing the game

    -

    Some of the advantages of playing Chicken Gun APK 3.2.05 are:

    -
      -
    • You can enjoy a hilarious and action-packed gameplay with chickens with guns.
    • -
    • You can customize your chicken with various weapons, beaks, sneakers, caps, and other accessories.
    • -
    • You can throw explosive eggs that can cause massive damage to your enemies.
    • -
    • You can chat with other players in the lobby or during the match.
    • -
    • You can join clans or create your own clan with your friends.
    • -
    • You can play on different maps with different themes and obstacles.
    • -
    • You can earn coins by playing matches or watching ads.
    • -
    • You can use coins to buy new items or upgrade your existing ones.
    • -
    • You can play the game on your mobile device or on your PC or Mac using an emulator.
    • -
    -

    The disadvantages of playing the game

    -

    Some of the disadvantages of playing Chicken Gun APK 3.2.05 are:

    -
      -
    • You may encounter some bugs, glitches, or crashes while playing the game.
    • -
    • You may face some lag or latency issues while playing online matches.
    • -
    • You may find some ads annoying or intrusive while playing the game.
    • -
    • You may need a lot of coins to unlock or upgrade all the items in the game.
    • -
    • You may get addicted to the game and spend too much time or money on it.
    • -
    -

    Conclusion

    -

    Chicken Gun APK 3.2.05 is a hilarious and action-packed shooter game that will make you laugh and challenge you at the same time. You can play as chickens with guns who shoot and fight with each other in various modes and maps. You can customize your chicken with different weapons, beaks, sneakers, caps, and more. You can also throw explosive eggs and cause mayhem in the battlefield.

    -

    If you want to play Chicken Gun APK 3.2.05 on your device, you can download and install it from Google Play Store or from APKCombo. If you want to play it on your PC or Mac, you can use BlueStacks emulator to run it on your computer. If you want to master it and have more fun, you can follow our tips and tricks to improve your skills and strategy in the game. You can also check out our table of the best weapons, accessories, and maps to use in the game.

    -

    However, before you play Chicken Gun APK 3.2.05, you should also be aware of its pros and cons. The game has many advantages that make it enjoyable and engaging, but it also has some disadvantages that may affect your experience or satisfaction. You should weigh them carefully before deciding whether to play the game or not.

    -

    FAQs

    -

    Here are some frequently asked questions about Chicken Gun APK 3.2.05:

    -

    Q: Is Chicken Gun APK 3.2.05 free to play?

    -

    A: Yes, Chicken Gun APK 3.2.05 is free to play on Android devices. However, it contains some in-app purchases that can enhance your gameplay or remove ads.

    -

    Q: Is Chicken Gun APK 3.2.05 safe to download and install?

    -


    A: Yes, Chicken Gun APK 3.2.05 is safe to download and install on your device. The game has been verified by Google Play Protect and APKCombo, which are trusted sources for Android apps and games. However, you should always be careful when downloading and installing apps from unknown sources, as they may contain malware or viruses.

    -

    Q: How can I get more coins in Chicken Gun APK 3.2.05?

    -

    A: You can get more coins in Chicken Gun APK 3.2.05 by playing matches or watching ads. You can also buy coins with real money through in-app purchases.

    -

    Q: How can I contact the developer of Chicken Gun APK 3.2.05?

    -

    A: You can contact the developer of Chicken Gun APK 3.2.05 by sending an email to chaloapps@gmail.com or by visiting their Facebook page.

    -

    Q: What are the minimum requirements to play Chicken Gun APK 3.2.05?

    -

    A: The minimum requirements to play Chicken Gun APK 3.2.05 are:

    -
      -
    • An Android device with Android 5.0 or higher.
    • -
    • At least 288 MB of free storage space on your device.
    • -
    • A stable internet connection.
    • -
    -

    Q: Can I play Chicken Gun APK 3.2.05 offline?

    -

    A: No, you cannot play Chicken Gun APK 3.2.05 offline, as the game requires an internet connection to connect to the servers and other players.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Reward FF Garena APK and Enjoy Free Fire Rewards Redemption.md b/spaces/congsaPfin/Manga-OCR/logs/Download Reward FF Garena APK and Enjoy Free Fire Rewards Redemption.md deleted file mode 100644 index dae6914d9547fc98e071b032d3e345544e8f1f41..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Reward FF Garena APK and Enjoy Free Fire Rewards Redemption.md +++ /dev/null @@ -1,132 +0,0 @@ -
    -

    Reward FF Garena APK Download: How to Get Free Rewards in Free Fire

    -

    If you are a fan of Free Fire, the world-famous survival shooter game, you might be interested in getting some free rewards like diamonds, characters, skins, and more. One way to do that is by using Reward FF Garena APK, an app that allows you to redeem codes for various items. In this article, we will tell you everything you need to know about Reward FF Garena APK, how to download and install it, how to use it, and how to get more rewards in Free Fire.

    -

    What is Free Fire?

    -

    A world-famous survival shooter game

    -

    Free Fire is a mobile game that has over 1 billion downloads on Google Play Store. It is a survival shooter game where you are placed on a remote island with 49 other players, and you have to fight for your survival. You can choose your starting point with your parachute, explore the map, loot weapons and items, hide or ambush your enemies, and try to be the last one standing. You can also drive vehicles, use pets, and customize your character with different outfits and accessories.

    -

    reward ff garena apk download


    Download File > https://urlca.com/2uO8Jz



    -

    Different game modes and features

    -

    Free Fire offers a variety of game modes for different preferences and play styles. You can play solo, duo, or squad mode in Battle Royale, where you have to survive against other players in a shrinking safe zone. You can also play Clash Squad, a fast-paced 4v4 mode where you have to manage your economy and defeat the enemy team. There are also other modes like Bomb Squad, Gun King, Rampage, etc. that offer different challenges and fun. Free Fire also has many features like Firelink technology that lets you play with all Free Fire players across devices, realistic graphics and animations, in-game voice chat, and more.

    -

    What is Reward FF Garena APK?

    -

    An app that allows users to redeem codes for free rewards

    -

    Reward FF Garena APK is an app that lets you redeem codes for free rewards in Free Fire. These codes are usually given away by Garena, the developer of Free Fire, during live or online events such as live streams, tournaments, collaborations, etc. The rewards vary depending on the code, but they can include diamonds, golds, characters, skins, emotes, vouchers, etc. The codes have 12 or 16 characters consisting of capital letters and numbers. They also have an expiration date, so you have to redeem them before they expire.
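Because the codes follow that fixed format (12 or 16 characters, capital letters and digits only), you can do a quick sanity check on a code you copied before typing it into the app. The small Python sketch below only illustrates that format rule; the sample strings in it are made up and will not redeem anything.

```python
import re

# 12 or 16 characters, capital letters and digits only, per the format described above.
CODE_PATTERN = re.compile(r"[A-Z0-9]{12}(?:[A-Z0-9]{4})?")


def looks_like_redeem_code(code: str) -> bool:
    """Return True if the string has the 12- or 16-character code shape."""
    return CODE_PATTERN.fullmatch(code.strip()) is not None


# Made-up sample strings, only to show how the check behaves.
for sample in ["ABCD1234EFGH", "ABCD1234EFGH5678", "abcd1234efgh", "TOO-SHORT"]:
    print(sample, "->", looks_like_redeem_code(sample))
```

A code that passes this check can still be expired or already used, so it only catches obvious copy-paste mistakes.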

    -

    How to download and install the app

    -

    To download and install Reward FF Garena APK on your Android device, you need to follow these steps:

    -
      -
1. Go to [this link] and download the APK file.
    2. -
    3. Enable unknown sources on your device settings if you haven't done so already.
    4. -
    5. Locate the downloaded file on your device and tap on it.
    6. -
    7. Follow the instructions on the screen to install the app.
    8. -
    9. Launch the app and log in with your Free Fire account. You can use Facebook or VK as your login method.
    10. -
    -

    How to use Reward FF Garena APK?

    -

    How to find and enter redemption codes

    -

    To use Reward FF Garena APK to redeem codes for free rewards in Free Fire, you need to follow these steps:

    -
      -
    1. Find a valid redemption code for Free Fire. You can find them on the official social media accounts of Free Fire, such as Facebook, Instagram, Twitter, YouTube, etc. You can also check out some websites or blogs that share the latest codes, such as [this one].
    2. -
    3. Open the Reward FF Garena APK app and tap on the "Redeem" button.
    4. -
    5. Enter the code in the text box and tap on the "Confirm" button.
    6. -
    7. Wait for a few seconds and you will see a message that says "Redeemed successfully".
    8. -
    9. Open your Free Fire game and check your in-game mail. You will find your rewards there.
    10. -
    -

    What kind of rewards can you get

    -

    The rewards that you can get from redeeming codes using Reward FF Garena APK vary depending on the code. Some of the common rewards are:

    -
      -
    • Diamonds: The premium currency of Free Fire that can be used to buy various items and features.
    • -
    • Golds: The basic currency of Free Fire that can be used to buy some items and features.
    • -
    • Characters: The playable characters in Free Fire that have different skills and abilities.
    • -
    • Skins: The cosmetic items that can change the appearance of your characters, weapons, vehicles, pets, etc.
    • -
    • Emotes: The gestures and expressions that your characters can perform in the game.
    • -
    • Vouchers: The coupons that can be used to get discounts or free spins on some features like Gold Royale and Diamond Royale.
    • -
    -

    Tips and tricks for getting more rewards in Free Fire

    -

    Follow official social media accounts and live streams

    -

    One of the best ways to get more rewards in Free Fire is to follow the official social media accounts of Free Fire, such as Facebook, Instagram, Twitter, YouTube, etc. These accounts often post updates, news, events, and giveaways that can give you free rewards. You can also watch the live streams of Free Fire on platforms like YouTube, Facebook Gaming, Booyah, etc. These live streams often feature redemption codes, quizzes, lucky draws, and other activities that can give you free rewards.

    -

    Participate in events and challenges

    -

    Another way to get more rewards in Free Fire is to participate in events and challenges that are regularly held in the game. These events and challenges can be found on the main menu or the calendar icon of the game. They usually have different themes, durations, and requirements. Some examples of events and challenges are:

    -
      -
    • New Year Event: An event that celebrates the new year with various missions, rewards, and features.
    • -
    • Rampage Event: An event that features a new game mode where you can transform into powerful beasts with special abilities.
    • -
    • Elite Pass: A monthly pass that gives you access to exclusive rewards by completing daily and weekly missions.
    • -
    • Daily Login: A simple challenge that gives you free rewards by logging in to the game every day.
    • -
    -

    Use in-game features like Gold Royale and Diamond Royale

    -

    A third way to get more rewards in Free Fire is to use some of the in-game features that offer you a chance to win various items. Some of these features are:

    -


    -
      -
    • Gold Royale: A feature that lets you spin a wheel using golds to win random items like skins, vouchers, etc.
    • -
    • Diamond Royale: A feature that lets you spin a wheel using diamonds to win random items like skins, characters, emotes, etc.
    • -
    • Luck Royale: A feature that lets you spin different wheels using tickets or diamonds to win random items like skins, characters, emotes, etc.
    • -
    • Mystery Shop: A feature that lets you buy items with discounts up to 90% using diamonds.
    • -
    -

    Conclusion

    -

    In conclusion, Reward FF Garena APK is an app that allows you to redeem codes for free rewards in Free Fire. You can download and install the app on your Android device and use it to enter valid redemption codes. You can also get more rewards in Free Fire by following official social media accounts and live streams, participating in events and challenges, and using in-game features like Gold Royale and Diamond Royale. We hope this article has helped you learn more about Reward FF Garena APK and how to get free rewards in Free Fire. Happy gaming!

    -

    FAQs

    -

    Here are some frequently asked questions about Reward FF Garena APK:

    -

    Q: Is Reward FF Garena APK safe to use?

    -

    A: Reward FF Garena APK is safe to use as long as you download it from a trusted source and follow the instructions carefully. However, you should always be careful when using third-party apps that are not affiliated with Garena or Free Fire. You should also avoid using any hacks, cheats, or mods that can harm your device or account.

    -

    Q: How often are new codes released for Reward FF Garena APK?

    -

    A: There is no fixed schedule for the release of new codes for Reward FF Garena APK. The codes are usually given away by Garena during special occasions, events, or promotions. You should always keep an eye on the official social media accounts and live streams of Free Fire to get the latest codes as soon as possible.

    -

    Q: Can I use the same code more than once?

    -

    A: No, you cannot use the same code more than once. Each code can only be redeemed by one account and one device. If you try to use a code that has already been used or expired, you will get an error message.

    -

    Q: What should I do if I encounter a problem with Reward FF Garena APK?

    -

    A: If you encounter a problem with Reward FF Garena APK, such as the app not working, the code not being accepted, the reward not being delivered, etc., you should try the following steps:

    -
      -
    • Check your internet connection and make sure it is stable and fast.
    • -
    • Check your device storage and make sure it has enough space for the app and the game.
    • -
    • Check your Free Fire account and make sure it is linked to Facebook or VK.
    • -
    • Check the code and make sure it is valid, not expired, and entered correctly.
    • -
    • Restart the app and the game and try again.
    • -
    • Contact the customer service of Free Fire or Reward FF Garena APK if the problem persists.
    • -
    -

    Q: Can I share Reward FF Garena APK with my friends?

    -

    A: Yes, you can share Reward FF Garena APK with your friends who also play Free Fire. However, you should only share it from a trusted source and not from any unknown or suspicious links. You should also respect the terms and conditions of Free Fire and Reward FF Garena APK and not abuse or exploit the app for unfair advantages.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Feng Shui Uzmanndan Evinize ve Hayatnza Bolluk Getirecek 7 pucu.md b/spaces/congsaPfin/Manga-OCR/logs/Feng Shui Uzmanndan Evinize ve Hayatnza Bolluk Getirecek 7 pucu.md deleted file mode 100644 index 89e90e01fe325e81fcea4749173408bda7fbe0cc..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Feng Shui Uzmanndan Evinize ve Hayatnza Bolluk Getirecek 7 pucu.md +++ /dev/null @@ -1,148 +0,0 @@ -
    -

    Feng Shui Rules: Ways to Organize Your Home and Your Life

    -

    Feng shui is an ancient Chinese art that helps us live in harmony with the energy of the spaces we inhabit. Connected to Taoism, it studies the cycles of nature and how they work together in balance. Feng shui offers tips and rules for making our homes and lives more peaceful, healthy, and happy. In this article, you will learn what feng shui is, its basic principles, and the feng shui rules you can apply to every room of your home.

    -

    What Is Feng Shui?

    -

    Feng shui means wind and water. It works with the energy of our spaces, such as the front door, which acts as a gateway through which qi, the life-force energy, enters our home and our life. That is why it is important to bring awareness to our spaces. If you are ready to add better flow and vitality to your daily life, you can make room-by-room adjustments using the feng shui suggestions we have compiled for you below.

    -

    feng shui rules


    Download Zip ✦✦✦ https://urlca.com/2uOeEt



    -

    The Basic Principles of Feng Shui

    -

    Feng shui uses the five-element system. This system studies the cycles of nature and how they work together in balance. The five elements are earth, metal, water, wood, and fire. Each element is associated with particular qualities, colors, and shapes that we may want to cultivate in our lives. By using these elements appropriately when decorating our home, we can balance the energy of our spaces.

    -

    The Five Elements

    -
      -
    • The earth element relates to self-care, boundaries, and nourishment. Earth colors such as yellow, orange, and brown, square shapes, and heavy objects represent this element. To add earth to your home, you might choose a square yellow rug or a solid rectangular table.
    • The metal element relates to joy, clarity, and communication. Metal colors such as white, gray, and silver, round shapes, and shiny objects represent this element. To add metal to your home, you might choose a round white vase or a shiny silver mirror.
    • The water element relates to intuition, fluidity, and emotion. Water colors such as blue, black, and purple, wavy shapes, and liquids represent this element. To add water to your home, you might choose a wavy blue curtain or a bowl filled with water.
    • The wood element relates to growth, creativity, and abundance. Wood colors such as green, teal, and cream, rectangular shapes, and plant life represent this element. To add wood to your home, you might choose a rectangular green cushion or living plants.
    • The fire element relates to passion, courage, and inspiration. Fire colors such as red, pink, and gold, triangular shapes, and sources of light represent this element. To add fire to your home, you might choose a triangular red candle or a chandelier with golden light.
    -

    The Commanding Position

    -

    The commanding position is where the most important piece of furniture in each room should be placed. This is usually the bed, the desk, or the sofa. The commanding position is against the wall opposite the door as you enter the room. Also make sure the door stays within your line of sight, so that you can keep track of any energy entering the room.

    -

    The Bagua Map

    -

    The bagua map is a diagram in which each area of your home corresponds to a different aspect of your life. By laying the bagua map over your floor plan, you can identify which areas need improvement. The bagua map has nine areas: fame and reputation, relationships and love, creativity and children, helpful people and travel, career and life purpose, wisdom and knowledge, family and health, wealth and abundance, and center and balance.

    -

    Feng Shui Rules for Every Room in Your Home

    -

    Every room in your home has a different energy, so different feng shui rules apply to each one. Below you will find feng shui rules for every room of your home.

    -

    The Entryway

    -

    The entryway creates the first impression that sets the energy of your home, so it is important to keep it clean, bright, and inviting. Pay attention to the following in your entryway:

    -


    -
      -
    • Avoid items that block the entryway. Store your shoes neatly or move them elsewhere.
    • Light up the entryway. Open the curtains to let in natural light or use artificial lighting.
    • Add color to the entryway. Liven it up with vivid colors or pictures. To raise its energy, you can use fire-element colors such as red, orange, or yellow.
    • Add a mirror to the entryway. A mirror makes the space look larger and airier, and it reflects and distributes the qi entering your home. However, do not place it directly opposite the door, because that sends the qi straight back out.
    • Add a plant to the entryway. A plant brings liveliness and freshness, and through the wood element it provides the energy of abundance and growth. Just make sure the plant is healthy and well cared for.
    -

    The Living Room

    -

    The living room is the social space of your home, where you spend time with your family and guests, so it is important to make it comfortable, warm, and harmonious. Pay attention to the following in your living room:

    -
      -
    • Declutter the living room. Get rid of unnecessary items and keep only what you need. Arrange your belongings neatly and dust regularly.
    • Light up the living room. Open the curtains for natural light or use artificial lighting. Using light sources at different levels lets you change the room's atmosphere.
    • Add color to the living room. For calm and harmony, use pastel earth-element tones such as yellow, orange, and brown; for liveliness and joy, use bright fire-element tones such as red, pink, and gold.
    • Add a focal point. Create a focal point you can center the room around, such as a painting, a sculpture, a fireplace, or an aquarium. Make sure it reflects the energy of your living room.
    • Add a plant. A plant brings liveliness and freshness, and through the wood element it provides the energy of abundance and growth. Just make sure the plant is healthy and well cared for.
    -

    The Dining Room

    -

    The dining room is the nourishment area of your home, where you eat with your family and guests, so it is important to make it appetizing, enjoyable, and abundant. Pay attention to the following in your dining room:

    -
      -
    • Declutter the dining room. Get rid of unnecessary items and keep only what you need. Arrange your belongings neatly and dust regularly.
    • Light up the dining room. Open the curtains for natural light or use artificial lighting. Using light sources at different levels lets you change the room's atmosphere.
    • Add color to the dining room. To stimulate appetite and enjoyment, use bright fire-element tones such as red, pink, and gold; for calm and harmony, use pastel earth-element tones such as yellow, orange, and brown.
    • Add a tablecloth. A tablecloth can change the look of the dining room, and it protects the energy of your dining table and keeps it clean. Choose a color and pattern that suit the style of the room.
    • Add flowers. Flowers bring liveliness and freshness, and through the wood element they provide the energy of abundance and growth. Just make sure they are healthy and well cared for.
    -

    The Kitchen

    -

    The kitchen is the nourishment area of your home, where you cook, prepare, and store food, so it is important to make it clean, airy, and functional. Pay attention to the following in your kitchen:

    -
      -
    • Declutter the kitchen. Get rid of unnecessary items and keep only what you need. Arrange your belongings neatly and dust regularly.
    • Light up the kitchen. Open the curtains for natural light or use artificial lighting. Using light sources at different levels lets you change the room's atmosphere.
    • Add color to the kitchen. To stimulate appetite and enjoyment, use bright fire-element tones such as red, pink, and gold; for calm and harmony, use pastel earth-element tones such as yellow, orange, and brown.
    • Add a plant. A plant brings liveliness and freshness, and through the wood element it provides the energy of abundance and growth. Just make sure the plant is healthy and well cared for.
    • Add a spice rack. A spice rack brings flavor and variety, and through the metal element it provides the energy of joy and communication. Just make sure your spices are fresh and of good quality.
    -

    The Family Room

    -

    The family room is the relaxation area of your home, where you read, watch movies, listen to music, or enjoy your hobbies, so it is important to make it comfortable, soothing, and engaging. Pay attention to the following in your family room:

    -
      -
    • Declutter the family room. Get rid of unnecessary items and keep only what you need. Arrange your belongings neatly and dust regularly.
    • Light up the family room. Open the curtains for natural light or use artificial lighting. Using light sources at different levels lets you change the room's atmosphere.
    • Add color to the family room. For rest and peace, use cool water-element tones such as blue, black, and purple; for inspiration and passion, use warm fire-element tones such as red, pink, and gold.
    • Add a sofa. The sofa is the most important piece of furniture in this room. Place it in the commanding position and make sure the door stays in your line of sight. Choose a color and texture that suit the style of the room.
    • Add a bookcase. A bookcase brings wisdom and knowledge, and through the metal element it provides the energy of joy and communication. Keep the books neatly arranged and dust them regularly.
    • Add a music system. A music system brings fluidity and emotion, and through the water element it provides the energy of intuition and flow. Keep the volume at a reasonable level so you do not disturb your neighbors.
    -

    The Bedroom

    -

    The bedroom is the romantic area of your home, where you sleep, make love, and rest, so it is important to make it calm, romantic, and private. Pay attention to the following in your bedroom:

    -
      -
    • Declutter the bedroom. Get rid of unnecessary items and keep only what you need. Arrange your belongings neatly and dust regularly.
    • Light up the bedroom. Open the curtains for natural light or use artificial lighting. Using light sources at different levels lets you change the room's atmosphere.
    • Add color to the bedroom. For romance and love, use warm fire-element tones such as red, pink, and gold; for calm and peace, use pastel earth-element tones such as yellow, orange, and brown.
    • Add a bed. The bed is the most important piece of furniture in the bedroom. Place it in the commanding position and make sure the door stays in your line of sight. Choose a color and texture that suit the style of the room.
    • Add a bedding set. A bedding set can change the look of the bedroom, and it protects the energy of your bed and keeps it clean. Choose a color and pattern that suit the style of the room.
    • Add flowers. Flowers bring liveliness and freshness, and through the wood element they provide the energy of abundance and growth. Just make sure they are healthy and well cared for.
    -

    The Bathroom

    -

    The bathroom is the cleansing area of your home, where you shower, brush your teeth, and care for yourself, so it is important to make it clean, airy, and relaxing. Pay attention to the following in your bathroom:

    -
      -
    • Declutter the bathroom. Get rid of unnecessary items and keep only what you need. Arrange your belongings neatly and dust regularly.
    • Light up the bathroom. Open the curtains for natural light or use artificial lighting. Using light sources at different levels lets you change the room's atmosphere.
    • Add color to the bathroom. For cleanliness and freshness, use cool metal-element tones such as white, gray, and silver; for relaxation and intuition, use cool water-element tones such as blue, black, and purple.
    • Add a shower curtain. A shower curtain can change the look of the bathroom, and it protects the energy of your shower and keeps it clean. Choose a color and pattern that suit the style of the room.
    • Add a plant. A plant brings liveliness and freshness, and through the wood element it provides the energy of abundance and growth. Just make sure the plant is healthy and well cared for.
    -

    Conclusion

    -

    Feng shui offers tips and rules for making our homes and lives more peaceful, healthy, and happy. Once you have learned the basic principles of feng shui, you can apply its rules to each room of your home. That way you can balance the energy in your home, attract the life-force qi, and improve different aspects of your life.

    -

    Frequently Asked Questions

    -
      -
    • How is feng shui pronounced?
      Feng shui is pronounced "fung shway."
    • How can you learn feng shui?
      You can learn feng shui from different resources such as books, videos, courses, or consultants.
    • How much money do you need to spend to apply feng shui?
      You do not need to spend much. You can apply feng shui by rearranging the items you already own or by making small changes.
    • Which culture does feng shui come from?
      Feng shui is an ancient Chinese art. Connected to Taoism, it studies the cycles of nature and how they work together in balance.
    • Does feng shui apply only to the home?
      No, feng shui does not apply only to the home. It can also be applied to your workplace, your car, your garden, or any other space.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Get MiniStrike 3.5 Mod APK - The Ultimate Shooter Experience.md b/spaces/congsaPfin/Manga-OCR/logs/Get MiniStrike 3.5 Mod APK - The Ultimate Shooter Experience.md deleted file mode 100644 index 76a3397ed0079b6b88207bdf4d657f50f2b844bb..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Get MiniStrike 3.5 Mod APK - The Ultimate Shooter Experience.md +++ /dev/null @@ -1,109 +0,0 @@ -
    -

    MiniStrike 3.5 APK: A Funny Tribute to Counter-Strike for Android

    -

    If you are a fan of the classic first-person shooter game Counter-Strike, you might want to check out MiniStrike, a funny tribute to the game for Android devices. MiniStrike is a multiplayer online game that lets you play with up to 16 players in different modes and maps. You can choose from various weapons, skins, and accessories to customize your character and show off your skills. In this article, we will tell you what MiniStrike is, how to download and install it, why you should play it, and some tips and tricks for playing it.

    -

    What is MiniStrike?

    -

    MiniStrike is a game developed by Malo The Toad, an independent game developer from France. It is inspired by Counter-Strike, one of the most popular and influential games of all time. MiniStrike aims to recreate the gameplay and atmosphere of Counter-Strike in a simplified and humorous way. You can play as either a terrorist or a counter-terrorist, and try to complete objectives such as planting or defusing bombs, rescuing hostages, or eliminating the enemy team. You can also play in deathmatch mode, where the only goal is to kill as many opponents as possible.

    -

    ministrike 3.5 apk


    Download Zip ––– https://urlca.com/2uO7V5



    -

    Features of MiniStrike

    -

    Some of the features of MiniStrike are:

    -
      -
    • It is free to play and does not require any registration or account.
    • -
    • It has low system requirements and can run on most Android devices.
    • -
    • It has simple and intuitive controls that are easy to learn and use.
    • -
    • It has colorful and cartoonish graphics that give it a unique charm.
    • -
    • It has a variety of weapons, skins, and accessories that you can unlock and use.
    • -
    • It has multiple game modes and maps that offer different challenges and experiences.
    • -
    • It has online multiplayer support that lets you play with up to 16 players from around the world.
    • -
    • It has a chat system that lets you communicate with your teammates and opponents.
    • -
    -

    How to download and install MiniStrike 3.5 APK

    -

    To download and install MiniStrike 3.5 APK on your Android device, follow these steps:

    -


    -
      -
    1. Go to https://apkpure.com/ministrike/com.malothetoad.ministrike/download/36-XAPK and click on the "Download APK" button.
    -
    2. Wait for the download to finish and then open the file. (If you want to check that the file arrived intact, see the checksum sketch after this list.)
    -
    3. If prompted, enable the installation of apps from unknown sources in your device settings.
    -
    4. Follow the instructions on the screen to install the app.
    -
    5. Launch the app and enjoy playing MiniStrike.
    -
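    Since it is a good habit to download only from trusted sources, one simple sanity check is to compare the downloaded file's SHA-256 hash against a checksum published by the site you downloaded from, when one is available. The Python sketch below computes the hash; the file name and the expected value are placeholders for illustration, not the real checksum of MiniStrike 3.5.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of("ministrike-3.5.xapk")              # placeholder file name
    expected = "replace-with-the-published-checksum-value"  # placeholder value
    print(actual)
    print("match" if actual == expected else "mismatch - do not install")
```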

    Why play MiniStrike?

    -

    MiniStrike is a fun and addictive game that will appeal to both casual and hardcore gamers. Here are some of the reasons why you should play it:

    -

    Pros of MiniStrike

    -
      -
    • It is a great way to kill some time and have some fun with your friends or strangers online.
    • -
    • It is a good way to practice your reflexes, strategy, and teamwork skills.
    • -
    • It is a homage to Counter-Strike that will make you nostalgic and appreciate the original game more.
    • -
    • It is constantly updated with new features, improvements, and bug fixes.
    • -
    -

    Cons of MiniStrike

    -
      -
    • It can be frustrating and unfair at times due to lag, hackers, or unbalanced teams.
    • -
    • It can be repetitive and boring after a while if you play the same mode and map over and over again.
    • -
    • It can be hard to find a suitable server or match depending on your region and time zone.
    • -
    • It can be annoying and distracting to deal with toxic or spamming players in the chat.
    • -
    -

    Tips and tricks for playing MiniStrike

    -

    If you want to improve your performance and enjoyment of MiniStrike, here are some tips and tricks that you can use:

    -

    Choose your weapon wisely

    -

    MiniStrike has a wide range of weapons that you can choose from, each with its own advantages and disadvantages. You should pick a weapon that suits your playstyle, preference, and situation. For example, if you prefer to play aggressively and rush into the enemy territory, you might want to use a shotgun or a submachine gun that have high fire rate and damage. If you prefer to play defensively and snipe from a distance, you might want to use a rifle or a sniper that have high accuracy and range. You should also consider the cost, recoil, and reload time of each weapon.

    -

    Use the map to your advantage

    -

    MiniStrike has various maps that have different layouts, features, and secrets. You should familiarize yourself with the map that you are playing on and use it to your advantage. For example, you should know where the bomb sites, hostages, weapons, health packs, and ammo are located. You should also know where the best spots, hiding places, shortcuts, and ambush points are. You should also pay attention to the sound cues and visual indicators that can help you locate your enemies or allies.

    -

    Communicate with your teammates

    -

    MiniStrike is a team-based game that requires coordination and cooperation among your teammates. You should communicate with your teammates using the chat system or voice chat if available. You should share information, plans, strategies, and warnings with your teammates. You should also support, assist, and cover your teammates when needed. You should also respect, encourage, and compliment your teammates when appropriate. You should avoid flaming, blaming, or insulting your teammates when things go wrong.

    -

    Conclusion

    -

    MiniStrike is a fun and funny tribute to Counter-Strike that you can play on your Android device. It has simple and colorful graphics, easy and intuitive controls, various weapons, skins, and accessories, multiple game modes and maps, online multiplayer support, and chat system. It is free to play and does not require any registration or account. It is a great way to kill some time and have some fun with your friends or strangers online. It is also a good way to practice your reflexes, strategy, and teamwork skills. It is also a homage to Counter-Strike that will make you nostalgic and appreciate the original game more.

    -

    FAQs

    -

    Here are some frequently asked questions about MiniStrike:

    -
      -
    • Q: Is MiniStrike safe to download and install?
    • -
    • A: Yes, MiniStrike is safe to download and install from the official source or trusted third-party websites. It does not contain any viruses, malware, or spyware that can harm your device or data.
    • -
    • Q: Is MiniStrike compatible with my device?
    • -
    • A: MiniStrike is compatible with most Android devices that have Android 4.1 or higher. However, some devices may experience performance issues or crashes due to low memory or storage space.
    • -
    • Q: How can I update MiniStrike?
    • -
    • A: You can update MiniStrike by downloading and installing the latest version from the official source or trusted third-party websites. You can also enable the auto-update feature in your device settings to get notified when a new version is available.
    • -
    • Q: How can I report a bug or a problem in MiniStrike?
    • -
    • A: You can report a bug or a problem in MiniStrike by contacting the developer via email at malothetoad@gmail.com or via Facebook at https://www.facebook.com/ministrikegame/. You can also leave a comment or a review on the app store or website where you downloaded the game.
    • -
    • Q: How can I support the development of MiniStrike?
    • -
    • A: You can support the development of MiniStrike by rating and reviewing the game on the app store or website where you downloaded it. You can also share the game with your friends and family via social media or word of mouth. You can also donate to the developer via PayPal at https://www.paypal.me/malothetoad/.
    • -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Sniper 3DGun Shooting Games - Experience the thrill of sniping on PC.md b/spaces/congsaPfin/Manga-OCR/logs/Sniper 3DGun Shooting Games - Experience the thrill of sniping on PC.md deleted file mode 100644 index 78fb6f51ba54a2573c94c467380661b2fa989f19..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Sniper 3DGun Shooting Games - Experience the thrill of sniping on PC.md +++ /dev/null @@ -1,142 +0,0 @@ - -

    Sniper 3D Game Download for Computer: How to Play and Enjoy this Amazing Shooting Game

    -

    If you are a fan of shooting games, you might have heard of Sniper 3D, one of the most popular and addictive sniper games on mobile devices. But did you know that you can also play this game on your computer? In this article, we will show you how to download and install Sniper 3D game on your PC, as well as how to play and enjoy this amazing shooting game with better graphics, performance, and controls.

    -

    sniper 3d game download for computer


    Download Filehttps://urlca.com/2uObIG



    -

    Introduction

    -

    Sniper 3D is an action game developed by Fun Games For Free, a studio that also created other hit games like Block Craft 3D, Castle Crush, and Colorfy. In Sniper 3D, you play as a contract assassin who has to complete various missions using your sniper skills. You can choose from hundreds of weapons, customize them, and upgrade them as you progress. You can also explore different locations and scenarios, from urban cities to tropical islands, from helicopters to humvees. The game has realistic graphics, easy-to-use controls, and immersive sound effects that will make you feel like a real sniper.

    -

    What is Sniper 3D Game?

    -

    Sniper 3D is a game that combines elements of shooting, simulation, and strategy. You have to use your aim, precision, and timing to take down your targets in one shot. You also have to plan your moves carefully, as some missions require stealth, speed, or accuracy. You can choose from over 850 thrilling missions and 13 different worlds as you dive into this addictive game. You can also challenge other players online in PvP mode or join a clan and compete with other clans for rewards and glory.

    -

    Why play Sniper 3D Game on PC?

    -

    While Sniper 3D is a great game to play on your mobile device, playing it on your PC can give you some advantages. Here are some reasons why you should play Sniper 3D game on your computer:

    -
      -
    • You can enjoy better graphics and sound quality on a bigger screen.
    • -
    • You can use your mouse and keyboard or a gamepad for more precise and comfortable controls.
    • -
    • You can avoid battery drain, overheating, or lag issues that might affect your mobile device.
    • -
    • You can access more features and enhancements that are available only on PC platforms.
    • -
    -

    How to Download and Install Sniper 3D Game on PC

    -

    There are two main methods that you can use to download and install Sniper 3D game on your PC. The first one is using an Android emulator, which is a software that allows you to run Android apps on your computer. The second one is using a gaming platform, which is a service that offers a variety of games for PC users. We will explain both methods in detail below.

    -

    Method 1: Using BlueStacks Emulator

    -

    BlueStacks is one of the most popular and trusted Android emulators that you can use to play Sniper 3D game on your PC. It has over 500 million users and supports thousands of Android games and apps. It also has some features and enhancements that can improve your gaming experience, such as shooting mode, high FPS, script, free look, and more. Here are the steps to download and install Sniper 3D game on your PC using BlueStacks emulator:

    -

    Step 1: Download and install BlueStacks on your PC

    -

    You can download BlueStacks from its official website . The installation process is simple and straightforward. Just follow the instructions on the screen and wait for the installation to finish.

    -

    Step 2: Complete Google sign-in to access the Play Store, or do it later

    -

    After installing BlueStacks, you need to sign in with your Google account to access the Google Play Store. You can do this by clicking on the Google icon on the home screen of BlueStacks. If you don't have a Google account, you can create one for free. You can also skip this step and do it later.

    -

    Step 3: Look for Sniper 3D:Gun Shooting Games in the search bar at the top right corner

    -

    Once you have signed in with your Google account, you can search for Sniper 3D game in the Play Store. You can do this by typing "Sniper 3D:Gun Shooting Games" in the search bar at the top right corner of the BlueStacks home screen. You will see a list of results that match your query.

    -

    Step 4: Click to install Sniper 3D:Gun Shooting Games from the search results

    -

    From the list of results, look for the game that has the same name and icon as shown below:

    -


    -Sniper 3D game icon -

    Click on the game to open its page in the Play Store. Then, click on the green Install button to start downloading and installing the game on your PC.

    -

    Step 5: Complete Google sign-in (if you skipped step 2) to install Sniper 3D:Gun Shooting Games

    -

    If you skipped step 2 earlier, you will need to complete Google sign-in before you can install Sniper 3D game on your PC. You will see a pop-up window asking you to sign in with your Google account. Just follow the instructions on the screen and enter your Google credentials.

    -

    Step 6: Click the Sniper 3D:Gun Shooting Games icon on the home screen to start playing

    -

    After installing Sniper 3D game on your PC, you can launch it by clicking on its icon on the home screen of BlueStacks. You will see a brief tutorial on how to play the game using your mouse and keyboard or a gamepad. You can also customize your controls by clicking on the keyboard icon at the bottom right corner of the screen. Enjoy playing Sniper 3D game on your PC with BlueStacks!

    -

    Method 2: Using Steam Platform

    -

    Steam is another option that you can use to play Sniper 3D game on your PC. Steam is a gaming platform that offers a variety of games for PC users, including some free-to-play titles like Sniper 3D Assassin: Free to Play. This is a different version of Sniper 3D game that has some differences from the mobile version, such as graphics, gameplay, and content. However, it still offers a fun and exciting shooting experience that you can enjoy on your computer. Here are the steps to download and install Sniper 3D Assassin: Free to Play on your PC using Steam platform:

    -

    Step 1 : Download and install Steam on your PC

    -

    You can download Steam from its official website . The installation process is simple and straightforward. Just follow the instructions on the screen and wait for the installation to finish.

    -

    Step 2: Create a Steam account or sign in with your existing one

    -

    After installing Steam, you need to create a Steam account or sign in with your existing one to access the Steam store. You can do this by clicking on the Steam icon on your desktop or taskbar. If you don't have a Steam account, you can create one for free. You can also skip this step and do it later.

    -

    Step 3: Search for Sniper 3D Assassin: Free to Play in the Steam store

    -

    Once you have signed in with your Steam account, you can search for Sniper 3D Assassin: Free to Play in the Steam store. You can do this by typing "Sniper 3D Assassin: Free to Play" in the search bar at the top of the Steam window. You will see a list of results that match your query.

    -

    Step 4: Click on the Play Game button to download and install the game for free

    -

    From the list of results, look for the game that has the same name and icon as shown below:

    -Sniper 3D Assassin: Free to Play game icon -

    Click on the game to open its page in the Steam store. Then, click on the green Play Game button to download and install the game for free on your PC.

    -

    Step 5: Launch the game from your Steam library and enjoy

    -

    After downloading and installing Sniper 3D Assassin: Free to Play on your PC, you can launch it by clicking on its icon in your Steam library. You will see a brief tutorial on how to play the game using your mouse and keyboard or a gamepad. You can also customize your settings by clicking on the gear icon at the top right corner of the screen. Enjoy playing Sniper 3D Assassin: Free to Play on your PC with Steam!

    -

    How to Play and Enjoy Sniper 3D Game on PC

    -

    Now that you have downloaded and installed Sniper 3D game on your PC, you might be wondering how to play and enjoy this amazing shooting game. In this section, we will give you some tips and tricks on how to make the most out of your gaming experience.

    -

    Game Features and Enhancements

    -

    One of the benefits of playing Sniper 3D game on PC is that you can access more features and enhancements that are not available on mobile devices. Here are some of them:

    -

    Shooting Mode

    -

    If you are using BlueStacks emulator, you can activate shooting mode by pressing F1 on your keyboard. This will allow you to aim and shoot with your mouse, just like in a PC shooter game. You can also adjust the sensitivity and speed of your mouse cursor by clicking on the gear icon at the bottom right corner of the screen.

    -

    High FPS

    -

    If you want to enjoy smoother and faster gameplay, you can enable high FPS mode by clicking on the menu icon at the top right corner of BlueStacks home screen. Then, go to Settings > Engine > FPS and select 60 or higher. This will increase the frame rate of Sniper 3D game and make it more responsive and realistic.

    -

    Script

    -

    If you want to automate some actions or tasks in Sniper 3D game, you can use script mode by pressing Ctrl + Shift + A on your keyboard. This will open a window where you can write or record a script that will execute commands for you. For example, you can create a script that will automatically reload your weapon, switch between weapons, or zoom in and out.

    -

    Free Look

    -

    If you want to explore your surroundings more freely, you can use free look mode by pressing Alt + F1 on your keyboard. This will allow you to move your camera around with your mouse, without affecting your aim or position. You can also zoom in and out with your mouse wheel.

    -

    Game Tips and Tricks

    -

    Besides using these features and enhancements, you can also improve your skills and performance in Sniper 3D game by following these tips and tricks:

    -

    How to aim and shoot

    -

    The most important skill in Sniper 3D game is aiming and shooting. You have to be accurate and quick to take down your targets in one shot. You can use your mouse or gamepad to aim and shoot, depending on your preference. Here are some tips on how to aim and shoot better:

    -
      -
    • Use the scope to zoom in and out and adjust your aim. You can also use the mouse wheel or the right trigger on your gamepad to zoom in and out.
    • -
    • Pay attention to the wind direction and speed, as they will affect the trajectory of your bullet. You can see the wind indicator at the top of the screen. You can also use the bullet drop indicator to compensate for the gravity effect. (A rough back-of-the-envelope version of this calculation is sketched right after this list.)
    • -
    • Use the breath button to steady your aim and slow down time. You can press the spacebar on your keyboard or the left trigger on your gamepad to activate this feature. You can also upgrade your skills to increase the duration and effectiveness of this feature.
    • -
    • Use the silencer to reduce the noise and flash of your shots. This will help you avoid detection and alerting other enemies. You can equip a silencer on your weapon by clicking on the gear icon at the bottom left corner of the screen.
    • -
    -
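    To make the wind and bullet-drop advice a little more concrete, here is a rough no-air-resistance sketch in Python: flight time is distance divided by muzzle velocity, drop grows with the square of that time, and crosswind drift grows roughly in proportion to it. These are generic textbook approximations with made-up numbers, for intuition only; they are not the actual ballistics model used by Sniper 3D.

```python
G = 9.81  # gravitational acceleration, m/s^2

def flight_time(distance_m: float, muzzle_velocity_ms: float) -> float:
    """Time to target, ignoring air resistance."""
    return distance_m / muzzle_velocity_ms

def bullet_drop(distance_m: float, muzzle_velocity_ms: float) -> float:
    """Vertical drop in meters over the flight time: 0.5 * g * t^2."""
    t = flight_time(distance_m, muzzle_velocity_ms)
    return 0.5 * G * t ** 2

def wind_drift(distance_m: float, muzzle_velocity_ms: float, crosswind_ms: float) -> float:
    """Very rough sideways drift: crosswind speed multiplied by flight time."""
    return crosswind_ms * flight_time(distance_m, muzzle_velocity_ms)

# Illustrative numbers only: a 300 m shot at 800 m/s with a 5 m/s crosswind.
print(round(bullet_drop(300, 800), 2))    # ~0.69 m, so aim slightly above the target
print(round(wind_drift(300, 800, 5), 2))  # ~1.88 m, so aim into the wind
```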

    How to upgrade weapons

    -

    Another important aspect of Sniper 3D game is upgrading your weapons. You have to upgrade your weapons to increase their damage, range, stability, capacity, and fire rate. You can also unlock new weapons and customize them with different skins, scopes, muzzles, magazines, and grips. Here are some tips on how to upgrade weapons:

    -
      -
    • Use coins and diamonds to buy and upgrade weapons. You can earn coins and diamonds by completing missions, watching ads, or buying them with real money.
    • -
    • Use blueprints to unlock new weapons. You can collect blueprints by completing missions, participating in events, or opening crates.
    • -
    • Use parts to customize your weapons. You can obtain parts by dismantling unwanted weapons, opening crates, or buying them with coins or diamonds.
    • -
    • Use weapon tiers to compare and choose the best weapon for each mission. You can see the weapon tier at the top of the weapon card. The higher the tier, the better the weapon.
    • -
    -

    How to complete missions

    -

    The main mode of Sniper 3D game is completing missions. You have to complete various missions using your sniper skills and strategy. You can choose from different types of missions, such as primary, wanted, spec ops, daily, or PvP. Here are some tips on how to complete missions:

    -
      -
    • Read the mission briefing carefully and follow the instructions. You will see the mission briefing at the start of each mission. It will tell you the objective, target, location, reward, and other details of the mission.
    • -
    • Choose the right weapon for each mission. You will see a recommended weapon for each mission at the bottom of the mission briefing. You can also see the weapon stats and compare them with other weapons by clicking on them.
    • -
    • Use hints and clues to find and identify your target. You will see hints and clues on the screen during some missions. They will help you locate and recognize your target among other people or objects.
    • -
    • Use cover and stealth to avoid detection and enemy fire. You will see a cover indicator at the bottom of the screen during some missions. It will show you how much cover you have from enemy sight and bullets.
    • -
    -

    Conclusion

    -

    Sniper 3D is an amazing shooting game that you can play on your computer with better graphics, performance, and controls. You can download and install Sniper 3D game on your PC using either BlueStacks emulator or Steam platform. You can also play and enjoy Sniper 3D game on your PC with more features and enhancements that are not available on mobile devices. You can also improve your skills and performance in Sniper 3D game by following some tips and tricks that we have shared in this article.

    -

    We hope that this article has helped you learn how to download, install, play, and enjoy Sniper 3D game on your PC. If you have any questions or feedback, please feel free to leave a comment below. Happy sniping!

    -

    Frequently Asked Questions

    -

    Q: Is Sniper 3D game free to play?

    -

    A: Yes, Sniper 3D game is free to play on both mobile devices and PC platforms. However, it also offers some in-app purchases that can enhance your gaming experience.

    -

    Q: Is Sniper 3D game online or offline?

    -

    A: Sniper 3D game is both online and offline. You can play most of the missions offline without an internet connection. However, you need an internet connection to access some features such as PvP mode, clan wars, events, leaderboards, and updates.

    -

    Q: How can I get more coins and diamonds in Sniper 3D game?

    -

    A: You can get more coins and diamonds in Sniper 3D game by completing missions, watching ads, or buying them with real money. You can also get some coins and diamonds for free by logging in daily, participating in events, or joining a clan.

    -

    Q: How can I change the language of Sniper 3D game?

    -

    A: You can change the language of Sniper 3D game by going to the settings menu and selecting the language option. You can choose from over 20 languages, including English, Spanish, French, German, Italian, Portuguese, Russian, Turkish, Arabic, Chinese, Japanese, Korean, and more.

    -

    Q: How can I contact the support team of Sniper 3D game?

    -

    A: You can contact the support team of Sniper 3D game by going to the settings menu and selecting the help option. You can also visit the official website or the Facebook page of Sniper 3D game for more information and assistance.

    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free NEW Download.md b/spaces/contluForse/HuggingGPT/assets/Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free NEW Download.md deleted file mode 100644 index 0cba1fa9db745e0f37ecf08f06f765d3ccbd9274..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free NEW Download.md +++ /dev/null @@ -1,92 +0,0 @@ -
    -

    How to Get Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download

    -

    Adobe Photoshop Lightroom CC is one of the most popular and powerful photo editing software that allows you to organize, edit, and share your photos with ease. However, it is not a cheap software and requires a monthly or yearly subscription fee to use it. If you are looking for a way to get Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download, you have come to the right place.

    -

    Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] free download


    Download ✯✯✯ https://ssurll.com/2uzyhd



    -

    In this article, we will show you how to download and install Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download, which is a version of Adobe Photoshop Lightroom CC that has been patched by SadeemPC, a website that provides free downloads of various software and games. The patch is a program that modifies the original software to bypass the license verification and activation process, and to enable some additional features and functions.

    -

    By using Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download, you can enjoy the full functionality of Adobe Photoshop Lightroom CC without paying any subscription fee or registration fee. However, before you proceed, you should be aware of the risks and disadvantages of using pirated software, such as legal issues, security issues, lack of updates, and lack of support.

    -

    If you want to use Adobe Photoshop Lightroom CC legally and safely, you should consider getting the official version from Adobe's website or using the free trial version for 30 days. You can also check out some alternatives to Adobe Photoshop Lightroom CC that are free or cheaper.

    -

    What are the features and benefits of Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download?

    -

    Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download has many features and benefits that make it a great choice for photo editing enthusiasts and professionals alike. Some of them are:

    -
      -
    • It has a user-friendly and intuitive interface that allows you to easily navigate and access your photos, tools, and settings.
    • -
    • It has a powerful and comprehensive photo editing engine that allows you to adjust various aspects of your photos, such as exposure, contrast, color, tone, sharpness, noise, lens correction, and more.
    • -
    • It has a smart and flexible photo organization system that allows you to import, sort, filter, rate, tag, and manage your photos in various ways.
    • -
    • It has a seamless and tight integration with Adobe Photoshop that allows you to edit your photos in both programs with ease.
    • -
    • It has a creative and versatile photo sharing system that allows you to export, print, publish, and share your photos in various formats and platforms.
    • -
    • It has a patch that allows you to use the software for free without any limitations or restrictions.
    • -
    -

    How to download and install Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download?

    -

    If you want to download and install Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download, you can follow these steps:

    -

    -
      -
    1. Go to the website of SadeemPC by following this link.
    2. -
    3. Search for Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download or click on this link.
    4. -
    5. Click on the download button or the magnet link to start downloading the file.
    6. -
    7. Unzip the file using WinRAR or any other program that can extract compressed files.
    8. -
    9. Run the setup file named Setup.run this.(ask4pc).exe inside the folder Lightroom.setup (ask4pc).
    10. -
    11. Follow the instructions on the screen to install the software.
    12. -
    13. Run the patch file named Patch.Lightroom.6.(ask4pc).exe inside the folder Updates (ask4pc).
    14. -
    15. Select Adobe Photoshop Lightroom CC from the list of programs and click on the patch button.
    16. -
    17. Wait for the patching process to finish.
    18. -
    19. Congratulations! You have successfully installed Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download.
    20. -
    -

    How to use Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download effectively?

    -

    If you want to use Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download effectively, you can follow these tips and tricks:

    -
      -
    • Learn the basics of photo editing by watching some tutorials or reading some guides online.
    • -
    • Explore the different modules and tools of Adobe Photoshop Lightroom CC by clicking on them and seeing what they do.
    • -
    • Create a workflow that suits your needs and preferences by customizing your settings and preferences.
    • -
    • Use presets and profiles to apply some ready-made effects and adjustments to your photos.
    • -
    • Use keywords and collections to organize your photos in a logical and convenient way.
    • -
    • Use sync and cloud services to access your photos across different devices and platforms.
    • -
    • Use plugins and extensions to enhance the functionality and compatibility of Adobe Photoshop Lightroom CC with other programs and services.
    • -
    - -

    Conclusion

    - -

    In conclusion, Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download is a version of Adobe Photoshop Lightroom CC that has been patched by SadeemPC, a website that provides free downloads of various software and games. The patch is a program that modifies the original software to bypass the license verification and activation process, and to enable some additional features and functions.

    - -

    By using Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download, you can enjoy the full functionality of Adobe Photoshop Lightroom CC without paying any subscription fee or registration fee. However, before you proceed, you should be aware of the risks and disadvantages of using pirated software, such as legal issues, security issues, lack of updates, and lack of support.

    - -

    If you want to use Adobe Photoshop Lightroom CC legally and safely, you should consider getting the official version from Adobe's website or using the free trial version for 30 days. You can also check out some alternatives to Adobe Photoshop Lightroom CC that are free or cheaper.

    - -

    We hope this article helped you learn more about Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download and how to get it for free.

    -

    What are some alternatives to Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download?

    -

    If you are not satisfied with Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download, or you want to try some other options, you can check out some alternatives to Adobe Photoshop Lightroom CC that are free or cheaper. Some of them are:

    -
      -
    • Skylum Luminar: This is a powerful and versatile photo editing software that uses artificial intelligence (AI) to enhance your photos. It has a user-friendly interface, a smart photo organization system, and a creative photo sharing system. It also has some unique features, such as sky replacement, portrait enhancers, and augmented sky. You can buy it for a one-time fee of $79, or get a free trial for 7 days.
    • -
    • ON1 Photo RAW: This is a comprehensive and fast photo editing software that offers raw processing, photo organization, and photo effects. It has a customizable interface, a flexible photo management system, and a seamless integration with Adobe Photoshop and Lightroom. It also has some advanced features, such as HDR merging, focus stacking, and panorama stitching. You can buy it for a one-time fee of $99, or get a free trial for 14 days.
    • -
    • DxO PhotoLab: This is a professional and precise photo editing software that offers raw conversion, photo correction, and photo enhancement. It has a clear interface, a powerful photo editing engine, and a superior noise reduction technology. It also has some exclusive features, such as optical corrections, haze removal, and local adjustments. You can buy it for a one-time fee of $129, or get a free trial for 30 days.
    • -
    • Capture One: This is a premium and sophisticated photo editing software that offers raw processing, photo organization, and tethered shooting. It has a feature-rich interface, a customizable workflow system, and an excellent raw file conversion quality. It also has some professional features, such as color grading, layer editing, and annotations. You can buy it for a one-time fee of $299, or get a free trial for 30 days.
    • -
    • Darktable: This is a free and open-source photo editing software that offers raw processing, photo management, and non-destructive editing. It has a modular interface, a comprehensive photo editing toolkit, and a robust performance system. It also has some unique features, such as multiple exposure blending, perspective correction, and watermarking.
    • -
    -

    How to compare Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download with Skylum Luminar?

    -

    If you want to compare Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download with Skylum Luminar, you can consider some of the following aspects:

    -
      -
    • Price: Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download is a pirated version of Adobe Photoshop Lightroom CC that can be downloaded for free from the SadeemPC website. However, this is illegal and risky, as you may face legal issues, security issues, lack of updates, and lack of support. Skylum Luminar is a legitimate product that can be bought for a one-time fee of $79 or tried free for 7 days. This is legal and safe, as you will get updates, support, and a 30-day money-back guarantee.
    • -
    • Features: Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download has a user-friendly and intuitive interface, a powerful and comprehensive photo editing engine, a smart and flexible photo organization system, a seamless and tight integration with Adobe Photoshop, and a creative and versatile photo sharing system. Skylum Luminar has similar features, but also has some unique features, such as sky replacement, portrait enhancers, augmented sky, AI tools, and presets.
    • -
    • Performance: Both programs are fast and smooth, importing, editing, and exporting photos quickly and efficiently. However, the patched Lightroom build may suffer from bugs, errors, or crashes, as it is not an official version and may not be compatible with your system or other programs, whereas Skylum Luminar is stable and reliable, as it is an official release that is regularly updated and optimized.
    • -
    -

    Conclusion

    -

    In conclusion, Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download and Skylum Luminar are both powerful and versatile photo editing programs with similar features and benefits. However, they differ in terms of price, legality, safety, uniqueness, and stability.

    - -

    Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download is a pirated version of Adobe Photoshop Lightroom CC that can be downloaded for free from the SadeemPC website. However, this is illegal and risky, as you may face legal issues, security issues, lack of updates, and lack of support.

    - -

    Skylum Luminar, by contrast, is a legitimate product that can be bought for a one-time fee of $79 or tried free for 7 days. This is legal and safe, as you will get updates, support, and a 30-day money-back guarantee.

    - -

    We hope this article helped you compare Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download with Skylum Luminar and decide which one is better for you.

    In this article, we have shown you how to get Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download, which is a version of Adobe Photoshop Lightroom CC that has been patched by SadeemPC, a website that provides free downloads of various software and games. The patch is a program that modifies the original software to bypass the license verification and activation process, and to enable some additional features and functions.

    - -

    We have also compared Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download with Skylum Luminar, which is another powerful and versatile photo editing software that has similar features and benefits. However, they also have some differences in terms of price, legality, safety, uniqueness, and stability.

    - -

    We hope this article helped you learn more about Adobe Photoshop Lightroom CC 6.6.1 Incl Patch [SadeemPC] Free Download and how to get it for free. We also hope it helped you compare it with Skylum Luminar and decide which one is better for you.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Defection Full Movie In Italian Free Download Mp4.md b/spaces/contluForse/HuggingGPT/assets/Defection Full Movie In Italian Free Download Mp4.md deleted file mode 100644 index 19e9678fd83923eba16d147a5136e973f3ae8dcf..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Defection Full Movie In Italian Free Download Mp4.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Defection full movie in italian free download mp4


    Download Zip ✪✪✪ https://ssurll.com/2uzxba



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/contluForse/HuggingGPT/assets/Ein Buch von Amazon auf das iPad herunterladen So lesen Sie Ihre eBooks offline.md b/spaces/contluForse/HuggingGPT/assets/Ein Buch von Amazon auf das iPad herunterladen So lesen Sie Ihre eBooks offline.md deleted file mode 100644 index 15cbfb7c11ec94f9edf7979574116af65b829e95..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Ein Buch von Amazon auf das iPad herunterladen So lesen Sie Ihre eBooks offline.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Ein Buch von Amazon auf das iPad herunterladen


    Download Filehttps://ssurll.com/2uzxNs



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/core/seg/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/core/seg/__init__.py deleted file mode 100644 index 93bc129b685e4a3efca2cc891729981b2865900d..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/core/seg/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .builder import build_pixel_sampler -from .sampler import BasePixelSampler, OHEMPixelSampler - -__all__ = ['build_pixel_sampler', 'BasePixelSampler', 'OHEMPixelSampler'] diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/__init__.py deleted file mode 100644 index 3d15d1ee534d35f80525d5a9c8a7437dad5c7463..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/__init__.py +++ /dev/null @@ -1,57 +0,0 @@ -# Estimating and Exploiting the Aleatoric Uncertainty in Surface Normal Estimation -# https://github.com/baegwangbin/surface_normal_uncertainty - -import os -import types -import torch -import numpy as np - -from einops import rearrange -from .models.NNET import NNET -from .utils import utils -from annotator.util import annotator_ckpts_path -import torchvision.transforms as transforms - - -class NormalBaeDetector: - def __init__(self): - remote_model_path = "https://huggingface.co/lllyasviel/Annotators/resolve/main/scannet.pt" - modelpath = os.path.join(annotator_ckpts_path, "scannet.pt") - if not os.path.exists(modelpath): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(remote_model_path, model_dir=annotator_ckpts_path) - args = types.SimpleNamespace() - args.mode = 'client' - args.architecture = 'BN' - args.pretrained = 'scannet' - args.sampling_ratio = 0.4 - args.importance_ratio = 0.7 - model = NNET(args) - model = utils.load_checkpoint(modelpath, model) -# model = model.cuda() - model = model.cpu() - model.eval() - self.model = model - self.norm = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) - - def __call__(self, input_image): - assert input_image.ndim == 3 - image_normal = input_image - with torch.no_grad(): -# image_normal = torch.from_numpy(image_normal).float().cuda() - image_normal = torch.from_numpy(image_normal).float().cpu() - image_normal = image_normal / 255.0 - image_normal = rearrange(image_normal, 'h w c -> 1 c h w') - image_normal = self.norm(image_normal) - - normal = self.model(image_normal) - normal = normal[0][-1][:, :3] - # d = torch.sum(normal ** 2.0, dim=1, keepdim=True) ** 0.5 - # d = torch.maximum(d, torch.ones_like(d) * 1e-5) - # normal /= d - normal = ((normal + 1) * 0.5).clip(0, 1) - - normal = rearrange(normal[0], 'c h w -> h w c').cpu().numpy() - normal_image = (normal * 255.0).clip(0, 255).astype(np.uint8) - - return normal_image diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/data/sun_rgbd_loader.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/data/sun_rgbd_loader.py deleted file mode 100644 index 9e2bdb9aefe68ca4439f41eff3bba722c49fb976..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/data/sun_rgbd_loader.py +++ /dev/null @@ -1,106 +0,0 @@ -# MIT License - -# Copyright (c) 2022 Intelligent 
Systems Lab Org - -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: - -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. - -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -# File author: Shariq Farooq Bhat - -import os - -import numpy as np -import torch -from PIL import Image -from torch.utils.data import DataLoader, Dataset -from torchvision import transforms - - -class ToTensor(object): - def __init__(self): - # self.normalize = transforms.Normalize( - # mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) - self.normalize = lambda x : x - - def __call__(self, sample): - image, depth = sample['image'], sample['depth'] - image = self.to_tensor(image) - image = self.normalize(image) - depth = self.to_tensor(depth) - - return {'image': image, 'depth': depth, 'dataset': "sunrgbd"} - - def to_tensor(self, pic): - - if isinstance(pic, np.ndarray): - img = torch.from_numpy(pic.transpose((2, 0, 1))) - return img - - # # handle PIL Image - if pic.mode == 'I': - img = torch.from_numpy(np.array(pic, np.int32, copy=False)) - elif pic.mode == 'I;16': - img = torch.from_numpy(np.array(pic, np.int16, copy=False)) - else: - img = torch.ByteTensor( - torch.ByteStorage.from_buffer(pic.tobytes())) - # PIL image mode: 1, L, P, I, F, RGB, YCbCr, RGBA, CMYK - if pic.mode == 'YCbCr': - nchannel = 3 - elif pic.mode == 'I;16': - nchannel = 1 - else: - nchannel = len(pic.mode) - img = img.view(pic.size[1], pic.size[0], nchannel) - - img = img.transpose(0, 1).transpose(0, 2).contiguous() - if isinstance(img, torch.ByteTensor): - return img.float() - else: - return img - - -class SunRGBD(Dataset): - def __init__(self, data_dir_root): - # test_file_dirs = loadmat(train_test_file)['alltest'].squeeze() - # all_test = [t[0].replace("/n/fs/sun3d/data/", "") for t in test_file_dirs] - # self.all_test = [os.path.join(data_dir_root, t) for t in all_test] - import glob - self.image_files = glob.glob( - os.path.join(data_dir_root, 'rgb', 'rgb', '*')) - self.depth_files = [ - r.replace("rgb/rgb", "gt/gt").replace("jpg", "png") for r in self.image_files] - self.transform = ToTensor() - - def __getitem__(self, idx): - image_path = self.image_files[idx] - depth_path = self.depth_files[idx] - - image = np.asarray(Image.open(image_path), dtype=np.float32) / 255.0 - depth = np.asarray(Image.open(depth_path), dtype='uint16') / 1000.0 - depth[depth > 8] = -1 - depth = depth[..., None] - return self.transform(dict(image=image, depth=depth)) - - def __len__(self): - return len(self.image_files) - - -def get_sunrgbd_loader(data_dir_root, batch_size=1, **kwargs): - dataset = SunRGBD(data_dir_root) - return DataLoader(dataset, 
batch_size, **kwargs) diff --git a/spaces/daddyjin/TalkingFaceGeneration/FONT/modules/function.py b/spaces/daddyjin/TalkingFaceGeneration/FONT/modules/function.py deleted file mode 100644 index d7ce0f4c6d21660bc22e318687015cfd26c36be5..0000000000000000000000000000000000000000 --- a/spaces/daddyjin/TalkingFaceGeneration/FONT/modules/function.py +++ /dev/null @@ -1,75 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -""" -Created on Thu Sep 30 17:45:24 2021 - -@author: SENSETIME\jixinya1 -""" - -import torch - - -def calc_mean_std(feat, eps=1e-5): - # eps is a small value added to the variance to avoid divide-by-zero. - size = feat.size() - assert (len(size) == 4) - N, C = size[:2] - feat_var = feat.view(N, C, -1).var(dim=2) + eps - feat_std = feat_var.sqrt().view(N, C, 1, 1) - feat_mean = feat.view(N, C, -1).mean(dim=2).view(N, C, 1, 1) - return feat_mean, feat_std - - -def adaptive_instance_normalization(content_feat, style_feat): - assert (content_feat.size()[:2] == style_feat.size()[:2]) - size = content_feat.size() - style_mean, style_std = calc_mean_std(style_feat) - content_mean, content_std = calc_mean_std(content_feat) - - normalized_feat = (content_feat - content_mean.expand( - size)) / content_std.expand(size) - return normalized_feat * style_std.expand(size) + style_mean.expand(size) - - -def _calc_feat_flatten_mean_std(feat): - # takes 3D feat (C, H, W), return mean and std of array within channels - assert (feat.size()[0] == 3) - assert (isinstance(feat, torch.FloatTensor)) - feat_flatten = feat.view(3, -1) - mean = feat_flatten.mean(dim=-1, keepdim=True) - std = feat_flatten.std(dim=-1, keepdim=True) - return feat_flatten, mean, std - - -def _mat_sqrt(x): - U, D, V = torch.svd(x) - return torch.mm(torch.mm(U, D.pow(0.5).diag()), V.t()) - - -def coral(source, target): - # assume both source and target are 3D array (C, H, W) - # Note: flatten -> f - - source_f, source_f_mean, source_f_std = _calc_feat_flatten_mean_std(source) - source_f_norm = (source_f - source_f_mean.expand_as( - source_f)) / source_f_std.expand_as(source_f) - source_f_cov_eye = \ - torch.mm(source_f_norm, source_f_norm.t()) + torch.eye(3) - - target_f, target_f_mean, target_f_std = _calc_feat_flatten_mean_std(target) - target_f_norm = (target_f - target_f_mean.expand_as( - target_f)) / target_f_std.expand_as(target_f) - target_f_cov_eye = \ - torch.mm(target_f_norm, target_f_norm.t()) + torch.eye(3) - - source_f_norm_transfer = torch.mm( - _mat_sqrt(target_f_cov_eye), - torch.mm(torch.inverse(_mat_sqrt(source_f_cov_eye)), - source_f_norm) - ) - - source_f_transfer = source_f_norm_transfer * \ - target_f_std.expand_as(source_f_norm) + \ - target_f_mean.expand_as(source_f_norm) - - return source_f_transfer.view(source.size()) \ No newline at end of file diff --git a/spaces/datien228/text-summarizer/static/js/particles.js b/spaces/datien228/text-summarizer/static/js/particles.js deleted file mode 100644 index 325d8349960022a3a6aaef3d1ca94938be622a68..0000000000000000000000000000000000000000 --- a/spaces/datien228/text-summarizer/static/js/particles.js +++ /dev/null @@ -1,1541 +0,0 @@ -/* ----------------------------------------------- -/* Author : Vincent Garreau - vincentgarreau.com -/* MIT license: http://opensource.org/licenses/MIT -/* Demo / Generator : vincentgarreau.com/particles.js -/* GitHub : github.com/VincentGarreau/particles.js -/* How to use? 
: Check the GitHub README -/* v2.0.0 -/* ----------------------------------------------- */ - -var pJS = function(tag_id, params){ - - var canvas_el = document.querySelector('#'+tag_id+' > .particles-js-canvas-el'); - - /* particles.js variables with default values */ - this.pJS = { - canvas: { - el: canvas_el, - w: canvas_el.offsetWidth, - h: canvas_el.offsetHeight - }, - particles: { - number: { - value: 400, - density: { - enable: true, - value_area: 800 - } - }, - color: { - value: '#fff' - }, - shape: { - type: 'circle', - stroke: { - width: 0, - color: '#ff0000' - }, - polygon: { - nb_sides: 5 - }, - image: { - src: '', - width: 100, - height: 100 - } - }, - opacity: { - value: 1, - random: false, - anim: { - enable: false, - speed: 2, - opacity_min: 0, - sync: false - } - }, - size: { - value: 20, - random: false, - anim: { - enable: false, - speed: 20, - size_min: 0, - sync: false - } - }, - line_linked: { - enable: true, - distance: 100, - color: '#fff', - opacity: 1, - width: 1 - }, - move: { - enable: true, - speed: 2, - direction: 'none', - random: false, - straight: false, - out_mode: 'out', - bounce: false, - attract: { - enable: false, - rotateX: 3000, - rotateY: 3000 - } - }, - array: [] - }, - interactivity: { - detect_on: 'canvas', - events: { - onhover: { - enable: true, - mode: 'grab' - }, - onclick: { - enable: true, - mode: 'push' - }, - resize: true - }, - modes: { - grab:{ - distance: 100, - line_linked:{ - opacity: 1 - } - }, - bubble:{ - distance: 200, - size: 80, - duration: 0.4 - }, - repulse:{ - distance: 200, - duration: 0.4 - }, - push:{ - particles_nb: 4 - }, - remove:{ - particles_nb: 2 - } - }, - mouse:{} - }, - retina_detect: false, - fn: { - interact: {}, - modes: {}, - vendors:{} - }, - tmp: {} - }; - - var pJS = this.pJS; - - /* params settings */ - if(params){ - Object.deepExtend(pJS, params); - } - - pJS.tmp.obj = { - size_value: pJS.particles.size.value, - size_anim_speed: pJS.particles.size.anim.speed, - move_speed: pJS.particles.move.speed, - line_linked_distance: pJS.particles.line_linked.distance, - line_linked_width: pJS.particles.line_linked.width, - mode_grab_distance: pJS.interactivity.modes.grab.distance, - mode_bubble_distance: pJS.interactivity.modes.bubble.distance, - mode_bubble_size: pJS.interactivity.modes.bubble.size, - mode_repulse_distance: pJS.interactivity.modes.repulse.distance - }; - - - pJS.fn.retinaInit = function(){ - - if(pJS.retina_detect && window.devicePixelRatio > 1){ - pJS.canvas.pxratio = window.devicePixelRatio; - pJS.tmp.retina = true; - } - else{ - pJS.canvas.pxratio = 1; - pJS.tmp.retina = false; - } - - pJS.canvas.w = pJS.canvas.el.offsetWidth * pJS.canvas.pxratio; - pJS.canvas.h = pJS.canvas.el.offsetHeight * pJS.canvas.pxratio; - - pJS.particles.size.value = pJS.tmp.obj.size_value * pJS.canvas.pxratio; - pJS.particles.size.anim.speed = pJS.tmp.obj.size_anim_speed * pJS.canvas.pxratio; - pJS.particles.move.speed = pJS.tmp.obj.move_speed * pJS.canvas.pxratio; - pJS.particles.line_linked.distance = pJS.tmp.obj.line_linked_distance * pJS.canvas.pxratio; - pJS.interactivity.modes.grab.distance = pJS.tmp.obj.mode_grab_distance * pJS.canvas.pxratio; - pJS.interactivity.modes.bubble.distance = pJS.tmp.obj.mode_bubble_distance * pJS.canvas.pxratio; - pJS.particles.line_linked.width = pJS.tmp.obj.line_linked_width * pJS.canvas.pxratio; - pJS.interactivity.modes.bubble.size = pJS.tmp.obj.mode_bubble_size * pJS.canvas.pxratio; - pJS.interactivity.modes.repulse.distance = pJS.tmp.obj.mode_repulse_distance * 
pJS.canvas.pxratio; - - }; - - - - /* ---------- pJS functions - canvas ------------ */ - - pJS.fn.canvasInit = function(){ - pJS.canvas.ctx = pJS.canvas.el.getContext('2d'); - }; - - pJS.fn.canvasSize = function(){ - - pJS.canvas.el.width = pJS.canvas.w; - pJS.canvas.el.height = pJS.canvas.h; - - if(pJS && pJS.interactivity.events.resize){ - - window.addEventListener('resize', function(){ - - pJS.canvas.w = pJS.canvas.el.offsetWidth; - pJS.canvas.h = pJS.canvas.el.offsetHeight; - - /* resize canvas */ - if(pJS.tmp.retina){ - pJS.canvas.w *= pJS.canvas.pxratio; - pJS.canvas.h *= pJS.canvas.pxratio; - } - - pJS.canvas.el.width = pJS.canvas.w; - pJS.canvas.el.height = pJS.canvas.h; - - /* repaint canvas on anim disabled */ - if(!pJS.particles.move.enable){ - pJS.fn.particlesEmpty(); - pJS.fn.particlesCreate(); - pJS.fn.particlesDraw(); - pJS.fn.vendors.densityAutoParticles(); - } - - /* density particles enabled */ - pJS.fn.vendors.densityAutoParticles(); - - }); - - } - - }; - - - pJS.fn.canvasPaint = function(){ - pJS.canvas.ctx.fillRect(0, 0, pJS.canvas.w, pJS.canvas.h); - }; - - pJS.fn.canvasClear = function(){ - pJS.canvas.ctx.clearRect(0, 0, pJS.canvas.w, pJS.canvas.h); - }; - - - /* --------- pJS functions - particles ----------- */ - - pJS.fn.particle = function(color, opacity, position){ - - /* size */ - this.radius = (pJS.particles.size.random ? Math.random() : 1) * pJS.particles.size.value; - if(pJS.particles.size.anim.enable){ - this.size_status = false; - this.vs = pJS.particles.size.anim.speed / 100; - if(!pJS.particles.size.anim.sync){ - this.vs = this.vs * Math.random(); - } - } - - /* position */ - this.x = position ? position.x : Math.random() * pJS.canvas.w; - this.y = position ? position.y : Math.random() * pJS.canvas.h; - - /* check position - into the canvas */ - if(this.x > pJS.canvas.w - this.radius*2) this.x = this.x - this.radius; - else if(this.x < this.radius*2) this.x = this.x + this.radius; - if(this.y > pJS.canvas.h - this.radius*2) this.y = this.y - this.radius; - else if(this.y < this.radius*2) this.y = this.y + this.radius; - - /* check position - avoid overlap */ - if(pJS.particles.move.bounce){ - pJS.fn.vendors.checkOverlap(this, position); - } - - /* color */ - this.color = {}; - if(typeof(color.value) == 'object'){ - - if(color.value instanceof Array){ - var color_selected = color.value[Math.floor(Math.random() * pJS.particles.color.value.length)]; - this.color.rgb = hexToRgb(color_selected); - }else{ - if(color.value.r != undefined && color.value.g != undefined && color.value.b != undefined){ - this.color.rgb = { - r: color.value.r, - g: color.value.g, - b: color.value.b - } - } - if(color.value.h != undefined && color.value.s != undefined && color.value.l != undefined){ - this.color.hsl = { - h: color.value.h, - s: color.value.s, - l: color.value.l - } - } - } - - } - else if(color.value == 'random'){ - this.color.rgb = { - r: (Math.floor(Math.random() * (255 - 0 + 1)) + 0), - g: (Math.floor(Math.random() * (255 - 0 + 1)) + 0), - b: (Math.floor(Math.random() * (255 - 0 + 1)) + 0) - } - } - else if(typeof(color.value) == 'string'){ - this.color = color; - this.color.rgb = hexToRgb(this.color.value); - } - - /* opacity */ - this.opacity = (pJS.particles.opacity.random ? 
Math.random() : 1) * pJS.particles.opacity.value; - if(pJS.particles.opacity.anim.enable){ - this.opacity_status = false; - this.vo = pJS.particles.opacity.anim.speed / 100; - if(!pJS.particles.opacity.anim.sync){ - this.vo = this.vo * Math.random(); - } - } - - /* animation - velocity for speed */ - var velbase = {} - switch(pJS.particles.move.direction){ - case 'top': - velbase = { x:0, y:-1 }; - break; - case 'top-right': - velbase = { x:0.5, y:-0.5 }; - break; - case 'right': - velbase = { x:1, y:-0 }; - break; - case 'bottom-right': - velbase = { x:0.5, y:0.5 }; - break; - case 'bottom': - velbase = { x:0, y:1 }; - break; - case 'bottom-left': - velbase = { x:-0.5, y:1 }; - break; - case 'left': - velbase = { x:-1, y:0 }; - break; - case 'top-left': - velbase = { x:-0.5, y:-0.5 }; - break; - default: - velbase = { x:0, y:0 }; - break; - } - - if(pJS.particles.move.straight){ - this.vx = velbase.x; - this.vy = velbase.y; - if(pJS.particles.move.random){ - this.vx = this.vx * (Math.random()); - this.vy = this.vy * (Math.random()); - } - }else{ - this.vx = velbase.x + Math.random()-0.5; - this.vy = velbase.y + Math.random()-0.5; - } - - // var theta = 2.0 * Math.PI * Math.random(); - // this.vx = Math.cos(theta); - // this.vy = Math.sin(theta); - - this.vx_i = this.vx; - this.vy_i = this.vy; - - - - /* if shape is image */ - - var shape_type = pJS.particles.shape.type; - if(typeof(shape_type) == 'object'){ - if(shape_type instanceof Array){ - var shape_selected = shape_type[Math.floor(Math.random() * shape_type.length)]; - this.shape = shape_selected; - } - }else{ - this.shape = shape_type; - } - - if(this.shape == 'image'){ - var sh = pJS.particles.shape; - this.img = { - src: sh.image.src, - ratio: sh.image.width / sh.image.height - } - if(!this.img.ratio) this.img.ratio = 1; - if(pJS.tmp.img_type == 'svg' && pJS.tmp.source_svg != undefined){ - pJS.fn.vendors.createSvgImg(this); - if(pJS.tmp.pushing){ - this.img.loaded = false; - } - } - } - - - - }; - - - pJS.fn.particle.prototype.draw = function() { - - var p = this; - - if(p.radius_bubble != undefined){ - var radius = p.radius_bubble; - }else{ - var radius = p.radius; - } - - if(p.opacity_bubble != undefined){ - var opacity = p.opacity_bubble; - }else{ - var opacity = p.opacity; - } - - if(p.color.rgb){ - var color_value = 'rgba('+p.color.rgb.r+','+p.color.rgb.g+','+p.color.rgb.b+','+opacity+')'; - }else{ - var color_value = 'hsla('+p.color.hsl.h+','+p.color.hsl.s+'%,'+p.color.hsl.l+'%,'+opacity+')'; - } - - pJS.canvas.ctx.fillStyle = color_value; - pJS.canvas.ctx.beginPath(); - - switch(p.shape){ - - case 'circle': - pJS.canvas.ctx.arc(p.x, p.y, radius, 0, Math.PI * 2, false); - break; - - case 'edge': - pJS.canvas.ctx.rect(p.x-radius, p.y-radius, radius*2, radius*2); - break; - - case 'triangle': - pJS.fn.vendors.drawShape(pJS.canvas.ctx, p.x-radius, p.y+radius / 1.66, radius*2, 3, 2); - break; - - case 'polygon': - pJS.fn.vendors.drawShape( - pJS.canvas.ctx, - p.x - radius / (pJS.particles.shape.polygon.nb_sides/3.5), // startX - p.y - radius / (2.66/3.5), // startY - radius*2.66 / (pJS.particles.shape.polygon.nb_sides/3), // sideLength - pJS.particles.shape.polygon.nb_sides, // sideCountNumerator - 1 // sideCountDenominator - ); - break; - - case 'star': - pJS.fn.vendors.drawShape( - pJS.canvas.ctx, - p.x - radius*2 / (pJS.particles.shape.polygon.nb_sides/4), // startX - p.y - radius / (2*2.66/3.5), // startY - radius*2*2.66 / (pJS.particles.shape.polygon.nb_sides/3), // sideLength - pJS.particles.shape.polygon.nb_sides, // 
sideCountNumerator - 2 // sideCountDenominator - ); - break; - - case 'image': - - function draw(){ - pJS.canvas.ctx.drawImage( - img_obj, - p.x-radius, - p.y-radius, - radius*2, - radius*2 / p.img.ratio - ); - } - - if(pJS.tmp.img_type == 'svg'){ - var img_obj = p.img.obj; - }else{ - var img_obj = pJS.tmp.img_obj; - } - - if(img_obj){ - draw(); - } - - break; - - } - - pJS.canvas.ctx.closePath(); - - if(pJS.particles.shape.stroke.width > 0){ - pJS.canvas.ctx.strokeStyle = pJS.particles.shape.stroke.color; - pJS.canvas.ctx.lineWidth = pJS.particles.shape.stroke.width; - pJS.canvas.ctx.stroke(); - } - - pJS.canvas.ctx.fill(); - - }; - - - pJS.fn.particlesCreate = function(){ - for(var i = 0; i < pJS.particles.number.value; i++) { - pJS.particles.array.push(new pJS.fn.particle(pJS.particles.color, pJS.particles.opacity.value)); - } - }; - - pJS.fn.particlesUpdate = function(){ - - for(var i = 0; i < pJS.particles.array.length; i++){ - - /* the particle */ - var p = pJS.particles.array[i]; - - // var d = ( dx = pJS.interactivity.mouse.click_pos_x - p.x ) * dx + ( dy = pJS.interactivity.mouse.click_pos_y - p.y ) * dy; - // var f = -BANG_SIZE / d; - // if ( d < BANG_SIZE ) { - // var t = Math.atan2( dy, dx ); - // p.vx = f * Math.cos(t); - // p.vy = f * Math.sin(t); - // } - - /* move the particle */ - if(pJS.particles.move.enable){ - var ms = pJS.particles.move.speed/2; - p.x += p.vx * ms; - p.y += p.vy * ms; - } - - /* change opacity status */ - if(pJS.particles.opacity.anim.enable) { - if(p.opacity_status == true) { - if(p.opacity >= pJS.particles.opacity.value) p.opacity_status = false; - p.opacity += p.vo; - }else { - if(p.opacity <= pJS.particles.opacity.anim.opacity_min) p.opacity_status = true; - p.opacity -= p.vo; - } - if(p.opacity < 0) p.opacity = 0; - } - - /* change size */ - if(pJS.particles.size.anim.enable){ - if(p.size_status == true){ - if(p.radius >= pJS.particles.size.value) p.size_status = false; - p.radius += p.vs; - }else{ - if(p.radius <= pJS.particles.size.anim.size_min) p.size_status = true; - p.radius -= p.vs; - } - if(p.radius < 0) p.radius = 0; - } - - /* change particle position if it is out of canvas */ - if(pJS.particles.move.out_mode == 'bounce'){ - var new_pos = { - x_left: p.radius, - x_right: pJS.canvas.w, - y_top: p.radius, - y_bottom: pJS.canvas.h - } - }else{ - var new_pos = { - x_left: -p.radius, - x_right: pJS.canvas.w + p.radius, - y_top: -p.radius, - y_bottom: pJS.canvas.h + p.radius - } - } - - if(p.x - p.radius > pJS.canvas.w){ - p.x = new_pos.x_left; - p.y = Math.random() * pJS.canvas.h; - } - else if(p.x + p.radius < 0){ - p.x = new_pos.x_right; - p.y = Math.random() * pJS.canvas.h; - } - if(p.y - p.radius > pJS.canvas.h){ - p.y = new_pos.y_top; - p.x = Math.random() * pJS.canvas.w; - } - else if(p.y + p.radius < 0){ - p.y = new_pos.y_bottom; - p.x = Math.random() * pJS.canvas.w; - } - - /* out of canvas modes */ - switch(pJS.particles.move.out_mode){ - case 'bounce': - if (p.x + p.radius > pJS.canvas.w) p.vx = -p.vx; - else if (p.x - p.radius < 0) p.vx = -p.vx; - if (p.y + p.radius > pJS.canvas.h) p.vy = -p.vy; - else if (p.y - p.radius < 0) p.vy = -p.vy; - break; - } - - /* events */ - if(isInArray('grab', pJS.interactivity.events.onhover.mode)){ - pJS.fn.modes.grabParticle(p); - } - - if(isInArray('bubble', pJS.interactivity.events.onhover.mode) || isInArray('bubble', pJS.interactivity.events.onclick.mode)){ - pJS.fn.modes.bubbleParticle(p); - } - - if(isInArray('repulse', pJS.interactivity.events.onhover.mode) || isInArray('repulse', 
pJS.interactivity.events.onclick.mode)){ - pJS.fn.modes.repulseParticle(p); - } - - /* interaction auto between particles */ - if(pJS.particles.line_linked.enable || pJS.particles.move.attract.enable){ - for(var j = i + 1; j < pJS.particles.array.length; j++){ - var p2 = pJS.particles.array[j]; - - /* link particles */ - if(pJS.particles.line_linked.enable){ - pJS.fn.interact.linkParticles(p,p2); - } - - /* attract particles */ - if(pJS.particles.move.attract.enable){ - pJS.fn.interact.attractParticles(p,p2); - } - - /* bounce particles */ - if(pJS.particles.move.bounce){ - pJS.fn.interact.bounceParticles(p,p2); - } - - } - } - - - } - - }; - - pJS.fn.particlesDraw = function(){ - - /* clear canvas */ - pJS.canvas.ctx.clearRect(0, 0, pJS.canvas.w, pJS.canvas.h); - - /* update each particles param */ - pJS.fn.particlesUpdate(); - - /* draw each particle */ - for(var i = 0; i < pJS.particles.array.length; i++){ - var p = pJS.particles.array[i]; - p.draw(); - } - - }; - - pJS.fn.particlesEmpty = function(){ - pJS.particles.array = []; - }; - - pJS.fn.particlesRefresh = function(){ - - /* init all */ - cancelRequestAnimFrame(pJS.fn.checkAnimFrame); - cancelRequestAnimFrame(pJS.fn.drawAnimFrame); - pJS.tmp.source_svg = undefined; - pJS.tmp.img_obj = undefined; - pJS.tmp.count_svg = 0; - pJS.fn.particlesEmpty(); - pJS.fn.canvasClear(); - - /* restart */ - pJS.fn.vendors.start(); - - }; - - - /* ---------- pJS functions - particles interaction ------------ */ - - pJS.fn.interact.linkParticles = function(p1, p2){ - - var dx = p1.x - p2.x, - dy = p1.y - p2.y, - dist = Math.sqrt(dx*dx + dy*dy); - - /* draw a line between p1 and p2 if the distance between them is under the config distance */ - if(dist <= pJS.particles.line_linked.distance){ - - var opacity_line = pJS.particles.line_linked.opacity - (dist / (1/pJS.particles.line_linked.opacity)) / pJS.particles.line_linked.distance; - - if(opacity_line > 0){ - - /* style */ - var color_line = pJS.particles.line_linked.color_rgb_line; - pJS.canvas.ctx.strokeStyle = 'rgba('+color_line.r+','+color_line.g+','+color_line.b+','+opacity_line+')'; - pJS.canvas.ctx.lineWidth = pJS.particles.line_linked.width; - //pJS.canvas.ctx.lineCap = 'round'; /* performance issue */ - - /* path */ - pJS.canvas.ctx.beginPath(); - pJS.canvas.ctx.moveTo(p1.x, p1.y); - pJS.canvas.ctx.lineTo(p2.x, p2.y); - pJS.canvas.ctx.stroke(); - pJS.canvas.ctx.closePath(); - - } - - } - - }; - - - pJS.fn.interact.attractParticles = function(p1, p2){ - - /* condensed particles */ - var dx = p1.x - p2.x, - dy = p1.y - p2.y, - dist = Math.sqrt(dx*dx + dy*dy); - - if(dist <= pJS.particles.line_linked.distance){ - - var ax = dx/(pJS.particles.move.attract.rotateX*1000), - ay = dy/(pJS.particles.move.attract.rotateY*1000); - - p1.vx -= ax; - p1.vy -= ay; - - p2.vx += ax; - p2.vy += ay; - - } - - - } - - - pJS.fn.interact.bounceParticles = function(p1, p2){ - - var dx = p1.x - p2.x, - dy = p1.y - p2.y, - dist = Math.sqrt(dx*dx + dy*dy), - dist_p = p1.radius+p2.radius; - - if(dist <= dist_p){ - p1.vx = -p1.vx; - p1.vy = -p1.vy; - - p2.vx = -p2.vx; - p2.vy = -p2.vy; - } - - } - - - /* ---------- pJS functions - modes events ------------ */ - - pJS.fn.modes.pushParticles = function(nb, pos){ - - pJS.tmp.pushing = true; - - for(var i = 0; i < nb; i++){ - pJS.particles.array.push( - new pJS.fn.particle( - pJS.particles.color, - pJS.particles.opacity.value, - { - 'x': pos ? pos.pos_x : Math.random() * pJS.canvas.w, - 'y': pos ? 
pos.pos_y : Math.random() * pJS.canvas.h - } - ) - ) - if(i == nb-1){ - if(!pJS.particles.move.enable){ - pJS.fn.particlesDraw(); - } - pJS.tmp.pushing = false; - } - } - - }; - - - pJS.fn.modes.removeParticles = function(nb){ - - pJS.particles.array.splice(0, nb); - if(!pJS.particles.move.enable){ - pJS.fn.particlesDraw(); - } - - }; - - - pJS.fn.modes.bubbleParticle = function(p){ - - /* on hover event */ - if(pJS.interactivity.events.onhover.enable && isInArray('bubble', pJS.interactivity.events.onhover.mode)){ - - var dx_mouse = p.x - pJS.interactivity.mouse.pos_x, - dy_mouse = p.y - pJS.interactivity.mouse.pos_y, - dist_mouse = Math.sqrt(dx_mouse*dx_mouse + dy_mouse*dy_mouse), - ratio = 1 - dist_mouse / pJS.interactivity.modes.bubble.distance; - - function init(){ - p.opacity_bubble = p.opacity; - p.radius_bubble = p.radius; - } - - /* mousemove - check ratio */ - if(dist_mouse <= pJS.interactivity.modes.bubble.distance){ - - if(ratio >= 0 && pJS.interactivity.status == 'mousemove'){ - - /* size */ - if(pJS.interactivity.modes.bubble.size != pJS.particles.size.value){ - - if(pJS.interactivity.modes.bubble.size > pJS.particles.size.value){ - var size = p.radius + (pJS.interactivity.modes.bubble.size*ratio); - if(size >= 0){ - p.radius_bubble = size; - } - }else{ - var dif = p.radius - pJS.interactivity.modes.bubble.size, - size = p.radius - (dif*ratio); - if(size > 0){ - p.radius_bubble = size; - }else{ - p.radius_bubble = 0; - } - } - - } - - /* opacity */ - if(pJS.interactivity.modes.bubble.opacity != pJS.particles.opacity.value){ - - if(pJS.interactivity.modes.bubble.opacity > pJS.particles.opacity.value){ - var opacity = pJS.interactivity.modes.bubble.opacity*ratio; - if(opacity > p.opacity && opacity <= pJS.interactivity.modes.bubble.opacity){ - p.opacity_bubble = opacity; - } - }else{ - var opacity = p.opacity - (pJS.particles.opacity.value-pJS.interactivity.modes.bubble.opacity)*ratio; - if(opacity < p.opacity && opacity >= pJS.interactivity.modes.bubble.opacity){ - p.opacity_bubble = opacity; - } - } - - } - - } - - }else{ - init(); - } - - - /* mouseleave */ - if(pJS.interactivity.status == 'mouseleave'){ - init(); - } - - } - - /* on click event */ - else if(pJS.interactivity.events.onclick.enable && isInArray('bubble', pJS.interactivity.events.onclick.mode)){ - - - if(pJS.tmp.bubble_clicking){ - var dx_mouse = p.x - pJS.interactivity.mouse.click_pos_x, - dy_mouse = p.y - pJS.interactivity.mouse.click_pos_y, - dist_mouse = Math.sqrt(dx_mouse*dx_mouse + dy_mouse*dy_mouse), - time_spent = (new Date().getTime() - pJS.interactivity.mouse.click_time)/1000; - - if(time_spent > pJS.interactivity.modes.bubble.duration){ - pJS.tmp.bubble_duration_end = true; - } - - if(time_spent > pJS.interactivity.modes.bubble.duration*2){ - pJS.tmp.bubble_clicking = false; - pJS.tmp.bubble_duration_end = false; - } - } - - - function process(bubble_param, particles_param, p_obj_bubble, p_obj, id){ - - if(bubble_param != particles_param){ - - if(!pJS.tmp.bubble_duration_end){ - if(dist_mouse <= pJS.interactivity.modes.bubble.distance){ - if(p_obj_bubble != undefined) var obj = p_obj_bubble; - else var obj = p_obj; - if(obj != bubble_param){ - var value = p_obj - (time_spent * (p_obj - bubble_param) / pJS.interactivity.modes.bubble.duration); - if(id == 'size') p.radius_bubble = value; - if(id == 'opacity') p.opacity_bubble = value; - } - }else{ - if(id == 'size') p.radius_bubble = undefined; - if(id == 'opacity') p.opacity_bubble = undefined; - } - }else{ - if(p_obj_bubble != undefined){ - var 
value_tmp = p_obj - (time_spent * (p_obj - bubble_param) / pJS.interactivity.modes.bubble.duration), - dif = bubble_param - value_tmp; - value = bubble_param + dif; - if(id == 'size') p.radius_bubble = value; - if(id == 'opacity') p.opacity_bubble = value; - } - } - - } - - } - - if(pJS.tmp.bubble_clicking){ - /* size */ - process(pJS.interactivity.modes.bubble.size, pJS.particles.size.value, p.radius_bubble, p.radius, 'size'); - /* opacity */ - process(pJS.interactivity.modes.bubble.opacity, pJS.particles.opacity.value, p.opacity_bubble, p.opacity, 'opacity'); - } - - } - - }; - - - pJS.fn.modes.repulseParticle = function(p){ - - if(pJS.interactivity.events.onhover.enable && isInArray('repulse', pJS.interactivity.events.onhover.mode) && pJS.interactivity.status == 'mousemove') { - - var dx_mouse = p.x - pJS.interactivity.mouse.pos_x, - dy_mouse = p.y - pJS.interactivity.mouse.pos_y, - dist_mouse = Math.sqrt(dx_mouse*dx_mouse + dy_mouse*dy_mouse); - - var normVec = {x: dx_mouse/dist_mouse, y: dy_mouse/dist_mouse}, - repulseRadius = pJS.interactivity.modes.repulse.distance, - velocity = 100, - repulseFactor = clamp((1/repulseRadius)*(-1*Math.pow(dist_mouse/repulseRadius,2)+1)*repulseRadius*velocity, 0, 50); - - var pos = { - x: p.x + normVec.x * repulseFactor, - y: p.y + normVec.y * repulseFactor - } - - if(pJS.particles.move.out_mode == 'bounce'){ - if(pos.x - p.radius > 0 && pos.x + p.radius < pJS.canvas.w) p.x = pos.x; - if(pos.y - p.radius > 0 && pos.y + p.radius < pJS.canvas.h) p.y = pos.y; - }else{ - p.x = pos.x; - p.y = pos.y; - } - - } - - - else if(pJS.interactivity.events.onclick.enable && isInArray('repulse', pJS.interactivity.events.onclick.mode)) { - - if(!pJS.tmp.repulse_finish){ - pJS.tmp.repulse_count++; - if(pJS.tmp.repulse_count == pJS.particles.array.length){ - pJS.tmp.repulse_finish = true; - } - } - - if(pJS.tmp.repulse_clicking){ - - var repulseRadius = Math.pow(pJS.interactivity.modes.repulse.distance/6, 3); - - var dx = pJS.interactivity.mouse.click_pos_x - p.x, - dy = pJS.interactivity.mouse.click_pos_y - p.y, - d = dx*dx + dy*dy; - - var force = -repulseRadius / d * 1; - - function process(){ - - var f = Math.atan2(dy,dx); - p.vx = force * Math.cos(f); - p.vy = force * Math.sin(f); - - if(pJS.particles.move.out_mode == 'bounce'){ - var pos = { - x: p.x + p.vx, - y: p.y + p.vy - } - if (pos.x + p.radius > pJS.canvas.w) p.vx = -p.vx; - else if (pos.x - p.radius < 0) p.vx = -p.vx; - if (pos.y + p.radius > pJS.canvas.h) p.vy = -p.vy; - else if (pos.y - p.radius < 0) p.vy = -p.vy; - } - - } - - // default - if(d <= repulseRadius){ - process(); - } - - // bang - slow motion mode - // if(!pJS.tmp.repulse_finish){ - // if(d <= repulseRadius){ - // process(); - // } - // }else{ - // process(); - // } - - - }else{ - - if(pJS.tmp.repulse_clicking == false){ - - p.vx = p.vx_i; - p.vy = p.vy_i; - - } - - } - - } - - } - - - pJS.fn.modes.grabParticle = function(p){ - - if(pJS.interactivity.events.onhover.enable && pJS.interactivity.status == 'mousemove'){ - - var dx_mouse = p.x - pJS.interactivity.mouse.pos_x, - dy_mouse = p.y - pJS.interactivity.mouse.pos_y, - dist_mouse = Math.sqrt(dx_mouse*dx_mouse + dy_mouse*dy_mouse); - - /* draw a line between the cursor and the particle if the distance between them is under the config distance */ - if(dist_mouse <= pJS.interactivity.modes.grab.distance){ - - var opacity_line = pJS.interactivity.modes.grab.line_linked.opacity - (dist_mouse / (1/pJS.interactivity.modes.grab.line_linked.opacity)) / pJS.interactivity.modes.grab.distance; - - 
if(opacity_line > 0){ - - /* style */ - var color_line = pJS.particles.line_linked.color_rgb_line; - pJS.canvas.ctx.strokeStyle = 'rgba('+color_line.r+','+color_line.g+','+color_line.b+','+opacity_line+')'; - pJS.canvas.ctx.lineWidth = pJS.particles.line_linked.width; - //pJS.canvas.ctx.lineCap = 'round'; /* performance issue */ - - /* path */ - pJS.canvas.ctx.beginPath(); - pJS.canvas.ctx.moveTo(p.x, p.y); - pJS.canvas.ctx.lineTo(pJS.interactivity.mouse.pos_x, pJS.interactivity.mouse.pos_y); - pJS.canvas.ctx.stroke(); - pJS.canvas.ctx.closePath(); - - } - - } - - } - - }; - - - - /* ---------- pJS functions - vendors ------------ */ - - pJS.fn.vendors.eventsListeners = function(){ - - /* events target element */ - if(pJS.interactivity.detect_on == 'window'){ - pJS.interactivity.el = window; - }else{ - pJS.interactivity.el = pJS.canvas.el; - } - - - /* detect mouse pos - on hover / click event */ - if(pJS.interactivity.events.onhover.enable || pJS.interactivity.events.onclick.enable){ - - /* el on mousemove */ - pJS.interactivity.el.addEventListener('mousemove', function(e){ - - if(pJS.interactivity.el == window){ - var pos_x = e.clientX, - pos_y = e.clientY; - } - else{ - var pos_x = e.offsetX || e.clientX, - pos_y = e.offsetY || e.clientY; - } - - pJS.interactivity.mouse.pos_x = pos_x; - pJS.interactivity.mouse.pos_y = pos_y; - - if(pJS.tmp.retina){ - pJS.interactivity.mouse.pos_x *= pJS.canvas.pxratio; - pJS.interactivity.mouse.pos_y *= pJS.canvas.pxratio; - } - - pJS.interactivity.status = 'mousemove'; - - }); - - /* el on onmouseleave */ - pJS.interactivity.el.addEventListener('mouseleave', function(e){ - - pJS.interactivity.mouse.pos_x = null; - pJS.interactivity.mouse.pos_y = null; - pJS.interactivity.status = 'mouseleave'; - - }); - - } - - /* on click event */ - if(pJS.interactivity.events.onclick.enable){ - - pJS.interactivity.el.addEventListener('click', function(){ - - pJS.interactivity.mouse.click_pos_x = pJS.interactivity.mouse.pos_x; - pJS.interactivity.mouse.click_pos_y = pJS.interactivity.mouse.pos_y; - pJS.interactivity.mouse.click_time = new Date().getTime(); - - if(pJS.interactivity.events.onclick.enable){ - - switch(pJS.interactivity.events.onclick.mode){ - - case 'push': - if(pJS.particles.move.enable){ - pJS.fn.modes.pushParticles(pJS.interactivity.modes.push.particles_nb, pJS.interactivity.mouse); - }else{ - if(pJS.interactivity.modes.push.particles_nb == 1){ - pJS.fn.modes.pushParticles(pJS.interactivity.modes.push.particles_nb, pJS.interactivity.mouse); - } - else if(pJS.interactivity.modes.push.particles_nb > 1){ - pJS.fn.modes.pushParticles(pJS.interactivity.modes.push.particles_nb); - } - } - break; - - case 'remove': - pJS.fn.modes.removeParticles(pJS.interactivity.modes.remove.particles_nb); - break; - - case 'bubble': - pJS.tmp.bubble_clicking = true; - break; - - case 'repulse': - pJS.tmp.repulse_clicking = true; - pJS.tmp.repulse_count = 0; - pJS.tmp.repulse_finish = false; - setTimeout(function(){ - pJS.tmp.repulse_clicking = false; - }, pJS.interactivity.modes.repulse.duration*1000) - break; - - } - - } - - }); - - } - - - }; - - pJS.fn.vendors.densityAutoParticles = function(){ - - if(pJS.particles.number.density.enable){ - - /* calc area */ - var area = pJS.canvas.el.width * pJS.canvas.el.height / 1000; - if(pJS.tmp.retina){ - area = area/(pJS.canvas.pxratio*2); - } - - /* calc number of particles based on density area */ - var nb_particles = area * pJS.particles.number.value / pJS.particles.number.density.value_area; - - /* add or remove X particles 
*/ - var missing_particles = pJS.particles.array.length - nb_particles; - if(missing_particles < 0) pJS.fn.modes.pushParticles(Math.abs(missing_particles)); - else pJS.fn.modes.removeParticles(missing_particles); - - } - - }; - - - pJS.fn.vendors.checkOverlap = function(p1, position){ - for(var i = 0; i < pJS.particles.array.length; i++){ - var p2 = pJS.particles.array[i]; - - var dx = p1.x - p2.x, - dy = p1.y - p2.y, - dist = Math.sqrt(dx*dx + dy*dy); - - if(dist <= p1.radius + p2.radius){ - p1.x = position ? position.x : Math.random() * pJS.canvas.w; - p1.y = position ? position.y : Math.random() * pJS.canvas.h; - pJS.fn.vendors.checkOverlap(p1); - } - } - }; - - - pJS.fn.vendors.createSvgImg = function(p){ - - /* set color to svg element */ - var svgXml = pJS.tmp.source_svg, - rgbHex = /#([0-9A-F]{3,6})/gi, - coloredSvgXml = svgXml.replace(rgbHex, function (m, r, g, b) { - if(p.color.rgb){ - var color_value = 'rgba('+p.color.rgb.r+','+p.color.rgb.g+','+p.color.rgb.b+','+p.opacity+')'; - }else{ - var color_value = 'hsla('+p.color.hsl.h+','+p.color.hsl.s+'%,'+p.color.hsl.l+'%,'+p.opacity+')'; - } - return color_value; - }); - - /* prepare to create img with colored svg */ - var svg = new Blob([coloredSvgXml], {type: 'image/svg+xml;charset=utf-8'}), - DOMURL = window.URL || window.webkitURL || window, - url = DOMURL.createObjectURL(svg); - - /* create particle img obj */ - var img = new Image(); - img.addEventListener('load', function(){ - p.img.obj = img; - p.img.loaded = true; - DOMURL.revokeObjectURL(url); - pJS.tmp.count_svg++; - }); - img.src = url; - - }; - - - pJS.fn.vendors.destroypJS = function(){ - cancelAnimationFrame(pJS.fn.drawAnimFrame); - canvas_el.remove(); - pJSDom = null; - }; - - - pJS.fn.vendors.drawShape = function(c, startX, startY, sideLength, sideCountNumerator, sideCountDenominator){ - - // By Programming Thomas - https://programmingthomas.wordpress.com/2013/04/03/n-sided-shapes/ - var sideCount = sideCountNumerator * sideCountDenominator; - var decimalSides = sideCountNumerator / sideCountDenominator; - var interiorAngleDegrees = (180 * (decimalSides - 2)) / decimalSides; - var interiorAngle = Math.PI - Math.PI * interiorAngleDegrees / 180; // convert to radians - c.save(); - c.beginPath(); - c.translate(startX, startY); - c.moveTo(0,0); - for (var i = 0; i < sideCount; i++) { - c.lineTo(sideLength,0); - c.translate(sideLength,0); - c.rotate(interiorAngle); - } - //c.stroke(); - c.fill(); - c.restore(); - - }; - - pJS.fn.vendors.exportImg = function(){ - window.open(pJS.canvas.el.toDataURL('image/png'), '_blank'); - }; - - - pJS.fn.vendors.loadImg = function(type){ - - pJS.tmp.img_error = undefined; - - if(pJS.particles.shape.image.src != ''){ - - if(type == 'svg'){ - - var xhr = new XMLHttpRequest(); - xhr.open('GET', pJS.particles.shape.image.src); - xhr.onreadystatechange = function (data) { - if(xhr.readyState == 4){ - if(xhr.status == 200){ - pJS.tmp.source_svg = data.currentTarget.response; - pJS.fn.vendors.checkBeforeDraw(); - }else{ - console.log('Error pJS - Image not found'); - pJS.tmp.img_error = true; - } - } - } - xhr.send(); - - }else{ - - var img = new Image(); - img.addEventListener('load', function(){ - pJS.tmp.img_obj = img; - pJS.fn.vendors.checkBeforeDraw(); - }); - img.src = pJS.particles.shape.image.src; - - } - - }else{ - console.log('Error pJS - No image.src'); - pJS.tmp.img_error = true; - } - - }; - - - pJS.fn.vendors.draw = function(){ - - if(pJS.particles.shape.type == 'image'){ - - if(pJS.tmp.img_type == 'svg'){ - - 
if(pJS.tmp.count_svg >= pJS.particles.number.value){ - pJS.fn.particlesDraw(); - if(!pJS.particles.move.enable) cancelRequestAnimFrame(pJS.fn.drawAnimFrame); - else pJS.fn.drawAnimFrame = requestAnimFrame(pJS.fn.vendors.draw); - }else{ - //console.log('still loading...'); - if(!pJS.tmp.img_error) pJS.fn.drawAnimFrame = requestAnimFrame(pJS.fn.vendors.draw); - } - - }else{ - - if(pJS.tmp.img_obj != undefined){ - pJS.fn.particlesDraw(); - if(!pJS.particles.move.enable) cancelRequestAnimFrame(pJS.fn.drawAnimFrame); - else pJS.fn.drawAnimFrame = requestAnimFrame(pJS.fn.vendors.draw); - }else{ - if(!pJS.tmp.img_error) pJS.fn.drawAnimFrame = requestAnimFrame(pJS.fn.vendors.draw); - } - - } - - }else{ - pJS.fn.particlesDraw(); - if(!pJS.particles.move.enable) cancelRequestAnimFrame(pJS.fn.drawAnimFrame); - else pJS.fn.drawAnimFrame = requestAnimFrame(pJS.fn.vendors.draw); - } - - }; - - - pJS.fn.vendors.checkBeforeDraw = function(){ - - // if shape is image - if(pJS.particles.shape.type == 'image'){ - - if(pJS.tmp.img_type == 'svg' && pJS.tmp.source_svg == undefined){ - pJS.tmp.checkAnimFrame = requestAnimFrame(check); - }else{ - //console.log('images loaded! cancel check'); - cancelRequestAnimFrame(pJS.tmp.checkAnimFrame); - if(!pJS.tmp.img_error){ - pJS.fn.vendors.init(); - pJS.fn.vendors.draw(); - } - - } - - }else{ - pJS.fn.vendors.init(); - pJS.fn.vendors.draw(); - } - - }; - - - pJS.fn.vendors.init = function(){ - - /* init canvas + particles */ - pJS.fn.retinaInit(); - pJS.fn.canvasInit(); - pJS.fn.canvasSize(); - pJS.fn.canvasPaint(); - pJS.fn.particlesCreate(); - pJS.fn.vendors.densityAutoParticles(); - - /* particles.line_linked - convert hex colors to rgb */ - pJS.particles.line_linked.color_rgb_line = hexToRgb(pJS.particles.line_linked.color); - - }; - - - pJS.fn.vendors.start = function(){ - - if(isInArray('image', pJS.particles.shape.type)){ - pJS.tmp.img_type = pJS.particles.shape.image.src.substr(pJS.particles.shape.image.src.length - 3); - pJS.fn.vendors.loadImg(pJS.tmp.img_type); - }else{ - pJS.fn.vendors.checkBeforeDraw(); - } - - }; - - - - - /* ---------- pJS - start ------------ */ - - - pJS.fn.vendors.eventsListeners(); - - pJS.fn.vendors.start(); - - - -}; - -/* ---------- global functions - vendors ------------ */ - -Object.deepExtend = function(destination, source) { - for (var property in source) { - if (source[property] && source[property].constructor && - source[property].constructor === Object) { - destination[property] = destination[property] || {}; - arguments.callee(destination[property], source[property]); - } else { - destination[property] = source[property]; - } - } - return destination; -}; - -window.requestAnimFrame = (function(){ - return window.requestAnimationFrame || - window.webkitRequestAnimationFrame || - window.mozRequestAnimationFrame || - window.oRequestAnimationFrame || - window.msRequestAnimationFrame || - function(callback){ - window.setTimeout(callback, 1000 / 60); - }; -})(); - -window.cancelRequestAnimFrame = ( function() { - return window.cancelAnimationFrame || - window.webkitCancelRequestAnimationFrame || - window.mozCancelRequestAnimationFrame || - window.oCancelRequestAnimationFrame || - window.msCancelRequestAnimationFrame || - clearTimeout -} )(); - -function hexToRgb(hex){ - // By Tim Down - http://stackoverflow.com/a/5624139/3493650 - // Expand shorthand form (e.g. "03F") to full form (e.g. 
"0033FF") - var shorthandRegex = /^#?([a-f\d])([a-f\d])([a-f\d])$/i; - hex = hex.replace(shorthandRegex, function(m, r, g, b) { - return r + r + g + g + b + b; - }); - var result = /^#?([a-f\d]{2})([a-f\d]{2})([a-f\d]{2})$/i.exec(hex); - return result ? { - r: parseInt(result[1], 16), - g: parseInt(result[2], 16), - b: parseInt(result[3], 16) - } : null; -}; - -function clamp(number, min, max) { - return Math.min(Math.max(number, min), max); -}; - -function isInArray(value, array) { - return array.indexOf(value) > -1; -} - - -/* ---------- particles.js functions - start ------------ */ - -window.pJSDom = []; - -window.particlesJS = function(tag_id, params){ - - //console.log(params); - - /* no string id? so it's object params, and set the id with default id */ - if(typeof(tag_id) != 'string'){ - params = tag_id; - tag_id = 'particles-js'; - } - - /* no id? set the id to default id */ - if(!tag_id){ - tag_id = 'particles-js'; - } - - /* pJS elements */ - var pJS_tag = document.getElementById(tag_id), - pJS_canvas_class = 'particles-js-canvas-el', - exist_canvas = pJS_tag.getElementsByClassName(pJS_canvas_class); - - /* remove canvas if exists into the pJS target tag */ - if(exist_canvas.length){ - while(exist_canvas.length > 0){ - pJS_tag.removeChild(exist_canvas[0]); - } - } - - /* create canvas element */ - var canvas_el = document.createElement('canvas'); - canvas_el.className = pJS_canvas_class; - - /* set size canvas */ - canvas_el.style.width = "100%"; - canvas_el.style.height = "100%"; - - /* append canvas */ - var canvas = document.getElementById(tag_id).appendChild(canvas_el); - - /* launch particle.js */ - if(canvas != null){ - pJSDom.push(new pJS(tag_id, params)); - } - -}; - -window.particlesJS.load = function(tag_id, path_config_json, callback){ - - /* load json config */ - var xhr = new XMLHttpRequest(); - xhr.open('GET', path_config_json); - xhr.onreadystatechange = function (data) { - if(xhr.readyState == 4){ - if(xhr.status == 200){ - var params = JSON.parse(data.currentTarget.response); - window.particlesJS(tag_id, params); - if(callback) callback(); - }else{ - console.log('Error pJS - XMLHttpRequest status: '+xhr.status); - console.log('Error pJS - File config not found'); - } - } - }; - xhr.send(); - -}; \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_F_F__2.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_F_F__2.py deleted file mode 100644 index edbb0b92f77e3198b55920879271f481082131ea..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/C_F_F__2.py +++ /dev/null @@ -1,13 +0,0 @@ -from io import BytesIO -from fontTools.ttLib.tables.C_F_F_ import table_C_F_F_ - - -class table_C_F_F__2(table_C_F_F_): - def decompile(self, data, otFont): - self.cff.decompile(BytesIO(data), otFont, isCFF2=True) - assert len(self.cff) == 1, "can't deal with multi-font CFF tables." 
- - def compile(self, otFont): - f = BytesIO() - self.cff.compile(f, otFont, isCFF2=True) - return f.getvalue() diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/F__e_a_t.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/F__e_a_t.py deleted file mode 100644 index fbcd6ca6e7bc0640263ddab74e1e1c89ea61bbfb..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/F__e_a_t.py +++ /dev/null @@ -1,144 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.fixedTools import floatToFixedToStr -from fontTools.misc.textTools import safeEval -from . import DefaultTable -from . import grUtils -import struct - -Feat_hdr_format = """ - > - version: 16.16F -""" - - -class table_F__e_a_t(DefaultTable.DefaultTable): - """The ``Feat`` table is used exclusively by the Graphite shaping engine - to store features and possible settings specified in GDL. Graphite features - determine what rules are applied to transform a glyph stream. - - Not to be confused with ``feat``, or the OpenType Layout tables - ``GSUB``/``GPOS``.""" - - def __init__(self, tag=None): - DefaultTable.DefaultTable.__init__(self, tag) - self.features = {} - - def decompile(self, data, ttFont): - (_, data) = sstruct.unpack2(Feat_hdr_format, data, self) - self.version = float(floatToFixedToStr(self.version, precisionBits=16)) - (numFeats,) = struct.unpack(">H", data[:2]) - data = data[8:] - allfeats = [] - maxsetting = 0 - for i in range(numFeats): - if self.version >= 2.0: - (fid, nums, _, offset, flags, lid) = struct.unpack( - ">LHHLHH", data[16 * i : 16 * (i + 1)] - ) - offset = int((offset - 12 - 16 * numFeats) / 4) - else: - (fid, nums, offset, flags, lid) = struct.unpack( - ">HHLHH", data[12 * i : 12 * (i + 1)] - ) - offset = int((offset - 12 - 12 * numFeats) / 4) - allfeats.append((fid, nums, offset, flags, lid)) - maxsetting = max(maxsetting, offset + nums) - data = data[16 * numFeats :] - allsettings = [] - for i in range(maxsetting): - if len(data) >= 4 * (i + 1): - (val, lid) = struct.unpack(">HH", data[4 * i : 4 * (i + 1)]) - allsettings.append((val, lid)) - for i, f in enumerate(allfeats): - (fid, nums, offset, flags, lid) = f - fobj = Feature() - fobj.flags = flags - fobj.label = lid - self.features[grUtils.num2tag(fid)] = fobj - fobj.settings = {} - fobj.default = None - fobj.index = i - for i in range(offset, offset + nums): - if i >= len(allsettings): - continue - (vid, vlid) = allsettings[i] - fobj.settings[vid] = vlid - if fobj.default is None: - fobj.default = vid - - def compile(self, ttFont): - fdat = b"" - vdat = b"" - offset = 0 - for f, v in sorted(self.features.items(), key=lambda x: x[1].index): - fnum = grUtils.tag2num(f) - if self.version >= 2.0: - fdat += struct.pack( - ">LHHLHH", - grUtils.tag2num(f), - len(v.settings), - 0, - offset * 4 + 12 + 16 * len(self.features), - v.flags, - v.label, - ) - elif fnum > 65535: # self healing for alphabetic ids - self.version = 2.0 - return self.compile(ttFont) - else: - fdat += struct.pack( - ">HHLHH", - grUtils.tag2num(f), - len(v.settings), - offset * 4 + 12 + 12 * len(self.features), - v.flags, - v.label, - ) - for s, l in sorted( - v.settings.items(), key=lambda x: (-1, x[1]) if x[0] == v.default else x - ): - vdat += struct.pack(">HH", s, l) - offset += len(v.settings) - hdr = sstruct.pack(Feat_hdr_format, self) - return hdr + struct.pack(">HHL", 
len(self.features), 0, 0) + fdat + vdat - - def toXML(self, writer, ttFont): - writer.simpletag("version", version=self.version) - writer.newline() - for f, v in sorted(self.features.items(), key=lambda x: x[1].index): - writer.begintag( - "feature", - fid=f, - label=v.label, - flags=v.flags, - default=(v.default if v.default else 0), - ) - writer.newline() - for s, l in sorted(v.settings.items()): - writer.simpletag("setting", value=s, label=l) - writer.newline() - writer.endtag("feature") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "version": - self.version = float(safeEval(attrs["version"])) - elif name == "feature": - fid = attrs["fid"] - fobj = Feature() - fobj.flags = int(safeEval(attrs["flags"])) - fobj.label = int(safeEval(attrs["label"])) - fobj.default = int(safeEval(attrs.get("default", "0"))) - fobj.index = len(self.features) - self.features[fid] = fobj - fobj.settings = {} - for element in content: - if not isinstance(element, tuple): - continue - tag, a, c = element - if tag == "setting": - fobj.settings[int(safeEval(a["value"]))] = int(safeEval(a["label"])) - - -class Feature(object): - pass diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipeline_utils.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipeline_utils.py deleted file mode 100644 index 5c0c2337dc048dd9ef164ac5cb92e4bf5e62d764..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipeline_utils.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and - -# limitations under the License. - -# NOTE: This file is deprecated and will be removed in a future version. -# It only exists so that temporarely `from diffusers.pipelines import DiffusionPipeline` works - -from .pipelines import DiffusionPipeline, ImagePipelineOutput # noqa: F401 diff --git a/spaces/deepwisdom/MetaGPT/metagpt/learn/google_search.py b/spaces/deepwisdom/MetaGPT/metagpt/learn/google_search.py deleted file mode 100644 index ef099fe948c42b6ccfd8cbacdda0a7efa255de59..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/learn/google_search.py +++ /dev/null @@ -1,12 +0,0 @@ -from metagpt.tools.search_engine import SearchEngine - - -async def google_search(query: str, max_results: int = 6, **kwargs): - """Perform a web search and retrieve search results. - - :param query: The search query. - :param max_results: The number of search results to retrieve - :return: The web search results in markdown format. - """ - resluts = await SearchEngine().run(query, max_results=max_results, as_string=False) - return "\n".join(f"{i}. 
[{j['title']}]({j['link']}): {j['snippet']}" for i, j in enumerate(resluts, 1)) diff --git a/spaces/deepwisdom/MetaGPT/tests/metagpt/tools/test_search_engine_meilisearch.py b/spaces/deepwisdom/MetaGPT/tests/metagpt/tools/test_search_engine_meilisearch.py deleted file mode 100644 index 8d2bb64942f521af45edf60df2c4e6e9d9d36fab..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/tests/metagpt/tools/test_search_engine_meilisearch.py +++ /dev/null @@ -1,46 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/27 22:18 -@Author : alexanderwu -@File : test_search_engine_meilisearch.py -""" -import subprocess -import time - -import pytest - -from metagpt.logs import logger -from metagpt.tools.search_engine_meilisearch import DataSource, MeilisearchEngine - -MASTER_KEY = '116Qavl2qpCYNEJNv5-e0RC9kncev1nr1gt7ybEGVLk' - - -@pytest.fixture() -def search_engine_server(): - meilisearch_process = subprocess.Popen(["meilisearch", "--master-key", f"{MASTER_KEY}"], stdout=subprocess.PIPE) - time.sleep(3) - yield - meilisearch_process.terminate() - meilisearch_process.wait() - - -def test_meilisearch(search_engine_server): - search_engine = MeilisearchEngine(url="http://localhost:7700", token=MASTER_KEY) - - # 假设有一个名为"books"的数据源,包含要添加的文档库 - books_data_source = DataSource(name='books', url='https://example.com/books') - - # 假设有一个名为"documents"的文档库,包含要添加的文档 - documents = [ - {"id": 1, "title": "Book 1", "content": "This is the content of Book 1."}, - {"id": 2, "title": "Book 2", "content": "This is the content of Book 2."}, - {"id": 3, "title": "Book 1", "content": "This is the content of Book 1."}, - {"id": 4, "title": "Book 2", "content": "This is the content of Book 2."}, - {"id": 5, "title": "Book 1", "content": "This is the content of Book 1."}, - {"id": 6, "title": "Book 2", "content": "This is the content of Book 2."}, - ] - - # 添加文档库到搜索引擎 - search_engine.add_documents(books_data_source, documents) - logger.info(search_engine.search('Book 1')) diff --git a/spaces/diacanFperku/AutoGPT/HD Online Player (Modiac Video Converter 2.5.0.4164 Ke) !FREE!.md b/spaces/diacanFperku/AutoGPT/HD Online Player (Modiac Video Converter 2.5.0.4164 Ke) !FREE!.md deleted file mode 100644 index f25fb3d03ceca0d043eb013aad2d7cf85a890618..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/HD Online Player (Modiac Video Converter 2.5.0.4164 Ke) !FREE!.md +++ /dev/null @@ -1,6 +0,0 @@ -

    HD Online Player (Modiac Video Converter 2.5.0.4164 ke)


      Download https://gohhs.com/2uFTaP
      



    -
    -AMV format, so my son could watch them on his inexpensive MP3/Video player. Many online conversion tools I checked out didn't have the right ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Mikra Anglia (2013) DVDRip Greek X264 AC3-N3kr4 Amantes Marchas Prel !EXCLUSIVE!.md b/spaces/diacanFperku/AutoGPT/Mikra Anglia (2013) DVDRip Greek X264 AC3-N3kr4 Amantes Marchas Prel !EXCLUSIVE!.md deleted file mode 100644 index 82f3bc97c9ddb3c9eb68edba63b94d9f2ed91ff3..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Mikra Anglia (2013) DVDRip Greek X264 AC3-N3kr4 Amantes Marchas Prel !EXCLUSIVE!.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Mikra Anglia (2013) DVDRip Greek X264 AC3-N3kr4 amantes marchas prel


    Download File 🗹 https://gohhs.com/2uFUS4



    -
    -Set during the 1930's on the Greek island of Andros, a Cyclades archipelago with a long history of military embroilment and seafaring turmoil, Little England is ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/digitalxingtong/Nanami-Bert-VITS2/attentions.py b/spaces/digitalxingtong/Nanami-Bert-VITS2/attentions.py deleted file mode 100644 index ecbdbc8be941a962046fc11fd6739b093112123e..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Nanami-Bert-VITS2/attentions.py +++ /dev/null @@ -1,343 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -from torch.nn.utils import weight_norm, remove_weight_norm -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, isflow = True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - if isflow: - cond_layer = torch.nn.Conv1d(256, 2*hidden_channels*n_layers, 1) - self.cond_pre = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, 1) - self.cond_layer = weight_norm(cond_layer, name='weight') - self.gin_channels = 256 - self.cond_layer_idx = self.n_layers - if 'gin_channels' in kwargs: - self.gin_channels = kwargs['gin_channels'] - if self.gin_channels != 0: - self.spk_emb_linear = nn.Linear(self.gin_channels, self.hidden_channels) - # vits2 says 3rd block, so idx is 2 by default - self.cond_layer_idx = kwargs['cond_layer_idx'] if 'cond_layer_idx' in kwargs else 2 - print(self.gin_channels, self.cond_layer_idx) - assert self.cond_layer_idx < self.n_layers, 'cond_layer_idx should be less than n_layers' - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - def forward(self, x, x_mask, g=None): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - if i == self.cond_layer_idx and g is not None: - g = self.spk_emb_linear(g.transpose(1, 2)) - g = g.transpose(1, 2) - x = x + g - x = x * x_mask - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, 
filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, 
v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). 
- x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/digitalxingtong/Nanami-Bert-VITS2/text/chinese_bert.py b/spaces/digitalxingtong/Nanami-Bert-VITS2/text/chinese_bert.py deleted file mode 100644 index cb84ce0b426cd0a1c7954ddcdf41322c10ed14fa..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Nanami-Bert-VITS2/text/chinese_bert.py +++ /dev/null @@ -1,50 +0,0 @@ -import torch -from transformers import AutoTokenizer, AutoModelForMaskedLM - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -tokenizer = AutoTokenizer.from_pretrained("./bert/chinese-roberta-wwm-ext-large") -model = AutoModelForMaskedLM.from_pretrained("./bert/chinese-roberta-wwm-ext-large").to(device) - -def get_bert_feature(text, word2ph): - with torch.no_grad(): - inputs = tokenizer(text, return_tensors='pt') - for i in inputs: - inputs[i] = inputs[i].to(device) - res = model(**inputs, output_hidden_states=True) - res = 
torch.cat(res['hidden_states'][-3:-2], -1)[0].cpu() - - assert len(word2ph) == len(text)+2 - word2phone = word2ph - phone_level_feature = [] - for i in range(len(word2phone)): - repeat_feature = res[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - - - return phone_level_feature.T - -if __name__ == '__main__': - # feature = get_bert_feature('你好,我是说的道理。') - import torch - - word_level_feature = torch.rand(38, 1024) # 12个词,每个词1024维特征 - word2phone = [1, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1] - - # 计算总帧数 - total_frames = sum(word2phone) - print(word_level_feature.shape) - print(word2phone) - phone_level_feature = [] - for i in range(len(word2phone)): - print(word_level_feature[i].shape) - - # 对每个词重复word2phone[i]次 - repeat_feature = word_level_feature[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - print(phone_level_feature.shape) # torch.Size([36, 1024]) - diff --git a/spaces/docs-demos/pegasus_paraphrase/app.py b/spaces/docs-demos/pegasus_paraphrase/app.py deleted file mode 100644 index 998642b0cb36411262c3c5aac9d58b3892edb9a7..0000000000000000000000000000000000000000 --- a/spaces/docs-demos/pegasus_paraphrase/app.py +++ /dev/null @@ -1,32 +0,0 @@ -import gradio as gr - -title = "Pegasus" - -description = "Gradio Demo for Pegasus. To use it, simply add your text, or click one of the examples to load them. Read more at the links below." - -article = "

    PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization

    " - -examples = [ - ['The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.',"pegasus-xsum"] -] - -io1 = gr.Interface.load("huggingface/google/pegasus-xsum") - -io2 = gr.Interface.load("huggingface/google/pegasus-large") - -def inference(text, model): - if model == "pegasus-xsum": - outtext = io1(text) - else: - outtext = io2(text) - return outtext - - -gr.Interface( - inference, - [gr.inputs.Textbox(label="Input",lines=10),gr.inputs.Dropdown(choices=["pegasus-xsum","pegasus-large"], type="value", default="pegasus-xsum", label="model")], - [gr.outputs.Textbox(label="Output")], - examples=examples, - article=article, - title=title, - description=description).launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/dorkai/text-generation-webui-main/modules/monkey_patch_gptq_lora.py b/spaces/dorkai/text-generation-webui-main/modules/monkey_patch_gptq_lora.py deleted file mode 100644 index a37e790671f513b6a5744cc469424a967a75d43b..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/modules/monkey_patch_gptq_lora.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copied from https://github.com/johnsmith0031/alpaca_lora_4bit - -import sys -from pathlib import Path - -sys.path.insert(0, str(Path("repositories/alpaca_lora_4bit"))) - -import autograd_4bit -from amp_wrapper import AMPWrapper -from autograd_4bit import (Autograd4bitQuantLinear, - load_llama_model_4bit_low_ram) -from monkeypatch.peft_tuners_lora_monkey_patch import ( - Linear4bitLt, replace_peft_model_with_gptq_lora_model) - -from modules import shared -from modules.GPTQ_loader import find_quantized_model_file - -replace_peft_model_with_gptq_lora_model() - - -def load_model_llama(model_name): - config_path = str(Path(f'{shared.args.model_dir}/{model_name}')) - model_path = str(find_quantized_model_file(model_name)) - model, tokenizer = load_llama_model_4bit_low_ram(config_path, model_path, groupsize=shared.args.groupsize, is_v1_model=False) - for n, m in model.named_modules(): - if isinstance(m, Autograd4bitQuantLinear) or isinstance(m, Linear4bitLt): - if m.is_v1_model: - m.zeros = m.zeros.half() - m.scales = m.scales.half() - m.bias = m.bias.half() - - autograd_4bit.use_new = True - autograd_4bit.auto_switch = True - - model.half() - wrapper = AMPWrapper(model) - wrapper.apply_generate() - - return model, tokenizer diff --git a/spaces/ds520/bingo/src/lib/bots/bing/utils.ts b/spaces/ds520/bingo/src/lib/bots/bing/utils.ts deleted file mode 100644 index 64b4b96452d125346b0fc4436b5f7c18c962df0b..0000000000000000000000000000000000000000 --- a/spaces/ds520/bingo/src/lib/bots/bing/utils.ts +++ /dev/null @@ -1,87 +0,0 @@ -import { ChatResponseMessage, BingChatResponse } from './types' - -export function convertMessageToMarkdown(message: ChatResponseMessage): string { - if (message.messageType === 'InternalSearchQuery') { 
- return message.text - } - for (const card of message.adaptiveCards??[]) { - for (const block of card.body) { - if (block.type === 'TextBlock') { - return block.text - } - } - } - return '' -} - -const RecordSeparator = String.fromCharCode(30) - -export const websocketUtils = { - packMessage(data: any) { - return `${JSON.stringify(data)}${RecordSeparator}` - }, - unpackMessage(data: string | ArrayBuffer | Blob) { - if (!data) return {} - return data - .toString() - .split(RecordSeparator) - .filter(Boolean) - .map((s) => { - try { - return JSON.parse(s) - } catch (e) { - return {} - } - }) - }, -} - -export async function createImage(prompt: string, id: string, headers: HeadersInit): Promise { - const { headers: responseHeaders } = await fetch(`https://www.bing.com/images/create?partner=sydney&re=1&showselective=1&sude=1&kseed=7000&SFX=&q=${encodeURIComponent(prompt)}&iframeid=${id}`, - { - method: 'HEAD', - headers, - redirect: 'manual' - }, - ); - - if (!/&id=([^&]+)$/.test(responseHeaders.get('location') || '')) { - throw new Error('请求异常,请检查 cookie 是否有效') - } - - const resultId = RegExp.$1; - let count = 0 - const imageThumbUrl = `https://www.bing.com/images/create/async/results/${resultId}?q=${encodeURIComponent(prompt)}&partner=sydney&showselective=1&IID=images.as`; - - do { - await sleep(3000); - const content = await fetch(imageThumbUrl, { headers, method: 'GET' }) - - // @ts-ignore - if (content.headers.get('content-length') > 1) { - const text = await content.text() - return (text?.match(/ target?.split('src="').pop()?.replace(/&/g, '&')) - .map(img => `![${prompt}](${img})`).join(' ') - } - } while(count ++ < 10); -} - - -export async function* streamAsyncIterable(stream: ReadableStream) { - const reader = stream.getReader() - try { - while (true) { - const { done, value } = await reader.read() - if (done) { - return - } - yield value - } - } finally { - reader.releaseLock() - } -} - -export const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms)) - diff --git a/spaces/dylanebert/gaussian-viewer/LICENSE.md b/spaces/dylanebert/gaussian-viewer/LICENSE.md deleted file mode 100644 index 212de3c72ef023c30539016f33482fe59dbd24f7..0000000000000000000000000000000000000000 --- a/spaces/dylanebert/gaussian-viewer/LICENSE.md +++ /dev/null @@ -1,18 +0,0 @@ -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
\ No newline at end of file diff --git a/spaces/eaglelandsonce/weatherQnA/app.py b/spaces/eaglelandsonce/weatherQnA/app.py deleted file mode 100644 index 6fa2a98bc1de8df1f6db2f54766e497b7a8b3409..0000000000000000000000000000000000000000 --- a/spaces/eaglelandsonce/weatherQnA/app.py +++ /dev/null @@ -1,40 +0,0 @@ -import streamlit as st -from langchain.llms import OpenAI -from langchain.agents import load_tools, initialize_agent, AgentType -import os - -# Set up Streamlit interface -st.title('Weather Q&A using Langchain') -# Adding the markdown message -st.markdown(""" -I'm genuinely impressed. Leveraging prompt engineering, I was able to craft this program in just 5 minutes, and it's fully functional! All I did was instruct ChatGPT to integrate langchain and streamlit, set up inputs for the API keys, pose a weather-related question, and use the details from the [LangChain OpenWeatherMap link](https://python.langchain.com/docs/integrations/tools/openweathermap) as a coding and output guide. Now, envisioning a solution is all it takes. It's auto-magical! I may have been a terrible programmer, but I\'m an amazing prompt engineer, bless the Lord! -""") - -st.sidebar.header('API Configuration') - -# Input for OpenAI API key and OpenWeather API key in the Streamlit sidebar -os.environ["OPENAI_API_KEY"] = st.sidebar.text_input('OpenAI API Key:', value='', type='password') -os.environ["OPENWEATHERMAP_API_KEY"] = st.sidebar.text_input('OpenWeather API Key:', value='', type='password') - -# Input for question about the weather -question = st.text_input('Ask a question about the weather (e.g., "What\'s the weather like in London?"):') - -# Initialize Langchain's OpenAI and agent_chain only once API keys are provided -if os.environ["OPENAI_API_KEY"] and os.environ["OPENWEATHERMAP_API_KEY"]: - try: - llm = OpenAI(temperature=0) - tools = load_tools(["openweathermap-api"], llm) - agent_chain = initialize_agent( - tools=tools, llm=llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True - ) - - # If a question is provided, proceed to get an answer - if question: - response = agent_chain.run(question) - st.write(response) - except Exception as e: - st.warning("There was an error processing your request.") - st.write(f"Details: {e}") - st.write("Please provide more specific information. For example, you may need to provide the country sucn as Florence Kentucky US.") -else: - st.warning("Please provide your API keys in the left sidebar!") diff --git a/spaces/edugp/embedding-lenses/README.md b/spaces/edugp/embedding-lenses/README.md deleted file mode 100644 index 3d89bebefa130d73b4955f11ac55bca6bbea542a..0000000000000000000000000000000000000000 --- a/spaces/edugp/embedding-lenses/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Embedding Lenses -emoji: 😻 -colorFrom: red -colorTo: yellow -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. 
- -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/emc348/faces-through-time/models/StyleCLIP/mapper/scripts/train.py b/spaces/emc348/faces-through-time/models/StyleCLIP/mapper/scripts/train.py deleted file mode 100644 index 4141436fb3edee8ab5f7576fde0c0e53b529ef66..0000000000000000000000000000000000000000 --- a/spaces/emc348/faces-through-time/models/StyleCLIP/mapper/scripts/train.py +++ /dev/null @@ -1,32 +0,0 @@ -""" -This file runs the main training/val loop -""" -import os -import json -import sys -import pprint - -sys.path.append(".") -sys.path.append("..") - -from mapper.options.train_options import TrainOptions -from mapper.training.coach import Coach - - -def main(opts): - if os.path.exists(opts.exp_dir): - raise Exception('Oops... {} already exists'.format(opts.exp_dir)) - os.makedirs(opts.exp_dir, exist_ok=True) - - opts_dict = vars(opts) - pprint.pprint(opts_dict) - with open(os.path.join(opts.exp_dir, 'opt.json'), 'w') as f: - json.dump(opts_dict, f, indent=4, sort_keys=True) - - coach = Coach(opts) - coach.train() - - -if __name__ == '__main__': - opts = TrainOptions().parse() - main(opts) diff --git a/spaces/erastorgueva-nv/NeMo-Forced-Aligner/utils/make_output_manifest.py b/spaces/erastorgueva-nv/NeMo-Forced-Aligner/utils/make_output_manifest.py deleted file mode 100644 index 7ee3fc77f7ab54809df831b3bca8511be9aa467d..0000000000000000000000000000000000000000 --- a/spaces/erastorgueva-nv/NeMo-Forced-Aligner/utils/make_output_manifest.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import json - - -def write_manifest_out_line( - f_manifest_out, utt_obj, -): - - data = {"audio_filepath": utt_obj.audio_filepath} - if not utt_obj.text is None: - data["text"] = utt_obj.text - - if not utt_obj.pred_text is None: - data["pred_text"] = utt_obj.pred_text - - for key, val in utt_obj.saved_output_files.items(): - data[key] = val - - new_line = json.dumps(data) - f_manifest_out.write(f"{new_line}\n") - - return None diff --git "a/spaces/f2api/gpt-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" "b/spaces/f2api/gpt-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" deleted file mode 100644 index 3da831fd07e361a532777c83bb02cff265b94abd..0000000000000000000000000000000000000000 --- "a/spaces/f2api/gpt-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" +++ /dev/null @@ -1,194 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file, get_conf -import re, requests, unicodedata, os -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -def download_arxiv_(url_pdf): - if 'arxiv.org' not in url_pdf: - if ('.' in url_pdf) and ('/' not in url_pdf): - new_url = 'https://arxiv.org/abs/'+url_pdf - print('下载编号:', url_pdf, '自动定位:', new_url) - # download_arxiv_(new_url) - return download_arxiv_(new_url) - else: - print('不能识别的URL!') - return None - if 'abs' in url_pdf: - url_pdf = url_pdf.replace('abs', 'pdf') - url_pdf = url_pdf + '.pdf' - - url_abs = url_pdf.replace('.pdf', '').replace('pdf', 'abs') - title, other_info = get_name(_url_=url_abs) - - paper_id = title.split()[0] # '[1712.00559]' - if '2' in other_info['year']: - title = other_info['year'] + ' ' + title - - known_conf = ['NeurIPS', 'NIPS', 'Nature', 'Science', 'ICLR', 'AAAI'] - for k in known_conf: - if k in other_info['comment']: - title = k + ' ' + title - - download_dir = './gpt_log/arxiv/' - os.makedirs(download_dir, exist_ok=True) - - title_str = title.replace('?', '?')\ - .replace(':', ':')\ - .replace('\"', '“')\ - .replace('\n', '')\ - .replace(' ', ' ')\ - .replace(' ', ' ') - - requests_pdf_url = url_pdf - file_path = download_dir+title_str - # if os.path.exists(file_path): - # print('返回缓存文件') - # return './gpt_log/arxiv/'+title_str - - print('下载中') - proxies, = get_conf('proxies') - r = requests.get(requests_pdf_url, proxies=proxies) - with open(file_path, 'wb+') as f: - f.write(r.content) - print('下载完成') - - # print('输出下载命令:','aria2c -o \"%s\" %s'%(title_str,url_pdf)) - # subprocess.call('aria2c --all-proxy=\"172.18.116.150:11084\" -o \"%s\" %s'%(download_dir+title_str,url_pdf), shell=True) - - x = "%s %s %s.bib" % (paper_id, other_info['year'], other_info['authors']) - x = x.replace('?', '?')\ - .replace(':', ':')\ - .replace('\"', '“')\ - .replace('\n', '')\ - .replace(' ', ' ')\ - .replace(' ', ' ') - return './gpt_log/arxiv/'+title_str, other_info - - -def get_name(_url_): - import os - from bs4 import BeautifulSoup - print('正在获取文献名!') - print(_url_) - - # arxiv_recall = {} - # if os.path.exists('./arxiv_recall.pkl'): - # with open('./arxiv_recall.pkl', 'rb') as f: - # arxiv_recall = pickle.load(f) - - # if _url_ in arxiv_recall: - # print('在缓存中') - # return arxiv_recall[_url_] - - proxies, = get_conf('proxies') - res = requests.get(_url_, proxies=proxies) - - bs = BeautifulSoup(res.text, 
'html.parser') - other_details = {} - - # get year - try: - year = bs.find_all(class_='dateline')[0].text - year = re.search(r'(\d{4})', year, re.M | re.I).group(1) - other_details['year'] = year - abstract = bs.find_all(class_='abstract mathjax')[0].text - other_details['abstract'] = abstract - except: - other_details['year'] = '' - print('年份获取失败') - - # get author - try: - authors = bs.find_all(class_='authors')[0].text - authors = authors.split('Authors:')[1] - other_details['authors'] = authors - except: - other_details['authors'] = '' - print('authors获取失败') - - # get comment - try: - comment = bs.find_all(class_='metatable')[0].text - real_comment = None - for item in comment.replace('\n', ' ').split(' '): - if 'Comments' in item: - real_comment = item - if real_comment is not None: - other_details['comment'] = real_comment - else: - other_details['comment'] = '' - except: - other_details['comment'] = '' - print('年份获取失败') - - title_str = BeautifulSoup( - res.text, 'html.parser').find('title').contents[0] - print('获取成功:', title_str) - # arxiv_recall[_url_] = (title_str+'.pdf', other_details) - # with open('./arxiv_recall.pkl', 'wb') as f: - # pickle.dump(arxiv_recall, f) - - return title_str+'.pdf', other_details - - - -@CatchException -def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - - CRAZY_FUNCTION_INFO = "下载arxiv论文并翻译摘要,函数插件作者[binary-husky]。正在提取摘要并下载PDF文档……" - import glob - import os - - # 基本信息:功能、贡献者 - chatbot.append(["函数插件功能?", CRAZY_FUNCTION_INFO]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import pdfminer, bs4 - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - # 提取摘要,下载PDF文档 - try: - pdf_path, info = download_arxiv_(txt) - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"下载pdf文件未成功") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 翻译摘要等 - i_say = f"请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。材料如下:{str(info)}" - i_say_show_user = f'请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。论文:{pdf_path}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - msg = '正常' - # ** gpt request ** - # 单线,获取文章meta信息 - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, history=[], - sys_prompt="Your job is to collect information from materials and translate to Chinese。", - ) - - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - # 写入文件 - import shutil - # 重置文件的创建时间 - shutil.copyfile(pdf_path, f'./gpt_log/{os.path.basename(pdf_path)}'); os.remove(pdf_path) - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res + "\n\nPDF文件也已经下载")) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - diff --git a/spaces/falterWliame/Face_Mask_Detection/Dispensary Management Software Free Download BEST.md b/spaces/falterWliame/Face_Mask_Detection/Dispensary Management Software Free Download BEST.md deleted file mode 100644 index ef3819e5d04d0da56803a9cb5cc147a776133c8b..0000000000000000000000000000000000000000 --- 
a/spaces/falterWliame/Face_Mask_Detection/Dispensary Management Software Free Download BEST.md +++ /dev/null @@ -1,44 +0,0 @@ -
    -

    How to Find the Best Dispensary Management Software for Free

    -

    If you are running a dispensary or a pharmacy, you know how important it is to have a reliable and efficient software system to manage your inventory, sales, billing, and reporting. However, you may also be aware of how expensive some of the commercial software solutions can be. Fortunately, there are some free and open-source alternatives that can help you run your dispensary smoothly and save money.

    -

    dispensary management software free download


    Download Zip ===> https://urlca.com/2uDcq1



    -

    In this article, we will introduce you to some of the best free and open-source dispensary management software solutions that you can download and use right away. We will also highlight their features, benefits, and drawbacks, so you can make an informed decision.

    -

    What is Dispensary Management Software?

    -

    Dispensary management software is a type of software that helps dispensary owners and managers to manage various aspects of their business, such as:

    -
      -
    • Inventory management: track the stock levels, expiry dates, batch numbers, and locations of your products.
    • -
    • Sales management: process orders, generate invoices, accept payments, and issue receipts.
    • -
    • Billing management: create and send bills to customers, insurance companies, or third-party payers.
    • -
    • Reporting management: generate and analyze reports on sales, expenses, profits, taxes, and more.
    • -
    • Customer management: store and access customer information, such as name, address, phone number, medical history, prescriptions, etc.
    • -
    • Employee management: manage employee schedules, salaries, commissions, attendance, and performance.
    • -
    -

    Dispensary management software can help you improve your operational efficiency, reduce errors and wastage, increase customer satisfaction and loyalty, comply with legal and regulatory requirements, and grow your business.
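
      To make the inventory-management idea above more concrete, here is a minimal, purely illustrative Python sketch of the kind of record such software keeps and a simple low-stock/expiry check. The field names, thresholds, and sample values are assumptions for illustration only and are not taken from any specific product.

      ```python
      from dataclasses import dataclass
      from datetime import date


      @dataclass
      class InventoryItem:
          # Hypothetical fields mirroring the capabilities listed above
          name: str
          batch_number: str
          location: str
          stock_level: int
          reorder_point: int
          expiry_date: date


      def needs_attention(item: InventoryItem, today: date) -> list:
          """Return warnings for an item that is low on stock or past its expiry date."""
          warnings = []
          if item.stock_level <= item.reorder_point:
              warnings.append(f"{item.name} (batch {item.batch_number}) is below its reorder point")
          if item.expiry_date <= today:
              warnings.append(f"{item.name} (batch {item.batch_number}) has expired")
          return warnings


      # Example with made-up data
      item = InventoryItem("Paracetamol 500mg", "B1234", "Shelf A3", 12, 20, date(2024, 1, 31))
      for warning in needs_attention(item, date(2024, 2, 1)):
          print(warning)
      ```

      A full dispensary system would keep such records in a database and link them to the sales, billing, and reporting functions above, but checks like these are the core of what the inventory module automates.
      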

    -

    -

    What are the Benefits of Free and Open-Source Dispensary Management Software?

    -

    Free and open-source dispensary management software solutions have some advantages over their paid counterparts, such as:

    -
      -
    • Cost-effectiveness: you don't have to pay any license fees or subscription fees to use them. You can also save on maintenance and support costs.
    • -
    • Customizability: you can modify the source code of the software to suit your specific needs and preferences. You can also add new features or integrate with other applications.
    • -
    • Community support: you can benefit from the knowledge and experience of other users and developers who use the same software. You can also contribute to the improvement of the software by reporting bugs or suggesting enhancements.
    • -
    -

    What are the Drawbacks of Free and Open-Source Dispensary Management Software?

    -

    Free and open-source dispensary management software solutions also have some disadvantages compared to their paid counterparts, such as:

    -
      -
    • Limited functionality: some of the free and open-source software may not have all the features that you need or want. You may have to compromise on some aspects of your business operations or look for additional tools.
    • -
    • Lack of technical support: some of the free and open-source software may not have a dedicated customer service team or a professional technical support team. You may have to rely on online forums or documentation for help.
    • -
    • Security risks: some of the free and open-source software may not have adequate security measures or updates to protect your data from hackers or malware. You may have to take extra precautions to safeguard your data.
    • -
    -

    What are Some of the Best Free and Open-Source Dispensary Management Software Solutions?

    -

    There are many free and open-source dispensary management software solutions available online. However, not all of them are suitable for your business needs. Here are some of the best ones that we recommend:

    - -

    RxBLU

    -

      RxBLU is a free pharmacy management software that helps pharmacies handle prescriptions, deliveries, point-of-sale processes, and more[^1^]. It has a simple and easy-to-use interface, powerful reporting tools, invoicing capabilities, monthly statistics tracking, and advanced item tracking. RxBLU is compatible with Windows and macOS operating systems[^1^].
      

    - -

    RxVantage

    -

    RxVantage is a free rep management platform that helps practices stay at

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/DriverSHARPAR5618SforWindowsXP32bitfree !!EXCLUSIVE!!.md b/spaces/falterWliame/Face_Mask_Detection/DriverSHARPAR5618SforWindowsXP32bitfree !!EXCLUSIVE!!.md deleted file mode 100644 index 6587c1e6149ab3e6da19af83f4d3873cbadf5af1..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/DriverSHARPAR5618SforWindowsXP32bitfree !!EXCLUSIVE!!.md +++ /dev/null @@ -1,34 +0,0 @@ - -

    How to Download and Install Driver SHARP AR-5618S for Windows XP 32 bit for Free

    -

    If you are looking for a fast, fully featured A3 MFP that can handle your daily B/W printing, colour scanning and copying needs, you might want to consider the SHARP AR-5618S. This machine has a robust and compact design, advanced functionality and impressive performance. However, to make the most of its features, you need to install the correct driver for your operating system.

    -

    In this article, we will show you how to download and install Driver SHARP AR-5618S for Windows XP 32 bit for free. This driver is compatible with Windows XP/Vista/7/8/8.1/10 32 bit SPLC & TWAIN drivers[^1^]. It also supports the Button Manager AA software, which is required for USB scanning[^2^]. Follow these simple steps to get your SHARP AR-5618S up and running in no time.

    -

    DriverSHARPAR5618SforWindowsXP32bitfree


    Download File ✪✪✪ https://urlca.com/2uDc72



    -

    Step 1: Download the Driver

    -

    The first step is to download the driver from the official SHARP website. You can find the driver by searching for "AR-5618" in the Driver Downloads section[^1^]. Alternatively, you can use this direct link[^3^] to download the ZIP file that contains the driver. The file size is about 10 MB and it was last updated on March 17, 2016.

    -

    Step 2: Extract the Driver

    -

    Once you have downloaded the driver, you need to extract it to a folder on your computer. You can use any ZIP extraction software, such as WinZip or 7-Zip, to do this. Just right-click on the ZIP file and choose "Extract All" or "Extract Here". You will see a folder named "SMON42_2201a_ALL" that contains the driver files.

    -

    Step 3: Install the Driver

    -

    Now that you have extracted the driver, you can proceed to install it on your computer. To do this, follow these steps:

    -
      -
    1. Open the "SMON42_2201a_ALL" folder and double-click on the "Setup.exe" file.
    2. -
    3. Follow the on-screen instructions to complete the installation process. You may need to restart your computer after the installation.
    4. -
    5. Connect your SHARP AR-5618S to your computer via USB cable or network cable.
    6. -
    7. Go to "Control Panel" > "Printers and Faxes" and select your SHARP AR-5618S printer.
    8. -
    9. Right-click on the printer icon and choose "Properties".
    10. -
    11. Go to the "Ports" tab and make sure that the correct port is selected for your printer.
    12. -
    13. Click "OK" to save the settings.
    14. -
    -

    Congratulations! You have successfully installed Driver SHARP AR-5618S for Windows XP 32 bit for free. You can now enjoy all the features and functions of your SHARP AR-5618S printer.

    -

    Troubleshooting Tips

    -

    If you encounter any problems or errors while downloading or installing the driver, here are some troubleshooting tips that might help:

    -

    -
      -
                        • Make sure that your computer meets the minimum system requirements for the driver. You need at least Windows XP SP3 32 bit, 512 MB of RAM and 100 MB of free disk space.
                        • Make sure that your internet connection is stable and fast enough to download the driver without interruption.
                        • Make sure that your antivirus software or firewall does not block or interfere with the driver download or installation.
                        • Make sure that you download the driver from a trusted source, such as the official SHARP website[^1^]. Do not download or install any drivers from unknown or suspicious websites.
                        • Make sure that you extract the driver files properly and do not delete or modify any files in the folder.
                        • Make sure that you follow the installation instructions carefully and do not skip any steps.
                    
    • If you have any questions or need any assistance, you can contact SHARP customer support

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/HACK HP Windows 8.1 Pro 64bit Multilanguage OEM DVD [WORK].md b/spaces/falterWliame/Face_Mask_Detection/HACK HP Windows 8.1 Pro 64bit Multilanguage OEM DVD [WORK].md deleted file mode 100644 index 798e8408086b7a11c572e9286c2d867e686f7ec0..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/HACK HP Windows 8.1 Pro 64bit Multilanguage OEM DVD [WORK].md +++ /dev/null @@ -1,6 +0,0 @@ -

      HACK HP Windows 8.1 Pro 64bit Multilanguage OEM DVD


      Download Ziphttps://urlca.com/2uDcg3



      -
      -This software will have its flash tool removed soon, as the free flashhack tool ... ON WINDOWS ( 7 , 8 , 10 X32 & X64) >>> Download & Install & Activate with. ... Insert the Flash Files CD or DVD into the CD drive or DVD drive of your computer. ... Cat will not let you go from a 475 hp or even factory 550 hp to a 600 or 625 hp. 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/fatiXbelha/sd/Basket Manager 2017 A Realistic and Addictive Basketball Management Game for Android.md b/spaces/fatiXbelha/sd/Basket Manager 2017 A Realistic and Addictive Basketball Management Game for Android.md deleted file mode 100644 index d7f510a588fc301a06ec0a2960a477582765f260..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Basket Manager 2017 A Realistic and Addictive Basketball Management Game for Android.md +++ /dev/null @@ -1,148 +0,0 @@ -
      -

      Basket Manager 2017 APK: A Free Basketball Game for Android

      -

      If you are a fan of basketball and want to experience the thrill of managing a professional team, then you should try Basket Manager 2017 APK. This is a free basketball game for Android devices that lets you take control of a team in a realistic league. You can choose from over 30 teams, sign players, train them, compete in matches, and win trophies. In this article, we will tell you everything you need to know about Basket Manager 2017 APK, including its features, how to download and install it, why you should play it, and some tips and tricks to help you succeed.

      -

      What is Basket Manager 2017 APK?

      -

      Basket Manager 2017 APK is a modified version of the original game Basket Manager 2017 Free, which is available on Google Play Store. The APK version has some advantages over the official version, such as:

      -

      basket manager 2017 apk


      Download Zip ——— https://urllie.com/2uNAUe



      -
        -
                        • It does not require an internet connection to play.
                        • It has no ads or in-app purchases.
                        • It has updated rosters and ratings for the 2020-2021 season.
                        • It has a new app icon and name.
                    
      -

      The game was created by SubtleLies, a Reddit user who shared it on the r/basketmanager subreddit. You can download it from MediaFire or Aptoide. The game is compatible with Android 4.0.3 and up.

      -

      Features of Basket Manager 2017 APK

      -

      Basket Manager 2017 APK has many features that make it an enjoyable and challenging basketball game. Some of these features are:

      -
        -
                        • You can choose from over 30 teams from different countries, such as USA, Spain, France, Italy, Germany, Turkey, Greece, and more.
                        • You can sign players from a pool of over 1000 real players, with their names, photos, positions, skills, salaries, and contracts.
                        • You can train your players and improve their attributes, such as shooting, passing, rebounding, defense, speed, and stamina.
                        • You can manage your budget and decide how to spend your money on salaries, transfers, facilities, staff, and marketing.
                        • You can compete in different leagues and tournaments, such as the NBA, Euroleague, FIBA World Cup, Olympics, and more.
                        • You can view detailed statistics and rankings of your team and players.
                        • You can customize your team's name, logo, colors, and uniforms.
                    
      -

      How to download and install Basket Manager 2017 APK

      -

                        To download and install Basket Manager 2017 APK on your Android device, follow these steps (an optional integrity-check sketch follows the list):
                    

      -
        -
                        1. Go to MediaFire or Aptoide and download the APK file.
                        2. Go to your device's settings and enable the option to install apps from unknown sources.
                        3. Locate the downloaded APK file on your device and tap on it to start the installation process.
                        4. Follow the instructions on the screen and wait for the installation to finish.
                        5. Launch the game and enjoy!
                    
      -
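                        Because the APK comes from a third-party mirror rather than the Play Store, you may also want to check the downloaded file before tapping it in step 3. The sketch below is a minimal Python example of a SHA-256 check; the file name and the expected hash are placeholders for illustration, not values published by the developer.

                    ```python
                    import hashlib
                    from pathlib import Path

                    # Placeholder values for illustration only; substitute your real file name
                    # and a checksum obtained from a source you trust.
                    apk_path = Path("basket_manager_2017.apk")
                    expected_sha256 = "0000000000000000000000000000000000000000000000000000000000000000"

                    # Hash the file in chunks so large APKs do not have to fit in memory.
                    digest = hashlib.sha256()
                    with open(apk_path, "rb") as f:
                        for chunk in iter(lambda: f.read(8192), b""):
                            digest.update(chunk)

                    if digest.hexdigest() == expected_sha256:
                        print("Checksum matches - proceed with the installation.")
                    else:
                        print("Checksum mismatch - re-download the APK before installing.")
                    ```
                    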

      Why play Basket Manager 2017 APK?

      -

      Basket Manager 2017 APK is a fun and addictive basketball game that will keep you entertained for hours. Here are some reasons why you should play it:

      -

      Pros of Basket Manager 2017 APK

      -
        -
                        • It is free and does not require an internet connection to play.
                        • It has realistic and updated graphics and sounds.
                        • It has a simple and intuitive user interface and controls.
                        • It has a high replay value and offers many options and challenges.
                        • It is suitable for all ages and skill levels.
                    
      -

      Cons of Basket Manager 2017 APK

      -
        -
                        • It may not work on some devices or cause crashes or errors.
                        • It may not be compatible with some Android versions or updates.
                        • It may have some bugs or glitches that affect the gameplay or performance.
                        • It may not have all the features or content of the official version or other similar games.
                        • It may not be updated or supported by the developer in the future.
                    
      -

      Tips and tricks for playing Basket Manager 2017 APK

      -

      If you want to become a successful basketball manager, you need to have some skills and strategies. Here are some tips and tricks that can help you play Basket Manager 2017 APK better:

      -

      basket manager 2017 apk download
      -basket manager 2017 apk mod
      -basket manager 2017 apk free
      -basket manager 2017 apk full version
      -basket manager 2017 apk android
      -basket manager 2017 apk latest
      -basket manager 2017 apk offline
      -basket manager 2017 apk unlimited money
      -basket manager 2017 apk cracked
      -basket manager 2017 apk hack
      -basket manager 2017 apk update
      -basket manager 2017 apk premium
      -basket manager 2017 apk pro
      -basket manager 2017 apk online
      -basket manager 2017 apk data
      -basket manager 2017 apk obb
      -basket manager 2017 apk revdl
      -basket manager 2017 apk rexdl
      -basket manager 2017 apk aptoide
      -basket manager 2017 apk pure
      -basket manager 2017 apk mirror
      -basket manager 2017 apk mob.org
      -basket manager 2017 apk uptodown
      -basket manager 2017 apk apkpure
      -basket manager 2017 apk apkmirror
      -basket manager 2017 apk for pc
      -basket manager 2017 apk for ios
      -basket manager 2017 apk for windows
      -basket manager 2017 apk for mac
      -basket manager 2017 apk for laptop
      -basket manager 2017 apk for tablet
      -basket manager 2017 apk for iphone
      -basket manager 2017 apk for ipad
      -basket manager 2017 apk for android tv
      -basket manager 2017 apk for firestick
      -basket manager 2017 apk for chromebook
      -basket manager 2017 apk for bluestacks
      -basket manager 2017 apk for nox player
      -basket manager 2017 apk for memu play
      -basket manager 2017 apk for ldplayer
      -how to install basket manager 2017 apk
      -how to play basket manager 2017 apk
      -how to update basket manager 2017 apk
      -how to hack basket manager 2017 apk
      -how to mod basket manager 2017 apk
      -how to get basket manager 2017 apk
      -how to download basket manager 2017 apk
      -how to uninstall basket manager 2017 apk
      -how to use basket manager 2017 apk

      -

      Choose your team wisely

      -

      The first thing you need to do is to choose your team. You can either select one of the existing teams or create your own custom team. You should consider the following factors when choosing your team:

      -
        -
                        • The country and league of your team. Different countries and leagues have different rules, regulations, budgets, and competitions. You should choose a team that suits your preferences and goals.
                        • The roster and ratings of your team. You should check the players' names, positions, skills, salaries, and contracts. You should look for players that fit your style of play, have high potential, and are affordable.
                        • The facilities and staff of your team. You should check the quality and level of your team's facilities, such as the stadium, training center, medical center, and academy. You should also check the staff's roles, skills, and salaries. You should look for facilities and staff that can improve your team's performance, development, and income.
                    
      -

      Manage your budget and players

      -

      The next thing you need to do is to manage your budget and players. You have a limited amount of money to spend on salaries, transfers, facilities, staff, and marketing. You should balance your income and expenses wisely and avoid going bankrupt. You should also manage your players' contracts, morale, fitness, injuries, suspensions, and form. You should keep your players happy, healthy, motivated, and in shape. You should also make smart decisions on who to sign, sell, loan, or release.

      -

      Train your players and improve their skills

      -

      The third thing you need to do is to train your players and improve their skills. You can assign different training programs to your players based on their positions, attributes, weaknesses, and goals. You can also hire coaches to help you with the training. You should monitor your players' progress and feedback regularly and adjust the training accordingly. You should also reward your players with bonuses or promotions when they perform well or improve their skills.

      -

      Compete in different leagues and tournaments

      -

      The last thing you need to do is to compete in different leagues and tournaments. You can play in various competitions, such as the NBA, Euroleague, FIBA World Cup, Olympics, and more. You can also create your own custom tournaments with your own rules and teams. You should prepare well for each match by scouting your opponents, setting your lineup, choosing your tactics, and making substitutions. You should also analyze your results and statistics after each match and learn from your mistakes or successes.

      -

      Conclusion

      -

      Basket Manager 2017 APK is a free basketball game for Android devices that lets you take control of a team in a realistic league. You can choose from over 30 teams, sign players, train them, compete in matches, and win trophies. The game has many features that make it an enjoyable and challenging basketball game. However, it also has some drawbacks that may affect its gameplay or performance. If you want to play Basket Manager 2017 APK, you can download it from MediaFire or Aptoide. You can also follow these tips and tricks to help you play better:

      -

      Summary of the article

      -
        -
                        • Basket Manager 2017 APK is a modified version of the original game Basket Manager 2017 Free.
                        • The game lets you manage a basketball team in a realistic league.
                        • The game has many features that make it fun and addictive.
                        • The game also has some drawbacks that may affect its gameplay or performance.
                        • You can download the game from MediaFire or Aptoide.
                        • You can follow these tips and tricks to help you play better:
                          • Choose your team wisely.
                          • Manage your budget and players.
                          • Train your players and improve their skills.
                          • Compete in different leagues and tournaments.
                    
        -
      -

      FAQs

      -

      Here are some frequently asked questions about Basket Manager 2017 APK:

      -
        -
                        1. Is Basket Manager 2017 APK safe to download and install?
                    

        Yes, Basket Manager 2017 APK is safe to download and install, as long as you get it from a trusted source, such as MediaFire or Aptoide. However, you should always scan the APK file with an antivirus or malware detector before installing it, just to be sure.

        -
                        2. Is Basket Manager 2017 APK legal to play?
                    

        Yes, Basket Manager 2017 APK is legal to play, as it is a fan-made modification of the original game Basket Manager 2017 Free, which is free and available on Google Play Store. However, you should not use the game for any commercial or illegal purposes, such as selling it, hacking it, or cheating in it.

        -
                        3. How can I update Basket Manager 2017 APK?
                    

        Basket Manager 2017 APK does not have an automatic update feature, so you will have to manually check for updates on the r/basketmanager subreddit or the developer's website. If there is a new version available, you will have to download and install it again, following the same steps as before.

        -
                        4. How can I contact the developer of Basket Manager 2017 APK?
                    

        You can contact the developer of Basket Manager 2017 APK by sending a message to SubtleLies on Reddit or by visiting his website. You can also join the r/basketmanager subreddit and interact with other players and fans of the game.

        -
                        5. How can I support the development of Basket Manager 2017 APK?
                    

        You can support the development of Basket Manager 2017 APK by giving feedback, suggestions, bug reports, or donations to the developer. You can also share the game with your friends, family, or social media followers. You can also rate and review the game on Aptoide or other platforms.

        -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git "a/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/\350\257\273\346\226\207\347\253\240\345\206\231\346\221\230\350\246\201.py" "b/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/\350\257\273\346\226\207\347\253\240\345\206\231\346\221\230\350\246\201.py" deleted file mode 100644 index 72ffe6b1a8f2a59a3c5c364e30dfb4949bd6a929..0000000000000000000000000000000000000000 --- "a/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/\350\257\273\346\226\207\347\253\240\345\206\231\346\221\230\350\246\201.py" +++ /dev/null @@ -1,67 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -fast_debug = False - - -def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, glob, os - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - - prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else "" - i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```' - i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, llm_kwargs, chatbot, history=[], sys_prompt=system_prompt) # 带超时倒计时 - - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - if not fast_debug: time.sleep(2) - - all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)]) - i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say, llm_kwargs, chatbot, history=history, sys_prompt=system_prompt) # 带超时倒计时 - - chatbot[-1] = (i_say, gpt_say) - history.append(i_say); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - - - -@CatchException -def 读文章写摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] # + \ - # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex文件: {txt}") - yield from 
update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/configs/speed.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/configs/speed.py deleted file mode 100644 index 45e95237da65e44f35a172c25ac6dc4e313e4eae..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/configs/speed.py +++ /dev/null @@ -1,23 +0,0 @@ -from easydict import EasyDict as edict - -# configs for test speed - -config = edict() -config.loss = "arcface" -config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 1.0 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -config.rec = "synthetic" -config.num_classes = 100 * 10000 -config.num_epoch = 30 -config.warmup_epoch = -1 -config.decay_epoch = [10, 16, 22] -config.val_targets = [] diff --git a/spaces/fclong/summary/fengshen/examples/tcbert/example.py b/spaces/fclong/summary/fengshen/examples/tcbert/example.py deleted file mode 100644 index 5eff218461c65f40ec88e9ea2c7e0cdbe1d05082..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/tcbert/example.py +++ /dev/null @@ -1,86 +0,0 @@ -import argparse -from fengshen.pipelines.tcbert import TCBertPipelines -from pytorch_lightning import seed_everything - -def main(): - seed_everything(123) - total_parser = argparse.ArgumentParser("Topic Classification") - total_parser = TCBertPipelines.piplines_args(total_parser) - args = total_parser.parse_args() - - pretrained_model_path = 'IDEA-CCNL/Erlangshen-TCBert-110M-Classification-Chinese' - args.learning_rate = 2e-5 - args.max_length = 512 - args.max_epochs = 5 - args.batchsize = 4 - args.train = 'train' - args.default_root_dir = './' - # args.gpus = 1 #注意:目前使用CPU进行训练,取消注释会使用GPU,但需要配置相应GPU环境版本 - args.fixed_lablen = 2 #注意:可以设置固定标签长度,由于样本对应的标签长度可能不一致,建议选择适中的数值表示标签长度 - - train_data = [ # 训练数据 - {"content": "真正的放养教育,放的是孩子的思维,养的是孩子的习惯", "label": "故事"}, - {"content": "《唐人街探案》捧红了王宝强跟刘昊然,唯独戏份不少的他发展最差", "label": "娱乐"}, - {"content": "油价攀升 阿曼经济加速增长", "label": "财经"}, - {"content": "日本男篮近期动作频频,中国队的未来劲敌会是他们吗?", "label": "体育"}, - {"content": "教育部:坚决防止因撤并乡村小规模学校导致学生上学困难", "label": "教育"}, - {"content": "LOL设计最完美的三个英雄,玩家们都很认可!", "label": "电竞"}, - {"content": "上联:浅看红楼终是梦,怎么对下联?", "label": "文化"}, - {"content": "楼市再出新政!北京部分限房价项目或转为共有产权房", "label": "房产"}, - {"content": "企业怎样选云服务器?云服务器哪家比较好?", "label": "科技"}, - {"content": "贝纳利的三缸车TRE899K、TRE1130K华丽转身", "label": "汽车"}, - {"content": "如何评价:刘姝威的《严惩做空中国股市者》?", "label": "股票"}, - {"content": "宁夏邀深圳市民共赴“寻找穿越”之旅", "label": "旅游"}, - {"content": "日本自民党又一派系力挺安倍 称会竭尽全力", "label": "国际"}, - {"content": "农村养老保险每年交5000,交满15年退休后能每月领多少钱?", "label": "农业"}, - {"content": "国产舰载机首次现身,进度超过预期,将率先在滑跃航母测试", "label": "军事"} - ] - - dev_data = [ # 验证数据 - {"content": "西游记后传中,灵儿最爱的女人是谁?不是碧游!", "label": "故事"}, - {"content": "小李子莱奥纳多有特别的提袋子技能,这些年他还有过哪些神奇的造型?", "label": "娱乐"}, - {"content": "现在手上有钱是投资买房还是存钱,为什么?", "label": "财经"}, - {"content": "迪卡侬的衣服值得购买吗?", "label": "体育"}, - {"content": "黑龙江省旅游委在齐齐哈尔组织举办导游培训班", "label": "教育"}, - {"content": "《王者荣耀》中,哪些英雄的大招最“废柴”?", "label": "电竞"}, - {"content": "上交演绎马勒《复活》,用音乐带来抚慰和希望", "label": "文化"}, - {"content": "All in服务业,58集团在租房、住房市场的全力以赋", "label": "房产"}, - {"content": 
"为什么有的人宁愿选择骁龙660的X21,也不买骁龙845的小米MIX2S?", "label": "科技"}, - {"content": "众泰大型SUV来袭,售13.98万,2.0T榨出231马力,汉兰达要危险了", "label": "汽车"}, - {"content": "股票放量下趺,大资金出逃谁在接盘?", "label": "股票"}, - {"content": "广西博白最大的特色是什么?", "label": "旅游"}, - {"content": "特朗普退出《伊朗核协议》,对此你怎么看?", "label": "国际"}, - {"content": "卖水果利润怎么样?", "label": "农业"}, - {"content": "特种兵都是身材高大的猛男么?别再被电视骗了,超过1米8都不合格", "label": "军事"} - ] - - test_data = [ # 测试数据 - {"content": "廖凡重出“江湖”再争影帝 亮相戛纳红毯霸气有型"}, - {"content": "《绝地求生: 刺激战场》越玩越卡?竟是手机厂商没交“保护费”!"}, - {"content": "买涡轮增压还是自然吸气车?今天终于有答案了!"}, - ] - - #标签映射 将真实标签可以映射为更合适prompt的标签 - prompt_label = { - "体育":"体育", "军事":"军事", "农业":"农业", "国际":"国际", - "娱乐":"娱乐", "房产":"房产", "故事":"故事", "教育":"教育", - "文化":"文化", "旅游":"旅游", "汽车":"汽车", "电竞":"电竞", - "科技":"科技", "股票":"股票", "财经":"财经" - } - - #不同的prompt会影响模型效果 - #prompt = "这一句描述{}的内容如下:" - prompt = "下面是一则关于{}的新闻:" - - model = TCBertPipelines(args, model_path=pretrained_model_path, nlabels=len(prompt_label)) - - if args.train: - model.train(train_data, dev_data, prompt, prompt_label) - result = model.predict(test_data, prompt, prompt_label) - - for i, line in enumerate(result): - print({"content":test_data[i]["content"], "label":list(prompt_label.keys())[line]}) - - -if __name__ == "__main__": - main() diff --git a/spaces/felipekitamura/face_deid_ct/face_deid_ct.py b/spaces/felipekitamura/face_deid_ct/face_deid_ct.py deleted file mode 100644 index bc8c0f2a8a199f111381b6e14262be64f7f26f47..0000000000000000000000000000000000000000 --- a/spaces/felipekitamura/face_deid_ct/face_deid_ct.py +++ /dev/null @@ -1,285 +0,0 @@ -import os -import pydicom -import numpy as np -import cv2 -from matplotlib import pyplot as plt -import random -import time -import tqdm -from IPython.core.display import display, HTML - -# Determine if we are in a Jupyter notebook -try: - shell = get_ipython().__class__.__name__ - if shell == 'ZMQInteractiveShell': - # We are in Jupyter, use tqdm.notebook - from tqdm.notebook import tqdm - else: - raise Exception() -except: - # We are in a terminal, use standard tqdm - from tqdm import tqdm - - -FACE_MAX_VALUE = 50 -FACE_MIN_VALUE = -125 - -AIR_THRESHOLD = -800 -KERNEL_SIZE = 35 - - - -def is_dicom(file_path): - try: - pydicom.dcmread(file_path) - return True - except Exception: - return False - -def get_first_directory(path): - # Normalize the path to always use Unix-style path separators - normalized_path = path.replace("\\", "/") - split_path = normalized_path.split("/")[-1] - - return split_path # Return None if no directories are found - -def list_dicom_directories(root_dir): - dicom_dirs = set() - - for root, dirs, files in os.walk(root_dir): - for file in files: - file_path = os.path.join(root, file) - if is_dicom(file_path): - dicom_dirs.add(root) - break - - return list(dicom_dirs) - -def load_scan(path): - slices = [pydicom.read_file(path + '/' + s) for s in os.listdir(path)] - slices.sort(key = lambda x: float(x.ImagePositionPatient[2])) - try: - slice_thickness = np.abs(slices[0].ImagePositionPatient[2] - slices[1].ImagePositionPatient[2]) - except: - try: - slice_thickness = np.abs(slices[0].SliceLocation - slices[1].SliceLocation) - except: - slice_thickness = 1.0 - - for s in slices: - s.SliceThickness = slice_thickness - - return slices - -def get_pixels_hu(slices): - image = np.stack([s.pixel_array for s in slices]) - # Convert to int16 (from sometimes int16), - # should be possible as values should always be low enough (<32k) - image = image.astype(np.int16) - - # Set outside-of-scan pixels to 0 - # The intercept is usually 
-1024, so air is approximately 0 - image[image == -2000] = 0 - - # Convert to Hounsfield units (HU) - for slice_number in range(len(slices)): - - intercept = slices[slice_number].RescaleIntercept - slope = slices[slice_number].RescaleSlope - - if slope != 1: - image[slice_number] = slope * image[slice_number].astype(np.float64) - image[slice_number] = image[slice_number].astype(np.int16) - - image[slice_number] += np.int16(intercept) - - return np.array(image, dtype=np.int16) - -def binarize_volume(volume, air_hu=AIR_THRESHOLD): - binary_volume = np.zeros_like(volume, dtype=np.uint8) - binary_volume[volume <= air_hu] = 1 - return binary_volume - -def largest_connected_component(binary_image): - # Find all connected components and stats - num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary_image, connectivity=8) - - # Get the index of the largest component, ignoring the background - # The background is considered as a component by connectedComponentsWithStats and it is usually the first component - largest_component_index = np.argmax(stats[1:, cv2.CC_STAT_AREA]) + 1 - - # Create an image to keep largest component only - largest_component_image = np.zeros(labels.shape, dtype=np.uint8) - largest_component_image[labels == largest_component_index] = 1 - - return largest_component_image - -def get_largest_component_volume(volume): - # Initialize an empty array to hold the processed volume - processed_volume = np.empty_like(volume, dtype=np.uint8) - - # Iterate over each slice in the volume - for i in range(volume.shape[0]): - # Process the slice and store it in the processed volume - processed_volume[i] = largest_connected_component(volume[i]) - - return processed_volume - - - -def dilate_volume(volume, kernel_size=KERNEL_SIZE): - # Create the structuring element (kernel) for dilation - kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size)) - - # Initialize an empty array to hold the dilated volume - dilated_volume = np.empty_like(volume) - - # Iterate over each slice in the volume - for i in range(volume.shape[0]): - # Dilate the slice and store it in the dilated volume - dilated_volume[i] = cv2.dilate(volume[i].astype(np.uint8), kernel) - - return dilated_volume - - -def apply_mask_and_get_values(image_volume, mask_volume): - # Apply the mask by multiplying the image volume with the mask volume - masked_volume = image_volume * mask_volume - - # Get all unique values in the masked volume, excluding zero - unique_values = np.unique(masked_volume) - unique_values = unique_values[unique_values > FACE_MIN_VALUE] - unique_values = unique_values[unique_values < FACE_MAX_VALUE] - - # Convert numpy array to a list - unique_values_list = unique_values.tolist() - - return unique_values_list - - -def apply_random_values_optimized(pixels_hu, dilated_volume, unique_values_list): - # Initialize new volume as a copy of the original volume - new_volume = np.copy(pixels_hu) - - # Generate random indices - random_indices = np.random.choice(len(unique_values_list), size=np.sum(dilated_volume)) - - # Select random values from the unique_values_list - random_values = np.array(unique_values_list)[random_indices] - - # Apply the random values to the locations where dilated_volume equals 1 - new_volume[dilated_volume == 1] = random_values - - return new_volume - -def save_new_dicom_files(new_volume, original_dir, out_path, app="_d"): - # Create a new directory path by appending "_d" to the original directory - if out_path is None: - new_dir = original_dir + app - 
else: - new_dir = out_path - - # Create the new directory if it doesn't exist - if not os.path.exists(new_dir): - os.makedirs(new_dir) - - # List all DICOM files in the original directory - dicom_files = [os.path.join(original_dir, f) for f in os.listdir(original_dir) if f.endswith('.dcm')] - - # Sort the dicom_files list by SliceLocation - dicom_files.sort(key=lambda x: pydicom.dcmread(x).SliceLocation) - - # Loop over each slice of the new volume - for i in range(new_volume.shape[0]): - # Get the corresponding original DICOM file - dicom_file = dicom_files[i] - - # Read the file - ds = pydicom.dcmread(dicom_file) - ds.decompress() - - # Revert the slope and intercept operation on the slice - new_slice = (new_volume[i] - ds.RescaleIntercept) / ds.RescaleSlope - - # Update the pixel data with the data from the new slice - ds.PixelData = new_slice.astype(np.int16).tobytes() - - # Generate new file name - new_file_name = os.path.join(new_dir, f"new_image_{i}.dcm") - - # Save the new DICOM file - ds.save_as(new_file_name) - - - -def drown_volume(in_path, out_path='deid_ct', replacer='face'): - """ - Processes DICOM files from the provided directory by binarizing, getting the largest connected component, - dilating and applying mask. Then applies random values to the dilated volume based on a unique values list - obtained from the masked volume (or air value). The results are saved as new DICOM files in a specified directory. - - Parameters: - in_path (str): The path to the directory containing the input DICOM files. - out_path (str, optional): The path to the directory where the output DICOM files will be saved. - If not provided, the output files will be saved in the input directory appended by "_d". - replacer (str, optional): Indicates what kind of pixels are going to be replaced. Default is 'face'. - 'face': replaces air and face with random values that are found in the skin and subcutaneous fat. - 'air': replaces air and face with -1000 HU. - int: replaces air and face with int HU. - - Returns: - None. The function saves new DICOM files and prints the total elapsed time of the operation. 
- """ - start_time = time.time() - - dirs = list_dicom_directories(in_path) - - for _d in tqdm(dirs, desc="List of studies"): - - with tqdm(total=8, desc="Processing DICOM Files", leave=False) as pbar: - # Load the DICOM files - slices = load_scan(_d) - pbar.update() - - # Get the pixel values and convert them to Hounsfield Units (HU) - pixels_hu = get_pixels_hu(slices) - pbar.update() - - # Apply the binarization function on the HU volume - binarized_volume = binarize_volume(pixels_hu) - pbar.update() - - # Get the largest connected component from the binarized volume - processed_volume = get_largest_component_volume(binarized_volume) - pbar.update() - - # Dilate the processed volume - dilated_volume = dilate_volume(processed_volume) - pbar.update() - if replacer == 'face': - # Apply the mask to the original volume and get unique values list - unique_values_list = apply_mask_and_get_values(pixels_hu, dilated_volume - processed_volume) - elif replacer == 'air': - unique_values_list = [0] - else: - try: - replacer = int(replacer) - unique_values_list = [replacer] - except: - print('replacer must be either air, face, or an integer number in Hounsfield units, but ' + str(replacer) + ' was provided.') - print('replacing with face') - unique_values_list = apply_mask_and_get_values(pixels_hu, dilated_volume - processed_volume) - - pbar.update() - - # Apply random values to the dilated volume based on the unique values list - new_volume = apply_random_values_optimized(pixels_hu, dilated_volume, unique_values_list) - pbar.update() - - # Save the new DICOM files - out_path_n = out_path + "/" + get_first_directory(_d) - save_new_dicom_files(new_volume, _d, out_path_n) - pbar.update() - - elapsed_time = time.time() - start_time - print(f"Total elapsed time: {elapsed_time} seconds") diff --git a/spaces/fengmuxi/ChatGpt-Web/scripts/setup.sh b/spaces/fengmuxi/ChatGpt-Web/scripts/setup.sh deleted file mode 100644 index 751a9ac17c220deb476c5aef928f6b0d21d31c40..0000000000000000000000000000000000000000 --- a/spaces/fengmuxi/ChatGpt-Web/scripts/setup.sh +++ /dev/null @@ -1,65 +0,0 @@ -#!/bin/bash - -# Check if running on a supported system -case "$(uname -s)" in - Linux) - if [[ -f "/etc/lsb-release" ]]; then - . /etc/lsb-release - if [[ "$DISTRIB_ID" != "Ubuntu" ]]; then - echo "This script only works on Ubuntu, not $DISTRIB_ID." - exit 1 - fi - else - if [[ ! "$(cat /etc/*-release | grep '^ID=')" =~ ^(ID=\"ubuntu\")|(ID=\"centos\")|(ID=\"arch\")$ ]]; then - echo "Unsupported Linux distribution." - exit 1 - fi - fi - ;; - Darwin) - echo "Running on MacOS." - ;; - *) - echo "Unsupported operating system." - exit 1 - ;; -esac - -# Check if needed dependencies are installed and install if necessary -if ! command -v node >/dev/null || ! command -v git >/dev/null || ! 
command -v yarn >/dev/null; then - case "$(uname -s)" in - Linux) - if [[ "$(cat /etc/*-release | grep '^ID=')" = "ID=ubuntu" ]]; then - sudo apt-get update - sudo apt-get -y install nodejs git yarn - elif [[ "$(cat /etc/*-release | grep '^ID=')" = "ID=centos" ]]; then - sudo yum -y install epel-release - sudo yum -y install nodejs git yarn - elif [[ "$(cat /etc/*-release | grep '^ID=')" = "ID=arch" ]]; then - sudo pacman -Syu -y - sudo pacman -S -y nodejs git yarn - else - echo "Unsupported Linux distribution" - exit 1 - fi - ;; - Darwin) - /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" - brew install node git yarn - ;; - esac -fi - -# Clone the repository and install dependencies -git clone https://github.com/Yidadaa/ChatGPT-Next-Web -cd ChatGPT-Next-Web -yarn install - -# Prompt user for environment variables -read -p "Enter OPENAI_API_KEY: " OPENAI_API_KEY -read -p "Enter CODE: " CODE -read -p "Enter PORT: " PORT - -# Build and run the project using the environment variables -OPENAI_API_KEY=$OPENAI_API_KEY CODE=$CODE PORT=$PORT yarn build -OPENAI_API_KEY=$OPENAI_API_KEY CODE=$CODE PORT=$PORT yarn start diff --git a/spaces/fffiloni/ControlNet-Video/style.css b/spaces/fffiloni/ControlNet-Video/style.css deleted file mode 100644 index 98c1607dba4c5e2055c5bc59197a9c995389a3fa..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/ControlNet-Video/style.css +++ /dev/null @@ -1,105 +0,0 @@ -#col-container {max-width: 820px; margin-left: auto; margin-right: auto;} -#duplicate-container{ - display: flex; - justify-content: space-between; - align-items: center; - line-height: 1em; - flex-direction: row-reverse; - font-size:1em; -} -a, a:hover, a:visited { - text-decoration-line: underline; - font-weight: 600; - color: #1f2937 !important; -} - -.dark a, .dark a:hover, .dark a:visited { - color: #f3f4f6 !important; -} - -.label-wrap { - margin-bottom: 12px; -} - -.footer { - margin-bottom: 45px; - margin-top: 10px; - text-align: center; - border-bottom: 1px solid #e5e5e5; -} - -.footer>p { - font-size: .8rem!important; - display: inline-block; - padding: 0 10px; - transform: translateY(26px); - background: white; -} -.dark .footer { - border-color: #303030; -} -.dark .footer>p { - background: #0b0f19; -} - -div#may-like-container > p { - font-size: .8em; - margin-bottom: 4px; -} - -.animate-spin { - animation: spin 1s linear infinite; -} - -@keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } -} - -#share-btn-container { - display: flex; - padding-left: 0.5rem !important; - padding-right: 0.5rem !important; - background-color: #000000; - justify-content: center; - align-items: center; - border-radius: 9999px !important; - max-width: 13rem; -} - -#share-btn-container:hover { - background-color: #060606; -} - -#share-btn { - all: initial; - color: #ffffff; - font-weight: 600; - cursor:pointer; - font-family: 'IBM Plex Sans', sans-serif; - margin-left: 0.5rem !important; - padding-top: 0.5rem !important; - padding-bottom: 0.5rem !important; - right:0; -} - -#share-btn * { - all: unset; -} - -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} - -#share-btn-container .wrap { - display: none !important; -} - -#share-btn-container.hidden { - display: none!important; -} \ No newline at end of file diff --git a/spaces/fffiloni/Music_Source_Separation/scripts/1_pack_audios_to_hdf5s/vctk/sr=44100,chn=2.sh 
b/spaces/fffiloni/Music_Source_Separation/scripts/1_pack_audios_to_hdf5s/vctk/sr=44100,chn=2.sh deleted file mode 100644 index 71eac148ffaf44878df6692e92bb442614c30ce4..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Music_Source_Separation/scripts/1_pack_audios_to_hdf5s/vctk/sr=44100,chn=2.sh +++ /dev/null @@ -1,21 +0,0 @@ -#!/bin/bash -DATASET_DIR=${1:-"./datasets/vctk"} # The first argument is dataset directory. -WORKSPACE=${2:-"./workspaces/bytesep"} # The second argument is workspace directory. - -echo "DATASET_DIR=${DATASET_DIR}" -echo "WORKSPACE=${WORKSPACE}" - -# Users can change the following settings. -SAMPLE_RATE=44100 -CHANNELS=2 - -# Paths -HDF5S_DIR="${WORKSPACE}/hdf5s/vctk/sr=${SAMPLE_RATE}_chn=${CHANNELS}/train" - -python3 bytesep/dataset_creation/pack_audios_to_hdf5s/vctk.py \ - --dataset_dir=$DATASET_DIR \ - --split="train" \ - --hdf5s_dir=$HDF5S_DIR \ - --sample_rate=$SAMPLE_RATE \ - --channels=$CHANNELS - \ No newline at end of file diff --git a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_49.py b/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_49.py deleted file mode 100644 index 4a7dd3f009dc443561700952c7eb6c41499585d1..0000000000000000000000000000000000000000 --- a/spaces/fgenie/scamtext_PAL_self_consistency/funcs/f_49.py +++ /dev/null @@ -1,20 +0,0 @@ - -import re - -def is_spam(text: str) -> bool: - # Check for patterns observed in spam messages - spam_patterns = [ - r"\d{1,2}%", # Percentage discounts - r"코드[:\:]?\w*", - r"무료거부", # Unsubscribe keyword in Korean - r"(http(s)?://)?(bit\.ly|me2\.kr|vo\.la|dokdo\.in|tdeal\.kr|"\ - "openkak(talk)?\.at|kakaos?\.co|buly\.kr|(vvd\.bz))\/\S*", # Spam URL shorteners - r"=BBQ\+피자\+활쿱", # Spam message - r"(광고)", # Advertising indicator - ] - - # Combine all spam patterns into a single regex pattern - spam_pattern_re = re.compile("|".join(spam_patterns), re.IGNORECASE) - - return bool(spam_pattern_re.search(text)) - diff --git a/spaces/fishaudio/fish-diffusion/configs/ALYS.py b/spaces/fishaudio/fish-diffusion/configs/ALYS.py deleted file mode 100644 index 1a41f164ded86152011c16dc9935e159beebe6a8..0000000000000000000000000000000000000000 --- a/spaces/fishaudio/fish-diffusion/configs/ALYS.py +++ /dev/null @@ -1,48 +0,0 @@ -from fish_diffusion.datasets.hifisinger import HiFiSVCDataset -from fish_diffusion.datasets.utils import get_datasets_from_subfolder - -_base_ = [ - "./_base_/archs/hifi_svc.py", - "./_base_/trainers/base.py", - "./_base_/schedulers/exponential.py", - "./_base_/datasets/hifi_svc.py", -] - -speaker_mapping = { - "ALYS": 0, -} - -model = dict( - type="HiFiSVC", - speaker_encoder=dict( - input_size=len(speaker_mapping), - ), -) - -preprocessing = dict( - text_features_extractor=dict( - type="ContentVec", - ), - pitch_extractor=dict( - type="CrepePitchExtractor", - keep_zeros=False, - f0_min=40.0, - f0_max=1600.0, - ), - energy_extractor=dict( - type="RMSEnergyExtractor", - ), - augmentations=[ - dict( - type="FixedPitchShifting", - key_shifts=[-5.0, 5.0], - probability=0.75, - ), - ], -) - -trainer = dict( - # Disable gradient clipping, which is not supported by custom optimization - gradient_clip_val=None, - max_steps=1000000, -) diff --git a/spaces/flax-community/image-captioning/vit_gpt2/modeling_flax_gpt2.py b/spaces/flax-community/image-captioning/vit_gpt2/modeling_flax_gpt2.py deleted file mode 100644 index 3bc9cedc219ac2d24d5d89f0ea29b095364eae5a..0000000000000000000000000000000000000000 --- a/spaces/flax-community/image-captioning/vit_gpt2/modeling_flax_gpt2.py +++ /dev/null @@ 
-1,752 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Google Flax Team Authors and The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import Any, Optional, Tuple - -import flax.linen as nn -import jax -import jax.numpy as jnp -from flax.core.frozen_dict import FrozenDict, unfreeze -from flax.linen import combine_masks, make_causal_mask -from flax.linen.attention import dot_product_attention_weights -from jax import lax - -from transformers.file_utils import add_start_docstrings, add_start_docstrings_to_model_forward -from transformers.modeling_flax_outputs import FlaxBaseModelOutput, FlaxBaseModelOutputWithPast, FlaxCausalLMOutput, FlaxBaseModelOutputWithPastAndCrossAttentions, FlaxSeq2SeqLMOutput -from transformers.modeling_flax_utils import ACT2FN, FlaxPreTrainedModel, append_call_sample_docstring -from transformers.utils import logging -from transformers.models.gpt2.configuration_gpt2 import GPT2Config - - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "gpt2" -_CONFIG_FOR_DOC = "GPT2Config" -_TOKENIZER_FOR_DOC = "GPT2Tokenizer" - - -GPT2_START_DOCSTRING = r""" - - This model inherits from :class:`~transformers.FlaxPreTrainedModel`. Check the superclass documentation for the - generic methods the library implements for all its model (such as downloading or saving, resizing the input - embeddings, pruning heads etc.) - - This model is also a Flax Linen `flax.nn.Module - `__ subclass. Use it as a regular Flax - Module and refer to the Flax documentation for all matter related to general usage and behavior. - - Finally, this model supports inherent JAX features such as: - - - `Just-In-Time (JIT) compilation `__ - - `Automatic Differentiation `__ - - `Vectorization `__ - - `Parallelization `__ - - Parameters: - config (:class:`~transformers.GPT2Config`): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the :meth:`~transformers.FlaxPreTrainedModel.from_pretrained` method to load the - model weights. -""" - -GPT2_INPUTS_DOCSTRING = r""" - Args: - input_ids (:obj:`numpy.ndarray` of shape :obj:`(batch_size, input_ids_length)`): - :obj:`input_ids_length` = ``sequence_length``. Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using :class:`~transformers.GPT2Tokenizer`. See - :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for - details. - - `What are input IDs? <../glossary.html#input-ids>`__ - attention_mask (:obj:`numpy.ndarray` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - `What are attention masks? 
<../glossary.html#attention-mask>`__ - position_ids (:obj:`numpy.ndarray` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range ``[0, - config.max_position_embeddings - 1]``. - past_key_values (:obj:`Dict[str, np.ndarray]`, `optional`, returned by ``init_cache`` or when passing previous ``past_key_values``): - Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast - auto-regressive decoding. Pre-computed key and value hidden-states are of shape `[batch_size, max_length]`. - output_attentions (:obj:`bool`, `optional`): - Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned - tensors for more detail. - output_hidden_states (:obj:`bool`, `optional`): - Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for - more detail. - return_dict (:obj:`bool`, `optional`): - Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple. -""" - - -class FlaxConv1D(nn.Module): - features: int - use_bias: bool = True - dtype: Any = jnp.float32 - precision: Any = None - - @nn.compact - def __call__(self, inputs): - inputs = jnp.asarray(inputs, self.dtype) - kernel = self.param("kernel", jax.nn.initializers.normal(stddev=0.02), (self.features, inputs.shape[-1])) - kernel = jnp.asarray(kernel.transpose(), self.dtype) - y = lax.dot_general(inputs, kernel, (((inputs.ndim - 1,), (0,)), ((), ())), precision=self.precision) - if self.use_bias: - bias = self.param("bias", jax.nn.initializers.zeros, (self.features,)) - bias = jnp.asarray(bias, self.dtype) - y = y + bias - return y - - -class FlaxGPT2Attention(nn.Module): - config: GPT2Config - dtype: jnp.dtype = jnp.float32 - causal: bool = True - - def setup(self): - config = self.config - self.embed_dim = config.hidden_size - self.num_heads = config.num_attention_heads - self.head_dim = self.embed_dim // self.num_heads - - self.c_attn = FlaxConv1D(features=3 * self.embed_dim, dtype=self.dtype) - self.c_proj = FlaxConv1D(self.embed_dim, dtype=self.dtype) - - self.c_attn_for_k_v = FlaxConv1D(features=2 * self.embed_dim, dtype=self.dtype) - - self.resid_dropout = nn.Dropout(rate=config.resid_pdrop) - - if self.causal: - self.causal_mask = make_causal_mask(jnp.ones((1, config.max_position_embeddings), dtype="bool"), dtype="bool") - - def _split_heads(self, hidden_states): - return hidden_states.reshape(hidden_states.shape[:2] + (self.num_heads, self.head_dim)) - - def _merge_heads(self, hidden_states): - return hidden_states.reshape(hidden_states.shape[:2] + (self.embed_dim,)) - - @nn.compact - def _concatenate_to_cache(self, key, value, query, attention_mask): - """ - This function takes projected key, value states from a single input token and concatenates the states to cached - states from previous steps. This function is slighly adapted from the official Flax repository: - https://github.com/google/flax/blob/491ce18759622506588784b4fca0e4bf05f8c8cd/flax/linen/attention.py#L252 - """ - # detect if we're initializing by absence of existing cache data. 
- is_initialized = self.has_variable("cache", "cached_key") - cached_key = self.variable("cache", "cached_key", jnp.zeros, key.shape, key.dtype) - cached_value = self.variable("cache", "cached_value", jnp.zeros, value.shape, value.dtype) - cache_index = self.variable("cache", "cache_index", lambda: jnp.array(0, dtype=jnp.int32)) - - if is_initialized: - *batch_dims, max_length, num_heads, depth_per_head = cached_key.value.shape - # update key, value caches with our new 1d spatial slices - cur_index = cache_index.value - indices = (0,) * len(batch_dims) + (cur_index, 0, 0) - key = lax.dynamic_update_slice(cached_key.value, key, indices) - value = lax.dynamic_update_slice(cached_value.value, value, indices) - cached_key.value = key - cached_value.value = value - num_updated_cache_vectors = query.shape[1] - cache_index.value = cache_index.value + num_updated_cache_vectors - # causal mask for cached decoder self-attention: our single query position should only attend to those key positions that have already been generated and cached, not the remaining zero elements. - pad_mask = jnp.broadcast_to( - jnp.arange(max_length) < cur_index + num_updated_cache_vectors, - tuple(batch_dims) + (1, num_updated_cache_vectors, max_length), - ) - attention_mask = combine_masks(pad_mask, attention_mask) - return key, value, attention_mask - - def __call__( - self, - hidden_states, - key_value_states: Optional[jnp.ndarray] = None, - attention_mask=None, - deterministic: bool = True, - init_cache: bool = False, - output_attentions: bool = False, - ): - - # if key_value_states are provided this layer is used as a cross-attention layer - # for the decoder - is_cross_attention = key_value_states is not None - - qkv_out = self.c_attn(hidden_states) - query, key, value = jnp.split(qkv_out, 3, axis=2) - - if is_cross_attention: - _qkv_out = self.c_attn_for_k_v(key_value_states) - key, value = jnp.split(_qkv_out, 2, axis=2) - - query = self._split_heads(query) - key = self._split_heads(key) - value = self._split_heads(value) - - query_length, key_length = query.shape[1], key.shape[1] - - if self.causal: - if self.has_variable("cache", "cached_key"): - mask_shift = self.variables["cache"]["cache_index"] - max_decoder_length = self.variables["cache"]["cached_key"].shape[1] - causal_mask = lax.dynamic_slice( - self.causal_mask, (0, 0, mask_shift, 0), (1, 1, query_length, max_decoder_length) - ) - else: - causal_mask = self.causal_mask[:, :, :query_length, :key_length] - - batch_size = hidden_states.shape[0] - causal_mask = jnp.broadcast_to(causal_mask, (batch_size,) + causal_mask.shape[1:]) - - # combine masks if needed - if attention_mask is not None and self.causal: - attention_mask = jnp.broadcast_to(jnp.expand_dims(attention_mask, axis=(-3, -2)), causal_mask.shape) - attention_mask = combine_masks(attention_mask, causal_mask) - elif self.causal: - attention_mask = causal_mask - elif attention_mask is not None: - attention_mask = jnp.expand_dims(attention_mask, axis=(-3, -2)) - - dropout_rng = None - if not deterministic and self.config.attn_pdrop > 0.0: - dropout_rng = self.make_rng("dropout") - - # During fast autoregressive decoding, we feed one position at a time, - # and cache the keys and values step by step. 
- if self.causal and (self.has_variable("cache", "cached_key") or init_cache): - key, value, attention_mask = self._concatenate_to_cache(key, value, query, attention_mask) - - # transform boolean mask into float mask - if attention_mask is not None: - attention_bias = lax.select( - attention_mask > 0, - jnp.full(attention_mask.shape, 0.0).astype(self.dtype), - jnp.full(attention_mask.shape, -1e4).astype(self.dtype), - ) - else: - attention_bias = None - - # usual dot product attention - attn_weights = dot_product_attention_weights( - query, - key, - bias=attention_bias, - dropout_rng=dropout_rng, - dropout_rate=self.config.attn_pdrop, - deterministic=deterministic, - dtype=self.dtype, - precision=None, - ) - - attn_output = jnp.einsum("...hqk,...khd->...qhd", attn_weights, value) - attn_output = self._merge_heads(attn_output) - attn_output = self.c_proj(attn_output) - attn_output = self.resid_dropout(attn_output, deterministic=deterministic) - - outputs = (attn_output, attn_weights) if output_attentions else (attn_output,) - return outputs - - -class FlaxGPT2MLP(nn.Module): - config: GPT2Config - intermediate_size: int - dtype: jnp.dtype = jnp.float32 - - def setup(self): - embed_dim = self.config.hidden_size - self.c_fc = FlaxConv1D(self.intermediate_size, dtype=self.dtype) - self.c_proj = FlaxConv1D(embed_dim, dtype=self.dtype) - self.act = ACT2FN[self.config.activation_function] - self.dropout = nn.Dropout(rate=self.config.resid_pdrop) - - def __call__(self, hidden_states, deterministic: bool = True): - hidden_states = self.c_fc(hidden_states) - hidden_states = self.act(hidden_states) - hidden_states = self.c_proj(hidden_states) - hidden_states = self.dropout(hidden_states, deterministic=deterministic) - return hidden_states - - -class FlaxGPT2Block(nn.Module): - config: GPT2Config - dtype: jnp.dtype = jnp.float32 - - def setup(self): - hidden_size = self.config.hidden_size - inner_dim = self.config.n_inner if self.config.n_inner is not None else 4 * hidden_size - - self.ln_1 = nn.LayerNorm(epsilon=self.config.layer_norm_epsilon, dtype=self.dtype) - self.attn = FlaxGPT2Attention(self.config, dtype=self.dtype) - self.ln_3 = nn.LayerNorm(epsilon=self.config.layer_norm_epsilon, dtype=self.dtype) - self.encoder_attn = FlaxGPT2Attention(config=self.config, dtype=self.dtype) - self.ln_2 = nn.LayerNorm(epsilon=self.config.layer_norm_epsilon, dtype=self.dtype) - self.mlp = FlaxGPT2MLP(self.config, inner_dim, dtype=self.dtype) - - def __call__( - self, - hidden_states, - attention_mask=None, - encoder_hidden_states: Optional[jnp.ndarray] = None, - encoder_attention_mask: Optional[jnp.ndarray] = None, - deterministic: bool = True, - init_cache: bool = False, - output_attentions: bool = False, - ): - residual = hidden_states - hidden_states = self.ln_1(hidden_states) - outputs = self.attn( - hidden_states, - attention_mask=attention_mask, - deterministic=deterministic, - init_cache=init_cache, - output_attentions=output_attentions, - ) - # residual connection - attn_output = outputs[0] - hidden_states = attn_output + residual - - # Cross-Attention Block - if encoder_hidden_states is not None: - - residual = hidden_states - hidden_states = self.ln_3(hidden_states) - - cross_attn_outputs = self.encoder_attn( - hidden_states=hidden_states, - key_value_states=encoder_hidden_states, - attention_mask=encoder_attention_mask, - deterministic=deterministic, - output_attentions=output_attentions, - ) - - # residual connection - cross_attn_output = cross_attn_outputs[0] - hidden_states = cross_attn_output + 
residual - - residual = hidden_states - hidden_states = self.ln_2(hidden_states) - feed_forward_hidden_states = self.mlp(hidden_states, deterministic=deterministic) - # residual connection - hidden_states = residual + feed_forward_hidden_states - - output = (hidden_states,) + outputs[1:] - if encoder_hidden_states is not None: - output = output + cross_attn_outputs[1:] - - return output - - -class FlaxGPT2PreTrainedModel(FlaxPreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = GPT2Config - base_model_prefix = "transformer" - module_class: nn.Module = None - - def __init__( - self, - config: GPT2Config, - input_shape: Tuple = (1, 1), - seed: int = 0, - dtype: jnp.dtype = jnp.float32, - **kwargs, - ): - module = self.module_class(config=config, dtype=dtype, **kwargs) - super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype) - - def init_weights(self, rng: jax.random.PRNGKey, input_shape: Tuple) -> FrozenDict: - # init input tensors - input_ids = jnp.zeros(input_shape, dtype="i4") - attention_mask = jnp.ones_like(input_ids) - position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_shape) - params_rng, dropout_rng = jax.random.split(rng) - rngs = {"params": params_rng, "dropout": dropout_rng} - - if self.config.add_cross_attention: - encoder_hidden_states = jnp.zeros(input_shape + (self.config.n_embd,)) - encoder_attention_mask = attention_mask - module_init_outputs = self.module.init(rngs, input_ids, attention_mask, position_ids, encoder_hidden_states, encoder_attention_mask, return_dict=False) - else: - module_init_outputs = self.module.init(rngs, input_ids, attention_mask, position_ids, return_dict=False) - - return module_init_outputs["params"] - - @classmethod - def _from_config(cls, config, **kwargs): - return super()._from_config(config, **kwargs) - - def init_cache(self, batch_size, max_length): - r""" - Args: - batch_size (:obj:`int`): - batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache. - max_length (:obj:`int`): - maximum possible length for auto-regressive decoding. Defines the sequence length of the initialized - cache. 
- """ - # init input variables to retrieve cache - input_ids = jnp.ones((batch_size, max_length)) - attention_mask = jnp.ones_like(input_ids) - position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(input_ids).shape[-1]), input_ids.shape) - - init_variables = self.module.init( - jax.random.PRNGKey(0), input_ids, attention_mask, position_ids, return_dict=False, init_cache=True - ) - return init_variables["cache"] - - @add_start_docstrings_to_model_forward(GPT2_INPUTS_DOCSTRING) - def __call__( - self, - input_ids, - attention_mask=None, - position_ids=None, - encoder_hidden_states: Optional[jnp.ndarray] = None, - encoder_attention_mask: Optional[jnp.ndarray] = None, - params: dict = None, - past_key_values: dict = None, - dropout_rng: jax.random.PRNGKey = None, - train: bool = False, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ): - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.return_dict - - if encoder_hidden_states is not None and encoder_attention_mask is None: - batch_size, sequence_length = encoder_hidden_states.shape[:2] - encoder_attention_mask = jnp.ones((batch_size, sequence_length)) - - batch_size, sequence_length = input_ids.shape - - if position_ids is None: - if past_key_values is not None: - raise ValueError("Make sure to provide `position_ids` when passing `past_key_values`.") - - position_ids = jnp.broadcast_to(jnp.arange(sequence_length)[None, :], (batch_size, sequence_length)) - - if attention_mask is None: - attention_mask = jnp.ones((batch_size, sequence_length)) - - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - rngs["dropout"] = dropout_rng - - inputs = {"params": params or self.params} - - # if past_key_values are passed then cache is already initialized a private flag init_cache has to be passed down to ensure cache is used. 
It has to be made sure that cache is marked as mutable so that it can be changed by FlaxGPT2Attention module - if past_key_values: - inputs["cache"] = past_key_values - mutable = ["cache"] - else: - mutable = False - - outputs = self.module.apply( - inputs, - jnp.array(input_ids, dtype="i4"), - jnp.array(attention_mask, dtype="i4"), - jnp.array(position_ids, dtype="i4"), - encoder_hidden_states, - encoder_attention_mask, - not train, - False, - output_attentions, - output_hidden_states, - return_dict, - rngs=rngs, - mutable=mutable, - ) - - # add updated cache to model output - if past_key_values is not None and return_dict: - outputs, past_key_values = outputs - outputs["past_key_values"] = unfreeze(past_key_values["cache"]) - return outputs - elif past_key_values is not None and not return_dict: - outputs, past_key_values = outputs - outputs = outputs[:1] + (unfreeze(past_key_values["cache"]),) + outputs[1:] - - return outputs - - -class FlaxGPT2BlockCollection(nn.Module): - config: GPT2Config - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.blocks = [ - FlaxGPT2Block(self.config, name=str(i), dtype=self.dtype) for i in range(self.config.num_hidden_layers) - ] - - def __call__( - self, - hidden_states, - attention_mask=None, - encoder_hidden_states: Optional[jnp.ndarray] = None, - encoder_attention_mask: Optional[jnp.ndarray] = None, - deterministic: bool = True, - init_cache: bool = False, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - all_attentions = () if output_attentions else None - all_hidden_states = () if output_hidden_states else None - all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - - for block in self.blocks: - if output_hidden_states: - all_hidden_states += (hidden_states,) - - layer_outputs = block( - hidden_states, - attention_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - deterministic=deterministic, - init_cache=init_cache, - output_attentions=output_attentions, - ) - hidden_states = layer_outputs[0] - - if output_attentions: - all_attentions += (layer_outputs[1],) - if encoder_hidden_states is not None: - all_cross_attentions += (layer_outputs[2],) - - if output_hidden_states: - all_hidden_states += (hidden_states,) - - outputs = [hidden_states, all_hidden_states, all_attentions, all_cross_attentions] - - if not return_dict: - return tuple(v for v in outputs if v is not None) - - if encoder_hidden_states is None: - return FlaxBaseModelOutputWithPast( - last_hidden_state=hidden_states, - past_key_values=None, - hidden_states=all_hidden_states, - attentions=all_attentions, - ) - else: - return FlaxBaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - past_key_values=None, - hidden_states=all_hidden_states, - attentions=all_attentions, - cross_attentions=all_cross_attentions, - ) - -class FlaxGPT2Module(nn.Module): - config: GPT2Config - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.embed_dim = self.config.hidden_size - - self.wte = nn.Embed( - self.config.vocab_size, - self.embed_dim, - embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range), - dtype=self.dtype, - ) - self.wpe = nn.Embed( - self.config.max_position_embeddings, - self.embed_dim, - embedding_init=jax.nn.initializers.normal(stddev=self.config.initializer_range), - dtype=self.dtype, - ) - self.dropout = nn.Dropout(rate=self.config.embd_pdrop) - self.h = 
FlaxGPT2BlockCollection(self.config, dtype=self.dtype) - self.ln_f = nn.LayerNorm(epsilon=self.config.layer_norm_epsilon, dtype=self.dtype) - - def __call__( - self, - input_ids, - attention_mask, - position_ids, - encoder_hidden_states: Optional[jnp.ndarray] = None, - encoder_attention_mask: Optional[jnp.ndarray] = None, - deterministic=True, - init_cache: bool = False, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - input_embeds = self.wte(input_ids.astype("i4")) - position_embeds = self.wpe(position_ids.astype("i4")) - - hidden_states = input_embeds + position_embeds - hidden_states = self.dropout(hidden_states, deterministic=deterministic) - - outputs = self.h( - hidden_states, - attention_mask, - encoder_hidden_states, - encoder_attention_mask, - deterministic=deterministic, - init_cache=init_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = outputs[0] - hidden_states = self.ln_f(hidden_states) - - if not return_dict: - return (hidden_states,) + outputs[1:] - - if encoder_hidden_states is None: - return FlaxBaseModelOutput( - last_hidden_state=hidden_states, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - else: - return FlaxBaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - cross_attentions=outputs.cross_attentions, - ) - -@add_start_docstrings( - "The bare GPT2 Model transformer outputting raw hidden-states without any specific head on top.", - GPT2_START_DOCSTRING, -) -class FlaxGPT2Model(FlaxGPT2PreTrainedModel): - module_class = FlaxGPT2Module - - -append_call_sample_docstring( - FlaxGPT2Model, _TOKENIZER_FOR_DOC, _CHECKPOINT_FOR_DOC, FlaxBaseModelOutput, _CONFIG_FOR_DOC -) - - -class FlaxGPT2LMHeadModule(nn.Module): - config: GPT2Config - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.transformer = FlaxGPT2Module(self.config, dtype=self.dtype) - self.lm_head = nn.Dense( - self.config.vocab_size, - use_bias=False, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal(stddev=self.config.initializer_range, dtype=self.dtype), - ) - - def __call__( - self, - input_ids, - attention_mask, - position_ids, - encoder_hidden_states: Optional[jnp.ndarray] = None, - encoder_attention_mask: Optional[jnp.ndarray] = None, - deterministic: bool = True, - init_cache: bool = False, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - outputs = self.transformer( - input_ids, - attention_mask, - position_ids, - encoder_hidden_states, - encoder_attention_mask, - deterministic=deterministic, - init_cache=init_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = outputs[0] - - if self.config.tie_word_embeddings: - shared_kernel = self.transformer.variables["params"]["wte"]["embedding"].T - lm_logits = self.lm_head.apply({"params": {"kernel": shared_kernel}}, hidden_states) - else: - lm_logits = self.lm_head(hidden_states) - - if not return_dict: - return (lm_logits,) + outputs[1:] - - if encoder_hidden_states is None: - return FlaxCausalLMOutput(logits=lm_logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions) - else: - return FlaxSeq2SeqLMOutput( - logits=lm_logits, - decoder_hidden_states=outputs.hidden_states, - 
decoder_attentions=outputs.attentions, - cross_attentions=outputs.cross_attentions, - encoder_last_hidden_state=encoder_hidden_states, - encoder_hidden_states=None, - encoder_attentions=None, - ) - -@add_start_docstrings( - """ - The GPT2 Model transformer with a language modeling head on top (linear layer with weights tied to the input - embeddings). - """, - GPT2_START_DOCSTRING, -) -class FlaxGPT2LMHeadModel(FlaxGPT2PreTrainedModel): - module_class = FlaxGPT2LMHeadModule - - def prepare_inputs_for_generation(self, input_ids, max_length, attention_mask: Optional[jnp.DeviceArray] = None): - # initializing the cache - batch_size, seq_length = input_ids.shape - - past_key_values = self.init_cache(batch_size, max_length) - # Note that usually one would have to put 0's in the attention_mask for x > input_ids.shape[-1] and x < cache_length. - # But since GPT2 uses a causal mask, those positions are masked anyways. - # Thus we can create a single static attention_mask here, which is more efficient for compilation - extended_attention_mask = jnp.ones((batch_size, max_length), dtype="i4") - if attention_mask is not None: - position_ids = attention_mask.cumsum(axis=-1) - 1 - extended_attention_mask = lax.dynamic_update_slice(extended_attention_mask, attention_mask, (0, 0)) - else: - position_ids = jnp.broadcast_to(jnp.arange(seq_length, dtype="i4")[None, :], (batch_size, seq_length)) - - return { - "past_key_values": past_key_values, - "attention_mask": extended_attention_mask, - "position_ids": position_ids, - } - - def update_inputs_for_generation(self, model_outputs, model_kwargs): - model_kwargs["past_key_values"] = model_outputs.past_key_values - model_kwargs["position_ids"] = model_kwargs["position_ids"][:, -1:] + 1 - return model_kwargs - - -append_call_sample_docstring( - FlaxGPT2LMHeadModel, _TOKENIZER_FOR_DOC, _CHECKPOINT_FOR_DOC, FlaxCausalLMOutput, _CONFIG_FOR_DOC -) diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/objectscollaborationenv.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/objectscollaborationenv.py deleted file mode 100644 index f354f516ec83790f1981d43318d68631c329405d..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/objectscollaborationenv.py +++ /dev/null @@ -1,869 +0,0 @@ -import time - -import numpy as np -from gym_minigrid.social_ai_envs.socialaigrammar import SocialAIGrammar, SocialAIActions, SocialAIActionSpace -from gym_minigrid.minigrid import * -from gym_minigrid.register import register -import time -from collections import deque - - -class Partner(NPC): - """ - A simple NPC that knows who is telling the truth - """ - def __init__(self, color, name, env): - super().__init__(color) - self.name = name - self.env = env - self.npc_dir = 1 # NPC initially looks downward - # todo: this should be id == name - self.npc_type = 0 # this will be put into the encoding - - self.npc_side = "L" if self.env.agent_side == "R" else "R" - assert {self.npc_side, self.env.agent_side} == {"L", "R"} - - self.target_obj = None - - self.was_introduced_to = False - - self.ate_an_apple = False - self.demo_over = False - self.demo_over_and_position_safe = False - self.apple_unlocked_for_agent = False - - self.list_of_possible_utterances = [ - *self.list_of_possible_utterances, - "Hot", # change to hot -> all with small letters - "Warm", - "Medium", - "Cold", - *COLOR_NAMES - ] - - assert 
self.env.grammar.contains_utterance(self.introduction_statement) - - def step(self, utterance): - - reply, info = super().step() - - if self.env.hidden_npc: - return reply, info - - if self.npc_side == "L": - # the npc waits for the agent to open one of the right boxes, and then uses the object of the same color - action = None - if self.env.chosen_left_obj is not None: - self.target_obj = self.env.chosen_left_obj - - if type(self.target_obj) == Switch and self.target_obj.is_on: - next_target_position = self.env.box.cur_pos - - elif type(self.target_obj) == AppleGenerator and self.target_obj.is_pressed: - next_target_position = self.env.left_generator_platform.cur_pos - - else: - next_target_position = self.target_obj.cur_pos - - if type(self.target_obj) == AppleGenerator and not self.target_obj.is_pressed: - # we have to activate the generator - if not self.env.generator.marble_activation: - # push generator - action = self.path_to_pos(next_target_position) - else: - # find angle - if self.env.marble.moving_dir is None: - distance = (self.env.marble.cur_pos - self.target_obj.cur_pos) - - diff = np.sign(distance) - if sum(abs(diff)) == 1: - push_pos = self.env.marble.cur_pos + diff - if all(self.cur_pos == push_pos): - next_target_position = self.env.marble.cur_pos - else: - next_target_position = push_pos - - # go to loc in front of - # push - action = self.path_to_pos(next_target_position) - - else: - action = None - - else: - # toggle all other objects - action = self.path_to_toggle_pos(next_target_position) - else: - action = self.turn_to_see_agent() - - else: - if self.ate_an_apple: - action = self.turn_to_see_agent() - else: - # toggle the chosen box then the apple - if self.target_obj is None: - self.target_obj = self.env._rand_elem([ - self.env.right_box1, - self.env.right_box2 - ]) - - action = self.path_to_toggle_pos(self.target_obj.cur_pos) - - if self.npc_side == "R": - eaten_before = self.env.right_apple.eaten - else: - eaten_before = self.env.left_apple.eaten - - if action is not None: - action() - - if not self.ate_an_apple: - # check if the NPC ate the apple - if self.npc_side == "R": - self.ate_an_apple = not eaten_before and self.env.right_apple.eaten - else: - self.ate_an_apple = not eaten_before and self.env.left_apple.eaten - - info = { - "prim_action": action.__name__ if action is not None else "no_op", - "utterance": "no_op", - "was_introduced_to": self.was_introduced_to - } - - reply = None - - return reply, info - - def is_point_from_loc(self, pos): - target_pos = self.target_obj.cur_pos - if self.distractor_obj is not None: - distractor_pos = self.distractor_obj.cur_pos - else: - distractor_pos = [None, None] - - if self.env.is_in_marble_way(pos): - return False - - if any(pos == target_pos): - same_ind = np.argmax(target_pos == pos) - - if pos[same_ind] != distractor_pos[same_ind]: - return True - - if pos[same_ind] == distractor_pos[same_ind]: - # if in between - if distractor_pos[1-same_ind] < pos[1-same_ind] < target_pos[1-same_ind]: - return True - - if distractor_pos[1-same_ind] > pos[1-same_ind] > target_pos[1-same_ind]: - return True - - return False - - def find_point_from_loc(self): - reject_fn = lambda env, p: not self.is_point_from_loc(p) - - point = self.env.find_loc(size=(self.env.wall_x, self.env.wall_y), reject_fn=reject_fn, reject_agent_pos=False) - - assert all(point < np.array([self.env.wall_x, self.env.wall_y])) - assert all(point > np.array([0, 0])) - - return point - - -class ObjectsCollaborationEnv(MultiModalMiniGridEnv): - """ - Environment 
in which the agent is instructed to go to a given object - named using an English text string - """ - - def __init__( - self, - size=10, - diminished_reward=True, - step_penalty=False, - knowledgeable=False, - max_steps=80, - hidden_npc=False, - switch_no_light=True, - reward_diminish_factor=0.1, - see_through_walls=False, - egocentric_observation=True, - ): - assert size >= 5 - self.empty_symbol = "NA \n" - self.diminished_reward = diminished_reward - self.step_penalty = step_penalty - self.knowledgeable = knowledgeable - self.hidden_npc = hidden_npc - self.hear_yourself = False - self.switch_no_light = switch_no_light - - self.grammar = SocialAIGrammar() - - self.init_done = False - # parameters - to be set in reset - self.parameters = None - - # encoding size should be 5 - self.add_npc_direction = True - self.add_npc_point_direction = True - self.add_npc_last_prim_action = True - - self.reward_diminish_factor = reward_diminish_factor - - self.egocentric_observation = egocentric_observation - self.encoding_size = 3 + 2*bool(not self.egocentric_observation) + bool(self.add_npc_direction) + bool(self.add_npc_point_direction) + bool(self.add_npc_last_prim_action) - - super().__init__( - grid_size=size, - max_steps=max_steps, - # Set this to True for maximum speed - see_through_walls=see_through_walls, - actions=SocialAIActions, # primitive actions - action_space=SocialAIActionSpace, - add_npc_direction=self.add_npc_direction, - add_npc_point_direction=self.add_npc_point_direction, - add_npc_last_prim_action=self.add_npc_last_prim_action, - reward_diminish_factor=self.reward_diminish_factor, - ) - self.all_npc_utterance_actions = Partner.get_list_of_possible_utterances() - self.prim_actions_dict = SocialAINPCActionsDict - - def revert(self): - self.put_objects_in_env(remove_objects=True) - - def is_in_marble_way(self, pos): - target_pos = self.generator_current_pos - - # generator distractor is in the same row / collumn as the marble and the generator - # if self.distractor_current_pos is not None: - # distractor_pos = self.distractor_current_pos - # else: - # distractor_pos = [None, None] - - if self.problem in ["Marble"]: - # point can't be in the same row or column as both the marble and the generator - # all three: marble, generator, loc are in the same row or column - if any((pos == target_pos) * (pos == self.marble_current_pos)): - # all three: marble, generator, loc are in the same row or column -> is in its way - return True - - # is it in the way for the distractor generator - if any((pos == self.distractor_current_pos) * (pos == self.marble_current_pos)): - # all three: marble, distractor generator, loc are in the same row or column -> is in its way - return True - - # all good - return False - - def _gen_grid(self, width_, height_): - # Create the grid - self.grid = Grid(width_, height_, nb_obj_dims=self.encoding_size) - - # new - min_w = min(9, width_) - min_h = min(9, height_) - self.current_width = self._rand_int(min_w, width_+1) - self.current_height = self._rand_int(min_h, height_+1) - - self.wall_x = self.current_width-1 - self.wall_y = self.current_height-1 - - # Generate the surrounding walls - self.grid.wall_rect(0, 0, self.current_width, self.current_height) - - # problem: Apples/Boxes/Switches/Generators/Marbles - self.problem = self.parameters["Problem"] if self.parameters else "Apples" - num_of_colors = self.parameters.get("Num_of_colors", None) if self.parameters else None - self.version = self.parameters["Version"] if self.parameters else "Asocial" - self.role = 
self.parameters["Role"] if self.parameters else "A" - assert self.role in ["A", "B", "Meta"] - - if self.role in ["B", "Meta"]: - self.agent_side = "R" # starts on the right side - else: - self.agent_side = "L" # starts on the right side - - self.add_obstacles() - - # apple - - # box - locked = self.problem == "Switches" - - if num_of_colors is None: - POSSIBLE_COLORS = COLOR_NAMES.copy() - - else: - POSSIBLE_COLORS = COLOR_NAMES[:int(num_of_colors)].copy() - - self.left_half_size = (self.current_width//2, self.current_height) - self.left_half_top = (0, 0) - - self.right_half_size = (self.current_width//2 - 1, self.current_height) - self.right_half_top = (self.current_width - self.current_width // 2 + 1, 0) - - # add fence to grid - self.grid.vert_wall( - x=self.current_width//2 + 1, # one collumn to the right of the center - y=1, - length=self.current_height - 2, - obj_type=Fence - ) - - self.right_box1_color = self._rand_elem(POSSIBLE_COLORS) - POSSIBLE_COLORS.remove(self.right_box1_color) - - self.right_box2_color = self._rand_elem(POSSIBLE_COLORS) - - assert self.right_box1_color != self.right_box2_color - - POSSIBLE_COLORS_LEFT = [self.right_box1_color, self.right_box2_color] - - self.left_color_1 = self._rand_elem(POSSIBLE_COLORS_LEFT) - POSSIBLE_COLORS_LEFT.remove(self.left_color_1) - self.left_color_2 = self._rand_elem(POSSIBLE_COLORS_LEFT) - - - self.box_color = self.left_color_1 - # find the position for the apple/box/generator_platform - self.left_apple_current_pos = self.find_loc( - size=self.left_half_size, - top=self.left_half_top, - reject_agent_pos=True - ) - - # right boxes - self.right_box1_current_pos = self.find_loc( - size=self.right_half_size, - top=self.right_half_top, - reject_agent_pos=True - ) - self.right_box2_current_pos = self.find_loc( - size=self.right_half_size, - top=self.right_half_top, - reject_agent_pos=True, - reject_fn=lambda _, pos: tuple(pos) in map(tuple, [self.right_box1_current_pos]), - ) - assert all(self.left_apple_current_pos < np.array([self.current_width - 1, self.current_height - 1])) - - # switch - # self.switch_pos = (self.current_width, self.current_height) - self.switch_color = self.left_color_1 - self.switch_current_pos = self.find_loc( - top=self.left_half_top, - size=self.left_half_size, - reject_agent_pos=True, - reject_fn=lambda _, pos: tuple(pos) in map(tuple, [self.left_apple_current_pos]), - ) - - # generator - # self.generator_pos = (self.current_width, self.current_height) - self.generator_color = self.left_color_1 - self.generator_current_pos = self.find_loc( - top=self.left_half_top, - size=self.left_half_size, - reject_agent_pos=True, - reject_fn=lambda _, pos: ( - tuple(pos) in map(tuple, [self.left_apple_current_pos]) - or - (self.problem in ["Marbles", "Marble"] and tuple(pos) in [ - # not in corners - (1, 1), - (self.current_width-2, 1), - (1, self.current_height-2), - (self.current_width-2, self.current_height-2), - ]) - or - # not in the same row collumn as the platform - (self.problem in ["Marbles", "Marble"] and any(pos == self.left_apple_current_pos)) - ), - ) - - # generator platform - self.left_generator_platform_color = self._rand_elem(POSSIBLE_COLORS) - - # marbles - # self.marble_pos = (self.current_width, self.current_height) - self.marble_color = self._rand_elem(POSSIBLE_COLORS) - self.marble_current_pos = self.find_loc( - top=self.left_half_top, - size=self.left_half_size, - reject_agent_pos=True, - reject_fn=lambda _, pos: self.problem in ["Marbles", "Marble"] and ( - tuple(pos) in map(tuple, 
[self.left_apple_current_pos, self.generator_current_pos]) - or - all(pos != self.generator_current_pos) # reject if not in row or column as the generator - or - any(pos == 1) # next to a wall - or - pos[1] == self.current_height-2 - or - pos[0] == self.current_width-2 - ), - ) - - self.distractor_color = self.left_color_2 - # self.distractor_pos = (self.current_width, self.current_height) - - if self.problem in ["Apples", "Boxes"]: - distractor_reject_fn = lambda _, pos: tuple(pos) in map(tuple, [self.left_apple_current_pos]) - - elif self.problem in ["Switches"]: - distractor_reject_fn = lambda _, pos: tuple(pos) in map(tuple, [self.left_apple_current_pos, self.switch_current_pos]) - - elif self.problem in ["Generators"]: - distractor_reject_fn = lambda _, pos: tuple(pos) in map(tuple, [self.left_apple_current_pos, self.generator_current_pos]) - - elif self.problem in ["Marbles", "Marble"]: - # problem is marbles - same_dim = (self.generator_current_pos == self.marble_current_pos).argmax() - distactor_same_dim = 1-same_dim - distractor_reject_fn = lambda _, pos: tuple(pos) in map(tuple, [ - self.left_apple_current_pos, - self.generator_current_pos, - self.marble_current_pos - ]) or pos[distactor_same_dim] != self.marble_current_pos[distactor_same_dim] - # todo: not in corners -> but it's not that important - # or tuple(pos) in [ - # # not in corners - # (1, 1), - # (self.current_width-2, 1), - # (1, self.current_height-2), - # (self.current_width-2, self.current_height-2), - # ]) - - else: - raise ValueError("Problem {} indefined.".format(self.problem)) - - self.distractor_current_pos = self.find_loc( - top=self.left_half_top, - size=self.left_half_size, - reject_agent_pos=True, - # todo: reject based on problem - reject_fn=distractor_reject_fn - ) - - self.put_objects_in_env() - - # place agent - if self.agent_side == "L": - self.place_agent(size=self.left_half_size, top=self.left_half_top) - else: - self.place_agent(size=self.right_half_size, top=self.right_half_top) - - # NPC - if self.version == "Social": - self.npc_color = self._rand_elem(COLOR_NAMES) - self.caretaker = Partner(self.npc_color, "Partner", self) - - if self.agent_side == "L": - self.place_obj(self.caretaker, size=self.right_half_size, top=self.right_half_top, reject_fn=ObjectsCollaborationEnv.is_in_marble_way) - else: - self.place_obj(self.caretaker, size=self.left_half_size, top=self.left_half_top, reject_fn=ObjectsCollaborationEnv.is_in_marble_way) - - # Generate the mission string - self.mission = 'lets collaborate' - - # Dummy beginning string - # self.beginning_string = "This is what you hear. \n" - self.beginning_string = "Conversation: \n" # todo: go back to "this what you hear? 
- self.utterance = self.beginning_string - - # utterance appended at the end of each step - self.utterance_history = "" - - # used for rendering - self.full_conversation = self.utterance - self.outcome_info = None - - def put_objects_in_env(self, remove_objects=False): - - assert self.left_apple_current_pos is not None - assert self.right_box1_current_pos is not None - assert self.right_box2_current_pos is not None - assert self.switch_current_pos is not None - - self.switches_block_set = [] - self.boxes_block_set = [] - self.right_boxes_block_set = [] - self.generators_block_set = [] - - self.other_box = None - self.other_switch = None - self.other_generator = None - - # problem: Apples/Boxes/Switches/Generators - assert self.problem == self.parameters["Problem"] if self.parameters else "Apples" - - # move objects (used only in revert), not in gen_grid - if remove_objects: - # remove apple or box - # assert type(self.grid.get(*self.apple_current_pos)) in [Apple, LockableBox] - # self.grid.set(*self.apple_current_pos, None) - - # remove apple (after demo it must be an apple) - assert type(self.grid.get(*self.left_apple_current_pos)) in [Apple] - self.grid.set(*self.left_apple_current_pos, None) - - self.grid.set(*self.right_apple_current_pos, None) - - if self.problem in ["Switches"]: - # remove switch - assert type(self.grid.get(*self.switch_current_pos)) in [Switch] - self.grid.set(*self.switch.cur_pos, None) - - elif self.problem in ["Generators", "Marbles", "Marble"]: - # remove generator - assert type(self.grid.get(*self.generator.cur_pos)) in [AppleGenerator] - self.grid.set(*self.generator.cur_pos, None) - - if self.problem in ["Marbles", "Marble"]: - # remove generator - assert type(self.grid.get(*self.marble.cur_pos)) in [Marble] - self.grid.set(*self.marble.cur_pos, None) - - if self.marble.tee_uncovered: - self.grid.set(*self.marble.tee.cur_pos, None) - - elif self.problem in ["Apples", "Boxes"]: - pass - - else: - raise ValueError("Undefined problem {}".format(self.problem)) - - # remove distractor - if self.problem in ["Boxes", "Switches", "Generators", "Marbles", "Marble"]: - assert type(self.grid.get(*self.distractor_current_pos)) in [LockableBox, Switch, AppleGenerator] - self.grid.set(*self.distractor_current_pos, None) - - # apple - self.left_apple = Apple() - self.right_apple = Apple() - - # right apple - self.right_box1 = LockableBox( - self.right_box1_color, - contains=self.right_apple, - is_locked=False, - block_set=self.right_boxes_block_set - ) - self.right_boxes_block_set.append(self.right_box1) - - # right apple - self.right_box2 = LockableBox( - self.right_box2_color, - contains=self.right_apple, - is_locked=False, - block_set=self.right_boxes_block_set - ) - self.right_boxes_block_set.append(self.right_box2) - - # Box - locked = self.problem == "Switches" - - self.box = LockableBox( - self.box_color, - # contains=self.left_apple, - is_locked=locked, - block_set=self.boxes_block_set - ) - self.boxes_block_set.append(self.box) - - # Switch - self.switch = Switch( - color=self.switch_color, - # lockable_object=self.box, - locker_switch=True, - no_turn_off=True, - no_light=self.switch_no_light, - block_set=self.switches_block_set, - ) - - self.switches_block_set.append(self.switch) - - # Generator - self.generator = AppleGenerator( - self.generator_color, - block_set=self.generators_block_set, - # on_push=lambda: self.grid.set(*self.left_apple_current_pos, self.left_apple), - marble_activation=self.problem in ["Marble"], - ) - 
self.generators_block_set.append(self.generator) - - self.left_generator_platform = GeneratorPlatform(self.left_generator_platform_color) - - self.marble = Marble(self.marble_color, env=self) - - # right side - self.put_obj_np(self.right_box1, self.right_box1_current_pos) - self.put_obj_np(self.right_box2, self.right_box2_current_pos) - - self.candidate_objects=[] - # left side - if self.problem == "Apples": - self.put_obj_np(self.left_apple, self.left_apple_current_pos) - self.candidate_objects.append(self.left_apple) - - elif self.problem in ["Boxes"]: - self.put_obj_np(self.box, self.left_apple_current_pos) - self.candidate_objects.append(self.box) - - elif self.problem in ["Switches"]: - self.put_obj_np(self.box, self.left_apple_current_pos) - self.put_obj_np(self.switch, self.switch_current_pos) - self.candidate_objects.append(self.switch) - - elif self.problem in ["Generators", "Marble"]: - self.put_obj_np(self.generator, self.generator_current_pos) - self.put_obj_np(self.left_generator_platform, self.left_apple_current_pos) - self.candidate_objects.append(self.generator) - - if self.problem in ["Marble"]: - self.put_obj_np(self.marble, self.marble_current_pos) - - else: - raise ValueError("Problem {} not defined. ".format(self.problem)) - - # Distractors - if self.problem == "Boxes": - assert not locked - - self.other_box = LockableBox( - self.left_color_2, - is_locked=locked, - block_set=self.boxes_block_set, - ) - self.boxes_block_set.append(self.other_box) - - self.put_obj_np(self.other_box, self.distractor_current_pos) - self.candidate_objects.append(self.other_box) - - elif self.problem == "Switches": - self.other_switch = Switch( - color=self.left_color_2, - locker_switch=True, - no_turn_off=True, - no_light=self.switch_no_light, - block_set=self.switches_block_set, - ) - self.switches_block_set.append(self.other_switch) - - self.put_obj_np(self.other_switch, self.distractor_current_pos) - self.candidate_objects.append(self.other_switch) - - elif self.problem in ["Generators", "Marble"]: - self.other_generator = AppleGenerator( - color=self.left_color_2, - block_set=self.generators_block_set, - marble_activation=self.problem in ["Marble"], - ) - self.generators_block_set.append(self.other_generator) - - self.put_obj_np(self.other_generator, self.distractor_current_pos) - self.candidate_objects.append(self.other_generator) - - def reset( - self, *args, **kwargs - ): - # This env must be used inside the parametric env - if not kwargs: - # The only place when kwargs can empty is during the class construction - # reset should be called again before using the env (paramenv does it in its constructor) - assert self.parameters is None - assert not self.init_done - self.init_done = True - - obs = super().reset() - return obs - - else: - assert self.init_done - - self.parameters = dict(kwargs) - - assert self.parameters is not None - assert len(self.parameters) > 0 - - obs = super().reset() - - self.agent_ate_an_apple = False - self.chosen_right_box = None - self.chosen_left_obj = None - - return obs - - def step(self, action): - success = False - p_action = action[0] - utterance_action = action[1:] - - left_apple_had_been_eaten = self.left_apple.eaten - right_apple_had_been_eaten = self.right_apple.eaten - - # primitive actions - _, reward, done, info = super().step(p_action) - - if self.problem in ["Marbles", "Marble"]: - # todo: create objects which can stepped automatically? 
- self.marble.step() - - if not self.agent_ate_an_apple: - if self.agent_side == "L": - self.agent_ate_an_apple = self.left_apple.eaten and not left_apple_had_been_eaten - else: - self.agent_ate_an_apple = self.right_apple.eaten and not right_apple_had_been_eaten - - if self.right_box1.is_open: - self.chosen_right_box = self.right_box1 - - if self.right_box2.is_open: - self.chosen_right_box = self.right_box2 - - if self.chosen_right_box is not None: - chosen_color = self.chosen_right_box.color - self.chosen_left_obj = [o for o in self.candidate_objects if o.color == chosen_color][0] - - if type(self.chosen_left_obj) == LockableBox: - self.chosen_left_obj.contains = self.left_apple - - elif type(self.chosen_left_obj) == Switch: - self.chosen_left_obj.lockable_object = self.box - self.box.contains = self.left_apple - - elif type(self.chosen_left_obj) == AppleGenerator: - self.chosen_left_obj.on_push=lambda: self.grid.set(*self.left_apple_current_pos, self.left_apple) - - else: - raise ValueError("Unknown target object.") - - # utterances - agent_spoke = not all(np.isnan(utterance_action)) - if agent_spoke: - utterance = self.grammar.construct_utterance(utterance_action) - - if self.hear_yourself: - self.utterance += "YOU: {} \n".format(utterance) - self.full_conversation += "YOU: {} \n".format(utterance) - else: - utterance = None - - if self.version == "Social": - reply, npc_info = self.caretaker.step(utterance) - - if reply: - self.utterance += "{}: {} \n".format(self.caretaker.name, reply) - self.full_conversation += "{}: {} \n".format(self.caretaker.name, reply) - else: - npc_info = { - "prim_action": "no_op", - "utterance": "no_op", - "was_introduced_to": False, - } - - - # aftermath - if p_action == self.actions.done: - done = True - - if (self.role in ["A", "B"] or self.version == "Asocial") and self.agent_ate_an_apple: - reward = self._reward() - success = True - done = True - - elif self.role == "Meta" and self.version == "Social" and self.agent_ate_an_apple and self.caretaker.ate_an_apple: - - if self.agent_side == "L": - reward = self._reward() / 2 - success = True - done = True - - else: - # revert and rotate - reward = self._reward() / 2 - self.agent_ate_an_apple = False - self.caretaker.ate_an_apple = False - self.agent_side = "L" - self.put_objects_in_env(remove_objects=True) - - # teleport the agent and the NPC - self.place_agent(size=self.left_half_size, top=self.left_half_top) - - self.grid.set(*self.caretaker.cur_pos, None) - - self.caretaker = Partner(self.npc_color, "Partner", self) - self.place_obj(self.caretaker, size=self.right_half_size, top=self.right_half_top, reject_fn=ObjectsCollaborationEnv.is_in_marble_way) - - # discount - if self.step_penalty: - reward = reward - 0.01 - - # update obs with NPC movement - obs = self.gen_obs(full_obs=self.full_obs) - - # fill observation with text - self.append_existing_utterance_to_history() - obs = self.add_utterance_to_observation(obs) - self.reset_utterance() - - if done: - if reward > 0: - self.outcome_info = "SUCCESS: agent got {} reward \n".format(np.round(reward, 1)) - else: - self.outcome_info = "FAILURE: agent got {} reward \n".format(reward) - - if self.version == "Social": - # is the npc seen by the agent - ag_view_npc = self.relative_coords(*self.caretaker.cur_pos) - - if ag_view_npc is not None: - # in the agent's field of view - ag_view_npc_x, ag_view_npc_y = ag_view_npc - - n_dims = obs['image'].shape[-1] - npc_encoding = self.caretaker.encode(n_dims) - - # is it occluded - npc_observed = 
all(obs['image'][ag_view_npc_x, ag_view_npc_y] == npc_encoding) - else: - npc_observed = False - else: - npc_observed = False - - info = {**info, **{"NPC_"+k: v for k, v in npc_info.items()}} - - info["NPC_observed"] = npc_observed - info["success"] = success - - return obs, reward, done, info - - def _reward(self): - if self.diminished_reward: - return super()._reward() - else: - return 1.0 - - # def render(self, *args, **kwargs): - # obs = super().render(*args, **kwargs) - # self.window.clear_text() # erase previous text - # self.window.set_caption(self.full_conversation) - # - # # self.window.ax.set_title("correct color: {}".format(self.box.target_color), loc="left", fontsize=10) - # - # if self.outcome_info: - # color = None - # if "SUCCESS" in self.outcome_info: - # color = "lime" - # elif "FAILURE" in self.outcome_info: - # color = "red" - # self.window.add_text(*(0.01, 0.85, self.outcome_info), - # **{'fontsize': 15, 'color': color, 'weight': "bold"}) - # - # self.window.show_img(obs) # re-draw image to add changes to window - # return obs - -register( - id='SocialAI-ObjectsCollaboration-v0', - entry_point='gym_minigrid.social_ai_envs:ObjectsCollaborationEnv' -) \ No newline at end of file diff --git a/spaces/flowers-team/SocialAISchool/models/dialogue_memory_multiheadedac.py b/spaces/flowers-team/SocialAISchool/models/dialogue_memory_multiheadedac.py deleted file mode 100644 index 7c053f49218115f745b967b44cf769fef7a0ae6c..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/models/dialogue_memory_multiheadedac.py +++ /dev/null @@ -1,170 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.distributions.categorical import Categorical -import torch_ac - - -from utils.other import init_params - - -class DialogueMemoryMultiHeadedACModel(nn.Module, torch_ac.RecurrentACModel): - def __init__(self, obs_space, action_space, use_memory=False, use_text=False, use_dialogue=False): - super().__init__() - - # Decide which components are enabled - self.use_text = use_text - self.use_dialogue = use_dialogue - self.use_memory = use_memory - - if not self.use_memory: - raise ValueError("You should not be using this model. Use MultiHeadedACModel instead") - - if not self.use_dialogue: - raise ValueError("You should not be using this model. Use ACModel instead") - - if self.use_text: - raise ValueError("You should not use text but dialogue.") - - # multi dim - if action_space.shape == (): - raise ValueError("The action space is not multi modal. 
Use ACModel instead.") - - self.n_primitive_actions = action_space.nvec[0] + 1 # for talk - self.talk_action = int(self.n_primitive_actions) - 1 - - self.n_utterance_actions = action_space.nvec[1:] - - # Define image embedding - self.image_conv = nn.Sequential( - nn.Conv2d(3, 16, (2, 2)), - nn.ReLU(), - nn.MaxPool2d((2, 2)), - nn.Conv2d(16, 32, (2, 2)), - nn.ReLU(), - nn.Conv2d(32, 64, (2, 2)), - nn.ReLU() - ) - n = obs_space["image"][0] - m = obs_space["image"][1] - self.image_embedding_size = ((n-1)//2-2)*((m-1)//2-2)*64 - - if self.use_text or self.use_dialogue: - self.word_embedding_size = 32 - self.word_embedding = nn.Embedding(obs_space["text"], self.word_embedding_size) - - # Define text embedding - if self.use_text: - self.text_embedding_size = 128 - self.text_rnn = nn.GRU(self.word_embedding_size, self.text_embedding_size, batch_first=True) - - # Define dialogue embedding - if self.use_dialogue: - self.dialogue_embedding_size = 128 - self.dialogue_rnn = nn.GRU(self.word_embedding_size, self.dialogue_embedding_size, batch_first=True) - - # Resize image embedding - self.embedding_size = self.image_embedding_size - - if self.use_text: - self.embedding_size += self.text_embedding_size - - if self.use_dialogue: - self.embedding_size += self.dialogue_embedding_size - - # Define actor's model - self.actor = nn.Sequential( - nn.Linear(self.embedding_size, 64), - nn.Tanh(), - nn.Linear(64, self.n_primitive_actions) - ) - self.talker = nn.ModuleList([ - nn.Sequential( - nn.Linear(self.embedding_size, 64), - nn.Tanh(), - nn.Linear(64, n) - ) for n in self.n_utterance_actions]) - - # Define critic's model - self.critic = nn.Sequential( - nn.Linear(self.embedding_size, 64), - nn.Tanh(), - nn.Linear(64, 1) - ) - - # Initialize parameters correctly - self.apply(init_params) - - @property - def memory_size(self): - return self.dialogue_embedding_size - - def forward(self, obs, memory): - x = obs.image.transpose(1, 3).transpose(2, 3) - x = self.image_conv(x) - - batch_size = x.shape[0] - x = x.reshape(batch_size, -1) - - embedding = x - - if self.use_text: - embed_text = self._get_embed_text(obs.text) - embedding = torch.cat((embedding, embed_text), dim=1) - - if self.use_dialogue: - embed_dial, memory = self._get_embed_dialogue(obs.dialogue, memory) - embedding = torch.cat((embedding, embed_dial), dim=1) - - x = self.actor(embedding) - primitive_actions_dist = Categorical(logits=F.log_softmax(x, dim=1)) - - x = self.critic(embedding) - value = x.squeeze(1) - utterance_actions_dists = [ - Categorical(logits=F.log_softmax( - tal(embedding), - dim=1, - )) for tal in self.talker - ] - - dist = [primitive_actions_dist] + utterance_actions_dists - - return dist, value, memory - - def sample_action(self, dist): - return torch.stack([d.sample() for d in dist], dim=1) - - def calculate_log_probs(self, dist, action): - return torch.stack([d.log_prob(action[:, i]) for i, d in enumerate(dist)], dim=1) - - def calculate_action_masks(self, action): - talk_mask = action[:, 0] == self.talk_action - mask = torch.stack( - (torch.ones_like(talk_mask), talk_mask, talk_mask), - dim=1).detach() - - assert action.shape == mask.shape - - return mask - - def construct_final_action(self, action): - act_mask = action[:, 0] != self.n_primitive_actions - 1 - - nan_mask = np.array([ - np.array([1, np.nan, np.nan]) if t else np.array([np.nan, 1, 1]) for t in act_mask - ]) - - action = nan_mask*action - - return action - - def _get_embed_text(self, text): - _, hidden = self.text_rnn(self.word_embedding(text)) - - return 
hidden[-1] - - def _get_embed_dialogue(self, dial, memory): - _, hidden = self.dialogue_rnn(self.word_embedding(dial), ) - return hidden[-1], hidden[-1] diff --git a/spaces/fr1ll/sketch-to-1d-SRME/README.md b/spaces/fr1ll/sketch-to-1d-SRME/README.md deleted file mode 100644 index b3bf30cffe539e906f3122a13da3e3fefa30f611..0000000000000000000000000000000000000000 --- a/spaces/fr1ll/sketch-to-1d-SRME/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Sketch To Srme -emoji: 🏃 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/matrix_transpose/run.py b/spaces/freddyaboulton/3.1.4.9-all-demos/demos/matrix_transpose/run.py deleted file mode 100644 index 1fa9ed34184ec6c6063305cf71b2a662222d5207..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/matrix_transpose/run.py +++ /dev/null @@ -1,24 +0,0 @@ -import numpy as np - -import gradio as gr - - -def transpose(matrix): - return matrix.T - - -demo = gr.Interface( - transpose, - gr.Dataframe(type="numpy", datatype="number", row_count=5, col_count=3), - "numpy", - examples=[ - [np.zeros((3, 3)).tolist()], - [np.ones((2, 2)).tolist()], - [np.random.randint(0, 10, (3, 10)).tolist()], - [np.random.randint(0, 10, (10, 3)).tolist()], - [np.random.randint(0, 10, (10, 10)).tolist()], - ], -) - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/ghoskno/ColorCanny-Controlnet/lpw.py b/spaces/ghoskno/ColorCanny-Controlnet/lpw.py deleted file mode 100644 index 7c6bcaea93a83e3781728eaeb16224bfcf1f01ce..0000000000000000000000000000000000000000 --- a/spaces/ghoskno/ColorCanny-Controlnet/lpw.py +++ /dev/null @@ -1,389 +0,0 @@ -import re -from typing import List, Optional, Union - -import torch - -from diffusers import StableDiffusionPipeline - - -re_attention = re.compile( - r""" -\\\(| -\\\)| -\\\[| -\\]| -\\\\| -\\| -\(| -\[| -:([+-]?[.\d]+)\)| -\)| -]| -[^\\()\[\]:]+| -: -""", - re.X, -) - -def parse_prompt_attention(text): - """ - Parses a string with attention tokens and returns a list of pairs: text and its associated weight. 
- Accepted tokens are: - (abc) - increases attention to abc by a multiplier of 1.1 - (abc:3.12) - increases attention to abc by a multiplier of 3.12 - [abc] - decreases attention to abc by a multiplier of 1.1 - \( - literal character '(' - \[ - literal character '[' - \) - literal character ')' - \] - literal character ']' - \\ - literal character '\' - anything else - just text - >>> parse_prompt_attention('normal text') - [['normal text', 1.0]] - >>> parse_prompt_attention('an (important) word') - [['an ', 1.0], ['important', 1.1], [' word', 1.0]] - >>> parse_prompt_attention('(unbalanced') - [['unbalanced', 1.1]] - >>> parse_prompt_attention('\(literal\]') - [['(literal]', 1.0]] - >>> parse_prompt_attention('(unnecessary)(parens)') - [['unnecessaryparens', 1.1]] - >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).') - [['a ', 1.0], - ['house', 1.5730000000000004], - [' ', 1.1], - ['on', 1.0], - [' a ', 1.1], - ['hill', 0.55], - [', sun, ', 1.1], - ['sky', 1.4641000000000006], - ['.', 1.1]] - """ - - res = [] - round_brackets = [] - square_brackets = [] - - round_bracket_multiplier = 1.1 - square_bracket_multiplier = 1 / 1.1 - - def multiply_range(start_position, multiplier): - for p in range(start_position, len(res)): - res[p][1] *= multiplier - - for m in re_attention.finditer(text): - text = m.group(0) - weight = m.group(1) - - if text.startswith("\\"): - res.append([text[1:], 1.0]) - elif text == "(": - round_brackets.append(len(res)) - elif text == "[": - square_brackets.append(len(res)) - elif weight is not None and len(round_brackets) > 0: - multiply_range(round_brackets.pop(), float(weight)) - elif text == ")" and len(round_brackets) > 0: - multiply_range(round_brackets.pop(), round_bracket_multiplier) - elif text == "]" and len(square_brackets) > 0: - multiply_range(square_brackets.pop(), square_bracket_multiplier) - else: - res.append([text, 1.0]) - - for pos in round_brackets: - multiply_range(pos, round_bracket_multiplier) - - for pos in square_brackets: - multiply_range(pos, square_bracket_multiplier) - - if len(res) == 0: - res = [["", 1.0]] - - # merge runs of identical weights - i = 0 - while i + 1 < len(res): - if res[i][1] == res[i + 1][1]: - res[i][0] += res[i + 1][0] - res.pop(i + 1) - else: - i += 1 - - return res - - -def get_prompts_with_weights(pipe: StableDiffusionPipeline, prompt: List[str], max_length: int): - r""" - Tokenize a list of prompts and return its tokens with weights of each token. - - No padding, starting or ending token is included. - """ - tokens = [] - weights = [] - truncated = False - for text in prompt: - texts_and_weights = parse_prompt_attention(text) - text_token = [] - text_weight = [] - for word, weight in texts_and_weights: - # tokenize and discard the starting and the ending token - token = pipe.tokenizer(word).input_ids[1:-1] - text_token += token - # copy the weight by length of token - text_weight += [weight] * len(token) - # stop if the text is too long (longer than truncation limit) - if len(text_token) > max_length: - truncated = True - break - # truncate - if len(text_token) > max_length: - truncated = True - text_token = text_token[:max_length] - text_weight = text_weight[:max_length] - tokens.append(text_token) - weights.append(text_weight) - if truncated: - logger.warning("Prompt was truncated. 
Try to shorten the prompt or increase max_embeddings_multiples") - return tokens, weights - - -def pad_tokens_and_weights(tokens, weights, max_length, bos, eos, no_boseos_middle=True, chunk_length=77): - r""" - Pad the tokens (with starting and ending tokens) and weights (with 1.0) to max_length. - """ - max_embeddings_multiples = (max_length - 2) // (chunk_length - 2) - weights_length = max_length if no_boseos_middle else max_embeddings_multiples * chunk_length - for i in range(len(tokens)): - tokens[i] = [bos] + tokens[i] + [eos] * (max_length - 1 - len(tokens[i])) - if no_boseos_middle: - weights[i] = [1.0] + weights[i] + [1.0] * (max_length - 1 - len(weights[i])) - else: - w = [] - if len(weights[i]) == 0: - w = [1.0] * weights_length - else: - for j in range(max_embeddings_multiples): - w.append(1.0) # weight for starting token in this chunk - w += weights[i][j * (chunk_length - 2) : min(len(weights[i]), (j + 1) * (chunk_length - 2))] - w.append(1.0) # weight for ending token in this chunk - w += [1.0] * (weights_length - len(w)) - weights[i] = w[:] - - return tokens, weights - -def get_unweighted_text_embeddings( - pipe: StableDiffusionPipeline, - text_input: torch.Tensor, - chunk_length: int, - no_boseos_middle: Optional[bool] = True, -): - """ - When the length of tokens is a multiple of the capacity of the text encoder, - it should be split into chunks and sent to the text encoder individually. - """ - max_embeddings_multiples = (text_input.shape[1] - 2) // (chunk_length - 2) - if max_embeddings_multiples > 1: - text_embeddings = [] - for i in range(max_embeddings_multiples): - # extract the i-th chunk - text_input_chunk = text_input[:, i * (chunk_length - 2) : (i + 1) * (chunk_length - 2) + 2].clone() - - # cover the head and the tail by the starting and the ending tokens - text_input_chunk[:, 0] = text_input[0, 0] - text_input_chunk[:, -1] = text_input[0, -1] - text_embedding = pipe.text_encoder(text_input_chunk)[0] - - if no_boseos_middle: - if i == 0: - # discard the ending token - text_embedding = text_embedding[:, :-1] - elif i == max_embeddings_multiples - 1: - # discard the starting token - text_embedding = text_embedding[:, 1:] - else: - # discard both starting and ending tokens - text_embedding = text_embedding[:, 1:-1] - - text_embeddings.append(text_embedding) - text_embeddings = torch.concat(text_embeddings, axis=1) - else: - text_embeddings = pipe.text_encoder(text_input)[0] - return text_embeddings - - -def get_weighted_text_embeddings( - pipe: StableDiffusionPipeline, - prompt: Union[str, List[str]], - uncond_prompt: Optional[Union[str, List[str]]] = None, - max_embeddings_multiples: Optional[int] = 3, - no_boseos_middle: Optional[bool] = False, - skip_parsing: Optional[bool] = False, - skip_weighting: Optional[bool] = False, - **kwargs, -): - r""" - Prompts can be assigned with local weights using brackets. For example, - prompt 'A (very beautiful) masterpiece' highlights the words 'very beautiful', - and the embedding tokens corresponding to the words get multiplied by a constant, 1.1. - - Also, to regularize of the embedding, the weighted embedding would be scaled to preserve the original mean. - - Args: - pipe (`StableDiffusionPipeline`): - Pipe to provide access to the tokenizer and the text encoder. - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - uncond_prompt (`str` or `List[str]`): - The unconditional prompt or prompts for guide the image generation. 
If unconditional prompt - is provided, the embeddings of prompt and uncond_prompt are concatenated. - max_embeddings_multiples (`int`, *optional*, defaults to `3`): - The max multiple length of prompt embeddings compared to the max output length of text encoder. - no_boseos_middle (`bool`, *optional*, defaults to `False`): - If the length of text token is multiples of the capacity of text encoder, whether reserve the starting and - ending token in each of the chunk in the middle. - skip_parsing (`bool`, *optional*, defaults to `False`): - Skip the parsing of brackets. - skip_weighting (`bool`, *optional*, defaults to `False`): - Skip the weighting. When the parsing is skipped, it is forced True. - """ - max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2 - if isinstance(prompt, str): - prompt = [prompt] - - if not skip_parsing: - prompt_tokens, prompt_weights = get_prompts_with_weights(pipe, prompt, max_length - 2) - if uncond_prompt is not None: - if isinstance(uncond_prompt, str): - uncond_prompt = [uncond_prompt] - uncond_tokens, uncond_weights = get_prompts_with_weights(pipe, uncond_prompt, max_length - 2) - else: - prompt_tokens = [ - token[1:-1] for token in pipe.tokenizer(prompt, max_length=max_length, truncation=True).input_ids - ] - prompt_weights = [[1.0] * len(token) for token in prompt_tokens] - if uncond_prompt is not None: - if isinstance(uncond_prompt, str): - uncond_prompt = [uncond_prompt] - uncond_tokens = [ - token[1:-1] - for token in pipe.tokenizer(uncond_prompt, max_length=max_length, truncation=True).input_ids - ] - uncond_weights = [[1.0] * len(token) for token in uncond_tokens] - - # round up the longest length of tokens to a multiple of (model_max_length - 2) - max_length = max([len(token) for token in prompt_tokens]) - if uncond_prompt is not None: - max_length = max(max_length, max([len(token) for token in uncond_tokens])) - - max_embeddings_multiples = min( - max_embeddings_multiples, - (max_length - 1) // (pipe.tokenizer.model_max_length - 2) + 1, - ) - max_embeddings_multiples = max(1, max_embeddings_multiples) - max_length = (pipe.tokenizer.model_max_length - 2) * max_embeddings_multiples + 2 - - # pad the length of tokens and weights - bos = pipe.tokenizer.bos_token_id - eos = pipe.tokenizer.eos_token_id - prompt_tokens, prompt_weights = pad_tokens_and_weights( - prompt_tokens, - prompt_weights, - max_length, - bos, - eos, - no_boseos_middle=no_boseos_middle, - chunk_length=pipe.tokenizer.model_max_length, - ) - prompt_tokens = torch.tensor(prompt_tokens, dtype=torch.long, device=pipe.text_encoder.device) - if uncond_prompt is not None: - uncond_tokens, uncond_weights = pad_tokens_and_weights( - uncond_tokens, - uncond_weights, - max_length, - bos, - eos, - no_boseos_middle=no_boseos_middle, - chunk_length=pipe.tokenizer.model_max_length, - ) - uncond_tokens = torch.tensor(uncond_tokens, dtype=torch.long, device=pipe.text_encoder.device) - - # get the embeddings - text_embeddings = get_unweighted_text_embeddings( - pipe, - prompt_tokens, - pipe.tokenizer.model_max_length, - no_boseos_middle=no_boseos_middle, - ) - prompt_weights = torch.tensor(prompt_weights, dtype=text_embeddings.dtype, device=pipe.text_encoder.device) - if uncond_prompt is not None: - uncond_embeddings = get_unweighted_text_embeddings( - pipe, - uncond_tokens, - pipe.tokenizer.model_max_length, - no_boseos_middle=no_boseos_middle, - ) - uncond_weights = torch.tensor(uncond_weights, dtype=uncond_embeddings.dtype, device=pipe.text_encoder.device) - - # assign 
weights to the prompts and normalize in the sense of mean - # TODO: should we normalize by chunk or in a whole (current implementation)? - if (not skip_parsing) and (not skip_weighting): - previous_mean = text_embeddings.float().mean(axis=[-2, -1]).to(text_embeddings.dtype) - text_embeddings *= prompt_weights.unsqueeze(-1) - current_mean = text_embeddings.float().mean(axis=[-2, -1]).to(text_embeddings.dtype) - text_embeddings *= (previous_mean / current_mean).unsqueeze(-1).unsqueeze(-1) - if uncond_prompt is not None: - previous_mean = uncond_embeddings.float().mean(axis=[-2, -1]).to(uncond_embeddings.dtype) - uncond_embeddings *= uncond_weights.unsqueeze(-1) - current_mean = uncond_embeddings.float().mean(axis=[-2, -1]).to(uncond_embeddings.dtype) - uncond_embeddings *= (previous_mean / current_mean).unsqueeze(-1).unsqueeze(-1) - - if uncond_prompt is not None: - return text_embeddings, uncond_embeddings - return text_embeddings, None - -def _encode_prompt( - pipe, - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - max_embeddings_multiples, -): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `list(int)`): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - max_embeddings_multiples (`int`, *optional*, defaults to `3`): - The max multiple length of prompt embeddings compared to the max output length of text encoder. - """ - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - if negative_prompt is None: - negative_prompt = [""] * batch_size - elif isinstance(negative_prompt, str): - negative_prompt = [negative_prompt] * batch_size - if batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." 
- ) - - text_embeddings, uncond_embeddings = get_weighted_text_embeddings( - pipe=pipe, - prompt=prompt, - uncond_prompt=negative_prompt if do_classifier_free_guidance else None, - max_embeddings_multiples=max_embeddings_multiples, - ) - return text_embeddings, uncond_embeddings \ No newline at end of file diff --git a/spaces/giswqs/geospatial/app.py b/spaces/giswqs/geospatial/app.py deleted file mode 100644 index 5c9c0f6e47353921c0ae2f57a0997d6290c2c261..0000000000000000000000000000000000000000 --- a/spaces/giswqs/geospatial/app.py +++ /dev/null @@ -1,19 +0,0 @@ -import gradio as gr -import leafmap.foliumap as leafmap - - -def split(left, right): - m = leafmap.Map() - m.split_map(left_layer=left, right_layer=right) - return m.to_gradio() - - -left_url = 'https://opendata.digitalglobe.com/events/california-fire-2020/pre-event/2018-02-16/pine-gulch-fire20/1030010076004E00.tif' -right_url = 'https://opendata.digitalglobe.com/events/california-fire-2020/post-event/2020-08-14/pine-gulch-fire20/10300100AAC8DD00.tif' -left_input = gr.Textbox(value=left_url, label="Left Layer URL") -right_input = gr.Textbox(value=right_url, label="Right Layer URL") - -title = 'Gradio for Geospatial Applications' -description = 'Visualizing geospatial datasets with Gradio and leafmap' -demo = gr.Interface(split, [left_input, right_input], "html", title=title, description=description) -demo.launch() \ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Jagjit Singh Evergreen Vol 2 Zip Stream or Download the Legendary Singers Hits.md b/spaces/gotiQspiryo/whisper-ui/examples/Jagjit Singh Evergreen Vol 2 Zip Stream or Download the Legendary Singers Hits.md deleted file mode 100644 index e717ba8308ef3d77d3165b3a712650e938c6dd83..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Jagjit Singh Evergreen Vol 2 Zip Stream or Download the Legendary Singers Hits.md +++ /dev/null @@ -1,6 +0,0 @@ -
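Editor's aside on the `get_weighted_text_embeddings` / `_encode_prompt` helpers above: a minimal usage sketch, assuming a Stable Diffusion pipeline object whose tokenizer and text encoder are the CLIP modules these helpers expect, and assuming the `(token:weight)` bracket syntax that `get_prompts_with_weights` parses when `skip_parsing=False` (that parser is not shown in this hunk, and the model id below is only an example).

```python
from diffusers import StableDiffusionPipeline

# Hypothetical call into the helpers defined above; everything outside this diff
# (model id, exact bracket grammar) is an assumption, not part of the original file.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

text_emb, uncond_emb = get_weighted_text_embeddings(
    pipe,
    prompt="a watercolor painting of a (lighthouse:1.3) at dusk",
    uncond_prompt="blurry, low quality",
    max_embeddings_multiples=3,  # allow prompts up to ~3x the 77-token CLIP window
)
# Prompts longer than one CLIP window are encoded in chunks and concatenated, so the
# sequence dimension grows to (model_max_length - 2) * multiples + 2.
print(text_emb.shape, uncond_emb.shape)
```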


      diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Jarvis Clutch Social Spy - Insight and Advice on Adolescent Social Interactions (PDF).md b/spaces/gotiQspiryo/whisper-ui/examples/Jarvis Clutch Social Spy - Insight and Advice on Adolescent Social Interactions (PDF).md deleted file mode 100644 index 3124e1aaad7446be3b90af6e1f5627e253f8b6da..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Jarvis Clutch Social Spy - Insight and Advice on Adolescent Social Interactions (PDF).md +++ /dev/null @@ -1,6 +0,0 @@ -


      diff --git a/spaces/gradio/HuBERT/examples/bart/README.glue.md b/spaces/gradio/HuBERT/examples/bart/README.glue.md deleted file mode 100644 index a010934e1e6dec491eb1c704ec02ba7405760510..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/bart/README.glue.md +++ /dev/null @@ -1,99 +0,0 @@ -# Fine-tuning BART on GLUE tasks - -### 1) Download the data from GLUE website (https://gluebenchmark.com/tasks) using following commands: -```bash -wget https://gist.githubusercontent.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e/raw/17b8dd0d724281ed7c3b2aeeda662b92809aadd5/download_glue_data.py -python download_glue_data.py --data_dir glue_data --tasks all -``` - -### 2) Preprocess GLUE task data (same as RoBERTa): -```bash -./examples/roberta/preprocess_GLUE_tasks.sh glue_data -``` -`glue_task_name` is one of the following: -`{ALL, QQP, MNLI, QNLI, MRPC, RTE, STS-B, SST-2, CoLA}` -Use `ALL` for preprocessing all the glue tasks. - -### 3) Fine-tuning on GLUE task: -Example fine-tuning cmd for `RTE` task -```bash -TOTAL_NUM_UPDATES=2036 # 10 epochs through RTE for bsz 16 -WARMUP_UPDATES=61 # 6 percent of the number of updates -LR=1e-05 # Peak LR for polynomial LR scheduler. -NUM_CLASSES=2 -MAX_SENTENCES=16 # Batch size. -BART_PATH=/path/to/bart/model.pt - -CUDA_VISIBLE_DEVICES=0,1 fairseq-train RTE-bin/ \ - --restore-file $BART_PATH \ - --batch-size $MAX_SENTENCES \ - --max-tokens 4400 \ - --task sentence_prediction \ - --add-prev-output-tokens \ - --layernorm-embedding \ - --share-all-embeddings \ - --share-decoder-input-output-embed \ - --reset-optimizer --reset-dataloader --reset-meters \ - --required-batch-size-multiple 1 \ - --init-token 0 \ - --arch bart_large \ - --criterion sentence_prediction \ - --num-classes $NUM_CLASSES \ - --dropout 0.1 --attention-dropout 0.1 \ - --weight-decay 0.01 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-08 \ - --clip-norm 0.0 \ - --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \ - --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \ - --max-epoch 10 \ - --find-unused-parameters \ - --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric; -``` - -For each of the GLUE task, you will need to use following cmd-line arguments: - -Model | MNLI | QNLI | QQP | RTE | SST-2 | MRPC | CoLA | STS-B ----|---|---|---|---|---|---|---|--- -`--num-classes` | 3 | 2 | 2 | 2 | 2 | 2 | 2 | 1 -`--lr` | 5e-6 | 1e-5 | 1e-5 | 1e-5 | 5e-6 | 2e-5 | 2e-5 | 2e-5 -`bsz` | 128 | 32 | 32 | 32 | 128 | 64 | 64 | 32 -`--total-num-update` | 30968 | 33112 | 113272 | 1018 | 5233 | 1148 | 1334 | 1799 -`--warmup-updates` | 1858 | 1986 | 6796 | 61 | 314 | 68 | 80 | 107 - -For `STS-B` additionally add `--regression-target --best-checkpoint-metric loss` and remove `--maximize-best-checkpoint-metric`. - -**Note:** - -a) `--total-num-updates` is used by `--polynomial_decay` scheduler and is calculated for `--max-epoch=10` and `--batch-size=32/64/128` depending on the task. - -b) Above cmd-args and hyperparams are tested on Nvidia `V100` GPU with `32gb` of memory for each task. Depending on the GPU memory resources available to you, you can use increase `--update-freq` and reduce `--batch-size`. 
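A small illustrative helper (not part of the original README) showing where the `--warmup-updates` column comes from and how note (b)'s `--update-freq` trade-off works:

```python
# Sketch: warmup is ~6% of --total-num-update, and the effective batch size is
# --batch-size * --update-freq, so the table's large-batch settings can be
# reproduced on smaller GPUs by accumulating gradients.
def warmup_updates(total_num_update: int, frac: float = 0.06) -> int:
    return int(total_num_update * frac)

assert warmup_updates(1018) == 61     # RTE row of the table above
assert warmup_updates(30968) == 1858  # MNLI row

# e.g. RTE's effective batch size of 32: --batch-size 8 --update-freq 4
```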
- -### Inference on GLUE task -After training the model as mentioned in previous step, you can perform inference with checkpoints in `checkpoints/` directory using following python code snippet: - -```python -from fairseq.models.bart import BARTModel - -bart = BARTModel.from_pretrained( - 'checkpoints/', - checkpoint_file='checkpoint_best.pt', - data_name_or_path='RTE-bin' -) - -label_fn = lambda label: bart.task.label_dictionary.string( - [label + bart.task.label_dictionary.nspecial] -) -ncorrect, nsamples = 0, 0 -bart.cuda() -bart.eval() -with open('glue_data/RTE/dev.tsv') as fin: - fin.readline() - for index, line in enumerate(fin): - tokens = line.strip().split('\t') - sent1, sent2, target = tokens[1], tokens[2], tokens[3] - tokens = bart.encode(sent1, sent2) - prediction = bart.predict('sentence_classification_head', tokens).argmax().item() - prediction_label = label_fn(prediction) - ncorrect += int(prediction_label == target) - nsamples += 1 -print('| Accuracy: ', float(ncorrect)/float(nsamples)) -``` diff --git a/spaces/gradio/HuBERT/fairseq/modules/conv_tbc.py b/spaces/gradio/HuBERT/fairseq/modules/conv_tbc.py deleted file mode 100644 index 65e17ec94f7e595cb657b3d2daaa1052a95d0677..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/modules/conv_tbc.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -from torch import nn -from torch.nn.modules.utils import _single -from torch import Tensor - - -class ConvTBC(torch.nn.Module): - """1D convolution over an input of shape (time x batch x channel) - - The implementation uses gemm to perform the convolution. This implementation - is faster than cuDNN for small kernel sizes. 
- """ - - def __init__(self, in_channels, out_channels, kernel_size, padding=0): - super(ConvTBC, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _single(kernel_size) - self.padding = _single(padding) - - self.weight = torch.nn.Parameter( - torch.Tensor(self.kernel_size[0], in_channels, out_channels) - ) - self.bias = torch.nn.Parameter(torch.Tensor(out_channels)) - - self.reset_parameters() - - def reset_parameters(self): - nn.init.xavier_normal_(self.weight) - nn.init.zeros_(self.bias) - - def conv_tbc(self, input: Tensor): - return torch.conv_tbc( - input.contiguous(), self.weight, self.bias, self.padding[0] - ) - - def forward(self, input: Tensor): - return self.conv_tbc(input) - - def __repr__(self): - s = ( - "{name}({in_channels}, {out_channels}, kernel_size={kernel_size}" - ", padding={padding}" - ) - if self.bias is None: - s += ", bias=False" - s += ")" - return s.format(name=self.__class__.__name__, **self.__dict__) diff --git a/spaces/groupeonepoint/WritingAssistant/writing_assistant_app.py b/spaces/groupeonepoint/WritingAssistant/writing_assistant_app.py deleted file mode 100644 index a931be8396ea4397a1621a629f607f5ce39de7c4..0000000000000000000000000000000000000000 --- a/spaces/groupeonepoint/WritingAssistant/writing_assistant_app.py +++ /dev/null @@ -1,57 +0,0 @@ -import openai -import os -import gradio as gr - -# Configure votre clé API -openai.api_key = os.environ['OpenaiKey'] - -def writing_assistant(debut, suite, instructions): - # Construction de la requête - - with open('instructions.txt', 'r') as fichier: - # Lecture du contenu du fichier - instructions = fichier.read() + "\n" + instructions - - prompt = f"DEBUT = '{debut}'\n SUITE = '{suite}' \n INSTRUCTIONS = {instructions}" - - messages = [ - {"role": "system", "content": f"Tu es un assistant d'écriture. Tu aides un auteur contemporain à écrire, en t'inspirant de son style littéraire."}, - {"role": "user", "content": prompt} - ] - - # Call GPT-3.5-turbo API - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=messages, - temperature=0.9 - ) - - # Get generated text - texte_reecrit = response.choices[0].message['content'].strip() - - return texte_reecrit - -# Définition d'inputs par défaut -with open('debut_par_defaut.txt', 'r') as fichier: - # Lecture du contenu du fichier - debut_par_defaut = fichier.read() - -with open('suite_par_defaut.txt', 'r') as fichier: - # Lecture du contenu du fichier - suite_par_defaut = fichier.read() - -# Création de l'interface Gradio -iface = gr.Interface( - fn=writing_assistant, - inputs=[ - gr.inputs.Textbox(lines=5, label="Début", default = debut_par_defaut), - gr.inputs.Textbox(lines=5, label="Suite", default = suite_par_defaut), - gr.inputs.Textbox(lines=2, label="Instructions additionnelles") - ], - outputs=gr.outputs.Textbox(label="Texte réécrit"), - title="Assistant d'écriture", - description="par Nicolas \nRéécrit un brouillon en respectant un début avec un style donné." 
-) - -# Lancer l'interface -iface.launch() \ No newline at end of file diff --git a/spaces/guetLzy/Real-ESRGAN-Demo/inference_realesrgan.py b/spaces/guetLzy/Real-ESRGAN-Demo/inference_realesrgan.py deleted file mode 100644 index 0a8cc43addb2e8e94b9920cef109443c7f475241..0000000000000000000000000000000000000000 --- a/spaces/guetLzy/Real-ESRGAN-Demo/inference_realesrgan.py +++ /dev/null @@ -1,166 +0,0 @@ -import argparse -import cv2 -import glob -import os -from basicsr.archs.rrdbnet_arch import RRDBNet -from basicsr.utils.download_util import load_file_from_url - -from realesrgan import RealESRGANer -from realesrgan.archs.srvgg_arch import SRVGGNetCompact - - -def main(): - """Inference demo for Real-ESRGAN. - """ - parser = argparse.ArgumentParser() - parser.add_argument('-i', '--input', type=str, default='inputs', help='Input image or folder') - parser.add_argument( - '-n', - '--model_name', - type=str, - default='RealESRGAN_x4plus', - help=('Model names: RealESRGAN_x4plus | RealESRNet_x4plus | RealESRGAN_x4plus_anime_6B | RealESRGAN_x2plus | ' - 'realesr-animevideov3 | realesr-general-x4v3')) - parser.add_argument('-o', '--output', type=str, default='results', help='Output folder') - parser.add_argument( - '-dn', - '--denoise_strength', - type=float, - default=0.5, - help=('Denoise strength. 0 for weak denoise (keep noise), 1 for strong denoise ability. ' - 'Only used for the realesr-general-x4v3 model')) - parser.add_argument('-s', '--outscale', type=float, default=4, help='The final upsampling scale of the image') - parser.add_argument( - '--model_path', type=str, default=None, help='[Option] Model path. Usually, you do not need to specify it') - parser.add_argument('--suffix', type=str, default='out', help='Suffix of the restored image') - parser.add_argument('-t', '--tile', type=int, default=0, help='Tile size, 0 for no tile during testing') - parser.add_argument('--tile_pad', type=int, default=10, help='Tile padding') - parser.add_argument('--pre_pad', type=int, default=0, help='Pre padding size at each border') - parser.add_argument('--face_enhance', action='store_true', help='Use GFPGAN to enhance face') - parser.add_argument( - '--fp32', action='store_true', help='Use fp32 precision during inference. Default: fp16 (half precision).') - parser.add_argument( - '--alpha_upsampler', - type=str, - default='realesrgan', - help='The upsampler for the alpha channels. Options: realesrgan | bicubic') - parser.add_argument( - '--ext', - type=str, - default='auto', - help='Image extension. 
Options: auto | jpg | png, auto means using the same extension as inputs') - parser.add_argument( - '-g', '--gpu-id', type=int, default=None, help='gpu device to use (default=None) can be 0,1,2 for multi-gpu') - - args = parser.parse_args() - - # determine models according to model names - args.model_name = args.model_name.split('.')[0] - if args.model_name == 'RealESRGAN_x4plus': # x4 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'] - elif args.model_name == 'RealESRNet_x4plus': # x4 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth'] - elif args.model_name == 'RealESRGAN_x4plus_anime_6B': # x4 RRDBNet model with 6 blocks - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4) - netscale = 4 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth'] - elif args.model_name == 'RealESRGAN_x2plus': # x2 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2) - netscale = 2 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth'] - elif args.model_name == 'realesr-animevideov3': # x4 VGG-style model (XS size) - model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu') - netscale = 4 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth'] - elif args.model_name == 'realesr-general-x4v3': # x4 VGG-style model (S size) - model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu') - netscale = 4 - file_url = [ - 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-wdn-x4v3.pth', - 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth' - ] - - # determine model paths - if args.model_path is not None: - model_path = args.model_path - else: - model_path = os.path.join('weights', args.model_name + '.pth') - if not os.path.isfile(model_path): - ROOT_DIR = os.path.dirname(os.path.abspath(__file__)) - for url in file_url: - # model_path will be updated - model_path = load_file_from_url( - url=url, model_dir=os.path.join(ROOT_DIR, 'weights'), progress=True, file_name=None) - - # use dni to control the denoise strength - dni_weight = None - if args.model_name == 'realesr-general-x4v3' and args.denoise_strength != 1: - wdn_model_path = model_path.replace('realesr-general-x4v3', 'realesr-general-wdn-x4v3') - model_path = [model_path, wdn_model_path] - dni_weight = [args.denoise_strength, 1 - args.denoise_strength] - - # restorer - upsampler = RealESRGANer( - scale=netscale, - model_path=model_path, - dni_weight=dni_weight, - model=model, - tile=args.tile, - tile_pad=args.tile_pad, - pre_pad=args.pre_pad, - half=not args.fp32, - gpu_id=args.gpu_id) - - if args.face_enhance: # Use GFPGAN for face enhancement - from gfpgan import GFPGANer - face_enhancer = GFPGANer( - model_path='https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth', - upscale=args.outscale, - arch='clean', - channel_multiplier=2, - bg_upsampler=upsampler) - 
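    # (Editor's note, added to this listing; not in the original script.)
    # The dni_weight pair built above linearly interpolates the two checkpoints
    # ("deep network interpolation"): with --denoise_strength 0.3 the restorer
    # effectively uses 0.3 * realesr-general-x4v3 + 0.7 * realesr-general-wdn-x4v3,
    # so lower values keep more of the weak-denoise ("wdn") behaviour.
    # For large inputs, --tile (e.g. -t 256) upsamples overlapping tiles
    # (padded by --tile_pad) instead of the whole image, bounding GPU memory.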
os.makedirs(args.output, exist_ok=True) - - if os.path.isfile(args.input): - paths = [args.input] - else: - paths = sorted(glob.glob(os.path.join(args.input, '*'))) - - for idx, path in enumerate(paths): - imgname, extension = os.path.splitext(os.path.basename(path)) - print('Testing', idx, imgname) - - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) - if len(img.shape) == 3 and img.shape[2] == 4: - img_mode = 'RGBA' - else: - img_mode = None - - try: - if args.face_enhance: - _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True) - else: - output, _ = upsampler.enhance(img, outscale=args.outscale) - except RuntimeError as error: - print('Error', error) - print('If you encounter CUDA out of memory, try to set --tile with a smaller number.') - else: - if args.ext == 'auto': - extension = extension[1:] - else: - extension = args.ext - if img_mode == 'RGBA': # RGBA images should be saved in png format - extension = 'png' - if args.suffix == '': - save_path = os.path.join(args.output, f'{imgname}.{extension}') - else: - save_path = os.path.join(args.output, f'{imgname}_{args.suffix}.{extension}') - cv2.imwrite(save_path, output) - - -if __name__ == '__main__': - main() diff --git a/spaces/gurgenblbulyan/video-based-text-generation/utils.py b/spaces/gurgenblbulyan/video-based-text-generation/utils.py deleted file mode 100644 index f58ec4088b8441a48a973481b8b9f9372547969e..0000000000000000000000000000000000000000 --- a/spaces/gurgenblbulyan/video-based-text-generation/utils.py +++ /dev/null @@ -1,42 +0,0 @@ -from transformers import ViTFeatureExtractor -import torchvision -import torchvision.transforms.functional as fn -import torch as th - - -def video2image_from_path(video_path, feature_extractor_name): - video = torchvision.io.read_video(video_path) - - return video2image(video[0], feature_extractor_name) - - -def video2image(video, feature_extractor_name): - feature_extractor = ViTFeatureExtractor.from_pretrained( - feature_extractor_name - ) - - vid = th.permute(video, (3, 0, 1, 2)) - samp = th.linspace(0, vid.shape[1]-1, 49, dtype=th.long) - vid = vid[:, samp, :, :] - - im_l = list() - for i in range(vid.shape[1]): - im_l.append(vid[:, i, :, :]) - - inputs = feature_extractor(im_l, return_tensors="pt") - - inputs = inputs['pixel_values'] - - im_h = list() - for i in range(7): - im_v = th.cat((inputs[0+i*7, :, :, :], - inputs[1+i*7, :, :, :], - inputs[2+i*7, :, :, :], - inputs[3+i*7, :, :, :], - inputs[4+i*7, :, :, :], - inputs[5+i*7, :, :, :], - inputs[6+i*7, :, :, :]), 2) - im_h.append(im_v) - resize = fn.resize(th.cat(im_h, 1), size=[224]) - - return resize diff --git a/spaces/h2oai/wave-tour/examples/stat_small_series_interval.py b/spaces/h2oai/wave-tour/examples/stat_small_series_interval.py deleted file mode 100644 index bcd98b23636e2d9493323cbecb5705fb7b2a15f1..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/stat_small_series_interval.py +++ /dev/null @@ -1,37 +0,0 @@ -# Stat / Series / Small / Interval -# Create a small stat card displaying a primary value and a series plot. 
-# #stat_card #interval #series -# --- -import time - -from faker import Faker - -from synth import FakeCategoricalSeries -from h2o_wave import site, ui, data - -page = site['/demo'] - -fake = Faker() -f = FakeCategoricalSeries() -cat, val, pc = f.next() -c = page.add('example', ui.small_series_stat_card( - box='1 1 1 1', - title=fake.cryptocurrency_name(), - value='=${{intl qux minimum_fraction_digits=2 maximum_fraction_digits=2}}', - data=dict(qux=val, quux=pc), - plot_category='foo', - plot_type='interval', - plot_value='qux', - plot_color='$red', - plot_data=data('foo qux', -20), - plot_zero_value=0, -)) -page.save() - -while True: - time.sleep(1) - cat, val, pc = f.next() - c.data.qux = val - c.data.quux = pc - c.plot_data[-1] = [cat, val] - page.save() diff --git a/spaces/hamelcubsfan/AutoGPT/autogpt/commands/git_operations.py b/spaces/hamelcubsfan/AutoGPT/autogpt/commands/git_operations.py deleted file mode 100644 index 028f3b8da44c85e01d20ccc5d4a5fa72c759008b..0000000000000000000000000000000000000000 --- a/spaces/hamelcubsfan/AutoGPT/autogpt/commands/git_operations.py +++ /dev/null @@ -1,26 +0,0 @@ -"""Git operations for autogpt""" -import git - -from autogpt.config import Config -from autogpt.workspace import path_in_workspace - -CFG = Config() - - -def clone_repository(repo_url: str, clone_path: str) -> str: - """Clone a GitHub repository locally - - Args: - repo_url (str): The URL of the repository to clone - clone_path (str): The path to clone the repository to - - Returns: - str: The result of the clone operation""" - split_url = repo_url.split("//") - auth_repo_url = f"//{CFG.github_username}:{CFG.github_api_key}@".join(split_url) - safe_clone_path = path_in_workspace(clone_path) - try: - git.Repo.clone_from(auth_repo_url, safe_clone_path) - return f"""Cloned {repo_url} to {safe_clone_path}""" - except Exception as e: - return f"Error: {str(e)}" diff --git a/spaces/haoqi7/research/scripts/tests/model_test.py b/spaces/haoqi7/research/scripts/tests/model_test.py deleted file mode 100644 index b2de4b642b75950d2ee6ed005293a54d9a960bec..0000000000000000000000000000000000000000 --- a/spaces/haoqi7/research/scripts/tests/model_test.py +++ /dev/null @@ -1,103 +0,0 @@ -if __name__ == '__main__': - import sys - from pathlib import Path - - project_root = Path( - __file__).parent.parent.parent.absolute() # /home/adapting/git/leoxiang66/idp_LiteratureResearch_Tool - sys.path.append(project_root.__str__()) - - import torch - from lrt.clustering.models.keyBartPlus import * - from lrt.clustering.models.adapter import * - from transformers import AutoTokenizer, AutoModelForSeq2SeqLM - import os - - ####################### Adapter Test ############################# - input_dim = 1024 - adapter_hid_dim = 256 - adapter = Adapter(input_dim,adapter_hid_dim) - - data = torch.randn(10, 20, input_dim) - - tmp = adapter(data) - - assert data.size() == tmp.size() - ####################### Adapter Test ############################# - - ####################### BartDecoderPlus Test ############################# - keyBart = AutoModelForSeq2SeqLM.from_pretrained("bloomberg/KeyBART") - bartDecoderP = BartDecoderPlus(keyBart, 100) - tmp = bartDecoderP(inputs_embeds=data, - output_attentions = True, - output_hidden_states = True, - encoder_hidden_states = data - ) - print(type(tmp)) - # print(tmp.__dict__) - print(dir(tmp)) - last_hid_states = tmp.last_hidden_state - hidden_states = tmp.hidden_states - attentions = tmp.attentions - cross_attention = tmp.cross_attentions - print(last_hid_states.shape) - 
print(hidden_states.__len__()) - print(attentions.__len__()) - print(len(cross_attention)) - # print(cross_attention[0]) - print(cross_attention[0].shape) - - ####################### BartDecoderPlus Test ############################# - - ####################### BartPlus Test ############################# - bartP = BartPlus(keyBart,100) - tmp = bartP( - inputs_embeds = data, - decoder_inputs_embeds = data, - output_attentions=True, - output_hidden_states=True, - ) - print(type(tmp)) - # print(tmp.__dict__) - print(dir(tmp)) - last_hid_states = tmp.last_hidden_state - hidden_states = tmp.decoder_hidden_states - attentions = tmp.decoder_attentions - cross_attention = tmp.cross_attentions - print(last_hid_states.shape) - print(hidden_states.__len__()) - print(attentions.__len__()) - print(len(cross_attention)) - # print(cross_attention[0]) - print(cross_attention[0].shape) - ####################### BartPlus Test ############################# - - ####################### Summary ############################# - from torchinfo import summary - - summary(bartP) - # summary(bartDecoderP) - ####################### Summary ############################# - - ####################### KeyBartAdapter Test ############################# - keybart_adapter = KeyBartAdapter(100) - tmp = keybart_adapter( - inputs_embeds=data, - decoder_inputs_embeds=data, - output_attentions=True, - output_hidden_states=True, - ) - print(type(tmp)) - # print(tmp.__dict__) - print(dir(tmp)) - last_hid_states = tmp.encoder_last_hidden_state - hidden_states = tmp.decoder_hidden_states - attentions = tmp.decoder_attentions - cross_attention = tmp.cross_attentions - print(last_hid_states.shape) - print(hidden_states.__len__()) - print(attentions.__len__()) - print(len(cross_attention)) - # print(cross_attention[0]) - print(cross_attention[0].shape) - summary(keybart_adapter) - ####################### KeyBartAdapter Test ############################# \ No newline at end of file diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/evaluation/lvis/lvis_eval.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/evaluation/lvis/lvis_eval.py deleted file mode 100644 index 90f09072d99ff0ee7552d6dcdf6e75971b388fda..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/data/datasets/evaluation/lvis/lvis_eval.py +++ /dev/null @@ -1,998 +0,0 @@ -# Copyright (c) Aishwarya Kamath & Nicolas Carion. Licensed under the Apache License 2.0. All Rights Reserved -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -import copy -import datetime -import json -import os -from collections import OrderedDict, defaultdict - -import numpy as np -import pycocotools.mask as mask_util -import torch -# import torch._six - -import maskrcnn_benchmark.utils.mdetr_dist as dist - -from maskrcnn_benchmark.utils.mdetr_dist import all_gather - - -from .lvis import LVIS - -def merge(img_ids, eval_imgs): - all_img_ids = all_gather(img_ids) - all_eval_imgs = all_gather(eval_imgs) - - merged_img_ids = [] - for p in all_img_ids: - merged_img_ids.extend(p) - - merged_eval_imgs = [] - for p in all_eval_imgs: - merged_eval_imgs.append(p) - - merged_img_ids = np.array(merged_img_ids) - merged_eval_imgs = np.concatenate(merged_eval_imgs, 2) - - # keep only unique (and in sorted order) images - merged_img_ids, idx = np.unique(merged_img_ids, return_index=True) - merged_eval_imgs = merged_eval_imgs[..., idx] - - return merged_img_ids, merged_eval_imgs - - -################################################################# -# From LVIS, with following changes: -# * fixed LVISEval constructor to accept empty dt -# * Removed logger -# * LVIS results supports numpy inputs -################################################################# - - -class Params: - def __init__(self, iou_type): - """Params for LVIS evaluation API.""" - self.img_ids = [] - self.cat_ids = [] - # np.arange causes trouble. the data point on arange is slightly - # larger than the true value - self.iou_thrs = np.linspace(0.5, 0.95, int(np.round((0.95 - 0.5) / 0.05)) + 1, endpoint=True) - self.rec_thrs = np.linspace(0.0, 1.00, int(np.round((1.00 - 0.0) / 0.01)) + 1, endpoint=True) - self.max_dets = 300 - self.area_rng = [ - [0 ** 2, 1e5 ** 2], - [0 ** 2, 32 ** 2], - [32 ** 2, 96 ** 2], - [96 ** 2, 1e5 ** 2], - ] - self.area_rng_lbl = ["all", "small", "medium", "large"] - self.use_cats = 1 - # We bin categories in three bins based how many images of the training - # set the category is present in. - # r: Rare : < 10 - # c: Common : >= 10 and < 100 - # f: Frequent: >= 100 - self.img_count_lbl = ["r", "c", "f"] - self.iou_type = iou_type - - -class LVISResults(LVIS): - def __init__(self, lvis_gt, results, max_dets=300): - """Constructor for LVIS results. - Args: - lvis_gt (LVIS class instance, or str containing path of - annotation file) - results (str containing path of result file or a list of dicts) - max_dets (int): max number of detections per image. The official - value of max_dets for LVIS is 300. 
- """ - super(LVISResults, self).__init__() - assert isinstance(lvis_gt, LVIS) - self.dataset["images"] = [img for img in lvis_gt.dataset["images"]] - - if isinstance(results, str): - result_anns = self._load_json(results) - elif type(results) == np.ndarray: - result_anns = self.loadNumpyAnnotations(results) - else: - result_anns = results - - if max_dets >= 0: - result_anns = self.limit_dets_per_image(result_anns, max_dets) - - if len(result_anns) > 0 and "bbox" in result_anns[0]: - self.dataset["categories"] = copy.deepcopy(lvis_gt.dataset["categories"]) - for id, ann in enumerate(result_anns): - x1, y1, w, h = ann["bbox"] - x2 = x1 + w - y2 = y1 + h - - if "segmentation" not in ann: - ann["segmentation"] = [[x1, y1, x1, y2, x2, y2, x2, y1]] - - ann["area"] = w * h - ann["id"] = id + 1 - - elif len(result_anns) > 0 and "segmentation" in result_anns[0]: - self.dataset["categories"] = copy.deepcopy(lvis_gt.dataset["categories"]) - for id, ann in enumerate(result_anns): - # Only support compressed RLE format as segmentation results - ann["area"] = mask_util.area(ann["segmentation"]) - - if "bbox" not in ann: - ann["bbox"] = mask_util.toBbox(ann["segmentation"]) - - ann["id"] = id + 1 - - self.dataset["annotations"] = result_anns - self._create_index() - - # #FIXME: disabling this check for now - # img_ids_in_result = [ann["image_id"] for ann in result_anns] - - # assert set(img_ids_in_result) == ( - # set(img_ids_in_result) & set(self.get_img_ids()) - # ), "Results do not correspond to current LVIS set." - - def limit_dets_per_image(self, anns, max_dets): - img_ann = defaultdict(list) - for ann in anns: - img_ann[ann["image_id"]].append(ann) - - for img_id, _anns in img_ann.items(): - if len(_anns) <= max_dets: - continue - _anns = sorted(_anns, key=lambda ann: ann["score"], reverse=True) - img_ann[img_id] = _anns[:max_dets] - - return [ann for anns in img_ann.values() for ann in anns] - - def get_top_results(self, img_id, score_thrs): - ann_ids = self.get_ann_ids(img_ids=[img_id]) - anns = self.load_anns(ann_ids) - return list(filter(lambda ann: ann["score"] > score_thrs, anns)) - - -class LVISEval: - def __init__(self, lvis_gt, lvis_dt=None, iou_type="segm"): - """Constructor for LVISEval. 
- Args: - lvis_gt (LVIS class instance, or str containing path of annotation file) - lvis_dt (LVISResult class instance, or str containing path of result file, - or list of dict) - iou_type (str): segm or bbox evaluation - """ - - if iou_type not in ["bbox", "segm"]: - raise ValueError("iou_type: {} is not supported.".format(iou_type)) - - if isinstance(lvis_gt, LVIS): - self.lvis_gt = lvis_gt - elif isinstance(lvis_gt, str): - self.lvis_gt = LVIS(lvis_gt) - else: - raise TypeError("Unsupported type {} of lvis_gt.".format(lvis_gt)) - - if isinstance(lvis_dt, LVISResults): - self.lvis_dt = lvis_dt - elif isinstance(lvis_dt, (str, list)): - self.lvis_dt = LVISResults(self.lvis_gt, lvis_dt) - elif lvis_dt is not None: - raise TypeError("Unsupported type {} of lvis_dt.".format(lvis_dt)) - - # per-image per-category evaluation results - self.eval_imgs = defaultdict(list) - self.eval = {} # accumulated evaluation results - self._gts = defaultdict(list) # gt for evaluation - self._dts = defaultdict(list) # dt for evaluation - self.params = Params(iou_type=iou_type) # parameters - self.results = OrderedDict() - self.stats = [] - self.ious = {} # ious between all gts and dts - - self.params.img_ids = sorted(self.lvis_gt.get_img_ids()) - self.params.cat_ids = sorted(self.lvis_gt.get_cat_ids()) - - def _to_mask(self, anns, lvis): - for ann in anns: - rle = lvis.ann_to_rle(ann) - ann["segmentation"] = rle - - def _prepare(self): - """Prepare self._gts and self._dts for evaluation based on params.""" - - cat_ids = self.params.cat_ids if self.params.cat_ids else None - - gts = self.lvis_gt.load_anns(self.lvis_gt.get_ann_ids(img_ids=self.params.img_ids, cat_ids=cat_ids)) - dts = self.lvis_dt.load_anns(self.lvis_dt.get_ann_ids(img_ids=self.params.img_ids, cat_ids=cat_ids)) - # convert ground truth to mask if iou_type == 'segm' - if self.params.iou_type == "segm": - self._to_mask(gts, self.lvis_gt) - self._to_mask(dts, self.lvis_dt) - - # set ignore flag - for gt in gts: - if "ignore" not in gt: - gt["ignore"] = 0 - - for gt in gts: - self._gts[gt["image_id"], gt["category_id"]].append(gt) - - # For federated dataset evaluation we will filter out all dt for an - # image which belong to categories not present in gt and not present in - # the negative list for an image. In other words detector is not penalized - # for categories about which we don't have gt information about their - # presence or absence in an image. - img_data = self.lvis_gt.load_imgs(ids=self.params.img_ids) - # per image map of categories not present in image - img_nl = {d["id"]: d["neg_category_ids"] for d in img_data} - # per image list of categories present in image - img_pl = defaultdict(set) - for ann in gts: - img_pl[ann["image_id"]].add(ann["category_id"]) - # per image map of categoires which have missing gt. For these - # categories we don't penalize the detector for flase positives. 
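        # (Added note) This is LVIS's "federated" evaluation: a detection is only
        # scored for categories that are annotated as present on that image
        # (img_pl) or explicitly listed as absent (neg_category_ids via img_nl).
        # E.g. if an image has gt only for "dog" and negatives only for "cat", a
        # "zebra" detection there is dropped below rather than counted as a false
        # positive, since the annotation says nothing about zebras.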
- self.img_nel = {d["id"]: d["not_exhaustive_category_ids"] for d in img_data} - - for dt in dts: - img_id, cat_id = dt["image_id"], dt["category_id"] - if cat_id not in img_nl[img_id] and cat_id not in img_pl[img_id]: - continue - self._dts[img_id, cat_id].append(dt) - - self.freq_groups = self._prepare_freq_group() - - def _prepare_freq_group(self): - freq_groups = [[] for _ in self.params.img_count_lbl] - cat_data = self.lvis_gt.load_cats(self.params.cat_ids) - for idx, _cat_data in enumerate(cat_data): - frequency = _cat_data["frequency"] - freq_groups[self.params.img_count_lbl.index(frequency)].append(idx) - return freq_groups - - def evaluate(self): - """ - Run per image evaluation on given images and store results - (a list of dict) in self.eval_imgs. - """ - - self.params.img_ids = list(np.unique(self.params.img_ids)) - - if self.params.use_cats: - cat_ids = self.params.cat_ids - else: - cat_ids = [-1] - - self._prepare() - - self.ious = { - (img_id, cat_id): self.compute_iou(img_id, cat_id) for img_id in self.params.img_ids for cat_id in cat_ids - } - - # loop through images, area range, max detection number - self.eval_imgs = [ - self.evaluate_img(img_id, cat_id, area_rng) - for cat_id in cat_ids - for area_rng in self.params.area_rng - for img_id in self.params.img_ids - ] - - def _get_gt_dt(self, img_id, cat_id): - """Create gt, dt which are list of anns/dets. If use_cats is true - only anns/dets corresponding to tuple (img_id, cat_id) will be - used. Else, all anns/dets in image are used and cat_id is not used. - """ - if self.params.use_cats: - gt = self._gts[img_id, cat_id] - dt = self._dts[img_id, cat_id] - else: - gt = [_ann for _cat_id in self.params.cat_ids for _ann in self._gts[img_id, cat_id]] - dt = [_ann for _cat_id in self.params.cat_ids for _ann in self._dts[img_id, cat_id]] - return gt, dt - - def compute_iou(self, img_id, cat_id): - gt, dt = self._get_gt_dt(img_id, cat_id) - - if len(gt) == 0 and len(dt) == 0: - return [] - - # Sort detections in decreasing order of score. - idx = np.argsort([-d["score"] for d in dt], kind="mergesort") - dt = [dt[i] for i in idx] - - iscrowd = [int(False)] * len(gt) - - if self.params.iou_type == "segm": - ann_type = "segmentation" - elif self.params.iou_type == "bbox": - ann_type = "bbox" - else: - raise ValueError("Unknown iou_type for iou computation.") - gt = [g[ann_type] for g in gt] - dt = [d[ann_type] for d in dt] - - # compute iou between each dt and gt region - # will return array of shape len(dt), len(gt) - ious = mask_util.iou(dt, gt, iscrowd) - return ious - - def evaluate_img(self, img_id, cat_id, area_rng): - """Perform evaluation for single category and image.""" - gt, dt = self._get_gt_dt(img_id, cat_id) - - if len(gt) == 0 and len(dt) == 0: - return None - - # Add another filed _ignore to only consider anns based on area range. 
- for g in gt: - if g["ignore"] or (g["area"] < area_rng[0] or g["area"] > area_rng[1]): - g["_ignore"] = 1 - else: - g["_ignore"] = 0 - - # Sort gt ignore last - gt_idx = np.argsort([g["_ignore"] for g in gt], kind="mergesort") - gt = [gt[i] for i in gt_idx] - - # Sort dt highest score first - dt_idx = np.argsort([-d["score"] for d in dt], kind="mergesort") - dt = [dt[i] for i in dt_idx] - - # load computed ious - ious = self.ious[img_id, cat_id][:, gt_idx] if len(self.ious[img_id, cat_id]) > 0 else self.ious[img_id, cat_id] - - num_thrs = len(self.params.iou_thrs) - num_gt = len(gt) - num_dt = len(dt) - - # Array to store the "id" of the matched dt/gt - gt_m = np.zeros((num_thrs, num_gt)) - dt_m = np.zeros((num_thrs, num_dt)) - - gt_ig = np.array([g["_ignore"] for g in gt]) - dt_ig = np.zeros((num_thrs, num_dt)) - - for iou_thr_idx, iou_thr in enumerate(self.params.iou_thrs): - if len(ious) == 0: - break - - for dt_idx, _dt in enumerate(dt): - iou = min([iou_thr, 1 - 1e-10]) - # information about best match so far (m=-1 -> unmatched) - # store the gt_idx which matched for _dt - m = -1 - for gt_idx, _ in enumerate(gt): - # if this gt already matched continue - if gt_m[iou_thr_idx, gt_idx] > 0: - continue - # if _dt matched to reg gt, and on ignore gt, stop - if m > -1 and gt_ig[m] == 0 and gt_ig[gt_idx] == 1: - break - # continue to next gt unless better match made - if ious[dt_idx, gt_idx] < iou: - continue - # if match successful and best so far, store appropriately - iou = ious[dt_idx, gt_idx] - m = gt_idx - - # No match found for _dt, go to next _dt - if m == -1: - continue - - # if gt to ignore for some reason update dt_ig. - # Should not be used in evaluation. - dt_ig[iou_thr_idx, dt_idx] = gt_ig[m] - # _dt match found, update gt_m, and dt_m with "id" - dt_m[iou_thr_idx, dt_idx] = gt[m]["id"] - gt_m[iou_thr_idx, m] = _dt["id"] - - # For LVIS we will ignore any unmatched detection if that category was - # not exhaustively annotated in gt. - dt_ig_mask = [ - d["area"] < area_rng[0] or d["area"] > area_rng[1] or d["category_id"] in self.img_nel[d["image_id"]] - for d in dt - ] - dt_ig_mask = np.array(dt_ig_mask).reshape((1, num_dt)) # 1 X num_dt - dt_ig_mask = np.repeat(dt_ig_mask, num_thrs, 0) # num_thrs X num_dt - # Based on dt_ig_mask ignore any unmatched detection by updating dt_ig - dt_ig = np.logical_or(dt_ig, np.logical_and(dt_m == 0, dt_ig_mask)) - # store results for given image and category - return { - "image_id": img_id, - "category_id": cat_id, - "area_rng": area_rng, - "dt_ids": [d["id"] for d in dt], - "gt_ids": [g["id"] for g in gt], - "dt_matches": dt_m, - "gt_matches": gt_m, - "dt_scores": [d["score"] for d in dt], - "gt_ignore": gt_ig, - "dt_ignore": dt_ig, - } - - def accumulate(self): - """Accumulate per image evaluation results and store the result in - self.eval. 
- """ - - if not self.eval_imgs: - print("Warning: Please run evaluate first.") - - if self.params.use_cats: - cat_ids = self.params.cat_ids - else: - cat_ids = [-1] - - num_thrs = len(self.params.iou_thrs) - num_recalls = len(self.params.rec_thrs) - num_cats = len(cat_ids) - num_area_rngs = len(self.params.area_rng) - num_imgs = len(self.params.img_ids) - - # -1 for absent categories - precision = -np.ones((num_thrs, num_recalls, num_cats, num_area_rngs)) - recall = -np.ones((num_thrs, num_cats, num_area_rngs)) - - # Initialize dt_pointers - dt_pointers = {} - for cat_idx in range(num_cats): - dt_pointers[cat_idx] = {} - for area_idx in range(num_area_rngs): - dt_pointers[cat_idx][area_idx] = {} - - # Per category evaluation - for cat_idx in range(num_cats): - Nk = cat_idx * num_area_rngs * num_imgs - for area_idx in range(num_area_rngs): - Na = area_idx * num_imgs - E = [self.eval_imgs[Nk + Na + img_idx] for img_idx in range(num_imgs)] - # Remove elements which are None - E = [e for e in E if e is not None] - if len(E) == 0: - continue - - # Append all scores: shape (N,) - dt_scores = np.concatenate([e["dt_scores"] for e in E], axis=0) - dt_ids = np.concatenate([e["dt_ids"] for e in E], axis=0) - - dt_idx = np.argsort(-dt_scores, kind="mergesort") - dt_scores = dt_scores[dt_idx] - dt_ids = dt_ids[dt_idx] - - dt_m = np.concatenate([e["dt_matches"] for e in E], axis=1)[:, dt_idx] - dt_ig = np.concatenate([e["dt_ignore"] for e in E], axis=1)[:, dt_idx] - - gt_ig = np.concatenate([e["gt_ignore"] for e in E]) - # num gt anns to consider - num_gt = np.count_nonzero(gt_ig == 0) - - if num_gt == 0: - continue - - tps = np.logical_and(dt_m, np.logical_not(dt_ig)) - fps = np.logical_and(np.logical_not(dt_m), np.logical_not(dt_ig)) - - tp_sum = np.cumsum(tps, axis=1).astype(dtype=np.float) - fp_sum = np.cumsum(fps, axis=1).astype(dtype=np.float) - - dt_pointers[cat_idx][area_idx] = { - "dt_ids": dt_ids, - "tps": tps, - "fps": fps, - } - - for iou_thr_idx, (tp, fp) in enumerate(zip(tp_sum, fp_sum)): - tp = np.array(tp) - fp = np.array(fp) - num_tp = len(tp) - rc = tp / num_gt - if num_tp: - recall[iou_thr_idx, cat_idx, area_idx] = rc[-1] - else: - recall[iou_thr_idx, cat_idx, area_idx] = 0 - - # np.spacing(1) ~= eps - pr = tp / (fp + tp + np.spacing(1)) - pr = pr.tolist() - - # Replace each precision value with the maximum precision - # value to the right of that recall level. This ensures - # that the calculated AP value will be less suspectable - # to small variations in the ranking. 
- for i in range(num_tp - 1, 0, -1): - if pr[i] > pr[i - 1]: - pr[i - 1] = pr[i] - - rec_thrs_insert_idx = np.searchsorted(rc, self.params.rec_thrs, side="left") - - pr_at_recall = [0.0] * num_recalls - - try: - for _idx, pr_idx in enumerate(rec_thrs_insert_idx): - pr_at_recall[_idx] = pr[pr_idx] - except Exception: - pass - precision[iou_thr_idx, :, cat_idx, area_idx] = np.array(pr_at_recall) - - self.eval = { - "params": self.params, - "counts": [num_thrs, num_recalls, num_cats, num_area_rngs], - "date": datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), - "precision": precision, - "recall": recall, - "dt_pointers": dt_pointers, - } - - def _summarize(self, summary_type, iou_thr=None, area_rng="all", freq_group_idx=None): - aidx = [idx for idx, _area_rng in enumerate(self.params.area_rng_lbl) if _area_rng == area_rng] - - if summary_type == "ap": - s = self.eval["precision"] - if iou_thr is not None: - tidx = np.where(iou_thr == self.params.iou_thrs)[0] - s = s[tidx] - if freq_group_idx is not None: - s = s[:, :, self.freq_groups[freq_group_idx], aidx] - else: - s = s[:, :, :, aidx] - else: - s = self.eval["recall"] - if iou_thr is not None: - tidx = np.where(iou_thr == self.params.iou_thrs)[0] - s = s[tidx] - s = s[:, :, aidx] - - if len(s[s > -1]) == 0: - mean_s = -1 - else: - mean_s = np.mean(s[s > -1]) - return mean_s - - def summarize(self): - """Compute and display summary metrics for evaluation results.""" - if not self.eval: - raise RuntimeError("Please run accumulate() first.") - - max_dets = self.params.max_dets - - self.results["AP"] = self._summarize("ap") - self.results["AP50"] = self._summarize("ap", iou_thr=0.50) - self.results["AP75"] = self._summarize("ap", iou_thr=0.75) - self.results["APs"] = self._summarize("ap", area_rng="small") - self.results["APm"] = self._summarize("ap", area_rng="medium") - self.results["APl"] = self._summarize("ap", area_rng="large") - self.results["APr"] = self._summarize("ap", freq_group_idx=0) - self.results["APc"] = self._summarize("ap", freq_group_idx=1) - self.results["APf"] = self._summarize("ap", freq_group_idx=2) - - self.stats = np.zeros((9,)) - self.stats[0] = self._summarize("ap") - self.stats[1] = self._summarize("ap", iou_thr=0.50) - self.stats[2] = self._summarize("ap", iou_thr=0.75) - self.stats[3] = self._summarize("ap", area_rng="small") - self.stats[4] = self._summarize("ap", area_rng="medium") - self.stats[5] = self._summarize("ap", area_rng="large") - self.stats[6] = self._summarize("ap", freq_group_idx=0) - self.stats[7] = self._summarize("ap", freq_group_idx=1) - self.stats[8] = self._summarize("ap", freq_group_idx=2) - - key = "AR@{}".format(max_dets) - self.results[key] = self._summarize("ar") - - for area_rng in ["small", "medium", "large"]: - key = "AR{}@{}".format(area_rng[0], max_dets) - self.results[key] = self._summarize("ar", area_rng=area_rng) - _returned = self.print_results() - return _returned - - def run(self): - """Wrapper function which calculates the results.""" - self.evaluate() - self.accumulate() - self.summarize() - - def print_results(self): - template = " {:<18} {} @[ IoU={:<9} | area={:>6s} | maxDets={:>3d} catIds={:>3s}] = {:0.3f}" - out_strings = [] - for key, value in self.results.items(): - max_dets = self.params.max_dets - if "AP" in key: - title = "Average Precision" - _type = "(AP)" - else: - title = "Average Recall" - _type = "(AR)" - - if len(key) > 2 and key[2].isdigit(): - iou_thr = float(key[2:]) / 100 - iou = "{:0.2f}".format(iou_thr) - else: - iou = 
"{:0.2f}:{:0.2f}".format(self.params.iou_thrs[0], self.params.iou_thrs[-1]) - - if len(key) > 2 and key[2] in ["r", "c", "f"]: - cat_group_name = key[2] - else: - cat_group_name = "all" - - if len(key) > 2 and key[2] in ["s", "m", "l"]: - area_rng = key[2] - else: - area_rng = "all" - - print(template.format(title, _type, iou, area_rng, max_dets, cat_group_name, value)) - out_strings.append(template.format(title, _type, iou, area_rng, max_dets, cat_group_name, value)) - return out_strings - - def get_results(self): - if not self.results: - print("Warning: results is empty. Call run().") - return self.results - - -################################################################# -# end of straight copy from lvis, just fixing constructor -################################################################# - - -class LvisEvaluator(object): - def __init__(self, lvis_gt, iou_types): - assert isinstance(iou_types, (list, tuple)) - # lvis_gt = copy.deepcopy(lvis_gt) - self.lvis_gt = lvis_gt - - self.iou_types = iou_types - self.coco_eval = {} - for iou_type in iou_types: - self.coco_eval[iou_type] = LVISEval(lvis_gt, iou_type=iou_type) - - self.img_ids = [] - self.eval_imgs = {k: [] for k in iou_types} - - def update(self, predictions): - img_ids = list(np.unique(list(predictions.keys()))) - self.img_ids.extend(img_ids) - - for iou_type in self.iou_types: - results = self.prepare(predictions, iou_type) - lvis_dt = LVISResults(self.lvis_gt, results) - lvis_eval = self.coco_eval[iou_type] - - lvis_eval.lvis_dt = lvis_dt - lvis_eval.params.img_ids = list(img_ids) - lvis_eval.evaluate() - eval_imgs = lvis_eval.eval_imgs - eval_imgs = np.asarray(eval_imgs).reshape( - len(lvis_eval.params.cat_ids), len(lvis_eval.params.area_rng), len(lvis_eval.params.img_ids) - ) - - self.eval_imgs[iou_type].append(eval_imgs) - - def synchronize_between_processes(self): - for iou_type in self.iou_types: - self.eval_imgs[iou_type] = np.concatenate(self.eval_imgs[iou_type], 2) - create_common_lvis_eval(self.coco_eval[iou_type], self.img_ids, self.eval_imgs[iou_type]) - - def accumulate(self): - for lvis_eval in self.coco_eval.values(): - lvis_eval.accumulate() - - def summarize(self): - for iou_type, lvis_eval in self.coco_eval.items(): - print("IoU metric: {}".format(iou_type)) - lvis_eval.summarize() - - def prepare(self, predictions, iou_type): - if iou_type == "bbox": - return self.prepare_for_lvis_detection(predictions) - elif iou_type == "segm": - return self.prepare_for_lvis_segmentation(predictions) - elif iou_type == "keypoints": - return self.prepare_for_lvis_keypoint(predictions) - else: - raise ValueError("Unknown iou type {}".format(iou_type)) - - def prepare_for_lvis_detection(self, predictions): - lvis_results = [] - for original_id, prediction in predictions.items(): - if len(prediction) == 0: - continue - - boxes = prediction["boxes"] - boxes = convert_to_xywh(boxes).tolist() - scores = prediction["scores"].tolist() - labels = prediction["labels"].tolist() - - lvis_results.extend( - [ - { - "image_id": original_id, - "category_id": labels[k], - "bbox": box, - "score": scores[k], - } - for k, box in enumerate(boxes) - ] - ) - return lvis_results - - def prepare_for_lvis_segmentation(self, predictions): - lvis_results = [] - for original_id, prediction in predictions.items(): - if len(prediction) == 0: - continue - - scores = prediction["scores"] - labels = prediction["labels"] - masks = prediction["masks"] - - masks = masks > 0.5 - - scores = prediction["scores"].tolist() - labels = 
prediction["labels"].tolist() - - rles = [ - mask_util.encode(np.array(mask[0, :, :, np.newaxis], dtype=np.uint8, order="F"))[0] for mask in masks - ] - for rle in rles: - rle["counts"] = rle["counts"].decode("utf-8") - - lvis_results.extend( - [ - { - "image_id": original_id, - "category_id": labels[k], - "segmentation": rle, - "score": scores[k], - } - for k, rle in enumerate(rles) - ] - ) - return lvis_results - - -def _merge_lists(listA, listB, maxN, key): - result = [] - indA, indB = 0, 0 - while (indA < len(listA) or indB < len(listB)) and len(result) < maxN: - if (indB < len(listB)) and (indA >= len(listA) or key(listA[indA]) < key(listB[indB])): - result.append(listB[indB]) - indB += 1 - else: - result.append(listA[indA]) - indA += 1 - return result - - -# Adapted from https://github.com/achalddave/large-vocab-devil/blob/9aaddc15b00e6e0d370b16743233e40d973cd53f/scripts/evaluate_ap_fixed.py -class LvisEvaluatorFixedAP(object): - def __init__(self, gt: LVIS, topk=10000, fixed_ap=True): - - self.results = [] - self.by_cat = {} - self.gt = gt - self.topk = topk - self.fixed_ap = fixed_ap - - def update(self, predictions): - cur_results = self.prepare(predictions) - if self.fixed_ap: - by_cat = defaultdict(list) - for ann in cur_results: - by_cat[ann["category_id"]].append(ann) - - for cat, cat_anns in by_cat.items(): - if cat not in self.by_cat: - self.by_cat[cat] = [] - - cur = sorted(cat_anns, key=lambda x: x["score"], reverse=True)[: self.topk] - self.by_cat[cat] = _merge_lists(self.by_cat[cat], cur, self.topk, key=lambda x: x["score"]) - else: - by_id = defaultdict(list) - for ann in cur_results: - by_id[ann["image_id"]].append(ann) - - for id_anns in by_id.values(): - self.results.extend(sorted(id_anns, key=lambda x: x["score"], reverse=True)[:300]) - - def synchronize_between_processes(self): - if self.fixed_ap: - all_cats = dist.all_gather(self.by_cat) - self.by_cat = defaultdict(list) - for cats in all_cats: - for cat, cat_anns in cats.items(): - self.by_cat[cat].extend(cat_anns) - else: - self.results = sum(dist.all_gather(self.results), []) - - def prepare(self, predictions): - lvis_results = [] - for original_id, prediction in predictions: - if len(prediction) == 0: - continue - - boxes = prediction["boxes"] - boxes = convert_to_xywh(boxes).tolist() - scores = prediction["scores"].tolist() - labels = prediction["labels"].tolist() - - lvis_results.extend( - [ - { - "image_id": original_id, - "category_id": labels[k], - "bbox": box, - "score": scores[k], - } - for k, box in enumerate(boxes) - ] - ) - return lvis_results - - def summarize(self): - if not dist.is_main_process(): - return - - if self.fixed_ap: - return self._summarize_fixed() - else: - return self._summarize_standard() - - def _summarize_standard(self): - results = LVISResults(self.gt, self.results) - lvis_eval = LVISEval(self.gt, results, iou_type="bbox") - lvis_eval.run() - lvis_eval.print_results() - - def _summarize_fixed(self): - results = [] - - missing_dets_cats = set() - for cat, cat_anns in self.by_cat.items(): - if len(cat_anns) < self.topk: - missing_dets_cats.add(cat) - results.extend(sorted(cat_anns, key=lambda x: x["score"], reverse=True)[: self.topk]) - if missing_dets_cats: - print( - f"\n===\n" - f"{len(missing_dets_cats)} classes had less than {self.topk} detections!\n" - f"Outputting {self.topk} detections for each class will improve AP further.\n" - f"If using detectron2, please use the lvdevil/infer_topk.py script to " - f"output a results file with {self.topk} detections for each class.\n" - 
f"===" - ) - - results = LVISResults(self.gt, results, max_dets=-1) - lvis_eval = LVISEval(self.gt, results, iou_type="bbox") - params = lvis_eval.params - params.max_dets = -1 # No limit on detections per image. - lvis_eval.run() - scores = lvis_eval.print_results() - metrics = {k: v for k, v in lvis_eval.results.items() if k.startswith("AP")} - print("copypaste: %s,%s", ",".join(map(str, metrics.keys())), "path") - return scores - - -class LvisDumper(object): - def __init__(self, topk=10000, fixed_ap=True, out_path="lvis_eval"): - - self.results = [] - self.by_cat = {} - self.topk = topk - self.fixed_ap = fixed_ap - self.out_path = out_path - if dist.is_main_process(): - if not os.path.exists(self.out_path): - os.mkdir(self.out_path) - - def update(self, predictions): - cur_results = self.prepare(predictions) - if self.fixed_ap: - by_cat = defaultdict(list) - for ann in cur_results: - by_cat[ann["category_id"]].append(ann) - - for cat, cat_anns in by_cat.items(): - if cat not in self.by_cat: - self.by_cat[cat] = [] - - cur = sorted(cat_anns, key=lambda x: x["score"], reverse=True)[: self.topk] - self.by_cat[cat] = _merge_lists(self.by_cat[cat], cur, self.topk, key=lambda x: x["score"]) - else: - by_id = defaultdict(list) - for ann in cur_results: - by_id[ann["image_id"]].append(ann) - - for id_anns in by_id.values(): - self.results.extend(sorted(id_anns, key=lambda x: x["score"], reverse=True)[:300]) - - def synchronize_between_processes(self): - if self.fixed_ap: - all_cats = dist.all_gather(self.by_cat) - self.by_cat = defaultdict(list) - for cats in all_cats: - for cat, cat_anns in cats.items(): - self.by_cat[cat].extend(cat_anns) - else: - self.results = sum(dist.all_gather(self.results), []) - - def prepare(self, predictions): - lvis_results = [] - for original_id, prediction in predictions: - if len(prediction) == 0: - continue - - boxes = prediction["boxes"] - boxes = convert_to_xywh(boxes).tolist() - scores = prediction["scores"].tolist() - labels = prediction["labels"].tolist() - - lvis_results.extend( - [ - { - "image_id": original_id, - "category_id": labels[k], - "bbox": box, - "score": scores[k], - } - for k, box in enumerate(boxes) - ] - ) - return lvis_results - - def summarize(self): - if not dist.is_main_process(): - return - - if self.fixed_ap: - self._summarize_fixed() - else: - self._summarize_standard() - - def _summarize_standard(self): - json_path = os.path.join(self.out_path, "results.json") - print("dumping to ", json_path) - with open(json_path, "w") as f: - json.dump(self.results, f) - - print("dumped") - - def _summarize_fixed(self): - results = [] - - missing_dets_cats = set() - for cat, cat_anns in self.by_cat.items(): - if len(cat_anns) < self.topk: - missing_dets_cats.add(cat) - results.extend(sorted(cat_anns, key=lambda x: x["score"], reverse=True)[: self.topk]) - if missing_dets_cats: - print( - f"\n===\n" - f"{len(missing_dets_cats)} classes had less than {self.topk} detections!\n" - f"Outputting {self.topk} detections for each class will improve AP further.\n" - f"If using detectron2, please use the lvdevil/infer_topk.py script to " - f"output a results file with {self.topk} detections for each class.\n" - f"===" - ) - - json_path = os.path.join(self.out_path, "results.json") - print("dumping to ", json_path) - with open(json_path, "w") as f: - json.dump(results, f) - - print("dumped") - - -def convert_to_xywh(boxes): - xmin, ymin, xmax, ymax = boxes.unbind(1) - return torch.stack((xmin, ymin, xmax - xmin, ymax - ymin), dim=1) - - -def 
create_common_lvis_eval(lvis_eval, img_ids, eval_imgs): - img_ids, eval_imgs = merge(img_ids, eval_imgs) - img_ids = list(img_ids) - eval_imgs = list(eval_imgs.flatten()) - - lvis_eval.eval_imgs = eval_imgs - lvis_eval.params.img_ids = img_ids - -def lvis_evaluation(): - pass \ No newline at end of file diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/rpn/vldyhead.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/rpn/vldyhead.py deleted file mode 100644 index 27c1c63eb5b5f14f7a143a97e82015360ff848c6..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/modeling/rpn/vldyhead.py +++ /dev/null @@ -1,1036 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn -from collections import defaultdict - -from .inference import make_atss_postprocessor -from .loss import make_atss_loss_evaluator -from .anchor_generator import make_anchor_generator_complex - -from maskrcnn_benchmark.structures.boxlist_ops import cat_boxlist -from maskrcnn_benchmark.layers import Scale, DYReLU, SELayer, ModulatedDeformConv -from maskrcnn_benchmark.layers import NaiveSyncBatchNorm2d, FrozenBatchNorm2d -from maskrcnn_benchmark.modeling.backbone.fbnet import * -from maskrcnn_benchmark.engine.inference import create_positive_map_label_to_token_from_positive_map -from ..utils import cat, concat_box_prediction_layers, permute_and_flatten - -from maskrcnn_benchmark.utils.fuse_helper import FeatureResizer, func_attention, _make_mlp, _make_conv, _make_coord, \ - BiAttentionBlock, AttentionT2I, BiAttentionBlockForCheckpoint, BertLMPredictionHead -from transformers.models.bert.modeling_bert import BertConfig, BertAttention, BertIntermediate, BertOutput, \ - BertPreTrainedModel -from transformers.modeling_utils import apply_chunking_to_forward -import torch.utils.checkpoint as checkpoint -import pdb - -from maskrcnn_benchmark.modeling.language_backbone.clip_model import QuickGELU, LayerNorm, DropPath -from timm.models.layers import DropPath, trunc_normal_ - -class h_sigmoid(nn.Module): - def __init__(self, inplace=True, h_max=1): - super(h_sigmoid, self).__init__() - self.relu = nn.ReLU6(inplace=inplace) - self.h_max = h_max - - def forward(self, x): - return self.relu(x + 3) * self.h_max / 6 - - -class BoxCoder(object): - - def __init__(self, cfg): - self.cfg = cfg - - def encode(self, gt_boxes, anchors): - TO_REMOVE = 1 # TODO remove - ex_widths = anchors[:, 2] - anchors[:, 0] + TO_REMOVE - ex_heights = anchors[:, 3] - anchors[:, 1] + TO_REMOVE - ex_ctr_x = (anchors[:, 2] + anchors[:, 0]) / 2 - ex_ctr_y = (anchors[:, 3] + anchors[:, 1]) / 2 - - gt_widths = gt_boxes[:, 2] - gt_boxes[:, 0] + TO_REMOVE - gt_heights = gt_boxes[:, 3] - gt_boxes[:, 1] + TO_REMOVE - gt_ctr_x = (gt_boxes[:, 2] + gt_boxes[:, 0]) / 2 - gt_ctr_y = (gt_boxes[:, 3] + gt_boxes[:, 1]) / 2 - - wx, wy, ww, wh = (10., 10., 5., 5.) 
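        # (Added note) Standard R-CNN box-delta parameterization with fixed
        # weights (10, 10, 5, 5): the targets below are anchor-normalized center
        # offsets and log size ratios, dx = wx*(gx - ax)/aw, dy = wy*(gy - ay)/ah,
        # dw = ww*log(gw/aw), dh = wh*log(gh/ah); decode() inverts this mapping.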
- targets_dx = wx * (gt_ctr_x - ex_ctr_x) / ex_widths - targets_dy = wy * (gt_ctr_y - ex_ctr_y) / ex_heights - targets_dw = ww * torch.log(gt_widths / ex_widths) - targets_dh = wh * torch.log(gt_heights / ex_heights) - targets = torch.stack((targets_dx, targets_dy, targets_dw, targets_dh), dim=1) - - return targets - - def decode(self, preds, anchors): - anchors = anchors.to(preds.dtype) - - TO_REMOVE = 1 # TODO remove - widths = anchors[:, 2] - anchors[:, 0] + TO_REMOVE - heights = anchors[:, 3] - anchors[:, 1] + TO_REMOVE - ctr_x = (anchors[:, 2] + anchors[:, 0]) / 2 - ctr_y = (anchors[:, 3] + anchors[:, 1]) / 2 - - wx, wy, ww, wh = (10., 10., 5., 5.) - dx = preds[:, 0::4] / wx - dy = preds[:, 1::4] / wy - dw = preds[:, 2::4] / ww - dh = preds[:, 3::4] / wh - - # Prevent sending too large values into torch.exp() - dw = torch.clamp(dw, max=math.log(1000. / 16)) - dh = torch.clamp(dh, max=math.log(1000. / 16)) - - pred_ctr_x = dx * widths[:, None] + ctr_x[:, None] - pred_ctr_y = dy * heights[:, None] + ctr_y[:, None] - pred_w = torch.exp(dw) * widths[:, None] - pred_h = torch.exp(dh) * heights[:, None] - - pred_boxes = torch.zeros_like(preds) - pred_boxes[:, 0::4] = pred_ctr_x - 0.5 * (pred_w - 1) - pred_boxes[:, 1::4] = pred_ctr_y - 0.5 * (pred_h - 1) - pred_boxes[:, 2::4] = pred_ctr_x + 0.5 * (pred_w - 1) - pred_boxes[:, 3::4] = pred_ctr_y + 0.5 * (pred_h - 1) - - return pred_boxes - - -class Conv3x3Norm(torch.nn.Module): - def __init__(self, - in_channels, - out_channels, - stride, - groups=1, - deformable=False, - bn_type=None): - super(Conv3x3Norm, self).__init__() - - if deformable: - self.conv = ModulatedDeformConv(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, - groups=groups) - else: - self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, groups=groups) - - if isinstance(bn_type, (list, tuple)): - assert len(bn_type) == 2 - assert bn_type[0] == "gn" - gn_group = bn_type[1] - bn_type = bn_type[0] - - if bn_type == "bn": - bn_op = nn.BatchNorm2d(out_channels) - elif bn_type == "sbn": - bn_op = nn.SyncBatchNorm(out_channels) - elif bn_type == "nsbn": - bn_op = NaiveSyncBatchNorm2d(out_channels) - elif bn_type == "gn": - bn_op = nn.GroupNorm(num_groups=gn_group, num_channels=out_channels) - elif bn_type == "af": - bn_op = FrozenBatchNorm2d(out_channels) - if bn_type is not None: - self.bn = bn_op - else: - self.bn = None - - def forward(self, input, **kwargs): - x = self.conv(input, **kwargs) - if self.bn: - x = self.bn(x) - return x - - -class DyConv(torch.nn.Module): - def __init__(self, - in_channels=256, - out_channels=256, - conv_func=nn.Conv2d, - use_dyfuse=True, - use_dyrelu=False, - use_deform=False - ): - super(DyConv, self).__init__() - - self.DyConv = nn.ModuleList() - self.DyConv.append(conv_func(in_channels, out_channels, 1)) - self.DyConv.append(conv_func(in_channels, out_channels, 1)) - self.DyConv.append(conv_func(in_channels, out_channels, 2)) - - if use_dyfuse: - self.AttnConv = nn.Sequential( - nn.AdaptiveAvgPool2d(1), - nn.Conv2d(in_channels, 1, kernel_size=1), - nn.ReLU(inplace=True)) - self.h_sigmoid = h_sigmoid() - else: - self.AttnConv = None - - if use_dyrelu: - self.relu = DYReLU(in_channels, out_channels) - else: - self.relu = nn.ReLU() - - if use_deform: - self.offset = nn.Conv2d(in_channels, 27, kernel_size=3, stride=1, padding=1) - else: - self.offset = None - - self.init_weights() - - def init_weights(self): - for m in self.DyConv.modules(): - if isinstance(m, nn.Conv2d): - 
nn.init.normal_(m.weight.data, 0, 0.01) - if m.bias is not None: - m.bias.data.zero_() - if self.AttnConv is not None: - for m in self.AttnConv.modules(): - if isinstance(m, nn.Conv2d): - nn.init.normal_(m.weight.data, 0, 0.01) - if m.bias is not None: - m.bias.data.zero_() - - def forward(self, inputs): - visual_feats = inputs["visual"] - language_dict_features = inputs["lang"] - - next_x = [] - for level, feature in enumerate(visual_feats): - - conv_args = dict() - if self.offset is not None: - offset_mask = self.offset(feature) - offset = offset_mask[:, :18, :, :] - mask = offset_mask[:, 18:, :, :].sigmoid() - conv_args = dict(offset=offset, mask=mask) - - temp_fea = [self.DyConv[1](feature, **conv_args)] - - if level > 0: - temp_fea.append(self.DyConv[2](visual_feats[level - 1], **conv_args)) - if level < len(visual_feats) - 1: - temp_fea.append(F.upsample_bilinear(self.DyConv[0](visual_feats[level + 1], **conv_args), - size=[feature.size(2), feature.size(3)])) - mean_fea = torch.mean(torch.stack(temp_fea), dim=0, keepdim=False) - - if self.AttnConv is not None: - attn_fea = [] - res_fea = [] - for fea in temp_fea: - res_fea.append(fea) - attn_fea.append(self.AttnConv(fea)) - - res_fea = torch.stack(res_fea) - spa_pyr_attn = self.h_sigmoid(torch.stack(attn_fea)) - - mean_fea = torch.mean(res_fea * spa_pyr_attn, dim=0, keepdim=False) - - next_x.append(mean_fea) - - next_x = [self.relu(item) for item in next_x] - - features_dict = {"visual": next_x, - "lang": language_dict_features} - - return features_dict - - -class BertEncoderLayer(BertPreTrainedModel): - def __init__(self, config, clamp_min_for_underflow = False, clamp_max_for_overflow = False): - super().__init__(config) - self.config = config - - self.chunk_size_feed_forward = config.chunk_size_feed_forward - self.seq_len_dim = 1 - - from maskrcnn_benchmark.modeling.rpn.modeling_bert import BertAttention, BertIntermediate, BertOutput - - self.attention = BertAttention(config, clamp_min_for_underflow, clamp_max_for_overflow) - self.intermediate = BertIntermediate(config) - self.output = BertOutput(config) - - def forward(self, inputs): - language_dict_features = inputs["lang"] - hidden_states = language_dict_features["hidden"] - attention_mask = language_dict_features["masks"] - - device = hidden_states.device - input_shape = hidden_states.size()[:-1] - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. 
- extended_attention_mask = self.get_extended_attention_mask(attention_mask, input_shape, device) - - self_attention_outputs = self.attention( - hidden_states, - extended_attention_mask, - None, - output_attentions=False, - past_key_value=None, - ) - attention_output = self_attention_outputs[0] - outputs = self_attention_outputs[1:] # add self attentions if we output attention weights - layer_output = apply_chunking_to_forward( - self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output - ) - outputs = (layer_output,) + outputs - hidden_states = outputs[0] - - language_dict_features["hidden"] = hidden_states - - features_dict = {"visual": inputs["visual"], - "lang": language_dict_features - } - - return features_dict - - def feed_forward_chunk(self, attention_output): - intermediate_output = self.intermediate(attention_output) - layer_output = self.output(intermediate_output, attention_output) - return layer_output - - -class CLIPTransformerLayer(nn.Module): - def __init__(self, config): - super().__init__() - self.config = config - d_model = self.config.MODEL.CLIP.WIDTH - n_head = self.config.MODEL.CLIP.HEADS - drop_path = self.config.MODEL.CLIP.DROP_PATH - self.context_length = self.config.MODEL.CLIP.CONTEXT_LENGTH - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, d_model * 4)), - ("gelu", QuickGELU()), - ("c_proj", nn.Linear(d_model * 4, d_model)) - ])) - self.ln_2 = LayerNorm(d_model) - self.attn_mask = None - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.apply(self._init_weights) - - def _init_weights(self, m): - if isinstance(m, (nn.Linear, nn.Conv2d)): - trunc_normal_(m.weight, std=0.02) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, (nn.LayerNorm, nn.BatchNorm2d)): - nn.init.constant_(m.bias, 0) - - def attention(self, x: torch.Tensor, key_padding_mask: torch.Tensor = None): - self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) \ - if self.attn_mask is not None else None - return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask, key_padding_mask=key_padding_mask)[0] - - def forward(self, inputs): - language_dict_features = inputs["lang"] - x = language_dict_features["hidden"] - mask = language_dict_features["masks"] - # get extended attention mask for nn.MultiHeadAttention - key_padding_mask = (1.0 - mask).to(torch.bool) - - x = x.permute(1, 0, 2) - x = x + self.drop_path(self.attention(self.ln_1(x), key_padding_mask=key_padding_mask)) - x = x + self.drop_path(self.mlp(self.ln_2(x))) - x = x.permute(1, 0, 2) - - language_dict_features["hidden"] = x - features_dict = {"visual": inputs["visual"], - "lang": language_dict_features - } - return features_dict - - -class DummyLayer(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, inputs): - return inputs - - -class VLFuse(torch.nn.Module): - """ - Early Fusion Module - """ - - def __init__(self, cfg): - super(VLFuse, self).__init__() - self.init_configs(cfg) - self.cfg = cfg - - self.use_checkpoint = False - if hasattr(cfg.MODEL.DYHEAD, 'USE_CHECKPOINT'): - self.use_checkpoint = cfg.MODEL.DYHEAD.USE_CHECKPOINT - self.dummy_tensor = torch.ones(1, dtype=torch.float32, requires_grad=True) - - # early fusion module - print("EARLY FUSION ON, USING {}".format(cfg.MODEL.DYHEAD.FUSE_CONFIG.TYPE)) - if cfg.MODEL.DYHEAD.FUSE_CONFIG.TYPE == "MHA-S": - # single-direction (text->image) - # text 
-> image - self.t2i_attn = AttentionT2I(q_dim=self.joint_embedding_size, - k_dim=self.lang_dim, - embed_dim=self.embed_dim, - num_heads=self.n_head, - hidden_dim=self.t2i_hidden_dim, - dropout=0.1, - drop_path=.0, - init_values=1.0 / cfg.MODEL.DYHEAD.NUM_CONVS, - mode="t2i", - use_layer_scale=cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_LAYER_SCALE, - clamp_min_for_underflow=cfg.MODEL.DYHEAD.FUSE_CONFIG.CLAMP_MIN_FOR_UNDERFLOW, - clamp_max_for_overflow=cfg.MODEL.DYHEAD.FUSE_CONFIG.CLAMP_MAX_FOR_OVERFLOW - ) - - elif cfg.MODEL.DYHEAD.FUSE_CONFIG.TYPE == "MHA-B": - # bi-direction (text->image, image->text) - self.b_attn = BiAttentionBlockForCheckpoint(v_dim=self.joint_embedding_size, - l_dim=self.lang_dim, - embed_dim=self.embed_dim, - num_heads=self.n_head, - hidden_dim=self.i2t_hidden_dim, - dropout=0.1, - drop_path=.0, - init_values=1.0 / cfg.MODEL.DYHEAD.NUM_CONVS, - cfg=cfg - ) - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.SEPARATE_BIDIRECTIONAL and self.cfg.MODEL.DYHEAD.FUSE_CONFIG.DO_LANG_PROJ_OUTSIDE_CHECKPOINT: - self.shrink_lang = FeatureResizer(self.lang_dim * 5, - self.lang_dim, 0.1) - - - elif cfg.MODEL.DYHEAD.FUSE_CONFIG.TYPE == "SCAN": - # single-direction (text->image) - self.mapping_lang = _make_mlp(self.lang_dim, - self.joint_embedding_size, - self.joint_embedding_dropout) - self.joint_fusion = nn.ModuleList([_make_conv(self.joint_inp_dim, self.joint_out_dim, 1) \ - for _ in range(5)]) - - elif cfg.MODEL.DYHEAD.FUSE_CONFIG.TYPE == "FILM": - # single-direction (text->image) - self.mapping_lang = _make_mlp(self.lang_dim, - self.joint_embedding_size, - self.joint_embedding_dropout) - self.gamma = nn.ModuleList(nn.Linear(self.joint_embedding_size, self.joint_inp_dim) for _ in range(5)) - self.beta = nn.ModuleList(nn.Linear(self.joint_embedding_size, self.joint_inp_dim) for _ in range(5)) - - self.joint_fusion = nn.ModuleList([_make_conv(self.joint_inp_dim, self.joint_out_dim, 1) \ - for _ in range(5)]) - - else: - print("NO FUSION INVOLVED.") - - def init_configs(self, cfg): - # common params - self.lang_model = cfg.MODEL.LANGUAGE_BACKBONE.MODEL_TYPE - self.joint_embedding_size = cfg.MODEL.DYHEAD.FUSE_CONFIG.JOINT_EMB_SIZE - self.joint_embedding_dropout = cfg.MODEL.DYHEAD.FUSE_CONFIG.JOINT_EMB_DROPOUT - self.joint_mlp_layers = cfg.MODEL.DYHEAD.FUSE_CONFIG.JOINT_MLP_LAYERS - - self.max_query_len = cfg.MODEL.LANGUAGE_BACKBONE.MAX_QUERY_LEN - self.n_layers = cfg.MODEL.LANGUAGE_BACKBONE.N_LAYERS - self.coord_dim = 8 - self.joint_inp_dim = self.coord_dim + self.joint_embedding_size - self.joint_out_dim = cfg.MODEL.DYHEAD.FUSE_CONFIG.JOINT_OUT_SIZE - - # mha params - self.n_head = 8 - self.embed_dim = 2048 - self.t2i_hidden_dim = 1024 # 256 * 4 - self.i2t_hidden_dim = 3072 # 768 * 4 - - if self.lang_model in ["bert-base-uncased", "roberta-base", "clip"]: - self.lang_dim = cfg.MODEL.LANGUAGE_BACKBONE.LANG_DIM - else: - self.lang_dim = 1024 - - def forward(self, x): - visual_features = x["visual"] - language_dict_features = x["lang"] - - batch_size = visual_features[0].shape[0] - device = visual_features[0].device - - fused_visual_features = None - fused_language_dict_features = None - - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.TYPE == "MHA-S": - language_feature = language_dict_features['hidden'] - mask = language_dict_features['masks'] - # text -> image - if self.use_checkpoint: - q0, q1, q2, q3, q4 = checkpoint.checkpoint( - self.t2i_attn, - visual_features[0], visual_features[1], - visual_features[2], visual_features[3], - visual_features[4], - language_feature, language_feature, - mask, - 
self.dummy_tensor - ) - else: - q0, q1, q2, q3, q4 = self.t2i_attn( - visual_features[0], visual_features[1], - visual_features[2], visual_features[3], - visual_features[4], - language_feature, language_feature, - attention_mask=mask - ) - - fused_visual_features = [q0, q1, q2, q3, q4] - fused_language_dict_features = language_dict_features - - elif self.cfg.MODEL.DYHEAD.FUSE_CONFIG.TYPE == "MHA-B": - if self.use_checkpoint: - q0, q1, q2, q3, q4, l0, l1, l2, l3, l4 = checkpoint.checkpoint(self.b_attn, - visual_features[0], visual_features[1], - visual_features[2], visual_features[3], - visual_features[4], - language_dict_features['hidden'], - language_dict_features['masks'], - self.dummy_tensor - ) - else: - q0, q1, q2, q3, q4, l0, l1, l2, l3, l4 = self.b_attn( - visual_features[0], visual_features[1], - visual_features[2], visual_features[3], - visual_features[4], - language_dict_features['hidden'], - language_dict_features['masks'], - self.dummy_tensor - ) - - fused_visual_features = [q0, q1, q2, q3, q4] - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.SEPARATE_BIDIRECTIONAL and self.cfg.MODEL.DYHEAD.FUSE_CONFIG.DO_LANG_PROJ_OUTSIDE_CHECKPOINT: - language_features = self.shrink_lang(torch.cat([l0, l1, l2, l3, l4], dim = -1)) - else: - language_features = l0 - - language_dict_features['hidden'] = language_features - fused_language_dict_features = language_dict_features - - elif self.cfg.MODEL.DYHEAD.FUSE_CONFIG.TYPE == "SCAN": - # text -> image - language_feature = language_dict_features['aggregate'] - language_feature = self.mapping_lang(language_feature) - visu_feat = [] - for ii, feat in enumerate(visual_features): - attn_feat = func_attention(feat, language_feature, smooth=1, raw_feature_norm="softmax") - visu_feat.append(attn_feat) - - fused_visual_features = [fusion(feat) for feat, fusion in zip(visu_feat, self.joint_fusion)] - fused_language_dict_features = language_dict_features - - elif self.cfg.MODEL.DYHEAD.FUSE_CONFIG.TYPE == "FILM": - # text -> image - # relative position embedding - coord_feats = [_make_coord(batch_size, x.shape[2], x.shape[3]) for x in visual_features] - # I only use a global representation of language - # you can also use more complex modeling using word-level representations - # Usage: lang_feat = lang_feat['words'] shape [seq_len, dim] - language_feature = language_dict_features['aggregate'] - language_feature = self.mapping_lang(language_feature) - - # attention mechanism for fusion - gamma = [F.tanh(gamma(language_feature)) for gamma in self.gamma] - beta = [F.tanh(beta(language_feature)) for beta in self.beta] - - visu_feat = [] - for ii, feat in enumerate(visual_features): - coord_feat = coord_feats[ii].to(device) - feat = torch.cat([feat, coord_feat], dim=1) - b = beta[ii].view(batch_size, -1, 1, 1).expand_as(feat) - g = gamma[ii].view(batch_size, -1, 1, 1).expand_as(feat) - feat = F.relu(g * feat + b) - visu_feat.append(feat) - - fused_visual_features = [fusion(feat) for feat, fusion in zip(visu_feat, self.joint_fusion)] - fused_language_dict_features = language_dict_features - - else: - fused_visual_features = visual_features - fused_language_dict_features = language_dict_features - - features_dict = {"visual": fused_visual_features, - "lang": fused_language_dict_features} - - return features_dict - - -class VLDyHead(torch.nn.Module): - def __init__(self, cfg): - super(VLDyHead, self).__init__() - self.cfg = cfg - # bert_cfg = BertConfig.from_pretrained(cfg.MODEL.LANGUAGE_BACKBONE.MODEL_TYPE) - if cfg.MODEL.LANGUAGE_BACKBONE.MODEL_TYPE == 
"bert-base-uncased": - lang_cfg = BertConfig.from_pretrained(cfg.MODEL.LANGUAGE_BACKBONE.MODEL_TYPE) - elif cfg.MODEL.LANGUAGE_BACKBONE.MODEL_TYPE == "clip": - lang_cfg = cfg - else: - lang_cfg = None - raise NotImplementedError - - num_classes = cfg.MODEL.DYHEAD.NUM_CLASSES - 1 - num_tokens = cfg.MODEL.LANGUAGE_BACKBONE.MAX_QUERY_LEN - num_anchors = len(cfg.MODEL.RPN.ASPECT_RATIOS) * cfg.MODEL.RPN.SCALES_PER_OCTAVE - in_channels = cfg.MODEL.BACKBONE.OUT_CHANNELS - channels = cfg.MODEL.DYHEAD.CHANNELS - - if cfg.MODEL.DYHEAD.USE_GN: - bn_type = ['gn', cfg.MODEL.GROUP_NORM.NUM_GROUPS] - elif cfg.MODEL.DYHEAD.USE_NSYNCBN: - bn_type = 'nsbn' - elif cfg.MODEL.DYHEAD.USE_SYNCBN: - bn_type = 'sbn' - else: - bn_type = None - - use_dyrelu = cfg.MODEL.DYHEAD.USE_DYRELU - use_dyfuse = cfg.MODEL.DYHEAD.USE_DYFUSE - use_deform = cfg.MODEL.DYHEAD.USE_DFCONV - - if cfg.MODEL.DYHEAD.CONV_FUNC: - conv_func = lambda i, o, s: eval(cfg.MODEL.DYHEAD.CONV_FUNC)(i, o, s, bn_type=bn_type) - else: - conv_func = lambda i, o, s: Conv3x3Norm(i, o, s, deformable=use_deform, bn_type=bn_type) - - dyhead_tower = [] - for i in range(cfg.MODEL.DYHEAD.NUM_CONVS): - if cfg.MODEL.DYHEAD.FUSE_CONFIG.EARLY_FUSE_ON: - # cross-modality fusion - dyhead_tower.append( - VLFuse(cfg) - ) - # self language path - if i < cfg.MODEL.DYHEAD.NUM_CONVS - 1 or cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_FUSED_FEATURES_DOT_PRODUCT: - # dyhead_tower.append( - # BertEncoderLayer( - # bert_cfg, - # clamp_min_for_underflow=cfg.MODEL.DYHEAD.FUSE_CONFIG.CLAMP_BERTATTN_MIN_FOR_UNDERFLOW, - # clamp_max_for_overflow=cfg.MODEL.DYHEAD.FUSE_CONFIG.CLAMP_BERTATTN_MAX_FOR_OVERFLOW) - # ) - if cfg.MODEL.LANGUAGE_BACKBONE.MODEL_TYPE == "bert-base-uncased": - dyhead_tower.append( - BertEncoderLayer( - lang_cfg, - clamp_min_for_underflow=cfg.MODEL.DYHEAD.FUSE_CONFIG.CLAMP_BERTATTN_MIN_FOR_UNDERFLOW, - clamp_max_for_overflow=cfg.MODEL.DYHEAD.FUSE_CONFIG.CLAMP_BERTATTN_MAX_FOR_OVERFLOW) - ) - elif cfg.MODEL.LANGUAGE_BACKBONE.MODEL_TYPE == "clip": - dyhead_tower.append( - CLIPTransformerLayer(lang_cfg) - ) - else: - raise NotImplementedError - - else: - dyhead_tower.append( - DummyLayer() - ) - - # self vision path - dyhead_tower.append( - DyConv( - in_channels if i == 0 else channels, - channels, - conv_func=conv_func, - use_dyrelu=(use_dyrelu and in_channels == channels) if i == 0 else use_dyrelu, - use_dyfuse=(use_dyfuse and in_channels == channels) if i == 0 else use_dyfuse, - use_deform=(use_deform and in_channels == channels) if i == 0 else use_deform, - ) - ) - - self.add_module('dyhead_tower', nn.Sequential(*dyhead_tower)) - - self.cls_logits = nn.Conv2d(channels, num_anchors * num_classes, kernel_size=1) - self.bbox_pred = nn.Conv2d(channels, num_anchors * 4, kernel_size=1) - self.centerness = nn.Conv2d(channels, num_anchors * 1, kernel_size=1) - - # initialize the bias for focal loss - prior_prob = cfg.MODEL.DYHEAD.PRIOR_PROB - bias_value = -math.log((1 - prior_prob) / prior_prob) - - log_scale = self.cfg.MODEL.DYHEAD.LOG_SCALE - - # soft token head - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_TOKEN_LOSS: - self.token_logits = nn.Conv2d(channels, num_anchors * num_tokens, kernel_size=1) - # ABLATION - # self.token_logits = nn.Conv2d(channels, num_anchors * num_tokens, kernel_size=1, bias=False) - # self.bias = nn.Parameter(torch.zeros(channels), requires_grad=True) - # self.bias0 = nn.Parameter(torch.Tensor([bias_value]), requires_grad=True) - - # contrastive alignment head - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_CONTRASTIVE_ALIGN_LOSS: - assert 
self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_DOT_PRODUCT_TOKEN_LOSS == False - contrastive_hdim = cfg.MODEL.DYHEAD.FUSE_CONFIG.CONTRASTIVE_HIDDEN_DIM - self.contrastive_align_projection_image = nn.Conv2d(channels, num_anchors * contrastive_hdim, kernel_size=1) - self.contrastive_align_projection_text = nn.Linear(channels, contrastive_hdim, bias=True) - self.log_scale = nn.Parameter(torch.Tensor([log_scale]), requires_grad=True) - - # dot product soft token head - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_DOT_PRODUCT_TOKEN_LOSS: - assert self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_CONTRASTIVE_ALIGN_LOSS == False - self.dot_product_projection_image = nn.Identity() - self.dot_product_projection_text = nn.Linear(self.cfg.MODEL.LANGUAGE_BACKBONE.LANG_DIM, - num_anchors * channels, bias=True) - self.log_scale = nn.Parameter(torch.Tensor([log_scale]), requires_grad=True) - # DEBUG - # self.bias = nn.Parameter(torch.zeros(channels), requires_grad=True) - self.bias_lang = nn.Parameter(torch.zeros(self.cfg.MODEL.LANGUAGE_BACKBONE.LANG_DIM), requires_grad=True) - self.bias0 = nn.Parameter(torch.Tensor([bias_value]), requires_grad=True) - - # initialization - for modules in [self.cls_logits, self.bbox_pred, - self.centerness]: - for l in modules.modules(): - if isinstance(l, nn.Conv2d): - torch.nn.init.normal_(l.weight, std=0.01) - torch.nn.init.constant_(l.bias, 0) - - self.scales = nn.ModuleList([Scale(init_value=1.0) for _ in range(5)]) - - torch.nn.init.constant_(self.cls_logits.bias, bias_value) - - # if use soft token loss - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_TOKEN_LOSS: - for modules in [self.token_logits]: - for l in modules.modules(): - if isinstance(l, nn.Conv2d): - torch.nn.init.normal_(l.weight, std=0.01) - torch.nn.init.constant_(l.bias, 0) - - torch.nn.init.constant_(self.token_logits.bias, bias_value) - # print(torch.norm(self.token_logits.weight)) - - # if use contrastive loss - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_CONTRASTIVE_ALIGN_LOSS: - for modules in [self.contrastive_align_projection_image]: - for l in modules.modules(): - if isinstance(l, nn.Conv2d): - torch.nn.init.normal_(l.weight, std=0.01) - torch.nn.init.constant_(l.bias, 0) - - # if use dot product token loss - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_DOT_PRODUCT_TOKEN_LOSS: - for modules in [self.dot_product_projection_image]: - for l in modules.modules(): - if isinstance(l, nn.Conv2d): - torch.nn.init.normal_(l.weight, std=0.01) - torch.nn.init.constant_(l.bias, bias_value) - - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.MLM_LOSS: - if cfg.MODEL.LANGUAGE_BACKBONE.MODEL_TYPE == "clip": - lang_cfg = BertConfig.from_pretrained("bert-base-uncased") - lang_cfg.hidden_size = cfg.MODEL.CLIP.WIDTH - lang_cfg.vocab_size = cfg.MODEL.CLIP.VOCAB_SIZE - self.mlm_head = BertLMPredictionHead( - lang_cfg - ) #nn.Linear(hidden_size, config.vocab_size, bias=False) - - def forward(self, x, language_dict_features=None, embedding=None, swint_feature_c4=None): - logits = [] - bbox_reg = [] - centerness = [] - - feat_inputs = {"visual": x, - "lang": language_dict_features} - - dyhead_tower = self.dyhead_tower(feat_inputs) - - # soft token - t_logits = None - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_TOKEN_LOSS: - t_logits = [] - - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_FUSED_FEATURES_DOT_PRODUCT: - embedding = dyhead_tower["lang"]["hidden"] - - # MLM loss - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.MLM_LOSS: - mlm_logits = self.mlm_head(embedding) - else: - mlm_logits = None - - # contrastive - contrastive_logits = None - proj_tokens = None - if 
self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_CONTRASTIVE_ALIGN_LOSS: - contrastive_logits = [] - # follow MDETR's way - proj_tokens = F.normalize( - self.contrastive_align_projection_text(embedding), p=2, dim=-1 - ) - - # dot product soft token - dot_product_logits = None - dot_product_proj_tokens = None - dot_product_proj_tokens_bias = None - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_DOT_PRODUCT_TOKEN_LOSS: - dot_product_logits = [] - # norm - embedding = F.normalize(embedding, p=2, dim=-1) - dot_product_proj_tokens = self.dot_product_projection_text(embedding / 2.0) - # w/o norm - # dot_product_proj_tokens = self.dot_product_projection_text(embedding / 28.0) - - dot_product_proj_tokens_bias = torch.matmul(embedding, self.bias_lang) + self.bias0 - - # shallow contrastive (original feature from image & text encoder) - shallow_img_emb_feats = None - shallow_text_emb = None - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_SHALLOW_CONTRASTIVE_LOSS \ - or self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_BACKBONE_SHALLOW_CONTRASTIVE_LOSS: - shallow_img_emb_feats = [] - shallow_text_emb = embedding - - # print([v.shape for v in x]) - # shallow contrastive: use the feature from swint backbone - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_BACKBONE_SHALLOW_CONTRASTIVE_LOSS: - for b, feature in enumerate(swint_feature_c4): - # BF, CF, HF, WF = feat.shape - # shallow_img_emb = permute_and_flatten(feat, BF, -1, CF, HF, WF) - shallow_img_emb_feats.append(feature) - - fused_visual_features = None - if self.cfg.MODEL.RPN.RETURN_FUSED_FEATURES: - fused_visual_features = [] - - # use the feature from FPN - for l, feature in enumerate(x): - logits.append(self.cls_logits(dyhead_tower["visual"][l])) - - bbox_pred = self.scales[l](self.bbox_pred(dyhead_tower["visual"][l])) - bbox_reg.append(bbox_pred) - - centerness.append(self.centerness(dyhead_tower["visual"][l])) - - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_TOKEN_LOSS: - t_logits.append(self.token_logits(dyhead_tower["visual"][l])) - - # ABLATION - # b = self.bias.unsqueeze(0).unsqueeze(-1).unsqueeze(-1) - # x = dyhead_tower["visual"][l] - # B, C, H, W = x.shape - # bias = b.repeat(B, 1, H, W) - # t_logits.append(self.token_logits(dyhead_tower["visual"][l] + bias) + self.bias0) - - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_CONTRASTIVE_ALIGN_LOSS: - x = dyhead_tower["visual"][l] - B, _, H, W = x.shape - C = proj_tokens.shape[2] - proj_queries = self.contrastive_align_projection_image(dyhead_tower["visual"][l]) - proj_queries = permute_and_flatten(proj_queries, B, -1, C, H, W) - normalized_img_emb = F.normalize(proj_queries, p=2, dim=-1) - normalized_text_emb = proj_tokens - contrastive_logit = ( - torch.matmul(normalized_img_emb, normalized_text_emb.transpose(-1, -2)) / self.log_scale.exp()) - contrastive_logits.append(contrastive_logit) - - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_DOT_PRODUCT_TOKEN_LOSS: - x = dyhead_tower["visual"][l] - if self.cfg.MODEL.RPN.RETURN_FUSED_FEATURES: - fused_visual_features.append(x) - B, C, H, W = x.shape - - # add bias (language) - dot_product_proj_queries = self.dot_product_projection_image(x) - dot_product_proj_queries = permute_and_flatten(dot_product_proj_queries, B, -1, C, H, W) - - A = dot_product_proj_queries.shape[1] - bias = dot_product_proj_tokens_bias.unsqueeze(1).repeat(1, A, 1) - - dot_product_logit = (torch.matmul(dot_product_proj_queries, dot_product_proj_tokens.transpose(-1, -2)) / self.log_scale.exp()) + bias - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.CLAMP_DOT_PRODUCT: - dot_product_logit = torch.clamp(dot_product_logit, max=50000) - 
dot_product_logit = torch.clamp(dot_product_logit, min=-50000) - dot_product_logits.append(dot_product_logit) - - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_SHALLOW_CONTRASTIVE_LOSS: - feat = feature - BF, CF, HF, WF = feat.shape - shallow_img_emb = permute_and_flatten(feat, BF, -1, CF, HF, WF) - shallow_img_emb_feats.append(shallow_img_emb) - - # no matter the feature is from backboone or from fpn, we use shallow_img_embs all the time - if shallow_img_emb_feats is not None and shallow_text_emb is not None: - # shallow_img_embs = torch.cat(shallow_img_embs, dim=1) - proj_tokens = shallow_text_emb - return logits, bbox_reg, centerness, t_logits, proj_tokens, contrastive_logits, dot_product_logits, mlm_logits, shallow_img_emb_feats, fused_visual_features - - -class VLDyHeadModule(torch.nn.Module): - - def __init__(self, cfg): - super(VLDyHeadModule, self).__init__() - self.cfg = cfg - self.head = VLDyHead(cfg) - box_coder = BoxCoder(cfg) - self.loss_evaluator = make_atss_loss_evaluator(cfg, box_coder) - self.box_selector_train = make_atss_postprocessor(cfg, box_coder, is_train=True) - self.box_selector_test = make_atss_postprocessor(cfg, box_coder, is_train=False) - self.anchor_generator = make_anchor_generator_complex(cfg) - - self.lang_model = cfg.MODEL.LANGUAGE_BACKBONE.MODEL_TYPE - self.joint_embedding_size = cfg.MODEL.DYHEAD.FUSE_CONFIG.JOINT_EMB_SIZE - self.joint_embedding_dropout = cfg.MODEL.DYHEAD.FUSE_CONFIG.JOINT_EMB_DROPOUT - if self.lang_model in ["bert-base-uncased", "roberta-base", "clip"]: - self.lang_dim = cfg.MODEL.LANGUAGE_BACKBONE.LANG_DIM - else: - self.lang_dim = 1024 - - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_CONTRASTIVE_ALIGN_LOSS: - self.resizer = FeatureResizer( - input_feat_size=self.lang_dim, - output_feat_size=self.joint_embedding_size, - dropout=self.joint_embedding_dropout - ) - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.ADD_LINEAR_LAYER: - self.tunable_linear = torch.nn.Linear(self.lang_dim, 1000, bias=False) - self.tunable_linear.weight.data.fill_(0.0) - - def forward(self, images, features, targets=None, - language_dict_features=None, - positive_map=None, - captions=None, - swint_feature_c4=None - ): - - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_CONTRASTIVE_ALIGN_LOSS: - # resizer needed - embedding = language_dict_features['embedded'] - embedding = self.resizer(embedding) - elif self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_DOT_PRODUCT_TOKEN_LOSS: - # no resizer needed - embedding = language_dict_features['embedded'] - else: - embedding = None - - if "masks" in language_dict_features: - text_masks = language_dict_features["masks"] - else: - text_masks = None - - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.ADD_LINEAR_LAYER: - embedding = self.tunable_linear.weight[:embedding.size(1), :].unsqueeze(0) + embedding - language_dict_features['embedded'] = embedding - language_dict_features['hidden'] = self.tunable_linear.weight[:embedding.size(1), :].unsqueeze(0) + language_dict_features['hidden'] - - box_cls, box_regression, centerness, token_logits, \ - proj_tokens, contrastive_logits, dot_product_logits, mlm_logits, shallow_img_emb_feats, fused_visual_features = self.head(features, - language_dict_features, - embedding, - swint_feature_c4 - ) - anchors = self.anchor_generator(images, features) - - if self.training: - return self._forward_train(box_cls, box_regression, centerness, targets, anchors, - captions, - positive_map, - token_logits, - proj_tokens, - contrastive_logits, - dot_product_logits, - text_masks, - mlm_logits = mlm_logits, - mlm_labels = 
language_dict_features["mlm_labels"], - shallow_img_emb_feats=shallow_img_emb_feats, - fused_visual_features=fused_visual_features - ) - else: - return self._forward_test(box_regression, centerness, anchors, - box_cls, - token_logits, - dot_product_logits, - positive_map, - fused_visual_features=fused_visual_features - ) - - def _forward_train(self, box_cls, box_regression, centerness, targets, anchors, - captions=None, - positive_map=None, - token_logits=None, - proj_tokens=None, - contrastive_logits=None, - dot_product_logits=None, - text_masks=None, - mlm_logits=None, - mlm_labels=None, - shallow_img_emb_feats=None, - fused_visual_features=None - ): - - loss_box_cls, loss_box_reg, loss_centerness, loss_token, loss_contrastive_align, loss_dot_product_token, loss_shallow_contrastive = self.loss_evaluator( - box_cls, box_regression, centerness, targets, anchors, - captions, - positive_map, - token_logits, - proj_tokens, - contrastive_logits, - dot_product_logits, - text_masks, - shallow_img_emb_feats - ) - - losses = { - # "loss_cls": loss_box_cls, - "loss_reg": loss_box_reg, - "loss_centerness": loss_centerness - } - - if mlm_labels is not None and mlm_logits is not None: - losses["mlm_loss"] = nn.CrossEntropyLoss(ignore_index = -100)(mlm_logits.view(-1, mlm_logits.size(-1)), mlm_labels.view(-1)) * self.cfg.MODEL.DYHEAD.FUSE_CONFIG.MLM_LOSS_COEF - - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_CLASSIFICATION_LOSS: - losses["loss_cls"] = loss_box_cls - else: - losses["loss_cls"] = 0.0 * loss_box_cls - - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_TOKEN_LOSS: - losses["loss_token"] = loss_token * self.cfg.MODEL.DYHEAD.FUSE_CONFIG.TOKEN_LOSS_WEIGHT - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_CONTRASTIVE_ALIGN_LOSS: - losses["loss_contrastive_align"] = loss_contrastive_align * \ - self.cfg.MODEL.DYHEAD.FUSE_CONFIG.CONTRASTIVE_ALIGN_LOSS_WEIGHT - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_DOT_PRODUCT_TOKEN_LOSS: - losses["loss_dot_product_token"] = loss_dot_product_token * \ - self.cfg.MODEL.DYHEAD.FUSE_CONFIG.DOT_PRODUCT_TOKEN_LOSS_WEIGHT - if self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_SHALLOW_CONTRASTIVE_LOSS or \ - self.cfg.MODEL.DYHEAD.FUSE_CONFIG.USE_BACKBONE_SHALLOW_CONTRASTIVE_LOSS: - losses["loss_shallow_contrastive"] = loss_shallow_contrastive * \ - self.cfg.MODEL.DYHEAD.FUSE_CONFIG.SHALLOW_CONTRASTIVE_LOSS_WEIGHT - - if self.cfg.MODEL.RPN_ONLY: - return None, losses, None - else: - # Let's just use one image per batch - assert (box_regression[0].shape[0]) == 1 - positive_map_label_to_token = create_positive_map_label_to_token_from_positive_map(positive_map, plus=1) - boxes = self.box_selector_train(box_regression, centerness, anchors, - box_cls, - token_logits, - dot_product_logits, - positive_map=positive_map_label_to_token - ) - train_boxes = [] - for b, t in zip(boxes, targets): - tb = t.copy_with_fields(["labels"]) - tb.add_field("scores", torch.ones(tb.bbox.shape[0], dtype=torch.bool, device=tb.bbox.device)) - train_boxes.append(cat_boxlist([b, tb])) - return train_boxes, losses, fused_visual_features - - def _forward_test(self, box_regression, centerness, anchors, - box_cls=None, - token_logits=None, - dot_product_logits=None, - positive_map=None, - fused_visual_features=None - ): - - boxes = self.box_selector_test(box_regression, centerness, anchors, - box_cls, - token_logits, - dot_product_logits, - positive_map, - ) - return boxes, {}, fused_visual_features diff --git a/spaces/harkov000/peft-lora-sd-dreambooth/app.py b/spaces/harkov000/peft-lora-sd-dreambooth/app.py deleted file mode 
100644 index 7b3562343491ca0348561afe7e0fa21466b5e55a..0000000000000000000000000000000000000000 --- a/spaces/harkov000/peft-lora-sd-dreambooth/app.py +++ /dev/null @@ -1,375 +0,0 @@ -#!/usr/bin/env python -""" -Demo showcasing parameter-efficient fine-tuning of Stable Diffusion via Dreambooth leveraging 🤗 PEFT (https://github.com/huggingface/peft) - -The code in this repo is partly adapted from the following repositories: -https://huggingface.co/spaces/hysts/LoRA-SD-training -https://huggingface.co/spaces/multimodalart/dreambooth-training -""" -from __future__ import annotations - -import os -import pathlib - -import gradio as gr -import torch -from typing import List - -from inference import InferencePipeline -from trainer import Trainer -from uploader import upload - - -TITLE = "# LoRA + Dreambooth Training and Inference Demo 🎨" -DESCRIPTION = "Demo showcasing parameter-efficient fine-tuning of Stable Diffusion via Dreambooth leveraging 🤗 PEFT (https://github.com/huggingface/peft)." - - -ORIGINAL_SPACE_ID = "smangrul/peft-lora-sd-dreambooth" - -SPACE_ID = os.getenv("SPACE_ID", ORIGINAL_SPACE_ID) -SHARED_UI_WARNING = f"""# Attention - This Space doesn't work in this shared UI. You can duplicate and use it with a paid private T4 GPU. -
      Duplicate Space
      -""" -if os.getenv("SYSTEM") == "spaces" and SPACE_ID != ORIGINAL_SPACE_ID: - SETTINGS = f'Settings' - -else: - SETTINGS = "Settings" -CUDA_NOT_AVAILABLE_WARNING = f"""# Attention - Running on CPU. -
      -You can assign a GPU in the {SETTINGS} tab if you are running this on HF Spaces. -"T4 small" is sufficient to run this demo. -
      -""" - - -def show_warning(warning_text: str) -> gr.Blocks: - with gr.Blocks() as demo: - with gr.Box(): - gr.Markdown(warning_text) - return demo - - -def update_output_files() -> dict: - paths = sorted(pathlib.Path("results").glob("*.pt")) - config_paths = sorted(pathlib.Path("results").glob("*.json")) - paths = paths + config_paths - paths = [path.as_posix() for path in paths] # type: ignore - return gr.update(value=paths or None) - - -def create_training_demo(trainer: Trainer, pipe: InferencePipeline) -> gr.Blocks: - with gr.Blocks() as demo: - base_model = gr.Dropdown( - choices=[ - "CompVis/stable-diffusion-v1-4", - "runwayml/stable-diffusion-v1-5", - "stabilityai/stable-diffusion-2-1-base", - "dreamlike-art/dreamlike-photoreal-2.0" - ], - value="runwayml/stable-diffusion-v1-5", - label="Base Model", - visible=True, - ) - resolution = gr.Dropdown(choices=["512"], value="512", label="Resolution", visible=False) - - with gr.Row(): - with gr.Box(): - gr.Markdown("Training Data") - concept_images = gr.Files(label="Images for your concept") - class_images = gr.Files(label="Class images") - concept_prompt = gr.Textbox(label="Concept Prompt", max_lines=1) - gr.Markdown( - """ - - Upload images of the style you are planning on training on. - - For a concept prompt, use a unique, made up word to avoid collisions. - - Guidelines for getting good results: - - Dreambooth for an `object` or `style`: - - 5-10 images of the object from different angles - - 500-800 iterations should be good enough. - - Prior preservation is recommended. - - `class_prompt`: - - `a photo of object` - - `style` - - `concept_prompt`: - - ` object` - - ` style` - - `a photo of object` - - `a photo of style` - - Dreambooth for a `Person/Face`: - - 15-50 images of the person from different angles, lighting, and expressions. - Have considerable photos with close up faces. - - 800-1200 iterations should be good enough. - - good defaults for hyperparams - - Model - `runwayml/stable-diffusion-v1-5` or `stabilityai/stable-diffusion-2-1-base` - - Use/check Prior preservation. - - Number of class images to use - 200 - - Prior Loss Weight - 1 - - LoRA Rank for unet - 16 - - LoRA Alpha for unet - 20 - - lora dropout - 0 - - LoRA Bias for unet - `all` - - LoRA Rank for CLIP - 16 - - LoRA Alpha for CLIP - 17 - - LoRA Bias for CLIP - `all` - - lora dropout for CLIP - 0 - - Uncheck `FP16` and `8bit-Adam` (don't use them for faces) - - `class_prompt`: Use the gender related word of the person - - `man` - - `woman` - - `boy` - - `girl` - - `concept_prompt`: just the unique, made up word, e.g., `srm` - - Choose `all` for `lora_bias` and `text_encode_lora_bias` - - Dreambooth for a `Scene`: - - 15-50 images of the scene from different angles, lighting, and expressions. - - 800-1200 iterations should be good enough. - - Prior preservation is recommended. 
- - `class_prompt`: - - `scene` - - `landscape` - - `city` - - `beach` - - `mountain` - - `concept_prompt`: - - ` scene` - - ` landscape` - - Experiment with various values for lora dropouts, enabling/disabling fp16 and 8bit-Adam - """ - ) - with gr.Box(): - gr.Markdown("Training Parameters") - num_training_steps = gr.Number(label="Number of Training Steps", value=1000, precision=0) - learning_rate = gr.Number(label="Learning Rate", value=0.0001) - gradient_checkpointing = gr.Checkbox(label="Whether to use gradient checkpointing", value=True) - train_text_encoder = gr.Checkbox(label="Train Text Encoder", value=True) - with_prior_preservation = gr.Checkbox(label="Prior Preservation", value=True) - class_prompt = gr.Textbox( - label="Class Prompt", max_lines=1, placeholder='Example: "a photo of object"' - ) - num_class_images = gr.Number(label="Number of class images to use", value=50, precision=0) - prior_loss_weight = gr.Number(label="Prior Loss Weight", value=1.0, precision=1) - # use_lora = gr.Checkbox(label="Whether to use LoRA", value=True) - lora_r = gr.Number(label="LoRA Rank for unet", value=4, precision=0) - lora_alpha = gr.Number( - label="LoRA Alpha for unet. scaling factor = lora_alpha/lora_r", value=4, precision=0 - ) - lora_dropout = gr.Number(label="lora dropout", value=0.00) - lora_bias = gr.Dropdown( - choices=["none", "all", "lora_only"], - value="none", - label="LoRA Bias for unet. This enables bias params to be trainable based on the bias type", - visible=True, - ) - lora_text_encoder_r = gr.Number(label="LoRA Rank for CLIP", value=4, precision=0) - lora_text_encoder_alpha = gr.Number( - label="LoRA Alpha for CLIP. scaling factor = lora_alpha/lora_r", value=4, precision=0 - ) - lora_text_encoder_dropout = gr.Number(label="lora dropout for CLIP", value=0.00) - lora_text_encoder_bias = gr.Dropdown( - choices=["none", "all", "lora_only"], - value="none", - label="LoRA Bias for CLIP. This enables bias params to be trainable based on the bias type", - visible=True, - ) - gradient_accumulation = gr.Number(label="Number of Gradient Accumulation", value=1, precision=0) - fp16 = gr.Checkbox(label="FP16", value=True) - use_8bit_adam = gr.Checkbox(label="Use 8bit Adam", value=True) - gr.Markdown( - """ - - It will take about 20-30 minutes to train for 1000 steps with a T4 GPU. - - You may want to try a small number of steps first, like 1, to see if everything works fine in your environment. - - Note that your trained models will be deleted when the second training is started. You can upload your trained model in the "Upload" tab. 
- """ - ) - - run_button = gr.Button("Start Training") - with gr.Box(): - with gr.Row(): - check_status_button = gr.Button("Check Training Status") - with gr.Column(): - with gr.Box(): - gr.Markdown("Message") - training_status = gr.Markdown() - output_files = gr.Files(label="Trained Weight Files and Configs") - - run_button.click(fn=pipe.clear) - - run_button.click( - fn=trainer.run, - inputs=[ - base_model, - resolution, - num_training_steps, - concept_images, - concept_prompt, - class_images, - learning_rate, - gradient_accumulation, - fp16, - use_8bit_adam, - gradient_checkpointing, - train_text_encoder, - with_prior_preservation, - prior_loss_weight, - class_prompt, - num_class_images, - lora_r, - lora_alpha, - lora_bias, - lora_dropout, - lora_text_encoder_r, - lora_text_encoder_alpha, - lora_text_encoder_bias, - lora_text_encoder_dropout, - ], - outputs=[ - training_status, - output_files, - ], - queue=False, - ) - check_status_button.click(fn=trainer.check_if_running, inputs=None, outputs=training_status, queue=False) - check_status_button.click(fn=update_output_files, inputs=None, outputs=output_files, queue=False) - return demo - - -def find_weight_files() -> List[str]: - curr_dir = pathlib.Path(__file__).parent - paths = sorted(curr_dir.rglob("*.pt")) - return [path.relative_to(curr_dir).as_posix() for path in paths] - - -def reload_lora_weight_list() -> dict: - return gr.update(choices=find_weight_files()) - - -def create_inference_demo(pipe: InferencePipeline) -> gr.Blocks: - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - base_model = gr.Dropdown( - choices=[ - "CompVis/stable-diffusion-v1-4", - "runwayml/stable-diffusion-v1-5", - "stabilityai/stable-diffusion-2-1-base", - "dreamlike-art/dreamlike-photoreal-2.0" - ], - value="runwayml/stable-diffusion-v1-5", - label="Base Model", - visible=True, - ) - reload_button = gr.Button("Reload Weight List") - lora_weight_name = gr.Dropdown( - choices=find_weight_files(), value="lora/lora_disney.pt", label="LoRA Weight File" - ) - prompt = gr.Textbox(label="Prompt", max_lines=1, placeholder='Example: "style of sks, baby lion"') - negative_prompt = gr.Textbox( - label="Negative Prompt", max_lines=1, placeholder='Example: "blurry, botched, low quality"' - ) - seed = gr.Slider(label="Seed", minimum=0, maximum=100000, step=1, value=1) - with gr.Accordion("Other Parameters", open=False): - num_steps = gr.Slider(label="Number of Steps", minimum=0, maximum=1000, step=1, value=50) - guidance_scale = gr.Slider(label="CFG Scale", minimum=0, maximum=50, step=0.1, value=7) - - run_button = gr.Button("Generate") - - gr.Markdown( - """ - - After training, you can press "Reload Weight List" button to load your trained model names. 
- - Few repos to refer for ideas: - - https://huggingface.co/smangrul/smangrul - - https://huggingface.co/smangrul/painting-in-the-style-of-smangrul - - https://huggingface.co/smangrul/erenyeager - """ - ) - with gr.Column(): - result = gr.Image(label="Result") - - reload_button.click(fn=reload_lora_weight_list, inputs=None, outputs=lora_weight_name) - prompt.submit( - fn=pipe.run, - inputs=[ - base_model, - lora_weight_name, - prompt, - negative_prompt, - seed, - num_steps, - guidance_scale, - ], - outputs=result, - queue=False, - ) - run_button.click( - fn=pipe.run, - inputs=[ - base_model, - lora_weight_name, - prompt, - negative_prompt, - seed, - num_steps, - guidance_scale, - ], - outputs=result, - queue=False, - ) - seed.change( - fn=pipe.run, - inputs=[ - base_model, - lora_weight_name, - prompt, - negative_prompt, - seed, - num_steps, - guidance_scale, - ], - outputs=result, - queue=False, - ) - return demo - - -def create_upload_demo() -> gr.Blocks: - with gr.Blocks() as demo: - model_name = gr.Textbox(label="Model Name") - hf_token = gr.Textbox(label="Hugging Face Token (with write permission)") - upload_button = gr.Button("Upload") - with gr.Box(): - gr.Markdown("Message") - result = gr.Markdown() - gr.Markdown( - """ - - You can upload your trained model to your private Model repo (i.e. https://huggingface.co/{your_username}/{model_name}). - - You can find your Hugging Face token [here](https://huggingface.co/settings/tokens). - """ - ) - - upload_button.click(fn=upload, inputs=[model_name, hf_token], outputs=result) - - return demo - - -pipe = InferencePipeline() -trainer = Trainer() - -with gr.Blocks(css="style.css") as demo: - if os.getenv("IS_SHARED_UI"): - show_warning(SHARED_UI_WARNING) - if not torch.cuda.is_available(): - show_warning(CUDA_NOT_AVAILABLE_WARNING) - - gr.Markdown(TITLE) - gr.Markdown(DESCRIPTION) - - with gr.Tabs(): - with gr.TabItem("Train"): - create_training_demo(trainer, pipe) - with gr.TabItem("Test"): - create_inference_demo(pipe) - with gr.TabItem("Upload"): - create_upload_demo() - -demo.queue(default_enabled=False).launch(share=False) diff --git a/spaces/harmonai/dance-diffusion/README.md b/spaces/harmonai/dance-diffusion/README.md deleted file mode 100644 index eaf42eceae231dce324da195e36fda63eb8d4ecb..0000000000000000000000000000000000000000 --- a/spaces/harmonai/dance-diffusion/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Dance Diffusion -emoji: 🦀 -colorFrom: red -colorTo: red -sdk: gradio -sdk_version: 3.8.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/.github/ISSUE_TEMPLATE/feature-request.md b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/.github/ISSUE_TEMPLATE/feature-request.md deleted file mode 100644 index dd69a33478c85068cdd7b8b90161f97cc55c1621..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/.github/ISSUE_TEMPLATE/feature-request.md +++ /dev/null @@ -1,31 +0,0 @@ ---- -name: "\U0001F680Feature Request" -about: Submit a proposal/request for a new detectron2 feature - ---- - -## 🚀 Feature -A clear and concise description of the feature proposal. - - -## Motivation & Examples - -Tell us why the feature is useful. 
- -Describe what the feature would look like, if it is implemented. -Best demonstrated using **code examples** in addition to words. - -## Note - -We only consider adding new features if they are relevant to many users. - -If you request implementation of research papers -- -we only consider papers that have enough significance and prevalance in the object detection field. - -We do not take requests for most projects in the `projects/` directory, -because they are research code release that is mainly for other researchers to reproduce results. - -Instead of adding features inside detectron2, -you can implement many features by [extending detectron2](https://detectron2.readthedocs.io/tutorials/extend.html). -The [projects/](https://github.com/facebookresearch/detectron2/tree/master/projects/) directory contains many of such examples. - diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/export/api.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/export/api.py deleted file mode 100644 index a7600714e1edb019def04f9d0d1a063668943101..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/export/api.py +++ /dev/null @@ -1,277 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import copy -import logging -import os -import torch -from caffe2.proto import caffe2_pb2 -from torch import nn - -from detectron2.config import CfgNode as CN - -from .caffe2_export import export_caffe2_detection_model -from .caffe2_export import export_onnx_model as export_onnx_model_impl -from .caffe2_export import run_and_save_graph -from .caffe2_inference import ProtobufDetectionModel -from .caffe2_modeling import META_ARCH_CAFFE2_EXPORT_TYPE_MAP, convert_batched_inputs_to_c2_format -from .shared import get_pb_arg_vali, get_pb_arg_vals, save_graph - -__all__ = [ - "add_export_config", - "export_caffe2_model", - "Caffe2Model", - "export_onnx_model", - "Caffe2Tracer", -] - - -def add_export_config(cfg): - """ - Args: - cfg (CfgNode): a detectron2 config - - Returns: - CfgNode: an updated config with new options that will be used - by :class:`Caffe2Tracer`. - """ - is_frozen = cfg.is_frozen() - cfg.defrost() - cfg.EXPORT_CAFFE2 = CN() - cfg.EXPORT_CAFFE2.USE_HEATMAP_MAX_KEYPOINT = False - if is_frozen: - cfg.freeze() - return cfg - - -class Caffe2Tracer: - """ - Make a detectron2 model traceable with caffe2 style. - - An original detectron2 model may not be traceable, or - cannot be deployed directly after being traced, due to some reasons: - 1. control flow in some ops - 2. custom ops - 3. complicated pre/post processing - - This class provides a traceable version of a detectron2 model by: - 1. Rewrite parts of the model using ops in caffe2. Note that some ops do - not have GPU implementation. - 2. Define the inputs "after pre-processing" as inputs to the model - 3. Remove post-processing and produce raw layer outputs - - More specifically about inputs: all builtin models take two input tensors. - (1) NCHW float "data" which is an image (usually in [0, 255]) - (2) Nx3 float "im_info", each row of which is (height, width, 1.0) - - After making a traceable model, the class provide methods to export such a - model to different deployment formats. - - The class currently only supports models using builtin meta architectures. 
- """ - - def __init__(self, cfg, model, inputs): - """ - Args: - cfg (CfgNode): a detectron2 config, with extra export-related options - added by :func:`add_export_config`. - model (nn.Module): a model built by - :func:`detectron2.modeling.build_model`. - inputs: sample inputs that the given model takes for inference. - Will be used to trace the model. - """ - assert isinstance(cfg, CN), cfg - assert isinstance(model, torch.nn.Module), type(model) - if "EXPORT_CAFFE2" not in cfg: - cfg = add_export_config(cfg) # will just the defaults - - self.cfg = cfg - self.model = model - self.inputs = inputs - - def _get_traceable(self): - # TODO how to make it extensible to support custom models - C2MetaArch = META_ARCH_CAFFE2_EXPORT_TYPE_MAP[self.cfg.MODEL.META_ARCHITECTURE] - traceable_model = C2MetaArch(self.cfg, copy.deepcopy(self.model)) - traceable_inputs = traceable_model.get_caffe2_inputs(self.inputs) - return traceable_model, traceable_inputs - - def export_caffe2(self): - """ - Export the model to Caffe2's protobuf format. - The returned object can be saved with `.save_protobuf()` method. - The result can be loaded and executed using Caffe2 runtime. - - Returns: - Caffe2Model - """ - model, inputs = self._get_traceable() - predict_net, init_net = export_caffe2_detection_model(model, inputs) - return Caffe2Model(predict_net, init_net) - - def export_onnx(self): - """ - Export the model to ONNX format. - Note that the exported model contains custom ops only available in caffe2, therefore it - cannot be directly executed by other runtime. Post-processing or transformation passes - may be applied on the model to accommodate different runtimes. - - Returns: - onnx.ModelProto: an onnx model. - """ - model, inputs = self._get_traceable() - return export_onnx_model_impl(model, (inputs,)) - - def export_torchscript(self): - """ - Export the model to a `torch.jit.TracedModule` by tracing. - The returned object can be saved to a file by ".save()". - - Returns: - torch.jit.TracedModule: a torch TracedModule - """ - model, inputs = self._get_traceable() - logger = logging.getLogger(__name__) - logger.info("Tracing the model with torch.jit.trace ...") - with torch.no_grad(): - return torch.jit.trace(model, (inputs,), optimize=True) - - -def export_caffe2_model(cfg, model, inputs): - """ - Export a detectron2 model to caffe2 format. - - Args: - cfg (CfgNode): a detectron2 config, with extra export-related options - added by :func:`add_export_config`. - model (nn.Module): a model built by - :func:`detectron2.modeling.build_model`. - It will be modified by this function. - inputs: sample inputs that the given model takes for inference. - Will be used to trace the model. - - Returns: - Caffe2Model - """ - return Caffe2Tracer(cfg, model, inputs).export_caffe2() - - -def export_onnx_model(cfg, model, inputs): - """ - Export a detectron2 model to ONNX format. - Note that the exported model contains custom ops only available in caffe2, therefore it - cannot be directly executed by other runtime. Post-processing or transformation passes - may be applied on the model to accommodate different runtimes. - Args: - cfg (CfgNode): a detectron2 config, with extra export-related options - added by :func:`add_export_config`. - model (nn.Module): a model built by - :func:`detectron2.modeling.build_model`. - It will be modified by this function. - inputs: sample inputs that the given model takes for inference. - Will be used to trace the model. - Returns: - onnx.ModelProto: an onnx model. 
- """ - return Caffe2Tracer(cfg, model, inputs).export_onnx() - - -class Caffe2Model(nn.Module): - """ - A wrapper around the traced model in caffe2's pb format. - """ - - def __init__(self, predict_net, init_net): - super().__init__() - self.eval() # always in eval mode - self._predict_net = predict_net - self._init_net = init_net - self._predictor = None - - @property - def predict_net(self): - """ - Returns: - core.Net: the underlying caffe2 predict net - """ - return self._predict_net - - @property - def init_net(self): - """ - Returns: - core.Net: the underlying caffe2 init net - """ - return self._init_net - - __init__.__HIDE_SPHINX_DOC__ = True - - def save_protobuf(self, output_dir): - """ - Save the model as caffe2's protobuf format. - - Args: - output_dir (str): the output directory to save protobuf files. - """ - logger = logging.getLogger(__name__) - logger.info("Saving model to {} ...".format(output_dir)) - os.makedirs(output_dir, exist_ok=True) - - with open(os.path.join(output_dir, "model.pb"), "wb") as f: - f.write(self._predict_net.SerializeToString()) - with open(os.path.join(output_dir, "model.pbtxt"), "w") as f: - f.write(str(self._predict_net)) - with open(os.path.join(output_dir, "model_init.pb"), "wb") as f: - f.write(self._init_net.SerializeToString()) - - def save_graph(self, output_file, inputs=None): - """ - Save the graph as SVG format. - - Args: - output_file (str): a SVG file - inputs: optional inputs given to the model. - If given, the inputs will be used to run the graph to record - shape of every tensor. The shape information will be - saved together with the graph. - """ - if inputs is None: - save_graph(self._predict_net, output_file, op_only=False) - else: - size_divisibility = get_pb_arg_vali(self._predict_net, "size_divisibility", 0) - device = get_pb_arg_vals(self._predict_net, "device", b"cpu").decode("ascii") - inputs = convert_batched_inputs_to_c2_format(inputs, size_divisibility, device) - inputs = [x.cpu().numpy() for x in inputs] - run_and_save_graph(self._predict_net, self._init_net, inputs, output_file) - - @staticmethod - def load_protobuf(dir): - """ - Args: - dir (str): a directory used to save Caffe2Model with - :meth:`save_protobuf`. - The files "model.pb" and "model_init.pb" are needed. - - Returns: - Caffe2Model: the caffe2 model loaded from this directory. - """ - predict_net = caffe2_pb2.NetDef() - with open(os.path.join(dir, "model.pb"), "rb") as f: - predict_net.ParseFromString(f.read()) - - init_net = caffe2_pb2.NetDef() - with open(os.path.join(dir, "model_init.pb"), "rb") as f: - init_net.ParseFromString(f.read()) - - return Caffe2Model(predict_net, init_net) - - def __call__(self, inputs): - """ - An interface that wraps around a caffe2 model and mimics detectron2's models' - input & output format. This is used to compare the outputs of caffe2 model - with its original torch model. - - Due to the extra conversion between torch/caffe2, - this method is not meant for benchmark. 
- """ - if self._predictor is None: - self._predictor = ProtobufDetectionModel(self._predict_net, self._init_net) - return self._predictor(inputs) diff --git a/spaces/hasibzunair/fifa-tryon-demo/util/util.py b/spaces/hasibzunair/fifa-tryon-demo/util/util.py deleted file mode 100644 index 550560aac8dc82fe4f896fd0c37e36fab3e15dd2..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/util/util.py +++ /dev/null @@ -1,145 +0,0 @@ -from __future__ import print_function -import os -from PIL import Image -import numpy as np -import torch - -print('?') - -# Converts a Tensor into a Numpy array -# |imtype|: the desired type of the converted numpy array - - -def tensor2im(image_tensor, imtype=np.uint8, normalize=True): - if isinstance(image_tensor, list): - image_numpy = [] - for i in range(len(image_tensor)): - image_numpy.append(tensor2im(image_tensor[i], imtype, normalize)) - return image_numpy - image_numpy = image_tensor.cpu().float().numpy() - # if normalize: - # image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0 - # else: - # image_numpy = np.transpose(image_numpy, (1, 2, 0)) * 255.0 - image_numpy = (image_numpy + 1) / 2.0 - image_numpy = np.clip(image_numpy, 0, 1) - if image_numpy.shape[2] == 1 or image_numpy.shape[2] > 3: - image_numpy = image_numpy[:, :, 0] - - return image_numpy - -# Converts a one-hot tensor into a colorful label map - - -def tensor2label(label_tensor, n_label, imtype=np.uint8): - if n_label == 0: - return tensor2im(label_tensor, imtype) - label_tensor = label_tensor.cpu().float() - if label_tensor.size()[0] > 1: - label_tensor = label_tensor.max(0, keepdim=True)[1] - label_tensor = Colorize(n_label)(label_tensor) - #label_numpy = np.transpose(label_tensor.numpy(), (1, 2, 0)) - label_numpy = label_tensor.numpy() - label_numpy = label_numpy / 255.0 - - return label_numpy - - -def save_image(image_numpy, image_path, grayscale=False): - image_pil = Image.fromarray(image_numpy) - image_pil.save(image_path) - - -def save_tensor_as_image(image_tensor, image_path, grayscale=False): - image_numpy = tensor_to_image(image_tensor, grayscale) - save_image(image_numpy, image_path, grayscale) - - -def tensor_to_image(img_tensor, grayscale=False): - if grayscale: - tensor = img_tensor.cpu().clamp(0, 255) - else: - tensor = (img_tensor.clone() + 1) * 0.5 * 255 - tensor = tensor.cpu().clamp(0, 255) - - try: - array = tensor.numpy().astype('uint8') - except: - array = tensor.detach().numpy().astype('uint8') - - if array.shape[0] == 1: - array = array.squeeze(0) - elif array.shape[0] == 3: - array = array.swapaxes(0, 1).swapaxes(1, 2) - - return array - - -def mkdirs(paths): - if isinstance(paths, list) and not isinstance(paths, str): - for path in paths: - mkdir(path) - else: - mkdir(paths) - - -def mkdir(path): - if not os.path.exists(path): - os.makedirs(path) - -############################################################################### -# Code from -# https://github.com/ycszen/pytorch-seg/blob/master/transform.py -# Modified so it complies with the Citscape label map colors -############################################################################### - - -def uint82bin(n, count=8): - """returns the binary of integer n, count refers to amount of bits""" - return ''.join([str((n >> y) & 1) for y in range(count-1, -1, -1)]) - - -def labelcolormap(N): - if N == 35: # cityscape - cmap = np.array([(0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (111, 74, 0), (81, 0, 81), - (128, 64, 128), (244, 35, 232), (250, 170, 160), (230, - 
150, 140), (70, 70, 70), (102, 102, 156), (190, 153, 153), - (180, 165, 180), (150, 100, 100), (150, 120, 90), (153, - 153, 153), (153, 153, 153), (250, 170, 30), (220, 220, 0), - (107, 142, 35), (152, 251, 152), (70, 130, 180), (220, - 20, 60), (255, 0, 0), (0, 0, 142), (0, 0, 70), - (0, 60, 100), (0, 0, 90), (0, 0, 110), (0, 80, 100), (0, 0, 230), (119, 11, 32), (0, 0, 142)], - dtype=np.uint8) - else: - cmap = np.zeros((N, 3), dtype=np.uint8) - for i in range(N): - r, g, b = 0, 0, 0 - id = i - for j in range(7): - str_id = uint82bin(id) - r = r ^ (np.uint8(str_id[-1]) << (7-j)) - g = g ^ (np.uint8(str_id[-2]) << (7-j)) - b = b ^ (np.uint8(str_id[-3]) << (7-j)) - id = id >> 3 - cmap[i, 0] = r - cmap[i, 1] = g - cmap[i, 2] = b - return cmap - - -class Colorize(object): - def __init__(self, n=35): - self.cmap = labelcolormap(n) - self.cmap = torch.from_numpy(self.cmap[:n]) - - def __call__(self, gray_image): - size = gray_image.size() - color_image = torch.ByteTensor(3, size[1], size[2]).fill_(0) - - for label in range(0, len(self.cmap)): - mask = (label == gray_image[0]).cpu() - color_image[0][mask] = self.cmap[label][0] - color_image[1][mask] = self.cmap[label][1] - color_image[2][mask] = self.cmap[label][2] - - return color_image diff --git a/spaces/hbestm/gpt-academic-play/request_llm/bridge_all.py b/spaces/hbestm/gpt-academic-play/request_llm/bridge_all.py deleted file mode 100644 index 0c468125fd0182b078f49202588a2d739918bfb3..0000000000000000000000000000000000000000 --- a/spaces/hbestm/gpt-academic-play/request_llm/bridge_all.py +++ /dev/null @@ -1,310 +0,0 @@ - -""" - 该文件中主要包含2个函数,是所有LLM的通用接口,它们会继续向下调用更底层的LLM模型,处理多模型并行等细节 - - 不具备多线程能力的函数:正常对话时使用,具备完备的交互功能,不可多线程 - 1. predict(...) - - 具备多线程调用能力的函数:在函数插件中被调用,灵活而简洁 - 2. predict_no_ui_long_connection(...) 
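# ---------------------------------------------------------------------------
# English gloss of the module docstring above (editorial addition): this file
# contains the two universal entry points shared by every LLM backend; they
# dispatch to the lower-level model bridges and handle details such as
# multi-model parallelism.
#   1. predict(...)                        - streaming and fully interactive, used for
#                                            normal conversation; not thread-safe
#   2. predict_no_ui_long_connection(...)  - safe to call from worker threads, used
#                                            inside function plugins; flexible and compact
#
# Minimal call sketch (all values are assumptions; `llm_kwargs` normally comes from the UI):
#   llm_kwargs = {"llm_model": "gpt-3.5-turbo", "top_p": 1.0, "temperature": 1.0}
#   observe_window = ["", time.time(), ""]   # [partial output, watchdog timestamp, reserved]
#   answer = predict_no_ui_long_connection(
#       inputs="Summarize this repository.", llm_kwargs=llm_kwargs, history=[],
#       sys_prompt="You are a helpful assistant.", observe_window=observe_window)
#   # passing "gpt-3.5-turbo&chatglm" as llm_model queries both backends in parallel
# ---------------------------------------------------------------------------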
-""" -import tiktoken -from functools import lru_cache -from concurrent.futures import ThreadPoolExecutor -from toolbox import get_conf, trimmed_format_exc - -from .bridge_chatgpt import predict_no_ui_long_connection as chatgpt_noui -from .bridge_chatgpt import predict as chatgpt_ui - -from .bridge_chatglm import predict_no_ui_long_connection as chatglm_noui -from .bridge_chatglm import predict as chatglm_ui - -from .bridge_newbing import predict_no_ui_long_connection as newbing_noui -from .bridge_newbing import predict as newbing_ui - -# from .bridge_tgui import predict_no_ui_long_connection as tgui_noui -# from .bridge_tgui import predict as tgui_ui - -colors = ['#FF00FF', '#00FFFF', '#FF0000', '#990099', '#009999', '#990044'] - -class LazyloadTiktoken(object): - def __init__(self, model): - self.model = model - - @staticmethod - @lru_cache(maxsize=128) - def get_encoder(model): - print('正在加载tokenizer,如果是第一次运行,可能需要一点时间下载参数') - tmp = tiktoken.encoding_for_model(model) - print('加载tokenizer完毕') - return tmp - - def encode(self, *args, **kwargs): - encoder = self.get_encoder(self.model) - return encoder.encode(*args, **kwargs) - - def decode(self, *args, **kwargs): - encoder = self.get_encoder(self.model) - return encoder.decode(*args, **kwargs) - -# Endpoint 重定向 -API_URL_REDIRECT, = get_conf("API_URL_REDIRECT") -openai_endpoint = "https://api.openai.com/v1/chat/completions" -api2d_endpoint = "https://openai.api2d.net/v1/chat/completions" -newbing_endpoint = "wss://sydney.bing.com/sydney/ChatHub" -# 兼容旧版的配置 -try: - API_URL, = get_conf("API_URL") - if API_URL != "https://api.openai.com/v1/chat/completions": - openai_endpoint = API_URL - print("警告!API_URL配置选项将被弃用,请更换为API_URL_REDIRECT配置") -except: - pass -# 新版配置 -if openai_endpoint in API_URL_REDIRECT: openai_endpoint = API_URL_REDIRECT[openai_endpoint] -if api2d_endpoint in API_URL_REDIRECT: api2d_endpoint = API_URL_REDIRECT[api2d_endpoint] -if newbing_endpoint in API_URL_REDIRECT: newbing_endpoint = API_URL_REDIRECT[newbing_endpoint] - - -# 获取tokenizer -tokenizer_gpt35 = LazyloadTiktoken("gpt-3.5-turbo") -tokenizer_gpt4 = LazyloadTiktoken("gpt-4") -get_token_num_gpt35 = lambda txt: len(tokenizer_gpt35.encode(txt, disallowed_special=())) -get_token_num_gpt4 = lambda txt: len(tokenizer_gpt4.encode(txt, disallowed_special=())) - - -model_info = { - # openai - "gpt-3.5-turbo": { - "fn_with_ui": chatgpt_ui, - "fn_without_ui": chatgpt_noui, - "endpoint": openai_endpoint, - "max_token": 4096, - "tokenizer": tokenizer_gpt35, - "token_cnt": get_token_num_gpt35, - }, - - "gpt-4": { - "fn_with_ui": chatgpt_ui, - "fn_without_ui": chatgpt_noui, - "endpoint": openai_endpoint, - "max_token": 8192, - "tokenizer": tokenizer_gpt4, - "token_cnt": get_token_num_gpt4, - }, - - # api_2d - "api2d-gpt-3.5-turbo": { - "fn_with_ui": chatgpt_ui, - "fn_without_ui": chatgpt_noui, - "endpoint": api2d_endpoint, - "max_token": 4096, - "tokenizer": tokenizer_gpt35, - "token_cnt": get_token_num_gpt35, - }, - - "api2d-gpt-4": { - "fn_with_ui": chatgpt_ui, - "fn_without_ui": chatgpt_noui, - "endpoint": api2d_endpoint, - "max_token": 8192, - "tokenizer": tokenizer_gpt4, - "token_cnt": get_token_num_gpt4, - }, - - # chatglm - "chatglm": { - "fn_with_ui": chatglm_ui, - "fn_without_ui": chatglm_noui, - "endpoint": None, - "max_token": 1024, - "tokenizer": tokenizer_gpt35, - "token_cnt": get_token_num_gpt35, - }, - # newbing - "newbing": { - "fn_with_ui": newbing_ui, - "fn_without_ui": newbing_noui, - "endpoint": newbing_endpoint, - "max_token": 4096, - "tokenizer": tokenizer_gpt35, 
- "token_cnt": get_token_num_gpt35, - }, - -} - - -AVAIL_LLM_MODELS, = get_conf("AVAIL_LLM_MODELS") -if "jittorllms_rwkv" in AVAIL_LLM_MODELS: - from .bridge_jittorllms_rwkv import predict_no_ui_long_connection as rwkv_noui - from .bridge_jittorllms_rwkv import predict as rwkv_ui - model_info.update({ - "jittorllms_rwkv": { - "fn_with_ui": rwkv_ui, - "fn_without_ui": rwkv_noui, - "endpoint": None, - "max_token": 1024, - "tokenizer": tokenizer_gpt35, - "token_cnt": get_token_num_gpt35, - }, - }) -if "jittorllms_llama" in AVAIL_LLM_MODELS: - from .bridge_jittorllms_llama import predict_no_ui_long_connection as llama_noui - from .bridge_jittorllms_llama import predict as llama_ui - model_info.update({ - "jittorllms_llama": { - "fn_with_ui": llama_ui, - "fn_without_ui": llama_noui, - "endpoint": None, - "max_token": 1024, - "tokenizer": tokenizer_gpt35, - "token_cnt": get_token_num_gpt35, - }, - }) -if "jittorllms_pangualpha" in AVAIL_LLM_MODELS: - from .bridge_jittorllms_pangualpha import predict_no_ui_long_connection as pangualpha_noui - from .bridge_jittorllms_pangualpha import predict as pangualpha_ui - model_info.update({ - "jittorllms_pangualpha": { - "fn_with_ui": pangualpha_ui, - "fn_without_ui": pangualpha_noui, - "endpoint": None, - "max_token": 1024, - "tokenizer": tokenizer_gpt35, - "token_cnt": get_token_num_gpt35, - }, - }) -if "moss" in AVAIL_LLM_MODELS: - from .bridge_moss import predict_no_ui_long_connection as moss_noui - from .bridge_moss import predict as moss_ui - model_info.update({ - "moss": { - "fn_with_ui": moss_ui, - "fn_without_ui": moss_noui, - "endpoint": None, - "max_token": 1024, - "tokenizer": tokenizer_gpt35, - "token_cnt": get_token_num_gpt35, - }, - }) -if "stack-claude" in AVAIL_LLM_MODELS: - from .bridge_stackclaude import predict_no_ui_long_connection as claude_noui - from .bridge_stackclaude import predict as claude_ui - # claude - model_info.update({ - "stack-claude": { - "fn_with_ui": claude_ui, - "fn_without_ui": claude_noui, - "endpoint": None, - "max_token": 8192, - "tokenizer": tokenizer_gpt35, - "token_cnt": get_token_num_gpt35, - } - }) - - -def LLM_CATCH_EXCEPTION(f): - """ - 装饰器函数,将错误显示出来 - """ - def decorated(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience): - try: - return f(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience) - except Exception as e: - tb_str = '\n```\n' + trimmed_format_exc() + '\n```\n' - observe_window[0] = tb_str - return tb_str - return decorated - - -def predict_no_ui_long_connection(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience=False): - """ - 发送至LLM,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。 - inputs: - 是本次问询的输入 - sys_prompt: - 系统静默prompt - llm_kwargs: - LLM的内部调优参数 - history: - 是之前的对话列表 - observe_window = None: - 用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]:观测窗。observe_window[1]:看门狗 - """ - import threading, time, copy - - model = llm_kwargs['llm_model'] - n_model = 1 - if '&' not in model: - assert not model.startswith("tgui"), "TGUI不支持函数插件的实现" - - # 如果只询问1个大语言模型: - method = model_info[model]["fn_without_ui"] - return method(inputs, llm_kwargs, history, sys_prompt, observe_window, console_slience) - else: - # 如果同时询问多个大语言模型: - executor = ThreadPoolExecutor(max_workers=4) - models = model.split('&') - n_model = len(models) - - window_len = len(observe_window) - assert window_len==3 - window_mutex = [["", time.time(), ""] for _ in range(n_model)] + [True] - - futures = [] - for i in range(n_model): - model = models[i] - method = 
model_info[model]["fn_without_ui"] - llm_kwargs_feedin = copy.deepcopy(llm_kwargs) - llm_kwargs_feedin['llm_model'] = model - future = executor.submit(LLM_CATCH_EXCEPTION(method), inputs, llm_kwargs_feedin, history, sys_prompt, window_mutex[i], console_slience) - futures.append(future) - - def mutex_manager(window_mutex, observe_window): - while True: - time.sleep(0.25) - if not window_mutex[-1]: break - # 看门狗(watchdog) - for i in range(n_model): - window_mutex[i][1] = observe_window[1] - # 观察窗(window) - chat_string = [] - for i in range(n_model): - chat_string.append( f"【{str(models[i])} 说】: {window_mutex[i][0]} " ) - res = '

      \n\n---\n\n'.join(chat_string) - # # # # # # # # # # # - observe_window[0] = res - - t_model = threading.Thread(target=mutex_manager, args=(window_mutex, observe_window), daemon=True) - t_model.start() - - return_string_collect = [] - while True: - worker_done = [h.done() for h in futures] - if all(worker_done): - executor.shutdown() - break - time.sleep(1) - - for i, future in enumerate(futures): # wait and get - return_string_collect.append( f"【{str(models[i])} 说】: {future.result()} " ) - - window_mutex[-1] = False # stop mutex thread - res = '

      \n\n---\n\n'.join(return_string_collect) - return res - - -def predict(inputs, llm_kwargs, *args, **kwargs): - """ - 发送至LLM,流式获取输出。 - 用于基础的对话功能。 - inputs 是本次问询的输入 - top_p, temperature是LLM的内部调优参数 - history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误) - chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容 - additional_fn代表点击的哪个按钮,按钮见functional.py - """ - - method = model_info[llm_kwargs['llm_model']]["fn_with_ui"] - yield from method(inputs, llm_kwargs, *args, **kwargs) - diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task114_heart_MNMs.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task114_heart_MNMs.py deleted file mode 100644 index 5c91deed460fac732d1a4f9b94829617950f1464..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task114_heart_MNMs.py +++ /dev/null @@ -1,259 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from collections import OrderedDict -from batchgenerators.utilities.file_and_folder_operations import * -import shutil -import numpy as np -from numpy.random.mtrand import RandomState -import subprocess -from multiprocessing import pool -import pandas as pd - - - -def get_mnms_data(data_root): - files_raw = [] - files_gt = [] - for r, dirs, files in os.walk(data_root): - for f in files: - if f.endswith('nii.gz'): - file_path = os.path.join(r, f) - if '_gt' in f: - files_gt.append(file_path) - else: - files_raw.append(file_path) - return files_raw, files_gt - - -def generate_filename_for_nnunet(pat_id, ts, pat_folder=None, add_zeros=False, vendor=None, centre=None, mode='mnms', - data_format='nii.gz'): - if not vendor or not centre: - if add_zeros: - filename = "{}_{}_0000.{}".format(pat_id, str(ts).zfill(4), data_format) - else: - filename = "{}_{}.{}".format(pat_id, str(ts).zfill(4), data_format) - else: - if mode == 'mnms': - if add_zeros: - filename = "{}_{}_{}_{}_0000.{}".format(pat_id, str(ts).zfill(4), vendor, centre, data_format) - else: - filename = "{}_{}_{}_{}.{}".format(pat_id, str(ts).zfill(4), vendor, centre, data_format) - else: - if add_zeros: - filename = "{}_{}_{}_{}_0000.{}".format(vendor, centre, pat_id, str(ts).zfill(4), data_format) - else: - filename = "{}_{}_{}_{}.{}".format(vendor, centre, pat_id, str(ts).zfill(4), data_format) - - if pat_folder: - filename = os.path.join(pat_folder, filename) - return filename - - -def select_annotated_frames_mms(data_folder, out_folder, add_zeros=False, mode='mnms', df_path="/media/full/tera2/data/challenges/mms/Training-corrected_original/M&Ms Dataset Information.xlsx"): - table = pd.read_excel(df_path, index_col='External code') - - for idx in table.index: - ed = table.loc[idx, 'ED'] - es = table.loc[idx, 'ES'] - vendor = table.loc[idx, 'Vendor'] - centre = table.loc[idx, 'Centre'] - - if vendor != "C": - - # generate old filename (w/o vendor and centre) 
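# Editorial note: with a hypothetical patient id "A0S9V9" and ED frame 0, the helper above yields
#   generate_filename_for_nnunet("A0S9V9", 0)                                    -> "A0S9V9_0000.nii.gz"
#   generate_filename_for_nnunet("A0S9V9", 0, add_zeros=True)                    -> "A0S9V9_0000_0000.nii.gz"
#   generate_filename_for_nnunet("A0S9V9", 0, vendor="A", centre=1, mode="mnms") -> "A0S9V9_0000_A_1.nii.gz"
# so the block below copies each annotated ED/ES frame to a new name that also encodes vendor and centre.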
- filename_ed_original = generate_filename_for_nnunet(pat_id=idx, ts=ed, pat_folder=data_folder, - vendor=None, centre=None, add_zeros=False) - filename_es_original = generate_filename_for_nnunet(pat_id=idx, ts=es, pat_folder=data_folder, - vendor=None, centre=None, add_zeros=False) - - # generate new filename with vendor and centre - filename_ed = generate_filename_for_nnunet(pat_id=idx, ts=ed, pat_folder=out_folder, - vendor=vendor, centre=centre, add_zeros=add_zeros, mode=mode) - filename_es = generate_filename_for_nnunet(pat_id=idx, ts=es, pat_folder=out_folder, - vendor=vendor, centre=centre, add_zeros=add_zeros, mode=mode) - - shutil.copy(filename_ed_original, filename_ed) - shutil.copy(filename_es_original, filename_es) - - -def create_custom_splits_for_experiments(task_path): - data_keys = [i[:-4] for i in - subfiles(os.path.join(task_path, "nnUNetData_plans_v2.1_2D_stage0"), - join=False, suffix='npz')] - existing_splits = os.path.join(task_path, "splits_final.pkl") - - splits = load_pickle(existing_splits) - splits = splits[:5] # discard old changes - - unique_a_only = np.unique([i.split('_')[0] for i in data_keys if i.find('_A_') != -1]) - unique_b_only = np.unique([i.split('_')[0] for i in data_keys if i.find('_B_') != -1]) - - num_train_a = int(np.round(0.8 * len(unique_a_only))) - num_train_b = int(np.round(0.8 * len(unique_b_only))) - - p = RandomState(1234) - idx_a_train = p.choice(len(unique_a_only), num_train_a, replace=False) - idx_b_train = p.choice(len(unique_b_only), num_train_b, replace=False) - - identifiers_a_train = [unique_a_only[i] for i in idx_a_train] - identifiers_b_train = [unique_b_only[i] for i in idx_b_train] - - identifiers_a_val = [i for i in unique_a_only if i not in identifiers_a_train] - identifiers_b_val = [i for i in unique_b_only if i not in identifiers_b_train] - - # fold 5 will be train on a and eval on val sets of a and b - splits.append({'train': [i for i in data_keys if i.split("_")[0] in identifiers_a_train], - 'val': [i for i in data_keys if i.split("_")[0] in identifiers_a_val] + [i for i in data_keys if - i.split("_")[ - 0] in identifiers_b_val]}) - - # fold 6 will be train on b and eval on val sets of a and b - splits.append({'train': [i for i in data_keys if i.split("_")[0] in identifiers_b_train], - 'val': [i for i in data_keys if i.split("_")[0] in identifiers_a_val] + [i for i in data_keys if - i.split("_")[ - 0] in identifiers_b_val]}) - - # fold 7 train on both, eval on both - splits.append({'train': [i for i in data_keys if i.split("_")[0] in identifiers_b_train] + [i for i in data_keys if i.split("_")[0] in identifiers_a_train], - 'val': [i for i in data_keys if i.split("_")[0] in identifiers_a_val] + [i for i in data_keys if - i.split("_")[ - 0] in identifiers_b_val]}) - save_pickle(splits, existing_splits) - -def split_4d_nii(nii_path, split_folder, pat_name=None, add_zeros=False): - - # create temporary folder in which the 3d+t file will be split into many 3d files - temp_base = os.path.dirname(nii_path) - temp_location = os.path.join(temp_base, 'tmp') - if not os.path.isdir(temp_location): - os.mkdir(temp_location) - os.chdir(temp_location) - - if not os.path.isdir(split_folder): - os.mkdir(split_folder) - _ = subprocess.call(['fslsplit', nii_path]) - - # rename files so that the patient's ID is in the filename - file_list = [f for f in os.listdir(temp_location) if os.path.isfile(f)] - file_list = sorted(file_list) - - if not pat_name: - pat_name = os.path.basename(os.path.dirname(nii_path)) - - for ts, temp_file in 
enumerate(file_list): - # get time - time_step = temp_file.split('.')[0][3:] - # make sure the time step is a number. Otherwise trust in pythons sort algorithm - try: - int(time_step) - except: - time_step = ts - - # change filename AND location -> move files - if add_zeros: - new_file_name = '{}_{}_0000.nii.gz'.format(pat_name, time_step) - else: - new_file_name = '{}_{}.nii.gz'.format(pat_name, time_step) - os.rename(os.path.join(temp_location, temp_file), - os.path.join(split_folder, new_file_name)) - - os.rmdir(temp_location) - -def split_4d_parallel(args): - nii_path, split_folder, pat_name = args - split_4d_nii(nii_path, split_folder, pat_name) - - -def split_4d_for_all_pat(files_paths, split_folder): - p = pool.Pool(8) - p.map(split_4d_parallel, - zip(files_paths, [split_folder] * len(files_paths), [None] * len(files_paths))) - -if __name__ == "__main__": - task_name = "Task114_heart_MNMs" - train_dir = "/media/full/97d8d6e1-1aa1-4761-9dd1-fc6a62cf6264/nnUnet_raw/nnUNet_raw_data/{}/imagesTr".format(task_name) - test_dir = "/media/full/97d8d6e1-1aa1-4761-9dd1-fc6a62cf6264/nnUnet_raw/nnUNet_raw_data/{}/imagesTs".format(task_name) - #out_dir='/media/full/tera2/output_nnUNet/preprocessed_data/Task114_heart_mnms' - out_dir='/media/full/97d8d6e1-1aa1-4761-9dd1-fc6a62cf6264/tmp' - - # train - all_train_files = [os.path.join(train_dir, x) for x in os.listdir(train_dir)] - # test - all_test_files = [os.path.join(test_dir, x) for x in os.listdir(test_dir)] - - data_root = '/media/full/97d8d6e1-1aa1-4761-9dd1-fc6a62cf6264/data/challenges/mms/Training-corrected_original/Labeled' - files_raw, files_gt = get_mnms_data(data_root=data_root) - split_path_raw ='/media/full/97d8d6e1-1aa1-4761-9dd1-fc6a62cf6264/data/challenges/mms/temp_split_raw' - split_path_gt ='/media/full/97d8d6e1-1aa1-4761-9dd1-fc6a62cf6264/data/challenges/mms/temp_split_gt' - maybe_mkdir_p(split_path_raw) - maybe_mkdir_p(split_path_gt) - - split_4d_for_all_pat(files_raw, split_path_raw) - split_4d_for_all_pat(files_gt, split_path_gt) - - out_dir = '/media/full/97d8d6e1-1aa1-4761-9dd1-fc6a62cf6264/nnUnet_raw/nnUNet_raw_data/{}/'.format(task_name) - - maybe_mkdir_p(join(out_dir, "imagesTr")) - maybe_mkdir_p(join(out_dir, "imagesTs")) - maybe_mkdir_p(join(out_dir, "labelsTr")) - - imagesTr_path = os.path.join(out_dir, "imagesTr") - labelsTr_path = os.path.join(out_dir, "labelsTr") - select_annotated_frames_mms(split_path_raw, imagesTr_path, add_zeros=True) - select_annotated_frames_mms(split_path_gt, labelsTr_path, add_zeros=False) - - labelsTr = subfiles(labelsTr_path) - - - json_dict = OrderedDict() - json_dict['name'] = "M&Ms" - json_dict['description'] = "short axis cardiac cine MRI segmentation" - json_dict['tensorImageSize'] = "4D" - json_dict['reference'] = "Campello, Víctor M. et al.: Multi-Centre, Multi-Vendor & Multi-Disease Cardiac Image Segmentation. In preparation." - json_dict['licence'] = "see M&Ms challenge" - json_dict['release'] = "0.0" - json_dict['modality'] = { - "0": "MRI", - } - # labels differ for ACDC challenge - json_dict['labels'] = { - "0": "background", - "1": "LVBP", - "2": "LVM", - "3": "RV" - } - json_dict['numTraining'] = len(labelsTr) - json_dict['numTest'] = 0 - json_dict['training'] = [{'image': "./imagesTr/%s" % i.split("/")[-1], "label": "./labelsTr/%s" % i.split("/")[-1]} for i in - labelsTr] - json_dict['test'] = [] - - save_json(json_dict, os.path.join(out_dir, "dataset.json")) - - # then preprocess data and plan training. 
- # run in terminal - # > nnUNet_plan_and_preprocess -t --verify_dataset_integrity - - # start training and stop it immediately to get a split.pkl file - # > nnUNet_train 2d nnUNetTrainerV2_MMS 0 - - # - # then create custom splits as used for the final M&Ms submission - # - - split_file_path = '/media/full/97d8d6e1-1aa1-4761-9dd1-fc6a62cf6264/output_nnUNet/preprocessed_data/{}/'.format(task_name) - - create_custom_splits_for_experiments(split_file_path) - diff --git a/spaces/hrdtbs/rvc-mochinoa/infer_pack/models_onnx_moess.py b/spaces/hrdtbs/rvc-mochinoa/infer_pack/models_onnx_moess.py deleted file mode 100644 index 12efb0629a2e3d0d746a34f467254536c2bdbe5f..0000000000000000000000000000000000000000 --- a/spaces/hrdtbs/rvc-mochinoa/infer_pack/models_onnx_moess.py +++ /dev/null @@ -1,849 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = 
self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - 
self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = 
self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if 
gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - 
segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - 
norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/huggingface-projects/diffusers-gallery/Dockerfile b/spaces/huggingface-projects/diffusers-gallery/Dockerfile deleted file mode 100644 index 0ba18d346de09532882673442ee72107556a887d..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/diffusers-gallery/Dockerfile +++ /dev/null @@ -1,2 +0,0 @@ -FROM nginxinc/nginx-unprivileged:alpine -COPY . 
/usr/share/nginx/html \ No newline at end of file diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf42m_pfc02_16gpus_mbf_bs8k.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf42m_pfc02_16gpus_mbf_bs8k.py deleted file mode 100644 index 14a6bb79da7eaa3f111e9efedf507e46a953c9aa..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/configs/wf42m_pfc02_16gpus_mbf_bs8k.py +++ /dev/null @@ -1,27 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.0, 0.4) -config.network = "mbf" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 0.2 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 1e-4 -config.batch_size = 512 -config.lr = 0.4 -config.verbose = 10000 -config.dali = False - -config.rec = "/train_tmp/WebFace42M" -config.num_classes = 2059906 -config.num_image = 42474557 -config.num_epoch = 20 -config.warmup_epoch = 2 -config.val_targets = [] diff --git a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/utils/utils_distributed_sampler.py b/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/utils/utils_distributed_sampler.py deleted file mode 100644 index a7e57275fa17a0a9dbf27fd0eb941dd0fec1823f..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/Deep3DFaceRecon_pytorch/models/arcface_torch/utils/utils_distributed_sampler.py +++ /dev/null @@ -1,124 +0,0 @@ -import math -import os -import random - -import numpy as np -import torch -import torch.distributed as dist -from torch.utils.data import DistributedSampler as _DistributedSampler - - -def setup_seed(seed, cuda_deterministic=True): - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - np.random.seed(seed) - random.seed(seed) - os.environ["PYTHONHASHSEED"] = str(seed) - if cuda_deterministic: # slower, more reproducible - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False - else: # faster, less reproducible - torch.backends.cudnn.deterministic = False - torch.backends.cudnn.benchmark = True - - -def worker_init_fn(worker_id, num_workers, rank, seed): - # The seed of each worker equals to - # num_worker * rank + worker_id + user_seed - worker_seed = num_workers * rank + worker_id + seed - np.random.seed(worker_seed) - random.seed(worker_seed) - torch.manual_seed(worker_seed) - - -def get_dist_info(): - if dist.is_available() and dist.is_initialized(): - rank = dist.get_rank() - world_size = dist.get_world_size() - else: - rank = 0 - world_size = 1 - - return rank, world_size - - -def sync_random_seed(seed=None, device="cuda"): - """Make sure different ranks share the same seed. - All workers must call this function, otherwise it will deadlock. - This method is generally used in `DistributedSampler`, - because the seed should be identical across all processes - in the distributed group. - In distributed sampling, different ranks should sample non-overlapped - data in the dataset. Therefore, this function is used to make sure that - each rank shuffles the data indices in the same order based - on the same seed. Then different ranks could use different indices - to select non-overlapped data from the same data list. 
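# ---------------------------------------------------------------------------
# Editorial sketch (not part of the original file): typical wiring of the helpers
# in this module, assuming torch.distributed is already initialised; `dataset`,
# `batch_size` and the seed value 2048 are placeholders.
from functools import partial
from torch.utils.data import DataLoader

rank, world_size = get_dist_info()
setup_seed(2048, cuda_deterministic=False)
sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank, shuffle=True, seed=2048)
loader = DataLoader(
    dataset,
    batch_size=batch_size,
    sampler=sampler,
    num_workers=4,
    worker_init_fn=partial(worker_init_fn, num_workers=4, rank=rank, seed=2048),
)
# Call sampler.set_epoch(epoch) at the start of every epoch so each epoch gets a new
# (but rank-consistent) shuffle. Because all ranks share the synchronised seed they
# shuffle identically and then read the interleaved slice indices[rank::world_size];
# e.g. with 10 samples and 4 ranks the index list is padded to 12 entries and rank 0
# reads positions 0, 4 and 8 of the shuffled order.
# ---------------------------------------------------------------------------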
- Args: - seed (int, Optional): The seed. Default to None. - device (str): The device where the seed will be put on. - Default to 'cuda'. - Returns: - int: Seed to be used. - """ - if seed is None: - seed = np.random.randint(2**31) - assert isinstance(seed, int) - - rank, world_size = get_dist_info() - - if world_size == 1: - return seed - - if rank == 0: - random_num = torch.tensor(seed, dtype=torch.int32, device=device) - else: - random_num = torch.tensor(0, dtype=torch.int32, device=device) - - dist.broadcast(random_num, src=0) - - return random_num.item() - - -class DistributedSampler(_DistributedSampler): - def __init__( - self, - dataset, - num_replicas=None, # world_size - rank=None, # local_rank - shuffle=True, - seed=0, - ): - - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - - # In distributed sampling, different ranks should sample - # non-overlapped data in the dataset. Therefore, this function - # is used to make sure that each rank shuffles the data indices - # in the same order based on the same seed. Then different ranks - # could use different indices to select non-overlapped data from the - # same data list. - self.seed = sync_random_seed(seed) - - def __iter__(self): - # deterministically shuffle based on epoch - if self.shuffle: - g = torch.Generator() - # When :attr:`shuffle=True`, this ensures all replicas - # use a different random ordering for each epoch. - # Otherwise, the next iteration of this sampler will - # yield the same ordering. - g.manual_seed(self.epoch + self.seed) - indices = torch.randperm(len(self.dataset), generator=g).tolist() - else: - indices = torch.arange(len(self.dataset)).tolist() - - # add extra samples to make it evenly divisible - # in case that indices is shorter than half of total_size - indices = (indices * math.ceil(self.total_size / len(indices)))[: self.total_size] - assert len(indices) == self.total_size - - # subsample - indices = indices[self.rank : self.total_size : self.num_replicas] - assert len(indices) == self.num_samples - - return iter(indices) diff --git a/spaces/iamironman4279/SadTalker/src/facerender/sync_batchnorm/batchnorm.py b/spaces/iamironman4279/SadTalker/src/facerender/sync_batchnorm/batchnorm.py deleted file mode 100644 index 5f4e763f0366dffa10320116413f8c7181a8aeb1..0000000000000000000000000000000000000000 --- a/spaces/iamironman4279/SadTalker/src/facerender/sync_batchnorm/batchnorm.py +++ /dev/null @@ -1,315 +0,0 @@ -# -*- coding: utf-8 -*- -# File : batchnorm.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. 
- -import collections - -import torch -import torch.nn.functional as F - -from torch.nn.modules.batchnorm import _BatchNorm -from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast - -from .comm import SyncMaster - -__all__ = ['SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d'] - - -def _sum_ft(tensor): - """sum over the first and last dimention""" - return tensor.sum(dim=0).sum(dim=-1) - - -def _unsqueeze_ft(tensor): - """add new dementions at the front and the tail""" - return tensor.unsqueeze(0).unsqueeze(-1) - - -_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size']) -_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std']) - - -class _SynchronizedBatchNorm(_BatchNorm): - def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True): - super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine) - - self._sync_master = SyncMaster(self._data_parallel_master) - - self._is_parallel = False - self._parallel_id = None - self._slave_pipe = None - - def forward(self, input): - # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation. - if not (self._is_parallel and self.training): - return F.batch_norm( - input, self.running_mean, self.running_var, self.weight, self.bias, - self.training, self.momentum, self.eps) - - # Resize the input to (B, C, -1). - input_shape = input.size() - input = input.view(input.size(0), self.num_features, -1) - - # Compute the sum and square-sum. - sum_size = input.size(0) * input.size(2) - input_sum = _sum_ft(input) - input_ssum = _sum_ft(input ** 2) - - # Reduce-and-broadcast the statistics. - if self._parallel_id == 0: - mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size)) - else: - mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size)) - - # Compute the output. - if self.affine: - # MJY:: Fuse the multiplication for speed. - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias) - else: - output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std) - - # Reshape it. - return output.view(input_shape) - - def __data_parallel_replicate__(self, ctx, copy_id): - self._is_parallel = True - self._parallel_id = copy_id - - # parallel_id == 0 means master device. - if self._parallel_id == 0: - ctx.sync_master = self._sync_master - else: - self._slave_pipe = ctx.sync_master.register_slave(copy_id) - - def _data_parallel_master(self, intermediates): - """Reduce the sum and square-sum, compute the statistics, and broadcast it.""" - - # Always using same "device order" makes the ReduceAdd operation faster. 
- # Thanks to:: Tete Xiao (http://tetexiao.com/) - intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device()) - - to_reduce = [i[1][:2] for i in intermediates] - to_reduce = [j for i in to_reduce for j in i] # flatten - target_gpus = [i[1].sum.get_device() for i in intermediates] - - sum_size = sum([i[1].sum_size for i in intermediates]) - sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce) - mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size) - - broadcasted = Broadcast.apply(target_gpus, mean, inv_std) - - outputs = [] - for i, rec in enumerate(intermediates): - outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2]))) - - return outputs - - def _compute_mean_std(self, sum_, ssum, size): - """Compute the mean and standard-deviation with sum and square-sum. This method - also maintains the moving average on the master device.""" - assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.' - mean = sum_ / size - sumvar = ssum - sum_ * mean - unbias_var = sumvar / (size - 1) - bias_var = sumvar / size - - self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data - self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data - - return mean, bias_var.clamp(self.eps) ** -0.5 - - -class SynchronizedBatchNorm1d(_SynchronizedBatchNorm): - r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a - mini-batch. - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm1d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm - - Args: - num_features: num_features from an expected input of size - `batch_size x num_features [x width]` - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. 
Default: ``True`` - - Shape: - - Input: :math:`(N, C)` or :math:`(N, C, L)` - - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm1d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm1d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 2 and input.dim() != 3: - raise ValueError('expected 2D or 3D input (got {}D input)' - .format(input.dim())) - super(SynchronizedBatchNorm1d, self)._check_input_dim(input) - - -class SynchronizedBatchNorm2d(_SynchronizedBatchNorm): - r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch - of 3d inputs - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm2d as the mean and - standard-deviation are reduced across all devices during training. - - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm - - Args: - num_features: num_features from an expected input of - size batch_size x num_features x height x width - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. Default: ``True`` - - Shape: - - Input: :math:`(N, C, H, W)` - - Output: :math:`(N, C, H, W)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm2d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm2d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 4: - raise ValueError('expected 4D input (got {}D input)' - .format(input.dim())) - super(SynchronizedBatchNorm2d, self)._check_input_dim(input) - - -class SynchronizedBatchNorm3d(_SynchronizedBatchNorm): - r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch - of 4d inputs - - .. math:: - - y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta - - This module differs from the built-in PyTorch BatchNorm3d as the mean and - standard-deviation are reduced across all devices during training. 
- - For example, when one uses `nn.DataParallel` to wrap the network during - training, PyTorch's implementation normalize the tensor on each device using - the statistics only on that device, which accelerated the computation and - is also easy to implement, but the statistics might be inaccurate. - Instead, in this synchronized version, the statistics will be computed - over all training samples distributed on multiple devices. - - Note that, for one-GPU or CPU-only case, this module behaves exactly same - as the built-in PyTorch implementation. - - The mean and standard-deviation are calculated per-dimension over - the mini-batches and gamma and beta are learnable parameter vectors - of size C (where C is the input size). - - During training, this layer keeps a running estimate of its computed mean - and variance. The running sum is kept with a default momentum of 0.1. - - During evaluation, this running mean/variance is used for normalization. - - Because the BatchNorm is done over the `C` dimension, computing statistics - on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm - or Spatio-temporal BatchNorm - - Args: - num_features: num_features from an expected input of - size batch_size x num_features x depth x height x width - eps: a value added to the denominator for numerical stability. - Default: 1e-5 - momentum: the value used for the running_mean and running_var - computation. Default: 0.1 - affine: a boolean value that when set to ``True``, gives the layer learnable - affine parameters. Default: ``True`` - - Shape: - - Input: :math:`(N, C, D, H, W)` - - Output: :math:`(N, C, D, H, W)` (same shape as input) - - Examples: - >>> # With Learnable Parameters - >>> m = SynchronizedBatchNorm3d(100) - >>> # Without Learnable Parameters - >>> m = SynchronizedBatchNorm3d(100, affine=False) - >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10)) - >>> output = m(input) - """ - - def _check_input_dim(self, input): - if input.dim() != 5: - raise ValueError('expected 5D input (got {}D input)' - .format(input.dim())) - super(SynchronizedBatchNorm3d, self)._check_input_dim(input) diff --git a/spaces/igashov/DiffLinker/src/molecule_builder.py b/spaces/igashov/DiffLinker/src/molecule_builder.py deleted file mode 100644 index ef7417597c87c4d01b6bb624a814caf6e224707d..0000000000000000000000000000000000000000 --- a/spaces/igashov/DiffLinker/src/molecule_builder.py +++ /dev/null @@ -1,102 +0,0 @@ -import torch -import numpy as np - -from rdkit import Chem, Geometry - -from src import const - - -def create_conformer(coords): - conformer = Chem.Conformer() - for i, (x, y, z) in enumerate(coords): - conformer.SetAtomPosition(i, Geometry.Point3D(x, y, z)) - return conformer - - -def build_molecules(one_hot, x, node_mask, is_geom, margins=const.MARGINS_EDM): - molecules = [] - for i in range(len(one_hot)): - mask = node_mask[i].squeeze() == 1 - atom_types = one_hot[i][mask].argmax(dim=1).detach().cpu() - positions = x[i][mask].detach().cpu() - mol = build_molecule(positions, atom_types, is_geom, margins=margins) - molecules.append(mol) - - return molecules - - -def build_molecule(positions, atom_types, is_geom, margins=const.MARGINS_EDM): - idx2atom = const.GEOM_IDX2ATOM if is_geom else const.IDX2ATOM - X, A, E = build_xae_molecule(positions, atom_types, is_geom=is_geom, margins=margins) - mol = Chem.RWMol() - for atom in X: - a = Chem.Atom(idx2atom[atom.item()]) - mol.AddAtom(a) - - all_bonds = torch.nonzero(A) - for bond in all_bonds: - 
mol.AddBond(bond[0].item(), bond[1].item(), const.BOND_DICT[E[bond[0], bond[1]].item()]) - - mol.AddConformer(create_conformer(positions.detach().cpu().numpy().astype(np.float64))) - return mol - - -def build_xae_molecule(positions, atom_types, is_geom, margins=const.MARGINS_EDM): - """ Returns a triplet (X, A, E): atom_types, adjacency matrix, edge_types - args: - positions: N x 3 (already masked to keep final number nodes) - atom_types: N - returns: - X: N (int) - A: N x N (bool) (binary adjacency matrix) - E: N x N (int) (bond type, 0 if no bond) such that A = E.bool() - """ - n = positions.shape[0] - X = atom_types - A = torch.zeros((n, n), dtype=torch.bool) - E = torch.zeros((n, n), dtype=torch.int) - - idx2atom = const.GEOM_IDX2ATOM if is_geom else const.IDX2ATOM - - pos = positions.unsqueeze(0) - dists = torch.cdist(pos, pos, p=2).squeeze(0) - for i in range(n): - for j in range(i): - - pair = sorted([atom_types[i], atom_types[j]]) - order = get_bond_order(idx2atom[pair[0].item()], idx2atom[pair[1].item()], dists[i, j], margins=margins) - - # TODO: a batched version of get_bond_order to avoid the for loop - if order > 0: - # Warning: the graph should be DIRECTED - A[i, j] = 1 - E[i, j] = order - - return X, A, E - - -def get_bond_order(atom1, atom2, distance, check_exists=True, margins=const.MARGINS_EDM): - distance = 100 * distance # We change the metric - - # Check exists for large molecules where some atom pairs do not have a - # typical bond length. - if check_exists: - if atom1 not in const.BONDS_1: - return 0 - if atom2 not in const.BONDS_1[atom1]: - return 0 - - # margin1, margin2 and margin3 have been tuned to maximize the stability of the QM9 true samples - if distance < const.BONDS_1[atom1][atom2] + margins[0]: - - # Check if atoms in bonds2 dictionary. - if atom1 in const.BONDS_2 and atom2 in const.BONDS_2[atom1]: - thr_bond2 = const.BONDS_2[atom1][atom2] + margins[1] - if distance < thr_bond2: - if atom1 in const.BONDS_3 and atom2 in const.BONDS_3[atom1]: - thr_bond3 = const.BONDS_3[atom1][atom2] + margins[2] - if distance < thr_bond3: - return 3 # Triple - return 2 # Double - return 1 # Single - return 0 # No bond diff --git a/spaces/inamXcontru/PoeticTTS/Coolutils Total PDF Converter 6.1.0.195 PDF jamacorni Convert Any PDF File in Batch Mode.md b/spaces/inamXcontru/PoeticTTS/Coolutils Total PDF Converter 6.1.0.195 PDF jamacorni Convert Any PDF File in Batch Mode.md deleted file mode 100644 index d94ee5436420e0e055aa37bf7057b7473d780221..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Coolutils Total PDF Converter 6.1.0.195 PDF jamacorni Convert Any PDF File in Batch Mode.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Coolutils Total PDF Converter 6.1.0.195 – PDF jamacorni


      Download ->->->-> https://gohhs.com/2uz5Q7



      - - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Disconnect Hack Download Mu Online Fixed.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Disconnect Hack Download Mu Online Fixed.md deleted file mode 100644 index 251517f45f7ae02dcba9f709f12c394f6062a0b1..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Disconnect Hack Download Mu Online Fixed.md +++ /dev/null @@ -1,6 +0,0 @@ -

      disconnect hack download mu online


      Download File: https://urlin.us/2uEwJg



      - -Play for free MU Online on our Server. 24/7 up time! High Exp Server, PVP & NON-PVP Servers, Friendly Game Masters, Player Rankings. Since 2006! 1fdad05405
      -
      -
      -

      diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Initial D Arcade Stage 7 Pc BEST Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Initial D Arcade Stage 7 Pc BEST Download.md deleted file mode 100644 index e5bc1570f3b1cb24afd3fe3ac15651051dfa8aea..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Initial D Arcade Stage 7 Pc BEST Download.md +++ /dev/null @@ -1,40 +0,0 @@ - -

      How to Play Initial D Arcade Stage 7 on PC Using TeknoParrot

      -

      If you are a fan of the Initial D manga and anime series, you might have heard of the arcade racing game Initial D Arcade Stage 7 AAX. This game is the seventh installment in the series and features improved graphics, new cars, new courses, and online multiplayer mode. But what if you don't have access to an arcade machine or a RingEdge system? Can you play Initial D Arcade Stage 7 on PC?

      -

      The answer is yes, thanks to a program called TeknoParrot. TeknoParrot is a compatibility layer that lets you run arcade games on your PC. It supports many titles from Sega, Namco, Taito, and other arcade developers. In this article, we will show you how to download and install Initial D Arcade Stage 7 on PC using TeknoParrot.

      -

      Initial D Arcade Stage 7 Pc Download


      Download File: https://urlin.us/2uEyAI



      -

      What You Need to Play Initial D Arcade Stage 7 on PC

      -

      Before you start, make sure you have the following requirements:

      -
        -
      • A PC with Windows 7 or higher, a decent CPU and GPU, and at least 4 GB of RAM.
      • -
      • TeknoParrot software. You can download it from here.[^1^]
      • -
      • Initial D Arcade Stage 7 AAX game files. You can download them from here.[^3^]
      • -
      • Microsoft .NET Framework 4.7.2 or higher. You can download it from here.[^3^]
      • -
      • Microsoft DirectX End-User Runtime Web Installer. You can download it from here.[^2^]
      • -
      • Microsoft Visual C++ 2010 Redistributable Package (x64) and (x86). You can download them from here and here.[^2^]
      • -
      • A controller or a racing wheel. TeknoParrot supports various input devices, such as Xbox 360 controllers, Logitech MOMO Racing Wheels, etc.
      • -
      -

      How to Install Initial D Arcade Stage 7 on PC

      -

      Once you have all the requirements ready, follow these steps to install Initial D Arcade Stage 7 on PC:

      -
        -
      1. Extract the TeknoParrot software to a folder of your choice.
      2. -
      3. Extract the Initial D Arcade Stage 7 AAX game files to a folder of your choice.
      4. -
      5. Run TeknoParrotUI.exe as administrator.
      6. -
      7. Click on Add Game and select InitialD7_GLW_RE_SBYD from the list.
      8. -
      9. Click on Game Settings and browse for the game executable (InitialD7_GLW_RE_SBYD.exe) in the folder where you extracted the game files.
      10. -
      11. Adjust the game resolution, window mode, and other settings according to your preference.
      12. -
      13. Click on Save Settings.
      14. -
      15. Click on Controller Setup and configure your controller or racing wheel according to your preference.
      16. -
      17. Click on Save Settings.
      18. -
      19. Click on Test Game to launch Initial D Arcade Stage 7 on PC.
      20. -
      -

      How to Play Initial D Arcade Stage 7 on PC

      -

      To play Initial D Arcade Stage 7 on PC, you need to create a card file that stores your progress and settings. To do this, follow these steps:

      -
        -
      1. Launch Initial D Arcade Stage 7 on PC using TeknoParrot.
      2. -
      3. Press F2 to enter the test menu.
      4. -
      5. Select Card Management and press Enter.
      6. -
      7. Select Create New Card File and press Enter.
      8. -
      9. Select a card number (1-4) and press Enter

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/David Hindi Movie 720p.md b/spaces/inreVtussa/clothingai/Examples/David Hindi Movie 720p.md deleted file mode 100644 index beef94c80935b58ac7cd3cce1ac3541ef2446bb3..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/David Hindi Movie 720p.md +++ /dev/null @@ -1,32 +0,0 @@ - -``` -

        David Hindi Movie 720p: A Thrilling Action Drama

        -

        David is a 2013 Hindi movie directed by Bejoy Nambiar and starring Vikram, Neil Nitin Mukesh, Vinay Virmani, Tabu, Lara Dutta and Isha Sharvani. The movie follows the lives of three men named David, who are connected by a common thread of fate.

        -

        The first David (Vikram) is a fisherman in Goa, who falls in love with a deaf and mute girl named Roma (Isha Sharvani). The second David (Neil Nitin Mukesh) is a gangster in London, who works for a notorious crime lord named Ghani (Akhilendra Mishra). The third David (Vinay Virmani) is a musician in Mumbai, who struggles with his religious identity and his relationship with his father (Nasser), a Christian priest.

        -

        David Hindi Movie 720p


        Download –––––>>> https://tiurll.com/2uCj6p



        -

      The movie explores how each David faces a crisis in his life and how he deals with it, showcasing different aspects of love, faith, betrayal, and redemption. It has been praised for its stylish cinematography, music, and performances.

        -

        If you are looking for a thrilling action drama with a twist, you should watch David Hindi Movie 720p online. You can download or stream the movie from various platforms such as Netflix, Amazon Prime Video, Hotstar and more. You can also watch the trailer of the movie here:

        -David Hindi Movie 720p Trailer -``` - -``` -

        David Hindi Movie 720p has a unique narrative structure, as it switches between the three stories of the three Davids. The movie also has a nonlinear timeline, as it jumps back and forth between different years and locations. The movie uses different color schemes and visual styles to differentiate the three stories and create a distinct mood for each one.

        -

        The movie also has a stellar soundtrack, composed by various artists such as Anirudh Ravichander, Prashant Pillai, Mikey McCleary and Remo Fernandes. The movie features some catchy songs such as "Mast Kalandar", "Dama Dam Mast Kalandar", "Tere Mere Pyaar Ki" and "Yun Hi Re". The movie also has some soulful background scores that enhance the emotional impact of the scenes.

        -

      David Hindi Movie 720p will keep you hooked till the end with its gripping plot and engaging characters. It has received positive reviews from critics and audiences alike and has been nominated for several awards, making it a must-watch for fans of action, drama, and suspense.

        -

        -``` - -``` -

        If you want to know more about David Hindi Movie 720p, you can visit the official website of the movie here:

        -David Hindi Movie 720p Official Website -

        You can also follow the social media pages of the movie and the cast and crew here:

        - -

      David Hindi Movie 720p is a movie you should not miss. It will make you think, feel, and enjoy, and it will leave you with a lasting impression. Watch it online today and experience the thrill of this action drama.

        -```

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/jacob-petterle/cloudtop-deployer/Dockerfile b/spaces/jacob-petterle/cloudtop-deployer/Dockerfile deleted file mode 100644 index 769ad2b452a235614aecf348a9cf32d67f8c5659..0000000000000000000000000000000000000000 --- a/spaces/jacob-petterle/cloudtop-deployer/Dockerfile +++ /dev/null @@ -1,28 +0,0 @@ -FROM python:3.9 as base - -WORKDIR app - -COPY requirements.txt . - -RUN --mount=type=secret,id=SSH_PRIVATE_KEY,mode=0400,required=true \ - mkdir -p /root/.ssh \ - && echo "$(cat /run/secrets/SSH_PRIVATE_KEY)" > /root/.ssh/id_rsa \ - && chmod 600 /root/.ssh/id_rsa \ - && echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config \ - && pip install --no-cache-dir --upgrade -r requirements.txt - -RUN apt-get update -yq \ - && apt-get install -y curl \ - && curl -sL https://deb.nodesource.com/setup_16.x | bash \ - && apt-get install -y nodejs npm - -RUN npm install -g aws-cdk \ - && cdk --version - -FROM base - -COPY . . - -RUN chmod 0777 /app - -CMD ["streamlit", "run", "streamlit_app.py", "--server.port", "7860"] \ No newline at end of file diff --git a/spaces/james-oldfield/PandA/networks/stylegan3/avg_spectra.py b/spaces/james-oldfield/PandA/networks/stylegan3/avg_spectra.py deleted file mode 100644 index a53a7b3b7be5345477e82b154eb535f75da59b78..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/networks/stylegan3/avg_spectra.py +++ /dev/null @@ -1,276 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Compare average power spectra between real and generated images, -or between multiple generators.""" - -import os -import numpy as np -import torch -import torch.fft -import scipy.ndimage -import matplotlib.pyplot as plt -import click -import tqdm -import dnnlib - -import legacy -from training import dataset - -#---------------------------------------------------------------------------- -# Setup an iterator for streaming images, in uint8 NCHW format, based on the -# respective command line options. 
- -def stream_source_images(source, num, seed, device, data_loader_kwargs=None): # => num_images, image_size, image_iter - ext = source.split('.')[-1].lower() - if data_loader_kwargs is None: - data_loader_kwargs = dict(pin_memory=True, num_workers=3, prefetch_factor=2) - - if ext == 'pkl': - if num is None: - raise click.ClickException('--num is required when --source points to network pickle') - with dnnlib.util.open_url(source) as f: - G = legacy.load_network_pkl(f)['G_ema'].to(device) - def generate_image(seed): - rnd = np.random.RandomState(seed) - z = torch.from_numpy(rnd.randn(1, G.z_dim)).to(device) - c = torch.zeros([1, G.c_dim], device=device) - if G.c_dim > 0: - c[:, rnd.randint(G.c_dim)] = 1 - return (G(z=z, c=c) * 127.5 + 128).clamp(0, 255).to(torch.uint8) - _ = generate_image(seed) # warm up - image_iter = (generate_image(seed + idx) for idx in range(num)) - return num, G.img_resolution, image_iter - - elif ext == 'zip' or os.path.isdir(source): - dataset_obj = dataset.ImageFolderDataset(path=source, max_size=num, random_seed=seed) - if num is not None and num != len(dataset_obj): - raise click.ClickException(f'--source contains fewer than {num} images') - data_loader = torch.utils.data.DataLoader(dataset_obj, batch_size=1, **data_loader_kwargs) - image_iter = (image.to(device) for image, _label in data_loader) - return len(dataset_obj), dataset_obj.resolution, image_iter - - else: - raise click.ClickException('--source must point to network pickle, dataset zip, or directory') - -#---------------------------------------------------------------------------- -# Load average power spectrum from the specified .npz file and construct -# the corresponding heatmap for visualization. - -def construct_heatmap(npz_file, smooth): - npz_data = np.load(npz_file) - spectrum = npz_data['spectrum'] - image_size = npz_data['image_size'] - hmap = np.log10(spectrum) * 10 # dB - hmap = np.fft.fftshift(hmap) - hmap = np.concatenate([hmap, hmap[:1, :]], axis=0) - hmap = np.concatenate([hmap, hmap[:, :1]], axis=1) - if smooth > 0: - sigma = spectrum.shape[0] / image_size * smooth - hmap = scipy.ndimage.gaussian_filter(hmap, sigma=sigma, mode='nearest') - return hmap, image_size - -#---------------------------------------------------------------------------- - -@click.group() -def main(): - """Compare average power spectra between real and generated images, - or between multiple generators. - - Example: - - \b - # Calculate dataset mean and std, needed in subsequent steps. - python avg_spectra.py stats --source=~/datasets/ffhq-1024x1024.zip - - \b - # Calculate average spectrum for the training data. - python avg_spectra.py calc --source=~/datasets/ffhq-1024x1024.zip \\ - --dest=tmp/training-data.npz --mean=112.684 --std=69.509 - - \b - # Calculate average spectrum for a pre-trained generator. - python avg_spectra.py calc \\ - --source=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-ffhq-1024x1024.pkl \\ - --dest=tmp/stylegan3-r.npz --mean=112.684 --std=69.509 --num=70000 - - \b - # Display results. - python avg_spectra.py heatmap tmp/training-data.npz - python avg_spectra.py heatmap tmp/stylegan3-r.npz - python avg_spectra.py slices tmp/training-data.npz tmp/stylegan3-r.npz - - \b - # Save as PNG. 
- python avg_spectra.py heatmap tmp/training-data.npz --save=tmp/training-data.png --dpi=300 - python avg_spectra.py heatmap tmp/stylegan3-r.npz --save=tmp/stylegan3-r.png --dpi=300 - python avg_spectra.py slices tmp/training-data.npz tmp/stylegan3-r.npz --save=tmp/slices.png --dpi=300 - """ - -#---------------------------------------------------------------------------- - -@main.command() -@click.option('--source', help='Network pkl, dataset zip, or directory', metavar='[PKL|ZIP|DIR]', required=True) -@click.option('--num', help='Number of images to process [default: all]', metavar='INT', type=click.IntRange(min=1)) -@click.option('--seed', help='Random seed for selecting the images', metavar='INT', type=click.IntRange(min=0), default=0, show_default=True) -def stats(source, num, seed, device=torch.device('cuda')): - """Calculate dataset mean and standard deviation needed by 'calc'.""" - torch.multiprocessing.set_start_method('spawn') - num_images, _image_size, image_iter = stream_source_images(source=source, num=num, seed=seed, device=device) - - # Accumulate moments. - moments = torch.zeros([3], dtype=torch.float64, device=device) - for image in tqdm.tqdm(image_iter, total=num_images): - image = image.to(torch.float64) - moments += torch.stack([torch.ones_like(image).sum(), image.sum(), image.square().sum()]) - moments = moments / moments[0] - - # Compute mean and standard deviation. - mean = moments[1] - std = (moments[2] - moments[1].square()).sqrt() - print(f'--mean={mean:g} --std={std:g}') - -#---------------------------------------------------------------------------- - -@main.command() -@click.option('--source', help='Network pkl, dataset zip, or directory', metavar='[PKL|ZIP|DIR]', required=True) -@click.option('--dest', help='Where to store the result', metavar='NPZ', required=True) -@click.option('--mean', help='Dataset mean for whitening', metavar='FLOAT', type=float, required=True) -@click.option('--std', help='Dataset standard deviation for whitening', metavar='FLOAT', type=click.FloatRange(min=0), required=True) -@click.option('--num', help='Number of images to process [default: all]', metavar='INT', type=click.IntRange(min=1)) -@click.option('--seed', help='Random seed for selecting the images', metavar='INT', type=click.IntRange(min=0), default=0, show_default=True) -@click.option('--beta', help='Shape parameter for the Kaiser window', metavar='FLOAT', type=click.FloatRange(min=0), default=8, show_default=True) -@click.option('--interp', help='Frequency-domain interpolation factor', metavar='INT', type=click.IntRange(min=1), default=4, show_default=True) -def calc(source, dest, mean, std, num, seed, beta, interp, device=torch.device('cuda')): - """Calculate average power spectrum and store it in .npz file.""" - torch.multiprocessing.set_start_method('spawn') - num_images, image_size, image_iter = stream_source_images(source=source, num=num, seed=seed, device=device) - spectrum_size = image_size * interp - padding = spectrum_size - image_size - - # Setup window function. - window = torch.kaiser_window(image_size, periodic=False, beta=beta, device=device) - window *= window.square().sum().rsqrt() - window = window.ger(window).unsqueeze(0).unsqueeze(1) - - # Accumulate power spectrum. 
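- # Each image is whitened with the supplied dataset mean/std, multiplied by the 2D Kaiser window,
- # zero-padded to the interpolated FFT size, and its squared FFT magnitude is averaged over all
- # images and channels.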
- spectrum = torch.zeros([spectrum_size, spectrum_size], dtype=torch.float64, device=device) - for image in tqdm.tqdm(image_iter, total=num_images): - image = (image.to(torch.float64) - mean) / std - image = torch.nn.functional.pad(image * window, [0, padding, 0, padding]) - spectrum += torch.fft.fftn(image, dim=[2,3]).abs().square().mean(dim=[0,1]) - spectrum /= num_images - - # Save result. - if os.path.dirname(dest): - os.makedirs(os.path.dirname(dest), exist_ok=True) - np.savez(dest, spectrum=spectrum.cpu().numpy(), image_size=image_size) - -#---------------------------------------------------------------------------- - -@main.command() -@click.argument('npz-file', nargs=1) -@click.option('--save', help='Save the plot and exit', metavar='[PNG|PDF|...]') -@click.option('--dpi', help='Figure resolution', metavar='FLOAT', type=click.FloatRange(min=1), default=100, show_default=True) -@click.option('--smooth', help='Amount of smoothing', metavar='FLOAT', type=click.FloatRange(min=0), default=1.25, show_default=True) -def heatmap(npz_file, save, smooth, dpi): - """Visualize 2D heatmap based on the given .npz file.""" - hmap, image_size = construct_heatmap(npz_file=npz_file, smooth=smooth) - - # Setup plot. - plt.figure(figsize=[6, 4.8], dpi=dpi, tight_layout=True) - freqs = np.linspace(-0.5, 0.5, num=hmap.shape[0], endpoint=True) * image_size - ticks = np.linspace(freqs[0], freqs[-1], num=5, endpoint=True) - levels = np.linspace(-40, 20, num=13, endpoint=True) - - # Draw heatmap. - plt.xlim(ticks[0], ticks[-1]) - plt.ylim(ticks[0], ticks[-1]) - plt.xticks(ticks) - plt.yticks(ticks) - plt.contourf(freqs, freqs, hmap, levels=levels, extend='both', cmap='Blues') - plt.gca().set_aspect('equal') - plt.colorbar(ticks=levels) - plt.contour(freqs, freqs, hmap, levels=levels, extend='both', linestyles='solid', linewidths=1, colors='midnightblue', alpha=0.2) - - # Display or save. - if save is None: - plt.show() - else: - if os.path.dirname(save): - os.makedirs(os.path.dirname(save), exist_ok=True) - plt.savefig(save) - -#---------------------------------------------------------------------------- - -@main.command() -@click.argument('npz-files', nargs=-1, required=True) -@click.option('--save', help='Save the plot and exit', metavar='[PNG|PDF|...]') -@click.option('--dpi', help='Figure resolution', metavar='FLOAT', type=click.FloatRange(min=1), default=100, show_default=True) -@click.option('--smooth', help='Amount of smoothing', metavar='FLOAT', type=click.FloatRange(min=0), default=0, show_default=True) -def slices(npz_files, save, dpi, smooth): - """Visualize 1D slices based on the given .npz files.""" - cases = [dnnlib.EasyDict(npz_file=npz_file) for npz_file in npz_files] - for c in cases: - c.hmap, c.image_size = construct_heatmap(npz_file=c.npz_file, smooth=smooth) - c.label = os.path.splitext(os.path.basename(c.npz_file))[0] - - # Check consistency. - image_size = cases[0].image_size - hmap_size = cases[0].hmap.shape[0] - if any(c.image_size != image_size or c.hmap.shape[0] != hmap_size for c in cases): - raise click.ClickException('All .npz must have the same resolution') - - # Setup plot. 
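- # The 0-degree slice runs along the horizontal frequency axis up to Nyquist (image_size / 2);
- # the 45-degree slice runs along the diagonal and reaches image_size / sqrt(2).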
- plt.figure(figsize=[12, 4.6], dpi=dpi, tight_layout=True) - hmap_center = hmap_size // 2 - hmap_range = np.arange(hmap_center, hmap_size) - freqs0 = np.linspace(0, image_size / 2, num=(hmap_size // 2 + 1), endpoint=True) - freqs45 = np.linspace(0, image_size / np.sqrt(2), num=(hmap_size // 2 + 1), endpoint=True) - xticks0 = np.linspace(freqs0[0], freqs0[-1], num=9, endpoint=True) - xticks45 = np.round(np.linspace(freqs45[0], freqs45[-1], num=9, endpoint=True)) - yticks = np.linspace(-50, 30, num=9, endpoint=True) - - # Draw 0 degree slice. - plt.subplot(1, 2, 1) - plt.title('0\u00b0 slice') - plt.xlim(xticks0[0], xticks0[-1]) - plt.ylim(yticks[0], yticks[-1]) - plt.xticks(xticks0) - plt.yticks(yticks) - for c in cases: - plt.plot(freqs0, c.hmap[hmap_center, hmap_range], label=c.label) - plt.grid() - plt.legend(loc='upper right') - - # Draw 45 degree slice. - plt.subplot(1, 2, 2) - plt.title('45\u00b0 slice') - plt.xlim(xticks45[0], xticks45[-1]) - plt.ylim(yticks[0], yticks[-1]) - plt.xticks(xticks45) - plt.yticks(yticks) - for c in cases: - plt.plot(freqs45, c.hmap[hmap_range, hmap_range], label=c.label) - plt.grid() - plt.legend(loc='upper right') - - # Display or save. - if save is None: - plt.show() - else: - if os.path.dirname(save): - os.makedirs(os.path.dirname(save), exist_ok=True) - plt.savefig(save) - -#---------------------------------------------------------------------------- - -if __name__ == "__main__": - main() # pylint: disable=no-value-for-parameter - -#---------------------------------------------------------------------------- diff --git a/spaces/jamescalam/dream-cacher/app.py b/spaces/jamescalam/dream-cacher/app.py deleted file mode 100644 index 89b317529c205deecf584ee2c1c27dd2ba9416a5..0000000000000000000000000000000000000000 --- a/spaces/jamescalam/dream-cacher/app.py +++ /dev/null @@ -1,252 +0,0 @@ -import gradio as gr -from diffusers import StableDiffusionPipeline -import torch -import io -from PIL import Image -import os -from cryptography.fernet import Fernet -from google.cloud import storage -import pinecone -import json -import uuid -import pandas as pd - -# decrypt Storage Cloud credentials -fernet = Fernet(os.environ['DECRYPTION_KEY']) - -with open('cloud-storage.encrypted', 'rb') as fp: - encrypted = fp.read() - creds = json.loads(fernet.decrypt(encrypted).decode()) -# then save creds to file -with open('cloud-storage.json', 'w', encoding='utf-8') as fp: - fp.write(json.dumps(creds, indent=4)) -# connect to Cloud Storage -os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'cloud-storage.json' -storage_client = storage.Client() -bucket = storage_client.get_bucket('hf-diffusion-images') - -# get api key for pinecone auth -PINECONE_KEY = os.environ['PINECONE_KEY'] - -index_id = "hf-diffusion" - -# init connection to pinecone -pinecone.init( - api_key=PINECONE_KEY, - environment="us-west1-gcp" -) -if index_id not in pinecone.list_indexes(): - raise ValueError(f"Index '{index_id}' not found") - -index = pinecone.Index(index_id) - -device = 'cuda' if torch.cuda.is_available() else 'cpu' -print(f"Using '{device}' device...") - -# init all of the models and move them to a given GPU -pipe = StableDiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", use_auth_token=os.environ['HF_AUTH'] -) -pipe.to(device) - -missing_im = Image.open('missing.png') -threshold = 0.85 - -def encode_text(text: str): - text_inputs = pipe.tokenizer( - text, return_tensors='pt' - ).to(device) - text_embeds = pipe.text_encoder(**text_inputs) - text_embeds = 
text_embeds.pooler_output.cpu().tolist()[0] - return text_embeds - -def prompt_query(text: str): - print(f"Running prompt_query('{text}')") - embeds = encode_text(text) - try: - print("Try query pinecone") - xc = index.query(embeds, top_k=30, include_metadata=True) - print("query successful") - except Exception as e: - print(f"Error during query: {e}") - # reinitialize connection - print("Try reinitialize Pinecone connection") - pinecone.init(api_key=PINECONE_KEY, environment='us-west1-gcp') - index2 = pinecone.Index(index_id) - try: - print("Now try querying pinecone again") - xc = index2.query(embeds, top_k=30, include_metadata=True) - print("query successful") - except Exception as e: - raise ValueError(e) - prompts = [ - match['metadata']['prompt'] for match in xc['matches'] - ] - scores = [round(match['score'], 2) for match in xc['matches']] - # deduplicate while preserving order - df = pd.DataFrame({'Similarity': scores, 'Prompt': prompts}) - df = df.drop_duplicates(subset='Prompt', keep='first') - df = df[df['Prompt'].str.len() > 7].head() - return df - -def diffuse(text: str): - # diffuse - out = pipe(text) - if any(out.nsfw_content_detected): - return {} - else: - _id = str(uuid.uuid4()) - # add image to Cloud Storage - im = out.images[0] - im.save(f'{_id}.png', format='png') - added_gcp = False - # push to storage - try: - print("try push to Cloud Storage") - blob = bucket.blob(f'images/{_id}.png') - print("try upload_from_filename") - blob.upload_from_filename(f'{_id}.png') - added_gcp = True - # add embedding and metadata to Pinecone - embeds = encode_text(text) - meta = { - 'prompt': text, - 'image_url': f'images/{_id}.png' - } - try: - print("now try upsert to pinecone") - index.upsert([(_id, embeds, meta)]) - print("upsert successful") - except Exception as e: - try: - print("hit exception, now trying to reinit Pinecone connection") - pinecone.init(api_key=PINECONE_KEY, environment='us-west1-gcp') - index2 = pinecone.Index(index_id) - print(f"reconnected to pinecone '{index_id}' index") - index2.upsert([(_id, embeds, meta)]) - print("upsert successful") - except Exception as e: - print(f"PINECONE_ERROR: {e}") - except Exception as e: - print(f"ERROR: New image not uploaded due to error with {'Pinecone' if added_gcp else 'Cloud Storage'}") - # delete local file - os.remove(f'{_id}.png') - return out.images[0] - -def get_image(url: str): - blob = bucket.blob(url).download_as_string() - blob_bytes = io.BytesIO(blob) - im = Image.open(blob_bytes) - return im - -def test_image(_id, image): - try: - image.save('tmp.png') - return True - except OSError: - # delete corrupted file from pinecone and cloud - index.delete(ids=[_id]) - bucket.blob(f"images/{_id}.png").delete() - print(f"DELETED '{_id}'") - return False - -def prompt_image(text: str): - print(f"prompt_image('{text}')") - embeds = encode_text(text) - try: - print("try query pinecone") - xc = index.query(embeds, top_k=9, include_metadata=True) - except Exception as e: - print(f"Error during query: {e}") - # reinitialize connection - pinecone.init(api_key=PINECONE_KEY, environment='us-west1-gcp') - index2 = pinecone.Index(index_id) - try: - print("try query pinecone after reinit") - xc = index2.query(embeds, top_k=9, include_metadata=True) - except Exception as e: - raise ValueError(e) - image_urls = [ - match['metadata']['image_url'] for match in xc['matches'] - ] - scores = [match['score'] for match in xc['matches']] - ids = [match['id'] for match in xc['matches']] - images = [] - print("Begin looping through (ids, 
image_urls)") - for _id, image_url in zip(ids, image_urls): - try: - print("download_as_string from GCP") - blob = bucket.blob(image_url).download_as_string() - print("downloaded successfully") - blob_bytes = io.BytesIO(blob) - im = Image.open(blob_bytes) - print("image opened successfully") - if test_image(_id, im): - images.append(im) - print("image accessible") - else: - images.append(missing_im) - print("image NOT accessible") - except ValueError: - print(f"ValueError: '{image_url}'") - return images, scores - -# __APP FUNCTIONS__ - -def set_suggestion(text: str): - return gr.TextArea.update(value=text[0]) - -def set_images(text: str): - images, scores = prompt_image(text) - match_found = False - for score in scores: - if score > threshold: - match_found = True - if match_found: - print("MATCH FOUND") - return gr.Gallery.update(value=images) - else: - print("NO MATCH FOUND") - diffuse(text) - print(f"diffusion for '{text}' complete") - images, scores = prompt_image(text) - return gr.Gallery.update(value=images) - -# __CREATE APP__ -demo = gr.Blocks() - -with demo: - gr.Markdown( - """ - # Dream Cacher - """ - ) - with gr.Row(): - with gr.Column(): - prompt = gr.TextArea( - value="A person surfing", - placeholder="Enter a prompt to dream about", - interactive=True - ) - search = gr.Button(value="Search!") - suggestions = gr.Dataframe( - values=[], - headers=['Similarity', 'Prompt'] - ) - # event listener for change in prompt - prompt.change( - prompt_query, prompt, suggestions, - show_progress=False - ) - - # results column - with gr.Column(): - pics = gr.Gallery() - pics.style(grid=3) - # search event listening - try: - search.click(set_images, prompt, pics) - except OSError: - print("OSError") - -demo.launch() \ No newline at end of file diff --git a/spaces/jbetker/tortoise/tortoise/models/autoregressive.py b/spaces/jbetker/tortoise/tortoise/models/autoregressive.py deleted file mode 100644 index 757a7a8555b3bbc1ca0cff9c38cf0d8699c0c4b7..0000000000000000000000000000000000000000 --- a/spaces/jbetker/tortoise/tortoise/models/autoregressive.py +++ /dev/null @@ -1,511 +0,0 @@ -import functools - -import torch -import torch.nn as nn -import torch.nn.functional as F -from transformers import GPT2Config, GPT2PreTrainedModel, LogitsProcessorList -from transformers.modeling_outputs import CausalLMOutputWithCrossAttentions -from transformers.utils.model_parallel_utils import get_device_map, assert_device_map -from tortoise.models.arch_util import AttentionBlock -from tortoise.utils.typical_sampling import TypicalLogitsWarper - - -def null_position_embeddings(range, dim): - return torch.zeros((range.shape[0], range.shape[1], dim), device=range.device) - - -class ResBlock(nn.Module): - """ - Basic residual convolutional block that uses GroupNorm. 
- """ - def __init__(self, chan): - super().__init__() - self.net = nn.Sequential( - nn.Conv1d(chan, chan, kernel_size=3, padding=1), - nn.GroupNorm(chan//8, chan), - nn.ReLU(), - nn.Conv1d(chan, chan, kernel_size=3, padding=1), - nn.GroupNorm(chan//8, chan) - ) - - def forward(self, x): - return F.relu(self.net(x) + x) - - -class GPT2InferenceModel(GPT2PreTrainedModel): - def __init__(self, config, gpt, text_pos_emb, embeddings, norm, linear): - super().__init__(config) - self.transformer = gpt - self.text_pos_embedding = text_pos_emb - self.embeddings = embeddings - self.lm_head = nn.Sequential(norm, linear) - - # Model parallel - self.model_parallel = False - self.device_map = None - self.cached_mel_emb = None - - def parallelize(self, device_map=None): - self.device_map = ( - get_device_map(len(self.transformer.h), range(torch.cuda.device_count())) - if device_map is None - else device_map - ) - assert_device_map(self.device_map, len(self.transformer.h)) - self.transformer.parallelize(self.device_map) - self.lm_head = self.lm_head.to(self.transformer.first_device) - self.model_parallel = True - - def deparallelize(self): - self.transformer.deparallelize() - self.transformer = self.transformer.to("cpu") - self.lm_head = self.lm_head.to("cpu") - self.model_parallel = False - torch.cuda.empty_cache() - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings): - self.lm_head = new_embeddings - - def store_mel_emb(self, mel_emb): - self.cached_mel_emb = mel_emb - - def prepare_inputs_for_generation(self, input_ids, past=None, **kwargs): - - token_type_ids = kwargs.get("token_type_ids", None) - # only last token for inputs_ids if past is defined in kwargs - if past: - input_ids = input_ids[:, -1].unsqueeze(-1) - if token_type_ids is not None: - token_type_ids = token_type_ids[:, -1].unsqueeze(-1) - - attention_mask = kwargs.get("attention_mask", None) - position_ids = kwargs.get("position_ids", None) - - if attention_mask is not None and position_ids is None: - # create position_ids on the fly for batch generation - position_ids = attention_mask.long().cumsum(-1) - 1 - position_ids.masked_fill_(attention_mask == 0, 1) - if past: - position_ids = position_ids[:, -1].unsqueeze(-1) - else: - position_ids = None - return { - "input_ids": input_ids, - "past_key_values": past, - "use_cache": kwargs.get("use_cache"), - "position_ids": position_ids, - "attention_mask": attention_mask, - "token_type_ids": token_type_ids, - } - - def forward( - self, - input_ids=None, - past_key_values=None, - attention_mask=None, - token_type_ids=None, - position_ids=None, - head_mask=None, - inputs_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - labels=None, - use_cache=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - assert self.cached_mel_emb is not None - assert inputs_embeds is None # Not supported by this inference model. - assert labels is None # Training not supported by this inference model. 
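- # The conditioning/prompt embedding is cached beforehand via store_mel_emb(); on the first call it is
- # prepended to the embeddings of the remaining input tokens, while later cached decoding steps embed
- # only the single newly generated token.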
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # Create embedding - mel_len = self.cached_mel_emb.shape[1] - if input_ids.shape[1] != 1: - text_inputs = input_ids[:, mel_len:] - text_emb = self.embeddings(text_inputs) - text_emb = text_emb + self.text_pos_embedding(text_emb) - if self.cached_mel_emb.shape[0] != text_emb.shape[0]: - mel_emb = self.cached_mel_emb.repeat_interleave(text_emb.shape[0]//self.cached_mel_emb.shape[0], 0) - else: - mel_emb = self.cached_mel_emb - emb = torch.cat([mel_emb, text_emb], dim=1) - else: - emb = self.embeddings(input_ids) - emb = emb + self.text_pos_embedding.get_fixed_embedding(attention_mask.shape[1]-mel_len, attention_mask.device) - - transformer_outputs = self.transformer( - inputs_embeds=emb, - past_key_values=past_key_values, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = transformer_outputs[0] - - # Set device for model parallelism - if self.model_parallel: - torch.cuda.set_device(self.transformer.first_device) - hidden_states = hidden_states.to(self.lm_head.weight.device) - - lm_logits = self.lm_head(hidden_states) - - if not return_dict: - return (lm_logits,) + transformer_outputs[1:] - - return CausalLMOutputWithCrossAttentions( - loss=None, - logits=lm_logits, - past_key_values=transformer_outputs.past_key_values, - hidden_states=transformer_outputs.hidden_states, - attentions=transformer_outputs.attentions, - cross_attentions=transformer_outputs.cross_attentions, - ) - - @staticmethod - def _reorder_cache(past, beam_idx): - """ - This function is used to re-order the :obj:`past_key_values` cache if - :meth:`~transformers.PreTrainedModel.beam_search` or :meth:`~transformers.PreTrainedModel.beam_sample` is - called. This is required to match :obj:`past_key_values` with the correct beam_idx at every generation step. 
- """ - return tuple( - tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past) - for layer_past in past - ) - - -class ConditioningEncoder(nn.Module): - def __init__(self, - spec_dim, - embedding_dim, - attn_blocks=6, - num_attn_heads=4, - do_checkpointing=False, - mean=False): - super().__init__() - attn = [] - self.init = nn.Conv1d(spec_dim, embedding_dim, kernel_size=1) - for a in range(attn_blocks): - attn.append(AttentionBlock(embedding_dim, num_attn_heads)) - self.attn = nn.Sequential(*attn) - self.dim = embedding_dim - self.do_checkpointing = do_checkpointing - self.mean = mean - - def forward(self, x): - h = self.init(x) - h = self.attn(h) - if self.mean: - return h.mean(dim=2) - else: - return h[:, :, 0] - - -class LearnedPositionEmbeddings(nn.Module): - def __init__(self, seq_len, model_dim, init=.02): - super().__init__() - self.emb = nn.Embedding(seq_len, model_dim) - # Initializing this way is standard for GPT-2 - self.emb.weight.data.normal_(mean=0.0, std=init) - - def forward(self, x): - sl = x.shape[1] - return self.emb(torch.arange(0, sl, device=x.device)) - - def get_fixed_embedding(self, ind, dev): - return self.emb(torch.tensor([ind], device=dev)).unsqueeze(0) - - -def build_hf_gpt_transformer(layers, model_dim, heads, max_mel_seq_len, max_text_seq_len, checkpointing): - """ - GPT-2 implemented by the HuggingFace library. - """ - from transformers import GPT2Config, GPT2Model - gpt_config = GPT2Config(vocab_size=256, # Unused. - n_positions=max_mel_seq_len+max_text_seq_len, - n_ctx=max_mel_seq_len+max_text_seq_len, - n_embd=model_dim, - n_layer=layers, - n_head=heads, - gradient_checkpointing=checkpointing, - use_cache=not checkpointing) - gpt = GPT2Model(gpt_config) - # Override the built in positional embeddings - del gpt.wpe - gpt.wpe = functools.partial(null_position_embeddings, dim=model_dim) - # Built-in token embeddings are unused. - del gpt.wte - return gpt, LearnedPositionEmbeddings(max_mel_seq_len, model_dim), LearnedPositionEmbeddings(max_text_seq_len, model_dim),\ - None, None - - -class MelEncoder(nn.Module): - def __init__(self, channels, mel_channels=80, resblocks_per_reduction=2): - super().__init__() - self.channels = channels - self.encoder = nn.Sequential(nn.Conv1d(mel_channels, channels//4, kernel_size=3, padding=1), - nn.Sequential(*[ResBlock(channels//4) for _ in range(resblocks_per_reduction)]), - nn.Conv1d(channels//4, channels//2, kernel_size=3, stride=2, padding=1), - nn.GroupNorm(channels//16, channels//2), - nn.ReLU(), - nn.Sequential(*[ResBlock(channels//2) for _ in range(resblocks_per_reduction)]), - nn.Conv1d(channels//2, channels, kernel_size=3, stride=2, padding=1), - nn.GroupNorm(channels//8, channels), - nn.ReLU(), - nn.Sequential(*[ResBlock(channels) for _ in range(resblocks_per_reduction)]), - ) - self.reduction = 4 - - - def forward(self, x): - for e in self.encoder: - x = e(x) - return x.permute(0,2,1) - - -class UnifiedVoice(nn.Module): - def __init__(self, layers=8, model_dim=512, heads=8, max_text_tokens=120, max_mel_tokens=250, max_conditioning_inputs=1, - mel_length_compression=1024, number_text_tokens=256, - start_text_token=None, number_mel_codes=8194, start_mel_token=8192, - stop_mel_token=8193, train_solo_embeddings=False, use_mel_codes_as_input=True, - checkpointing=True, types=1): - """ - Args: - layers: Number of layers in transformer stack. - model_dim: Operating dimensions of the transformer - heads: Number of transformer heads. Must be divisible by model_dim. 
Recommend model_dim//64 - max_text_tokens: Maximum number of text tokens that will be encountered by model. - max_mel_tokens: Maximum number of MEL tokens that will be encountered by model. - max_conditioning_inputs: Maximum number of conditioning inputs provided to the model. If (1), conditioning input can be of format (b,80,s), otherwise (b,n,80,s). - mel_length_compression: The factor between and . Used to compute MEL code padding given wav input length. - number_text_tokens: - start_text_token: - stop_text_token: - number_mel_codes: - start_mel_token: - stop_mel_token: - train_solo_embeddings: - use_mel_codes_as_input: - checkpointing: - """ - super().__init__() - - self.number_text_tokens = number_text_tokens - self.start_text_token = number_text_tokens * types if start_text_token is None else start_text_token - self.stop_text_token = 0 - self.number_mel_codes = number_mel_codes - self.start_mel_token = start_mel_token - self.stop_mel_token = stop_mel_token - self.layers = layers - self.heads = heads - self.max_mel_tokens = max_mel_tokens - self.max_text_tokens = max_text_tokens - self.model_dim = model_dim - self.max_conditioning_inputs = max_conditioning_inputs - self.mel_length_compression = mel_length_compression - self.conditioning_encoder = ConditioningEncoder(80, model_dim, num_attn_heads=heads) - self.text_embedding = nn.Embedding(self.number_text_tokens*types+1, model_dim) - if use_mel_codes_as_input: - self.mel_embedding = nn.Embedding(self.number_mel_codes, model_dim) - else: - self.mel_embedding = MelEncoder(model_dim, resblocks_per_reduction=1) - self.gpt, self.mel_pos_embedding, self.text_pos_embedding, self.mel_layer_pos_embedding, self.text_layer_pos_embedding = \ - build_hf_gpt_transformer(layers, model_dim, heads, self.max_mel_tokens+2+self.max_conditioning_inputs, self.max_text_tokens+2, checkpointing) - if train_solo_embeddings: - self.mel_solo_embedding = nn.Parameter(torch.randn(1, 1, model_dim) * .02, requires_grad=True) - self.text_solo_embedding = nn.Parameter(torch.randn(1, 1, model_dim) * .02, requires_grad=True) - else: - self.mel_solo_embedding = 0 - self.text_solo_embedding = 0 - - self.final_norm = nn.LayerNorm(model_dim) - self.text_head = nn.Linear(model_dim, self.number_text_tokens*types+1) - self.mel_head = nn.Linear(model_dim, self.number_mel_codes) - - # Initialize the embeddings per the GPT-2 scheme - embeddings = [self.text_embedding] - if use_mel_codes_as_input: - embeddings.append(self.mel_embedding) - for module in embeddings: - module.weight.data.normal_(mean=0.0, std=.02) - - def build_aligned_inputs_and_targets(self, input, start_token, stop_token): - inp = F.pad(input, (1,0), value=start_token) - tar = F.pad(input, (0,1), value=stop_token) - return inp, tar - - def set_mel_padding(self, mel_input_tokens, wav_lengths): - """ - Given mel tokens that are derived from a padded audio clip and the actual lengths of each batch element in - that audio clip, reformats the tokens with STOP_MEL_TOKEN in place of the zero padding. This is required - preformatting to create a working TTS model. - """ - # Set padding areas within MEL (currently it is coded with the MEL code for ). - mel_lengths = torch.div(wav_lengths, self.mel_length_compression, rounding_mode='trunc') - for b in range(len(mel_lengths)): - actual_end = mel_lengths[b] + 1 # Due to the convolutional nature of how these tokens are generated, it would be best if the model predicts a token past the actual last token. 
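- # Everything beyond the true clip length is overwritten with the stop token, replacing the
- # zero-padding codes.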
- if actual_end < mel_input_tokens.shape[-1]: - mel_input_tokens[b, actual_end:] = self.stop_mel_token - return mel_input_tokens - - def get_logits(self, speech_conditioning_inputs, first_inputs, first_head, second_inputs=None, second_head=None, get_attns=False, return_latent=False): - if second_inputs is not None: - emb = torch.cat([speech_conditioning_inputs, first_inputs, second_inputs], dim=1) - else: - emb = torch.cat([speech_conditioning_inputs, first_inputs], dim=1) - - gpt_out = self.gpt(inputs_embeds=emb, return_dict=True, output_attentions=get_attns) - if get_attns: - return gpt_out.attentions - - enc = gpt_out.last_hidden_state[:, 1:] # The first logit is tied to the speech_conditioning_input - enc = self.final_norm(enc) - - if return_latent: - return enc[:, speech_conditioning_inputs.shape[1]:speech_conditioning_inputs.shape[1]+first_inputs.shape[1]], enc[:, -second_inputs.shape[1]:] - - first_logits = enc[:, :first_inputs.shape[1]] - first_logits = first_head(first_logits) - first_logits = first_logits.permute(0,2,1) - if second_inputs is not None: - second_logits = enc[:, -second_inputs.shape[1]:] - second_logits = second_head(second_logits) - second_logits = second_logits.permute(0,2,1) - return first_logits, second_logits - else: - return first_logits - - def get_conditioning(self, speech_conditioning_input): - speech_conditioning_input = speech_conditioning_input.unsqueeze(1) if len( - speech_conditioning_input.shape) == 3 else speech_conditioning_input - conds = [] - for j in range(speech_conditioning_input.shape[1]): - conds.append(self.conditioning_encoder(speech_conditioning_input[:, j])) - conds = torch.stack(conds, dim=1) - conds = conds.mean(dim=1) - return conds - - def forward(self, speech_conditioning_latent, text_inputs, text_lengths, mel_codes, wav_lengths, types=None, text_first=True, raw_mels=None, return_attentions=False, - return_latent=False, clip_inputs=True): - """ - Forward pass that uses both text and voice in either text conditioning mode or voice conditioning mode - (actuated by `text_first`). - - speech_conditioning_input: MEL float tensor, (b,1024) - text_inputs: long tensor, (b,t) - text_lengths: long tensor, (b,) - mel_inputs: long tensor, (b,m) - wav_lengths: long tensor, (b,) - raw_mels: MEL float tensor (b,80,s) - - If return_attentions is specified, only logits are returned. - If return_latent is specified, loss & logits are not computed or returned. Only the predicted latents are returned. - If clip_inputs is True, the inputs will be clipped to the smallest input size across each input modality. - """ - # Types are expressed by expanding the text embedding space. - if types is not None: - text_inputs = text_inputs * (1+types).unsqueeze(-1) - - if clip_inputs: - # This model will receive micro-batches with a ton of padding for both the text and MELs. Ameliorate this by - # chopping the inputs by the maximum actual length. 
- max_text_len = text_lengths.max() - text_inputs = text_inputs[:, :max_text_len] - max_mel_len = wav_lengths.max() // self.mel_length_compression - mel_codes = mel_codes[:, :max_mel_len] - if raw_mels is not None: - raw_mels = raw_mels[:, :, :max_mel_len*4] - mel_codes = self.set_mel_padding(mel_codes, wav_lengths) - text_inputs = F.pad(text_inputs, (0,1), value=self.stop_text_token) - mel_codes = F.pad(mel_codes, (0,1), value=self.stop_mel_token) - - conds = speech_conditioning_latent.unsqueeze(1) - text_inputs, text_targets = self.build_aligned_inputs_and_targets(text_inputs, self.start_text_token, self.stop_text_token) - text_emb = self.text_embedding(text_inputs) + self.text_pos_embedding(text_inputs) - mel_codes, mel_targets = self.build_aligned_inputs_and_targets(mel_codes, self.start_mel_token, self.stop_mel_token) - if raw_mels is not None: - mel_inp = F.pad(raw_mels, (0, 8)) - else: - mel_inp = mel_codes - mel_emb = self.mel_embedding(mel_inp) - mel_emb = mel_emb + self.mel_pos_embedding(mel_codes) - - if text_first: - text_logits, mel_logits = self.get_logits(conds, text_emb, self.text_head, mel_emb, self.mel_head, get_attns=return_attentions, return_latent=return_latent) - if return_latent: - return mel_logits[:, :-2] # Despite the name, these are not logits. Strip off the two tokens added by this forward pass. - else: - mel_logits, text_logits = self.get_logits(conds, mel_emb, self.mel_head, text_emb, self.text_head, get_attns=return_attentions, return_latent=return_latent) - if return_latent: - return text_logits[:, :-2] # Despite the name, these are not logits. Strip off the two tokens added by this forward pass. - - if return_attentions: - return mel_logits - loss_text = F.cross_entropy(text_logits, text_targets.long()) - loss_mel = F.cross_entropy(mel_logits, mel_targets.long()) - return loss_text.mean(), loss_mel.mean(), mel_logits - - def inference_speech(self, speech_conditioning_latent, text_inputs, input_tokens=None, num_return_sequences=1, - max_generate_length=None, typical_sampling=False, typical_mass=.9, **hf_generate_kwargs): - seq_length = self.max_mel_tokens + self.max_text_tokens + 2 - if not hasattr(self, 'inference_model'): - # TODO: Decouple gpt_config from this inference model. 
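- # The wrapper below reuses the trained GPT stack through HuggingFace's GPT2 interface so that
- # generate() (including options such as the typical-sampling logits warper used further down) can
- # drive autoregressive decoding.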
- gpt_config = GPT2Config(vocab_size=self.max_mel_tokens, - n_positions=seq_length, - n_ctx=seq_length, - n_embd=self.model_dim, - n_layer=self.layers, - n_head=self.heads, - gradient_checkpointing=False, - use_cache=True) - self.inference_model = GPT2InferenceModel(gpt_config, self.gpt, self.mel_pos_embedding, self.mel_embedding, self.final_norm, self.mel_head) - self.gpt.wte = self.mel_embedding - - text_inputs = F.pad(text_inputs, (0, 1), value=self.stop_text_token) - text_inputs, text_targets = self.build_aligned_inputs_and_targets(text_inputs, self.start_text_token, self.stop_text_token) - text_emb = self.text_embedding(text_inputs) + self.text_pos_embedding(text_inputs) - - conds = speech_conditioning_latent.unsqueeze(1) - emb = torch.cat([conds, text_emb], dim=1) - self.inference_model.store_mel_emb(emb) - - fake_inputs = torch.full((emb.shape[0], conds.shape[1] + emb.shape[1],), fill_value=1, dtype=torch.long, - device=text_inputs.device) - fake_inputs[:, -1] = self.start_mel_token - trunc_index = fake_inputs.shape[1] - if input_tokens is None: - inputs = fake_inputs - else: - assert num_return_sequences % input_tokens.shape[0] == 0, "The number of return sequences must be divisible by the number of input sequences" - fake_inputs = fake_inputs.repeat(num_return_sequences, 1) - input_tokens = input_tokens.repeat(num_return_sequences // input_tokens.shape[0], 1) - inputs = torch.cat([fake_inputs, input_tokens], dim=1) - - logits_processor = LogitsProcessorList([TypicalLogitsWarper(mass=typical_mass)]) if typical_sampling else LogitsProcessorList() - max_length = trunc_index + self.max_mel_tokens - 1 if max_generate_length is None else trunc_index + max_generate_length - gen = self.inference_model.generate(inputs, bos_token_id=self.start_mel_token, pad_token_id=self.stop_mel_token, eos_token_id=self.stop_mel_token, - max_length=max_length, logits_processor=logits_processor, - num_return_sequences=num_return_sequences, **hf_generate_kwargs) - return gen[:, trunc_index:] - - -if __name__ == '__main__': - gpt = UnifiedVoice(model_dim=256, heads=4, train_solo_embeddings=True, use_mel_codes_as_input=True, max_conditioning_inputs=4) - l = gpt(torch.randn(2, 3, 80, 800), - torch.randint(high=120, size=(2,120)), - torch.tensor([32, 120]), - torch.randint(high=8192, size=(2,250)), - torch.tensor([250*256,195*256])) - gpt.text_forward(torch.randn(2,80,800), torch.randint(high=50, size=(2,80)), torch.tensor([32, 80])) diff --git a/spaces/jbondy007/Video_Search_CLIP/README.md b/spaces/jbondy007/Video_Search_CLIP/README.md deleted file mode 100644 index 8cba47dcbb4971bbfc4861be0e5a2adb8ae9e388..0000000000000000000000000000000000000000 --- a/spaces/jbondy007/Video_Search_CLIP/README.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: Video_Search_CLIP -emoji: 📚 -colorFrom: green -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false -duplicated_from: akhaliq/Video_Search_CLIP ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. 
- -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/modeling/clip_adapter/text_template.py b/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/modeling/clip_adapter/text_template.py deleted file mode 100644 index 1dd085f9435650bbd982c81a1cf0d9899ce7feb2..0000000000000000000000000000000000000000 --- a/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/modeling/clip_adapter/text_template.py +++ /dev/null @@ -1,155 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. All Rights Reserved -# Modified by Feng Liang from -# https://github.com/MendelXu/zsseg.baseline/blob/master/mask_former/modeling/clip_adapter/text_prompt.py -# https://github.com/MendelXu/zsseg.baseline/blob/master/mask_former/modeling/clip_adapter/utils.py - -from typing import List - -import clip -import torch -from torch import nn - -IMAGENET_PROMPT = [ - "a bad photo of a {}.", - "a photo of many {}.", - "a sculpture of a {}.", - "a photo of the hard to see {}.", - "a low resolution photo of the {}.", - "a rendering of a {}.", - "graffiti of a {}.", - "a bad photo of the {}.", - "a cropped photo of the {}.", - "a tattoo of a {}.", - "the embroidered {}.", - "a photo of a hard to see {}.", - "a bright photo of a {}.", - "a photo of a clean {}.", - "a photo of a dirty {}.", - "a dark photo of the {}.", - "a drawing of a {}.", - "a photo of my {}.", - "the plastic {}.", - "a photo of the cool {}.", - "a close-up photo of a {}.", - "a black and white photo of the {}.", - "a painting of the {}.", - "a painting of a {}.", - "a pixelated photo of the {}.", - "a sculpture of the {}.", - "a bright photo of the {}.", - "a cropped photo of a {}.", - "a plastic {}.", - "a photo of the dirty {}.", - "a jpeg corrupted photo of a {}.", - "a blurry photo of the {}.", - "a photo of the {}.", - "a good photo of the {}.", - "a rendering of the {}.", - "a {} in a video game.", - "a photo of one {}.", - "a doodle of a {}.", - "a close-up photo of the {}.", - "a photo of a {}.", - "the origami {}.", - "the {} in a video game.", - "a sketch of a {}.", - "a doodle of the {}.", - "a origami {}.", - "a low resolution photo of a {}.", - "the toy {}.", - "a rendition of the {}.", - "a photo of the clean {}.", - "a photo of a large {}.", - "a rendition of a {}.", - "a photo of a nice {}.", - "a photo of a weird {}.", - "a blurry photo of a {}.", - "a cartoon {}.", - "art of a {}.", - "a sketch of the {}.", - "a embroidered {}.", - "a pixelated photo of a {}.", - "itap of the {}.", - "a jpeg corrupted photo of the {}.", - "a good photo of a {}.", - "a plushie {}.", - "a photo of the nice {}.", - "a photo of the small {}.", - "a photo of the weird {}.", - "the cartoon {}.", - "art of the {}.", - "a drawing of the {}.", - "a photo of the large {}.", - "a black and white photo of a {}.", - "the plushie {}.", - "a dark photo of a {}.", - "itap of a {}.", - "graffiti of the {}.", - "a toy {}.", - "itap of my {}.", - "a photo of a cool {}.", - "a photo of a small {}.", - "a tattoo of the {}.", -] - -VILD_PROMPT = [ - "a photo of a {}.", - "This is a photo of a {}", - "There is a {} in the scene", - "There is the {} in the scene", - "a photo of a {} in the scene", - "a photo of a small {}.", - "a photo of a medium {}.", - "a photo of a large {}.", - "This is a photo of a small {}.", - 
"This is a photo of a medium {}.", - "This is a photo of a large {}.", - "There is a small {} in the scene.", - "There is a medium {} in the scene.", - "There is a large {} in the scene.", -] - -class PromptExtractor(nn.Module): - def __init__(self): - super().__init__() - self._buffer_init = False - - def init_buffer(self, clip_model): - self._buffer_init = True - - def forward(self, noun_list: List[str], clip_model: nn.Module): - raise NotImplementedError() - - -class PredefinedPromptExtractor(PromptExtractor): - def __init__(self, templates: List[str]): - super().__init__() - self.templates = templates - - def forward(self, noun_list: List[str], clip_model: nn.Module): - text_features_bucket = [] - for template in self.templates: - noun_tokens = [clip.tokenize(template.format(noun)) for noun in noun_list] - text_inputs = torch.cat(noun_tokens).to( - clip_model.text_projection.data.device - ) - text_features = clip_model.encode_text(text_inputs) - text_features /= text_features.norm(dim=-1, keepdim=True) - text_features_bucket.append(text_features) - del text_inputs - # ensemble by averaging - text_features = torch.stack(text_features_bucket).mean(dim=0) - text_features = text_features / text_features.norm(dim=-1, keepdim=True) - - return text_features - - -class ImageNetPromptExtractor(PredefinedPromptExtractor): - def __init__(self): - super().__init__(IMAGENET_PROMPT) - - -class VILDPromptExtractor(PredefinedPromptExtractor): - def __init__(self): - super().__init__(VILD_PROMPT) diff --git a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/data/masks.py b/spaces/jgurzoni/image_background_swapper/saicinpainting/training/data/masks.py deleted file mode 100644 index e91fc74913356481065c5f5906acd50fb05f521c..0000000000000000000000000000000000000000 --- a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/data/masks.py +++ /dev/null @@ -1,332 +0,0 @@ -import math -import random -import hashlib -import logging -from enum import Enum - -import cv2 -import numpy as np - -from saicinpainting.evaluation.masks.mask import SegmentationMask -from saicinpainting.utils import LinearRamp - -LOGGER = logging.getLogger(__name__) - - -class DrawMethod(Enum): - LINE = 'line' - CIRCLE = 'circle' - SQUARE = 'square' - - -def make_random_irregular_mask(shape, max_angle=4, max_len=60, max_width=20, min_times=0, max_times=10, - draw_method=DrawMethod.LINE): - draw_method = DrawMethod(draw_method) - - height, width = shape - mask = np.zeros((height, width), np.float32) - times = np.random.randint(min_times, max_times + 1) - for i in range(times): - start_x = np.random.randint(width) - start_y = np.random.randint(height) - for j in range(1 + np.random.randint(5)): - angle = 0.01 + np.random.randint(max_angle) - if i % 2 == 0: - angle = 2 * 3.1415926 - angle - length = 10 + np.random.randint(max_len) - brush_w = 5 + np.random.randint(max_width) - end_x = np.clip((start_x + length * np.sin(angle)).astype(np.int32), 0, width) - end_y = np.clip((start_y + length * np.cos(angle)).astype(np.int32), 0, height) - if draw_method == DrawMethod.LINE: - cv2.line(mask, (start_x, start_y), (end_x, end_y), 1.0, brush_w) - elif draw_method == DrawMethod.CIRCLE: - cv2.circle(mask, (start_x, start_y), radius=brush_w, color=1., thickness=-1) - elif draw_method == DrawMethod.SQUARE: - radius = brush_w // 2 - mask[start_y - radius:start_y + radius, start_x - radius:start_x + radius] = 1 - start_x, start_y = end_x, end_y - return mask[None, ...] 
- - -class RandomIrregularMaskGenerator: - def __init__(self, max_angle=4, max_len=60, max_width=20, min_times=0, max_times=10, ramp_kwargs=None, - draw_method=DrawMethod.LINE): - self.max_angle = max_angle - self.max_len = max_len - self.max_width = max_width - self.min_times = min_times - self.max_times = max_times - self.draw_method = draw_method - self.ramp = LinearRamp(**ramp_kwargs) if ramp_kwargs is not None else None - - def __call__(self, img, iter_i=None, raw_image=None): - coef = self.ramp(iter_i) if (self.ramp is not None) and (iter_i is not None) else 1 - cur_max_len = int(max(1, self.max_len * coef)) - cur_max_width = int(max(1, self.max_width * coef)) - cur_max_times = int(self.min_times + 1 + (self.max_times - self.min_times) * coef) - return make_random_irregular_mask(img.shape[1:], max_angle=self.max_angle, max_len=cur_max_len, - max_width=cur_max_width, min_times=self.min_times, max_times=cur_max_times, - draw_method=self.draw_method) - - -def make_random_rectangle_mask(shape, margin=10, bbox_min_size=30, bbox_max_size=100, min_times=0, max_times=3): - height, width = shape - mask = np.zeros((height, width), np.float32) - bbox_max_size = min(bbox_max_size, height - margin * 2, width - margin * 2) - times = np.random.randint(min_times, max_times + 1) - for i in range(times): - box_width = np.random.randint(bbox_min_size, bbox_max_size) - box_height = np.random.randint(bbox_min_size, bbox_max_size) - start_x = np.random.randint(margin, width - margin - box_width + 1) - start_y = np.random.randint(margin, height - margin - box_height + 1) - mask[start_y:start_y + box_height, start_x:start_x + box_width] = 1 - return mask[None, ...] - - -class RandomRectangleMaskGenerator: - def __init__(self, margin=10, bbox_min_size=30, bbox_max_size=100, min_times=0, max_times=3, ramp_kwargs=None): - self.margin = margin - self.bbox_min_size = bbox_min_size - self.bbox_max_size = bbox_max_size - self.min_times = min_times - self.max_times = max_times - self.ramp = LinearRamp(**ramp_kwargs) if ramp_kwargs is not None else None - - def __call__(self, img, iter_i=None, raw_image=None): - coef = self.ramp(iter_i) if (self.ramp is not None) and (iter_i is not None) else 1 - cur_bbox_max_size = int(self.bbox_min_size + 1 + (self.bbox_max_size - self.bbox_min_size) * coef) - cur_max_times = int(self.min_times + (self.max_times - self.min_times) * coef) - return make_random_rectangle_mask(img.shape[1:], margin=self.margin, bbox_min_size=self.bbox_min_size, - bbox_max_size=cur_bbox_max_size, min_times=self.min_times, - max_times=cur_max_times) - - -class RandomSegmentationMaskGenerator: - def __init__(self, **kwargs): - self.impl = None # will be instantiated in first call (effectively in subprocess) - self.kwargs = kwargs - - def __call__(self, img, iter_i=None, raw_image=None): - if self.impl is None: - self.impl = SegmentationMask(**self.kwargs) - - masks = self.impl.get_masks(np.transpose(img, (1, 2, 0))) - masks = [m for m in masks if len(np.unique(m)) > 1] - return np.random.choice(masks) - - -def make_random_superres_mask(shape, min_step=2, max_step=4, min_width=1, max_width=3): - height, width = shape - mask = np.zeros((height, width), np.float32) - step_x = np.random.randint(min_step, max_step + 1) - width_x = np.random.randint(min_width, min(step_x, max_width + 1)) - offset_x = np.random.randint(0, step_x) - - step_y = np.random.randint(min_step, max_step + 1) - width_y = np.random.randint(min_width, min(step_y, max_width + 1)) - offset_y = np.random.randint(0, step_y) - - for dy in 
range(width_y): - mask[offset_y + dy::step_y] = 1 - for dx in range(width_x): - mask[:, offset_x + dx::step_x] = 1 - return mask[None, ...] - - -class RandomSuperresMaskGenerator: - def __init__(self, **kwargs): - self.kwargs = kwargs - - def __call__(self, img, iter_i=None): - return make_random_superres_mask(img.shape[1:], **self.kwargs) - - -class DumbAreaMaskGenerator: - min_ratio = 0.1 - max_ratio = 0.35 - default_ratio = 0.225 - - def __init__(self, is_training): - #Parameters: - # is_training(bool): If true - random rectangular mask, if false - central square mask - self.is_training = is_training - - def _random_vector(self, dimension): - if self.is_training: - lower_limit = math.sqrt(self.min_ratio) - upper_limit = math.sqrt(self.max_ratio) - mask_side = round((random.random() * (upper_limit - lower_limit) + lower_limit) * dimension) - u = random.randint(0, dimension-mask_side-1) - v = u+mask_side - else: - margin = (math.sqrt(self.default_ratio) / 2) * dimension - u = round(dimension/2 - margin) - v = round(dimension/2 + margin) - return u, v - - def __call__(self, img, iter_i=None, raw_image=None): - c, height, width = img.shape - mask = np.zeros((height, width), np.float32) - x1, x2 = self._random_vector(width) - y1, y2 = self._random_vector(height) - mask[x1:x2, y1:y2] = 1 - return mask[None, ...] - - -class OutpaintingMaskGenerator: - def __init__(self, min_padding_percent:float=0.04, max_padding_percent:int=0.25, left_padding_prob:float=0.5, top_padding_prob:float=0.5, - right_padding_prob:float=0.5, bottom_padding_prob:float=0.5, is_fixed_randomness:bool=False): - """ - is_fixed_randomness - get identical paddings for the same image if args are the same - """ - self.min_padding_percent = min_padding_percent - self.max_padding_percent = max_padding_percent - self.probs = [left_padding_prob, top_padding_prob, right_padding_prob, bottom_padding_prob] - self.is_fixed_randomness = is_fixed_randomness - - assert self.min_padding_percent <= self.max_padding_percent - assert self.max_padding_percent > 0 - assert len([x for x in [self.min_padding_percent, self.max_padding_percent] if (x>=0 and x<=1)]) == 2, f"Padding percentage should be in [0,1]" - assert sum(self.probs) > 0, f"At least one of the padding probs should be greater than 0 - {self.probs}" - assert len([x for x in self.probs if (x >= 0) and (x <= 1)]) == 4, f"At least one of padding probs is not in [0,1] - {self.probs}" - if len([x for x in self.probs if x > 0]) == 1: - LOGGER.warning(f"Only one padding prob is greater than zero - {self.probs}. 
That means that the outpainting masks will be always on the same side") - - def apply_padding(self, mask, coord): - mask[int(coord[0][0]*self.img_h):int(coord[1][0]*self.img_h), - int(coord[0][1]*self.img_w):int(coord[1][1]*self.img_w)] = 1 - return mask - - def get_padding(self, size): - n1 = int(self.min_padding_percent*size) - n2 = int(self.max_padding_percent*size) - return self.rnd.randint(n1, n2) / size - - @staticmethod - def _img2rs(img): - arr = np.ascontiguousarray(img.astype(np.uint8)) - str_hash = hashlib.sha1(arr).hexdigest() - res = hash(str_hash)%(2**32) - return res - - def __call__(self, img, iter_i=None, raw_image=None): - c, self.img_h, self.img_w = img.shape - mask = np.zeros((self.img_h, self.img_w), np.float32) - at_least_one_mask_applied = False - - if self.is_fixed_randomness: - assert raw_image is not None, f"Cant calculate hash on raw_image=None" - rs = self._img2rs(raw_image) - self.rnd = np.random.RandomState(rs) - else: - self.rnd = np.random - - coords = [[ - (0,0), - (1,self.get_padding(size=self.img_h)) - ], - [ - (0,0), - (self.get_padding(size=self.img_w),1) - ], - [ - (0,1-self.get_padding(size=self.img_h)), - (1,1) - ], - [ - (1-self.get_padding(size=self.img_w),0), - (1,1) - ]] - - for pp, coord in zip(self.probs, coords): - if self.rnd.random() < pp: - at_least_one_mask_applied = True - mask = self.apply_padding(mask=mask, coord=coord) - - if not at_least_one_mask_applied: - idx = self.rnd.choice(range(len(coords)), p=np.array(self.probs)/sum(self.probs)) - mask = self.apply_padding(mask=mask, coord=coords[idx]) - return mask[None, ...] - - -class MixedMaskGenerator: - def __init__(self, irregular_proba=1/3, irregular_kwargs=None, - box_proba=1/3, box_kwargs=None, - segm_proba=1/3, segm_kwargs=None, - squares_proba=0, squares_kwargs=None, - superres_proba=0, superres_kwargs=None, - outpainting_proba=0, outpainting_kwargs=None, - invert_proba=0): - self.probas = [] - self.gens = [] - - if irregular_proba > 0: - self.probas.append(irregular_proba) - if irregular_kwargs is None: - irregular_kwargs = {} - else: - irregular_kwargs = dict(irregular_kwargs) - irregular_kwargs['draw_method'] = DrawMethod.LINE - self.gens.append(RandomIrregularMaskGenerator(**irregular_kwargs)) - - if box_proba > 0: - self.probas.append(box_proba) - if box_kwargs is None: - box_kwargs = {} - self.gens.append(RandomRectangleMaskGenerator(**box_kwargs)) - - if segm_proba > 0: - self.probas.append(segm_proba) - if segm_kwargs is None: - segm_kwargs = {} - self.gens.append(RandomSegmentationMaskGenerator(**segm_kwargs)) - - if squares_proba > 0: - self.probas.append(squares_proba) - if squares_kwargs is None: - squares_kwargs = {} - else: - squares_kwargs = dict(squares_kwargs) - squares_kwargs['draw_method'] = DrawMethod.SQUARE - self.gens.append(RandomIrregularMaskGenerator(**squares_kwargs)) - - if superres_proba > 0: - self.probas.append(superres_proba) - if superres_kwargs is None: - superres_kwargs = {} - self.gens.append(RandomSuperresMaskGenerator(**superres_kwargs)) - - if outpainting_proba > 0: - self.probas.append(outpainting_proba) - if outpainting_kwargs is None: - outpainting_kwargs = {} - self.gens.append(OutpaintingMaskGenerator(**outpainting_kwargs)) - - self.probas = np.array(self.probas, dtype='float32') - self.probas /= self.probas.sum() - self.invert_proba = invert_proba - - def __call__(self, img, iter_i=None, raw_image=None): - kind = np.random.choice(len(self.probas), p=self.probas) - gen = self.gens[kind] - result = gen(img, iter_i=iter_i, 
raw_image=raw_image) - if self.invert_proba > 0 and random.random() < self.invert_proba: - result = 1 - result - return result - - -def get_mask_generator(kind, kwargs): - if kind is None: - kind = "mixed" - if kwargs is None: - kwargs = {} - - if kind == "mixed": - cl = MixedMaskGenerator - elif kind == "outpainting": - cl = OutpaintingMaskGenerator - elif kind == "dumb": - cl = DumbAreaMaskGenerator - else: - raise NotImplementedError(f"No such generator kind = {kind}") - return cl(**kwargs) diff --git a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/visualizers/__init__.py b/spaces/jgurzoni/image_background_swapper/saicinpainting/training/visualizers/__init__.py deleted file mode 100644 index 4770d1f15a6790ab9606c7b9881f798c8e2d9545..0000000000000000000000000000000000000000 --- a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/visualizers/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -import logging - -from saicinpainting.training.visualizers.directory import DirectoryVisualizer -from saicinpainting.training.visualizers.noop import NoopVisualizer - - -def make_visualizer(kind, **kwargs): - logging.info(f'Make visualizer {kind}') - - if kind == 'directory': - return DirectoryVisualizer(**kwargs) - if kind == 'noop': - return NoopVisualizer() - - raise ValueError(f'Unknown visualizer kind {kind}') diff --git a/spaces/jitubutwal1441/image-to-story/README.md b/spaces/jitubutwal1441/image-to-story/README.md deleted file mode 100644 index c3e119f4250e43a2728b6e3168ad87d484efbed9..0000000000000000000000000000000000000000 --- a/spaces/jitubutwal1441/image-to-story/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Image To Story -emoji: 🦀 -colorFrom: green -colorTo: red -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jmesikto/whisper-webui/src/vad.py b/spaces/jmesikto/whisper-webui/src/vad.py deleted file mode 100644 index 9b5ae606a9efdcc34dada47d0613bb8194d2f269..0000000000000000000000000000000000000000 --- a/spaces/jmesikto/whisper-webui/src/vad.py +++ /dev/null @@ -1,560 +0,0 @@ -from abc import ABC, abstractmethod -from collections import Counter, deque -import time - -from typing import Any, Deque, Iterator, List, Dict - -from pprint import pprint -from src.hooks.progressListener import ProgressListener -from src.hooks.subTaskProgressListener import SubTaskProgressListener -from src.hooks.whisperProgressHook import create_progress_listener_handle -from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache - -from src.segments import merge_timestamps -from src.whisper.abstractWhisperContainer import AbstractWhisperCallback - -# Workaround for https://github.com/tensorflow/tensorflow/issues/48797 -try: - import tensorflow as tf -except ModuleNotFoundError: - # Error handling - pass - -import torch - -import ffmpeg -import numpy as np - -from src.utils import format_timestamp -from enum import Enum - -class NonSpeechStrategy(Enum): - """ - Ignore non-speech frames segments. - """ - SKIP = 1 - """ - Just treat non-speech segments as speech. - """ - CREATE_SEGMENT = 2 - """ - Expand speech segments into subsequent non-speech segments. 
- """ - EXPAND_SEGMENT = 3 - -# Defaults for Silero -SPEECH_TRESHOLD = 0.3 - -# Minimum size of segments to process -MIN_SEGMENT_DURATION = 1 - -# The maximum time for texts from old segments to be used in the next segment -MAX_PROMPT_WINDOW = 0 # seconds (0 = disabled) -PROMPT_NO_SPEECH_PROB = 0.1 # Do not pass the text from segments with a no speech probability higher than this - -VAD_MAX_PROCESSING_CHUNK = 60 * 60 # 60 minutes of audio - -class TranscriptionConfig(ABC): - def __init__(self, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP, - segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None, - max_merge_size: float = None, max_prompt_window: float = None, initial_segment_index = -1): - self.non_speech_strategy = non_speech_strategy - self.segment_padding_left = segment_padding_left - self.segment_padding_right = segment_padding_right - self.max_silent_period = max_silent_period - self.max_merge_size = max_merge_size - self.max_prompt_window = max_prompt_window - self.initial_segment_index = initial_segment_index - -class PeriodicTranscriptionConfig(TranscriptionConfig): - def __init__(self, periodic_duration: float, non_speech_strategy: NonSpeechStrategy = NonSpeechStrategy.SKIP, - segment_padding_left: float = None, segment_padding_right = None, max_silent_period: float = None, - max_merge_size: float = None, max_prompt_window: float = None, initial_segment_index = -1): - super().__init__(non_speech_strategy, segment_padding_left, segment_padding_right, max_silent_period, max_merge_size, max_prompt_window, initial_segment_index) - self.periodic_duration = periodic_duration - -class AbstractTranscription(ABC): - def __init__(self, sampling_rate: int = 16000): - self.sampling_rate = sampling_rate - - def get_audio_segment(self, str, start_time: str = None, duration: str = None): - return load_audio(str, self.sampling_rate, start_time, duration) - - def is_transcribe_timestamps_fast(self): - """ - Determine if get_transcribe_timestamps is fast enough to not need parallelization. - """ - return False - - @abstractmethod - def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, end_time: float): - """ - Get the start and end timestamps of the sections that should be transcribed by this VAD method. - - Parameters - ---------- - audio: str - The audio file. - config: TranscriptionConfig - The transcription configuration. - - Returns - ------- - A list of start and end timestamps, in fractional seconds. - """ - return - - def get_merged_timestamps(self, timestamps: List[Dict[str, Any]], config: TranscriptionConfig, total_duration: float): - """ - Get the start and end timestamps of the sections that should be transcribed by this VAD method, - after merging the given segments using the specified configuration. - - Parameters - ---------- - audio: str - The audio file. - config: TranscriptionConfig - The transcription configuration. - - Returns - ------- - A list of start and end timestamps, in fractional seconds. 
-        """
-        merged = merge_timestamps(timestamps, config.max_silent_period, config.max_merge_size,
-                                  config.segment_padding_left, config.segment_padding_right)
-
-        if config.non_speech_strategy != NonSpeechStrategy.SKIP:
-            # Expand segments to include the gaps between them
-            if (config.non_speech_strategy == NonSpeechStrategy.CREATE_SEGMENT):
-                # When we have a prompt window, we create speech segments between each segment if we exceed the merge size
-                merged = self.fill_gaps(merged, total_duration=total_duration, max_expand_size=config.max_merge_size)
-            elif config.non_speech_strategy == NonSpeechStrategy.EXPAND_SEGMENT:
-                # With no prompt window, it is better to just expand the segments (this effectively passes the prompt to the next segment)
-                merged = self.expand_gaps(merged, total_duration=total_duration)
-            else:
-                raise Exception("Unknown non-speech strategy: " + str(config.non_speech_strategy))
-
-            print("Transcribing non-speech:")
-            pprint(merged)
-        return merged
-
-    def transcribe(self, audio: str, whisperCallable: AbstractWhisperCallback, config: TranscriptionConfig,
-                   progressListener: ProgressListener = None):
-        """
-        Transcribe the given audio file.
-
-        Parameters
-        ----------
-        audio: str
-            The audio file.
-        whisperCallable: WhisperCallback
-            A callback object to call to transcribe each segment.
-
-        Returns
-        -------
-        A list of start and end timestamps, in fractional seconds.
-        """
-
-        try:
-            max_audio_duration = self.get_audio_duration(audio, config)
-            timestamp_segments = self.get_transcribe_timestamps(audio, config, 0, max_audio_duration)
-
-            # Get speech timestamps from full audio file
-            merged = self.get_merged_timestamps(timestamp_segments, config, max_audio_duration)
-
-            # A deque of transcribed segments that is passed to the next segment as a prompt
-            prompt_window = deque()
-
-            print("Processing timestamps:")
-            pprint(merged)
-
-            result = {
-                'text': "",
-                'segments': [],
-                'language': ""
-            }
-            languageCounter = Counter()
-            detected_language = None
-
-            segment_index = config.initial_segment_index
-
-            # Calculate progress
-            progress_start_offset = merged[0]['start'] if len(merged) > 0 else 0
-            progress_total_duration = sum([segment['end'] - segment['start'] for segment in merged])
-
-            # For each time segment, run whisper
-            for segment in merged:
-                segment_index += 1
-                segment_start = segment['start']
-                segment_end = segment['end']
-                segment_expand_amount = segment.get('expand_amount', 0)
-                segment_gap = segment.get('gap', False)
-
-                segment_duration = segment_end - segment_start
-
-                if segment_duration < MIN_SEGMENT_DURATION:
-                    continue
-
-                # Audio to run on Whisper
-                segment_audio = self.get_audio_segment(audio, start_time = str(segment_start), duration = str(segment_duration))
-                # Previous segments to use as a prompt
-                segment_prompt = ' '.join([segment['text'] for segment in prompt_window]) if len(prompt_window) > 0 else None
-
-                # Detected language
-                detected_language = languageCounter.most_common(1)[0][0] if len(languageCounter) > 0 else None
-
-                print("Running whisper from ", format_timestamp(segment_start), " to ", format_timestamp(segment_end), ", duration: ",
-                      segment_duration, "expanded: ", segment_expand_amount, "prompt: ", segment_prompt, "language: ", detected_language)
-
-                perf_start_time = time.perf_counter()
-
-                scaled_progress_listener = SubTaskProgressListener(progressListener, base_task_total=progress_total_duration,
-                                                                   sub_task_start=segment_start - progress_start_offset, sub_task_total=segment_duration)
-                segment_result =
whisperCallable.invoke(segment_audio, segment_index, segment_prompt, detected_language, progress_listener=scaled_progress_listener) - - perf_end_time = time.perf_counter() - print("Whisper took {} seconds".format(perf_end_time - perf_start_time)) - - adjusted_segments = self.adjust_timestamp(segment_result["segments"], adjust_seconds=segment_start, max_source_time=segment_duration) - - # Propagate expand amount to the segments - if (segment_expand_amount > 0): - segment_without_expansion = segment_duration - segment_expand_amount - - for adjusted_segment in adjusted_segments: - adjusted_segment_end = adjusted_segment['end'] - - # Add expand amount if the segment got expanded - if (adjusted_segment_end > segment_without_expansion): - adjusted_segment["expand_amount"] = adjusted_segment_end - segment_without_expansion - - # Append to output - result['text'] += segment_result['text'] - result['segments'].extend(adjusted_segments) - - # Increment detected language - if not segment_gap: - languageCounter[segment_result['language']] += 1 - - # Update prompt window - self.__update_prompt_window(prompt_window, adjusted_segments, segment_end, segment_gap, config) - - if detected_language is not None: - result['language'] = detected_language - finally: - # Notify progress listener that we are done - if progressListener is not None: - progressListener.on_finished() - return result - - def get_audio_duration(self, audio: str, config: TranscriptionConfig): - return get_audio_duration(audio) - - def __update_prompt_window(self, prompt_window: Deque, adjusted_segments: List, segment_end: float, segment_gap: bool, config: TranscriptionConfig): - if (config.max_prompt_window is not None and config.max_prompt_window > 0): - # Add segments to the current prompt window (unless it is a speech gap) - if not segment_gap: - for segment in adjusted_segments: - if segment.get('no_speech_prob', 0) <= PROMPT_NO_SPEECH_PROB: - prompt_window.append(segment) - - while (len(prompt_window) > 0): - first_end_time = prompt_window[0].get('end', 0) - # Time expanded in the segments should be discounted from the prompt window - first_expand_time = prompt_window[0].get('expand_amount', 0) - - if (first_end_time - first_expand_time < segment_end - config.max_prompt_window): - prompt_window.popleft() - else: - break - - def include_gaps(self, segments: Iterator[dict], min_gap_length: float, total_duration: float): - result = [] - last_end_time = 0 - - for segment in segments: - segment_start = float(segment['start']) - segment_end = float(segment['end']) - - if (last_end_time != segment_start): - delta = segment_start - last_end_time - - if (min_gap_length is None or delta >= min_gap_length): - result.append( { 'start': last_end_time, 'end': segment_start, 'gap': True } ) - - last_end_time = segment_end - result.append(segment) - - # Also include total duration if specified - if (total_duration is not None and last_end_time < total_duration): - delta = total_duration - segment_start - - if (min_gap_length is None or delta >= min_gap_length): - result.append( { 'start': last_end_time, 'end': total_duration, 'gap': True } ) - - return result - - # Expand the end time of each segment to the start of the next segment - def expand_gaps(self, segments: List[Dict[str, Any]], total_duration: float): - result = [] - - if len(segments) == 0: - return result - - # Add gap at the beginning if needed - if (segments[0]['start'] > 0): - result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } ) - - for i in range(len(segments) - 
1): - current_segment = segments[i] - next_segment = segments[i + 1] - - delta = next_segment['start'] - current_segment['end'] - - # Expand if the gap actually exists - if (delta >= 0): - current_segment = current_segment.copy() - current_segment['expand_amount'] = delta - current_segment['end'] = next_segment['start'] - - result.append(current_segment) - - # Add last segment - last_segment = segments[-1] - result.append(last_segment) - - # Also include total duration if specified - if (total_duration is not None): - last_segment = result[-1] - - if (last_segment['end'] < total_duration): - last_segment = last_segment.copy() - last_segment['end'] = total_duration - result[-1] = last_segment - - return result - - def fill_gaps(self, segments: List[Dict[str, Any]], total_duration: float, max_expand_size: float = None): - result = [] - - if len(segments) == 0: - return result - - # Add gap at the beginning if needed - if (segments[0]['start'] > 0): - result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } ) - - for i in range(len(segments) - 1): - expanded = False - current_segment = segments[i] - next_segment = segments[i + 1] - - delta = next_segment['start'] - current_segment['end'] - - if (max_expand_size is not None and delta <= max_expand_size): - # Just expand the current segment - current_segment = current_segment.copy() - current_segment['expand_amount'] = delta - current_segment['end'] = next_segment['start'] - expanded = True - - result.append(current_segment) - - # Add a gap to the next segment if needed - if (delta >= 0 and not expanded): - result.append({ 'start': current_segment['end'], 'end': next_segment['start'], 'gap': True } ) - - # Add last segment - last_segment = segments[-1] - result.append(last_segment) - - # Also include total duration if specified - if (total_duration is not None): - last_segment = result[-1] - - delta = total_duration - last_segment['end'] - - if (delta > 0): - if (max_expand_size is not None and delta <= max_expand_size): - # Expand the last segment - last_segment = last_segment.copy() - last_segment['expand_amount'] = delta - last_segment['end'] = total_duration - result[-1] = last_segment - else: - result.append({ 'start': last_segment['end'], 'end': total_duration, 'gap': True } ) - - return result - - def adjust_timestamp(self, segments: Iterator[dict], adjust_seconds: float, max_source_time: float = None): - result = [] - - for segment in segments: - segment_start = float(segment['start']) - segment_end = float(segment['end']) - - # Filter segments? 
-            if (max_source_time is not None):
-                if (segment_start > max_source_time):
-                    continue
-                segment_end = min(max_source_time, segment_end)
-
-            new_segment = segment.copy()
-
-            # Add to start and end
-            new_segment['start'] = segment_start + adjust_seconds
-            new_segment['end'] = segment_end + adjust_seconds
-            result.append(new_segment)
-        return result
-
-    def multiply_timestamps(self, timestamps: List[Dict[str, Any]], factor: float):
-        result = []
-
-        for entry in timestamps:
-            start = entry['start']
-            end = entry['end']
-
-            result.append({
-                'start': start * factor,
-                'end': end * factor
-            })
-        return result
-
-
-class VadSileroTranscription(AbstractTranscription):
-    def __init__(self, sampling_rate: int = 16000, cache: ModelCache = None):
-        super().__init__(sampling_rate=sampling_rate)
-        self.model = None
-        self.cache = cache
-        self._initialize_model()
-
-    def _initialize_model(self):
-        if (self.cache is not None):
-            model_key = "VadSileroTranscription"
-            self.model, self.get_speech_timestamps = self.cache.get(model_key, self._create_model)
-            print("Loaded Silero model from cache.")
-        else:
-            self.model, self.get_speech_timestamps = self._create_model()
-            print("Created Silero model")
-
-    def _create_model(self):
-        model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad', model='silero_vad')
-
-        # Silero does not benefit from multi-threading
-        torch.set_num_threads(1) # JIT
-        (get_speech_timestamps, _, _, _, _) = utils
-
-        return model, get_speech_timestamps
-
-    def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, end_time: float):
-        result = []
-
-        print("Getting timestamps from audio file: {}, start: {}, duration: {}".format(audio, start_time, end_time))
-        perf_start_time = time.perf_counter()
-
-        # Divide processing of audio into chunks
-        chunk_start = start_time
-
-        while (chunk_start < end_time):
-            chunk_duration = min(end_time - chunk_start, VAD_MAX_PROCESSING_CHUNK)
-
-            print("Processing VAD in chunk from {} to {}".format(format_timestamp(chunk_start), format_timestamp(chunk_start + chunk_duration)))
-            wav = self.get_audio_segment(audio, str(chunk_start), str(chunk_duration))
-
-            sample_timestamps = self.get_speech_timestamps(wav, self.model, sampling_rate=self.sampling_rate, threshold=SPEECH_TRESHOLD)
-            seconds_timestamps = self.multiply_timestamps(sample_timestamps, factor=1 / self.sampling_rate)
-            adjusted = self.adjust_timestamp(seconds_timestamps, adjust_seconds=chunk_start, max_source_time=chunk_start + chunk_duration)
-
-            #pprint(adjusted)
-
-            result.extend(adjusted)
-            chunk_start += chunk_duration
-
-        perf_end_time = time.perf_counter()
-        print("VAD processing took {} seconds".format(perf_end_time - perf_start_time))
-
-        return result
-
-    def __getstate__(self):
-        # We only need the sampling rate
-        return { 'sampling_rate': self.sampling_rate }
-
-    def __setstate__(self, state):
-        self.sampling_rate = state['sampling_rate']
-        self.model = None
-        # Use the global cache
-        self.cache = GLOBAL_MODEL_CACHE
-        self._initialize_model()
-
-# A very simple VAD that just marks every N seconds as speech
-class VadPeriodicTranscription(AbstractTranscription):
-    def __init__(self, sampling_rate: int = 16000):
-        super().__init__(sampling_rate=sampling_rate)
-
-    def is_transcribe_timestamps_fast(self):
-        # This is a very fast VAD - no need to parallelize it
-        return True
-
-    def get_transcribe_timestamps(self, audio: str, config: PeriodicTranscriptionConfig, start_time: float, end_time: float):
-        result = []
-
-        # Generate a timestamp every
N seconds - start_timestamp = start_time - - while (start_timestamp < end_time): - end_timestamp = min(start_timestamp + config.periodic_duration, end_time) - segment_duration = end_timestamp - start_timestamp - - # Minimum duration is 1 second - if (segment_duration >= 1): - result.append( { 'start': start_timestamp, 'end': end_timestamp } ) - - start_timestamp = end_timestamp - - return result - -def get_audio_duration(file: str): - return float(ffmpeg.probe(file)["format"]["duration"]) - -def load_audio(file: str, sample_rate: int = 16000, - start_time: str = None, duration: str = None): - """ - Open an audio file and read as mono waveform, resampling as necessary - - Parameters - ---------- - file: str - The audio file to open - - sr: int - The sample rate to resample the audio if necessary - - start_time: str - The start time, using the standard FFMPEG time duration syntax, or None to disable. - - duration: str - The duration, using the standard FFMPEG time duration syntax, or None to disable. - - Returns - ------- - A NumPy array containing the audio waveform, in float32 dtype. - """ - try: - inputArgs = {'threads': 0} - - if (start_time is not None): - inputArgs['ss'] = start_time - if (duration is not None): - inputArgs['t'] = duration - - # This launches a subprocess to decode audio while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. - out, _ = ( - ffmpeg.input(file, **inputArgs) - .output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=sample_rate) - .run(cmd="ffmpeg", capture_stdout=True, capture_stderr=True) - ) - except ffmpeg.Error as e: - raise RuntimeError(f"Failed to load audio: {e.stderr.decode()}") - - return np.frombuffer(out, np.int16).flatten().astype(np.float32) / 32768.0 \ No newline at end of file diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/GbrImagePlugin.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/GbrImagePlugin.py deleted file mode 100644 index 994a6e8ebb2f0f2e69990a211d7a1ec4f06b7fd1..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/GbrImagePlugin.py +++ /dev/null @@ -1,102 +0,0 @@ -# -# The Python Imaging Library -# -# load a GIMP brush file -# -# History: -# 96-03-14 fl Created -# 16-01-08 es Version 2 -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1996. -# Copyright (c) Eric Soroos 2016. -# -# See the README file for information on usage and redistribution. -# -# -# See https://github.com/GNOME/gimp/blob/mainline/devel-docs/gbr.txt for -# format documentation. -# -# This code Interprets version 1 and 2 .gbr files. -# Version 1 files are obsolete, and should not be used for new -# brushes. -# Version 2 files are saved by GIMP v2.8 (at least) -# Version 3 files have a format specifier of 18 for 16bit floats in -# the color depth field. This is currently unsupported by Pillow. - -from . import Image, ImageFile -from ._binary import i32be as i32 - - -def _accept(prefix): - return len(prefix) >= 8 and i32(prefix, 0) >= 20 and i32(prefix, 4) in (1, 2) - - -## -# Image plugin for the GIMP brush format. 
- - -class GbrImageFile(ImageFile.ImageFile): - format = "GBR" - format_description = "GIMP brush file" - - def _open(self): - header_size = i32(self.fp.read(4)) - if header_size < 20: - msg = "not a GIMP brush" - raise SyntaxError(msg) - version = i32(self.fp.read(4)) - if version not in (1, 2): - msg = f"Unsupported GIMP brush version: {version}" - raise SyntaxError(msg) - - width = i32(self.fp.read(4)) - height = i32(self.fp.read(4)) - color_depth = i32(self.fp.read(4)) - if width <= 0 or height <= 0: - msg = "not a GIMP brush" - raise SyntaxError(msg) - if color_depth not in (1, 4): - msg = f"Unsupported GIMP brush color depth: {color_depth}" - raise SyntaxError(msg) - - if version == 1: - comment_length = header_size - 20 - else: - comment_length = header_size - 28 - magic_number = self.fp.read(4) - if magic_number != b"GIMP": - msg = "not a GIMP brush, bad magic number" - raise SyntaxError(msg) - self.info["spacing"] = i32(self.fp.read(4)) - - comment = self.fp.read(comment_length)[:-1] - - if color_depth == 1: - self.mode = "L" - else: - self.mode = "RGBA" - - self._size = width, height - - self.info["comment"] = comment - - # Image might not be small - Image._decompression_bomb_check(self.size) - - # Data is an uncompressed block of w * h * bytes/pixel - self._data_size = width * height * color_depth - - def load(self): - if not self.im: - self.im = Image.core.new(self.mode, self.size) - self.frombytes(self.fp.read(self._data_size)) - return Image.Image.load(self) - - -# -# registry - - -Image.register_open(GbrImageFile.format, GbrImageFile, _accept) -Image.register_extension(GbrImageFile.format, ".gbr") diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/charset_normalizer/cd.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/charset_normalizer/cd.py deleted file mode 100644 index 6e56fe84a9e0e63b918141bc27d708b2d915563f..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/charset_normalizer/cd.py +++ /dev/null @@ -1,390 +0,0 @@ -import importlib -from codecs import IncrementalDecoder -from collections import Counter -from functools import lru_cache -from typing import Counter as TypeCounter, Dict, List, Optional, Tuple - -from .assets import FREQUENCIES -from .constant import KO_NAMES, LANGUAGE_SUPPORTED_COUNT, TOO_SMALL_SEQUENCE, ZH_NAMES -from .md import is_suspiciously_successive_range -from .models import CoherenceMatches -from .utils import ( - is_accentuated, - is_latin, - is_multi_byte_encoding, - is_unicode_range_secondary, - unicode_range, -) - - -def encoding_unicode_range(iana_name: str) -> List[str]: - """ - Return associated unicode ranges in a single byte code page. 
- """ - if is_multi_byte_encoding(iana_name): - raise IOError("Function not supported on multi-byte code page") - - decoder = importlib.import_module( - "encodings.{}".format(iana_name) - ).IncrementalDecoder - - p: IncrementalDecoder = decoder(errors="ignore") - seen_ranges: Dict[str, int] = {} - character_count: int = 0 - - for i in range(0x40, 0xFF): - chunk: str = p.decode(bytes([i])) - - if chunk: - character_range: Optional[str] = unicode_range(chunk) - - if character_range is None: - continue - - if is_unicode_range_secondary(character_range) is False: - if character_range not in seen_ranges: - seen_ranges[character_range] = 0 - seen_ranges[character_range] += 1 - character_count += 1 - - return sorted( - [ - character_range - for character_range in seen_ranges - if seen_ranges[character_range] / character_count >= 0.15 - ] - ) - - -def unicode_range_languages(primary_range: str) -> List[str]: - """ - Return inferred languages used with a unicode range. - """ - languages: List[str] = [] - - for language, characters in FREQUENCIES.items(): - for character in characters: - if unicode_range(character) == primary_range: - languages.append(language) - break - - return languages - - -@lru_cache() -def encoding_languages(iana_name: str) -> List[str]: - """ - Single-byte encoding language association. Some code page are heavily linked to particular language(s). - This function does the correspondence. - """ - unicode_ranges: List[str] = encoding_unicode_range(iana_name) - primary_range: Optional[str] = None - - for specified_range in unicode_ranges: - if "Latin" not in specified_range: - primary_range = specified_range - break - - if primary_range is None: - return ["Latin Based"] - - return unicode_range_languages(primary_range) - - -@lru_cache() -def mb_encoding_languages(iana_name: str) -> List[str]: - """ - Multi-byte encoding language association. Some code page are heavily linked to particular language(s). - This function does the correspondence. - """ - if ( - iana_name.startswith("shift_") - or iana_name.startswith("iso2022_jp") - or iana_name.startswith("euc_j") - or iana_name == "cp932" - ): - return ["Japanese"] - if iana_name.startswith("gb") or iana_name in ZH_NAMES: - return ["Chinese"] - if iana_name.startswith("iso2022_kr") or iana_name in KO_NAMES: - return ["Korean"] - - return [] - - -@lru_cache(maxsize=LANGUAGE_SUPPORTED_COUNT) -def get_target_features(language: str) -> Tuple[bool, bool]: - """ - Determine main aspects from a supported language if it contains accents and if is pure Latin. - """ - target_have_accents: bool = False - target_pure_latin: bool = True - - for character in FREQUENCIES[language]: - if not target_have_accents and is_accentuated(character): - target_have_accents = True - if target_pure_latin and is_latin(character) is False: - target_pure_latin = False - - return target_have_accents, target_pure_latin - - -def alphabet_languages( - characters: List[str], ignore_non_latin: bool = False -) -> List[str]: - """ - Return associated languages associated to given characters. 
- """ - languages: List[Tuple[str, float]] = [] - - source_have_accents = any(is_accentuated(character) for character in characters) - - for language, language_characters in FREQUENCIES.items(): - target_have_accents, target_pure_latin = get_target_features(language) - - if ignore_non_latin and target_pure_latin is False: - continue - - if target_have_accents is False and source_have_accents: - continue - - character_count: int = len(language_characters) - - character_match_count: int = len( - [c for c in language_characters if c in characters] - ) - - ratio: float = character_match_count / character_count - - if ratio >= 0.2: - languages.append((language, ratio)) - - languages = sorted(languages, key=lambda x: x[1], reverse=True) - - return [compatible_language[0] for compatible_language in languages] - - -def characters_popularity_compare( - language: str, ordered_characters: List[str] -) -> float: - """ - Determine if a ordered characters list (by occurrence from most appearance to rarest) match a particular language. - The result is a ratio between 0. (absolutely no correspondence) and 1. (near perfect fit). - Beware that is function is not strict on the match in order to ease the detection. (Meaning close match is 1.) - """ - if language not in FREQUENCIES: - raise ValueError("{} not available".format(language)) - - character_approved_count: int = 0 - FREQUENCIES_language_set = set(FREQUENCIES[language]) - - ordered_characters_count: int = len(ordered_characters) - target_language_characters_count: int = len(FREQUENCIES[language]) - - large_alphabet: bool = target_language_characters_count > 26 - - for character, character_rank in zip( - ordered_characters, range(0, ordered_characters_count) - ): - if character not in FREQUENCIES_language_set: - continue - - character_rank_in_language: int = FREQUENCIES[language].index(character) - expected_projection_ratio: float = ( - target_language_characters_count / ordered_characters_count - ) - character_rank_projection: int = int(character_rank * expected_projection_ratio) - - if ( - large_alphabet is False - and abs(character_rank_projection - character_rank_in_language) > 4 - ): - continue - - if ( - large_alphabet is True - and abs(character_rank_projection - character_rank_in_language) - < target_language_characters_count / 3 - ): - character_approved_count += 1 - continue - - characters_before_source: List[str] = FREQUENCIES[language][ - 0:character_rank_in_language - ] - characters_after_source: List[str] = FREQUENCIES[language][ - character_rank_in_language: - ] - characters_before: List[str] = ordered_characters[0:character_rank] - characters_after: List[str] = ordered_characters[character_rank:] - - before_match_count: int = len( - set(characters_before) & set(characters_before_source) - ) - - after_match_count: int = len( - set(characters_after) & set(characters_after_source) - ) - - if len(characters_before_source) == 0 and before_match_count <= 4: - character_approved_count += 1 - continue - - if len(characters_after_source) == 0 and after_match_count <= 4: - character_approved_count += 1 - continue - - if ( - before_match_count / len(characters_before_source) >= 0.4 - or after_match_count / len(characters_after_source) >= 0.4 - ): - character_approved_count += 1 - continue - - return character_approved_count / len(ordered_characters) - - -def alpha_unicode_split(decoded_sequence: str) -> List[str]: - """ - Given a decoded text sequence, return a list of str. Unicode range / alphabet separation. - Ex. 
a text containing English/Latin with a bit a Hebrew will return two items in the resulting list; - One containing the latin letters and the other hebrew. - """ - layers: Dict[str, str] = {} - - for character in decoded_sequence: - if character.isalpha() is False: - continue - - character_range: Optional[str] = unicode_range(character) - - if character_range is None: - continue - - layer_target_range: Optional[str] = None - - for discovered_range in layers: - if ( - is_suspiciously_successive_range(discovered_range, character_range) - is False - ): - layer_target_range = discovered_range - break - - if layer_target_range is None: - layer_target_range = character_range - - if layer_target_range not in layers: - layers[layer_target_range] = character.lower() - continue - - layers[layer_target_range] += character.lower() - - return list(layers.values()) - - -def merge_coherence_ratios(results: List[CoherenceMatches]) -> CoherenceMatches: - """ - This function merge results previously given by the function coherence_ratio. - The return type is the same as coherence_ratio. - """ - per_language_ratios: Dict[str, List[float]] = {} - for result in results: - for sub_result in result: - language, ratio = sub_result - if language not in per_language_ratios: - per_language_ratios[language] = [ratio] - continue - per_language_ratios[language].append(ratio) - - merge = [ - ( - language, - round( - sum(per_language_ratios[language]) / len(per_language_ratios[language]), - 4, - ), - ) - for language in per_language_ratios - ] - - return sorted(merge, key=lambda x: x[1], reverse=True) - - -def filter_alt_coherence_matches(results: CoherenceMatches) -> CoherenceMatches: - """ - We shall NOT return "English—" in CoherenceMatches because it is an alternative - of "English". This function only keeps the best match and remove the em-dash in it. - """ - index_results: Dict[str, List[float]] = dict() - - for result in results: - language, ratio = result - no_em_name: str = language.replace("—", "") - - if no_em_name not in index_results: - index_results[no_em_name] = [] - - index_results[no_em_name].append(ratio) - - if any(len(index_results[e]) > 1 for e in index_results): - filtered_results: CoherenceMatches = [] - - for language in index_results: - filtered_results.append((language, max(index_results[language]))) - - return filtered_results - - return results - - -@lru_cache(maxsize=2048) -def coherence_ratio( - decoded_sequence: str, threshold: float = 0.1, lg_inclusion: Optional[str] = None -) -> CoherenceMatches: - """ - Detect ANY language that can be identified in given sequence. The sequence will be analysed by layers. - A layer = Character extraction by alphabets/ranges. 
- """ - - results: List[Tuple[str, float]] = [] - ignore_non_latin: bool = False - - sufficient_match_count: int = 0 - - lg_inclusion_list = lg_inclusion.split(",") if lg_inclusion is not None else [] - if "Latin Based" in lg_inclusion_list: - ignore_non_latin = True - lg_inclusion_list.remove("Latin Based") - - for layer in alpha_unicode_split(decoded_sequence): - sequence_frequencies: TypeCounter[str] = Counter(layer) - most_common = sequence_frequencies.most_common() - - character_count: int = sum(o for c, o in most_common) - - if character_count <= TOO_SMALL_SEQUENCE: - continue - - popular_character_ordered: List[str] = [c for c, o in most_common] - - for language in lg_inclusion_list or alphabet_languages( - popular_character_ordered, ignore_non_latin - ): - ratio: float = characters_popularity_compare( - language, popular_character_ordered - ) - - if ratio < threshold: - continue - elif ratio >= 0.8: - sufficient_match_count += 1 - - results.append((language, round(ratio, 4))) - - if sufficient_match_count >= 3: - break - - return sorted( - filter_alt_coherence_matches(results), key=lambda x: x[1], reverse=True - ) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/merge/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/merge/__init__.py deleted file mode 100644 index 10eff133fae5d025f940b962c232a39bd0c23a74..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/merge/__init__.py +++ /dev/null @@ -1,211 +0,0 @@ -# Copyright 2013 Google, Inc. All Rights Reserved. -# -# Google Author(s): Behdad Esfahbod, Roozbeh Pournader - -from fontTools import ttLib -import fontTools.merge.base -from fontTools.merge.cmap import ( - computeMegaGlyphOrder, - computeMegaCmap, - renameCFFCharStrings, -) -from fontTools.merge.layout import layoutPreMerge, layoutPostMerge -from fontTools.merge.options import Options -import fontTools.merge.tables -from fontTools.misc.loggingTools import Timer -from functools import reduce -import sys -import logging - - -log = logging.getLogger("fontTools.merge") -timer = Timer(logger=logging.getLogger(__name__ + ".timer"), level=logging.INFO) - - -class Merger(object): - """Font merger. - - This class merges multiple files into a single OpenType font, taking into - account complexities such as OpenType layout (``GSUB``/``GPOS``) tables and - cross-font metrics (e.g. ``hhea.ascent`` is set to the maximum value across - all the fonts). - - If multiple glyphs map to the same Unicode value, and the glyphs are considered - sufficiently different (that is, they differ in any of paths, widths, or - height), then subsequent glyphs are renamed and a lookup in the ``locl`` - feature will be created to disambiguate them. For example, if the arguments - are an Arabic font and a Latin font and both contain a set of parentheses, - the Latin glyphs will be renamed to ``parenleft#1`` and ``parenright#1``, - and a lookup will be inserted into the to ``locl`` feature (creating it if - necessary) under the ``latn`` script to substitute ``parenleft`` with - ``parenleft#1`` etc. - - Restrictions: - - - All fonts must have the same units per em. - - If duplicate glyph disambiguation takes place as described above then the - fonts must have a ``GSUB`` table. - - Attributes: - options: Currently unused. 
- """ - - def __init__(self, options=None): - - if not options: - options = Options() - - self.options = options - - def _openFonts(self, fontfiles): - fonts = [ttLib.TTFont(fontfile) for fontfile in fontfiles] - for font, fontfile in zip(fonts, fontfiles): - font._merger__fontfile = fontfile - font._merger__name = font["name"].getDebugName(4) - return fonts - - def merge(self, fontfiles): - """Merges fonts together. - - Args: - fontfiles: A list of file names to be merged - - Returns: - A :class:`fontTools.ttLib.TTFont` object. Call the ``save`` method on - this to write it out to an OTF file. - """ - # - # Settle on a mega glyph order. - # - fonts = self._openFonts(fontfiles) - glyphOrders = [list(font.getGlyphOrder()) for font in fonts] - computeMegaGlyphOrder(self, glyphOrders) - - # Take first input file sfntVersion - sfntVersion = fonts[0].sfntVersion - - # Reload fonts and set new glyph names on them. - fonts = self._openFonts(fontfiles) - for font, glyphOrder in zip(fonts, glyphOrders): - font.setGlyphOrder(glyphOrder) - if "CFF " in font: - renameCFFCharStrings(self, glyphOrder, font["CFF "]) - - cmaps = [font["cmap"] for font in fonts] - self.duplicateGlyphsPerFont = [{} for _ in fonts] - computeMegaCmap(self, cmaps) - - mega = ttLib.TTFont(sfntVersion=sfntVersion) - mega.setGlyphOrder(self.glyphOrder) - - for font in fonts: - self._preMerge(font) - - self.fonts = fonts - - allTags = reduce(set.union, (list(font.keys()) for font in fonts), set()) - allTags.remove("GlyphOrder") - - for tag in sorted(allTags): - if tag in self.options.drop_tables: - continue - - with timer("merge '%s'" % tag): - tables = [font.get(tag, NotImplemented) for font in fonts] - - log.info("Merging '%s'.", tag) - clazz = ttLib.getTableClass(tag) - table = clazz(tag).merge(self, tables) - # XXX Clean this up and use: table = mergeObjects(tables) - - if table is not NotImplemented and table is not False: - mega[tag] = table - log.info("Merged '%s'.", tag) - else: - log.info("Dropped '%s'.", tag) - - del self.duplicateGlyphsPerFont - del self.fonts - - self._postMerge(mega) - - return mega - - def mergeObjects(self, returnTable, logic, tables): - # Right now we don't use self at all. Will use in the future - # for options and logging. - - allKeys = set.union( - set(), - *(vars(table).keys() for table in tables if table is not NotImplemented), - ) - for key in allKeys: - try: - mergeLogic = logic[key] - except KeyError: - try: - mergeLogic = logic["*"] - except KeyError: - raise Exception( - "Don't know how to merge key %s of class %s" - % (key, returnTable.__class__.__name__) - ) - if mergeLogic is NotImplemented: - continue - value = mergeLogic(getattr(table, key, NotImplemented) for table in tables) - if value is not NotImplemented: - setattr(returnTable, key, value) - - return returnTable - - def _preMerge(self, font): - layoutPreMerge(font) - - def _postMerge(self, font): - layoutPostMerge(font) - - if "OS/2" in font: - # https://github.com/fonttools/fonttools/issues/2538 - # TODO: Add an option to disable this? 
- font["OS/2"].recalcAvgCharWidth(font) - - -__all__ = ["Options", "Merger", "main"] - - -@timer("make one with everything (TOTAL TIME)") -def main(args=None): - """Merge multiple fonts into one""" - from fontTools import configLogger - - if args is None: - args = sys.argv[1:] - - options = Options() - args = options.parse_opts(args, ignore_unknown=["output-file"]) - outfile = "merged.ttf" - fontfiles = [] - for g in args: - if g.startswith("--output-file="): - outfile = g[14:] - continue - fontfiles.append(g) - - if len(args) < 1: - print("usage: pyftmerge font...", file=sys.stderr) - return 1 - - configLogger(level=logging.INFO if options.verbose else logging.WARNING) - if options.timing: - timer.logger.setLevel(logging.DEBUG) - else: - timer.logger.disabled = True - - merger = Merger(options=options) - font = merger.merge(fontfiles) - with timer("compile and save font"): - font.save(outfile) - - -if __name__ == "__main__": - sys.exit(main()) diff --git a/spaces/johnslegers/stable-diffusion-gui-test/ldmlib/models/diffusion/ddim.py b/spaces/johnslegers/stable-diffusion-gui-test/ldmlib/models/diffusion/ddim.py deleted file mode 100644 index 844cb10346f94b03859b263ae601bd181b24bbe1..0000000000000000000000000000000000000000 --- a/spaces/johnslegers/stable-diffusion-gui-test/ldmlib/models/diffusion/ddim.py +++ /dev/null @@ -1,241 +0,0 @@ -"""SAMPLING ONLY.""" - -import torch -import numpy as np -from tqdm import tqdm -from functools import partial - -from ldmlib.modules.diffusionmodules.util import make_ddim_sampling_parameters, make_ddim_timesteps, noise_like, \ - extract_into_tensor - - -class DDIMSampler(object): - def __init__(self, model, schedule="linear", **kwargs): - super().__init__() - self.model = model - self.ddpm_num_timesteps = model.num_timesteps - self.schedule = schedule - - def register_buffer(self, name, attr): - if type(attr) == torch.Tensor: - if attr.device != torch.device("cuda"): - attr = attr.to(torch.device("cuda")) - setattr(self, name, attr) - - def make_schedule(self, ddim_num_steps, ddim_discretize="uniform", ddim_eta=0., verbose=True): - self.ddim_timesteps = make_ddim_timesteps(ddim_discr_method=ddim_discretize, num_ddim_timesteps=ddim_num_steps, - num_ddpm_timesteps=self.ddpm_num_timesteps,verbose=verbose) - alphas_cumprod = self.model.alphas_cumprod - assert alphas_cumprod.shape[0] == self.ddpm_num_timesteps, 'alphas have to be defined for each timestep' - to_torch = lambda x: x.clone().detach().to(torch.float32).to(self.model.device) - - self.register_buffer('betas', to_torch(self.model.betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(self.model.alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod.cpu()))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod.cpu()))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod.cpu()))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. 
/ alphas_cumprod.cpu() - 1))) - - # ddim sampling parameters - ddim_sigmas, ddim_alphas, ddim_alphas_prev = make_ddim_sampling_parameters(alphacums=alphas_cumprod.cpu(), - ddim_timesteps=self.ddim_timesteps, - eta=ddim_eta,verbose=verbose) - self.register_buffer('ddim_sigmas', ddim_sigmas) - self.register_buffer('ddim_alphas', ddim_alphas) - self.register_buffer('ddim_alphas_prev', ddim_alphas_prev) - self.register_buffer('ddim_sqrt_one_minus_alphas', np.sqrt(1. - ddim_alphas)) - sigmas_for_original_sampling_steps = ddim_eta * torch.sqrt( - (1 - self.alphas_cumprod_prev) / (1 - self.alphas_cumprod) * ( - 1 - self.alphas_cumprod / self.alphas_cumprod_prev)) - self.register_buffer('ddim_sigmas_for_original_num_steps', sigmas_for_original_sampling_steps) - - @torch.no_grad() - def sample(self, - S, - batch_size, - shape, - conditioning=None, - callback=None, - normals_sequence=None, - img_callback=None, - quantize_x0=False, - eta=0., - mask=None, - x0=None, - temperature=1., - noise_dropout=0., - score_corrector=None, - corrector_kwargs=None, - verbose=True, - x_T=None, - log_every_t=100, - unconditional_guidance_scale=1., - unconditional_conditioning=None, - # this has to come in the same format as the conditioning, # e.g. as encoded tokens, ... - **kwargs - ): - if conditioning is not None: - if isinstance(conditioning, dict): - cbs = conditioning[list(conditioning.keys())[0]].shape[0] - if cbs != batch_size: - print(f"Warning: Got {cbs} conditionings but batch-size is {batch_size}") - else: - if conditioning.shape[0] != batch_size: - print(f"Warning: Got {conditioning.shape[0]} conditionings but batch-size is {batch_size}") - - self.make_schedule(ddim_num_steps=S, ddim_eta=eta, verbose=verbose) - # sampling - C, H, W = shape - size = (batch_size, C, H, W) - print(f'Data shape for DDIM sampling is {size}, eta {eta}') - - samples, intermediates = self.ddim_sampling(conditioning, size, - callback=callback, - img_callback=img_callback, - quantize_denoised=quantize_x0, - mask=mask, x0=x0, - ddim_use_original_steps=False, - noise_dropout=noise_dropout, - temperature=temperature, - score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - x_T=x_T, - log_every_t=log_every_t, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning, - ) - return samples, intermediates - - @torch.no_grad() - def ddim_sampling(self, cond, shape, - x_T=None, ddim_use_original_steps=False, - callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, log_every_t=100, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None,): - device = self.model.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - if timesteps is None: - timesteps = self.ddpm_num_timesteps if ddim_use_original_steps else self.ddim_timesteps - elif timesteps is not None and not ddim_use_original_steps: - subset_end = int(min(timesteps / self.ddim_timesteps.shape[0], 1) * self.ddim_timesteps.shape[0]) - 1 - timesteps = self.ddim_timesteps[:subset_end] - - intermediates = {'x_inter': [img], 'pred_x0': [img]} - time_range = reversed(range(0,timesteps)) if ddim_use_original_steps else np.flip(timesteps) - total_steps = timesteps if ddim_use_original_steps else timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='DDIM Sampler', 
total=total_steps) - - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((b,), step, device=device, dtype=torch.long) - - if mask is not None: - assert x0 is not None - img_orig = self.model.q_sample(x0, ts) # TODO: deterministic forward pass? - img = img_orig * mask + (1. - mask) * img - - outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps, - quantize_denoised=quantize_denoised, temperature=temperature, - noise_dropout=noise_dropout, score_corrector=score_corrector, - corrector_kwargs=corrector_kwargs, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - img, pred_x0 = outs - if callback: callback(i) - if img_callback: img_callback(pred_x0, i) - - if index % log_every_t == 0 or index == total_steps - 1: - intermediates['x_inter'].append(img) - intermediates['pred_x0'].append(pred_x0) - - return img, intermediates - - @torch.no_grad() - def p_sample_ddim(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, - unconditional_guidance_scale=1., unconditional_conditioning=None): - b, *_, device = *x.shape, x.device - - if unconditional_conditioning is None or unconditional_guidance_scale == 1.: - e_t = self.model.apply_model(x, t, c) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t] * 2) - c_in = torch.cat([unconditional_conditioning, c]) - e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2) - e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond) - - if score_corrector is not None: - assert self.model.parameterization == "eps" - e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs) - - alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas - alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev - sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas - sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas - # select parameters corresponding to the currently considered timestep - a_t = torch.full((b, 1, 1, 1), alphas[index], device=device) - a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device) - sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device) - sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device) - - # current prediction for x_0 - pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt() - if quantize_denoised: - pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0) - # direction pointing to x_t - dir_xt = (1. 
- a_prev - sigma_t**2).sqrt() * e_t - noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise - return x_prev, pred_x0 - - @torch.no_grad() - def stochastic_encode(self, x0, t, use_original_steps=False, noise=None): - # fast, but does not allow for exact reconstruction - # t serves as an index to gather the correct alphas - if use_original_steps: - sqrt_alphas_cumprod = self.sqrt_alphas_cumprod - sqrt_one_minus_alphas_cumprod = self.sqrt_one_minus_alphas_cumprod - else: - sqrt_alphas_cumprod = torch.sqrt(self.ddim_alphas) - sqrt_one_minus_alphas_cumprod = self.ddim_sqrt_one_minus_alphas - - if noise is None: - noise = torch.randn_like(x0) - return (extract_into_tensor(sqrt_alphas_cumprod, t, x0.shape) * x0 + - extract_into_tensor(sqrt_one_minus_alphas_cumprod, t, x0.shape) * noise) - - @torch.no_grad() - def decode(self, x_latent, cond, t_start, unconditional_guidance_scale=1.0, unconditional_conditioning=None, - use_original_steps=False): - - timesteps = np.arange(self.ddpm_num_timesteps) if use_original_steps else self.ddim_timesteps - timesteps = timesteps[:t_start] - - time_range = np.flip(timesteps) - total_steps = timesteps.shape[0] - print(f"Running DDIM Sampling with {total_steps} timesteps") - - iterator = tqdm(time_range, desc='Decoding image', total=total_steps) - x_dec = x_latent - for i, step in enumerate(iterator): - index = total_steps - i - 1 - ts = torch.full((x_latent.shape[0],), step, device=x_latent.device, dtype=torch.long) - x_dec, _ = self.p_sample_ddim(x_dec, cond, ts, index=index, use_original_steps=use_original_steps, - unconditional_guidance_scale=unconditional_guidance_scale, - unconditional_conditioning=unconditional_conditioning) - return x_dec diff --git a/spaces/johnson906/recipedia/src/utils/metrics.py b/spaces/johnson906/recipedia/src/utils/metrics.py deleted file mode 100644 index 7f16675b38b6960940b9f507e321464005ac83e1..0000000000000000000000000000000000000000 --- a/spaces/johnson906/recipedia/src/utils/metrics.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
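-# Utility losses and metrics used by this project: a masked cross-entropy criterion for
-# padded targets, a soft IoU, and per-class error counters that feed the accuracy,
-# Jaccard, Dice and F1 computations below.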
- -import sys -import time -import math -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.modules.loss import _WeightedLoss -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') -map_loc = None if torch.cuda.is_available() else 'cpu' - - -class MaskedCrossEntropyCriterion(_WeightedLoss): - - def __init__(self, ignore_index=[-100], reduce=None): - super(MaskedCrossEntropyCriterion, self).__init__() - self.padding_idx = ignore_index - self.reduce = reduce - - def forward(self, outputs, targets): - lprobs = nn.functional.log_softmax(outputs, dim=-1) - lprobs = lprobs.view(-1, lprobs.size(-1)) - - for idx in self.padding_idx: - # remove padding idx from targets to allow gathering without error (padded entries will be suppressed later) - targets[targets == idx] = 0 - - nll_loss = -lprobs.gather(dim=-1, index=targets.unsqueeze(1)) - if self.reduce: - nll_loss = nll_loss.sum() - - return nll_loss.squeeze() - - -def softIoU(out, target, e=1e-6, sum_axis=1): - - num = (out*target).sum(sum_axis, True) - den = (out+target-out*target).sum(sum_axis, True) + e - iou = num / den - - return iou - - -def update_error_types(error_types, y_pred, y_true): - - error_types['tp_i'] += (y_pred * y_true).sum(0).cpu().data.numpy() - error_types['fp_i'] += (y_pred * (1-y_true)).sum(0).cpu().data.numpy() - error_types['fn_i'] += ((1-y_pred) * y_true).sum(0).cpu().data.numpy() - error_types['tn_i'] += ((1-y_pred) * (1-y_true)).sum(0).cpu().data.numpy() - - error_types['tp_all'] += (y_pred * y_true).sum().item() - error_types['fp_all'] += (y_pred * (1-y_true)).sum().item() - error_types['fn_all'] += ((1-y_pred) * y_true).sum().item() - - -def compute_metrics(ret_metrics, error_types, metric_names, eps=1e-10, weights=None): - - if 'accuracy' in metric_names: - ret_metrics['accuracy'].append(np.mean((error_types['tp_i'] + error_types['tn_i']) / (error_types['tp_i'] + error_types['fp_i'] + error_types['fn_i'] + error_types['tn_i']))) - if 'jaccard' in metric_names: - ret_metrics['jaccard'].append(error_types['tp_all'] / (error_types['tp_all'] + error_types['fp_all'] + error_types['fn_all'] + eps)) - if 'dice' in metric_names: - ret_metrics['dice'].append(2*error_types['tp_all'] / (2*(error_types['tp_all'] + error_types['fp_all'] + error_types['fn_all']) + eps)) - if 'f1' in metric_names: - pre = error_types['tp_i'] / (error_types['tp_i'] + error_types['fp_i'] + eps) - rec = error_types['tp_i'] / (error_types['tp_i'] + error_types['fn_i'] + eps) - f1_perclass = 2*(pre * rec) / (pre + rec + eps) - if 'f1_ingredients' not in ret_metrics.keys(): - ret_metrics['f1_ingredients'] = [np.average(f1_perclass, weights=weights)] - else: - ret_metrics['f1_ingredients'].append(np.average(f1_perclass, weights=weights)) - - pre = error_types['tp_all'] / (error_types['tp_all'] + error_types['fp_all'] + eps) - rec = error_types['tp_all'] / (error_types['tp_all'] + error_types['fn_all'] + eps) - f1 = 2*(pre * rec) / (pre + rec + eps) - ret_metrics['f1'].append(f1) diff --git a/spaces/jonatanklosko/chai/priv/hero_icons/LICENSE.md b/spaces/jonatanklosko/chai/priv/hero_icons/LICENSE.md deleted file mode 100644 index 1ac3e409b71e2f568457d2c0ae4a1cbc8eeaea68..0000000000000000000000000000000000000000 --- a/spaces/jonatanklosko/chai/priv/hero_icons/LICENSE.md +++ /dev/null @@ -1,21 +0,0 @@ -MIT License - -Copyright (c) 2020 Refactoring UI Inc. 
- -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. \ No newline at end of file diff --git a/spaces/jordonpeter01/MusicGen/setup.py b/spaces/jordonpeter01/MusicGen/setup.py deleted file mode 100644 index 78a172b7c90003b689bde40b49cc8fe1fb8107d4..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/MusicGen/setup.py +++ /dev/null @@ -1,65 +0,0 @@ -""" - Copyright (c) Meta Platforms, Inc. and affiliates. - All rights reserved. - - This source code is licensed under the license found in the - LICENSE file in the root directory of this source tree. - -""" - -from pathlib import Path - -from setuptools import setup, find_packages - - -NAME = 'audiocraft' -DESCRIPTION = 'Audio research library for PyTorch' - -URL = 'https://github.com/fairinternal/audiocraft' -AUTHOR = 'FAIR Speech & Audio' -EMAIL = 'defossez@meta.com' -REQUIRES_PYTHON = '>=3.8.0' - -for line in open('audiocraft/__init__.py'): - line = line.strip() - if '__version__' in line: - context = {} - exec(line, context) - VERSION = context['__version__'] - -HERE = Path(__file__).parent - -try: - with open(HERE / "README.md", encoding='utf-8') as f: - long_description = '\n' + f.read() -except FileNotFoundError: - long_description = DESCRIPTION - -REQUIRED = [i.strip() for i in open(HERE / 'requirements.txt') if not i.startswith('#')] - -setup( - name=NAME, - version=VERSION, - description=DESCRIPTION, - author_email=EMAIL, - long_description=long_description, - long_description_content_type='text/markdown', - author=AUTHOR, - url=URL, - python_requires=REQUIRES_PYTHON, - install_requires=REQUIRED, - extras_require={ - 'dev': ['coverage', 'flake8', 'mypy', 'pdoc3', 'pytest'], - }, - packages=find_packages(), - package_data={'audiocraft': ['py.typed']}, - include_package_data=True, - license='MIT License', - classifiers=[ - # Trove classifiers - # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers - 'License :: OSI Approved :: MIT License', - 'Topic :: Multimedia :: Sound/Audio', - 'Topic :: Scientific/Engineering :: Artificial Intelligence', - ], -) diff --git a/spaces/jskalbg/ChatDev01/chatdev/documents.py b/spaces/jskalbg/ChatDev01/chatdev/documents.py deleted file mode 100644 index e37cd21a82fe8a6d92a2b0fd743182310ae2d0ab..0000000000000000000000000000000000000000 --- a/spaces/jskalbg/ChatDev01/chatdev/documents.py +++ /dev/null @@ -1,47 +0,0 @@ -import re -import os -import time -from colorama import Fore - - -class Documents(): - def __init__(self, generated_content = "", parse = True, predifined_filename = None): - self.directory: 
str = None - self.generated_content = generated_content - self.docbooks = {} - - if generated_content != "": - if parse: - regex = r"```\n(.*?)```" - matches = re.finditer(regex, self.generated_content, re.DOTALL) - for match in matches: - filename = "requirements.txt" - doc = match.group(1) - self.docbooks[filename] = doc - else: - self.docbooks[predifined_filename] = self.generated_content - - def _update_docs(self, generated_content, parse = True, predifined_filename = ""): - new_docs = Documents(generated_content, parse, predifined_filename) - for key in new_docs.docbooks.keys(): - if key not in self.docbooks.keys() or self.docbooks[key] != new_docs.docbooks[key]: - print("{} updated.".format(key)) - print(Fore.WHITE + "------Old:\n{}\n------New:\n{}".format(self.docbooks[key] if key in self.docbooks.keys() else "# None", new_docs.docbooks[key])) - self.docbooks[key] = new_docs.docbooks[key] - - - def _rewrite_docs(self): - directory = self.directory - if not os.path.exists(directory): - os.mkdir(directory) - print("{} Created.".format(directory)) - for filename in self.docbooks.keys(): - with open(os.path.join(directory, filename), "w", encoding="utf-8") as writer: - writer.write(self.docbooks[filename]) - print(os.path.join(directory, filename), "Writed") - - def _get_docs(self): - content = "" - for filename in self.docbooks.keys(): - content += "{}\n```\n{}\n```\n\n".format(filename, self.docbooks[filename]) - return content diff --git a/spaces/kadirnar/Anime4k/utils.py b/spaces/kadirnar/Anime4k/utils.py deleted file mode 100644 index 3dfd97dda0161c8af821830b99e9bc3e99f84633..0000000000000000000000000000000000000000 --- a/spaces/kadirnar/Anime4k/utils.py +++ /dev/null @@ -1,140 +0,0 @@ -import pathlib -import ffmpeg -import tempfile -from pyanime4k.ac import AC, Parameters, ProcessorType, Codec -from yt_dlp import YoutubeDL - -def url_download(url): - outtmpl = url[-5:] + '.mp4' - ydl_opts = {'format': 'bestvideo[ext=mp4]+bestaudio[ext=mp4]/mp4+best[height<=144]', 'outtmpl': outtmpl} - with YoutubeDL(ydl_opts) as ydl: - ydl.extract_info(url, download=True) - - return outtmpl - -def _sanitize_input_paths(input_paths): - """ sanitize input file paths - - Args: - input_paths (any): input paths variable to sanitize - """ - sanitized_list = [] - - # if input is single file in string format - # convert it into pathlib.Path object - if isinstance(input_paths, str): - sanitized_list.append(pathlib.Path(input_paths)) - - # if the input is single file instead of a list - # convert it into a list - elif isinstance(input_paths, pathlib.Path): - sanitized_list.append(input_paths) - - # if the input is already a list - # make sure all elements are path objects - elif isinstance(input_paths, list): - for path in input_paths: - - # if the path is not a pathlib.Path object - # convert it into an object - if not isinstance(path, pathlib.Path): - sanitized_list.append(pathlib.Path(path)) - - # otherwise, the path is clean - else: - sanitized_list.append(path) - - # return the sanitized lsit - return sanitized_list - - - -def migrate_audio_streams(upscaled_video: str, original_video: str, output_path: str): - upscaled_video = pathlib.Path(upscaled_video) - original_video = pathlib.Path(original_video) - output_path = pathlib.Path(output_path) - upscaled_input = ffmpeg.input(str(upscaled_video.absolute())) - original_input = ffmpeg.input(str(original_video.absolute())) - - # find upscaled video stream and original audio stream - upscaled_video = upscaled_input.video - original_audio = 
original_input.audio - # create output file with selected streams - output = ffmpeg.output(upscaled_video, original_audio, - str(output_path.absolute()), c="copy") - - ffmpeg.run(output, overwrite_output=True) - - -def upscale_videos(input_paths: list, output_suffix: str = "_output", output_path: pathlib.Path = None, parameters: Parameters = Parameters(), GPU_mode: bool = False, ACNet: bool = True, codec: Codec = Codec.MP4V): - """ upscale a list of video files with Anime4k - - Args: - input_paths (list): list of input file paths - output_suffix (str, optional): output files suffix. Defaults to "_output". - output_path (pathlib.Path, optional): parent directory of output paths. Defaults to None. - parameters (Parameters, optional): custom arguments passed to Anime4KCPP. - GPU_mode (bool, optional): enable GPU mode. Defaults to False. - ACNet (bool, optional): enable ACNet mode. Defaults to True. - codec (Codec, optional): codec for video encodeing. Defaults to MP4V - - Raises: - FileExistsError: when output path exists and isn't a directory - ACError - """ - - # sanitize input list - input_paths = _sanitize_input_paths(input_paths) - - # if destination path unspecified - if output_path is None: - - # destination path is first input file's parent directory - output_path = input_paths[0].parent - - # if destination path doesn't exist - if not output_path.exists(): - # create directory and its parents if necessary - output_path.mkdir(parents=True, exist_ok=True) - - # else if it already exists but isn't a directory - elif not output_path.is_dir(): - raise FileExistsError( - 'destination path already exists and isn\'t a directory') - - # set parameters to video mode - parameters.videoMode = True - - # create anime4k object - if GPU_mode: - if ACNet: - ac_object = AC(False, True, type=ProcessorType.GPUCNN, - parameters=parameters) - else: - ac_object = AC(True, False, type=ProcessorType.GPU, - parameters=parameters) - else: - if ACNet: - ac_object = AC(False, False, type=ProcessorType.CPUCNN, - parameters=parameters) - else: - ac_object = AC(False, False, type=ProcessorType.CPU, - parameters=parameters) - - # process each of the files in the list - for path in input_paths: - - # create temporary directory to save the upscaled video - temporary_directory = pathlib.Path(tempfile.mkdtemp()) - temporary_video_file_path = temporary_directory.joinpath('temp.mp4') - # process and save video file to temp/temp.mp4 - - ac_object.load_video(str(path)) - ac_object.set_save_video_info(str(temporary_video_file_path), codec) - ac_object.process_with_progress() - ac_object.save_video() - migrate_audio_streams(upscaled_video=temporary_video_file_path, - original_video=path, - output_path=(output_path.joinpath(path.stem + output_suffix + path.suffix))) - return temporary_video_file_path - \ No newline at end of file diff --git a/spaces/kaicheng/ChatGPT_ad/readme/README_ja.md b/spaces/kaicheng/ChatGPT_ad/readme/README_ja.md deleted file mode 100644 index fc56eec0b81c22ff0a49e3960aa52ffd7d6dc5cb..0000000000000000000000000000000000000000 --- a/spaces/kaicheng/ChatGPT_ad/readme/README_ja.md +++ /dev/null @@ -1,126 +0,0 @@ -
- 简体中文 | English | 日本語
        - -

        川虎 Chat 🐯 Chuanhu Chat

        -
- Logo

        -

A lightweight and user-friendly web UI for LLMs such as ChatGPT, ChatGLM, and LLaMA

        -

- Tests Passing · GitHub Contributors · GitHub pull requests

- Streaming output / unlimited conversations / history saving / preset prompts / chat with files
- Web search / LaTeX rendering / table rendering / code highlighting
- Auto dark mode / adaptive web interface / WeChat-like theme
- Multi-parameter tuning / multiple API-Key support / multi-user support
- GPT-4 support / local deployment of LLMs.

- Video tutorial · 2.0 introduction · 3.0 introduction & tutorial || Online trial · One-click deploy

        -

- Animation Demo

        -

        -
        - -## 使う上でのTips - -- ChatGPTをより適切に制御するために、システムプロンプトを使用できます。 -- プロンプトテンプレートを使用するには、プロンプトテンプレートコレクションを選択し、ドロップダウンメニューから特定のプロンプトを選択。回答が不十分な場合は、`🔄再生成`ボタンを使って再試行します。 -- 入力ボックスで改行するには、Shift + Enterキーを押してください。 -- 入力履歴を素早く切り替えるには、入力ボックスで キーを押す。 -- プログラムをサーバにデプロイするには、プログラムの最終行を `demo.launch(server_name="0.0.0.0", server_port=)`に変更します。 -- 共有リンクを取得するには、プログラムの最後の行を `demo.launch(share=True)` に変更してください。なお、公開リンクでアクセスするためには、プログラムが実行されている必要があることに注意してください。 -- Hugging Face Spacesで使用する場合: より速く、より安全に利用するために、**Duplicate Space**を使用し、自分のスペースでプログラムを実行することをお勧めします。 - -## インストール - -```shell -git clone https://github.com/GaiZhenbiao/ChuanhuChatGPT.git -cd ChuanhuChatGPT -pip install -r requirements.txt -``` - -次に `config_example.json`をコピーして `config.json`にリネームし、そのファイルにAPI-Keyなどの設定を記入する。 - -```shell -python ChuanhuChatbot.py -``` - -ブラウザのウィンドウが開き、ChatGPTとチャットできるようになります。 - -> **Note** -> -> 詳しい手順は[wikiページ](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程)をご確認ください。 - -## トラブルシューティング - -問題が発生した場合は、まずこのプロジェクトの最新の変更点を手動で引っ張ってみるのがよいでしょう。その手順は以下の通りです: - -1. ウェブページの `Download ZIP` をクリックして最新のコードアーカイブをダウンロードするか、または - ```shell - git pull https://github.com/GaiZhenbiao/ChuanhuChatGPT.git main -f - ``` -2. 新しい依存関係が導入されている可能性があるため、依存関係を再度インストールしてみてください。 - ``` - pip install -r requirements.txt - ``` -3. Gradioを更新 - ``` - pip install gradio --upgrade --force-reinstall - ``` - -一般的に、以下の手順でほとんどの問題を解決することができます。 - -それでも問題が解決しない場合は、こちらのページをご参照ください: [よくある質問(FAQ)](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题) - -このページでは、考えられるほぼすべての問題点と解決策を掲載しています。よくお読みください。 - -## More Information - -より詳細な情報は、[wiki](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki) をご覧ください。: - -- [How to contribute a translation](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/Localization) -- [How to make a contribution](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/贡献指南) -- [How to cite the project](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可#如何引用该项目) -- [Project changelog](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/更新日志) -- [Project license](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可) - -## Starchart - -[![Star History Chart](https://api.star-history.com/svg?repos=GaiZhenbiao/ChuanhuChatGPT&type=Date)](https://star-history.com/#GaiZhenbiao/ChuanhuChatGPT&Date) - -## Contributors - - - - - -## Sponsor - -🐯 この企画が役に立ったら、遠慮なくコーラかコーヒーでもおごってください〜。 - -Buy Me A Coffee - -image diff --git a/spaces/keithhon/Real-Time-Voice-Cloning/synthesizer/utils/numbers.py b/spaces/keithhon/Real-Time-Voice-Cloning/synthesizer/utils/numbers.py deleted file mode 100644 index 75020a0bd732830f603d7c7d250c9e087033cc24..0000000000000000000000000000000000000000 --- a/spaces/keithhon/Real-Time-Voice-Cloning/synthesizer/utils/numbers.py +++ /dev/null @@ -1,68 +0,0 @@ -import re -import inflect - -_inflect = inflect.engine() -_comma_number_re = re.compile(r"([0-9][0-9\,]+[0-9])") -_decimal_number_re = re.compile(r"([0-9]+\.[0-9]+)") -_pounds_re = re.compile(r"£([0-9\,]*[0-9]+)") -_dollars_re = re.compile(r"\$([0-9\.\,]*[0-9]+)") -_ordinal_re = re.compile(r"[0-9]+(st|nd|rd|th)") -_number_re = re.compile(r"[0-9]+") - - -def _remove_commas(m): - return m.group(1).replace(",", "") - - -def _expand_decimal_point(m): - return m.group(1).replace(".", " point ") - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split(".") - if len(parts) > 2: - return match + " dollars" # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit 
= "dollar" if dollars == 1 else "dollars" - cent_unit = "cent" if cents == 1 else "cents" - return "%s %s, %s %s" % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = "dollar" if dollars == 1 else "dollars" - return "%s %s" % (dollars, dollar_unit) - elif cents: - cent_unit = "cent" if cents == 1 else "cents" - return "%s %s" % (cents, cent_unit) - else: - return "zero dollars" - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return "two thousand" - elif num > 2000 and num < 2010: - return "two thousand " + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + " hundred" - else: - return _inflect.number_to_words(num, andword="", zero="oh", group=2).replace(", ", " ") - else: - return _inflect.number_to_words(num, andword="") - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, _remove_commas, text) - text = re.sub(_pounds_re, r"\1 pounds", text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, _expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text diff --git a/spaces/keras-io/integrated_gradients/app.py b/spaces/keras-io/integrated_gradients/app.py deleted file mode 100644 index 48cd09982a6e33a1fe3e77debfe9879def86bc9f..0000000000000000000000000000000000000000 --- a/spaces/keras-io/integrated_gradients/app.py +++ /dev/null @@ -1,400 +0,0 @@ -import gradio as gr -import numpy as np -import matplotlib.pyplot as plt -from scipy import ndimage -from IPython.display import Image - -import tensorflow as tf -from tensorflow import keras -from tensorflow.keras import layers -from tensorflow.keras.applications import xception - -# Size of the input image -img_size = (299, 299, 3) - -# Load Xception model with imagenet weights -model = xception.Xception(weights="imagenet") - -# The local path to our target image -img_path = keras.utils.get_file("elephant.jpg", "https://i.imgur.com/Bvro0YD.png") - -def get_gradients(img_input, top_pred_idx): - """Computes the gradients of outputs w.r.t input image. - - Args: - img_input: 4D image tensor - top_pred_idx: Predicted label for the input image - - Returns: - Gradients of the predictions w.r.t img_input - """ - images = tf.cast(img_input, tf.float32) - - with tf.GradientTape() as tape: - tape.watch(images) - preds = model(images) - top_class = preds[:, top_pred_idx] - - grads = tape.gradient(top_class, images) - return grads - - -def get_integrated_gradients(img_input, top_pred_idx, baseline=None, num_steps=50): - """Computes Integrated Gradients for a predicted label. - - Args: - img_input (ndarray): Original image - top_pred_idx: Predicted label for the input image - baseline (ndarray): The baseline image to start with for interpolation - num_steps: Number of interpolation steps between the baseline - and the input used in the computation of integrated gradients. These - steps along determine the integral approximation error. By default, - num_steps is set to 50. - - Returns: - Integrated gradients w.r.t input image - """ - # If baseline is not provided, start with a black image - # having same size as the input image. - if baseline is None: - baseline = np.zeros(img_size).astype(np.float32) - else: - baseline = baseline.astype(np.float32) - - # 1. Do interpolation. 
- img_input = img_input.astype(np.float32) - interpolated_image = [ - baseline + (step / num_steps) * (img_input - baseline) - for step in range(num_steps + 1) - ] - interpolated_image = np.array(interpolated_image).astype(np.float32) - - # 2. Preprocess the interpolated images - interpolated_image = xception.preprocess_input(interpolated_image) - - # 3. Get the gradients - grads = [] - for i, img in enumerate(interpolated_image): - img = tf.expand_dims(img, axis=0) - grad = get_gradients(img, top_pred_idx=top_pred_idx) - grads.append(grad[0]) - grads = tf.convert_to_tensor(grads, dtype=tf.float32) - - # 4. Approximate the integral using the trapezoidal rule - grads = (grads[:-1] + grads[1:]) / 2.0 - avg_grads = tf.reduce_mean(grads, axis=0) - - # 5. Calculate integrated gradients and return - integrated_grads = (img_input - baseline) * avg_grads - return integrated_grads - - -def random_baseline_integrated_gradients( - img_input, top_pred_idx, num_steps=50, num_runs=2 -): - """Generates a number of random baseline images. - - Args: - img_input (ndarray): 3D image - top_pred_idx: Predicted label for the input image - num_steps: Number of interpolation steps between the baseline - and the input used in the computation of integrated gradients. These - steps along determine the integral approximation error. By default, - num_steps is set to 50. - num_runs: number of baseline images to generate - - Returns: - Averaged integrated gradients for `num_runs` baseline images - """ - # 1. List to keep track of Integrated Gradients (IG) for all the images - integrated_grads = [] - - # 2. Get the integrated gradients for all the baselines - for run in range(num_runs): - baseline = np.random.random(img_size) * 255 - igrads = get_integrated_gradients( - img_input=img_input, - top_pred_idx=top_pred_idx, - baseline=baseline, - num_steps=num_steps, - ) - integrated_grads.append(igrads) - - # 3. Return the average integrated gradients for the image - integrated_grads = tf.convert_to_tensor(integrated_grads) - return tf.reduce_mean(integrated_grads, axis=0) - -class GradVisualizer: - """Plot gradients of the outputs w.r.t an input image.""" - - def __init__(self, positive_channel=None, negative_channel=None): - if positive_channel is None: - self.positive_channel = [0, 255, 0] - else: - self.positive_channel = positive_channel - - if negative_channel is None: - self.negative_channel = [255, 0, 0] - else: - self.negative_channel = negative_channel - - def apply_polarity(self, attributions, polarity): - if polarity == "positive": - return np.clip(attributions, 0, 1) - else: - return np.clip(attributions, -1, 0) - - def apply_linear_transformation( - self, - attributions, - clip_above_percentile=99.9, - clip_below_percentile=70.0, - lower_end=0.2, - ): - # 1. Get the thresholds - m = self.get_thresholded_attributions( - attributions, percentage=100 - clip_above_percentile - ) - e = self.get_thresholded_attributions( - attributions, percentage=100 - clip_below_percentile - ) - - # 2. Transform the attributions by a linear function f(x) = a*x + b such that - # f(m) = 1.0 and f(e) = lower_end - transformed_attributions = (1 - lower_end) * (np.abs(attributions) - e) / ( - m - e - ) + lower_end - - # 3. Make sure that the sign of transformed attributions is the same as original attributions - transformed_attributions *= np.sign(attributions) - - # 4. Only keep values that are bigger than the lower_end - transformed_attributions *= transformed_attributions >= lower_end - - # 5. 
Clip values and return - transformed_attributions = np.clip(transformed_attributions, 0.0, 1.0) - return transformed_attributions - - def get_thresholded_attributions(self, attributions, percentage): - if percentage == 100.0: - return np.min(attributions) - - # 1. Flatten the attributions - flatten_attr = attributions.flatten() - - # 2. Get the sum of the attributions - total = np.sum(flatten_attr) - - # 3. Sort the attributions from largest to smallest. - sorted_attributions = np.sort(np.abs(flatten_attr))[::-1] - - # 4. Calculate the percentage of the total sum that each attribution - # and the values about it contribute. - cum_sum = 100.0 * np.cumsum(sorted_attributions) / total - - # 5. Threshold the attributions by the percentage - indices_to_consider = np.where(cum_sum >= percentage)[0][0] - - # 6. Select the desired attributions and return - attributions = sorted_attributions[indices_to_consider] - return attributions - - def binarize(self, attributions, threshold=0.001): - return attributions > threshold - - def morphological_cleanup_fn(self, attributions, structure=np.ones((4, 4))): - closed = ndimage.grey_closing(attributions, structure=structure) - opened = ndimage.grey_opening(closed, structure=structure) - return opened - - def draw_outlines( - self, attributions, percentage=90, connected_component_structure=np.ones((3, 3)) - ): - # 1. Binarize the attributions. - attributions = self.binarize(attributions) - - # 2. Fill the gaps - attributions = ndimage.binary_fill_holes(attributions) - - # 3. Compute connected components - connected_components, num_comp = ndimage.measurements.label( - attributions, structure=connected_component_structure - ) - - # 4. Sum up the attributions for each component - total = np.sum(attributions[connected_components > 0]) - component_sums = [] - for comp in range(1, num_comp + 1): - mask = connected_components == comp - component_sum = np.sum(attributions[mask]) - component_sums.append((component_sum, mask)) - - # 5. Compute the percentage of top components to keep - sorted_sums_and_masks = sorted(component_sums, key=lambda x: x[0], reverse=True) - sorted_sums = list(zip(*sorted_sums_and_masks))[0] - cumulative_sorted_sums = np.cumsum(sorted_sums) - cutoff_threshold = percentage * total / 100 - cutoff_idx = np.where(cumulative_sorted_sums >= cutoff_threshold)[0][0] - if cutoff_idx > 2: - cutoff_idx = 2 - - # 6. Set the values for the kept components - border_mask = np.zeros_like(attributions) - for i in range(cutoff_idx + 1): - border_mask[sorted_sums_and_masks[i][1]] = 1 - - # 7. Make the mask hollow and show only the border - eroded_mask = ndimage.binary_erosion(border_mask, iterations=1) - border_mask[eroded_mask] = 0 - - # 8. Return the outlined mask - return border_mask - - def process_grads( - self, - image, - attributions, - polarity="positive", - clip_above_percentile=99.9, - clip_below_percentile=0, - morphological_cleanup=False, - structure=np.ones((3, 3)), - outlines=False, - outlines_component_percentage=90, - overlay=True, - ): - if polarity not in ["positive", "negative"]: - raise ValueError( - f""" Allowed polarity values: 'positive' or 'negative' - but provided {polarity}""" - ) - if clip_above_percentile < 0 or clip_above_percentile > 100: - raise ValueError("clip_above_percentile must be in [0, 100]") - - if clip_below_percentile < 0 or clip_below_percentile > 100: - raise ValueError("clip_below_percentile must be in [0, 100]") - - # 1. 
Apply polarity - if polarity == "positive": - attributions = self.apply_polarity(attributions, polarity=polarity) - channel = self.positive_channel - else: - attributions = self.apply_polarity(attributions, polarity=polarity) - attributions = np.abs(attributions) - channel = self.negative_channel - - # 2. Take average over the channels - attributions = np.average(attributions, axis=2) - - # 3. Apply linear transformation to the attributions - attributions = self.apply_linear_transformation( - attributions, - clip_above_percentile=clip_above_percentile, - clip_below_percentile=clip_below_percentile, - lower_end=0.0, - ) - - # 4. Cleanup - if morphological_cleanup: - attributions = self.morphological_cleanup_fn( - attributions, structure=structure - ) - # 5. Draw the outlines - if outlines: - attributions = self.draw_outlines( - attributions, percentage=outlines_component_percentage - ) - - # 6. Expand the channel axis and convert to RGB - attributions = np.expand_dims(attributions, 2) * channel - - # 7.Superimpose on the original image - if overlay: - attributions = np.clip((attributions * 0.8 + image), 0, 255) - return attributions - - def visualize( - self, - image, - gradients, - integrated_gradients, - polarity="positive", - clip_above_percentile=99.9, - clip_below_percentile=0, - morphological_cleanup=False, - structure=np.ones((3, 3)), - outlines=False, - outlines_component_percentage=90, - overlay=True, - figsize=(15, 8), - ): - # 1. Make two copies of the original image - img1 = np.copy(image) - img2 = np.copy(image) - - # 2. Process the normal gradients - grads_attr = self.process_grads( - image=img1, - attributions=gradients, - polarity=polarity, - clip_above_percentile=clip_above_percentile, - clip_below_percentile=clip_below_percentile, - morphological_cleanup=morphological_cleanup, - structure=structure, - outlines=outlines, - outlines_component_percentage=outlines_component_percentage, - overlay=overlay, - ) - - # 3. 
Process the integrated gradients - igrads_attr = self.process_grads( - image=img2, - attributions=integrated_gradients, - polarity=polarity, - clip_above_percentile=clip_above_percentile, - clip_below_percentile=clip_below_percentile, - morphological_cleanup=morphological_cleanup, - structure=structure, - outlines=outlines, - outlines_component_percentage=outlines_component_percentage, - overlay=overlay, - ) - - return igrads_attr.astype(np.uint8) - -def classify_image(image): - img = np.expand_dims(image, axis=0) - orig_img = np.copy(img[0]).astype(np.uint8) - img_processed = tf.cast(xception.preprocess_input(img), dtype=tf.float32) - preds = model.predict(img_processed) - top_pred_idx = tf.argmax(preds[0]) - print("Predicted:", top_pred_idx, xception.decode_predictions(preds, top=1)[0]) - grads = get_gradients(img_processed, top_pred_idx=top_pred_idx) - igrads = random_baseline_integrated_gradients( - np.copy(orig_img), top_pred_idx=top_pred_idx, num_steps=50, num_runs=2) - vis = GradVisualizer() - img_grads = vis.visualize( - image=orig_img, - gradients=grads[0].numpy(), - integrated_gradients=igrads.numpy(), - clip_above_percentile=99, - clip_below_percentile=0, - ) - return img_grads - -image = gr.inputs.Image(shape=(299,299)) -label = gr.outputs.Image() - -iface = gr.Interface(classify_image,image,label, - #outputs=[ - # gr.outputs.Textbox(label="Engine issue"), - # gr.outputs.Textbox(label="Engine issue score")], - examples=["elephant.jpg"], - title="Model interpretability with Integrated Gradients", - description = "Model interpretability with Integrated Gradients.", - article = "Author: Jónathan Heras. Based on the keras example from A_K_Nain" -# examples = ["sample.csv"], -) - - -iface.launch() diff --git a/spaces/kevinwang676/M4Singer/utils/pitch_utils.py b/spaces/kevinwang676/M4Singer/utils/pitch_utils.py deleted file mode 100644 index f7fd166abd3a03bac5909e498669b482447435cf..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/M4Singer/utils/pitch_utils.py +++ /dev/null @@ -1,76 +0,0 @@ -######### -# world -########## -import librosa -import numpy as np -import torch - -gamma = 0 -mcepInput = 3 # 0 for dB, 3 for magnitude -alpha = 0.45 -en_floor = 10 ** (-80 / 20) -FFT_SIZE = 2048 - - -f0_bin = 256 -f0_max = 1100.0 -f0_min = 50.0 -f0_mel_min = 1127 * np.log(1 + f0_min / 700) -f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - -def f0_to_coarse(f0): - is_torch = isinstance(f0, torch.Tensor) - f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1 - - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1 - f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(np.int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min()) - return f0_coarse - - -def norm_f0(f0, uv, hparams): - is_torch = isinstance(f0, torch.Tensor) - if hparams['pitch_norm'] == 'standard': - f0 = (f0 - hparams['f0_mean']) / hparams['f0_std'] - if hparams['pitch_norm'] == 'log': - f0 = torch.log2(f0) if is_torch else np.log2(f0) - if uv is not None and hparams['use_uv']: - f0[uv > 0] = 0 - return f0 - - -def norm_interp_f0(f0, hparams): - is_torch = isinstance(f0, torch.Tensor) - if is_torch: - device = f0.device - f0 = f0.data.cpu().numpy() - uv = f0 == 0 - f0 = norm_f0(f0, uv, hparams) - if sum(uv) == len(f0): - f0[uv] = 0 - elif sum(uv) > 0: - f0[uv] = np.interp(np.where(uv)[0], np.where(~uv)[0], 
f0[~uv]) - uv = torch.FloatTensor(uv) - f0 = torch.FloatTensor(f0) - if is_torch: - f0 = f0.to(device) - return f0, uv - - -def denorm_f0(f0, uv, hparams, pitch_padding=None, min=None, max=None): - if hparams['pitch_norm'] == 'standard': - f0 = f0 * hparams['f0_std'] + hparams['f0_mean'] - if hparams['pitch_norm'] == 'log': - f0 = 2 ** f0 - if min is not None: - f0 = f0.clamp(min=min) - if max is not None: - f0 = f0.clamp(max=max) - if uv is not None and hparams['use_uv']: - f0[uv > 0] = 0 - if pitch_padding is not None: - f0[pitch_padding] = 0 - return f0 diff --git a/spaces/kevinwang676/VITS2-Mandarin/text/japanese.py b/spaces/kevinwang676/VITS2-Mandarin/text/japanese.py deleted file mode 100644 index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VITS2-Mandarin/text/japanese.py +++ /dev/null @@ -1,153 +0,0 @@ -import re -from unidecode import unidecode -import pyopenjtalk - - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - -# List of (romaji, ipa) pairs for marks: -_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ts', 'ʦ'), - ('u', 'ɯ'), - ('j', 'ʥ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (romaji, ipa2) pairs for marks: -_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('u', 'ɯ'), - ('ʧ', 'tʃ'), - ('j', 'dʑ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text != '': - text += ' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil', 'pau']: - text += phoneme.replace('ch', 'ʧ').replace('sh', - 'ʃ').replace('cl', 'Q') - else: - continue - # n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = 
int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']: - a2_next = -1 - else: - a2_next = int( - re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i < len(marks): - text += unidecode(marks[i]).replace(' ', '') - return text - - -def get_real_sokuon(text): - for regex, replacement in _real_sokuon: - text = re.sub(regex, replacement, text) - return text - - -def get_real_hatsuon(text): - for regex, replacement in _real_hatsuon: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = re.sub( - r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa2(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa3(text): - text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace( - 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a') - text = re.sub( - r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text) - return text diff --git a/spaces/kevinwang676/Voice-Changer/vc_infer_pipeline.py b/spaces/kevinwang676/Voice-Changer/vc_infer_pipeline.py deleted file mode 100644 index 7261742c30f64df435ed3fdebaafd969e9563d98..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Voice-Changer/vc_infer_pipeline.py +++ /dev/null @@ -1,363 +0,0 @@ -import numpy as np, parselmouth, torch, pdb -from time import time as ttime -import torch.nn.functional as F -import scipy.signal as signal -import pyworld, os, traceback, faiss,librosa -from scipy import signal -from functools import lru_cache - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - -input_audio_path2wav={} -@lru_cache -def cache_harvest_f0(input_audio_path,fs,f0max,f0min,frame_period): - audio=input_audio_path2wav[input_audio_path] - f0, t = pyworld.harvest( - audio, - fs=fs, - f0_ceil=f0max, - f0_floor=f0min, - frame_period=frame_period, - ) - f0 = pyworld.stonemask(audio, f0, t, fs) - return f0 - -def change_rms(data1,sr1,data2,sr2,rate):#1是输入音频,2是输出音频,rate是2的占比 - # print(data1.max(),data2.max()) - rms1 = librosa.feature.rms(y=data1, frame_length=sr1//2*2, hop_length=sr1//2)#每半秒一个点 - rms2 = librosa.feature.rms(y=data2, frame_length=sr2//2*2, hop_length=sr2//2) - rms1=torch.from_numpy(rms1) - rms1=F.interpolate(rms1.unsqueeze(0), size=data2.shape[0],mode='linear').squeeze() - rms2=torch.from_numpy(rms2) - rms2=F.interpolate(rms2.unsqueeze(0), size=data2.shape[0],mode='linear').squeeze() - rms2=torch.max(rms2,torch.zeros_like(rms2)+1e-6) - data2*=(torch.pow(rms1,torch.tensor(1-rate))*torch.pow(rms2,torch.tensor(rate-1))).numpy() - return data2 - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # 
hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - def get_f0(self, input_audio_path,x, p_len, f0_up_key, f0_method,filter_radius, inp_f0=None): - global input_audio_path2wav - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - input_audio_path2wav[input_audio_path]=x.astype(np.double) - f0=cache_harvest_f0(input_audio_path,self.sr,f0_max,f0_min,10) - if(filter_radius>2): - f0 = signal.medfilt(f0, 3) - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(int) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - version, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9 if version == "v1" else 12, - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0])if version=="v1"else logits[0] - - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = 
F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0]) - .data.cpu() - .float() - .numpy() - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0(input_audio_path,audio_pad, p_len, f0_up_key, f0_method,filter_radius, inp_f0) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - if self.device == "mps": - pitchf = pitchf.astype(np.float32) - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - version, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - version, - 
)[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - version, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - version, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - if(rms_mix_rate!=1): - audio_opt=change_rms(audio,16000,audio_opt,tgt_sr,rms_mix_rate) - if(resample_sr>=16000 and tgt_sr!=resample_sr): - audio_opt = librosa.resample( - audio_opt, orig_sr=tgt_sr, target_sr=resample_sr - ) - audio_max=np.abs(audio_opt).max()/0.99 - max_int16=32768 - if(audio_max>1):max_int16/=audio_max - audio_opt=(audio_opt * max_int16).astype(np.int16) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/eval/__init__.py b/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/eval/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kingabzpro/savtadepth/heroku/DVC-heroku-deployment.md b/spaces/kingabzpro/savtadepth/heroku/DVC-heroku-deployment.md deleted file mode 100644 index 602a3aad0fd8f5fa01d85be45ffd1f36b457265b..0000000000000000000000000000000000000000 --- a/spaces/kingabzpro/savtadepth/heroku/DVC-heroku-deployment.md +++ /dev/null @@ -1,21 +0,0 @@ -We need to give Heroku the ability to pull in data from DVC upon app start up. We will install a [buildpack](https://elements.heroku.com/buildpacks/heroku/heroku-buildpack-apt) that allows the installation of apt-files and then define the Aptfile that contains a path to DVC. I.e., in the CLI run: - -``` -heroku buildpacks:add --index 1 heroku-community/apt -``` - -Then in your root project folder create a file called `Aptfile` that specifies the release of DVC you want installed, https://github.com/iterative/dvc/releases/download/2.8.3/dvc_2.8.3_amd64.deb - -Add the following code block to your **streamlit_app.py**: - -```python -import os - -if "DYNO" in os.environ and os.path.isdir(".dvc"): - os.system("dvc config core.no_scm true") - if os.system(f"dvc pull {model} {image}") != 0: - exit("dvc pull failed") - os.system("rm -r .dvc .apt/usr/lib/dvc") -``` - -Reference: [Heroku ML](https://github.com/GuilhermeBrejeiro/deploy_ML_model_Heroku_FastAPI) \ No newline at end of file diff --git a/spaces/koajoel/PolyFormer/bert/tokenization_bert.py b/spaces/koajoel/PolyFormer/bert/tokenization_bert.py deleted file mode 100644 index 972e1733163522359750dddedf6dea885085b2ca..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/bert/tokenization_bert.py +++ /dev/null @@ -1,545 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Tokenization classes.""" - - -import collections -import logging -import os -import unicodedata -from typing import List, Optional - -from .tokenization_utils import PreTrainedTokenizer, _is_control, _is_punctuation, _is_whitespace - - -logger = logging.getLogger(__name__) - -VOCAB_FILES_NAMES = {"vocab_file": "vocab.txt"} - -PRETRAINED_VOCAB_FILES_MAP = { - "vocab_file": { - "bert-base-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt", - "bert-large-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-vocab.txt", - "bert-base-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-vocab.txt", - "bert-large-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-vocab.txt", - "bert-base-multilingual-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-vocab.txt", - "bert-base-multilingual-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-vocab.txt", - "bert-base-chinese": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-vocab.txt", - "bert-base-german-cased": "https://int-deepset-models-bert.s3.eu-central-1.amazonaws.com/pytorch/bert-base-german-cased-vocab.txt", - "bert-large-uncased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-vocab.txt", - "bert-large-cased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-vocab.txt", - "bert-large-uncased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-vocab.txt", - "bert-large-cased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-finetuned-squad-vocab.txt", - "bert-base-cased-finetuned-mrpc": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-mrpc-vocab.txt", - "bert-base-german-dbmdz-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-vocab.txt", - "bert-base-german-dbmdz-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-vocab.txt", - "TurkuNLP/bert-base-finnish-cased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-cased-v1/vocab.txt", - "TurkuNLP/bert-base-finnish-uncased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-uncased-v1/vocab.txt", - "wietsedv/bert-base-dutch-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/wietsedv/bert-base-dutch-cased/vocab.txt", - } -} - -PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { - "bert-base-uncased": 512, - "bert-large-uncased": 512, - "bert-base-cased": 512, - "bert-large-cased": 512, - "bert-base-multilingual-uncased": 512, - "bert-base-multilingual-cased": 512, - "bert-base-chinese": 512, - "bert-base-german-cased": 512, - "bert-large-uncased-whole-word-masking": 512, - 
"bert-large-cased-whole-word-masking": 512, - "bert-large-uncased-whole-word-masking-finetuned-squad": 512, - "bert-large-cased-whole-word-masking-finetuned-squad": 512, - "bert-base-cased-finetuned-mrpc": 512, - "bert-base-german-dbmdz-cased": 512, - "bert-base-german-dbmdz-uncased": 512, - "TurkuNLP/bert-base-finnish-cased-v1": 512, - "TurkuNLP/bert-base-finnish-uncased-v1": 512, - "wietsedv/bert-base-dutch-cased": 512, -} - -PRETRAINED_INIT_CONFIGURATION = { - "bert-base-uncased": {"do_lower_case": True}, - "bert-large-uncased": {"do_lower_case": True}, - "bert-base-cased": {"do_lower_case": False}, - "bert-large-cased": {"do_lower_case": False}, - "bert-base-multilingual-uncased": {"do_lower_case": True}, - "bert-base-multilingual-cased": {"do_lower_case": False}, - "bert-base-chinese": {"do_lower_case": False}, - "bert-base-german-cased": {"do_lower_case": False}, - "bert-large-uncased-whole-word-masking": {"do_lower_case": True}, - "bert-large-cased-whole-word-masking": {"do_lower_case": False}, - "bert-large-uncased-whole-word-masking-finetuned-squad": {"do_lower_case": True}, - "bert-large-cased-whole-word-masking-finetuned-squad": {"do_lower_case": False}, - "bert-base-cased-finetuned-mrpc": {"do_lower_case": False}, - "bert-base-german-dbmdz-cased": {"do_lower_case": False}, - "bert-base-german-dbmdz-uncased": {"do_lower_case": True}, - "TurkuNLP/bert-base-finnish-cased-v1": {"do_lower_case": False}, - "TurkuNLP/bert-base-finnish-uncased-v1": {"do_lower_case": True}, - "wietsedv/bert-base-dutch-cased": {"do_lower_case": False}, -} - - -def load_vocab(vocab_file): - """Loads a vocabulary file into a dictionary.""" - vocab = collections.OrderedDict() - with open(vocab_file, "r", encoding="utf-8") as reader: - tokens = reader.readlines() - for index, token in enumerate(tokens): - token = token.rstrip("\n") - vocab[token] = index - return vocab - - -def whitespace_tokenize(text): - """Runs basic whitespace cleaning and splitting on a piece of text.""" - text = text.strip() - if not text: - return [] - tokens = text.split() - return tokens - - -class BertTokenizer(PreTrainedTokenizer): - r""" - Constructs a BERT tokenizer. Based on WordPiece. - - This tokenizer inherits from :class:`~transformers.PreTrainedTokenizer` which contains most of the methods. Users - should refer to the superclass for more information regarding methods. - - Args: - vocab_file (:obj:`string`): - File containing the vocabulary. - do_lower_case (:obj:`bool`, `optional`, defaults to :obj:`True`): - Whether to lowercase the input when tokenizing. - do_basic_tokenize (:obj:`bool`, `optional`, defaults to :obj:`True`): - Whether to do basic tokenization before WordPiece. - never_split (:obj:`Iterable`, `optional`, defaults to :obj:`None`): - Collection of tokens which will never be split during tokenization. Only has an effect when - :obj:`do_basic_tokenize=True` - unk_token (:obj:`string`, `optional`, defaults to "[UNK]"): - The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this - token instead. - sep_token (:obj:`string`, `optional`, defaults to "[SEP]"): - The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences - for sequence classification or for a text and a question for question answering. - It is also used as the last token of a sequence built with special tokens. 
- pad_token (:obj:`string`, `optional`, defaults to "[PAD]"): - The token used for padding, for example when batching sequences of different lengths. - cls_token (:obj:`string`, `optional`, defaults to "[CLS]"): - The classifier token which is used when doing sequence classification (classification of the whole - sequence instead of per-token classification). It is the first token of the sequence when built with - special tokens. - mask_token (:obj:`string`, `optional`, defaults to "[MASK]"): - The token used for masking values. This is the token used when training this model with masked language - modeling. This is the token which the model will try to predict. - tokenize_chinese_chars (:obj:`bool`, `optional`, defaults to :obj:`True`): - Whether to tokenize Chinese characters. - This should likely be deactivated for Japanese: - see: https://github.com/huggingface/transformers/issues/328 - """ - - vocab_files_names = VOCAB_FILES_NAMES - pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP - pretrained_init_configuration = PRETRAINED_INIT_CONFIGURATION - max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - - def __init__( - self, - vocab_file, - do_lower_case=True, - do_basic_tokenize=True, - never_split=None, - unk_token="[UNK]", - sep_token="[SEP]", - pad_token="[PAD]", - cls_token="[CLS]", - mask_token="[MASK]", - tokenize_chinese_chars=True, - **kwargs - ): - super().__init__( - unk_token=unk_token, - sep_token=sep_token, - pad_token=pad_token, - cls_token=cls_token, - mask_token=mask_token, - **kwargs, - ) - - if not os.path.isfile(vocab_file): - raise ValueError( - "Can't find a vocabulary file at path '{}'. To load the vocabulary from a Google pretrained " - "model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`".format(vocab_file) - ) - self.vocab = load_vocab(vocab_file) - self.ids_to_tokens = collections.OrderedDict([(ids, tok) for tok, ids in self.vocab.items()]) - self.do_basic_tokenize = do_basic_tokenize - if do_basic_tokenize: - self.basic_tokenizer = BasicTokenizer( - do_lower_case=do_lower_case, never_split=never_split, tokenize_chinese_chars=tokenize_chinese_chars - ) - self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab, unk_token=self.unk_token) - - @property - def vocab_size(self): - return len(self.vocab) - - def get_vocab(self): - return dict(self.vocab, **self.added_tokens_encoder) - - def _tokenize(self, text): - split_tokens = [] - if self.do_basic_tokenize: - for token in self.basic_tokenizer.tokenize(text, never_split=self.all_special_tokens): - - # If the token is part of the never_split set - if token in self.basic_tokenizer.never_split: - split_tokens.append(token) - else: - split_tokens += self.wordpiece_tokenizer.tokenize(token) - else: - split_tokens = self.wordpiece_tokenizer.tokenize(text) - return split_tokens - - def _convert_token_to_id(self, token): - """ Converts a token (str) in an id using the vocab. """ - return self.vocab.get(token, self.vocab.get(self.unk_token)) - - def _convert_id_to_token(self, index): - """Converts an index (integer) in a token (str) using the vocab.""" - return self.ids_to_tokens.get(index, self.unk_token) - - def convert_tokens_to_string(self, tokens): - """ Converts a sequence of tokens (string) in a single string. 
""" - out_string = " ".join(tokens).replace(" ##", "").strip() - return out_string - - def build_inputs_with_special_tokens( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None - ) -> List[int]: - """ - Build model inputs from a sequence or a pair of sequence for sequence classification tasks - by concatenating and adding special tokens. - A BERT sequence has the following format: - - - single sequence: ``[CLS] X [SEP]`` - - pair of sequences: ``[CLS] A [SEP] B [SEP]`` - - Args: - token_ids_0 (:obj:`List[int]`): - List of IDs to which the special tokens will be added - token_ids_1 (:obj:`List[int]`, `optional`, defaults to :obj:`None`): - Optional second list of IDs for sequence pairs. - - Returns: - :obj:`List[int]`: list of `input IDs <../glossary.html#input-ids>`__ with the appropriate special tokens. - """ - if token_ids_1 is None: - return [self.cls_token_id] + token_ids_0 + [self.sep_token_id] - cls = [self.cls_token_id] - sep = [self.sep_token_id] - return cls + token_ids_0 + sep + token_ids_1 + sep - - def get_special_tokens_mask( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False - ) -> List[int]: - """ - Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding - special tokens using the tokenizer ``prepare_for_model`` method. - - Args: - token_ids_0 (:obj:`List[int]`): - List of ids. - token_ids_1 (:obj:`List[int]`, `optional`, defaults to :obj:`None`): - Optional second list of IDs for sequence pairs. - already_has_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`): - Set to True if the token list is already formatted with special tokens for the model - - Returns: - :obj:`List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. - """ - - if already_has_special_tokens: - if token_ids_1 is not None: - raise ValueError( - "You should not supply a second sequence if the provided sequence of " - "ids is already formated with special tokens for the model." - ) - return list(map(lambda x: 1 if x in [self.sep_token_id, self.cls_token_id] else 0, token_ids_0)) - - if token_ids_1 is not None: - return [1] + ([0] * len(token_ids_0)) + [1] + ([0] * len(token_ids_1)) + [1] - return [1] + ([0] * len(token_ids_0)) + [1] - - def create_token_type_ids_from_sequences( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None - ) -> List[int]: - """ - Creates a mask from the two sequences passed to be used in a sequence-pair classification task. - A BERT sequence pair mask has the following format: - - :: - - 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 - | first sequence | second sequence | - - if token_ids_1 is None, only returns the first portion of the mask (0's). - - Args: - token_ids_0 (:obj:`List[int]`): - List of ids. - token_ids_1 (:obj:`List[int]`, `optional`, defaults to :obj:`None`): - Optional second list of IDs for sequence pairs. - - Returns: - :obj:`List[int]`: List of `token type IDs <../glossary.html#token-type-ids>`_ according to the given - sequence(s). - """ - sep = [self.sep_token_id] - cls = [self.cls_token_id] - if token_ids_1 is None: - return len(cls + token_ids_0 + sep) * [0] - return len(cls + token_ids_0 + sep) * [0] + len(token_ids_1 + sep) * [1] - - def save_vocabulary(self, vocab_path): - """ - Save the sentencepiece vocabulary (copy original file) and special tokens file to a directory. 
- - Args: - vocab_path (:obj:`str`): - The directory in which to save the vocabulary. - - Returns: - :obj:`Tuple(str)`: Paths to the files saved. - """ - index = 0 - if os.path.isdir(vocab_path): - vocab_file = os.path.join(vocab_path, VOCAB_FILES_NAMES["vocab_file"]) - else: - vocab_file = vocab_path - with open(vocab_file, "w", encoding="utf-8") as writer: - for token, token_index in sorted(self.vocab.items(), key=lambda kv: kv[1]): - if index != token_index: - logger.warning( - "Saving vocabulary to {}: vocabulary indices are not consecutive." - " Please check that the vocabulary is not corrupted!".format(vocab_file) - ) - index = token_index - writer.write(token + "\n") - index += 1 - return (vocab_file,) - - -class BasicTokenizer(object): - """Runs basic tokenization (punctuation splitting, lower casing, etc.).""" - - def __init__(self, do_lower_case=True, never_split=None, tokenize_chinese_chars=True): - """ Constructs a BasicTokenizer. - - Args: - **do_lower_case**: Whether to lower case the input. - **never_split**: (`optional`) list of str - Kept for backward compatibility purposes. - Now implemented directly at the base class level (see :func:`PreTrainedTokenizer.tokenize`) - List of token not to split. - **tokenize_chinese_chars**: (`optional`) boolean (default True) - Whether to tokenize Chinese characters. - This should likely be deactivated for Japanese: - see: https://github.com/huggingface/pytorch-pretrained-BERT/issues/328 - """ - if never_split is None: - never_split = [] - self.do_lower_case = do_lower_case - self.never_split = set(never_split) - self.tokenize_chinese_chars = tokenize_chinese_chars - - def tokenize(self, text, never_split=None): - """ Basic Tokenization of a piece of text. - Split on "white spaces" only, for sub-word tokenization, see WordPieceTokenizer. - - Args: - **never_split**: (`optional`) list of str - Kept for backward compatibility purposes. - Now implemented directly at the base class level (see :func:`PreTrainedTokenizer.tokenize`) - List of token not to split. - """ - # union() returns a new set by concatenating the two sets. - never_split = self.never_split.union(set(never_split)) if never_split else self.never_split - - # This was added on November 1st, 2018 for the multilingual and Chinese - # models. This is also applied to the English models now, but it doesn't - # matter since the English models were not trained on any Chinese data - # and generally don't have any Chinese data in them (there are Chinese - # characters in the vocabulary because Wikipedia does have some Chinese - # words in the English Wikipedia.). 
- if self.tokenize_chinese_chars: - text = self._tokenize_chinese_chars(text) - orig_tokens = whitespace_tokenize(text) - split_tokens = [] - for token in orig_tokens: - if self.do_lower_case and token not in never_split: - token = token.lower() - token = self._run_strip_accents(token) - split_tokens.extend(self._run_split_on_punc(token, never_split)) - - output_tokens = whitespace_tokenize(" ".join(split_tokens)) - return output_tokens - - def _run_strip_accents(self, text): - """Strips accents from a piece of text.""" - text = unicodedata.normalize("NFD", text) - output = [] - for char in text: - cat = unicodedata.category(char) - if cat == "Mn": - continue - output.append(char) - return "".join(output) - - def _run_split_on_punc(self, text, never_split=None): - """Splits punctuation on a piece of text.""" - if never_split is not None and text in never_split: - return [text] - chars = list(text) - i = 0 - start_new_word = True - output = [] - while i < len(chars): - char = chars[i] - if _is_punctuation(char): - output.append([char]) - start_new_word = True - else: - if start_new_word: - output.append([]) - start_new_word = False - output[-1].append(char) - i += 1 - - return ["".join(x) for x in output] - - def _tokenize_chinese_chars(self, text): - """Adds whitespace around any CJK character.""" - output = [] - for char in text: - cp = ord(char) - if self._is_chinese_char(cp): - output.append(" ") - output.append(char) - output.append(" ") - else: - output.append(char) - return "".join(output) - - def _is_chinese_char(self, cp): - """Checks whether CP is the codepoint of a CJK character.""" - # This defines a "chinese character" as anything in the CJK Unicode block: - # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block) - # - # Note that the CJK Unicode block is NOT all Japanese and Korean characters, - # despite its name. The modern Korean Hangul alphabet is a different block, - # as is Japanese Hiragana and Katakana. Those alphabets are used to write - # space-separated words, so they are not treated specially and handled - # like the all of the other languages. - if ( - (cp >= 0x4E00 and cp <= 0x9FFF) - or (cp >= 0x3400 and cp <= 0x4DBF) # - or (cp >= 0x20000 and cp <= 0x2A6DF) # - or (cp >= 0x2A700 and cp <= 0x2B73F) # - or (cp >= 0x2B740 and cp <= 0x2B81F) # - or (cp >= 0x2B820 and cp <= 0x2CEAF) # - or (cp >= 0xF900 and cp <= 0xFAFF) - or (cp >= 0x2F800 and cp <= 0x2FA1F) # - ): # - return True - - return False - - def _clean_text(self, text): - """Performs invalid character removal and whitespace cleanup on text.""" - output = [] - for char in text: - cp = ord(char) - if cp == 0 or cp == 0xFFFD or _is_control(char): - continue - if _is_whitespace(char): - output.append(" ") - else: - output.append(char) - return "".join(output) - - -class WordpieceTokenizer(object): - """Runs WordPiece tokenization.""" - - def __init__(self, vocab, unk_token, max_input_chars_per_word=100): - self.vocab = vocab - self.unk_token = unk_token - self.max_input_chars_per_word = max_input_chars_per_word - - def tokenize(self, text): - """Tokenizes a piece of text into its word pieces. - - This uses a greedy longest-match-first algorithm to perform tokenization - using the given vocabulary. - - For example: - input = "unaffable" - output = ["un", "##aff", "##able"] - - Args: - text: A single token or whitespace separated tokens. This should have - already been passed through `BasicTokenizer`. - - Returns: - A list of wordpiece tokens. 
- """ - - output_tokens = [] - for token in whitespace_tokenize(text): - chars = list(token) - if len(chars) > self.max_input_chars_per_word: - output_tokens.append(self.unk_token) - continue - - is_bad = False - start = 0 - sub_tokens = [] - while start < len(chars): - end = len(chars) - cur_substr = None - while start < end: - substr = "".join(chars[start:end]) - if start > 0: - substr = "##" + substr - if substr in self.vocab: - cur_substr = substr - break - end -= 1 - if cur_substr is None: - is_bad = True - break - sub_tokens.append(cur_substr) - start = end - - if is_bad: - output_tokens.append(self.unk_token) - else: - output_tokens.extend(sub_tokens) - return output_tokens - diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/roberta/multiprocessing_bpe_encoder.py b/spaces/koajoel/PolyFormer/fairseq/examples/roberta/multiprocessing_bpe_encoder.py deleted file mode 100644 index 43fe0451bf4d5762d734314075b1402c2a8db2bb..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/roberta/multiprocessing_bpe_encoder.py +++ /dev/null @@ -1,130 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import contextlib -import sys -from collections import Counter -from multiprocessing import Pool - -from fairseq.data.encoders.gpt2_bpe import get_encoder - - -def main(): - """ - Helper script to encode raw text with the GPT-2 BPE using multiple processes. - - The encoder.json and vocab.bpe files can be obtained here: - - https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json - - https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe - """ - parser = argparse.ArgumentParser() - parser.add_argument( - "--encoder-json", - help="path to encoder.json", - ) - parser.add_argument( - "--vocab-bpe", - type=str, - help="path to vocab.bpe", - ) - parser.add_argument( - "--inputs", - nargs="+", - default=["-"], - help="input files to filter/encode", - ) - parser.add_argument( - "--outputs", - nargs="+", - default=["-"], - help="path to save encoded outputs", - ) - parser.add_argument( - "--keep-empty", - action="store_true", - help="keep empty lines", - ) - parser.add_argument("--workers", type=int, default=20) - args = parser.parse_args() - - assert len(args.inputs) == len( - args.outputs - ), "number of input and output paths should match" - - with contextlib.ExitStack() as stack: - inputs = [ - stack.enter_context(open(input, "r", encoding="utf-8")) - if input != "-" - else sys.stdin - for input in args.inputs - ] - outputs = [ - stack.enter_context(open(output, "w", encoding="utf-8")) - if output != "-" - else sys.stdout - for output in args.outputs - ] - - encoder = MultiprocessingEncoder(args) - pool = Pool(args.workers, initializer=encoder.initializer) - encoded_lines = pool.imap(encoder.encode_lines, zip(*inputs), 100) - - stats = Counter() - for i, (filt, enc_lines) in enumerate(encoded_lines, start=1): - if filt == "PASS": - for enc_line, output_h in zip(enc_lines, outputs): - print(enc_line, file=output_h) - else: - stats["num_filtered_" + filt] += 1 - if i % 10000 == 0: - print("processed {} lines".format(i), file=sys.stderr) - - for k, v in stats.most_common(): - print("[{}] filtered {} lines".format(k, v), file=sys.stderr) - - -class MultiprocessingEncoder(object): - def __init__(self, args): - self.args = args - - def initializer(self): - global bpe - 
bpe = get_encoder(self.args.encoder_json, self.args.vocab_bpe) - - def encode(self, line): - global bpe - ids = bpe.encode(line) - return list(map(str, ids)) - - def decode(self, tokens): - global bpe - return bpe.decode(tokens) - - def encode_lines(self, lines): - """ - Encode a set of lines. All lines will be encoded together. - """ - enc_lines = [] - for line in lines: - line = line.strip() - if len(line) == 0 and not self.args.keep_empty: - return ["EMPTY", None] - tokens = self.encode(line) - enc_lines.append(" ".join(tokens)) - return ["PASS", enc_lines] - - def decode_lines(self, lines): - dec_lines = [] - for line in lines: - tokens = map(int, line.strip().split()) - dec_lines.append(self.decode(tokens)) - return ["PASS", dec_lines] - - -if __name__ == "__main__": - main() diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/pens/cocoaPen.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/pens/cocoaPen.py deleted file mode 100644 index 5369c3097187b6929df58e93284199a1729ea275..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/pens/cocoaPen.py +++ /dev/null @@ -1,26 +0,0 @@ -from fontTools.pens.basePen import BasePen - - -__all__ = ["CocoaPen"] - - -class CocoaPen(BasePen): - def __init__(self, glyphSet, path=None): - BasePen.__init__(self, glyphSet) - if path is None: - from AppKit import NSBezierPath - - path = NSBezierPath.bezierPath() - self.path = path - - def _moveTo(self, p): - self.path.moveToPoint_(p) - - def _lineTo(self, p): - self.path.lineToPoint_(p) - - def _curveToOne(self, p1, p2, p3): - self.path.curveToPoint_controlPoint1_controlPoint2_(p3, p1, p2) - - def _closePath(self): - self.path.closePath() diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/themes/base.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/themes/base.py deleted file mode 100644 index 2c33b8079af6cb9d8d16fae9a8c430ecda8cc9e1..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/themes/base.py +++ /dev/null @@ -1,1807 +0,0 @@ -from __future__ import annotations - -import json -import re -import tempfile -import textwrap -from pathlib import Path -from typing import Iterable - -import huggingface_hub -import requests -import semantic_version as semver -from gradio_client.documentation import document, set_documentation_group -from huggingface_hub import CommitOperationAdd - -from gradio.themes.utils import ( - colors, - fonts, - get_matching_version, - get_theme_assets, - sizes, -) -from gradio.themes.utils.readme_content import README_CONTENT - -set_documentation_group("themes") - - -class ThemeClass: - def __init__(self): - self._stylesheets = [] - self.name = None - - def _get_theme_css(self): - css = {} - dark_css = {} - - for attr, val in self.__dict__.items(): - if attr.startswith("_"): - continue - if val is None: - if attr.endswith("_dark"): - dark_css[attr[:-5]] = None - continue - else: - raise ValueError( - f"Cannot set '{attr}' to None - only dark mode variables can be None." 
- ) - val = str(val) - pattern = r"(\*)([\w_]+)(\b)" - - def repl_func(match): - full_match = match.group(0) - if full_match.startswith("*") and full_match.endswith("_dark"): - raise ValueError( - f"Cannot refer '{attr}' to '{val}' - dark variable references are automatically used for dark mode attributes, so do not use the _dark suffix in the value." - ) - if ( - attr.endswith("_dark") - and full_match.startswith("*") - and attr[:-5] == full_match[1:] - ): - raise ValueError( - f"Cannot refer '{attr}' to '{val}' - if dark and light mode values are the same, set dark mode version to None." - ) - - word = match.group(2) - word = word.replace("_", "-") - return f"var(--{word})" - - val = re.sub(pattern, repl_func, val) - - attr = attr.replace("_", "-") - - if attr.endswith("-dark"): - attr = attr[:-5] - dark_css[attr] = val - else: - css[attr] = val - - for attr, val in css.items(): - if attr not in dark_css: - dark_css[attr] = val - - css_code = ( - ":root {\n" - + "\n".join([f" --{attr}: {val};" for attr, val in css.items()]) - + "\n}" - ) - dark_css_code = ( - ".dark {\n" - + "\n".join([f" --{attr}: {val};" for attr, val in dark_css.items()]) - + "\n}" - ) - - return f"{css_code}\n{dark_css_code}" - - def to_dict(self): - """Convert the theme into a python dictionary.""" - schema = {"theme": {}} - for prop in dir(self): - if ( - not prop.startswith("_") - or prop.startswith("_font") - or prop == "_stylesheets" - or prop == "name" - ) and isinstance(getattr(self, prop), (list, str)): - schema["theme"][prop] = getattr(self, prop) - return schema - - @classmethod - def load(cls, path: str) -> ThemeClass: - """Load a theme from a json file. - - Parameters: - path: The filepath to read. - """ - with open(path) as fp: - return cls.from_dict(json.load(fp, object_hook=fonts.as_font)) - - @classmethod - def from_dict(cls, theme: dict[str, dict[str, str]]) -> ThemeClass: - """Create a theme instance from a dictionary representation. - - Parameters: - theme: The dictionary representation of the theme. - """ - new_theme = cls() - for prop, value in theme["theme"].items(): - setattr(new_theme, prop, value) - - # For backwards compatibility, load attributes in base theme not in the loaded theme from the base theme. - base = Base() - for attr in base.__dict__: - if not attr.startswith("_") and not hasattr(new_theme, attr): - setattr(new_theme, attr, getattr(base, attr)) - - return new_theme - - def dump(self, filename: str): - """Write the theme to a json file. - - Parameters: - filename: The path to write the theme too - """ - Path(filename).write_text(json.dumps(self.to_dict(), cls=fonts.FontEncoder)) - - @classmethod - def from_hub(cls, repo_name: str, hf_token: str | None = None): - """Load a theme from the hub. - - This DOES NOT require a HuggingFace account for downloading publicly available themes. - - Parameters: - repo_name: string of the form /@. If a semantic version expression is omitted, the latest version will be fetched. - hf_token: HuggingFace Token. Only needed to download private themes. 
- """ - if "@" not in repo_name: - name, version = repo_name, None - else: - name, version = repo_name.split("@") - - api = huggingface_hub.HfApi(token=hf_token) - - try: - space_info = api.space_info(name) - except requests.HTTPError as e: - raise ValueError(f"The space {name} does not exist") from e - - assets = get_theme_assets(space_info) - matching_version = get_matching_version(assets, version) - - if not matching_version: - raise ValueError( - f"Cannot find a matching version for expression {version} " - f"from files {[f.filename for f in assets]}" - ) - - theme_file = huggingface_hub.hf_hub_download( - repo_id=name, - repo_type="space", - filename=f"themes/theme_schema@{matching_version.version}.json", - ) - theme = cls.load(theme_file) - theme.name = name - return theme - - @staticmethod - def _get_next_version(space_info: huggingface_hub.hf_api.SpaceInfo) -> str: - assets = get_theme_assets(space_info) - latest_version = max(assets, key=lambda asset: asset.version).version - return str(latest_version.next_patch()) - - @staticmethod - def _theme_version_exists( - space_info: huggingface_hub.hf_api.SpaceInfo, version: str - ) -> bool: - assets = get_theme_assets(space_info) - return any(a.version == semver.Version(version) for a in assets) - - def push_to_hub( - self, - repo_name: str, - org_name: str | None = None, - version: str | None = None, - hf_token: str | None = None, - theme_name: str | None = None, - description: str | None = None, - private: bool = False, - ): - """Upload a theme to the HuggingFace hub. - - This requires a HuggingFace account. - - Parameters: - repo_name: The name of the repository to store the theme assets, e.g. 'my_theme' or 'sunset'. - org_name: The name of the org to save the space in. If None (the default), the username corresponding to the logged in user, or hƒ_token is used. - version: A semantic version tag for theme. Bumping the version tag lets you publish updates to a theme without changing the look of applications that already loaded your theme. - hf_token: API token for your HuggingFace account - theme_name: Name for the name. If None, defaults to repo_name - description: A long form description to your theme. - """ - - from gradio import __version__ - - api = huggingface_hub.HfApi() - - if not hf_token: - try: - author = huggingface_hub.whoami()["name"] - except OSError as e: - raise ValueError( - "In order to push to hub, log in via `huggingface-cli login` " - "or provide a theme_token to push_to_hub. For more information " - "see https://huggingface.co/docs/huggingface_hub/quick-start#login" - ) from e - else: - author = huggingface_hub.whoami(token=hf_token)["name"] - - space_id = f"{org_name or author}/{repo_name}" - - try: - space_info = api.space_info(space_id) - except requests.HTTPError: - space_info = None - - space_exists = space_info is not None - - # If no version, set the version to next patch release - if not version: - version = self._get_next_version(space_info) if space_exists else "0.0.1" - else: - _ = semver.Version(version) - - if space_exists and self._theme_version_exists(space_info, version): - raise ValueError( - f"The space {space_id} already has a " - f"theme with version {version}. See: themes/theme_schema@{version}.json. " - "To manually override this version, use the HuggingFace hub UI." 
- ) - - theme_name = theme_name or repo_name - - with tempfile.NamedTemporaryFile( - mode="w", delete=False, suffix=".json" - ) as css_file: - contents = self.to_dict() - contents["version"] = version - json.dump(contents, css_file, cls=fonts.FontEncoder) - with tempfile.NamedTemporaryFile(mode="w", delete=False) as readme_file: - readme_content = README_CONTENT.format( - theme_name=theme_name, - description=description or "Add a description of this theme here!", - author=author, - gradio_version=__version__, - ) - readme_file.write(textwrap.dedent(readme_content)) - with tempfile.NamedTemporaryFile(mode="w", delete=False) as app_file: - contents = (Path(__file__).parent / "app.py").read_text() - contents = re.sub( - r"theme=gr.themes.Default\(\)", - f"theme='{space_id}'", - contents, - ) - contents = re.sub(r"{THEME}", theme_name or repo_name, contents) - contents = re.sub(r"{AUTHOR}", org_name or author, contents) - contents = re.sub(r"{SPACE_NAME}", repo_name, contents) - app_file.write(contents) - - operations = [ - CommitOperationAdd( - path_in_repo=f"themes/theme_schema@{version}.json", - path_or_fileobj=css_file.name, - ), - CommitOperationAdd( - path_in_repo="README.md", path_or_fileobj=readme_file.name - ), - CommitOperationAdd(path_in_repo="app.py", path_or_fileobj=app_file.name), - ] - - huggingface_hub.create_repo( - space_id, - repo_type="space", - space_sdk="gradio", - token=hf_token, - exist_ok=True, - private=private, - ) - - api.create_commit( - repo_id=space_id, - commit_message="Updating theme", - repo_type="space", - operations=operations, - token=hf_token, - ) - url = f"https://huggingface.co/spaces/{space_id}" - print(f"See your theme here! {url}") - return url - - -@document("push_to_hub", "from_hub", "load", "dump", "from_dict", "to_dict") -class Base(ThemeClass): - def __init__( - self, - *, - primary_hue: colors.Color | str = colors.blue, - secondary_hue: colors.Color | str = colors.blue, - neutral_hue: colors.Color | str = colors.gray, - text_size: sizes.Size | str = sizes.text_md, - spacing_size: sizes.Size | str = sizes.spacing_md, - radius_size: sizes.Size | str = sizes.radius_md, - font: fonts.Font - | str - | Iterable[fonts.Font | str] = ( - fonts.GoogleFont("Source Sans Pro"), - "ui-sans-serif", - "system-ui", - "sans-serif", - ), - font_mono: fonts.Font - | str - | Iterable[fonts.Font | str] = ( - fonts.GoogleFont("IBM Plex Mono"), - "ui-monospace", - "Consolas", - "monospace", - ), - ): - """ - Parameters: - primary_hue: The primary hue of the theme. Load a preset, like gradio.themes.colors.green (or just the string "green"), or pass your own gradio.themes.utils.Color object. - secondary_hue: The secondary hue of the theme. Load a preset, like gradio.themes.colors.green (or just the string "green"), or pass your own gradio.themes.utils.Color object. - neutral_hue: The neutral hue of the theme, used . Load a preset, like gradio.themes.colors.green (or just the string "green"), or pass your own gradio.themes.utils.Color object. - text_size: The size of the text. Load a preset, like gradio.themes.sizes.text_sm (or just the string "sm"), or pass your own gradio.themes.utils.Size object. - spacing_size: The size of the spacing. Load a preset, like gradio.themes.sizes.spacing_sm (or just the string "sm"), or pass your own gradio.themes.utils.Size object. - radius_size: The radius size of corners. Load a preset, like gradio.themes.sizes.radius_sm (or just the string "sm"), or pass your own gradio.themes.utils.Size object. 
- font: The primary font to use for the theme. Pass a string for a system font, or a gradio.themes.font.GoogleFont object to load a font from Google Fonts. Pass a list of fonts for fallbacks. - font_mono: The monospace font to use for the theme, applies to code. Pass a string for a system font, or a gradio.themes.font.GoogleFont object to load a font from Google Fonts. Pass a list of fonts for fallbacks. - """ - - self.name = "base" - - def expand_shortcut(shortcut, mode="color", prefix=None): - if not isinstance(shortcut, str): - return shortcut - if mode == "color": - for color in colors.Color.all: - if color.name == shortcut: - return color - raise ValueError(f"Color shortcut {shortcut} not found.") - elif mode == "size": - for size in sizes.Size.all: - if size.name == f"{prefix}_{shortcut}": - return size - raise ValueError(f"Size shortcut {shortcut} not found.") - - primary_hue = expand_shortcut(primary_hue, mode="color") - secondary_hue = expand_shortcut(secondary_hue, mode="color") - neutral_hue = expand_shortcut(neutral_hue, mode="color") - text_size = expand_shortcut(text_size, mode="size", prefix="text") - spacing_size = expand_shortcut(spacing_size, mode="size", prefix="spacing") - radius_size = expand_shortcut(radius_size, mode="size", prefix="radius") - - # Hue ranges - self.primary_50 = primary_hue.c50 - self.primary_100 = primary_hue.c100 - self.primary_200 = primary_hue.c200 - self.primary_300 = primary_hue.c300 - self.primary_400 = primary_hue.c400 - self.primary_500 = primary_hue.c500 - self.primary_600 = primary_hue.c600 - self.primary_700 = primary_hue.c700 - self.primary_800 = primary_hue.c800 - self.primary_900 = primary_hue.c900 - self.primary_950 = primary_hue.c950 - - self.secondary_50 = secondary_hue.c50 - self.secondary_100 = secondary_hue.c100 - self.secondary_200 = secondary_hue.c200 - self.secondary_300 = secondary_hue.c300 - self.secondary_400 = secondary_hue.c400 - self.secondary_500 = secondary_hue.c500 - self.secondary_600 = secondary_hue.c600 - self.secondary_700 = secondary_hue.c700 - self.secondary_800 = secondary_hue.c800 - self.secondary_900 = secondary_hue.c900 - self.secondary_950 = secondary_hue.c950 - - self.neutral_50 = neutral_hue.c50 - self.neutral_100 = neutral_hue.c100 - self.neutral_200 = neutral_hue.c200 - self.neutral_300 = neutral_hue.c300 - self.neutral_400 = neutral_hue.c400 - self.neutral_500 = neutral_hue.c500 - self.neutral_600 = neutral_hue.c600 - self.neutral_700 = neutral_hue.c700 - self.neutral_800 = neutral_hue.c800 - self.neutral_900 = neutral_hue.c900 - self.neutral_950 = neutral_hue.c950 - - # Spacing - self.spacing_xxs = spacing_size.xxs - self.spacing_xs = spacing_size.xs - self.spacing_sm = spacing_size.sm - self.spacing_md = spacing_size.md - self.spacing_lg = spacing_size.lg - self.spacing_xl = spacing_size.xl - self.spacing_xxl = spacing_size.xxl - - self.radius_xxs = radius_size.xxs - self.radius_xs = radius_size.xs - self.radius_sm = radius_size.sm - self.radius_md = radius_size.md - self.radius_lg = radius_size.lg - self.radius_xl = radius_size.xl - self.radius_xxl = radius_size.xxl - - self.text_xxs = text_size.xxs - self.text_xs = text_size.xs - self.text_sm = text_size.sm - self.text_md = text_size.md - self.text_lg = text_size.lg - self.text_xl = text_size.xl - self.text_xxl = text_size.xxl - - # Font - if not isinstance(font, Iterable): - font = [font] - self._font = [ - fontfam if isinstance(fontfam, fonts.Font) else fonts.Font(fontfam) - for fontfam in font - ] - if not isinstance(font_mono, Iterable): - 
font_mono = [font_mono] - self._font_mono = [ - fontfam if isinstance(fontfam, fonts.Font) else fonts.Font(fontfam) - for fontfam in font_mono - ] - self.font = ", ".join(str(font) for font in self._font) - self.font_mono = ", ".join(str(font) for font in self._font_mono) - - self._stylesheets = [] - for font in self._font + self._font_mono: - font_stylesheet = font.stylesheet() - if font_stylesheet: - self._stylesheets.append(font_stylesheet) - - self.set() - - def set( - self, - *, - # Body Attributes: These set set the values for the entire body of the app. - body_background_fill=None, - body_background_fill_dark=None, - body_text_color=None, - body_text_color_dark=None, - body_text_size=None, - body_text_color_subdued=None, - body_text_color_subdued_dark=None, - body_text_weight=None, - embed_radius=None, - # Element Colors: These set the colors for common elements. - background_fill_primary=None, - background_fill_primary_dark=None, - background_fill_secondary=None, - background_fill_secondary_dark=None, - border_color_accent=None, - border_color_accent_dark=None, - border_color_primary=None, - border_color_primary_dark=None, - color_accent=None, - color_accent_soft=None, - color_accent_soft_dark=None, - # Text: This sets the text styling for text elements. - link_text_color=None, - link_text_color_dark=None, - link_text_color_active=None, - link_text_color_active_dark=None, - link_text_color_hover=None, - link_text_color_hover_dark=None, - link_text_color_visited=None, - link_text_color_visited_dark=None, - prose_text_size=None, - prose_text_weight=None, - prose_header_text_weight=None, - # Shadows: These set the high-level shadow rendering styles. These variables are often referenced by other component-specific shadow variables. - shadow_drop=None, - shadow_drop_lg=None, - shadow_inset=None, - shadow_spread=None, - shadow_spread_dark=None, - # Layout Atoms: These set the style for common layout elements, such as the blocks that wrap components. 
- block_background_fill=None, - block_background_fill_dark=None, - block_border_color=None, - block_border_color_dark=None, - block_border_width=None, - block_border_width_dark=None, - block_info_text_color=None, - block_info_text_color_dark=None, - block_info_text_size=None, - block_info_text_weight=None, - block_label_background_fill=None, - block_label_background_fill_dark=None, - block_label_border_color=None, - block_label_border_color_dark=None, - block_label_border_width=None, - block_label_border_width_dark=None, - block_label_shadow=None, - block_label_text_color=None, - block_label_text_color_dark=None, - block_label_margin=None, - block_label_padding=None, - block_label_radius=None, - block_label_right_radius=None, - block_label_text_size=None, - block_label_text_weight=None, - block_padding=None, - block_radius=None, - block_shadow=None, - block_shadow_dark=None, - block_title_background_fill=None, - block_title_background_fill_dark=None, - block_title_border_color=None, - block_title_border_color_dark=None, - block_title_border_width=None, - block_title_border_width_dark=None, - block_title_text_color=None, - block_title_text_color_dark=None, - block_title_padding=None, - block_title_radius=None, - block_title_text_size=None, - block_title_text_weight=None, - container_radius=None, - form_gap_width=None, - layout_gap=None, - panel_background_fill=None, - panel_background_fill_dark=None, - panel_border_color=None, - panel_border_color_dark=None, - panel_border_width=None, - panel_border_width_dark=None, - section_header_text_size=None, - section_header_text_weight=None, - # Component Atoms: These set the style for elements within components. - chatbot_code_background_color=None, - chatbot_code_background_color_dark=None, - checkbox_background_color=None, - checkbox_background_color_dark=None, - checkbox_background_color_focus=None, - checkbox_background_color_focus_dark=None, - checkbox_background_color_hover=None, - checkbox_background_color_hover_dark=None, - checkbox_background_color_selected=None, - checkbox_background_color_selected_dark=None, - checkbox_border_color=None, - checkbox_border_color_dark=None, - checkbox_border_color_focus=None, - checkbox_border_color_focus_dark=None, - checkbox_border_color_hover=None, - checkbox_border_color_hover_dark=None, - checkbox_border_color_selected=None, - checkbox_border_color_selected_dark=None, - checkbox_border_radius=None, - checkbox_border_width=None, - checkbox_border_width_dark=None, - checkbox_check=None, - radio_circle=None, - checkbox_shadow=None, - checkbox_label_background_fill=None, - checkbox_label_background_fill_dark=None, - checkbox_label_background_fill_hover=None, - checkbox_label_background_fill_hover_dark=None, - checkbox_label_background_fill_selected=None, - checkbox_label_background_fill_selected_dark=None, - checkbox_label_border_color=None, - checkbox_label_border_color_dark=None, - checkbox_label_border_color_hover=None, - checkbox_label_border_color_hover_dark=None, - checkbox_label_border_width=None, - checkbox_label_border_width_dark=None, - checkbox_label_gap=None, - checkbox_label_padding=None, - checkbox_label_shadow=None, - checkbox_label_text_size=None, - checkbox_label_text_weight=None, - checkbox_label_text_color=None, - checkbox_label_text_color_dark=None, - checkbox_label_text_color_selected=None, - checkbox_label_text_color_selected_dark=None, - error_background_fill=None, - error_background_fill_dark=None, - error_border_color=None, - error_border_color_dark=None, - 
error_border_width=None, - error_border_width_dark=None, - error_text_color=None, - error_text_color_dark=None, - input_background_fill=None, - input_background_fill_dark=None, - input_background_fill_focus=None, - input_background_fill_focus_dark=None, - input_background_fill_hover=None, - input_background_fill_hover_dark=None, - input_border_color=None, - input_border_color_dark=None, - input_border_color_focus=None, - input_border_color_focus_dark=None, - input_border_color_hover=None, - input_border_color_hover_dark=None, - input_border_width=None, - input_border_width_dark=None, - input_padding=None, - input_placeholder_color=None, - input_placeholder_color_dark=None, - input_radius=None, - input_shadow=None, - input_shadow_dark=None, - input_shadow_focus=None, - input_shadow_focus_dark=None, - input_text_size=None, - input_text_weight=None, - loader_color=None, - loader_color_dark=None, - slider_color=None, - slider_color_dark=None, - stat_background_fill=None, - stat_background_fill_dark=None, - table_border_color=None, - table_border_color_dark=None, - table_even_background_fill=None, - table_even_background_fill_dark=None, - table_odd_background_fill=None, - table_odd_background_fill_dark=None, - table_radius=None, - table_row_focus=None, - table_row_focus_dark=None, - # Buttons: These set the style for buttons. - button_border_width=None, - button_border_width_dark=None, - button_shadow=None, - button_shadow_active=None, - button_shadow_hover=None, - button_transition=None, - button_large_padding=None, - button_large_radius=None, - button_large_text_size=None, - button_large_text_weight=None, - button_small_padding=None, - button_small_radius=None, - button_small_text_size=None, - button_small_text_weight=None, - button_primary_background_fill=None, - button_primary_background_fill_dark=None, - button_primary_background_fill_hover=None, - button_primary_background_fill_hover_dark=None, - button_primary_border_color=None, - button_primary_border_color_dark=None, - button_primary_border_color_hover=None, - button_primary_border_color_hover_dark=None, - button_primary_text_color=None, - button_primary_text_color_dark=None, - button_primary_text_color_hover=None, - button_primary_text_color_hover_dark=None, - button_secondary_background_fill=None, - button_secondary_background_fill_dark=None, - button_secondary_background_fill_hover=None, - button_secondary_background_fill_hover_dark=None, - button_secondary_border_color=None, - button_secondary_border_color_dark=None, - button_secondary_border_color_hover=None, - button_secondary_border_color_hover_dark=None, - button_secondary_text_color=None, - button_secondary_text_color_dark=None, - button_secondary_text_color_hover=None, - button_secondary_text_color_hover_dark=None, - button_cancel_background_fill=None, - button_cancel_background_fill_dark=None, - button_cancel_background_fill_hover=None, - button_cancel_background_fill_hover_dark=None, - button_cancel_border_color=None, - button_cancel_border_color_dark=None, - button_cancel_border_color_hover=None, - button_cancel_border_color_hover_dark=None, - button_cancel_text_color=None, - button_cancel_text_color_dark=None, - button_cancel_text_color_hover=None, - button_cancel_text_color_hover_dark=None, - ) -> Base: - """ - Parameters: - body_background_fill: The background of the entire app. - body_background_fill_dark: The background of the entire app in dark mode. - body_text_color: The default text color. - body_text_color_dark: The default text color in dark mode. 
- body_text_size: The default text size. - body_text_color_subdued: The text color used for softer, less important text. - body_text_color_subdued_dark: The text color used for softer, less important text in dark mode. - body_text_weight: The default text weight. - embed_radius: The corner radius used for embedding when the app is embedded within a page. - background_fill_primary: The background primarily used for items placed directly on the page. - background_fill_primary_dark: The background primarily used for items placed directly on the page in dark mode. - background_fill_secondary: The background primarily used for items placed on top of another item. - background_fill_secondary_dark: The background primarily used for items placed on top of another item in dark mode. - border_color_accent: The border color used for accented items. - border_color_accent_dark: The border color used for accented items in dark mode. - border_color_primary: The border color primarily used for items placed directly on the page. - border_color_primary_dark: The border color primarily used for items placed directly on the page in dark mode. - color_accent: The color used for accented items. - color_accent_soft: The softer color used for accented items. - color_accent_soft_dark: The softer color used for accented items in dark mode. - link_text_color: The text color used for links. - link_text_color_dark: The text color used for links in dark mode. - link_text_color_active: The text color used for links when they are active. - link_text_color_active_dark: The text color used for links when they are active in dark mode. - link_text_color_hover: The text color used for links when they are hovered over. - link_text_color_hover_dark: The text color used for links when they are hovered over in dark mode. - link_text_color_visited: The text color used for links when they have been visited. - link_text_color_visited_dark: The text color used for links when they have been visited in dark mode. - prose_text_size: The text size used for markdown and other prose. - prose_text_weight: The text weight used for markdown and other prose. - prose_header_text_weight: The text weight of a header used for markdown and other prose. - shadow_drop: Drop shadow used by other shadowed items. - shadow_drop_lg: Larger drop shadow used by other shadowed items. - shadow_inset: Inset shadow used by other shadowed items. - shadow_spread: Size of shadow spread used by shadowed items. - shadow_spread_dark: Size of shadow spread used by shadowed items in dark mode. - block_background_fill: The background around an item. - block_background_fill_dark: The background around an item in dark mode. - block_border_color: The border color around an item. - block_border_color_dark: The border color around an item in dark mode. - block_border_width: The border width around an item. - block_border_width_dark: The border width around an item in dark mode. - block_info_text_color: The color of the info text. - block_info_text_color_dark: The color of the info text in dark mode. - block_info_text_size: The size of the info text. - block_info_text_weight: The weight of the info text. - block_label_background_fill: The background of the title label of a media element (e.g. image). - block_label_background_fill_dark: The background of the title label of a media element (e.g. image) in dark mode. - block_label_border_color: The border color of the title label of a media element (e.g. image). 
- block_label_border_color_dark: The border color of the title label of a media element (e.g. image) in dark mode. - block_label_border_width: The border width of the title label of a media element (e.g. image). - block_label_border_width_dark: The border width of the title label of a media element (e.g. image) in dark mode. - block_label_shadow: The shadow of the title label of a media element (e.g. image). - block_label_text_color: The text color of the title label of a media element (e.g. image). - block_label_text_color_dark: The text color of the title label of a media element (e.g. image) in dark mode. - block_label_margin: The margin of the title label of a media element (e.g. image) from its surrounding container. - block_label_padding: The padding of the title label of a media element (e.g. image). - block_label_radius: The corner radius of the title label of a media element (e.g. image). - block_label_right_radius: The corner radius of a right-aligned helper label. - block_label_text_size: The text size of the title label of a media element (e.g. image). - block_label_text_weight: The text weight of the title label of a media element (e.g. image). - block_padding: The padding around an item. - block_radius: The corner radius around an item. - block_shadow: The shadow under an item. - block_shadow_dark: The shadow under an item in dark mode. - block_title_background_fill: The background of the title of a form element (e.g. textbox). - block_title_background_fill_dark: The background of the title of a form element (e.g. textbox) in dark mode. - block_title_border_color: The border color of the title of a form element (e.g. textbox). - block_title_border_color_dark: The border color of the title of a form element (e.g. textbox) in dark mode. - block_title_border_width: The border width of the title of a form element (e.g. textbox). - block_title_border_width_dark: The border width of the title of a form element (e.g. textbox) in dark mode. - block_title_text_color: The text color of the title of a form element (e.g. textbox). - block_title_text_color_dark: The text color of the title of a form element (e.g. textbox) in dark mode. - block_title_padding: The padding of the title of a form element (e.g. textbox). - block_title_radius: The corner radius of the title of a form element (e.g. textbox). - block_title_text_size: The text size of the title of a form element (e.g. textbox). - block_title_text_weight: The text weight of the title of a form element (e.g. textbox). - container_radius: The corner radius of a layout component that holds other content. - form_gap_width: The border gap between form elements, (e.g. consecutive textboxes). - layout_gap: The gap between items within a row or column. - panel_background_fill: The background of a panel. - panel_background_fill_dark: The background of a panel in dark mode. - panel_border_color: The border color of a panel. - panel_border_color_dark: The border color of a panel in dark mode. - panel_border_width: The border width of a panel. - panel_border_width_dark: The border width of a panel in dark mode. - section_header_text_size: The text size of a section header (e.g. tab name). - section_header_text_weight: The text weight of a section header (e.g. tab name). - chatbot_code_background_color: The background color of code blocks in the chatbot. - chatbot_code_background_color_dark: The background color of code blocks in the chatbot in dark mode. - checkbox_background_color: The background of a checkbox square or radio circle. 
- checkbox_background_color_dark: The background of a checkbox square or radio circle in dark mode. - checkbox_background_color_focus: The background of a checkbox square or radio circle when focused. - checkbox_background_color_focus_dark: The background of a checkbox square or radio circle when focused in dark mode. - checkbox_background_color_hover: The background of a checkbox square or radio circle when hovered over. - checkbox_background_color_hover_dark: The background of a checkbox square or radio circle when hovered over in dark mode. - checkbox_background_color_selected: The background of a checkbox square or radio circle when selected. - checkbox_background_color_selected_dark: The background of a checkbox square or radio circle when selected in dark mode. - checkbox_border_color: The border color of a checkbox square or radio circle. - checkbox_border_color_dark: The border color of a checkbox square or radio circle in dark mode. - checkbox_border_color_focus: The border color of a checkbox square or radio circle when focused. - checkbox_border_color_focus_dark: The border color of a checkbox square or radio circle when focused in dark mode. - checkbox_border_color_hover: The border color of a checkbox square or radio circle when hovered over. - checkbox_border_color_hover_dark: The border color of a checkbox square or radio circle when hovered over in dark mode. - checkbox_border_color_selected: The border color of a checkbox square or radio circle when selected. - checkbox_border_color_selected_dark: The border color of a checkbox square or radio circle when selected in dark mode. - checkbox_border_radius: The corner radius of a checkbox square. - checkbox_border_width: The border width of a checkbox square or radio circle. - checkbox_border_width_dark: The border width of a checkbox square or radio circle in dark mode. - checkbox_check: The checkmark visual of a checkbox square. - radio_circle: The circle visual of a radio circle. - checkbox_shadow: The shadow of a checkbox square or radio circle. - checkbox_label_background_fill: The background of the surrounding button of a checkbox or radio element. - checkbox_label_background_fill_dark: The background of the surrounding button of a checkbox or radio element in dark mode. - checkbox_label_background_fill_hover: The background of the surrounding button of a checkbox or radio element when hovered over. - checkbox_label_background_fill_hover_dark: The background of the surrounding button of a checkbox or radio element when hovered over in dark mode. - checkbox_label_background_fill_selected: The background of the surrounding button of a checkbox or radio element when selected. - checkbox_label_background_fill_selected_dark: The background of the surrounding button of a checkbox or radio element when selected in dark mode. - checkbox_label_border_color: The border color of the surrounding button of a checkbox or radio element. - checkbox_label_border_color_dark: The border color of the surrounding button of a checkbox or radio element in dark mode. - checkbox_label_border_color_hover: The border color of the surrounding button of a checkbox or radio element when hovered over. - checkbox_label_border_color_hover_dark: The border color of the surrounding button of a checkbox or radio element when hovered over in dark mode. - checkbox_label_border_width: The border width of the surrounding button of a checkbox or radio element. 
- checkbox_label_border_width_dark: The border width of the surrounding button of a checkbox or radio element in dark mode. - checkbox_label_gap: The gap consecutive checkbox or radio elements. - checkbox_label_padding: The padding of the surrounding button of a checkbox or radio element. - checkbox_label_shadow: The shadow of the surrounding button of a checkbox or radio element. - checkbox_label_text_size: The text size of the label accompanying a checkbox or radio element. - checkbox_label_text_weight: The text weight of the label accompanying a checkbox or radio element. - checkbox_label_text_color: The text color of the label accompanying a checkbox or radio element. - checkbox_label_text_color_dark: The text color of the label accompanying a checkbox or radio element in dark mode. - checkbox_label_text_color_selected: The text color of the label accompanying a checkbox or radio element when selected. - checkbox_label_text_color_selected_dark: The text color of the label accompanying a checkbox or radio element when selected in dark mode. - error_background_fill: The background of an error message. - error_background_fill_dark: The background of an error message in dark mode. - error_border_color: The border color of an error message. - error_border_color_dark: The border color of an error message in dark mode. - error_border_width: The border width of an error message. - error_border_width_dark: The border width of an error message in dark mode. - error_text_color: The text color of an error message. - error_text_color_dark: The text color of an error message in dark mode. - input_background_fill: The background of an input field. - input_background_fill_dark: The background of an input field in dark mode. - input_background_fill_focus: The background of an input field when focused. - input_background_fill_focus_dark: The background of an input field when focused in dark mode. - input_background_fill_hover: The background of an input field when hovered over. - input_background_fill_hover_dark: The background of an input field when hovered over in dark mode. - input_border_color: The border color of an input field. - input_border_color_dark: The border color of an input field in dark mode. - input_border_color_focus: The border color of an input field when focused. - input_border_color_focus_dark: The border color of an input field when focused in dark mode. - input_border_color_hover: The border color of an input field when hovered over. - input_border_color_hover_dark: The border color of an input field when hovered over in dark mode. - input_border_width: The border width of an input field. - input_border_width_dark: The border width of an input field in dark mode. - input_padding: The padding of an input field. - input_placeholder_color: The placeholder text color of an input field. - input_placeholder_color_dark: The placeholder text color of an input field in dark mode. - input_radius: The corner radius of an input field. - input_shadow: The shadow of an input field. - input_shadow_dark: The shadow of an input field in dark mode. - input_shadow_focus: The shadow of an input field when focused. - input_shadow_focus_dark: The shadow of an input field when focused in dark mode. - input_text_size: The text size of an input field. - input_text_weight: The text weight of an input field. - loader_color: The color of the loading animation while a request is pending. - loader_color_dark: The color of the loading animation while a request is pending in dark mode. 
- slider_color: The color of the slider in a range element. - slider_color_dark: The color of the slider in a range element in dark mode. - stat_background_fill: The background used for stats visuals (e.g. confidence bars in label). - stat_background_fill_dark: The background used for stats visuals (e.g. confidence bars in label) in dark mode. - table_border_color: The border color of a table. - table_border_color_dark: The border color of a table in dark mode. - table_even_background_fill: The background of even rows in a table. - table_even_background_fill_dark: The background of even rows in a table in dark mode. - table_odd_background_fill: The background of odd rows in a table. - table_odd_background_fill_dark: The background of odd rows in a table in dark mode. - table_radius: The corner radius of a table. - table_row_focus: The background of a focused row in a table. - table_row_focus_dark: The background of a focused row in a table in dark mode. - button_border_width: The border width of a button. - button_border_width_dark: The border width of a button in dark mode. - button_cancel_background_fill: The background of a button of "cancel" variant. - button_cancel_background_fill_dark: The background of a button of "cancel" variant in dark mode. - button_cancel_background_fill_hover: The background of a button of "cancel" variant when hovered over. - button_cancel_background_fill_hover_dark: The background of a button of "cancel" variant when hovered over in dark mode. - button_cancel_border_color: The border color of a button of "cancel" variant. - button_cancel_border_color_dark: The border color of a button of "cancel" variant in dark mode. - button_cancel_border_color_hover: The border color of a button of "cancel" variant when hovered over. - button_cancel_border_color_hover_dark: The border color of a button of "cancel" variant when hovered over in dark mode. - button_cancel_text_color: The text color of a button of "cancel" variant. - button_cancel_text_color_dark: The text color of a button of "cancel" variant in dark mode. - button_cancel_text_color_hover: The text color of a button of "cancel" variant when hovered over. - button_cancel_text_color_hover_dark: The text color of a button of "cancel" variant when hovered over in dark mode. - button_large_padding: The padding of a button with the default "large" size. - button_large_radius: The corner radius of a button with the default "large" size. - button_large_text_size: The text size of a button with the default "large" size. - button_large_text_weight: The text weight of a button with the default "large" size. - button_primary_background_fill: The background of a button of "primary" variant. - button_primary_background_fill_dark: The background of a button of "primary" variant in dark mode. - button_primary_background_fill_hover: The background of a button of "primary" variant when hovered over. - button_primary_background_fill_hover_dark: The background of a button of "primary" variant when hovered over in dark mode. - button_primary_border_color: The border color of a button of "primary" variant. - button_primary_border_color_dark: The border color of a button of "primary" variant in dark mode. - button_primary_border_color_hover: The border color of a button of "primary" variant when hovered over. - button_primary_border_color_hover_dark: The border color of a button of "primary" variant when hovered over in dark mode. - button_primary_text_color: The text color of a button of "primary" variant. 
- button_primary_text_color_dark: The text color of a button of "primary" variant in dark mode. - button_primary_text_color_hover: The text color of a button of "primary" variant when hovered over. - button_primary_text_color_hover_dark: The text color of a button of "primary" variant when hovered over in dark mode. - button_secondary_background_fill: The background of a button of default "secondary" variant. - button_secondary_background_fill_dark: The background of a button of default "secondary" variant in dark mode. - button_secondary_background_fill_hover: The background of a button of default "secondary" variant when hovered over. - button_secondary_background_fill_hover_dark: The background of a button of default "secondary" variant when hovered over in dark mode. - button_secondary_border_color: The border color of a button of default "secondary" variant. - button_secondary_border_color_dark: The border color of a button of default "secondary" variant in dark mode. - button_secondary_border_color_hover: The border color of a button of default "secondary" variant when hovered over. - button_secondary_border_color_hover_dark: The border color of a button of default "secondary" variant when hovered over in dark mode. - button_secondary_text_color: The text color of a button of default "secondary" variant. - button_secondary_text_color_dark: The text color of a button of default "secondary" variant in dark mode. - button_secondary_text_color_hover: The text color of a button of default "secondary" variant when hovered over. - button_secondary_text_color_hover_dark: The text color of a button of default "secondary" variant when hovered over in dark mode. - button_shadow: The shadow under a button. - button_shadow_active: The shadow under a button when pressed. - button_shadow_hover: The shadow under a button when hovered over. - button_small_padding: The padding of a button set to "small" size. - button_small_radius: The corner radius of a button set to "small" size. - button_small_text_size: The text size of a button set to "small" size. - button_small_text_weight: The text weight of a button set to "small" size. - button_transition: The transition animation duration of a button between regular, hover, and focused states. 
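        Example (a minimal usage sketch; the specific values are illustrative): these
        variables are normally set on a theme instance and the result passed to a
        Blocks app. Both literal CSS values and "*variable" references to other theme
        variables are accepted, and any argument left as None keeps the value already
        defined on the theme or its subclass.

            import gradio as gr

            theme = gr.themes.Base().set(
                body_background_fill="*background_fill_primary",
                button_primary_background_fill="*primary_500",
                button_primary_text_color="white",
                input_border_width="1px",
            )

            with gr.Blocks(theme=theme) as demo:
                gr.Button("Styled button", variant="primary")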
- """ - - # Body - self.body_background_fill = body_background_fill or getattr( - self, "body_background_fill", "*background_fill_primary" - ) - self.body_background_fill_dark = body_background_fill_dark or getattr( - self, "body_background_fill_dark", "*background_fill_primary" - ) - self.body_text_color = body_text_color or getattr( - self, "body_text_color", "*neutral_800" - ) - self.body_text_color_dark = body_text_color_dark or getattr( - self, "body_text_color_dark", "*neutral_100" - ) - self.body_text_size = body_text_size or getattr( - self, "body_text_size", "*text_md" - ) - self.body_text_weight = body_text_weight or getattr( - self, "body_text_weight", "400" - ) - self.embed_radius = embed_radius or getattr(self, "embed_radius", "*radius_lg") - # Core Colors - self.color_accent = color_accent or getattr( - self, "color_accent", "*primary_500" - ) - self.color_accent_soft = color_accent_soft or getattr( - self, "color_accent_soft", "*primary_50" - ) - self.color_accent_soft_dark = color_accent_soft_dark or getattr( - self, "color_accent_soft_dark", "*neutral_700" - ) - self.background_fill_primary = background_fill_primary or getattr( - self, "background_primary", "white" - ) - self.background_fill_primary_dark = background_fill_primary_dark or getattr( - self, "background_primary_dark", "*neutral_950" - ) - self.background_fill_secondary = background_fill_secondary or getattr( - self, "background_secondary", "*neutral_50" - ) - self.background_fill_secondary_dark = background_fill_secondary_dark or getattr( - self, "background_secondary_dark", "*neutral_900" - ) - self.border_color_accent = border_color_accent or getattr( - self, "border_color_accent", "*primary_300" - ) - self.border_color_accent_dark = border_color_accent_dark or getattr( - self, "border_color_accent_dark", "*neutral_600" - ) - self.border_color_primary = border_color_primary or getattr( - self, "border_color_primary", "*neutral_200" - ) - self.border_color_primary_dark = border_color_primary_dark or getattr( - self, "border_color_primary_dark", "*neutral_700" - ) - # Text Colors - self.link_text_color = link_text_color or getattr( - self, "link_text_color", "*secondary_600" - ) - self.link_text_color_active = link_text_color_active or getattr( - self, "link_text_color_active", "*secondary_600" - ) - self.link_text_color_active_dark = link_text_color_active_dark or getattr( - self, "link_text_color_active_dark", "*secondary_500" - ) - self.link_text_color_dark = link_text_color_dark or getattr( - self, "link_text_color_dark", "*secondary_500" - ) - self.link_text_color_hover = link_text_color_hover or getattr( - self, "link_text_color_hover", "*secondary_700" - ) - self.link_text_color_hover_dark = link_text_color_hover_dark or getattr( - self, "link_text_color_hover_dark", "*secondary_400" - ) - self.link_text_color_visited = link_text_color_visited or getattr( - self, "link_text_color_visited", "*secondary_500" - ) - self.link_text_color_visited_dark = link_text_color_visited_dark or getattr( - self, "link_text_color_visited_dark", "*secondary_600" - ) - self.body_text_color_subdued = body_text_color_subdued or getattr( - self, "body_text_color_subdued", "*neutral_400" - ) - self.body_text_color_subdued_dark = body_text_color_subdued_dark or getattr( - self, "body_text_color_subdued_dark", "*neutral_400" - ) - # Shadows - self.shadow_drop = shadow_drop or getattr( - self, "shadow_drop", "rgba(0,0,0,0.05) 0px 1px 2px 0px" - ) - self.shadow_drop_lg = shadow_drop_lg or getattr( - self, - "shadow_drop_lg", - "0 
1px 3px 0 rgb(0 0 0 / 0.1), 0 1px 2px -1px rgb(0 0 0 / 0.1)", - ) - self.shadow_inset = shadow_inset or getattr( - self, "shadow_inset", "rgba(0,0,0,0.05) 0px 2px 4px 0px inset" - ) - self.shadow_spread = shadow_spread or getattr(self, "shadow_spread", "3px") - self.shadow_spread_dark = shadow_spread_dark or getattr( - self, "shadow_spread_dark", "1px" - ) - # Layout Atoms - self.block_background_fill = block_background_fill or getattr( - self, "block_background_fill", "*background_fill_primary" - ) - self.block_background_fill_dark = block_background_fill_dark or getattr( - self, "block_background_fill_dark", "*neutral_800" - ) - self.block_border_color = block_border_color or getattr( - self, "block_border_color", "*border_color_primary" - ) - self.block_border_color_dark = block_border_color_dark or getattr( - self, "block_border_color_dark", "*border_color_primary" - ) - self.block_border_width = block_border_width or getattr( - self, "block_border_width", "1px" - ) - self.block_border_width_dark = block_border_width_dark or getattr( - self, "block_border_width_dark", None - ) - self.block_info_text_color = block_info_text_color or getattr( - self, "block_info_text_color", "*body_text_color_subdued" - ) - self.block_info_text_color_dark = block_info_text_color_dark or getattr( - self, "block_info_text_color_dark", "*body_text_color_subdued" - ) - self.block_info_text_size = block_info_text_size or getattr( - self, "block_info_text_size", "*text_sm" - ) - self.block_info_text_weight = block_info_text_weight or getattr( - self, "block_info_text_weight", "400" - ) - self.block_label_background_fill = block_label_background_fill or getattr( - self, "block_label_background_fill", "*background_fill_primary" - ) - self.block_label_background_fill_dark = ( - block_label_background_fill_dark - or getattr( - self, "block_label_background_fill_dark", "*background_fill_secondary" - ) - ) - self.block_label_border_color = block_label_border_color or getattr( - self, "block_label_border_color", "*border_color_primary" - ) - self.block_label_border_color_dark = block_label_border_color_dark or getattr( - self, "block_label_border_color_dark", "*border_color_primary" - ) - self.block_label_border_width = block_label_border_width or getattr( - self, "block_label_border_width", "1px" - ) - self.block_label_border_width_dark = block_label_border_width_dark or getattr( - self, "block_label_border_width_dark", None - ) - self.block_label_shadow = block_label_shadow or getattr( - self, "block_label_shadow", "*block_shadow" - ) - self.block_label_text_color = block_label_text_color or getattr( - self, "block_label_text_color", "*neutral_500" - ) - self.block_label_text_color_dark = block_label_text_color_dark or getattr( - self, "block_label_text_color_dark", "*neutral_200" - ) - self.block_label_margin = block_label_margin or getattr( - self, "block_label_margin", "0" - ) - self.block_label_padding = block_label_padding or getattr( - self, "block_label_padding", "*spacing_sm *spacing_lg" - ) - self.block_label_radius = block_label_radius or getattr( - self, - "block_label_radius", - "calc(*radius_lg - 1px) 0 calc(*radius_lg - 1px) 0", - ) - self.block_label_right_radius = block_label_right_radius or getattr( - self, - "block_label_right_radius", - "0 calc(*radius_lg - 1px) 0 calc(*radius_lg - 1px)", - ) - self.block_label_text_size = block_label_text_size or getattr( - self, "block_label_text_size", "*text_sm" - ) - self.block_label_text_weight = block_label_text_weight or getattr( - self, 
"block_label_text_weight", "400" - ) - self.block_padding = block_padding or getattr( - self, "block_padding", "*spacing_xl calc(*spacing_xl + 2px)" - ) - self.block_radius = block_radius or getattr(self, "block_radius", "*radius_lg") - self.block_shadow = block_shadow or getattr(self, "block_shadow", "none") - self.block_shadow_dark = block_shadow_dark or getattr( - self, "block_shadow_dark", None - ) - self.block_title_background_fill = block_title_background_fill or getattr( - self, "block_title_background_fill", "none" - ) - self.block_title_background_fill_dark = ( - block_title_background_fill_dark - or getattr(self, "block_title_background_fill_dark", None) - ) - self.block_title_border_color = block_title_border_color or getattr( - self, "block_title_border_color", "none" - ) - self.block_title_border_color_dark = block_title_border_color_dark or getattr( - self, "block_title_border_color_dark", None - ) - self.block_title_border_width = block_title_border_width or getattr( - self, "block_title_border_width", "0px" - ) - self.block_title_border_width_dark = block_title_border_width_dark or getattr( - self, "block_title_border_width_dark", None - ) - self.block_title_text_color = block_title_text_color or getattr( - self, "block_title_text_color", "*neutral_500" - ) - self.block_title_text_color_dark = block_title_text_color_dark or getattr( - self, "block_title_text_color_dark", "*neutral_200" - ) - self.block_title_padding = block_title_padding or getattr( - self, "block_title_padding", "0" - ) - self.block_title_radius = block_title_radius or getattr( - self, "block_title_radius", "none" - ) - self.block_title_text_size = block_title_text_size or getattr( - self, "block_title_text_size", "*text_md" - ) - self.block_title_text_weight = block_title_text_weight or getattr( - self, "block_title_text_weight", "400" - ) - self.container_radius = container_radius or getattr( - self, "container_radius", "*radius_lg" - ) - self.form_gap_width = form_gap_width or getattr(self, "form_gap_width", "0px") - self.layout_gap = layout_gap or getattr(self, "layout_gap", "*spacing_xxl") - self.panel_background_fill = panel_background_fill or getattr( - self, "panel_background_fill", "*background_fill_secondary" - ) - self.panel_background_fill_dark = panel_background_fill_dark or getattr( - self, "panel_background_fill_dark", "*background_fill_secondary" - ) - self.panel_border_color = panel_border_color or getattr( - self, "panel_border_color", "*border_color_primary" - ) - self.panel_border_color_dark = panel_border_color_dark or getattr( - self, "panel_border_color_dark", "*border_color_primary" - ) - self.panel_border_width = panel_border_width or getattr( - self, "panel_border_width", "0" - ) - self.panel_border_width_dark = panel_border_width_dark or getattr( - self, "panel_border_width_dark", None - ) - self.section_header_text_size = section_header_text_size or getattr( - self, "section_header_text_size", "*text_md" - ) - self.section_header_text_weight = section_header_text_weight or getattr( - self, "section_header_text_weight", "400" - ) - # Component Atoms - self.chatbot_code_background_color = chatbot_code_background_color or getattr( - self, "chatbot_code_background_color", "*neutral_100" - ) - self.chatbot_code_background_color_dark = ( - chatbot_code_background_color_dark - or getattr(self, "chatbot_code_background_color_dark", "*neutral_800") - ) - self.checkbox_background_color = checkbox_background_color or getattr( - self, "checkbox_background_color", "*background_fill_primary" 
- ) - self.checkbox_background_color_dark = checkbox_background_color_dark or getattr( - self, "checkbox_background_color_dark", "*neutral_800" - ) - self.checkbox_background_color_focus = ( - checkbox_background_color_focus - or getattr( - self, "checkbox_background_color_focus", "*checkbox_background_color" - ) - ) - self.checkbox_background_color_focus_dark = ( - checkbox_background_color_focus_dark - or getattr( - self, - "checkbox_background_color_focus_dark", - "*checkbox_background_color", - ) - ) - self.checkbox_background_color_hover = ( - checkbox_background_color_hover - or getattr( - self, "checkbox_background_color_hover", "*checkbox_background_color" - ) - ) - self.checkbox_background_color_hover_dark = ( - checkbox_background_color_hover_dark - or getattr( - self, - "checkbox_background_color_hover_dark", - "*checkbox_background_color", - ) - ) - self.checkbox_background_color_selected = ( - checkbox_background_color_selected - or getattr(self, "checkbox_background_color_selected", "*secondary_600") - ) - self.checkbox_background_color_selected_dark = ( - checkbox_background_color_selected_dark - or getattr( - self, "checkbox_background_color_selected_dark", "*secondary_600" - ) - ) - self.checkbox_border_color = checkbox_border_color or getattr( - self, "checkbox_border_color", "*neutral_300" - ) - self.checkbox_border_color_dark = checkbox_border_color_dark or getattr( - self, "checkbox_border_color_dark", "*neutral_700" - ) - self.checkbox_border_color_focus = checkbox_border_color_focus or getattr( - self, "checkbox_border_color_focus", "*secondary_500" - ) - self.checkbox_border_color_focus_dark = ( - checkbox_border_color_focus_dark - or getattr(self, "checkbox_border_color_focus_dark", "*secondary_500") - ) - self.checkbox_border_color_hover = checkbox_border_color_hover or getattr( - self, "checkbox_border_color_hover", "*neutral_300" - ) - self.checkbox_border_color_hover_dark = ( - checkbox_border_color_hover_dark - or getattr(self, "checkbox_border_color_hover_dark", "*neutral_600") - ) - self.checkbox_border_color_selected = checkbox_border_color_selected or getattr( - self, "checkbox_border_color_selected", "*secondary_600" - ) - self.checkbox_border_color_selected_dark = ( - checkbox_border_color_selected_dark - or getattr(self, "checkbox_border_color_selected_dark", "*secondary_600") - ) - self.checkbox_border_radius = checkbox_border_radius or getattr( - self, "checkbox_border_radius", "*radius_sm" - ) - self.checkbox_border_width = checkbox_border_width or getattr( - self, "checkbox_border_width", "*input_border_width" - ) - self.checkbox_border_width_dark = checkbox_border_width_dark or getattr( - self, "checkbox_border_width_dark", "*input_border_width" - ) - self.checkbox_label_background_fill = checkbox_label_background_fill or getattr( - self, "checkbox_label_background_fill", "*button_secondary_background_fill" - ) - self.checkbox_label_background_fill_dark = ( - checkbox_label_background_fill_dark - or getattr( - self, - "checkbox_label_background_fill_dark", - "*button_secondary_background_fill", - ) - ) - self.checkbox_label_background_fill_hover = ( - checkbox_label_background_fill_hover - or getattr( - self, - "checkbox_label_background_fill_hover", - "*button_secondary_background_fill_hover", - ) - ) - self.checkbox_label_background_fill_hover_dark = ( - checkbox_label_background_fill_hover_dark - or getattr( - self, - "checkbox_label_background_fill_hover_dark", - "*button_secondary_background_fill_hover", - ) - ) - 
self.checkbox_label_background_fill_selected = ( - checkbox_label_background_fill_selected - or getattr( - self, - "checkbox_label_background_fill_selected", - "*checkbox_label_background_fill", - ) - ) - self.checkbox_label_background_fill_selected_dark = ( - checkbox_label_background_fill_selected_dark - or getattr( - self, - "checkbox_label_background_fill_selected_dark", - "*checkbox_label_background_fill", - ) - ) - self.checkbox_label_border_color = checkbox_label_border_color or getattr( - self, "checkbox_label_border_color", "*border_color_primary" - ) - self.checkbox_label_border_color_dark = ( - checkbox_label_border_color_dark - or getattr( - self, "checkbox_label_border_color_dark", "*border_color_primary" - ) - ) - self.checkbox_label_border_color_hover = ( - checkbox_label_border_color_hover - or getattr( - self, - "checkbox_label_border_color_hover", - "*checkbox_label_border_color", - ) - ) - self.checkbox_label_border_color_hover_dark = ( - checkbox_label_border_color_hover_dark - or getattr( - self, - "checkbox_label_border_color_hover_dark", - "*checkbox_label_border_color", - ) - ) - self.checkbox_label_border_width = checkbox_label_border_width or getattr( - self, "checkbox_label_border_width", "*input_border_width" - ) - self.checkbox_label_border_width_dark = ( - checkbox_label_border_width_dark - or getattr(self, "checkbox_label_border_width_dark", "*input_border_width") - ) - self.checkbox_label_gap = checkbox_label_gap or getattr( - self, "checkbox_label_gap", "*spacing_lg" - ) - self.checkbox_label_padding = checkbox_label_padding or getattr( - self, "checkbox_label_padding", "*spacing_md calc(2 * *spacing_md)" - ) - self.checkbox_label_shadow = checkbox_label_shadow or getattr( - self, "checkbox_label_shadow", "none" - ) - self.checkbox_label_text_size = checkbox_label_text_size or getattr( - self, "checkbox_label_text_size", "*text_md" - ) - self.checkbox_label_text_weight = checkbox_label_text_weight or getattr( - self, "checkbox_label_text_weight", "400" - ) - self.checkbox_check = checkbox_check or getattr( - self, - "checkbox_check", - """url("data:image/svg+xml,%3csvg viewBox='0 0 16 16' fill='white' xmlns='http://www.w3.org/2000/svg'%3e%3cpath d='M12.207 4.793a1 1 0 010 1.414l-5 5a1 1 0 01-1.414 0l-2-2a1 1 0 011.414-1.414L6.5 9.086l4.293-4.293a1 1 0 011.414 0z'/%3e%3c/svg%3e")""", - ) - self.radio_circle = radio_circle or getattr( - self, - "radio_circle", - """url("data:image/svg+xml,%3csvg viewBox='0 0 16 16' fill='white' xmlns='http://www.w3.org/2000/svg'%3e%3ccircle cx='8' cy='8' r='3'/%3e%3c/svg%3e")""", - ) - self.checkbox_shadow = checkbox_shadow or getattr( - self, "checkbox_shadow", "*input_shadow" - ) - self.checkbox_label_text_color = checkbox_label_text_color or getattr( - self, "checkbox_label_text_color", "*body_text_color" - ) - self.checkbox_label_text_color_dark = checkbox_label_text_color_dark or getattr( - self, "checkbox_label_text_color_dark", "*body_text_color" - ) - self.checkbox_label_text_color_selected = ( - checkbox_label_text_color_selected - or getattr( - self, "checkbox_label_text_color_selected", "*checkbox_label_text_color" - ) - ) - self.checkbox_label_text_color_selected_dark = ( - checkbox_label_text_color_selected_dark - or getattr( - self, - "checkbox_label_text_color_selected_dark", - "*checkbox_label_text_color", - ) - ) - self.error_background_fill = error_background_fill or getattr( - self, "error_background_fill", colors.red.c100 - ) - self.error_background_fill_dark = error_background_fill_dark or getattr( - 
self, "error_background_fill_dark", "*background_fill_primary" - ) - self.error_border_color = error_border_color or getattr( - self, "error_border_color", colors.red.c200 - ) - self.error_border_color_dark = error_border_color_dark or getattr( - self, "error_border_color_dark", "*border_color_primary" - ) - self.error_border_width = error_border_width or getattr( - self, "error_border_width", "1px" - ) - self.error_border_width_dark = error_border_width_dark or getattr( - self, "error_border_width_dark", None - ) - self.error_text_color = error_text_color or getattr( - self, "error_text_color", colors.red.c500 - ) - self.error_text_color_dark = error_text_color_dark or getattr( - self, "error_text_color_dark", colors.red.c500 - ) - self.input_background_fill = input_background_fill or getattr( - self, "input_background_fill", "*neutral_100" - ) - self.input_background_fill_dark = input_background_fill_dark or getattr( - self, "input_background_fill_dark", "*neutral_700" - ) - self.input_background_fill_focus = input_background_fill_focus or getattr( - self, "input_background_fill_focus", "*secondary_500" - ) - self.input_background_fill_focus_dark = ( - input_background_fill_focus_dark - or getattr(self, "input_background_fill_focus_dark", "*secondary_600") - ) - self.input_background_fill_hover = input_background_fill_hover or getattr( - self, "input_background_fill_hover", "*input_background_fill" - ) - self.input_background_fill_hover_dark = ( - input_background_fill_hover_dark - or getattr( - self, "input_background_fill_hover_dark", "*input_background_fill" - ) - ) - self.input_border_color = input_border_color or getattr( - self, "input_border_color", "*border_color_primary" - ) - self.input_border_color_dark = input_border_color_dark or getattr( - self, "input_border_color_dark", "*border_color_primary" - ) - self.input_border_color_focus = input_border_color_focus or getattr( - self, "input_border_color_focus", "*secondary_300" - ) - self.input_border_color_focus_dark = input_border_color_focus_dark or getattr( - self, "input_border_color_focus_dark", "*neutral_700" - ) - self.input_border_color_hover = input_border_color_hover or getattr( - self, "input_border_color_hover", "*input_border_color" - ) - self.input_border_color_hover_dark = input_border_color_hover_dark or getattr( - self, "input_border_color_hover_dark", "*input_border_color" - ) - self.input_border_width = input_border_width or getattr( - self, "input_border_width", "0px" - ) - self.input_border_width_dark = input_border_width_dark or getattr( - self, "input_border_width_dark", None - ) - self.input_padding = input_padding or getattr( - self, "input_padding", "*spacing_xl" - ) - self.input_placeholder_color = input_placeholder_color or getattr( - self, "input_placeholder_color", "*neutral_400" - ) - self.input_placeholder_color_dark = input_placeholder_color_dark or getattr( - self, "input_placeholder_color_dark", "*neutral_500" - ) - self.input_radius = input_radius or getattr(self, "input_radius", "*radius_lg") - self.input_shadow = input_shadow or getattr(self, "input_shadow", "none") - self.input_shadow_dark = input_shadow_dark or getattr( - self, "input_shadow_dark", None - ) - self.input_shadow_focus = input_shadow_focus or getattr( - self, "input_shadow_focus", "*input_shadow" - ) - self.input_shadow_focus_dark = input_shadow_focus_dark or getattr( - self, "input_shadow_focus_dark", None - ) - self.input_text_size = input_text_size or getattr( - self, "input_text_size", "*text_md" - ) - 
self.input_text_weight = input_text_weight or getattr( - self, "input_text_weight", "400" - ) - self.loader_color = loader_color or getattr( - self, "loader_color", "*color_accent" - ) - self.loader_color_dark = loader_color_dark or getattr( - self, "loader_color_dark", None - ) - self.prose_text_size = prose_text_size or getattr( - self, "prose_text_size", "*text_md" - ) - self.prose_text_weight = prose_text_weight or getattr( - self, "prose_text_weight", "400" - ) - self.prose_header_text_weight = prose_header_text_weight or getattr( - self, "prose_header_text_weight", "600" - ) - self.slider_color = slider_color or getattr(self, "slider_color", "auto") - self.slider_color_dark = slider_color_dark or getattr( - self, "slider_color_dark", None - ) - self.stat_background_fill = stat_background_fill or getattr( - self, "stat_background_fill", "*primary_300" - ) - self.stat_background_fill_dark = stat_background_fill_dark or getattr( - self, "stat_background_fill_dark", "*primary_500" - ) - self.table_border_color = table_border_color or getattr( - self, "table_border_color", "*neutral_300" - ) - self.table_border_color_dark = table_border_color_dark or getattr( - self, "table_border_color_dark", "*neutral_700" - ) - self.table_even_background_fill = table_even_background_fill or getattr( - self, "table_even_background_fill", "white" - ) - self.table_even_background_fill_dark = ( - table_even_background_fill_dark - or getattr(self, "table_even_background_fill_dark", "*neutral_950") - ) - self.table_odd_background_fill = table_odd_background_fill or getattr( - self, "table_odd_background_fill", "*neutral_50" - ) - self.table_odd_background_fill_dark = table_odd_background_fill_dark or getattr( - self, "table_odd_background_fill_dark", "*neutral_900" - ) - self.table_radius = table_radius or getattr(self, "table_radius", "*radius_lg") - self.table_row_focus = table_row_focus or getattr( - self, "table_row_focus", "*color_accent_soft" - ) - self.table_row_focus_dark = table_row_focus_dark or getattr( - self, "table_row_focus_dark", "*color_accent_soft" - ) - # Buttons - self.button_border_width = button_border_width or getattr( - self, "button_border_width", "*input_border_width" - ) - self.button_border_width_dark = button_border_width_dark or getattr( - self, "button_border_width_dark", "*input_border_width" - ) - self.button_cancel_background_fill = button_cancel_background_fill or getattr( - self, "button_cancel_background_fill", "*button_secondary_background_fill" - ) - self.button_cancel_background_fill_dark = ( - button_cancel_background_fill_dark - or getattr( - self, - "button_cancel_background_fill_dark", - "*button_secondary_background_fill", - ) - ) - self.button_cancel_background_fill_hover = ( - button_cancel_background_fill_hover - or getattr( - self, - "button_cancel_background_fill_hover", - "*button_cancel_background_fill", - ) - ) - self.button_cancel_background_fill_hover_dark = ( - button_cancel_background_fill_hover_dark - or getattr( - self, - "button_cancel_background_fill_hover_dark", - "*button_cancel_background_fill", - ) - ) - self.button_cancel_border_color = button_cancel_border_color or getattr( - self, "button_cancel_border_color", "*button_secondary_border_color" - ) - self.button_cancel_border_color_dark = ( - button_cancel_border_color_dark - or getattr( - self, - "button_cancel_border_color_dark", - "*button_secondary_border_color", - ) - ) - self.button_cancel_border_color_hover = ( - button_cancel_border_color_hover - or getattr( - self, - 
"button_cancel_border_color_hover", - "*button_cancel_border_color", - ) - ) - self.button_cancel_border_color_hover_dark = ( - button_cancel_border_color_hover_dark - or getattr( - self, - "button_cancel_border_color_hover_dark", - "*button_cancel_border_color", - ) - ) - self.button_cancel_text_color = button_cancel_text_color or getattr( - self, "button_cancel_text_color", "*button_secondary_text_color" - ) - self.button_cancel_text_color_dark = button_cancel_text_color_dark or getattr( - self, "button_cancel_text_color_dark", "*button_secondary_text_color" - ) - self.button_cancel_text_color_hover = button_cancel_text_color_hover or getattr( - self, "button_cancel_text_color_hover", "*button_cancel_text_color" - ) - self.button_cancel_text_color_hover_dark = ( - button_cancel_text_color_hover_dark - or getattr( - self, "button_cancel_text_color_hover_dark", "*button_cancel_text_color" - ) - ) - self.button_large_padding = button_large_padding or getattr( - self, "button_large_padding", "*spacing_lg calc(2 * *spacing_lg)" - ) - self.button_large_radius = button_large_radius or getattr( - self, "button_large_radius", "*radius_lg" - ) - self.button_large_text_size = button_large_text_size or getattr( - self, "button_large_text_size", "*text_lg" - ) - self.button_large_text_weight = button_large_text_weight or getattr( - self, "button_large_text_weight", "600" - ) - self.button_primary_background_fill = button_primary_background_fill or getattr( - self, "button_primary_background_fill", "*primary_200" - ) - self.button_primary_background_fill_dark = ( - button_primary_background_fill_dark - or getattr(self, "button_primary_background_fill_dark", "*primary_700") - ) - self.button_primary_background_fill_hover = ( - button_primary_background_fill_hover - or getattr( - self, - "button_primary_background_fill_hover", - "*button_primary_background_fill", - ) - ) - self.button_primary_background_fill_hover_dark = ( - button_primary_background_fill_hover_dark - or getattr( - self, - "button_primary_background_fill_hover_dark", - "*button_primary_background_fill", - ) - ) - self.button_primary_border_color = button_primary_border_color or getattr( - self, "button_primary_border_color", "*primary_200" - ) - self.button_primary_border_color_dark = ( - button_primary_border_color_dark - or getattr(self, "button_primary_border_color_dark", "*primary_600") - ) - self.button_primary_border_color_hover = ( - button_primary_border_color_hover - or getattr( - self, - "button_primary_border_color_hover", - "*button_primary_border_color", - ) - ) - self.button_primary_border_color_hover_dark = ( - button_primary_border_color_hover_dark - or getattr( - self, - "button_primary_border_color_hover_dark", - "*button_primary_border_color", - ) - ) - self.button_primary_text_color = button_primary_text_color or getattr( - self, "button_primary_text_color", "*primary_600" - ) - self.button_primary_text_color_dark = button_primary_text_color_dark or getattr( - self, "button_primary_text_color_dark", "white" - ) - self.button_primary_text_color_hover = ( - button_primary_text_color_hover - or getattr( - self, "button_primary_text_color_hover", "*button_primary_text_color" - ) - ) - self.button_primary_text_color_hover_dark = ( - button_primary_text_color_hover_dark - or getattr( - self, - "button_primary_text_color_hover_dark", - "*button_primary_text_color", - ) - ) - self.button_secondary_background_fill = ( - button_secondary_background_fill - or getattr(self, "button_secondary_background_fill", "*neutral_200") - ) 
- self.button_secondary_background_fill_dark = ( - button_secondary_background_fill_dark - or getattr(self, "button_secondary_background_fill_dark", "*neutral_600") - ) - self.button_secondary_background_fill_hover = ( - button_secondary_background_fill_hover - or getattr( - self, - "button_secondary_background_fill_hover", - "*button_secondary_background_fill", - ) - ) - self.button_secondary_background_fill_hover_dark = ( - button_secondary_background_fill_hover_dark - or getattr( - self, - "button_secondary_background_fill_hover_dark", - "*button_secondary_background_fill", - ) - ) - self.button_secondary_border_color = button_secondary_border_color or getattr( - self, "button_secondary_border_color", "*neutral_200" - ) - self.button_secondary_border_color_dark = ( - button_secondary_border_color_dark - or getattr(self, "button_secondary_border_color_dark", "*neutral_600") - ) - self.button_secondary_border_color_hover = ( - button_secondary_border_color_hover - or getattr( - self, - "button_secondary_border_color_hover", - "*button_secondary_border_color", - ) - ) - self.button_secondary_border_color_hover_dark = ( - button_secondary_border_color_hover_dark - or getattr( - self, - "button_secondary_border_color_hover_dark", - "*button_secondary_border_color", - ) - ) - self.button_secondary_text_color = button_secondary_text_color or getattr( - self, "button_secondary_text_color", "*neutral_700" - ) - self.button_secondary_text_color_dark = ( - button_secondary_text_color_dark - or getattr(self, "button_secondary_text_color_dark", "white") - ) - self.button_secondary_text_color_hover = ( - button_secondary_text_color_hover - or getattr( - self, - "button_secondary_text_color_hover", - "*button_secondary_text_color", - ) - ) - self.button_secondary_text_color_hover_dark = ( - button_secondary_text_color_hover_dark - or getattr( - self, - "button_secondary_text_color_hover_dark", - "*button_secondary_text_color", - ) - ) - self.button_shadow = button_shadow or getattr(self, "button_shadow", "none") - self.button_shadow_active = button_shadow_active or getattr( - self, "button_shadow_active", "none" - ) - self.button_shadow_hover = button_shadow_hover or getattr( - self, "button_shadow_hover", "none" - ) - self.button_small_padding = button_small_padding or getattr( - self, "button_small_padding", "*spacing_sm calc(2 * *spacing_sm)" - ) - self.button_small_radius = button_small_radius or getattr( - self, "button_small_radius", "*radius_lg" - ) - self.button_small_text_size = button_small_text_size or getattr( - self, "button_small_text_size", "*text_md" - ) - self.button_small_text_weight = button_small_text_weight or getattr( - self, "button_small_text_weight", "400" - ) - self.button_transition = button_transition or getattr( - self, "button_transition", "background-color 0.2s ease" - ) - return self diff --git a/spaces/lambdalabs/generative-music-visualizer/torch_utils/ops/filtered_lrelu.cpp b/spaces/lambdalabs/generative-music-visualizer/torch_utils/ops/filtered_lrelu.cpp deleted file mode 100644 index ff4149b8b46b54d2f400ae10e44d19f20503ba1f..0000000000000000000000000000000000000000 --- a/spaces/lambdalabs/generative-music-visualizer/torch_utils/ops/filtered_lrelu.cpp +++ /dev/null @@ -1,300 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. 
Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include -#include -#include -#include "filtered_lrelu.h" - -//------------------------------------------------------------------------ - -static std::tuple filtered_lrelu( - torch::Tensor x, torch::Tensor fu, torch::Tensor fd, torch::Tensor b, torch::Tensor si, - int up, int down, int px0, int px1, int py0, int py1, int sx, int sy, float gain, float slope, float clamp, bool flip_filters, bool writeSigns) -{ - // Set CUDA device. - TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - - // Validate arguments. - TORCH_CHECK(fu.device() == x.device() && fd.device() == x.device() && b.device() == x.device(), "all input tensors must reside on the same device"); - TORCH_CHECK(fu.dtype() == torch::kFloat && fd.dtype() == torch::kFloat, "fu and fd must be float32"); - TORCH_CHECK(b.dtype() == x.dtype(), "x and b must have the same dtype"); - TORCH_CHECK(x.dtype() == torch::kHalf || x.dtype() == torch::kFloat, "x and b must be float16 or float32"); - TORCH_CHECK(x.dim() == 4, "x must be rank 4"); - TORCH_CHECK(x.size(0) * x.size(1) <= INT_MAX && x.size(2) <= INT_MAX && x.size(3) <= INT_MAX, "x is too large"); - TORCH_CHECK(x.numel() > 0, "x is empty"); - TORCH_CHECK((fu.dim() == 1 || fu.dim() == 2) && (fd.dim() == 1 || fd.dim() == 2), "fu and fd must be rank 1 or 2"); - TORCH_CHECK(fu.size(0) <= INT_MAX && fu.size(-1) <= INT_MAX, "fu is too large"); - TORCH_CHECK(fd.size(0) <= INT_MAX && fd.size(-1) <= INT_MAX, "fd is too large"); - TORCH_CHECK(fu.numel() > 0, "fu is empty"); - TORCH_CHECK(fd.numel() > 0, "fd is empty"); - TORCH_CHECK(b.dim() == 1 && b.size(0) == x.size(1), "b must be a vector with the same number of channels as x"); - TORCH_CHECK(up >= 1 && down >= 1, "up and down must be at least 1"); - - // Figure out how much shared memory is available on the device. - int maxSharedBytes = 0; - AT_CUDA_CHECK(cudaDeviceGetAttribute(&maxSharedBytes, cudaDevAttrMaxSharedMemoryPerBlockOptin, x.device().index())); - int sharedKB = maxSharedBytes >> 10; - - // Populate enough launch parameters to check if a CUDA kernel exists. - filtered_lrelu_kernel_params p; - p.up = up; - p.down = down; - p.fuShape = make_int2((int)fu.size(-1), fu.dim() == 2 ? (int)fu.size(0) : 0); // shape [n, 0] indicates separable filter. - p.fdShape = make_int2((int)fd.size(-1), fd.dim() == 2 ? (int)fd.size(0) : 0); - filtered_lrelu_kernel_spec test_spec = choose_filtered_lrelu_kernel(p, sharedKB); - if (!test_spec.exec) - { - // No kernel found - return empty tensors and indicate missing kernel with return code of -1. - return std::make_tuple(torch::Tensor(), torch::Tensor(), -1); - } - - // Input/output element size. - int64_t sz = (x.dtype() == torch::kHalf) ? 2 : 4; - - // Input sizes. - int64_t xw = (int)x.size(3); - int64_t xh = (int)x.size(2); - int64_t fut_w = (int)fu.size(-1) - 1; - int64_t fut_h = (int)fu.size(0) - 1; - int64_t fdt_w = (int)fd.size(-1) - 1; - int64_t fdt_h = (int)fd.size(0) - 1; - - // Logical size of upsampled buffer. - int64_t cw = xw * up + (px0 + px1) - fut_w; - int64_t ch = xh * up + (py0 + py1) - fut_h; - TORCH_CHECK(cw > fdt_w && ch > fdt_h, "upsampled buffer must be at least the size of downsampling filter"); - TORCH_CHECK(cw <= INT_MAX && ch <= INT_MAX, "upsampled buffer is too large"); - - // Compute output size and allocate. 
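    // Sizing sketch (illustrative numbers): with xw = 16, up = 2, px0 = px1 = 3 and an
    // 8-tap upsampling filter (fut_w = 7), the logical upsampled width is
    // cw = 16*2 + (3+3) - 7 = 31. The output width computed below is then a ceiling
    // division of the remaining valid region by the downsampling factor.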
- int64_t yw = (cw - fdt_w + (down - 1)) / down; - int64_t yh = (ch - fdt_h + (down - 1)) / down; - TORCH_CHECK(yw > 0 && yh > 0, "output must be at least 1x1"); - TORCH_CHECK(yw <= INT_MAX && yh <= INT_MAX, "output is too large"); - torch::Tensor y = torch::empty({x.size(0), x.size(1), yh, yw}, x.options(), x.suggest_memory_format()); - - // Allocate sign tensor. - torch::Tensor so; - torch::Tensor s = si; - bool readSigns = !!s.numel(); - int64_t sw_active = 0; // Active width of sign tensor. - if (writeSigns) - { - sw_active = yw * down - (down - 1) + fdt_w; // Active width in elements. - int64_t sh = yh * down - (down - 1) + fdt_h; // Height = active height. - int64_t sw = (sw_active + 15) & ~15; // Width = active width in elements, rounded up to multiple of 16. - TORCH_CHECK(sh <= INT_MAX && (sw >> 2) <= INT_MAX, "signs is too large"); - s = so = torch::empty({x.size(0), x.size(1), sh, sw >> 2}, x.options().dtype(torch::kUInt8), at::MemoryFormat::Contiguous); - } - else if (readSigns) - sw_active = s.size(3) << 2; - - // Validate sign tensor if in use. - if (readSigns || writeSigns) - { - TORCH_CHECK(s.is_contiguous(), "signs must be contiguous"); - TORCH_CHECK(s.dtype() == torch::kUInt8, "signs must be uint8"); - TORCH_CHECK(s.device() == x.device(), "signs must reside on the same device as x"); - TORCH_CHECK(s.dim() == 4, "signs must be rank 4"); - TORCH_CHECK(s.size(0) == x.size(0) && s.size(1) == x.size(1), "signs must have same batch & channels as x"); - TORCH_CHECK(s.size(2) <= INT_MAX && s.size(3) <= INT_MAX, "signs is too large"); - } - - // Populate rest of CUDA kernel parameters. - p.x = x.data_ptr(); - p.y = y.data_ptr(); - p.b = b.data_ptr(); - p.s = (readSigns || writeSigns) ? s.data_ptr() : 0; - p.fu = fu.data_ptr(); - p.fd = fd.data_ptr(); - p.pad0 = make_int2(px0, py0); - p.gain = gain; - p.slope = slope; - p.clamp = clamp; - p.flip = (flip_filters) ? 1 : 0; - p.xShape = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0)); - p.yShape = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0)); - p.sShape = (readSigns || writeSigns) ? make_int2((int)s.size(3), (int)s.size(2)) : make_int2(0, 0); // Width is in bytes. Contiguous. - p.sOfs = make_int2(sx, sy); - p.swLimit = (sw_active + 3) >> 2; // Rounded up to bytes. - - // x, y, b strides are in bytes. - p.xStride = make_longlong4(sz * x.stride(3), sz * x.stride(2), sz * x.stride(1), sz * x.stride(0)); - p.yStride = make_longlong4(sz * y.stride(3), sz * y.stride(2), sz * y.stride(1), sz * y.stride(0)); - p.bStride = sz * b.stride(0); - - // fu, fd strides are in elements. - p.fuStride = make_longlong3(fu.stride(-1), fu.dim() == 2 ? fu.stride(0) : 0, 0); - p.fdStride = make_longlong3(fd.stride(-1), fd.dim() == 2 ? fd.stride(0) : 0, 0); - - // Determine if indices don't fit in int32. Support negative strides although Torch currently never produces those. 
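    // In other words, the checks below conservatively fall back to 64-bit indexing
    // whenever any byte offset (the bias stride over all channels, the summed x/y
    // strides, or the sign tensor element count) could overflow a 32-bit int; the
    // dispatch further down then picks the kernel variant built for the wider index type.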
- bool index64b = false; - if (std::abs(p.bStride * x.size(1)) > INT_MAX) index64b = true; - if (std::min(x.size(0) * p.xStride.w, 0ll) + std::min(x.size(1) * p.xStride.z, 0ll) + std::min(x.size(2) * p.xStride.y, 0ll) + std::min(x.size(3) * p.xStride.x, 0ll) < -INT_MAX) index64b = true; - if (std::max(x.size(0) * p.xStride.w, 0ll) + std::max(x.size(1) * p.xStride.z, 0ll) + std::max(x.size(2) * p.xStride.y, 0ll) + std::max(x.size(3) * p.xStride.x, 0ll) > INT_MAX) index64b = true; - if (std::min(y.size(0) * p.yStride.w, 0ll) + std::min(y.size(1) * p.yStride.z, 0ll) + std::min(y.size(2) * p.yStride.y, 0ll) + std::min(y.size(3) * p.yStride.x, 0ll) < -INT_MAX) index64b = true; - if (std::max(y.size(0) * p.yStride.w, 0ll) + std::max(y.size(1) * p.yStride.z, 0ll) + std::max(y.size(2) * p.yStride.y, 0ll) + std::max(y.size(3) * p.yStride.x, 0ll) > INT_MAX) index64b = true; - if (s.numel() > INT_MAX) index64b = true; - - // Choose CUDA kernel. - filtered_lrelu_kernel_spec spec = { 0 }; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "filtered_lrelu_cuda", [&] - { - if constexpr (sizeof(scalar_t) <= 4) // Exclude doubles. constexpr prevents template instantiation. - { - // Choose kernel based on index type, datatype and sign read/write modes. - if (!index64b && writeSigns && !readSigns) spec = choose_filtered_lrelu_kernel(p, sharedKB); - else if (!index64b && !writeSigns && readSigns) spec = choose_filtered_lrelu_kernel(p, sharedKB); - else if (!index64b && !writeSigns && !readSigns) spec = choose_filtered_lrelu_kernel(p, sharedKB); - else if ( index64b && writeSigns && !readSigns) spec = choose_filtered_lrelu_kernel(p, sharedKB); - else if ( index64b && !writeSigns && readSigns) spec = choose_filtered_lrelu_kernel(p, sharedKB); - else if ( index64b && !writeSigns && !readSigns) spec = choose_filtered_lrelu_kernel(p, sharedKB); - } - }); - TORCH_CHECK(spec.exec, "internal error - CUDA kernel not found") // This should not happen because we tested earlier that kernel exists. - - // Launch CUDA kernel. - void* args[] = {&p}; - int bx = spec.numWarps * 32; - int gx = (p.yShape.x - 1) / spec.tileOut.x + 1; - int gy = (p.yShape.y - 1) / spec.tileOut.y + 1; - int gz = p.yShape.z * p.yShape.w; - - // Repeat multiple horizontal tiles in a CTA? - if (spec.xrep) - { - p.tilesXrep = spec.xrep; - p.tilesXdim = gx; - - gx = (gx + p.tilesXrep - 1) / p.tilesXrep; - std::swap(gx, gy); - } - else - { - p.tilesXrep = 0; - p.tilesXdim = 0; - } - - // Launch filter setup kernel. - AT_CUDA_CHECK(cudaLaunchKernel(spec.setup, 1, 1024, args, 0, at::cuda::getCurrentCUDAStream())); - - // Copy kernels to constant memory. - if ( writeSigns && !readSigns) AT_CUDA_CHECK((copy_filters(at::cuda::getCurrentCUDAStream()))); - else if (!writeSigns && readSigns) AT_CUDA_CHECK((copy_filters(at::cuda::getCurrentCUDAStream()))); - else if (!writeSigns && !readSigns) AT_CUDA_CHECK((copy_filters(at::cuda::getCurrentCUDAStream()))); - - // Set cache and shared memory configurations for main kernel. - AT_CUDA_CHECK(cudaFuncSetCacheConfig(spec.exec, cudaFuncCachePreferShared)); - if (spec.dynamicSharedKB) // Need dynamically allocated shared memory? - AT_CUDA_CHECK(cudaFuncSetAttribute(spec.exec, cudaFuncAttributeMaxDynamicSharedMemorySize, spec.dynamicSharedKB << 10)); - AT_CUDA_CHECK(cudaFuncSetSharedMemConfig(spec.exec, cudaSharedMemBankSizeFourByte)); - - // Launch main kernel. - const int maxSubGz = 65535; // CUDA maximum for block z dimension. 
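    // Launch-count sketch (illustrative numbers): gz counts channel*batch tiles, so
    // e.g. gz = 100000 exceeds the 65535 grid-z limit and the loop below issues two
    // launches covering 65535 and 34465 tiles respectively.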
- for (int zofs=0; zofs < gz; zofs += maxSubGz) // Do multiple launches if gz is too big. - { - p.blockZofs = zofs; - int subGz = std::min(maxSubGz, gz - zofs); - AT_CUDA_CHECK(cudaLaunchKernel(spec.exec, dim3(gx, gy, subGz), bx, args, spec.dynamicSharedKB << 10, at::cuda::getCurrentCUDAStream())); - } - - // Done. - return std::make_tuple(y, so, 0); -} - -//------------------------------------------------------------------------ - -static torch::Tensor filtered_lrelu_act(torch::Tensor x, torch::Tensor si, int sx, int sy, float gain, float slope, float clamp, bool writeSigns) -{ - // Set CUDA device. - TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - - // Validate arguments. - TORCH_CHECK(x.dim() == 4, "x must be rank 4"); - TORCH_CHECK(x.size(0) * x.size(1) <= INT_MAX && x.size(2) <= INT_MAX && x.size(3) <= INT_MAX, "x is too large"); - TORCH_CHECK(x.numel() > 0, "x is empty"); - TORCH_CHECK(x.dtype() == torch::kHalf || x.dtype() == torch::kFloat || x.dtype() == torch::kDouble, "x must be float16, float32 or float64"); - - // Output signs if we don't have sign input. - torch::Tensor so; - torch::Tensor s = si; - bool readSigns = !!s.numel(); - if (writeSigns) - { - int64_t sw = x.size(3); - sw = (sw + 15) & ~15; // Round to a multiple of 16 for coalescing. - s = so = torch::empty({x.size(0), x.size(1), x.size(2), sw >> 2}, x.options().dtype(torch::kUInt8), at::MemoryFormat::Contiguous); - } - - // Validate sign tensor if in use. - if (readSigns || writeSigns) - { - TORCH_CHECK(s.is_contiguous(), "signs must be contiguous"); - TORCH_CHECK(s.dtype() == torch::kUInt8, "signs must be uint8"); - TORCH_CHECK(s.device() == x.device(), "signs must reside on the same device as x"); - TORCH_CHECK(s.dim() == 4, "signs must be rank 4"); - TORCH_CHECK(s.size(0) == x.size(0) && s.size(1) == x.size(1), "signs must have same batch & channels as x"); - TORCH_CHECK(s.size(2) <= INT_MAX && (s.size(3) << 2) <= INT_MAX, "signs tensor is too large"); - } - - // Initialize CUDA kernel parameters. - filtered_lrelu_act_kernel_params p; - p.x = x.data_ptr(); - p.s = (readSigns || writeSigns) ? s.data_ptr() : 0; - p.gain = gain; - p.slope = slope; - p.clamp = clamp; - p.xShape = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0)); - p.xStride = make_longlong4(x.stride(3), x.stride(2), x.stride(1), x.stride(0)); - p.sShape = (readSigns || writeSigns) ? make_int2((int)s.size(3) << 2, (int)s.size(2)) : make_int2(0, 0); // Width is in elements. Contiguous. - p.sOfs = make_int2(sx, sy); - - // Choose CUDA kernel. - void* func = 0; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "filtered_lrelu_act_cuda", [&] - { - if (writeSigns) - func = choose_filtered_lrelu_act_kernel(); - else if (readSigns) - func = choose_filtered_lrelu_act_kernel(); - else - func = choose_filtered_lrelu_act_kernel(); - }); - TORCH_CHECK(func, "internal error - CUDA kernel not found"); - - // Launch CUDA kernel. - void* args[] = {&p}; - int bx = 128; // 4 warps per block. - - // Logical size of launch = writeSigns ? p.s : p.x - uint32_t gx = writeSigns ? p.sShape.x : p.xShape.x; - uint32_t gy = writeSigns ? p.sShape.y : p.xShape.y; - uint32_t gz = p.xShape.z * p.xShape.w; // Same as in p.sShape if signs are in use. - gx = (gx - 1) / bx + 1; - - // Make sure grid y and z dimensions are within CUDA launch limits. Kernel loops internally to do the rest. 
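    // (For example, xShape.x = 1000 with bx = 128 gives gx = (1000 - 1) / 128 + 1 = 8, i.e.
    // ceil(1000 / 128) blocks; gy and gz are only clamped below because the kernel strides over
    // any remaining rows and batch/channel slices on its own.)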
- const uint32_t gmax = 65535; - gy = std::min(gy, gmax); - gz = std::min(gz, gmax); - - // Launch. - AT_CUDA_CHECK(cudaLaunchKernel(func, dim3(gx, gy, gz), bx, args, 0, at::cuda::getCurrentCUDAStream())); - return so; -} - -//------------------------------------------------------------------------ - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) -{ - m.def("filtered_lrelu", &filtered_lrelu); // The whole thing. - m.def("filtered_lrelu_act_", &filtered_lrelu_act); // Activation and sign tensor handling only. Modifies data tensor in-place. -} - -//------------------------------------------------------------------------ diff --git a/spaces/leogabraneth/text-generation-webui-main/extensions/Training_PRO/matplotgraph.py b/spaces/leogabraneth/text-generation-webui-main/extensions/Training_PRO/matplotgraph.py deleted file mode 100644 index 5e607526925445134fc1715a1fab6bb4af99112d..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/extensions/Training_PRO/matplotgraph.py +++ /dev/null @@ -1,62 +0,0 @@ -import os -import json - -def create_graph(lora_path, lora_name): - try: - import matplotlib.pyplot as plt - from matplotlib.ticker import ScalarFormatter - - peft_model_path = f'{lora_path}/training_graph.json' - image_model_path = f'{lora_path}/training_graph.png' - # Check if the JSON file exists - if os.path.exists(peft_model_path): - # Load data from JSON file - with open(peft_model_path, 'r') as file: - data = json.load(file) - # Extract x, y1, and y2 values - x = [item['epoch'] for item in data] - y1 = [item['learning_rate'] for item in data] - y2 = [item['loss'] for item in data] - - # Create the line chart - fig, ax1 = plt.subplots(figsize=(10, 6)) - - - # Plot y1 (learning rate) on the first y-axis - ax1.plot(x, y1, 'b-', label='Learning Rate') - ax1.set_xlabel('Epoch') - ax1.set_ylabel('Learning Rate', color='b') - ax1.tick_params('y', colors='b') - - # Create a second y-axis - ax2 = ax1.twinx() - - # Plot y2 (loss) on the second y-axis - ax2.plot(x, y2, 'r-', label='Loss') - ax2.set_ylabel('Loss', color='r') - ax2.tick_params('y', colors='r') - - # Set the y-axis formatter to display numbers in scientific notation - ax1.yaxis.set_major_formatter(ScalarFormatter(useMathText=True)) - ax1.ticklabel_format(style='sci', axis='y', scilimits=(0,0)) - - # Add grid - ax1.grid(True) - - # Combine the legends for both plots - lines, labels = ax1.get_legend_handles_labels() - lines2, labels2 = ax2.get_legend_handles_labels() - ax2.legend(lines + lines2, labels + labels2, loc='best') - - # Set the title - plt.title(f'{lora_name} LR and Loss vs Epoch') - - # Save the chart as an image - plt.savefig(image_model_path) - - print(f"Graph saved in {image_model_path}") - else: - print(f"File 'training_graph.json' does not exist in the {lora_path}") - - except ImportError: - print("matplotlib is not installed. 
Please install matplotlib to create PNG graphs") \ No newline at end of file diff --git a/spaces/leogabraneth/text-generation-webui-main/extensions/superboogav2/chromadb.py b/spaces/leogabraneth/text-generation-webui-main/extensions/superboogav2/chromadb.py deleted file mode 100644 index 0da2d8f90c623b43ecd49b3dcf20919b8e2a1434..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/extensions/superboogav2/chromadb.py +++ /dev/null @@ -1,376 +0,0 @@ -import threading -import chromadb -import posthog -import torch -import math - -import numpy as np -import extensions.superboogav2.parameters as parameters - -from chromadb.config import Settings -from sentence_transformers import SentenceTransformer - -from modules.logging_colors import logger -from modules.text_generation import encode, decode - -logger.debug('Intercepting all calls to posthog.') -posthog.capture = lambda *args, **kwargs: None - - -class Collecter(): - def __init__(self): - pass - - def add(self, texts: list[str], texts_with_context: list[str], starting_indices: list[int]): - pass - - def get(self, search_strings: list[str], n_results: int) -> list[str]: - pass - - def clear(self): - pass - - -class Embedder(): - def __init__(self): - pass - - def embed(self, text: str) -> list[torch.Tensor]: - pass - -class Info: - def __init__(self, start_index, text_with_context, distance, id): - self.text_with_context = text_with_context - self.start_index = start_index - self.distance = distance - self.id = id - - def calculate_distance(self, other_info): - if parameters.get_new_dist_strategy() == parameters.DIST_MIN_STRATEGY: - # Min - return min(self.distance, other_info.distance) - elif parameters.get_new_dist_strategy() == parameters.DIST_HARMONIC_STRATEGY: - # Harmonic mean - return 2 * (self.distance * other_info.distance) / (self.distance + other_info.distance) - elif parameters.get_new_dist_strategy() == parameters.DIST_GEOMETRIC_STRATEGY: - # Geometric mean - return (self.distance * other_info.distance) ** 0.5 - elif parameters.get_new_dist_strategy() == parameters.DIST_ARITHMETIC_STRATEGY: - # Arithmetic mean - return (self.distance + other_info.distance) / 2 - else: # Min is default - return min(self.distance, other_info.distance) - - def merge_with(self, other_info): - s1 = self.text_with_context - s2 = other_info.text_with_context - s1_start = self.start_index - s2_start = other_info.start_index - - new_dist = self.calculate_distance(other_info) - - if self.should_merge(s1, s2, s1_start, s2_start): - if s1_start <= s2_start: - if s1_start + len(s1) >= s2_start + len(s2): # if s1 completely covers s2 - return Info(s1_start, s1, new_dist, self.id) - else: - overlap = max(0, s1_start + len(s1) - s2_start) - return Info(s1_start, s1 + s2[overlap:], new_dist, self.id) - else: - if s2_start + len(s2) >= s1_start + len(s1): # if s2 completely covers s1 - return Info(s2_start, s2, new_dist, other_info.id) - else: - overlap = max(0, s2_start + len(s2) - s1_start) - return Info(s2_start, s2 + s1[overlap:], new_dist, other_info.id) - - return None - - @staticmethod - def should_merge(s1, s2, s1_start, s2_start): - # Check if s1 and s2 are adjacent or overlapping - s1_end = s1_start + len(s1) - s2_end = s2_start + len(s2) - - return not (s1_end < s2_start or s2_end < s1_start) - -class ChromaCollector(Collecter): - def __init__(self, embedder: Embedder): - super().__init__() - self.chroma_client = chromadb.Client(Settings(anonymized_telemetry=False)) - self.embedder = embedder - self.collection = 
self.chroma_client.create_collection(name="context", embedding_function=self.embedder.embed) - self.ids = [] - self.id_to_info = {} - self.embeddings_cache = {} - self.lock = threading.Lock() # Locking so the server doesn't break. - - def add(self, texts: list[str], texts_with_context: list[str], starting_indices: list[int], metadatas: list[dict] = None): - with self.lock: - assert metadatas is None or len(metadatas) == len(texts), "metadatas must be None or have the same length as texts" - - if len(texts) == 0: - return - - new_ids = self._get_new_ids(len(texts)) - - (existing_texts, existing_embeddings, existing_ids, existing_metas), \ - (non_existing_texts, non_existing_ids, non_existing_metas) = self._split_texts_by_cache_hit(texts, new_ids, metadatas) - - # If there are any already existing texts, add them all at once. - if existing_texts: - logger.info(f'Adding {len(existing_embeddings)} cached embeddings.') - args = {'embeddings': existing_embeddings, 'documents': existing_texts, 'ids': existing_ids} - if metadatas is not None: - args['metadatas'] = existing_metas - self.collection.add(**args) - - # If there are any non-existing texts, compute their embeddings all at once. Each call to embed has significant overhead. - if non_existing_texts: - non_existing_embeddings = self.embedder.embed(non_existing_texts).tolist() - for text, embedding in zip(non_existing_texts, non_existing_embeddings): - self.embeddings_cache[text] = embedding - - logger.info(f'Adding {len(non_existing_embeddings)} new embeddings.') - args = {'embeddings': non_existing_embeddings, 'documents': non_existing_texts, 'ids': non_existing_ids} - if metadatas is not None: - args['metadatas'] = non_existing_metas - self.collection.add(**args) - - # Create a dictionary that maps each ID to its context and starting index - new_info = { - id_: {'text_with_context': context, 'start_index': start_index} - for id_, context, start_index in zip(new_ids, texts_with_context, starting_indices) - } - - self.id_to_info.update(new_info) - self.ids.extend(new_ids) - - - def _split_texts_by_cache_hit(self, texts: list[str], new_ids: list[str], metadatas: list[dict]): - existing_texts, non_existing_texts = [], [] - existing_embeddings = [] - existing_ids, non_existing_ids = [], [] - existing_metas, non_existing_metas = [], [] - - for i, text in enumerate(texts): - id_ = new_ids[i] - metadata = metadatas[i] if metadatas is not None else None - embedding = self.embeddings_cache.get(text) - if embedding: - existing_texts.append(text) - existing_embeddings.append(embedding) - existing_ids.append(id_) - existing_metas.append(metadata) - else: - non_existing_texts.append(text) - non_existing_ids.append(id_) - non_existing_metas.append(metadata) - - return (existing_texts, existing_embeddings, existing_ids, existing_metas), \ - (non_existing_texts, non_existing_ids, non_existing_metas) - - - def _get_new_ids(self, num_new_ids: int): - if self.ids: - max_existing_id = max(int(id_) for id_ in self.ids) - else: - max_existing_id = -1 - - return [str(i + max_existing_id + 1) for i in range(num_new_ids)] - - - def _find_min_max_start_index(self): - max_index, min_index = 0, float('inf') - for _, val in self.id_to_info.items(): - if val['start_index'] > max_index: - max_index = val['start_index'] - if val['start_index'] < min_index: - min_index = val['start_index'] - return min_index, max_index - - - # NB: Does not make sense to weigh excerpts from different documents. - # But let's say that's the user's problem. 
Perfect world scenario: - # Apply time weighing to different documents. For each document, then, add - # separate time weighing. - def _apply_sigmoid_time_weighing(self, infos: list[Info], document_len: int, time_steepness: float, time_power: float): - sigmoid = lambda x: 1 / (1 + np.exp(-x)) - - weights = sigmoid(time_steepness * np.linspace(-10, 10, document_len)) - - # Scale to [0,time_power] and shift it up to [1-time_power, 1] - weights = weights - min(weights) - weights = weights * (time_power / max(weights)) - weights = weights + (1 - time_power) - - # Reverse the weights - weights = weights[::-1] - - for info in infos: - index = info.start_index - info.distance *= weights[index] - - - def _filter_outliers_by_median_distance(self, infos: list[Info], significant_level: float): - # Ensure there are infos to filter - if not infos: - return [] - - # Find info with minimum distance - min_info = min(infos, key=lambda x: x.distance) - - # Calculate median distance among infos - median_distance = np.median([inf.distance for inf in infos]) - - # Filter out infos that have a distance significantly greater than the median - filtered_infos = [inf for inf in infos if inf.distance <= significant_level * median_distance] - - # Always include the info with minimum distance - if min_info not in filtered_infos: - filtered_infos.append(min_info) - - return filtered_infos - - - def _merge_infos(self, infos: list[Info]): - merged_infos = [] - current_info = infos[0] - - for next_info in infos[1:]: - merged = current_info.merge_with(next_info) - if merged is not None: - current_info = merged - else: - merged_infos.append(current_info) - current_info = next_info - - merged_infos.append(current_info) - return merged_infos - - - # Main function for retrieving chunks by distance. It performs merging, time weighing, and mean filtering. 
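    # Per query string, the function below queries the Chroma collection, wraps each hit in an
    # Info object, applies the sigmoid time weighting over the document span, and drops hits whose
    # distance is far above the median; the surviving excerpts from all queries are then sorted by
    # start index and merged where they overlap. Note that the time weights decrease with start
    # index after the reversal above, so chunks later in the document get their distances scaled
    # down (ranked as closer) while earlier chunks keep close to their full distance.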
- def _get_documents_ids_distances(self, search_strings: list[str], n_results: int): - n_results = min(len(self.ids), n_results) - if n_results == 0: - return [], [], [] - - if isinstance(search_strings, str): - search_strings = [search_strings] - - infos = [] - min_start_index, max_start_index = self._find_min_max_start_index() - - for search_string in search_strings: - result = self.collection.query(query_texts=search_string, n_results=math.ceil(n_results / len(search_strings)), include=['distances']) - curr_infos = [Info(start_index=self.id_to_info[id]['start_index'], - text_with_context=self.id_to_info[id]['text_with_context'], - distance=distance, id=id) - for id, distance in zip(result['ids'][0], result['distances'][0])] - - self._apply_sigmoid_time_weighing(infos=curr_infos, document_len=max_start_index - min_start_index + 1, time_steepness=parameters.get_time_steepness(), time_power=parameters.get_time_power()) - curr_infos = self._filter_outliers_by_median_distance(curr_infos, parameters.get_significant_level()) - infos.extend(curr_infos) - - infos.sort(key=lambda x: x.start_index) - infos = self._merge_infos(infos) - - texts_with_context = [inf.text_with_context for inf in infos] - ids = [inf.id for inf in infos] - distances = [inf.distance for inf in infos] - - return texts_with_context, ids, distances - - - # Get chunks by similarity - def get(self, search_strings: list[str], n_results: int) -> list[str]: - with self.lock: - documents, _, _ = self._get_documents_ids_distances(search_strings, n_results) - return documents - - - # Get ids by similarity - def get_ids(self, search_strings: list[str], n_results: int) -> list[str]: - with self.lock: - _, ids, _ = self._get_documents_ids_distances(search_strings, n_results) - return ids - - - # Cutoff token count - def _get_documents_up_to_token_count(self, documents: list[str], max_token_count: int): - # TODO: Move to caller; We add delimiters there which might go over the limit. - current_token_count = 0 - return_documents = [] - - for doc in documents: - doc_tokens = encode(doc)[0] - doc_token_count = len(doc_tokens) - if current_token_count + doc_token_count > max_token_count: - # If adding this document would exceed the max token count, - # truncate the document to fit within the limit. - remaining_tokens = max_token_count - current_token_count - - truncated_doc = decode(doc_tokens[:remaining_tokens], skip_special_tokens=True) - return_documents.append(truncated_doc) - break - else: - return_documents.append(doc) - current_token_count += doc_token_count - - return return_documents - - - # Get chunks by similarity and then sort by ids - def get_sorted_by_ids(self, search_strings: list[str], n_results: int, max_token_count: int) -> list[str]: - with self.lock: - documents, ids, _ = self._get_documents_ids_distances(search_strings, n_results) - sorted_docs = [x for _, x in sorted(zip(ids, documents))] - - return self._get_documents_up_to_token_count(sorted_docs, max_token_count) - - - # Get chunks by similarity and then sort by distance (lowest distance is last). - def get_sorted_by_dist(self, search_strings: list[str], n_results: int, max_token_count: int) -> list[str]: - with self.lock: - documents, _, distances = self._get_documents_ids_distances(search_strings, n_results) - sorted_docs = [doc for doc, _ in sorted(zip(documents, distances), key=lambda x: x[1])] # sorted lowest -> highest - - # If a document is truncated or competely skipped, it would be with high distance. 
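        # (sorted_docs is ordered best match first, i.e. lowest distance first, so the token budget
        # below is spent on the strongest matches and the final reverse() leaves the best match
        # last in the returned list.)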
- return_documents = self._get_documents_up_to_token_count(sorted_docs, max_token_count) - return_documents.reverse() # highest -> lowest - - return return_documents - - - def delete(self, ids_to_delete: list[str], where: dict): - with self.lock: - ids_to_delete = self.collection.get(ids=ids_to_delete, where=where)['ids'] - self.collection.delete(ids=ids_to_delete, where=where) - - # Remove the deleted ids from self.ids and self.id_to_info - ids_set = set(ids_to_delete) - self.ids = [id_ for id_ in self.ids if id_ not in ids_set] - for id_ in ids_to_delete: - self.id_to_info.pop(id_, None) - - logger.info(f'Successfully deleted {len(ids_to_delete)} records from chromaDB.') - - - def clear(self): - with self.lock: - self.chroma_client.reset() - self.collection = self.chroma_client.create_collection("context", embedding_function=self.embedder.embed) - self.ids = [] - self.id_to_info = {} - - logger.info('Successfully cleared all records and reset chromaDB.') - - -class SentenceTransformerEmbedder(Embedder): - def __init__(self) -> None: - logger.debug('Creating Sentence Embedder...') - self.model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2") - self.embed = self.model.encode - - -def make_collector(): - return ChromaCollector(SentenceTransformerEmbedder()) \ No newline at end of file diff --git a/spaces/lilucheng/sourcedetection/common/utils/misc.py b/spaces/lilucheng/sourcedetection/common/utils/misc.py deleted file mode 100644 index c15b9d203169cc55bc15e2c81349d1d6fe923a24..0000000000000000000000000000000000000000 --- a/spaces/lilucheng/sourcedetection/common/utils/misc.py +++ /dev/null @@ -1,37 +0,0 @@ -from tqdm import tqdm - -#-------------------------------------------------------- - -# just a list of a mapping -# -apply = lambda f, a: list(map(f, a)) - -def apply_inplace(f, a, show_progress = False): - - idxs = range(len(a)) - - if show_progress: - idxs = tqdm(idxs) - - for k in idxs: - a[k] = f(a[k]) - -# 'safe cast': cast `val` to type `T` if possible, otherwise return `None` -# -def cast(val, T): - - try: - return T(val) - except: - return None - -# return a 'standardized' sting length based on the actual length `s_length` -# -def standardized_string_length(s_length): - - for std_length in [256, 65535]: - - if s_length <= std_length: - return std_length - - raise Exception(f'String too long (len = {s_length})') \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Office 365 Product Key 2019 Fix Cracked.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Office 365 Product Key 2019 Fix Cracked.md deleted file mode 100644 index 678b13bab38697421c42a0c82f43d9804c64eb68..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Office 365 Product Key 2019 Fix Cracked.md +++ /dev/null @@ -1,8 +0,0 @@ -

        Microsoft Office 365 Product Key 2019 Cracked


        Download Zip: https://bytlly.com/2uGvOZ



        - -Feb 6, 2022 - Office 2019 is Microsoft's recently released office automation software, giving you expert tools for document processing. -Office 2019 offers you a new, modern and powerful look and feel, improved security, and a more efficient and thoughtful workflow that can be tailored to your needs. -Like the rest of the software, Office 2019 has been released with new features that make it more sophisticated, but nevertheless easy to understand. -Office 2019 is a suite of software that contains the essential tools used to work with documents in the office. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/lithiumice/SadTalker/src/utils/paste_pic.py b/spaces/lithiumice/SadTalker/src/utils/paste_pic.py deleted file mode 100644 index a05a55caeea190d2af32f2341e3a96d1fc417b09..0000000000000000000000000000000000000000 --- a/spaces/lithiumice/SadTalker/src/utils/paste_pic.py +++ /dev/null @@ -1,50 +0,0 @@ -import cv2, os -import numpy as np -from tqdm import tqdm -import uuid - -from src.utils.videoio import save_video_with_watermark - -def paste_pic(video_path, pic_path, crop_info, new_audio_path, full_video_path): - - full_img = cv2.imread(pic_path) - frame_h = full_img.shape[0] - frame_w = full_img.shape[1] - - video_stream = cv2.VideoCapture(video_path) - fps = video_stream.get(cv2.CAP_PROP_FPS) - crop_frames = [] - while 1: - still_reading, frame = video_stream.read() - if not still_reading: - video_stream.release() - break - crop_frames.append(frame) - - if len(crop_info) != 3: - print("you didn't crop the image") - return - else: - r_w, r_h = crop_info[0] - clx, cly, crx, cry = crop_info[1] - lx, ly, rx, ry = crop_info[2] - lx, ly, rx, ry = int(lx), int(ly), int(rx), int(ry) - # oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx - # oy1, oy2, ox1, ox2 = cly+ly, cly+ry, clx+lx, clx+rx - oy1, oy2, ox1, ox2 = cly, cry, clx, crx - - - tmp_path = str(uuid.uuid4())+'.mp4' - out_tmp = cv2.VideoWriter(tmp_path, cv2.VideoWriter_fourcc(*'MP4V'), fps, (frame_w, frame_h)) - for crop_frame in tqdm(crop_frames, 'seamlessClone:'): - p = cv2.resize(crop_frame.astype(np.uint8), (crx-clx, cry - cly)) - - mask = 255*np.ones(p.shape, p.dtype) - location = ((ox1+ox2) // 2, (oy1+oy2) // 2) - gen_img = cv2.seamlessClone(p, full_img, mask, location, cv2.NORMAL_CLONE) - out_tmp.write(gen_img) - - out_tmp.release() - - save_video_with_watermark(tmp_path, new_audio_path, full_video_path) - os.remove(tmp_path) diff --git a/spaces/ljh1212/ljhai/README.md b/spaces/ljh1212/ljhai/README.md deleted file mode 100644 index 5f78a160ab8fa766e47eb27e6450fd525803450d..0000000000000000000000000000000000000000 --- a/spaces/ljh1212/ljhai/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Ljhai -emoji: 🔥 -colorFrom: yellow -colorTo: yellow -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/logasja/LowKey/align/first_stage.py b/spaces/logasja/LowKey/align/first_stage.py deleted file mode 100644 index 0781bdc5870832d120a2108b9e2f333dac6e4566..0000000000000000000000000000000000000000 --- a/spaces/logasja/LowKey/align/first_stage.py +++ /dev/null @@ -1,97 +0,0 @@ -import torch -from torch.autograd import Variable -import math -from PIL import Image -import numpy as np -from align.box_utils import nms, _preprocess - - -def run_first_stage(image, net, scale, threshold): - """Run P-Net, generate bounding boxes, and do NMS. - - Arguments: - image: an instance of PIL.Image. - net: an instance of pytorch's nn.Module, P-Net. - scale: a float number, - scale width and height of the image by this number. - threshold: a float number, - threshold on the probability of a face when generating - bounding boxes from predictions of the net. - - Returns: - a float numpy array of shape [n_boxes, 9], - bounding boxes with scores and offsets (4 + 1 + 4). 
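        (each row is [x1, y1, x2, y2, score, tx1, ty1, tx2, ty2], with the box
        coordinates already rescaled back to the original image size).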
- """ - - # scale the image and convert it to a float array - width, height = image.size - sw, sh = math.ceil(width*scale), math.ceil(height*scale) - img = image.resize((sw, sh), Image.BILINEAR) - img = np.asarray(img, 'float32') - - img = Variable(torch.FloatTensor(_preprocess(img)), volatile = True) - output = net(img) - probs = output[1].data.numpy()[0, 1, :, :] - offsets = output[0].data.numpy() - # probs: probability of a face at each sliding window - # offsets: transformations to true bounding boxes - - boxes = _generate_bboxes(probs, offsets, scale, threshold) - if len(boxes) == 0: - return None - - keep = nms(boxes[:, 0:5], overlap_threshold = 0.5) - return boxes[keep] - - -def _generate_bboxes(probs, offsets, scale, threshold): - """Generate bounding boxes at places - where there is probably a face. - - Arguments: - probs: a float numpy array of shape [n, m]. - offsets: a float numpy array of shape [1, 4, n, m]. - scale: a float number, - width and height of the image were scaled by this number. - threshold: a float number. - - Returns: - a float numpy array of shape [n_boxes, 9] - """ - - # applying P-Net is equivalent, in some sense, to - # moving 12x12 window with stride 2 - stride = 2 - cell_size = 12 - - # indices of boxes where there is probably a face - inds = np.where(probs > threshold) - - if inds[0].size == 0: - return np.array([]) - - # transformations of bounding boxes - tx1, ty1, tx2, ty2 = [offsets[0, i, inds[0], inds[1]] for i in range(4)] - # they are defined as: - # w = x2 - x1 + 1 - # h = y2 - y1 + 1 - # x1_true = x1 + tx1*w - # x2_true = x2 + tx2*w - # y1_true = y1 + ty1*h - # y2_true = y2 + ty2*h - - offsets = np.array([tx1, ty1, tx2, ty2]) - score = probs[inds[0], inds[1]] - - # P-Net is applied to scaled images - # so we need to rescale bounding boxes back - bounding_boxes = np.vstack([ - np.round((stride*inds[1] + 1.0)/scale), - np.round((stride*inds[0] + 1.0)/scale), - np.round((stride*inds[1] + 1.0 + cell_size)/scale), - np.round((stride*inds[0] + 1.0 + cell_size)/scale), - score, offsets - ]) - # why one is added? - - return bounding_boxes.T diff --git a/spaces/ltgoslo/ssa-perin/mtool/codec/pmb.py b/spaces/ltgoslo/ssa-perin/mtool/codec/pmb.py deleted file mode 100644 index e4abe8814534dbde4d18f06951c658fd31a52a73..0000000000000000000000000000000000000000 --- a/spaces/ltgoslo/ssa-perin/mtool/codec/pmb.py +++ /dev/null @@ -1,219 +0,0 @@ -from operator import itemgetter; -import os.path; -import re; -import sys; - -from graph import Graph; - -conditions = {"APX": "≈", "EQU": "=", "LEQ": "≤", "LES": "<", "NEQ": "≠", - "SXN": "«", "SXP": "»", "SXY": "≖", "SZN": "\\", "SZP": "/", - "STI": "⊍", "STO": "⊍", "SY1": "∥", "SY2": "⚮", - "TAB": "⋈", "TPR": "≺"}; - -# -# in parsing the clauses, patterns are ordered by specificity -# -id_matcher = re.compile(r'^%%% bin/boxer --input (?:[^/]+/)?p([0-9]+)/d([0-9]+)/'); -referent_matcher = re.compile(r'^(b[0-9]+) REF ([enpstx][0-9]+) +%(?: .* \[([0-9]+)\.\.\.([0-9]+)\])?$'); -condition_matcher = re.compile(r'^(b[0-9]+) (EQU|NEQ|APX|LE[SQ]|TPR|TAB|S[ZX][PN]|ST[IO]|SY[12]|SXY) ([enpstx][0-9]+|"[^"]+") ([enpstx][0-9]+|"[^"]+") +%(?: .* \[([0-9]+)\.\.\.([0-9]+)\])?$'); -role_matcher = re.compile(r'^(b[0-9]+) ([^ ]+) ([enpstx][0-9]+) ([enpstx][0-9]+|"[^"]+") +%(?: .* \[([0-9]+)\.\.\.([0-9]+)\])?$'); -concept_matcher = re.compile(r'^(b[0-9]+) ([^ ]+) ("[^ ]+") ([enpstx][0-9]+) +%(?: .* \[([0-9]+)\.\.\.([0-9]+)\])?$'); -discourse_matcher = re.compile(r'^(b[0-9]+) ([^ ]+) (b[0-9]+)(?: (b[0-9]+))? 
+%(?: .* \[[0-9]+\.\.\.[0-9]+\])?$'); -empty_matcher = re.compile(r'^ *%(?: .* \[[0-9]+\.\.\.[0-9]+\])?$'); - -def read(fp, text = None, full = False, reify = False, trace = 0, strict = 0): - - def finish(graph, mapping, finis, scopes): - if reify: - for box, referent, node in finis: - # - # in full reification mode, or when the corresponding box cannot be - # easily inferred for a reified role (including when the source node is - # a constant, as e.g. in a 'future' temporal discourse conditions), - # add an explicit box membership edge. - # - if full \ - or referent[0] == referent[-1] == "\"" \ - or box not in scopes[referent]: - graph.add_edge(mapping[box].id, node.id, "∈"); - else: - for referent in scopes: - if len(scopes[referent]) > 1: - print("pbm.read(): [graph #{}] stray referent ‘{}’ in boxes {}." - "".format(graph.id, referent, scopes[referent]), - file=sys.stderr); - # - # after the fact, mark all boxes that structurally are roots as top nodes. - # - for node in graph.nodes: - if node.type == 0 and node.is_root(): node.is_top = True; - - graph = None; id = None; sentence = None; - mapping = dict(); scopes = dict(); finis = list(); - i = 0; - header = 3; - for line in fp: - line = line.rstrip(); i += 1; - if trace: print("{}: {}".format(i, line)); - # - # to support newline-separated concatenations of clause files (a format not - # used in the native PMB 3.0 release), - # - if len(line) == 0: - finish(graph, mapping, finis, scopes); - yield graph, None; - graph = None; id = None; - mapping = dict(); scopes = dict(); finis = list(); - header = 3; - continue; - # - # each block of clauses is preceded by three comment lines, which we use to - # extract the sentence identifier and underlying string. - # - if header: - if header == 3: pass; - elif header == 2: - match = id_matcher.match(line); - if match is None: - raise Exception("pbm.read(): " - "[line {}] missing identifier in ‘{}’; exit." - "".format(i, line)); - part, document = match.groups(); - id = "{:02d}{:04d}".format(int(part), int(document)); - elif header == 1: - if text is not None and id in text: sentence = text[id]; - else: sentence = line[5:-1]; - graph = Graph(id, flavor = 2, framework = "drg"); - graph.add_input(sentence); - header -= 1; - continue; - # - # from here onwards, we are looking at genuine, contentful clauses. from - # inspecting some of the files, it appears they are organized according to - # surface (reading) order, and we cannot assume that discourse referents - # are 'introduced' (in some box) prior to their first occurance in e.g. a - # role or concept clause. - # - anchor = None; - match = referent_matcher.match(line); - if match is not None: - box, referent, start, end = match.groups(); - if referent in scopes: - if strict and box not in scopes[referent] and reify: - raise Exception("pbm.read(): " - "[line {}] stray referent ‘{}’ in box ‘{}’ " - "(instead of ‘{}’); exit." 
- "".format(i, referent, box, scopes[referent])); - else: scopes[referent] = {box}; - if box not in mapping: mapping[box] = graph.add_node(type = 0); - if start is not None and end is not None: - anchor = {"from": int(start), "to": int(end)}; - if referent not in mapping: - mapping[referent] \ - = graph.add_node(anchors = [anchor] if anchor else None); - else: - node = mapping[referent]; - node.add_anchor(anchor); - graph.add_edge(mapping[box].id, mapping[referent].id, "∈"); - else: - match = condition_matcher.match(line); - if match is not None: - box, condition, source, target, start, end = match.groups(); - condition = conditions[condition]; - if source[0] == "\"" and source[-1] == "\"" and source not in mapping: - if start is not None and end is not None: - anchor = {"from": int(start), "to": int(end)}; - mapping[source] \ - = graph.add_node(label = source, - anchors = [anchor] if anchor else None); - elif source not in mapping: mapping[source] = graph.add_node(); - if target[0] == "\"" and target[-1] == "\"" and target not in mapping: - if start is not None and end is not None: - anchor = {"from": int(start), "to": int(end)}; - mapping[target] \ - = graph.add_node(label = target, - anchors = [anchor] if anchor else None); - elif target not in mapping: mapping[target] = graph.add_node(); - if reify: - if box not in mapping: mapping[box] = graph.add_node(type = 0); - node = graph.add_node(label = condition, type = 3); - finis.append((box, source, node)); - graph.add_edge(mapping[source].id, node.id, None); - graph.add_edge(node.id, mapping[target].id, None); - else: - if source in scopes: scopes[source].add(box); - else: scopes[source] = {box}; - graph.add_edge(mapping[source].id, mapping[target].id, condition); - else: - match = role_matcher.match(line); - if match is not None: - box, role, source, target, start, end = match.groups(); - if source not in mapping: mapping[source] = graph.add_node(); - if target[0] == "\"" and target[-1] == "\"" and target not in mapping: - if start is not None and end is not None: - anchor = {"from": int(start), "to": int(end)}; - mapping[target] \ - = graph.add_node(label = target, - anchors = [anchor] if anchor else None); - elif target not in mapping: mapping[target] = graph.add_node(); - if reify: - if box not in mapping: mapping[box] = graph.add_node(type = 0); - node = graph.add_node(label = role, type = 2); - finis.append((box, source, node)); - graph.add_edge(mapping[source].id, node.id, None); - graph.add_edge(node.id, mapping[target].id, None); - else: - if source in scopes: scopes[source].add(box); - else: scopes[source] = {box}; - graph.add_edge(mapping[source].id, mapping[target].id, role); - else: - match = concept_matcher.match(line); - if match is not None: - box, lemma, sense, referent, start, end = match.groups(); - if referent in scopes: - if strict and box not in scopes[referent] and reify: - raise Exception("pbm.read(): " - "[line {}] stray referent ‘{}’ in box ‘{}’ " - "(instead of ‘{}’); exit." - "".format(i, referent, box, scopes[referent])); - else: scopes[referent] = {box}; - if start is not None and end is not None: - anchor = {"from": int(start), "to": int(end)}; - if referent not in mapping: - mapping[referent] = node \ - = graph.add_node(anchors = [anchor] if anchor else None); - else: - node = mapping[referent]; - node.add_anchor(anchor); - if strict and node.label is not None: - raise Exception("pbm.read(): " - "[line {}] duplicate label ‘{}’ on referent ‘{}’ " - "(instead of ‘{}’); exit." 
- "".format(i, lemma, referent, node.label)); - node.label = lemma; - if sense[0] == sense[-1] == "\"": sense = sense[1:-1]; - node.set_property("sense", sense); - else: - match = discourse_matcher.match(line); - if match is not None: - top, relation, one, two = match.groups(); - if one not in mapping: mapping[one] = graph.add_node(type = 0); - if two is not None: - if trace > 1: print("ternary discourse relation"); - if two not in mapping: mapping[two] = graph.add_node(type = 0); - graph.add_edge(mapping[one].id, mapping[two].id, relation); - else: - if top not in mapping: mapping[top] = graph.add_node(type = 0); - graph.add_edge(mapping[top].id, mapping[one].id, relation); - elif empty_matcher.search(line) is None: - raise Exception("pmb.read(): [line {}] invalid clause ‘{}’." - "".format(i, line)); - # - # finally, as we reach an end of file (without an empty line terminating the - # preceding block of clauses, as is the standard format in PMB), finalize the - # graph and return it. - # - if graph is not None: - finish(graph, mapping, finis, scopes); - yield graph, None; - diff --git a/spaces/lunarflu/HF-QA-Demo-3/tests/discord_bot/client/test_utils.py b/spaces/lunarflu/HF-QA-Demo-3/tests/discord_bot/client/test_utils.py deleted file mode 100644 index effbac21e5f863d5bf17e16b45469ce2d22affa5..0000000000000000000000000000000000000000 --- a/spaces/lunarflu/HF-QA-Demo-3/tests/discord_bot/client/test_utils.py +++ /dev/null @@ -1,69 +0,0 @@ -import pytest -import os -from discord_bot.client.utils import ( \ - find_max_split_index, \ - find_max_split_index_from_sequence, \ - split_text_into_chunks -) - - -@pytest.fixture(scope='module') -def test_chunk() -> str: - return 't. , \n .' - - -@pytest.fixture(scope='module') -def test_text() -> str: - with open('tests/discord_bot/client/lorem_ipsum.txt', 'r') as f: - text = f.read() - assert text is not None, 'test text is empty' - return text - - -def test_find_max_splitting_index(test_chunk: str): - index = find_max_split_index(test_chunk, char='\n') - assert index == 6, 'index should be 6' - index = find_max_split_index(test_chunk, char='. ') - assert index == 3, 'index should be 3' - index = find_max_split_index(test_chunk, char='.') - assert index == 8, 'index should be 8' - - -def test_find_max_split_index_from_sequence(test_chunk: str): - index = find_max_split_index_from_sequence( - test_chunk, - split_characters=['\n'] - ) - assert index == 6, 'index should be 6' - index = find_max_split_index_from_sequence( - test_chunk, - split_characters=['.', ', ', '\n'] - ) - assert index == 8, 'index should be 8' - - -def test_split_text_into_chunks_with_split_characters(test_text: str): - max_chunk_size = 250 - chunks = split_text_into_chunks( - test_text, - split_characters=['. 
', ', ', '\n'], - min_size=20, - max_size=max_chunk_size - ) - for chunk in chunks: - assert len(chunk) > 0, 'Chunk length is zero' - assert len(chunk) <= max_chunk_size, 'Chunk length exceeds maximum limit' - - -def test_split_text_into_chunks_without_split_characters(): - test_text = 'a' * 1000 - max_chunk_size = 250 - chunks = split_text_into_chunks( - test_text, - split_characters=[], - min_size=20, - max_size=max_chunk_size - ) - for chunk in chunks: - assert len(chunk) == max_chunk_size, \ - 'Chunk length is too small' diff --git a/spaces/lunarring/latentblending/ldm/data/util.py b/spaces/lunarring/latentblending/ldm/data/util.py deleted file mode 100644 index 5b60ceb2349e3bd7900ff325740e2022d2903b1c..0000000000000000000000000000000000000000 --- a/spaces/lunarring/latentblending/ldm/data/util.py +++ /dev/null @@ -1,24 +0,0 @@ -import torch - -from ldm.modules.midas.api import load_midas_transform - - -class AddMiDaS(object): - def __init__(self, model_type): - super().__init__() - self.transform = load_midas_transform(model_type) - - def pt2np(self, x): - x = ((x + 1.0) * .5).detach().cpu().numpy() - return x - - def np2pt(self, x): - x = torch.from_numpy(x) * 2 - 1. - return x - - def __call__(self, sample): - # sample['jpg'] is tensor hwc in [-1, 1] at this point - x = self.pt2np(sample['jpg']) - x = self.transform({"image": x})["image"] - sample['midas_in'] = x - return sample \ No newline at end of file diff --git a/spaces/ma-xu/LIVE/pybind11/setup.py b/spaces/ma-xu/LIVE/pybind11/setup.py deleted file mode 100644 index 577a6b6c37c9d284b0d5b7453de62aaa71c50869..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/setup.py +++ /dev/null @@ -1,130 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - -# Setup script for PyPI; use CMakeFile.txt to build extension modules - -from setuptools import setup -from distutils.command.install_headers import install_headers -from distutils.command.build_py import build_py -from pybind11 import __version__ -import os - -package_data = [ - 'include/pybind11/detail/class.h', - 'include/pybind11/detail/common.h', - 'include/pybind11/detail/descr.h', - 'include/pybind11/detail/init.h', - 'include/pybind11/detail/internals.h', - 'include/pybind11/detail/typeid.h', - 'include/pybind11/attr.h', - 'include/pybind11/buffer_info.h', - 'include/pybind11/cast.h', - 'include/pybind11/chrono.h', - 'include/pybind11/common.h', - 'include/pybind11/complex.h', - 'include/pybind11/eigen.h', - 'include/pybind11/embed.h', - 'include/pybind11/eval.h', - 'include/pybind11/functional.h', - 'include/pybind11/iostream.h', - 'include/pybind11/numpy.h', - 'include/pybind11/operators.h', - 'include/pybind11/options.h', - 'include/pybind11/pybind11.h', - 'include/pybind11/pytypes.h', - 'include/pybind11/stl.h', - 'include/pybind11/stl_bind.h', -] - -# Prevent installation of pybind11 headers by setting -# PYBIND11_USE_CMAKE. 
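# (Any non-empty value works, e.g. `PYBIND11_USE_CMAKE=1 pip install .`; the check below then
# leaves the `headers` list empty so the install_headers step has nothing to copy.)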
-if os.environ.get('PYBIND11_USE_CMAKE'): - headers = [] -else: - headers = package_data - - -class InstallHeaders(install_headers): - """Use custom header installer because the default one flattens subdirectories""" - def run(self): - if not self.distribution.headers: - return - - for header in self.distribution.headers: - subdir = os.path.dirname(os.path.relpath(header, 'include/pybind11')) - install_dir = os.path.join(self.install_dir, subdir) - self.mkpath(install_dir) - - (out, _) = self.copy_file(header, install_dir) - self.outfiles.append(out) - - -# Install the headers inside the package as well -class BuildPy(build_py): - def build_package_data(self): - build_py.build_package_data(self) - for header in package_data: - target = os.path.join(self.build_lib, 'pybind11', header) - self.mkpath(os.path.dirname(target)) - self.copy_file(header, target, preserve_mode=False) - - def get_outputs(self, include_bytecode=1): - outputs = build_py.get_outputs(self, include_bytecode=include_bytecode) - for header in package_data: - target = os.path.join(self.build_lib, 'pybind11', header) - outputs.append(target) - return outputs - - -setup( - name='pybind11', - version=__version__, - description='Seamless operability between C++11 and Python', - author='Wenzel Jakob', - author_email='wenzel.jakob@epfl.ch', - url='https://github.com/pybind/pybind11', - download_url='https://github.com/pybind/pybind11/tarball/v' + __version__, - packages=['pybind11'], - license='BSD', - headers=headers, - zip_safe=False, - cmdclass=dict(install_headers=InstallHeaders, build_py=BuildPy), - classifiers=[ - 'Development Status :: 5 - Production/Stable', - 'Intended Audience :: Developers', - 'Topic :: Software Development :: Libraries :: Python Modules', - 'Topic :: Utilities', - 'Programming Language :: C++', - 'Programming Language :: Python :: 2.7', - 'Programming Language :: Python :: 3', - 'Programming Language :: Python :: 3.2', - 'Programming Language :: Python :: 3.3', - 'Programming Language :: Python :: 3.4', - 'Programming Language :: Python :: 3.5', - 'Programming Language :: Python :: 3.6', - 'License :: OSI Approved :: BSD License' - ], - keywords='C++11, Python bindings', - long_description="""pybind11 is a lightweight header-only library that -exposes C++ types in Python and vice versa, mainly to create Python bindings of -existing C++ code. Its goals and syntax are similar to the excellent -Boost.Python by David Abrahams: to minimize boilerplate code in traditional -extension modules by inferring type information using compile-time -introspection. - -The main issue with Boost.Python-and the reason for creating such a similar -project-is Boost. Boost is an enormously large and complex suite of utility -libraries that works with almost every C++ compiler in existence. This -compatibility has its cost: arcane template tricks and workarounds are -necessary to support the oldest and buggiest of compiler specimens. Now that -C++11-compatible compilers are widely available, this heavy machinery has -become an excessively large and unnecessary dependency. - -Think of this library as a tiny self-contained version of Boost.Python with -everything stripped away that isn't relevant for binding generation. Without -comments, the core header files only require ~4K lines of code and depend on -Python (2.7 or 3.x, or PyPy2.7 >= 5.7) and the C++ standard library. 
This -compact implementation was possible thanks to some of the new C++11 language -features (specifically: tuples, lambda functions and variadic templates). Since -its creation, this library has grown beyond Boost.Python in many ways, leading -to dramatically simpler binding code in many common situations.""") diff --git a/spaces/ma-xu/LIVE/pybind11/tests/test_kwargs_and_defaults.py b/spaces/ma-xu/LIVE/pybind11/tests/test_kwargs_and_defaults.py deleted file mode 100644 index 5257e0cd3061707f0dd1b79de54a0c6cdae81cd1..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/tests/test_kwargs_and_defaults.py +++ /dev/null @@ -1,192 +0,0 @@ -# -*- coding: utf-8 -*- -import pytest - -import env # noqa: F401 - -from pybind11_tests import kwargs_and_defaults as m - - -def test_function_signatures(doc): - assert doc(m.kw_func0) == "kw_func0(arg0: int, arg1: int) -> str" - assert doc(m.kw_func1) == "kw_func1(x: int, y: int) -> str" - assert doc(m.kw_func2) == "kw_func2(x: int = 100, y: int = 200) -> str" - assert doc(m.kw_func3) == "kw_func3(data: str = 'Hello world!') -> None" - assert doc(m.kw_func4) == "kw_func4(myList: List[int] = [13, 17]) -> str" - assert doc(m.kw_func_udl) == "kw_func_udl(x: int, y: int = 300) -> str" - assert doc(m.kw_func_udl_z) == "kw_func_udl_z(x: int, y: int = 0) -> str" - assert doc(m.args_function) == "args_function(*args) -> tuple" - assert doc(m.args_kwargs_function) == "args_kwargs_function(*args, **kwargs) -> tuple" - assert doc(m.KWClass.foo0) == \ - "foo0(self: m.kwargs_and_defaults.KWClass, arg0: int, arg1: float) -> None" - assert doc(m.KWClass.foo1) == \ - "foo1(self: m.kwargs_and_defaults.KWClass, x: int, y: float) -> None" - - -def test_named_arguments(msg): - assert m.kw_func0(5, 10) == "x=5, y=10" - - assert m.kw_func1(5, 10) == "x=5, y=10" - assert m.kw_func1(5, y=10) == "x=5, y=10" - assert m.kw_func1(y=10, x=5) == "x=5, y=10" - - assert m.kw_func2() == "x=100, y=200" - assert m.kw_func2(5) == "x=5, y=200" - assert m.kw_func2(x=5) == "x=5, y=200" - assert m.kw_func2(y=10) == "x=100, y=10" - assert m.kw_func2(5, 10) == "x=5, y=10" - assert m.kw_func2(x=5, y=10) == "x=5, y=10" - - with pytest.raises(TypeError) as excinfo: - # noinspection PyArgumentList - m.kw_func2(x=5, y=10, z=12) - assert excinfo.match( - r'(?s)^kw_func2\(\): incompatible.*Invoked with: kwargs: ((x=5|y=10|z=12)(, |$))' + '{3}$') - - assert m.kw_func4() == "{13 17}" - assert m.kw_func4(myList=[1, 2, 3]) == "{1 2 3}" - - assert m.kw_func_udl(x=5, y=10) == "x=5, y=10" - assert m.kw_func_udl_z(x=5) == "x=5, y=0" - - -def test_arg_and_kwargs(): - args = 'arg1_value', 'arg2_value', 3 - assert m.args_function(*args) == args - - args = 'a1', 'a2' - kwargs = dict(arg3='a3', arg4=4) - assert m.args_kwargs_function(*args, **kwargs) == (args, kwargs) - - -def test_mixed_args_and_kwargs(msg): - mpa = m.mixed_plus_args - mpk = m.mixed_plus_kwargs - mpak = m.mixed_plus_args_kwargs - mpakd = m.mixed_plus_args_kwargs_defaults - - assert mpa(1, 2.5, 4, 99.5, None) == (1, 2.5, (4, 99.5, None)) - assert mpa(1, 2.5) == (1, 2.5, ()) - with pytest.raises(TypeError) as excinfo: - assert mpa(1) - assert msg(excinfo.value) == """ - mixed_plus_args(): incompatible function arguments. The following argument types are supported: - 1. (arg0: int, arg1: float, *args) -> tuple - - Invoked with: 1 - """ # noqa: E501 line too long - with pytest.raises(TypeError) as excinfo: - assert mpa() - assert msg(excinfo.value) == """ - mixed_plus_args(): incompatible function arguments. 
The following argument types are supported: - 1. (arg0: int, arg1: float, *args) -> tuple - - Invoked with: - """ # noqa: E501 line too long - - assert mpk(-2, 3.5, pi=3.14159, e=2.71828) == (-2, 3.5, {'e': 2.71828, 'pi': 3.14159}) - assert mpak(7, 7.7, 7.77, 7.777, 7.7777, minusseven=-7) == ( - 7, 7.7, (7.77, 7.777, 7.7777), {'minusseven': -7}) - assert mpakd() == (1, 3.14159, (), {}) - assert mpakd(3) == (3, 3.14159, (), {}) - assert mpakd(j=2.71828) == (1, 2.71828, (), {}) - assert mpakd(k=42) == (1, 3.14159, (), {'k': 42}) - assert mpakd(1, 1, 2, 3, 5, 8, then=13, followedby=21) == ( - 1, 1, (2, 3, 5, 8), {'then': 13, 'followedby': 21}) - # Arguments specified both positionally and via kwargs should fail: - with pytest.raises(TypeError) as excinfo: - assert mpakd(1, i=1) - assert msg(excinfo.value) == """ - mixed_plus_args_kwargs_defaults(): incompatible function arguments. The following argument types are supported: - 1. (i: int = 1, j: float = 3.14159, *args, **kwargs) -> tuple - - Invoked with: 1; kwargs: i=1 - """ # noqa: E501 line too long - with pytest.raises(TypeError) as excinfo: - assert mpakd(1, 2, j=1) - assert msg(excinfo.value) == """ - mixed_plus_args_kwargs_defaults(): incompatible function arguments. The following argument types are supported: - 1. (i: int = 1, j: float = 3.14159, *args, **kwargs) -> tuple - - Invoked with: 1, 2; kwargs: j=1 - """ # noqa: E501 line too long - - -def test_keyword_only_args(msg): - assert m.kwonly_all(i=1, j=2) == (1, 2) - assert m.kwonly_all(j=1, i=2) == (2, 1) - - with pytest.raises(TypeError) as excinfo: - assert m.kwonly_all(i=1) == (1,) - assert "incompatible function arguments" in str(excinfo.value) - - with pytest.raises(TypeError) as excinfo: - assert m.kwonly_all(1, 2) == (1, 2) - assert "incompatible function arguments" in str(excinfo.value) - - assert m.kwonly_some(1, k=3, j=2) == (1, 2, 3) - - assert m.kwonly_with_defaults(z=8) == (3, 4, 5, 8) - assert m.kwonly_with_defaults(2, z=8) == (2, 4, 5, 8) - assert m.kwonly_with_defaults(2, j=7, k=8, z=9) == (2, 7, 8, 9) - assert m.kwonly_with_defaults(2, 7, z=9, k=8) == (2, 7, 8, 9) - - assert m.kwonly_mixed(1, j=2) == (1, 2) - assert m.kwonly_mixed(j=2, i=3) == (3, 2) - assert m.kwonly_mixed(i=2, j=3) == (2, 3) - - assert m.kwonly_plus_more(4, 5, k=6, extra=7) == (4, 5, 6, {'extra': 7}) - assert m.kwonly_plus_more(3, k=5, j=4, extra=6) == (3, 4, 5, {'extra': 6}) - assert m.kwonly_plus_more(2, k=3, extra=4) == (2, -1, 3, {'extra': 4}) - - with pytest.raises(TypeError) as excinfo: - assert m.kwonly_mixed(i=1) == (1,) - assert "incompatible function arguments" in str(excinfo.value) - - with pytest.raises(RuntimeError) as excinfo: - m.register_invalid_kwonly(m) - assert msg(excinfo.value) == """ - arg(): cannot specify an unnamed argument after an kwonly() annotation - """ - - -@pytest.mark.xfail("env.PYPY and env.PY2", reason="PyPy2 doesn't double count") -def test_args_refcount(): - """Issue/PR #1216 - py::args elements get double-inc_ref()ed when combined with regular - arguments""" - refcount = m.arg_refcount_h - - myval = 54321 - expected = refcount(myval) - assert m.arg_refcount_h(myval) == expected - assert m.arg_refcount_o(myval) == expected + 1 - assert m.arg_refcount_h(myval) == expected - assert refcount(myval) == expected - - assert m.mixed_plus_args(1, 2.0, "a", myval) == (1, 2.0, ("a", myval)) - assert refcount(myval) == expected - - assert m.mixed_plus_kwargs(3, 4.0, a=1, b=myval) == (3, 4.0, {"a": 1, "b": myval}) - assert refcount(myval) == expected - - assert 
m.args_function(-1, myval) == (-1, myval) - assert refcount(myval) == expected - - assert m.mixed_plus_args_kwargs(5, 6.0, myval, a=myval) == (5, 6.0, (myval,), {"a": myval}) - assert refcount(myval) == expected - - assert m.args_kwargs_function(7, 8, myval, a=1, b=myval) == \ - ((7, 8, myval), {"a": 1, "b": myval}) - assert refcount(myval) == expected - - exp3 = refcount(myval, myval, myval) - assert m.args_refcount(myval, myval, myval) == (exp3, exp3, exp3) - assert refcount(myval) == expected - - # This function takes the first arg as a `py::object` and the rest as a `py::args`. Unlike the - # previous case, when we have both positional and `py::args` we need to construct a new tuple - # for the `py::args`; in the previous case, we could simply inc_ref and pass on Python's input - # tuple without having to inc_ref the individual elements, but here we can't, hence the extra - # refs. - assert m.mixed_args_refcount(myval, myval, myval) == (exp3 + 3, exp3 + 3, exp3 + 3) - - assert m.class_default_argument() == "" diff --git a/spaces/ma-xu/LIVE/pybind11/tools/FindEigen3.cmake b/spaces/ma-xu/LIVE/pybind11/tools/FindEigen3.cmake deleted file mode 100644 index 98ab43d9e62e293c0c87e44b6f325579991e8732..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/tools/FindEigen3.cmake +++ /dev/null @@ -1,83 +0,0 @@ -# - Try to find Eigen3 lib -# -# This module supports requiring a minimum version, e.g. you can do -# find_package(Eigen3 3.1.2) -# to require version 3.1.2 or newer of Eigen3. -# -# Once done this will define -# -# EIGEN3_FOUND - system has eigen lib with correct version -# EIGEN3_INCLUDE_DIR - the eigen include directory -# EIGEN3_VERSION - eigen version - -# Copyright (c) 2006, 2007 Montel Laurent, -# Copyright (c) 2008, 2009 Gael Guennebaud, -# Copyright (c) 2009 Benoit Jacob -# Redistribution and use is allowed according to the terms of the 2-clause BSD license. 
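# (When find_package(Eigen3) is called without an explicit version, the block below fills in a
# default minimum of 2.91.0, so in practice any Eigen 3.x installation passes the version check.)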
- -if(NOT Eigen3_FIND_VERSION) - if(NOT Eigen3_FIND_VERSION_MAJOR) - set(Eigen3_FIND_VERSION_MAJOR 2) - endif(NOT Eigen3_FIND_VERSION_MAJOR) - if(NOT Eigen3_FIND_VERSION_MINOR) - set(Eigen3_FIND_VERSION_MINOR 91) - endif(NOT Eigen3_FIND_VERSION_MINOR) - if(NOT Eigen3_FIND_VERSION_PATCH) - set(Eigen3_FIND_VERSION_PATCH 0) - endif(NOT Eigen3_FIND_VERSION_PATCH) - - set(Eigen3_FIND_VERSION - "${Eigen3_FIND_VERSION_MAJOR}.${Eigen3_FIND_VERSION_MINOR}.${Eigen3_FIND_VERSION_PATCH}") -endif(NOT Eigen3_FIND_VERSION) - -macro(_eigen3_check_version) - file(READ "${EIGEN3_INCLUDE_DIR}/Eigen/src/Core/util/Macros.h" _eigen3_version_header) - - string(REGEX MATCH "define[ \t]+EIGEN_WORLD_VERSION[ \t]+([0-9]+)" _eigen3_world_version_match - "${_eigen3_version_header}") - set(EIGEN3_WORLD_VERSION "${CMAKE_MATCH_1}") - string(REGEX MATCH "define[ \t]+EIGEN_MAJOR_VERSION[ \t]+([0-9]+)" _eigen3_major_version_match - "${_eigen3_version_header}") - set(EIGEN3_MAJOR_VERSION "${CMAKE_MATCH_1}") - string(REGEX MATCH "define[ \t]+EIGEN_MINOR_VERSION[ \t]+([0-9]+)" _eigen3_minor_version_match - "${_eigen3_version_header}") - set(EIGEN3_MINOR_VERSION "${CMAKE_MATCH_1}") - - set(EIGEN3_VERSION ${EIGEN3_WORLD_VERSION}.${EIGEN3_MAJOR_VERSION}.${EIGEN3_MINOR_VERSION}) - if(${EIGEN3_VERSION} VERSION_LESS ${Eigen3_FIND_VERSION}) - set(EIGEN3_VERSION_OK FALSE) - else(${EIGEN3_VERSION} VERSION_LESS ${Eigen3_FIND_VERSION}) - set(EIGEN3_VERSION_OK TRUE) - endif(${EIGEN3_VERSION} VERSION_LESS ${Eigen3_FIND_VERSION}) - - if(NOT EIGEN3_VERSION_OK) - - message(STATUS "Eigen3 version ${EIGEN3_VERSION} found in ${EIGEN3_INCLUDE_DIR}, " - "but at least version ${Eigen3_FIND_VERSION} is required") - endif(NOT EIGEN3_VERSION_OK) -endmacro(_eigen3_check_version) - -if(EIGEN3_INCLUDE_DIR) - - # in cache already - _eigen3_check_version() - set(EIGEN3_FOUND ${EIGEN3_VERSION_OK}) - -else(EIGEN3_INCLUDE_DIR) - - find_path( - EIGEN3_INCLUDE_DIR - NAMES signature_of_eigen3_matrix_library - PATHS ${CMAKE_INSTALL_PREFIX}/include ${KDE4_INCLUDE_DIR} - PATH_SUFFIXES eigen3 eigen) - - if(EIGEN3_INCLUDE_DIR) - _eigen3_check_version() - endif(EIGEN3_INCLUDE_DIR) - - include(FindPackageHandleStandardArgs) - find_package_handle_standard_args(Eigen3 DEFAULT_MSG EIGEN3_INCLUDE_DIR EIGEN3_VERSION_OK) - - mark_as_advanced(EIGEN3_INCLUDE_DIR) - -endif(EIGEN3_INCLUDE_DIR) diff --git a/spaces/ma-xu/LIVE/thrust/thrust/detail/type_traits/iterator/is_output_iterator.h b/spaces/ma-xu/LIVE/thrust/thrust/detail/type_traits/iterator/is_output_iterator.h deleted file mode 100644 index d6801305be01b903d7a3b9a8bd45101f709543f4..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/detail/type_traits/iterator/is_output_iterator.h +++ /dev/null @@ -1,66 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include -#include -#include -#include -#include - -namespace thrust -{ - -namespace detail -{ - - -template - struct is_void_like - : thrust::detail::or_< - thrust::detail::is_void, - thrust::detail::is_same - > -{}; // end is_void_like - - -template - struct lazy_is_void_like - : is_void_like -{}; // end lazy_is_void_like - - -// XXX this meta function should first check that T is actually an iterator -// -// if thrust::iterator_value is defined and thrust::iterator_value::type == void -// return false -// else -// return true -template - struct is_output_iterator - : eval_if< - is_metafunction_defined >::value, - lazy_is_void_like >, - thrust::detail::true_type - >::type -{ -}; // end is_output_iterator - -} // end detail - -} // end thrust - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/memory.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/memory.h deleted file mode 100644 index 18b31e758de483d77fc1c84f515e4117575ce852..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/memory.h +++ /dev/null @@ -1,95 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file thrust/system/cpp/memory.h - * \brief Managing memory associated with Thrust's standard C++ system. - */ - -#pragma once - -#include -#include -#include -#include -#include -#include - -namespace thrust -{ -namespace system -{ -namespace cpp -{ -/*! Allocates an area of memory available to Thrust's cpp system. - * \param n Number of bytes to allocate. - * \return A cpp::pointer pointing to the beginning of the newly - * allocated memory. A null cpp::pointer is returned if - * an error occurs. - * \note The cpp::pointer returned by this function must be - * deallocated with \p cpp::free. - * \see cpp::free - * \see std::malloc - */ -inline pointer malloc(std::size_t n); - -/*! Allocates a typed area of memory available to Thrust's cpp system. - * \param n Number of elements to allocate. - * \return A cpp::pointer pointing to the beginning of the newly - * allocated elements. A null cpp::pointer is returned if - * an error occurs. - * \note The cpp::pointer returned by this function must be - * deallocated with \p cpp::free. - * \see cpp::free - * \see std::malloc - */ -template -inline pointer malloc(std::size_t n); - -/*! Deallocates an area of memory previously allocated by cpp::malloc. - * \param ptr A cpp::pointer pointing to the beginning of an area - * of memory previously allocated with cpp::malloc. - * \see cpp::malloc - * \see std::free - */ -inline void free(pointer ptr); - -/*! \p cpp::allocator is the default allocator used by the \p cpp system's containers such as - * cpp::vector if no user-specified allocator is provided. \p cpp::allocator allocates - * (deallocates) storage with \p cpp::malloc (\p cpp::free). - */ -template -using allocator = thrust::mr::stateless_resource_allocator; - -} // end cpp - -} // end system - -/*! 
\namespace thrust::cpp - * \brief \p thrust::cpp is a top-level alias for thrust::system::cpp. - */ -namespace cpp -{ - -using thrust::system::cpp::malloc; -using thrust::system::cpp::free; -using thrust::system::cpp::allocator; - -} // end cpp - -} // end thrust - -#include - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/copy_if.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/copy_if.h deleted file mode 100644 index d441862ab6cec2ef6ed87e21f5f926e81c32a5fd..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/copy_if.h +++ /dev/null @@ -1,857 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- * - ******************************************************************************/ -#pragma once - - -#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC -#include - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -namespace thrust -{ -// XXX declare generic copy_if interface -// to avoid circulular dependency from thrust/copy.h -template -__host__ __device__ - OutputIterator - copy_if(const thrust::detail::execution_policy_base &exec, - InputIterator first, - InputIterator last, - OutputIterator result, - Predicate pred); - -template -__host__ __device__ - OutputIterator - copy_if(const thrust::detail::execution_policy_base &exec, - InputIterator1 first, - InputIterator1 last, - InputIterator2 stencil, - OutputIterator result, - Predicate pred); - -namespace cuda_cub { - -namespace __copy_if { - - template - struct PtxPolicy - { - enum - { - BLOCK_THREADS = _BLOCK_THREADS, - ITEMS_PER_THREAD = _ITEMS_PER_THREAD, - ITEMS_PER_TILE = _BLOCK_THREADS * _ITEMS_PER_THREAD, - }; - static const cub::BlockLoadAlgorithm LOAD_ALGORITHM = _LOAD_ALGORITHM; - static const cub::CacheLoadModifier LOAD_MODIFIER = _LOAD_MODIFIER; - static const cub::BlockScanAlgorithm SCAN_ALGORITHM = _SCAN_ALGORITHM; - }; // struct PtxPolicy - - template - struct Tuning; - - template - struct Tuning - { - const static int INPUT_SIZE = sizeof(T); - - enum - { - NOMINAL_4B_ITEMS_PER_THREAD = 9, - ITEMS_PER_THREAD = CUB_MIN(NOMINAL_4B_ITEMS_PER_THREAD, CUB_MAX(1, (NOMINAL_4B_ITEMS_PER_THREAD * 4 / sizeof(T)))), - }; - - typedef PtxPolicy<128, - ITEMS_PER_THREAD, - cub::BLOCK_LOAD_WARP_TRANSPOSE, - cub::LOAD_LDG, - cub::BLOCK_SCAN_WARP_SCANS> - type; - }; // Tuning<350> - - - template - struct Tuning - { - const static int INPUT_SIZE = sizeof(T); - - enum - { - NOMINAL_4B_ITEMS_PER_THREAD = 10, - ITEMS_PER_THREAD = CUB_MIN(NOMINAL_4B_ITEMS_PER_THREAD, CUB_MAX(1, (NOMINAL_4B_ITEMS_PER_THREAD * 4 / sizeof(T)))), - }; - - typedef PtxPolicy<128, - ITEMS_PER_THREAD, - cub::BLOCK_LOAD_WARP_TRANSPOSE, - cub::LOAD_LDG, - cub::BLOCK_SCAN_WARP_SCANS> - type; - }; // Tuning<350> - - template - struct Tuning - { - const static int INPUT_SIZE = sizeof(T); - - enum - { - NOMINAL_4B_ITEMS_PER_THREAD = 7, - ITEMS_PER_THREAD = CUB_MIN(NOMINAL_4B_ITEMS_PER_THREAD, CUB_MAX(3, (NOMINAL_4B_ITEMS_PER_THREAD * 4 / sizeof(T)))), - }; - - typedef PtxPolicy<128, - ITEMS_PER_THREAD, - cub::BLOCK_LOAD_WARP_TRANSPOSE, - cub::LOAD_DEFAULT, - cub::BLOCK_SCAN_WARP_SCANS> - type; - }; // Tuning<300> - - struct no_stencil_tag_ {}; - typedef no_stencil_tag_* no_stencil_tag; - template - struct CopyIfAgent - { - typedef typename iterator_traits::value_type item_type; - typedef typename iterator_traits::value_type stencil_type; - - typedef cub::ScanTileState ScanTileState; - - template - struct PtxPlan : Tuning::type - { - typedef Tuning tuning; - - typedef typename core::LoadIterator::type ItemsLoadIt; - typedef typename core::LoadIterator::type StencilLoadIt; - - typedef typename core::BlockLoad::type BlockLoadItems; - typedef typename core::BlockLoad::type BlockLoadStencil; - - typedef cub::TilePrefixCallbackOp - TilePrefixCallback; - - typedef cub::BlockScan - BlockScan; - - - union TempStorage - { - struct - { - typename BlockScan::TempStorage scan; - typename TilePrefixCallback::TempStorage prefix; - }; - - typename BlockLoadItems::TempStorage load_items; - typename BlockLoadStencil::TempStorage load_stencil; - - core::uninitialized_array raw_exchange; - }; // union TempStorage - }; // struct 
PtxPlan - - typedef typename core::specialize_plan_msvc10_war::type::type ptx_plan; - - typedef typename ptx_plan::ItemsLoadIt ItemsLoadIt; - typedef typename ptx_plan::StencilLoadIt StencilLoadIt; - typedef typename ptx_plan::BlockLoadItems BlockLoadItems; - typedef typename ptx_plan::BlockLoadStencil BlockLoadStencil; - typedef typename ptx_plan::TilePrefixCallback TilePrefixCallback; - typedef typename ptx_plan::BlockScan BlockScan; - typedef typename ptx_plan::TempStorage TempStorage; - - enum - { - USE_STENCIL = !thrust::detail::is_same::value, - BLOCK_THREADS = ptx_plan::BLOCK_THREADS, - ITEMS_PER_THREAD = ptx_plan::ITEMS_PER_THREAD, - ITEMS_PER_TILE = ptx_plan::ITEMS_PER_TILE - }; - - struct impl - { - //--------------------------------------------------------------------- - // Per-thread fields - //--------------------------------------------------------------------- - - TempStorage & storage; - ScanTileState &tile_state; - ItemsLoadIt items_load_it; - StencilLoadIt stencil_load_it; - OutputIt output_it; - Predicate predicate; - Size num_items; - - //------------------------------------------ - // scatter results to memory - //------------------------------------------ - - THRUST_DEVICE_FUNCTION void - scatter(item_type (&items)[ITEMS_PER_THREAD], - Size (&selection_flags)[ITEMS_PER_THREAD], - Size (&selection_indices)[ITEMS_PER_THREAD], - int num_tile_selections, - Size num_selections_prefix) - { - using core::sync_threadblock; - -#pragma unroll - for (int ITEM = 0; ITEM < ITEMS_PER_THREAD; ++ITEM) - { - int local_scatter_offset = selection_indices[ITEM] - - num_selections_prefix; - if (selection_flags[ITEM]) - { - new (&storage.raw_exchange[local_scatter_offset]) item_type(items[ITEM]); - } - } - - sync_threadblock(); - - for (int item = threadIdx.x; - item < num_tile_selections; - item += BLOCK_THREADS) - { - output_it[num_selections_prefix + item] = storage.raw_exchange[item]; - } - } // func scatter - - //------------------------------------------ - // specialize predicate on different types - //------------------------------------------ - - template - struct __tag {}; - - enum ItemStencil - { - ITEM, - STENCIL - }; - - template - struct wrap_value - { - T const & x; - THRUST_DEVICE_FUNCTION wrap_value(T const &x) : x(x) {} - - THRUST_DEVICE_FUNCTION T const &operator()() const { return x; }; - }; // struct wrap_type - - //------- item - - THRUST_DEVICE_FUNCTION bool - predicate_wrapper(wrap_value const &x, - __tag) - { - return predicate(x()); - } - - THRUST_DEVICE_FUNCTION bool - predicate_wrapper(wrap_value const &, - __tag) - { - return false; - } - - //-------- stencil - - template - THRUST_DEVICE_FUNCTION bool - predicate_wrapper(wrap_value const &x, - __tag) - { - return predicate(x()); - } - - THRUST_DEVICE_FUNCTION bool - predicate_wrapper(wrap_value const &, - __tag) - { - return false; - } - - - THRUST_DEVICE_FUNCTION bool - predicate_wrapper(wrap_value const &, - __tag) - { - return false; - } - - template - THRUST_DEVICE_FUNCTION void - compute_selection_flags(int num_tile_items, - T (&values)[ITEMS_PER_THREAD], - Size (&selection_flags)[ITEMS_PER_THREAD]) - { -#pragma unroll - for (int ITEM = 0; ITEM < ITEMS_PER_THREAD; ++ITEM) - { - // Out-of-bounds items are selection_flags - selection_flags[ITEM] = 1; - - if (!IS_LAST_TILE || - (Size(threadIdx.x * ITEMS_PER_THREAD) + ITEM < num_tile_items)) - { - selection_flags[ITEM] = - predicate_wrapper(wrap_value(values[ITEM]), - __tag()); - } - } - } - - //------------------------------------------ - // consume tiles - 
//------------------------------------------ - - template - Size THRUST_DEVICE_FUNCTION - consume_tile_impl(int num_tile_items, - int tile_idx, - Size tile_base) - { - item_type items_loc[ITEMS_PER_THREAD]; - Size selection_flags[ITEMS_PER_THREAD]; - Size selection_idx[ITEMS_PER_THREAD]; - - if (IS_LAST_TILE) { - BlockLoadItems(storage.load_items) - .Load(items_load_it + tile_base, - items_loc, - num_tile_items); - } - else - { - BlockLoadItems(storage.load_items) - .Load(items_load_it + tile_base, - items_loc); - } - - core::sync_threadblock(); - - if (USE_STENCIL) - { - stencil_type stencil_loc[ITEMS_PER_THREAD]; - - if (IS_LAST_TILE) - { - BlockLoadStencil(storage.load_stencil) - .Load(stencil_load_it + tile_base, - stencil_loc, - num_tile_items); - } - else - { - BlockLoadStencil(storage.load_stencil) - .Load(stencil_load_it + tile_base, - stencil_loc); - } - - compute_selection_flags(num_tile_items, - stencil_loc, - selection_flags); - } - else /* Use predicate on items rather then stencil */ - { - compute_selection_flags(num_tile_items, - items_loc, - selection_flags); - } - - core::sync_threadblock(); - - Size num_tile_selections = 0; - Size num_selections = 0; - Size num_selections_prefix = 0; - if (IS_FIRST_TILE) - { - BlockScan(storage.scan) - .ExclusiveSum(selection_flags, - selection_idx, - num_tile_selections); - - if (threadIdx.x == 0) - { - // Update tile status if this is not the last tile - if (!IS_LAST_TILE) - tile_state.SetInclusive(0, num_tile_selections); - } - - // Do not count any out-of-bounds selections - if (IS_LAST_TILE) - { - int num_discount = ITEMS_PER_TILE - num_tile_items; - num_tile_selections -= num_discount; - } - num_selections = num_tile_selections; - } - else - { - TilePrefixCallback prefix_cb(tile_state, - storage.prefix, - cub::Sum(), - tile_idx); - BlockScan(storage.scan) - .ExclusiveSum(selection_flags, - selection_idx, - prefix_cb); - - num_selections = prefix_cb.GetInclusivePrefix(); - num_tile_selections = prefix_cb.GetBlockAggregate(); - num_selections_prefix = prefix_cb.GetExclusivePrefix(); - - if (IS_LAST_TILE) - { - int num_discount = ITEMS_PER_TILE - num_tile_items; - num_tile_selections -= num_discount; - num_selections -= num_discount; - } - } - - core::sync_threadblock(); - - scatter(items_loc, - selection_flags, - selection_idx, - num_tile_selections, - num_selections_prefix); - - - return num_selections; - } // func consume_tile_impl - - template - THRUST_DEVICE_FUNCTION Size - consume_tile(int num_tile_items, - int tile_idx, - Size tile_base) - { - if (tile_idx == 0) - { - return consume_tile_impl(num_tile_items, - tile_idx, - tile_base); - } - else - { - return consume_tile_impl(num_tile_items, - tile_idx, - tile_base); - } - } // func consume_tile - - //--------------------------------------------------------------------- - // Constructor - //--------------------------------------------------------------------- - - THRUST_DEVICE_FUNCTION impl(TempStorage & storage_, - ScanTileState & tile_state_, - ItemsIt items_it, - StencilIt stencil_it, - OutputIt output_it_, - Predicate predicate_, - Size num_items_, - int num_tiles, - NumSelectedOutputIt num_selected_out) - : storage(storage_), - tile_state(tile_state_), - items_load_it(core::make_load_iterator(ptx_plan(), items_it)), - stencil_load_it(core::make_load_iterator(ptx_plan(), stencil_it)), - output_it(output_it_), - predicate(predicate_), - num_items(num_items_) - { - int tile_idx = blockIdx.x; - Size tile_base = tile_idx * ITEMS_PER_TILE; - - if (tile_idx < num_tiles - 1) - { - 
consume_tile(ITEMS_PER_TILE, - tile_idx, - tile_base); - } - else - { - int num_remaining = static_cast(num_items - tile_base); - Size num_selections = consume_tile(num_remaining, - tile_idx, - tile_base); - if (threadIdx.x == 0) - { - *num_selected_out = num_selections; - } - } - } // ctor impl - }; - - //--------------------------------------------------------------------- - // Agent entry point - //--------------------------------------------------------------------- - - THRUST_AGENT_ENTRY(ItemsIt items_it, - StencilIt stencil_it, - OutputIt output_it, - Predicate predicate, - Size num_items, - NumSelectedOutputIt num_selected_out, - ScanTileState tile_state, - int num_tiles, - char * shmem) - { - TempStorage &storage = *reinterpret_cast(shmem); - - impl(storage, - tile_state, - items_it, - stencil_it, - output_it, - predicate, - num_items, - num_tiles, - num_selected_out); - } - }; // struct CopyIfAgent - - template - struct InitAgent - { - template - struct PtxPlan : PtxPolicy<128> {}; - typedef core::specialize_plan ptx_plan; - - //--------------------------------------------------------------------- - // Agent entry point - //--------------------------------------------------------------------- - - THRUST_AGENT_ENTRY(ScanTileState tile_state, - Size num_tiles, - NumSelectedIt num_selected_out, - char * /*shmem*/) - { - tile_state.InitializeStatus(num_tiles); - if (blockIdx.x == 0 && threadIdx.x == 0) - *num_selected_out = 0; - } - }; // struct InitAgent - - template - static cudaError_t THRUST_RUNTIME_FUNCTION - doit_step(void * d_temp_storage, - size_t & temp_storage_bytes, - ItemsIt items, - StencilIt stencil, - OutputIt output_it, - Predicate predicate, - NumSelectedOutIt num_selected_out, - Size num_items, - cudaStream_t stream, - bool debug_sync) - { - if (num_items == 0) - return cudaSuccess; - - using core::AgentLauncher; - using core::AgentPlan; - using core::get_agent_plan; - - typedef AgentLauncher< - CopyIfAgent > - copy_if_agent; - - typedef typename copy_if_agent::ScanTileState ScanTileState; - - typedef AgentLauncher< - InitAgent > - init_agent; - - - using core::get_plan; - typename get_plan::type init_plan = init_agent::get_plan(); - typename get_plan::type copy_if_plan = copy_if_agent::get_plan(stream); - - int tile_size = copy_if_plan.items_per_tile; - size_t num_tiles = (num_items + tile_size - 1) / tile_size; - - size_t vshmem_size = core::vshmem_size(copy_if_plan.shared_memory_size, - num_tiles); - - cudaError_t status = cudaSuccess; - if (num_items == 0) - return status; - - size_t allocation_sizes[2] = {0, vshmem_size}; - status = ScanTileState::AllocationSize(static_cast(num_tiles), allocation_sizes[0]); - CUDA_CUB_RET_IF_FAIL(status); - - - void* allocations[2] = {NULL, NULL}; - status = cub::AliasTemporaries(d_temp_storage, - temp_storage_bytes, - allocations, - allocation_sizes); - CUDA_CUB_RET_IF_FAIL(status); - - - if (d_temp_storage == NULL) - { - return status; - } - - ScanTileState tile_status; - status = tile_status.Init(static_cast(num_tiles), allocations[0], allocation_sizes[0]); - CUDA_CUB_RET_IF_FAIL(status); - - init_agent ia(init_plan, num_tiles, stream, "copy_if::init_agent", debug_sync); - - char *vshmem_ptr = vshmem_size > 0 ? 
(char*)allocations[1] : NULL; - - copy_if_agent pa(copy_if_plan, num_items, stream, vshmem_ptr, "copy_if::partition_agent", debug_sync); - - ia.launch(tile_status, num_tiles, num_selected_out); - CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError()); - - pa.launch(items, - stencil, - output_it, - predicate, - num_items, - num_selected_out, - tile_status, - num_tiles); - CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError()); - return status; - } - - template - THRUST_RUNTIME_FUNCTION - OutputIt copy_if(execution_policy& policy, - InputIt first, - InputIt last, - StencilIt stencil, - OutputIt output, - Predicate predicate) - { - typedef int size_type; - - size_type num_items = static_cast(thrust::distance(first, last)); - size_t temp_storage_bytes = 0; - cudaStream_t stream = cuda_cub::stream(policy); - bool debug_sync = THRUST_DEBUG_SYNC_FLAG; - - if (num_items == 0) - return output; - - cudaError_t status; - status = doit_step(NULL, - temp_storage_bytes, - first, - stencil, - output, - predicate, - reinterpret_cast(NULL), - num_items, - stream, - debug_sync); - cuda_cub::throw_on_error(status, "copy_if failed on 1st step"); - - size_t allocation_sizes[2] = {sizeof(size_type), temp_storage_bytes}; - void * allocations[2] = {NULL, NULL}; - - size_t storage_size = 0; - - status = core::alias_storage(NULL, - storage_size, - allocations, - allocation_sizes); - cuda_cub::throw_on_error(status, "copy_if failed on 1st alias_storage"); - - // Allocate temporary storage. - thrust::detail::temporary_array - tmp(policy, storage_size); - void *ptr = static_cast(tmp.data().get()); - - status = core::alias_storage(ptr, - storage_size, - allocations, - allocation_sizes); - cuda_cub::throw_on_error(status, "copy_if failed on 2nd alias_storage"); - - size_type* d_num_selected_out - = thrust::detail::aligned_reinterpret_cast(allocations[0]); - - status = doit_step(allocations[1], - temp_storage_bytes, - first, - stencil, - output, - predicate, - d_num_selected_out, - num_items, - stream, - debug_sync); - cuda_cub::throw_on_error(status, "copy_if failed on 2nd step"); - - status = cuda_cub::synchronize(policy); - cuda_cub::throw_on_error(status, "copy_if failed to synchronize"); - - size_type num_selected = get_value(policy, d_num_selected_out); - - return output + num_selected; - } - -} // namespace __copy_if - -//------------------------- -// Thrust API entry points -//------------------------- - -__thrust_exec_check_disable__ -template -OutputIterator __host__ __device__ -copy_if(execution_policy &policy, - InputIterator first, - InputIterator last, - OutputIterator result, - Predicate pred) -{ - OutputIterator ret = result; - - if (__THRUST_HAS_CUDART__) - { - ret = __copy_if::copy_if(policy, - first, - last, - __copy_if::no_stencil_tag(), - result, - pred); - } - else - { -#if !__THRUST_HAS_CUDART__ - ret = thrust::copy_if(cvt_to_seq(derived_cast(policy)), - first, - last, - result, - pred); -#endif - } - return ret; -} // func copy_if - -__thrust_exec_check_disable__ -template -OutputIterator __host__ __device__ -copy_if(execution_policy &policy, - InputIterator first, - InputIterator last, - StencilIterator stencil, - OutputIterator result, - Predicate pred) -{ - OutputIterator ret = result; - - if (__THRUST_HAS_CUDART__) - { - ret = __copy_if::copy_if(policy, - first, - last, - stencil, - result, - pred); - } - else - { -#if !__THRUST_HAS_CUDART__ - ret = thrust::copy_if(cvt_to_seq(derived_cast(policy)), - first, - last, - stencil, - result, - pred); -#endif - } - return ret; -} // func copy_if - -} // namespace cuda_cub 
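-// Illustrative usage sketch for the copy_if entry points above; the vector
-// `d_in` and the functor `is_positive` are assumptions for the example, not
-// part of this header:
-//
-//   struct is_positive {
-//     __host__ __device__ bool operator()(int x) const { return x > 0; }
-//   };
-//
-//   thrust::device_vector<int> d_out(d_in.size());
-//   auto new_end = thrust::copy_if(thrust::device, d_in.begin(), d_in.end(),
-//                                  d_out.begin(), is_positive());
-//   d_out.resize(new_end - d_out.begin());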
-} // end namespace thrust - -#include -#endif diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/generic/equal.h b/spaces/ma-xu/LIVE/thrust/thrust/system/detail/generic/equal.h deleted file mode 100644 index 8962b1bd1428a3c845924a9b7a7d2ef3b2147322..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/generic/equal.h +++ /dev/null @@ -1,48 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace generic -{ - - -template -__host__ __device__ -bool equal(thrust::execution_policy &exec, InputIterator1 first1, InputIterator1 last1, InputIterator2 first2); - - -template -__host__ __device__ -bool equal(thrust::execution_policy &exec, InputIterator1 first1, InputIterator1 last1, InputIterator2 first2, BinaryPredicate binary_pred); - - -} // end namespace generic -} // end namespace detail -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/find.h b/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/find.h deleted file mode 100644 index e6445c06831c49e05f4a82cddde0f38081b82978..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/omp/detail/find.h +++ /dev/null @@ -1,51 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file find.h - * \brief OpenMP implementation of find_if. 
- */ - -#pragma once - -#include -#include -#include - -namespace thrust -{ -namespace system -{ -namespace omp -{ -namespace detail -{ - -template -InputIterator find_if(execution_policy &exec, - InputIterator first, - InputIterator last, - Predicate pred) -{ - // omp prefers generic::find_if to cpp::find_if - return thrust::system::detail::generic::find_if(exec, first, last, pred); -} - -} // end namespace detail -} // end namespace omp -} // end namespace system -} // end namespace thrust - diff --git a/spaces/marcilioduarte/Credit-Worthiness-Risk-Classification/README.md b/spaces/marcilioduarte/Credit-Worthiness-Risk-Classification/README.md deleted file mode 100644 index 1d4a1452220e3a9f9092874e53e33e3bde7861d5..0000000000000000000000000000000000000000 --- a/spaces/marcilioduarte/Credit-Worthiness-Risk-Classification/README.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: Credit Worthiness Risk Classification -emoji: 🏆 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -This is a case study about creditworthiness classification where I did the whole process of building a creditworthiness classification model and app, from the data cleansing part until the deployment of the model in an application simulated for the banks managers. To achieve this goal, I analyzed and prepared the dataset for machine learning models. The applied models are: Logistic Regression, Decision Tree Classifier, and Random Forest Classifier, which are available in Python's sklearn library. To optimize the workflow and model results, I applied a personalized pipeline for model application and GridSearchCV for parameter optimization. The app development was made using gradio app. - -The data is uploaded in path: german_credit_risk/data/raw, but it was first obtained from Kaggle and can be obtained [HERE](https://www.kaggle.com/datasets/mpwolke/cusersmarildownloadsgermancsv). - -I used a previous work from Pennsylvania State University as a reference in many parts of the code, you can find it [HERE](https://online.stat.psu.edu/stat508/resource/analysis/gcd). Also, as this is a case study, the code steps are commented in english. - -It is worth mentioning that the feature selection process was carefully performed according to the regulations of the Central Bank of Brazil, as I'm Brazilian. 
- ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/marlenezw/audio-driven-animations/app.py b/spaces/marlenezw/audio-driven-animations/app.py deleted file mode 100644 index cfb7389381eaad42c23068311e9672d9fddbb778..0000000000000000000000000000000000000000 --- a/spaces/marlenezw/audio-driven-animations/app.py +++ /dev/null @@ -1,40 +0,0 @@ - -import os -import gradio as gr -from PIL import Image as im -from scipy.io.wavfile import write - - -def generateVideo(input_img, input_audio): - - data = im.fromarray(input_img) - - # saving the final output - # as a PNG file - data.save('MakeItTalk/examples/in_image.jpg') - - write('MakeItTalk/examples/in_audio.wav', input_audio[0], input_audio[1]) - - input_img = 'in_image.jpg' - input_audio = 'in_audio.wav' - - os.system(f"python3 MakeItTalk/main_end2end.py --jpg {input_img}") #add image - - video_name = 'MakeItTalk/examples/in_image_pred_fls_in_audio_audio_embed.mp4' - - - return video_name - - -demo = gr.Interface( - fn=generateVideo, - inputs=[gr.Image(shape=(256, 256)), gr.Audio(), ], - outputs= gr.Video().style(height=256, width=256), - title='Audio Driven Animation', - description='Add an image and an audio file then watch them come to life! Please note, at the moment images must be 254X254 and audio files must be .wav format. Enjoy!', - examples =[['example_image.jpg', 'example_audio.wav']] - -) - - -demo.launch() \ No newline at end of file diff --git a/spaces/matthoffner/AudioCraft_Plus/audiocraft/losses/balancer.py b/spaces/matthoffner/AudioCraft_Plus/audiocraft/losses/balancer.py deleted file mode 100644 index 8a0ac8adebab8cdee8f82351965195dc02800d18..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/AudioCraft_Plus/audiocraft/losses/balancer.py +++ /dev/null @@ -1,136 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import flashy -import torch -from torch import autograd - - -class Balancer: - """Loss balancer. - - The loss balancer combines losses together to compute gradients for the backward. - Given `y = f(...)`, and a number of losses `l1(y, ...)`, `l2(y, ...)`, with `...` - not having any dependence on `f`, the balancer can efficiently normalize the partial gradients - `d l1 / d y`, `d l2 / dy` before summing them in order to achieve a desired ratio between - the losses. For instance if `weights = {'l1': 2, 'l2': 1}`, 66% of the gradient - going into `f(...)` will come from `l1` on average, and 33% from `l2`. This allows for an easy - interpration of the weights even if the intrisic scale of `l1`, `l2` ... is unknown. - - Noting `g1 = d l1 / dy`, etc., the balanced gradient `G` will be - (with `avg` an exponential moving average over the updates), - - G = sum_i total_norm * g_i / avg(||g_i||) * w_i / sum(w_i) - - If `balance_grads` is False, this is deactivated, and instead the gradient will just be the - standard sum of the partial gradients with the given weights. - - A call to the backward method of the balancer will compute the the partial gradients, - combining all the losses and potentially rescaling the gradients, - which can help stabilize the training and reason about multiple losses with varying scales. - The obtained gradient with respect to `y` is then back-propagated to `f(...)`. 
- - Expected usage: - - weights = {'loss_a': 1, 'loss_b': 4} - balancer = Balancer(weights, ...) - losses: dict = {} - losses['loss_a'] = compute_loss_a(x, y) - losses['loss_b'] = compute_loss_b(x, y) - if model.training(): - effective_loss = balancer.backward(losses, x) - - Args: - weights (dict[str, float]): Weight coefficient for each loss. The balancer expect the losses keys - from the backward method to match the weights keys to assign weight to each of the provided loss. - balance_grads (bool): Whether to rescale gradients so that weights reflect the fraction of the - overall gradient, rather than a constant multiplier. - total_norm (float): Reference norm when rescaling gradients, ignored otherwise. - emay_decay (float): EMA decay for averaging the norms. - per_batch_item (bool): Whether to compute the averaged norm per batch item or not. This only holds - when rescaling the gradients. - epsilon (float): Epsilon value for numerical stability. - monitor (bool): If True, stores in `self.metrics` the relative ratio between the norm of the gradients - coming from each loss, when calling `backward()`. - """ - def __init__(self, weights: tp.Dict[str, float], balance_grads: bool = True, total_norm: float = 1., - ema_decay: float = 0.999, per_batch_item: bool = True, epsilon: float = 1e-12, - monitor: bool = False): - self.weights = weights - self.per_batch_item = per_batch_item - self.total_norm = total_norm or 1. - self.averager = flashy.averager(ema_decay or 1.) - self.epsilon = epsilon - self.monitor = monitor - self.balance_grads = balance_grads - self._metrics: tp.Dict[str, tp.Any] = {} - - @property - def metrics(self): - return self._metrics - - def backward(self, losses: tp.Dict[str, torch.Tensor], input: torch.Tensor) -> torch.Tensor: - """Compute the backward and return the effective train loss, e.g. the loss obtained from - computing the effective weights. If `balance_grads` is True, the effective weights - are the one that needs to be applied to each gradient to respect the desired relative - scale of gradients coming from each loss. - - Args: - losses (Dict[str, torch.Tensor]): dictionary with the same keys as `self.weights`. - input (torch.Tensor): the input of the losses, typically the output of the model. - This should be the single point of dependence between the losses - and the model being trained. - """ - norms = {} - grads = {} - for name, loss in losses.items(): - # Compute partial derivative of the less with respect to the input. - grad, = autograd.grad(loss, [input], retain_graph=True) - if self.per_batch_item: - # We do not average the gradient over the batch dimension. - dims = tuple(range(1, grad.dim())) - norm = grad.norm(dim=dims, p=2).mean() - else: - norm = grad.norm(p=2) - norms[name] = norm - grads[name] = grad - - count = 1 - if self.per_batch_item: - count = len(grad) - # Average norms across workers. Theoretically we should average the - # squared norm, then take the sqrt, but it worked fine like that. - avg_norms = flashy.distrib.average_metrics(self.averager(norms), count) - # We approximate the total norm of the gradient as the sums of the norms. - # Obviously this can be very incorrect if all gradients are aligned, but it works fine. - total = sum(avg_norms.values()) - - self._metrics = {} - if self.monitor: - # Store the ratio of the total gradient represented by each loss. - for k, v in avg_norms.items(): - self._metrics[f'ratio_{k}'] = v / total - - total_weights = sum([self.weights[k] for k in avg_norms]) - assert total_weights > 0. 
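- # The rescaling below implements the balanced-gradient rule from the class
- # docstring: when `balance_grads` is True, each partial gradient g_i is scaled
- # by total_norm * (w_i / sum_j w_j) / (epsilon + avg(||g_i||)), so the weights
- # control the fraction of the combined gradient contributed by each loss;
- # when it is False, the plain weight w_i is used as the scale.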
- desired_ratios = {k: w / total_weights for k, w in self.weights.items()} - - out_grad = torch.zeros_like(input) - effective_loss = torch.tensor(0., device=input.device, dtype=input.dtype) - for name, avg_norm in avg_norms.items(): - if self.balance_grads: - # g_balanced = g / avg(||g||) * total_norm * desired_ratio - scale = desired_ratios[name] * self.total_norm / (self.epsilon + avg_norm) - else: - # We just do regular weighted sum of the gradients. - scale = self.weights[name] - out_grad.add_(grads[name], alpha=scale) - effective_loss += scale * losses[name].detach() - # Send the computed partial derivative with respect to the output of the model to the model. - input.backward(out_grad) - return effective_loss diff --git a/spaces/maxmon/auto_anno/utils/api/google_trans.py b/spaces/maxmon/auto_anno/utils/api/google_trans.py deleted file mode 100644 index fb4ee73cef6cdcb6a4b0c4963ea5b0cfca394184..0000000000000000000000000000000000000000 --- a/spaces/maxmon/auto_anno/utils/api/google_trans.py +++ /dev/null @@ -1,16 +0,0 @@ -import requests -import json - -def en2cn(text): - return trans(text, 'en', 'zh-CN') - -def trans(text, sl, tl): - temp_url = 'https://translate.googleapis.com/translate_a/single?client=gtx&sl={sl}&tl={tl}&dt=t&q={q}' - url = temp_url.format(q=text, sl=sl, tl=tl) - result = requests.get(url) - j = json.loads(result.content) - cn = ''.join([i[0] for i in j[0]]) - return cn - -if __name__ == '__main__': - print(en2cn('hello world')) diff --git a/spaces/mehdidc/text_to_image_ddgan/score_sde/__init__.py b/spaces/mehdidc/text_to_image_ddgan/score_sde/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/merve/anonymization/server-side/fill-in-the-blank/scatter-plot-colab/two-sentences/init-pair.js b/spaces/merve/anonymization/server-side/fill-in-the-blank/scatter-plot-colab/two-sentences/init-pair.js deleted file mode 100644 index ff2d0dbbdea8e6aff4d2247f9e69187e18e8a36f..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/server-side/fill-in-the-blank/scatter-plot-colab/two-sentences/init-pair.js +++ /dev/null @@ -1,186 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-==============================================================================*/ - -window.initPair = function(pair, sel){ - - var margin = {bottom: 50, left: 30, top: 20, right: 20} - var totalWidth = sel.node().offsetWidth - var width = totalWidth - margin.left - margin.right - - var c = d3.conventions({ - sel: sel.append('div'), - width, - height: width, - layers: 'scs', - margin, - }) - - var nTicks = 4 - var tickScale = d3.scaleLinear().range([0, c.width]) - c.svg.appendMany('path.bg-tick', d3.range(nTicks + 1)) - .at({d: d => `M ${.5 + Math.round(tickScale(d/nTicks))} 0 V ${c.height}`}) - c.svg.appendMany('path.bg-tick', d3.range(nTicks + 1)) - .at({d: d => `M 0 ${.5 + Math.round(tickScale(d/nTicks))} H ${c.width}`}) - - - var scatter = window.initScatter(c) - - var allTokens = pair.e0.map((v0, i) => { - return {word: pair.vocab[i], v0, i, v1: pair.e1[i]} - }) - allTokens.forEach(d => { - d.dif = d.v0 - d.v1 - d.meanV = (d.v0 + d.v1) / 2 - d.isVisible = false - }) - - _.sortBy(allTokens, d => -d.v1).forEach((d, i) => d.v1i = i) - _.sortBy(allTokens, d => -d.v0).forEach((d, i) => d.v0i = i) - - var topTokens = allTokens.filter(d => d.v0i <= pair.count || d.v1i <= pair.count) - - - var logitExtent = d3.extent(topTokens.map(d => d.v0).concat(topTokens.map(d => d.v1))) - - var tokens = allTokens - .filter(d => logitExtent[0] <= d.v0 && logitExtent[0] <= d.v1) - - var mag = logitExtent[1] - logitExtent[0] - logitExtent = [logitExtent[0] - mag*.002, logitExtent[1] + mag*.002] - - if (pair.isDifference) tokens = _.sortBy(allTokens, d => -d.meanV).slice(0, pair.count) - - tokens.forEach(d => { - d.isVisible = true - }) - - var maxDif = d3.max(d3.extent(tokens, d => d.dif).map(Math.abs)) - var color = util.palette(-maxDif*.8, maxDif*.8) - - if (pair.isDifference){ - drawRotated() - } else{ - drawXY() - } - - function drawXY(){ - c.x.domain(logitExtent) - c.y.domain(logitExtent) - - d3.drawAxis(c) - - var s = 2 - var scatterData = allTokens.map(d => { - var x = c.x(d.v0) - var y = c.y(d.v1) - var fill = color(d.dif) - var dif = d.dif - var word = d.word - var show = '' - var isVisible = d.isVisible - - return {x, y, s, dif, fill, word, show, isVisible} - }) - - c.svg.append('path').at({d: `M 0 ${c.height} L ${c.width} 0`, stroke: '#ccc'}) - - var textCandidates = _.sortBy(scatterData.filter(d => d.isVisible), d => d.dif) - d3.nestBy(textCandidates.slice(0, 1000), d => Math.round(d.y/10)) - .forEach(d => d[0].show = 'uf') - d3.nestBy(textCandidates.reverse().slice(0, 1000), d => Math.round(d.y/10)) - .forEach(d => d[0].show = 'lr') - - logitExtent.pair = pair - scatter.draw(c, scatterData) - - c.svg.selectAppend('text.x-axis-label.xy-only') - .translate([c.width/2, c.height + 24]) - .text(pair.label0 + (pair.label0.includes(' dif') ? '' : ' →')) - .st({fill: util.colors[0]}) - .at({textAnchor: 'middle'}) - - c.svg.selectAppend('g.y-axis-label.xy-only') - .translate([c.width + 20, c.height/2]) - .selectAppend('text') - .text(pair.label1 + (pair.label0.includes(' dif') ? 
'' : ' →')) - .st({fill: util.colors[1]}) - .at({textAnchor: 'middle', transform: 'rotate(-90)'}) - - if (pair.topLabel){ - console.log(pair.topLabel) - c.svg.selectAppend('text.x-axis-label.top') - .translate([c.width/2, -10]) - .text(pair.topLabel) - .st({fill: '#000'}) - // .st({fill: util.colors[0]}) - .at({textAnchor: 'middle'}) - } - } - - function drawRotated(){ - c.x.domain(d3.extent(tokens, d => d.meanV)) - c.y.domain([maxDif, -maxDif]) - - d3.drawAxis(c) - - var scatterData = allTokens.map(d => { - var x = c.x(d.meanV) - var y = c.y(d.dif) - var fill = color(d.dif) - var word = d.word - var show = '' - var isVisible = d.isVisible - - return {x, y, s: 2, fill, word, show, isVisible} - }) - - scatterData.forEach(d => { - d.dx = d.x - c.width/2 - d.dy = d.y - c.height/2 - }) - - var textCandidates = _.sortBy(scatterData, d => -d.dx*d.dx - d.dy*d.dy) - .filter(d => d.isVisible) - .slice(0, 5000) - d3.nestBy(textCandidates, d => Math.round(12*Math.atan2(d.dx, d.dy))) - .map(d => d[0]) - .forEach(d => d.show = (d.dy < 0 ? 'u' : 'l') + (d.dx < 0 ? 'l' : 'r')) - - scatter.draw(c, scatterData, false) - - c.svg.selectAppend('text.rotate-only.x-axis-label') - .translate([c.width/2, c.height + 24]) - .text('__ likelihood, both sentences →') - .at({textAnchor: 'middle'}) - .st({fill: '#000'}) - - c.svg.selectAll('g.rotate-only.sent-1,g.rotate-only.sent-1').remove() - c.svg.selectAppend('g.rotate-only.sent-1') - .translate([c.width + 20, c.height/2]) - .append('text') - .text(`Higher likelihood, ${pair.label1 ? pair.label1 + ' sentence ' : 'sentence one'} →`) - .at({textAnchor: 'start', transform: 'rotate(-90)', x: 20}) - .st({fill: util.colors[1]}) - - c.svg.selectAppend('g.rotate-only.sent-1') - .translate([c.width + 20, c.height/2 + 0]) - .append('text') - .text(`← Higher likelihood, ${pair.label0 ? 
pair.label0 + ' sentence ' : 'sentence two'}`) - .at({textAnchor: 'end', transform: 'rotate(-90)', x: -20}) - .st({fill: util.colors[0]}) - } -} - -if (window.init) init() diff --git a/spaces/merve/uncertainty-calibration/public/third_party/weepeople.css b/spaces/merve/uncertainty-calibration/public/third_party/weepeople.css deleted file mode 100644 index 33ed7472967ade6cddc630b1a2ad62597c1cd2b2..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/public/third_party/weepeople.css +++ /dev/null @@ -1,14 +0,0 @@ -/* https://github.com/propublica/weepeople This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 United States License */ - -@font-face { - font-family: 'WeePeople'; - src: url(data:application/font-woff2;charset=utf-8;base64,d09GMgABAAAAAGlAAA8AAAAA4KwAAGjcAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAP0ZGVE0cGh4GYACCeggEEQgKg644grdwATYCJAOCHAuBEAAEIAWFbAeCNj93ZWJmBhvNoxNuTDxsHIAID7ZzNqKCjRMoBrCLIFmsRdl/fWAbSx+vtlRiwYRgHiehmaIe1S1xW9y/toIZegmaX6AImBEUXWQKwMwpfrH/PueHJEX5EKmupu3squ9sUbFcpFWzu6S1LNtybEuWWxI7kW25ptlOnE7iyInTiEkllSMVAoGeAKFdCCHHhVYOjiu00J6rcK38HccdV/yTTfuqSrvTB1VdAnssWbb1CUAz3t0Dyu/iWyXdqZwWNEky0XxglOQDnn9/d+7zbVIRiiw0sWtakTKtSQwBAFUO2WPBJtCFrMo3ZxcL9pb50Lqy+P3b0q87HaXdrwWGD4YFhtRfWoj2bBJiVfo6vVX3wcxIlgcENsufOTRkwfr9r/X/VtnTdtfeFz6BSlhJABIuY7rtjK1Tp+HOfRQgWD4+z8iY3/L1i96nd1qnV9pwAKwM/qES1c44t26FBeUFMfvgmPHiluV1C8GNRjOOvGV/dWiJPWBEEz7QE9D/7y3PAuWbBxSdVHgx7EXHiWGzDWwByNQXrdEvssgDxf5PU7NlOqfTc+V0SudS6Tv+/4e2Zj6o5WAgPwFD7TMA+gBAeQUMtE8k6Bx3ma5MKXDoS9xLx15yjqvogoVu9itPSDncEhCA1hRfYewiG8iQ6zQ2oQOn6BJzkerQHmDF1v/9EBf5Jr6dVWJ4CO2LAAAQAODDP+ErAcD1M9Gv1+nDV22fYwaAHQAIBLWByNFLACCtC94KOKTXyQ8AcAc8F50magIAADjYHnpTdhnoBi8Bz/gfOvG/CcDdDt0nwKueAwB4hCjWo/l+aQqGIRpLDAJAIqLnIB7DtrvXY/RUeZYG/oNo9vddTILRBQf8yewvZ1+dfX729p/V/Uz96a8+nZseP94FaUKzEFE519GbnMXjHxCO8oLBaDJbrDaRSbKi2h1OV547vwD+BxUWebyazx8IhopLSsvKKyqrwpGoXh2riQPg+FpwXJpjAAI4OwtsgNV+wy0AgIcBmF8FQHcFAD1mAEAlf8K4fPhV91EUlZn10LkbrSZEhPQoOXPv4xB63Rj2WSpQG2ch/kZmZyKls59fhrN3zz44u2R2bPYZXZj90+yDltlt4uz2Wd/sIf/sB7Ovzz7xRsA7u3s2Ypn1m2aruNljsw0VRt9saPZtP5TsszuD3v+5b5gdEspnuw3FketyiWt20+zEe4ezhnBg1vcvV2v2w78c6d/N8rMVsyZjAW/mDQt7zmQxGhlvJJjQf8+r4Ynf36X3E9MO27Yxi8G8YwN8B9AG+eA1sGBzWqEDLTn/gu0HTFUSYG9pWlz0o5LGgcD1MAu4H41ZNwxH9adWifuifrGzcnmR3DCjvhpOxAyl6sUrwGX9xFdJgkpLqOfgCwOMbXMqtwKgDcvTArs0sTgM5kfX/ikzUIM0Y/AwRClybsGauAQwlIcVg8vEHIeibbmp1VLwfYmHwUi66jf5F7Q6MDvnRmaQIqWmxb4gjoCDXg4Xscet8d+zmJUi+UmWASiGhgHfPVxiI2W064fvPxbEiaZgiyGKRkNxwShgEqzltG1oKww9+TG9/SupJF6Wk9W7AxCVSJppfkjb1V/FcZxh6lLkuCmGr59KRomaDjT+BWLRAa2ODAIQEaDF2ebeKa6hDqGYthAFR8fSUz/EIqrjZz1sJrgJSU0Bov1EFrkbm8ujpDHFQFAf1tPDoEtKxZku+VavyGw4S7of3hRH1iBKQLCEeEVFQbFIIulmTzqr1LTXAyzqmSAHhNFq2/eTMOPIkKKroZj60Rji0SRSVh4lSiEeEtpk6msOX2Kh+kVmuYhGabMQZI5Z50G61orMumtNSdeOfuKihL4GauGdMpHxqPJvdBLDfSXvVThEScOKrQSx7ZAuzu06ypI6YwsGuMWZetbMAIESpjVESf89484AFKZM3pBUrCCS0px8l89ZvIsVD7BUjStclmGh+3RdWLJc54me0jd8jhp/qJEs2BzYkIdiLOOzD07qFaWoEvJD4y63nIlAU0FxptgzbAQhj0IbQRJVh7VW0Mw9LjQNssPE4um+dXmG2ESDvYl5DmirktI6LTXScu5ApZVaG4RM2zhcbAcMXeni3czDvu8uP6zfK5+wMCt6HboKqoNPSA1DOcLQqTx2cTSYSNH0TJcbW5TSzT2aNDgS687l1/7L1RU56eyYvdoPGMSU2e6iCmcyyMkePdhOubuh5bIuyxW4d2fQrT7lu+qICD3UkrLqh+T2OV8sq9G2RMxaL0lAVT9ULXVMTYqXWgxPe6fdJS6bGe0vNnNrTBkuW/QVfHAsd+ye4kD0tgquWA/MRH8qfTKHta7vH0gDuYEzEDUVrcVBJkBKuDhbW7xDn6gm7rXDFVZunJTeG7pfHBNf6VsJ0JgqCAGipMf5arrE1ohVpaRZ3c4hd7ycOGf4jBJqgilL7peqcIRZFU6dixBfe0Jt01eRcw1lCzteUJvKYULPZRqFrQMzOjNqCWAxuZIgMEyeDXC9wclP/04P4tvvXjZt70fPurwnuIKDQuZZTMxhdaRJnRkfyUMYs/cZGiW8NArykRsBnmF7qLsheRIC9e/IF4expS5ObtiTtsQ9Fi7xi6PrkevaWDfomi1D9SOF7hLLO5fCPGbi6F
JDMSPN4ABg0WQTuzztWwDdNGaFVOymYbmhNlPxfo8NE7weVr+Dw9qnter+oN52jZw8O5hoC+sxR6ZcOshv2rUiFhBFbTFQXUum7oJ7g2DZbFrQZoMs98MEvIFBs2O8zqjCDkIEHlLvNFrysO9KybOhgkXtWFZSWwblLOVQWI0sDkJNzA0z5mKfRRcACdCBCFlFpX5eOVk712/oXWHaujNvfwiT7y5OHkKdS15VNaf99e2DBg1Rsb7YiiYSYb/sfrSQDFNcde9kDnNv5AW0jY0lAYybmpdQyC066aJW52ZYpSbYBpzCrk6ApCQ/jt96L3KDk9CpcUTqvHvSqYOZFUuXFE7qhnqga5IaKllIzZwy1gezjU8b+Rbs/xUv39VCydeMYLQreSW+OcFwCCbkmakiA69h6HfXVHt30Ze0vS8jz8kjtk86o6oMd6ijSZmVG804mQcad3tDOTyV60tTeWTV6ATuxbaHMPUGlw3FzWmlGCZqeFTjUoBQUFuCZu5Er3leTYfssWsneODc6G5g27S7cWJf1c04iQsceUSfEbPIikyZjsxe1vBGznPoyTB8UKTY/xzzut0odeaZVffkY0T76kxhBuLeFGjehbbBC6ZMXiMYHAisBT2HnUWP9qx8pQgVzemET44LE9JSu2GiC/JyX8pLlsLSgRKFdNLulLCxcS4BBEVm4iwpZsfJ27pgRqs264/LnTBAFIFy4IN+oV/nu3QAuZSR20FqnrK2j6zHI2laDn3J7grAO4UsDM9UErHgIUXp0SacidYGYL4P+IXkGPKUnpuH1EuMbXttZ0D6zPh0Q3Om5S2uWkWm76pnNLqipib0bktbPmHAZ0tAjtS03M8IOgapyixmR4gD/ILUzM/focu/MAJE8f92GqUSTwLCM1ylspIpL0FnNZwejpwfgcrrAkgNaFMkJoy44kmNSWrZ61a/KtX2U6kw3GCrvaPYyYcp28oL1Rsiw1TzaIkixDTlc0TMCKeawjbX4DzAHMzwLIrzPY+nZd2Y1qxFCx8rYQgxEDsraQkUoTfBNbvTYvHlsPtLgNdyvroo8zOVisTkkbsmpRCAfxqGHktty1mss4wNPL2dsTJvbB2iJofjQY8MjQSZMTS0hdMCdwnrprHUUmyIhM6TcgkWpWpUX2J0t/b0gw6AHOKX+wQUfTEICuTor56hgKj8ZbIbbqt64jh2YMrjmu/Q3KZ70pocBHshETpmVCIVsiEZl0+cyErqKKiXrWeFiKcsXMnJqwUB/LFYgsdVfKmuekvJZUFSUljqaqQlb7PiNqdNsl7ixL0as1vOrnPm4/dD6lla8xWtRntoaKtM6QUjuq7ILaZ6kmRVTqaN0/IyDZPSpmfAn2epcwBoncHmFbl4aGNQZlT348GGRBwxCIDOS0hOjTUXwEa6DGNMyspZwDZTDaf6dmV+qD9LghYB7xQRoVFP28kDozxeyGQenaToG5KR/SUpGBt0Vp1BjGY5FIkikX6iw25hiSrtDZza1Fg1FbpW7EAw201CwJlMlfoRpM7RbY7D4QMc4qsHlZCNGPIjrkxcp27UF28n2zkAcF48khrJaqbdUE1vgv7xe7tpW2DGrPDIAo42BjFnPr02kzOnlxLn+XybSZEKOMUarfAXUTt6cSU3OxMxM2lwep4Y0iQseagskZzVFzcXZBoe4hc1zoO2sW9BOpVnUhg5C5ONQUPwRGk7kkvH50bDwC/rwpherb9eP54D+Hc2KugkTvLFF6mMuPkNZUbPjW6L+0N5W6yuDp1RWfJRy8gWVFp30IYqxEvym/yN0s5t2sQFW8QmDmLnzbS1dVKrDh6I7ixc+8P2TyI8WRbvp4RfVFRxLEx8VnGxUu70Xe5mqUON7LQvDYdyTcqUMjgIU084pHfzaIxxpqnI3laSCg+QPrHWKnDeY9Bpt9mDEsScDEreBKLLkSMWmktbJwVR8g+VAhfLTQ/aSdg4MohuEC+/CTR+VVwPAbE23obPRTjpJWhCG72lFpu9mMhrdRdznM7yLQCeIqS43l4XuOWeANGr+cE1I+QjyQND9Jkn/fT9q2u83C21oYox4pg2uWg7c4I4hYXtQuimHEx4jRYZHuJfGNdb5RiQrhRC3ea8tkppkVo61ufxd0KHIXeJwqq7ukhAdRiLILJz8W3HJrpJPxctRJF4OS2+EumE2TrkG7xJMH4un+16FomxNWswFwQdCFxOZVY6bovrDeRrxkvhkC5A3it3evgzqAO5hM8khVkt1W30vNAwinaSzJ72fjJnSp/EQWn2WQNZTxsQkyLha8EehRSTe3KVqy8TrcdmAIkirXki2DKc4NlqhLMOngAoB9PlmbiLmaR4KG/ExUXgTh1EixOoZu41tXBW08ZrW/VjSOpI3b11eXQc4rTo9InKzXXv7uLVho7xjaiE9vG7r/SZFRlCfTnxC1MvqO0FNx2qJG2h71XF2FLKwOZ2TS5a3LtqVwaAxoSz3jCmZOUxaLDtSGUTZAUxE1Xi+jAq/h2cfp4wpb7cRtkULe7HedwG4sfv1a6LW85mgvo0otg2j67jlW8KgSDNbKGQlFFd8dUOTo5F04O2AgwZZG/8LFbFy8XN+Y1H9R4rme8VzJ2zjdVTK4kcMM7EQrUaBi55Mc27zYprbhPDTQWbEDcbqSovwVRxDlFmQdA3eq7m2M5+Q2+SS0Knqvj6dE+sKBgWqfk/GIO+y8KUnFCpHSQ2GdyLF/KYDpP5sssZfRllso2e6lWRzKdadzt0ud3q0J1bx6718y/oTAB9FrtKUex27c5ackie6CzuRfRh6BCbVw1t4ziNAZOJeSUWMWuYR2EK+0ATVYXL+FZX8nMZtplHH87vvbMQv8zewODgjW6M/4XwiMCsguRWgU2R5oFTomK0df1Z8x7eysiXW+TLlnGsozqA1Q5YoDiiU90sKpYuHx48bvkup7VGpSAmIR76er3GE/KBEcfiLHVUbZTd5/cJ2hxtWcYzlLKYAVursG7xvuis0SsfJEeRa4drg2NXbHkYasfVX+zlTi+L0SamgPqh7k6LdTVprDZ7xsla2Aii0m0ro+aUFSmxs+dw8jyX2ec7c0y8g262XCIpRlzgKo+Ntp8LOgde++X/nNZVQZ4xiGtAbKO8K9Ad1OHZ3gOoc5vVqM8CCsgmBTnYcyYeqbb3W4aV29eKkN1c++ygDnmt57RaJC5dgZEsYxixeutq55iLkdnAfo0Cn2ATa0j3Y1Cgmd0oxkYBIlqrmdG2RtiTmlmYRUnAQXUZBqLFzpyAbdM+xVoQFz0Pope4kKOfABixLZuM3kgST2O33dmI3FIqYSPfQ/eNo3Ima7bngvXiMwaZeXxN2sZvHm3N60psj+MfkDMTxgfO4Xsrwz50VJ33b3vRcHnRMaAUsBGTYoCRCKgXFO6Jj/VwRZdEu0r44ioZmkAngHuk0wAtUUhvN4VtG8ERG1FsmxaBSLYbu17dJ0rTVNqmv6h8xGO+i8NekCMpe+8dR7oaogQPjr88nmHiwwaonTl30Ijcctptj8NT2ZsNmyaXjT5D2ZLx78PGeDHs2ybn3QBYYWgT6
vpmoPJ+xZ6hoHWX99pcnJvFvik2xKObOsasTzLkJE4XWziSgzgiiuEVwDU4B94D/E/ZxOErWpuVrxugYC72sMs5f2rd5x1lmN4AlbNw3ervyV2rlnqA+hqjftk5b+8blsswsTTNp937tA2VFGzyHFhLyDN10ToLtqMW+AB5iMJb9AyiQKzIJapJxcd0sKKKFNnDNfG2JkoRyg1bDa6rEx6aC9+rjAFXpnpqTm/n46i4RymA3LtBH6khj4gDritp2zb4A7C7l/KGUuSR4sbsZDs3aQ02gdFLUK+xae4KGVzLxbtCiil07XTY0WQtHt7Xajh8aeelu4tuXHoiaUzcHzXkYe/H5xlKMWPTiivSeYvJ/R2J0kdLJ/vjE7Eii8fu/27ksosn5J5lww+rdj3tWNTFHf/R0U+UfSLslm974Rr99OWT/7x8f+fhBjWa2nwuQdKT4oMf/SwHk3v/2ntXbNBq0vYBVpNmCOEkIPFJ/7qZOiu03VFWrKcWzeHrnNWJZy/RlpSuR5ERopz01s6I0bewhPyesNlmRIRoVDSZI0Az/ZdKhAbTBA0roYH0dQn2wvazZoamW5Lwx0yND4ZIsVhMV0yXrZl3XNTNsx5gZ4Ri/sh5Mu4KHCj6Z++OtQy/Nb1BpTe1W57MzbftT13WFD0TaZpNW3EeVLybHvwplkdiyT9lHCJTyjMmRTGbThxcG8OgyhC2ykCzx7dJsmnwu8BcGG7OEvV1GYXRQzqZlDEln5CVIFi05sySYih288KIci6vodSx6F1KgWQ1kzK0MTbbTX30lkB4Ze5/fney0KxR8fgbv3cC5K62wvK5QPPhs1ASRacDVMRvWNzQWzMN02C3Mq+U/gVrohu+yG66T9EPqDCakNEus4ii578NRXJp9OVkjSjBQ6fIMrF4lUFK+vi0xfUwXvf5rhgGpV7rOMbL8KGaLozbRL3bRkul4FpO5X3Geaddvc1L8m+/XXzZ/UTbz+7Z4zutWPFIoX6Ac0Yz3VTQeSmpveyV9rM2x+U/mx3mXX0RZD6cDdJ2iPlBzpyyBXYDD8wmBLWofOxV+qiWztZgX2m5lAfogs3oo1yncqYZ8WRNboIkHG8xa6SiwwfHvhvzefsvURa32xCoHdXJo9/1U5LhHAKDtCRxvCgsTW+ANoUG4Yr331lccY1MlbwUKzdMX4jTJwkpssNxcXKTg+qpbe5pZxJP+Tv0tjsQ0/zarJ1uriV4CcfzdnD9VtQH2bUeVS/Ytu784fG1dpImre0rl4e0kg9FrHYF9tHdlyYqzTmLiRoyA5BWDQKJXSXzNF8cP5ufQUDsrggrALzU3E9ZTC0SlS96iB58AIYL5q6DNhtqfj1VyAOQTXq1/RJomgnxMSJGT/jKdNQfQZ9mwj5AxflmXTgeZ+hhNNqpC4aVO9QjpDKsR4tEm9EBFyMLncgfJV+0Z1lYLrjS9/YDb6n2+WMMNSMzo2Bmh74t+NnDj21XLDJrGcoXaaR88GzN698R3JbhRxWW8ZGgSHlc9JGagjfU0oe7dq9dtediJ6SwBSGzFTRwA5o2n40HvugYC6rI7sPtrFCUxWQUCN4srIUV+1PgK1pJwRrt0JsTOEhtN/Cg+8gTD9SS3+okUWTnttsDYs3cqGEE+UPUmobF2drLI63wTGAU7cCA8SD049FaS2nCitFcROG4UW79m2VbK3/4pnoAFrLetCDuzRohpjNO+6OHszsRaISJE4jgH+Mwwf+RG4bqSp3CtXCFBlNiVXHcOnsSs4Q4aFXIShQ9qcFZPPRJund+8f5Tkb+bRbQtUcAjUsa+QnOTeOD5MDzuvqKteGkUIuikxi0oAua6oZm1gaDBQvjsOzg29DFq9BlYUh65WAOxc/Rn85NYasHSs3fopy7642bAi7o50h7xFBGd/A1n2HVNTFEAuQkJxfX11SMRC8aQz66GFT+t4sznbLqhzdLBtVXeYGNl6NGpKvkb2ieWRMGNu8js/zTZbCT381Nf/8P4uo8WdsL0AlAYN5dWuWPhq+i5kiKJXLGLH2oN1ScwjHQ4vwxfQysYG5FdD4A8RxrySBmZ4HmsoBCKKW6RfVwpzP0oXsHjZq6f2pNCit4c0zk0KRWJTRueRnbNvFbTzi3F4gVr2fXt9rFCgV8ieiA6dy7BJvqpD2ysmMxPRc8wmbqtvtPDFWfvKqV0moNtLd29Kwt5JJE8F+mKKXJ5qZpo5c8A8D+mf0K6H6/+hksGjYHMmNjT9A3QQewaHuPlEZzaYLYZ9g5pxCB6xpx0ga9hfkjv1cZODurNLKWVToeU99jDzAddHVZ4fyxSBgRRsYVLKN93r3LTxKSoGJyOF6sgDXFZXGFib8w4y5FciUTC4THAxn6SHEc/eEw8lcNCSzokHfRQ6tQ2km7ozmhoPAHyDYPfWTdyfYbY4ia7YtoQN8K0gpfKtbm+a2vRLxWKruCilN952Gd1pFpPiIW53gCIWCvWhyoNvRQ3IO9xq1pbolYV7A//+ONdtRIExkezjMWXmW7jaOypjT2WTU79ccBk/oV7tiLbNHjEtmXM/w/4ckjQJGjwiLgxNEx8lZcP3KRuRMpN1vXW2xvf1bpH3gnfZiLlYdKRX0bIhqaXJB/THzkKac3B/2dthjojWhqBri5W20FpKgQNpPQGM4Midd04yEB2rmU7gwRCgtEkpxKN3mlH+4Y8at9r0FD+2sEsHF+NccjsPTC2AkKfNfZusIdYqSORzCVhtjF94iqPS/6LRBcLeIbWtT5FROIZfibA1dLAMJZqM03UxHPo2kF6VL4ndERXnWNAyTmq568sueq68g7ixWQ+16xR21hbZmODGdQq50hjwW+KcpiEMpfJVR0L/0mY3tg2uGBxY7x8HhQdK92JerVYegRTFBYw6ECijyNoobGj78Jk+kbm1qDfEiUojMmksJyILQsZemg1SclQR/reoB+i89EP8XZUr+YE8o6lBEo78jCx0SZFK+todi8/+72J5Os1rqe9h9S2sBfstU+acy012oFQwmWF5ce4tdkh5brLs51zHigH3EpN3ZRJmYQZOhRO/WY2CAFTjQ8mQtjaoVV+Xwx1ZHwa8GxgV5WKjbBdrIQH4DdUqepw8GAt8LBVRraKMvGHwyOm37HhvkaxDC0/zuQKOoUJAMw0fvPCGIdC/BYCSR0InGkPrULaYxzTsU2z5aDA3EBz2DqOouIvqqHNs89fMQhwWO4d85mbK84yfonIXHhJIAnrkBoHo1xdIFXArFvoTfVuNFm13EOg09VO+WyrbO6bSuOGJMwWvcufi54tg4DkNvmiT13UxdL+Zk1bdLBXVk/951uZwREnayxeM/sfqXAp6xp7G1HJhWquo5QwZFkGuu2+XuBS/IchBChU69JGv9Hxs0ssY8dlZLHCS9xVNPezr9hB9PhJhIICzyuUrUp4nEN0JsZvI+WrXZFbegcAtTlyMHZOGsZpANJN9+AnQKfnRJ1rIeoTADTRghNLhQ7Mk0gBUZc1LEHege3/Ntus7jJyrme3wEMkl3E0ErpF5e7RYkZp5y100ZDcHz6S2XjpaKCdaxOvw9vq
VEItJv07atoARfA4tS1AGq80h06jvvIfX3xwV3LAjM4eTXc08mU5cUxYdmNPN/dWqoavuuTj6JuUFQbtyKyPVH0tT1p5f2Bh5AT4PIuMcxtM6lXKrPSwNL2f/TVBs1zHEfsNxeu5qE2x0YfImp0rZuj4HJ1bhEi5HXYgMujqKLxcKUZra4TIQRTnyzD/v7qarM67YbgU6s4EZZuMY0vrXtKc3ZKO3ovhhrCdgzmAvmIdXevNoEEqoIzLWB3tZPuAXbWanxgqIulHOe1zElB7ETArEeyPWOutlWYP/TJOos02HdumqNbdBoBncIsOTLtoGmCsbbHnxhRtx7Tnc6vVBJP1zZy/c5Z4NlTlmsZ2mxfmBjlXc3WFiQOikmtRIKEppBD3wHyCNKyuJ12Jav+HONvwiT/8sdYNZp2Tl3TV7tU0LoHVkoeGlQZfgkbu9+xrObpgQjXQmLsN75rClecT6Ay7KAP9wfxiIA9i1vfu61R1JX1Ju97+FW0UkODHnpOVpcJjYBzrnyl8hg7Qqy0gCPbLBGZD/sQYYW1+2XYid+r+IO8CNvu9kJWvA6WNxMudicWkg/MYANfYkCK2dpxZlQXczsLb6m2vgDGYMeoXB0XmKq2HcohKS8pGFLq2TRzo5gF8OBcNZMTQn7VflbvFv1x5cD/GJWshLNV3SdnDR+puYCNmqKXAOZAnDsf48NQXzReAHI467+uyD63NDuDozzOO9aXBlYlZLY/POSf14gZ7IXXx8iJ28Eq0KBQvP/F0CpBNI7vN84nshYYB8kKcvGaWu6dIyuVAafbg27f3RLcgSChdkrfE12gfh530Td2WsX7Ffx3o6wzBPb6lOTOCTYbV2OIbdYv/uh8JOfM4/w9K+BUiZReib5SMJmkZgo+wmWA6Iobgj2Jdn68adDi75uYabFbxyqJqR6qUgjA7xidwWBCwBVaDMR/I9D99/0GP/Nhq9dVOPGSASo8NuT43olwTL399d19il+VKmRyHtwLBDJKwtJlwb41//Joq6/gXBnqfifPp2T/0Up9Pe5czvnCJg5OAQ7kpL5ty/TXa558Wm/2VjN+9Ym2Q7hovqs/1cfE12db5DNLaZsal2dz7T6zG4VhsnCyS1alZM8/w3gnngnm5slauKaju5zlRbWn3Z03AtrGDqfCXnxm7y3VHkyYs229ltzYEg1z4ffcQdVUsBE3ZCfpWM22CceQ0+skGUVEb1njk6iapCdrWIY249+wsN/kr3HUigu43O8PcnDXv2cS9YjN/eD63sNF4b+dh2zfTAZNE6KRzGm8ZqOxwRrhir2F25xdMf4fRO5eyvt5IMxTsM+YOfoKXE+chaF+28S4MxiwfXYtEp8Hch2+uF/JYPsuH1NQBdi8kQENuKKVkF8ygzTJljvL2PQnNtnk7iUQeZcxdAEyt0j4pt6ZYgcp7lfc2LmAWnjB5GKP+OLKuG5ZDvJ7Vb1icPxhj67WjbUPB2ZeU1owiskmcSAFJ9cG1yfV/laEx+6QMUNspD8aExap1RObC8UBDiaJQQCQKLENf9xGQR76d4fCfPMUiPbNTp2PItoNgvwlClgcNJmhoGYWCB68orrZ4/q2V1PZ8O89cLZeNgeoZyK3IcPccZZVjQvpTo5j2mNWqk0UDZfcVXWqOMCYh03KMJKjbkwByomJPtVJ1wkhk7wIHpGFOadbg8r83uZu3yh+r/tYpdxar7vdi8JJhn+uVsjDc8FfoHzBMFeJ2vuJgSS5zd1rq5pbFWGcPSP3OsqmbewLLDYblI0ulYR6W2VT0Nsnl7UCFOIEqQLlkuLQ2nN7feXR5YupRd275arUGK1D2cdxa35ljtbdsBPjk/xJExdZwq8c7+Hh5pvyY7YdJDt3PnpZPDfsjZZd5rkh9MddYNBuGmEDCv/2dum3THWirDE5jKgYgx3tk8AgInSybAFhoU3b3c6KeqrZ8+wHDpJj22zZAcA2u2s99zUpRbMfvuJnF20zT6ouY2d3h9ZyyNZ9zDYiJl+jQkU19DWqnFRX5pmoLc4/CE67jPDzuc0BNKDN1Z1aDbmV7qp/2Juqdd0lHW19KPEM7mEa9DtGUwrhjI7VAbP8KTQSxotnbQy6mpay00VrXRfug7+SuuTAw8ZGDROXPNpxjBbC4iWFu0ng9X1UdrtH6n5CHCpdLpmeIluYqOwlrPu6bGeEIYZvEFMFluHaQN89R0sw8Z6tD9FaHXGpz/sitQdLnTSHPxB9vIdcKpLKamnhJqxKXD4ON37ODA2035jWcv7xpltTssAehPNPYJDa7LFVDt0p7BA4tRGbYl0ItSDx3oqAW0BM6oSQswqI6yBBCl8vojOJXDmJuKiZO0RRe5+SS7YuAzZp5kDOd99dvn27dsjiNsPsYik7zxBc0BJTVa35pv1IyHDQwqymRwpHXZAmHTWPfsHz9Mfe1jOExABH2DBfJr7QUJqqoV7xMP828nRnf1IPZrdHXOtjBqFxhluE7Fy7k0ytEdTd90nc59ltPrBnct7GorXisv0ZbkMxcELWqANQ3cnEfmWMWz0rHJB88TOfr8Gc7NQ8BHc3fB6w6ckdvgOZwzVcZpgyfpd/dfNw1sxn5ajj4EG5p5cpOd561YrtGMEJ2drXN8bEAFiMhnHfR0H/5obG7ZWjQRXLf5ua8tWUQvScS9Tg43W7G8SMDEoyYU9Bo051VCUla1UrqgnYvBGDpBGpXlKKfA0X0d+fNUwPbKQCIrez4RxQpphurhWbMxVHhghM9lYqABzMGmTBSoRkT0MwgM4MOfcCQZQNxSpEcDWuXJALk66xPVlyFs78qyQPdJF/h0+rwrWxPahn/Mx76bKDQXkcKMAvPYddcRFR3OfwxP73LIe63qSKuBo98iBQl4hc+YRr07SUdSUb0DoYWWDK33o7fBsldlc6e9g2rrLSXaYlwR87hB947NN/Z953c/5z+yq/9QExy1f8yP5XaR2KTWgVMPmX6Rhd6d4Dp2YrKp/hwU5wS/dfCQghCG+um7b3bhtrVOpD7hj6Jv1eirb+hU22vRpapd5oBjtGliNFN33QmLtBBjQOUItcffs+w9FRarPo7fMxnO0D9XygsLoH38H5S7n6NWUp5WJW+bnJTmSsut5Bk/LT9zxEnUtgt6b+QKzTWD63yre4r1tPgmh58Qt5yOFK2gDtDUnCQa+qSwY6cisT3xLA6dx4PDtC6o9qbJC4/urm3xLp2dtm6N/sWgh6BsnVzkiAbDHEr+ikvNBtQfORiQRocDDDb0QTBee+1AliN5DZmzWv2Q8TDcJfZZW3aL508aR2bXNCc4rqno76yawHq5njSAo4L+oM1uEngWaGRVzapn/YxatX4jd4zxnXYrpt3hqtjoPEVMsUN3CRi9w42Gk1o7uOogfO1+L3NnKt0OSsZ29aisH4QJr3oADOd54YudQmmB+8z3rmGNQy3401OG8V7YfirMX0ytyHNm3n7n8RLbYWZQys2Sw3Mupq+NDV7RLrUTxH5a6lhFIBSnt22EwIDsXeZtkHW0a4L8kwDl7CdfF1++MF7VM4k/or9h5vj8orc7yxDHoS
dxCK4FVj4eSUMJkFOilgalegkAlHv2Kp3wprXl7OWbgeIJg8eqEFXSaHrGiLjypRTxOu2PNoppGH5gw1JQ2Yis20YUZuwaOuwYPGkzTTfD3GZeX9hbluZVBz4iWzFX5QVbu9OvLf5VBkq2UsY9h1tlpElf5WGPdXb2V9zPaiG7BGjkq/CiBC9+0ZKeQ9/Lu5jJgVuvL/N245jUKSl3Lq50GmnRzR3+b97EgLvZTrv0P2gV+DmOq4ctlm2wivtbivMIN2Yd56ac8x1uzgleJn4GywUi6QsFlZXcs/BiC4U6OmCTgpqVI3XJ59qzcP2CgRxm6NoSB+P+cUgewBYumAk0oRvn8rP4QAX0fBfGwYm+FDuDjX+YrTYHzMlTSs2Y57q10ZJ7a8BtMD0dLQ5REEdZfduy8mXHHoTBum/594Bu9JknpREI3hcQv6//qvG2+/RewMMGvbqWfP87pv/3nxHtSP+vfnwaw3k2X5svjg4fAt7FYUHEuvn/jH/oG31RbK9e8NlczaL4pO77TAEiIxmZ3/3Omyf2/jhry0+2He1f+U992yrXfqPpn4Uhyu10vB9vjq4tJG3P9OgN3MqXl/vJgHRlo94/CPv+O6Ub/dtNjov6TzYelWZb4dV7B/HcO0AzWKyeZZh6OH8P3mmoX0eLboXwsmGX8yUKrv1+Ly56OfJt75x65/2ok7MzLv9Pa2xOaI5q+uL9JWf3HtpEMJfPv1k4No+/6/Z7Z2L42d/9oK6EdfX3En0u/9weTMhIv5SlJ99CnHK8MdwYXZ+cm5v77priqJgG82F3L/++++PCO6FJR0U/78l/Zxf14cGAGIElJUCmx1XFjT9PqstSCsP79VPVpuy9XQeNS9akPfiI6cLaEuLJLgQljwMl5b0XMvncJvadh1J/1sgV7wD+n01Cg1qL7L//jz6x7daelt56Ekh7KYlT9aelM1F3Jwo9diFIJ7bjru0+FQB146FEqTbhOZFG3qX8i+X/gkJjvb1nk7eXkkTuMAahyuLuZj21oWdQ0rNRPSDiPyP9IXUIa0W+qpE3AYJTLcgsJnpWI4Pi7b8BK/X5FkTgaMJIvWYBBW1Qr1DbnLqAlqpzJRAbi6PhTz/gj1SGBfmARKz3fPbE+DxO6xIdvMCNEgLoQuSs7rhio6KTyUvgLUYPSH9CTGjwnjxcpdonixhY/UlE1TGGSQQgKqVTRBhpg7IFuwjHkQQzTfeuDsN+wRqkIfPKqkwE4l9fmBlEPkReVdM3RH59TLPP8lkNlcNP6L8VZaAf1of3fnI59WnRlCyG+9y0Tw4kL3ylviBVRb0qRTKPgUc0kClFd+HT+FAMdCQijRzQyu2sZskZc4cJdUio5AkoW33KbIX06BU784/M2JKGHkxaLLdOr8BBD9cE5w+Z/OgcShSuc5lm6Zx1SJGzBkzcNPkS2P0ruM38TP+88WHF7mcdxU6Coy+ZNaGV/b1MYUYOEcuc/d0vQnlmABHkRiRwUVBlWQCZSfX0AyJSCwWTP+tQ6uFp/BBwtCXkAloDhTizCiTbs4U5zogwN00HsIGYOhf8vqFLYuMBnm3Jo28IeBRhz2c566pdhQgvNGBbfnyM9P2pXWVX9rvjV/4Ixjrh4GQIcEdJDL5DmLuwwKvQHIjJxXwjSGg6pkETq0v44DEpIyVYdGDs26j3FN7rayCJNeMuhVWwJP8j+YF1ar9f/0VPOhjvAOqWTOaC2UD0qFMcKvafxAOZfguYM8WJMynpK2sKQgrBpyNi+OnFLPRvCoWPottCcKNc+RGTFxWDKObdbOoyCL/Puy8/ba6Be9sz2de/lQyzouU7gJSErawoYw3wF8/AbmcsImTEegFnZI9cBRxS6krgzwzAKwyQVCk2bgJdg5yAPFMxDwUyRO8/+l1HkHo9WrRK09DfYqcwh0pf9CNQg4gjEAH58GIeifXZkCQW4UaSBYoGElh4Mkt8uNU8pejdfP8TapsLNbQEhSnGXzQXF9DmsOYLpR69EiWm3R9edRcVGWYJqFFmJ7m6WqsONEKWNy+qx4Ga6sWRlYx4RxcCDmihQGpBNpfkpmxCZdysJwUbqo+6TL3RtMalF0L3Pc2NdbT3djswvEaWOJvk9mqI6HPJQ2ogXiqF+fVfFBDan6AAm0s5IUPsHELcnIWLonR6z7swCT2iI8o4Vj1f+aN1BDIONeNu7o9cwocuGMHkmUscEVwZKnL2frXU3TT65cr2uwd9mk0VuftbYAOVU6EIGBRKOtQQkqezByc+iPZ17w7mqp1kVJW7h5uthkaO5AdTqskwcX12YDJEU+qHMLHKX8nA/v9muCGhtF1qc2P9TpZOgr+8yhp/jO+gnqMwChHlgHxpLhHY69Sw3p4UpCIquaDoEoZocDD/UhLvLvf//SRfzWVeevwwj0Q4iRE95FXBjuSxyOzoy/VzT1NcZ4tDj+zxq7ORw14fjFuvPpQfmGMPSYLHt6Yvx/14N7wrL0GdjdtdtLheK9cjQiWmB8wvqR4Pn5zQ5HjzpXdhdLzizprlhq3IQlLvN1WSAxuFKcW8zZ0zJy3PF4eqtq9vaDqoJaHbYLBWCFjuHIVN/ezO6nvoI5LqBHv7XlfddJDPw2UGO0jyJVH0BNo2oEVgr+NaxcpwMyKTM2Tdue3BENnqBtyVkTLuwx2AZ3LM5ZhVGg1s68n8GJ5+DhLCiVBEggI4rT4+dkvMgIGfmAQEZ3c2OwRI0Bc5APEIPZwAMrhbA0TcLUofme2/9dCdOtdl5pvk30tl+N1X//mvohAcRaCNoTZFSAQQuEc3m6gU3tg+kkkyi894D01dVohzpbVeFzgruhI4Epks2D2RlhGZMevxJBsMcsBCGSAFERyOgVaH1A5JmbhNCj8csmrG2vTE05mPk4+uL/lcx2PP1G3efmqxOOru/1jx42CGA8k0wYUFrz/CR0LkYDjcn71UCFk7iLZPY/72UV9BBMma+2swEEGzyJPsKtteXX8ZJ8dF2N7gVPwCEansz/uPFmdHoCSjX/pzo+Pn/92+81/By/M6ykWiW2m84a4rJU47aWKhll0bonIFHvTGDWbBqbzo3pvqS2jmkkWqO4KU4JDNinqsiASwOBml0iKqm1ZwZGCenvM98KsHTWhtgp6+JWNV3rrEaUCvUneEyPJZPay53B7eUQbjYibDAge3I2GVWSlVVHr1TjieaRgEZk4bMrP+zNxxYXXbRWyOs8uCVJ3acW+56G+SG4KzaC4t3hAMUKkr0tFRgiXWan/MVg8Mxp+K3LqxZGJfQf+1QyhVMvFMO5AWk8MbyIEwuZR8JcyYuy+jwixxf8+kQ1ogXe/azzOl21CW8PPCmejEmZJQHWiM4yc1JsRPRyBlHETJMwIVJBWNgFiv/zD0t6bq+O7mky3I2Oyu4hOCLx9C9Rj9jqkl2NVq3GWOTDEToNSF87//JEh5R8Sk4ncLoW4Ywi+NGUlBwndl67DlDNcvQGZTZPKd7bVQ1OeoYA3xjmXSvEIzlWTqxxpczn6AGAihhx1n7y6GOsv2LuPVTeiDk43GVqlLic9jJDNvSR336rmL+83Pn
Pde9Yb16Qf3oGA1s7x1bzqVQXDELa7ZJFKNdA1reBVMJ6ljU161TcqHzRZllI2D0K1EKM/fMKjXZgc73U37/awXchM+1ctnwo4FJ3nLgRmiuuegcP/uo0ZdW9zs0RGDrQonZCsRb45mt8XTD0OZfcknRa4nEfkjBFx4e7s1Yi7W0Z1xIlNDjHODKLaiuLqsBnldBFWKH6TFHzwzWnY3CenUn1nA5cbwsDfOj/+yfiVyfvjFxdEyNXtwTOYcF0+uQCxFPOOdMESCU+Ep3rYXiGCdBpJBsDi4zjR9Y0d56FAGiIISA4JME2dOkzIM500ju4G8Yz4XW9vkbpjwNVf9a1W+Uh2UfXr/7R/qWup17hp4h3puBhykhVQiMm2RJzTxeP52N3u7Jh7JN3garrMiiTDUFYjDuhZtXNH/Lm6Iz+Orikug8lURIRFpLRUEvIxDuXTLkwsqLOk1hylLYSM2QLHUIQmEAPSUnJluceRIXm4FK2Z56Ft1vICb9pTCAEt3zK65YFXHp3OV+M1gtBnrHHAsJtsJDgy4NAbCro4lxvmyTiB4GV/NGSEoZpFXyDS7AEpI4xTuNIF1QHjXELyqnD3N9fawrFw5KgYjtqcGjMUHEAZzyEMtABHVuUgoqgI2RyGJW4Q9l2IcEGIE45+E10JtSPxxrke8Y5mj4YRhH2QZkUpzvRyvoYa7wqoc+Z1EziAJAP0VDhCIi17p6Z+4p3Mt/8h2F//7VA1czmO9FlTEbcuSP45hReTbbpa3HksWvZ3xwZUW25k0FBkD8WLW4qLHvbPfUeSJR9yZDl8Ipjrd+vQUd06ObasBYggMJ7q2vbR0fMH33QV5KYC6BmC9ayEL8kaqoGEo3QBXDBRMQeqozEC5CSjLhEZs8QttSLXjQPldHInyhbhKGCMsyOu2YM9T/x2QllcvsIog7gfEgog1vxCz1gETvWRVYTkMIByspRfAcmuc9BnELJhgFQoQzaQNkqwze2qHniXSw6IuiAgEQJR4HuADgLLoWDXzRSlBE1jEZCGMkYRBlUEy2GOmfGr3jqvR9WYmaM4J2ZE0uvxi1rzbhSWFKj3skmvpIoZC1rjwPNqipdFoB+ZQYU0EIG13ECGNJAYPMvM19848Vg4JOKXeGRtMwYH4L0hW4o0VZJwG0c1bFVrXaBPC0qR+gQynJXIs8OIdmhVaazQOMR/jbQEum8Ub9apeWAc+D683zZv8MaPnD7BaldwRvQIeBtyewc22zAkIPvWK/fM6yBLK60Sm4eFAjtB5DQjcW0NYj1iRkUTcDVWRDApsy/o19d8v5jY//rVUXxDTI1PWOleVB0EA4TBBKJCRG+0CA8MxYlngjNt94yvekf5sPmBztAa7IGw6gJgsWPdKpgvoXKvPSg0ktRTHgARpoO+jFJcfFspMy7mhC5L2hFjlUXxrGMtMUysh/+zmEiztLb8fGp+wbKNSqoqVO6c8vpcaO6xO3/c9puji1jEG/P6j6Apmd9oudiJEQSKNKXaRJ7/L1HjpkHniCxxz3v91jWoC7YgppI67kKkpRYpvPmSBOMQ3sCX/uPXXmulYOdQKYU7JMKZTB2Y1e7uKVw61LPhWGAhAhgOqwhIkKFMeCFrK4ap+b7xe4tNkht6RVlCWnCEF+JlJxDpgDXrP9INKoaBvJdNSA5YJSijCD6ZwzJETADUc5UyMRmejh9E0tKG/HqXWxRHMzlRdeCANwNFZOeMI7JHun2TyG+kxQvvaj0nFtP1FnySI0QSZaOZelfLxkLBB5EhLse5KETcufUH0l8AvToFP1zbaRYHvg+L3PL0/iMtoqbKM7fNQV4IAZQduDRI05STRRY42jGBahHugAEsEju0E5WnpcTbE67PpophZzGQlpixjyuSp22kHBm3a6n6oUA+anKUCVsrptdO3lxPLZCfh44XlMDsyKnLsoRAj5hsZ7ER/0p7DST9XPmT50T6bVQe4OLbZN6jP6FwgqKmcQKXP56+By5UfOXEYkamhJ6chphVS8FTIhZlzHkF7ujGcp+/btA72xzfvaNaNI44Vwq3jYyoyioUDQCYQKuQwyUerfR6IJyL7T0jAX/ItQJ7SLqzwiy7LDgcnpaCnfZbnzDMTfBBAakIxIA2tGAuytwTiEbek5W3euMIeBgdHNA4ESQj1YRxKme6/KFVdXD93IDMLTVyF69eB4KOVspohde7+WS38GIebbLX+0CEqtXEheUFWsBzYxrsn8IvGLrSGmlfGI8dXN1lyVdlRIkxdDCYRhEBeN9JRl4Zc/y5spIrc8JcfX3bBr2hGsR+hnJ5jqmLlzIdVlML9mqARrF3BlmbOx9axjY5DIi2wtWcsAMifI/Dfu0KTFChId+1UJ2rZWusOIaxizCrQYjb8n3WuMbl4Tw2wt+kvFodeOyc7RcuwlXLyz37wIcga0Ruk9zePbpqxG5bQczTvRTWl1FOWyQYXzaZegvktgPgQLqd+HVpOojnfC0xWR8cEVFQ8nEcqfIhK+GbluUCJ/b6sdA3mfkvUbKfm6smdsvOSdJEhBwETsPmypZzqTfubL4Uvq4DXsxw1xtD3XKdmNp9zrAwhyIu4YGQRBD0nOJEQYAwcr0aIIVgEJar1UhYbPaB8M8Ty7GRnZRW431aNrPZLkgyhkHEHFweca+dDfE1I7BFJpbplIqQ6lTmziUmZCeG0L5Dt5R/VGsoOwM5L0WyL3wKh4KKB6L+R6K32UkhscYSFjNDKnIVcJdXc52DjjJZ390yUjwCFwIncH/Znady3GsfHHQN9VchAFUTgwRCBk3JCMqrwQ6zNCUwnw5xPxIPiuOjdb9d7hJnFf6IoSD7qtAog1aJaAMk3sMGIPPGtpHAKdCj0ZsgKCptWO0JBeGyybsjMaiKHL+U6xPkSKCNtBTuuDuSs1CnGoLpFU4rDdWhJdTPtmgBgs0GGKeNIzU/kwYunPUWdTTwKJ4RIyJCFReLN8wZSwt1TkcCcYjAGEQmB9dvoO8UBhzcdk4cqZXiCuyTkcNtdberOI/60pXhaw5kMRkmdU4RCbyDm4FOseNouWnJU5qhhEud3c8K7NpcsST1995beBzlja4Qmh9siJWjGnvID+OKKXDT+iEEpC/UypYl9iZjDpMBo7bhWBDeQPlqnC8dt5y4QnBArorL6wBpc+64EisvPiNWHAZv7X7vYVPuTFgj0GiPfi/t3erVqc2xFviXf2YbLTm/R7R63l7U+/DhwV32ZdBhyaQOjtRF6+91EX10E6y7Ix5POAVnTbcDp8ZgZA6cRm6z3TGylbh5JAsoD7J23/Fh3FZjnijw/y372Ik/q3e81WIQfoO4uaq1fs6B5Q8+IQ9ydvPJWFvhj+/7E/us0PS5z+ych3D2NhY58Kx34K0A6VixxZAtKGmJTf4jGhHNZP1ayViMaDspIHls5Mnq+xZM9g+OoqweSQAWCYugQ76H6dE0Fw4noYnhp0iOsSSxTn+BvR3eJDQ6+CcjLOnxakg5lZbCE2v0O1VP3zmtdS9Qehd+cWNi/+7zzWY5F+Z0QuI4IiJvjrkI8lzxqoSD+EBnMr1nobelk
P0NJPFehSMfcZMQKnI7l4Fite8YthAeQozkMOGeWnj86ytcz6jJU71W1vFcNCw92saMju489+6ykKk3lNpkmDV0hBJxYqYOz4m4I6gxAjyiqE0oq08PVM79a+R3RZkBExdCsDKSG7k0PI28L3VBSBl21hhlRS3WMNeE9YSk8tAR7HXdMACxgUcrerN9pNrHF7FhsyV2M4OdnjiCDhSXpSMS3LZGVrYT4Y7JKyt+1Nk37cJd0pmNT7T7LQPHba5Ewt5SoknUgu13S9tjxZicNKwJLDGjjUhgNOCBN6a10clhdyNsxXX2v4jahuF6XIIZbEaUu8gVPAu3C67Hb/d+Fu169O8rA/z9k6/wgVdhqKb6/ffzLS3/YN+bGCoUqCyfNKlShIgLt4nQC9FCDi4baD49QMP/Vjv30kwyMyVn15tcOfwUZ38EbliTQSACXnWpQtf5V2+oyR/fuxsUgnimN4/7G3n16RWXhuNNlGIYgWxT5A2XJ9ZbrTZlnvFv3NjLOVLGWyFWgS+2I8IfT37s+WZ/sJboDUhBZg3dQZAuZ2XoAvDkB96vM5M3PfTTJ//YooyY6DO/4GDH1UXXuE6LmowbPZ8Ctpq9+uX5HRDL5iVygdmIVAEsqaYKPg5FU3U+ghBzDzDRaXe0eEc5DImLSv5Gak8DYsDfa1v/TeOcQlWp5gqooYmg9WE5+rk846h29McKG/L4AUyTnCXAJUo5TsRkG0a9K8k0sx3cbbo0pLhqNmJfWa1nAke8/UpXG2cnMMUXnOC5dhhyTx+93vjnP8j+5UsBdOajo3z+td3Emgh1I4wgXCDMiRTd4ajw6yqFTPD86s1S8aUvvmE15m+qIAfTMkYQUofBgTuShGZoVRWSGUTEwUHScceRJfn2Qd8FJT4C1z+INs4p7Y0mpOe6E+Ol8B7RIdVAS5NrGppgmUoMKrmRiEykjFJMR6TdkrSOyXolbhGbocg5kVnF62VZ8YoQXfnNTtTyqujxiGwAbpw8F3L/vDVJuZe3O+C/qltLWpza6euRN/7ad70dkSHNK/DTcaHbqoo2h5pFJbh0jUxqCoxA+3AZ7fWMe2ivZww9sEPTIGA7/G0Sg8AiUozgICKE1ZC4MBkX0LCwmelUVDCe3BKDuDksesBMdqJq2o0Crj88D9f1o5IS7hEUkgmSgTSGcZ6ABSPHdTL2s7jfl5WpPIIq1iiD7bqXAAiDb91k/FKSICSU8hzFEKIHw5gDAEh25A/5klwzIilqQAhwEZMUUXlRCkL1LMu0azgamFmAWwPGcvdyf2zeyWDW7PjbmorEElxl1gtWHXS525o/+cg199rA2OC13wTyWOQmcXTFn59PPNflyTsLxvsQyaIcSPkDfKQ24eB6yT9q/AZSAosn2Bvwrga+yFz8LJGdnODPR6oCp9eaYOpURB1xwMO9vSg+bBg6NX/BYMxqFKwWgSIGjUIathuZijUEZayTidzIDOW6I9shKm1EI9QCfwfhNE4JwMOJOD0RGAEDgKkD9U8uJTvJiIC7ZtD+77HiaIIRE8k6YnpuoXmkA0PPXEKI/StEGDISGcqIEBA0sqx3IlAtRIDn9b17EYkAxBDQSY4wpqUhFgnsxByRqcbFvAN+CKGCiI44lgWiB1JC9VjcXZd0kGeIM8bgnzq+urznoLeDHIwg6IKyBSdFrGbQOMQ5iDl5OuVvSSMjPuC95EAHA2k/8oYBmk4HponsMk+Pf6cd929dt6xZHPD+et1Ii2xHmIe1Q0qB0wlWnvaAHjNWu5yUzBVoH5WNXcFXkoUdrPOVBXhf3V1q3tofHT7w+z881dmHlWh40vNgGqc4t3Gze0vYeZt3nrHbg0EQk5UwMH9a9dcrQozppY8ETSV5c5x8E1aVuDYfG1CWhda1ebhx+/Jjrsius8ufoepfn/P+7tf2uUs/UOqDTAvKRtKK8KvBRczab3TbqY4FFRcRZ35q/FvFlbr4AEFUkr8FA66zSOLoGwVHK+ufXKjC+XMheDHk7P9/xdBj21SO+z3MEV/afJveRCnSMaKQQZBGHuDXN/vSKALSjCCU1TEDEIYhR1GEIBYGEHKRCAF1+hOb2hduCbFtfc0erMEAmv+rpw/8hBTW2QKGLnHcngmW8jmDd/nHH7hxu4HW3gPsRaVisYGjRoL6iimWq/LuqLGsZXMNQ2iJcZXI7qbdDifoMRuD1sAHnFjvF39ud3z5vVaZvkTrEniyKwYT3Q0SZ+GPWtKuc9vrhQH51dbfbhStrU9qZtS6+NHgmpLuxOGvF+bkVE0onoVcSMKD+HbUewkPlBbtnGzljk3aw42tRrNkUKLxAZ31KK5XwOtXn25oDI6CVfv+JXWMyTOiIT7NRdID0jYFdyJl8qr+gQDrI75gAMmHHqpqa+wZn1gwel5Nw6taVF9nMklcMXbU1Z4LezoWysozIBTri3KQQQQQMEAGGSIAAZ4QRJAMCIAMIl2XoShiiADSGQIa9KIIo7+kCiOsmBz3LoFHbQW0CYUxyYLSE0TgQp9p8vjTqde/Pb6qBy3tX+r0+YPK/yHYuMAfJH1wMEgfwGTsXhDmOZy/RSJi/sJB14DnKRKKaLY5nBAzl8whNoNbwmGOykIexvNuRP5baht2nWnNHj4ZusnBqIobAiNy9YK6wwsqI2dawNwuIFslXV18UKCr8wylOGOkLn+2CLfO+9sbT7gd1/dixC4jjCEeGeV6xr2QcXwA2wYKd8k9pJDwldbSNrK4Rn7svgqkIZRFcejA9ZFM+w1Mi7CEWk3UAAx10h0S5+RsbhhrVfqsZBv2+aQf1Nrv3J+KZCBCHElRAGIQqlmYxldRiTHZIpusuIvZOZJlRoAQJ0Yyu0GA6gHvPtC48MML+zq++7ax9GVy32/2JDprr3rx29/8pUqW2sEmcUxLx9qV4RECMckDsIvK9U1JeYvgziheqWHAV+Jz1fz7w+tTIrd7LB21QwFyxH7919mH3hugWpdSv+bYeY76Zn66fSuYqloZV/ntcmvluifC8KL8t2u/GTjI/bbcdNUbAYgIiJouCZeZgQIvg2AdQTrhs9nsdY/KJs6n3xAuXDWquszFJaNZQnYXdr//vNbX6KuuxcCMVr1wY8u8Jr41EQNaS4u2xiZGPBcIeZxzQoI5zXOiIUGLTH/b9I1+rahQ//UezRMNrHeq36d/czzdvvm9ldX6xzc1HQA3E9Omx+KPb4NIFgwLvgUpqkbqLfkP7P7njmm0/TllQWPr0ezV+rblP/p0vVYk2JbJ09is4NhpEGv5ZJk5G3HJuJDQe8GNqaxRvI6pgCIRJvAwDRnWIKBrfDdxOCPxbqFWvx3oeYQKG5rCtC+Sy2BRZvEdvmqsix7VhVW+xeE2bkKLo91dhbg5sNhm2XsO/cOO59zIlX1ld20hebd/p6Bwd5Nh7Fvotde74YHabcb6gNi+H3XSoD1wf+3crRHmHzn/UYj5X5SFS3PDYngg4xxa38clNpeMLjkefresJBzMOtF1O7mNq7CsvpW4Pvk33PCWgxVYFT5rMlYXQ9iK8FJ6t6ERKYFkxdj48txQtabV/aYT
fPJ5tst8kRdUvMDh0w6Jb73wi6/ePP3fnQtMVl7hhuul2mG4pjmnx2I13rYkDY7I8cQPs/NL3lD5BLX85mykh4OgevwfFf+9BHHN8eMPsr3rfbilzB4R6ATfdSF9ZUyEQDwuemQK4kuECp5vtXYLVnOhudVi+w4ek1vfG2xbVK5ueapZ2Dj5V24q2zSneFF1WRN22SoM5uVCvU5gs6nwu7CIRQXMLBWZpeaFpIx01dplhSFOK3mAtLbPWRR45SYkcEKmtxWveDv7QZvcQ9aUq3JHxDrS0X40X5gWanEzpVnK51NHdaw0DoFBQkYrZtMKJ/Cb87i5r9YunVepjIQMMiEUbYFaCTJAcGFCaFpCuS5cUAKt3lKLwdp2Q+ZUyfY75u+sbTm4fbMw6ap6e0pRNvXYjOlYApr5cv0qWjasEhIZSSDsouq46ClQjW3sBtiRFDeNJl/f/53nO+xUZwRoH+7wjsWG/UC3iXEoBxHu8EomxnHcKQtzGe2e745Nnn3cmGl03AG5OymHwpA12eLyLWYIuylIFZR2vga82e2iVoE4omHVPsMGgmuFy47atm3S3XYlLc0vnmyqfLKjUqqZOXJMwsXYNa2945/+4cqn7VK2Tl5zCF6S48gOcZwhH5Sno/wGVuDaoe3Xp2ERQh+UjsT5RKYSN3JYdshKKAQdnOhAEHY97+fGYO3Joc3hCohCHH/mta3fK4VsDX32ypmVSi0DogdqKVlHfhR40Gswcd7kUsjNYnW/x49MpMy4zOobKPYWJMT4sAxKB2/j8rj8jYt/8XXnTTUuVYV5FAv1qhlDEQOICNQw3ArtwAcplUSiaRRxsWtZZMkEQroBluMmg1LPeJMy1aG+MG8+yjxbr6LbOfwLbkmhcd3Aams8p6h/GsN9jOHj97VXrMHDRA2BCcWuyhOPeGnLBLD7t031fKC/sK9sgb7TNyBFRc8/vf8vem3JTTzJuaez+3NkIpXLiGPIe6sm9h4Q2X/Zrs3X7h7+uf5zL3CHisd/l9dzwSM2lB0PzCnhlrN6T7zUW+AQ1K5XTnpP6rYs6AXEHkP9DXkFj3CUmaqmv0VwzTzC7eR0g6jD7xpr4noWLd/c8SoMYiHbc/3Xv29JfbOnHUFCS9uhuv7uxSFTykKLBGwsGfEPMeDRCMIBzwNz7QnV7JPX37RpQ+Azp1vXPqGNyskGWGhViu+yoyD13SXDYrhDwXlImYlpAHiF906cb7lZWwWqZ1ZHP2lkdpogQWg+d1+4eBltwU0Y6TBa862ORULZzWTVcq4SBwS5Zo3cV5MtRzYTO2snq+0eVHPLmdutBl/aGcJnJ99Lib1iRECXMDNGcMbL30Fnst6v/rr0TGafR0ICB6Cr+lbOsevgiMyRLMchsXfrjow1A5ORTDMRUUbAD6BTxZcapp0MApwZzXn+MfePwPvV+qe97++fhidu2EmokSgEASkdvHv3JI7VQfl0TQrcVXur17GCRBZypdBgcBkMXE5ozphLON2AoC+tF2BEh7eAjye7ApA+R4p2yA3yaAD5laxmS9Xba5StZ4v7kUGuaDvEhg+O/gymFy6Zzf6WddTEOI/InF8HHZtVLqDJOwYjVDHqBD70CdNa0D0qtK3mhipLtztQpqzD1XimlCr19XyBsPDqu80nu1LPrFkbv5L8UvtObYRmT0UhhCiVIVTHcP+e4jiKmPiSuQBjLuPYNv3KAEP0Ry86qodSn4R+dBZSZiQMaJG525iXQRishS0pJfA3fVE9hJVqTIAMRRGEDBHmo4ZrWQCKP3SodPfjAnPA+CC41iE+u0Lbn17epmz7yQnHOZ5GdBun6t7doHlCahIuix+lxtqyj2q/uNgkrEOkmsHZq4JBOFPDxHr8gFYxNP3ddM2FsxGWJbk4UiALWEbigrx4C8s1yVkRYSkID/0p/UYbWmSusG/w+64Utr8aMUMZ2QkMCm4O+mMLF7rMHUW7iU1iKzt8hJrdaLk832DKIP/BeFwqRsblvCMkxPo4uRl4s0b8MMzO4deejKuBq53w9zoj6nkTHIhZcVGD2W8NEgfkZ40U2+yqQti4n4m4DavzV77fFpmSLKH16QV5+GFi2astD5b+7jf/H3Kunu+ak8RPxthbj1Zf1qw6E1QOwHF6fu1RqXCU7LoGkQNRVQ8QnFxtzBTeOac7mVKhH9lReGNHktu4U9csvh/Psy4wSobc1UvkOEdTiECIdRPfFYsg4NVjhAEqcjxW8GUI+pTcpj74E/j51evfe3LPdZPRZ/jsyk+uXjr7cmj36n/K31AyBcU7mnvNHEkjIY4c2KjiDtjPeuwoLmmCIBFkhD3A5n9a2IdLi+XbD3fGHioOudl0YI9/h/+Wu0Byen+ud1fgU5PyiWyqjy4xblciiZIbpDtW4oWynX+MGqpIvv1Y1brzTcK1VyaHIQt40wRmFTWM8qcvuAMrS+cUbXFX8a2Htuk+urSD4580yYalkaA1u3LAdficBF2OgeYH5q3UN9ntbODw9I7KebXZdIUfwySP1w1wQYch/GrMh7rtVnOKzUftnmzX10P+aMucToiKZemxjO329EwZ/BwMd5AwEhnCPZPoBo/8zhVLxsjHkmw+UHrWGJnCH47BGqiO7HttPd+ix+hoCehV4+UB/aTjBP4Kiy+dkNjqrEcL8v9SCIEoiSGCQETAnE7hEUlWLS++IQUDb2UCBZ9+Y0TnxzSUgkzge4CYZKMCNy5MQwERJI/iCKMRLxQNKIMS1PWbqRO2iLflvmdZx5qXgJpFD4poM0luBBhghJAI0SJ4YBzZjT36bz1BxMIiCWIHbzQSoxEoQFvpmRZhs4gsCfutKgGhDyZwTpsWzH3+Gl4o+8EO/Z+PbR3NbdKUt4+scnDQ1Njmfzm4NIriskIk/dlI9G+9GVrfA988c4iYdtQNOF13sc1SMUIO6mv6g2AvRmQbV8tyQiYDRT9Wiurk+wQu5enoTCAYVzHqt0Wftw34Ng6oqquqYt5vbu3Jug3L8OKo0dROjDt44Sgq3OEuujVRYI1RuGoRVyPnVBZbh3CIaukKw4yvUFnvcBjRjrK5A78dfln/tK++TG2LBR9GDTcq5kKOtnCRRDkH65AqpSUkWWC559h6Z+auSqEqhE092eO6NCYf4FBLnSqLImqW6XahelKig1aPI3qVOay1H4Ma38Uj8YP6Pf7AE/vqHui1jkXaTIIVOZYeeSiXXo8EM+e2k2JsrcvVbnYiByFlSM4GOmUi93I3qM0tg1cqCw+iroPZOu1CoV689F3hcraU8jwXrYQNQ4HHWzvLJrwKJXWw90qvyT2gNgwjJPUANA9aEpb4URbp1xQUh/4n4RqrOmMx9hrNT5uiWW2JX64t9OPYizvZ1tta55KbjNZ5ux+Kp6mCZVbjHFD8uAahSp77qIdvbp8fkawlWOYdqDit15D1AkdgGCJrnhapx9IZPYZhKXXlYDnMoTACObsNoaNcbXXcJv7nrS7TpV1T0jqNVeAAFoxcABlPHXT/yXvxLKImrA94owubwyZz5OcMbdN+rdSc8cgcCCWJbYFPhZ0QreBcD71
8qZnfv9dc3CCbE1MtjN8PybSeh5EVvPV2bNIoThve+20fhBBJY+JvX/Brdl6oIIfZ7+K5lRsnnVSWmnkICfN6kxubw363IveQPTDsY0DcQ1Q0+Q6OAIBg8l2EY3qPyLKdutczQo4nHxSZCCD09oA5fzsTdxA9PPI30Fqz8vbNYU3DcK1dRRGguuKaGKt10+9zy1A1qRPMT0qkBJeE4hTZkZEWFCdwor7YKFLpTNinqiNKtUPEVlhaNpOzByO3ftB6A/AnDc2Uz8gSCa5VA5jnfGhIVh3GglsQad0sDX2dOBNxO2B1DIcUKsn1uhOXojwogg6pknEDjnVH4JDIBfsBwwwKnAN5nZyrK+HQIWvKA7ygwzm0LsIMcMJqXHfrMm5j9f5TyQElMIlHPdGetXUNlPzeGSceyN0k8EvNxmPU/o6Bv62s9SK3FCYsVxMGZ4/mh0FGyec10krY5GgMVtbqMQEzFHfACudQBXHG0cliicLXveJGndObgcYl34p+YS8Qbx2T7qpHx5WR/35ZGUtLETyDzTXhX4dzuiF7O60mWhoZBAFGIAGbvMln8UFPVlX+sLE5mvVumdy57mt99NlIUmI+5ru1JzWx5ztCljUbPJFKjxSAdBLCE0ZTzwgRKu15lciqBc1I9DKxBq6EWkp7Xx29bVzsuR7GFnTvIO90qEDTFSQgarbGkE6MECJMsK0eAYQUrAKRZglhAGIIhmUbjjNmLTQO22nMZI43PtllFU8GXvCfmfxS/jBYHUu5gQjFtJEaPNgbiYRv84iXwz4KD5HQGtmlMoHBId9AWddO/w+PIKCRTSB6mRZk7DHssVvFxHKjpxfoGWOvR/T5xJiqBFdzEwt9t7S4d1Wv5Hi/MO/mJxfUOxHvoAIXlrUQ4VsxPef7+R72vSC/BOWYjo3XVpUaDUPBkZlFVs1mbif8jOucPYQQEoOY2omez20orUbEPVx0z1draXfHVlS8WkFVXF7g3vYfXB5++uDzDySv9JS5VIwJi0EJExOfny0qhUNS/8a88Mi0YEhYPK1lh2qCADnk2rsStnM3BuZ31CwyGF+OEWYrCxQJlFBrw0mDAAcgmu5RIlG595neU/biJYgQEkIH7+uMwdvucBj+4Em/j5cvs43Z3jjxZzE9lVn68snm6nqITjaTv2aX37J+rpTYOn/FwpM/tY760qkWHzPDcsTrLfhNo+p7QJIYijFPrFd8RWTTub17vb8sWmzNmj+hM3V9NW/GLSwhL19h2kRMVsoi/A7MfYbXLPfVCl5XakVdW8COIeymShXhvHz8TS0IiAvU6c8tIvi6WqBPHrocu7bfYJxWFCYjjqjiaIAJUOTcMpaxP/2OkRzm3Cf7q60l2F6Ztf6QUIlr7N0MvG14G/SkULn2DiGU5f4ef2Hb2uGvLzqJQnF/77MsMBZ72GtvXfvak+cnB4xcO9lUPLp1NV+/FGYhCmprMFw2ta1TMJfxPFlNStJZobwNLsuQgmPvvrz/zLFrP79/2FT2fy9vvn54bMh8i0TlFu8oCJd5p8Y8PgwlKBnc9MrLBW0Q9yPUJZUEoVxupCTV6TLdRs11ikHuivSJuJKXu4AW23RtMB5BDkSFSE/uhodUaJXJV4h7E2b/My/fIR+nGnSaC6okSJX0TrBe39Gy/6bjPa2f7j6dxyXDf7TYz1xT2HIwsfxzs9/NjfuzLfrLHpNv8OI1zihEKH72Lw7AWjEgstxSwK1yj9jLeOgJg17zful1RH4SmRi8c61e/HH7IC9YBPyQJPq8q3IxAQkAe5Ax8jVHw1CEl5GOBZZLgeakgDoGus5qhmkjHVgn3f/CEf+3LKBO3xtt8O6K6c2RU5n/Xrwndoq+hHbzwG6KP+UVJYyDocWL99e/aO6Oox2a+88FyaJ5ubwNRi7A9nqiGx9efued7qD+/yWvvmzOjwWcmTPrnnq0cF3rOyM11gc9/F55fKRfeoF0xh4YT2MNcifD4p4JrOzCgXwBZqm6/5XmUT/kUfXK3P37cqNQmqid1/lk6OCRKsAcd33VeeTuI5nRvQwkN9SUjb+/2LjSYKiTHhCJ2hsZRS1vPBjs+OiNponEj+d2Fvbnqu5r2zz1Hjfn/D9Pret5vXHrF4N5oqf1Ub7s6MiHO6/OtKU9Wcvid05t32PH2Jbfmm/wZsLhggSYt2q90GGnVQa63Go81ZK2FdCqXhtXWFC6YPz0/zg77z9KZ3gaVpS2vyuGuweuKZFL9y0AxafHhh/mNz9cKBhm7GfYMgnFuTDDAK5B/MG9HpSDruAvVu6UTgVL2mdg4Tn0Tneo4cbba0SyjcwYCUflEuoBAAYQkansaYN5MD9EOKPMpVKyBDnbowIYU4/N4nip1OsWCgK7+zbXJ+57+tT2igLPHxqHaO1re8OCCsW2YjmUh0157tMnvzInuen04S7kb5cxz9sMsjH9j8lzbUYVqZLVWRtuJbbnG+bJZtxcGjfcPF4CIevf9ONpQ8rOJ9zViX2/e0D6cvSrzmMvyNuOR74vhHtfHF1cgmOCydrMyTGn03ryGStXOGhZZ2nPXxI3ztip1anqATQYia332B+v/ddHKBT3B6jq7sGx5CaDjeiAB+TcXP1ipRV7dilEJQ6CkOhU5Zw7jh+3n3AI6r3UkClJ7IAT002mrvsfvwe+v/r7Cy+czak5lPPs9ugmZbU8XFYpeIytYMHwJyBwvmPvf7crYh4N6TtFTm2DaNQkTil0mk2kEc95IIexKIG7wxGTerJZUiCGvVokRkTS+Fj64L/ffGjv1T6oT+sbDajUgWzdWPbVKga7LwY6VWPeRij8CuWtLuVaBEcGe5jMFUO/H4kQKcPKCLh6ZkP8wqPvfTYFS7ac2BCz+PBiB0P4PjmvIbDswM0uNAcyqEPVrahI0oFs4tQ6x50Yv0bwRdxEe/XGAcYvmMQr3BapeIw1BirRLqv3+s/yDOqES8aYElIvSbpsVxuhajVOyKbY3BEo8Uy0oimvBHsBjjKgdaJbkOaFjMIlIaIaOS8D2RpGyjBhGZg7pY1RljtzWzjP27E4Ou/AYq429H9DKldUhC5hToIzCDp5aix4odcobKau/tLQ7uZ2QtDF/Y/3K5lXh8rm1nwxDfSNwuSe6NCm7z9QO6er7uWWlkOzPxPbalF785KX91189LmqZx/ZOROyLwxxU0O24ErGLeIrO3ek0y3Lmz2Q27fsp0XCr6BRRy6Jn2dorbK03OPhcrx3aNU42fp0oVRYUvLN0mKbKoeHU6/vqeysiZJ55ubof/tWAJwXH755y9ojh88r5LGufxz56vITv5VWGC81Kv9eVG9o34KgFA1PerRtoKf//AsH1+xabTjgOInR86Ftn3iTnsmhLAtPy7Ij6+3ALNRagFiwuPPHv1m1fZ9RcWLuPoweAft/5rmYXU6Mt1rJQaRcWxYqJfo4O/zmx2VfVPCgbvz1rxn3jzWdBbG2n8Pg76hQu2NywfzNN297z1wQYQNWkhQ8ecvkdsTlleGPBisf1bZoLdgTjT0/mDFE5tSflbr8L91QZOKXTM+7Zbmv6HJv3ZCRe6htAI
H//uEd7IxbnDBJW1u/w27Wlzc3t0i0ynHVvtXVNBho8NXs+9EfXZotk1+U0Gg+lQYFXCZoOxrk1cyVgC0tLA2tOvu2HQdEcOD5z8senGAchMJfC2KZNdlI+OzjDF5Osgav9m+djfe0YktiuJZMUuc8klDfKfXd/2+19xebUsG/Zp+O1sfXjWsUHXLZNx9a+rECI543ol4H3gOEYPKUAvPoutvjFN6U5qxD/nvkjUTNwU1IJxARNqHBcQQ8byKJxKKtH5WzC0i67J9Ob2L6PkAo0ymK/F0DBjb9ubGM7hMjYd27pVfkbtIGq78B/vM9eed8Z0/mV63bELCarFb17uzeCNUkcbPOQ5a5cuLNoa2dQyKGhTyv2NTFX0TzhQ6E1/P5r1fs6uioKXA4JxyYi4YX8reSe3+97N4Nanmqh4O82Zh1y/PvPldT+CwabX/hTMPJwc5v7I9ApC9nn1aW3xxYcrBsydkZy56IO77uyYRWvqGlAHVMttD24JajVBhd5qzoueLMc3U/YRpo7Hhc3Sf8DBfSfefFsZ4eP5QO/9Jd12uoeKf4wJJINkKRUos6x5OtsSsi0wgXMNOcEDHQrCwE3lfNyPeV7C17n8jMPZhZQ3/oylQWG1oNQ5/qRqcYcWBtnK3ef6OHyHi77hvXp1XuYrM/vo0J9Y/fX5VU7e0BEL5JemzFyUOWqP49G7Y4oicLvnTZ2HxS+BxE3I2Xu59wO+cWFuwMBZdHDThdRRw1kkFwU4fg77DeIKOlwCg5eryDGx+uY65KqOdhMcMkmTYXOHBy+vKgsMQsj7gbnciQIokYRnpgXofP5afVpYKcrMQGDojv97Wcjt3rXRbv6/9c3ue8gOjH2np2wd5DWNxoYmDbxAHXbt5c8CS8d06UwT4p5jRunjR7CDUPBn/ltfZkf3VeLysV5qs9wnKO71XmlSfO8oehXxP4ooH+kgW0CpkkuVihEHAGBGvx9OVKhyETkDwOI1zisf19/vHbeicSndh89qVMuAXCTcbLevNywl2+4qEWhtgck8TCqL2UBqBqQxSV4BPytIXYQNmK3K0fnH47tZMbnKDZNuPJmx8POiwTDmECCudljiBP9/NFZltcKo2Qxq/h9CXfN6N7d1tHpAwSRXgKXq3L4TU4ngUNYFRBgWnIAW4mnuUCT6SIpkHq6XPE9RlKCXAAtMGha8u76jrl5s6gud4A01xH4YBN5gMojmfUACstW8M2F+GRRYny+3iuAEHMI+eqC+R9u/+g16sy2YabYMBj+ey2jy8syqwoJVRQC8royRPZ8ZTgInYT3wVNvSJCAcShoCpTBKv44HafRYalQT/rsbUX0fuJVJxy5CE8sjQoz/yueEf0VzDpQzJw2KYfm3Y6//fhvafe1m4Ei4Gu4uwjQHOgWD/UbZqmAwlm9btltxpV7ZqdHO0nyB8ZBwo8GufgqPjQGliIyAKCYJtNhC3IeZi53COEiXGuBoCIKvffLpPxrz2fvD3++54rEZjk/ZfxqUMORq7EUYQSXpAkPTP9ao4cR1a28SZ6SdDFzCPEIGwmJj2bO4vf7xGXjP5EoTDtAw4Xd4l7by089P0V+uvfNnLNOMc4J8lbjIqqVHvWXuyViy+Df732/2vY77VDg4+G9Yktfla3LDD/85m2Nb1//2AzyKB3I50RZbKLpReXFJei0DWwdDHq9rkw3BvBiznFg0ynXvnr8p+2/eEt9ruQaeKsgqUUYNGk1pbeks3pTGXxqZii6CkvBN5r4oQ2CiQAoM4A6vFEdNASHs1GTrHpWwM+Cfyy/UXmWzXgq0qf2y4dmH5JilBxjdBX63/fzuVe0+vflU9YcJ5rGbV6hvbnDxsEk/7ejgwvQXqDw2xCRCQov3Xy/tHkT7M9hkULuPkuTIj0G1O3cf4cvlONmRepokwZMhbgZYP53XffrZS1Ftzn9cGeZbZtO3JrPw6ewySchz9TwR5f3nY7ZLD4vl7Mz3M0pI9/3O5b2GPznfUJwDu1Y/Qg6Dlh/0rmUm0XY6EewPa3qv6K/F/86PGN4luX/kK65wP5Zryb4L0IX46/V2rvJUX5vsSXph6SXIGFPSiDl0HQqod/mWOKBjM65YU9o12mmfFpqPJon7Fh26V3n35CoYja1dTPi/pusGULi9TxtXNDToHt/WX9ov8gMcLo8SVg0eBF7ttywRxFalbjYh54M981VSXiPU6afHrKmC0CQm9kh189YiTv5A0MwYwjPvPF6IrWrsF9DP3Z26PkAymEh4TLCz2bPJcW2DdlXtQfXAZhZyJgWpG3xLcVrz5bd8/EoX+3hro0EPgyter1Lnpe2zudJqDUsDMQ94Hx8whwjs+zSwRwOtWeeub+isYfeYD5OJbJXq/PvPiWoMGO6/exT1jERjZ47No30GBSHSQfdtcXc+vLTLuSkXq0QMj7LLf7apnRBZ0qXBUnkUZbS4WrqAySkqVElUrxgzgxx6XBrV7jHfES37XvFty0rtQh/punuI6gVPR8y6te/k5rsr2jLgpRXIfVb2yKcMIURhOaQYJOQpFEEm1tBDOPJ0LQVeTxiQcjAkogGKQh4q4VaIsd51NTiCvhP27c8fTN0V+h3X9S/LfBXV/rFKUBDi1Wgu4a8CY/rd1wtuPXILS5/+rzgQgzElkyMFlHDAGmIahBVk0IAxTaZK07aJ0U/ff5uJvHLv/Lsgr2BGcCrd8f/OWgZI2Xm6UrE1CFyANuVmMKgFaj0GOQalDAD4lQxLF3ISxm1naL2WJHJnOeYOCGLQJqLUdvFAhPG4StG430XY70+qAqIu4itK7lQiejP+bMML+81o9VBIs4N5f+GbTkwhHO34HVXcNxR89C0wqcXFxfAElSRowAPoQaSjCcmnGZ/8gNRcN9c6RPd+1d9CFwyz1ZRgCEkIsQgCFAXEyWQxoClBB4AMJJEoWQIF3zQx6fRxRFYIktxgjjBCLjanJoVM8wU/DB6L5NH7CwAH6ps4f0R+h8ve61Uw+M7jqxp24OJB7sVLljnMXY51vpjDyM17vm1th63+l3GWfQUkczrUWCM1Oxf9CzrLXS6rvLNT3nSL3VfaMqGXqFFo7nk7wwTg1yxAhnwQ+feD7cIY0c1Oo2BGp+MSuxx44q7Yt6C391eyaiAS3laU56/KiFymyzJJ/v2YnSuh3KUM96BE9/lrV49V6GQvuzFvNb2unpnz540zvpUyZURYPdkTccdXKNBIuzulBrLyuuV5xx6Zc5vu5p8PDwcxAktQDurKheirYutkr8K6dBtKIXS4yGtL3wl0ycZ6tce2nQ1lo/o/SpjTMhw4SZLhCcIw4FQmlLs9tE0K2ceeKHL98jjoql30Fw6UDPt66DDi+nt/R7eyp9xzQBBPctRETOFIHVv/B/J5lcQV7jJgM1Rw865KvxT1n7XAk3V3b4vWLBbWfRq5nK032FXXrJFvWkdOXLJ0VxtGbPndJ/Z1pxV/Lb59dsb2kYfPjcqtJUjfZgJHXz+p8zs2uJzlx41Rd/3cvRBxdxHx4ucun/t9NgHBO2Rt9yFpPl1o1pdCKSGVv8QR37zaaclzmMW
2KxFpF8sHCZrDNmMAir0Cdc5adXPPumJzfeMXAksTDyi894ZnCZfnujIDnqhG8/xa/PjX6Y7a3cevDGoXb9m7Ee04FlRfS2Zk5IsON/8axo3vi4wG++eiG+W3hepQu+sd7eNd9f3DzwBUVnb1uy1/dGc/LMpGnHSzUhOUPWQxj0HN0xVMl/tHQZdzo1j+OU1hqQOj0W/Kawq6/ytPZw/a0G8Vv5vStIYKT65cauL3ZWKz/u+SFuW8IlzXlLLYcT/qkpephIliWeJRGgnTj+fdvtld2KYej2h801tMVieSNugfDKhrfzbwGvTAXluRECbhtcaBZw2yUIquaneAL+GL1Euf/7TIwQuIQvGVzwFE/hf34WnCEI6I0fvfKzNUttDV+B9gBw8f+ElwKzu7j9VHCE//dbEgLc/X/ORA+RgvL/WV1fe/1BGgAA7QCQQ98U1bwtzrv7tKx20pRSm5wz591pn5/pEPHQdqfziuCEUbboW9HvaXD/Atadn1pkzFxvuN30l/XzxLeWvLUb8tjdB/ZisyE+JsfAGr8L6Y0Aeqn3r5Pwg7gCAGYnvwgAnAAAWeC2d/EpDJC+vVqBPYzO3KixWSV8m813AQabAQBQCOyLkIS8ABGrMGK1GpHQbEHOVrcjpUVEgZKok6ratAE0qcku/B6w52l8C4TzNr4NrPnG8H0glvDITzqQV04QFRZRy2OhTTaYl202Zp3Vxo0asEKlhaBtg43WCYwe+UqbJ47aJKJSGI59pkR23kD3dgEf61WgBVxteRj92k251Xtv8MTSsbDXs59jM3DVhqlx0vgWVKysVQcm1o+ujatjaqKSuuJYdEsxva2mOq5uNZbuBXlUTK47cqeWpqSBnpbR1STk2R7mhuW8La6aUi035uqrWNPlyo1k1hsnVm7YtLIKlscmqOROUd+8Y4VnbmwdHj1s3zdWX11rUkyNe3J74PIVE6M0ladyXGvbtHH1oibXFnJH7+RmSmjTIrquNTki+O/qCfz9dgALYoD5A8FQcUlpWXlFZVXcvy6qV8dq4rV19YmGOenWtvaOzrnz5nd1L1iY6enty/YPDA4NjyxaHAQJ5CCFPBSgARqhCZqhBVqhDYqQQYmuXLdt46qINT7Kb75hdTicDIuaaDgMEYiCDtUQgxqIQy3UsZOiaBs75qR/C4DDGz+NAwAAAA==) format('woff2'), - url(data:application/font-woff;charset=utf-8;base64,d09GRgABAAAAAHo4AA8AAAAA4KwAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAABGRlRNAAABWAAAABwAAAAcgxtSpEdERUYAAAF0AAAAHAAAAB4AJwBNT1MvMgAAAZAAAABJAAAAYHJYlnpjbWFwAAAB3AAAAKMAAAF6K26sXGN2dCAAAAKAAAAABAAAAAQARAURZ2FzcAAAAoQAAAAIAAAACAAAABBnbHlmAAACjAAAc9cAANc4BKegHmhlYWQAAHZkAAAAMgAAADYR5QgpaGhlYQAAdpgAAAAdAAAAJAuBBZ1obXR4AAB2uAAAAKwAAAEcKkRAzmxvY2EAAHdkAAAAewAAAJDMYwHYbWF4cAAAd+AAAAAfAAAAIACfAnZuYW1lAAB4AAAAAXAAAALsHaNuI3Bvc3QAAHlwAAAAvQAAATbMg4Xgd2ViZgAAejAAAAAGAAAABto4WnAAAAABAAAAANXulPUAAAAA1pYy9wAAAADWloq3eNpjYGRgYOABYjEgZmJgBEI3IGYB8xgABqAAdXjaY2Bh8WWcwMDKwMJqzHKWgYFhFoRmOsuQxpQG5AOl4ICRAQmEeof7MRxgUFD9w5b2D6iS9RfDMpgaxi9Me4CUAgMjAGYvDc8AAAB42mNgYGBmgGAZBkYGECgB8hjBfBaGCCAtxCAAFGFiUGCIYqhiWKDApaCvEK/65/9/oJwCgyNDIlCMASb2//H/w//3/p/xwPKB6P1nt7ygZqIBRjYGuAQjE5BgQlcAcRJewMLKxs7BycXNw8vHLyAoJCwiKiYuISklLSMLkZeTV1BUUlZRVVPX0NTS1tHV0zcwNDI2MTUzZ6AusCBLFwCa8x6LAABEBREAAQAB//8AD3jaVL1prGXrdh1UX/+tvtlr7ebsfbp96uxd7am6p7333VtVr/N18+xnJ35+RGmMsRzHcQgoOA3BpEGRpUQ0EokUIoTAilAQDkGyFQkhAhFCCVJEEixBkBNEkFB+EBEU8QOJCN5ljPmtfd7Lrbp1ztln79V+c8wx5xxzrif6ydefPNE/437iiXkSnlz9qnry5rNfC/bJP7r+Ve/+589+zWh8++RXDV92fPnXglf/72e/pvj6TXfeXZ5351/XZ995qv7cd37O/cQ/+Ytft3/zyZMnRo1f/FXz4+6vqF9U/4n6O0+eqHnw81Ar+auDT38Wg59+GodTNR/86Oen6joMfOFELeb4Db8sThy+2+/2Vxp/73b3d/e7h/s9/r/d3eFffoOfb/nileJv7/Bnz33UClsdFtz9iG/8sRqwS24ZO65N8DtuBn/eq/3t/cNn6o3avVcP2AZev7t9uN3v7m75ljv58b26u5c9v1c7/ubhFoewl4PBBrb7Wx6kkn/eq3eK73vAod1xg7KT6Z+79+qau72WV+6xG3zyM3wQp7nlFrnx++vFzfXdlh+Yj8PN9YJXcPdGbUePazLgsqhFujzqZi5feKYLXMK5u5erhYPGFeKFuMC1HWqNT57g7G+u72/mOMUrg41f3t/xkHHib/Q9vgy+UdgB/sqNkMt3c403mh8zRimtjLan3jini8zmtvPWKeWdLoscr2mljTbGl8ppVyvdjtocK28sPmfK67PGNpXzThnHLRU219pabFIp55yxUSv8zirecK+qRrtgHN8RjLKmKqLXNtja9meucNlswEa1w0eVitpqi9/iP5NZ7bEh6/iaUkFHj8NyGX7AAVp71Bypl3nX4Qel8SuV29Ka3BVqoVShG7xLB12Ouow3OlgT8CalrI54O7ZsrOGu1LPV9/lM+1w7i7NwJvD61Nod2SaY1kYfrapL7LQJ9jt/ora5x5FGbMGXVVtrGx0uU9/7WW0Kk+PDhtdEqXedznhZNK4nLg12i0vKy65zb7UpmtxZ+6QsVBvsad1+5k07upW1gyv92m8W1bu3xduj+bHHpYi5ttjs+vnFZxtnYRE4yKa19bEKNjj81PZ1b+rcWo+zm2lbhrzNBlu27mQWzkO2s0U0tYlj7Zcme6GHJlPV2fDluH1xqmOMtXoRFld+Y47rqGPm/SwP1RG3jKuPPcyOZittW4WLiNuKY8E9wn0uVLMpscCO6tJXuPgGt9Z4ZVcWe89MdO1z9Y0wxMK1M9PZplC+afMb9b5obmwZM5XhrHBuuc1wTTKnF2pjTYM/OsdVs1h6R7hUefDW9AY3AguWVxD3pML60q/XJ1p/K9riyPGq42LFoJ3hBS94n3Ff7aiXPix1qU9r8yu1Gzemxw5yX6jofe5qG3HTtZ+pZxfxFB/2GZajN6b6fGiwHVfXWNG
4BVijuYkhC3uzLjehLMo8qHBevQNUPlHqG1/8T+bPuZ9Xv4EftjDxxQAk2Y4w8cU1DF9g6gLASbs8QOcohinfwN5PVA3bDx6WT5i4uZ7fzIFHhMH7G0HSByDJ7lYsnQj5cLu4BnYBiPAaAOqdkq+32GXa6DAnyACi+WF8maedETcVvxcYIZp77koDHW4AwMC66zvC113a8P07fXslCJdQdnebsO7hfgS4ELNk2+IECDeLOQF2zwO93d8R3O/SQb5XhCiAmezjVjD2DrslLONE74iq+Ag+wb/Y5e2JlhPAeUy7wfUZPiieKU4Svx2SB8Imrh9uAM83hOIJwN/gOg43PNQdruc1z4cX8547lgMjQO/ly5W+uFIXdDd0EPQiuK47gLgcpf69u2WOpQ+UAXp4opHvVO/dcejyPFNLGH7msY6N87iWnoAUfY5lqWFMAC6sWAAS/sWSdgpYplr+owpLlCJE4GX5hfdERwXowJKs8ElBEGyGmFVGPYd1zIuhaFSYY5nazJYKMBOJDtbDNINJ6KaN17B/Y1xYcCt49cSZWXZ2ii1rAD2sx2msdQ8Y9wBwIK8yRN4c7w34AijBd7AOfrx2eW5dFQGvdBSO8GcAOA7YgY0DErXJcPhKy9lr/vE8Hs/DwZkqswmm82Zmy1LZjy28jIFFAUdwGiqGnfv5ZfUWJ6Fmed7EOFZKV7pSpe5MqPsuvnXz1csqX56cxOMcCNAAeOGcXO4WT7dNLHhx8Wlcvd4qwEmlWlfB6Da4xnUIcsi8kLXCEecz27giOhyyacJyb7Mu25/OF4vXxZdsP5tt9tH0MbwtXgUXa226aNvKLmI4y5s5jrbGavAxK+G87Mro9TqGUdkr1x+HJuJy6tLOAIldr02Pj2Mvvsx/PNtsTr663pZO10tXPSsXY+t1UPBzIarRl7rAgdPZWeN9yHTehys198dl6cxKtzt8LGt+RxzdcVFmXh2ZprQ/4+x7A1fIm4e14GwXrMpXz16cVcWm64PTMbQ8T1xmU2GNajoPeGGHq6RyM/evjf9y+BJvUo1b62Oh88wR6JUNurWncbysuriyema8s6MzGdbUGk5DxVLRJYfzvuwFA3/HF3/efO5+Wf0ifrjdJcZ1TwuFEZH6EZMCeMSCBj3Cogkg+93FlqZNeiNYBKoyJFIksDUkfBEArDXsH+8ZyQflzcQ0wbADCglfNAIL+AR+Jzzo8PM8/XsvHFMYoPyT0IloQE7I74XrvVcfgJNeaK/HwY7puHBEI48C4IKfZBuPf67c3S7RQVAxoba4DAKtaWdkkFcglbcCr2SN91f6jbq9Jp3jkcj+06ZUIsvAIVyai90tibPsS6gnNi4AS8QCmumfsuRAtGL44M65Jjy9tCRxeD2z1nFRAcCsd7FvVkpHOGqVw9ZhoAF+DhhCUHLJfMlVYICWTAbfRcACOSK+BWwUoGcqUyRhWFawrJCZE37A3I7zcw1md24rbBJ7B+5hJQEYKiJUARJgbeuADNhHjHC8LgpA4Ni+r33RfVyV+NjYAzUHw3dpIT5W/vInHBMoIM6GnIqsEYgayRdbvo2OP/fhhVYr7FoHWO/a93q0qi2UOzUleB7eFkg6SD0UbUBzU+7ndIULhe0aksTKWWFq5dMWVAiQ2WgCS1YWMC/VAXmBJrk/rto32TMbKxvmMRxZkFOvswieA/MFOtmRjMzvgTsLb4506GyXt7bwunLNxXh7VNnZiB3lZ+X52ek6bG2ZW1CY3AXQoU2sDTATjM0ORRjDZTgxfoFtESMqvZrXX7XwCDYa63M7DufwSGWJT/Y+r5vmo3f1wmQlyUsGUlG2+VKPAVf1JIDFgM/k3Ar8WUlGbOhocDML2y3deIY7i58ysOnrqvyt521+utJz9QxvxkJp78vlsqejK4HgQIRSZX01m1eAr9GoqnRd6w34uOa1JMCTWxd6iDM7+g3sF47GC2968utf/OfmN/kn6t/FDzCGhTh6+nUEQnNEONf314yFDjGSmOIo7Gme6Iu/SGETTJ2AEIgLggkS9yw2mrxkSG+lUZHGiNG/EzoiRgRCIETlzYGbjdwLg0s9TrGtYEyiTtwyY0Na9V0y9tuEJAxiBT2EhjDEvSXruCKhgNlP1ALMbf8g/Ao/7ME89oJ+Wxzexe6FoslzLxcCBFOoSh72QSU8wCvTJTnFJbsmSSNxeqOE0wiI6EOcig/tr3l03Bq418V2v9sipsW1eKMfhJsBgHmA97IFHOX+4Vo+rZ/MhpPVrFy1WAxkxQj2Clt5+Fx7Fo6ty4kU4OhYE7jLwAZbFhuszYK3XYNa2BaLKpI14fYTHrCJatFvc7glXcKbeBWTWYNnZKpHkIY1F+FhyI4YT9Jx1zlCQYBQl/lanz+vadmAuh6kggxFgU94ghaDMjg60H+VAtNcS0jaBi/hHbZtIhlA3vjIMA3hmRcMIs60M4aHJFE82LL1p+c25qV70c9caZ+2TYFQkCwCpwnXB4aXKYSava4Q9cV86EG9Qmm5uVLQAeAhZy1IR/dMwoV9RPe3ZyXIoytftC/PEXrPeAFmGY5Gt7rLi/tg1+3HatPcLytT2Sqz4EOnXh/b1VksqrmZfevt27BQ32x3mV8/+/FhBEYSCnXIs8o+O2tjGd80s5Pq6OHLxZHu7UUMTQPwgmeoyhf6pPp0jNWzt31x3GbmyLarvDgZjF6u/Los/QBvgfBosGbwy8XRmc+LUJp8zcBsGRGf2s5mleuq/rQAw8jOsvwownLqmT8+yfeZfl0UJRCzqbplMLVd48RiHqzn3a4Z1zleFJX+8TatLCwxRPi8d1kbliuT1VpF4PoQjyJvFL1V0IyjEYp1g13nYZzpsYVH6RhjYzvgrAjGgdo2lsJ5NTHKIjBFSMqrj8157syTd8aEP3/3i//B9O7Pql/GDymJdH8zh2NnDmvKXpFXNGq8SdB0oq7lK3Mr4wJUZJ6YCP8ZhYXME3iRMgj1IIMBIIGyHHIyQlvmsnFx8IjmiAspByWYxO+FQO23CQWY2wEkYUfHSnJszIqFwd+c6IkEAfa8REmJDNUIEu0b9ZkiDlz4tO37tIO9pNyIFkxvSayGP/iyl9Byf5sgi5/k8SU0k0gNRwmYkLNt1H5CyylZxi0zlrtNn0Bspz4VVHyYh4sAggMsOsASgYmItt1LqLefknT3tym2vdgiULvh/iQ3qP7PKtox9BrusS7GejAB5hiCLYIzIQcBMA4YFM3ZxQo2X9o5bv9GwWZhdZoMBXc8B/w4Lm1CkaRkZK0gRNHHeZ2vyzFrVTl6YEak/4VN8X9mB+BxEddhOfpZy2iH+SgtyTCCiMCRI8Mq8VtBwtVxLD3pVKbJvZwvTeaMNSGln8BAQspq6WQIiAOtqoRcYXOAqkxQzZqYwM0GgiVMxDbGLIOt7IsSIULW0nKwGUnFRfkunL81tsDlsb7NEXCZuuyOuosY4efhkXGZrESHxFjPSI9nGdxP2i5zYC3Yua0H2+Tzy6wEVOLsc9B87ciXqmW9d/P4dHOZn6h+3Q4/7D8qbIlt4Eh1Hd3qSHehXvzMGAdQjcyG4y
zrYr5aLWI2MG2GcOujMtsCCuKoGlyVzjUtAj5r8iwUw8z0s3XeahdaUxYxbmbWfvqbfYmb5Nevi1fOtOAtlaHnyDvX3jb38Be48942bcAvF1eLwgBUChNC9J298LzNCAsrYgfYIb6Up9nKfVwPvoAfCBFnrEzh7aYqztzqWTZvX1uPaDUzTVHD/dtj28bWxPNnS4eP2wpbcMAP8KispZcZ2zK4kKte4SaDYWe4fbggvJPwTj+qfV3OPCh8v173EciYqTw3PX+PN2Od6UyCZu86H7i0wJvLwHjqd3/xd8zPuJ9Xv1Vy7xLuMD1M/jGXOIhmyCSSpOCZ0RmmHI/EKWQujIEIRn6/vTKIM5KB3x14gnpIqSNJ9/BXQnYAdn5ggJUyOgQsgo58I7gnv5kiJ5ANCXkm1nMr2xGwIKwQwBKwXZErfeAuJJnOXSfmQcDb305wI6miKUN/Ax4yB7+5T0ERozNBjofrD5rM8IO6/1TN76dsOYJKVgbmY5gfq7C7EEoF4kYQF8R+uCdhSkTQ/KSfwzDhl+mPcNUjzYLhDO8H7BAw4+GOxRphKpmqonre+czRqEsabTTApJTqsMSECCIiWQYx6ryEzylhNdgmVg12kjFL/axYXDYvz8va5Oap0VVOu6VzmtEl6ZTDAEcJMHsuDtMyl+xVf86NNjX2kvu1WxmBkWh6xlTg/yVsvcutGV2PY4T5KFIpsHl8LJLkRBAEpk0yX9qeGXX+5VdfCjvRpsBfnCRWNuAqNzBq83+cDfEyNmflc51ff7MN+ywE4NY8HOnhlR1nIXsVByz6zOVvN+9zRC2ES8Bw9DX42liPmSNbDD5bOmw/2qbKC70p8vcxf+ZWPmA3w61berfPgQdSlnB+PSyJQKGpXQOLuik+NzXAQZu2PEVY1PnZGoeR66WBhWrGu3GM/tpn2ZFBEKZAxHQOIpbh1Jt2aNc+Lor6bfjhnmkj/fpFMB94t3nvGpjaos5H/8wNMGLfZvlCFXlk6GIRS4Hk2tJc1DNnF7PaysVSUnCIumzoPzJTxDbqrIWD8IiMW107OCUXWQXBXXC5H47PJT/yZ774u+aX3E+rn5xiHS7NRl3TtF9KxWYUw9Ujk79csIvrh+vFNb9LdpLcIem55GgfbiTpcM9caAqPEhP47h8puPGDu4udT7legYjFAK4ypoDJH3LDTL0ATE5TLmV/Ze4ed3kw61SjExfPzOmVxuv+o0QD8Plh2tg4lyw3OE4CCkYu5BuSNpaksuR+hSi807e7hEqHuhs/cai0Sa55ogwTwvBX79Qe+8IxCTu5u5aMMfFBfx2wvIlrW8DVlxFmRv8tRoxAtYcDjozQWWViNsPj9o6tphGy+ITFb1khgpXndLylwT3VfpaNwGhvj9u+gY3B7rGxgpSbmJFz684BGjTWB+lFoLMdVic2COUQJoGVHZREJ/gcjI2pGIJFGchsjQKRYcRj5EsMz4qqYKAPdqzpymFXRCVgD1iEMlws5BlyBFiPHoaOn9pytIO1Kx5VyLxhKQO0GRtCQOJ+GP7f61lct9sIC3K1tcdMouiyVjkCLWY85Jjq3pZ1McdRel484KQCkIFWgVBE0zDsKrQ9HUEoFmftPdDneJxnLNIY+vjy+ctl9rPMLDSf4lK21xtbLrNP1q9clbdWn/a/9X5rWZ2pynicl50pwHbgD1vVgriYCPf+rPqmdZXJ8wJkICz1t+t41AOqtDlz93fdondHAL6i7MLnzeur3qkT1YUq6DDMNsvKx338ieI+qm5tz63pl3H47PjE56U6mhk44MGxohb31Z3ceJCCJU4Pjnz5DWeL1jZFGcjyKgSPHXDkeEFEx2X3HtdB0TE0MWtK1sh/P2x6dL9P/Zr6B+ofwa6lsiLhQSL/4rZ3t6k2s99JgnMuRXM/UfVUnBnEAJMp4rf7lB8gYd/Bg7JgcggPUuEkiHtOKVD5WHJswzyZdwouHgs/UiKnoxfnL+xcDEdq5JJXZIVmJ8USMvNDiADfLrHIlCfZTQUXVnlYfQGoIM654LthvruH2/spH3L3XuOML7i3tKF7Bhq3BzNmAYcwdnc7ZTnN3e3FDoGFJDdBU97wcu2k5H57sPub6/T9dUoN398QKBfzBKNjSu7ypOH+eVn228BLgXfdLOY388O7JO6qtbk/BB44rDe6NosTky4m3iWnuBgHuZMPb4HDd3tQi8U9UwUsgpPaEleOwNSZBqMRMpsOK3VSBWG6XevK2lD5slWvmzkMK5P8okW0UggqYC219PRGqjuwa2yxMK7Skizx2B5xyiHOkTJ7gG3V2cusimC8KelRMhtBVGHhR/xZiktgs5n1LKxECX3J6osmIIpmWl8AB16TWBiYysBvI/h0TAlXj60BHuvO3XNbPBLL6he37BgQKSech55Xw1YNMaSE3TOpxx9YotesrUdGTogxxBEG5nAMybjEK7QoXZ5U+drOVNUwP1I8Y3DimG8mo2J5SDJJlAGATjmmj6Ru5mCu4A5/QePwzSZ+u/w5NdvEZs06Ey88t+89D8++iouQX3lQ8MbheAzikzruP/Ivq1CESp/vV3Of55f9KpYgG6zbPH1/su6D9VleX7y8jG9NVuVa522Iy2x43q5cwXNGvOMRXIXh22uLcNH36UYAAk9eAICyuDoeEEoGphwMwyNcQOd9ONPtoq63c/KHuclZh7Fd2XzS/qiZH4ftsfanVpLsgbltc9Yc7d1JNc+5EyU5Wd4ChfWzcvapUb3tX/6sck2lLjI/3Sv4r7zX+3U+xt7rJaIjyZPFYNRAjUcWgNhS0Na801aiY6xW3FOmrnJeeywVrifea9EzpLBRM4tVgfCAGmUdCWSpalzmzDu3wM3NdQimUrG1HwX9/QUcj/rOP565syt7vkcEZGtcVUWJBTYv5HPmvrRm9eAMfssvlVDv5n0Lh1dGK5zpxRe/Yf4N9x+qf4gfJCETjjUTLm8UA51R4C7xe+LmiWK96MTgtYdrQUV1qE/DY/It1wdaFLZTqhmREbZGzASgeW73jUK0tjuA027iUItrJoau7B2xD3ACgCcMnzLzPFGvxxLUvHYESeB8AhXEIok+TSWdhwO0S0bmSk/AvhMqhQORFPYQDnki0rJRHMR08LfiBe4fUhH7QYpXKaK6P6RrHv8Hzt5eabx4t01FrymjnLLddFVSV/fJeTTCrJLfkEqcvqGaCwdwsdviMgUvKL2fBFMPey8KBQR8N1LtEm6aSmfp37t7yQNJFgsgn3JNn9LFkMTu05v1u2b0sZaFCCwC7cLS88fV0eg8FnkKqVgYZRGeeBCkbqWWWC6+LWdNAJkITK/YwNib+hRmDwuqZwKifGWOwcsLx+9It8i1EEQQLWC5rpRsCLMiJi8ETzUDQcv6rAqhZWQXZoDtY5ARqVxHMUOAhGsyEcMg6Ccxil3B9CarNKAzPNaS6RUEgIMqnl4OEj0iRlGmbYyUoExKYxMTB2zKs/ZrmPTxqRTnsRVSPRx7YQSgaYgGUJOT/4V6kI3QnAnueaQVRZwN6BKCMTLIlNTW4OKRp+yTACrn4aqZz90LfRMQC
ZP8wUNoAWAmk0Ai3an782VWL7OLWM9ZdlMk1WuGVkaSam25smenTf/DNquDS7BfSPmQ28opF9jCccQSNwIhEu6Xj6EG+4brsXRhuHoNKHUewYFB/9amPXaAsTwWVuoIscrUxpiqDyFr7EzuHeAZYewqe/ai+0mfndSXoOlNi0CwdDls8U2Nm2DK4/msUp2btcsKQfFwZNY1GOPzquqFD5cA39jVxQtmzw1ili7E93ZcFlnGIPmkH+u1PlGD7rx74ffaPjdDLEFUcd2auN4/u65PwXpNEdSr2aUvcSH65/2safNZW3uG9YWny+u06gByelYNw3mG5QBohdM3oYRPKWfGZiByS5FAeUqbLmsW6Jmt1klKQj8YY2ZMW1VrEOZzeNfWLyzTRjq4vLRdv5p18Tx80tUgKitfg3rYHhcRAYlleg0hz5IOtMI9ck+eaPUvffFfmB9xf0z9svpLQFaYOWtdPnHQIWW852F7EYaJQgqp9Qe5ENAAaCGZaEk6PeaJEgqmNNRiIovDVJZnSJkI4l3CJqmhTSpQpnzku1TEur2aquoTg552/MiUydGmQ0v5LYoIDvuRAHJKFk25aEae9yl/TdHRdxHy9n5/qMwD2HhgH9SFVPSS0IrvuvBJq3Q4MH5z+0HNT9UtVVrXpyJbuk96qfeGqJawj6D+hjnupOMiTk/QD7h8uE8FTfEO8E7DzfVUP0gM14Ij31+EFBHcm5uaNNBXfcwypqQRzAmOaUlDszCVMe3DsNBMAGE9cUMskckiWK4DG4Gbn6k4W+5jRgMbGdl4BEeSg6Ra0XqR+YFAFQiZDPWcnbGFsNSRXLBwOXHaSPGdWIEISuigyyJX9/2sHW9XjFofupz8mCRXseyONcygVdRDAGBBtaTmxLHpvJD8GvFxBkah57kZbK0rzwSVZ2L+0g6d611R1sRJha8AkUBCjojOlQJ0PEGbsug4IoVD2nR340wys0HYNaAzI6OBozl2FmFj851fx8aXIJ8kbParal4eVd1yo87Ws+gR9xo/nC1rkwRRBQi5x4F2vsqCWRm7wrXC2c0tVmuBq9HB7jOdZ51pSnMLRrdsLILZwsyK3G9MGzfZql6WOhZY1HFj+uNc+3IcbFKPgkmXbhkWIPW+aqvWd6aaO9zvZ93QOJ85f2lLW9Xure5w+K6qdLG56SP1EHA1iujhddVVw5hnuu0Qkbs2L7NltixLRLB0AMDdzMRV+7rpO5Z1g8FvYj43nRuP6mObV7gRsQ956E6LMu4obMBZzo7aQre4E2bmciuyNHBdIx5vtPBU9Rjd3i3B+XJfxMBkCg8Gl/w0tus2ANNx50abU7HhVQ2i7pq8Hmc/AJzLjuqKNZmI+0QnVnEZx/Y7v+7KzBbForXVJbHr3/vir5ifBXb9AfVLT55czqd6vEjK97vt7uFA3KSKtUuJJkkx6QtBskOWill0L5xRhJhJFTnJC2p94e9u3zuJZm93h/CbRXy8cKXvUqXqjZKgmqmyK/1BCRV6EGm7mDor8Qha7Z3EsEmASITxd/uLu4e7R/nRQtLYxLMpUhX6BVzYexEZkfcBHoaL6ySRGIdDiu9RJsVP3CQgRhTBvD9PK8lMJ6mkpCWo0R9rPf8uaAPDUsg7Js3UxGUF8YewxYdNmLAUGLnd1cr8ZMTi3cxHgg2CHtCIxhVBdIpOhag6daQ1lX5cIBTVWglDKb+W8MZn/bE1xVHzBq8gmlb1ORaJcyGQ2zFolUVK/Z/LVEZROQGMggIGdlK8MqIS2Efqc5SPvtW+NqxgRSas4DGxRN3xkSttLYfhaitBB0ibp848FrRnZXIHepfeo2xvRY85U7ZJMiHWgbxIIYV77dsbyoQQYoluwTlhcFmWzbABEjLKHGwutNJTGJ+cuNa/zc9JKGG8ODESsKFYuELrrC6fGkSPxp635iv58ryfxSMmFXJQJn1jvxzDZo5o67NN3vnCYGFaMj/bN3l81hxf1NHg5SNdBmCB81VlVoB90ZP6c48Qu7DxtH++WbQ7WxKSCxj67DgrrstvHdsAHu2iHRHOBlt/3PezIStbswDB8q61phnjh44RrIN30brGHuqyDT11ii1T4Vm+I7YtSlyHwHWgQxNPbd7N+irrpBbPtIZiOpzQRO04vAapLQix2paAAj3gHgAkfIj9jMuhi66w5WftsxBK0qAK5wgHkIEesrDn8jgeNa+Xq7xcME5lxTUGh3h08WL28Rx3rvZlBWdg8tr87h9CaM9sf5mVOFyJKd998ff0P3H/gfqD1CneI1A5VYG29HYM14vr4WKQUj0oUQIFSRrdiAD7/pBon+jQiZ7EQpPFEjcG4UbphVPl04amGAuUKVnaJEiai1xpYMRa60ZdHLKBF9tUpbsy3xPKUaUkGurHvyRL7yZ2pA8p/Hkq1E3i7pQjZMcMAkufOmtEcX1QAwB/bveH4DRRovdqd5AiTOKj/cU29djciprpbhumwv19Sh0KiyPu4WLiMl2nRJ+eZJbEX/VfbbvuuF7pOcmIkAmXZH2xg8XA+lcIyIYm9LrVKvOVZXKCqlbP8jpICKyLGW1YSiCLCHCICVM80y6OhCgydnFBdH1eYjRCBP2SbbQv8vNTyl69pLVMMWPzBBNZVmRqtH9iDgJQiZcYf+mexwD0kZIfuIbsMFdxXJndZtYZd7QSgYFxrc+kkkfGZYWEGOmXQYSALTcUh2uR6imAFNNpVvV5lZdACssoEyfADorcZF2V1esCpyP0Dg7eRDPAer55tINVAr7yRSsRrhJZuSkWFmEJs26Sp8uw6zwHRWtHEI1jb2PDq4GtYPdaagG5ZfOO8Z06fVkddcugu23D4KPzsfVjnPWILr52Wjb908pnQL9xnp2GfW4ajziYzLLKdWzKcG6WIV9VS71kjGJMXaza+uIu+1IG8GIjBmjiJvpFHqpmvnBr15ZsC8GtNxHBWBwRPFnXdGU09Sa/furHzum1b9WyzaqqOMX1yN3saN7VwRQgqm44jXYWinxpqlD5ilV+p6IrfVczQT/u3gxvQouVUMzOroJqQhztRs/mICEErAqrIyt38ypbtCdFl2tqFKhULQIBLoPxNnFDjPhXv/hvzF90/4o6xQ8XjEL8lqxgkgON8n9KQIf0I+zthm6YeiC4X9IKGuOxktDlEJosUp+FRCFT5PFBTZ42peuZrVF30iQnxT1pjZinhMpktmA679R1kgnjhYfvJQ1hFFxh68jNNTMsMEl2fn23dw2oYhM7wg4u/HfBZJIW8/+Ha3aXYCv8+eaDAkakJj0RV3r9eYP7kenavJ77ggsEvrJ2iGBtASYcSe+kmwo2HEjuuehSE4MzOZePNu28OHrTH1XgjqxlsdS1KAosQUp8Gn3Vn5+wwgYXRuUabrWDxWs3xNYyA8/cjUI4A4/govdB+8EO0kzBejs1N0wlF7rkzp2tY2NEGajZdcHQAM7d9XCANh+Nx8rKFKAJkfIqKxwphJyBidRClyQKmfvq8922tX9PgyUrVVQ2K7ImdDr+5otPXn0yK85XbuhnKi9iLDNQWIpajK7VEkQoxM2K5UgcTKG6
aPvi+xe/q+byuGl9cTTk2XO4cV8X9eJu/DA89/Pydv75Sdc3+lWzOaHz0mzI0AF2d+R/cL74dv2c2Smju9JdvZoVn+jwzB4fdWV7ed32i+cMhcokL8pmLQVSUmQwpCMUKWa61zjSJHFg3Q7MBey8VK4iW3ODcnVJ6C3pXxmfUZugV1TPUVJu90ffZ9TXT7qn1NFp9We++Nv20v3b6pfUf8xM7QJeMFzTBqYvL9V2HMaNqhVcCQN0StL2UqoSN0kCC+eYEpGjNPwEKjrkl4vvbWQ6VLXvGLpPPktytXe3SVI7NZXKCmeOEz5QjFJYffKRbCgQlzxxXknobuibp3L5/WOO9k6idXpJMnkzuco7UclhdwNT0VcHdv+Yjd3ycKQb6z5x/wfZ1qGxQLO2P9X/afo4wXC7346NuhtE5TdQX0fScar2A3vDEueQr7xaFxIrXE9G/ZgdoVz6XgWqfnd329RwJkWxURR3lAk/3Mzxj20HBF/SCumogGNEXJH7uo3P29L5n0bozXsNYwBjLo2LLf0a1kouBW3q50zQi3xn3NnT0/6eRoH17UsfK6lVSxICXo52hP3MsFrZgBkYuFsWgjJqzfi+DGQ9MKMRy4fq0kegQR4LKc8LG5eSTi5dTvC9LJhHrlks48BWUvxpcSyekQVoL8JVvc4NuGrRLtQ6VqbCgdmTq03nWl0Fpk+5XYqA2a/lck+lMtDLwcL8gkIeWKmEIla9Xo2uqILtQ5apnrKcIxDj6jvfKL1NvAHHBTh7Oh8GX85ixPbgVv6ET7gjIpA6MN1sJbb3Bib2e2aDeWkKU1Ko0hyt+5f7by2Nyd5mq+VmaStcAhMj1TSigw7uyC+btdbrtsd1iWxHsGVfLPX5eb1VvgfJoJAxA09v7UDVgo77IW8i7lYwc+tHh0V22q/LijCcS6Ka1xPIlhe4UPCXCMvbFy6uLIKoronuJCLE+KwMA4KZUNl5YwqfWXebr31TmnMgo58rrCGsg6JsPjcvwkfdRVsUXB+gbfD1xy/Uy5hfZbo09SsXL+NM7pJeLtxTxFsLjaNuwLXsUMzB+rq8JYfoPAAS8ZzakKYwxrL+qlgN9sKHI90WJBpDrMHocFS2V5QjsB/lZwFnbcA10zMdjrq5dWPLkAPRxkI6sJLe93/94q+YP+3+uPrv8UMqJU+cm+IxycJNUfWQsObwZ3zsZ+efm0PY7feEEv76QspIUnzes11btLr71Lgd/MMtxa1sP7jdT5LfVH7hnuaTcPfQsJ2OSjBrkheLMIhlIQCL3qdEx93uIoERcWbqXn9IqQ4hD9skENQTs6hFkjxOlTAi3zyJdvapMP+ZYpFIuuWnRiOgnehq7ve7QwO9NDXdscSzP0wBEOGxbOb2jeA3kB9nTa0zX97t7x4S8u4udklsSExN4rz99wgHp1L/u+/RCEsbw/3U+yAFpH1qWOAp3ydKgxA55K42bWYEagzrRFpS5oj+Wh9XBejEqOebBiyx9GqO6LxWeqGrplhKeiKXIIKSHwkM2DwuFfMgVXkl2qBqDK2E8qkqykJTzZ5JkfFEZj0kgsDCZUV9tUKUrUn3GawwAcZeRTAOvjFmtW0amoIkb8E7yswmBRKjkTw5adMAt5q3vW1n0pPFtyZxgZGMKWlO5r0pwAvguBHZ40BKqtTY0aTTUTFNnIoIHiAbpJlKDgQvsc6DzZQN4Bb4NdN6rmLMG+deR5au3Mz1CKDtcTuvnQfaWsqo8U3Qtu/cL2p2WxTzsHJPKz2yyrVmsdvyJPFWoF8X2/bq1r0qfMDpZctq80P1V6MqpOXbe4b2ZWOPSwROzQj6hoMqLiute26lXYU44JjgQXIEL6dh7nDGdrWvXzY1whBYRMhaa066YVbbszjGxfykKn6LDdfu6tJvXrg5zgHYNSzds67NgPkuD50ZyuwMpx5cvXDBqptYX0Rg4h5+4qE5OyZqqKwyr2P1amsRoTRmmPWqzeqP2nWsHdNPfexCLNpuLPSiXJ9xdIGliMPHLNrn1y5ch7YVomWfOTVma1C4+WpYMKFTj/r85MXTm9K8bsrbl8zOzhaqanMQQw2XAgpdq67zLKR7xob9MS5TI64GER7eUdsZ1oPHMfLFjunibvt6v+59712/jvqF818aVgaUeKHBmTXWu45hGWNT+xZ4iwVvisLaUBeIc37gi79t/qj7GfVfC2sDylB0e5N6qyY1EqvRU3WcusFFuB7DDeW1O4pi8K7rQ/Jy0uEk7CSOGSqT1UuVykTD/FTdz0/1nDJdYiejqv1dDazaSfdS+vNODV5Uw6kuzk4CkQhe7LaSPr3YHXSLCYMF4mo1VZ/ksAc/FYMOcdc0IeQgLxRxccrNCtRc7CYOKTM/7qbu0NS8JeNKBLiSbHH/MNJPPCongfzv9AXIFvZ9vWDHP8guLtT2UcOVGkrfqaRn3G3FDSQ+Nima0zgRPRXO2fMhkslUPaKruNg/3NzupW/0g7q92D3cTerre/0aKIDFwqJNIRkPttGkLmy2IlPlTAkwK7M6ygwFq2/U0i9YaNcI4gkphWgvFAIKkc9o3QyGKTlOzXDSGWqqLG9Su0IhmYlUeTJpcAfxiEQsVY9zCgdB8hCbO9MDAWGIvsVXVp966TJwda3dBndp01DFDHRafHj1tHRnR2B4XvTU4H2inwEY0YHjUCr2FJoU1hDbAhCh96LNZLKTXUB5ZFXUSUGMEQuiURJKZl6w0YabjSRiLkx5WG8L7szz9azSMzukCSQwFDYGENgnIbiSMhSTo511P8WMNtVgpeiUSmqftapYiVLUPwWqbmDUTNOQwC4k5dmp1i/a5drM89cn6njB9MaRQCvbJNfRZDn4o2YHCrNWMwBG4TJRIyP0oyZKcapKYG+9YnQs9JDNu73vtJ2FnMk10zPJxKa3Ps7szLetnc/zV6/GO19i3ZaLobj67NS0lT3JsrxcnOcxXyM6zcqVX6dxKm1dHtc5gJz3jGvG69UGvM0tcM1drjKiS30c2nL1JTesfDZWiMDVs03BbIrfAObpgooX+Vi12byuq0hNNiJ4nAPWU7Bl630Dnq2LxpS+0i+1esb7zA4tHCMpvo/rszN37Mpo69bMM52ZUhfsqKVoqlzBXzhHRqlzaUGBX8b2camBlMzhfPTFf2d+zv1b6leot2YO5PZ+XMzvFkxrLIbrhzRmAyZ7ww6Cu4dBgCl1aR3rYR8Wo2RN2ao+TNUWbmCaabG4uX/gkI7F9jM1RX43bFadCsX3k9goNWxMc5P8Y11bXmWtfSqPH8glyV54bFnX/nuE3LdXJu1md7s7zOI4FLjvUhQ8tbAehjRNpPImJawGfxjVQXwiT709pGmF5n1vH5eMwLibEsIIdbe7lPmVLjHZ/RslvR+PGJawiyRwLx3uOyq7774nrbwXGejNB3UtJXimsNjdAUBnq9ctaBAzmDrNnDAuKQ1FKAkGJZoUSQTjP6dBHnTjzjbPs+1KOhlgmcyiRqrfjAPecK1kVMuJII6dn2HjOh/3AevTNIj7pB4FtOhoNV6nxkREvKYCR+oRxGYcbLSosqxpXo2CCaY
gsjB0ghN1LkmJkv1JaatIE36k2JVSLkBQF6j+sDEPpWrLi6ZxvmZ/kDRtqspIZVWnD1H5ByZbSGtYxv78CmdhX3phf6VZtf25/4iRK9VJTvJsotFjgtxZaUARQanwQWENuFx/wk5dtpLTwk45KQj7jSxfiQTQyXAkxUw5rI6x9sKusNMGhOQi+0H7CQVYir1PSsYzUTpjOJspAGNE/ydSFUTQdpbDXp/HpyE7FlF9rMrBgjQGnR2542zZRepPEaTWWfjGanHnh2/ub/rm+Q9dkLCd+GLTrhdlNcsvjUz1cSMYT5nPy/3586Xe5I3wb6C1r56NT4u7bK6H9VEo8+Iqiy3nG5gKnLFbHi/O3G0Y6xyBLHBr7Cs4yHnRjixVwrXkdm7sEKptsy82IQyAfNParJoN7fOrAU4HDqvVl6sceOr39bvTly2IWVUo9Tb+M7MxNy12hCBhEVsVbcteUyODqFgFZG6SjFmUZN4md+ymUSxYHNSO/0df/DXzy+5PPvlrT/6hKoBRJylgAymqpz6sEx0kkJpLbwUVjBTqgCCINJuia1KJVJ2myTHaTLPSmOBKRSzJuKVWSlj1laad8nsaurSUi9XbhykxfYi6rm9TkBqY3nqvT5SgyjGYyEFYDnpHaHKLK+kM5RFMyDOVk9OEoRN9+djZ+oaglEBukNkZ3Kf5M1w3Mx295GURojRlVrM2lHeiMjEBZEFCLMT1far+EhAA+7X3RQrgDKM63IDT+aKpxgrGyloP9SRZDddjOxm6sNSdo5H6tQkvbHbml70bpChDI2TbE8jys8+fM6fg6MitqLLAPI43G2v33Xf+HXYnwj4iewB1KtvU1kpGOSvUWwv37EKhaa1gH9LUxHE6dKGIkv5T83G3WCIYUUImKrz31DUv2stv/sziNz2s4y/kW5ddlnaZI7xR7bpb5c4XAKQsmNs3xacP/3z3Q9kL3fzu8+6D/+Zy9eNVvqgR1ncvXbfdrXYz3/mjV2Z7stuWzXG/q5bvvi9/t365YvfqUaca386qz+tXofjRcs2uN3ar0Gbsyj7/lv4txdnrrT+53nfNrij7u1lcEoAJkHGsmp/+6JeArDPbttly6T4aZSAQ29uiHSObQgLYQh6/v4GHbyn0YJOWrcpyNl+ZRZ5VuBsgT4s0M+tfw/r/pvtj6o/QP0sp8/76get0vx39NmzvLqT5eYvlDn69l6zxFsuZoUcA4d7tL8JW0ruHIudUmT1Vw8V2sqVDF+P3iBtkChaTrEnpIFqy78YTAwW4YkRDas8YRHPmp5k0mvOtmAe5TenhqaX6g3QW8kMpWc10i3Rx3VxPE2d21IowrrljS8fUS5WaMhKX39++P0ynuN+9lAFf93d75nInKdvtoUybhlZohhe7g7TljU790tuUz2HTiAQ29Ln3ODyZ/vdwnThCmpG1078gzQKJ3QaZTyWSUyv+IpUfyfLFqflG9IYGVB6eUdFm2GVXGx1KVwFAwac9ZWG6Jm8Uel2oUFADxMwBowmRk9nU9yy1VJJxNmIejXPg7CpJWAGclu3S+UinBz+XsYfA+Nz3xp+a8bI9Bk2ts4+0lGV9ACUWRQgtcuq+4FQrXYg/FlUY/ELQkm5mjojd0rIv+r1I7WxlOfeITdYytIoyJeVKqkPICyQLKcjNgxGxr/uWDJlgZSjN3VHJ4Ypuk7xARtRIEczpDJex0VVz7FvTEW6KmY0bwyYS39lsZtr9WXYxBlcZ1bQO7k1nu/DUSqAVHGACQUd1/kn+rLtx8aNZ897F+/J5bdscKNPhA7FAKFIb12RVswAoDvGjyBFcx76cjW3+sj3dLm5VgVtT90/Hk+cxvChyRApAF1/a0JRxgchHMqm21DN8C660M82yaeB+T118UxfB1fvheFszMowqGtyE+EYayPxocX/JcIzLAN96Pndj7Y7sYM1VFkovvfQu8XvNyCpy/IwtakvzZBiQslKM5PrxiX7yg1/8L+b/cf+6eq2+9OTJ5YlK6cRaTa1KKUV5FWQ+ZvAfDamMI9afCDUQoFGSMPXsvpoyibvDFDsxm4eko6ctbMPEm5mYxJftMTGEdvpOpRmcnKDy4bFMupDYQQbhUZiVKrdp9gpI8vuYDDn1NxPZbul0EwMGbEgCYkIbL6M/U9pZCwb5CbXSsCv/3SYufi/tmfJRM09ul8HE3PyDInNlddy9bDgeqVTVWXv01R7LPAhP8/NOIkb6tYG5ScRhq/uzuoYDzIIIu09OfVFLY0aAI1NP6wjLsF0cFmsLbG/Lxj/vTl8VJSm1L8pYyoCVwGSerxHFh8JVsSjzEYYgelJdb2b+qSSqQhkd1ehPZYZjaPRgitSuQxlHG9uhPD152qwM43xd+XaxGT+uw619v3qawcTKwhZRkmCp6JLpImtlCBlHL2T35afh+O+J8r071/f/Y7jM1uv8xXYEe94cZctzW6y7zcsgQnmwDVd8dgfrL9QVp630fyiue3fOju6ji9JTaAvuCcIKFjdnAmEGali7oqjnPvzxsyL7A+3vWo7L4vOrdx8Q6A7lKq58HmWODStfmfB5RRE2y11BsfblQoAPdrHHheKo0nrmGst+QsQpTT1s2s/LOakCrgsuqrO1W3L6KQ64Kv1p+dQUQznEi1l3DeYMm3VxPouX9nl3Frql+4XjePVCm1mWtS1vOYCyGvpPNktTVK5tfLWL3+9a/S+YNmt+CvehcubPsm7bfvG3zG9zf0idqZfTlIHF43C1C1JHLvr72xs4k7vbFLTu0sSA22msmUgZZOYtdQVbP2mO0jiSUYqPSew01Uw8a7usEdySXnJkmgwHeFQ1PRpiqgVPK/8wSngnQeteKq5iVQ+HUQNSb30vis9bCUUvHvtd6FcP2TZWOGScpcwVWJxYVlr9NDxK9jaNmxJn7xb371QKz823QsWkbsEwJ9c1ZTCiaWM7IC3K5krqB8HC90Qv4B+Z7JDRE3SCBUMC9t9lueKoENhXy1aRo7I/eR4yVUnQGVkeBYyqmTaFdSBglwW8rClLmOlAPxqwfRPmR8OJIRYrsOM85DC3VoVKS6GUuSEez3DjLgJnYdiZsc2lOqnn31wzXjZZVTZfrvI2pvQdnaQXNSD+u7gcOzPHKVWA6u980tkmB2D8xvm3+y9Z01jfKZx+aizWLNd2pOm+YKqrNzD1zGbKFmPI3rZv2RTCfkHwZLJxVvW+Hstf0Fez5kear/iTpn56bPb74eTEl/OQL2zxyepKf8hmLXsy9CI7Kta+e7bTR2ER/E1WZFr/9vnHsXftLPbFR+cfxT47qi9dOUekp1Vf5m/DPDOL4JbudfXiI5e/eKXbF7E7qYdvF/nlSpoL34YP+ekct9FeDrm7MPnqS+pFPa+KwDAQl/CoPWpmPvq6AvLhloNMZAVnNALuQr/o92FjIzCxOprHja8yzv9UooX4ddjU73SfqpfqLaM6LcNykrT/0JMmTQq3yU6SHjmJHThdkFaW5pvht+Zi/091jkmfBA0w5cX3nIWYxv0sxo0KqSuDMqCpZeKFkX6z9weJj3w0zeOYUusilXivLg7T0Chl2P
vvDga4WAwX47Uk7LejTwrmaVQHh13LqMPHqdg34lrvHguG0gjxAL8c2I43yPkyRUUt5PhoZgrMfT+Yz01b4ts8L3QWZD4zxxO6wpWgfyyMaWGdacEivgra1d5R6FdyEOmZYmJw5q20LhraITOzMqjHk43KqBv23npznOlI3ZCDwfiaLFCq7ZTXsSmKs6pp0SbJAqVDC3SqYkeFprjfirgwU8WClFFUwKoJPr+IOSzIreLHMWr4Ewa1dH/Gvur2O64dOSiZZOU0+8KWwR1n5eY7/xlg4v+ur7ZMlOjL/vZk+HoZ+iws4smCTXDpQDj45jQ7CZ/Xp1nz+kN5XzcZKLW/yMJxJSxY9wDYszqrXNnavD3Jqsv4pcF9pWw+Vad6FhEa1tuzqnMv7Y9kYXbZUZJl7XxZH/uZDECbr/1lQLxaVH72A33zKovHVbbMnn9l8W2Pq0VTiAiVTd4YOJ45wsf1N55mZ5vGbRhkwsfmzIotYosrAoL3XJ2ev6iPvr/pXf5yXb3OhlUx13lbuAFhQgMGWupyWQ1Lo16Ooy5xxzjxWana50E3iMKFxPvs/f/3v3vbp77QPwgb+2H3R9SfYuy4SLFXGCZ9zHYPI3i4Yfv3lLWdT4V2rMCLqTFpymIckrKpbhRSPrbWUzQnEaJ/7NOR8Tvi32RExXv9gaHUfZpkwQ6iadT8lEJlviPlfg89UgO7/9MY5mkEgIwXlR8OI3UO3UlCVm/TrNL7aeqnzHu/u1KpCv6A8HDqYno02VEkwJzzd7e7YBS9SULEU4XdXI/smQ0v1faNepyJz8al3cNj9el2MudpxD2Y6oPI/6/vP0ydn7xcknCaxgjd69c3C6ntyPhMtrHrvNJzswnP8rWUhHNyNvw6U52MfzOuYpenpHdwdwG5ATyIszxzx3Z0mCnsMWa899FUs1g9lD22BCdZSgzHD1lOjMtE7F+buZUjkBxq6oUEbLB4LLIhk1fPji/Ot0Ftynwh/UGcXMpZGoUcAoK+lu1JFAazRMFwDxGnlhkynH7BA5E0nryd+2RfC90v2Db7mWQ0gLwjOl1y5pdlzpTsQi2o24e3bAI/pHJpRGU2mzATCoMIzv2ivvDrUdmmBf4EdgWCBeA/vcpVYbb5Vb5t4qfabViX0m0T7ZvqWr8JX6uLnC1VhUIszhSan7ddDJvsNmYhN80cVnac9R+aNjs1oUWQaDOzwkUdI9xa6OGQi1z3liJOE6l4sHEWzvrjFci7XerW6LH4LIayb2xDJSDuUmQ7+lKrbX3y1M10NuQeJATIHILLVlQqhSMflgRT6dQQFZd3Oi99v32r/FsCsjOlLf+5+mKZ9cdly3bPInvm1u51h9fyLqz8qIpQ4DpKHkIKa0GU3dIeUnpd9bi7ma5Aal1BLzB+Mn9TggHluIcrby98c1K8jjkO2Z+yKIrbuKwpGfVLdop7G3K9sOGJVd/64r81f979s+pj9TvVv6j+NFDl6jBH00wtOtcP9+PwcL24T8JYhm338yQd/G4zzvxxGKCXGRR+vjjUpKeazpB6x6VZnUmoJP1N0JUmnPvhVIYkp0r6IKw3qYEfU0AiEWZi55AwOny9w0Lj5kLS+3ADstP5QfMociOJMhkaH7rR/6lU1F0SLiaaPb2UuqqkKD8dwGGU2CVblGQQhzySgjOLiBnTNLGrJMAUEGIKDxeDs1BwHNeL6xNzTX3xtVxPyWCn8jyOWP00mJfdrGY/FjmWXJsWVNk0WHrr6DOYPrw33WjOxhBOzoxxc7TDCjUdoqDQ9L4AgJj8kuobK4DgqiN4Oqn6MqvLpEPMOJFdHH7JUvMijhW7Ies3jRvbbFRplGjImbqixZs0sJxrukg94iGLwbCeU7PbRRJk7CLXRjJMSyBMN4s1LJ6if8s2G+k4MHnBbnZHtp0535mMw04pmAyDFXfPQQyB3emrPO+M+j5sNJKpcIJIafqcRTGQ/iXYams2PpNuJgCe/mPsOv8WLh/njYB+aA841cxtRybLomkReP5hdzFyhM9tZUU0kMlURBbcwYoIPxLbp67NWjd+06x72+uujCsO6QmIbktd57+z/9LZs3VfR478Ypqt4QQKG+Z6cVzUL7y/Lt5QObrYOH/sh6yFkeeBfUijzwbFESQmIDKYcYyZJrPQvm7q+LUqtJnJ89JxHj07MLDddSgBBrbuZq1/la1B0X3jhko1zQhKhMXGLq4M2IvTdW+96Tppg4WPUK7NTX7aB1+6hSqr/nz1URiwpvSgLsJZVn1SPnPPR98Yn81C8ROLHTCl1Pl+PArnJpyG7BSsJeZJ0arlvyKceT3vFSd1IJoCDVy5zsU/xTCH9bDuD8sUUjqEHnF+X8Vlp1K7yaI4IYc5/eKPmtH9fnWBH94r9g+cKiaT5qIdlOG+U8b5u4XhIZDOeOaz4cXxWpLhTXMmOBj0IJRmdSZNLj+RqV+NSlWlx4gY7OSlAlMgAZBpeqKpu51SVmLbnAm8ffPY4UOq85AGfj7Wig9JNeqkr++kzH6qUm2dObsg0/mSfCZp+tgrfcEpQ49J7TRR4krfpWYheSDQB3V9xwQdFS6n5azOQDll3ci4mjyLIkbOB3eeD8CAIJlb6eDx5XPGmTPJafmo6uhmlPRSrRdkBI1p6agjE9najCsASHjTjfN56xt9KQ+y6ZZnu8VSm+1sObe6s0XNIohWLTX4fuYoqhI+48va5aMr92HgABmsTrhTySEzd5tltIkiiLrP67is5prjaF1tqlxIjB3ZtisPy0FM3lppdZ4mg8IiS9+538uO7Gia1zn79DrjwilbeDRH4HT16rx/dm+3nX+h8/woy39/9mXNsvVTZ+vB+CKobFvvVDh2bWkr2/OZP/PZ19Zh4VZH2d5kwwl8IGJqxAFd02K7z/nQHhXnczCYTHtX4Ipw5T/VX573P+9eXL1Yf6zLLG8WxcKt1+GEmWZF2fDLcgmLMCBtzPCKPIi9AiYDq2Pc7PqYLw2vYNsuxty8iC0byWVOc5GLrcj0DucHWwGdOSqCoCSPwdBDyDa+o4b3L3/xN8zfdX9Q/XWpG323yz+1zT8+KQT/T08CGA72wJG913eToPVmIR4Vy1U856mac/avUPSFdCOEFBLU6vE5KUm4q2V6905Ks5It82lAwhjSZIJapcFfUkaaOP7jeHLp9E1NesMho7WXglAa9Ds9l+CxaXk3TSSnP/ePubHvfeZBeuTB1WPtNs3wur+ZMt4SmtxOArbkzdMgXpO6hw5kRQS5W38hrQvYO0KIh93FlcG2LnhgdNOcSDiJ46Y+ZIQF05jxNLfzvbqZhnVJbvz+Lg1ekGYKpuTeqS2ORP+bOiv45CaYbTlqTlt1fFZFnfdJypOxjc/Cw7hw6demKagggP3JDFsOvZxx1geIK4vrohehcEFPAAHLa1zlunnHIQOmhOu3k2hMOlvBI5242A7g7VmAIhPn5zhrRWZRKW8Xue1nG6caokXhpVsR9gfu3Yn6TeRyLA65UIvSn3NeOJZcwYcgsG3OsPdxOzi2HFWF5XAlSRQE0FpSWY6p5jNbCo5JkcdEifrFUUWBH
dU//GN9FsvSHvFZW5x1bAUyZOCf7tQATqLmA2d81aou66rI6ll2VZ4gxKnMKrS6pR4Gny6d9DlK7w5TcSbCzbqF932dUc2CoCpT4DhwW8Hlp7MluyRA9r20bgSmwxRYwsKXZvayjRWFDW1EeBE0QhNQH7ZrhMCni+A6yJiKMsYZfLJtZ571sqCPGrcIs6qqQwv/iOhFMjO46Yv4mVN3vutW5epUk8A9y+y4qDZA8Coreu2yyJnw+RjjMttdyoiIeQ7e7+vwangGkMj1Z4P28wJkqLbtcXV3GkYQ/LefBnsW6ItxwC1IjT+Ow6J35iTWY/upXbl6mLlFsc5myyyv8rq3cOngHJQUU1VIIvPqdd/0rhEHAmZ0aW/10W9uXtUNO1o4y6o7f97/vsqaE9txSLw0WTHhU1OEDqzXMrdZHpvldKPrGFatCVi5Ja6Czvt4nX9i1zhKJRjJexNNziTTAnyMc7p6mysD3PvHX/yX5h+4v8BZ5f1wUOdKRYhANEUhJylyJ/6J+DbJ/5NENgyHRAZw6GEy1bvb9OAlceRJnv8gSl7NKQNDgof5oWf5MNrpPtWe0pP7OCZwd3gAH3mDzNtF4ESOkbz+B/Ve3+4uAj8l6c2GYYY0FPJ7VhdYrktYNIi6uFEHVW7YAohSR6R0O9zcX0tPI89UcGYn88OlFCZDf1Owc3iQxDTHQCYohPkN0OlxhoI0QWzD4yO59lt6BHIgCt52F9KJraf64Y7V9Ot7zg4jMRJwlkPeX9wyaBn+Lw7clDpyYWRUfWYo6QQrXEW/NHXNOjR5v86DPBGOIi/OMt9Uca70zIzSNVDInHBNObmC/1Wg+k3wfUb7csVlSP2UsAqZ8UcL5WwUeHfWsbG8/djOzY0dRSRb+O7ILWe4tgwF2IcJ0Ko4nlg6AThyOpPpDPOsCnxeCZ/TpO2Qbylw0hx+FzitpeUI/UMa0omYhW0Dx/7yuG6F4sgYdB0CswP4EJ+CYGWQodVNPOLk1HXZh4otFwWzr+qgIsZbOD204TVLAyLOc9kPl79U39kWYf9yHG2R2dU690v2F0cQMZCuuuuZ9z31RTG8sMV6dvqLw0dl3pgimh8bFn8pZF++rMJVW81bTt9eY/ejVWA353rz7GG9e/biql2Ubz7/Uj5chLfPXXhbzn6qD9+O8UVR9SrH6XBywgxgF4vl5uobq93X69F+NdrXz1xYHlFq96OL+ms/oj9Wvo5H7uv1C/YVm+P+bAXWpWtVZDZ1X8qg98pXvciUjxUilQLEpvCwdb9x7iQ8hQfKz+HLXl0+HWIa1ahz6nnDEVskjGgLSsdBpjNftQEMJ9frdQ7iJcNcY3oEGIfYRj3N/WKbie5N6XMZHWT1l36wOJdHfyVpGvMpqtKLgSN3ADZBngEW4WuLgurCKM/pQMQX4BaTduebX/wt/Tfcter53IRDB+YbtZhm+cJ2t2kCMAGE0zXTIwPAAUCQtgfJ/piap0VoM2cKhOTrVA3y8IVUAlkk1epIKZxP4h5GMoJForu5v5MhxSwHpubl2w9qwrn0HBgZUDXNM5EMcKo5PHCWQrLi9KCm3eNDQZOg7TD8gCHLB5Xw7PBemWz6sE/fah6I/iOKg6+J/bMq2rMiLuC2huNsTzWMNNWmZ5QhRmgpc1AVohje7T7KiEXbMGfAJ8xJYpC3qeC8YMrY4PDL1sQ0+c01+XzOEXjy6B6Sa5YrvMzaYQqBYld5loY/9dXTmnpQ+J611KY5x9FIAZ16V1YBqRPnLADELTrIA1RkyeBI4O96yzI+wuqf5SMQYzm6RczKurSfVCTqpig/Wbi+aILrQys9QpTtUKSrPSL6gAW0yZ9lqxO961dGLxdP23eRAtcXbsU8hjzf8ww8K3MXKqNmgLMX9s31+99iuyWOqa3BTMx8A5OoTf4DNv8sZK/1Sbluu/K4nge1OvYLO4ydrzjS1Lg6e3tx4uoFOEFZFvGo6IfCFSXBfWWqE1eC1dz6vjxmGwynIhhTqUrVdbY9ucrO+Cw/yRdRyfb0yNu2018eFi/jpb7wz+szhi+xyVZ8Pu6Tv//Fb5jf476m/mX176u/yur542Mx6BXMnA/ETU/Inebyp+bmQaYDD9O43uupzPb4+A5h9NsrfSvl7e+Z/jEJtaWNZRri+45dyGk53qYU4PDd+cIp+J/7x0k/HESyS/P2D9MHGLanOgA+PT21ceq5ud9x5setPNTj9lAx5G/DdsogpF0e6h2Hyf23nDtwc39o3bu5lloCh4I8dvpNjzW7kyefyMC31MUsHYopdhJ8OFU3J/ZahiJJ7+Jj++U1S/NkPjuphcgTSfwuVWbAFVLMtMX2Lu9TTHWR5ngmVe314TBwfc1PZ3xyjDHrbCi+FaxE6qsiuiBxBJtnJO9HYTWHsvJRHjrjqAO23pn5SSw9R3LDFRdORpvK8BG2zrOeTlk1aHvmpf+34FNslZfpvRxpw7ADNoywGOyPpve8GJwku71M9jW1jzIqWJ5iQitPk0q0KEptejQbE402lSgzKrzb6pubj+nkDdVrJs+zvjHrvJBODnCAogPSqKVppDOMyTUZkY4T1kVVfiVb3Qx8qi1chtQ7alVVNnJEdt0wxwKQaOB6tgE7zvrvjIQ5rWdKCiqRz01jFJDJWGPq3qL6m5IQqXXHooqecyi+t7n7VHJjuAy/Ui4zPhSucTk4gA5l1uhF+9nS4jNZ6cs5blCdD6v7JQDOleDdnrp3ePxuyHZNs9J15vPZvnm64nyoggLcWQPm4mufs0iMmEYP3hbl0h+vtWiMSzizKlbdR3zCLUKWM/1p7c+q4mlRXpY33jTOPm/u7Lyt+mw5X7vzogHhiJdhsLYL2RJ3JfBhJu7Z7CvPjtOzd0PEwrALV0c6VlMXHOfMUcIA8pA/hYO1iB4HAKU+9pmLTVsB8U5dXxld4+bIU1bKRsOXM4wiO5JZ0LnUVaT0qxvOFM05qiJ/1pP3lKt3lsKQnkOzomVrVg46wSmdLjVoqbxxs0Xtiv5jlsqwcr8S5FlU0swFBkXnL5JFmWQt5IEJclI3+IRuw6l/01MB06yH3/7FXzf/m/ttetBfJ+pNOp1DU94o3tUfnikwvT7IjLH0XKQxiRSGRJ7nafJa+v5YXR8aXgQj0lM/pmeq3YGQ7+VBt1Oa8vCocD6IYBqpwpcnKY8IfyUxKvtLcjsZpsvEC+IEyfRMT2LaifJox2fFPlxPUc7tBcur6dEjUw/id59PcnfQ/kqVlAj2OC3pcbzk3fTchi01ijsvHdkCTi/VpMr4nt6ZqRlxcQ3Uu2em5VhdsCn9UVE/dVwLjeInLB+Pci9PiXivDuh6u99OD45LT1dQOIrbq8OcORzmTYLahYzYGFP6aDHcXB9r+IJjvd3fbo/pFqZO8bvADYXD9Di8sB8RxO1TUYtZ3qkCBsb1QEn2rciuZGLmdhJgq5QeS8+0kBwVnIX5+yy4tOfuNuRLLsEwaYJFP1spzo1RhaxgyZmTt+gjwotPomQGyuZpXjsO8GFqwwR51Crl+hnHkbdWckU2yXBl/IK0u1zoc89Cspdx1vJo
BqnDcPKcjH0Byi6OQhefrl44P1xGNoKAZVjgJraS8WF4QBc+XWUONseONm6i5gdl1LdNbUzcsDwrijWqivVqzmXDsVa2mR8zBMz5+Ml5e3phR83haYNtOLDXc0QOcYlZF1dyGjAiKMpW4jKG1obSSnujyeGSsk16OpOdEfGjUrlPaluJBmTSsDP1GPKw9bXtVOHllBlgWhEtHx7XK+UjSQ/JJbZ2PdZtPivNUTzVJ8YOX+PbOGWCOSjLyb+kkhlbDUW6a/xsXJqKI7qi+bMRF0yVw7nfx4wdpkZvJEz1+XLb+qwMG38O15PH6oz9BQizWM8DZj/b2E2RFZ3v5Sk4pXOxA5iW7bDY40ZhCbpt1oXQ2x5XwrMxxuYzH/PjD1SimnwJzC+XQBr2aLCXaslnRdA9mjf6fhmrUiMkD8z3sdEymj7zq7pcDKprCyzBE3/E3qtFU1SUKzs/9ktZcJv82JdZ/P/b+7Yfy7Lzrlp73de+nr33udW5VNU5XefUpburuy6nemb6Mh6Px2N7PBPG41HiKHZigp2LYhGcECELghQbFFkRIEcGv/CAEE/hNQIJ4RcShYcQIjkiCFCEAggkECBekC1P8/2+tU/18AfwgkrT3dOXU1Wn9t7ru/4u+kDoA2/3s1BYO/N2lA0m5oHI8lHmPVAF4MwWaF0rV9pyTM2cqvG0Fn73HiRSWwgWJvHaw1VcM3uFcoW+WwzOTF/WqlKZ5bUkJQpJH9OxT+nFbNmcc7S2izwHEw3PgmIGKSMRITdGNTNLPKfY+pU2LZNC2gyofpQgeDpa9v2x4HT1vPXO9CRgwlnBJtC4YTDXQdkB4zHDlAAJ9aQSdXtRhtQagIDLinJXqYf1Sq2sHkyRU0KPVQccrwKTXe0mIGQ5duYrZprqB8wEUUOLLz7/V/JKf0t8VvwEY3k4NK95gsPqNRjOrM7Ein3Tt4ZUDJmJmy2KSSdiySJ9sajmDRdr/UaAC4c9DMejCye7EBdJVxszxrqTEe4qZcy8ltG2pssCW6/Pyy6uXnfRdWu3+SwG881Vx36iDgCN5sBCFIQqdEvJZ9DnfdhgaLcQGozE1tHcZlsXb/2tNsyhZIT7stPow9cfbgYcko3t1DktpIGQt6K4/Y1p8lacpBCABZViyD1rdPDB5zVbsTHGREWu2J64xpeRp3Fbriy2Y4oiEfMUMXiwGDN5zU0mxRSfYsGWUIHK0Aw6wuV4kUObhx5dKjpgnYMJD5gWgX0w2QuT3TNYdaOCoYXgoH2Y9mfZauqzVE/NktUuFNPXE5gqMWcdAxXmXwQKGbLPT2Ue/Y1h6QcdIGzjTEBQx35e4A2GkIiSmkJh6bjn4WwBzF8eGJJHv1bGesBTqb5ahTlqQSrhqbjWUAFMmSDC2oRUho2cH8tyHzNKatqvmetuM3xHDjWyxbgLvI3oPoHaGyeLv20zq6k7t0WVQT1ZRaoITrHdNesDVd6hnvTIuEN39+hBlb4Ea20FsT4KqPfaWiZja0vZ9loZMn+afm74uMrLpspH2r4aZlSGJ7I3VHao29zsp5dSUXP8YJWZ0h/DwDepzK5whUr7njLUQE7HBgRagx6CgvCmOhEn9fBg98HjI+enxcNBtgsgAqWheVbOCnPP21lIMScqM7MHOKzymAdBZAAEN0UpqMSFc/IYg6rRxIUnvf4d1X9YXlTrpuppXVfYc6QJEwd0SaEuG+eDnqrOFm0fwwIqxxNw/R1nPpkwJzYKJiEhq8+/n79s93f7v+fyOlN981WdlqZ+6O4qM6U8pcbuIDSDMDT2Dt1wqyv6mm40AsAioGZV4pPPf09+R/+q+KT4jPia+K0bRz0bcXOYGg/bLVommn53Gjnbxr1lFU3Wv+szNbOMQyoMlxZnnefm/WiBt0JMYFg611zryMyMLsJt52cuwc+eR5R8xP/0je16VC6SnokYy24ASPATZuR9t6LvFNOfim19TBVksoo7/G5NR5Hk6vpyeRWdj7fwwrihi4IYXSP/1Aw4vkRf0xd66xebbnfacdkBqtTX9xOqnofR1muzvASmN+4XTMdSpdb8nF4cl4OHGzg/LLZlKRr0q9UpPthSbdxZhQ0RIS+iMaj8Fp3hAqPw4+wOJdXgOt6sjSoVgA9iYQUofY7tOBO68AumORQNKH06u4FYlIDQjLRUASQ2T1M6/u7YDRkFBE8W64AmZpaVpGJB00EdF7kPfrr3Cr2iyEeFt6bnox4QPZrO8qHG8BsQe/Y4p7dD7TKDf1hgnf6lUTAApYh07iyLnYpKiJkw+7oeNoDzxGzad5izag/Zh/t+l8rQMsEUjN6YLbImEWtp+hRHUolUqz54g3XNBXVemC+gLqViNMkFMF9hqGShge9JWXntZ0EvkApze5HBnxNqHBoOfzpAMbFOp77x8p/k1LNSZZi83G4yjBsodxtIwoFM1zPwJ0DJGtTETNrhVbErB/dX5ZkyoK/QyywXC7nMof9x6PZClo5e8bodyqGiRrYx49p+LOtXg/pReFMtvzh+2aZVgoG2TJXo9Xd3fT6lu7CrgjYut6JX6xnde0qm00ROMrrQZjiRYVYy4QKWmgmLiyR1In1mVufFmkv/QlJF3c6C+ejLITsyWdBBFYmuZOlUIRojqXQD2cJ9pi2PTFIqPZDmKDH3shmeq/GsqJaJ+RguJhyDKQal9WbXnIt5TpEeyxKrC52ZO4An1EL5FOAkI/+n4x2Her342ewuUgKcAzHeoXtpfXa39WPnXQ756uSXogpmoHthWAw74awmx6PiZOFHVAzLs+ywyjlm7fzu838q/7b+szt/tPOfRU+ccMxCERS1w7pJ14OIqk/mengDU+Ao1bVl9He8F1tEaQgWuOmAAcwgvezKEXG1iuggjg9gu7FSKMctiO8sbsQWtwxxBhCsbZxHUpBaPZWY9W0GW5nC9QA/PyRTPheHl+vtyo9XhPTyhmXGEK3MVsUHYRGf4DKWR7qJ4CO82TMImbF244Cp46ptoNtPNcD+hEoDkydwzMtkpIQb41WWo41KykTvu4mEmsJRanepTrjcb+7Nshqk/XCdpYnVMxjHCl/bQSb0hrk9dKo9+4Oy8oHPNdYrrXWpdxaO5VqnDVX9oq5PqsNxeIkqCszSVGMLgx4WUseUPG2i87S5KvI7VhWUnIxbTrRzRa9N94Nd5a37YEJhJD2QflC4bKSpyTC/7+hzzeregpmllj5zDzAFUFIli4dRvPo2VgeY1NGTHkymsrQvf215b7Fn2PK6SCsMo9wwdQwzUtYVxSDdf+eRnDbZfrF4q/3zgW0P5YHXWZZREAltktOFq4e5zMrczDJb9HuuvdM+fr9+pnf/fnvYNsjOdIQVYkBO0cRw2KPTdeigDVEPfRaqkZ9Woh7NTZiWU54tJdU6U22RjnwIwDwBYKCsBoby1WF2kIZyCE80o6UpzHE/vxjsjX9sl4J49iibTQczR0Xj+8X8hz+fNraV5Ruz3Xd7vSJ763j+CXv9YPfPFWmf3lJh9i8rs1dMqlY8p/6zf8/sUxDX2l61/pm728JqfUeIv/P8e/Lv6V8QX2EOwVb
WOxbxy6Ijww3ZSeoFvNd09no3K2SqwZfwQ1k/BHgGQJ4NKKPA3Reik1J5GhFuUSCw7eTGtxv8tpDUwixgSIBfI8rnhik0ZfEsthjoYHZMv9v6Ry3KrvIYtnvd76Lj5hqQnIvr9TW7h+Iw3/htd/Or1VXUJk2u19G/k9uijutgIhx3cYMsXFKlRG9wvbhacFOx6TjrzKtjK8EOVRThQOednW9HZ48jruTXzaQ3Smt0BDYRrY58OeDk6ITVTTl0yS7GyCHXlEGpXUR1jVgqIgUdFCuwuEXUG0nhCe/3RSubxA+oVEyzAJVJeihV7lvXF5WWcpelIkKVU7PvMYChjzzo+8DK3hY0b/DT6HH09rjXo46mcIENkPxeel6uK+0ziIvnrjWZZY5BiSFJEbLaUMJVrKIKzyk6XXKiqoIJ4pCWgj9KNwwCVkQyOZTXaxjZs3AFVK2Vs9Q0TdIgCpubSYBwFLADFDUGDQtiQABSf5FizsCfOb8fnascd0mptNjfJ8HCOfPQBleJ+b3kSfno3v2xrdSJ3qcD3gSIuygxFLtJ7jTYfPQma1NOPaXG+asUE9/L3OywvqJCoz48PH1aHDwpT9jw2PrGTdNs065eMyOdFpC7UdZWvkk1lNqTognB4JqX8/xTbf6RFIj7JKmHFwejj1LuzPpZP1VLUw9CUQKcmPVgkxvalcs+MmhW3itb6GfjX8iPps9c2B/YM099uDym63sweOpPjqQbaTOkYoSuo/Qlrin1ehO6+mKvD+BXha4mgyf30jlzYN8N2V0xHFVvN0WVtCETNkARB3cibQM9J3u+lxrxIA0UB37l+T+UC/1Z8UX4dyb9OKqOnG4T0SRFZ3bZRghtv0P38ZqPzyp7Awy4vu93A292KEq4l+h37fiQnbvjFDyK/MZgg0yYMJ8Ikv/ry3iIrrZmbjjwvHx7JqLEHifHNkYgHnwMPkQv2hqNMBiH8TdXETWDviC54uXffbG1iWNETTQXYARwFOjcNhUXSPcf0nDadjRYiF9GTHDk3G4hmCwvzlPn1fVWm2Jlks+N910i2xEEj6geZjMILaOtGyU3XlUY2d0f7as89dkAuzek3UqwICbUoCZ09JKo3Ca77QaPLs1BFqjGPy0KJ0dUCMM0O2F1fCUh3BtVOzt7bV6wBZEDb8UmSQ70Quha0pmS98sasUlEqHbCUm6aZ6aONWJ4rqvZWZcHFZKpszzqdYE+iQvQqKOwwDvCAjgZnhLQi+vMsEKVkY0H3Mun9M5zI6+oINVvqvunbXVqRu2soioen1e3UFbXEFfXoBfhmwcOEX0Oaw3pni5nFJ4c5Giw/jnwEJF7RzWt1x6Y6NBPf1H4tUx309f9sFY9FrCIrksgVh44u++Gmcz6ggJIRX2AKlwt5UKluemtj8ayYPAt1f5McQLVKUC54lApCnV0+cpRz+4v5CCbU8RKSrOXnBULKyberENDcYKxUErxiKoI1006SDNMfUSBAULCVmiw4GX0D3vbKROfBwuLNpaRB8tK2VEvrX3RZMNCezcLfV1V0AqVbBjhLMWWNvUDuztMR4AvKJmawh7Wd06mr1jtdhLx2ef/WD7T7+/84c5/Y98gZNUzBoHccF06U5/trgWy5JSeMfXB2Y3nqCPhXEZNpRer8U5MaevveLWdDW577mfCxqQZCYk3W38GJkdBpWRbftzszZarWCqbjozYoZt5D4SFuBzcaORGMRi8uUEEJMtrN24HLk9RIOeazlXZA7p65idptsxS5+RQCozpKK1pav2wAz4eFtQbAZkRCuoTbYSpUkNZseQSHZfcY2WcY3qGozSga+89lbmTfuWrbPKsHlJCrSAijwylnGycWJnJ1FtnqZPehYm1KwBTMaYXsWxSQ9R0IY+L0joRU3va+2t0B6kfDD7JbINs6POiKfF4Bpm1vihCrdRJ1fv4+M1eWevTLN8P4V7l/owz/d3q+lHvJ2wZMo03W5xpsV+5Ner5QOnsjbPXH+zr1h4E3+rxYdnshXBa+X2jetRrL8tgz9/6SDOHEWHqUmenWTqgLrgYZqv2LMhBSPb6d/PTMBpyr40nWCztas+nZdgduks6+Y56jFpgJAZDIuodZG9kCnluP/nweDz5enp1rMyCvpVe8u9Favp60ivUmB554VLMvCZHPju8ZzEj/4vP/6X8B/pzFP+uxOfixvVG4k+cimHkp9rO7YkJ5AYW7DwNL+Tivr7avFgAdqH8Kr44QlVmXJdy7Vno1c1ruc5EHnkSiR6sNPbCDevqmh1gLhfbY9Npo1yeiavNjZjY+ZZNij6Pu8eo3rCdaMdWMXZ9Fy9srSLBbjDsd3quehOR6WsWQUTr22f1FPz2uhQL86JG55ETlpZLs1r07Ypn8p0HDlAwEYqykd+uqJ04p4ZByNJQCKUTQJ16gXBHqabFDt9IEJOgIojIh7VXAjUeZpZAkuuxUWaCXr4qc5gu6SyhYgZg7p5MMpR5CJvjHO4qbK4OkBTlFwVB6AIYaGS44+y+TPFJgV+wmGjTccp00kq5r9AOKfbYYsP5XMrTKllRxqATmbWoxGQvGVNJ6XFYjcwhLUj5iXVEP/gWJQGYToUE0ogRKgIaDCrIHCm0qYoGVDg6k+JjicJalL5axTqyhWetPywYx3JAf/k7rqfXaXhqLNRexlldPVmNTpYZJYlHhTr3+z/1JpRh5CN/lvF0Cip/tT8vIMive2fVY7c3yvb67Z66DCHoI3skxXw2kyOXj2Azu/JYkwKQuK7ckR2HypvH7fVhBfZrcq+aLpAwvZMZ/cjpbowzsI417xWwE1gX97O+BbzBJcUA+qsiFAl1fLNETjFuoQuUpV7X1mc6M0/SdpkZ1Vwsh+em30wcXT0bgjXULiiRQRUAABsAf0Rqv+PoDhdBV6qnITOg2LsHkD9Mw2E9D7oBFf1fpRvM4x2MqgJ4DacmumZbJQ7ksZ5Eb+Pv0Yu/yB4zm5t2kFc8qCdRuw3nolvtxHaz3VpZsEUw81S2u58p1a4LGBd3djMoT9toQjXgMrU713F60xFKr9jSt+v5uu4RtKtn2/6uy4pobYGuUM9EBGR04+nlIvab8Se1v1eXdrmKYHJr2BaYh82RY76J7NKOVxtdjxfMTWdCzTMI8L8wqhkkn6yDhJuCtREqHIxXUdQcyppZmBmeo9GxKsGjELlkuBSdBuCYqczfs5B6xGsGdYvCjO6ozkJ7YnRFDz22k4WEwxv9CM7TYUph1U2nV490r9+MwMpMGkMZcDfNlE4T70o5p2xnMfFoMP2QnlUzI5WDvnyGWhUwfvQqRR39dlOIZIG/SgEgYA9fM/0LyqBwhYE9uhOpluwOn0q9qPtL/W5KT3ibV7NqVU7L0pQgwLPhXkyao+BPTiq5mDvbL6kVlVPvEzl3002x5/2wcF9G6GLjKgotQEhq4fKwufvpvdm63JW1g4eCDxO7i7ecTPZmI7N3nO3tVvkyobOwn13kr7bFxbhWa2dtbl3d84Uc+PSdvR8tx3N6ps0hL/d18k01CXf8vZkfItZpDx8Iw1BW6tSYTUFJUBmu6Hj6LZPCzY
q69G/4k1SlHrse2I9TTdRIpEDl+ub0QLvd2qVFU9Ur6wzvjD/6/N/IpX5r53+LnjiLlVyUHFltbgQKkCq2HMWuokviNqcb48S9Jx57oIci6bmzVHthq8AOkmAmdhDJ7VADhyfyKT8k4N39tx2xIL/x5KjTE2JWJzeRN+UeJqLIuxYDTd6DbJ12X5jdmIvBNI5TO8YCfwJ5zd9pVDkUtm9Z3+jifNNs0H6tGXjFxNE+ABJ7AuXhpXST9e5YqRx1G5XciaAChQqOjB5sZBAIbrP1MN8ielpslKY1cuNSDnZwacReJoPUH7oo1oLF0lS04EbhlNJjEdp0SeHYFLXHMSsam/AKUdUUS/3C216dskI025g5l0HQtpQPzaDEAfRQzNKyDYIJFQYND887uS+jTs5lAUAg+TwEg6VrFnJquITrZS2WsNlvR+VC6BGJzmQ7OFHfM2o3b5QvqcmRqljtVp/W5T2RDIo75bPsU2Uxc/ZqGT72QDlLSSazj3z/pfKZkpUI0l2ce+FDL1BhxSMl2Zhe4V4b3XnJ7WXN3ezMJx/V0wGEeTPfhr69Sl9qPlIqqgAx6sJZeDjqD4s7fT+krrSh+4P0EtKaei8bkryokoMCwjDSjJC4QMCk8FNK1ARZ+un+u3QibKuz2ePgJpDJsNXo/ZN9W0nqyGD/S8eO6o/RnfvDd43cd0FQwf8WNvZZoDoTSXKZMhXpgz+xRZ30gFHsSZfaQQLuM53ZphnluoxeKf/8+e/KH+jfFH8YNVZkBNRvDVI6S5RlZ1LPNOVznkGWLFq/tDymGFhW7efZJ2o5Vsi0C/6TbRdRFfZy2Y/inAZ/urFuxVYVGW+4xRia/s3MBO1Z/wZxYSLsuVNZgG8incGo13d92c0pGPC/XoGHBHD/JWMHV9ZsV7xFlychugRAiY3wRXRasafjbURMXEhWm8HF5oVWytME2Y6/xuXT7fo1Tl1YTmHNuJHry6iiQqluabocbBcGPGx848s1hY9FR02F5MSAidmUPiEleHUZoc9iu2CBDuL1i0r7aXIO+QVeQHdyvvwj+e3dpyGHYrLph+DpnhcJiHVSZz3Fgh+VGGWyojauwF4RsiUZFMZAuocGgYP1odEQZWEJBPQzgpX/KFeCR4R5wiTZw2xWAF1kwWKAkp2IJ9amkAfxnUc90E6hSFdRrwhHWLPFuUniKpaFB0CKoMRSV6loeNgLWBslX6dKdlkVo5SyYFWkgbK+MNha0iuZUapY9JvSJDuhcNgABA1SBlAupVhkOzVmz2W54lkPKNV0kuYz+2yVKdG3dzIZKCoC4qeMq/ELfOrp4jno71Um7Y0U5OiDmJtkZZ8mIrpZZ6KwwudW5VnQYGMkrFyO6QawDVP91YNDM2numeMvGQpfrhhSpv4y1Ku0EuV8L3vVybTWpXO1SukbgP9clfSTEVU0VIyLHKsWA6Klrug6lzxuhsUqBjLoEyBfRnVqUL1gZ1p8yv1opayhOFD1D+tJNknMnrx7L8sXzswhsyNHx2t9abVn5F+pqsT08uloPMzpomrr+sWpoYOuGhf0OnubIrfgB4EOSzNMxL5bpapoqUDWn3lzzzS69pL1SKuhqk59xqM1JYblKB95XxqhU1mIAsayJojGwasE7C+0C/TVM9MYu7R6NmTDXrpPucMT4FIKhVR7Jx8vq9eT0aQXJpjcjxiqb9TwGL6/IVXTBjdVB4drBLNQEGOiCRnQcspzwRZUK0WZUEUGI1IXzLlZTpPdKZg5Ai7BKp3LqjJ2QwlL+Zbqjq88/3fyu/rtnf9Od/gQvKfOaalTiokKg6icub9ko6ZOYy1W3FiSXJ9HQAjzp3new+ykrbFdx//gkl3Mk/7DBxxPeQMLNMbNP3/Y/Jnb9i4OLldLHkJvcbftoEhY//sJB+V1G/sCNoAy8eNM5yWAqh/Mkg6JtrraPAGs+vqwo3Ju2eidswB9aT1gf7uko152s+zhQHy/orTVeDNpFQBKdBP0fKTXpiroqCaF6iuIJKEYBnSWXepxNFmQQYRZtS50mdPdBqUYmqWgQQEIxpKKcSSILpaPsWAYGtsAK0BB66MgKQ7saeA+jHwQXO1dXivT8+rQ5HCUD4CS9Ip942dpWxwBeMrYVpeu9VNjDlaaPZfqEs5oSsShpJjthuRukqIAtw+dc5QnAn/1gt77B/+WIoSElQgevUJfwKcaIhG/I9Mka4K/ewCMVhJ6Jp/v21PV1ww+Domui4QOqwdG1aNwLsHLUElhdI8ibpPoyuYhSSlLh2hcSTHEwvcC5Rg1QFqz4j96VYRPxTxypnTSo10f9Qw96HfcfqLGTn0sTxvresnPf/b4e4Nf0ipdqItCTkLPngx+3MMRLrENtWEOKmw9JXMKWXVSDZrdWWVm7HkwTvc+XXyyb/InpqKLmoTip47fKUv9BhUUqpbiQcja7NjeU7pHZ5GuwoTDrV+06i2XH1IHdSlkRY2NR43xW89/X/60frbzX1BjUAa0jHi0vINgnDsvOnm7wi3qOesQnG9LkMEWf94f9CM+lLUM1/Sqy+g5seHSox8VF1fx/xHFFR9WtL5Y2jLkSVGxsnVJXa1fyJyciUhvjNj+uC+9mRFjWzS4YMmleA7wRtZR+ZA3OdvmgOo9L0wYVDA1SPlp9wAQJWWejpL87j7UrsdAOlIbl7WFHIpatmWG1pYymGUqUckmxxSlAF10mJb2oBmaFRnjjAYllYMDi/O0oDrX7PPePUCbO+1lTU2Hp8/1fZ7s4imzGYVVdJRUONLfGSYgl0Or3/ZJj0rChDV0MiOdzDRl9UzR3Tx0YoEufCTXQe1aX+mjRn0tQbEOvQ+bwlsR3Z+e5xdVIUSoe+88AQGRQvQyfU8/mLZ3V5/R9UD5JqnlAaP9WCQM2QioBTVKzEgehjzUiaYg3djhR88/W74/3/vpLH3kKSS3aZ7jHC9KkXs6eAU1wD3jHqbiUf7LVe/X/GuH/ff3DyA0oM006Fk6evld56r+YjX4MZ00VSLbnIrf4CKRqqLzUylMaU5cstJq6E78y7vhgQ13B29Dokd8/fmfyAv9KSER+02krW/6D2JlOtj+tovGsf86x7qu0/fsb8vmIlZvU1SWw/kN573PhghcvF0yc4OLyYjtSy6u2Ua8QwfEYek5lnSrx1ydsnvU5QtsDoMBBbjsYL1zgYy5KBS6opVwJPptOj1QptBvLq5XNw4wXQ37NCKn6Y97Md+tks8PqaZIG+/oekPfFtUdRSsDSG1fT1O6rFJkGJTK3MPzniomBT5aJeKm3EhK2gChzkceSB1APyvmLBhmSZ+ykCAyARU2WVZ72wyrCmYOqZ6BzqQjhFZROSa85VU7JqrMQQOaFGfKM+MDnkBRg84Wrcu8Ouv5MqG47SrFrn30hGMJNaPvhMJrqV/zg8KulnCPSE9yYI9hQO4KU9LRpUMwXBZn+8zA+3rdfn5viN2axiK/VA8OLHS20v5Q5+9Mxl/Jsr2E2t7KpRnUb0NAFUox9uHZ68XJ/lSoUNFznWqVzUPzlxaTV1ygLq/o5WnmTLRZNao9cHdfG41fWiZy1
MK1qja/DFETSi1e1SqzUbs7AdYeWh5ZQlHmSE2Ox69bNbXZ2gS/l/WPmiGFGUqiSTlCNqazWGgLjoFzstz1Izzf333+p3KmvyCOEYsHQyZNwRIJrGyWvkSJAiDYOrZjS6jbcPSdCfN/OWWeYhmwYcQJBfVSYHvONK0rXiyvV8uWEfcoVKbM+uZA2rLM1XLBNg+nYtV5NlAJc8IANIaAMbqGSUbxWb+6WU7j0QUAB/Oa68grRd1zuZ3MxMrrakA1ksEbAnyOmtM1xGpWV4sVIHKAxWKusrri310NoGeHnd/11XouOhErjGXPk1/JptHlj6dZW+F/KIcXVL/L3thmFT8zOsfoTmZ1ZlmYdsw0O8eoK3RARdcmJZ29Y54JWDGaOaNJ+4lIBT1aQRU6E9Nli823pqA3UrwKBvYdGv7YQmvecFv2XnMlEB+F86uiVyOwRzOgqLngOIcIj92UiKwl+jB2K2MVS6q7sT+GAPUYGxGn36czkPSCouTUYydMXn2zjK2wolSyHCZVv4ZUlTrJf6a+sNVq83jc+rpwOfy52SCINQcKuDNZSb/VlDp6i+SoLj9e5PN1kmW73v947t94Itczdxg8wHysNp/sZRpKnrnXckDtkJ5AEwdwcq/M2eR61PS+9FqRzuvXUAw5mdMB6Vfj7L55uem996R+pWxz08sURFs67zp8A3gxeFvMlWK/eirJYHTh2OMIOCDqFjW8wxMBzvcfP/8evfGv0LfcE/d3du7MNad9LhViPF136qybD++SDYs+MqJkabpXdbPGhP3EEIh5JCA679Vk+wM9RRKR3nSaVku7iLpNm23ZvgVlYMG2HXTgX+LGgOI+dxPdMPDGASyiy/bEopDRqxHAl/g/21LPwLTAm9VA/JhIlaS/kisIN1BCuTwc3BAPo4jb4kUbkPyHJFUpHrrZbmqPgo+EJ0GlLjV8qbC7qsph/mBq3EuqfMz+sC9DMlJ+SvfBO4wYQq8SszR7mAw8u6mWVQK9Mr0ydQkIOFBcGhZ7QGmwyDr8/tivzwABCuwEUEMml7uuOPUzuNnQvc07HVnU0ZFjmJTo9sUEJCYNTgvYLKCxWDTunrpFoMAzyCo61mGULLyaUI0EvrQI4oPv5LtQpcxgheILyDcovaGERw+8uKSHKthcBewoqN0NVPf35MiYQuefvTzy9VgVnxq+92QY/qrcf9S7finx7xaDYzHYK5bz8KiYvlz8lUHz3uvavmb9pYEmbSX2yuITcuploE61gV5Lrt09VcJjUcboguuP/sjIXGqW0QKTGN8TMG5QXDB5FuTAVVrkWlPCsrZMLCUjytnUKjtvMw8XQnknmiKlMwz11cn88oEcz4v6J6lmHbSyrnH06YpOy/Ju7fdNWum+KSv35cxU3k+drpPGigysQZP9QQ2353Qsxf5Rsz9kq6VrIPeKhvqCwc7PPf8D+b/U5Y7ZqXfGO4ud453znX+x8693/tPO/9j5PgXEWkzFvliKtbgnNlQsvS7e3Nk53Lwi0EBQhZNAPWC9OaTi/GpzIZ/xAmA5vB5aqtxYsvzqkqrcgWFBN8uWBBT/gYXsmwUywH25vrJzNqy/n1xfra6oowfRFltpOpEwJU6iNbG96qZ6QEXdmBeLZ6LTW+emZZ60dLLQxgy7w8m46jPGghq77A/psM/FIT7uyi5A22LIiLEDFHoNqx3g1c31koeVpWhaiBlw+gXOxAyv15Y+yeraRiGKa9OnazC0g/615SV7Q6+4T1em2TCSmr9AiTrPJmto/uwd/BxV+fpLOT0uv1F/8F/vU/+ozpYN4JPJPzvb7LVLN0iycjJU4tfkl66mp9a9PpgEe7pZnCrt6blJPq/ocbBl5cWEavLjvfEd6vmHTuXUGtrC9ZNAndfnS+eWiLH9wrCwmxjShZbzbOlHyHz+8llhF358ULr2IMucG1J9uXQeauoVHCwTwacoVHL9kR++K0ZZOoYdyY/gr5fHZs/QO/6CpMc5uN5odt1CGIUq1s/Rl9P0kr4SOtj13uZMpnf+Vm/2pcaoXOn/2M7NPWXMpFAN9fe6OIBQa8irs0FIH4L3dk89VLPZN3rvPvxTTMv04V9YrX/90T+6ntQfn3zw3U9cnR8Uo/Dut6/eppP+m40xr1ZVvT771b+biM1k8uQlf+/4o4d1Qxk1yYAs79ct2Jyr1aSdzUJYurEMY2DDINMj/+il3hiBaRiGsKjnvbiiHBzcS/LElKeDHtVzInXZKz+SyMf0GeeuHgbfa77SJgOjSu8Lam5SKf+6qKhRdI95jPqL3g2cvaaK3f6gPC7yq484HUbi6xiPruvHjw9++MfvFiKxB1SGnNmjqmfwTLyxhlTDD96++rb0afib70FUblEWf/mwPbg4+/jea9lPfjPra/1N0XzhC9h5JtkHX36n9ZTOZ/b1+Xx99DNPTv/GLd/zlu95y/e85Xv+P+d73saZ2zhzG2du48xtnLmNM7dx5jbO3MaZ2zhzG2du48xtnLmNM7dx5jbO3MaZ2zhzG2du48xtnPn/O87Aj3tHf0N/Y0fu7PQOegf6G9//Gn7u7Pwf/gBC4AB42mNgZGBgAOLrh+yy4vltvjLIczCAwLVpRt8hdNd2Bob/daytbCAuBwMTiAIAS7ELYwAAeNpjYGRgYP31PwpI/mYAAtZWBkYGVOAOAGrhBCwAAAB42m3PKwsCQRSG4W+W3W4x223aTRq8gMltWmwaDCJaFkHQaLIY7IIY7dsEwWITi/4Ag128vMMOWBx4+M6Zw8ww3l0lsbyN5B8tk0YNLfoTLtQdLDBAFn2sUECUzHQjy8hhjQl7FTIF7jFDNJBhf4cHdZ28kk20UaXfk0uMELpz9s0iswPZI7fkFDH12Z+r687/wd9eUvD8pXljzKc/TkyfJ8Mk7SyYSV80s1AveNpjYGDQgUCmQ6wu7HGc87jf8F7h3yeUJyolXiE5Q3qVrJP8NCU5lR61OxoxWlk6cron9KsMFxnfMa0w/2JlZb3HzsIxznmS6xv3OZ4LvKf4fPC7E1ARpBOyJSwufEukStS2mKy4sISupH0pn9LVMqOy+/DCOQA/+TbcAHjaY2BkYGBwZ3JlEGMAASYgZmQAiTkw6IEEABOHARgAeNqNUV1LAkEUPaMWSmASEdFDLL63mrkmBkEEPhTEkn08t36ktLm2jtZTv6Mf0lO/oOwX9Fd66szstIkWxGWWM+eee++5swCW8YIkRCoDYJMnwgIrvEU4gSyKBidRxb7BKeQxNngBD3gyeBF5kTM4jVVRMjiDqqgbvISKeDT4FWvi2eA3FMXE4Amy4tPgd+QS6Qh/JLGRWEeJnrbpxoKLEAG/I3jw0UMTV7hEm+HyBBiQbeOU55oan9mQlTbrVezhHMfUnxDNV23N1M0rrBnFBW8hhvQRoM/s9CQXDTJF7fyH7VIp6Vrpx3GFzd2qzN6y642eJ9Ehqzb0uL0Nh6eCMnYZzj+8//ZOB0SediyZs3BIrqd1FlEfrT/et0u95Jwhaigw7nXYZEI9f1prkwnpo6A9etxCbSrjTc+oVu94pCda2CGncg57l/kG
NSKHzPcfb1HdoVbtJemgqfsPOG3EWz3u3sAdGbVNyAr/CzM7bOd42m3Mx04CYRhG4fOCgAURvQa7qP/8zFAsi4k6NgTsla1KYgwbF168Ccp87jibZ3fIMOqnzyvjOgZllCXLIksss8Iqa6yzQYVNttjGEeCpEhJRo06DJjvsssc+hxyR/D5OOOWMc1pc0KZDl0uuuOaGW+6454FHnnjmhZ4mlFNeBU1qStOaUVGzKmlOZc1rIf/28T14D1J84euz71zs/vTO/RuY3qyaoRmZNbNuNsymGaf6JDVKjZKDIVmuNI4AAAAAAVpw2jcAAA==) format('woff'); - font-weight: normal; - font-style: normal; - -} - -.weepeople { - font-family: "WeePeople"; -} \ No newline at end of file diff --git a/spaces/mfrashad/ClothingGAN/models/biggan/pytorch_biggan/scripts/convert_tf_hub_models.sh b/spaces/mfrashad/ClothingGAN/models/biggan/pytorch_biggan/scripts/convert_tf_hub_models.sh deleted file mode 100644 index caed81a1e9698014ac61e8baa3d98d256cb3b4dd..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/ClothingGAN/models/biggan/pytorch_biggan/scripts/convert_tf_hub_models.sh +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) 2019-present, Thomas Wolf, Huggingface Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# - -set -e -set -x - -models="128 256 512" - -mkdir -p models/model_128 -mkdir -p models/model_256 -mkdir -p models/model_512 - -# Convert TF Hub models. -for model in $models -do - pytorch_pretrained_biggan --model_type $model --tf_model_path models/model_$model --pt_save_path models/model_$model -done diff --git a/spaces/mikeee/ttw/run-python-app_py.bat b/spaces/mikeee/ttw/run-python-app_py.bat deleted file mode 100644 index a1be002ef8c72c8d776bf575301fae04ed340a8d..0000000000000000000000000000000000000000 --- a/spaces/mikeee/ttw/run-python-app_py.bat +++ /dev/null @@ -1,4 +0,0 @@ -REM nodemon -w app.py -x .venv\Scripts\python app.py -REM nodemon -w app.py -x py -3.7 app.py -REM nodemon -w app.py -x "pyright app.py && py -3.8 app.py" -nodemon -w app.py -x "pyright app.py && python app.py" diff --git a/spaces/milyiyo/reimagine-it/captioning/utils/config.py b/spaces/milyiyo/reimagine-it/captioning/utils/config.py deleted file mode 100644 index e42704dcba2fb2f751fec413551a5069e63f25c9..0000000000000000000000000000000000000000 --- a/spaces/milyiyo/reimagine-it/captioning/utils/config.py +++ /dev/null @@ -1,153 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -# Copy from fvcore - -import logging -import os -from typing import Any -import yaml -from yacs.config import CfgNode as _CfgNode - -import io as PathManager - -BASE_KEY = "_BASE_" - - -class CfgNode(_CfgNode): - """ - Our own extended version of :class:`yacs.config.CfgNode`. - It contains the following extra features: - - 1. The :meth:`merge_from_file` method supports the "_BASE_" key, - which allows the new CfgNode to inherit all the attributes from the - base configuration file. - 2. Keys that start with "COMPUTED_" are treated as insertion-only - "computed" attributes. They can be inserted regardless of whether - the CfgNode is frozen or not. - 3. With "allow_unsafe=True", it supports pyyaml tags that evaluate - expressions in config. See examples in - https://pyyaml.org/wiki/PyYAMLDocumentation#yaml-tags-and-python-types - Note that this may lead to arbitrary code execution: you must not - load a config file from untrusted sources before manually inspecting - the content of the file. - """ - - @staticmethod - def load_yaml_with_base(filename, allow_unsafe = False): - """ - Just like `yaml.load(open(filename))`, but inherit attributes from its - `_BASE_`. 
- - Args: - filename (str): the file name of the current config. Will be used to - find the base config file. - allow_unsafe (bool): whether to allow loading the config file with - `yaml.unsafe_load`. - - Returns: - (dict): the loaded yaml - """ - with PathManager.open(filename, "r") as f: - try: - cfg = yaml.safe_load(f) - except yaml.constructor.ConstructorError: - if not allow_unsafe: - raise - logger = logging.getLogger(__name__) - logger.warning( - "Loading config {} with yaml.unsafe_load. Your machine may " - "be at risk if the file contains malicious content.".format( - filename - ) - ) - f.close() - with open(filename, "r") as f: - cfg = yaml.unsafe_load(f) - - def merge_a_into_b(a, b): - # merge dict a into dict b. values in a will overwrite b. - for k, v in a.items(): - if isinstance(v, dict) and k in b: - assert isinstance( - b[k], dict - ), "Cannot inherit key '{}' from base!".format(k) - merge_a_into_b(v, b[k]) - else: - b[k] = v - - if BASE_KEY in cfg: - base_cfg_file = cfg[BASE_KEY] - if base_cfg_file.startswith("~"): - base_cfg_file = os.path.expanduser(base_cfg_file) - if not any( - map(base_cfg_file.startswith, ["/", "https://", "http://"]) - ): - # the path to base cfg is relative to the config file itself. - base_cfg_file = os.path.join( - os.path.dirname(filename), base_cfg_file - ) - base_cfg = CfgNode.load_yaml_with_base( - base_cfg_file, allow_unsafe=allow_unsafe - ) - del cfg[BASE_KEY] - - merge_a_into_b(cfg, base_cfg) - return base_cfg - return cfg - - def merge_from_file(self, cfg_filename, allow_unsafe = False): - """ - Merge configs from a given yaml file. - - Args: - cfg_filename: the file name of the yaml config. - allow_unsafe: whether to allow loading the config file with - `yaml.unsafe_load`. - """ - loaded_cfg = CfgNode.load_yaml_with_base( - cfg_filename, allow_unsafe=allow_unsafe - ) - loaded_cfg = type(self)(loaded_cfg) - self.merge_from_other_cfg(loaded_cfg) - - # Forward the following calls to base, but with a check on the BASE_KEY. - def merge_from_other_cfg(self, cfg_other): - """ - Args: - cfg_other (CfgNode): configs to merge from. - """ - assert ( - BASE_KEY not in cfg_other - ), "The reserved key '{}' can only be used in files!".format(BASE_KEY) - return super().merge_from_other_cfg(cfg_other) - - def merge_from_list(self, cfg_list): - """ - Args: - cfg_list (list): list of configs to merge from. - """ - keys = set(cfg_list[0::2]) - assert ( - BASE_KEY not in keys - ), "The reserved key '{}' can only be used in files!".format(BASE_KEY) - return super().merge_from_list(cfg_list) - - def __setattr__(self, name, val): - if name.startswith("COMPUTED_"): - if name in self: - old_val = self[name] - if old_val == val: - return - raise KeyError( - "Computed attributed '{}' already exists " - "with a different value! 
old={}, new={}.".format( - name, old_val, val - ) - ) - self[name] = val - else: - super().__setattr__(name, val) - - -if __name__ == '__main__': - cfg = CfgNode.load_yaml_with_base('configs/updown_long.yml') - print(cfg) \ No newline at end of file diff --git a/spaces/mms-meta/MMS/uroman/lib/NLP/stringDistance.pm b/spaces/mms-meta/MMS/uroman/lib/NLP/stringDistance.pm deleted file mode 100644 index 17279d4a064c65fd6683a1b31d04831d93c5b416..0000000000000000000000000000000000000000 --- a/spaces/mms-meta/MMS/uroman/lib/NLP/stringDistance.pm +++ /dev/null @@ -1,724 +0,0 @@ -################################################################ -# # -# stringDistance # -# # -################################################################ - -package NLP::stringDistance; - -use List::Util qw(min max); -$utf8 = NLP::UTF8; -$util = NLP::utilities; -$romanizer = NLP::Romanizer; - -%dummy_ht = (); - -sub rule_string_expansion { - local($this, *ht, $s, $lang_code) = @_; - - my @characters = $utf8->split_into_utf8_characters($s, "return only chars, return trailing whitespaces", *dummy_ht); - foreach $sub_len ((0 .. ($#characters-1))) { - my $sub = join("", @characters[0 .. $sub_len]); - foreach $super_len ((($sub_len + 1) .. $#characters)) { - my $super = join("", @characters[0 .. $super_len]); - # print STDERR " $sub -> $super\n" unless $ht{RULE_STRING_EXPANSION}->{$lang_code}->{$sub}->{$super}; - $ht{RULE_STRING_EXPANSION}->{$lang_code}->{$sub}->{$super} = 1; - $ht{RULE_STRING_HAS_EXPANSION}->{$lang_code}->{$sub} = 1; - # print STDERR " RULE_STRING_HAS_EXPANSION $lang_code $sub\n"; - } - } -} - -sub load_string_distance_data { - local($this, $filename, *ht, $verbose) = @_; - - $verbose = 0 unless defined($verbose); - open(IN,$filename) || die "Could not open $filename"; - my $line_number = 0; - my $n_cost_rules = 0; - while () { - $line_number++; - my $line = $_; - $line =~ s/^\xEF\xBB\xBF//; - $line =~ s/\s*$//; - next if $line =~ /^\s*(\#.*)?$/; - print STDERR "** Warning: line $line_number contains suspicious control character: $line\n" if $line =~ /[\x00-\x1F]/; - my $s1 = $util->slot_value_in_double_colon_del_list($line, "s1"); - my $s2 = $util->slot_value_in_double_colon_del_list($line, "s2"); - $s1 = $util->dequote_string($s1); # 'can\'t' => can't - $s2 = $util->dequote_string($s2); - my $cost = $util->slot_value_in_double_colon_del_list($line, "cost"); - if (($s1 eq "") && ($s2 eq "")) { - print STDERR "Ignoring bad line $line_number in $filename, because both s1 and s2 are empty strings\n"; - next; - } - unless ($cost =~ /^\d+(\.\d+)?$/) { - if ($cost eq "") { - print STDERR "Ignoring bad line $line_number in $filename, because of missing cost\n"; - } else { - print STDERR "Ignoring bad line $line_number in $filename, because of ill-formed cost $cost\n"; - } - next; - } - my $lang_code1_s = $util->slot_value_in_double_colon_del_list($line, "lc1"); - my $lang_code2_s = $util->slot_value_in_double_colon_del_list($line, "lc2"); - my @lang_codes_1 = ($lang_code1_s eq "") ? ("") : split(/,\s*/, $lang_code1_s); - my @lang_codes_2 = ($lang_code2_s eq "") ? 
("") : split(/,\s*/, $lang_code2_s); - my $left_context1 = $util->slot_value_in_double_colon_del_list($line, "left1"); - my $left_context2 = $util->slot_value_in_double_colon_del_list($line, "left2"); - my $right_context1 = $util->slot_value_in_double_colon_del_list($line, "right1"); - my $right_context2 = $util->slot_value_in_double_colon_del_list($line, "right2"); - my $bad_left = $util->slot_value_in_double_colon_del_list($line, "left"); - if ($bad_left) { - print STDERR "** Warning: slot '::left $bad_left' in line $line_number\n"; - next; - } - my $bad_right = $util->slot_value_in_double_colon_del_list($line, "right"); - if ($bad_right) { - print STDERR "** Warning: slot '::right $bad_right' in line $line_number\n"; - next; - } - my $in_lang_codes1 = $util->slot_value_in_double_colon_del_list($line, "in-lc1"); - my $in_lang_codes2 = $util->slot_value_in_double_colon_del_list($line, "in-lc2"); - my $out_lang_codes1 = $util->slot_value_in_double_colon_del_list($line, "out-lc1"); - my $out_lang_codes2 = $util->slot_value_in_double_colon_del_list($line, "out-lc2"); - if ($left_context1) { - if ($left_context1 =~ /^\/.*\/$/) { - $left_context1 =~ s/^\///; - $left_context1 =~ s/\/$//; - } else { - print STDERR "Ignoring unrecognized non-regular-express ::left1 $left_context1 in $line_number of $filename\n"; - $left_context1 = ""; - } - } - if ($left_context2) { - if ($left_context2 =~ /^\/.*\/$/) { - $left_context2 =~ s/^\///; - $left_context2 =~ s/\/$//; - } else { - $left_context2 = ""; - print STDERR "Ignoring unrecognized non-regular-express ::left2 $left_context2 in $line_number of $filename\n"; - } - } - if ($right_context1) { - unless ($right_context1 =~ /^(\[[^\[\]]*\])+$/) { - $right_context1 = ""; - print STDERR "Ignoring unrecognized right-context ::right1 $right_context1 in $line_number of $filename\n"; - } - } - if ($right_context2) { - unless ($right_context2 =~ /^(\[[^\[\]]*\])+$/) { - $right_context2 = ""; - print STDERR "Ignoring unrecognized right-context ::right2 $right_context2 in $line_number of $filename\n"; - } - } - foreach $lang_code1 (@lang_codes_1) { - foreach $lang_code2 (@lang_codes_2) { - $n_cost_rules++; - my $cost_rule_id = $n_cost_rules; - $ht{COST}->{$lang_code1}->{$lang_code2}->{$s1}->{$s2}->{$cost_rule_id} = $cost; - $ht{RULE_STRING}->{$lang_code1}->{$s1} = 1; - $ht{RULE_STRING}->{$lang_code2}->{$s2} = 1; - $ht{LEFT1}->{$cost_rule_id} = $left_context1; - $ht{LEFT2}->{$cost_rule_id} = $left_context2; - $ht{RIGHT1}->{$cost_rule_id} = $right_context1; - $ht{RIGHT2}->{$cost_rule_id} = $right_context2; - $ht{INLC1}->{$cost_rule_id} = $in_lang_codes1; - $ht{INLC2}->{$cost_rule_id} = $in_lang_codes2; - $ht{OUTLC1}->{$cost_rule_id} = $out_lang_codes1; - $ht{OUTLC2}->{$cost_rule_id} = $out_lang_codes2; - unless (($s1 eq $s2) - && ($lang_code1 eq $lang_code2) - && ($left_context1 eq $left_context2) - && ($right_context1 eq $right_context2) - && ($in_lang_codes1 eq $in_lang_codes2) - && ($out_lang_codes1 eq $out_lang_codes2)) { - $n_cost_rules++; - $cost_rule_id = $n_cost_rules; - $ht{COST}->{$lang_code2}->{$lang_code1}->{$s2}->{$s1}->{$cost_rule_id} = $cost; - $ht{LEFT1}->{$cost_rule_id} = $left_context2; - $ht{LEFT2}->{$cost_rule_id} = $left_context1; - $ht{RIGHT1}->{$cost_rule_id} = $right_context2; - $ht{RIGHT2}->{$cost_rule_id} = $right_context1; - $ht{INLC1}->{$cost_rule_id} = $in_lang_codes2; - $ht{INLC2}->{$cost_rule_id} = $in_lang_codes1; - $ht{OUTLC1}->{$cost_rule_id} = $out_lang_codes2; - $ht{OUTLC2}->{$cost_rule_id} = $out_lang_codes1; - # print STDERR " 
Flip rule in line $line: $line\n"; - } - $this->rule_string_expansion(*ht, $s1, $lang_code1); - $this->rule_string_expansion(*ht, $s2, $lang_code2); - } - } - } - close(IN); - print STDERR "Read in $n_cost_rules rules from $line_number lines in $filename\n" if $verbose; -} - -sub romanized_string_to_simple_chart { - local($this, $s, *chart_ht) = @_; - - my @characters = $utf8->split_into_utf8_characters($s, "return only chars, return trailing whitespaces", *dummy_ht); - $chart_ht{N_CHARS} = $#characters + 1; - $chart_ht{N_NODES} = 0; - foreach $i ((0 .. $#characters)) { - $romanizer->add_node($characters[$i], $i, ($i+1), *chart_ht, "", ""); - } -} - -sub linearize_chart_points { - local($this, *chart_ht, $chart_id, *sd_ht, $verbose) = @_; - - $verbose = 0 unless defined($verbose); - print STDERR "Linearize $chart_id\n" if $verbose; - my $current_chart_pos = 0; - my $current_linear_chart_pos = 0; - $sd_ht{POS2LINPOS}->{$chart_id}->{$current_chart_pos} = $current_linear_chart_pos; - $sd_ht{LINPOS2POS}->{$chart_id}->{$current_linear_chart_pos} = $current_chart_pos; - print STDERR " LINPOS2POS.$chart_id LIN: $current_linear_chart_pos POS: $current_chart_pos\n" if $verbose; - my @end_chart_positions = keys %{$chart_ht{NODES_ENDING_AT}}; - my $end_chart_pos = (@end_chart_positions) ? max(@end_chart_positions) : 0; - $sd_ht{MAXPOS}->{$chart_id} = $end_chart_pos; - print STDERR " Chart span: $current_chart_pos-$end_chart_pos\n" if $verbose; - while ($current_chart_pos < $end_chart_pos) { - my @node_ids = keys %{$chart_ht{NODES_STARTING_AT}->{$current_chart_pos}}; - foreach $node_id (@node_ids) { - my $roman_s = $chart_ht{NODE_ROMAN}->{$node_id}; - my @roman_chars = $utf8->split_into_utf8_characters($roman_s, "return only chars, return trailing whitespaces", *dummy_ht); - print STDERR " $current_chart_pos/$current_linear_chart_pos node: $node_id $roman_s (@roman_chars)\n" if $verbose; - if ($#roman_chars >= 1) { - foreach $i ((1 .. $#roman_chars)) { - $current_linear_chart_pos++; - $sd_ht{SPLITPOS2LINPOS}->{$chart_id}->{$current_chart_pos}->{$node_id}->{$i} = $current_linear_chart_pos; - $sd_ht{LINPOS2SPLITPOS}->{$chart_id}->{$current_linear_chart_pos}->{$current_chart_pos}->{$node_id}->{$i} = 1; - print STDERR " LINPOS2SPLITPOS.$chart_id LIN: $current_linear_chart_pos POS: $current_chart_pos NODE: $node_id I: $i\n" if $verbose; - } - } - } - $current_chart_pos++; - if ($util->member($current_chart_pos, @end_chart_positions)) { - $current_linear_chart_pos++; - $sd_ht{POS2LINPOS}->{$chart_id}->{$current_chart_pos} = $current_linear_chart_pos; - $sd_ht{LINPOS2POS}->{$chart_id}->{$current_linear_chart_pos} = $current_chart_pos; - print STDERR " LINPOS2POS.$chart_id LIN: $current_linear_chart_pos POS: $current_chart_pos\n" if $verbose; - } - } - $current_chart_pos = 0; - while ($current_chart_pos <= $end_chart_pos) { - my $current_linear_chart_pos = $sd_ht{POS2LINPOS}->{$chart_id}->{$current_chart_pos}; - $current_linear_chart_pos = "?" 
unless defined($current_linear_chart_pos); - my @node_ids = keys %{$chart_ht{NODES_STARTING_AT}->{$current_chart_pos}}; - # print STDERR " LINROM.$chart_id LIN: $current_linear_chart_pos POS: $current_chart_pos NODES: @node_ids\n" if $verbose; - foreach $node_id (@node_ids) { - my $end_pos = $chart_ht{NODE_END}->{$node_id}; - my $end_linpos = $sd_ht{POS2LINPOS}->{$chart_id}->{$end_pos}; - my $roman_s = $chart_ht{NODE_ROMAN}->{$node_id}; - my @roman_chars = $utf8->split_into_utf8_characters($roman_s, "return only chars, return trailing whitespaces", *dummy_ht); - print STDERR " LINROM.$chart_id LIN: $current_linear_chart_pos POS: $current_chart_pos NODE: $node_id CHARS: @roman_chars\n" if $verbose; - if (@roman_chars) { - foreach $i ((0 .. $#roman_chars)) { - my $from_linear_chart_pos - = (($i == 0) - ? $sd_ht{POS2LINPOS}->{$chart_id}->{$current_chart_pos} - : $sd_ht{SPLITPOS2LINPOS}->{$chart_id}->{$current_chart_pos}->{$node_id}->{$i}); - print STDERR " FROM.$chart_id I: $i POS: $current_chart_pos NODE: $node_id FROM: $from_linear_chart_pos\n" if $verbose; - my $to_linear_chart_pos - = (($i == $#roman_chars) - ? $end_linpos - : $sd_ht{SPLITPOS2LINPOS}->{$chart_id}->{$current_chart_pos}->{$node_id}->{($i+1)}); - print STDERR " TO.$chart_id I: $i POS: $current_chart_pos NODE: $node_id FROM: $to_linear_chart_pos\n" if $verbose; - my $roman_char = $roman_chars[$i]; - $sd_ht{LIN_IJ_ROMAN}->{$chart_id}->{$from_linear_chart_pos}->{$to_linear_chart_pos}->{$roman_char} = 1; - } - } else { - my $from_linear_chart_pos = $sd_ht{POS2LINPOS}->{$chart_id}->{$current_chart_pos}; - my $to_linear_chart_pos = $sd_ht{POS2LINPOS}->{$chart_id}->{($current_chart_pos+1)}; - # HHERE check this out - my $i = 1; - while (! (defined($to_linear_chart_pos))) { - $i++; - $to_linear_chart_pos = $sd_ht{POS2LINPOS}->{$chart_id}->{($current_chart_pos+$i)}; - } - if (defined($from_linear_chart_pos) && defined($to_linear_chart_pos)) { - $sd_ht{LIN_IJ_ROMAN}->{$chart_id}->{$from_linear_chart_pos}->{$to_linear_chart_pos}->{""} = 1 - } else { - print STDERR " UNDEF.$chart_id from: " - . ((defined($from_linear_chart_pos)) ? $from_linear_chart_pos : "?") - . " to: " - . ((defined($to_linear_chart_pos)) ? $to_linear_chart_pos : "?") - . 
"\n"; - } - } - } - $current_chart_pos++; - } - $sd_ht{MAXLINPOS}->{$chart_id} = $sd_ht{POS2LINPOS}->{$chart_id}->{$end_chart_pos}; -} - -sub expand_lin_ij_roman { - local($this, *sd_ht, $chart_id, $lang_code, *ht) = @_; - - foreach $start (sort { $a <=> $b } keys %{$sd_ht{LIN_IJ_ROMAN}->{$chart_id}}) { - foreach $end (sort { $a <=> $b } keys %{$sd_ht{LIN_IJ_ROMAN}->{$chart_id}->{$start}}) { - foreach $roman (sort keys %{$sd_ht{LIN_IJ_ROMAN}->{$chart_id}->{$start}->{$end}}) { - if ($ht{RULE_STRING_HAS_EXPANSION}->{$lang_code}->{$roman} - || $ht{RULE_STRING_HAS_EXPANSION}->{""}->{$roman}) { - $this->expand_lin_ij_roman_rec(*sd_ht, $chart_id, $start, $end, $roman, $lang_code, *ht); - } - } - } - } -} - -sub expand_lin_ij_roman_rec { - local($this, *sd_ht, $chart_id, $start, $end, $roman, $lang_code, *ht) = @_; - - # print STDERR " expand_lin_ij_roman_rec.$chart_id $start-$end $lang_code $roman\n"; - return unless $ht{RULE_STRING_HAS_EXPANSION}->{$lang_code}->{$roman} - || $ht{RULE_STRING_HAS_EXPANSION}->{""}->{$roman}; - foreach $new_end (keys %{$sd_ht{LIN_IJ_ROMAN}->{$chart_id}->{$end}}) { - foreach $next_roman (sort keys %{$sd_ht{LIN_IJ_ROMAN}->{$chart_id}->{$end}->{$new_end}}) { - my $exp_roman = join("", $roman, $next_roman); - if ($ht{RULE_STRING}->{$lang_code}->{$exp_roman} - || $ht{RULE_STRING}->{""}->{$exp_roman}) { - $sd_ht{LIN_IJ_ROMAN}->{$chart_id}->{$start}->{$new_end}->{$exp_roman} = 1; - # print STDERR " Expansion ($start-$new_end) $exp_roman\n"; - } - if ($ht{RULE_STRING_HAS_EXPANSION}->{$lang_code}->{$exp_roman} - || $ht{RULE_STRING_HAS_EXPANSION}->{""}->{$exp_roman}) { - $this->expand_lin_ij_roman_rec(*sd_ht, $chart_id, $start, $new_end, $exp_roman, $lang_code, *ht); - } - } - } -} - -sub trace_string_distance { - local($this, *sd_ht, $chart1_id, $chart2_id, $control, $line_number, $cost) = @_; - - my $chart_comb_id = join("/", $chart1_id, $chart2_id); - return "mismatch" if $sd_ht{MISMATCH}->{$chart_comb_id}; - my $chart1_end = $sd_ht{MAXLINPOS}->{$chart1_id}; - my $chart2_end = $sd_ht{MAXLINPOS}->{$chart2_id}; - my $verbose = ($control =~ /verbose/); - my $chunks_p = ($control =~ /chunks/); - my @traces = (); - my @s1_s = (); - my @s2_s = (); - my @e1_s = (); - my @e2_s = (); - my @r1_s = (); - my @r2_s = (); - my @ic_s = (); - - # print STDERR "trace_string_distance $chart1_id $chart2_id $line_number\n"; - while ($chart1_end || $chart2_end) { - my $incr_cost = $sd_ht{INCR_COST_IJ}->{$chart_comb_id}->{$chart1_end}->{$chart2_end}; - my $prec_i = $sd_ht{PREC_I}->{$chart_comb_id}->{$chart1_end}->{$chart2_end}; - my $prec_j = $sd_ht{PREC_J}->{$chart_comb_id}->{$chart1_end}->{$chart2_end}; - if ($incr_cost || $verbose || $chunks_p) { - my $roman1 = $sd_ht{ROMAN1}->{$chart_comb_id}->{$chart1_end}->{$chart2_end}; - my $roman2 = $sd_ht{ROMAN2}->{$chart_comb_id}->{$chart1_end}->{$chart2_end}; - if ($verbose) { - push(@traces, "$prec_i-$chart1_end/$prec_j-$chart2_end:$roman1/$roman2:$incr_cost"); - } else { - if (defined($roman1)) { - push(@traces, "$roman1/$roman2:$incr_cost"); - } else { - $print_prec_i = (defined($prec_i)) ? $prec_i : "?"; - $print_prec_j = (defined($prec_j)) ? 
$prec_j : "?"; - print STDERR " $prec_i-$chart1_end, $prec_j-$chart2_end\n"; - } - } - if ($chunks_p) { - push(@s1_s, $prec_i); - push(@s2_s, $prec_j); - push(@e1_s, $chart1_end); - push(@e2_s, $chart2_end); - push(@r1_s, $roman1); - push(@r2_s, $roman2); - push(@ic_s, $incr_cost); - } - } - $chart1_end = $prec_i; - $chart2_end = $prec_j; - } - if ($chunks_p) { - my $r1 = ""; - my $r2 = ""; - my $tc = 0; - my $in_chunk = 0; - foreach $i ((0 .. $#ic_s)) { - if ($ic_s[$i]) { - $r1 = $r1_s[$i] . $r1; - $r2 = $r2_s[$i] . $r2; - $tc += $ic_s[$i]; - $in_chunk = 1; - } elsif ($in_chunk) { - $chunk = "$r1/$r2/$tc"; - $chunk .= "*" if $cost > 5; - $sd_ht{N_COST_CHUNK}->{$chunk} = ($sd_ht{N_COST_CHUNK}->{$chunk} || 0) + 1; - $sd_ht{EX_COST_CHUNK}->{$chunk}->{$line_number} = 1; - $r1 = ""; - $r2 = ""; - $tc = 0; - $in_chunk = 0; - } - } - if ($in_chunk) { - $chunk = "$r1/$r2/$tc"; - $chunk .= "*" if $cost > 5; - $sd_ht{N_COST_CHUNK}->{$chunk} = ($sd_ht{N_COST_CHUNK}->{$chunk} || 0) + 1; - $sd_ht{EX_COST_CHUNK}->{$chunk}->{$line_number} = 1; - } - } else { - return join(" ", reverse @traces); - } -} - -sub right_context_match { - local($this, $right_context_rule, *sd_ht, $chart_id, $start_pos) = @_; - - return 1 if $right_context_rule eq ""; - if (($right_context_item, $right_context_rest) = ($right_context_rule =~ /^\[([^\[\]]*)\]*(.*)$/)) { - my $guarded_right_context_item = $right_context_item; - $guarded_right_context_item =~ s/\$/\\\$/g; - my @end_positions = keys %{$sd_ht{LIN_IJ_ROMAN}->{$chart_id}->{$start_pos}}; - return 1 if ($#end_positions == -1) - && (($right_context_item eq "") - || ($right_context_item =~ /\$/)); - foreach $end_pos (@end_positions) { - my @romans = keys %{$sd_ht{LIN_IJ_ROMAN}->{$chart_id}->{$start_pos}->{$end_pos}}; - foreach $roman (@romans) { - if ($roman =~ /^[$guarded_right_context_item]/) { - return $this->right_context_match($right_context_rest, *sd_ht, $chart_id, $end_pos); - } - } - } - } - return 0; -} - -sub string_distance { - local($this, *sd_ht, $chart1_id, $chart2_id, $lang_code1, $lang_code2, *ht, $control) = @_; - - my $verbose = ($control =~ /verbose/i); - my $chart_comb_id = join("/", $chart1_id, $chart2_id); - - my $chart1_end_pos = $sd_ht{MAXLINPOS}->{$chart1_id}; - my $chart2_end_pos = $sd_ht{MAXLINPOS}->{$chart2_id}; - print STDERR "string_distance.$chart_comb_id $chart1_end_pos/$chart2_end_pos\n" if $verbose; - $sd_ht{COST_IJ}->{$chart_comb_id}->{0}->{0} = 0; - $sd_ht{COMB_LEFT_ROMAN1}->{$chart_comb_id}->{0}->{0} = ""; - $sd_ht{COMB_LEFT_ROMAN2}->{$chart_comb_id}->{0}->{0} = ""; - # HHERE - foreach $chart1_start ((0 .. $chart1_end_pos)) { - # print STDERR " C1 $chart1_start- ($chart1_start .. $chart1_end_pos)\n"; - my $prev_further_expansion_possible = 0; - my @chart1_ends = sort { $a <=> $b } keys %{$sd_ht{LIN_IJ_ROMAN}->{$chart1_id}->{$chart1_start}}; - my $max_chart1_ends = (@chart1_ends) ? $chart1_ends[$#chart1_ends] : -1; - foreach $chart1_end (($chart1_start .. $chart1_end_pos)) { - my $further_expansion_possible = ($chart1_start == $chart1_end) - || defined($sd_ht{LINPOS2SPLITPOS}->{$chart1_id}->{$chart1_start}) - || ($chart1_end < $max_chart1_ends); - my @romans1 = (($chart1_start == $chart1_end) - ? 
("") - : (sort keys %{$sd_ht{LIN_IJ_ROMAN}->{$chart1_id}->{$chart1_start}->{$chart1_end}})); - if ($#romans1 == -1) { - $further_expansion_possible = 1 if $prev_further_expansion_possible; - } else { - $prev_further_expansion_possible = 0; - } - # print STDERR " C1 $chart1_start-$chart1_end romans1: @romans1 {$further_expansion_possible} *l*\n"; - foreach $roman1 (@romans1) { - # print STDERR " C1 $chart1_start-$chart1_end $roman1 {$further_expansion_possible} *?*\n"; - next unless $ht{RULE_STRING}->{$lang_code1}->{$roman1} - || $ht{RULE_STRING}->{""}->{$roman1}; - # print STDERR " C1 $chart1_start-$chart1_end $roman1 {$further_expansion_possible} ***\n"; - foreach $lang_code1o (($lang_code1, "")) { - foreach $lang_code2o (($lang_code2, "")) { - my @chart2_starts = (sort { $a <=> $b } keys %{$sd_ht{COST_IJ}->{$chart_comb_id}->{$chart1_start}}); - foreach $chart2_start (@chart2_starts) { - # print STDERR " C1 $chart1_start-$chart1_end $roman1 C2 $chart2_start- (@chart2_starts)\n"; - foreach $chart2_end (($chart2_start .. $chart2_end_pos)) { - print STDERR " C1 $chart1_start-$chart1_end $roman1 C2 $chart2_start-$chart2_end\n"; - my @romans2 = (($chart2_start == $chart2_end) - ? ("") - : (sort keys %{$sd_ht{LIN_IJ_ROMAN}->{$chart2_id}->{$chart2_start}->{$chart2_end}})); - foreach $roman2 (@romans2) { - if ($roman1 eq $roman2) { - print STDERR " C1 $chart1_start-$chart1_end $roman1 C2 $chart2_start-$chart2_end $roman2 (IDENTITY)\n"; - my $cost = 0; - my $preceding_cost = $sd_ht{COST_IJ}->{$chart_comb_id}->{$chart1_start}->{$chart2_start}; - my $combined_cost = $preceding_cost + $cost; - my $old_cost = $sd_ht{COST_IJ}->{$chart_comb_id}->{$chart1_end}->{$chart2_end}; - if ((! defined($old_cost)) || ($combined_cost < $old_cost)) { - $sd_ht{COST_IJ}->{$chart_comb_id}->{$chart1_end}->{$chart2_end} = $combined_cost; - push(@chart2_starts, $chart2_end) unless $util->member($chart2_end, @chart2_starts); - $sd_ht{PREC_I}->{$chart_comb_id}->{$chart1_end}->{$chart2_end} = $chart1_start; - $sd_ht{PREC_J}->{$chart_comb_id}->{$chart1_end}->{$chart2_end} = $chart2_start; - $sd_ht{ROMAN1}->{$chart_comb_id}->{$chart1_end}->{$chart2_end} = $roman1; - $sd_ht{ROMAN2}->{$chart_comb_id}->{$chart1_end}->{$chart2_end} = $roman2; - $sd_ht{COMB_LEFT_ROMAN1}->{$chart_comb_id}->{$chart1_end}->{$chart2_end} - = $sd_ht{COMB_LEFT_ROMAN1}->{$chart_comb_id}->{$chart1_start}->{$chart2_start} . $roman1; - $sd_ht{COMB_LEFT_ROMAN2}->{$chart_comb_id}->{$chart1_end}->{$chart2_end} - = $sd_ht{COMB_LEFT_ROMAN2}->{$chart_comb_id}->{$chart1_start}->{$chart2_start} . 
$roman2; - $comb_left_roman1 = $sd_ht{COMB_LEFT_ROMAN1}->{$chart_comb_id}->{$chart1_end}->{$chart2_end}; - $sd_ht{INCR_COST_IJ}->{$chart_comb_id}->{$chart1_end}->{$chart2_end} = $cost; - $sd_ht{COST_RULE}->{$chart_comb_id}->{$chart1_end}->{$chart2_end} = "IDENTITY"; - print STDERR " New cost $chart1_end/$chart2_end: $combined_cost (+$cost from $chart1_start/$chart2_start $roman1/$roman2)\n" if $verbose; - } - } else { - next unless $ht{RULE_STRING}->{$lang_code2o}->{$roman2}; - print STDERR " C1 $chart1_start-$chart1_end $roman1 C2 $chart2_start-$chart2_end $roman2\n"; - next unless defined($ht{COST}->{$lang_code1o}->{$lang_code2o}->{$roman1}->{$roman2}); - my @cost_rule_ids = keys %{$ht{COST}->{$lang_code1o}->{$lang_code2o}->{$roman1}->{$roman2}}; - foreach $cost_rule_id (@cost_rule_ids) { - ## check whether any context requirements are satisfied - # left context rules are regular expressions - my $left_context_rule1 = $ht{LEFT1}->{$cost_rule_id}; - if ($left_context_rule1) { - my $comb_left_roman1 = $sd_ht{COMB_LEFT_ROMAN1}->{$chart_comb_id}->{$chart1_start}->{$chart2_start}; - if (defined($comb_left_roman1)) { - next unless $comb_left_roman1 =~ /$left_context_rule1/; - } else { - print STDERR " No comb_left_roman1 value for $chart_comb_id $chart1_start,$chart2_start\n"; - } - } - my $left_context_rule2 = $ht{LEFT2}->{$cost_rule_id}; - if ($left_context_rule2) { - my $comb_left_roman2 = $sd_ht{COMB_LEFT_ROMAN2}->{$chart_comb_id}->{$chart1_start}->{$chart2_start}; - if (defined($comb_left_roman2)) { - next unless $comb_left_roman2 =~ /$left_context_rule2/; - } else { - print STDERR " No comb_left_roman2 value for $chart_comb_id $chart1_start,$chart2_start\n"; - } - } - my $right_context_rule1 = $ht{RIGHT1}->{$cost_rule_id}; - if ($right_context_rule1) { - my $match_p = $this->right_context_match($right_context_rule1, *sd_ht, $chart1_id, $chart1_end); - # print STDERR " Match?($right_context_rule1, 1, $chart1_end) = $match_p\n"; - next unless $match_p; - } - my $right_context_rule2 = $ht{RIGHT2}->{$cost_rule_id}; - if ($right_context_rule2) { - my $match_p = $this->right_context_match($right_context_rule2, *sd_ht, $chart2_id, $chart2_end); - # print STDERR " Match?($right_context_rule2, 2, $chart2_end) = $match_p\n"; - next unless $match_p; - } - my $cost = $ht{COST}->{$lang_code1o}->{$lang_code2o}->{$roman1}->{$roman2}->{$cost_rule_id}; - my $preceding_cost = $sd_ht{COST_IJ}->{$chart_comb_id}->{$chart1_start}->{$chart2_start}; - my $combined_cost = $preceding_cost + $cost; - my $old_cost = $sd_ht{COST_IJ}->{$chart_comb_id}->{$chart1_end}->{$chart2_end}; - if ((! defined($old_cost)) || ($combined_cost < $old_cost)) { - $sd_ht{COST_IJ}->{$chart_comb_id}->{$chart1_end}->{$chart2_end} = $combined_cost; - push(@chart2_starts, $chart2_end) unless $util->member($chart2_end, @chart2_starts); - $sd_ht{PREC_I}->{$chart_comb_id}->{$chart1_end}->{$chart2_end} = $chart1_start; - $sd_ht{PREC_J}->{$chart_comb_id}->{$chart1_end}->{$chart2_end} = $chart2_start; - $sd_ht{ROMAN1}->{$chart_comb_id}->{$chart1_end}->{$chart2_end} = $roman1; - $sd_ht{ROMAN2}->{$chart_comb_id}->{$chart1_end}->{$chart2_end} = $roman2; - $sd_ht{COMB_LEFT_ROMAN1}->{$chart_comb_id}->{$chart1_end}->{$chart2_end} - = $sd_ht{COMB_LEFT_ROMAN1}->{$chart_comb_id}->{$chart1_start}->{$chart2_start} . $roman1; - $sd_ht{COMB_LEFT_ROMAN2}->{$chart_comb_id}->{$chart1_end}->{$chart2_end} - = $sd_ht{COMB_LEFT_ROMAN2}->{$chart_comb_id}->{$chart1_start}->{$chart2_start} . 
$roman2; - $comb_left_roman1 = $sd_ht{COMB_LEFT_ROMAN1}->{$chart_comb_id}->{$chart1_end}->{$chart2_end}; - # print STDERR " Comb-left-roman1($chart_comb_id,$chart1_end,$chart2_end) = $comb_left_roman1\n"; - $sd_ht{INCR_COST_IJ}->{$chart_comb_id}->{$chart1_end}->{$chart2_end} = $cost; - $sd_ht{COST_RULE}->{$chart_comb_id}->{$chart1_end}->{$chart2_end} = $cost_rule_id; - print STDERR " New cost $chart1_end/$chart2_end: $combined_cost (+$cost from $chart1_start/$chart2_start $roman1/$roman2)\n" if $verbose; - } - } - } - } - } - } - } - } - $further_expansion_possible = 1 - if $ht{RULE_STRING_HAS_EXPANSION}->{$lang_code1}->{$roman1} - || $ht{RULE_STRING_HAS_EXPANSION}->{""}->{$roman1}; - # print STDERR " further_expansion_possible: $further_expansion_possible (lc: $lang_code1 r1: $roman1) ***\n"; - } - # print STDERR " last C1 $chart1_start-$chart1_end (@romans1)\n" unless $further_expansion_possible; - last unless $further_expansion_possible; - $prev_further_expansion_possible = 1 if $further_expansion_possible; - } - } - my $total_cost = $sd_ht{COST_IJ}->{$chart_comb_id}->{$chart1_end_pos}->{$chart2_end_pos}; - unless (defined($total_cost)) { - $total_cost = 99.9999; - $sd_ht{MISMATCH}->{$chart_comb_id} = 1; - } - return $total_cost; -} - -sub print_sd_ht { - local($this, *sd_ht, $chart1_id, $chart2_id, *OUT) = @_; - - print OUT "string-distance chart:\n"; - foreach $chart_id (($chart1_id, $chart2_id)) { - print OUT "SD chart $chart_id:\n"; - foreach $from_linear_chart_pos (sort { $a <=> $b } keys %{$sd_ht{LIN_IJ_ROMAN}->{$chart_id}}) { - foreach $to_linear_chart_pos (sort { $a <=> $b } keys %{$sd_ht{LIN_IJ_ROMAN}->{$chart_id}->{$from_linear_chart_pos}}) { - foreach $roman_char (sort keys %{$sd_ht{LIN_IJ_ROMAN}->{$chart_id}->{$from_linear_chart_pos}->{$to_linear_chart_pos}}) { - print OUT " Lnode($from_linear_chart_pos-$to_linear_chart_pos): $roman_char\n"; - } - } - } - } -} - -sub print_chart_ht { - local($this, *chart_ht, *OUT) = @_; - - print OUT "uroman chart:\n"; - foreach $start (sort { $a <=> $b } keys %{$chart_ht{NODES_STARTING_AT}}) { - foreach $end (sort { $a <=> $b } keys %{$chart_ht{NODES_STARTING_AND_ENDING_AT}->{$start}}) { - foreach $node_id (keys %{$chart_ht{NODES_STARTING_AND_ENDING_AT}->{$start}->{$end}}) { - $roman_s = $chart_ht{NODE_ROMAN}->{$node_id}; - print OUT " Node $node_id ($start-$end): $roman_s\n"; - } - } - } -} - -sub normalize_string { - local($this, $s) = @_; - -# $s =~ s/(\xE2\x80\x8C)//g; # delete zero width non-joiner - $s =~ s/(\xE2\x80[\x93-\x94])/-/g; # en-dash, em-dash - $s =~ s/([\x00-\x7F\xC0-\xFE][\x80-\xBF]*)\1+/$1$1/g; # shorten 3 or more occurrences of same character in a row to 2 - $s =~ s/[ \t]+/ /g; - - return $s; -} - -my $string_distance_chart_id = 0; -sub string_distance_by_chart { - local($this, $s1, $s2, $lang_code1, $lang_code2, *ht, *pinyin_ht, $control) = @_; - - $control = "" unless defined($control); - %sd_ht = (); - - $s1 = $this->normalize_string($s1); - my $lc_s1 = $utf8->extended_lower_case($s1); - $string_distance_chart_id++; - my $chart1_id = $string_distance_chart_id; - *chart_ht = $romanizer->romanize($lc_s1, $lang_code1, "", *ht, *pinyin_ht, 0, "return chart", $chart1_id); - $this->linearize_chart_points(*chart_ht, $chart1_id, *sd_ht); - $this->expand_lin_ij_roman(*sd_ht, $chart1_id, $lang_code1, *ht); - - $s2 = $this->normalize_string($s2); - my $lc_s2 = $utf8->extended_lower_case($s2); - $string_distance_chart_id++; - my $chart2_id = $string_distance_chart_id; - *chart_ht = $romanizer->romanize($lc_s2, $lang_code2, "", 
*ht, *pinyin_ht, 0, "return chart", $chart2_id); - $this->linearize_chart_points(*chart_ht, $chart2_id, *sd_ht); - $this->expand_lin_ij_roman(*sd_ht, $chart2_id, $lang_code2, *ht); - - my $cost = $this->string_distance(*sd_ht, $chart1_id, $chart2_id, $lang_code1, $lang_code2, *ht, $control); - return $cost; -} - -my $n_quick_romanized_string_distance = 0; -sub quick_romanized_string_distance_by_chart { - local($this, $s1, $s2, *ht, $control, $lang_code1, $lang_code2) = @_; - - # my $verbose = ($s1 eq "apit") && ($s2 eq "apet"); - # print STDERR "Start quick_romanized_string_distance_by_chart\n"; - $s1 = lc $s1; - $s2 = lc $s2; - $control = "" unless defined($control); - $lang_code1 = "" unless defined($lang_code1); - $lang_code2 = "" unless defined($lang_code2); - my $cache_p = ($control =~ /cache/); - my $total_cost; - if ($cache_p) { - $total_cost = $ht{CACHED_QRSD}->{$s1}->{$s2}; - if (defined($total_cost)) { - return $total_cost; - } - } - my @lang_codes1 = ($lang_code1 eq "") ? ("") : ($lang_code1, ""); - my @lang_codes2 = ($lang_code2 eq "") ? ("") : ($lang_code2, ""); - my $chart1_end_pos = length($s1); - my $chart2_end_pos = length($s2); - my %sd_ht = (); - $sd_ht{COST_IJ}->{0}->{0} = 0; - foreach $chart1_start ((0 .. $chart1_end_pos)) { - foreach $chart1_end (($chart1_start .. $chart1_end_pos)) { - my $substr1 = substr($s1, $chart1_start, ($chart1_end-$chart1_start)); - foreach $lang_code1o (@lang_codes1) { - foreach $lang_code2o (@lang_codes2) { - # next unless defined($ht{COST}->{$lang_code1o}->{$lang_code2o}->{$substr1}); - } - } - my @chart2_starts = (sort { $a <=> $b } keys %{$sd_ht{COST_IJ}->{$chart1_start}}); - foreach $chart2_start (@chart2_starts) { - foreach $chart2_end (($chart2_start .. $chart2_end_pos)) { - my $substr2 = substr($s2, $chart2_start, ($chart2_end-$chart2_start)); - foreach $lang_code1o (@lang_codes1) { - foreach $lang_code2o (@lang_codes2) { - if ($substr1 eq $substr2) { - my $cost = 0; - my $preceding_cost = $sd_ht{COST_IJ}->{$chart1_start}->{$chart2_start}; - if (defined($preceding_cost)) { - my $combined_cost = $preceding_cost + $cost; - my $old_cost = $sd_ht{COST_IJ}->{$chart1_end}->{$chart2_end}; - if ((! defined($old_cost)) || ($combined_cost < $old_cost)) { - $sd_ht{COST_IJ}->{$chart1_end}->{$chart2_end} = $combined_cost; - push(@chart2_starts, $chart2_end) unless $util->member($chart2_end, @chart2_starts); - } - } - } else { - next unless defined($ht{COST}->{$lang_code1o}->{$lang_code2o}->{$substr1}->{$substr2}); - my @cost_rule_ids = keys %{$ht{COST}->{$lang_code1o}->{$lang_code2o}->{$substr1}->{$substr2}}; - my $best_cost = 99.99; - foreach $cost_rule_id (@cost_rule_ids) { - my $cost = $ht{COST}->{$lang_code1o}->{$lang_code2o}->{$substr1}->{$substr2}->{$cost_rule_id}; - my $left_context_rule1 = $ht{LEFT1}->{$cost_rule_id}; - next if $left_context_rule1 - && (! (substr($s1, 0, $chart1_start) =~ /$left_context_rule1/)); - my $left_context_rule2 = $ht{LEFT2}->{$cost_rule_id}; - next if $left_context_rule2 - && (! (substr($s2, 0, $chart2_start) =~ /$left_context_rule2/)); - my $right_context_rule1 = $ht{RIGHT1}->{$cost_rule_id}; - my $right_context1 = substr($s1, $chart1_end); - next if $right_context_rule1 - && (! (($right_context1 =~ /^$right_context_rule1/) - || (($right_context_rule1 =~ /^\[[^\[\]]*\$/) - && ($right_context1 eq "")))); - my $right_context_rule2 = $ht{RIGHT2}->{$cost_rule_id}; - my $right_context2 = substr($s2, $chart2_end); - next if $right_context_rule2 - && (! 
(($right_context2 =~ /^$right_context_rule2/) - || (($right_context_rule2 =~ /^\[[^\[\]]*\$/) - && ($right_context2 eq "")))); - $best_cost = $cost if $cost < $best_cost; - my $preceding_cost = $sd_ht{COST_IJ}->{$chart1_start}->{$chart2_start}; - my $combined_cost = $preceding_cost + $cost; - my $old_cost = $sd_ht{COST_IJ}->{$chart1_end}->{$chart2_end}; - if ((! defined($old_cost)) || ($combined_cost < $old_cost)) { - $sd_ht{COST_IJ}->{$chart1_end}->{$chart2_end} = $combined_cost; - push(@chart2_starts, $chart2_end) unless $util->member($chart2_end, @chart2_starts); - } - } - } - } - } - } - } - } - } - $total_cost = $sd_ht{COST_IJ}->{$chart1_end_pos}->{$chart2_end_pos}; - $total_cost = 99.99 unless defined($total_cost); - $ht{CACHED_QRSD}->{$s1}->{$s2} = $total_cost if $cache_p; - $n_quick_romanized_string_distance++; - return $total_cost; -} - -sub get_n_quick_romanized_string_distance { - return $n_quick_romanized_string_distance; -} - -1; - diff --git a/spaces/monra/freegpt-webui-chimera/client/js/icons.js b/spaces/monra/freegpt-webui-chimera/client/js/icons.js deleted file mode 100644 index 84fed38dd35e0d0203370a8314a360d27f350dd6..0000000000000000000000000000000000000000 --- a/spaces/monra/freegpt-webui-chimera/client/js/icons.js +++ /dev/null @@ -1 +0,0 @@ -window.FontAwesomeKitConfig={asyncLoading:{enabled:!1},autoA11y:{enabled:!0},baseUrl:"https://ka-f.fontawesome.com",baseUrlKit:"https://kit-pro.fontawesome.com",detectConflictsUntil:null,iconUploads:{},id:96462084,license:"pro",method:"css",minify:{enabled:!0},token:"d0514f1901",v4FontFaceShim:{enabled:!0},v4shim:{enabled:!0},v5FontFaceShim:{enabled:!0},version:"6.1.1"},function(t){"function"==typeof define&&define.amd?define("kit-loader",t):t()}(function(){"use strict";function t(e){return(t="function"==typeof Symbol&&"symbol"==typeof Symbol.iterator?function(t){return typeof t}:function(t){return t&&"function"==typeof Symbol&&t.constructor===Symbol&&t!==Symbol.prototype?"symbol":typeof t})(e)}function e(t,e,n){return e in t?Object.defineProperty(t,e,{value:n,enumerable:!0,configurable:!0,writable:!0}):t[e]=n,t}function n(t,e){var n=Object.keys(t);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(t);e&&(o=o.filter(function(e){return Object.getOwnPropertyDescriptor(t,e).enumerable})),n.push.apply(n,o)}return n}function o(t){for(var o=1;ot.length)&&(e=t.length);for(var n=0,o=new Array(e);n2&&void 0!==arguments[2]?arguments[2]:function(){},r=e.document||r,i=a.bind(a,r,["fa","fab","fas","far","fal","fad","fak"]),u=Object.keys(t.iconUploads||{}).length>0;t.autoA11y.enabled&&n(i);var f=[{id:"fa-main",addOn:void 0}];t.v4shim&&t.v4shim.enabled&&f.push({id:"fa-v4-shims",addOn:"-v4-shims"}),t.v5FontFaceShim&&t.v5FontFaceShim.enabled&&f.push({id:"fa-v5-font-face",addOn:"-v5-font-face"}),t.v4FontFaceShim&&t.v4FontFaceShim.enabled&&f.push({id:"fa-v4-font-face",addOn:"-v4-font-face"}),u&&f.push({id:"fa-kit-upload",customCss:!0});var s=f.map(function(n){return new F(function(r,i){E(n.customCss?function(t){return t.baseUrlKit+"/"+t.token+"/"+t.id+"/kit-upload.css"}(t):c(t,{addOn:n.addOn,minify:t.minify.enabled}),e).then(function(i){r(function(t,e){var n=e.contentFilter||function(t,e){return t},o=document.createElement("style"),r=document.createTextNode(n(t,e));return 
o.appendChild(r),o.media="all",e.id&&o.setAttribute("id",e.id),e&&e.detectingConflicts&&e.detectionIgnoreAttr&&o.setAttributeNode(document.createAttribute(e.detectionIgnoreAttr)),o}(i,o(o({},e),{},{baseUrl:t.baseUrl,version:t.version,id:n.id,contentFilter:function(t,e){return _(t,e.baseUrl,e.version)}})))}).catch(i)})});return F.all(s)}function P(t,e){var n=document.createElement("SCRIPT"),o=document.createTextNode(t);return n.appendChild(o),n.referrerPolicy="strict-origin",e.id&&n.setAttribute("id",e.id),e&&e.detectingConflicts&&e.detectionIgnoreAttr&&n.setAttributeNode(document.createAttribute(e.detectionIgnoreAttr)),n}function U(t){var e,n=[],o=document,r=(o.documentElement.doScroll?/^loaded|^c/:/^loaded|^i|^c/).test(o.readyState);r||o.addEventListener("DOMContentLoaded",e=function(){for(o.removeEventListener("DOMContentLoaded",e),r=1;e=n.shift();)e()}),r?setTimeout(t,0):n.push(t)}try{if(window.FontAwesomeKitConfig){var k=window.FontAwesomeKitConfig,L={detectingConflicts:k.detectConflictsUntil&&new Date<=new Date(k.detectConflictsUntil),detectionIgnoreAttr:"data-fa-detection-ignore",fetch:window.fetch,token:k.token,XMLHttpRequest:window.XMLHttpRequest,document:document},I=document.currentScript,T=I?I.parentElement:document.head;(function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},e=arguments.length>1&&void 0!==arguments[1]?arguments[1]:{};return"js"===t.method?function(t,e){e.autoA11y=t.autoA11y.enabled,"pro"===t.license&&(e.autoFetchSvg=!0,e.fetchSvgFrom=t.baseUrl+"/releases/"+("latest"===t.version?"latest":"v".concat(t.version))+"/svgs",e.fetchUploadedSvgFrom=t.uploadsUrl);var n=[];return t.v4shim.enabled&&n.push(new F(function(n,r){E(c(t,{addOn:"-v4-shims",minify:t.minify.enabled}),e).then(function(t){n(P(t,o(o({},e),{},{id:"fa-v4-shims"})))}).catch(r)})),n.push(new F(function(n,r){E(c(t,{minify:t.minify.enabled}),e).then(function(t){var r=P(t,o(o({},e),{},{id:"fa-main"}));n(function(t,e){var n=e&&void 0!==e.autoFetchSvg?e.autoFetchSvg:void 0,o=e&&void 0!==e.autoA11y?e.autoA11y:void 0;return void 0!==o&&t.setAttribute("data-auto-a11y",o?"true":"false"),n&&(t.setAttributeNode(document.createAttribute("data-auto-fetch-svg")),t.setAttribute("data-fetch-svg-from",e.fetchSvgFrom),t.setAttribute("data-fetch-uploaded-svg-from",e.fetchUploadedSvgFrom)),t}(r,e))}).catch(r)})),F.all(n)}(t,e):"css"===t.method?C(t,e,function(t){U(t),function(t){"undefined"!=typeof MutationObserver&&new MutationObserver(t).observe(document,{childList:!0,subtree:!0})}(t)}):void 0})(k,L).then(function(t){t.map(function(t){try{T.insertBefore(t,I?I.nextSibling:null)}catch(e){T.appendChild(t)}}),L.detectingConflicts&&I&&U(function(){I.setAttributeNode(document.createAttribute(L.detectionIgnoreAttr));var t=function(t,e){var n=document.createElement("script");return e&&e.detectionIgnoreAttr&&n.setAttributeNode(document.createAttribute(e.detectionIgnoreAttr)),n.src=c(t,{baseFilename:"conflict-detection",fileSuffix:"js",subdir:"js",minify:t.minify.enabled}),n}(k,L);document.body.appendChild(t)})}).catch(function(t){console.error("".concat("Font Awesome Kit:"," ").concat(t))})}}catch(t){console.error("".concat("Font Awesome Kit:"," ").concat(t))}}); \ No newline at end of file diff --git a/spaces/monra/freegpt-webui/server/bp.py b/spaces/monra/freegpt-webui/server/bp.py deleted file mode 100644 index 61d416797039dababd9e8222b4fc910ef65c40b9..0000000000000000000000000000000000000000 --- a/spaces/monra/freegpt-webui/server/bp.py +++ /dev/null @@ -1,6 +0,0 @@ -from flask import Blueprint - -bp = 
Blueprint('bp', __name__, - template_folder='./../client/html', - static_folder='./../client', - static_url_path='assets') diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/composite.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/composite.py deleted file mode 100644 index a5366d62434a4400ba9cc524f4286f99f733d121..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/optim/composite.py +++ /dev/null @@ -1,188 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from collections import defaultdict -from dataclasses import dataclass, field -from typing import Dict, Any, List, Optional - -import torch.optim -from fairseq.dataclass import FairseqDataclass -from fairseq.optim import FairseqOptimizer, register_optimizer, _build_optimizer -from fairseq.optim.lr_scheduler import FairseqLRScheduler, build_lr_scheduler -from omegaconf import II, open_dict - - -logger = logging.getLogger(__name__) - - -@dataclass -class OptimizerAndSchedulerConfig(FairseqDataclass): - optimizer: Any = None - lr_scheduler: Optional[Any] = None - lr: List = II("optimization.lr") - lr_float: Optional[float] = None # this makes it easier to sweep on learning rate with auto sweepers - - -@dataclass -class CompositeOptimizerConfig(FairseqDataclass): - groups: Dict[str, Any] = field( - default_factory=lambda: {}, - metadata={ - "help": "optimizer name -> optimizer OptimizerAndSchedulerConfig. " - "Configures a different optimizer and (optionally) lr scheduler for each parameter group" - }, - ) - - -@register_optimizer("composite", dataclass=CompositeOptimizerConfig) -class FairseqCompositeOptimizer(FairseqOptimizer): - - optimizers: Dict[str, FairseqOptimizer] = {} - lr_schedulers: Dict[str, FairseqLRScheduler] = {} - lr_scheduler: FairseqLRScheduler = None - _optimizer: torch.optim.Optimizer - - def __init__(self, cfg: CompositeOptimizerConfig, params): - super().__init__(cfg) - - assert ( - len(params) > 1 - ), "Composite optimizer only works when there are multiple parameter groups (try fp16_no_flatten_grads: true)" - - groupped_params = defaultdict(list) - for p in params: - group = getattr(p, "param_group", "default") - groupped_params[group].append(p) - - assert groupped_params.keys() == cfg.groups.keys(), ( - f"Parameter groups {groupped_params.keys()} and optimizer groups {cfg.groups.keys()} are not the same! " - "Try setting 'param_group' on your parameters in the model." - ) - - for group, group_params in groupped_params.items(): - group_cfg = cfg.groups[group] - with open_dict(group_cfg): - if group_cfg.lr_float is not None: - group_cfg.optimizer.lr = [group_cfg.lr_float] - group_cfg.lr_scheduler.lr = [group_cfg.lr_float] - else: - group_cfg.optimizer.lr = group_cfg.lr - group_cfg.lr_scheduler.lr = group_cfg.lr - self.optimizers[group] = _build_optimizer(group_cfg.optimizer, group_params) - if group_cfg.lr_scheduler is not None: - self.lr_schedulers[group] = build_lr_scheduler( - group_cfg.lr_scheduler, self.optimizers[group] - ) - - if len(self.lr_schedulers) > 0: - assert len(self.lr_schedulers) == len(self.optimizers), ( - f"Please provide an lr scheduler for each optimizer to use pass_through scheduler. 
" - f"Optimizers: {self.optimizers}; Lr scheds: {self.lr_schedulers}" - ) - self.lr_scheduler = CompositeLRScheduler(self.lr_schedulers) - - self._optimizer = CompositeOptimizer(self.optimizers) - - @property - def supports_groups(self): - return True - - @property - def param_groups(self): - for opt in self.optimizers.values(): - for group in opt.param_groups: - yield group - - def get_lr(self): - """Return the current learning rate.""" - k = ( - "default" - if "default" in self.optimizers - else next(iter(self.optimizers.keys())) - ) - return self.optimizers[k].param_groups[0]["lr"] - - def state_dict(self): - """Return the LR scheduler state dict.""" - return {k: s.state_dict() for k, s in self.optimizers.items()} - - def load_state_dict(self, state_dict, optimizer_overrides=None): - """Load an LR scheduler state dict.""" - for k, state in state_dict.items(): - if k not in self.optimizers: - # skip extra keys like "loss_scale" added by fp16 optimizer - continue - - overrides = ( - optimizer_overrides[k] - if isinstance(optimizer_overrides, dict) and k in optimizer_overrides - else None - ) - self.optimizers[k].load_state_dict(state, optimizer_overrides=overrides) - - -class CompositeOptimizer(torch.optim.Optimizer): - def __init__(self, optimizers: Dict[str, FairseqOptimizer]): - self.optimizers = optimizers - - @property - def supports_memory_efficient_fp16(self): - return all(o.supports_memory_efficient_fp16 for o in self.optimizers.values()) - - @property - def supports_flat_params(self): - return all(o.supports_flat_params for o in self.optimizers.values()) - - def step(self, closure=None, groups=None): - """Performs a single optimization step. - - Args: - closure (callable, optional): A closure that reevaluates the model - and returns the loss. 
- """ - loss = None - if closure is not None: - loss = closure() - - for k, opt in self.optimizers.items(): - if groups is None or k in groups: - opt.step() - - return loss - - def zero_grad(self): - for opt in self.optimizers.values(): - opt.zero_grad() - - -class CompositeLRScheduler(FairseqLRScheduler): - def __init__(self, lr_schedulers): - super().__init__(None, None) - - self.lr_schedulers = lr_schedulers - - def state_dict(self): - """Return the LR scheduler state dict.""" - return {k: s.state_dict() for k, s in self.lr_schedulers.items()} - - def load_state_dict(self, state_dict): - """Load an LR scheduler state dict.""" - for k, state in state_dict.items(): - self.lr_schedulers[k].load_state_dict(state) - - def step_begin_epoch(self, epoch): - """Update the learning rate at the beginning of the given epoch.""" - for s in self.lr_schedulers.values(): - s.step_begin_epoch(epoch) - - def step(self, epoch, val_loss=None): - """Update the learning rate at the end of the given epoch.""" - for s in self.lr_schedulers.values(): - s.step(epoch) - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - return {k: s.step_update(num_updates) for k, s in self.lr_schedulers.items()} diff --git a/spaces/myscale/object-detection-safari/box_utils.py b/spaces/myscale/object-detection-safari/box_utils.py deleted file mode 100644 index 610bca915bc40319bc04e89c2d9b6ce1d9294279..0000000000000000000000000000000000000000 --- a/spaces/myscale/object-detection-safari/box_utils.py +++ /dev/null @@ -1,165 +0,0 @@ -import numpy as np - - -def cxywh2xywh(cx, cy, w, h): - """CxCyWH format to XYWH format conversion""" - x = cx - w / 2 - y = cy - h / 2 - return x, y, w, h - - -def cxywh2ltrb(cx, cy, w, h): - """CxCyWH format to LeftRightTopBottom format""" - l = cx - w / 2 - t = cy - h / 2 - r = cx + w / 2 - b = cy + h / 2 - return l, t, r, b - - -def iou(ba, bb): - """Calculate Intersection-Over-Union - - Args: - ba (tuple): CxCyWH format with score - bb (tuple): CxCyWH format with score - - Returns: - IoU with size of length of given box - """ - a_l, a_t, a_r, a_b, sa = ba - b_l, b_t, b_r, b_b, sb = bb - - x1 = np.maximum(a_l, b_l) - y1 = np.maximum(a_t, b_t) - x2 = np.minimum(a_r, b_r) - y2 = np.minimum(a_b, b_b) - w = np.maximum(0, x2 - x1) - h = np.maximum(0, y2 - y1) - intersec = w * h - iou = (intersec) / (sa + sb - intersec) - return iou.squeeze() - - -def nms(cx, cy, w, h, s, iou_thresh=0.3): - """Bounding box Non-maximum Suppression - - Args: - cx, cy, w, h, s: CxCyWH Format with score boxes - iou_thresh (float, optional): IoU threshold. Defaults to 0.3. 
- - Returns: - res: indexes of the selected boxes - """ - l, t, r, b = cxywh2ltrb(cx, cy, w, h) - areas = w * h - res = [] - sort_ind = np.argsort(s, axis=-1)[::-1] - while sort_ind.shape[0] > 0: - i = sort_ind[0] - res.append(i) - - _iou = iou( - (l[i], t[i], r[i], b[i], areas[i]), - ( - l[sort_ind[1:]], - t[sort_ind[1:]], - r[sort_ind[1:]], - b[sort_ind[1:]], - areas[sort_ind[1:]], - ), - ) - sel_ind = np.where(_iou <= iou_thresh)[0] - sort_ind = sort_ind[sel_ind + 1] - return res - - -def filter_nonpos(boxes, agnostic_ratio=0.5, class_ratio=0.7): - """filter out insignificant boxes - - Args: - boxes (list of records): returned query to be filtered - """ - ret = [] - labelwise = {} - for b in boxes: - _id, cx, cy, w, h, label, logit, is_selected = b[:8] - if label not in labelwise: - labelwise[label] = [] - labelwise[label].append(logit) - labelwise = {l: max(s) for l, s in labelwise.items()} - agnostic = max([v for _, v in labelwise.items()]) - for b in boxes: - _id, cx, cy, w, h, label, logit, is_selected = b[:8] - if logit > class_ratio * labelwise[label] and logit > agnostic_ratio * agnostic: - ret.append(b) - return ret - - -def postprocess(matches, prompt_labels, img_matches=None, agnostic_ratio=0.4, class_ratio=0.7): - meta = [] - boxes_w_img = [] - matches_ = {m["img_id"]: m for m in matches} - if img_matches is not None: - img_matches_ = {m["img_id"]: m for m in img_matches} - for k in matches_.keys(): - m = matches_[k] - boxes = [] - boxes += list( - map( - list, - zip( - m["box_id"], - m["cx"], - m["cy"], - m["w"], - m["h"], - [prompt_labels[int(l)] for l in m["label"]], - m["logit"], - [1] * len(m["box_id"]), - ), - ) - ) - if img_matches is not None and k in img_matches_: - img_m = img_matches_[k] - # and also those non-TopK hits and those non-topk are not anticipating training - boxes += [ - i - for i in map( - list, - zip( - img_m["box_id"], - img_m["cx"], - img_m["cy"], - img_m["w"], - img_m["h"], - [prompt_labels[int(l)] for l in img_m["label"]], - img_m["logit"], - [0] * len(img_m["box_id"]), - ), - ) - if i[0] not in [b[0] for b in boxes] - ] - else: - img_m = None - # update record metadata after query - for b in boxes: - meta.append(b[0]) - - # remove some non-significant boxes - boxes = filter_nonpos(boxes, agnostic_ratio=agnostic_ratio, class_ratio=class_ratio) - - # doing non-maximum suppression - cx, cy, w, h, s = list( - map(lambda x: np.array(x), list(zip(*[(*b[1:5], b[6]) for b in boxes]))) - ) - ind = nms(cx, cy, w, h, s, 0.3) - boxes = [boxes[i] for i in ind] - if img_m is not None: - img_score = ( - img_m["img_score"] if img_matches is not None else m["img_score"] - ) - boxes_w_img.append( - (m["img_id"], m["img_url"], m["img_w"], m["img_h"], img_score, boxes) - ) - return boxes_w_img, meta \ No newline at end of file diff --git a/spaces/nateraw/lavila/docs/PRETRAIN.md b/spaces/nateraw/lavila/docs/PRETRAIN.md deleted file mode 100644 index ebd5d9fbce7b49c372f6c55528564d5813aa0fc1..0000000000000000000000000000000000000000 --- a/spaces/nateraw/lavila/docs/PRETRAIN.md +++ /dev/null @@ -1,125 +0,0 @@ -# LAVILA Pretraining - -In this doc, we provide a step-by-step guide (with commands) to train LaViLa. -Note that we recommend running the following job with four 8x V100 (32GB) nodes (or eight nodes for the larger backbone) using [submitit](https://github.com/facebookincubator/submitit). -See how to install submitit at [here](./MODEL_ZOO.md#multi-node-training). 
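-If you have not used submitit before, the gist is that it pickles a Python function and submits it as a SLURM job. A generic sketch is shown below; the partition name, log folder, and resource numbers are placeholders, and this is not the repo's `run_with_submitit_pretrain.py` launcher.
-
-```python
-import submitit
-
-def train():
-    # Placeholder for the actual training entry point.
-    print("training...")
-
-executor = submitit.AutoExecutor(folder="submitit_logs")
-executor.update_parameters(nodes=4, gpus_per_node=8, timeout_min=72 * 60, slurm_partition="your_partition")
-job = executor.submit(train)
-print(job.job_id)
-```
-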
- - -## Pre-training Dual-Encoder Baseline - -We first pre-train a dual-encoder baseline with human annotations on Ego4D clips. -The goal is (1) to establish a comparable baseline for LAVILA, and (2) to provide a video encoder for the narrator (see below). -We use a default batch size of 32 per GPU so that the total batch size for the InfoNCE loss is `32*8*4=1024`. - -
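-The global batch size matters here because InfoNCE contrasts each video-text pair against every other pair in the batch, so more GPUs and nodes directly mean more negatives per step. The snippet below is only a toy illustration of that idea (random embeddings, symmetric InfoNCE, an assumed temperature), not the loss implementation used in this repo.
-
-```python
-import torch
-import torch.nn.functional as F
-
-# Toy global batch of paired video/text embeddings (1024 = 32 per GPU x 8 GPUs x 4 nodes).
-video_emb = F.normalize(torch.randn(1024, 256), dim=-1)
-text_emb = F.normalize(torch.randn(1024, 256), dim=-1)
-
-logits = video_emb @ text_emb.t() / 0.07   # cosine similarities scaled by a temperature (0.07 is just an example)
-targets = torch.arange(logits.size(0))     # the i-th video matches the i-th text
-loss = (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
-print(loss.item())
-```
-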
        Train a baseline dual-encoder (with TSF-B) - -```bash -python run_with_submitit_pretrain.py --model CLIP_OPENAI_TIMESFORMER_BASE \ - --norm-embed --freeze-temperature \ - --fix-lr --contrastive-use-vissl \ - --nodes 4 --use_volta32 -``` -
- -To fit a High-Resolution TimeSformer-Large with a sufficient batch size, we use [DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert), a memory-efficient text encoder, instead of the original CLIP text encoder. Additionally, we apply [gradient checkpointing](https://pytorch.org/docs/stable/checkpoint.html) and the [Zero Redundancy Optimizer (ZeRO)](https://arxiv.org/abs/1910.02054). - -
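-For reference, the sketch below shows roughly what these two memory-saving techniques look like in plain PyTorch; it is a generic illustration of the public APIs, not the wrappers or flags used in this codebase.
-
-```python
-import torch
-import torch.nn as nn
-from torch.utils.checkpoint import checkpoint
-from torch.distributed.optim import ZeroRedundancyOptimizer
-
-blocks = nn.ModuleList([nn.Linear(512, 512) for _ in range(4)])
-
-def forward_with_checkpointing(x):
-    # Gradient checkpointing: activations inside each block are recomputed during backward instead of stored.
-    for block in blocks:
-        x = checkpoint(block, x, use_reentrant=False)
-    return x
-
-out = forward_with_checkpointing(torch.randn(8, 512, requires_grad=True))
-
-# ZeRO-style optimizer: optimizer states are sharded across data-parallel ranks.
-# It needs an initialized process group (e.g. launched via torchrun); otherwise fall back to a plain optimizer.
-if torch.distributed.is_available() and torch.distributed.is_initialized():
-    optimizer = ZeroRedundancyOptimizer(blocks.parameters(), optimizer_class=torch.optim.AdamW, lr=1e-4)
-else:
-    optimizer = torch.optim.AdamW(blocks.parameters(), lr=1e-4)
-```
-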
        Train a baseline dual-encoder (with TSF-L@HR) - -```bash -python run_with_submitit_pretrain.py --model CLIP_OPENAI_TIMESFORMER_LARGE_336PX_DISTILBERT_BASE \ - --batch-size 8 \ - --use-checkpoint --use-zero \ - --norm-embed --freeze-temperature \ - --fix-lr --contrastive-use-vissl \ - --nodes 8 --use_volta32 -``` -
- -## Training and Evaluating Narrator - -The narrator is a *visually conditioned* large language model (VCLM), which comprises a pre-trained video encoder (obtained above), a text decoder (GPT-2 family), and a few gated cross-attention modules that attend to visual information while captioning. Both the video encoder and the text decoder are kept frozen, while the cross-attention modules are learnable. - -Note that we turn off PyTorch's automatic mixed precision (AMP) while training the narrator; we observe that training is unstable if AMP is on. - -Also note that `$PATH` can be found in the `Vis. Encoder` column of [MODEL_ZOO.md#Narrator](./MODEL_ZOO.md#narrator). If you are using your own checkpoint (e.g., one pre-trained in the previous step), please make sure that the following keys have been dropped from the checkpoint: `epoch`, `optimizer`, and `scaler`. - -
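-If you are preparing such a checkpoint yourself, a minimal sketch of dropping those keys is shown below (the paths are only examples; point them at your own files).
-
-```python
-import torch
-
-# Load the pre-trained dual-encoder checkpoint on CPU.
-ckpt = torch.load("modelzoo/clip_openai_timesformer_base.baseline.ep_0003.pth", map_location="cpu")
-
-# Remove the training-state entries so that only the model weights (and any remaining metadata) are kept.
-for key in ("epoch", "optimizer", "scaler"):
-    ckpt.pop(key, None)
-
-torch.save(ckpt, "modelzoo/clip_openai_timesformer_base.baseline.stripped.pth")
-```
-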
        Train a baseline narrator (TSF-B as visual encoder and GPT-2 base as textual decoder) - -```bash -python run_with_submitit_pretrain.py \ - --model VCLM_OPENAI_TIMESFORMER_BASE_GPT2 \ - --gated-xattn --freeze-lm-vclm --freeze-visual-vclm --freeze-visual-vclm-temporal \ - --fix-lr --batch-size 8 --clip-grad-value 1.0 --eval-freq 1 --disable-amp \ - --nodes 4 --use_volta32 --resume $PATH # Eg. $PATH can be "modelzoo/clip_openai_timesformer_base.baseline.ep_0003.pth" -``` - -
        - -
        Train a strong narrator (TSF-L@HR as visual encoder and GPT-2 XL as textual decoder) - -```bash -python run_with_submitit_pretrain.py \ - --model VCLM_OPENAI_TIMESFORMER_LARGE_336PX_GPT2_XL \ - --gated-xattn --freeze-lm-vclm --freeze-visual-vclm --freeze-visual-vclm-temporal --use-checkpoint \ - --fix-lr --batch-size 8 --clip-grad-value 1.0 --eval-freq 1 --disable-amp \ - --nodes 4 --use_volta32 --resume $PATH # Eg. $PATH can be "modelzoo/clip_openai_timesformer_large_336px_distilbert_base.baseline.ep_0003.pth" -``` -
        - -
Evaluate the narrator on Ego4D val split - -```bash -# --eval-freq 10000 evaluates on a 1/10000 subset of the Ego4D val split for fast evaluation. -torchrun --nproc_per_node=1 eval_narrator.py \ - --caption-top-p 0.95 --caption-temperature 0.7 \ - --eval-freq 10000 \ - --resume $VCLM_CHECKPOINT -``` -This will output some common NLG metrics, such as BLEU-x, METEOR, ROUGE-L, and CIDEr (using the human narrations as the ground truth). -
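-As a rough illustration of what these captioning metrics measure (this is not the code inside `eval_narrator.py`), a sentence-level BLEU score for a single generated narration can be computed with NLTK:
-
-```python
-from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
-
-reference = "c opens the fridge".split()              # tokenized human narration (example)
-candidate = "c opens the refrigerator door".split()   # tokenized narrator output (example)
-
-# BLEU-4 with smoothing, scoring the generated caption against one reference.
-score = sentence_bleu([reference], candidate, smoothing_function=SmoothingFunction().method1)
-print(f"BLEU: {score:.3f}")
-```
-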
        - -## Narrating video clips using LAVILA-Narrator - - -
        Infer the narrator - -```bash -python run_with_submitit_infer_narrator.py \ - --metadata datasets/Ego4D/ego4d_train.pkl \ - --batch-size 64 \ - --resume $PATH --use-half \ - --nodes 4 --use_volta32 -``` -
- -It will generate a pickle file (`$output_dir/total.pkl`) which is a list of quintuples: `(video_uid: str, start_time: float, end_time: float, narration_list: List[str], NLL_list: List[float])`. - -For narrator-generated narrations on Ego4D ground-truth clips, we also provide a [replica](https://dl.fbaipublicfiles.com/lavila/metadata/ego4d/ego4d_train.narrator_63690737.return_10.pkl). Note that the narrator used here is our best-performing one. - -## Rephrasing human narrations using LAVILA-Rephraser - -Rephraser is a standard LLM that can paraphrase narrations in existing clips. -Specifically, we use an off-the-shelf T5-based paraphraser that is publicly available at [Hugging Face's model hub](https://huggingface.co/ramsrigouthamg/t5-large-paraphraser-diverse-high-quality). -For more details, please refer to the [model card](https://huggingface.co/ramsrigouthamg/t5-large-paraphraser-diverse-high-quality). - -For rephrased human narrations on Ego4D ground-truth clips, we provide a [replica](https://dl.fbaipublicfiles.com/lavila/metadata/ego4d/ego4d_train.rephraser.no_punkt_top3.pkl). - - -## Pre-training LAVILA Dual-Encoder -Now we are ready to pre-train LAVILA's dual-encoder by combining human annotations (augmented by Rephraser) and the Narrator-generated narrations. - -
        Training a LaViLa dual-encoder - -```bash -python run_with_submitit_pretrain.py --model CLIP_OPENAI_TIMESFORMER_BASE \ - --metadata datasets/Ego4D/ego4d_train.rephraser.no_punkt_top3.pkl \ - --metadata-aux datasets/Ego4D/ego4d_train.narrator_63690737.return_10.pkl \ - --norm-embed --freeze-temperature \ - --freeze-pseudo-temperature \ - --fix-lr --contrastive-use-vissl \ - --nodes 4 --use_volta32 -``` -
        - -## Down-stream Evaluation -With the pre-trained dual-encoder at hand, we now can do zero-shot or fine-tuning evalution evaluations on down-stream benchmarks. -Please refer to [MODEL_ZOO.md](./MODEL_ZOO.md#zero-shot) for more details. diff --git a/spaces/nateraw/pictionary/README.md b/spaces/nateraw/pictionary/README.md deleted file mode 100644 index d944b18afc412ed91b7c0275739a5e56a69af944..0000000000000000000000000000000000000000 --- a/spaces/nateraw/pictionary/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Pictionary -emoji: 📊 -colorFrom: gray -colorTo: indigo -sdk: gradio -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/neel692/NSFW-VS-SFW-Image-Classification/app.py b/spaces/neel692/NSFW-VS-SFW-Image-Classification/app.py deleted file mode 100644 index 07ee7ae7971c500eb6e3a08bc68b28e90e0195e8..0000000000000000000000000000000000000000 --- a/spaces/neel692/NSFW-VS-SFW-Image-Classification/app.py +++ /dev/null @@ -1,79 +0,0 @@ -import gradio as gr -import numpy as np -import urllib -import cv2 -from tensorflow.keras.preprocessing import image -from tensorflow.keras.models import load_model - - -# Load the pre-trained face detection model -face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml") - -# Load the pre-trained model -model = load_model('my_model.h5') -def classify_image(img): - img_copy = img - height, width = img_copy.shape[0], img_copy.shape[1] - - img_copy = cv2.resize(img_copy, (500, 500)) - # Convert the image to grayscale - gray = cv2.cvtColor(img_copy, cv2.COLOR_BGR2GRAY) - - # Detect faces in the image - faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30)) - - # Check if any faces were detected - if len(faces) > 0: - #print("Human face detected in the image!") - face_area_list = [] - # Draw rectangles around the detected faces - for (x, y, w, h) in faces: - cv2.rectangle(img_copy, (x, y), (x+w, y+h), (0, 255, 0), 2) - area = w * h - face_area_list.append(area) - #print(sorted(face_area_list)) - big_face_area = sorted(face_area_list)[-1] - img_area = img_copy.shape[0] * img_copy.shape[1] - perc_area = (big_face_area/img_area)*100 - if perc_area>7: - img = image.img_to_array(img) - img = np.expand_dims(img, axis=0) - img /= 255.0 - # Use the model to make a prediction - prediction = model.predict(img)[0] - # Map the predicted class to a label - dic = {'NSFW': float(prediction[1]), 'CART': float(prediction[0]),'SFW':float(prediction[2])} - else : - dic = {'CART': float(0),'SFW': float(0), 'NSFW': float(1)} - - - else: - dic = {'CART': float(0),'SFW': float(0), 'NSFW': float(1)} - perc_area = "could not detected face" - #print("No human face detected in the image.") - - return [dic, perc_area, img_copy] - -def classify_url(url): - # Load the image from the URL - response = urllib.request.urlopen(url) - img = image.load_img(response, target_size=(224, 224)) - - return classify_image(img) - - -# Define the GRADIO output interface -examples = [f"example{i}.jpg" for i in range(1,9)] - -# Define the GRADIO output interfaces -output_interfaces = [ - gr.outputs.Label(num_top_classes=3), - gr.outputs.Textbox(label="% Area of the largest face in image"), - gr.outputs.Image(type="pil", label="Detected Faces") -] -# Define the GRADIO app -app = gr.Interface(classify_image, gr.Image(shape=(224, 224)), outputs=output_interfaces, allow_flagging="never", examples = examples,title="NSFW/SFW Classifier") - -# 
Start the GRADIO app -app.launch() - diff --git a/spaces/neigui/White-box-Cartoonization/README.md b/spaces/neigui/White-box-Cartoonization/README.md deleted file mode 100644 index 9860239cf42c94e385faaaa75a85311e010d64f7..0000000000000000000000000000000000000000 --- a/spaces/neigui/White-box-Cartoonization/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -python_version: 3.7 -title: White Box Cartoonization -emoji: 📚 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: hylee/White-box-Cartoonization ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Command And Conquer Generals 2 Free Download Full __TOP__ Version Mac.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Command And Conquer Generals 2 Free Download Full __TOP__ Version Mac.md deleted file mode 100644 index dcde0161d11b334a52e032772cefd9a83dbf8186..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Command And Conquer Generals 2 Free Download Full __TOP__ Version Mac.md +++ /dev/null @@ -1,18 +0,0 @@ -
        -

        How to Download Command And Conquer Generals 2 for Free on Mac

        -

        Command And Conquer Generals 2 is a real-time strategy game that was supposed to be released by EA in 2013, but was cancelled and transformed into a free-to-play online game as part of the Command And Conquer series[^1^]. However, the online game was also cancelled in 2014, leaving fans of the franchise disappointed and without a sequel to the original Generals game. Fortunately, there is a way to play Generals 2 on Mac, thanks to a fan-made mod that uses the engine and assets of C&C: Generals Zero Hour, the expansion pack for the first Generals game.

        -

        Command And Conquer Generals 2 Free Download Full Version Mac


        Downloadhttps://urlcod.com/2uIaWa



        -

        In this article, we will show you how to download and install Command And Conquer Generals 2 for free on Mac, using the Generals 2 mod for C&C: Generals Zero Hour. You will need to have C&C: Generals Zero Hour installed on your Mac before you can use the mod. You can buy the game from Aspyr's website[^2^] or from other online stores. The mod is compatible with OS X 10.9.5 or later and requires at least 4.4 GB of free disk space.

        -

        Step 1: Download and install the Generals 2 mod v1.55 version

        -

        The first step is to download and install the Generals 2 mod v1.55 version, which is the base version of the mod that contains most of the features and content. You can download it from ModDB[^3^], a website that hosts mods for various games. The file size is about 1.6 GB and it is a zip archive that you need to extract after downloading.

        -

        To install the mod, you need to copy and paste the contents of the extracted folder into your C&C: Generals Zero Hour folder, which is usually located at /Applications/Command & Conquer Generals Deluxe Edition/Command & Conquer Generals Zero Hour.app/Contents/GameData/. You may need to right-click on the app and select Show Package Contents to access the GameData folder. Make sure you overwrite any existing files when prompted.

        -

        Step 2: Download and extract the Generals 2 mod v1.551 patch

        -

        The second step is to download and extract the Generals 2 mod v1.551 patch, which is a small update that fixes some bugs and adds some improvements to the mod. You can also download it from ModDB[^3^], where it is listed as a patch for the v1.55 version. The file size is about 12 MB and it is also a zip archive that you need to extract after downloading.

        -

        To apply the patch, you need to copy and paste the contents of the extracted folder into your C&C: Generals Zero Hour folder, just like you did with the base version of the mod. Again, make sure you overwrite any existing files when prompted.

        -

        Step 3: Run General 2 MOD

        -

        The final step is to run General 2 MOD, which is an executable file that launches the modded version of C&C: Generals Zero Hour with all the features and content of Generals 2. You can find it in your C&C: Generals Zero Hour folder, where it should be named General 2 MOD.exe. You may need to use a program like Wine or CrossOver to run it on Mac, as it is a Windows executable file.

        -

        -

        Once you run General 2 MOD, you should see a new menu screen with options to play single-player or multiplayer modes, as well as settings and credits. You can choose from three factions: USA, China, or GLA, each with their own units, buildings, and abilities. You can also play on various maps inspired by real-world locations or create your own maps with the map editor.

        -

        Congratulations! You have successfully downloaded and installed Command And Conquer Generals 2 for free on Mac. Enjoy playing this fan-made sequel to one of the most popular RTS games of all time!

        81aa517590
        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Dussehra Full WORK Hindi Movie Hd 1080p.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Dussehra Full WORK Hindi Movie Hd 1080p.md deleted file mode 100644 index 8768ac7a0fcae0a58f31a6f5e028e733486dea32..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Dussehra Full WORK Hindi Movie Hd 1080p.md +++ /dev/null @@ -1,12 +0,0 @@ -
        -

        Dussehra: A Thrilling Action Movie Starring Nani

        -

        Dussehra is a 2023 Indian Telugu-language action thriller film directed by Srikanth Addala and starring Nani, Sai Pallavi, and Krithi Shetty. The film revolves around a young lawyer who gets involved in a murder case and has to fight against a powerful politician and his goons. The film was dubbed in Hindi and released on YouTube by Ind Fact.

        -

        Dussehra Full Hindi Movie Hd 1080p


        Download File >>>>> https://urlcod.com/2uIbHP



        -

        The film received positive reviews from critics and audiences alike, who praised Nani's performance, the screenplay, the action sequences, and the music. The film was also a commercial success, grossing over ₹100 crore worldwide. The film was nominated for several awards, including Best Actor for Nani, Best Director for Srikanth Addala, and Best Film at the Filmfare Awards South.

        -

        If you are looking for a thrilling and entertaining movie to watch this weekend, you can download Dussehra full HD 1080p movie from SoundCloud[^2^] [^3^] or watch it online on YouTube[^1^]. You will not regret watching this action-packed movie that will keep you on the edge of your seat.

        Dussehra is not just a typical action movie, but also a social commentary on the corruption and injustice in the Indian society. The film exposes the nexus between politicians, police, media, and criminals, and how they manipulate the system to their advantage. The film also shows how a common man can fight back and seek justice, even if it means risking his life.

        -

        The film has some memorable dialogues and scenes that will stay with you long after the movie ends. For example, the scene where Nani confronts the villain and says "Dussehra is not just a festival, but a day of victory over evil. Today, I will celebrate Dussehra by killing you." The film also has some emotional moments, such as the scene where Nani meets his mother after a long time and hugs her. The film also has some humorous moments, such as the scene where Nani disguises himself as a woman to escape from the goons.

        -

        The film has a stellar cast that delivers excellent performances. Nani is the highlight of the film, as he portrays the role of a lawyer who turns into a vigilante with ease and conviction. He shows his versatility and charisma in every scene. Sai Pallavi is also impressive as Nani's love interest and a journalist who helps him in his mission. She has a good chemistry with Nani and adds charm to the film. Krithi Shetty is also good as Nani's sister and a doctor who gets kidnapped by the villain. She has a strong screen presence and makes an impact in her debut film.

        The film also has a brilliant technical team that enhances the quality of the film. The cinematography by Ravi K. Chandran is stunning and captures the mood and tone of the film. The editing by Marthand K. Venkatesh is crisp and smooth, keeping the pace of the film tight. The music by Mickey J. Meyer is catchy and melodious, and suits the theme of the film. The songs are well picturized and choreographed, adding to the entertainment value of the film. The background score by Thaman S. is also effective and elevates the scenes.

        -

        Dussehra is a must-watch movie for all the fans of action and thriller genres. It is a film that will keep you hooked from start to finish, and leave you satisfied and inspired. It is a film that celebrates the spirit of Dussehra, the victory of good over evil. It is a film that you will not forget anytime soon.

        -

        7196e7f11a
        -
        -
        \ No newline at end of file diff --git a/spaces/noelshin/selfmask/app.py b/spaces/noelshin/selfmask/app.py deleted file mode 100644 index 7c4599d5b011331d60c97750b3c87225d9d8c193..0000000000000000000000000000000000000000 --- a/spaces/noelshin/selfmask/app.py +++ /dev/null @@ -1,120 +0,0 @@ -from argparse import ArgumentParser, Namespace -from typing import Dict, List, Tuple -import codecs -import yaml -import numpy as np -import cv2 -from PIL import Image -import torch -import torch.nn.functional as F -from torchvision.transforms.functional import to_tensor, normalize, resize -import gradio as gr -from utils import get_model -from bilateral_solver import bilateral_solver_output -import os -os.environ['KMP_DUPLICATE_LIB_OK'] = 'True' - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -state_dict: dict = torch.hub.load_state_dict_from_url( - "https://www.robots.ox.ac.uk/~vgg/research/selfmask/shared_files/selfmask_nq20.pt", - map_location=device # "cuda" if torch.cuda.is_available() else "cpu" -) - -parser = ArgumentParser("SelfMask demo") -parser.add_argument( - "--config", - type=str, - default="duts-dino-k234-nq20-224-swav-mocov2-dino-p16-sr10100.yaml" -) - -args: Namespace = parser.parse_args() -base_args = yaml.safe_load(open(f"{args.config}", 'r')) -base_args.pop("dataset_name") -args: dict = vars(args) -args.update(base_args) -args: Namespace = Namespace(**args) - -model = get_model(arch="maskformer", configs=args).to(device) -model.load_state_dict(state_dict) -model.eval() - -size: int = 384 -max_size: int = 512 -mean: Tuple[float, float, float] = (0.485, 0.456, 0.406) -std: Tuple[float, float, float] = (0.229, 0.224, 0.225) - - -@torch.no_grad() -def main(image: Image): - pil_image: Image.Image = resize(image, size=size, max_size=max_size) - image: torch.Tensor = normalize(to_tensor(pil_image), mean=list(mean), std=list(std)) # 3 x H x W - dict_outputs = model(image[None].to(device)) - - batch_pred_masks: torch.Tensor = dict_outputs["mask_pred"] # [0, 1] - batch_objectness: torch.Tensor = dict_outputs.get("objectness", None) # [0, 1] - - if len(batch_pred_masks.shape) == 5: - # b x n_layers x n_queries x h x w -> b x n_queries x h x w - batch_pred_masks = batch_pred_masks[:, -1, ...] # extract the output from the last decoder layer - - if batch_objectness is not None: - # b x n_layers x n_queries x 1 -> b x n_queries x 1 - batch_objectness = batch_objectness[:, -1, ...] 
- - # resize prediction to original resolution - # note: upsampling by 4 and cutting the padded region allows for a better result - H, W = image.shape[-2:] - batch_pred_masks = F.interpolate( - batch_pred_masks, scale_factor=4, mode="bilinear", align_corners=False - )[..., :H, :W] - - # iterate over batch dimension - for batch_index, pred_masks in enumerate(batch_pred_masks): - # n_queries x 1 -> n_queries - objectness: torch.Tensor = batch_objectness[batch_index].squeeze(dim=-1) - ranks = torch.argsort(objectness, descending=True) # n_queries - pred_mask: torch.Tensor = pred_masks[ranks[0]] # H x W - pred_mask: np.ndarray = (pred_mask > 0.5).cpu().numpy().astype(np.uint8) * 255 - - pred_mask_bi, _ = bilateral_solver_output(img=pil_image, target=pred_mask) # float64 - pred_mask_bi: np.ndarray = np.clip(pred_mask_bi, 0, 255).astype(np.uint8) - - attn_map = cv2.cvtColor(cv2.applyColorMap(pred_mask_bi, cv2.COLORMAP_VIRIDIS), cv2.COLOR_BGR2RGB) - super_imposed_img = cv2.addWeighted(attn_map, 0.5, np.array(pil_image), 0.5, 0) - return super_imposed_img - # return pred_mask_bi - - -demo = gr.Interface( - fn=main, - inputs=gr.inputs.Image(type="pil", source="upload", tool="editor"), - outputs=gr.outputs.Image(type="numpy", label="saliency map"), # "image", - examples=[f"resources/{fname}.jpg" for fname in [ - "0053", - "0236", - "0239", - "0403", - "0412", - "ILSVRC2012_test_00005309", - "ILSVRC2012_test_00012622", - "ILSVRC2012_test_00022698", - "ILSVRC2012_test_00040725", - "ILSVRC2012_test_00075738", - "ILSVRC2012_test_00080683", - "ILSVRC2012_test_00085874", - "im052", - "sun_ainjbonxmervsvpv", - "sun_alfntqzssslakmss", - "sun_amnrcxhisjfrliwa", - "sun_bvyxpvkouzlfwwod" - ]], - examples_per_page=20, - description=codecs.open("description.html", 'r', "utf-8").read(), - title="Unsupervised Salient Object Detection with Spectral Cluster Voting", - allow_flagging="never", - analytics_enabled=False -) - -demo.launch( - # share=True -) \ No newline at end of file diff --git a/spaces/nyanko7/sd-diffusers-webui/app.py b/spaces/nyanko7/sd-diffusers-webui/app.py deleted file mode 100644 index 2719df3d916a7f923c48898091a951208c08aefb..0000000000000000000000000000000000000000 --- a/spaces/nyanko7/sd-diffusers-webui/app.py +++ /dev/null @@ -1,878 +0,0 @@ -import random -import tempfile -import time -import gradio as gr -import numpy as np -import torch -import math -import re - -from gradio import inputs -from diffusers import ( - AutoencoderKL, - DDIMScheduler, - UNet2DConditionModel, -) -from modules.model import ( - CrossAttnProcessor, - StableDiffusionPipeline, -) -from torchvision import transforms -from transformers import CLIPTokenizer, CLIPTextModel -from PIL import Image -from pathlib import Path -from safetensors.torch import load_file -import modules.safe as _ -from modules.lora import LoRANetwork - -models = [ - ("AbyssOrangeMix2", "Korakoe/AbyssOrangeMix2-HF", 2), - ("Pastal Mix", "JamesFlare/pastel-mix", 2), - ("Basil Mix", "nuigurumi/basil_mix", 2) -] - -keep_vram = ["Korakoe/AbyssOrangeMix2-HF", "andite/pastel-mix"] -base_name, base_model, clip_skip = models[0] - -samplers_k_diffusion = [ - ("Euler a", "sample_euler_ancestral", {}), - ("Euler", "sample_euler", {}), - ("LMS", "sample_lms", {}), - ("Heun", "sample_heun", {}), - ("DPM2", "sample_dpm_2", {"discard_next_to_last_sigma": True}), - ("DPM2 a", "sample_dpm_2_ancestral", {"discard_next_to_last_sigma": True}), - ("DPM++ 2S a", "sample_dpmpp_2s_ancestral", {}), - ("DPM++ 2M", "sample_dpmpp_2m", {}), - ("DPM++ SDE", "sample_dpmpp_sde", 
{}), - ("LMS Karras", "sample_lms", {"scheduler": "karras"}), - ("DPM2 Karras", "sample_dpm_2", {"scheduler": "karras", "discard_next_to_last_sigma": True}), - ("DPM2 a Karras", "sample_dpm_2_ancestral", {"scheduler": "karras", "discard_next_to_last_sigma": True}), - ("DPM++ 2S a Karras", "sample_dpmpp_2s_ancestral", {"scheduler": "karras"}), - ("DPM++ 2M Karras", "sample_dpmpp_2m", {"scheduler": "karras"}), - ("DPM++ SDE Karras", "sample_dpmpp_sde", {"scheduler": "karras"}), -] - -# samplers_diffusers = [ -# ("DDIMScheduler", "diffusers.schedulers.DDIMScheduler", {}) -# ("DDPMScheduler", "diffusers.schedulers.DDPMScheduler", {}) -# ("DEISMultistepScheduler", "diffusers.schedulers.DEISMultistepScheduler", {}) -# ] - -start_time = time.time() -timeout = 90 - -scheduler = DDIMScheduler.from_pretrained( - base_model, - subfolder="scheduler", -) -vae = AutoencoderKL.from_pretrained( - "stabilityai/sd-vae-ft-ema", - torch_dtype=torch.float16 -) -text_encoder = CLIPTextModel.from_pretrained( - base_model, - subfolder="text_encoder", - torch_dtype=torch.float16, -) -tokenizer = CLIPTokenizer.from_pretrained( - base_model, - subfolder="tokenizer", - torch_dtype=torch.float16, -) -unet = UNet2DConditionModel.from_pretrained( - base_model, - subfolder="unet", - torch_dtype=torch.float16, -) -pipe = StableDiffusionPipeline( - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - vae=vae, - scheduler=scheduler, -) - -unet.set_attn_processor(CrossAttnProcessor) -pipe.setup_text_encoder(clip_skip, text_encoder) -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - -def get_model_list(): - return models - -te_cache = { - base_model: text_encoder -} - -unet_cache = { - base_model: unet -} - -lora_cache = { - base_model: LoRANetwork(text_encoder, unet) -} - -te_base_weight_length = text_encoder.get_input_embeddings().weight.data.shape[0] -original_prepare_for_tokenization = tokenizer.prepare_for_tokenization -current_model = base_model - -def setup_model(name, lora_state=None, lora_scale=1.0): - global pipe, current_model - - keys = [k[0] for k in models] - model = models[keys.index(name)][1] - if model not in unet_cache: - unet = UNet2DConditionModel.from_pretrained(model, subfolder="unet", torch_dtype=torch.float16) - text_encoder = CLIPTextModel.from_pretrained(model, subfolder="text_encoder", torch_dtype=torch.float16) - - unet_cache[model] = unet - te_cache[model] = text_encoder - lora_cache[model] = LoRANetwork(text_encoder, unet) - - if current_model != model: - if current_model not in keep_vram: - # offload current model - unet_cache[current_model].to("cpu") - te_cache[current_model].to("cpu") - lora_cache[current_model].to("cpu") - current_model = model - - local_te, local_unet, local_lora, = te_cache[model], unet_cache[model], lora_cache[model] - local_unet.set_attn_processor(CrossAttnProcessor()) - local_lora.reset() - clip_skip = models[keys.index(name)][2] - - if torch.cuda.is_available(): - local_unet.to("cuda") - local_te.to("cuda") - - if lora_state is not None and lora_state != "": - local_lora.load(lora_state, lora_scale) - local_lora.to(local_unet.device, dtype=local_unet.dtype) - - pipe.text_encoder, pipe.unet = local_te, local_unet - pipe.setup_unet(local_unet) - pipe.tokenizer.prepare_for_tokenization = original_prepare_for_tokenization - pipe.tokenizer.added_tokens_encoder = {} - pipe.tokenizer.added_tokens_decoder = {} - pipe.setup_text_encoder(clip_skip, local_te) - return pipe - - -def error_str(error, title="Error"): - return ( - f"""#### {title} - {error}""" - if 
error - else "" - ) - -def make_token_names(embs): - all_tokens = [] - for name, vec in embs.items(): - tokens = [f'emb-{name}-{i}' for i in range(len(vec))] - all_tokens.append(tokens) - return all_tokens - -def setup_tokenizer(tokenizer, embs): - reg_match = [re.compile(fr"(?:^|(?<=\s|,)){k}(?=,|\s|$)") for k in embs.keys()] - clip_keywords = [' '.join(s) for s in make_token_names(embs)] - - def parse_prompt(prompt: str): - for m, v in zip(reg_match, clip_keywords): - prompt = m.sub(v, prompt) - return prompt - - def prepare_for_tokenization(self, text: str, is_split_into_words: bool = False, **kwargs): - text = parse_prompt(text) - r = original_prepare_for_tokenization(text, is_split_into_words, **kwargs) - return r - tokenizer.prepare_for_tokenization = prepare_for_tokenization.__get__(tokenizer, CLIPTokenizer) - return [t for sublist in make_token_names(embs) for t in sublist] - - -def convert_size(size_bytes): - if size_bytes == 0: - return "0B" - size_name = ("B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB") - i = int(math.floor(math.log(size_bytes, 1024))) - p = math.pow(1024, i) - s = round(size_bytes / p, 2) - return "%s %s" % (s, size_name[i]) - -def inference( - prompt, - guidance, - steps, - width=512, - height=512, - seed=0, - neg_prompt="", - state=None, - g_strength=0.4, - img_input=None, - i2i_scale=0.5, - hr_enabled=False, - hr_method="Latent", - hr_scale=1.5, - hr_denoise=0.8, - sampler="DPM++ 2M Karras", - embs=None, - model=None, - lora_state=None, - lora_scale=None, -): - if seed is None or seed == 0: - seed = random.randint(0, 2147483647) - - pipe = setup_model(model, lora_state, lora_scale) - generator = torch.Generator("cuda").manual_seed(int(seed)) - start_time = time.time() - - sampler_name, sampler_opt = None, None - for label, funcname, options in samplers_k_diffusion: - if label == sampler: - sampler_name, sampler_opt = funcname, options - - tokenizer, text_encoder = pipe.tokenizer, pipe.text_encoder - if embs is not None and len(embs) > 0: - ti_embs = {} - for name, file in embs.items(): - if str(file).endswith(".pt"): - loaded_learned_embeds = torch.load(file, map_location="cpu") - else: - loaded_learned_embeds = load_file(file, device="cpu") - loaded_learned_embeds = loaded_learned_embeds["string_to_param"]["*"] if "string_to_param" in loaded_learned_embeds else loaded_learned_embeds - ti_embs[name] = loaded_learned_embeds - - if len(ti_embs) > 0: - tokens = setup_tokenizer(tokenizer, ti_embs) - added_tokens = tokenizer.add_tokens(tokens) - delta_weight = torch.cat([val for val in ti_embs.values()], dim=0) - - assert added_tokens == delta_weight.shape[0] - text_encoder.resize_token_embeddings(len(tokenizer)) - token_embeds = text_encoder.get_input_embeddings().weight.data - token_embeds[-delta_weight.shape[0]:] = delta_weight - - config = { - "negative_prompt": neg_prompt, - "num_inference_steps": int(steps), - "guidance_scale": guidance, - "generator": generator, - "sampler_name": sampler_name, - "sampler_opt": sampler_opt, - "pww_state": state, - "pww_attn_weight": g_strength, - "start_time": start_time, - "timeout": timeout, - } - - if img_input is not None: - ratio = min(height / img_input.height, width / img_input.width) - img_input = img_input.resize( - (int(img_input.width * ratio), int(img_input.height * ratio)), Image.LANCZOS - ) - result = pipe.img2img(prompt, image=img_input, strength=i2i_scale, **config) - elif hr_enabled: - result = pipe.txt2img( - prompt, - width=width, - height=height, - upscale=True, - upscale_x=hr_scale, - 
upscale_denoising_strength=hr_denoise, - **config, - **latent_upscale_modes[hr_method], - ) - else: - result = pipe.txt2img(prompt, width=width, height=height, **config) - - end_time = time.time() - vram_free, vram_total = torch.cuda.mem_get_info() - print(f"done: model={model}, res={width}x{height}, step={steps}, time={round(end_time-start_time, 2)}s, vram_alloc={convert_size(vram_total-vram_free)}/{convert_size(vram_total)}") - return gr.Image.update(result[0][0], label=f"Initial Seed: {seed}") - - -color_list = [] - - -def get_color(n): - for _ in range(n - len(color_list)): - color_list.append(tuple(np.random.random(size=3) * 256)) - return color_list - - -def create_mixed_img(current, state, w=512, h=512): - w, h = int(w), int(h) - image_np = np.full([h, w, 4], 255) - if state is None: - state = {} - - colors = get_color(len(state)) - idx = 0 - - for key, item in state.items(): - if item["map"] is not None: - m = item["map"] < 255 - alpha = 150 - if current == key: - alpha = 200 - image_np[m] = colors[idx] + (alpha,) - idx += 1 - - return image_np - - -# width.change(apply_new_res, inputs=[width, height, global_stats], outputs=[global_stats, sp, rendered]) -def apply_new_res(w, h, state): - w, h = int(w), int(h) - - for key, item in state.items(): - if item["map"] is not None: - item["map"] = resize(item["map"], w, h) - - update_img = gr.Image.update(value=create_mixed_img("", state, w, h)) - return state, update_img - - -def detect_text(text, state, width, height): - - if text is None or text == "": - return None, None, gr.Radio.update(value=None), None - - t = text.split(",") - new_state = {} - - for item in t: - item = item.strip() - if item == "": - continue - if state is not None and item in state: - new_state[item] = { - "map": state[item]["map"], - "weight": state[item]["weight"], - "mask_outsides": state[item]["mask_outsides"], - } - else: - new_state[item] = { - "map": None, - "weight": 0.5, - "mask_outsides": False - } - update = gr.Radio.update(choices=[key for key in new_state.keys()], value=None) - update_img = gr.update(value=create_mixed_img("", new_state, width, height)) - update_sketch = gr.update(value=None, interactive=False) - return new_state, update_sketch, update, update_img - - -def resize(img, w, h): - trs = transforms.Compose( - [ - transforms.ToPILImage(), - transforms.Resize(min(h, w)), - transforms.CenterCrop((h, w)), - ] - ) - result = np.array(trs(img), dtype=np.uint8) - return result - - -def switch_canvas(entry, state, width, height): - if entry == None: - return None, 0.5, False, create_mixed_img("", state, width, height) - - return ( - gr.update(value=None, interactive=True), - gr.update(value=state[entry]["weight"] if entry in state else 0.5), - gr.update(value=state[entry]["mask_outsides"] if entry in state else False), - create_mixed_img(entry, state, width, height), - ) - - -def apply_canvas(selected, draw, state, w, h): - if selected in state: - w, h = int(w), int(h) - state[selected]["map"] = resize(draw, w, h) - return state, gr.Image.update(value=create_mixed_img(selected, state, w, h)) - - -def apply_weight(selected, weight, state): - if selected in state: - state[selected]["weight"] = weight - return state - - -def apply_option(selected, mask, state): - if selected in state: - state[selected]["mask_outsides"] = mask - return state - - -# sp2, radio, width, height, global_stats -def apply_image(image, selected, w, h, strgength, mask, state): - if selected in state: - state[selected] = { - "map": resize(image, w, h), - "weight": strgength, - 
"mask_outsides": mask - } - - return state, gr.Image.update(value=create_mixed_img(selected, state, w, h)) - - -# [ti_state, lora_state, ti_vals, lora_vals, uploads] -def add_net(files, ti_state, lora_state): - if files is None: - return ti_state, "", lora_state, None - - for file in files: - item = Path(file.name) - stripedname = str(item.stem).strip() - if item.suffix == ".pt": - state_dict = torch.load(file.name, map_location="cpu") - else: - state_dict = load_file(file.name, device="cpu") - if any("lora" in k for k in state_dict.keys()): - lora_state = file.name - else: - ti_state[stripedname] = file.name - - return ( - ti_state, - lora_state, - gr.Text.update(f"{[key for key in ti_state.keys()]}"), - gr.Text.update(f"{lora_state}"), - gr.Files.update(value=None), - ) - - -# [ti_state, lora_state, ti_vals, lora_vals, uploads] -def clean_states(ti_state, lora_state): - return ( - dict(), - None, - gr.Text.update(f""), - gr.Text.update(f""), - gr.File.update(value=None), - ) - - -latent_upscale_modes = { - "Latent": {"upscale_method": "bilinear", "upscale_antialias": False}, - "Latent (antialiased)": {"upscale_method": "bilinear", "upscale_antialias": True}, - "Latent (bicubic)": {"upscale_method": "bicubic", "upscale_antialias": False}, - "Latent (bicubic antialiased)": { - "upscale_method": "bicubic", - "upscale_antialias": True, - }, - "Latent (nearest)": {"upscale_method": "nearest", "upscale_antialias": False}, - "Latent (nearest-exact)": { - "upscale_method": "nearest-exact", - "upscale_antialias": False, - }, -} - -css = """ -.finetuned-diffusion-div div{ - display:inline-flex; - align-items:center; - gap:.8rem; - font-size:1.75rem; - padding-top:2rem; -} -.finetuned-diffusion-div div h1{ - font-weight:900; - margin-bottom:7px -} -.finetuned-diffusion-div p{ - margin-bottom:10px; - font-size:94% -} -.box { - float: left; - height: 20px; - width: 20px; - margin-bottom: 15px; - border: 1px solid black; - clear: both; -} -a{ - text-decoration:underline -} -.tabs{ - margin-top:0; - margin-bottom:0 -} -#gallery{ - min-height:20rem -} -.no-border { - border: none !important; -} - """ -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
        -
        -

        Demo for diffusion models

        -
        -

        Hso @ nyanko.sketch2img.gradio

        -
        - """ - ) - global_stats = gr.State(value={}) - - with gr.Row(): - - with gr.Column(scale=55): - model = gr.Dropdown( - choices=[k[0] for k in get_model_list()], - label="Model", - value=base_name, - ) - image_out = gr.Image(height=512) - # gallery = gr.Gallery( - # label="Generated images", show_label=False, elem_id="gallery" - # ).style(grid=[1], height="auto") - - with gr.Column(scale=45): - - with gr.Group(): - - with gr.Row(): - with gr.Column(scale=70): - - prompt = gr.Textbox( - label="Prompt", - value="loli cat girl, blue eyes, flat chest, solo, long messy silver hair, blue capelet, cat ears, cat tail, upper body", - show_label=True, - max_lines=4, - placeholder="Enter prompt.", - ) - neg_prompt = gr.Textbox( - label="Negative Prompt", - value="bad quality, low quality, jpeg artifact, cropped", - show_label=True, - max_lines=4, - placeholder="Enter negative prompt.", - ) - - generate = gr.Button(value="Generate").style( - rounded=(False, True, True, False) - ) - - with gr.Tab("Options"): - - with gr.Group(): - - # n_images = gr.Slider(label="Images", value=1, minimum=1, maximum=4, step=1) - with gr.Row(): - guidance = gr.Slider( - label="Guidance scale", value=7.5, maximum=15 - ) - steps = gr.Slider( - label="Steps", value=25, minimum=2, maximum=50, step=1 - ) - - with gr.Row(): - width = gr.Slider( - label="Width", value=512, minimum=64, maximum=768, step=64 - ) - height = gr.Slider( - label="Height", value=512, minimum=64, maximum=768, step=64 - ) - - sampler = gr.Dropdown( - value="DPM++ 2M Karras", - label="Sampler", - choices=[s[0] for s in samplers_k_diffusion], - ) - seed = gr.Number(label="Seed (0 = random)", value=0) - - with gr.Tab("Image to image"): - with gr.Group(): - - inf_image = gr.Image( - label="Image", height=256, tool="editor", type="pil" - ) - inf_strength = gr.Slider( - label="Transformation strength", - minimum=0, - maximum=1, - step=0.01, - value=0.5, - ) - - def res_cap(g, w, h, x): - if g: - return f"Enable upscaler: {w}x{h} to {int(w*x)}x{int(h*x)}" - else: - return "Enable upscaler" - - with gr.Tab("Hires fix"): - with gr.Group(): - - hr_enabled = gr.Checkbox(label="Enable upscaler", value=False) - hr_method = gr.Dropdown( - [key for key in latent_upscale_modes.keys()], - value="Latent", - label="Upscale method", - ) - hr_scale = gr.Slider( - label="Upscale factor", - minimum=1.0, - maximum=1.5, - step=0.1, - value=1.2, - ) - hr_denoise = gr.Slider( - label="Denoising strength", - minimum=0.0, - maximum=1.0, - step=0.1, - value=0.8, - ) - - hr_scale.change( - lambda g, x, w, h: gr.Checkbox.update( - label=res_cap(g, w, h, x) - ), - inputs=[hr_enabled, hr_scale, width, height], - outputs=hr_enabled, - queue=False, - ) - hr_enabled.change( - lambda g, x, w, h: gr.Checkbox.update( - label=res_cap(g, w, h, x) - ), - inputs=[hr_enabled, hr_scale, width, height], - outputs=hr_enabled, - queue=False, - ) - - with gr.Tab("Embeddings/Loras"): - - ti_state = gr.State(dict()) - lora_state = gr.State() - - with gr.Group(): - with gr.Row(): - with gr.Column(scale=90): - ti_vals = gr.Text(label="Loaded embeddings") - - with gr.Row(): - with gr.Column(scale=90): - lora_vals = gr.Text(label="Loaded loras") - - with gr.Row(): - - uploads = gr.Files(label="Upload new embeddings/lora") - - with gr.Column(): - lora_scale = gr.Slider( - label="Lora scale", - minimum=0, - maximum=2, - step=0.01, - value=1.0, - ) - btn = gr.Button(value="Upload") - btn_del = gr.Button(value="Reset") - - btn.click( - add_net, - inputs=[uploads, ti_state, lora_state], - 
outputs=[ti_state, lora_state, ti_vals, lora_vals, uploads], - queue=False, - ) - btn_del.click( - clean_states, - inputs=[ti_state, lora_state], - outputs=[ti_state, lora_state, ti_vals, lora_vals, uploads], - queue=False, - ) - - # error_output = gr.Markdown() - - gr.HTML( - f""" -
        -
        -

        Paint with words

        -
        -

        - Will use the following formula: w = scale * token_weight_martix * log(1 + sigma) * max(qk). -

        -
        - """ - ) - - with gr.Row(): - - with gr.Column(scale=55): - - rendered = gr.Image( - invert_colors=True, - source="canvas", - interactive=False, - image_mode="RGBA", - ) - - with gr.Column(scale=45): - - with gr.Group(): - with gr.Row(): - with gr.Column(scale=70): - g_strength = gr.Slider( - label="Weight scaling", - minimum=0, - maximum=0.8, - step=0.01, - value=0.4, - ) - - text = gr.Textbox( - lines=2, - interactive=True, - label="Token to Draw: (Separate by comma)", - ) - - radio = gr.Radio([], label="Tokens") - - sk_update = gr.Button(value="Update").style( - rounded=(False, True, True, False) - ) - - # g_strength.change(lambda b: gr.update(f"Scaled additional attn: $w = {b} \log (1 + \sigma) \std (Q^T K)$."), inputs=g_strength, outputs=[g_output]) - - with gr.Tab("SketchPad"): - - sp = gr.Image( - image_mode="L", - tool="sketch", - source="canvas", - interactive=False, - ) - - mask_outsides = gr.Checkbox( - label="Mask other areas", - value=False - ) - - strength = gr.Slider( - label="Token strength", - minimum=0, - maximum=0.8, - step=0.01, - value=0.5, - ) - - - sk_update.click( - detect_text, - inputs=[text, global_stats, width, height], - outputs=[global_stats, sp, radio, rendered], - queue=False, - ) - radio.change( - switch_canvas, - inputs=[radio, global_stats, width, height], - outputs=[sp, strength, mask_outsides, rendered], - queue=False, - ) - sp.edit( - apply_canvas, - inputs=[radio, sp, global_stats, width, height], - outputs=[global_stats, rendered], - queue=False, - ) - strength.change( - apply_weight, - inputs=[radio, strength, global_stats], - outputs=[global_stats], - queue=False, - ) - mask_outsides.change( - apply_option, - inputs=[radio, mask_outsides, global_stats], - outputs=[global_stats], - queue=False, - ) - - with gr.Tab("UploadFile"): - - sp2 = gr.Image( - image_mode="L", - source="upload", - shape=(512, 512), - ) - - mask_outsides2 = gr.Checkbox( - label="Mask other areas", - value=False, - ) - - strength2 = gr.Slider( - label="Token strength", - minimum=0, - maximum=0.8, - step=0.01, - value=0.5, - ) - - apply_style = gr.Button(value="Apply") - apply_style.click( - apply_image, - inputs=[sp2, radio, width, height, strength2, mask_outsides2, global_stats], - outputs=[global_stats, rendered], - queue=False, - ) - - width.change( - apply_new_res, - inputs=[width, height, global_stats], - outputs=[global_stats, rendered], - queue=False, - ) - height.change( - apply_new_res, - inputs=[width, height, global_stats], - outputs=[global_stats, rendered], - queue=False, - ) - - # color_stats = gr.State(value={}) - # text.change(detect_color, inputs=[sp, text, color_stats], outputs=[color_stats, rendered]) - # sp.change(detect_color, inputs=[sp, text, color_stats], outputs=[color_stats, rendered]) - - inputs = [ - prompt, - guidance, - steps, - width, - height, - seed, - neg_prompt, - global_stats, - g_strength, - inf_image, - inf_strength, - hr_enabled, - hr_method, - hr_scale, - hr_denoise, - sampler, - ti_state, - model, - lora_state, - lora_scale, - ] - outputs = [image_out] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - -print(f"Space built in {time.time() - start_time:.2f} seconds") -# demo.launch(share=True) -demo.launch(enable_queue=True, server_name="0.0.0.0", server_port=7860) diff --git a/spaces/om-app/magic-diffusion/app.py b/spaces/om-app/magic-diffusion/app.py deleted file mode 100644 index 
8be831a428013e045fcdca7209ed3de0a754b1b2..0000000000000000000000000000000000000000 --- a/spaces/om-app/magic-diffusion/app.py +++ /dev/null @@ -1,113 +0,0 @@ -import gradio as gr -import os -from share_btn import community_icon_html, loading_icon_html, share_js - - -text_gen = gr.Interface.load(name="spaces/Gustavosta/MagicPrompt-Stable-Diffusion") -stable_diffusion = gr.Blocks.load(name="spaces/runwayml/stable-diffusion-v1-5") - - - - -def get_images(prompt): - gallery_dir = stable_diffusion(prompt, fn_index=2) - sd_output = [os.path.join(gallery_dir, image) for image in os.listdir(gallery_dir)] - return sd_output, gr.update(visible=True), gr.update(visible=True), gr.update(visible=True) - -def get_prompts(prompt_text): - return text_gen(prompt_text) - -css = ''' -.animate-spin { - animation: spin 1s linear infinite; -} -@keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } -} -#share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; -} -#share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important; -} -#share-btn * { - all: unset; -} -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} -#share-btn-container .wrap { - display: none !important; -} -a {text-decoration-line: underline;} -''' - -with gr.Blocks(css=css) as demo: - gr.HTML("""
        -
        -

        - 🪄 Ai Art Generator 🪄 -

        -
        -

        - This Demo space prettifies your prompt using "MagicPrompt" - and then runs it through Stable Diffusion to create aesthetically pleasing images. Simply enter a few concepts and let it improve your prompt. You can then diffuse the prompt. -

        Andriod App
        - -

        - -
        """) - - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="Short text prompt", - lines=4, elem_id="input-text", - - ) - with gr.Row(): - see_prompts = gr.Button("1. Enter short text") - - with gr.Column(): - text_output = gr.Textbox( - label="Prettified text prompt", - lines=4, - elem_id="translated" - ) - with gr.Row(): - diffuse_btn = gr.Button(value="2. Generate art!") - with gr.Column(elem_id="generated-gallery"): - sd_output = gr.Gallery().style(grid=2, height="auto") - with gr.Group(elem_id="share-btn-container"): - community_icon = gr.HTML(community_icon_html, visible=False) - loading_icon = gr.HTML(loading_icon_html, visible=False) - share_button = gr.Button("How to Download ?", elem_id="share-btn", visible=False) - - see_prompts.click(get_prompts, - inputs = [input_text], - outputs = [ - text_output - ]) - diffuse_btn.click(get_images, - inputs = [ - text_output - ], - outputs = [sd_output, community_icon, loading_icon, share_button] - ) - share_button.click(None, [], [], _js=share_js) - - - -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/onnx/BERT-Squad/app.py b/spaces/onnx/BERT-Squad/app.py deleted file mode 100644 index 0171e29ebbbe192626490ce33ad97da6900308e1..0000000000000000000000000000000000000000 --- a/spaces/onnx/BERT-Squad/app.py +++ /dev/null @@ -1,8 +0,0 @@ -import gradio as gr - -title="BERT" - -description="Gradio demo for BERT" - -examples=[["""The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain "Amazonas" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species.""","Which name is also used to describe the Amazon rainforest in English?"]] -gr.Interface.load("huggingface/bert-large-uncased-whole-word-masking-finetuned-squad",title=title,description=description,examples=examples).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/openflamingo/OpenFlamingo/open_flamingo/Makefile b/spaces/openflamingo/OpenFlamingo/open_flamingo/Makefile deleted file mode 100644 index d5cc3840bce9ce0e5aebc435f63ffa5b534d4a8f..0000000000000000000000000000000000000000 --- a/spaces/openflamingo/OpenFlamingo/open_flamingo/Makefile +++ /dev/null @@ -1,19 +0,0 @@ -install: ## [Local development] Upgrade pip, install requirements, install package. - python -m pip install -U pip - python -m pip install -e . 
- -install-dev: ## [Local development] Install test requirements - python -m pip install -r requirements-test.txt - -lint: ## [Local development] Run mypy, pylint and black - python -m mypy open_flamingo - python -m pylint open_flamingo - python -m black --check -l 120 open_flamingo - -black: ## [Local development] Auto-format python code using black - python -m black -l 120 . - -.PHONY: help - -help: # Run `make help` to get help on the make commands - @grep -E '^[0-9a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_unidiffuser_to_diffusers.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_unidiffuser_to_diffusers.py deleted file mode 100644 index 891d289d8c7601f106724f1196d5f0f0eb3f2650..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/scripts/convert_unidiffuser_to_diffusers.py +++ /dev/null @@ -1,776 +0,0 @@ -# Convert the original UniDiffuser checkpoints into diffusers equivalents. - -import argparse -from argparse import Namespace - -import torch -from transformers import ( - CLIPImageProcessor, - CLIPTextConfig, - CLIPTextModel, - CLIPTokenizer, - CLIPVisionConfig, - CLIPVisionModelWithProjection, - GPT2Tokenizer, -) - -from diffusers import ( - AutoencoderKL, - DPMSolverMultistepScheduler, - UniDiffuserModel, - UniDiffuserPipeline, - UniDiffuserTextDecoder, -) - - -SCHEDULER_CONFIG = Namespace( - **{ - "beta_start": 0.00085, - "beta_end": 0.012, - "beta_schedule": "scaled_linear", - "solver_order": 3, - } -) - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.shave_segments -def shave_segments(path, n_shave_prefix_segments=1): - """ - Removes segments. Positive values shave the first segments, negative shave the last segments. 
- """ - if n_shave_prefix_segments >= 0: - return ".".join(path.split(".")[n_shave_prefix_segments:]) - else: - return ".".join(path.split(".")[:n_shave_prefix_segments]) - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.renew_vae_resnet_paths -def renew_vae_resnet_paths(old_list, n_shave_prefix_segments=0): - """ - Updates paths inside resnets to the new naming scheme (local renaming) - """ - mapping = [] - for old_item in old_list: - new_item = old_item - - new_item = new_item.replace("nin_shortcut", "conv_shortcut") - new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments) - - mapping.append({"old": old_item, "new": new_item}) - - return mapping - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.renew_vae_attention_paths -def renew_vae_attention_paths(old_list, n_shave_prefix_segments=0): - """ - Updates paths inside attentions to the new naming scheme (local renaming) - """ - mapping = [] - for old_item in old_list: - new_item = old_item - - new_item = new_item.replace("norm.weight", "group_norm.weight") - new_item = new_item.replace("norm.bias", "group_norm.bias") - - new_item = new_item.replace("q.weight", "query.weight") - new_item = new_item.replace("q.bias", "query.bias") - - new_item = new_item.replace("k.weight", "key.weight") - new_item = new_item.replace("k.bias", "key.bias") - - new_item = new_item.replace("v.weight", "value.weight") - new_item = new_item.replace("v.bias", "value.bias") - - new_item = new_item.replace("proj_out.weight", "proj_attn.weight") - new_item = new_item.replace("proj_out.bias", "proj_attn.bias") - - new_item = shave_segments(new_item, n_shave_prefix_segments=n_shave_prefix_segments) - - mapping.append({"old": old_item, "new": new_item}) - - return mapping - - -# Modified from diffusers.pipelines.stable_diffusion.convert_from_ckpt.assign_to_checkpoint -# config.num_head_channels => num_head_channels -def assign_to_checkpoint( - paths, - checkpoint, - old_checkpoint, - attention_paths_to_split=None, - additional_replacements=None, - num_head_channels=1, -): - """ - This does the final conversion step: take locally converted weights and apply a global renaming to them. It splits - attention layers, and takes into account additional replacements that may arise. Assigns the weights to the new - checkpoint. - """ - assert isinstance(paths, list), "Paths should be a list of dicts containing 'old' and 'new' keys." - - # Splits the attention layers into three variables. 
- if attention_paths_to_split is not None: - for path, path_map in attention_paths_to_split.items(): - old_tensor = old_checkpoint[path] - channels = old_tensor.shape[0] // 3 - - target_shape = (-1, channels) if len(old_tensor.shape) == 3 else (-1) - - num_heads = old_tensor.shape[0] // num_head_channels // 3 - - old_tensor = old_tensor.reshape((num_heads, 3 * channels // num_heads) + old_tensor.shape[1:]) - query, key, value = old_tensor.split(channels // num_heads, dim=1) - - checkpoint[path_map["query"]] = query.reshape(target_shape) - checkpoint[path_map["key"]] = key.reshape(target_shape) - checkpoint[path_map["value"]] = value.reshape(target_shape) - - for path in paths: - new_path = path["new"] - - # These have already been assigned - if attention_paths_to_split is not None and new_path in attention_paths_to_split: - continue - - # Global renaming happens here - new_path = new_path.replace("middle_block.0", "mid_block.resnets.0") - new_path = new_path.replace("middle_block.1", "mid_block.attentions.0") - new_path = new_path.replace("middle_block.2", "mid_block.resnets.1") - - if additional_replacements is not None: - for replacement in additional_replacements: - new_path = new_path.replace(replacement["old"], replacement["new"]) - - # proj_attn.weight has to be converted from conv 1D to linear - if "proj_attn.weight" in new_path: - checkpoint[new_path] = old_checkpoint[path["old"]][:, :, 0] - else: - checkpoint[new_path] = old_checkpoint[path["old"]] - - -# Copied from diffusers.pipelines.stable_diffusion.convert_from_ckpt.conv_attn_to_linear -def conv_attn_to_linear(checkpoint): - keys = list(checkpoint.keys()) - attn_keys = ["query.weight", "key.weight", "value.weight"] - for key in keys: - if ".".join(key.split(".")[-2:]) in attn_keys: - if checkpoint[key].ndim > 2: - checkpoint[key] = checkpoint[key][:, :, 0, 0] - elif "proj_attn.weight" in key: - if checkpoint[key].ndim > 2: - checkpoint[key] = checkpoint[key][:, :, 0] - - -def create_vae_diffusers_config(config_type): - # Hardcoded for now - if args.config_type == "test": - vae_config = create_vae_diffusers_config_test() - elif args.config_type == "big": - vae_config = create_vae_diffusers_config_big() - else: - raise NotImplementedError( - f"Config type {config_type} is not implemented, currently only config types" - " 'test' and 'big' are available." - ) - return vae_config - - -def create_unidiffuser_unet_config(config_type, version): - # Hardcoded for now - if args.config_type == "test": - unet_config = create_unidiffuser_unet_config_test() - elif args.config_type == "big": - unet_config = create_unidiffuser_unet_config_big() - else: - raise NotImplementedError( - f"Config type {config_type} is not implemented, currently only config types" - " 'test' and 'big' are available." - ) - # Unidiffuser-v1 uses data type embeddings - if version == 1: - unet_config["use_data_type_embedding"] = True - return unet_config - - -def create_text_decoder_config(config_type): - # Hardcoded for now - if args.config_type == "test": - text_decoder_config = create_text_decoder_config_test() - elif args.config_type == "big": - text_decoder_config = create_text_decoder_config_big() - else: - raise NotImplementedError( - f"Config type {config_type} is not implemented, currently only config types" - " 'test' and 'big' are available." - ) - return text_decoder_config - - -# Hardcoded configs for test versions of the UniDiffuser models, corresponding to those in the fast default tests. 
-def create_vae_diffusers_config_test(): - vae_config = { - "sample_size": 32, - "in_channels": 3, - "out_channels": 3, - "down_block_types": ["DownEncoderBlock2D", "DownEncoderBlock2D"], - "up_block_types": ["UpDecoderBlock2D", "UpDecoderBlock2D"], - "block_out_channels": [32, 64], - "latent_channels": 4, - "layers_per_block": 1, - } - return vae_config - - -def create_unidiffuser_unet_config_test(): - unet_config = { - "text_dim": 32, - "clip_img_dim": 32, - "num_text_tokens": 77, - "num_attention_heads": 2, - "attention_head_dim": 8, - "in_channels": 4, - "out_channels": 4, - "num_layers": 2, - "dropout": 0.0, - "norm_num_groups": 32, - "attention_bias": False, - "sample_size": 16, - "patch_size": 2, - "activation_fn": "gelu", - "num_embeds_ada_norm": 1000, - "norm_type": "layer_norm", - "block_type": "unidiffuser", - "pre_layer_norm": False, - "use_timestep_embedding": False, - "norm_elementwise_affine": True, - "use_patch_pos_embed": False, - "ff_final_dropout": True, - "use_data_type_embedding": False, - } - return unet_config - - -def create_text_decoder_config_test(): - text_decoder_config = { - "prefix_length": 77, - "prefix_inner_dim": 32, - "prefix_hidden_dim": 32, - "vocab_size": 1025, # 1024 + 1 for new EOS token - "n_positions": 1024, - "n_embd": 32, - "n_layer": 5, - "n_head": 4, - "n_inner": 37, - "activation_function": "gelu", - "resid_pdrop": 0.1, - "embd_pdrop": 0.1, - "attn_pdrop": 0.1, - "layer_norm_epsilon": 1e-5, - "initializer_range": 0.02, - } - return text_decoder_config - - -# Hardcoded configs for the UniDiffuser V1 model at https://huggingface.co/thu-ml/unidiffuser-v1 -# See also https://github.com/thu-ml/unidiffuser/blob/main/configs/sample_unidiffuser_v1.py -def create_vae_diffusers_config_big(): - vae_config = { - "sample_size": 256, - "in_channels": 3, - "out_channels": 3, - "down_block_types": ["DownEncoderBlock2D", "DownEncoderBlock2D", "DownEncoderBlock2D", "DownEncoderBlock2D"], - "up_block_types": ["UpDecoderBlock2D", "UpDecoderBlock2D", "UpDecoderBlock2D", "UpDecoderBlock2D"], - "block_out_channels": [128, 256, 512, 512], - "latent_channels": 4, - "layers_per_block": 2, - } - return vae_config - - -def create_unidiffuser_unet_config_big(): - unet_config = { - "text_dim": 64, - "clip_img_dim": 512, - "num_text_tokens": 77, - "num_attention_heads": 24, - "attention_head_dim": 64, - "in_channels": 4, - "out_channels": 4, - "num_layers": 30, - "dropout": 0.0, - "norm_num_groups": 32, - "attention_bias": False, - "sample_size": 64, - "patch_size": 2, - "activation_fn": "gelu", - "num_embeds_ada_norm": 1000, - "norm_type": "layer_norm", - "block_type": "unidiffuser", - "pre_layer_norm": False, - "use_timestep_embedding": False, - "norm_elementwise_affine": True, - "use_patch_pos_embed": False, - "ff_final_dropout": True, - "use_data_type_embedding": False, - } - return unet_config - - -# From https://huggingface.co/gpt2/blob/main/config.json, the GPT2 checkpoint used by UniDiffuser -def create_text_decoder_config_big(): - text_decoder_config = { - "prefix_length": 77, - "prefix_inner_dim": 768, - "prefix_hidden_dim": 64, - "vocab_size": 50258, # 50257 + 1 for new EOS token - "n_positions": 1024, - "n_embd": 768, - "n_layer": 12, - "n_head": 12, - "n_inner": 3072, - "activation_function": "gelu", - "resid_pdrop": 0.1, - "embd_pdrop": 0.1, - "attn_pdrop": 0.1, - "layer_norm_epsilon": 1e-5, - "initializer_range": 0.02, - } - return text_decoder_config - - -# Based on 
diffusers.pipelines.stable_diffusion.convert_from_ckpt.shave_segments.convert_ldm_vae_checkpoint -def convert_vae_to_diffusers(ckpt, diffusers_model, num_head_channels=1): - """ - Converts a UniDiffuser autoencoder_kl.pth checkpoint to a diffusers AutoencoderKL. - """ - # autoencoder_kl.pth ckpt is a torch state dict - vae_state_dict = torch.load(ckpt, map_location="cpu") - - new_checkpoint = {} - - new_checkpoint["encoder.conv_in.weight"] = vae_state_dict["encoder.conv_in.weight"] - new_checkpoint["encoder.conv_in.bias"] = vae_state_dict["encoder.conv_in.bias"] - new_checkpoint["encoder.conv_out.weight"] = vae_state_dict["encoder.conv_out.weight"] - new_checkpoint["encoder.conv_out.bias"] = vae_state_dict["encoder.conv_out.bias"] - new_checkpoint["encoder.conv_norm_out.weight"] = vae_state_dict["encoder.norm_out.weight"] - new_checkpoint["encoder.conv_norm_out.bias"] = vae_state_dict["encoder.norm_out.bias"] - - new_checkpoint["decoder.conv_in.weight"] = vae_state_dict["decoder.conv_in.weight"] - new_checkpoint["decoder.conv_in.bias"] = vae_state_dict["decoder.conv_in.bias"] - new_checkpoint["decoder.conv_out.weight"] = vae_state_dict["decoder.conv_out.weight"] - new_checkpoint["decoder.conv_out.bias"] = vae_state_dict["decoder.conv_out.bias"] - new_checkpoint["decoder.conv_norm_out.weight"] = vae_state_dict["decoder.norm_out.weight"] - new_checkpoint["decoder.conv_norm_out.bias"] = vae_state_dict["decoder.norm_out.bias"] - - new_checkpoint["quant_conv.weight"] = vae_state_dict["quant_conv.weight"] - new_checkpoint["quant_conv.bias"] = vae_state_dict["quant_conv.bias"] - new_checkpoint["post_quant_conv.weight"] = vae_state_dict["post_quant_conv.weight"] - new_checkpoint["post_quant_conv.bias"] = vae_state_dict["post_quant_conv.bias"] - - # Retrieves the keys for the encoder down blocks only - num_down_blocks = len({".".join(layer.split(".")[:3]) for layer in vae_state_dict if "encoder.down" in layer}) - down_blocks = { - layer_id: [key for key in vae_state_dict if f"down.{layer_id}" in key] for layer_id in range(num_down_blocks) - } - - # Retrieves the keys for the decoder up blocks only - num_up_blocks = len({".".join(layer.split(".")[:3]) for layer in vae_state_dict if "decoder.up" in layer}) - up_blocks = { - layer_id: [key for key in vae_state_dict if f"up.{layer_id}" in key] for layer_id in range(num_up_blocks) - } - - for i in range(num_down_blocks): - resnets = [key for key in down_blocks[i] if f"down.{i}" in key and f"down.{i}.downsample" not in key] - - if f"encoder.down.{i}.downsample.conv.weight" in vae_state_dict: - new_checkpoint[f"encoder.down_blocks.{i}.downsamplers.0.conv.weight"] = vae_state_dict.pop( - f"encoder.down.{i}.downsample.conv.weight" - ) - new_checkpoint[f"encoder.down_blocks.{i}.downsamplers.0.conv.bias"] = vae_state_dict.pop( - f"encoder.down.{i}.downsample.conv.bias" - ) - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"down.{i}.block", "new": f"down_blocks.{i}.resnets"} - assign_to_checkpoint( - paths, - new_checkpoint, - vae_state_dict, - additional_replacements=[meta_path], - num_head_channels=num_head_channels, # not used in vae - ) - - mid_resnets = [key for key in vae_state_dict if "encoder.mid.block" in key] - num_mid_res_blocks = 2 - for i in range(1, num_mid_res_blocks + 1): - resnets = [key for key in mid_resnets if f"encoder.mid.block_{i}" in key] - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"mid.block_{i}", "new": f"mid_block.resnets.{i - 1}"} - assign_to_checkpoint( - paths, - new_checkpoint, - 
vae_state_dict, - additional_replacements=[meta_path], - num_head_channels=num_head_channels, # not used in vae - ) - - mid_attentions = [key for key in vae_state_dict if "encoder.mid.attn" in key] - paths = renew_vae_attention_paths(mid_attentions) - meta_path = {"old": "mid.attn_1", "new": "mid_block.attentions.0"} - assign_to_checkpoint( - paths, - new_checkpoint, - vae_state_dict, - additional_replacements=[meta_path], - num_head_channels=num_head_channels, # not used in vae - ) - conv_attn_to_linear(new_checkpoint) - - for i in range(num_up_blocks): - block_id = num_up_blocks - 1 - i - resnets = [ - key for key in up_blocks[block_id] if f"up.{block_id}" in key and f"up.{block_id}.upsample" not in key - ] - - if f"decoder.up.{block_id}.upsample.conv.weight" in vae_state_dict: - new_checkpoint[f"decoder.up_blocks.{i}.upsamplers.0.conv.weight"] = vae_state_dict[ - f"decoder.up.{block_id}.upsample.conv.weight" - ] - new_checkpoint[f"decoder.up_blocks.{i}.upsamplers.0.conv.bias"] = vae_state_dict[ - f"decoder.up.{block_id}.upsample.conv.bias" - ] - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"up.{block_id}.block", "new": f"up_blocks.{i}.resnets"} - assign_to_checkpoint( - paths, - new_checkpoint, - vae_state_dict, - additional_replacements=[meta_path], - num_head_channels=num_head_channels, # not used in vae - ) - - mid_resnets = [key for key in vae_state_dict if "decoder.mid.block" in key] - num_mid_res_blocks = 2 - for i in range(1, num_mid_res_blocks + 1): - resnets = [key for key in mid_resnets if f"decoder.mid.block_{i}" in key] - - paths = renew_vae_resnet_paths(resnets) - meta_path = {"old": f"mid.block_{i}", "new": f"mid_block.resnets.{i - 1}"} - assign_to_checkpoint( - paths, - new_checkpoint, - vae_state_dict, - additional_replacements=[meta_path], - num_head_channels=num_head_channels, # not used in vae - ) - - mid_attentions = [key for key in vae_state_dict if "decoder.mid.attn" in key] - paths = renew_vae_attention_paths(mid_attentions) - meta_path = {"old": "mid.attn_1", "new": "mid_block.attentions.0"} - assign_to_checkpoint( - paths, - new_checkpoint, - vae_state_dict, - additional_replacements=[meta_path], - num_head_channels=num_head_channels, # not used in vae - ) - conv_attn_to_linear(new_checkpoint) - - missing_keys, unexpected_keys = diffusers_model.load_state_dict(new_checkpoint) - for missing_key in missing_keys: - print(f"Missing key: {missing_key}") - for unexpected_key in unexpected_keys: - print(f"Unexpected key: {unexpected_key}") - - return diffusers_model - - -def convert_uvit_block_to_diffusers_block( - uvit_state_dict, - new_state_dict, - block_prefix, - new_prefix="transformer.transformer_", - skip_connection=False, -): - """ - Maps the keys in a UniDiffuser transformer block (`Block`) to the keys in a diffusers transformer block - (`UTransformerBlock`/`UniDiffuserBlock`). - """ - prefix = new_prefix + block_prefix - if skip_connection: - new_state_dict[prefix + ".skip.skip_linear.weight"] = uvit_state_dict[block_prefix + ".skip_linear.weight"] - new_state_dict[prefix + ".skip.skip_linear.bias"] = uvit_state_dict[block_prefix + ".skip_linear.bias"] - new_state_dict[prefix + ".skip.norm.weight"] = uvit_state_dict[block_prefix + ".norm1.weight"] - new_state_dict[prefix + ".skip.norm.bias"] = uvit_state_dict[block_prefix + ".norm1.bias"] - - # Create the prefix string for out_blocks. 
- prefix += ".block" - - # Split up attention qkv.weight into to_q.weight, to_k.weight, to_v.weight - qkv = uvit_state_dict[block_prefix + ".attn.qkv.weight"] - new_attn_keys = [".attn1.to_q.weight", ".attn1.to_k.weight", ".attn1.to_v.weight"] - new_attn_keys = [prefix + key for key in new_attn_keys] - shape = qkv.shape[0] // len(new_attn_keys) - for i, attn_key in enumerate(new_attn_keys): - new_state_dict[attn_key] = qkv[i * shape : (i + 1) * shape] - - new_state_dict[prefix + ".attn1.to_out.0.weight"] = uvit_state_dict[block_prefix + ".attn.proj.weight"] - new_state_dict[prefix + ".attn1.to_out.0.bias"] = uvit_state_dict[block_prefix + ".attn.proj.bias"] - new_state_dict[prefix + ".norm1.weight"] = uvit_state_dict[block_prefix + ".norm2.weight"] - new_state_dict[prefix + ".norm1.bias"] = uvit_state_dict[block_prefix + ".norm2.bias"] - new_state_dict[prefix + ".ff.net.0.proj.weight"] = uvit_state_dict[block_prefix + ".mlp.fc1.weight"] - new_state_dict[prefix + ".ff.net.0.proj.bias"] = uvit_state_dict[block_prefix + ".mlp.fc1.bias"] - new_state_dict[prefix + ".ff.net.2.weight"] = uvit_state_dict[block_prefix + ".mlp.fc2.weight"] - new_state_dict[prefix + ".ff.net.2.bias"] = uvit_state_dict[block_prefix + ".mlp.fc2.bias"] - new_state_dict[prefix + ".norm3.weight"] = uvit_state_dict[block_prefix + ".norm3.weight"] - new_state_dict[prefix + ".norm3.bias"] = uvit_state_dict[block_prefix + ".norm3.bias"] - - return uvit_state_dict, new_state_dict - - -def convert_uvit_to_diffusers(ckpt, diffusers_model): - """ - Converts a UniDiffuser uvit_v*.pth checkpoint to a diffusers UniDiffusersModel. - """ - # uvit_v*.pth ckpt is a torch state dict - uvit_state_dict = torch.load(ckpt, map_location="cpu") - - new_state_dict = {} - - # Input layers - new_state_dict["vae_img_in.proj.weight"] = uvit_state_dict["patch_embed.proj.weight"] - new_state_dict["vae_img_in.proj.bias"] = uvit_state_dict["patch_embed.proj.bias"] - new_state_dict["clip_img_in.weight"] = uvit_state_dict["clip_img_embed.weight"] - new_state_dict["clip_img_in.bias"] = uvit_state_dict["clip_img_embed.bias"] - new_state_dict["text_in.weight"] = uvit_state_dict["text_embed.weight"] - new_state_dict["text_in.bias"] = uvit_state_dict["text_embed.bias"] - - new_state_dict["pos_embed"] = uvit_state_dict["pos_embed"] - - # Handle data type token embeddings for UniDiffuser-v1 - if "token_embedding.weight" in uvit_state_dict and diffusers_model.use_data_type_embedding: - new_state_dict["data_type_pos_embed_token"] = uvit_state_dict["pos_embed_token"] - new_state_dict["data_type_token_embedding.weight"] = uvit_state_dict["token_embedding.weight"] - - # Also initialize the PatchEmbedding in UTransformer2DModel with the PatchEmbedding from the checkpoint. - # This isn't used in the current implementation, so might want to remove. 
- new_state_dict["transformer.pos_embed.proj.weight"] = uvit_state_dict["patch_embed.proj.weight"] - new_state_dict["transformer.pos_embed.proj.bias"] = uvit_state_dict["patch_embed.proj.bias"] - - # Output layers - new_state_dict["transformer.norm_out.weight"] = uvit_state_dict["norm.weight"] - new_state_dict["transformer.norm_out.bias"] = uvit_state_dict["norm.bias"] - - new_state_dict["vae_img_out.weight"] = uvit_state_dict["decoder_pred.weight"] - new_state_dict["vae_img_out.bias"] = uvit_state_dict["decoder_pred.bias"] - new_state_dict["clip_img_out.weight"] = uvit_state_dict["clip_img_out.weight"] - new_state_dict["clip_img_out.bias"] = uvit_state_dict["clip_img_out.bias"] - new_state_dict["text_out.weight"] = uvit_state_dict["text_out.weight"] - new_state_dict["text_out.bias"] = uvit_state_dict["text_out.bias"] - - # in_blocks - in_blocks_prefixes = {".".join(layer.split(".")[:2]) for layer in uvit_state_dict if "in_blocks" in layer} - for in_block_prefix in list(in_blocks_prefixes): - convert_uvit_block_to_diffusers_block(uvit_state_dict, new_state_dict, in_block_prefix) - - # mid_block - # Assume there's only one mid block - convert_uvit_block_to_diffusers_block(uvit_state_dict, new_state_dict, "mid_block") - - # out_blocks - out_blocks_prefixes = {".".join(layer.split(".")[:2]) for layer in uvit_state_dict if "out_blocks" in layer} - for out_block_prefix in list(out_blocks_prefixes): - convert_uvit_block_to_diffusers_block(uvit_state_dict, new_state_dict, out_block_prefix, skip_connection=True) - - missing_keys, unexpected_keys = diffusers_model.load_state_dict(new_state_dict) - for missing_key in missing_keys: - print(f"Missing key: {missing_key}") - for unexpected_key in unexpected_keys: - print(f"Unexpected key: {unexpected_key}") - - return diffusers_model - - -def convert_caption_decoder_to_diffusers(ckpt, diffusers_model): - """ - Converts a UniDiffuser caption_decoder.pth checkpoint to a diffusers UniDiffuserTextDecoder. - """ - # caption_decoder.pth ckpt is a torch state dict - checkpoint_state_dict = torch.load(ckpt, map_location="cpu") - decoder_state_dict = {} - # Remove the "module." prefix, if necessary - caption_decoder_key = "module." 
- for key in checkpoint_state_dict: - if key.startswith(caption_decoder_key): - decoder_state_dict[key.replace(caption_decoder_key, "")] = checkpoint_state_dict.get(key) - else: - decoder_state_dict[key] = checkpoint_state_dict.get(key) - - new_state_dict = {} - - # Encoder and Decoder - new_state_dict["encode_prefix.weight"] = decoder_state_dict["encode_prefix.weight"] - new_state_dict["encode_prefix.bias"] = decoder_state_dict["encode_prefix.bias"] - new_state_dict["decode_prefix.weight"] = decoder_state_dict["decode_prefix.weight"] - new_state_dict["decode_prefix.bias"] = decoder_state_dict["decode_prefix.bias"] - - # Internal GPT2LMHeadModel transformer model - for key, val in decoder_state_dict.items(): - if key.startswith("gpt"): - suffix = key[len("gpt") :] - new_state_dict["transformer" + suffix] = val - - missing_keys, unexpected_keys = diffusers_model.load_state_dict(new_state_dict) - for missing_key in missing_keys: - print(f"Missing key: {missing_key}") - for unexpected_key in unexpected_keys: - print(f"Unexpected key: {unexpected_key}") - - return diffusers_model - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument( - "--caption_decoder_checkpoint_path", - default=None, - type=str, - required=False, - help="Path to caption decoder checkpoint to convert.", - ) - parser.add_argument( - "--uvit_checkpoint_path", default=None, type=str, required=False, help="Path to U-ViT checkpoint to convert." - ) - parser.add_argument( - "--vae_checkpoint_path", - default=None, - type=str, - required=False, - help="Path to VAE checkpoint to convert.", - ) - parser.add_argument( - "--pipeline_output_path", - default=None, - type=str, - required=True, - help="Path to save the output pipeline to.", - ) - parser.add_argument( - "--config_type", - default="test", - type=str, - help=( - "Config type to use. Should be 'test' to create small models for testing or 'big' to convert a full" - " checkpoint." - ), - ) - parser.add_argument( - "--version", - default=0, - type=int, - help="The UniDiffuser model type to convert to. Should be 0 for UniDiffuser-v0 and 1 for UniDiffuser-v1.", - ) - - args = parser.parse_args() - - # Convert the VAE model. - if args.vae_checkpoint_path is not None: - vae_config = create_vae_diffusers_config(args.config_type) - vae = AutoencoderKL(**vae_config) - vae = convert_vae_to_diffusers(args.vae_checkpoint_path, vae) - - # Convert the U-ViT ("unet") model. - if args.uvit_checkpoint_path is not None: - unet_config = create_unidiffuser_unet_config(args.config_type, args.version) - unet = UniDiffuserModel(**unet_config) - unet = convert_uvit_to_diffusers(args.uvit_checkpoint_path, unet) - - # Convert the caption decoder ("text_decoder") model. - if args.caption_decoder_checkpoint_path is not None: - text_decoder_config = create_text_decoder_config(args.config_type) - text_decoder = UniDiffuserTextDecoder(**text_decoder_config) - text_decoder = convert_caption_decoder_to_diffusers(args.caption_decoder_checkpoint_path, text_decoder) - - # Scheduler is the same for both the test and big models. 
- scheduler_config = SCHEDULER_CONFIG - scheduler = DPMSolverMultistepScheduler( - beta_start=scheduler_config.beta_start, - beta_end=scheduler_config.beta_end, - beta_schedule=scheduler_config.beta_schedule, - solver_order=scheduler_config.solver_order, - ) - - if args.config_type == "test": - # Make a small random CLIPTextModel - torch.manual_seed(0) - clip_text_encoder_config = CLIPTextConfig( - bos_token_id=0, - eos_token_id=2, - hidden_size=32, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=1000, - ) - text_encoder = CLIPTextModel(clip_text_encoder_config) - clip_tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") - - # Make a small random CLIPVisionModel and accompanying CLIPImageProcessor - torch.manual_seed(0) - clip_image_encoder_config = CLIPVisionConfig( - image_size=32, - patch_size=2, - num_channels=3, - hidden_size=32, - projection_dim=32, - num_hidden_layers=5, - num_attention_heads=4, - intermediate_size=37, - dropout=0.1, - attention_dropout=0.1, - initializer_range=0.02, - ) - image_encoder = CLIPVisionModelWithProjection(clip_image_encoder_config) - image_processor = CLIPImageProcessor(crop_size=32, size=32) - - # Note that the text_decoder should already have its token embeddings resized. - text_tokenizer = GPT2Tokenizer.from_pretrained("hf-internal-testing/tiny-random-GPT2Model") - eos = "<|EOS|>" - special_tokens_dict = {"eos_token": eos} - text_tokenizer.add_special_tokens(special_tokens_dict) - elif args.config_type == "big": - text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14") - clip_tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14") - - image_encoder = CLIPVisionModelWithProjection.from_pretrained("openai/clip-vit-base-patch32") - image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32") - - # Note that the text_decoder should already have its token embeddings resized. - text_tokenizer = GPT2Tokenizer.from_pretrained("gpt2") - eos = "<|EOS|>" - special_tokens_dict = {"eos_token": eos} - text_tokenizer.add_special_tokens(special_tokens_dict) - else: - raise NotImplementedError( - f"Config type {args.config_type} is not implemented, currently only config types" - " 'test' and 'big' are available." 
- ) - - pipeline = UniDiffuserPipeline( - vae=vae, - text_encoder=text_encoder, - image_encoder=image_encoder, - image_processor=image_processor, - clip_tokenizer=clip_tokenizer, - text_decoder=text_decoder, - text_tokenizer=text_tokenizer, - unet=unet, - scheduler=scheduler, - ) - pipeline.save_pretrained(args.pipeline_output_path) diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting_superresolution.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting_superresolution.py deleted file mode 100644 index c36b138222b9683515cf054e4fa5d24d89887b73..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting_superresolution.py +++ /dev/null @@ -1,1127 +0,0 @@ -import html -import inspect -import re -import urllib.parse as ul -from typing import Any, Callable, Dict, List, Optional, Union - -import numpy as np -import PIL -import torch -import torch.nn.functional as F -from transformers import CLIPImageProcessor, T5EncoderModel, T5Tokenizer - -from ...loaders import LoraLoaderMixin -from ...models import UNet2DConditionModel -from ...schedulers import DDPMScheduler -from ...utils import ( - BACKENDS_MAPPING, - PIL_INTERPOLATION, - is_accelerate_available, - is_bs4_available, - is_ftfy_available, - logging, - replace_example_docstring, -) -from ...utils.torch_utils import randn_tensor -from ..pipeline_utils import DiffusionPipeline -from . import IFPipelineOutput -from .safety_checker import IFSafetyChecker -from .watermark import IFWatermarker - - -if is_bs4_available(): - from bs4 import BeautifulSoup - -if is_ftfy_available(): - import ftfy - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -# Copied from diffusers.pipelines.deepfloyd_if.pipeline_if_img2img.resize -def resize(images: PIL.Image.Image, img_size: int) -> PIL.Image.Image: - w, h = images.size - - coef = w / h - - w, h = img_size, img_size - - if coef >= 1: - w = int(round(img_size / 8 * coef) * 8) - else: - h = int(round(img_size / 8 / coef) * 8) - - images = images.resize((w, h), resample=PIL_INTERPOLATION["bicubic"], reducing_gap=None) - - return images - - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline - >>> from diffusers.utils import pt_to_pil - >>> import torch - >>> from PIL import Image - >>> import requests - >>> from io import BytesIO - - >>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" - >>> response = requests.get(url) - >>> original_image = Image.open(BytesIO(response.content)).convert("RGB") - >>> original_image = original_image - - >>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" - >>> response = requests.get(url) - >>> mask_image = Image.open(BytesIO(response.content)) - >>> mask_image = mask_image - - >>> pipe = IFInpaintingPipeline.from_pretrained( - ... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 - ... ) - >>> pipe.enable_model_cpu_offload() - - >>> prompt = "blue sunglasses" - - >>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) - >>> image = pipe( - ... image=original_image, - ... mask_image=mask_image, - ... prompt_embeds=prompt_embeds, - ... negative_prompt_embeds=negative_embeds, - ... output_type="pt", - ... 
).images - - >>> # save intermediate image - >>> pil_image = pt_to_pil(image) - >>> pil_image[0].save("./if_stage_I.png") - - >>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( - ... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 - ... ) - >>> super_res_1_pipe.enable_model_cpu_offload() - - >>> image = super_res_1_pipe( - ... image=image, - ... mask_image=mask_image, - ... original_image=original_image, - ... prompt_embeds=prompt_embeds, - ... negative_prompt_embeds=negative_embeds, - ... ).images - >>> image[0].save("./if_stage_II.png") - ``` - """ - - -class IFInpaintingSuperResolutionPipeline(DiffusionPipeline, LoraLoaderMixin): - tokenizer: T5Tokenizer - text_encoder: T5EncoderModel - - unet: UNet2DConditionModel - scheduler: DDPMScheduler - image_noising_scheduler: DDPMScheduler - - feature_extractor: Optional[CLIPImageProcessor] - safety_checker: Optional[IFSafetyChecker] - - watermarker: Optional[IFWatermarker] - - bad_punct_regex = re.compile( - r"[" + "#®•©™&@·º½¾¿¡§~" + "\)" + "\(" + "\]" + "\[" + "\}" + "\{" + "\|" + "\\" + "\/" + "\*" + r"]{1,}" - ) # noqa - - model_cpu_offload_seq = "text_encoder->unet" - _optional_components = ["tokenizer", "text_encoder", "safety_checker", "feature_extractor", "watermarker"] - - def __init__( - self, - tokenizer: T5Tokenizer, - text_encoder: T5EncoderModel, - unet: UNet2DConditionModel, - scheduler: DDPMScheduler, - image_noising_scheduler: DDPMScheduler, - safety_checker: Optional[IFSafetyChecker], - feature_extractor: Optional[CLIPImageProcessor], - watermarker: Optional[IFWatermarker], - requires_safety_checker: bool = True, - ): - super().__init__() - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the IF license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - if unet.config.in_channels != 6: - logger.warn( - "It seems like you have loaded a checkpoint that shall not be used for super resolution from {unet.config._name_or_path} as it accepts {unet.config.in_channels} input channels instead of 6. Please make sure to pass a super resolution checkpoint as the `'unet'`: IFSuperResolutionPipeline.from_pretrained(unet=super_resolution_unet, ...)`." 
- ) - - self.register_modules( - tokenizer=tokenizer, - text_encoder=text_encoder, - unet=unet, - scheduler=scheduler, - image_noising_scheduler=image_noising_scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - watermarker=watermarker, - ) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.remove_all_hooks - def remove_all_hooks(self): - if is_accelerate_available(): - from accelerate.hooks import remove_hook_from_module - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - for model in [self.text_encoder, self.unet, self.safety_checker]: - if model is not None: - remove_hook_from_module(model, recurse=True) - - self.unet_offload_hook = None - self.text_encoder_offload_hook = None - self.final_offload_hook = None - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline._text_preprocessing - def _text_preprocessing(self, text, clean_caption=False): - if clean_caption and not is_bs4_available(): - logger.warn(BACKENDS_MAPPING["bs4"][-1].format("Setting `clean_caption=True`")) - logger.warn("Setting `clean_caption` to False...") - clean_caption = False - - if clean_caption and not is_ftfy_available(): - logger.warn(BACKENDS_MAPPING["ftfy"][-1].format("Setting `clean_caption=True`")) - logger.warn("Setting `clean_caption` to False...") - clean_caption = False - - if not isinstance(text, (tuple, list)): - text = [text] - - def process(text: str): - if clean_caption: - text = self._clean_caption(text) - text = self._clean_caption(text) - else: - text = text.lower().strip() - return text - - return [process(t) for t in text] - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline._clean_caption - def _clean_caption(self, caption): - caption = str(caption) - caption = ul.unquote_plus(caption) - caption = caption.strip().lower() - caption = re.sub("<person>", "person", caption) - # urls: - caption = re.sub( - r"\b((?:https?:(?:\/{1,3}|[a-zA-Z0-9%])|[a-zA-Z0-9.\-]+[.](?:com|co|ru|net|org|edu|gov|it)[\w/-]*\b\/?(?!@)))", # noqa - "", - caption, - ) # regex for urls - caption = re.sub( - r"\b((?:www:(?:\/{1,3}|[a-zA-Z0-9%])|[a-zA-Z0-9.\-]+[.](?:com|co|ru|net|org|edu|gov|it)[\w/-]*\b\/?(?!@)))", # noqa - "", - caption, - ) # regex for urls - # html: - caption = BeautifulSoup(caption, features="html.parser").text - - # @ - caption = re.sub(r"@[\w\d]+\b", "", caption) - - # 31C0—31EF CJK Strokes - # 31F0—31FF Katakana Phonetic Extensions - # 3200—32FF Enclosed CJK Letters and Months - # 3300—33FF CJK Compatibility - # 3400—4DBF CJK Unified Ideographs Extension A - # 4DC0—4DFF Yijing Hexagram Symbols - # 4E00—9FFF CJK Unified Ideographs - caption = re.sub(r"[\u31c0-\u31ef]+", "", caption) - caption = re.sub(r"[\u31f0-\u31ff]+", "", caption) - caption = re.sub(r"[\u3200-\u32ff]+", "", caption) - caption = re.sub(r"[\u3300-\u33ff]+", "", caption) - caption = re.sub(r"[\u3400-\u4dbf]+", "", caption) - caption = re.sub(r"[\u4dc0-\u4dff]+", "", caption) - caption = re.sub(r"[\u4e00-\u9fff]+", "", caption) - ####################################################### - - # все виды тире / all types of dash --> "-" - caption = re.sub( - r"[\u002D\u058A\u05BE\u1400\u1806\u2010-\u2015\u2E17\u2E1A\u2E3A\u2E3B\u2E40\u301C\u3030\u30A0\uFE31\uFE32\uFE58\uFE63\uFF0D]+", # noqa - "-", - caption, - ) - - # кавычки к одному стандарту / normalize quotes to one standard - caption = re.sub(r"[`´«»“”¨]", '"', caption) - caption = re.sub(r"[‘’]", "'", caption) - - # &quot;
- caption = re.sub(r"&quot;?", "", caption) - # &amp - caption = re.sub(r"&amp", "", caption) - - # ip addresses: - caption = re.sub(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", " ", caption) - - # article ids: - caption = re.sub(r"\d:\d\d\s+$", "", caption) - - # \n - caption = re.sub(r"\\n", " ", caption) - - # "#123" - caption = re.sub(r"#\d{1,3}\b", "", caption) - # "#12345.." - caption = re.sub(r"#\d{5,}\b", "", caption) - # "123456.." - caption = re.sub(r"\b\d{6,}\b", "", caption) - # filenames: - caption = re.sub(r"[\S]+\.(?:png|jpg|jpeg|bmp|webp|eps|pdf|apk|mp4)", "", caption) - - # - caption = re.sub(r"[\"\']{2,}", r'"', caption) # """AUSVERKAUFT""" - caption = re.sub(r"[\.]{2,}", r" ", caption) # """AUSVERKAUFT""" - - caption = re.sub(self.bad_punct_regex, r" ", caption) # ***AUSVERKAUFT***, #AUSVERKAUFT - caption = re.sub(r"\s+\.\s+", r" ", caption) # " . " - - # this-is-my-cute-cat / this_is_my_cute_cat - regex2 = re.compile(r"(?:\-|\_)") - if len(re.findall(regex2, caption)) > 3: - caption = re.sub(regex2, " ", caption) - - caption = ftfy.fix_text(caption) - caption = html.unescape(html.unescape(caption)) - - caption = re.sub(r"\b[a-zA-Z]{1,3}\d{3,15}\b", "", caption) # jc6640 - caption = re.sub(r"\b[a-zA-Z]+\d+[a-zA-Z]+\b", "", caption) # jc6640vc - caption = re.sub(r"\b\d+[a-zA-Z]+\d+\b", "", caption) # 6640vc231 - - caption = re.sub(r"(worldwide\s+)?(free\s+)?shipping", "", caption) - caption = re.sub(r"(free\s)?download(\sfree)?", "", caption) - caption = re.sub(r"\bclick\b\s(?:for|on)\s\w+", "", caption) - caption = re.sub(r"\b(?:png|jpg|jpeg|bmp|webp|eps|pdf|apk|mp4)(\simage[s]?)?", "", caption) - caption = re.sub(r"\bpage\s+\d+\b", "", caption) - - caption = re.sub(r"\b\d*[a-zA-Z]+\d+[a-zA-Z]+\d+[a-zA-Z\d]*\b", r" ", caption) # j2d1a2a... - - caption = re.sub(r"\b\d+\.?\d*[xх×]\d+\.?\d*\b", "", caption) - - caption = re.sub(r"\b\s+\:\s+", r": ", caption) - caption = re.sub(r"(\D[,\./])\b", r"\1 ", caption) - caption = re.sub(r"\s+", " ", caption) - - caption.strip() - - caption = re.sub(r"^[\"\']([\w\W]+)[\"\']$", r"\1", caption) - caption = re.sub(r"^[\'\_,\-\:;]", r"", caption) - caption = re.sub(r"[\'\_,\-\:\-\+]$", r"", caption) - caption = re.sub(r"^\.\S+$", "", caption) - - return caption.strip() - - @torch.no_grad() - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.encode_prompt - def encode_prompt( - self, - prompt, - do_classifier_free_guidance=True, - num_images_per_prompt=1, - device=None, - negative_prompt=None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - clean_caption: bool = False, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`, *optional*): - torch device to place the resulting embeddings on - num_images_per_prompt (`int`, *optional*, defaults to 1): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`, *optional*, defaults to `True`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. - Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings.
Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - """ - if prompt is not None and negative_prompt is not None: - if type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - - if device is None: - device = self._execution_device - - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - # while T5 can handle much longer input sequences than 77, the text encoder was trained with a max length of 77 for IF - max_length = 77 - - if prompt_embeds is None: - prompt = self._text_preprocessing(prompt, clean_caption=clean_caption) - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=max_length, - truncation=True, - add_special_tokens=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {max_length} tokens: {removed_text}" - ) - - attention_mask = text_inputs.attention_mask.to(device) - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - if self.text_encoder is not None: - dtype = self.text_encoder.dtype - elif self.unet is not None: - dtype = self.unet.dtype - else: - dtype = None - - prompt_embeds = prompt_embeds.to(dtype=dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." 
- ) - else: - uncond_tokens = negative_prompt - - uncond_tokens = self._text_preprocessing(uncond_tokens, clean_caption=clean_caption) - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_attention_mask=True, - add_special_tokens=True, - return_tensors="pt", - ) - attention_mask = uncond_input.attention_mask.to(device) - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to(dtype=dtype, device=device) - - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - else: - negative_prompt_embeds = None - - return prompt_embeds, negative_prompt_embeds - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.run_safety_checker - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) - image, nsfw_detected, watermark_detected = self.safety_checker( - images=image, - clip_input=safety_checker_input.pixel_values.to(dtype=dtype), - ) - else: - nsfw_detected = None - watermark_detected = None - - if hasattr(self, "unet_offload_hook") and self.unet_offload_hook is not None: - self.unet_offload_hook.offload() - - return image, nsfw_detected, watermark_detected - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs( - self, - prompt, - image, - original_image, - mask_image, - batch_size, - callback_steps, - negative_prompt=None, - prompt_embeds=None, - negative_prompt_embeds=None, - ): - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." 
- ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - # image - - if isinstance(image, list): - check_image_type = image[0] - else: - check_image_type = image - - if ( - not isinstance(check_image_type, torch.Tensor) - and not isinstance(check_image_type, PIL.Image.Image) - and not isinstance(check_image_type, np.ndarray) - ): - raise ValueError( - "`image` has to be of type `torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, or List[...] but is" - f" {type(check_image_type)}" - ) - - if isinstance(image, list): - image_batch_size = len(image) - elif isinstance(image, torch.Tensor): - image_batch_size = image.shape[0] - elif isinstance(image, PIL.Image.Image): - image_batch_size = 1 - elif isinstance(image, np.ndarray): - image_batch_size = image.shape[0] - else: - assert False - - if batch_size != image_batch_size: - raise ValueError(f"image batch size: {image_batch_size} must be same as prompt batch size {batch_size}") - - # original_image - - if isinstance(original_image, list): - check_image_type = original_image[0] - else: - check_image_type = original_image - - if ( - not isinstance(check_image_type, torch.Tensor) - and not isinstance(check_image_type, PIL.Image.Image) - and not isinstance(check_image_type, np.ndarray) - ): - raise ValueError( - "`original_image` has to be of type `torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, or List[...] but is" - f" {type(check_image_type)}" - ) - - if isinstance(original_image, list): - image_batch_size = len(original_image) - elif isinstance(original_image, torch.Tensor): - image_batch_size = original_image.shape[0] - elif isinstance(original_image, PIL.Image.Image): - image_batch_size = 1 - elif isinstance(original_image, np.ndarray): - image_batch_size = original_image.shape[0] - else: - assert False - - if batch_size != image_batch_size: - raise ValueError( - f"original_image batch size: {image_batch_size} must be same as prompt batch size {batch_size}" - ) - - # mask_image - - if isinstance(mask_image, list): - check_image_type = mask_image[0] - else: - check_image_type = mask_image - - if ( - not isinstance(check_image_type, torch.Tensor) - and not isinstance(check_image_type, PIL.Image.Image) - and not isinstance(check_image_type, np.ndarray) - ): - raise ValueError( - "`mask_image` has to be of type `torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, or List[...] 
but is" - f" {type(check_image_type)}" - ) - - if isinstance(mask_image, list): - image_batch_size = len(mask_image) - elif isinstance(mask_image, torch.Tensor): - image_batch_size = mask_image.shape[0] - elif isinstance(mask_image, PIL.Image.Image): - image_batch_size = 1 - elif isinstance(mask_image, np.ndarray): - image_batch_size = mask_image.shape[0] - else: - assert False - - if image_batch_size != 1 and batch_size != image_batch_size: - raise ValueError( - f"mask_image batch size: {image_batch_size} must be `1` or the same as prompt batch size {batch_size}" - ) - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if_img2img.IFImg2ImgPipeline.preprocess_image with preprocess_image -> preprocess_original_image - def preprocess_original_image(self, image: PIL.Image.Image) -> torch.Tensor: - if not isinstance(image, list): - image = [image] - - def numpy_to_pt(images): - if images.ndim == 3: - images = images[..., None] - - images = torch.from_numpy(images.transpose(0, 3, 1, 2)) - return images - - if isinstance(image[0], PIL.Image.Image): - new_image = [] - - for image_ in image: - image_ = image_.convert("RGB") - image_ = resize(image_, self.unet.sample_size) - image_ = np.array(image_) - image_ = image_.astype(np.float32) - image_ = image_ / 127.5 - 1 - new_image.append(image_) - - image = new_image - - image = np.stack(image, axis=0) # to np - image = numpy_to_pt(image) # to pt - - elif isinstance(image[0], np.ndarray): - image = np.concatenate(image, axis=0) if image[0].ndim == 4 else np.stack(image, axis=0) - image = numpy_to_pt(image) - - elif isinstance(image[0], torch.Tensor): - image = torch.cat(image, axis=0) if image[0].ndim == 4 else torch.stack(image, axis=0) - - return image - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if_superresolution.IFSuperResolutionPipeline.preprocess_image - def preprocess_image(self, image: PIL.Image.Image, num_images_per_prompt, device) -> torch.Tensor: - if not isinstance(image, torch.Tensor) and not isinstance(image, list): - image = [image] - - if isinstance(image[0], PIL.Image.Image): - image = [np.array(i).astype(np.float32) / 127.5 - 1.0 for i in image] - - image = np.stack(image, axis=0) # to np - image = torch.from_numpy(image.transpose(0, 3, 1, 2)) - elif isinstance(image[0], np.ndarray): - image = np.stack(image, axis=0) # to np - if image.ndim == 5: - image = image[0] - - image = torch.from_numpy(image.transpose(0, 3, 1, 2)) - elif isinstance(image, list) and isinstance(image[0], torch.Tensor): - dims = image[0].ndim - - if dims == 3: - image = torch.stack(image, dim=0) - elif dims == 4: - image = torch.concat(image, dim=0) - else: - raise ValueError(f"Image must have 3 or 4 dimensions, instead got {dims}") - - image = image.to(device=device, dtype=self.unet.dtype) - - image = image.repeat_interleave(num_images_per_prompt, dim=0) - - return image - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if_inpainting.IFInpaintingPipeline.preprocess_mask_image - def preprocess_mask_image(self, mask_image) -> torch.Tensor: - if not isinstance(mask_image, list): - mask_image = [mask_image] - - if isinstance(mask_image[0], torch.Tensor): - mask_image = torch.cat(mask_image, axis=0) if mask_image[0].ndim == 4 else torch.stack(mask_image, axis=0) - - if mask_image.ndim == 2: - # Batch and add channel dim for single mask - mask_image = mask_image.unsqueeze(0).unsqueeze(0) - elif mask_image.ndim == 3 and mask_image.shape[0] == 1: - # Single mask, the 0'th dimension is considered to be - # the existing batch size of 1 - 
mask_image = mask_image.unsqueeze(0) - elif mask_image.ndim == 3 and mask_image.shape[0] != 1: - # Batch of mask, the 0'th dimension is considered to be - # the batching dimension - mask_image = mask_image.unsqueeze(1) - - mask_image[mask_image < 0.5] = 0 - mask_image[mask_image >= 0.5] = 1 - - elif isinstance(mask_image[0], PIL.Image.Image): - new_mask_image = [] - - for mask_image_ in mask_image: - mask_image_ = mask_image_.convert("L") - mask_image_ = resize(mask_image_, self.unet.sample_size) - mask_image_ = np.array(mask_image_) - mask_image_ = mask_image_[None, None, :] - new_mask_image.append(mask_image_) - - mask_image = new_mask_image - - mask_image = np.concatenate(mask_image, axis=0) - mask_image = mask_image.astype(np.float32) / 255.0 - mask_image[mask_image < 0.5] = 0 - mask_image[mask_image >= 0.5] = 1 - mask_image = torch.from_numpy(mask_image) - - elif isinstance(mask_image[0], np.ndarray): - mask_image = np.concatenate([m[None, None, :] for m in mask_image], axis=0) - - mask_image[mask_image < 0.5] = 0 - mask_image[mask_image >= 0.5] = 1 - mask_image = torch.from_numpy(mask_image) - - return mask_image - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if_img2img.IFImg2ImgPipeline.get_timesteps - def get_timesteps(self, num_inference_steps, strength): - # get the original timestep using init_timestep - init_timestep = min(int(num_inference_steps * strength), num_inference_steps) - - t_start = max(num_inference_steps - init_timestep, 0) - timesteps = self.scheduler.timesteps[t_start:] - - return timesteps, num_inference_steps - t_start - - # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if_inpainting.IFInpaintingPipeline.prepare_intermediate_images - def prepare_intermediate_images( - self, image, timestep, batch_size, num_images_per_prompt, dtype, device, mask_image, generator=None - ): - image_batch_size, channels, height, width = image.shape - - batch_size = batch_size * num_images_per_prompt - - shape = (batch_size, channels, height, width) - - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
- ) - - noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - - image = image.repeat_interleave(num_images_per_prompt, dim=0) - noised_image = self.scheduler.add_noise(image, noise, timestep) - - image = (1 - mask_image) * image + mask_image * noised_image - - return image - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - image: Union[PIL.Image.Image, np.ndarray, torch.FloatTensor], - original_image: Union[ - PIL.Image.Image, torch.Tensor, np.ndarray, List[PIL.Image.Image], List[torch.Tensor], List[np.ndarray] - ] = None, - mask_image: Union[ - PIL.Image.Image, torch.Tensor, np.ndarray, List[PIL.Image.Image], List[torch.Tensor], List[np.ndarray] - ] = None, - strength: float = 0.8, - prompt: Union[str, List[str]] = None, - num_inference_steps: int = 100, - timesteps: List[int] = None, - guidance_scale: float = 4.0, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - noise_level: int = 0, - clean_caption: bool = True, - ): - """ - Function invoked when calling the pipeline for generation. - - Args: - image (`torch.FloatTensor` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch, that will be used as the starting point for the - process. - original_image (`torch.FloatTensor` or `PIL.Image.Image`): - The original image that `image` was varied from. - mask_image (`PIL.Image.Image`): - `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be - repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted - to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L) - instead of 3, so the expected shape would be `(B, H, W, 1)`. - strength (`float`, *optional*, defaults to 0.8): - Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image` - will be used as a starting point, adding more noise to it the larger the `strength`. The number of - denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will - be maximum and the denoising process will run for the full number of iterations specified in - `num_inference_steps`. A value of 1, therefore, essentially ignores `image`. - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. - instead. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - timesteps (`List[int]`, *optional*): - Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps` - timesteps are used. Must be in descending order. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. 
of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.IFPipelineOutput`] instead of a plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under - `self.processor` in - [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py). - noise_level (`int`, *optional*, defaults to 0): - The amount of noise to add to the upscaled image. Must be in the range `[0, 1000)` - clean_caption (`bool`, *optional*, defaults to `True`): - Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to - be installed. If the dependencies are not installed, the embeddings will be created from the raw - prompt. - - Examples: - - Returns: - [`~pipelines.stable_diffusion.IFPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.IFPipelineOutput`] if `return_dict` is True, otherwise a `tuple. When - returning a tuple, the first element is a list with the generated images, and the second element is a list - of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) - or watermarked content, according to the `safety_checker`. - """ - # 1. 
Check inputs. Raise error if not correct - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - self.check_inputs( - prompt, - image, - original_image, - mask_image, - batch_size, - callback_steps, - negative_prompt, - prompt_embeds, - negative_prompt_embeds, - ) - - # 2. Define call parameters - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - device = self._execution_device - - # 3. Encode input prompt - prompt_embeds, negative_prompt_embeds = self.encode_prompt( - prompt, - do_classifier_free_guidance, - num_images_per_prompt=num_images_per_prompt, - device=device, - negative_prompt=negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - clean_caption=clean_caption, - ) - - if do_classifier_free_guidance: - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - dtype = prompt_embeds.dtype - - # 4. Prepare timesteps - if timesteps is not None: - self.scheduler.set_timesteps(timesteps=timesteps, device=device) - timesteps = self.scheduler.timesteps - num_inference_steps = len(timesteps) - else: - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength) - - # 5. prepare original image - original_image = self.preprocess_original_image(original_image) - original_image = original_image.to(device=device, dtype=dtype) - - # 6. prepare mask image - mask_image = self.preprocess_mask_image(mask_image) - mask_image = mask_image.to(device=device, dtype=dtype) - - if mask_image.shape[0] == 1: - mask_image = mask_image.repeat_interleave(batch_size * num_images_per_prompt, dim=0) - else: - mask_image = mask_image.repeat_interleave(num_images_per_prompt, dim=0) - - # 6. Prepare intermediate images - noise_timestep = timesteps[0:1] - noise_timestep = noise_timestep.repeat(batch_size * num_images_per_prompt) - - intermediate_images = self.prepare_intermediate_images( - original_image, - noise_timestep, - batch_size, - num_images_per_prompt, - dtype, - device, - mask_image, - generator, - ) - - # 7. Prepare upscaled image and noise level - _, _, height, width = original_image.shape - - image = self.preprocess_image(image, num_images_per_prompt, device) - - upscaled = F.interpolate(image, (height, width), mode="bilinear", align_corners=True) - - noise_level = torch.tensor([noise_level] * upscaled.shape[0], device=upscaled.device) - noise = randn_tensor(upscaled.shape, generator=generator, device=upscaled.device, dtype=upscaled.dtype) - upscaled = self.image_noising_scheduler.add_noise(upscaled, noise, timesteps=noise_level) - - if do_classifier_free_guidance: - noise_level = torch.cat([noise_level] * 2) - - # 8. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # HACK: see comment in `enable_model_cpu_offload` - if hasattr(self, "text_encoder_offload_hook") and self.text_encoder_offload_hook is not None: - self.text_encoder_offload_hook.offload() - - # 9. 
Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - model_input = torch.cat([intermediate_images, upscaled], dim=1) - - model_input = torch.cat([model_input] * 2) if do_classifier_free_guidance else model_input - model_input = self.scheduler.scale_model_input(model_input, t) - - # predict the noise residual - noise_pred = self.unet( - model_input, - t, - encoder_hidden_states=prompt_embeds, - class_labels=noise_level, - cross_attention_kwargs=cross_attention_kwargs, - return_dict=False, - )[0] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred_uncond, _ = noise_pred_uncond.split(model_input.shape[1] // 2, dim=1) - noise_pred_text, predicted_variance = noise_pred_text.split(model_input.shape[1] // 2, dim=1) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - noise_pred = torch.cat([noise_pred, predicted_variance], dim=1) - - if self.scheduler.config.variance_type not in ["learned", "learned_range"]: - noise_pred, _ = noise_pred.split(intermediate_images.shape[1], dim=1) - - # compute the previous noisy sample x_t -> x_t-1 - prev_intermediate_images = intermediate_images - - intermediate_images = self.scheduler.step( - noise_pred, t, intermediate_images, **extra_step_kwargs, return_dict=False - )[0] - - intermediate_images = (1 - mask_image) * prev_intermediate_images + mask_image * intermediate_images - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, intermediate_images) - - image = intermediate_images - - if output_type == "pil": - # 10. Post-processing - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - # 11. Run safety checker - image, nsfw_detected, watermark_detected = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # 12. Convert to PIL - image = self.numpy_to_pil(image) - - # 13. Apply watermark - if self.watermarker is not None: - self.watermarker.apply_watermark(image, self.unet.config.sample_size) - elif output_type == "pt": - nsfw_detected = None - watermark_detected = None - - if hasattr(self, "unet_offload_hook") and self.unet_offload_hook is not None: - self.unet_offload_hook.offload() - else: - # 10. Post-processing - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - # 11. 
Run safety checker - image, nsfw_detected, watermark_detected = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (image, nsfw_detected, watermark_detected) - - return IFPipelineOutput(images=image, nsfw_detected=nsfw_detected, watermark_detected=watermark_detected) diff --git a/spaces/pinkq/Newbing/src/pages/api/healthz.ts b/spaces/pinkq/Newbing/src/pages/api/healthz.ts deleted file mode 100644 index f6ae44ff0fd66ccd3f7feaa550025fbf2a83bf77..0000000000000000000000000000000000000000 --- a/spaces/pinkq/Newbing/src/pages/api/healthz.ts +++ /dev/null @@ -1,7 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - res.status(200).end('ok') -} diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/containers.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/containers.py deleted file mode 100644 index e29cf368991ccb083b67cda8133e4635defbfe53..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/containers.py +++ /dev/null @@ -1,167 +0,0 @@ -from itertools import zip_longest -from typing import ( - Iterator, - Iterable, - List, - Optional, - Union, - overload, - TypeVar, - TYPE_CHECKING, -) - -if TYPE_CHECKING: - from .console import ( - Console, - ConsoleOptions, - JustifyMethod, - OverflowMethod, - RenderResult, - RenderableType, - ) - from .text import Text - -from .cells import cell_len -from .measure import Measurement - -T = TypeVar("T") - - -class Renderables: - """A list subclass which renders its contents to the console.""" - - def __init__( - self, renderables: Optional[Iterable["RenderableType"]] = None - ) -> None: - self._renderables: List["RenderableType"] = ( - list(renderables) if renderables is not None else [] - ) - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - """Console render method to insert line-breaks.""" - yield from self._renderables - - def __rich_measure__( - self, console: "Console", options: "ConsoleOptions" - ) -> "Measurement": - dimensions = [ - Measurement.get(console, options, renderable) - for renderable in self._renderables - ] - if not dimensions: - return Measurement(1, 1) - _min = max(dimension.minimum for dimension in dimensions) - _max = max(dimension.maximum for dimension in dimensions) - return Measurement(_min, _max) - - def append(self, renderable: "RenderableType") -> None: - self._renderables.append(renderable) - - def __iter__(self) -> Iterable["RenderableType"]: - return iter(self._renderables) - - -class Lines: - """A list subclass which can render to the console.""" - - def __init__(self, lines: Iterable["Text"] = ()) -> None: - self._lines: List["Text"] = list(lines) - - def __repr__(self) -> str: - return f"Lines({self._lines!r})" - - def __iter__(self) -> Iterator["Text"]: - return iter(self._lines) - - @overload - def __getitem__(self, index: int) -> "Text": - ... - - @overload - def __getitem__(self, index: slice) -> List["Text"]: - ... 
- - def __getitem__(self, index: Union[slice, int]) -> Union["Text", List["Text"]]: - return self._lines[index] - - def __setitem__(self, index: int, value: "Text") -> "Lines": - self._lines[index] = value - return self - - def __len__(self) -> int: - return self._lines.__len__() - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - """Console render method to insert line-breaks.""" - yield from self._lines - - def append(self, line: "Text") -> None: - self._lines.append(line) - - def extend(self, lines: Iterable["Text"]) -> None: - self._lines.extend(lines) - - def pop(self, index: int = -1) -> "Text": - return self._lines.pop(index) - - def justify( - self, - console: "Console", - width: int, - justify: "JustifyMethod" = "left", - overflow: "OverflowMethod" = "fold", - ) -> None: - """Justify and overflow text to a given width. - - Args: - console (Console): Console instance. - width (int): Number of characters per line. - justify (str, optional): Default justify method for text: "left", "center", "full" or "right". Defaults to "left". - overflow (str, optional): Default overflow for text: "crop", "fold", or "ellipsis". Defaults to "fold". - - """ - from .text import Text - - if justify == "left": - for line in self._lines: - line.truncate(width, overflow=overflow, pad=True) - elif justify == "center": - for line in self._lines: - line.rstrip() - line.truncate(width, overflow=overflow) - line.pad_left((width - cell_len(line.plain)) // 2) - line.pad_right(width - cell_len(line.plain)) - elif justify == "right": - for line in self._lines: - line.rstrip() - line.truncate(width, overflow=overflow) - line.pad_left(width - cell_len(line.plain)) - elif justify == "full": - for line_index, line in enumerate(self._lines): - if line_index == len(self._lines) - 1: - break - words = line.split(" ") - words_size = sum(cell_len(word.plain) for word in words) - num_spaces = len(words) - 1 - spaces = [1 for _ in range(num_spaces)] - index = 0 - if spaces: - while words_size + num_spaces < width: - spaces[len(spaces) - index - 1] += 1 - num_spaces += 1 - index = (index + 1) % len(spaces) - tokens: List[Text] = [] - for index, (word, next_word) in enumerate( - zip_longest(words, words[1:]) - ): - tokens.append(word) - if index < len(spaces): - style = word.get_style_at_offset(console, -1) - next_style = next_word.get_style_at_offset(console, 0) - space_style = style if style == next_style else line.style - tokens.append(Text(" " * spaces[index], style=space_style)) - self[line_index] = Text("").join(tokens) diff --git a/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/rpcsal.h b/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/rpcsal.h deleted file mode 100644 index ba9836a84a66b5752196e2acd2feb543a65100fe..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/rpcsal.h +++ /dev/null @@ -1,113 +0,0 @@ -#pragma once - -#if __GNUC__ >=3 -#pragma GCC system_header -#endif - -#define RPC_range(min,max) - -#define __RPC__in -#define __RPC__in_string -#define __RPC__in_opt_string -#define __RPC__deref_opt_in_opt -#define __RPC__opt_in_opt_string -#define __RPC__in_ecount(size) -#define __RPC__in_ecount_full(size) -#define __RPC__in_ecount_full_string(size) -#define __RPC__in_ecount_part(size, length) -#define __RPC__in_ecount_full_opt(size) -#define __RPC__in_ecount_full_opt_string(size) -#define __RPC__inout_ecount_full_opt_string(size) -#define 
__RPC__in_ecount_part_opt(size, length) - -#define __RPC__deref_in -#define __RPC__deref_in_string -#define __RPC__deref_opt_in -#define __RPC__deref_in_opt -#define __RPC__deref_in_ecount(size) -#define __RPC__deref_in_ecount_part(size, length) -#define __RPC__deref_in_ecount_full(size) -#define __RPC__deref_in_ecount_full_opt(size) -#define __RPC__deref_in_ecount_full_string(size) -#define __RPC__deref_in_ecount_full_opt_string(size) -#define __RPC__deref_in_ecount_opt(size) -#define __RPC__deref_in_ecount_opt_string(size) -#define __RPC__deref_in_ecount_part_opt(size, length) - -// [out] -#define __RPC__out -#define __RPC__out_ecount(size) -#define __RPC__out_ecount_part(size, length) -#define __RPC__out_ecount_full(size) -#define __RPC__out_ecount_full_string(size) - -// [in,out] -#define __RPC__inout -#define __RPC__inout_string -#define __RPC__opt_inout -#define __RPC__inout_ecount(size) -#define __RPC__inout_ecount_part(size, length) -#define __RPC__inout_ecount_full(size) -#define __RPC__inout_ecount_full_string(size) - -// [in,unique] -#define __RPC__in_opt -#define __RPC__in_ecount_opt(size) - - -// [in,out,unique] -#define __RPC__inout_opt -#define __RPC__inout_ecount_opt(size) -#define __RPC__inout_ecount_part_opt(size, length) -#define __RPC__inout_ecount_full_opt(size) -#define __RPC__inout_ecount_full_string(size) - -// [out] ** -#define __RPC__deref_out -#define __RPC__deref_out_string -#define __RPC__deref_out_opt -#define __RPC__deref_out_opt_string -#define __RPC__deref_out_ecount(size) -#define __RPC__deref_out_ecount_part(size, length) -#define __RPC__deref_out_ecount_full(size) -#define __RPC__deref_out_ecount_full_string(size) - - -// [in,out] **, second pointer decoration. -#define __RPC__deref_inout -#define __RPC__deref_inout_string -#define __RPC__deref_inout_opt -#define __RPC__deref_inout_opt_string -#define __RPC__deref_inout_ecount_full(size) -#define __RPC__deref_inout_ecount_full_string(size) -#define __RPC__deref_inout_ecount_opt(size) -#define __RPC__deref_inout_ecount_part_opt(size, length) -#define __RPC__deref_inout_ecount_full_opt(size) -#define __RPC__deref_inout_ecount_full_opt_string(size) - -// #define __RPC_out_opt out_opt is not allowed in rpc - -// [in,out,unique] -#define __RPC__deref_opt_inout -#define __RPC__deref_opt_inout_string -#define __RPC__deref_opt_inout_ecount(size) -#define __RPC__deref_opt_inout_ecount_part(size, length) -#define __RPC__deref_opt_inout_ecount_full(size) -#define __RPC__deref_opt_inout_ecount_full_string(size) - -#define __RPC__deref_out_ecount_opt(size) -#define __RPC__deref_out_ecount_part_opt(size, length) -#define __RPC__deref_out_ecount_full_opt(size) -#define __RPC__deref_out_ecount_full_opt_string(size) - -#define __RPC__deref_opt_inout_opt -#define __RPC__deref_opt_inout_opt_string -#define __RPC__deref_opt_inout_ecount_opt(size) -#define __RPC__deref_opt_inout_ecount_part_opt(size, length) -#define __RPC__deref_opt_inout_ecount_full_opt(size) -#define __RPC__deref_opt_inout_ecount_full_opt_string(size) - -#define __RPC_full_pointer -#define __RPC_unique_pointer -#define __RPC_ref_pointer -#define __RPC_string diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/_h_e_a_d.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/_h_e_a_d.py deleted file mode 100644 index 04505e8250919eb666b8412e2d12cd739cc16bde..0000000000000000000000000000000000000000 --- 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/_h_e_a_d.py +++ /dev/null @@ -1,124 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.fixedTools import floatToFixedToStr, strToFixedToFloat -from fontTools.misc.textTools import safeEval, num2binary, binary2num -from fontTools.misc.timeTools import ( - timestampFromString, - timestampToString, - timestampNow, -) -from fontTools.misc.timeTools import epoch_diff as mac_epoch_diff # For backward compat -from fontTools.misc.arrayTools import intRect, unionRect -from . import DefaultTable -import logging - - -log = logging.getLogger(__name__) - -headFormat = """ - > # big endian - tableVersion: 16.16F - fontRevision: 16.16F - checkSumAdjustment: I - magicNumber: I - flags: H - unitsPerEm: H - created: Q - modified: Q - xMin: h - yMin: h - xMax: h - yMax: h - macStyle: H - lowestRecPPEM: H - fontDirectionHint: h - indexToLocFormat: h - glyphDataFormat: h -""" - - -class table__h_e_a_d(DefaultTable.DefaultTable): - - dependencies = ["maxp", "loca", "CFF ", "CFF2"] - - def decompile(self, data, ttFont): - dummy, rest = sstruct.unpack2(headFormat, data, self) - if rest: - # this is quite illegal, but there seem to be fonts out there that do this - log.warning("extra bytes at the end of 'head' table") - assert rest == b"\0\0" - - # For timestamp fields, ignore the top four bytes. Some fonts have - # bogus values there. Since till 2038 those bytes only can be zero, - # ignore them. - # - # https://github.com/fonttools/fonttools/issues/99#issuecomment-66776810 - for stamp in "created", "modified": - value = getattr(self, stamp) - if value > 0xFFFFFFFF: - log.warning("'%s' timestamp out of range; ignoring top bytes", stamp) - value &= 0xFFFFFFFF - setattr(self, stamp, value) - if value < 0x7C259DC0: # January 1, 1970 00:00:00 - log.warning( - "'%s' timestamp seems very low; regarding as unix timestamp", stamp - ) - value += 0x7C259DC0 - setattr(self, stamp, value) - - def compile(self, ttFont): - if ttFont.recalcBBoxes: - # For TT-flavored fonts, xMin, yMin, xMax and yMax are set in table__m_a_x_p.recalc(). 
- if "CFF " in ttFont: - topDict = ttFont["CFF "].cff.topDictIndex[0] - self.xMin, self.yMin, self.xMax, self.yMax = intRect(topDict.FontBBox) - elif "CFF2" in ttFont: - topDict = ttFont["CFF2"].cff.topDictIndex[0] - charStrings = topDict.CharStrings - fontBBox = None - for charString in charStrings.values(): - bounds = charString.calcBounds(charStrings) - if bounds is not None: - if fontBBox is not None: - fontBBox = unionRect(fontBBox, bounds) - else: - fontBBox = bounds - if fontBBox is not None: - self.xMin, self.yMin, self.xMax, self.yMax = intRect(fontBBox) - if ttFont.recalcTimestamp: - self.modified = timestampNow() - data = sstruct.pack(headFormat, self) - return data - - def toXML(self, writer, ttFont): - writer.comment("Most of this table will be recalculated by the compiler") - writer.newline() - _, names, fixes = sstruct.getformat(headFormat) - for name in names: - value = getattr(self, name) - if name in fixes: - value = floatToFixedToStr(value, precisionBits=fixes[name]) - elif name in ("created", "modified"): - value = timestampToString(value) - elif name in ("magicNumber", "checkSumAdjustment"): - if value < 0: - value = value + 0x100000000 - value = hex(value) - if value[-1:] == "L": - value = value[:-1] - elif name in ("macStyle", "flags"): - value = num2binary(value, 16) - writer.simpletag(name, value=value) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - value = attrs["value"] - fixes = sstruct.getformat(headFormat)[2] - if name in fixes: - value = strToFixedToFloat(value, precisionBits=fixes[name]) - elif name in ("created", "modified"): - value = timestampFromString(value) - elif name in ("macStyle", "flags"): - value = binary2num(value) - else: - value = safeEval(value) - setattr(self, name, value) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/preview/src/examine.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/preview/src/examine.py deleted file mode 100644 index f2eba024cfa27c01b39e12140ac408406d26d6f8..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/preview/src/examine.py +++ /dev/null @@ -1,68 +0,0 @@ -import argparse -import importlib -import inspect -import os -from pathlib import Path - -from tomlkit import dumps, parse - -from gradio.blocks import BlockContext -from gradio.components import Component - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description="Description of your program") - parser.add_argument("-m", "--mode", help="Build mode or dev mode") - args = parser.parse_args() - - with open("../pyproject.toml") as f: - pyproject_source = f.read() - - pyproject_toml = parse(pyproject_source) - if "gradio custom component" not in pyproject_toml["project"]["keywords"]: - exit(0) - - module_name = pyproject_toml["project"]["name"] - module = importlib.import_module(module_name) - - artifacts: list[str] = pyproject_toml["tool"]["hatch"]["build"]["artifacts"] - - def get_relative_path(path): - return ( - os.path.abspath(Path(__file__).parent / path) - .replace(os.path.abspath(os.getcwd()), "") - .lstrip("/") - ) - - for name in dir(module): - value = getattr(module, name) - if name.startswith("__"): - continue - - if inspect.isclass(value) and ( - issubclass(value, BlockContext) or issubclass(value, Component) - ): - file_location = Path(inspect.getfile(value)).parent - - found = [ - x - for x in artifacts - if get_relative_path(Path("..") 
/ x) - == get_relative_path(file_location / value.TEMPLATE_DIR) - ] - if len(found) == 0: - artifacts.append( - os.path.abspath(file_location / value.TEMPLATE_DIR) - .replace(os.path.abspath(Path("..")), "") - .lstrip("/") - ) - - print( - f"{name}~|~|~|~{os.path.abspath(file_location / value.TEMPLATE_DIR)}~|~|~|~{os.path.abspath(file_location / value.FRONTEND_DIR)}~|~|~|~{value.get_component_class_id()}" - ) - continue - - if args.mode == "build": - pyproject_toml["tool"]["hatch"]["build"]["artifacts"] = artifacts - - with open("../pyproject.toml", "w") as f: - f.write(dumps(pyproject_toml)) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-212ed57c.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-212ed57c.js deleted file mode 100644 index 9e532845dce6d1740286cb576c83244f591e74b9..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-212ed57c.js +++ /dev/null @@ -1,2 +0,0 @@ -const{SvelteComponent:c,append:u,attr:d,detach:g,element:o,init:v,insert:r,noop:f,safe_not_equal:y,set_data:m,text:b,toggle_class:i}=window.__gradio__svelte__internal;function h(a){let e,n;return{c(){e=o("div"),n=b(a[0]),d(e,"class","svelte-1ayixqk"),i(e,"table",a[1]==="table"),i(e,"gallery",a[1]==="gallery"),i(e,"selected",a[2])},m(t,l){r(t,e,l),u(e,n)},p(t,[l]){l&1&&m(n,t[0]),l&2&&i(e,"table",t[1]==="table"),l&2&&i(e,"gallery",t[1]==="gallery"),l&4&&i(e,"selected",t[2])},i:f,o:f,d(t){t&&g(e)}}}function q(a,e,n){let{value:t}=e,{type:l}=e,{selected:_=!1}=e;return a.$$set=s=>{"value"in s&&n(0,t=s.value),"type"in s&&n(1,l=s.type),"selected"in s&&n(2,_=s.selected)},[t,l,_]}class w extends c{constructor(e){super(),v(this,e,q,h,y,{value:0,type:1,selected:2})}}export{w as default}; -//# sourceMappingURL=Example-212ed57c.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index-76c3ee3f.css b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index-76c3ee3f.css deleted file mode 100644 index 8853167b33fc5683d52480c72c2356484cc74f83..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index-76c3ee3f.css +++ /dev/null @@ -1 +0,0 @@ -label.svelte-pjtc3.svelte-pjtc3:not(.container),label.svelte-pjtc3:not(.container)>input.svelte-pjtc3{height:100%;border:none}.container.svelte-pjtc3>input.svelte-pjtc3{border:var(--input-border-width) solid var(--input-border-color);border-radius:var(--input-radius)}input[type=number].svelte-pjtc3.svelte-pjtc3{display:block;position:relative;outline:none!important;box-shadow:var(--input-shadow);background:var(--input-background-fill);padding:var(--input-padding);width:100%;color:var(--body-text-color);font-size:var(--input-text-size);line-height:var(--line-sm)}input.svelte-pjtc3.svelte-pjtc3:disabled{-webkit-text-fill-color:var(--body-text-color);-webkit-opacity:1;opacity:1}input.svelte-pjtc3.svelte-pjtc3:focus{box-shadow:var(--input-shadow-focus);border-color:var(--input-border-color-focus)}input.svelte-pjtc3.svelte-pjtc3::placeholder{color:var(--input-placeholder-color)}input.svelte-pjtc3.svelte-pjtc3:out-of-range{border:var(--input-border-width) solid var(--error-border-color)} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/checks/cpu_vx.c 
b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/checks/cpu_vx.c deleted file mode 100644 index 18fb7ef94a248d0de890bafa9cae67a5559e47f9..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/checks/cpu_vx.c +++ /dev/null @@ -1,16 +0,0 @@ -#if (__VEC__ < 10301) || (__ARCH__ < 11) - #error VX not supported -#endif - -#include -int main(int argc, char **argv) -{ - __vector double x = vec_abs(vec_xl(argc, (double*)argv)); - __vector double y = vec_load_len((double*)argv, (unsigned int)argc); - - x = vec_round(vec_ceil(x) + vec_floor(y)); - __vector bool long long m = vec_cmpge(x, y); - __vector long long i = vec_signed(vec_sel(x, y, m)); - - return (int)vec_extract(i, 0); -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/parameter/constant_compound.f90 b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/parameter/constant_compound.f90 deleted file mode 100644 index e51f5e9b2fb166a6b7d9cba57af03617024b7f2a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/parameter/constant_compound.f90 +++ /dev/null @@ -1,15 +0,0 @@ -! Check that parameters are correct intercepted. -! Constants with comma separations are commonly -! used, for instance Pi = 3._dp -subroutine foo_compound_int(x) - implicit none - integer, parameter :: ii = selected_int_kind(9) - integer(ii), intent(inout) :: x - dimension x(3) - integer(ii), parameter :: three = 3_ii - integer(ii), parameter :: two = 2_ii - integer(ii), parameter :: six = three * 1_ii * two - - x(1) = x(1) + x(2) + x(3) * six - return -end subroutine diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/string/scalar_string.f90 b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/string/scalar_string.f90 deleted file mode 100644 index f8f076172ab48ca4834d631b362f47ca374db5e4..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/string/scalar_string.f90 +++ /dev/null @@ -1,9 +0,0 @@ -MODULE string_test - - character(len=8) :: string - character string77 * 8 - - character(len=12), dimension(5,7) :: strarr - character strarr77(5,7) * 12 - -END MODULE string_test diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/timedeltas/test_setops.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/timedeltas/test_setops.py deleted file mode 100644 index cb6dce1e7ad80d0aa9d930dacad25666a4cd0d2e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/timedeltas/test_setops.py +++ /dev/null @@ -1,252 +0,0 @@ -import numpy as np -import pytest - -import pandas as pd -from pandas import ( - Index, - TimedeltaIndex, - timedelta_range, -) -import pandas._testing as tm - -from pandas.tseries.offsets import Hour - - -class TestTimedeltaIndex: - def test_union(self): - i1 = timedelta_range("1day", periods=5) - i2 = timedelta_range("3day", periods=5) - result = i1.union(i2) - expected = timedelta_range("1day", periods=7) - tm.assert_index_equal(result, expected) - - i1 = Index(np.arange(0, 20, 2, dtype=np.int64)) - i2 = timedelta_range(start="1 day", periods=10, freq="D") - i1.union(i2) # Works - i2.union(i1) # 
Fails with "AttributeError: can't set attribute" - - def test_union_sort_false(self): - tdi = timedelta_range("1day", periods=5) - - left = tdi[3:] - right = tdi[:3] - - # Check that we are testing the desired code path - assert left._can_fast_union(right) - - result = left.union(right) - tm.assert_index_equal(result, tdi) - - result = left.union(right, sort=False) - expected = TimedeltaIndex(["4 Days", "5 Days", "1 Days", "2 Day", "3 Days"]) - tm.assert_index_equal(result, expected) - - def test_union_coverage(self): - idx = TimedeltaIndex(["3d", "1d", "2d"]) - ordered = TimedeltaIndex(idx.sort_values(), freq="infer") - result = ordered.union(idx) - tm.assert_index_equal(result, ordered) - - result = ordered[:0].union(ordered) - tm.assert_index_equal(result, ordered) - assert result.freq == ordered.freq - - def test_union_bug_1730(self): - rng_a = timedelta_range("1 day", periods=4, freq="3H") - rng_b = timedelta_range("1 day", periods=4, freq="4H") - - result = rng_a.union(rng_b) - exp = TimedeltaIndex(sorted(set(rng_a) | set(rng_b))) - tm.assert_index_equal(result, exp) - - def test_union_bug_1745(self): - left = TimedeltaIndex(["1 day 15:19:49.695000"]) - right = TimedeltaIndex( - ["2 day 13:04:21.322000", "1 day 15:27:24.873000", "1 day 15:31:05.350000"] - ) - - result = left.union(right) - exp = TimedeltaIndex(sorted(set(left) | set(right))) - tm.assert_index_equal(result, exp) - - def test_union_bug_4564(self): - left = timedelta_range("1 day", "30d") - right = left + pd.offsets.Minute(15) - - result = left.union(right) - exp = TimedeltaIndex(sorted(set(left) | set(right))) - tm.assert_index_equal(result, exp) - - def test_union_freq_infer(self): - # When taking the union of two TimedeltaIndexes, we infer - # a freq even if the arguments don't have freq. This matches - # DatetimeIndex behavior. 
- tdi = timedelta_range("1 Day", periods=5) - left = tdi[[0, 1, 3, 4]] - right = tdi[[2, 3, 1]] - - assert left.freq is None - assert right.freq is None - - result = left.union(right) - tm.assert_index_equal(result, tdi) - assert result.freq == "D" - - def test_intersection_bug_1708(self): - index_1 = timedelta_range("1 day", periods=4, freq="h") - index_2 = index_1 + pd.offsets.Hour(5) - - result = index_1.intersection(index_2) - assert len(result) == 0 - - index_1 = timedelta_range("1 day", periods=4, freq="h") - index_2 = index_1 + pd.offsets.Hour(1) - - result = index_1.intersection(index_2) - expected = timedelta_range("1 day 01:00:00", periods=3, freq="h") - tm.assert_index_equal(result, expected) - assert result.freq == expected.freq - - def test_intersection_equal(self, sort): - # GH 24471 Test intersection outcome given the sort keyword - # for equal indices intersection should return the original index - first = timedelta_range("1 day", periods=4, freq="h") - second = timedelta_range("1 day", periods=4, freq="h") - intersect = first.intersection(second, sort=sort) - if sort is None: - tm.assert_index_equal(intersect, second.sort_values()) - assert tm.equalContents(intersect, second) - - # Corner cases - inter = first.intersection(first, sort=sort) - assert inter is first - - @pytest.mark.parametrize("period_1, period_2", [(0, 4), (4, 0)]) - def test_intersection_zero_length(self, period_1, period_2, sort): - # GH 24471 test for non overlap the intersection should be zero length - index_1 = timedelta_range("1 day", periods=period_1, freq="h") - index_2 = timedelta_range("1 day", periods=period_2, freq="h") - expected = timedelta_range("1 day", periods=0, freq="h") - result = index_1.intersection(index_2, sort=sort) - tm.assert_index_equal(result, expected) - - def test_zero_length_input_index(self, sort): - # GH 24966 test for 0-len intersections are copied - index_1 = timedelta_range("1 day", periods=0, freq="h") - index_2 = timedelta_range("1 day", periods=3, freq="h") - result = index_1.intersection(index_2, sort=sort) - assert index_1 is not result - assert index_2 is not result - tm.assert_copy(result, index_1) - - @pytest.mark.parametrize( - "rng, expected", - # if target has the same name, it is preserved - [ - ( - timedelta_range("1 day", periods=5, freq="h", name="idx"), - timedelta_range("1 day", periods=4, freq="h", name="idx"), - ), - # if target name is different, it will be reset - ( - timedelta_range("1 day", periods=5, freq="h", name="other"), - timedelta_range("1 day", periods=4, freq="h", name=None), - ), - # if no overlap exists return empty index - ( - timedelta_range("1 day", periods=10, freq="h", name="idx")[5:], - TimedeltaIndex([], freq="h", name="idx"), - ), - ], - ) - def test_intersection(self, rng, expected, sort): - # GH 4690 (with tz) - base = timedelta_range("1 day", periods=4, freq="h", name="idx") - result = base.intersection(rng, sort=sort) - if sort is None: - expected = expected.sort_values() - tm.assert_index_equal(result, expected) - assert result.name == expected.name - assert result.freq == expected.freq - - @pytest.mark.parametrize( - "rng, expected", - # part intersection works - [ - ( - TimedeltaIndex(["5 hour", "2 hour", "4 hour", "9 hour"], name="idx"), - TimedeltaIndex(["2 hour", "4 hour"], name="idx"), - ), - # reordered part intersection - ( - TimedeltaIndex(["2 hour", "5 hour", "5 hour", "1 hour"], name="other"), - TimedeltaIndex(["1 hour", "2 hour"], name=None), - ), - # reversed index - ( - TimedeltaIndex(["1 hour", "2 hour", "4 
hour", "3 hour"], name="idx")[ - ::-1 - ], - TimedeltaIndex(["1 hour", "2 hour", "4 hour", "3 hour"], name="idx"), - ), - ], - ) - def test_intersection_non_monotonic(self, rng, expected, sort): - # 24471 non-monotonic - base = TimedeltaIndex(["1 hour", "2 hour", "4 hour", "3 hour"], name="idx") - result = base.intersection(rng, sort=sort) - if sort is None: - expected = expected.sort_values() - tm.assert_index_equal(result, expected) - assert result.name == expected.name - - # if reversed order, frequency is still the same - if all(base == rng[::-1]) and sort is None: - assert isinstance(result.freq, Hour) - else: - assert result.freq is None - - -class TestTimedeltaIndexDifference: - def test_difference_freq(self, sort): - # GH14323: Difference of TimedeltaIndex should not preserve frequency - - index = timedelta_range("0 days", "5 days", freq="D") - - other = timedelta_range("1 days", "4 days", freq="D") - expected = TimedeltaIndex(["0 days", "5 days"], freq=None) - idx_diff = index.difference(other, sort) - tm.assert_index_equal(idx_diff, expected) - tm.assert_attr_equal("freq", idx_diff, expected) - - other = timedelta_range("2 days", "5 days", freq="D") - idx_diff = index.difference(other, sort) - expected = TimedeltaIndex(["0 days", "1 days"], freq=None) - tm.assert_index_equal(idx_diff, expected) - tm.assert_attr_equal("freq", idx_diff, expected) - - def test_difference_sort(self, sort): - index = TimedeltaIndex( - ["5 days", "3 days", "2 days", "4 days", "1 days", "0 days"] - ) - - other = timedelta_range("1 days", "4 days", freq="D") - idx_diff = index.difference(other, sort) - - expected = TimedeltaIndex(["5 days", "0 days"], freq=None) - - if sort is None: - expected = expected.sort_values() - - tm.assert_index_equal(idx_diff, expected) - tm.assert_attr_equal("freq", idx_diff, expected) - - other = timedelta_range("2 days", "5 days", freq="D") - idx_diff = index.difference(other, sort) - expected = TimedeltaIndex(["1 days", "0 days"], freq=None) - - if sort is None: - expected = expected.sort_values() - - tm.assert_index_equal(idx_diff, expected) - tm.assert_attr_equal("freq", idx_diff, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/test_parquet.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/test_parquet.py deleted file mode 100644 index 1d68f12270b55e5c3b6dc5f5cc770e4356f9d66e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/test_parquet.py +++ /dev/null @@ -1,1427 +0,0 @@ -""" test parquet compat """ -import datetime -from decimal import Decimal -from io import BytesIO -import os -import pathlib - -import numpy as np -import pytest - -from pandas._config import ( - get_option, - using_copy_on_write, -) - -from pandas.compat import is_platform_windows -from pandas.compat.pyarrow import ( - pa_version_under7p0, - pa_version_under8p0, - pa_version_under11p0, - pa_version_under13p0, -) - -import pandas as pd -import pandas._testing as tm -from pandas.util.version import Version - -from pandas.io.parquet import ( - FastParquetImpl, - PyArrowImpl, - get_engine, - read_parquet, - to_parquet, -) - -try: - import pyarrow - - _HAVE_PYARROW = True -except ImportError: - _HAVE_PYARROW = False - -try: - import fastparquet - - _HAVE_FASTPARQUET = True -except ImportError: - _HAVE_FASTPARQUET = False - - -# TODO(ArrayManager) fastparquet relies on BlockManager internals - -pytestmark = pytest.mark.filterwarnings( - 
"ignore:DataFrame._data is deprecated:FutureWarning" -) - - -# setup engines & skips -@pytest.fixture( - params=[ - pytest.param( - "fastparquet", - marks=pytest.mark.skipif( - not _HAVE_FASTPARQUET or get_option("mode.data_manager") == "array", - reason="fastparquet is not installed or ArrayManager is used", - ), - ), - pytest.param( - "pyarrow", - marks=pytest.mark.skipif( - not _HAVE_PYARROW, reason="pyarrow is not installed" - ), - ), - ] -) -def engine(request): - return request.param - - -@pytest.fixture -def pa(): - if not _HAVE_PYARROW: - pytest.skip("pyarrow is not installed") - return "pyarrow" - - -@pytest.fixture -def fp(): - if not _HAVE_FASTPARQUET: - pytest.skip("fastparquet is not installed") - elif get_option("mode.data_manager") == "array": - pytest.skip("ArrayManager is not supported with fastparquet") - return "fastparquet" - - -@pytest.fixture -def df_compat(): - return pd.DataFrame({"A": [1, 2, 3], "B": "foo"}) - - -@pytest.fixture -def df_cross_compat(): - df = pd.DataFrame( - { - "a": list("abc"), - "b": list(range(1, 4)), - # 'c': np.arange(3, 6).astype('u1'), - "d": np.arange(4.0, 7.0, dtype="float64"), - "e": [True, False, True], - "f": pd.date_range("20130101", periods=3), - # 'g': pd.date_range('20130101', periods=3, - # tz='US/Eastern'), - # 'h': pd.date_range('20130101', periods=3, freq='ns') - } - ) - return df - - -@pytest.fixture -def df_full(): - return pd.DataFrame( - { - "string": list("abc"), - "string_with_nan": ["a", np.nan, "c"], - "string_with_none": ["a", None, "c"], - "bytes": [b"foo", b"bar", b"baz"], - "unicode": ["foo", "bar", "baz"], - "int": list(range(1, 4)), - "uint": np.arange(3, 6).astype("u1"), - "float": np.arange(4.0, 7.0, dtype="float64"), - "float_with_nan": [2.0, np.nan, 3.0], - "bool": [True, False, True], - "datetime": pd.date_range("20130101", periods=3), - "datetime_with_nat": [ - pd.Timestamp("20130101"), - pd.NaT, - pd.Timestamp("20130103"), - ], - } - ) - - -@pytest.fixture( - params=[ - datetime.datetime.now(datetime.timezone.utc), - datetime.datetime.now(datetime.timezone.min), - datetime.datetime.now(datetime.timezone.max), - datetime.datetime.strptime("2019-01-04T16:41:24+0200", "%Y-%m-%dT%H:%M:%S%z"), - datetime.datetime.strptime("2019-01-04T16:41:24+0215", "%Y-%m-%dT%H:%M:%S%z"), - datetime.datetime.strptime("2019-01-04T16:41:24-0200", "%Y-%m-%dT%H:%M:%S%z"), - datetime.datetime.strptime("2019-01-04T16:41:24-0215", "%Y-%m-%dT%H:%M:%S%z"), - ] -) -def timezone_aware_date_list(request): - return request.param - - -def check_round_trip( - df, - engine=None, - path=None, - write_kwargs=None, - read_kwargs=None, - expected=None, - check_names=True, - check_like=False, - check_dtype=True, - repeat=2, -): - """Verify parquet serializer and deserializer produce the same results. - - Performs a pandas to disk and disk to pandas round trip, - then compares the 2 resulting DataFrames to verify equality. - - Parameters - ---------- - df: Dataframe - engine: str, optional - 'pyarrow' or 'fastparquet' - path: str, optional - write_kwargs: dict of str:str, optional - read_kwargs: dict of str:str, optional - expected: DataFrame, optional - Expected deserialization result, otherwise will be equal to `df` - check_names: list of str, optional - Closed set of column names to be compared - check_like: bool, optional - If True, ignore the order of index & columns. 
- repeat: int, optional - How many times to repeat the test - """ - write_kwargs = write_kwargs or {"compression": None} - read_kwargs = read_kwargs or {} - - if expected is None: - expected = df - - if engine: - write_kwargs["engine"] = engine - read_kwargs["engine"] = engine - - def compare(repeat): - for _ in range(repeat): - df.to_parquet(path, **write_kwargs) - actual = read_parquet(path, **read_kwargs) - - if "string_with_nan" in expected: - expected.loc[1, "string_with_nan"] = None - tm.assert_frame_equal( - expected, - actual, - check_names=check_names, - check_like=check_like, - check_dtype=check_dtype, - ) - - if path is None: - with tm.ensure_clean() as path: - compare(repeat) - else: - compare(repeat) - - -def check_partition_names(path, expected): - """Check partitions of a parquet file are as expected. - - Parameters - ---------- - path: str - Path of the dataset. - expected: iterable of str - Expected partition names. - """ - if pa_version_under7p0: - import pyarrow.parquet as pq - - dataset = pq.ParquetDataset(path, validate_schema=False) - assert len(dataset.partitions.partition_names) == len(expected) - assert dataset.partitions.partition_names == set(expected) - else: - import pyarrow.dataset as ds - - dataset = ds.dataset(path, partitioning="hive") - assert dataset.partitioning.schema.names == expected - - -def test_invalid_engine(df_compat): - msg = "engine must be one of 'pyarrow', 'fastparquet'" - with pytest.raises(ValueError, match=msg): - check_round_trip(df_compat, "foo", "bar") - - -def test_options_py(df_compat, pa): - # use the set option - - with pd.option_context("io.parquet.engine", "pyarrow"): - check_round_trip(df_compat) - - -def test_options_fp(df_compat, fp): - # use the set option - - with pd.option_context("io.parquet.engine", "fastparquet"): - check_round_trip(df_compat) - - -def test_options_auto(df_compat, fp, pa): - # use the set option - - with pd.option_context("io.parquet.engine", "auto"): - check_round_trip(df_compat) - - -def test_options_get_engine(fp, pa): - assert isinstance(get_engine("pyarrow"), PyArrowImpl) - assert isinstance(get_engine("fastparquet"), FastParquetImpl) - - with pd.option_context("io.parquet.engine", "pyarrow"): - assert isinstance(get_engine("auto"), PyArrowImpl) - assert isinstance(get_engine("pyarrow"), PyArrowImpl) - assert isinstance(get_engine("fastparquet"), FastParquetImpl) - - with pd.option_context("io.parquet.engine", "fastparquet"): - assert isinstance(get_engine("auto"), FastParquetImpl) - assert isinstance(get_engine("pyarrow"), PyArrowImpl) - assert isinstance(get_engine("fastparquet"), FastParquetImpl) - - with pd.option_context("io.parquet.engine", "auto"): - assert isinstance(get_engine("auto"), PyArrowImpl) - assert isinstance(get_engine("pyarrow"), PyArrowImpl) - assert isinstance(get_engine("fastparquet"), FastParquetImpl) - - -def test_get_engine_auto_error_message(): - # Expect different error messages from get_engine(engine="auto") - # if engines aren't installed vs. are installed but bad version - from pandas.compat._optional import VERSIONS - - # Do we have engines installed, but a bad version of them? - pa_min_ver = VERSIONS.get("pyarrow") - fp_min_ver = VERSIONS.get("fastparquet") - have_pa_bad_version = ( - False - if not _HAVE_PYARROW - else Version(pyarrow.__version__) < Version(pa_min_ver) - ) - have_fp_bad_version = ( - False - if not _HAVE_FASTPARQUET - else Version(fastparquet.__version__) < Version(fp_min_ver) - ) - # Do we have usable engines installed? 
- have_usable_pa = _HAVE_PYARROW and not have_pa_bad_version - have_usable_fp = _HAVE_FASTPARQUET and not have_fp_bad_version - - if not have_usable_pa and not have_usable_fp: - # No usable engines found. - if have_pa_bad_version: - match = f"Pandas requires version .{pa_min_ver}. or newer of .pyarrow." - with pytest.raises(ImportError, match=match): - get_engine("auto") - else: - match = "Missing optional dependency .pyarrow." - with pytest.raises(ImportError, match=match): - get_engine("auto") - - if have_fp_bad_version: - match = f"Pandas requires version .{fp_min_ver}. or newer of .fastparquet." - with pytest.raises(ImportError, match=match): - get_engine("auto") - else: - match = "Missing optional dependency .fastparquet." - with pytest.raises(ImportError, match=match): - get_engine("auto") - - -def test_cross_engine_pa_fp(df_cross_compat, pa, fp): - # cross-compat with differing reading/writing engines - - df = df_cross_compat - with tm.ensure_clean() as path: - df.to_parquet(path, engine=pa, compression=None) - - result = read_parquet(path, engine=fp) - tm.assert_frame_equal(result, df) - - result = read_parquet(path, engine=fp, columns=["a", "d"]) - tm.assert_frame_equal(result, df[["a", "d"]]) - - -def test_cross_engine_fp_pa(df_cross_compat, pa, fp): - # cross-compat with differing reading/writing engines - df = df_cross_compat - with tm.ensure_clean() as path: - df.to_parquet(path, engine=fp, compression=None) - - result = read_parquet(path, engine=pa) - tm.assert_frame_equal(result, df) - - result = read_parquet(path, engine=pa, columns=["a", "d"]) - tm.assert_frame_equal(result, df[["a", "d"]]) - - -class Base: - def check_error_on_write(self, df, engine, exc, err_msg): - # check that we are raising the exception on writing - with tm.ensure_clean() as path: - with pytest.raises(exc, match=err_msg): - to_parquet(df, path, engine, compression=None) - - def check_external_error_on_write(self, df, engine, exc): - # check that an external library is raising the exception on writing - with tm.ensure_clean() as path: - with tm.external_error_raised(exc): - to_parquet(df, path, engine, compression=None) - - @pytest.mark.network - @pytest.mark.single_cpu - def test_parquet_read_from_url(self, httpserver, datapath, df_compat, engine): - if engine != "auto": - pytest.importorskip(engine) - with open(datapath("io", "data", "parquet", "simple.parquet"), mode="rb") as f: - httpserver.serve_content(content=f.read()) - df = read_parquet(httpserver.url) - tm.assert_frame_equal(df, df_compat) - - -class TestBasic(Base): - def test_error(self, engine): - for obj in [ - pd.Series([1, 2, 3]), - 1, - "foo", - pd.Timestamp("20130101"), - np.array([1, 2, 3]), - ]: - msg = "to_parquet only supports IO with DataFrames" - self.check_error_on_write(obj, engine, ValueError, msg) - - def test_columns_dtypes(self, engine): - df = pd.DataFrame({"string": list("abc"), "int": list(range(1, 4))}) - - # unicode - df.columns = ["foo", "bar"] - check_round_trip(df, engine) - - @pytest.mark.parametrize("compression", [None, "gzip", "snappy", "brotli"]) - def test_compression(self, engine, compression): - df = pd.DataFrame({"A": [1, 2, 3]}) - check_round_trip(df, engine, write_kwargs={"compression": compression}) - - def test_read_columns(self, engine): - # GH18154 - df = pd.DataFrame({"string": list("abc"), "int": list(range(1, 4))}) - - expected = pd.DataFrame({"string": list("abc")}) - check_round_trip( - df, engine, expected=expected, read_kwargs={"columns": ["string"]} - ) - - def test_read_filters(self, 
engine, tmp_path): - df = pd.DataFrame( - { - "int": list(range(4)), - "part": list("aabb"), - } - ) - - expected = pd.DataFrame({"int": [0, 1]}) - check_round_trip( - df, - engine, - path=tmp_path, - expected=expected, - write_kwargs={"partition_cols": ["part"]}, - read_kwargs={"filters": [("part", "==", "a")], "columns": ["int"]}, - repeat=1, - ) - - def test_write_index(self, engine, using_copy_on_write, request): - check_names = engine != "fastparquet" - if using_copy_on_write and engine == "fastparquet": - request.node.add_marker( - pytest.mark.xfail(reason="fastparquet write into index") - ) - - df = pd.DataFrame({"A": [1, 2, 3]}) - check_round_trip(df, engine) - - indexes = [ - [2, 3, 4], - pd.date_range("20130101", periods=3), - list("abc"), - [1, 3, 4], - ] - # non-default index - for index in indexes: - df.index = index - if isinstance(index, pd.DatetimeIndex): - df.index = df.index._with_freq(None) # freq doesn't round-trip - check_round_trip(df, engine, check_names=check_names) - - # index with meta-data - df.index = [0, 1, 2] - df.index.name = "foo" - check_round_trip(df, engine) - - def test_write_multiindex(self, pa): - # Not supported in fastparquet as of 0.1.3 or older pyarrow version - engine = pa - - df = pd.DataFrame({"A": [1, 2, 3]}) - index = pd.MultiIndex.from_tuples([("a", 1), ("a", 2), ("b", 1)]) - df.index = index - check_round_trip(df, engine) - - def test_multiindex_with_columns(self, pa): - engine = pa - dates = pd.date_range("01-Jan-2018", "01-Dec-2018", freq="MS") - df = pd.DataFrame( - np.random.default_rng(2).standard_normal((2 * len(dates), 3)), - columns=list("ABC"), - ) - index1 = pd.MultiIndex.from_product( - [["Level1", "Level2"], dates], names=["level", "date"] - ) - index2 = index1.copy(names=None) - for index in [index1, index2]: - df.index = index - - check_round_trip(df, engine) - check_round_trip( - df, engine, read_kwargs={"columns": ["A", "B"]}, expected=df[["A", "B"]] - ) - - def test_write_ignoring_index(self, engine): - # ENH 20768 - # Ensure index=False omits the index from the written Parquet file. - df = pd.DataFrame({"a": [1, 2, 3], "b": ["q", "r", "s"]}) - - write_kwargs = {"compression": None, "index": False} - - # Because we're dropping the index, we expect the loaded dataframe to - # have the default integer index. - expected = df.reset_index(drop=True) - - check_round_trip(df, engine, write_kwargs=write_kwargs, expected=expected) - - # Ignore custom index - df = pd.DataFrame( - {"a": [1, 2, 3], "b": ["q", "r", "s"]}, index=["zyx", "wvu", "tsr"] - ) - - check_round_trip(df, engine, write_kwargs=write_kwargs, expected=expected) - - # Ignore multi-indexes as well. - arrays = [ - ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"], - ["one", "two", "one", "two", "one", "two", "one", "two"], - ] - df = pd.DataFrame( - {"one": list(range(8)), "two": [-i for i in range(8)]}, index=arrays - ) - - expected = df.reset_index(drop=True) - check_round_trip(df, engine, write_kwargs=write_kwargs, expected=expected) - - def test_write_column_multiindex(self, engine): - # Not able to write column multi-indexes with non-string column names. 
- mi_columns = pd.MultiIndex.from_tuples([("a", 1), ("a", 2), ("b", 1)]) - df = pd.DataFrame( - np.random.default_rng(2).standard_normal((4, 3)), columns=mi_columns - ) - - if engine == "fastparquet": - self.check_error_on_write( - df, engine, TypeError, "Column name must be a string" - ) - elif engine == "pyarrow": - check_round_trip(df, engine) - - def test_write_column_multiindex_nonstring(self, engine): - # GH #34777 - - # Not able to write column multi-indexes with non-string column names - arrays = [ - ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"], - [1, 2, 1, 2, 1, 2, 1, 2], - ] - df = pd.DataFrame( - np.random.default_rng(2).standard_normal((8, 8)), columns=arrays - ) - df.columns.names = ["Level1", "Level2"] - if engine == "fastparquet": - self.check_error_on_write(df, engine, ValueError, "Column name") - elif engine == "pyarrow": - check_round_trip(df, engine) - - def test_write_column_multiindex_string(self, pa): - # GH #34777 - # Not supported in fastparquet as of 0.1.3 - engine = pa - - # Write column multi-indexes with string column names - arrays = [ - ["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"], - ["one", "two", "one", "two", "one", "two", "one", "two"], - ] - df = pd.DataFrame( - np.random.default_rng(2).standard_normal((8, 8)), columns=arrays - ) - df.columns.names = ["ColLevel1", "ColLevel2"] - - check_round_trip(df, engine) - - def test_write_column_index_string(self, pa): - # GH #34777 - # Not supported in fastparquet as of 0.1.3 - engine = pa - - # Write column indexes with string column names - arrays = ["bar", "baz", "foo", "qux"] - df = pd.DataFrame( - np.random.default_rng(2).standard_normal((8, 4)), columns=arrays - ) - df.columns.name = "StringCol" - - check_round_trip(df, engine) - - def test_write_column_index_nonstring(self, engine): - # GH #34777 - - # Write column indexes with string column names - arrays = [1, 2, 3, 4] - df = pd.DataFrame( - np.random.default_rng(2).standard_normal((8, 4)), columns=arrays - ) - df.columns.name = "NonStringCol" - if engine == "fastparquet": - self.check_error_on_write( - df, engine, TypeError, "Column name must be a string" - ) - else: - check_round_trip(df, engine) - - @pytest.mark.skipif(pa_version_under7p0, reason="minimum pyarrow not installed") - def test_dtype_backend(self, engine, request): - import pyarrow.parquet as pq - - if engine == "fastparquet": - # We are manually disabling fastparquet's - # nullable dtype support pending discussion - mark = pytest.mark.xfail( - reason="Fastparquet nullable dtype support is disabled" - ) - request.node.add_marker(mark) - - table = pyarrow.table( - { - "a": pyarrow.array([1, 2, 3, None], "int64"), - "b": pyarrow.array([1, 2, 3, None], "uint8"), - "c": pyarrow.array(["a", "b", "c", None]), - "d": pyarrow.array([True, False, True, None]), - # Test that nullable dtypes used even in absence of nulls - "e": pyarrow.array([1, 2, 3, 4], "int64"), - # GH 45694 - "f": pyarrow.array([1.0, 2.0, 3.0, None], "float32"), - "g": pyarrow.array([1.0, 2.0, 3.0, None], "float64"), - } - ) - with tm.ensure_clean() as path: - # write manually with pyarrow to write integers - pq.write_table(table, path) - result1 = read_parquet(path, engine=engine) - result2 = read_parquet(path, engine=engine, dtype_backend="numpy_nullable") - - assert result1["a"].dtype == np.dtype("float64") - expected = pd.DataFrame( - { - "a": pd.array([1, 2, 3, None], dtype="Int64"), - "b": pd.array([1, 2, 3, None], dtype="UInt8"), - "c": pd.array(["a", "b", "c", None], dtype="string"), - "d": 
pd.array([True, False, True, None], dtype="boolean"), - "e": pd.array([1, 2, 3, 4], dtype="Int64"), - "f": pd.array([1.0, 2.0, 3.0, None], dtype="Float32"), - "g": pd.array([1.0, 2.0, 3.0, None], dtype="Float64"), - } - ) - if engine == "fastparquet": - # Fastparquet doesn't support string columns yet - # Only int and boolean - result2 = result2.drop("c", axis=1) - expected = expected.drop("c", axis=1) - tm.assert_frame_equal(result2, expected) - - @pytest.mark.parametrize( - "dtype", - [ - "Int64", - "UInt8", - "boolean", - "object", - "datetime64[ns, UTC]", - "float", - "period[D]", - "Float64", - "string", - ], - ) - def test_read_empty_array(self, pa, dtype): - # GH #41241 - df = pd.DataFrame( - { - "value": pd.array([], dtype=dtype), - } - ) - # GH 45694 - expected = None - if dtype == "float": - expected = pd.DataFrame( - { - "value": pd.array([], dtype="Float64"), - } - ) - check_round_trip( - df, pa, read_kwargs={"dtype_backend": "numpy_nullable"}, expected=expected - ) - - -class TestParquetPyArrow(Base): - def test_basic(self, pa, df_full): - df = df_full - - # additional supported types for pyarrow - dti = pd.date_range("20130101", periods=3, tz="Europe/Brussels") - dti = dti._with_freq(None) # freq doesn't round-trip - df["datetime_tz"] = dti - df["bool_with_none"] = [True, None, True] - - check_round_trip(df, pa) - - def test_basic_subset_columns(self, pa, df_full): - # GH18628 - - df = df_full - # additional supported types for pyarrow - df["datetime_tz"] = pd.date_range("20130101", periods=3, tz="Europe/Brussels") - - check_round_trip( - df, - pa, - expected=df[["string", "int"]], - read_kwargs={"columns": ["string", "int"]}, - ) - - def test_to_bytes_without_path_or_buf_provided(self, pa, df_full): - # GH 37105 - msg = "Mismatched null-like values nan and None found" - warn = None - if using_copy_on_write(): - warn = FutureWarning - - buf_bytes = df_full.to_parquet(engine=pa) - assert isinstance(buf_bytes, bytes) - - buf_stream = BytesIO(buf_bytes) - res = read_parquet(buf_stream) - - expected = df_full.copy(deep=False) - expected.loc[1, "string_with_nan"] = None - with tm.assert_produces_warning(warn, match=msg): - tm.assert_frame_equal(df_full, res) - - def test_duplicate_columns(self, pa): - # not currently able to handle duplicate columns - df = pd.DataFrame(np.arange(12).reshape(4, 3), columns=list("aaa")).copy() - self.check_error_on_write(df, pa, ValueError, "Duplicate column names found") - - def test_timedelta(self, pa): - df = pd.DataFrame({"a": pd.timedelta_range("1 day", periods=3)}) - if pa_version_under8p0: - self.check_external_error_on_write(df, pa, NotImplementedError) - else: - check_round_trip(df, pa) - - def test_unsupported(self, pa): - # mixed python objects - df = pd.DataFrame({"a": ["a", 1, 2.0]}) - # pyarrow 0.11 raises ArrowTypeError - # older pyarrows raise ArrowInvalid - self.check_external_error_on_write(df, pa, pyarrow.ArrowException) - - def test_unsupported_float16(self, pa): - # #44847, #44914 - # Not able to write float 16 column using pyarrow. 
- data = np.arange(2, 10, dtype=np.float16) - df = pd.DataFrame(data=data, columns=["fp16"]) - self.check_external_error_on_write(df, pa, pyarrow.ArrowException) - - @pytest.mark.xfail( - is_platform_windows(), - reason=( - "PyArrow does not cleanup of partial files dumps when unsupported " - "dtypes are passed to_parquet function in windows" - ), - ) - @pytest.mark.parametrize("path_type", [str, pathlib.Path]) - def test_unsupported_float16_cleanup(self, pa, path_type): - # #44847, #44914 - # Not able to write float 16 column using pyarrow. - # Tests cleanup by pyarrow in case of an error - data = np.arange(2, 10, dtype=np.float16) - df = pd.DataFrame(data=data, columns=["fp16"]) - - with tm.ensure_clean() as path_str: - path = path_type(path_str) - with tm.external_error_raised(pyarrow.ArrowException): - df.to_parquet(path=path, engine=pa) - assert not os.path.isfile(path) - - def test_categorical(self, pa): - # supported in >= 0.7.0 - df = pd.DataFrame() - df["a"] = pd.Categorical(list("abcdef")) - - # test for null, out-of-order values, and unobserved category - df["b"] = pd.Categorical( - ["bar", "foo", "foo", "bar", None, "bar"], - dtype=pd.CategoricalDtype(["foo", "bar", "baz"]), - ) - - # test for ordered flag - df["c"] = pd.Categorical( - ["a", "b", "c", "a", "c", "b"], categories=["b", "c", "d"], ordered=True - ) - - check_round_trip(df, pa) - - @pytest.mark.single_cpu - def test_s3_roundtrip_explicit_fs(self, df_compat, s3_public_bucket, pa, s3so): - s3fs = pytest.importorskip("s3fs") - s3 = s3fs.S3FileSystem(**s3so) - kw = {"filesystem": s3} - check_round_trip( - df_compat, - pa, - path=f"{s3_public_bucket.name}/pyarrow.parquet", - read_kwargs=kw, - write_kwargs=kw, - ) - - @pytest.mark.single_cpu - def test_s3_roundtrip(self, df_compat, s3_public_bucket, pa, s3so): - # GH #19134 - s3so = {"storage_options": s3so} - check_round_trip( - df_compat, - pa, - path=f"s3://{s3_public_bucket.name}/pyarrow.parquet", - read_kwargs=s3so, - write_kwargs=s3so, - ) - - @pytest.mark.single_cpu - @pytest.mark.parametrize( - "partition_col", - [ - ["A"], - [], - ], - ) - def test_s3_roundtrip_for_dir( - self, df_compat, s3_public_bucket, pa, partition_col, s3so - ): - pytest.importorskip("s3fs") - # GH #26388 - expected_df = df_compat.copy() - - # GH #35791 - if partition_col: - expected_df = expected_df.astype(dict.fromkeys(partition_col, np.int32)) - partition_col_type = "category" - - expected_df[partition_col] = expected_df[partition_col].astype( - partition_col_type - ) - - check_round_trip( - df_compat, - pa, - expected=expected_df, - path=f"s3://{s3_public_bucket.name}/parquet_dir", - read_kwargs={"storage_options": s3so}, - write_kwargs={ - "partition_cols": partition_col, - "compression": None, - "storage_options": s3so, - }, - check_like=True, - repeat=1, - ) - - def test_read_file_like_obj_support(self, df_compat): - pytest.importorskip("pyarrow") - buffer = BytesIO() - df_compat.to_parquet(buffer) - df_from_buf = read_parquet(buffer) - tm.assert_frame_equal(df_compat, df_from_buf) - - def test_expand_user(self, df_compat, monkeypatch): - pytest.importorskip("pyarrow") - monkeypatch.setenv("HOME", "TestingUser") - monkeypatch.setenv("USERPROFILE", "TestingUser") - with pytest.raises(OSError, match=r".*TestingUser.*"): - read_parquet("~/file.parquet") - with pytest.raises(OSError, match=r".*TestingUser.*"): - df_compat.to_parquet("~/file.parquet") - - def test_partition_cols_supported(self, tmp_path, pa, df_full): - # GH #23283 - partition_cols = ["bool", "int"] - df = df_full - 
df.to_parquet(tmp_path, partition_cols=partition_cols, compression=None) - check_partition_names(tmp_path, partition_cols) - assert read_parquet(tmp_path).shape == df.shape - - def test_partition_cols_string(self, tmp_path, pa, df_full): - # GH #27117 - partition_cols = "bool" - partition_cols_list = [partition_cols] - df = df_full - df.to_parquet(tmp_path, partition_cols=partition_cols, compression=None) - check_partition_names(tmp_path, partition_cols_list) - assert read_parquet(tmp_path).shape == df.shape - - @pytest.mark.parametrize( - "path_type", [str, lambda x: x], ids=["string", "pathlib.Path"] - ) - def test_partition_cols_pathlib(self, tmp_path, pa, df_compat, path_type): - # GH 35902 - - partition_cols = "B" - partition_cols_list = [partition_cols] - df = df_compat - - path = path_type(tmp_path) - df.to_parquet(path, partition_cols=partition_cols_list) - assert read_parquet(path).shape == df.shape - - def test_empty_dataframe(self, pa): - # GH #27339 - df = pd.DataFrame(index=[], columns=[]) - check_round_trip(df, pa) - - def test_write_with_schema(self, pa): - import pyarrow - - df = pd.DataFrame({"x": [0, 1]}) - schema = pyarrow.schema([pyarrow.field("x", type=pyarrow.bool_())]) - out_df = df.astype(bool) - check_round_trip(df, pa, write_kwargs={"schema": schema}, expected=out_df) - - def test_additional_extension_arrays(self, pa): - # test additional ExtensionArrays that are supported through the - # __arrow_array__ protocol - pytest.importorskip("pyarrow") - df = pd.DataFrame( - { - "a": pd.Series([1, 2, 3], dtype="Int64"), - "b": pd.Series([1, 2, 3], dtype="UInt32"), - "c": pd.Series(["a", None, "c"], dtype="string"), - } - ) - check_round_trip(df, pa) - - df = pd.DataFrame({"a": pd.Series([1, 2, 3, None], dtype="Int64")}) - check_round_trip(df, pa) - - def test_pyarrow_backed_string_array(self, pa, string_storage): - # test ArrowStringArray supported through the __arrow_array__ protocol - pytest.importorskip("pyarrow") - df = pd.DataFrame({"a": pd.Series(["a", None, "c"], dtype="string[pyarrow]")}) - with pd.option_context("string_storage", string_storage): - check_round_trip(df, pa, expected=df.astype(f"string[{string_storage}]")) - - def test_additional_extension_types(self, pa): - # test additional ExtensionArrays that are supported through the - # __arrow_array__ protocol + by defining a custom ExtensionType - pytest.importorskip("pyarrow") - df = pd.DataFrame( - { - "c": pd.IntervalIndex.from_tuples([(0, 1), (1, 2), (3, 4)]), - "d": pd.period_range("2012-01-01", periods=3, freq="D"), - # GH-45881 issue with interval with datetime64[ns] subtype - "e": pd.IntervalIndex.from_breaks( - pd.date_range("2012-01-01", periods=4, freq="D") - ), - } - ) - check_round_trip(df, pa) - - def test_timestamp_nanoseconds(self, pa): - # with version 2.6, pyarrow defaults to writing the nanoseconds, so - # this should work without error - # Note in previous pyarrows(<7.0.0), only the pseudo-version 2.0 was available - if not pa_version_under7p0: - ver = "2.6" - else: - ver = "2.0" - df = pd.DataFrame({"a": pd.date_range("2017-01-01", freq="1n", periods=10)}) - check_round_trip(df, pa, write_kwargs={"version": ver}) - - def test_timezone_aware_index(self, request, pa, timezone_aware_date_list): - if ( - not pa_version_under7p0 - and timezone_aware_date_list.tzinfo != datetime.timezone.utc - ): - request.node.add_marker( - pytest.mark.xfail( - reason="temporary skip this test until it is properly resolved: " - "https://github.com/pandas-dev/pandas/issues/37286" - ) - ) - idx = 5 * 
[timezone_aware_date_list] - df = pd.DataFrame(index=idx, data={"index_as_col": idx}) - - # see gh-36004 - # compare time(zone) values only, skip their class: - # pyarrow always creates fixed offset timezones using pytz.FixedOffset() - # even if it was datetime.timezone() originally - # - # technically they are the same: - # they both implement datetime.tzinfo - # they both wrap datetime.timedelta() - # this use-case sets the resolution to 1 minute - check_round_trip(df, pa, check_dtype=False) - - def test_filter_row_groups(self, pa): - # https://github.com/pandas-dev/pandas/issues/26551 - pytest.importorskip("pyarrow") - df = pd.DataFrame({"a": list(range(0, 3))}) - with tm.ensure_clean() as path: - df.to_parquet(path, pa) - result = read_parquet( - path, pa, filters=[("a", "==", 0)], use_legacy_dataset=False - ) - assert len(result) == 1 - - def test_read_parquet_manager(self, pa, using_array_manager): - # ensure that read_parquet honors the pandas.options.mode.data_manager option - df = pd.DataFrame( - np.random.default_rng(2).standard_normal((10, 3)), columns=["A", "B", "C"] - ) - - with tm.ensure_clean() as path: - df.to_parquet(path, pa) - result = read_parquet(path, pa) - if using_array_manager: - assert isinstance(result._mgr, pd.core.internals.ArrayManager) - else: - assert isinstance(result._mgr, pd.core.internals.BlockManager) - - def test_read_dtype_backend_pyarrow_config(self, pa, df_full): - import pyarrow - - df = df_full - - # additional supported types for pyarrow - dti = pd.date_range("20130101", periods=3, tz="Europe/Brussels") - dti = dti._with_freq(None) # freq doesn't round-trip - df["datetime_tz"] = dti - df["bool_with_none"] = [True, None, True] - - pa_table = pyarrow.Table.from_pandas(df) - expected = pa_table.to_pandas(types_mapper=pd.ArrowDtype) - if pa_version_under13p0: - # pyarrow infers datetimes as us instead of ns - expected["datetime"] = expected["datetime"].astype("timestamp[us][pyarrow]") - expected["datetime_with_nat"] = expected["datetime_with_nat"].astype( - "timestamp[us][pyarrow]" - ) - expected["datetime_tz"] = expected["datetime_tz"].astype( - pd.ArrowDtype(pyarrow.timestamp(unit="us", tz="Europe/Brussels")) - ) - - check_round_trip( - df, - engine=pa, - read_kwargs={"dtype_backend": "pyarrow"}, - expected=expected, - ) - - def test_read_dtype_backend_pyarrow_config_index(self, pa): - df = pd.DataFrame( - {"a": [1, 2]}, index=pd.Index([3, 4], name="test"), dtype="int64[pyarrow]" - ) - expected = df.copy() - import pyarrow - - if Version(pyarrow.__version__) > Version("11.0.0"): - expected.index = expected.index.astype("int64[pyarrow]") - check_round_trip( - df, - engine=pa, - read_kwargs={"dtype_backend": "pyarrow"}, - expected=expected, - ) - - def test_columns_dtypes_not_invalid(self, pa): - df = pd.DataFrame({"string": list("abc"), "int": list(range(1, 4))}) - - # numeric - df.columns = [0, 1] - check_round_trip(df, pa) - - # bytes - df.columns = [b"foo", b"bar"] - with pytest.raises(NotImplementedError, match="|S3"): - # Bytes fails on read_parquet - check_round_trip(df, pa) - - # python object - df.columns = [ - datetime.datetime(2011, 1, 1, 0, 0), - datetime.datetime(2011, 1, 1, 1, 1), - ] - check_round_trip(df, pa) - - def test_empty_columns(self, pa): - # GH 52034 - df = pd.DataFrame(index=pd.Index(["a", "b", "c"], name="custom name")) - check_round_trip(df, pa) - - def test_df_attrs_persistence(self, tmp_path, pa): - path = tmp_path / "test_df_metadata.p" - df = pd.DataFrame(data={1: [1]}) - df.attrs = {"test_attribute": 1} - 
df.to_parquet(path, engine=pa) - new_df = read_parquet(path, engine=pa) - assert new_df.attrs == df.attrs - - def test_string_inference(self, tmp_path, pa): - # GH#54431 - path = tmp_path / "test_string_inference.p" - df = pd.DataFrame(data={"a": ["x", "y"]}, index=["a", "b"]) - df.to_parquet(path, engine="pyarrow") - with pd.option_context("future.infer_string", True): - result = read_parquet(path, engine="pyarrow") - expected = pd.DataFrame( - data={"a": ["x", "y"]}, - dtype="string[pyarrow_numpy]", - index=pd.Index(["a", "b"], dtype="string[pyarrow_numpy]"), - ) - tm.assert_frame_equal(result, expected) - - @pytest.mark.skipif(pa_version_under11p0, reason="not supported before 11.0") - def test_roundtrip_decimal(self, tmp_path, pa): - # GH#54768 - import pyarrow as pa - - path = tmp_path / "decimal.p" - df = pd.DataFrame({"a": [Decimal("123.00")]}, dtype="string[pyarrow]") - df.to_parquet(path, schema=pa.schema([("a", pa.decimal128(5))])) - result = read_parquet(path) - expected = pd.DataFrame({"a": ["123"]}, dtype="string[python]") - tm.assert_frame_equal(result, expected) - - def test_infer_string_large_string_type(self, tmp_path, pa): - # GH#54798 - import pyarrow as pa - import pyarrow.parquet as pq - - path = tmp_path / "large_string.p" - - table = pa.table({"a": pa.array([None, "b", "c"], pa.large_string())}) - pq.write_table(table, path) - - with pd.option_context("future.infer_string", True): - result = read_parquet(path) - expected = pd.DataFrame( - data={"a": [None, "b", "c"]}, - dtype="string[pyarrow_numpy]", - columns=pd.Index(["a"], dtype="string[pyarrow_numpy]"), - ) - tm.assert_frame_equal(result, expected) - - # NOTE: this test is not run by default, because it requires a lot of memory (>5GB) - # @pytest.mark.slow - # def test_string_column_above_2GB(self, tmp_path, pa): - # # https://github.com/pandas-dev/pandas/issues/55606 - # # above 2GB of string data - # v1 = b"x" * 100000000 - # v2 = b"x" * 147483646 - # df = pd.DataFrame({"strings": [v1] * 20 + [v2] + ["x"] * 20}, dtype="string") - # df.to_parquet(tmp_path / "test.parquet") - # result = read_parquet(tmp_path / "test.parquet") - # assert result["strings"].dtype == "string" - - -class TestParquetFastParquet(Base): - def test_basic(self, fp, df_full): - df = df_full - - dti = pd.date_range("20130101", periods=3, tz="US/Eastern") - dti = dti._with_freq(None) # freq doesn't round-trip - df["datetime_tz"] = dti - df["timedelta"] = pd.timedelta_range("1 day", periods=3) - check_round_trip(df, fp) - - def test_columns_dtypes_invalid(self, fp): - df = pd.DataFrame({"string": list("abc"), "int": list(range(1, 4))}) - - err = TypeError - msg = "Column name must be a string" - - # numeric - df.columns = [0, 1] - self.check_error_on_write(df, fp, err, msg) - - # bytes - df.columns = [b"foo", b"bar"] - self.check_error_on_write(df, fp, err, msg) - - # python object - df.columns = [ - datetime.datetime(2011, 1, 1, 0, 0), - datetime.datetime(2011, 1, 1, 1, 1), - ] - self.check_error_on_write(df, fp, err, msg) - - def test_duplicate_columns(self, fp): - # not currently able to handle duplicate columns - df = pd.DataFrame(np.arange(12).reshape(4, 3), columns=list("aaa")).copy() - msg = "Cannot create parquet dataset with duplicate column names" - self.check_error_on_write(df, fp, ValueError, msg) - - def test_bool_with_none(self, fp): - df = pd.DataFrame({"a": [True, None, False]}) - expected = pd.DataFrame({"a": [1.0, np.nan, 0.0]}, dtype="float16") - # Fastparquet bug in 0.7.1 makes it so that this dtype becomes - # float64 - 
check_round_trip(df, fp, expected=expected, check_dtype=False) - - def test_unsupported(self, fp): - # period - df = pd.DataFrame({"a": pd.period_range("2013", freq="M", periods=3)}) - # error from fastparquet -> don't check exact error message - self.check_error_on_write(df, fp, ValueError, None) - - # mixed - df = pd.DataFrame({"a": ["a", 1, 2.0]}) - msg = "Can't infer object conversion type" - self.check_error_on_write(df, fp, ValueError, msg) - - def test_categorical(self, fp): - df = pd.DataFrame({"a": pd.Categorical(list("abc"))}) - check_round_trip(df, fp) - - def test_filter_row_groups(self, fp): - d = {"a": list(range(0, 3))} - df = pd.DataFrame(d) - with tm.ensure_clean() as path: - df.to_parquet(path, fp, compression=None, row_group_offsets=1) - result = read_parquet(path, fp, filters=[("a", "==", 0)]) - assert len(result) == 1 - - @pytest.mark.single_cpu - def test_s3_roundtrip(self, df_compat, s3_public_bucket, fp, s3so): - # GH #19134 - check_round_trip( - df_compat, - fp, - path=f"s3://{s3_public_bucket.name}/fastparquet.parquet", - read_kwargs={"storage_options": s3so}, - write_kwargs={"compression": None, "storage_options": s3so}, - ) - - def test_partition_cols_supported(self, tmp_path, fp, df_full): - # GH #23283 - partition_cols = ["bool", "int"] - df = df_full - df.to_parquet( - tmp_path, - engine="fastparquet", - partition_cols=partition_cols, - compression=None, - ) - assert os.path.exists(tmp_path) - import fastparquet - - actual_partition_cols = fastparquet.ParquetFile(str(tmp_path), False).cats - assert len(actual_partition_cols) == 2 - - def test_partition_cols_string(self, tmp_path, fp, df_full): - # GH #27117 - partition_cols = "bool" - df = df_full - df.to_parquet( - tmp_path, - engine="fastparquet", - partition_cols=partition_cols, - compression=None, - ) - assert os.path.exists(tmp_path) - import fastparquet - - actual_partition_cols = fastparquet.ParquetFile(str(tmp_path), False).cats - assert len(actual_partition_cols) == 1 - - def test_partition_on_supported(self, tmp_path, fp, df_full): - # GH #23283 - partition_cols = ["bool", "int"] - df = df_full - df.to_parquet( - tmp_path, - engine="fastparquet", - compression=None, - partition_on=partition_cols, - ) - assert os.path.exists(tmp_path) - import fastparquet - - actual_partition_cols = fastparquet.ParquetFile(str(tmp_path), False).cats - assert len(actual_partition_cols) == 2 - - def test_error_on_using_partition_cols_and_partition_on( - self, tmp_path, fp, df_full - ): - # GH #23283 - partition_cols = ["bool", "int"] - df = df_full - msg = ( - "Cannot use both partition_on and partition_cols. 
Use partition_cols for " - "partitioning data" - ) - with pytest.raises(ValueError, match=msg): - df.to_parquet( - tmp_path, - engine="fastparquet", - compression=None, - partition_on=partition_cols, - partition_cols=partition_cols, - ) - - @pytest.mark.skipif(using_copy_on_write(), reason="fastparquet writes into Index") - def test_empty_dataframe(self, fp): - # GH #27339 - df = pd.DataFrame() - expected = df.copy() - check_round_trip(df, fp, expected=expected) - - @pytest.mark.skipif(using_copy_on_write(), reason="fastparquet writes into Index") - def test_timezone_aware_index(self, fp, timezone_aware_date_list): - idx = 5 * [timezone_aware_date_list] - - df = pd.DataFrame(index=idx, data={"index_as_col": idx}) - - expected = df.copy() - expected.index.name = "index" - check_round_trip(df, fp, expected=expected) - - def test_use_nullable_dtypes_not_supported(self, fp): - df = pd.DataFrame({"a": [1, 2]}) - - with tm.ensure_clean() as path: - df.to_parquet(path) - with pytest.raises(ValueError, match="not supported for the fastparquet"): - with tm.assert_produces_warning(FutureWarning): - read_parquet(path, engine="fastparquet", use_nullable_dtypes=True) - with pytest.raises(ValueError, match="not supported for the fastparquet"): - read_parquet(path, engine="fastparquet", dtype_backend="pyarrow") - - def test_close_file_handle_on_read_error(self): - with tm.ensure_clean("test.parquet") as path: - pathlib.Path(path).write_bytes(b"breakit") - with pytest.raises(Exception, match=""): # Not important which exception - read_parquet(path, engine="fastparquet") - # The next line raises an error on Windows if the file is still open - pathlib.Path(path).unlink(missing_ok=False) - - def test_bytes_file_name(self, engine): - # GH#48944 - df = pd.DataFrame(data={"A": [0, 1], "B": [1, 0]}) - with tm.ensure_clean("test.parquet") as path: - with open(path.encode(), "wb") as f: - df.to_parquet(f) - - result = read_parquet(path, engine=engine) - tm.assert_frame_equal(result, df) - - def test_filesystem_notimplemented(self): - pytest.importorskip("fastparquet") - df = pd.DataFrame(data={"A": [0, 1], "B": [1, 0]}) - with tm.ensure_clean() as path: - with pytest.raises( - NotImplementedError, match="filesystem is not implemented" - ): - df.to_parquet(path, engine="fastparquet", filesystem="foo") - - with tm.ensure_clean() as path: - pathlib.Path(path).write_bytes(b"foo") - with pytest.raises( - NotImplementedError, match="filesystem is not implemented" - ): - read_parquet(path, engine="fastparquet", filesystem="foo") - - def test_invalid_filesystem(self): - pytest.importorskip("pyarrow") - df = pd.DataFrame(data={"A": [0, 1], "B": [1, 0]}) - with tm.ensure_clean() as path: - with pytest.raises( - ValueError, match="filesystem must be a pyarrow or fsspec FileSystem" - ): - df.to_parquet(path, engine="pyarrow", filesystem="foo") - - with tm.ensure_clean() as path: - pathlib.Path(path).write_bytes(b"foo") - with pytest.raises( - ValueError, match="filesystem must be a pyarrow or fsspec FileSystem" - ): - read_parquet(path, engine="pyarrow", filesystem="foo") - - def test_unsupported_pa_filesystem_storage_options(self): - pa_fs = pytest.importorskip("pyarrow.fs") - df = pd.DataFrame(data={"A": [0, 1], "B": [1, 0]}) - with tm.ensure_clean() as path: - with pytest.raises( - NotImplementedError, - match="storage_options not supported with a pyarrow FileSystem.", - ): - df.to_parquet( - path, - engine="pyarrow", - filesystem=pa_fs.LocalFileSystem(), - storage_options={"foo": "bar"}, - ) - - with tm.ensure_clean() as 
path: - pathlib.Path(path).write_bytes(b"foo") - with pytest.raises( - NotImplementedError, - match="storage_options not supported with a pyarrow FileSystem.", - ): - read_parquet( - path, - engine="pyarrow", - filesystem=pa_fs.LocalFileSystem(), - storage_options={"foo": "bar"}, - ) - - def test_invalid_dtype_backend(self, engine): - msg = ( - "dtype_backend numpy is invalid, only 'numpy_nullable' and " - "'pyarrow' are allowed." - ) - df = pd.DataFrame({"int": list(range(1, 4))}) - with tm.ensure_clean("tmp.parquet") as path: - df.to_parquet(path) - with pytest.raises(ValueError, match=msg): - read_parquet(path, dtype_backend="numpy") - - @pytest.mark.skipif(using_copy_on_write(), reason="fastparquet writes into Index") - def test_empty_columns(self, fp): - # GH 52034 - df = pd.DataFrame(index=pd.Index(["a", "b", "c"], name="custom name")) - expected = pd.DataFrame(index=pd.Index(["a", "b", "c"], name="custom name")) - check_round_trip(df, fp, expected=expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/layout.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/layout.py deleted file mode 100644 index 22a4c54786d753c4600e3a969a95c02883e50e3e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/layout.py +++ /dev/null @@ -1,444 +0,0 @@ -from abc import ABC, abstractmethod -from itertools import islice -from operator import itemgetter -from threading import RLock -from typing import ( - TYPE_CHECKING, - Dict, - Iterable, - List, - NamedTuple, - Optional, - Sequence, - Tuple, - Union, -) - -from ._ratio import ratio_resolve -from .align import Align -from .console import Console, ConsoleOptions, RenderableType, RenderResult -from .highlighter import ReprHighlighter -from .panel import Panel -from .pretty import Pretty -from .repr import rich_repr, Result -from .region import Region -from .segment import Segment -from .style import StyleType - -if TYPE_CHECKING: - from pip._vendor.rich.tree import Tree - - -class LayoutRender(NamedTuple): - """An individual layout render.""" - - region: Region - render: List[List[Segment]] - - -RegionMap = Dict["Layout", Region] -RenderMap = Dict["Layout", LayoutRender] - - -class LayoutError(Exception): - """Layout related error.""" - - -class NoSplitter(LayoutError): - """Requested splitter does not exist.""" - - -class _Placeholder: - """An internal renderable used as a Layout placeholder.""" - - highlighter = ReprHighlighter() - - def __init__(self, layout: "Layout", style: StyleType = "") -> None: - self.layout = layout - self.style = style - - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - width = options.max_width - height = options.height or options.size.height - layout = self.layout - title = ( - f"{layout.name!r} ({width} x {height})" - if layout.name - else f"({width} x {height})" - ) - yield Panel( - Align.center(Pretty(layout), vertical="middle"), - style=self.style, - title=self.highlighter(title), - border_style="blue", - ) - - -class Splitter(ABC): - """Base class for a splitter.""" - - name: str = "" - - @abstractmethod - def get_tree_icon(self) -> str: - """Get the icon (emoji) used in layout.tree""" - - @abstractmethod - def divide( - self, children: Sequence["Layout"], region: Region - ) -> Iterable[Tuple["Layout", Region]]: - """Divide a region amongst several child layouts. 
- - Args: - children (Sequence(Layout)): A number of child layouts. - region (Region): A rectangular region to divide. - """ - - -class RowSplitter(Splitter): - """Split a layout region in to rows.""" - - name = "row" - - def get_tree_icon(self) -> str: - return "[layout.tree.row]⬌" - - def divide( - self, children: Sequence["Layout"], region: Region - ) -> Iterable[Tuple["Layout", Region]]: - x, y, width, height = region - render_widths = ratio_resolve(width, children) - offset = 0 - _Region = Region - for child, child_width in zip(children, render_widths): - yield child, _Region(x + offset, y, child_width, height) - offset += child_width - - -class ColumnSplitter(Splitter): - """Split a layout region in to columns.""" - - name = "column" - - def get_tree_icon(self) -> str: - return "[layout.tree.column]⬍" - - def divide( - self, children: Sequence["Layout"], region: Region - ) -> Iterable[Tuple["Layout", Region]]: - x, y, width, height = region - render_heights = ratio_resolve(height, children) - offset = 0 - _Region = Region - for child, child_height in zip(children, render_heights): - yield child, _Region(x, y + offset, width, child_height) - offset += child_height - - -@rich_repr -class Layout: - """A renderable to divide a fixed height in to rows or columns. - - Args: - renderable (RenderableType, optional): Renderable content, or None for placeholder. Defaults to None. - name (str, optional): Optional identifier for Layout. Defaults to None. - size (int, optional): Optional fixed size of layout. Defaults to None. - minimum_size (int, optional): Minimum size of layout. Defaults to 1. - ratio (int, optional): Optional ratio for flexible layout. Defaults to 1. - visible (bool, optional): Visibility of layout. Defaults to True. - """ - - splitters = {"row": RowSplitter, "column": ColumnSplitter} - - def __init__( - self, - renderable: Optional[RenderableType] = None, - *, - name: Optional[str] = None, - size: Optional[int] = None, - minimum_size: int = 1, - ratio: int = 1, - visible: bool = True, - height: Optional[int] = None, - ) -> None: - self._renderable = renderable or _Placeholder(self) - self.size = size - self.minimum_size = minimum_size - self.ratio = ratio - self.name = name - self.visible = visible - self.height = height - self.splitter: Splitter = self.splitters["column"]() - self._children: List[Layout] = [] - self._render_map: RenderMap = {} - self._lock = RLock() - - def __rich_repr__(self) -> Result: - yield "name", self.name, None - yield "size", self.size, None - yield "minimum_size", self.minimum_size, 1 - yield "ratio", self.ratio, 1 - - @property - def renderable(self) -> RenderableType: - """Layout renderable.""" - return self if self._children else self._renderable - - @property - def children(self) -> List["Layout"]: - """Gets (visible) layout children.""" - return [child for child in self._children if child.visible] - - @property - def map(self) -> RenderMap: - """Get a map of the last render.""" - return self._render_map - - def get(self, name: str) -> Optional["Layout"]: - """Get a named layout, or None if it doesn't exist. - - Args: - name (str): Name of layout. - - Returns: - Optional[Layout]: Layout instance or None if no layout was found. 
- """ - if self.name == name: - return self - else: - for child in self._children: - named_layout = child.get(name) - if named_layout is not None: - return named_layout - return None - - def __getitem__(self, name: str) -> "Layout": - layout = self.get(name) - if layout is None: - raise KeyError(f"No layout with name {name!r}") - return layout - - @property - def tree(self) -> "Tree": - """Get a tree renderable to show layout structure.""" - from pip._vendor.rich.styled import Styled - from pip._vendor.rich.table import Table - from pip._vendor.rich.tree import Tree - - def summary(layout: "Layout") -> Table: - - icon = layout.splitter.get_tree_icon() - - table = Table.grid(padding=(0, 1, 0, 0)) - - text: RenderableType = ( - Pretty(layout) if layout.visible else Styled(Pretty(layout), "dim") - ) - table.add_row(icon, text) - _summary = table - return _summary - - layout = self - tree = Tree( - summary(layout), - guide_style=f"layout.tree.{layout.splitter.name}", - highlight=True, - ) - - def recurse(tree: "Tree", layout: "Layout") -> None: - for child in layout._children: - recurse( - tree.add( - summary(child), - guide_style=f"layout.tree.{child.splitter.name}", - ), - child, - ) - - recurse(tree, self) - return tree - - def split( - self, - *layouts: Union["Layout", RenderableType], - splitter: Union[Splitter, str] = "column", - ) -> None: - """Split the layout in to multiple sub-layouts. - - Args: - *layouts (Layout): Positional arguments should be (sub) Layout instances. - splitter (Union[Splitter, str]): Splitter instance or name of splitter. - """ - _layouts = [ - layout if isinstance(layout, Layout) else Layout(layout) - for layout in layouts - ] - try: - self.splitter = ( - splitter - if isinstance(splitter, Splitter) - else self.splitters[splitter]() - ) - except KeyError: - raise NoSplitter(f"No splitter called {splitter!r}") - self._children[:] = _layouts - - def add_split(self, *layouts: Union["Layout", RenderableType]) -> None: - """Add a new layout(s) to existing split. - - Args: - *layouts (Union[Layout, RenderableType]): Positional arguments should be renderables or (sub) Layout instances. - - """ - _layouts = ( - layout if isinstance(layout, Layout) else Layout(layout) - for layout in layouts - ) - self._children.extend(_layouts) - - def split_row(self, *layouts: Union["Layout", RenderableType]) -> None: - """Split the layout in tow a row (Layouts side by side). - - Args: - *layouts (Layout): Positional arguments should be (sub) Layout instances. - """ - self.split(*layouts, splitter="row") - - def split_column(self, *layouts: Union["Layout", RenderableType]) -> None: - """Split the layout in to a column (layouts stacked on top of each other). - - Args: - *layouts (Layout): Positional arguments should be (sub) Layout instances. - """ - self.split(*layouts, splitter="column") - - def unsplit(self) -> None: - """Reset splits to initial state.""" - del self._children[:] - - def update(self, renderable: RenderableType) -> None: - """Update renderable. - - Args: - renderable (RenderableType): New renderable object. - """ - with self._lock: - self._renderable = renderable - - def refresh_screen(self, console: "Console", layout_name: str) -> None: - """Refresh a sub-layout. - - Args: - console (Console): Console instance where Layout is to be rendered. - layout_name (str): Name of layout. 
- """ - with self._lock: - layout = self[layout_name] - region, _lines = self._render_map[layout] - (x, y, width, height) = region - lines = console.render_lines( - layout, console.options.update_dimensions(width, height) - ) - self._render_map[layout] = LayoutRender(region, lines) - console.update_screen_lines(lines, x, y) - - def _make_region_map(self, width: int, height: int) -> RegionMap: - """Create a dict that maps layout on to Region.""" - stack: List[Tuple[Layout, Region]] = [(self, Region(0, 0, width, height))] - push = stack.append - pop = stack.pop - layout_regions: List[Tuple[Layout, Region]] = [] - append_layout_region = layout_regions.append - while stack: - append_layout_region(pop()) - layout, region = layout_regions[-1] - children = layout.children - if children: - for child_and_region in layout.splitter.divide(children, region): - push(child_and_region) - - region_map = { - layout: region - for layout, region in sorted(layout_regions, key=itemgetter(1)) - } - return region_map - - def render(self, console: Console, options: ConsoleOptions) -> RenderMap: - """Render the sub_layouts. - - Args: - console (Console): Console instance. - options (ConsoleOptions): Console options. - - Returns: - RenderMap: A dict that maps Layout on to a tuple of Region, lines - """ - render_width = options.max_width - render_height = options.height or console.height - region_map = self._make_region_map(render_width, render_height) - layout_regions = [ - (layout, region) - for layout, region in region_map.items() - if not layout.children - ] - render_map: Dict["Layout", "LayoutRender"] = {} - render_lines = console.render_lines - update_dimensions = options.update_dimensions - - for layout, region in layout_regions: - lines = render_lines( - layout.renderable, update_dimensions(region.width, region.height) - ) - render_map[layout] = LayoutRender(region, lines) - return render_map - - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - with self._lock: - width = options.max_width or console.width - height = options.height or console.height - render_map = self.render(console, options.update_dimensions(width, height)) - self._render_map = render_map - layout_lines: List[List[Segment]] = [[] for _ in range(height)] - _islice = islice - for (region, lines) in render_map.values(): - _x, y, _layout_width, layout_height = region - for row, line in zip( - _islice(layout_lines, y, y + layout_height), lines - ): - row.extend(line) - - new_line = Segment.line() - for layout_row in layout_lines: - yield from layout_row - yield new_line - - -if __name__ == "__main__": - from pip._vendor.rich.console import Console - - console = Console() - layout = Layout() - - layout.split_column( - Layout(name="header", size=3), - Layout(ratio=1, name="main"), - Layout(size=10, name="footer"), - ) - - layout["main"].split_row(Layout(name="side"), Layout(name="body", ratio=2)) - - layout["body"].split_row(Layout(name="content", ratio=2), Layout(name="s2")) - - layout["s2"].split_column( - Layout(name="top"), Layout(name="middle"), Layout(name="bottom") - ) - - layout["side"].split_column(Layout(layout.tree, name="left1"), Layout(name="left2")) - - layout["content"].update("foo") - - console.print(layout) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn/protocols/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/uvicorn/protocols/__init__.py deleted file mode 100644 index 
e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/pyodide-demo/self-hosted/networkx-tests.js b/spaces/pyodide-demo/self-hosted/networkx-tests.js deleted file mode 100644 index bbd37838bb37f5c9f09fbf5f15c56e38f492d844..0000000000000000000000000000000000000000 --- a/spaces/pyodide-demo/self-hosted/networkx-tests.js +++ /dev/null @@ -1 +0,0 @@ -var Module=typeof globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="networkx-tests.data";var REMOTE_PACKAGE_BASE="networkx-tests.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... 
("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","site-packages",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","networkx",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx","algorithms",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms","assortativity",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms/assortativity","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms","bipartite",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms/bipartite","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms","node_classification",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms/node_classification","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms","centrality",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms/centrality","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms","community",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms/community","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms","components",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms/components","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms","connectivity",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms/connectivity","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms","coloring",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms/coloring","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms","flow",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms/flow","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms","minors",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms/minors","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms","traversal",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms/traversal","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/netw
orkx/algorithms","isomorphism",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms/isomorphism","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms","shortest_paths",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms/shortest_paths","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms","link_analysis",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms/link_analysis","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms","operators",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms/operators","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms","approximation",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms/approximation","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms","tree",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms/tree","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/algorithms","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx","classes",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/classes","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx","generators",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/generators","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx","drawing",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/drawing","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx","linalg",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/linalg","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx","readwrite",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/readwrite","json_graph",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/readwrite/json_graph","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/readwrite","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx","testing",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/testing","tests",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx","utils",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/networkx/utils","tests",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var 
compressedData={data:null,cachedOffset:954407,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0,1156,1627,2100,3027,3986,4941,5429,6161,6994,7771,8617,9242,9858,10660,11221,11932,12751,13542,14501,15285,16105,17225,18037,18976,19592,20337,21296,22092,22748,23380,24005,24581,25358,26171,27239,28162,29092,29522,30374,31156,31783,32320,33031,33648,34192,34723,35552,36587,37231,37953,38619,39367,40118,41017,41947,42642,43274,44154,45081,45901,46487,47144,47938,48594,49376,49947,51122,51926,52812,53850,54713,55608,56210,56786,57533,58049,58733,59341,60244,60912,61866,62662,63217,63929,64726,65508,66333,67261,67978,68864,69688,70502,71322,72253,73026,73971,74812,75703,76595,77778,78480,79499,80219,81088,82025,82723,83988,84946,85933,86584,87488,88312,89243,90035,91022,91942,92920,93678,94743,95908,96800,97919,98845,99658,100337,101210,102021,102662,103545,104325,105307,106247,107194,108060,108935,109766,110495,111372,112254,113043,114294,115226,115948,116373,117156,118076,118828,119846,120971,121506,122303,122927,123927,124702,125249,125921,126764,127704,128583,129684,130897,131857,132982,133942,135194,136464,137500,138587,139682,140782,141845,143028,144314,145412,146331,147149,148243,149049,150181,151448,152555,153339,154340,155204,156063,156832,157493,158103,159067,159631,160245,161341,162411,163097,164189,165255,166113,166981,167791,168613,169394,170108,171138,172188,173243,174069,174657,175509,176494,177524,178387,179287,180126,180742,181715,182589,183643,184615,185559,186530,188315,190363,192413,194462,196519,198567,200615,202663,204711,206759,208807,210855,212903,214958,217014,219071,221115,222998,224890,226662,228636,230605,232597,234645,236696,238744,240792,242840,244888,246936,248984,251032,253080,255128,257182,259230,261284,263339,265387,267053,269101,271127,273099,275147,277181,279237,281291,283339,285395,287443,289491,291496,293521,295569,297617,299665,301634,303682,305730,307730,309778,311818,313857,315905,317891,319939,321987,324035,326083,328131,330179,332227,334275,336323,338371,340419,342467,344515,346563,348611,350659,352715,354763,356811,358808,360843,362796,364588,366622,368658,370706,372762,374819,376867,378915,379811,380818,381281,381995,382723,383354,384427,385403,386312,387247,388034,388910,389783,390625,391680,392344,393263,394379,395319,396009,396947,397758,398423,399435,400602,401612,402513,403487,404465,405298,406074,406619,407300,408452,409187,410619,411921,412774,413537,414186,415976,417830,419326,420274,421350,422375,423338,424039,424975,426179,426958,427621,428169,428790,429476,430099,431042,432063,432785,433878,434656,435493,436446,437364,438262,439400,440544,441556,442545,443100,443912,444643,445408,446320,447138,448210,449320,450324,451098,452023,453164,453783,454661,455384,456145,456722,457520,458347,458941,459687,460291,461182,462032,462573,463331,464251,465318,466147,467082,468003,468826,469610,470460,471461,472528,473183,474073,475057,476113,477146,478148,478942,479878,480618,481362,482159,482975,483940,485065,485796,486894,487830,488942,490009,491170,492014,492911,493885,494902,496022,497044,497880,499145,499884,500622,501373,502254,503369,504134,504848,505682,506529,507365,508358,509612,510535,511409,512211,513314,513832,514523,515580,516270,516894,517540,518153,518830,519426,520213,521132,521946,523111,524023,524589,525457,526269,527151,528080,528921,529802,530734,531320,532333,533390,534256,535463,536458,537294,538079,538861,539686,540211,541010,541928,542768,543787,544750,545682,546605,547362,548545,549365,550240,551105,552084,553020,553919,5549
12,555837,556484,557321,558200,558684,559519,560493,561254,562006,562999,563923,564648,565462,566057,566819,567467,568300,568950,569837,570810,571691,572484,573422,574332,575045,575793,576486,577137,577798,578603,579400,580073,581101,582108,583147,583943,584633,585817,586722,587538,588748,589758,590846,591672,592444,593097,594205,595267,596060,596926,597803,598828,599641,600266,601016,601825,602357,603098,603493,603995,604649,605582,606480,607419,608059,608739,609547,610536,611108,611819,612748,613300,614062,615013,615827,616597,617384,618104,618888,619742,620948,621623,622795,623379,624145,625109,625882,626829,627801,628700,629420,630124,630877,631493,632281,633030,633937,634725,635619,636330,637421,638433,639209,640280,641149,642054,642995,643791,644695,645790,646819,647539,648246,649254,650123,650827,651611,652549,653498,654507,655525,656399,657384,658083,658749,659481,660185,661067,662024,662689,663827,664438,665274,665751,666201,667061,667764,668333,669084,669952,670759,671573,672479,673498,674427,675400,676241,677158,678120,679029,680121,680997,681724,682571,683337,684146,685088,685963,686815,687665,688608,689441,690634,691575,692456,693486,694274,694930,695597,696262,697053,697851,698689,699408,700434,701303,702138,703188,704190,705034,705797,706837,707795,708747,709820,710755,711508,712477,713338,714040,714699,715700,716514,717236,718223,719118,719861,720583,721382,722124,722869,723630,724522,725426,726098,726831,727697,728559,729588,730650,731692,732532,733512,734679,735576,736428,737291,738027,738853,739710,740519,741439,742027,742876,743760,744595,745446,746185,747057,747879,748640,749380,750303,751461,752444,753397,754717,755633,756651,757695,758281,759127,760050,760955,761675,762401,763305,764044,764964,765896,766914,767861,768967,769931,770849,771603,772366,773021,773784,774537,775354,776149,777293,778264,779091,780038,781006,781468,781988,782993,783830,784804,785722,786929,787858,788856,789726,790893,791661,792472,793256,793879,794630,795637,796852,798149,799246,800234,801125,802144,803410,804671,805918,806771,807750,808459,809195,809801,810745,811546,812247,813259,814229,814937,815379,815864,816815,817534,818432,819226,820226,821242,822002,822932,823782,824831,825790,826485,827426,828460,829556,830359,831070,831766,832333,833238,834222,835098,835798,836559,837579,838281,839076,839870,840980,842269,843234,844280,845377,846220,846844,847990,848685,849741,850608,851592,852621,853461,853960,854965,855760,856784,857793,858637,859637,860569,861370,862272,863078,863726,864582,864983,865774,866468,867564,868745,869771,870619,871438,872425,873552,874414,875488,876613,877607,878497,879344,880285,881046,882107,883018,884165,884921,885870,886924,887784,888963,890107,891371,892404,893585,894603,895472,896578,897669,898621,899377,900531,901388,902226,903127,904406,905375,906402,907277,908478,909244,910033,910776,911487,912373,913576,914452,915256,916012,916985,918014,918844,919787,920827,921805,922982,923777,924643,925506,926507,927486,928678,929391,929933,930622,931567,932481,933222,934038,934688,935475,936105,936726,937634,938329,939128,939757,940651,941397,942e3,942692,943624,944662,945627,946509,947045,947806,948852,949968,950625,951509,952660,953782],sizes:[1156,471,473,927,959,955,488,732,833,777,846,625,616,802,561,711,819,791,959,784,820,1120,812,939,616,745,959,796,656,632,625,576,777,813,1068,923,930,430,852,782,627,537,711,617,544,531,829,1035,644,722,666,748,751,899,930,695,632,880,927,820,586,657,794,656,782,571,1175,804,886,1038,863,895,602,576,747,516,684,608,903,668,954,79
6,555,712,797,782,825,928,717,886,824,814,820,931,773,945,841,891,892,1183,702,1019,720,869,937,698,1265,958,987,651,904,824,931,792,987,920,978,758,1065,1165,892,1119,926,813,679,873,811,641,883,780,982,940,947,866,875,831,729,877,882,789,1251,932,722,425,783,920,752,1018,1125,535,797,624,1e3,775,547,672,843,940,879,1101,1213,960,1125,960,1252,1270,1036,1087,1095,1100,1063,1183,1286,1098,919,818,1094,806,1132,1267,1107,784,1001,864,859,769,661,610,964,564,614,1096,1070,686,1092,1066,858,868,810,822,781,714,1030,1050,1055,826,588,852,985,1030,863,900,839,616,973,874,1054,972,944,971,1785,2048,2050,2049,2057,2048,2048,2048,2048,2048,2048,2048,2048,2055,2056,2057,2044,1883,1892,1772,1974,1969,1992,2048,2051,2048,2048,2048,2048,2048,2048,2048,2048,2048,2054,2048,2054,2055,2048,1666,2048,2026,1972,2048,2034,2056,2054,2048,2056,2048,2048,2005,2025,2048,2048,2048,1969,2048,2048,2e3,2048,2040,2039,2048,1986,2048,2048,2048,2048,2048,2048,2048,2048,2048,2048,2048,2048,2048,2048,2048,2048,2056,2048,2048,1997,2035,1953,1792,2034,2036,2048,2056,2057,2048,2048,896,1007,463,714,728,631,1073,976,909,935,787,876,873,842,1055,664,919,1116,940,690,938,811,665,1012,1167,1010,901,974,978,833,776,545,681,1152,735,1432,1302,853,763,649,1790,1854,1496,948,1076,1025,963,701,936,1204,779,663,548,621,686,623,943,1021,722,1093,778,837,953,918,898,1138,1144,1012,989,555,812,731,765,912,818,1072,1110,1004,774,925,1141,619,878,723,761,577,798,827,594,746,604,891,850,541,758,920,1067,829,935,921,823,784,850,1001,1067,655,890,984,1056,1033,1002,794,936,740,744,797,816,965,1125,731,1098,936,1112,1067,1161,844,897,974,1017,1120,1022,836,1265,739,738,751,881,1115,765,714,834,847,836,993,1254,923,874,802,1103,518,691,1057,690,624,646,613,677,596,787,919,814,1165,912,566,868,812,882,929,841,881,932,586,1013,1057,866,1207,995,836,785,782,825,525,799,918,840,1019,963,932,923,757,1183,820,875,865,979,936,899,993,925,647,837,879,484,835,974,761,752,993,924,725,814,595,762,648,833,650,887,973,881,793,938,910,713,748,693,651,661,805,797,673,1028,1007,1039,796,690,1184,905,816,1210,1010,1088,826,772,653,1108,1062,793,866,877,1025,813,625,750,809,532,741,395,502,654,933,898,939,640,680,808,989,572,711,929,552,762,951,814,770,787,720,784,854,1206,675,1172,584,766,964,773,947,972,899,720,704,753,616,788,749,907,788,894,711,1091,1012,776,1071,869,905,941,796,904,1095,1029,720,707,1008,869,704,784,938,949,1009,1018,874,985,699,666,732,704,882,957,665,1138,611,836,477,450,860,703,569,751,868,807,814,906,1019,929,973,841,917,962,909,1092,876,727,847,766,809,942,875,852,850,943,833,1193,941,881,1030,788,656,667,665,791,798,838,719,1026,869,835,1050,1002,844,763,1040,958,952,1073,935,753,969,861,702,659,1001,814,722,987,895,743,722,799,742,745,761,892,904,672,733,866,862,1029,1062,1042,840,980,1167,897,852,863,736,826,857,809,920,588,849,884,835,851,739,872,822,761,740,923,1158,983,953,1320,916,1018,1044,586,846,923,905,720,726,904,739,920,932,1018,947,1106,964,918,754,763,655,763,753,817,795,1144,971,827,947,968,462,520,1005,837,974,918,1207,929,998,870,1167,768,811,784,623,751,1007,1215,1297,1097,988,891,1019,1266,1261,1247,853,979,709,736,606,944,801,701,1012,970,708,442,485,951,719,898,794,1e3,1016,760,930,850,1049,959,695,941,1034,1096,803,711,696,567,905,984,876,700,761,1020,702,795,794,1110,1289,965,1046,1097,843,624,1146,695,1056,867,984,1029,840,499,1005,795,1024,1009,844,1e3,932,801,902,806,648,856,401,791,694,1096,1181,1026,848,819,987,1127,862,1074,1125,994,890,847,941,761,1061,911,1147,756,949,1054,860,1179,1144,1264,1033,1181,10
18,869,1106,1091,952,756,1154,857,838,901,1279,969,1027,875,1201,766,789,743,711,886,1203,876,804,756,973,1029,830,943,1040,978,1177,795,866,863,1001,979,1192,713,542,689,945,914,741,816,650,787,630,621,908,695,799,629,894,746,603,692,932,1038,965,882,536,761,1046,1116,657,884,1151,1122,625],successes:[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,1,0,1,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,0,1,0,0,0,0,0,0,0,0,0,1,0,1,1,0,1,0,1,1,0,1,1,1,0,1,0,0,1,1,0,0,0,1,0,0,1,0,1,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,1,1,1,1,0,1,1,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]};compressedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 
?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_networkx-tests.data")}Module["addRunDependency"]("datafile_networkx-tests.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/site-packages/networkx/conftest.py",start:0,end:9443,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/assortativity/tests/__init__.py",start:9443,end:9443,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/assortativity/tests/base_test.py",start:9443,end:12049,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/assortativity/tests/test_connectivity.py",start:12049,end:16844,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/assortativity/tests/test_correlation.py",start:16844,end:20735,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/assortativity/tests/test_mixing.py",start:20735,end:27664,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/assortativity/tests/test_neighbor_degree.py",start:27664,end:30866,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/assortativity/tests/test_pairs.py",start:30866,end:33873,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/bipartite/tests/__init__.py",start:33873,end:33873,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/bipartite/tests/test_basic.py",start:33873,end:38058,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/bipartite/tests/test_centrality.py",start:38058,end:43942,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/bipartite/tests/test_cluster.py",start:43942,end:46750,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/bipartite/tests/test_covering.py",start:46750,end:47979,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/bipartite/tests/test_edgelist.py",start:47979,end:54464,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/bipartite/tests/test_generators.py",start:54464,end:66920,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/bipartite/tests/test_matching.py",start:66920,end:79063,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/bipartite/tests/test_matrix.py",start:79063,end:81963,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/bipartite/tests/test_project.py",start:81963,end:96548,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/bipartite/tests/test_redundancy.py",start:96548,end:97394,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/bipartite/tests/test_spectral_bipartivity.py",start:97394,end:99753,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/node_classification/tests/__init__.py",start:99753,end:99753,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/node_classification/tests/test_harmonic_function.py",start:99753,end:102327,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/node_classification/tests/test_local_and_global_consistency.py",start:102327,end:104571,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/__init__.py"
,start:104571,end:104571,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_betweenness_centrality.py",start:104571,end:127549,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_betweenness_centrality_subset.py",start:127549,end:135936,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_closeness_centrality.py",start:135936,end:146144,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality.py",start:146144,end:153349,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py",start:153349,end:159190,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_current_flow_closeness.py",start:159190,end:160343,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_degree_centrality.py",start:160343,end:164448,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_dispersion.py",start:164448,end:166049,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_eigenvector_centrality.py",start:166049,end:170702,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_group.py",start:170702,end:179406,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_harmonic_centrality.py",start:179406,end:183063,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_katz_centrality.py",start:183063,end:194414,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_load_centrality.py",start:194414,end:205493,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_percolation_centrality.py",start:205493,end:208189,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_reaching.py",start:208189,end:212054,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_second_order_centrality.py",start:212054,end:213975,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_subgraph.py",start:213975,end:217763,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_trophic.py",start:217763,end:226468,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/centrality/tests/test_voterank.py",start:226468,end:228060,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/community/tests/__init__.py",start:228060,end:228060,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/community/tests/test_asyn_fluid.py",start:228060,end:231105,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/community/tests/test_centrality.py",start:231105,end:234028,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/community/tests/test_kclique.py",start:234028,end:236435,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/community/tests/test_kernighan_lin.py",start:236435,end:239142,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/community/tests/test_label_propagation.py",start:239142,end:244226,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/community/tests/test_lukes.py",start:244226,end:248177,aud
io:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/community/tests/test_modularity_max.py",start:248177,end:257368,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/community/tests/test_quality.py",start:257368,end:263119,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/community/tests/test_utils.py",start:263119,end:263789,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/components/tests/__init__.py",start:263789,end:263789,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/components/tests/test_attracting.py",start:263789,end:266031,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/components/tests/test_biconnected.py",start:266031,end:272070,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/components/tests/test_connected.py",start:272070,end:275744,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/components/tests/test_semiconnected.py",start:275744,end:277534,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/components/tests/test_strongly_connected.py",start:277534,end:284087,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/components/tests/test_weakly_connected.py",start:284087,end:286973,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/connectivity/tests/__init__.py",start:286973,end:286973,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/connectivity/tests/test_connectivity.py",start:286973,end:302032,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/connectivity/tests/test_cuts.py",start:302032,end:312434,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/connectivity/tests/test_disjoint_paths.py",start:312434,end:320833,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/connectivity/tests/test_edge_augmentation.py",start:320833,end:336362,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/connectivity/tests/test_edge_kcomponents.py",start:336362,end:352814,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/connectivity/tests/test_kcomponents.py",start:352814,end:361367,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/connectivity/tests/test_kcutsets.py",start:361367,end:369854,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/connectivity/tests/test_stoer_wagner.py",start:369854,end:372863,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/coloring/tests/__init__.py",start:372863,end:372863,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/coloring/tests/test_coloring.py",start:372863,end:393496,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/flow/tests/__init__.py",start:393496,end:393496,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/flow/tests/test_gomory_hu.py",start:393496,end:398085,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/flow/tests/test_maxflow.py",start:398085,end:416677,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/flow/tests/test_maxflow_large_graph.py",start:416677,end:421327,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/flow/tests/test_mincost.py",start:421327,end:438990,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/flow/tests/test_networksimplex.py",start:438990,end:451041,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/f
low/tests/gl1.gpickle.bz2",start:451041,end:495664,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/flow/tests/gw1.gpickle.bz2",start:495664,end:537912,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/flow/tests/netgen-2.gpickle.bz2",start:537912,end:556884,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/flow/tests/wlm3.gpickle.bz2",start:556884,end:645016,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/minors/tests/test_contraction.py",start:645016,end:660934,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/traversal/tests/__init__.py",start:660934,end:660934,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/traversal/tests/test_beamsearch.py",start:660934,end:661831,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/traversal/tests/test_bfs.py",start:661831,end:665096,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/traversal/tests/test_dfs.py",start:665096,end:670292,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/traversal/tests/test_edgebfs.py",start:670292,end:674984,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/traversal/tests/test_edgedfs.py",start:674984,end:679767,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/isomorphism/tests/__init__.py",start:679767,end:679767,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/isomorphism/tests/test_ismags.py",start:679767,end:690382,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/isomorphism/tests/test_isomorphism.py",start:690382,end:692045,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/isomorphism/tests/test_isomorphvf2.py",start:692045,end:703525,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/isomorphism/tests/test_match_helpers.py",start:703525,end:705980,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/isomorphism/tests/test_temporalisomorphvf2.py",start:705980,end:713325,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/isomorphism/tests/test_tree_isomorphism.py",start:713325,end:720479,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/isomorphism/tests/test_vf2userfunc.py",start:720479,end:727117,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/isomorphism/tests/iso_r01_s80.A99",start:727117,end:728559,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/isomorphism/tests/iso_r01_s80.B99",start:728559,end:730001,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/isomorphism/tests/si2_b06_m200.A99",start:730001,end:730311,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/isomorphism/tests/si2_b06_m200.B99",start:730311,end:731913,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/shortest_paths/tests/__init__.py",start:731913,end:731913,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/shortest_paths/tests/test_astar.py",start:731913,end:737420,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/shortest_paths/tests/test_dense.py",start:737420,end:744166,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/shortest_paths/tests/test_dense_numpy.py",start:744166,end:746466,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/shortest_paths/tests/test_generic.py",start:746466,end:761930,audio:0},{filename:"/lib/python3.9/sit
e-packages/networkx/algorithms/shortest_paths/tests/test_unweighted.py",start:761930,end:766531,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/shortest_paths/tests/test_weighted.py",start:766531,end:798955,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/link_analysis/tests/__init__.py",start:798955,end:798955,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/link_analysis/tests/test_hits.py",start:798955,end:801959,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/link_analysis/tests/test_pagerank.py",start:801959,end:809543,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/operators/tests/__init__.py",start:809543,end:809543,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/operators/tests/test_all.py",start:809543,end:816913,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/operators/tests/test_binary.py",start:816913,end:828945,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/operators/tests/test_product.py",start:828945,end:842132,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/operators/tests/test_unary.py",start:842132,end:843546,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/approximation/tests/__init__.py",start:843546,end:843546,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/approximation/tests/test_approx_clust_coeff.py",start:843546,end:844759,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/approximation/tests/test_clique.py",start:844759,end:847895,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/approximation/tests/test_connectivity.py",start:847895,end:853847,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/approximation/tests/test_distance_measures.py",start:853847,end:855870,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/approximation/tests/test_dominating_set.py",start:855870,end:858198,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/approximation/tests/test_kcomponents.py",start:858198,end:867410,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/approximation/tests/test_matching.py",start:867410,end:867596,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/approximation/tests/test_maxcut.py",start:867596,end:870026,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/approximation/tests/test_ramsey.py",start:870026,end:871168,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/approximation/tests/test_steinertree.py",start:871168,end:874369,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/approximation/tests/test_traveling_salesman.py",start:874369,end:887592,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/approximation/tests/test_treewidth.py",start:887592,end:896614,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/approximation/tests/test_vertex_cover.py",start:896614,end:898247,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tree/tests/__init__.py",start:898247,end:898247,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tree/tests/test_branchings.py",start:898247,end:909994,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tree/tests/test_coding.py",start:909994,end:913953,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tree/tests/test_decom
position.py",start:913953,end:915824,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tree/tests/test_mst.py",start:915824,end:926149,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tree/tests/test_operations.py",start:926149,end:927277,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tree/tests/test_recognition.py",start:927277,end:931449,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/__init__.py",start:931449,end:931449,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_asteroidal.py",start:931449,end:931952,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_boundary.py",start:931952,end:938177,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_bridges.py",start:938177,end:940404,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_chains.py",start:940404,end:944511,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_chordal.py",start:944511,end:948968,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_clique.py",start:948968,end:957837,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_cluster.py",start:957837,end:971863,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_communicability.py",start:971863,end:974802,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_core.py",start:974802,end:981530,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_covering.py",start:981530,end:983311,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_cuts.py",start:983311,end:988698,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_cycles.py",start:988698,end:1000501,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_d_separation.py",start:1000501,end:1004811,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_dag.py",start:1004811,end:1028443,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_distance_measures.py",start:1028443,end:1037494,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_distance_regular.py",start:1037494,end:1039806,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_dominance.py",start:1039806,end:1049193,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_dominating.py",start:1049193,end:1050420,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_efficiency.py",start:1050420,end:1052314,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_euler.py",start:1052314,end:1062399,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_graph_hashing.py",start:1062399,end:1063365,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_graphical.py",start:1063365,end:1068734,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_hierarchy.py",start:1068734,end:1069674,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_hybrid.py",start:1069674,end:1070394,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_isolate.py",start:1070394,end:1070949,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests
/test_link_prediction.py",start:1070949,end:1089042,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_lowest_common_ancestors.py",start:1089042,end:1099703,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_matching.py",start:1099703,end:1117583,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_max_weight_clique.py",start:1117583,end:1124324,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_mis.py",start:1124324,end:1127693,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_moral.py",start:1127693,end:1128146,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_non_randomness.py",start:1128146,end:1128791,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_planar_drawing.py",start:1128791,end:1137578,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_planarity.py",start:1137578,end:1150811,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_reciprocity.py",start:1150811,end:1152107,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_regular.py",start:1152107,end:1154564,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_richclub.py",start:1154564,end:1156821,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_similarity.py",start:1156821,end:1188514,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_simple_paths.py",start:1188514,end:1212583,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_smallworld.py",start:1212583,end:1214492,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_smetric.py",start:1214492,end:1214918,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_sparsifiers.py",start:1214918,end:1218961,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_structuralholes.py",start:1218961,end:1224187,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_summarization.py",start:1224187,end:1245779,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_swap.py",start:1245779,end:1248845,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_threshold.py",start:1248845,end:1258618,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_tournament.py",start:1258618,end:1263024,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_triads.py",start:1263024,end:1268245,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_vitality.py",start:1268245,end:1269625,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_voronoi.py",start:1269625,end:1273102,audio:0},{filename:"/lib/python3.9/site-packages/networkx/algorithms/tests/test_wiener.py",start:1273102,end:1275262,audio:0},{filename:"/lib/python3.9/site-packages/networkx/classes/tests/__init__.py",start:1275262,end:1275262,audio:0},{filename:"/lib/python3.9/site-packages/networkx/classes/tests/historical_tests.py",start:1275262,end:1291436,audio:0},{filename:"/lib/python3.9/site-packages/networkx/classes/tests/test_coreviews.py",start:1291436,end:1306844,audio:0},{filename:"/lib/python3.9/site-packages/networkx/classes/tests/test_digraph.py",start:1306844,end:1318096,audio:0},{filename:
"/lib/python3.9/site-packages/networkx/classes/tests/test_digraph_historical.py",start:1318096,end:1321786,audio:0},{filename:"/lib/python3.9/site-packages/networkx/classes/tests/test_filters.py",start:1321786,end:1327636,audio:0},{filename:"/lib/python3.9/site-packages/networkx/classes/tests/test_function.py",start:1327636,end:1353970,audio:0},{filename:"/lib/python3.9/site-packages/networkx/classes/tests/test_graph.py",start:1353970,end:1382660,audio:0},{filename:"/lib/python3.9/site-packages/networkx/classes/tests/test_graph_historical.py",start:1382660,end:1382933,audio:0},{filename:"/lib/python3.9/site-packages/networkx/classes/tests/test_graphviews.py",start:1382933,end:1394457,audio:0},{filename:"/lib/python3.9/site-packages/networkx/classes/tests/test_multidigraph.py",start:1394457,end:1408507,audio:0},{filename:"/lib/python3.9/site-packages/networkx/classes/tests/test_multigraph.py",start:1408507,end:1424934,audio:0},{filename:"/lib/python3.9/site-packages/networkx/classes/tests/test_ordered.py",start:1424934,end:1426082,audio:0},{filename:"/lib/python3.9/site-packages/networkx/classes/tests/test_reportviews.py",start:1426082,end:1466496,audio:0},{filename:"/lib/python3.9/site-packages/networkx/classes/tests/test_special.py",start:1466496,end:1472221,audio:0},{filename:"/lib/python3.9/site-packages/networkx/classes/tests/test_subgraphviews.py",start:1472221,end:1484910,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/__init__.py",start:1484910,end:1484910,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_atlas.py",start:1484910,end:1487486,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_classic.py",start:1487486,end:1503927,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_cographs.py",start:1503927,end:1504389,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_community.py",start:1504389,end:1513069,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_degree_seq.py",start:1513069,end:1520175,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_directed.py",start:1520175,end:1524310,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_duplication.py",start:1524310,end:1526255,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_ego.py",start:1526255,end:1527582,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_expanders.py",start:1527582,end:1530031,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_geometric.py",start:1530031,end:1541536,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_harary_graph.py",start:1541536,end:1546538,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_internet_as_graphs.py",start:1546538,end:1553684,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_intersection.py",start:1553684,end:1554502,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_interval_graph.py",start:1554502,end:1558779,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_joint_degree_seq.py",start:1558779,end:1563050,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_lattice.py",start:1563050,end:1572063,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_line.py",start:1572063,end
:1579827,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_mycielski.py",start:1579827,end:1580649,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_nonisomorphic_trees.py",start:1580649,end:1583033,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_random_clustered.py",start:1583033,end:1584011,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_random_graphs.py",start:1584011,end:1595249,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_small.py",start:1595249,end:1601874,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_spectral_graph_forge.py",start:1601874,end:1603468,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_stochastic.py",start:1603468,end:1605289,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_sudoku.py",start:1605289,end:1607258,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_trees.py",start:1607258,end:1610181,audio:0},{filename:"/lib/python3.9/site-packages/networkx/generators/tests/test_triads.py",start:1610181,end:1610513,audio:0},{filename:"/lib/python3.9/site-packages/networkx/drawing/tests/__init__.py",start:1610513,end:1610513,audio:0},{filename:"/lib/python3.9/site-packages/networkx/drawing/tests/test_agraph.py",start:1610513,end:1619146,audio:0},{filename:"/lib/python3.9/site-packages/networkx/drawing/tests/test_layout.py",start:1619146,end:1634520,audio:0},{filename:"/lib/python3.9/site-packages/networkx/drawing/tests/test_pydot.py",start:1634520,end:1637879,audio:0},{filename:"/lib/python3.9/site-packages/networkx/drawing/tests/test_pylab.py",start:1637879,end:1651641,audio:0},{filename:"/lib/python3.9/site-packages/networkx/linalg/tests/__init__.py",start:1651641,end:1651641,audio:0},{filename:"/lib/python3.9/site-packages/networkx/linalg/tests/test_algebraic_connectivity.py",start:1651641,end:1665218,audio:0},{filename:"/lib/python3.9/site-packages/networkx/linalg/tests/test_attrmatrix.py",start:1665218,end:1668051,audio:0},{filename:"/lib/python3.9/site-packages/networkx/linalg/tests/test_bethehessian.py",start:1668051,end:1669378,audio:0},{filename:"/lib/python3.9/site-packages/networkx/linalg/tests/test_graphmatrix.py",start:1669378,end:1678468,audio:0},{filename:"/lib/python3.9/site-packages/networkx/linalg/tests/test_laplacian.py",start:1678468,end:1689121,audio:0},{filename:"/lib/python3.9/site-packages/networkx/linalg/tests/test_modularity.py",start:1689121,end:1692236,audio:0},{filename:"/lib/python3.9/site-packages/networkx/linalg/tests/test_spectrum.py",start:1692236,end:1695064,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/json_graph/tests/__init__.py",start:1695064,end:1695064,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/json_graph/tests/test_adjacency.py",start:1695064,end:1696828,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/json_graph/tests/test_cytoscape.py",start:1696828,end:1699411,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/json_graph/tests/test_jit.py",start:1699411,end:1701477,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/json_graph/tests/test_node_link.py",start:1701477,end:1704652,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/json_graph/tests/test_tree.py",start:1704652,end:1706380,audio:0},{filename:"/lib/python3.9/site-packages/networkx/
readwrite/tests/__init__.py",start:1706380,end:1706380,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/tests/test_adjlist.py",start:1706380,end:1716300,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/tests/test_edgelist.py",start:1716300,end:1726022,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/tests/test_getattr_nxyaml_removal.py",start:1726022,end:1727027,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/tests/test_gexf.py",start:1727027,end:1749811,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/tests/test_gml.py",start:1749811,end:1769993,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/tests/test_gpickle.py",start:1769993,end:1772136,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/tests/test_graph6.py",start:1772136,end:1776242,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/tests/test_graphml.py",start:1776242,end:1842448,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/tests/test_leda.py",start:1842448,end:1843839,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/tests/test_p2g.py",start:1843839,end:1845165,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/tests/test_pajek.py",start:1845165,end:1849867,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/tests/test_shp.py",start:1849867,end:1859028,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/tests/test_sparse6.py",start:1859028,end:1864497,audio:0},{filename:"/lib/python3.9/site-packages/networkx/readwrite/tests/test_text.py",start:1864497,end:1872428,audio:0},{filename:"/lib/python3.9/site-packages/networkx/tests/__init__.py",start:1872428,end:1872428,audio:0},{filename:"/lib/python3.9/site-packages/networkx/tests/test_all_random_functions.py",start:1872428,end:1881071,audio:0},{filename:"/lib/python3.9/site-packages/networkx/tests/test_convert.py",start:1881071,end:1893823,audio:0},{filename:"/lib/python3.9/site-packages/networkx/tests/test_convert_numpy.py",start:1893823,end:1913425,audio:0},{filename:"/lib/python3.9/site-packages/networkx/tests/test_convert_pandas.py",start:1913425,end:1925677,audio:0},{filename:"/lib/python3.9/site-packages/networkx/tests/test_convert_scipy.py",start:1925677,end:1936277,audio:0},{filename:"/lib/python3.9/site-packages/networkx/tests/test_exceptions.py",start:1936277,end:1937203,audio:0},{filename:"/lib/python3.9/site-packages/networkx/tests/test_import.py",start:1937203,end:1937423,audio:0},{filename:"/lib/python3.9/site-packages/networkx/tests/test_relabel.py",start:1937423,end:1949870,audio:0},{filename:"/lib/python3.9/site-packages/networkx/testing/tests/__init__.py",start:1949870,end:1949870,audio:0},{filename:"/lib/python3.9/site-packages/networkx/testing/tests/test_utils.py",start:1949870,end:1954823,audio:0},{filename:"/lib/python3.9/site-packages/networkx/utils/tests/__init__.py",start:1954823,end:1954823,audio:0},{filename:"/lib/python3.9/site-packages/networkx/utils/tests/test__init.py",start:1954823,end:1955186,audio:0},{filename:"/lib/python3.9/site-packages/networkx/utils/tests/test_contextmanager.py",start:1955186,end:1955496,audio:0},{filename:"/lib/python3.9/site-packages/networkx/utils/tests/test_decorators.py",start:1955496,end:1969531,audio:0},{filename:"/lib/python3.9/site-packages/networkx/utils/tests/test_heaps.py",start:1969531,end:1973245,audio:0},{filename:"/lib/python3.9/site-packages/networkx/utils/tests/test_mapped_queu
e.py",start:1973245,end:1979582,audio:0},{filename:"/lib/python3.9/site-packages/networkx/utils/tests/test_misc.py",start:1979582,end:1987857,audio:0},{filename:"/lib/python3.9/site-packages/networkx/utils/tests/test_random_sequence.py",start:1987857,end:1988781,audio:0},{filename:"/lib/python3.9/site-packages/networkx/utils/tests/test_rcm.py",start:1988781,end:1990202,audio:0},{filename:"/lib/python3.9/site-packages/networkx/utils/tests/test_unionfind.py",start:1990202,end:1991781,audio:0}],remote_package_size:958503,package_uuid:"ccb950d2-a9a5-4256-bf4d-3ab0a25208ec"})})(); \ No newline at end of file diff --git a/spaces/qinzhu/diy-girlfriend-online/attentions.py b/spaces/qinzhu/diy-girlfriend-online/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/qinzhu/diy-girlfriend-online/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - 
self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." 
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/qqqwt/chatgptpaper/app.py b/spaces/qqqwt/chatgptpaper/app.py deleted file mode 100644 index ae15c5aa1ed360835b2b52d794b7774ea2b59f37..0000000000000000000000000000000000000000 --- a/spaces/qqqwt/chatgptpaper/app.py +++ /dev/null @@ -1,793 +0,0 @@ -import numpy as np -import os -import re -import datetime -import arxiv -import openai, tenacity -import base64, requests -import argparse -import configparser -import fitz, io, os -from PIL import Image -import gradio -import markdown - - -def parse_text(text): - lines = text.split("\n") - for i, line in enumerate(lines): - if "```" in line: - items = line.split('`') - if items[-1]: - lines[i] = f'
<pre><code class="{items[-1]}">'
        -            else:
        -                lines[i] = f'</code></pre>'
        - else: - if i > 0: - line = line.replace("<", "&lt;") - line = line.replace(">", "&gt;") - lines[i] = '<br/>
        ' + line.replace(" ", " ") - return "".join(lines) - - -def get_response(system, context, myKey, raw=False): - openai.api_key = myKey - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[system, *context], - ) - openai.api_key = "" - if raw: - return response - else: - message = response["choices"][0]["message"]["content"] - message_with_stats = f'{message}' - return message, parse_text(message_with_stats) - - -def valid_apikey(api_key): - try: - get_response({"role": "system", "content": "You are a helpful assistant."}, - [{"role": "user", "content": "test"}], api_key) - return "可用的api-key" - except: - return "无效的api-key" - - -class Paper: - def __init__(self, path, title='', url='', abs='', authers=[], sl=[]): - # 初始化函数,根据pdf路径初始化Paper对象 - self.url = url # 文章链接 - self.path = path # pdf路径 - self.sl = sl - self.section_names = [] # 段落标题 - self.section_texts = {} # 段落内容 - if title == '': - self.pdf = fitz.open(self.path) # pdf文档 - self.title = self.get_title() - self.parse_pdf() - else: - self.title = title - self.authers = authers - self.abs = abs - self.roman_num = ["I", "II", 'III', "IV", "V", "VI", "VII", "VIII", "IIX", "IX", "X"] - self.digit_num = [str(d + 1) for d in range(10)] - self.first_image = '' - - def parse_pdf(self): - self.pdf = fitz.open(self.path) # pdf文档 - self.text_list = [page.get_text() for page in self.pdf] - self.all_text = ' '.join(self.text_list) - self.section_page_dict = self._get_all_page_index() # 段落与页码的对应字典 - print("section_page_dict", self.section_page_dict) - self.section_text_dict = self._get_all_page() # 段落与内容的对应字典 - self.section_text_dict.update({"title": self.title}) - self.pdf.close() - - def get_image_path(self, image_path=''): - """ - 将PDF中的第一张图保存到image.png里面,存到本地目录,返回文件名称,供gitee读取 - :param filename: 图片所在路径,"C:\\Users\\Administrator\\Desktop\\nwd.pdf" - :param image_path: 图片提取后的保存路径 - :return: - """ - # open file - max_size = 0 - image_list = [] - with fitz.Document(self.path) as my_pdf_file: - # 遍历所有页面 - for page_number in range(1, len(my_pdf_file) + 1): - # 查看独立页面 - page = my_pdf_file[page_number - 1] - # 查看当前页所有图片 - images = page.get_images() - # 遍历当前页面所有图片 - for image_number, image in enumerate(page.get_images(), start=1): - # 访问图片xref - xref_value = image[0] - # 提取图片信息 - base_image = my_pdf_file.extract_image(xref_value) - # 访问图片 - image_bytes = base_image["image"] - # 获取图片扩展名 - ext = base_image["ext"] - # 加载图片 - image = Image.open(io.BytesIO(image_bytes)) - image_size = image.size[0] * image.size[1] - if image_size > max_size: - max_size = image_size - image_list.append(image) - for image in image_list: - image_size = image.size[0] * image.size[1] - if image_size == max_size: - image_name = f"image.{ext}" - im_path = os.path.join(image_path, image_name) - print("im_path:", im_path) - - max_pix = 480 - origin_min_pix = min(image.size[0], image.size[1]) - - if image.size[0] > image.size[1]: - min_pix = int(image.size[1] * (max_pix / image.size[0])) - newsize = (max_pix, min_pix) - else: - min_pix = int(image.size[0] * (max_pix / image.size[1])) - newsize = (min_pix, max_pix) - image = image.resize(newsize) - - image.save(open(im_path, "wb")) - return im_path, ext - return None, None - - # 定义一个函数,根据字体的大小,识别每个章节名称,并返回一个列表 - def get_chapter_names(self, ): - # # 打开一个pdf文件 - doc = fitz.open(self.path) # pdf文档 - text_list = [page.get_text() for page in doc] - all_text = '' - for text in text_list: - all_text += text - # # 创建一个空列表,用于存储章节名称 - chapter_names = [] - for line in all_text.split('\n'): - line_list = 
line.split(' ') - if '.' in line: - point_split_list = line.split('.') - space_split_list = line.split(' ') - if 1 < len(space_split_list) < 5: - if 1 < len(point_split_list) < 5 and ( - point_split_list[0] in self.roman_num or point_split_list[0] in self.digit_num): - print("line:", line) - chapter_names.append(line) - - return chapter_names - - def get_title(self): - doc = self.pdf # 打开pdf文件 - max_font_size = 0 # 初始化最大字体大小为0 - max_string = "" # 初始化最大字体大小对应的字符串为空 - max_font_sizes = [0] - for page in doc: # 遍历每一页 - text = page.get_text("dict") # 获取页面上的文本信息 - blocks = text["blocks"] # 获取文本块列表 - for block in blocks: # 遍历每个文本块 - if block["type"] == 0: # 如果是文字类型 - font_size = block["lines"][0]["spans"][0]["size"] # 获取第一行第一段文字的字体大小 - max_font_sizes.append(font_size) - if font_size > max_font_size: # 如果字体大小大于当前最大值 - max_font_size = font_size # 更新最大值 - max_string = block["lines"][0]["spans"][0]["text"] # 更新最大值对应的字符串 - max_font_sizes.sort() - print("max_font_sizes", max_font_sizes[-10:]) - cur_title = '' - for page in doc: # 遍历每一页 - text = page.get_text("dict") # 获取页面上的文本信息 - blocks = text["blocks"] # 获取文本块列表 - for block in blocks: # 遍历每个文本块 - if block["type"] == 0: # 如果是文字类型 - cur_string = block["lines"][0]["spans"][0]["text"] # 更新最大值对应的字符串 - font_flags = block["lines"][0]["spans"][0]["flags"] # 获取第一行第一段文字的字体特征 - font_size = block["lines"][0]["spans"][0]["size"] # 获取第一行第一段文字的字体大小 - # print(font_size) - if abs(font_size - max_font_sizes[-1]) < 0.3 or abs(font_size - max_font_sizes[-2]) < 0.3: - # print("The string is bold.", max_string, "font_size:", font_size, "font_flags:", font_flags) - if len(cur_string) > 4 and "arXiv" not in cur_string: - # print("The string is bold.", max_string, "font_size:", font_size, "font_flags:", font_flags) - if cur_title == '': - cur_title += cur_string - else: - cur_title += ' ' + cur_string - # break - title = cur_title.replace('\n', ' ') - return title - - def _get_all_page_index(self): - # 定义需要寻找的章节名称列表 - section_list = self.sl - # 初始化一个字典来存储找到的章节和它们在文档中出现的页码 - section_page_dict = {} - # 遍历每一页文档 - for page_index, page in enumerate(self.pdf): - # 获取当前页面的文本内容 - cur_text = page.get_text() - # 遍历需要寻找的章节名称列表 - for section_name in section_list: - # 将章节名称转换成大写形式 - section_name_upper = section_name.upper() - # 如果当前页面包含"Abstract"这个关键词 - if "Abstract" == section_name and section_name in cur_text: - # 将"Abstract"和它所在的页码加入字典中 - section_page_dict[section_name] = page_index - # 如果当前页面包含章节名称,则将章节名称和它所在的页码加入字典中 - else: - if section_name + '\n' in cur_text: - section_page_dict[section_name] = page_index - elif section_name_upper + '\n' in cur_text: - section_page_dict[section_name] = page_index - # 返回所有找到的章节名称及它们在文档中出现的页码 - return section_page_dict - - def _get_all_page(self): - """ - 获取PDF文件中每个页面的文本信息,并将文本信息按照章节组织成字典返回。 - Returns: - section_dict (dict): 每个章节的文本信息字典,key为章节名,value为章节文本。 - """ - text = '' - text_list = [] - section_dict = {} - - # # 先处理Abstract章节 - # for page_index, page in enumerate(self.pdf): - # cur_text = page.get_text() - # # 如果该页面是Abstract章节所在页面 - # if page_index == list(self.section_page_dict.values())[0]: - # abs_str = "Abstract" - # # 获取Abstract章节的起始位置 - # first_index = cur_text.find(abs_str) - # # 查找下一个章节的关键词,这里是Introduction - # intro_str = "Introduction" - # if intro_str in cur_text: - # second_index = cur_text.find(intro_str) - # elif intro_str.upper() in cur_text: - # second_index = cur_text.find(intro_str.upper()) - # # 将Abstract章节内容加入字典中 - # section_dict[abs_str] = cur_text[first_index+len(abs_str)+1:second_index].replace('-\n', - # 
'').replace('\n', ' ').split('I.')[0].split("II.")[0] - - # 再处理其他章节: - text_list = [page.get_text() for page in self.pdf] - for sec_index, sec_name in enumerate(self.section_page_dict): - print(sec_index, sec_name, self.section_page_dict[sec_name]) - if sec_index <= 0: - continue - else: - # 直接考虑后面的内容: - start_page = self.section_page_dict[sec_name] - if sec_index < len(list(self.section_page_dict.keys())) - 1: - end_page = self.section_page_dict[list(self.section_page_dict.keys())[sec_index + 1]] - else: - end_page = len(text_list) - print("start_page, end_page:", start_page, end_page) - cur_sec_text = '' - if end_page - start_page == 0: - if sec_index < len(list(self.section_page_dict.keys())) - 1: - next_sec = list(self.section_page_dict.keys())[sec_index + 1] - if text_list[start_page].find(sec_name) == -1: - start_i = text_list[start_page].find(sec_name.upper()) - else: - start_i = text_list[start_page].find(sec_name) - if text_list[start_page].find(next_sec) == -1: - end_i = text_list[start_page].find(next_sec.upper()) - else: - end_i = text_list[start_page].find(next_sec) - cur_sec_text += text_list[start_page][start_i:end_i] - else: - for page_i in range(start_page, end_page): - # print("page_i:", page_i) - if page_i == start_page: - if text_list[start_page].find(sec_name) == -1: - start_i = text_list[start_page].find(sec_name.upper()) - else: - start_i = text_list[start_page].find(sec_name) - cur_sec_text += text_list[page_i][start_i:] - elif page_i < end_page: - cur_sec_text += text_list[page_i] - elif page_i == end_page: - if sec_index < len(list(self.section_page_dict.keys())) - 1: - next_sec = list(self.section_page_dict.keys())[sec_index + 1] - if text_list[start_page].find(next_sec) == -1: - end_i = text_list[start_page].find(next_sec.upper()) - else: - end_i = text_list[start_page].find(next_sec) - cur_sec_text += text_list[page_i][:end_i] - section_dict[sec_name] = cur_sec_text.replace('-\n', '').replace('\n', ' ') - return section_dict - - -# 定义Reader类 -class Reader: - # 初始化方法,设置属性 - def __init__(self, key_word='', query='', filter_keys='', - root_path='./', - gitee_key='', - sort=arxiv.SortCriterion.SubmittedDate, user_name='defualt', language='cn', key=''): - self.key = str(key) # OpenAI key - self.user_name = user_name # 读者姓名 - self.key_word = key_word # 读者感兴趣的关键词 - self.query = query # 读者输入的搜索查询 - self.sort = sort # 读者选择的排序方式 - self.language = language # 读者选择的语言 - self.filter_keys = filter_keys # 用于在摘要中筛选的关键词 - self.root_path = root_path - self.file_format = 'md' # or 'txt',如果为图片,则必须为'md' - self.save_image = False - if self.save_image: - self.gitee_key = self.config.get('Gitee', 'api') - else: - self.gitee_key = '' - - def get_arxiv(self, max_results=30): - search = arxiv.Search(query=self.query, - max_results=max_results, - sort_by=self.sort, - sort_order=arxiv.SortOrder.Descending, - ) - return search - - def filter_arxiv(self, max_results=30): - search = self.get_arxiv(max_results=max_results) - print("all search:") - for index, result in enumerate(search.results()): - print(index, result.title, result.updated) - - filter_results = [] - filter_keys = self.filter_keys - - print("filter_keys:", self.filter_keys) - # 确保每个关键词都能在摘要中找到,才算是目标论文 - for index, result in enumerate(search.results()): - abs_text = result.summary.replace('-\n', '-').replace('\n', ' ') - meet_num = 0 - for f_key in filter_keys.split(" "): - if f_key.lower() in abs_text.lower(): - meet_num += 1 - if meet_num == len(filter_keys.split(" ")): - filter_results.append(result) - # break - 
print("filter_results:", len(filter_results)) - print("filter_papers:") - for index, result in enumerate(filter_results): - print(index, result.title, result.updated) - return filter_results - - def validateTitle(self, title): - # 将论文的乱七八糟的路径格式修正 - rstr = r"[\/\\\:\*\?\"\<\>\|]" # '/ \ : * ? " < > |' - new_title = re.sub(rstr, "_", title) # 替换为下划线 - return new_title - - def download_pdf(self, filter_results): - # 先创建文件夹 - date_str = str(datetime.datetime.now())[:13].replace(' ', '-') - key_word = str(self.key_word.replace(':', ' ')) - path = self.root_path + 'pdf_files/' + self.query.replace('au: ', '').replace('title: ', '').replace('ti: ', - '').replace( - ':', ' ')[:25] + '-' + date_str - try: - os.makedirs(path) - except: - pass - print("All_paper:", len(filter_results)) - # 开始下载: - paper_list = [] - for r_index, result in enumerate(filter_results): - try: - title_str = self.validateTitle(result.title) - pdf_name = title_str + '.pdf' - # result.download_pdf(path, filename=pdf_name) - self.try_download_pdf(result, path, pdf_name) - paper_path = os.path.join(path, pdf_name) - print("paper_path:", paper_path) - paper = Paper(path=paper_path, - url=result.entry_id, - title=result.title, - abs=result.summary.replace('-\n', '-').replace('\n', ' '), - authers=[str(aut) for aut in result.authors], - ) - # 下载完毕,开始解析: - paper.parse_pdf() - paper_list.append(paper) - except Exception as e: - print("download_error:", e) - pass - return paper_list - - @tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10), - stop=tenacity.stop_after_attempt(5), - reraise=True) - def try_download_pdf(self, result, path, pdf_name): - result.download_pdf(path, filename=pdf_name) - - @tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10), - stop=tenacity.stop_after_attempt(5), - reraise=True) - def upload_gitee(self, image_path, image_name='', ext='png'): - """ - 上传到码云 - :return: - """ - with open(image_path, 'rb') as f: - base64_data = base64.b64encode(f.read()) - base64_content = base64_data.decode() - - date_str = str(datetime.datetime.now())[:19].replace(':', '-').replace(' ', '-') + '.' 
+ ext - path = image_name + '-' + date_str - - payload = { - "access_token": self.gitee_key, - "owner": self.config.get('Gitee', 'owner'), - "repo": self.config.get('Gitee', 'repo'), - "path": self.config.get('Gitee', 'path'), - "content": base64_content, - "message": "upload image" - } - # 这里需要修改成你的gitee的账户和仓库名,以及文件夹的名字: - url = f'https://gitee.com/api/v5/repos/' + self.config.get('Gitee', 'owner') + '/' + self.config.get('Gitee', - 'repo') + '/contents/' + self.config.get( - 'Gitee', 'path') + '/' + path - rep = requests.post(url, json=payload).json() - print("rep:", rep) - if 'content' in rep.keys(): - image_url = rep['content']['download_url'] - else: - image_url = r"https://gitee.com/api/v5/repos/" + self.config.get('Gitee', 'owner') + '/' + self.config.get( - 'Gitee', 'repo') + '/contents/' + self.config.get('Gitee', 'path') + '/' + path - - return image_url - - def summary_with_chat(self, paper_list, key): - htmls = [] - for paper_index, paper in enumerate(paper_list): - # 第一步先用title,abs,和introduction进行总结。 - text = '' - text += 'Title:' + paper.title - text += 'Url:' + paper.url - text += 'Abstrat:' + paper.abs - # intro - text += list(paper.section_text_dict.values())[0] - max_token = 2500 * 4 - text = text[:max_token] - chat_summary_text = self.chat_summary(text=text, key=str(key)) - htmls.append(chat_summary_text) - - # TODO 往md文档中插入论文里的像素最大的一张图片,这个方案可以弄的更加智能一些: - first_image, ext = paper.get_image_path() - if first_image is None or self.gitee_key == '': - pass - else: - image_title = self.validateTitle(paper.title) - image_url = self.upload_gitee(image_path=first_image, image_name=image_title, ext=ext) - htmls.append("\n") - htmls.append("![Fig](" + image_url + ")") - htmls.append("\n") - # 第二步总结方法: - # TODO,由于有些文章的方法章节名是算法名,所以简单的通过关键词来筛选,很难获取,后面需要用其他的方案去优化。 - method_key = '' - for parse_key in paper.section_text_dict.keys(): - if 'method' in parse_key.lower() or 'approach' in parse_key.lower(): - method_key = parse_key - break - - if method_key != '': - text = '' - method_text = '' - summary_text = '' - summary_text += "" + chat_summary_text - # methods - method_text += paper.section_text_dict[method_key] - # TODO 把这个变成tenacity的自动判别! 
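            # Note: max_token = 2500 * 4 below (and in the summary step above) is a rough character
            # budget, assuming the common rule of thumb of ~4 characters per token, i.e. roughly
            # 2500 tokens, so the truncated text plus the instructions should fit within
            # gpt-3.5-turbo's context window; text[:max_token] simply drops everything past that
            # point. The Chinese TODO above proposes replacing this manual cut-off with
            # tenacity-based retries that react to token-limit errors instead.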
- max_token = 2500 * 4 - text = summary_text + "\n :\n" + method_text - text = text[:max_token] - chat_method_text = self.chat_method(text=text, key=str(key)) - htmls.append(chat_method_text) - else: - chat_method_text = '' - htmls.append("\n") - - # 第三步总结全文,并打分: - conclusion_key = '' - for parse_key in paper.section_text_dict.keys(): - if 'conclu' in parse_key.lower(): - conclusion_key = parse_key - break - - text = '' - conclusion_text = '' - summary_text = '' - summary_text += "" + chat_summary_text + "\n :\n" + chat_method_text - if conclusion_key != '': - # conclusion - conclusion_text += paper.section_text_dict[conclusion_key] - max_token = 2500 * 4 - text = summary_text + "\n :\n" + conclusion_text - else: - text = summary_text - text = text[:max_token] - chat_conclusion_text = self.chat_conclusion(text=text, key=str(key)) - htmls.append(chat_conclusion_text) - htmls.append("\n") - - - ######## - - experiment_key = '' - for parse_key in paper.section_text_dict.keys(): - if 'Experiments' in parse_key.lower() or 'Results' in parse_key.lower(): - experiment_key = parse_key - break - - if experiment_key != '': - text = '' - - summary_text = '' - summary_text += "" + chat_summary_text - - max_token = 2500 * 4 - text = summary_text + "\n :\n" + method_text - - experiment_text = "" - experiment_text += paper.section_text_dict[experiment_key] - text += experiment_text - text = text[:max_token] - - else: - text = summary_text + "\n :\n" + method_text - text = text[:max_token] - chat_review_text = self.chat_review(text=text, key=str(key)) - htmls.append(chat_review_text) - htmls.append("\n") - - - - - md_text = "\n".join(htmls) - return markdown.markdown(md_text) - - @tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10), - stop=tenacity.stop_after_attempt(5), - reraise=True) - def chat_conclusion(self, text, key): - openai.api_key = key - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - # prompt需要用英语替换,少占用token。 - messages=[ - {"role": "system", "content": "你是一个[" + self.key_word + "]领域的审稿人,你需要严格评审这篇文章"}, # chatgpt 角色 - {"role": "assistant", - "content": "这是一篇英文文献的部分内容,其中你已经总结好了,但是部分,我需要你帮忙归纳下面问题:" + text}, - # 背景知识,可以参考OpenReview的审稿流程 - {"role": "user", "content": """ - 8. Make the following summary.Be sure to use Chinese answers (proper nouns need to be marked in English). - - (1):What is the significance of this piece of work? - - (2):Summarize the strengths and weaknesses of this article in three dimensions: innovation point, performance, and workload. - ....... - Follow the format of the output later: - 8. Conclusion: \n\n - - (1):xxx;\n - - (2):Innovation point: xxx; Performance: xxx; Workload: xxx;\n - - Be sure to use Chinese answers (proper nouns need to be marked in English), statements as concise and academic as possible, do not repeat the content of the previous , the value of the use of the original numbers, be sure to strictly follow the format, the corresponding content output to xxx, in accordance with \n line feed, ....... means fill in according to the actual requirements, if not, you can not write. 
- """}, - ] - ) - result = '' - for choice in response.choices: - result += choice.message.content - print("conclusion_result:\n", result) - return result - - @tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10), - stop=tenacity.stop_after_attempt(5), - reraise=True) - def chat_method(self, text, key): - openai.api_key = key - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[ - {"role": "system", "content": "你是一个[" + self.key_word + "]领域的科研人员,善于使用精炼的语句总结论文"}, # chatgpt 角色 - {"role": "assistant", - "content": "这是一篇英文文献的部分内容,其中你已经总结好了,但是部分,我需要你帮忙阅读并归纳下面问题:" + text}, - # 背景知识 - {"role": "user", "content": """ - 7. Describe in detail the methodological idea of this article. Be sure to use Chinese answers (proper nouns need to be marked in English). For example, its steps are. - - (1):... - - (2):... - - (3):... - - ....... - Follow the format of the output that follows: - 7. Methods: \n\n - - (1):xxx;\n - - (2):xxx;\n - - (3):xxx;\n - ....... \n\n - - Be sure to use Chinese answers (proper nouns need to be marked in English), statements as concise and academic as possible, do not repeat the content of the previous , the value of the use of the original numbers, be sure to strictly follow the format, the corresponding content output to xxx, in accordance with \n line feed, ....... means fill in according to the actual requirements, if not, you can not write. - """}, - ] - ) - result = '' - for choice in response.choices: - result += choice.message.content - print("method_result:\n", result) - return result - - @tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10), - stop=tenacity.stop_after_attempt(5), - reraise=True) - def chat_summary(self, text, key): - openai.api_key = key - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[ - {"role": "system", "content": "你是一个[" + self.key_word + "]领域的科研人员,善于使用精炼的语句总结论文"}, # chatgpt 角色 - {"role": "assistant", "content": "这是一篇英文文献的标题,作者,链接,Abstract和Introduction部分内容,我需要你帮忙阅读并归纳下面问题:" + text}, - # 背景知识 - {"role": "user", "content": """ - 1. Mark the title of the paper (with Chinese translation) - 2. list all the authors' names (use English) - 3. mark the first author's affiliation (output Chinese translation only) - 4. mark the keywords of this article (use English) - 5. link to the paper, Github code link (if available, fill in Github:None if not) - 6. summarize according to the following four points.Be sure to use Chinese answers (proper nouns need to be marked in English) - - (1):What is the research background of this article? - - (2):What are the past methods? What are the problems with them? Is the approach well motivated? - - (3):What is the research methodology proposed in this paper? - - (4):On what task and what performance is achieved by the methods in this paper? Can the performance support their goals? - Follow the format of the output that follows: - 1. Title: xxx\n\n - 2. Authors: xxx\n\n - 3. Affiliation: xxx\n\n - 4. Keywords: xxx\n\n - 5. Urls: xxx or xxx , xxx \n\n - 6. Summary: \n\n - - (1):xxx;\n - - (2):xxx;\n - - (3):xxx;\n - - (4):xxx.\n\n - - Be sure to use Chinese answers (proper nouns need to be marked in English), statements as concise and academic as possible, do not have too much repetitive information, numerical values using the original numbers, be sure to strictly follow the format, the corresponding content output to xxx, in accordance with \n line feed. 
- """}, - ] - ) - result = '' - for choice in response.choices: - result += choice.message.content - print("summary_result:\n", result) - return result - @tenacity.retry(wait=tenacity.wait_exponential(multiplier=1, min=4, max=10), - stop=tenacity.stop_after_attempt(5), - reraise=True) - def chat_review(self, text, key): - openai.api_key = key - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[ - {"role": "system", - "content": "You are a researcher in the field of [" + self.key_word + "] who is good at reviewing papers using concise statements"}, - # chatgpt 角色 - {"role": "assistant", - "content": "This is the title, author, link, abstract, introduction, method, and experiments of an English document. I need your help to read and review the following questions: " + text}, - # 背景知识 - {"role": "user", "content": """ - 1. summarize according to the following four points.Be sure to use English answers (proper nouns need to be marked in English) - - (1):What is the research background of this article? - - (2):What are the past methods? What are the problems with them? Is the approach well motivated? - - (3):What is the research methodology proposed in this paper? - - (4):On what task and what performance is achieved by the methods in this paper? Can the performance support their goals? - Follow the format of the output that follows: - 2. Strengths: \n\n - - (1):Background;\n - - (2):Main challenges and Motivations;\n - - (3):The detail of methods\n - - (4):Results.\n\n - 3. Weakness: \n\n - - (1):Motivation;\n - - (2):Methods;\n - - (3):Novelty\n - - (4):Results.\n\n - 4. Other questions: \n\n - - (1):\n - - (2):\n - - (3):\n - - (4):\n\n - Be sure to use English answers, statements as concise and academic as possible, do not have too much repetitive information, numerical values using the original numbers, be sure to strictly follow the format, the corresponding content output to xxx, in accordance with \n line feed. - """}, - ] - ) - result = '' - for choice in response.choices: - result += choice.message.content - print("review_result:\n", result) - return result - - def export_to_markdown(self, text, file_name, mode='w'): - # 使用markdown模块的convert方法,将文本转换为html格式 - # html = markdown.markdown(text) - # 打开一个文件,以写入模式 - with open(file_name, mode, encoding="utf-8") as f: - # 将html格式的内容写入文件 - f.write(text) - - # 定义一个方法,打印出读者信息 - - def show_info(self): - print(f"Key word: {self.key_word}") - print(f"Query: {self.query}") - print(f"Sort: {self.sort}") - - -def upload_pdf(key, text, file): - # 检查两个输入都不为空 - if not key or not text or not file: - return "两个输入都不能为空,请输入字符并上传 PDF 文件!" - # 判断PDF文件 - # if file and file.name.split(".")[-1].lower() != "pdf": - # return '请勿上传非 PDF 文件!' - else: - section_list = text.split(',') - paper_list = [Paper(path=file, sl=section_list)] - # 创建一个Reader对象 - reader = Reader() - sum_info = reader.summary_with_chat(paper_list=paper_list, key=key) - return sum_info - - -api_title = "api-key可用验证" -api_description = '''
        - -This is ChatPaper Plus, add review function. - -Star Github [ChatpaperPlus](https://github.com/luckyfan-cs/ChatPaperPlus) - -Use ChatGPT to summary the papers. Star Authors Github [🌟ChatPaper](https://github.com/kaixindelele/ChatPaper) . - -🔴请注意:千万不要用于严肃的学术场景,只能用于论文阅读前的初筛! - -
        -''' - -api_input = [ - gradio.inputs.Textbox(label="请输入你的api-key(必填)", default="", type='password') -] -api_gui = gradio.Interface(fn=valid_apikey, inputs=api_input, outputs="text", title=api_title, - description=api_description) - -# 标题 -title = "ChatPaperPlus" -# 描述 -description = '''
        - -This is ChatPaper Plus, add review function. - -Star Github [ChatpaperPlus](https://github.com/luckyfan-cs/ChatPaperPlus). - -Use ChatGPT to summary the papers. Star Authors Github [🌟ChatPaper](https://github.com/kaixindelele/ChatPaper). - -🔴请注意:千万不要用于严肃的学术场景,只能用于论文阅读前的初筛! -
        -''' -# 创建Gradio界面 -ip = [ - gradio.inputs.Textbox(label="请输入你的API-key(必填)", default="", type='password'), - gradio.inputs.Textbox(label="请输入论文大标题索引(用英文逗号隔开,必填)", - default="'Abstract,Introduction,Related Work,Background,Preliminary,Problem Formulation,Methods,Methodology,Method,Approach,Approaches,Materials and Methods,Experiment Settings,Experiment,Experimental Results,Evaluation,Experiments,Results,Findings,Data Analysis,Discussion,Results and Discussion,Conclusion,References'"), - gradio.inputs.File(label="请上传论文PDF(必填)") -] - -chatpaper_gui = gradio.Interface(fn=upload_pdf, inputs=ip, outputs="html", title=title, description=description) - -# Start server -gui = gradio.TabbedInterface(interface_list=[api_gui, chatpaper_gui], tab_names=["API-key", "ChatPaper"]) -gui.launch(quiet=True, show_api=False) \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Apocalypto Movie Dual Audio Hindi Added.md b/spaces/quidiaMuxgu/Expedit-SAM/Apocalypto Movie Dual Audio Hindi Added.md deleted file mode 100644 index 8a519e63888df44a987c45cba60e13b57127a05b..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Apocalypto Movie Dual Audio Hindi Added.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Apocalypto Movie Dual Audio Hindi Added


        DOWNLOAD ····· https://geags.com/2uCqSc



        -
        - 3cee63e6c2
        -
        -
        -

        diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Contpaq 2004 Full Espabfdcm.md b/spaces/quidiaMuxgu/Expedit-SAM/Contpaq 2004 Full Espabfdcm.md deleted file mode 100644 index 9efee58c20656d60a5de2f3a560e29702530cd38..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Contpaq 2004 Full Espabfdcm.md +++ /dev/null @@ -1,27 +0,0 @@ -
        -

        How to Install Contpaq 2004 Full Espabfdcm on Your Computer

        -

        Contpaq 2004 is a software program that helps you manage your accounting and financial operations. It is designed for small and medium-sized businesses in Mexico. Contpaq 2004 full espabfdcm is a version of Contpaq 2004 that includes all the features and updates of the original software.

        -

        Contpaq 2004 full espabfdcm


        DOWNLOAD »»» https://geags.com/2uCq7Z



        -

        If you want to install Contpaq 2004 full espabfdcm on your computer, you will need to follow these steps:

        -
          -
        1. Download the Contpaq 2004 full espabfdcm file from a reliable source. You can find it on SoundCloud[^1^] [^2^] or other websites that offer free downloads.
        2. -
        3. Extract the file using a program like WinRAR or 7-Zip. You will get a folder named Contpaq 2004 full espabfdcm.
        4. -
        5. Open the folder and run the setup.exe file. Follow the instructions on the screen to install Contpaq 2004 full espabfdcm on your computer.
        6. -
        7. When the installation is complete, you will need to activate Contpaq 2004 full espabfdcm using a serial number or a license key. You can find these on the internet or contact the developer of Contpaq 2004 for assistance.
        8. -
        9. Enjoy using Contpaq 2004 full espabfdcm for your accounting and financial needs.
        10. -
        -

        Note: If you encounter any problems during the installation or activation of Contpaq 2004 full espabfdcm, you may need to apply a hotfix from Microsoft[^5^] or consult the manual of Contpaq 2004[^6^] for troubleshooting tips.

        - -

        Contpaq 2004 full espabfdcm is a powerful and versatile software that can help you with various accounting and financial tasks. Some of the features of Contpaq 2004 full espabfdcm are:

        -

        -
          -
        • It allows you to create and manage catalogs of accounts, customers, suppliers, products, services, and taxes.
        • -
        • It lets you record and track transactions such as invoices, receipts, payments, purchases, sales, and inventory movements.
        • -
        • It generates and prints reports and statements such as balance sheets, income statements, cash flow statements, tax returns, and bank reconciliations.
        • -
        • It integrates with other software programs such as Microsoft Excel, Word, Outlook, and Access.
        • -
        • It supports multiple currencies, languages, and accounting standards.
        • -
        -

        Contpaq 2004 full espabfdcm is a user-friendly and customizable software that can adapt to your specific needs and preferences. You can configure the interface, the menus, the toolbars, the shortcuts, and the security settings of Contpaq 2004 full espabfdcm. You can also create your own templates, formulas, macros, and queries to automate and simplify your work.

        -

        Contpaq 2004 full espabfdcm is a reliable and secure software that protects your data and your privacy. It uses encryption and password protection to prevent unauthorized access to your files. It also creates backups and restores of your data in case of any loss or damage. It also checks for errors and inconsistencies in your data and alerts you if any are found.

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Crack Cd Key God Of War 3 Pc Download [NEW].md b/spaces/quidiaMuxgu/Expedit-SAM/Crack Cd Key God Of War 3 Pc Download [NEW].md deleted file mode 100644 index 47510dad0e179e2124a5152bcc00b979fa16cde0..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Crack Cd Key God Of War 3 Pc Download [NEW].md +++ /dev/null @@ -1,143 +0,0 @@ - -

        Crack CD Key God of War 3 PC Download: The Ultimate Solution for Action-Adventure Fans

        - -

        God of War 3 is one of the most popular and critically acclaimed games of all time. It is the fifth installment in the God of War series and the sequel to God of War 2. The game follows the story of Kratos, a former Spartan warrior who becomes the God of War after killing Ares. He then seeks revenge against Zeus, the king of the Olympian gods, who betrayed him and stripped him of his godhood. The game is set in ancient Greece and features a rich and immersive mythology, a stunning graphics, an epic soundtrack, and a visceral combat system that allows the player to use various weapons, magic, and quick-time events to defeat enemies and bosses.

        - -

        However, God of War 3 was originally released only for the PlayStation 3 console in 2010, which means that PC gamers could not enjoy this masterpiece. Fortunately, there is a way to play God of War 3 on PC with a crack CD key. A crack CD key is a code that can bypass the security system of a game and allow it to run on any device without requiring the original disc or online activation. With a crack CD key, you can download and install God of War 3 on your PC for free and play it without any limitations.

        -

        Crack cd key god of war 3 pc download


        Download File >>> https://geags.com/2uCqZ6



        - -

        In this article, we will show you how to get and install a crack CD key for God of War 3 PC download and what benefits and precautions it can bring you. We will also give you some tips and tricks to enhance your gaming experience.

        - -

        How to get a crack CD key for God of War 3 PC download?

        - -

        There are many websites that offer crack CD keys for various games, including God of War 3. However, not all of them are reliable or safe. Some of them may contain viruses, malware, or other harmful programs that can damage your computer or steal your personal information. Therefore, you should be careful and selective when choosing where to get a crack CD key for God of War 3 PC download.

        - -

        One of the most trusted and reputable sources for getting a crack CD key for God of War 3 PC download is Console2PC. This is a website that provides PS to PC conversions for various games, including God of War 3. You can find the crack CD key for God of War 3 PC download in Console2PC under the PS to PC category. The code is uploaded by the developer and publisher of God of War 3, Sony Computer Entertainment.

        - -

        To get a crack CD key for God of War 3 PC download from Console2PC, you need to register an account on the website and log in. Then, you can click on the Download button and save the code to your computer. You can also check the System Requirements to see if your device meets all minimum requirements to run the game.

        - -

        How to install a crack CD key for God of War 3 PC download?

        - -

        Installing a crack CD key for God of War 3 PC download is very simple and straightforward. You just need to follow these steps:

        - -
          -
        1. Locate the code that you got from Console2PC. It should be named "Crack CD Key" or something similar.
        2. -
        3. Double-click on the code to run it. It should open a window that shows you some information about the product.
        4. -
        5. Click on Next or Install to start the installation process. It should take a few minutes to complete.
        6. -
        7. Once the installation is done, you should get a message saying "Installation completed successfully!"
        8. -
        9. Run God of War 3 from your desktop or start menu and enjoy the game!
        10. -
        - -

        What benefits does a crack CD key for God of War 3 PC download bring you?

        - -

        By installing a crack CD key for God of War 3 PC download, you can enjoy several benefits that will enhance your gaming experience. Here are some of them:

        - -
          -
        • You can play God of War 3 on your PC with better graphics, resolution, and frame rate than on PS3.
        • -
        • You can use your keyboard and mouse or any controller that you prefer to control Kratos and his actions.
        • -
        • You can access all features, modes, levels, weapons, magic, and items that are available in the game.
        • -
        • You can save your progress anytime and anywhere without worrying about losing data or space.
        • -
        • You can play online with other players or offline without requiring internet connection or online activation.
        • -
        • You can have more fun with the game without spending any money or breaking any laws.
        • -
        - -

        What precautions should you take before installing -a crack CD key for God of War 3 PC download?

        - -

        A crack CD key for God of War 3 PC download is a safe -and effective product that can help you play -God of War 3 on your PC -for free -and without any limitations. -However, -there are some precautions that you should take before installing it:

        -

        - -
          -
        • You should back up your original game file before installing -a crack CD key for God of War 3 PC download. -You can find it in "God of War 3\game" folder, -where game is your current game name. -Just copy -and paste it somewhere else or rename it to something else.
        • -
        • You should make sure that your game version matches -the game version of -a crack CD key for God of War 3 PC download. -If you have a different game version, -you may not be able to install -the product or encounter errors or glitches. -You can check your game version by opening -the "version.inf" file in your God of War 3 directory.
        • -
        • You should only install -a crack CD key for God of War 3 PC download on legit or cracked versions of -the game that allow game modifications. -If you install it on versions that prohibit game modifications, -you might get kicked or banned by -the anti-cheat system or -the server admins.
        • -
        • You should only install -a crack CD key for God of War 3 PC download for personal use or entertainment purposes only. -Do not use it to cheat or ruin other players' gaming experience. -Be respectful and fair to other players and enjoy -the game responsibly.
        • -
        - -

        Conclusion

        - -

        We hope that this article has helped you understand what -a crack CD key for God of War 3 PC download is, -how it works, -how to get and install it, -what benefits and precautions it can bring you, -and how to enhance your gaming experience. -If you are a fan of God of War 3 and want to play -it on your PC -for free -and without any limitations, -then -a crack CD key for God of War 3 PC download is -the product for you. -However, -remember to use it responsibly and respectfully, -and enjoy -the game!

        -

        The article is already complete and does not need to be continued. However, if you want to add some more information or details, you could write something like this:

        - -

        How to troubleshoot common problems with a crack CD key for God of War 3 PC download?

        - -

        Although a crack CD key for God of War 3 PC download is a reliable and effective product that can help you play God of War 3 on your PC for free and without any limitations, you may encounter some problems or issues while using it. Here are some of the most common problems and their solutions:

        - -
          -
        • Problem: The game does not start or crashes after installing the crack CD key.
        • -
        • Solution: This may be caused by a compatibility issue between the game and the crack CD key. To fix this, you need to update your game to the latest patch version or download a compatible crack CD key from a different source. You can also try running the game as an administrator or in compatibility mode.
        • -
        • Problem: The game asks for the original disc or online activation after installing the crack CD key.
        • -
        • Solution: This may be caused by a faulty installation of the crack CD key or a conflict with another program. To fix this, you need to reinstall the crack CD key or disable any antivirus or firewall programs that may interfere with it. You can also try deleting the "steam_api.dll" file from your God of War 3 directory and replacing it with the one from the crack CD key.
        • -
        • Problem: The game runs slowly or lags after installing the crack CD key.
        • -
        • Solution: This may be caused by a low performance of your device or a high graphics setting of the game. To fix this, you need to lower the graphics setting of the game or upgrade your device's hardware. You can also try closing any background programs that may consume your device's resources.
        • -
        - -

        If you have any other problems or questions with a crack CD key for God of War 3 PC download, you can contact the support team of Console2PC or YouTube for assistance. They will be happy to help you and provide you with more information and guidance.

        -

        The article is already complete and does not need to be continued. However, if you want to add some more information or details, you could write something like this:

        - -

        How to enjoy God of War 3 on PC with a crack CD key?

        - -

        God of War 3 is a game that offers a lot of fun and entertainment for action-adventure fans. It has a captivating story, a thrilling gameplay, and a stunning graphics that will keep you hooked for hours. With a crack CD key, you can play God of War 3 on your PC for free and without any limitations. Here are some tips and tricks to enjoy God of War 3 on PC with a crack CD key:

        - -
          -
        • Explore the game world and discover its secrets. God of War 3 is set in ancient Greece and features a rich and immersive mythology. You can explore different locations, such as Mount Olympus, the Underworld, the Labyrinth, and more. You can also find hidden items, such as Gorgon Eyes, Phoenix Feathers, Minotaur Horns, and Godly Possessions, that can enhance your abilities and unlock new features.
        • -
        • Master the combat system and unleash your wrath. God of War 3 has a visceral combat system that allows you to use various weapons, magic, and quick-time events to defeat enemies and bosses. You can switch between different weapons, such as the Blades of Exile, the Claws of Hades, the Nemean Cestus, and the Nemesis Whip, and use their unique abilities. You can also use magic, such as the Army of Sparta, the Soul of Hades, the Nemean Roar, and the Nemesis Rage, to deal massive damage. You can also perform brutal finishing moves and rip your enemies apart with quick-time events.
        • -
        • Challenge yourself and test your skills. God of War 3 has different modes and levels of difficulty that can challenge your skills and provide you with more fun. You can play the game on Easy, Normal, Hard, or Chaos mode, depending on your preference and experience. You can also unlock new modes after completing the game, such as Titan Mode, Challenge Mode, Bonus Play Mode, and Arena Mode. These modes offer new challenges and rewards that will keep you engaged.
        • -
        - -

        God of War 3 is a game that you should not miss if you are a fan of action-adventure games. With a crack CD key for God of War 3 PC download, you can play this game on your PC for free and without any limitations. You can enjoy the game's story, graphics, and gameplay at your own pace and convenience. You can also have more fun with the game's features, modes, and secrets. So what are you waiting for? Get a crack CD key for God of War 3 PC download today and enjoy this masterpiece!

        -

        Conclusion

        - -

        In this article, we have shown you what a crack CD key for God of War 3 PC download is, how it works, how to get and install it, what benefits and precautions it can bring you, and how to enhance your gaming experience. We have also given you some tips and tricks to enjoy God of War 3 on PC with a crack CD key. We hope that this article has helped you understand and appreciate this product and this game.

        - -

        A crack CD key for God of War 3 PC download is a product that can help you play God of War 3 on your PC for free and without any limitations. It is a code that can bypass the security system of the game and allow it to run on any device without requiring the original disc or online activation. It is a safe and effective product that can provide you with several benefits and fun. However, you should also take some precautions before installing it and use it responsibly and respectfully.

        - -

        God of War 3 is a game that you should not miss if you are a fan of action-adventure games. It is the fifth installment in the God of War series and the sequel to God of War 2. It is a game that features a captivating story, a stunning graphics, an epic soundtrack, and a visceral combat system that allows the player to use various weapons, magic, and quick-time events to defeat enemies and bosses. It is a game that is set in ancient Greece and features a rich and immersive mythology. It is a game that offers a lot of fun and entertainment for action-adventure fans.

        - -

        If you want to play God of War 3 on your PC for free and without any limitations, then a crack CD key for God of War 3 PC download is the product for you. You can get it from Console2PC or YouTube and install it easily on your device. You can then enjoy the game's story, graphics, and gameplay at your own pace and convenience. You can also have more fun with the game's features, modes, and secrets. So what are you waiting for? Get a crack CD key for God of War 3 PC download today and enjoy this masterpiece!

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Download Mastercam X8 Full Crack 64-bit Utorrent.md b/spaces/quidiaMuxgu/Expedit-SAM/Download Mastercam X8 Full Crack 64-bit Utorrent.md deleted file mode 100644 index c6cd91c1b6adfa3b212df0184b8ae75a038272f6..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Download Mastercam X8 Full Crack 64-bit Utorrent.md +++ /dev/null @@ -1,46 +0,0 @@ -

        download mastercam x8 full crack 64-bit utorrent


        Download Filehttps://geags.com/2uCsIf



        - -efi - -USB devices: - -lsusb - -Bus 001 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub - -Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub - -Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub - -Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub - -Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub - -Bus 006 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub - -Bus 007 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub - -Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub - -Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub - -Bus 001 Device 010: ID 0bda:b728 Realtek Semiconductor Corp. Ralink Technology Corp. RT2870/RT3070 Wireless Adapter - -Bus 002 Device 002: ID 0489:e415 Foxconn / Hon Hai - -Bus 002 Device 003: ID 0bda:0129 Realtek Semiconductor Corp. Ralink Technology Corp. RT3070/RT3071 Wireless Adapter - -Bus 006 Device 002: ID 050d:0962 Belkin Components F7D1101 v10 - -Bus 006 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub - -Bus 004 Device 004: ID 0bda:0129 Realtek Semiconductor Corp. Ralink Technology Corp. RT3070/RT3071 Wireless Adapter - -Bus 004 Device 002: ID 0bda:0312 Realtek Semiconductor Corp. Ralink Technology Corp. RT3071 Wireless Adapter - -Bus 004 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub - -Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub 4fefd39f24
        -
        -
        -

        diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Keyclone 1.7n Crack LINK.md b/spaces/quidiaMuxgu/Expedit-SAM/Keyclone 1.7n Crack LINK.md deleted file mode 100644 index c0d05bedb60420460240a91f542ebd0c20c3975d..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Keyclone 1.7n Crack LINK.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Keyclone 1.7n crack


        Download File ★★★ https://geags.com/2uCqgB



        -
        -crack · Toyota Verkstadbok crack · HACK Audials Moviebox 12.0.63100.0 + Key.tgz · HACK Keyclone 1.7n · Windows 7. Avatar Themepack free download. 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/lib/model/VhullPIFuNet.py b/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/lib/model/VhullPIFuNet.py deleted file mode 100644 index 3bd30dc40722f8aff8403990b04f4fdba34fdc29..0000000000000000000000000000000000000000 --- a/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/lib/model/VhullPIFuNet.py +++ /dev/null @@ -1,70 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from .BasePIFuNet import BasePIFuNet - - -class VhullPIFuNet(BasePIFuNet): - ''' - Vhull Piximp network is a minimal network demonstrating how the template works - also, it helps debugging the training/test schemes - It does the following: - 1. Compute the masks of images and stores under self.im_feats - 2. Calculate calibration and indexing - 3. Return if the points fall into the intersection of all masks - ''' - - def __init__(self, - num_views, - projection_mode='orthogonal', - error_term=nn.MSELoss(), - ): - super(VhullPIFuNet, self).__init__( - projection_mode=projection_mode, - error_term=error_term) - self.name = 'vhull' - - self.num_views = num_views - - self.im_feat = None - - def filter(self, images): - ''' - Filter the input images - store all intermediate features. - :param images: [B, C, H, W] input images - ''' - # If the image has alpha channel, use the alpha channel - if images.shape[1] > 3: - self.im_feat = images[:, 3:4, :, :] - # Else, tell if it's not white - else: - self.im_feat = images[:, 0:1, :, :] - - def query(self, points, calibs, transforms=None, labels=None): - ''' - Given 3D points, query the network predictions for each point. - Image features should be pre-computed before this call. - store all intermediate features. - query() function may behave differently during training/testing. - :param points: [B, 3, N] world space coordinates of points - :param calibs: [B, 3, 4] calibration matrices for each image - :param transforms: Optional [B, 2, 3] image space coordinate transforms - :param labels: Optional [B, Res, N] gt labeling - :return: [B, Res, N] predictions for each point - ''' - if labels is not None: - self.labels = labels - - xyz = self.projection(points, calibs, transforms) - xy = xyz[:, :2, :] - - point_local_feat = self.index(self.im_feat, xy) - local_shape = point_local_feat.shape - point_feat = point_local_feat.view( - local_shape[0] // self.num_views, - local_shape[1] * self.num_views, - -1) - pred = torch.prod(point_feat, dim=1) - - self.preds = pred.unsqueeze(1) diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Charlie and the chocolate factory tamil dubbed torrent Best sites to find it.md b/spaces/raedeXanto/academic-chatgpt-beta/Charlie and the chocolate factory tamil dubbed torrent Best sites to find it.md deleted file mode 100644 index fbf4321065aba1ec783ce43b3c59cedd6e28dea7..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Charlie and the chocolate factory tamil dubbed torrent Best sites to find it.md +++ /dev/null @@ -1,120 +0,0 @@ -
        -

        Charlie and the Chocolate Factory Tamil Dubbed Torrent: How to Watch the Movie Online

        -

        Have you ever dreamed of visiting a magical chocolate factory where everything is edible and wonderful? If yes, then you might want to watch Charlie and the Chocolate Factory, a fantasy comedy movie based on the classic novel by Roald Dahl. But what if you don't understand English very well? Don't worry, you can still enjoy this movie in your own language. In this article, we will tell you how to download Charlie and the Chocolate Factory Tamil dubbed torrent and watch it online.

        -

        Introduction

        -

        What is Charlie and the Chocolate Factory?

        -

        Charlie and the Chocolate Factory is a 2005 movie directed by Tim Burton and starring Johnny Depp as Willy Wonka, the eccentric owner of a chocolate factory. The movie follows the story of Charlie Bucket, a poor boy who wins a golden ticket to visit Wonka's factory along with four other children. There, they discover a world of wonders and dangers, as well as Wonka's mysterious past.

        -

        charlie and the chocolate factory tamil dubbed torrent


        Downloadhttps://tinourl.com/2uL5Ck



        -

        Why is it popular in Tamil Nadu?

        -

        The movie is popular in Tamil Nadu because it appeals to people of all ages and backgrounds. It has a lot of humor, adventure, fantasy, and moral lessons. It also has a lot of colorful visuals and catchy songs that make it fun to watch. Moreover, the movie has been dubbed in Tamil by a talented team of voice actors who have given life to the characters and their dialogues.

        -

        How to download the Tamil dubbed torrent?

        -

        If you want to watch Charlie and the Chocolate Factory in Tamil, you can download it from various torrent websites that offer Tamil dubbed movies. However, you should be careful about the quality and safety of the torrent files. Some of them might be fake or infected with viruses. Therefore, you should always use a reliable VPN service and antivirus software when downloading torrents. You should also check the reviews and ratings of the torrent files before downloading them.

        -

        charlie and the chocolate factory tamil audio track download
        -charlie and the chocolate factory tamil dubbed movie free download
        -charlie and the chocolate factory tamil dubbed full movie watch online
        -charlie and the chocolate factory tamil dubbed 720p torrent
        -charlie and the chocolate factory tamil dubbed tamilyogi
        -charlie and the chocolate factory tamil dubbed isaimini
        -charlie and the chocolate factory tamil dubbed kuttymovies
        -charlie and the chocolate factory tamil dubbed moviesda
        -charlie and the chocolate factory tamil dubbed movie download in tamilrockers
        -charlie and the chocolate factory tamil dubbed movie download in isaidub
        -charlie and the chocolate factory tamil dubbed movie download in telegram
        -charlie and the chocolate factory tamil dubbed movie download in filmyzilla
        -charlie and the chocolate factory tamil dubbed movie download in movierulz
        -charlie and the chocolate factory tamil dubbed movie download in utorrent
        -charlie and the chocolate factory tamil dubbed movie download in filmywap
        -charlie and the chocolate factory 2005 tamil dubbed torrent
        -charlie and the chocolate factory 1971 tamil dubbed torrent
        -charlie and the chocolate factory 2005 vs 1971 tamil dubbed comparison
        -charlie and the chocolate factory 2005 full movie in tamil dubbed hd
        -charlie and the chocolate factory 1971 full movie in tamil dubbed hd
        -charlie and the chocolate factory full movie in tamil dubbed youtube
        -charlie and the chocolate factory full movie in tamil dubbed dailymotion
        -charlie and the chocolate factory full movie in tamil dubbed facebook
        -charlie and the chocolate factory full movie in tamil dubbed google drive
        -charlie and the chocolate factory full movie in tamil dubbed netflix
        -charlie and the chocolate factory book pdf free download in tamil
        -charlie and the chocolate factory book summary in tamil
        -charlie and the chocolate factory book review in tamil
        -charlie and the chocolate factory book vs movie in tamil
        -charlie and the chocolate factory book series in tamil
        -charlie and the glass elevator book pdf free download in tamil
        -charlie and the glass elevator book summary in tamil
        -charlie and the glass elevator book review in tamil
        -charlie and the glass elevator book vs movie in tamil
        -roald dahl books pdf free download in tamil
        -roald dahl books list in tamil
        -roald dahl books summary in tamil
        -roald dahl books review in tamil
        -roald dahl books movies in tamil
        -roald dahl biography in tamil
        -tim burton movies list in tamil dubbed
        -tim burton movies download in tamil dubbed torrent
        -tim burton movies watch online in tamil dubbed hd quality
        -tim burton movies best to worst ranking in tamil
        -tim burton movies trivia quiz questions and answers in tamil
        -johnny depp movies list in tamil dubbed
        -johnny depp movies download in tamil dubbed torrent
        -johnny depp movies watch online in tamil dubbed hd quality
        -johnny depp movies best to worst ranking in tamil
        -johnny depp movies trivia quiz questions and answers in tamil

        -

        The Plot of Charlie and the Chocolate Factory

        -

        The Golden Tickets

        -

        The movie begins with Willy Wonka announcing that he has hidden five golden tickets inside his chocolate bars. Whoever finds them will get a chance to visit his factory and receive a lifetime supply of chocolate. The news creates a frenzy among chocolate lovers around the world, who start buying Wonka's products in large quantities.

        -

        One of them is Charlie Bucket, a poor boy who lives with his parents and four grandparents in a small house. He loves chocolate but can only afford one bar a year on his birthday. He hopes to find a golden ticket but has no luck. However, on the day before the factory tour, he finds some money on the street and buys one more bar. To his surprise, he finds the last golden ticket inside it.

        -

        The Chocolate Factory Tour

        -

        The next day, Charlie goes to the factory with his Grandpa Joe, who used to work for Wonka before he closed his factory. There, they meet Wonka and the other four winners: Augustus Gloop, a greedy boy who loves eating; Veruca Salt, a spoiled girl who wants everything; Violet Beauregarde, a competitive girl who chews gum all the time; and Mike Teavee, a smart boy who is obsessed with television.

        -

        Wonka takes them on a tour of his factory, which is full of amazing rooms and machines that produce different kinds of candies and chocolates. He also introduces them to his workers, the Oompa-Loompas, small people who come from Loompaland and love singing songs.

        -

        The Fate of the Children

        -

        However, as they explore the factory, each of the children except Charlie falls into trouble due to their bad behavior. Augustus falls into a chocolate river and gets sucked into a pipe; Veruca tries to steal a squirrel from a nut room and gets thrown into a garbage chute; Violet turns into a giant blueberry after chewing an experimental gum; and Mike shrinks himself after teleporting himself through a TV screen.

        -

        Each time something happens to one of them, Wonka shows no concern but rather makes sarcastic remarks. He also lets the Oompa-Loompas sing songs that mock their faults and warn others not to follow their example.

        -

        The Final Surprise

        -

        At last, only Charlie remains with Wonka. Wonka takes him to his glass elevator that can fly anywhere. He tells him that he has passed his test and that he is the winner of his contest. He reveals that he was looking for an heir to take over his factory because he is old and has no family. He offers Charlie to live with him in his factory and learn everything from him.

        -

        Charlie is overjoyed but he refuses to leave his family behind. He asks Wonka if he can bring them along. Wonka says no because he thinks that family is a distraction and a burden. He tells Charlie that he had a bad relationship with his father, who was a dentist and hated sweets. He ran away from home when he was young and never saw him again.

        -

        Charlie feels sorry for Wonka but he still chooses his family over him. He gives back his golden ticket and says goodbye to him. Wonka is sad but he respects his decision.

        -

        However, as Charlie leaves with Grandpa Joe, they meet Mr. Slugworth outside. Mr. Slugworth was an employee of Wonka who pretended to be his rival and tried to tempt each child with money in exchange for stealing one of Wonka's inventions. He tells Charlie that he was actually working for Wonka all along as part of his test.

        -

        He also tells him that Wonka has changed his mind and agreed to let him bring his family along if he still wants to be his heir. He gives him back his golden ticket and tells him to go back to Wonka.

        -

        Charlie is happy but confused. He decides to go back to Wonka with Grandpa Joe. They find him in his elevator looking depressed. They tell him that they have changed their mind too and that they want to live with him in his factory.

        -

        Wonka is overjoyed but surprised. He asks them how they knew that he wanted them back. They show him his golden ticket which has another message written on it: "Greetings to you, my lucky finder! I shake you warmly by your hand! For now I do invite you into my life! To share with me my house! And all my secrets too! But first you must do something for me! You must bring your whole family! And live with me forevermore!"

        -

        Wonka realizes that he wrote this message long ago when he first made his contest but forgot about it because he was too busy making chocolates. He admits that he was wrong about family being a distraction and a burden. He says that he actually missed having someone to love and care for him.

        -

        He hugs Charlie and Grandpa Joe and thanks them for coming back to him. He then takes them on a ride in his elevator across the sky. He tells them that they are going to pick up their family from their house and bring them to his factory.

        -

        He also tells them that he has one more surprise for them: he has found his father after many years of searching for him. He says that he wants to reconcile with him and introduce him to Charlie.

        -

        The movie ends with Charlie smiling happily as he looks forward to starting a new life with Wonka and his family.

        -

        The Cast and Crew of Charlie and the Chocolate Factory

        -

        The Director: Tim Burton

        -

        Learning English Words and Phrases from the Movie

        -

        Another benefit of watching Charlie and the Chocolate Factory in Tamil is that you can learn some English words and phrases from the movie. The movie has a lot of vocabulary and expressions that are related to chocolate, candy, and fantasy. For example, you can learn words such as chocolate, factory, ticket, golden, river, pipe, nut, squirrel, gum, blueberry, TV, teleport, elevator, glass, heir, and invention. You can also learn phrases such as "You're really lucky", "That's impossible", "Don't touch that", "Follow me", "Hold on tight", "What's going on?", "Are you kidding me?", "That's amazing", and "I'm sorry".

        -

        By watching the movie in Tamil with English subtitles or vice versa, you can improve your English skills and have fun at the same time. You can also try to repeat the dialogues along with the characters or write them down in a notebook. This will help you to remember them better and use them in your own conversations.

        -

        Appreciating the Cultural Diversity of the Movie

        -

        Another benefit of watching Charlie and the Chocolate Factory in Tamil is that you can appreciate the cultural diversity of the movie. The movie has a lot of elements that reflect different cultures and traditions from around the world. For example, you can see the costumes and hairstyles of the Oompa-Loompas, who come from Loompaland, a fictional country in Africa. You can also see the food and drinks that are served in the factory, such as chocolate milkshakes, fizzy lifting drinks, everlasting gobstoppers, and snozzberries. You can also hear the songs that are sung by the Oompa-Loompas, which are based on different genres of music such as jazz, rock, pop, and rap.

        -

        By watching the movie in Tamil with English subtitles or vice versa, you can learn more about these aspects of the movie and enjoy them in a different way. You can also compare and contrast them with your own culture and traditions and see how they are similar or different. This will help you to broaden your horizons and respect other cultures.

        -

        Conclusion

        -

        Summary of the Main Points

        -

        In conclusion, Charlie and the Chocolate Factory is a wonderful movie that you can watch in Tamil and have a lot of benefits. You can enjoy the humor and wit of the movie in your own language. You can learn some English words and phrases from the movie. You can appreciate the cultural diversity of the movie. You can also have a lot of fun and entertainment by watching this movie.

        -

        Call to Action: Watch the Movie Online

        -

        So what are you waiting for? If you want to watch Charlie and the Chocolate Factory in Tamil dubbed torrent online, you can do so by following these simple steps:

        -
          -
        1. Go to a torrent website that offers Tamil dubbed movies.
        2. -
        3. Search for Charlie and the Chocolate Factory Tamil dubbed torrent.
        4. -
        5. Check the quality and safety of the torrent file before downloading it.
        6. -
        7. Use a VPN service and antivirus software when downloading torrents.
        8. -
        9. Enjoy watching the movie on your device.
        10. -
        -

        We hope that you will have a great time watching this movie and that you will share your feedback with us. Thank you for reading this article!

        -

        FAQs

        -
          -
        • Q: Is Charlie and the Chocolate Factory based on a true story?
        • -
        • A: No, Charlie and the Chocolate Factory is based on a fictional novel by Roald Dahl.
        • -
        • Q: Who wrote the songs for Charlie and the Chocolate Factory?
        • -
        • A: The songs for Charlie and the Chocolate Factory were written by Danny Elfman, who also composed the music for the movie.
        • -
        • Q: How many Oompa-Loompas are there in Charlie and the Chocolate Factory?
        • -
        • A: There are about 165 Oompa-Loompas in Charlie and the Chocolate Factory. They were all played by one actor named Deep Roy.
        • -
        • Q: What is snozzberry?
        • -
        • A: Snozzberry is a fictional fruit that Willy Wonka invented. It tastes like a combination of strawberry, raspberry, blueberry, and cranberry.
        • -
        • Q: What is Willy Wonka's real name?
        • -
        • A: Willy Wonka's real name is Wilbur Wonka. He changed it to Willy because he thought it sounded more catchy.
        • -
        -

        0a6ba089eb
        -
        -
        \ No newline at end of file diff --git a/spaces/rainy3/chatgpt_academic/Dockerfile b/spaces/rainy3/chatgpt_academic/Dockerfile deleted file mode 100644 index 757a188b3f8218c158077280729136d70879fcdb..0000000000000000000000000000000000000000 --- a/spaces/rainy3/chatgpt_academic/Dockerfile +++ /dev/null @@ -1,14 +0,0 @@ -FROM python:3.11 - -RUN echo '[global]' > /etc/pip.conf && \ - echo 'index-url = https://mirrors.aliyun.com/pypi/simple/' >> /etc/pip.conf && \ - echo 'trusted-host = mirrors.aliyun.com' >> /etc/pip.conf - - -WORKDIR /gpt -COPY requirements.txt . -RUN pip3 install -r requirements.txt - -COPY . . - -CMD ["python3", "-u", "main.py"] diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/American Pie Movie Download 300 Mb 43.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/American Pie Movie Download 300 Mb 43.md deleted file mode 100644 index abe73a1901b7d8ef0be4d65014db6e4bb1c5f643..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/American Pie Movie Download 300 Mb 43.md +++ /dev/null @@ -1,6 +0,0 @@ -

        american pie movie download 300 mb 43


        Download –––––>>> https://urlgoal.com/2uCJxq



        - - d5da3c52bf
        -
        -
        -

        diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Blue-Cloner Blue-Cloner Diamond 7.40 Build 814 (x86 X64) Crack Keygen [BETTER].md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Blue-Cloner Blue-Cloner Diamond 7.40 Build 814 (x86 X64) Crack Keygen [BETTER].md deleted file mode 100644 index c54a2dba4efd6879ee71c98ee7871f2c44c94f18..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Blue-Cloner Blue-Cloner Diamond 7.40 Build 814 (x86 X64) Crack Keygen [BETTER].md +++ /dev/null @@ -1,134 +0,0 @@ - -

        Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen: A Complete Guide

        - -

If you are looking for powerful and easy-to-use Blu-ray copying software, you might want to check out Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen. This software allows you to copy Blu-ray discs to your PC or other devices, as well as create ISO files, burn Blu-ray discs, and convert Blu-ray movies to various formats. In this article, we will review the features, benefits, and drawbacks of Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen, and show you how to use it to copy Blu-ray movies.

        -

        Blue-Cloner Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack keygen


        Download Zip ✺✺✺ https://urlgoal.com/2uCLDP



        - -

        What is Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen?

        - -

        Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen is a software package that includes Blue-Cloner, Open SmartBurner, Open Blu-ray Ripper, and Open Cloner Express Center. Blue-Cloner is the main program that enables you to copy Blu-ray discs to your PC or other devices. Open SmartBurner is a tool that allows you to burn data files and video files to Blu-ray discs or DVDs. Open Blu-ray Ripper is a program that can convert Blu-ray movies to various formats, such as MP4, MKV, AVI, etc. Open Cloner Express Center is a launcher that can access all the programs in the package.

        - -

        What are the features of Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen?

        - -

Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen has many features that make it a versatile and reliable Blu-ray copying tool. Some of the main features are:

        - -
          -
        • It can copy Blu-ray discs with various protections, such as AACS, BD+, MKB v26, etc.
        • -
        • It can copy Blu-ray discs to blank Blu-ray discs, or to the hard disk as ISO files or folders.
        • -
        • It can copy Blu-ray movies with 1:1 quality, or compress them to fit on smaller discs.
        • -
        • It can copy Blu-ray movies with different copy modes, such as Full Disc, Main Movie, Split Disc, Customize, etc.
        • -
        • It can copy Blu-ray movies with multiple audio tracks and subtitles.
        • -
        • It can copy Blu-ray movies with 3D effects and preserve them in the output.
        • -
        • It can copy Blu-ray movies with UHD quality and support HDR10 and Dolby Vision.
        • -
        • It can burn data files and video files to Blu-ray discs or DVDs with Open SmartBurner.
        • -
        • It can convert Blu-ray movies to various formats with Open Blu-ray Ripper.
        • -
        • It can access all the programs in the package with Open Cloner Express Center.
        • -
        - -

        What are the benefits of Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen?

        - -

        Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen has many benefits that make it a worthwhile investment for Blu-ray enthusiasts. Some of the main benefits are:

        - -
          -
        • It can help you backup your precious Blu-ray discs and protect them from scratches, damage, or loss.
        • -
        • It can help you enjoy your Blu-ray movies on various devices, such as PC, laptop, smartphone, tablet, etc.
        • -
        • It can help you save storage space by compressing your Blu-ray movies or converting them to smaller formats.
        • -
        • It can help you customize your Blu-ray movies by selecting the audio tracks and subtitles you want, or removing the unwanted parts.
        • -
        • It can help you enhance your viewing experience by preserving the original quality and effects of your Blu-ray movies.
        • -
        • It can help you create your own Blu-ray discs or DVDs with your data files or video files.
        • -
        - -

        What are the drawbacks of Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen?

        - -

Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen is not perfect software, and it has some drawbacks that you should be aware of before buying it. Some of the main drawbacks are:

        -

        - -
          -
        • It is not free and it requires a license key to activate it.
        • -
        • It may not be able to copy some new or rare Blu-ray discs due to encryption updates or compatibility issues.
        • -
        • It may take a long time to copy or convert some large or complex Blu-ray movies.
        • -
        • It may cause some errors or failures during the copying or burning process due to hardware or software problems.
        • -
        - -

        How to use Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen?

        - -

To use Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen, you need to download and install it on your PC first. Then you need to launch Open Cloner Express Center and select Blue-Cloner from the interface. You will see the main window of Blue-Cloner, where you can choose the source disc and the target disc or folder. You can also select the copy mode and adjust the settings according to your preferences. After that, you can click the Start button to begin the copying process. You can monitor the progress and view the details on the screen. When the copying is finished, you can eject the disc or open the output folder. - -To use Open SmartBurner, Open Blu-ray Ripper, or other programs in the package, you can also launch them from Open Cloner Express Center and follow the instructions on their interfaces. - -

        Conclusion

        - -

Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen is a comprehensive and powerful Blu-ray copying tool that can help you back up, enjoy, and create your own Blu-ray discs or DVDs. It has many features and benefits that make it a great choice for Blu-ray lovers. However, it also has some drawbacks that you should consider before buying it. If you are interested in trying it out, you can download it from [here](https://shoxet.com/2sL17l) and use the crack keygen to activate it.

        -

        How to download and install Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen?

        - -

If you want to try Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen for free, you can download it from [here](https://shoxet.com/2sL17l). This is a cracked version that does not require a license key to activate it. However, you should be careful when downloading and installing cracked software, as it may contain viruses or malware that can harm your PC. Therefore, you should scan the downloaded file with a reliable antivirus program before opening it.

        - -

        To install Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen, you need to follow these steps:

        - -
          -
        1. Extract the downloaded file to a folder on your PC.
        2. -
        3. Run the setup.exe file and follow the instructions on the screen.
        4. -
        5. When the installation is finished, do not launch the program yet.
        6. -
        7. Copy the crack file from the crack folder and paste it to the installation directory of Blue-Cloner Diamond.
        8. -
        9. Replace the original file if prompted.
        10. -
        11. Launch the program and enjoy it.
        12. -
        - -

        Note: You may need to disable your antivirus program or firewall temporarily during the installation process, as they may block the crack file or the program from running.

        - -

        Is Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen safe and legal?

        - -

The answer to this question depends on your perspective and situation. On one hand, Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen is a useful and convenient tool that can help you copy, back up, and convert your Blu-ray movies. It can save you money and time by allowing you to use it without paying for a license key. On the other hand, Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen is pirated software that violates the intellectual property rights of the original developers. It may also contain malicious code that can damage your PC or compromise your privacy. Moreover, it may not work properly or be compatible with some Blu-ray discs due to encryption updates or bugs.

        - -

        Therefore, we do not recommend using Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen for any purposes. It is better to buy a legitimate version of Blue-Cloner Diamond from its official website [here](https://www.blue-cloner.com/blue-cloner-diamond.html). This way, you can support the developers, get regular updates and technical support, and avoid any legal or security risks.

        -

        How to compare Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen with other Blu-ray copying software?

        - -

There are many Blu-ray copying programs available on the market, but not all of them are equal in terms of quality, performance, and features. If you want to compare Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen with other Blu-ray copying software, you need to consider some factors, such as:

        - -
          -
        • The compatibility with different Blu-ray discs and protections.
        • -
        • The copy quality and speed.
        • -
        • The copy modes and options.
        • -
        • The additional functions and tools.
        • -
        • The price and customer service.
        • -
        - -

To help you make a better decision, we have selected three popular Blu-ray copying programs and compared them with Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen. They are DVDFab Blu-ray Copy, Leawo Blu-ray Copy, and AnyMP4 Blu-ray Copy Platinum. Here is a brief comparison table:

        - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Software | Compatibility | Quality and Speed | Modes and Options | Functions and Tools | Price and Service
Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen | Supports most Blu-ray discs and protections, including AACS, BD+, MKB v26, bus encryption, BD-Live and UOPs. Supports all regions (A, B, C). | Offers 1:1 quality copy or compression copy with MPEG-2 or H.264. Supports UHD quality and HDR10 and Dolby Vision. Supports CUDA and DXVA 2 acceleration. Copy speed depends on the disc size and complexity. | Provides various copy modes, such as Full Disc, Main Movie, Split Disc, Customize, etc. Allows selecting audio tracks and subtitles. Supports 3D effects. | Includes Open SmartBurner, Open Blu-ray Ripper, and Open Cloner Express Center. Can burn data files and video files to Blu-ray discs or DVDs. Can convert Blu-ray movies to various formats. | $79.99 for one year license or $119.99 for lifetime license. Offers free trial version with limited functions. Provides online help, FAQs, tutorials, email support, and live chat support.
DVDFab Blu-ray Copy | Supports most Blu-ray discs and protections, including AACS, BD+, MKB v26, bus encryption, BD-Live and UOPs. Supports all regions (A, B, C). | Offers 1:1 quality copy or compression copy with MPEG-2 or H.265. Supports UHD quality and HDR10 and Dolby Vision. Supports GPU acceleration with NVIDIA CUDA or AMD APP technology. Copy speed depends on the disc size and complexity. | Provides various copy modes, such as Full Disc, Main Movie, Clone/Burn, Merge, Customize, etc. Allows selecting audio tracks and subtitles. Supports 3D effects. | Includes DVDFab Cinavia Removal HD module that can remove Cinavia watermarks from Blu-ray audio tracks. Can work with other DVDFab modules such as DVD Copy, DVD Ripper, Blu-ray Ripper, etc. | $64.9 for one year license or $109 for lifetime license. Offers free trial version with limited functions. Provides online help, FAQs, tutorials, email support, forum support, and phone support.
Leawo Blu-ray Copy | Supports most Blu-ray discs and protections, including AACS, BD+, MKB v26 -

        Conclusion

        - -

        In this article, we have reviewed Blue-Cloner Diamond 7.40 Build 814 (x86 x64) Crack Keygen, a comprehensive and powerful Blu-ray copying software that can help you backup, enjoy, and create your own Blu-ray discs or DVDs. We have also compared it with other popular Blu-ray copying software, such as DVDFab Blu-ray Copy, Leawo Blu-ray Copy, and AnyMP4 Blu-ray Copy Platinum. We have discussed the features, benefits, drawbacks, and prices of each software. We hope this article has helped you make an informed decision on which Blu-ray copying software to choose.

        - -

However, we also want to remind you that using cracked software is neither safe nor legal. It may expose your PC to viruses or malware, or cause errors or failures during the copying or burning process. It may also violate the intellectual property rights of the original developers and bring you legal trouble. Therefore, we suggest you buy a legitimate version of Blue-Cloner Diamond from its official website [here](https://www.blue-cloner.com/blue-cloner-diamond.html). This way, you can support the developers, get regular updates and technical support, and avoid any risks.

        - -

        Thank you for reading this article. If you have any questions or feedback, please feel free to leave a comment below.

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Digital Film Tools EZ Mask 3.0.6 Win X64 LINK.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Digital Film Tools EZ Mask 3.0.6 Win X64 LINK.md deleted file mode 100644 index ad549ed696763afc52b231d7585ce81b5cf4a963..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Digital Film Tools EZ Mask 3.0.6 Win X64 LINK.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Digital Film Tools EZ Mask 3.0.6 Win x64


        Download Ziphttps://urlgoal.com/2uCKJ4



- -EZ Mask is an easy-to-use interactive image masking tool that can extract almost any object from an image – even when you are working with fine hair detail, smoke, or reflections. Recommended resources: traditional Chinese-style twelve ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Windows 10 Theme For Wine.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Windows 10 Theme For Wine.md deleted file mode 100644 index 37668c00902196282e5ecdfa9a7adce7deb62d62..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Windows 10 Theme For Wine.md +++ /dev/null @@ -1,36 +0,0 @@ -

        Download Windows 10 Theme For Wine


        Download File ✸✸✸ https://urlgoal.com/2uCKTe



        - -Click Install Theme. 6. Now, select Windows 7 or Windows 8. - -If the theme is not working and your computer is running Windows 10, here is the thing to do - -In the Desktop Integration Tab, Under Theme, Click Install Theme. - -Then, select the file you downloaded and install it. - -Now, select Windows 7 or Windows 8. - -Now, go to Windows Menu -> Change Desktop Background and change the color of your computer background to any theme you want. - -After Installing the theme, there will be a menu Icon with a message "Themes" in the corner. - -Right click and select change desktop background and select the theme you just installed and give it a try. - -i tried the stuff but it was unsuccessful. even i tried reinstalling the theme and then the next step but my problem still remains. when i try to change the background color the theme doesn't change. i tried the normal method of theme settings but when i apply it there's no options in the theme options. how do i change the background color? please help! - -MacRumors attracts a broad audience - -of both consumers and professionals interested in - -the latest technologies and products. We also boast an active community focused on - -purchasing decisions and technical aspects of the iPhone, iPod, iPad, and Mac platforms.Ferrari GTC4Lusso - -The Ferrari GTC4Lusso is a mid-engined two-seat sports car produced by Ferrari and presented as a successor to the Ferrari GTC3 and Pininfarina-bodied Ferrari 360 Modena. It was unveiled in April 2013. Unlike the Pininfarina-bodied 360, the GTC4Lusso uses a carbon fibre monocoque to weight about 1000kg, giving it a reduction in weight of 250kg compared to the carbon-bodied Ferrari GTC3. Ferrari plans to build 5,000 units a year for the next decade. The GTC4Lusso is only available in the United States. - -Technical - -The GTC4Lusso is the first Ferrari to feature a carbon fibre body. The monocoque chassis weighs approximately and is mounted onto an aluminium frame. It features a carbon fibre roof with two two doors, two side mirrors and a windscreen. The GTC4Lusso features a 3.9 litre naturally aspirated V8 engine, a manual 6-speed sequential transmission, carbon ceramic 4fefd39f24
        -
        -
        -

        diff --git a/spaces/red1xe/codeGPT/app.py b/spaces/red1xe/codeGPT/app.py deleted file mode 100644 index 1865cdd802047fcb0520dc25a7b6720c27a2c29c..0000000000000000000000000000000000000000 --- a/spaces/red1xe/codeGPT/app.py +++ /dev/null @@ -1,39 +0,0 @@ -import streamlit as st -from langchain.embeddings import HuggingFaceEmbeddings -from langchain.vectorstores import FAISS -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain.memory import ConversationBufferMemory -from langchain.llms import HuggingFaceHub -from langchain.chains import RetrievalQA -from transformers import AutoModelForCausalLM, AutoTokenizer - -from pdfminer.high_level import extract_text -def get_pdf_text(files): - full_text = "" - for file in files: - text = extract_text(file) - text = text.replace("\n", " ") - full_text = text + full_text - return full_text - -st.title("Embedding Creation for Langchain") -st.header("File Upload") -files = st.file_uploader("Upload your files", accept_multiple_files=True, type="pdf") - -if files: - question = st.text_input("Ask a question") - if st.button("Search"): - with st.spinner("Fetching 3 most similar matches..."): - full_text = get_pdf_text(files) - text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150) - chunks = text_splitter.split_text(full_text) - embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2") - db = FAISS.from_texts(chunks, embeddings) - memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True) - chain = RetrievalQA.from_llm( - llm=AutoModelForCausalLM.from_pretrained("red1xe/Llama-2-7B-codeGPT"), - memory=memory, - retriever=db.as_retriever(search_kwargs={"k": 3}), - ) - answer = chain.answer(question) - st.write(answer) \ No newline at end of file diff --git a/spaces/rewoo/ReWOO-Demo/nodes/LLMNode.py b/spaces/rewoo/ReWOO-Demo/nodes/LLMNode.py deleted file mode 100644 index 48bfba6fa66f5bda3d43dd2030fc7996487de542..0000000000000000000000000000000000000000 --- a/spaces/rewoo/ReWOO-Demo/nodes/LLMNode.py +++ /dev/null @@ -1,74 +0,0 @@ -# Basic LLM node that calls for a Large Language Model for completion. 
-import os - -import openai - -from nodes.Node import Node -from nodes.NodeCofig import * -from utils.util import * -from alpaca.lora import AlpacaLora - -openai.api_key = os.environ["OPENAI_API_KEY"] - - -class LLMNode(Node): - def __init__(self, name="BaseLLMNode", model_name="text-davinci-003", stop=None, input_type=str, output_type=str): - super().__init__(name, input_type, output_type) - self.model_name = model_name - self.stop = stop - - # Initialize to load shards only once - if self.model_name in LLAMA_WEIGHTS: - self.al = AlpacaLora(lora_weights=self.model_name) - - def run(self, input, log=False): - assert isinstance(input, self.input_type) - response = self.call_llm(input, self.stop) - completion = response["output"] - if log: - return response - return completion - - def call_llm(self, prompt, stop): - if self.model_name in OPENAI_COMPLETION_MODELS: - response = openai.Completion.create( - model=self.model_name, - prompt=prompt, - temperature=OPENAI_CONFIG["temperature"], - max_tokens=OPENAI_CONFIG["max_tokens"], - top_p=OPENAI_CONFIG["top_p"], - frequency_penalty=OPENAI_CONFIG["frequency_penalty"], - presence_penalty=OPENAI_CONFIG["presence_penalty"], - stop=stop - ) - return {"input": prompt, - "output": response["choices"][0]["text"], - "prompt_tokens": response["usage"]["prompt_tokens"], - "completion_tokens": response["usage"]["completion_tokens"]} - elif self.model_name in OPENAI_CHAT_MODELS: - messages = [{"role": "user", "content": prompt}] - response = openai.ChatCompletion.create( - model=self.model_name, - messages=messages, - temperature=OPENAI_CONFIG["temperature"], - max_tokens=OPENAI_CONFIG["max_tokens"], - top_p=OPENAI_CONFIG["top_p"], - frequency_penalty=OPENAI_CONFIG["frequency_penalty"], - presence_penalty=OPENAI_CONFIG["presence_penalty"], - stop=stop - ) - return {"input": prompt, - "output": response["choices"][0]["message"]["content"], - "prompt_tokens": response["usage"]["prompt_tokens"], - "completion_tokens": response["usage"]["completion_tokens"]} - elif self.model_name in LLAMA_WEIGHTS: - instruction, input = prompt[0], prompt[1] - output, prompt = self.al.lora_generate(instruction, input) - return {"input": prompt, - "output": output, - "prompt_tokens": len(prompt)/4, - "completion_tokens": len(output)/4 - } - - else: - raise ValueError("Model not supported") diff --git a/spaces/ronvolutional/iframe-test/static/index.html b/spaces/ronvolutional/iframe-test/static/index.html deleted file mode 100644 index 43fe1544893bcc2e40b8313872df74a8e90d4f59..0000000000000000000000000000000000000000 --- a/spaces/ronvolutional/iframe-test/static/index.html +++ /dev/null @@ -1,1919 +0,0 @@ - - - - - - Fast API 🤗 Space served with Uvicorn - - - - -
        -

        Fast API 🤗 Space served with Uvicorn

        -
        -

        Image generation from Inference API

        -

        - Model: - osanseviero/BigGAN-deep-128 -

        - - - pelican generated from BigGAN AI model -
        -
        -

        Text generation from transformers library

        -

        - Model: - t5-small -

        -
        - - - -

        -
        -
        -
        -

        Dataset from datasets library

        -

        - Dataset: - emotion -

        -
        - - -
        -
        -
        -
        - - - diff --git a/spaces/rorallitri/biomedical-language-models/logs/Blame It on the Bellboy A Free Online Movie with an All-Star Cast.md b/spaces/rorallitri/biomedical-language-models/logs/Blame It on the Bellboy A Free Online Movie with an All-Star Cast.md deleted file mode 100644 index 1aff77218a12326ea5d8c7acef1214cef7edcfc5..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Blame It on the Bellboy A Free Online Movie with an All-Star Cast.md +++ /dev/null @@ -1,6 +0,0 @@ -

        kenwood programming software download kpg-135d


        Download Zip ✑ ✑ ✑ https://tinurll.com/2uznJ0



        -
        - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/rorallitri/biomedical-language-models/logs/Chowdhury And Hossain English Grammar Book Pdf Free Download.md b/spaces/rorallitri/biomedical-language-models/logs/Chowdhury And Hossain English Grammar Book Pdf Free Download.md deleted file mode 100644 index ad017886d5f160f38fe258f43687f09c19a7264f..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Chowdhury And Hossain English Grammar Book Pdf Free Download.md +++ /dev/null @@ -1,9 +0,0 @@ - -

Recently, Siddhartha has prepared his first English book, which deals with grammar. Chowdhury said that the first novel he penned in the English language was a historical narrative about the life of Gandhi.

        -

        chowdhury and hossain english grammar book pdf free download


        Download File === https://tinurll.com/2uzlgS



        -

        feb 01, 2016. it is a very easy method and we can change active voice. chowdhury and hossain english grammar book pdf free download.want to read a free book on chowdhury and hossain english grammar book pdf free download?we have a great collection of free books for you to read online. buy an advanced english grammar book for your own version of the pinyin edition of the phrasebook. english grammar book chowdhury and hossain download free.download chowdhury and hossain english grammar book chowdhury and hossain english grammar book pdf free download in pdf, epub. chowdhury and hossain english grammar book chowdhury and hossain english grammar book pdf free download.

        -

        free chowdhury and hossain english grammar book chowdhury and hossain english grammar book pdf free download. chowdhury and hossain english grammar book pdf free download.want to read a free book on chowdhury and hossain english grammar book pdf free download?we have a great collection of free books for you to read online.

        -

        i would like to try english as a second language. chowdhury and hossain english grammar book pdf free download.the original, online version of the book of chowdhury and hossain is available for free from amazon. it is a very easy method and we can change active voice. free chowdhury and hossain english grammar book chowdhury and hossain english grammar book pdf free download.want to read a free book on chowdhury and hossain english grammar book pdf free download?we have a great collection of free books for you to read online. buy an advanced english grammar book for your own version of the pinyin edition of the phrasebook. english grammar book chowdhury and hossain pdf download in pdf, epub. chowdhury and hossain english grammar book free download pdf.download chowdhury and hossain english grammar book chowdhury and hossain english grammar book pdf free download in pdf, epub. download chowdhury and hossain english grammar book pdf free download. the chowdhury and hossain english grammar book pdf free download. free chowdhury and hossain english grammar book pdf free download. .

        -

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Europaverlag Fachkunde Metall PDF Download Get the Latest Edition with Digital Extras and 3D Animations.md b/spaces/rorallitri/biomedical-language-models/logs/Europaverlag Fachkunde Metall PDF Download Get the Latest Edition with Digital Extras and 3D Animations.md deleted file mode 100644 index 8eb64fddce7687435f7c48973865f4f8975d1878..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Europaverlag Fachkunde Metall PDF Download Get the Latest Edition with Digital Extras and 3D Animations.md +++ /dev/null @@ -1,6 +0,0 @@ -

        europaverlagfachkundemetallpdfdownload


        Download Ziphttps://tinurll.com/2uzmum



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/rossellison/kpop-face-generator/stylegan3-fun/torch_utils/ops/fma.py b/spaces/rossellison/kpop-face-generator/stylegan3-fun/torch_utils/ops/fma.py deleted file mode 100644 index 51a45dfa0829987e8ee5214663e068cb3af2a8b9..0000000000000000000000000000000000000000 --- a/spaces/rossellison/kpop-face-generator/stylegan3-fun/torch_utils/ops/fma.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Fused multiply-add, with slightly faster gradients than `torch.addcmul()`.""" - -import torch - -#---------------------------------------------------------------------------- - -def fma(a, b, c): # => a * b + c - return _FusedMultiplyAdd.apply(a, b, c) - -#---------------------------------------------------------------------------- - -class _FusedMultiplyAdd(torch.autograd.Function): # a * b + c - @staticmethod - def forward(ctx, a, b, c): # pylint: disable=arguments-differ - out = torch.addcmul(c, a, b) - ctx.save_for_backward(a, b) - ctx.c_shape = c.shape - return out - - @staticmethod - def backward(ctx, dout): # pylint: disable=arguments-differ - a, b = ctx.saved_tensors - c_shape = ctx.c_shape - da = None - db = None - dc = None - - if ctx.needs_input_grad[0]: - da = _unbroadcast(dout * b, a.shape) - - if ctx.needs_input_grad[1]: - db = _unbroadcast(dout * a, b.shape) - - if ctx.needs_input_grad[2]: - dc = _unbroadcast(dout, c_shape) - - return da, db, dc - -#---------------------------------------------------------------------------- - -def _unbroadcast(x, shape): - extra_dims = x.ndim - len(shape) - assert extra_dims >= 0 - dim = [i for i in range(x.ndim) if x.shape[i] > 1 and (i < extra_dims or shape[i - extra_dims] == 1)] - if len(dim): - x = x.sum(dim=dim, keepdim=True) - if extra_dims: - x = x.reshape(-1, *x.shape[extra_dims+1:]) - assert x.shape == shape - return x - -#---------------------------------------------------------------------------- diff --git a/spaces/rstallman/Mayfair-Partner-Music/audiocraft/data/audio_utils.py b/spaces/rstallman/Mayfair-Partner-Music/audiocraft/data/audio_utils.py deleted file mode 100644 index 76d4bc2a33ce722d879db2af33cd1336bd6b1fb3..0000000000000000000000000000000000000000 --- a/spaces/rstallman/Mayfair-Partner-Music/audiocraft/data/audio_utils.py +++ /dev/null @@ -1,174 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import sys -import typing as tp - -import julius -import torch -import torchaudio - - -def convert_audio_channels(wav: torch.Tensor, channels: int = 2) -> torch.Tensor: - """Convert audio to the given number of channels. - - Args: - wav (torch.Tensor): Audio wave of shape [B, C, T]. - channels (int): Expected number of channels as output. - Returns: - torch.Tensor: Downmixed or unchanged audio wave [B, C, T]. - """ - *shape, src_channels, length = wav.shape - if src_channels == channels: - pass - elif channels == 1: - # Case 1: - # The caller asked 1-channel audio, and the stream has multiple - # channels, downmix all channels. 
- wav = wav.mean(dim=-2, keepdim=True) - elif src_channels == 1: - # Case 2: - # The caller asked for multiple channels, but the input file has - # a single channel, replicate the audio over all channels. - wav = wav.expand(*shape, channels, length) - elif src_channels >= channels: - # Case 3: - # The caller asked for multiple channels, and the input file has - # more channels than requested. In that case return the first channels. - wav = wav[..., :channels, :] - else: - # Case 4: What is a reasonable choice here? - raise ValueError('The audio file has less channels than requested but is not mono.') - return wav - - -def convert_audio(wav: torch.Tensor, from_rate: float, - to_rate: float, to_channels: int) -> torch.Tensor: - """Convert audio to new sample rate and number of audio channels. - """ - wav = julius.resample_frac(wav, int(from_rate), int(to_rate)) - wav = convert_audio_channels(wav, to_channels) - return wav - - -def normalize_loudness(wav: torch.Tensor, sample_rate: int, loudness_headroom_db: float = 14, - loudness_compressor: bool = False, energy_floor: float = 2e-3): - """Normalize an input signal to a user loudness in dB LKFS. - Audio loudness is defined according to the ITU-R BS.1770-4 recommendation. - - Args: - wav (torch.Tensor): Input multichannel audio data. - sample_rate (int): Sample rate. - loudness_headroom_db (float): Target loudness of the output in dB LUFS. - loudness_compressor (bool): Uses tanh for soft clipping. - energy_floor (float): anything below that RMS level will not be rescaled. - Returns: - output (torch.Tensor): Loudness normalized output data. - """ - energy = wav.pow(2).mean().sqrt().item() - if energy < energy_floor: - return wav - transform = torchaudio.transforms.Loudness(sample_rate) - input_loudness_db = transform(wav).item() - # calculate the gain needed to scale to the desired loudness level - delta_loudness = -loudness_headroom_db - input_loudness_db - gain = 10.0 ** (delta_loudness / 20.0) - output = gain * wav - if loudness_compressor: - output = torch.tanh(output) - assert output.isfinite().all(), (input_loudness_db, wav.pow(2).mean().sqrt()) - return output - - -def _clip_wav(wav: torch.Tensor, log_clipping: bool = False, stem_name: tp.Optional[str] = None) -> None: - """Utility function to clip the audio with logging if specified.""" - max_scale = wav.abs().max() - if log_clipping and max_scale > 1: - clamp_prob = (wav.abs() > 1).float().mean().item() - print(f"CLIPPING {stem_name or ''} happening with proba (a bit of clipping is okay):", - clamp_prob, "maximum scale: ", max_scale.item(), file=sys.stderr) - wav.clamp_(-1, 1) - - -def normalize_audio(wav: torch.Tensor, normalize: bool = True, - strategy: str = 'peak', peak_clip_headroom_db: float = 1, - rms_headroom_db: float = 18, loudness_headroom_db: float = 14, - loudness_compressor: bool = False, log_clipping: bool = False, - sample_rate: tp.Optional[int] = None, - stem_name: tp.Optional[str] = None) -> torch.Tensor: - """Normalize the audio according to the prescribed strategy (see after). - - Args: - wav (torch.Tensor): Audio data. - normalize (bool): if `True` (default), normalizes according to the prescribed - strategy (see after). If `False`, the strategy is only used in case clipping - would happen. - strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak', - i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square - with extra headroom to avoid clipping. 'clip' just clips. 
- peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy. - rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger - than the `peak_clip` one to avoid further clipping. - loudness_headroom_db (float): Target loudness for loudness normalization. - loudness_compressor (bool): If True, uses tanh based soft clipping. - log_clipping (bool): If True, basic logging on stderr when clipping still - occurs despite strategy (only for 'rms'). - sample_rate (int): Sample rate for the audio data (required for loudness). - stem_name (Optional[str]): Stem name for clipping logging. - Returns: - torch.Tensor: Normalized audio. - """ - scale_peak = 10 ** (-peak_clip_headroom_db / 20) - scale_rms = 10 ** (-rms_headroom_db / 20) - if strategy == 'peak': - rescaling = (scale_peak / wav.abs().max()) - if normalize or rescaling < 1: - wav = wav * rescaling - elif strategy == 'clip': - wav = wav.clamp(-scale_peak, scale_peak) - elif strategy == 'rms': - mono = wav.mean(dim=0) - rescaling = scale_rms / mono.pow(2).mean().sqrt() - if normalize or rescaling < 1: - wav = wav * rescaling - _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name) - elif strategy == 'loudness': - assert sample_rate is not None, "Loudness normalization requires sample rate." - wav = normalize_loudness(wav, sample_rate, loudness_headroom_db, loudness_compressor) - _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name) - else: - assert wav.abs().max() < 1 - assert strategy == '' or strategy == 'none', f"Unexpected strategy: '{strategy}'" - return wav - - -def f32_pcm(wav: torch.Tensor) -> torch.Tensor: - """Convert audio to float 32 bits PCM format. - """ - if wav.dtype.is_floating_point: - return wav - else: - assert wav.dtype == torch.int16 - return wav.float() / 2**15 - - -def i16_pcm(wav: torch.Tensor) -> torch.Tensor: - """Convert audio to int 16 bits PCM format. - - ..Warning:: There exist many formula for doing this convertion. None are perfect - due to the asymetry of the int16 range. One either have possible clipping, DC offset, - or inconsistancies with f32_pcm. If the given wav doesn't have enough headroom, - it is possible that `i16_pcm(f32_pcm)) != Identity`. 
- """ - if wav.dtype.is_floating_point: - assert wav.abs().max() <= 1 - candidate = (wav * 2 ** 15).round() - if candidate.max() >= 2 ** 15: # clipping would occur - candidate = (wav * (2 ** 15 - 1)).round() - return candidate.short() - else: - assert wav.dtype == torch.int16 - return wav diff --git a/spaces/runa91/barc_gradio/src/stacked_hourglass/datasets/utils_stanext.py b/spaces/runa91/barc_gradio/src/stacked_hourglass/datasets/utils_stanext.py deleted file mode 100644 index 83da8452f74ff8fb0ca95e2d8a42ba96972f684b..0000000000000000000000000000000000000000 --- a/spaces/runa91/barc_gradio/src/stacked_hourglass/datasets/utils_stanext.py +++ /dev/null @@ -1,114 +0,0 @@ - -import os -from matplotlib import pyplot as plt -import glob -import json -import numpy as np -from scipy.io import loadmat -from csv import DictReader -from collections import OrderedDict -from pycocotools.mask import decode as decode_RLE - -import sys -sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..')) -from configs.dataset_path_configs import IMG_V12_DIR, JSON_V12_DIR, STAN_V12_TRAIN_LIST_DIR, STAN_V12_VAL_LIST_DIR, STAN_V12_TEST_LIST_DIR - - -def get_img_dir(V12): - if V12: - return IMG_V12_DIR - else: - return IMG_DIR - -def get_seg_from_entry(entry): - """Given a .json entry, returns the binary mask as a numpy array""" - rle = { - "size": [entry['img_height'], entry['img_width']], - "counts": entry['seg']} - decoded = decode_RLE(rle) - return decoded - -def full_animal_visible(seg_data): - if seg_data[0, :].sum() == 0 and seg_data[seg_data.shape[0]-1, :].sum() == 0 and seg_data[:, 0].sum() == 0 and seg_data[:, seg_data.shape[1]-1].sum() == 0: - return True - else: - return False - -def load_train_and_test_lists(train_list_dir=None , test_list_dir=None): - """ returns sets containing names such as 'n02085620-Chihuahua/n02085620_5927.jpg' """ - # train data - train_list_mat = loadmat(train_list_dir) - train_list = [] - for ind in range(0, train_list_mat['file_list'].shape[0]): - name = train_list_mat['file_list'][ind, 0][0] - train_list.append(name) - # test data - test_list_mat = loadmat(test_list_dir) - test_list = [] - for ind in range(0, test_list_mat['file_list'].shape[0]): - name = test_list_mat['file_list'][ind, 0][0] - test_list.append(name) - return train_list, test_list - - - -def _filter_dict(t_list, j_dict, n_kp_min=4): - """ should only be used by load_stanext_json_as_dict() """ - out_dict = {} - for sample in t_list: - if sample in j_dict.keys(): - n_kp = np.asarray(j_dict[sample]['joints'])[:, 2].sum() - if n_kp >= n_kp_min: - out_dict[sample] = j_dict[sample] - return out_dict - -def load_stanext_json_as_dict(split_train_test=True, V12=True): - # load json into memory - if V12: - with open(JSON_V12_DIR) as infile: - json_data = json.load(infile) - # with open(JSON_V12_DIR) as infile: json_data = json.load(infile, object_pairs_hook=OrderedDict) - else: - with open(JSON_DIR) as infile: - json_data = json.load(infile) - # convert json data to a dictionary of img_path : all_data, for easy lookup - json_dict = {i['img_path']: i for i in json_data} - if split_train_test: - if V12: - train_list_numbers = np.load(STAN_V12_TRAIN_LIST_DIR) - val_list_numbers = np.load(STAN_V12_VAL_LIST_DIR) - test_list_numbers = np.load(STAN_V12_TEST_LIST_DIR) - train_list = [json_data[i]['img_path'] for i in train_list_numbers] - val_list = [json_data[i]['img_path'] for i in val_list_numbers] - test_list = [json_data[i]['img_path'] for i in test_list_numbers] - train_dict = _filter_dict(train_list, 
json_dict, n_kp_min=4) - val_dict = _filter_dict(val_list, json_dict, n_kp_min=4) - test_dict = _filter_dict(test_list, json_dict, n_kp_min=4) - return train_dict, test_dict, val_dict - else: - train_list, test_list = load_train_and_test_lists(train_list_dir=STAN_ORIG_TRAIN_LIST_DIR , test_list_dir=STAN_ORIG_TEST_LIST_DIR) - train_dict = _filter_dict(train_list, json_dict) - test_dict = _filter_dict(test_list, json_dict) - return train_dict, test_dict, None - else: - return json_dict - -def get_dog(json_dict, name, img_dir=None): # (json_dict, name, img_dir=IMG_DIR) - """ takes the name of a dog, and loads in all the relevant information as a dictionary: - dict_keys(['img_path', 'img_width', 'img_height', 'joints', 'img_bbox', - 'is_multiple_dogs', 'seg', 'img_data', 'seg_data']) - img_bbox: [x0, y0, width, height] """ - data = json_dict[name] - # load img - img_data = plt.imread(os.path.join(img_dir, data['img_path'])) - # load seg - seg_data = get_seg_from_entry(data) - # add to output - data['img_data'] = img_data # 0 to 255 - data['seg_data'] = seg_data # 0: bg, 1: fg - return data - - - - - diff --git a/spaces/samayg/StriimTheme/README.md b/spaces/samayg/StriimTheme/README.md deleted file mode 100644 index efa7d7304f265bf44ff03542a2c0701292fc9def..0000000000000000000000000000000000000000 --- a/spaces/samayg/StriimTheme/README.md +++ /dev/null @@ -1,17 +0,0 @@ - ---- -tags: [gradio-theme] -title: StriimTheme -colorFrom: orange -colorTo: purple -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- -# StriimTheme -## Description -Add a description of this theme here! -## Contributions -Thanks to [@samayg](https://huggingface.co/samayg) for adding this gradio theme! diff --git a/spaces/sanchit-gandhi/whisper-jax-diarization/app.py b/spaces/sanchit-gandhi/whisper-jax-diarization/app.py deleted file mode 100644 index b26221b31f2d944a7da059ad14a5576bdd022f5c..0000000000000000000000000000000000000000 --- a/spaces/sanchit-gandhi/whisper-jax-diarization/app.py +++ /dev/null @@ -1,328 +0,0 @@ -import os -import tempfile -import time - -import gradio as gr -import numpy as np -import torch -import yt_dlp as youtube_dl -from gradio_client import Client -from pyannote.audio import Pipeline -from transformers.pipelines.audio_utils import ffmpeg_read - - -YT_LENGTH_LIMIT_S = 36000 # limit to 1 hour YouTube files -SAMPLING_RATE = 16000 - -API_URL = "https://sanchit-gandhi-whisper-jax.hf.space/" -HF_TOKEN = os.environ.get("HF_TOKEN") - -# set up the Gradio client -client = Client(API_URL) - -# set up the diarization pipeline -diarization_pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization", use_auth_token=HF_TOKEN) - - -def format_string(timestamp): - """ - Reformat a timestamp string from (HH:)MM:SS to float seconds. Note that the hour column - is optional, and is appended within the function if not input. - - Args: - timestamp (str): - Timestamp in string format, either MM:SS or HH:MM:SS. - Returns: - seconds (float): - Total seconds corresponding to the input timestamp. 
- """ - split_time = timestamp.split(":") - split_time = [float(sub_time) for sub_time in split_time] - - if len(split_time) == 2: - split_time.insert(0, 0) - - seconds = split_time[0] * 3600 + split_time[1] * 60 + split_time[2] - return seconds - - -# Adapted from https://github.com/openai/whisper/blob/c09a7ae299c4c34c5839a76380ae407e7d785914/whisper/utils.py#L50 -def format_timestamp(seconds: float, always_include_hours: bool = False, decimal_marker: str = "."): - """ - Reformat a timestamp from a float of seconds to a string in format (HH:)MM:SS. Note that the hour - column is optional, and is appended in the function if the number of hours > 0. - - Args: - seconds (float): - Total seconds corresponding to the input timestamp. - Returns: - timestamp (str): - Timestamp in string format, either MM:SS or HH:MM:SS. - """ - if seconds is not None: - milliseconds = round(seconds * 1000.0) - - hours = milliseconds // 3_600_000 - milliseconds -= hours * 3_600_000 - - minutes = milliseconds // 60_000 - milliseconds -= minutes * 60_000 - - seconds = milliseconds // 1_000 - milliseconds -= seconds * 1_000 - - hours_marker = f"{hours:02d}:" if always_include_hours or hours > 0 else "" - return f"{hours_marker}{minutes:02d}:{seconds:02d}{decimal_marker}{milliseconds:03d}" - else: - # we have a malformed timestamp so just return it as is - return seconds - - -def format_as_transcription(raw_segments): - return "\n\n".join( - [ - f"{chunk['speaker']} [{format_timestamp(chunk['timestamp'][0])} -> {format_timestamp(chunk['timestamp'][1])}] {chunk['text']}" - for chunk in raw_segments - ] - ) - - -def _return_yt_html_embed(yt_url): - video_id = yt_url.split("?v=")[-1] - HTML_str = ( - f'
        ' - "
        " - ) - return HTML_str - - -def download_yt_audio(yt_url, filename): - info_loader = youtube_dl.YoutubeDL() - try: - info = info_loader.extract_info(yt_url, download=False) - except youtube_dl.utils.DownloadError as err: - raise gr.Error(str(err)) - - file_length = info["duration_string"] - file_length_s = format_string(file_length) - - if file_length_s > YT_LENGTH_LIMIT_S: - yt_length_limit_hms = time.strftime("%HH:%MM:%SS", time.gmtime(YT_LENGTH_LIMIT_S)) - file_length_hms = time.strftime("%HH:%MM:%SS", time.gmtime(file_length_s)) - raise gr.Error( - f"To encourage fair usage of the demo, the maximum YouTube length is {yt_length_limit_hms}, " - f"got {file_length_hms} YouTube video." - ) - - ydl_opts = {"outtmpl": filename, "format": "worstvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best"} - with youtube_dl.YoutubeDL(ydl_opts) as ydl: - try: - ydl.download([yt_url]) - except youtube_dl.utils.ExtractorError as err: - raise gr.Error(str(err)) - - -def align(transcription, segments, group_by_speaker=True): - transcription_split = transcription.split("\n") - - # re-format transcription from string to List[Dict] - transcript = [] - for chunk in transcription_split: - start_end, transcription = chunk[1:].split("] ") - start, end = start_end.split("->") - - transcript.append({"timestamp": (format_string(start), format_string(end)), "text": transcription}) - - # diarizer output may contain consecutive segments from the same speaker (e.g. {(0 -> 1, speaker_1), (1 -> 1.5, speaker_1), ...}) - # we combine these segments to give overall timestamps for each speaker's turn (e.g. {(0 -> 1.5, speaker_1), ...}) - new_segments = [] - prev_segment = cur_segment = segments[0] - - for i in range(1, len(segments)): - cur_segment = segments[i] - - # check if we have changed speaker ("label") - if cur_segment["label"] != prev_segment["label"] and i < len(segments): - # add the start/end times for the super-segment to the new list - new_segments.append( - { - "segment": {"start": prev_segment["segment"]["start"], "end": cur_segment["segment"]["start"]}, - "speaker": prev_segment["label"], - } - ) - prev_segment = segments[i] - - # add the last segment(s) if there was no speaker change - new_segments.append( - { - "segment": {"start": prev_segment["segment"]["start"], "end": cur_segment["segment"]["end"]}, - "speaker": prev_segment["label"], - } - ) - - # get the end timestamps for each chunk from the ASR output - end_timestamps = np.array([chunk["timestamp"][-1] for chunk in transcript]) - segmented_preds = [] - - # align the diarizer timestamps and the ASR timestamps - for segment in new_segments: - # get the diarizer end timestamp - end_time = segment["segment"]["end"] - # find the ASR end timestamp that is closest to the diarizer's end timestamp and cut the transcript to here - upto_idx = np.argmin(np.abs(end_timestamps - end_time)) - - if group_by_speaker: - segmented_preds.append( - { - "speaker": segment["speaker"], - "text": "".join([chunk["text"] for chunk in transcript[: upto_idx + 1]]), - "timestamp": (transcript[0]["timestamp"][0], transcript[upto_idx]["timestamp"][1]), - } - ) - else: - for i in range(upto_idx + 1): - segmented_preds.append({"speaker": segment["speaker"], **transcript[i]}) - - # crop the transcripts and timestamp lists according to the latest timestamp (for faster argmin) - transcript = transcript[upto_idx + 1 :] - end_timestamps = end_timestamps[upto_idx + 1 :] - - # final post-processing - transcription = format_as_transcription(segmented_preds) - return transcription 
- - -def transcribe(audio_path, task="transcribe", group_by_speaker=True, progress=gr.Progress()): - # run Whisper JAX asynchronously using Gradio client (endpoint) - job = client.submit( - audio_path, - task, - True, - api_name="/predict_1", - ) - - # run diarization while we wait for Whisper JAX - progress(0, desc="Diarizing...") - diarization = diarization_pipeline(audio_path) - segments = diarization.for_json()["content"] - - # only fetch the transcription result after performing diarization - progress(0.33, desc="Transcribing...") - transcription, _ = job.result() - - # align the ASR transcriptions and diarization timestamps - progress(0.66, desc="Aligning...") - transcription = align(transcription, segments, group_by_speaker=group_by_speaker) - - return transcription - - -def transcribe_yt(yt_url, task="transcribe", group_by_speaker=True, progress=gr.Progress()): - # run Whisper JAX asynchronously using Gradio client (endpoint) - job = client.submit( - yt_url, - task, - True, - api_name="/predict_2", - ) - - html_embed_str = _return_yt_html_embed(yt_url) - progress(0, desc="Downloading YouTube video...") - with tempfile.TemporaryDirectory() as tmpdirname: - filepath = os.path.join(tmpdirname, "video.mp4") - download_yt_audio(yt_url, filepath) - with open(filepath, "rb") as f: - inputs = f.read() - - inputs = ffmpeg_read(inputs, SAMPLING_RATE) - inputs = torch.from_numpy(inputs).float() - inputs = inputs.unsqueeze(0) - - # run diarization while we wait for Whisper JAX - progress(0.25, desc="Diarizing...") - diarization = diarization_pipeline( - {"waveform": inputs, "sample_rate": SAMPLING_RATE}, - ) - segments = diarization.for_json()["content"] - - # only fetch the transcription result after performing diarization - progress(0.50, desc="Transcribing...") - _, transcription, _ = job.result() - - # align the ASR transcriptions and diarization timestamps - progress(0.75, desc="Aligning...") - transcription = align(transcription, segments, group_by_speaker=group_by_speaker) - - return html_embed_str, transcription - - -title = "Whisper JAX + Speaker Diarization ⚡️" - -description = """Combine the speed of Whisper JAX with pyannote speaker diarization to transcribe meetings in super fast time. Demo uses Whisper JAX as an [endpoint](https://twitter.com/sanchitgandhi99/status/1656665496463495168) and pyannote speaker diarization running locally. The Whisper JAX endpoint is run asynchronously, meaning speaker diarization is run in parallel to the speech transcription. The diarized timestamps are aligned with the Whisper output to give the final speaker-segmented transcription. - -To duplicate the demo, first accept the pyannote terms of use for the [speaker diarization](https://huggingface.co/pyannote/speaker-diarization) and [segmentation](https://huggingface.co/pyannote/segmentation) models. Then, click [here](https://huggingface.co/spaces/sanchit-gandhi/whisper-jax-diarization?duplicate=true) to duplicate the demo, and enter your Hugging Face access token as a Space secret when prompted. -""" - -article = "Whisper large-v2 model by OpenAI. Speaker diarization model by pyannote. Whisper JAX backend running JAX on a TPU v4-8 through the generous support of the [TRC](https://sites.research.google/trc/about/) programme. Whisper JAX [code](https://github.com/sanchit-gandhi/whisper-jax) and Gradio demo by 🤗 Hugging Face." 
- -microphone = gr.Interface( - fn=transcribe, - inputs=[ - gr.inputs.Audio(source="microphone", optional=True, type="filepath"), - gr.inputs.Radio(["transcribe", "translate"], label="Task", default="transcribe"), - gr.inputs.Checkbox(default=True, label="Group by speaker"), - ], - outputs=[ - gr.outputs.Textbox(label="Transcription").style(show_copy_button=True), - ], - allow_flagging="never", - title=title, - description=description, - article=article, -) - -audio_file = gr.Interface( - fn=transcribe, - inputs=[ - gr.inputs.Audio(source="upload", optional=True, label="Audio file", type="filepath"), - gr.inputs.Radio(["transcribe", "translate"], label="Task", default="transcribe"), - gr.inputs.Checkbox(default=True, label="Group by speaker"), - ], - outputs=[ - gr.outputs.Textbox(label="Transcription").style(show_copy_button=True), - ], - allow_flagging="never", - title=title, - description=description, - article=article, -) - -youtube = gr.Interface( - fn=transcribe_yt, - inputs=[ - gr.inputs.Textbox(lines=1, placeholder="Paste the URL to a YouTube video here", label="YouTube URL"), - gr.inputs.Radio(["transcribe", "translate"], label="Task", default="transcribe"), - gr.inputs.Checkbox(default=True, label="Group by speaker"), - ], - outputs=[ - gr.outputs.HTML(label="Video"), - gr.outputs.Textbox(label="Transcription").style(show_copy_button=True), - ], - allow_flagging="never", - title=title, - examples=[ - ["https://www.youtube.com/watch?v=m8u-18Q0s7I", "transcribe", True], - ["https://www.youtube.com/watch?v=LCOe3a9EHJs", "transcribe", True], - ], - cache_examples=False, - description=description, - article=article, -) - -demo = gr.Blocks() - -with demo: - gr.TabbedInterface([microphone, audio_file, youtube], ["Microphone", "Audio File", "YouTube"]) - -demo.queue(max_size=10) -demo.launch() diff --git a/spaces/sara4dev/rag-iblog-qa/faiss_index/README.md b/spaces/sara4dev/rag-iblog-qa/faiss_index/README.md deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/scedlatioru/img-to-music/example/Americanpiehindidubbedmobilemoviesallpart.md b/spaces/scedlatioru/img-to-music/example/Americanpiehindidubbedmobilemoviesallpart.md deleted file mode 100644 index a5a39e1aaa042fabcefb0b4243348b5010350460..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Americanpiehindidubbedmobilemoviesallpart.md +++ /dev/null @@ -1,38 +0,0 @@ -

        americanpiehindidubbedmobilemoviesallpart


        Download ››››› https://gohhs.com/2uEznR



        - -Movie Spoilers With ! - -Cut and Run epub download - -Abigail Roux (née Reese) is a small town nobody with a big, life-threatening secret. You see, she is dying of cancer and her doctor has told her there's nothing he can do. She's fighting for her life with chemotherapy and all the while, she suspects her husband, Ray, has been having an affair and is keeping the fact of her cancer from her. An absolutely riveting tale of family, friendship and suspense. This book has it all, sexy, dangerous characters, romance and mystery. If you like contemporary thrillers, you're going to love this one. - -Cut and Run - -By - -Where does - -ad? - -Pages: 70 - -Publisher: Entangled Publishing July 2013 - -ISBN: - -Format: PDF, ePub, Mobi - -The Ghost Writer - -Sisters in Law (The Ghost Series Book 4) - -The Ghost Series - -Fatal Reunion (The Ghost Series Book 3) - -Love or Other Crimes (The Ghost Series Book 2) - -If you buy books from our links, a small portion goes to help keep our website running. Find out more. Cut And Run - A Life's Work by Henry Morton. This collection includes two of his most popular and enduring pieces of fiction - and covers a considerable range of themes and interests. Henry Morton is a professional linguist who has lived all over the world and has a knack for vividly portraying the cultures he has lived in. He is also a writer of original, award-winning short fiction. His keen insight into language and culture has made his books highly regarded and often translated into numerous languages. And his non-fiction works focus on history and current events, as well as his vast knowledge of the finer points of language and grammar. * Winner of the prestigious Anisfield-Wolf Book Award * Heart of Gold - A Life of Pat Conroy "A heartbreakingly accurate depiction of the intimate life of a rich Southern family at the height of its power" - Geoffrey C. Ward, The Atlantic "Henry Morton has always seemed to be a man of considerable wisdom. This book and his earlier one, A Literary Life, 4fefd39f24
        -
        -
        -

        diff --git a/spaces/scedlatioru/img-to-music/example/Grandeureditordungeondefendersdownload Freepc.md b/spaces/scedlatioru/img-to-music/example/Grandeureditordungeondefendersdownload Freepc.md deleted file mode 100644 index 5b72e98475ae5c448645d32d4f986dce68ff5c81..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Grandeureditordungeondefendersdownload Freepc.md +++ /dev/null @@ -1,9 +0,0 @@ -
        -

        Download And Install WhatsApp Plus Apk Latest Version For PC,Laptop And Android
        Download DefinitionDictionary 0.6.1.1 Incl. Keygen
        grandeureditordungeondefendersdownloadpc
        prime wars spawners working 2017 for xbox 360
        pretty-ricky-bluestars-album-download-zip

        -

        grandeureditordungeondefendersdownloadpc


        Download Zip --->>> https://gohhs.com/2uEAjP



        -

        download Star Wars Episode 7 Movie Full HD 1080p
        Life Mein Twist Hai movie hd video download
        Apache Air Assault Game
        Download Keygen Xforce For PowerMill 2016
        mohabbat alir ekdin pdf 12golkes
        Humne Jeena Seekh Liya Hindi Movie Full Hd 1080p
        D16 Group Audio Software LuSH-101 v1.1.2 Incl. Keygen - R2R [dee setup free
        Las Perlas Uribistas Pdf
        grandeureditordungeondefendersdownloadpc
        pretty-ricky-bluestars-album-download-zip
        MiniTool Partition Wizard Crack 11.4 Professional License Key 2019

        -

        saunlesl 7b17bfd26b https://coub.com/stories/3245970-grandeureditordungeondefendersdownloadpc-better tawngeo says: at 4:41 am. grandeureditordungeondefendersdownloadpc pretty-ricky-bluestars-album-download-zip MiniTool Partition Wizard Crack 11.4 Professional.

        -

        apache air assault game download
        Live Free Or Die + PC Game 2018 For Mac
        apache air assault game download
        white girls pussy peeing in a tub 7 - Porn Pirates
        apache air assault game download
        download keygen xforce for powermill 2016
        powermill download
        powermill download pc game
        apache air assault game download
        apache air assault game download
        download keygen xforce for powermill 2016
        mohabbat alir ekdin pdf 12golkes
        Humne Jeena Seekh Liya Hindi Movie Full Hd 1080p
        tawngeo’s release date 2018
        grandeureditordungeondefendersdownloadpc
        [url=https://mbt-ss-garments-online.com/products-for-men/nike-tn-mens-aubergine-trainers-tall-shoes-shoes-oz-d5]nike tn men's aubergine trainers shoes[/url]

        -

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Multipage Tiff Editor 24 Keygen 22.md b/spaces/scedlatioru/img-to-music/example/Multipage Tiff Editor 24 Keygen 22.md deleted file mode 100644 index 24eae389c436bef6825d07631bad5e5e63491d7e..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Multipage Tiff Editor 24 Keygen 22.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Multipage Tiff Editor 24 Keygen 22


        Download File ✏ ✏ ✏ https://gohhs.com/2uEAxP



- -The interface has been improved and the hotkeys are more convenient. v.2.9.19.851 Date: January 24, 2020 Fixed some minor issues with decoding JPEG-compressed TIFF files. 2021 - Support for multi-page TIFFs, animated GIFs, and animated ICOs; IPTC and EXIF metadata support; EXIF auto-rotation; IPTC editing; resize, rotate, drag and drop, hotkeys, saving to other formats, etc. ========================== Interface: improved, simplified hotkeys. All the keys needed for editing are now available on the keyboard rather than only on the toolbar. The Image, Format, Style, Effects, and Transformations tabs have become more convenient, and additional shortcuts appeared on the main Image tab. 8a78ff9644
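The changelog above mentions multi-page TIFF support. As a point of reference, here is a minimal sketch of walking the pages of a multi-page TIFF in Python with Pillow; the library choice and the file name are assumptions for illustration, not part of the editor described above:

```python
from PIL import Image, ImageSequence

# Iterate every page of a multi-page TIFF and export each one as PNG.
# "scan.tif" is a hypothetical input file.
with Image.open("scan.tif") as tif:
    for index, page in enumerate(ImageSequence.Iterator(tif)):
        page.convert("RGB").save(f"page_{index:03d}.png")
```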
        -
        -
        -

        diff --git a/spaces/scedlatioru/img-to-music/example/Rt Core 64 Driver Rmclock Download.md b/spaces/scedlatioru/img-to-music/example/Rt Core 64 Driver Rmclock Download.md deleted file mode 100644 index e81446b966619d6c77f13f135a203eef1b2b8c4c..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Rt Core 64 Driver Rmclock Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

        rt core 64 driver rmclock download


        DOWNLOADhttps://gohhs.com/2uEAzl



- -On Windows 7 32-bit it did work. ... "Cannot install or load RTCore 64 driver. ..." RMClock does not work on Vista 64 or Windows 7 64 because the driver ... Although I would go to the link (http://downloads.guru3d.com/EVGA-Precision- ... 4d29de3e1b
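For context, a minimal diagnostic sketch in Python, under the assumption that the failing driver is registered as a kernel service named "RTCore64" (the post does not confirm the exact service name):

```python
import platform
import subprocess

# The load failure described above only affects 64-bit Windows,
# so first confirm the OS architecture.
print(platform.machine())  # prints 'AMD64' on 64-bit Windows

# Then ask the Service Control Manager whether the RTCore64 kernel
# service is registered at all; "RTCore64" is an assumed name here.
result = subprocess.run(["sc", "query", "RTCore64"],
                        capture_output=True, text=True)
print(result.stdout or result.stderr)
```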
        -
        -
        -

        diff --git a/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/model/channel_attn.py b/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/model/channel_attn.py deleted file mode 100644 index a2096c1c4b4745a3ea2060bb25af3b19ff9cf3ec..0000000000000000000000000000000000000000 --- a/spaces/sczhou/ProPainter/web-demos/hugging_face/tracker/model/channel_attn.py +++ /dev/null @@ -1,39 +0,0 @@ -import math -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class CAResBlock(nn.Module): - def __init__(self, in_dim: int, out_dim: int, residual: bool = True): - super().__init__() - self.residual = residual - self.conv1 = nn.Conv2d(in_dim, out_dim, kernel_size=3, padding=1) - self.conv2 = nn.Conv2d(out_dim, out_dim, kernel_size=3, padding=1) - - t = int((abs(math.log2(out_dim)) + 1) // 2) - k = t if t % 2 else t + 1 - self.pool = nn.AdaptiveAvgPool2d(1) - self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=(k - 1) // 2, bias=False) - - if self.residual: - if in_dim == out_dim: - self.downsample = nn.Identity() - else: - self.downsample = nn.Conv2d(in_dim, out_dim, kernel_size=1) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - r = x - x = self.conv1(F.relu(x)) - x = self.conv2(F.relu(x)) - - b, c = x.shape[:2] - w = self.pool(x).view(b, 1, c) - w = self.conv(w).transpose(-1, -2).unsqueeze(-1).sigmoid() # B*C*1*1 - - if self.residual: - x = x * w + self.downsample(r) - else: - x = x * w - - return x diff --git a/spaces/sdhsdhk/bingo111/src/components/chat-notification.tsx b/spaces/sdhsdhk/bingo111/src/components/chat-notification.tsx deleted file mode 100644 index 4be24d0f1755c8058698cfa66c736d8d4792475a..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingo111/src/components/chat-notification.tsx +++ /dev/null @@ -1,77 +0,0 @@ -import { useEffect } from 'react' -import Image from 'next/image' - -import IconWarning from '@/assets/images/warning.svg' -import { ChatError, ErrorCode, ChatMessageModel } from '@/lib/bots/bing/types' -import { ExternalLink } from './external-link' -import { useBing } from '@/lib/hooks/use-bing' - -export interface ChatNotificationProps extends Pick, 'bot'> { - message?: ChatMessageModel -} - -function getAction(error: ChatError, reset: () => void) { - if (error.code === ErrorCode.THROTTLE_LIMIT) { - reset() - return ( -
        - 你已达到每日最大发送消息次数,请更换账号或隔一天后重试 -
        - ) - } - if (error.code === ErrorCode.BING_FORBIDDEN) { - return ( - - 你的账号已在黑名单,请尝试更换账号及申请解封 - - ) - } - if (error.code === ErrorCode.CONVERSATION_LIMIT) { - return ( -
        - 当前话题已中止,请点 - 重新开始 - 开启新的对话 -
        - ) - } - if (error.code === ErrorCode.BING_CAPTCHA) { - return ( - - 点击通过人机验证 - - ) - } - if (error.code === ErrorCode.BING_UNAUTHORIZED) { - reset() - return ( - 没有获取到身份信息或身份信息失效,点此重新设置 - ) - } - return error.message -} - -export function ChatNotification({ message, bot }: ChatNotificationProps) { - useEffect(() => { - window.scrollBy(0, 2000) - }, [message]) - - if (!message?.error) return - - return ( -
        -
        -
        -
        -
        - error - {getAction(message.error, () => bot.resetConversation())} -
        -
        -
        -
        -
        - ) -} diff --git a/spaces/sdhsdhk/bingo111/src/components/ui/textarea.tsx b/spaces/sdhsdhk/bingo111/src/components/ui/textarea.tsx deleted file mode 100644 index e25af722c7a5dc1121a9ab58d6716952f9f76081..0000000000000000000000000000000000000000 --- a/spaces/sdhsdhk/bingo111/src/components/ui/textarea.tsx +++ /dev/null @@ -1,24 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface TextareaProps - extends React.TextareaHTMLAttributes {} - -const Textarea = React.forwardRef( - ({ className, ...props }, ref) => { - return ( -